I craft a detailed and ordered set of lecture notes in a Quarto file and then have a dedicated Claude Code skill for translating those notes into Slidev slides, in the style that I like.
Once that's done, much like the author, I go through the slides and make commented annotations like "this should be broken into two slides" or "this should be a side-by-side" or "use your generate clipart skill to throw an image here alongside these bullets" and "pull in the code example from ../examples/foo." It works brilliantly.
And then I do one final pass of tweaking after that's done.
But yeah, annotations are super powerful. Token distance in-context and all that jazz.
Quarto can be used to output slides in various formats (PowerPoint, Beamer for PDF, reveal.js for HTML, etc.). I wonder why you use Slidev when you could just ask Claude Code to create another Quarto document.
It looks like Slidev is designed for presentations about software development, judging from its feature set. Quarto is more general-purpose. (That's not to say Quarto can't support the same features, but currently it doesn't.)
I'm not affiliated with Slidev. I was just curious.
Not yet... but also I'm not sure it makes a lot of sense to be open source. It's super specific to how I like to build slide decks and to my personal lecture style.
But it's not hard to build one. The key for me was describing, in great detail:
1. How I want it to read the source material (e.g., H1 means new section, H2 means at least one slide, a link to an example means I want code in the slide)
2. How to connect material to layouts (e.g., "comparison between two ideas should be a two-cols-title," "walkthrough of code should be two-cols with code on right," "learning objectives should be side-title align:left," "recall should be side-title align:right")
Then the workflow is:
1. Give all those details and have it do a first pass.
2. Give tons of feedback.
3. At the end of the session, ask it to "make a skill."
4. Manually edit the skill so that you're happy with the examples.
As an SE with over 15 years' professional experience, I find myself pointing out dumb mistakes to even the best frontier models in my coding agents, to refine the output. A "coder" who is not doing this on the regular is only a tool of their tool.
(in my mental model, a "vibe coder" does not do this, or at least does not do it regularly)
Well, the term lacks clarity and has shifted in meaning.
If you define "vibe-coders" as people who just write prompts and don't look at code - no, they ain't coders now.
But if you mean people who do LLM-assisted coding but still read the code (like all of those who are upset by this change) - then sure, they always have been coders.
Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say that I'm surprised that an AI company wouldn't have any real support.
I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.
Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.
Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."
"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."
The article describes data showing a correlation between Ozempic use and slowed progression of certain brain conditions. The study aimed to determine whether that effect came from Ozempic itself or simply from weight loss. Once researchers controlled for weight loss, the effect disappeared. In other words, correlation, not causation.
That's an important caveat. But effectively it sounds like Ozempic typically results in a better diet, and a better diet typically results in slowed progression.
Having tried a few of these agent frameworks now, ADK-Python has easily been my favorite.
- It’s conceptually simple. An agent is just an object; you assign it tools that are just functions, and agents can call other agents.
- It’s "batteries included". You get a built-in code execution environment for doing math, session management, and web-server mode for debugging with a front-end.
- Optional callbacks provide clean hooks into the magic (for example, anonymizing or de-anonymizing data before and after LLM calls).
- It integrates with any model, supports MCP servers, and is easy enough to hack into your existing session management system.
I'm working on a course in agent development and it's the framework I plan to teach with.
I would absolutely take this for a spin if I didn't hate Go so much :)
I've gotten 7 years out of my 2018 iPad Pro and, for my use case of video, browsing, and Procreate, it feels like new. And I believe a big part of that is that the A12X was wildly overpowered when I bought it.
I think someone deciding between an M4 and an M5 today should consider its value 5 years down the road, rather than its value today.
Same. Also have a super old iPad Pro, and it still works amazing. I always ponder upgrading, knowing that I’ve gotten so much use and enjoyment out of it, but then get wrapped around the axle about how the CPU is absurdly overpowered for what I do with it (YouTube, podcasts, music, drawing/note taking, reading). It’s my main device at home, too, so I never feel like I need to upgrade my phone - it’s definitely saved me money in that regard, too. :P
> In the era of Emperor Augustus (27 B.C. to 14 A.D.), a Roman centurion was paid 15,000 sestertii. Given that one gold aureus equaled 1,000 sestertii and given there was eight grams of gold in an aureus, the pay comes to 38.58 ounces of gold
Today, 38.58oz of gold would be a salary of $156K/yr.
If we do the same for silver, it comes out to about 470oz of silver. So $23,500/yr.
If we compare that to a US Army E-8 (say $80K/yr), we can argue that gold has roughly doubled its value relative to labor, while silver has dropped to almost a quarter.
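The arithmetic above, sketched in Python. The spot prices (~$4,043/oz gold, ~$50/oz silver) are assumptions picked to match the dollar figures in the thread; note the quoted "1,000 sestertii per aureus" looks like a typo for the usual 100, since 100 is the rate that actually reproduces the quoted 38.58 oz.

```python
TROY_OZ = 31.1035  # grams per troy ounce

pay_sestertii = 15_000  # annual centurion pay from the quote

# Gold: 1 aureus = 100 sestertii (the rate that yields 38.58 oz), 8 g per aureus
gold_oz = pay_sestertii / 100 * 8 / TROY_OZ       # ~38.58 oz
salary_gold = gold_oz * 4_043                     # assumed spot, ~$156K/yr

# Silver: 1 denarius = 4 sestertii, ~3.9 g of silver per denarius
silver_oz = pay_sestertii / 4 * 3.9 / TROY_OZ     # ~470 oz
salary_silver = silver_oz * 50                    # assumed spot, ~$23.5K/yr

print(round(gold_oz, 2), round(silver_oz))        # 38.58 470
```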
I use Markov chains as an example of a "Small Language Model" in teaching LLMs.
My favorite thing about them is that you can use them to demonstrate temperature. The math is basically the same, and it has a similar effect of making the responses more varied and creative.
    from math import log, exp
    from random import choice, choices

    # Likelihood of transitioning from curr_word to next_word
    transitions: dict[str, dict[str, float]] = {...}

    def next_word(current_word, temp=1.0):
        if current_word not in transitions:
            return choice(list(transitions.keys()))
        probabilities = transitions[current_word]
        next_words = list(probabilities.keys())
        pvals = list(probabilities.values())
        # Temperature-scaled softmax over the log-probabilities
        logits = [log(p) for p in pvals]
        scaled_logits = [logit / temp for logit in logits]
        max_logit = max(scaled_logits)  # subtract max for numerical stability
        exps = [exp(s - max_logit) for s in scaled_logits]
        sum_exps = sum(exps)
        softmax_probs = [exp_val / sum_exps for exp_val in exps]
        return choices(next_words, weights=softmax_probs)[0]

    def generate_sequence(start_word, length, temp=1.0):
        sequence = [start_word]
        current_word = start_word
        for _ in range(length - 1):
            current_word = next_word(current_word, temp)
            sequence.append(current_word)
        return sequence
Some outputs of this when the transitions are trained on a lyrics dataset:
> print(" ".join(generate_sequence("When", 20, temp=0.1)))
When I know that I know that I was a little thing that I know that I don't know that
> print(" ".join(generate_sequence("When", 20, temp=0.5)))
When I don't know you know I can do I see And the river to the light in the time
> print(" ".join(generate_sequence("When", 20, temp=1.0)))
When are melting Little darling, I feel more And if I was very slow (In control) For our troubles And
It's a lot more nonsensical than an LLM, but highlights what the logit manipulation is doing.
I'm surprised by this take. I love YAML for this use case. Easy to write and read by hand, while also being easy to write and read with code in just about every language.
YAML is a serialization format. I like YAML about as much as I like base64: I don't care about it unless you make me write it by hand, and then I care very much.
GitHub Actions has a lot of rules, logic, and multiple sublanguages in lots of places (e.g., conditions, shell scripts). YAML is completely superficial here; XML would be an improvement due to less whitespace sensitivity alone.
Sure, easy to read, but quite difficult to /reason/ about in your head, let alone have proper language server/compiler support given the abstraction over provider events and runner state. I have never written a CI pipeline correctly without multiple iterations of pushing updates to the pipeline definition, and I don't think I'm alone on that.
Easy to write and read until it gets about a page or two long. Then you have to figure out stuff like "Oh gee, I'm on nesting layer 18, so that's... the object... that is... the array of... the objects of..."
Plus it has exactly enough convenience-feature-related sharp edges to be risky to hand to a newbie, while wearing the dress of something that should be too bog-simple to have that problem. I, too, enjoy languages that arbitrarily decide the Norwegian TLD is actually a Boolean "false."
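For anyone who hasn't hit the "Norway problem": YAML 1.1's implicit typing treats bare yes/no/on/off scalars as Booleans, which is how an unquoted list of country TLDs gets mangled. A toy resolver mimicking just that rule (a sketch, not a real YAML parser):

```python
# YAML 1.1 implicit Boolean resolution, reduced to a lookup table.
# Toy sketch of the rule only -- a real parser also handles ints, nulls, etc.
YAML_11_BOOLS = {
    "yes": True, "no": False, "on": True, "off": False,
    "true": True, "false": False,
}

def resolve_scalar(raw: str):
    """Mimic a YAML 1.1 parser's implicit typing for an unquoted scalar."""
    return YAML_11_BOOLS.get(raw.lower(), raw)

# An innocent unquoted list of country TLDs:
tlds = [resolve_scalar(s) for s in ["se", "dk", "no"]]
print(tlds)  # ['se', 'dk', False] -- Norway is now a Boolean
```

YAML 1.2 dropped the yes/no/on/off forms, but plenty of parsers (and GitHub Actions' flavor) still behave like 1.1.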
This so much this.
VS Code has a very good syntax checker for GitHub Actions YAML, so it's not YAML that's the problem.
It's the workflow for developing pipelines that's the problem. If I had something I could run locally - even in a debug, dry-run-only form - that would go a long way toward debugging job dependencies, testing that failure cases flow through the conditional logic in the expected manner, etc.
This is why I've become a fan of StrictYAML [0]. Of course it is not supported by many projects, but at least you are given the option to dispense with all the unnecessary features and their associated pitfalls in the context of your own projects.
Most notably it only offers three base types (scalar string, array, object) and moves the work of parsing values to stronger types (such as int8 or boolean) to your codebase where you tend to wrap values parsed from YAML into other types anyway.
Fewer surprises and headaches, but very niche, unfortunately.
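The strings-only philosophy described above is easy to mimic even without the library: have the parser hand you nothing but strings, then convert at the edges of your own code. A minimal sketch with hypothetical config keys:

```python
# Strings-only parsing sketch: every scalar arrives as a string,
# and your code owns every conversion to a stronger type.
raw = {"port": "8080", "debug": "no", "country": "no"}  # hypothetical config

port = int(raw["port"])                  # fails loudly if it isn't a number
debug = raw["debug"] in ("yes", "true")  # your Boolean rule, applied on purpose
country = raw["country"]                 # stays a string -- no Norway problem
```

The implicit-typing footguns disappear because nothing is converted unless you ask for it.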
That only matters if you're parsing the same yaml file with different parsers, which GitHub doesn't (and I doubt most people do - it's mostly used for config files)