The critical part is knowing that TTF fonts can include a virtual machine. Then he pops an LLM into that and replaces instances of !!!!!! with whatever the LLM outputs.
Not exactly. HarfBuzz, the font-shaping library, has an optional feature to use WASM for shaping. Normal font hinting is much more restricted, precisely because Turing-complete fonts are a horrible idea.
I'm not talking about that specifically, but your intuition is also correct: there's a lot of research going on around constructing/defining hierarchies of "learning" behavior.
Author here. Yeah, that’s fair, it is over-generalizing (and I even admit to that in the post).
I do think there might be some value (insight?) in trying to generalise things sometimes. In this case it's me trying to split all software engineers along a single axis and seeing whether it fits. I don't think it's a neat split, but the more I think about it, the more I think I could, if pressed, sort a lot of engineers into one of these two buckets. Not at all times, but sometimes.
Generally speaking, over-generalisation can be lazy, and it was in this case. I write these posts very quickly, as a braindump, and you shouldn't read too much into it, except that it might be food (snack?) for thought.
I'm a coder with 30 years of experience and I'm also excited about the potential of ChatGPT to help with my work. However, it's very important to remember to double-check the code it generates for accuracy.
Especially for anything that isn't boilerplate or tutorial-level, I've seen a lot of mechanically incorrect and unhelpful code generated.
> I've seen a lot of mechanically incorrect and unhelpful code generated
This has been my experience. I've attempted to use ChatGPT four times to generate functions or classes with various levels of complexity, and every time it has produced incorrect code. It often fabricates functions, classes, or entire libraries, and the few times it produced regular expressions, the regexps didn't actually do what ChatGPT claimed they would.
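One cheap defence against that failure mode is to check any generated regex against known-good and known-bad inputs before trusting it. A minimal sketch in Python — the pattern and test cases here are illustrative assumptions, not from any actual ChatGPT output:

```python
import re

# Hypothetical scenario: an LLM claims this regex validates
# ISO-style dates (YYYY-MM-DD). Verify the claim with test cases
# instead of taking it at face value.
pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")

cases = {
    "2023-01-15": True,   # well-formed, should match
    "2023-1-15": False,   # single-digit month, should not match
    "20230115": False,    # missing separators, should not match
}

for text, expected in cases.items():
    actual = bool(pattern.fullmatch(text))
    assert actual == expected, f"{text!r}: expected {expected}, got {actual}"
```

Note that this only catches mechanical mismatches; a regex can pass a handful of spot checks and still be wrong for edge cases (e.g. this pattern happily accepts "2023-99-99"), so the test set needs the same scrutiny as the pattern.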