> For our FL1 request handling layer, NGINX- and LuaJIT-based code, this cache reduction presented a significant challenge. But we didn't just assume it would be a problem; we measured it.
It’s not just the “it’s not just A, it’s B” pattern; it’s that humans don’t write like that. You don’t go “I didn’t just assume; I measured”. People usually say something like “but we didn’t know for sure, so we decided to measure it”. The LLM text is over-confident.
> The choice to never invert raster images isn't a compromise, it's the design decision. The problem veil solves is exactly that: every dark mode reader today inverts everything, and the result on photos, histology, color charts, scans is unusable. Preserving all images is the conservative choice, and for my target (people reading scientific papers, medical reports, technical manuals) it's the right one.
It’s like a guy putting together a promo packet or something. A normal person would be a little hesitant and wouldn’t just go, “And what I’m doing isn’t because of constraints. It’s because I am making the right choices!”
It’s just an oddly stilted way of speaking in conversation. Imagine talking to someone like that in real life. It would be all like “And then I thought the problem was that the global variable was set wrong. But I didn’t just assume that, I verified it.”
No one’s accusing you of assuming it, dude. You don’t have to pre-emptively tell us you didn’t just assume it. Normal people don’t say that.
I don’t have much of a problem with LLM text because I just skip over flavor like this to charts, code, and tables but this is obviously LLM
Ah, appreciate it. A year ago it was very clear when something was written by an LLM, but now you've gotta look for certain characteristics. I try not to infer too much, especially because LLMs are really helpful for non-native English speakers to write faster.
I'd like to make it a bit more normalized for public writing to be transparent about whether LLMs were used and how. That makes it quite a bit easier for readers to focus on the content instead of debating how something was written lol
I’m hopeful that future LLMs will be better at communicating information. If the facts are right, and the text is concise then I don’t care about the source. The problem is that the text is verbose bloviation.
Here’s an example on HN https://news.ycombinator.com/item?id=47538047