
Cloudflare has excellent (human) technical writers. I don’t see any indication this is “slop”, it’s the standard in-the-weeds but understandable blog post they’ve been doing for years.

AI text is everywhere, but this isn’t it.




This is AI, but I can’t prove it, lol. :) The bulleted lists are too short, both in total list length and in text per item. Little drama headers, as the parent noted.

To your point, this would register as “human subtly bloviating for word count” if LLMs didn’t exist, and at this point that’s probably the most useful framing. I doubt it’s 100% one-shot AI; someone definitely optimized it in parts, but the AI heard “concise” as “bullets and short sentences.”


Cloudflare definitely does have excellent technical writers, but a) this doesn't seem to be (entirely? substantially?) from them, and b) if there are AI tropes clearly visible, which they are to me, it's putting readers off regardless of whether the content is AI generated, and that's just bad marketing.

Agree to disagree. It was likely AI-enhanced somewhere along the path to production. So many phrases reek of AI, but others do not. Is this a sprinkling of LLM help or just how a human genuinely writes? Idk.

Out of curiosity, can you point to specific sections that reek of AI? I read the article and didn't see anything that immediately stuck out, but maybe I need to start looking for different signals.

This is LLM tropey:

> For our FL1 request handling layer, NGINX- and LuaJIT-based code, this cache reduction presented a significant challenge. But we didn't just assume it would be a problem; we measured it.

It’s not only the “it’s not just A, it’s B” pattern. It’s that humans don’t write like that. You don’t go “I didn’t just assume; I measured”. People usually say something like “but we didn’t know for sure, so we decided to measure it”. The LLM text is over-confident.

Here’s an example on HN https://news.ycombinator.com/item?id=47538047

> The choice to never invert raster images isn't a compromise, it's the design decision. The problem veil solves is exactly that: every dark mode reader today inverts everything, and the result on photos, histology, color charts, scans is unusable. Preserving all images is the conservative choice, and for my target (people reading scientific papers, medical reports, technical manuals) it's the right one.

It’s like a guy putting together a promo packet or something. A normal person would be a little hesitant and wouldn’t just go, “And what I’m doing isn’t because of constraints. It’s because I am making the right choices!”

It’s just an oddly stilted way of speaking in conversation. Imagine talking to someone like that in real life. It would be all like “And then I thought the problem was that the global variable was set wrong. But I didn’t just assume that, I verified it.”

No one’s accusing you of assuming it, dude. You don’t have to pre-emptively tell us you didn’t just assume it. Normal people don’t say that.

I don’t have much of a problem with LLM text, because I just skip over flavor like this to the charts, code, and tables, but this is obviously LLM.


Ah, appreciate it. A year ago it was very clear when something was written by an LLM, but now you've got to look for certain characteristics. I try not to infer too much, especially because LLMs are really helpful for non-native English speakers to write faster.

I'd like to make it a bit more normalized for public writing to be transparent about whether LLMs were used and how. That would make it quite a bit easier for readers to focus on the content instead of debating how something was written, lol.


I’m hopeful that future LLMs will be better at communicating information. If the facts are right and the text is concise, then I don’t care about the source. The problem is that the text is verbose bloviation.

Cloudflare had excellent human technical writers. But over the past months/years they've slowly been replaced by AI, and the quality of the posts has dropped drastically.

Remember when they had "implemented a serverless post-quantum Matrix server" and blatantly lied that it was production-ready, when most of the encryption features weren't even implemented? (Then rushed to remove the LLM's 'todo' tags from the code.) https://tech.lgbt/@JadedBlueEyes/115967791152135761



