Obviously no guarantees that it's exactly what was done in this case, but he talked about his general process recently at a conference and more in depth in a podcast:
Except it's not 100x revenues, and it's not 17% growth. I don't know where you got those numbers from?
The numbers OpenAI gave in the post would mean a 30x multiple pre-money. And the $20B -> $24B run-rate growth since the start of the year could plausibly mean anything from 110% to 200% annualized growth, depending on whether that jump happened over two or three months. The $24B is a lower bound as well, since they only gave us one significant digit for the monthly revenue.
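For anyone who wants to check the annualization, here's the back-of-envelope math as a sketch (assuming constant monthly compounding; the function name is just for illustration):

```python
def annualized_growth(start, end, months):
    """Annualized growth rate implied by going from `start` to `end`
    over `months` months, assuming constant monthly compounding."""
    return (end / start) ** (12 / months) - 1

# $20B -> $24B run rate:
two_months = annualized_growth(20, 24, 2)    # ~1.99, i.e. ~200% annualized
three_months = annualized_growth(20, 24, 3)  # ~1.07, i.e. ~110% annualized
```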
You're right, I was thinking about 100x revenues and forgot to confirm the math. Updated to reflect your point. ChatGPT itself provided the 17% number (its most recently available growth rate)...
What what? Are you surprised it's that low, that high, that they can tell what their revenue is, that they report it on a monthly rather than annual basis, or something totally different?
It's going to be pretty hard to get a good answer to whatever you're having difficulties understanding if you can't be bothered to write more than a word.
A theory that at least is consistent with the observed correlation seems vastly superior to a midbrow dismissal that doesn't. Your "raising kids is hard" theory would explain why people don't have a third child, but raising kids is hard universally. What was observed was that a third child was delayed for longer (even indefinitely) in states with higher age thresholds for mandatory car seats (even when controlling for demographics).
Their causal explanation relies on two additional observations that seem pretty hard to explain by other theories: the effect disappears for single-parent and carless households.
A new fab will need to be filled with advanced equipment like lithography machines, which are arguably the most complex machines humanity has ever built.
There is one supplier of EUV lithography machines in the world: ASML. They basically act as an integrator for hundreds of highly specialized components manufactured to unimaginable levels of precision. Each component has roughly one eligible supplier in the world, and those suppliers are already operating at full capacity. To expand, they'd need yet another set of specialized, nearly-impossible-to-build equipment.
So the supply chain moves incredibly slowly, and the slowness is intrinsic due to the complexity and depth of the supply chain. It can't be fixed with just money. IIRC ASML is aiming to merely double their production of EUV lithography machines by 2030.
Sure, I didn't mean to suggest that it would be easy or fast to increase manufacturing capabilities, just that the confidence I'm seeing around AI should extend to the manufacturers (if that confidence for the future growth and success of OpenAI and Anthropic is warranted). That is, the business decision to increase RAM and GPU supply should be "easy".
It wasn't actually that exact amount. It was "about 12 tons", and somebody did the 12000 kg / 29 g calculation and used the answer with way too many significant digits. Probably a reporter trying to make the 12-ton number relatable.
(You might object that KitKats usually weigh 40g. So these were probably the new KitKat Icon F1 chocolates, which weigh exactly 29g.)
I think you've misunderstood something. This is not about rejecting LLM-written articles. It is about rejecting the articles of people who used LLMs for their reviews.
Those second-level reviewers, who were checking whether the first-level reviewers had used LLMs, also used LLMs to do their screening, and those LLMs missed it in many cases.
My original point (loosely based on the subject, not TFA) is that it's LLMs all the way down, way more than it's "measured" to be.
A practical question: what should readers do when they suspect a comment (or story) is AI-generated? Is that an appropriate reason for flagging? Email the mods? Do nothing?
I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.
(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)
It was a quote from your own link in the initial post?
https://www.freebsd.org/security/advisories/FreeBSD-SA-26:08...
> Credits: Nicholas Carlini using Claude, Anthropic