Just try calculating how many RTX 5090 GPUs by volume would fit in the rectangular bounding box of a small sedan, and you will understand how.
A 2026 Honda Civic sedan has an exterior bounding box of 184.8" (L) × 70.9" (W) × 55.7" (H). That volume is ~12,000 liters.
An RTX 5090 is 304mm × 137mm, with roughly 40mm of thickness for a typical 2-slot reference/FE model, giving a bounding box of ~1.67 liters.
Do the math, and you will find that a single Honda Civic is the volume equivalent of ~7,180 RTX 5090 GPUs. And that's a small sedan, significantly smaller than the average or median car on US roads.
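To make the napkin math reproducible, here it is as a quick script (a sketch; the 40mm thickness is the rough 2-slot estimate from above, not an official spec):

    // Napkin math: RTX 5090 bounding boxes per Honda Civic bounding box.
    const LITERS_PER_CUBIC_INCH = 0.016387;

    // 2026 Honda Civic sedan exterior, inches (L x W x H)
    const civicLiters = 184.8 * 70.9 * 55.7 * LITERS_PER_CUBIC_INCH; // ~11,960 L

    // RTX 5090 bounding box, meters (40 mm thickness is an approximation)
    const gpuLiters = 0.304 * 0.137 * 0.040 * 1000; // ~1.67 L

    console.log(Math.round(civicLiters / gpuLiters)); // ~7,180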
I didn’t do the napkin math on it earlier because I don’t believe it really matters to the point I was making.
I don’t care about looking up real numbers, so I will just overestimate heavily. Let’s say that for a large enough number of GPUs, the overhead of all the surrounding equipment would be around 20% (amortized).
So you can just take the number of GPUs I calculated in my previous comment, multiply by 0.8, and you get your answer.
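Or, continuing the sketch from my previous comment with that 20% overhead assumption:

    // Assume ~20% of the volume goes to racks, cooling, cabling, etc. (amortized guess)
    const gpusByVolume = 7180;
    console.log(Math.round(gpusByVolume * 0.8)); // ~5,744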
Would you rather eat a bowl of soup with a fly in it, or a 50-gallon drum of soup with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I think the dynamic is different: before, they were writing and testing functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing Copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I'm looking through the PR, it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: in the 2 months after my org pushed Copilot down to everyone, the number of warnings in our main project's codebase went from 2 to 65. I eventually cleaned those up and created a GitHub Action that rejects any PR that introduces new warnings, but it met a lot of pushback initially.
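That kind of ratchet check can be tiny. A rough sketch, assuming build output is piped in on stdin (the filename, the warning regex, and the BASELINE value here are all made up for illustration):

    // check-warnings.ts: fail CI when the build emits more warnings than the baseline.
    // Hypothetical usage in a CI step:  npm run build 2>&1 | node check-warnings.js
    import * as readline from "node:readline";

    const BASELINE = 0; // ratchet this down as old warnings get fixed

    const rl = readline.createInterface({ input: process.stdin });
    let warnings = 0;
    rl.on("line", (line) => {
      if (/\bwarning\b/i.test(line)) warnings += 1;
    });
    rl.on("close", () => {
      console.log(`${warnings} warning(s) found, baseline is ${BASELINE}`);
      process.exit(warnings > BASELINE ? 1 : 0);
    });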
Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out obvious bugs, problems, and design improvements, your questions about it are answered with an immediate flurry of commits, and it's back to square one. (No, I don't think this component needs 8 useEffects added to it that deal exclusively with global state that's only relevant 2 layers down, effectively treating React components like an event handling system for data. Don't believe people who tell you LLMs are good at React: if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary.)
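For the record, the pattern I mean looks roughly like this (a made-up component; useGlobalStore is a hypothetical zustand-style hook):

    // Anti-pattern: useEffect as an event bus, pushing a prop into global state
    // so something two layers down can read it back out.
    import { useEffect } from "react";
    import { useGlobalStore } from "./store"; // hypothetical global store hook

    function UserDetails({ userId }: { userId?: string }) {
      return <div>{userId}</div>; // stand-in for the real consumer
    }

    function UserPanel({ userId }: { userId: string }) {
      const setSelectedUser = useGlobalStore((s) => s.setSelectedUser);

      // Runs after render, so consumers see stale state for a frame,
      // and nothing cleans the value up when this component unmounts.
      useEffect(() => {
        setSelectedUser(userId);
      }, [userId, setSelectedUser]);

      return <UserDetails />;
    }

    // Simpler: skip the effect and pass the data down (or lift state up).
    function SimplerUserPanel({ userId }: { userId: string }) {
      return <UserDetails userId={userId} />;
    }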
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.
If that actually becomes material, they'll offer to buy shares in the next round. That's the point at which this whole conversation becomes interesting; right now, it's complexity for its own sake.
I know the feeling! I left a company some years back in a complicated way, and my instinct was to drill in as well. It seems like a big deal! It really isn't, though.
If that's going to happen, it's going to happen. I've heard as many stories of it happening as I've heard stories of people unhappy with the amount of liquidity they were able to achieve early in the life of a company that later became successful.
This is a joke, right? Seed investors will get 10-30% of a company for under a million dollars, which will be blown through in less than a year. Does that mean they're a drag on the cap table?
The concern is that, theoretically, future investors will be reluctant to invest because the departed founder's 10% crowds out equity that could otherwise be used to attract key performers down the line.
The exact details are unclear from the original post, but he definitely isn't giving up 40%. If they've only raised the pre-seed (a reasonable inference given the low valuation), then 10% ownership after 18 months points to two co-founders and a combined investor and option pool dilution of 20%. Anything is possible, of course, but unless the deal terms were very non-standard, this scenario makes the most sense.
You're right that 10% isn't necessarily a huge deal for investors, though. Early-round investor models target a specific ownership stake, and the company has to issue the same number of shares for that no matter what the composition of existing shareholders is.
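To make that concrete with made-up numbers: an investor targeting post-money ownership t needs N new shares where N / (E + N) = t, so N = E * t / (1 - t), and nothing in that formula cares who holds the existing E shares.

    // Toy cap table math: shares to issue for a target post-money stake.
    const existingShares = 10_000_000; // hypothetical fully diluted count
    const targetOwnership = 0.2;       // investor wants 20% post-money

    const newShares = (existingShares * targetOwnership) / (1 - targetOwnership);
    console.log(newShares); // 2,500,000 -- identical whether a departed founder holds 10% or 0%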
The challenge with founders leaving is more psychological, like an early engineer who's vested a quarter of their 1% grant realizing that they still have to work hard for three years just to get a tenth of what the guy leaving already has. That's an easy way to suffocate the remaining team's motivation. Potential investors will (and should) look into it, but most of the time it's fine.
I'm not saying I agree with the concern, I'm just articulating what it is. I think the answer here is super simple: walk away with the 10% vested. (Also: stop thinking in terms of %).