I downvoted it because I think that forcibly taking away people's wealth (however much or little they have) is immoral in the extreme. However, I did just now vouch for it because the post isn't breaking any rules even if I do think it's a bad take.
Let’s not discount the means by which that wealth was acquired in the first place. One man’s wealth might be poverty for thousands. That’s why we have social welfare; it’s just not doing enough. At this point I would say a random store floor cleaner is more valuable to society than someone like Jeff Bezos.
> Basically tells you how to make various dishes without giving specific amounts, just going more on feel and what tastes good.
This is how my mom taught me to cook, and she decided (unilaterally I assume) that it was the definition of gourmet cooking. I went a lot of years thinking that was the literal definition, ha. Though in retrospect, it is not 100% wrong, and I don't think she was joking when she said it.
There's no perfect OS. But for me at least, ads are worse than the other flaws I have to tolerate in another OS. This is obviously a question of taste but I really hate ads in a product that I paid good money for.
Dude, I know you touched on this, but seriously: just don't use AI then. It's not hard; it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self-inflicted problem that you can undo any time you want.
That sucks, but honestly I’d get out of there as fast as possible. Life is too short to live under unfulfilling work conditions for any extended amount of time.
It's not hard to burn tokens on random bullshit (see moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.
I have Claude Code set up in a folder with instructions on how to access iMessage. Ask it questions like “What did my wife say I should do next Friday?”
It reads the SQLite db and shit. So burn your tokens on that.
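For the curious, here's a minimal Python sketch of the kind of lookup that ends up running under the hood. The path and table names match the Messages schema on recent macOS versions, but treat the details (especially the timestamp epoch, which has changed across macOS releases) as assumptions:

    # Sketch: pull the latest incoming iMessages from the local chat.db.
    # Schema details are assumptions based on recent macOS versions.
    import sqlite3
    from pathlib import Path

    DB = Path.home() / "Library/Messages/chat.db"  # needs Full Disk Access

    # message.date is nanoseconds since 2001-01-01 on recent macOS;
    # 978307200 is the offset between the Unix and Apple epochs.
    conn = sqlite3.connect(f"file:{DB}?mode=ro", uri=True)
    rows = conn.execute(
        """
        SELECT datetime(message.date / 1000000000 + 978307200, 'unixepoch'),
               handle.id,
               message.text
        FROM message
        JOIN handle ON message.handle_id = handle.ROWID
        WHERE message.is_from_me = 0
        ORDER BY message.date DESC
        LIMIT 20
        """
    ).fetchall()

    for ts, sender, text in rows:
        print(ts, sender, (text or "")[:80])

Grant your terminal Full Disk Access first, or the connect will fail with "unable to open database file".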
Big tech businesses are convinced that there must be some profitable business model for AI, and are undeterred by the fact that none has yet been found. They want to be the first to get there, raking in that sweet sweet money (even though there's no evidence yet that there is money to be made here). It's industry-wide FOMO, nothing more.
Typically in capitalism, if there is any profit, competition races it toward zero. The alternative is a race to bankrupt all competitors at enormous cost in order to jack up prices and recoup the losses as a monopoly (or duopoly, or some other stable arrangement). I assume the latter is the goal, but that means burning through something like 50%+ of American GDP growth just to be undercut by China.
IMO I would be extremely angry if I owned any SpaceX equity. At least Nvidia might be selling to China in the short term... what's the upside for SpaceX?
taxi apps, delivery apps, social media apps—all of these require a market that's extremely expensive to build but is also extremely lucrative to exploit and difficult to unseat. You see this same model with big-box stores displacing local stores. The secret to making a lot of money under capitalism is to have a lot of money to begin with.
People keep saying this, but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity, they would already be raking in cash.
They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.
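To make the gross-vs-net distinction concrete, here's the toy math. Every number below is an illustrative assumption, not a reported figure:

    # Toy gross-margin vs. net math; all figures are made-up assumptions.
    revenue        = 10.0  # $B/yr from inference (subscriptions + API)
    inference_cost = 5.0   # $B/yr cost of serving (compute, power, etc.)
    training_capex = 9.0   # $B/yr on training runs and capacity buildout

    gross_margin = (revenue - inference_cost) / revenue
    net = revenue - inference_cost - training_capex

    print(f"gross margin: {gross_margin:.0%}")  # 50% -> "inference is profitable"
    print(f"net result:   {net:+.1f} $B/yr")    # -4.0 -> "losing money overall"

Same business, positive gross margin and negative net, which is why "inference is profitable" and "they're burning cash" can both be true at once.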
Those gross profit margins aren't that useful, since training a model of any fixed capability level keeps getting cheaper, so there's a treadmill effect: staying in business requires constantly training new models to avoid falling behind. If the big companies stop training models, they only have a year before someone else catches up with far less debt and puts them out of business.
Only if training new models leads to better models. If the newly trained models are just a bit cheaper but not better, most users won't switch. Then the entrenched labs can stop training so much and focus on profitable inference.
Well, that's why the labs are building app-level products like Claude Code/Codex: to lock their users in. Most of the money here is in business subscriptions, I think. How much savings would be required for businesses to switch to products that aren't better, just cheaper?
Stop this trope, please. We (1) don't really know what their margins are, and (2) because of the hard tie-in to GPU costs/maintenance, we don't know (yet) what the useful life of GPUs is, and therefore the associated OPEX.
> If they stopped training and building out future capacity they would already be raking in cash.
That's like saying "if car companies stopped researching how to make their cars more efficient, safer, and more reliable, they'd be more profitable".
Honestly, as someone who strongly believes in federalism and hates what our country turned into over the 20th century, I hope the trend continues. The federal government was never meant to have as much power as it took on during the FDR administration, and it's high time we reversed some of the affronts to the Constitution that happened back then. Hopefully things like this can be the first step.
Yes, but the quiet part out loud is that rewinding FDR unwinds the 'switch in time that saved nine', which reverses the SCOTUS decisions that ultimately allow the EPA, most applications of the NFA/GCA (gun control), the Civil Rights Act as it pertains to intrastate business, the Controlled Substances Act as it pertains to intrastate trade, most functions of regulatory agencies, etc.
So while your comment might be acceptable on its face, if you actually explain what it means, you will be damned for it.
Yeah I agree. Not only do I hand off my card, literally everyone I know does so. None have ever had problems. I'm not saying that such fraud never happens, because it obviously does happen. But I don't think it's so overwhelmingly common as is being claimed here.
In my lifetime, I had my card details stolen once (in Washington DC). It was an American Express. They caught it immediately and shipped me a new card before I even noticed.
It was basically “we caught some shady shit, here is your new card number, which will be delivered today”. It is one of the reasons I like Amex. They are johnny-on-the-spot when they get a sniff of fraud.
This source[0] is hardly unbiased, so take this with a heavy dose of "citation needed", but it claims:
> 62 million Americans had fraudulent charges on their credit or debit cards last year alone, with unauthorized purchases exceeding $6.2 billion annually.
That jibes with the number of unauthorized transactions I've had on my cards. 62 million out of roughly 260 million adults is about a quarter of adults each year, and I hit a fraudulent transaction in about a quarter of years myself.
It most certainly is not, lol. That's the hype that the parent was referring to. Most people have found AI to be a detriment, not a benefit, to their work.
Millions in marketing efforts? Anyway, it may be a key part of generating code, but writing code was always the lesser part of software engineering. Generating code doesn't mean it is doing any engineering for you, or becoming a "key part" of engineering in any way.