The kid can learn and become better over time, while "AI" can only be retrained using better training data.
I'm not against using AI by any means, but I know what to use it for: stuff where I'd only do worse than half the population because I can't be bothered to learn it properly. I don't want to toot my own horn, but I'd say I'm definitely better at my niche than 50% of the people. There are plenty of other niches where I'm not.
The AI doesn't know what good or bad code is. It doesn't know what surpassing someone means. It's been trained to generate text similar to its training data, and that's what it does.
If you fed it only good code, we'd expect a better result, but currently we're feeding it average code. The cost of evaluating code quality across such a huge data set is too high.
The training data includes plenty of examples of labelled good and bad code. And comparisons between two implementations plus trade-offs and costs and benefits. I think it absolutely does "know" good code, in the sense that it can know anything at all.
There does exist some text making comparisons like that, but compared to the raw quantity of totally unlabeled code out there, it's tiny.
You can do some basic checks like "does it actually compile", but for the most part you'd really need to go out and do manual categorization, which would be brutally expensive.
Ditto. AI has the power to make you believe stuff without you noticing. Why would they bother with garish ads when they could make you think it was YOUR idea to buy Clorox?
I guess the garish colors could maybe increase your suggestibility indirectly?
> The environment wins (less tokens burned = less energy consumed)
This is understandable logic, but at a systemic level it's not how things always go. Increasing efficiency can lead to increased consumption overall (the Jevons paradox). You might save 50% in energy for your workload, but maybe now you can run it 3 times as much, or maybe 3 times more people will use it, because it's cheaper. The result might be a 50% INCREASE in energy consumed.
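The arithmetic in that comment can be checked with a toy sketch (all numbers are hypothetical, chosen only to match the example above: a 50% per-run efficiency gain combined with 3x the usage):

```python
# Rebound-effect illustration: efficiency halves the per-run cost,
# but the cheaper workload is then run three times as often.
energy_per_run_before = 1.0   # arbitrary energy units per workload run
runs_before = 1

energy_per_run_after = energy_per_run_before * 0.5  # 50% savings per run
runs_after = runs_before * 3                        # 3x the usage

total_before = energy_per_run_before * runs_before
total_after = energy_per_run_after * runs_after

change = (total_after - total_before) / total_before
print(f"Net change in energy consumed: {change:+.0%}")  # +50%
```

Whether the net effect is a saving or an increase depends entirely on whether usage grows by more or less than the efficiency factor, which is the crux of the disagreement in the replies below.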
This is the standing reason always given for why we must all sit in freeway traffic jams, and I think it's B.S., because it assumes viable alternatives are available in the near-to-medium term, and that isn't always the case. The alternative to freeways that is supposed to compensate is a combination of denser housing and mass transit, which in California is not happening at all: zoning laws, the slow pace of building mass transit due to regulatory slow-down, and the need to service urban sprawl prevent that solution from relieving traffic pressure. Don't speak of buses, because taking two hours to get to work is not better than one hour. So the freeways keep the same number of lanes, my commute time continues to grow, and I am tired of hearing it is for the best.
So yes, lower LLM costs would probably lead to even more LLM usage and greater energy expenditure, but then again, so does having a moving economy, and all that comes with it.
Yeah, probably. I wonder where speed-running fixing all the low-hanging fruit for AI-related efficiency improvements will leave us? It still seems worth doing. Maybe combined with a carbon tax.
What I want is for THIS to stop. "Listen, no one wants to hear about your moral issues, just stfu."
Don't give up so easily. Let the discomfort in and try & figure out why people keep saying "omg LLMs" until you can hear what they are actually saying.
We need your help! Can you please use your creativity to build resilience to climate change in your community instead of experimenting with more ways to spend computing power?
People who know alcohol is bad for them and don't want to keep being drunks but keep drinking, people who believe phones are bad for their kids but still buy them, people who understand AI will significantly degrade the environment if it becomes ubiquitous but still work to help it become ubiquitous...
Mathematicians who publish proofs that are later proven inconsistent!
I suspect we have fundamentally different views of how humans work. I see our behavior and beliefs as _mostly_ irrational, with only a few "reasoning live-zones" where, with great effort, we can achieve logical thought.
Well, most of your examples are about failure to act on reasoning rather than failure to reason. The mathematics one is unfair, since research mathematics is a very hard task: the failures come from subtle errors, reasoning in new fields, or extremely long chains of reasoning.
How can you know? One could argue that the entire phenomenon of cognitive dissonance is "people (internally) recognize the contradiction and then perform it."
I totally agree. I see this "people don't want to do hard stuff" argument used all over - completely disregarding tens of thousands of years of people doing hard stuff.
It comes off to me as the author not wanting to do the hard stuff of working towards their values. Just kind of defeatist and trying to make a splash but leaning on a pretty weak premise.
Most people do not give a rat's ass about the security of their data. They know their social media apps are tracking where they go and who they meet, and they'll say it's creepy if you ask them, but they don't actually care enough to lift a finger to do anything about it.
> completely disregarding tens of thousands of years of people doing hard stuff
a) Just because humanity as a whole did hard things, doesn't mean that most humans did or were willing to. It's perfectly possible that all the hard things we did were accomplished by a handful of remarkable individuals, doing things that the majority never would have been willing to.
b) Just because people in one age were willing to do things doesn't mean they're willing to in all ages. So it's not like the past necessarily proves anything here.
> Either you own and control something, or you do not, there's no third option.
I think there's a full spectrum you're missing. You can own something with other people, and your level of control can be continuous, not discrete & binary. For example, my public library is funded by my local government, which I can influence with lobbying and voting. I can join the board of the library, and I can just go and talk to the librarians in charge to influence their decisions.
In an individualist, consumerist mindset things are pretty stark: full self-hosting or full submission. If you reject that mindset, there are many more options.
IMHO, the reasons not to use AI are social, not logical.