Have the last 70 years of productivity increases led to a reduction in weekly work hours? No.
Some jobs will be automated away. Good thing. Braindead stuff that a machine can do should be done by a machine. Doesn't mean we'll all soon be just picking our noses. There will be other work to be done, and if unregulated capitalism has its way, it can easily lead to even more worker exploitation.
Of course, the effects aren't equally distributed across all countries. For example, annual work hours per worker have almost halved in Germany since 1950, but have only seen a more modest decrease in the US. So political factors still play a role in how the benefits of increased productivity are used by society.
But it's a strong effect. And those numbers don't even consider other factors such as how increased life expectancy combined with mostly unchanged retirement age, and being older when we first start working, give people an extra decade or two of not being part of the workforce at all.
The 40-hour workweek was introduced in Germany in the mid-1960s. 60 years later, it's still standard. A few 39-, 38- or 37.5-hour weeks here and there, but even those are by and large 40-hour weeks.
The number of vacation weeks and public holidays has increased, which explains the majority of the difference in "annual work hours".
The 10x increase in productivity is in no way reflected in the number of work hours.
A policeman standing on a public square threatening to incarcerate anybody who is violent results in no violence actually happening at that square. Take away that regulation (in form of the policeman) and watch the actual violence start.
That’s a very narrow view of humanity and morality. Only psychopaths (in a clinical sense, not derogatory) model their actions strictly by what’s legal.
Many things are moral, but have no legal coverage, some things are moral but illegal, and some immoral but legal.
> Last week I did the amount of work that would’ve taken me give or take a month. A significant part of it was writing an API client for a system I needed to use. Pretty run of the mill stuff. Doing this ‘by hand’ just takes time. Go look at the docs, type out your data structures, wire things up for the new call, write tests. With “the robot” once the framework was largely in place, you just paste API docs for the endpoint you need, and it’s done in a minute. With tests and everything.
That just points to an inefficiency. It could be tackled in ways other than using an LLM to reproduce what's being done elsewhere every day, over and over again. A framework automating and hiding all this would be just as effective. Perhaps even cleaner than all the duplication the LLM created for you.
In other words, that month of busywork you just saved is inherently unnecessary. Progress is not linear in the number of lines of code you produce. If you think hard about a good architecture and design, and come up with it after 2 weeks of hard thinking, that could be 90% of the work. The remaining 10% is writing it all down. If that takes 3 more months, the gap points to an inefficiency in the existing tooling / framework / ...

But closing that gap isn't necessarily best achieved by using an LLM to write it all out. There are still huge redundancies, which is exactly what made the LLM possible in the first place. Once you boil those redundancies down into common frameworks or tools, the LLM will just produce the same few lines you'd need to produce yourself, and those 10% of the work will take just 10% of the total time.
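To make the framework argument concrete, here's a minimal sketch of what "boiling the redundancy down" could look like: each endpoint becomes one line of declarative configuration instead of a hand-written (or LLM-generated) method with its own request/parse/test boilerplate. All names (`Endpoint`, `ApiClient`, the example endpoints) are hypothetical, and the transport is a stub so the sketch runs without a network.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Endpoint:
    """One declarative line per API endpoint instead of a hand-written method."""
    method: str
    path: str  # may contain {placeholders} filled in at call time

class ApiClient:
    def __init__(self, base_url: str, transport: Callable[[str, str], dict]):
        # `transport` is injected so the sketch stays runnable without a network;
        # a real client would plug in an HTTP library here.
        self.base_url = base_url
        self.transport = transport

    def call(self, ep: Endpoint, **params) -> dict:
        url = self.base_url + ep.path.format(**params)
        return self.transport(ep.method, url)

# The "month of busywork", reduced to declarations (hypothetical endpoints):
GET_USER = Endpoint("GET", "/users/{user_id}")
LIST_ORDERS = Endpoint("GET", "/users/{user_id}/orders")

def fake_transport(method: str, url: str) -> dict:
    # Stand-in for a real HTTP call; just echoes what would be sent.
    return {"method": method, "url": url}

client = ApiClient("https://api.example.com", fake_transport)
print(client.call(GET_USER, user_id=42))
```

Adding a new endpoint is then one line, not a generated-and-duplicated block of client code plus tests.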
The key ingredient in a formal course is the formal examination. Be it graded exercises or a written or oral exam. It forces studying at a level where you can reproduce and explain the key concepts and results and apply them to something new.
In theory one can do that with self-study too. Most people don't; they just watch some YouTube video or read a Wikipedia page, and then they think they have understood something. But the deeper understanding that comes from applying the new knowledge is missing. That step is forced when you take a university course with some form of examination built in. Doing it yourself is possible, but it's non-obvious, hard, perhaps even unpleasant, and few people do it. Some do, though, and their understanding isn't inferior to a university graduate's.
Just to add some more motivation: in a typical physics undergraduate curriculum, you will spend roughly as much time doing homework as attending lectures. If you skip the exercises, you are quite literally skipping half of the education.
It's very easy to let your brain fool you into thinking you understand maths: after staring at a statement for a while, you start to think you understand it, but chances are you don't.
It once took me 15 minutes to understand a single equation: the derivation of the Taylor series for the tangent function. It was in a very old calculus textbook (from the beginning of the 20th century) that applied techniques which are not taught nowadays. So it is possible to understand a mathematical expression after staring at it ;)
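For reference, the series in question. It has no simple closed form for its coefficients, which is part of why its derivation takes effort; one standard route is dividing the sine series by the cosine series:

```latex
\tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \frac{17x^7}{315} + \cdots,
\qquad |x| < \frac{\pi}{2}
```

The irregular-looking numerators (1, 1, 2, 17, ...) come from the Bernoulli numbers, which is why no short pattern jumps out from staring at it.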
Unfortunately it's really hard to impose the same kind of accountability on yourself as a professor gatekeeping an accredited degree does. Personally, arbitrary goals don't really motivate me: acknowledging their arbitrariness overrides any pursuit of the reward.
It’s entirely possible that creating a brain capable of controlling itself is more costly (in an evolutionary sense, measured by the number of generations needed to achieve this goal) than equipping a brain with the ability to check itself by communicating with others.
Nevertheless, some brains lack even that ability, gravitating instead toward echo chambers where everyone shares the same views, so no mutual checks are possible.
That's such an economic fallacy that I'd expect the HN crowd to have understood it ages ago.
Compare the average productivity of somebody working in a car factory 80 years ago with somebody today. How many person-hours did it take then and how many does it take today to manufacture a car? Did the number of jobs between then and now shrink by that factor? To the contrary. The car industry had an incredible boom.
Increased efficiency does not imply job loss, because market size is not static. If cost is reduced, things that weren't viable before suddenly become viable, and the market can explode. In the end you can end up with more jobs. Not always, obviously, but there are more examples of this than you can count.
This is all broadly true, historically. Automating jobs mostly results in creating more jobs elsewhere.
But let's assume you have true, fully general AI. Further assume that it can do human-level cognition for $2/hour, and it's roughly as smart as a Stanford grad.
So once the AI takes your job, it goes on to take your new job, and the job after that, and the job after that. It is smarter and cheaper than the average human, after all.
This scenario goes one of three ways, depending on who controls the AI:
1. We all become fabulously wealthy and no longer need to work at all. (I have trouble visualizing exactly how we get this outcome.)
2. A handful of billionaires and politicians control the AI. They don't need the rest of us.
3. The AI controls itself, in which case most economic benefits and power go to the AI.
The last historical analog of this was the Neanderthals, who were unable (for whatever reason) to compete with humans.
So the most important question is: how close are we, actually, to this scenario? Is it impossible? A century away? Or something that will happen in the next decade?
> But let's assume you have true, fully general AI.
That's a very strong assumption, and a very narrow setting that would be one of the counterexamples.
AI researchers in the 80s were already telling us that AI was just around the corner, five years away. It didn't happen. I wouldn't hold my breath this time either.
"AI" is a misnomer. LLMs are not "intelligence". They are a lossy compression algorithm of everything that was put into their training set. Pretty good at that, but that's essentially it.
Indeed. My roommate has just been put on a new project at his workplace. No AI involved anywhere. But he inherited a half-done project. The code is even 90% done. Yet he is spending so much time trying to understand all that existing code, noting down the issues he'll need to fix. It's not just completing the remaining 10%. It's understanding, fixing, and partially reworking the existing 90%. Which he has to do, since he'll be responsible for the thing once it's released. It's approaching the point where just building it from scratch on his own would have been more time-efficient.
It seems to me that LLM output creates a similar situation.
And some circles hand wave away all criticism of any new thing as luddism.
This article is a bit more balanced, though, and clearly isn't criticising use of AI in programming, but specifically the "Jesus take the wheel" style of vibe coding. It's the same old "if you write code as cleverly as you possibly can, you are not smart enough to debug it", but to the next level, where people are writing code that they aren't even smart enough to read.
I mean ... most code out there is pretty bad, so LLM assistants contributing pretty bad code just keeps the mean where it is. And obviously it has to be that way: how can anybody expect an LLM to produce output of higher quality than its training input? Expecting that is appealing to magic, or to some consciousness that doesn't actually exist, or is just plain anthropomorphising.
If you are working at a place where that quality level is standard -- and let's face it, a large number of companies produce average or below-average quality code (by definition) -- then using an LLM assistant isn't that bad. At least as long as the assistant has no extra flaws beyond producing the best summary of its training data, which is exactly what an LLM does. In such an average-or-below place it can justifiably replace developers. But if you are aiming for the top end of the quality scale, there is no way to get there with LLM output. Purely on principle.
This shouldn't even be a controversial opinion. I'm quite surprised every time this is questioned or even just debated.
"Bad" is doing s lot of work in your sentence. Do you mean slow? Or buggy? Or unmaintainable? Or unextendable? Uses patterns the tech lead hates? Hard to read? High cyclomatic complexity? Doesn't meet requirements? Security issues? Uses out of date libraries? Too much reliance on 3rd party code? Too much NIH? ...
I think "one shot ready for production code" is what AI cannot do yet. Which is why I am not worried for another 12 months at least :)