Hacker News | crakhamster01's comments

I can maybe see this argument being valid for OSS - as Carmack says, by nature it should be "no strings attached".

I don't think that's all anti-AI activists care about though. Honestly, I would say most activists don't talk about the use of OSS? The most prominent anti-AI sentiment seems to come from creatives. Artists, musicians, designers, etc.

They didn't publish their works with the same notion as OSS developers, but it was scraped up by corporations all the same. In many cases, these works were protected by copyright law and used anyways.

To me that feels like the equivalent of training on "private repos", which Carmack would call a violation [1].

[1] https://x.com/ID_AA_Carmack/status/2031769354401091988


I had a similar reaction to OP for a different post a few weeks back - I think it was some analysis of the health economy. Initially as I was reading I thought, "Wow, I've never read a financial article written so clearly." Everything in layman's terms. But as I continued to read, I began to notice the LLM-isms: oversimplified concepts, "the honest truth", "like X for Y", etc.

Maybe the common factor here is not having deep/sufficient knowledge on the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content.

LLMs are very good at simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of "if you couldn't be bothered to write it, I shouldn't bother to read it".


Alternate theory... a few months into the LLMism phenomenon, people are starting to copy the LLM writing style without realizing it :(

This happens to non-native English speakers a lot (like me). My style of writing is heavily influenced by everything I read. And since I also do research using LLMs, I'll probably sound more and more like an AI as well, just by reading its responses constantly.

I just don't know what's supposed to be natural writing anymore. It's not in the books, it's disappearing from the internet; what's left? Some old blogs, for now, maybe.


The wave of LLM-style writing taking over the internet is definitely a bit scary. Feels like a similar problem to GenAI code/style eventually dominating the data that LLMs are trained on.

But luckily there's a large body of well written books/blogs/talks/speeches out there. Also anecdotally, I feel like a lot of the "bad writing" I see online these days is usually in the tech sphere.


Books definitely have natural writing, read more fiction! I recommend Children of Time by Adrian Tchaikovsky

> taste scales now.

Not having taste also scales now, and the majority of people like to think they're above average.

Before AI, friction to create was an implicit filter. It meant "good ideas" were often short-lived because the individual lacked conviction. The ideas that saw the light of day were sharpened through weeks of hard consideration and at least worth a look.

Now, anyone who can form mildly coherent thoughts can ship an app. Even if there are newly empowered unicorns, rapidly shipping incredible products, what are the odds we'll find them amongst a sea of slop?


I think this advice is pretty apt for small to medium sized companies. We're all invested in the company succeeding, but you don't want to become known as the person that always says "no".

At large companies, I've rarely found a reason to speak out on a project. Unless it has a considerable effect on my team/work (read: peace of mind), it just doesn't make sense to be the person casting doubt. There's not much ROI for being "right".

If you manage to kill the project before it starts, no one will ever know how bad of a disaster you prevented. If the project succeeds despite your objections, you look like an idiot. And if it fails - as the author notes, that doesn't get remembered either.

As a senior IC, the only real ROI I've found in these situations is when you can have a solution handy if things fail. People love a fixer. Even if you only manage to pull this off once or twice, your perception in the org/company gets a massive boost. "Wow, so-and-so is always thinking ahead."

A basic example I saw at my last company was automated E2E testing in production. My teammate had suggested this to improve our ability to detect regressions, but it was ultimately shot down as not being worth the investment over other features.

A few months later, we had seen multiple instances of users hitting significant issues before we could catch them. My teammate was able to whip out the test framework they had been building on the side, and was immediately showered with praise/organizational support (and I'm sure a great review as well).
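Not the parent's actual framework, but a minimal sketch of the kind of production smoke check such a side project might start from (the endpoint URL, payload fields, and latency threshold here are all hypothetical):

```python
# Hypothetical production E2E smoke check: hit a health endpoint and
# flag regressions. URL, payload shape, and thresholds are made up.
import json
import urllib.request

HEALTH_URL = "https://example.com/api/health"  # hypothetical endpoint

def evaluate_health(payload: dict, max_latency_ms: int = 500) -> list:
    """Return a list of regression findings for one health payload."""
    findings = []
    if payload.get("status") != "ok":
        findings.append("status=%r" % payload.get("status"))
    if payload.get("latency_ms", 0) > max_latency_ms:
        findings.append("latency %sms > %sms"
                        % (payload["latency_ms"], max_latency_ms))
    return findings

def run_smoke_check() -> list:
    # Fetch the live payload and evaluate it; empty list means healthy.
    with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
        return evaluate_health(json.load(resp))
```

The nice property of separating `evaluate_health` from the fetch is that the regression rules are unit-testable offline, while `run_smoke_check` can run on a cron/CI schedule against prod.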


The effort required to be ready with the fix is often SO much less than what you need to convince folks the problem exists in the first place. I find it's frequently the only viable option on an individual or team scale.


I've realized that climbing the corporate ladder doesn't make any sense. You put more effort, you take responsibility for stupid people's decisions, and then you get a disproportionately small reward. The smartest move is to find a bottom-tier position where they pay you enough to sustain your desired lifestyle, but where you cannot really be blamed for failures of the management.


Relevant: https://en.wikipedia.org/wiki/Dilbert_principle

> You put more effort, you take responsibility for stupid people's decisions, and then you get a disproportionately small reward

On that I disagree. Managers might have to take responsibility for bad decisions, sure, but get a disproportionately larger reward than those under them. It's certainly less stressful at the bottom of the ladder, but don't expect to get much praise or monetary reward, and you're the first to go as soon as something goes wrong. There's a reason why late-stage companies are full of middle managers, and few people actually doing the work.


> don't expect to get much praise or monetary reward

Yeah so I figured out that if I have a bullshit busyjob for €100k and my option is to actually start working my ass off and maybe double the salary in absolute best-case scenario, then fuck that. But I admit that my position might be exceptional.

> and you're the first to go as soon as something goes wrong.

I live in Europe so I assume I'd survive even a big fuckup as long as I'm following my manager's orders, even if HQ is American. Also, when there are bigger layoffs, by law they must let people go in order from newest hires to oldest, which means I'm not in immediate danger even if they cut the workforce.

The biggest danger is someone discovering that I mostly play video games at work and then giving me lots of useless tasks just to keep me occupied.


It still makes little sense to be a line level manager. You can make just as much as a senior+ IC at the right company.


My kind of approach as well. I don't care if it's seen as not being career oriented, as long as there are options to work elsewhere, even outside IT.


> At large companies, I've rarely found a reason to speak out on a project.

That's true. And it's currently one of the main reasons why startups are so efficient compared to MegaCorps.

In small companies, it takes a few engineers calling out "this is bullshit" to stop a disaster.

In large corps, it takes two years, 10M USD, and a team in burnout to reach the same result.

And the main reason is the usual source of all sins: *Politics*.


I feel like both of these examples are insights that won't be relevant in a year.

I agree that CC becoming omniscient is science fiction, but the goal of these interfaces is to make LLM-based coding more accessible. Any strategies we adopt to mitigate bad outcomes are destined to become part of the platform, no?

I've been coding with LLMs for maybe 3 years now. Obviously a dev who's experienced with the tools will be more adept than one who's not, but if someone started using CC today, I don't think it would take them anywhere near that time to get to a similar level of competency.


I base part of my skepticism about that on the huge number of people who seem to be unable to get good results out of LLMs for code, and who appear to think that's a commentary on the quality of the LLMs themselves as opposed to their own abilities to use them.


> huge number of people who seem to be unable to get good results out of LLMs for code

Could it be they're using a different definition of "good"?


I suspect that's neither a skill issue nor a technical issue.

Being "a person who can code" carries some prestige and signals intelligence. For some, it has become an important part of their identity.

The fact that this can now be said of a machine is a grave insult if you feel that way.

It's quite sad in a way, since the tech really makes your skills even more valuable.


It's funny that you mention moving outside the city when Zohran's tax plan is centered on bringing the corporate tax rate in line with our neighboring state.

I'll also caveat that any parallels you might see in Seattle don't really apply to NYC. Besides the low car ownership rates, wealthy individuals choose to live in NYC for its convenience and culture, which really are unique in the US.


> and now, what screen you’re on, what do you see?

There's a "follow me" feature to see what other users are doing. It's been around for several years.


I was referring to prototype viewing, not viewing the design itself.


There's undoubtedly a cohort of tourists that come to Japan with the "Disneyland" mindset, and I agree that some sort of government-level change is needed to curb abuse. But I would like to believe these folks are in the minority.

I think a greater proportion of the tourist population are individuals that visit Japan and maybe haven't done enough research, or are just unaware of norms here. Not understanding where to queue, how to order, navigate public transport, what to do at a temple, onsen, etc. This group isn't the 15% of "Best in Class tourists" Craig writes about, but rather the 75% that want to be respectful and don't know any better.

Many locals/expats will see this group and look down in disdain (or lament about them in a blog post...), but why don't more people just ask if they need help? It takes little effort to point someone in the right direction, and if it helps them better understand the country it's a win-win for both tourists and residents alike.

I feel like people love to talk about how considerate Japanese culture is, but don't care to practice it themselves when given the chance.


I'm increasingly certain that companies leaning too far into the AI hype are opening themselves up to disruption.

The author of this post is right, code is a liability, but AI leaders have somehow convinced the market that code generation on demand is a massive win. They're selling the industry on a future where companies can maintain "productivity" with a fraction of the headcount.

Surprisingly, no one seems to ask (or care) how product quality fares in the vibe code era. Last month Satya Nadella famously claimed that 30% of Microsoft's code was written by AI. Is it a coincidence that GitHub has been averaging 20 incidents a month this year? [1] That's basically one per work day...

Nothing comes for free. My prediction is that companies over-prioritizing efficiency through LLMs will pay for it with quality. I'm not going to bet that this will bring down any giants, but not every company buying this snake oil is Microsoft. There are plenty of hungry entrepreneurs out there that will swarm if businesses fumble their core value prop.

[1] https://www.githubstatus.com/history


> I'm increasingly certain that companies leaning too far into the AI hype are opening themselves up to disruption.

I am in the other camp. Companies ignoring AI are in for a bad time.


Haha, I tried to couch this by adding "too far", but I agree. Companies should let their teams try out relevant tools in their workflows.

My point was more of a response to the inflated expectations that people have about AI. The current generation of AI tech is rife with gotchas and pitfalls. Many companies seem to be making decisions with the hope that they will out-innovate any consequences.


How so? Not enough art slop logos so they don't have to pay an artist? Other than in maximizing shareholder return I fail to see how foregoing AI is putting them "behind".

AI, especially for programming, is essentially no better than your typical foreign offshore programming firm, with nonsensical comments and sprawling conflicting code styles.

If it eventually becomes everything the proponents say it will, they could always just start using it more.


I agree with this. "Companies which overuse AI now will inherit a long tail of costs" [1]

[1] AI: Accelerated Incompetence. https://www.slater.dev/accelerated-incompetence/


Is any of this pushback having a material impact on the company? It seems like their stock is still hovering around all-time highs.

