Hacker News: vb7132's comments

Having managed developers for over five years, I have seen two categories of devs (to simplify the argument, let's focus just on the smart ones):

- One group loves to work independently and gets you results; they are fast and they figure things out.

- The second group needs direction; they can be creative in their space, but check-ins and course corrections are needed.

AI feels like group 1 but is actually group 2. In essence, it doesn't fully fit either group; I am still figuring out this third group.


True, there are people who are good with people. And they should totally become managers.

But there is also a third kind: those who like to design systems and let someone else build them.


Same-level engineer here - I feel that the importance of expertise has only increased; just the language has changed. Think about the engineer who was an expert in COBOL and Fortran but didn't catch the C++ / Java wave. What would you say to them?

LLMs goof up, hallucinate, and make many mistakes - especially in the design or architecture phase. That's where experience truly shines.

Plus, they let you integrate things that you aren't good at (UI, for me).


I was at a big tech company for the last 10 years and quit my job last month - I feel 50x more productive outside than inside.

Here is my take on AI's impact on productivity:

First, let's review what LLMs are objectively good at:

1. Writing boilerplate code

2. Translating between two different coding languages (migration)

3. Learning new things: summarizing knowledge, explaining concepts

4. Documentation and menial tasks

At a big tech product company, #1, #2, and #3 are not as frequent as one would think - most of the time is spent in meetings, and in meetings about meetings. Things move slowly; it's designed to be like that. The majority of devs are working on integrating systems - whatever their manager sold to their manager, and so on. The only time AI really helped me at my job was during a one-week hackathon. Outside of that, integrating AI felt like more work rather than less, without much of a productivity boost.

Outside, it has proven to be a real productivity boost for me. It checks all four boxes. Plus, I don't have to worry about legal, integrations, or production bugs (eventually those will come).

So it depends who you are asking -- it is a huge game changer (or not).


I think a lot of people will not like to hear this, but we use AI for almost everything internally. The noob way to go about this is to give it a couple of tasks and complete root access to your life. That always ends in disappointment. Instead, I realized that AI always needs an architect: opinionated, strategic, authoritative.

It is quite good at following most orders, which is why you must ALWAYS be in the loop. AI can augment, but not replace. Maybe some day it will, but not now, even with the latest SOTA models.

I let AI write my emails, but I never give it the ability to hit send. I let AI access my data to make informed decisions, but I never let it make the final decision.

You may think I'm being paranoid, but I'm a very cautious person. I don't jump into new technology fresh out of the oven, and this has served me well for the last 15 years. (I learned that lesson courtesy of MongoDB.)

With AI, I am taking the same approach: experiment, understand the limits, and only then implement. It's working really well so far, and I've managed to automate tons of tedious tasks, from emails to sales to even meetings.

I don't use Clawdbot, nor any library. I wrote my own wrappers for everything in Elixir: Instructor and the Ash framework with Phoenix, plus a bunch of generators to automate tedious tasks. I control the endpoints the models are loaded from (OpenRouter) and use a multi-model flow, so no one company has enough data about me - only bits and pieces of random user IDs.
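A multi-model flow like this is simple to sketch. My stack is Elixir, but here's a minimal Python illustration of the idea (the model names and the round-robin policy are just placeholders; OpenRouter does expose an OpenAI-compatible chat completions endpoint):

```python
import itertools
import json
import urllib.request

# Rotate requests across providers so no single vendor sees the whole
# conversation history. Model names below are placeholders, not a
# recommendation.
MODELS = itertools.cycle([
    "anthropic/claude-sonnet-4",
    "openai/gpt-4o",
    "google/gemini-2.0-flash-001",
])

def pick_model(models=MODELS):
    """Round-robin over the configured model list."""
    return next(models)

def ask(prompt: str, api_key: str) -> str:
    """Send one prompt to OpenRouter's OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": pick_model(),
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Each call lands on a different provider, so any one company only ever sees a slice of the traffic.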

Privacy is the real challenge with AI.


"I think a lot of people will not like to hear this..."

Lol why? You've been suckered in and will eventually crash and burn. But carry on.

Just remember when things go wrong - it's your ass on the line.


Look, whether you like it or not, AI is here, it is decent at some tasks, and the world is using it to automate stuff. You saw how Clawdbot exploded, right? Users getting hacked left and right didn't stop the adoption; yesterday there was yet another hack incident. AI solves such a burning pain that people don't care even if they get hacked.

Will I crash and burn? Maybe; you're right. But that's why I'm taking things at a very slow pace: only automating internal tasks, only things I trust AI to do, very, very limited scope. What's really my alternative here?

Just sit back and watch the world move on? My alternative is not changing with the times and stagnating - that's not really a solution. And even if I do that, I want data points showing AI is really a dead end, not just assumptions. My alternative reality isn't a bed of roses either: a lot of people at the top believe they can replace me and my work (CTO) with AI, thanks to the hype. I'm just trying to evolve so I don't become a meme down the line. Can they actually replace me or my job with AI? Absolutely not, from what I'm seeing. But the hype of cutting costs is always attractive to people at the top. Just trying to stay alive, man, lol.


how so?

Having been an early employee and founder of a few startups, and then worked at a few larger companies: most people who have only ever worked at FAANG have no idea how much more productive tiny teams with ownership are.

Been a startup founder - work at Meta currently.

AI is making everyone I've seen faster. I'd say 30% of the tickets I've seen in the last month have been solved by just clicking the "delegate to AI" button.


How did you decide to work at Meta?

I'll be honest, just the idea of working there makes me feel like vomiting. For me, they are bizarrely evil. They're not evil like, "we're going to destroy our competition through anti-competitive practices" (which they do), but "let's destroy a whole generation of minds."

And now with the glasses. I mean, jeeze. Can there be a stronger signal of not caring for others?

It's as if Meta sees people as cattle. Though I think a lot of techies see humans as cattle, truthfully.

What was your rationale?

I guess this question is out-of-the-blue, and I don't mean for you to justify your existence, but I've never understood why people choose to work for Meta.


I feel the same - would I like a Meta paycheck? Sure, but I couldn't look at myself in the mirror knowing what the company I'm giving my work to does to people's brains (not just the young, though that is the most reprehensible part).

I told my son I would disown him if he worked for Facebook, for the reasons stated above.

Then he took a contracting gig for Meta. His rationalization was that the project was an ill-specified prototype that would never see the light of day - if they wanted to throw money at him for stuff like that, he would accept it.

That gig is finished, and he's now thoroughly disillusioned with working for big tech.


Guess who is running product and other related functions at OpenAI and Anthropic now

From this angle, what's the difference between Meta and a junk food company?

Both sell things that are bad for you, but that the consumer has complete control over whether or not to consume.

And not all of what Meta is selling is bad. There's a lot of information exchanged on Facebook, Instagram, etc. that is good for society - health and nutrition advice, for example.


I've always attributed it to people being very good at convincing themselves they aren't one of the bad guys. A big paycheck makes it even easier to ignore what you are a part of.

Where livelihood is concerned, rational individuals with strong morals can do irrational and immoral things (e.g., work at the Palantirs of the world).

TLDR: incentives don't just shape perception, they form it


I have a theory that when you have 2 developers working in synergy, you're at something like 1.8x what 1 person can do. As you add more people you approach 2x, until some point after which productivity starts to decrease. I don't think that point is far beyond 5.

This is very close to the thesis, or at least the theme, of the essays in The Mythical Man-Month by Fred Brooks. Some elements are dated (1975), but many feel timeless.

Brooks's law, "Adding manpower to a late software project makes it later," is just the surface of the metaphorical language that has most stuck with me: large systems and teams sinking deeper into the tar pit as they struggle against coordination scaling pains; conceptual integrity in design, akin to preserving the architectural unity of Reims cathedral; the surgical-team model's attempt to expand roles within their limitations; etc.

Love a good metaphor, even when its foundation is overextended or out of date. Highly recommended.


My experience of pair programming is the opposite. In a pair I get maybe 4x as much done as when working alone.

Mostly it's because when we hit a point where one person would get stuck, the other usually knows what to do, and we sail through almost anything with little friction.


Maybe the multiplier is 4x, and by the time you have a team of ten you're back down to 2x? My theory is a bit of hyperbole, and I don't know what the multipliers would be. But I know that many times you can move quickly when you're small.

And to your point, a single person can easily get stuck, I know that applies to me many times.


There's that, but you're missing a lot of variables. E.g., if one of you had perfect sleep and the other didn't, the individual with perfect sleep will perform better for longer.

I don't get why people try to simplify - you're removing important details that determine performance and therefore output. This leads to false conclusions.


This. Hell, even a company that is 100 people or more. I've seen companies grind to a snail's pace around 80-90 people and then still scale to 400-500, at which point it's impossible to really do anything meaningful. I have tried to test for this in interviews over the years, but ultimately I just end up disappointed. At this point I don't even look; I just work in small, independently organized groups or co-ops.

I'm excited about Agents helping many tiny teams succeed. There has been hype around the "who will be the first solo founder to a billion" but I am hoping for many small teams to succeed and I think this is the more interesting story.

I agree it's in the 2-7 person range.

The challenge for those teams is distribution. They will crush it at building, but I'm not sure how they can crack distribution. Some will; maybe there is a way to help thousands of small teams distribute.


I love tiny teams. I hate big corp.

Big corporations are full of people who love to entertain 20+ people on video calls. 1-2 people speak; the others nod their heads while browsing Amazon.

I wouldn’t be sad if those jobs vanished.


Well, you should be terrified of those jobs vanishing I think.

All of these people will consequently be on the job market competing for your opportunities.

Yes, you may feel superior to their capabilities - and may even be justified in your opinion (I know nothing about you beyond this comment) - but it'll still significantly impact your professional future if this actually happens. It would massively impact wages, at the very least.

Your viewpoint is incredibly short-sighted and doesn't grasp the broad effect such a change would have on the industry as a whole.


Maybe I’m naive but I’m not terrified about the future at all.

Every efficiency wave made life better for humans. Why should this one be different?

Assume many people lose their jobs. This in turn means companies will have higher margins. Higher margins attract more competition. More competition means lower margins since some will use the lower costs to offer lower prices.

Lower prices increase quality of life for everyone.

People who lost their job might be able to pick up doing something they actually enjoy…


> People who lost their job might be able to pick up doing something they actually enjoy…

That's so out of touch.

First, you're conveniently ignoring the possibility that people actually like the jobs they are about to lose.

And believe it or not, most people aren't toiling away at jobs they hate because it never occurred to them to do something they like more. They work jobs they dislike because it's the only choice they have: they have to pay their bills so they can survive and so their dependents can have an acceptable life.


Throughout history, what were once middle class and artisan professions were increasingly automated and tons of people and their families ended up in poverty until they died.

We just gloss over them and vilify the ones who tried to do anything about it (the ones who weren't executed also died in poverty).


Yeah, this always gets completely glossed over in these conversations.

People always say: "Things ended up working out in the end"

Things only worked out in the sense that society carried on without all the people who lost their jobs.

The U.S. has recent examples of large scale job destruction.

Michigan, 2000-2009: massive job destruction. 330,000 auto workers in 2000, down to 109,000 in 2009. Estimates are that 1/3 to 1/2 of all those affected never achieved equal or similar employment. That is, somewhere around ~70k-120k workers never earned as much as they previously did. Since this was mostly contained within one city (Detroit), it was pretty easy for the country to ignore it and go on with its life.

(Detroit was in decline since the 50's really. 2000-2009 is just a particularly bad snapshot.)

Coal mining towns have experienced the same phenomenon but more gradually. The poverty left behind by the destruction of those jobs has never been addressed.

With AI, we are heading into a situation where potentially a much larger amount of people will be affected. So maybe that changes the calculus on the government stepping in and fixing the problem. But I wouldn't count on it.

Sources for Michigan numbers:

https://lehd.ces.census.gov/doc/workshop/2010/LEDautopres031...

https://research.upjohn.org/cgi/viewcontent.cgi?article=1205...


> Since this was mostly contained within one city (Detroit)

It's concentrated in Detroit but also distributed throughout the state, as you can observe in the census.gov slides.

The devastation is regional. It's been a wild experience, watching it all fall apart over the last 40+ years. The decay is immense and impossible to convey to someone from a rich state. Someone from the Eastern Bloc might get it, but I've never been able to communicate it to a Californian. Hop in a car and drive from town to town. Once-prosperous communities are boarded up and gradually reclaimed by nature. Department stores are converted into soup kitchens or marijuana dispensaries.

"Things will work themselves out" is not a law of nature, unless we broaden our definition of "things working out" to include outcomes like "everyone young enough flees, everyone else clutches their savings until they eventually die impoverished."

But with AI, even outcomes like that might be overly optimistic. Where will young people flee to? Where can they go, what trade can they learn, to be safe enough to eventually die in comfort?

When I look at Michigan I see both the past and the future, and I am planning accordingly.


You need to be careful with these things. Such exaggerated narratives are the reason people are afraid.

During the Industrial Revolution, many artisans and skilled tradespeople lost their livelihoods.

And yet, while many people did suffer serious short-term hardship and wage collapse, most did not simply remain in lifelong poverty, because over time industrialization created new types of employment and average wages eventually rose.

You don’t want to go back to before the Industrial Revolution. Do you?


I think you need to read up more on living conditions and the violent labor movements in that era. Why they started, what they fought for and what they won for you.

Because your ignorance is painful.


> Because your ignorance is painful.

It's not acceptable to attack a fellow community member like this on HN. The guidelines make it clear we're aiming for better than this:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

https://news.ycombinator.com/newsguidelines.html


Being kind to people who blatantly lie about history and hide the suffering of thousands is how we got into the current mess.

We don't need a defiant mini-sermon, and it's very poor conduct to use the term "blatantly lie" about a fellow community member who is just expressing their understanding of a topic. It is never morally necessary to abuse people on this site. This is a community, not a battleground.

If you have a different understanding of the topic, share it, so all can benefit. That's what people do when they are sincere about contributing positively here.

If instead you insist on continuing to use abusive terms towards others here, we'll have to ban the account.


> People who lost their job might be able to pick up doing something they actually enjoy…

It's more probable that they lose everything before ending up with a worse job that pays less.


I agree with your categories. The majority of the usage for me is (1) and (3).

(1) LLMs are basically Stack Overflow on steroids. No need to go look up examples or read the documentation in most cases; they spit out a mostly working starting point.

(3) Learning. Ramping up on an unfamiliar project by asking Antigravity questions is really useful.

I do think it makes devs faster, in that it takes less time to do these two things. But you're still running into the 80% of the job that does not involve writing code, especially at a larger company.

In theory, this should allow a company to do more with fewer devs, but in reality it just means that these two activities become easier, and the 80% is still the bottleneck.


> LLMs are basically Stack Overflow on steroids

That, and I've never had to beg an LLM for an answer, or waste 5 minutes of my life typing up a paragraph to pre-empt the XY Problem problem. It's also never closed my question as a duplicate of an unrelated one.

The accuracy tends to be somewhat lower than SO, but IMO this is a fair tradeoff to avoid having to potentially fight for an answer.


Yes, throughput is determined by the bottleneck and above a certain organization size, the bottleneck often is coordination costs.

Interesting.

Are you generating revenue or, otherwise, what productivity are you measuring?

Without generating revenue (which, to be clear, is a very good proxy for measuring impact), everyone can be very prolific in their hobbies. But the labor market is about making money for a living, and unless your work can directly cover your day-to-day needs, it can't be called productive.


Very valid point. I will lay down the facts for you:

At my previous employer, I was generating $2.5 million per year (revenue per employee). I didn't ship a single line of code. All the time was spent trying to convince various stakeholders.

Now, I have already built a couple of apps that help me better manage my tech news (keeps me sane) plus I am writing a blog that generates $0. It's only been a month.

If you measure the immediate dollar value, you are right. But in life, payoffs are not always realized immediately. Just my opinion, anyway.


Can mostly second this.

Working on a side project, and it's truly incredible how good AI has been for MOST of it.

Also bewildering how truly awful it was at some seemingly random things - like writing not-terribly-difficult assembly, most of which already exists, to do Go-style hot splitting (or even getting it to understand what older versions of Go did).

I suspect it'll still be 3 years before AI is as good inside the FAANGs as it is outside, just due to the ungodly huge context and the amount of proprietary stuff it would need to learn to use effectively, plus getting all the access to it, etc.

But, even when it does all that, that's maybe 33% of the job.

I just don't see mass layoffs at the really big tech companies, unless it's more about cutting for its own sake than about people actually having been made redundant.

Even at the management level, I'm not sure we're going to see managers managing teams of 30 instead of teams of 10.

At the end of the day, a manager needs to know what you're doing and if you're any good at it, and there's only so many people a person can do that effectively with.

Maybe low-level managers go away and it's just TLMs, but someone still needs to do your 1-on-1s and babysit those who need babysitting.


> 2. Translating between two different coding languages (migration)

I have a game written in XNA

100% of the code is there, including all the physics that I hand-wrote.

All the assets are there.

I tried to get Gemini and Claude to do it numerous times, always with utter failure of epic proportions on anything that's actually detailed:

1. My transition from the lobby screen into gameplay? 0% replicated, on all attempts.

2. The actual physics in gameplay? 0% replicated; none of it works.

3. The lobby screen itself? Non-functional.

Okay, so what did it even do? Well, it put together a sort-of-boilerplate main menu and barebones options with weird-looking text that isn't what I provided (given that I provided a font file), a lobby that I had to manually adjust numerous times before it could get into gameplay, and then nonfunctional gameplay that only handles directional movement, with half-working fish traveling behavior and nothing else.

I've tried this a dozen times since 2023 with AI and as late as late last year.

ALL of the source code is there - every single thing needed to translate it into a functional game in another language. It NEVER once works, or even comes remotely close.

The entire codebase is about 20,000 lines, with maybe 3,000 of it being really important stuff.

So yeah, I don't really think AI is "really good" at anything complex. I haven't been proven wrong in my 4 years of using it.


I crave to see people saying "Here's the repo, btw: ..." and others trying to port it over, just so we can see all the ways AI fails (and how each model does), and maybe find a few ways to improve its odds along the way. Until it eventually gets included in training data, a bit like how LLMs are oddly good at making SVGs of pelicans on bicycles nowadays.

And then maybe someone slightly crazy comes along and tries to see how much they can do with regular codegen approaches - no LLMs in the mix, but no manual porting either.


Agreed -- coding agents / LLMs are definitely imperfect, but it's always hard to contextualize "it failed at X" without knowing exactly what X was (or how the agent was instructed to perform X)

I'm sure someone who regularly programs games in my destination language, and who has also worked with XNA as a game developer, could port it in a week or so, yeah.

- Split it into different modules / tasks

- Do not say: "just convert this"

- On critical sections, do a method-per-method translation

- Don't forget: your 20,000-line source as a whole will distract any model on longer tasks (and sessions, for sure)

- Create dedicated Claude projects for each sub-module
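One way to mechanize the method-per-method advice above: extract one function at a time so each translation prompt stays small and focused. A rough sketch (shown on Python source for brevity; the actual model call is a placeholder, not any specific API):

```python
import ast

def function_chunks(source: str) -> dict:
    """Split a module into {function_name: source} chunks so each
    translation prompt carries only one method's worth of code."""
    tree = ast.parse(source)
    return {
        node.name: ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
    }

# Usage: translate each chunk independently instead of the whole file.
# `translate(chunk)` below is a placeholder for your model call.
#
# for name, chunk in function_chunks(src).items():
#     ported[name] = translate(chunk)
```

The same idea applies to C#/XNA, just with a C# parser (e.g. Roslyn) doing the chunking instead of Python's `ast` module.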


This matches my experience. Unless it's been done to death online (crud etc) it falls on its face every time.

It's okay; shills and those with stock in the AI companies have brainwashed themselves inside out and will spam this website forever, claiming that if you just introduce the right test conditions, agents can do anything. Never mind engineering considerations - if it passes the tests it's good, homie! Just spend an extra few hundred or thousand a month on it! Especially on the company I have stock in! Give me money!

Yes, you are right: among the four points, migration is the most contentious one. You need to be fairly prudent about migration; depending on the project's complexity, it may or may not work.

But I do feel this is a solvable problem long term.


In those situations you basically need to guide the LLM to do it properly. It rarely one-shots complex problems like this, especially outside web dev, but it can still be faster than doing it manually.

Oh, believe me, I broke it down super finely - down to single files, and even single functions in some places.

It still is completely and utterly hopeless


I've done this multiple times in various codebases, both medium-sized personal ones (approx. 50k lines for one project, and a smaller 20k-line one earlier), and I'm currently doing a similar migration at work (~1.4 million lines, though we didn't migrate the whole thing - more like 300k of it).

I found success with it pretty easily for those smaller projects. They were gamedev projects, and the process was basically to generate a source-of-truth AST and diff it against a target-language AST, then run some further verifier steps: comparing log output, comparing screenshot output, and getting it to write integration tests. I wrote up a bit of a blog post on it. I'm not sure if it will be of any use to you - maybe your case is more difficult - but anyway, here you go: https://sigsegv.land/blog/migrating-typescript-to-csharp-acc...
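To make the AST-diff idea concrete: real cross-language diffing needs a parser per language, but the core verifier can be sketched for the same-language case - compare the function inventories of the original and the port and flag anything missing (a rough sketch, names and arity only, not the blog's actual tooling):

```python
import ast

def api_surface(source: str) -> set:
    """Collect (function name, arg count) pairs as a crude
    fingerprint of what a module implements."""
    tree = ast.parse(source)
    return {
        (node.name, len(node.args.args))
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

def missing_in_port(original: str, port: str) -> set:
    """Anything present in the original but absent from the port
    is a red flag for the migration verifier."""
    return api_surface(original) - api_surface(port)
```

A cross-language version would map each language's AST into a shared shape (name, arity, maybe call graph) before diffing, which is roughly what the source-of-truth-AST approach does.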

For me it worked great, and I would (and am) using a similar method for more projects.


"I also wanted to build a LOT of unit tests, integration tests, and static validation. From a bit of prior experience I found that this is where AI tooling really shines, and it can write tests with far more patience that I ever could. This lets it build up a large hoard of regression and correctness tests that help when I want to implement more things later and the codebase grows."

The tests it writes are, in my experience, extremely terrible, even with verbose descriptions of what they should do. Every single test I've ever written with an LLM I've had to modify manually or straight up redo. This was as recently as a couple of months ago, for a C# MAUI project doing Playwright-style, UI-based functionality testing.

I'm not sure your AST idea would work for my scenario. I'd be wanting to convert XNA game-play code to PhaserJS. It wouldn't even be close to 95% similar. Several things done manually in XNA would just be automated away with PhaserJS built-ins.


Yeah, I can see where framework patterns and such would need a lot of corrections afterwards with that type of migration. Mine went the other direction and covered only the server portion (an Express server written in TypeScript for a Phaser game, ported to Kestrel on C#, which was able to use pretty much identical code; at the end I just switched over and refactored a few things to make it more idiomatic C#).

For the tests, I'm not sure why we have such different results, but essentially it took a codebase I had no tests in and, during the port, one-shot a ton of tests that have already helped me in adding new features. My game server runs in Kubernetes and has an "auto-distribute" system that matches players to servers and redistributes them if one server is taken offline. The integration tests it wrote for that auto-distribute system found a legit race condition that was present in both the old and new code (it had migrated things accurately enough to preserve the same bugs), and as part of implementing that test it fixed the bug.

Of course I wouldn't use it if it weren't a good tool, but for me the difference between doing this port via this method and doing it manually, as in prior massive projects, was such an insane time save that I would have been crazy to do it any other way. I'm super happy with the new code, and after getting the test infra and the like up, it's honestly a huge upgrade over my original code that I thought I had so painstakingly crafted.


Super cool - I don't have time to read it right now, but thinking in terms of ASTs is pretty handy!

The only model that works well for complex things is Opus, and even then barely (and you need to use API/token pricing if you want a guarantee that it's the real thing).

This is a bot comment.

IMO, the writer is overzealous with their comments on LLMs. As a coder, it reads like an outsider trying out a product that has amazed me over and over so many times.

> They aren't perfect, but the kind of analysis the program is able to do is past the point where technology looks like magic.

But as you use this product over a long period of time, there are many obvious gaps: hallucinations, repeated tool calls, out-of-context outputs, etc.

To me, refine.ink sounds like a company that has built heavy tooling around some very-high-context-window LLMs and some very good prompts. Their claim amounts to comparing that against any good off-the-shelf LLM with any prompt. But when you are spending a bunch of money to build a whole ecosystem around LLMs, it's obvious that a bare model is not going to beat their output.

I won't be surprised if the next version of an LLM completely outperforms their output within the next few months - that's usually the case with all the coding tools and scaffolding. They are rendered useless by a superior LLM.


Your Discord link doesn't seem to work. One basic question: as a hardware noob, where do I start? A minimal getting-started guide could really help.

Nevertheless, the initiative looks cool!


This is fantastic. The app is simple, useful and feels de-cluttered.

Two feature requests:

1. Allow cmd+F search across the whole app - I wanted to search for your post in the app but couldn't.

2. A browser button to open the current page in an external browser.

Side note: I am trying to minimize my HN time by getting push notifications for relevant HN posts, and that's how I discovered your post. Would it be cool if one could write custom agents on top of an app? Maybe?


A link to my experiment: https://www.bvaibhav.info/knos-digest


This feature is very useful :)


This seems like a common problem. I am experimenting with how to consume less news (but still not miss the important bits). Built an agent that sends me daily summaries. And that's how I found this post!

I am maintaining the list of what I am reading: https://www.bvaibhav.info/knos-digest

Plan to extend this beyond HN.


This is awesome. I had a similar idea: convert dense non-fiction books into something more readable, e.g., SAPIENS vs. UNSTOPPABLE US.

But this makes me wonder: what is the barrier to entry for these apps now? Anyone can build one. Is there going to be a barrage of apps/websites like this?


Why would you do that to Sapiens? It's an enjoyable read.

