Tangentially, Burnham has a long history with these sorts of public-sector private vampires, having been up to his neck in PFI (of "£200 to change a lightbulb" fame) in his stint leading the NHS.
The fact that a huge amount of money is extracted from the UK government for no (or very little) value is a crying shame.
I know multiple people who work as consultants (hired via private agencies, paid for by the Government) who have literally done nothing for six months or more.
They have no incentive to whistleblow, the agency employing them has no incentive to get rid of them as it takes a cut, and the government department hiring them is none the wiser because it has no technical knowledge or understanding of what's being carried out.
Sure (knowing the underlying ideas and having proficiency in their application) - but producing software by conducting(?) LLMs is rapidly becoming a wide, deep, must-have skill, and the lack of it will be a weakness for any student entering the workplace.
This is an interesting and thoughtful article, I think, but it's worth evaluating in the context of the service ("cognitive security") its author is trying to sell.
That's not to undermine the substance of the discussion of political/constitutional risk under the inference-hoarding of authority, but I think it's useful to bear in mind the author's commercial framing (or, more charitably, the motivation for the service, if this philosophical consideration preceded it).
A couple of arguments against the idea of singular control: it requires technical experts to produce and manage it, and it would be distributed internationally, given that any countries advanced enough would have their own versions. But it would, of course, pose tricky questions for elected representatives in democratic countries to answer.
There's not a direct tie to what I'm trying to sell, admittedly. I just thought it was a worthwhile topic of discussion - it doesn't need to be politically divisive, and I might as well post it on my company site.
I don't think there are easy answers to the questions I am posing and any engineering solution would fall short. Thanks for reading.
Not the person you are replying to, but even if the technical skills of AI increase (and tools like Codex and Claude Code are indeed insanely good), you still need someone to make the risky decisions that could take down prod.
I'm not sure management is eager to give software owned by other companies (inference providers) permission to delete prod DBs.
Also, these roles usually involve talking to other teams and stakeholders more often than a traditional SWE role does.
That said:
> There are no hiding places for any of us.
I agree with this statement. While the timeline is unclear (LLM use is heavily subsidized), I think this will translate into less overall demand for engineers.
I think it's important to note that AI needs to be maintained. You can't reasonably expect it to have a 99.9% reliability rate, and as long as that remains true, the work will exist for the foreseeable future.
The future of software engineering is SRE (257 points, 139 comments)
https://news.ycombinator.com/item?id=46759063