I loved learning Computer Engineering in college because it demystified the black box that was the PC I used growing up. I learned how it worked holistically, from physics to logic gates to processing units to kernels/operating systems to networking/applications.
It's sad to think we may be going backwards and introducing new black boxes: our own apps.
I personally don't "hate" LLMs, but I see the pattern of their usage as slightly alarming; at the same time, I see the appeal of it.
Offloading your thinking: typing all the garbled thoughts in your head about a problem into a prompt and getting a coherent, tailored solution almost instantly. A superpowered crutch that helps you coast through tiring work.
That crutch soon transforms into dependence, and before you know it you start saying things like "Once you vibe code, you don't look at the code".
I think a lot of people, regardless of whether they vibe code or not, are going to be replaced by a cheaper solution. A lot of software that would've required programmers before can now be created by tech-savvy employees in their respective fields. Sure, it'll suck, but that doesn't matter for a lot of software. Software Engineering and Computer Science aren't going away, but I suspect a lot of programming is.
I've been around for a while. The closest we ever got was probably RPA. This time it's different. In my organisation we have non-programmers writing software that brings them business value on quite a large scale. Right now it's mainly through the chat framework we provide them so that they aren't just spamming data into ChatGPT or similar. A couple of them figured out how to work the API and set up their own agents, though.
Most of it is rather terrible, but a lot of the time it really doesn't matter. At least most of it scales better than Excel, and for the most part they can debug/fix their issues with more prompts. The stuff that turns out to matter eventually makes it to my team, and then it usually gets rewritten from scratch.
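For the curious: the "set up their own agents" part is less exotic than it sounds. Here's a minimal sketch of the kind of loop they landed on, assuming an OpenAI-style chat completions API (the model name and the CSV-cleanup prompt are placeholders, not our actual setup):

```python
import os
import requests  # assumes the requests library is installed

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # key kept out of the script itself

def ask(messages):
    """Send the running conversation to the model, return the reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "gpt-4o-mini", "messages": messages},  # model name is a placeholder
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The whole "agent": keep a message history and feed results back in.
history = [{"role": "system", "content": "You help clean up monthly sales CSVs."}]
history.append({"role": "user", "content": "Summarise the anomalies in last month's file."})
print(ask(history))
```

Terrible by our standards, but it runs, and for them that's the point.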
I think you underestimate how easy it is to get something to work well enough with AI.
I assume he’s mostly joking but… how often do you look at the assembly of your code?
To the AI optimist, the idea of reading code line by line will seem as antiquated as perusing CPU registers line by line. Something you do when needed, but typically you can just trust your tooling to do the right thing.
I wouldn’t say I am in that camp, but that’s one thought on the matter. That natural language becomes “the code” and the actual code becomes “machine language”.
And you could say that the difference is that high-level languages are deterministically transformed down, but in practice the compiler is so complex you'd have no idea what it's doing, and most people don't look at the machine code anyway. You may as well look at the LLM's prompt and make assumptions about the high-level code it spits out.
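For what it's worth, the same dynamic already exists one level up: every Python function is compiled to bytecode that almost nobody ever reads. A quick illustration with the standard library's `dis` module:

```python
import dis

def add_tax(price, rate=0.2):
    return price * (1 + rate)

# This prints the bytecode the interpreter actually executes.
# Most Python programmers never look at it and simply trust the
# compiler, which is roughly the optimist's stance toward LLM output.
dis.dis(add_tax)
```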
"Messages are stored on our servers and are technically accessible at the database level , we won't pretend otherwise. Kloak doesn't require email, phone, or personal info to create an account, your identity isn't tied to your messages the way it would be on other platforms.
Our goal is to implement end-to-end encryption for DMs so that even we can't read message content. But we're not there yet, since after all we need to make sure the platform is safe and not to shield illegal content being sent."
This is a message from one of their founders I found while exploring the app.
I still feel the same way about it. Feels like a weird mishmash of React and Svelte. I don't see any good reason to switch to it after working with Svelte and Solid in prod for the past couple of years.
Reminds me of the unfortunate book "Vibe Coding" by Steve Yegge, whom I otherwise enjoy. While it contained an okay, if very light on actionable detail, overview of the broad ideas behind LLM-assisted coding (how much of it was vibe coded, though?), much of it was co-written through an LLM book-editing pipeline, proudly advertised throughout the book. A treatise that would otherwise be one-tenth of the final length was blown up to the size of a volume, not unlike a piece of meat pumped with water to make it appear fattier.
Every time I see a title like this, I ask myself if I'm not being open enough, if my biases are interfering with any potential progress I could be making when it comes to utilising AI. Then I find out that the content is just more slop and it further solidifies my position on all of this. What a waste of energy. It really saddens me.
That's funny. I read an article about how they use em-dash prevalence in content as an indicator that it was AI generated. I should tell my agents to stop using them :)
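The heuristic is trivial to reproduce, which is also why it's trivial to defeat. A toy version, with a completely made-up threshold:

```python
EM_DASH = "\u2014"

def em_dash_rate(text: str) -> float:
    """Em dashes per 1,000 characters: a crude AI-ness signal."""
    return text.count(EM_DASH) * 1000 / len(text) if text else 0.0

def looks_generated(text: str, threshold: float = 2.0) -> bool:
    # The 2.0 cutoff is invented for illustration; any real detector
    # would combine many signals, not just one punctuation mark.
    return em_dash_rate(text) > threshold

print(looks_generated("Delve into this \u2014 truly \u2014 remarkable \u2014 tapestry."))
```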
Truly one of the statements of all time. I hope you look at the code, even frontier agents make serious lapses in "judgement".