But very often the CI operations _are_ the problem: just YAML files with unlimited configuration options, very limited documentation, and no LSP support.
Personally I think this is an extreme waste of time. Every week you're learning something new that is already outdated the next week. You're telling me AI can write complex code but isn't able to figure out how to properly guide the user into writing usable prompts?
A somewhat intelligent junior can dive deep for a week and reach the knowledge level that took you roughly three years to build.
No matter how good AI gets, we will never be in a situation where a person with poor communication skills can use it as effectively as someone whose communication skills are razor sharp.
But the examples you've posted have nothing to do with communication skills, they're just hacks to get particular tools to work better for you, and those will change whenever the next model/service decides to do things differently.
I'm generally skeptical of Simon's specific line of argument here, but I'm inclined to agree with the point about communication skill.
In particular, the idea of saying something like "use red/green TDD" is an expression of communication skill (and also, of course, awareness of software methodology jargon).
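For readers unfamiliar with the jargon, here is a minimal illustration of the red/green TDD cycle that phrase refers to. The `slugify` function and its test are hypothetical examples, not from any of the comments:

```python
import unittest

# Red/green TDD in miniature:
# Step 1 (red): the test below is written first, and fails because
# slugify() does not exist yet (or returns the wrong thing).
# Step 2 (green): write just enough implementation to make it pass.
# Refactoring comes after, with the passing test as a safety net.

def slugify(title):
    # Minimal implementation that satisfies the test; nothing more.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

The point in the thread is that saying "use red/green TDD" compresses all of the above into four words, which only works if both parties know the term.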
Ehhh, I don't know. "Communication" is for sapients. I'd call that "knowing the right keywords".
And if the hype is right, why would you need to know any of them? I've seen people unironically suggest telling the LLM to "write good code", which seems even easier.
I sympathize with your view on a philosophical level, but the consequence is really a meaningless semantic argument. The point is that prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better than trying to "guess the password" that will magically get optimum performance out of the AI.
Telling an intern to care about code quality might actually cause an intern who hasn't been caring about code quality to care a little bit more. But it isn't going to help the intern understand the intended purpose of the software.
I'm not making a semantic argument, I'm making a practical one.
> prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better
Ok, but why would you assume that would remain true? There's no reason it should.
As AI starts training on code made by AI, you're going to get feedback loops as more and more of the training data is going to be structured alike and the older handwritten code starts going stale.
If you're not writing the code and you don't care about the structure, why would you ever need to learn any of the jargon? You'd just copy and paste prompts out of GitHub until it works, or just say "hey Alexa, make me an app like this other app".
Why do you bother with all this discussion? Like, I get it the first x times for some low x, it's fun to have the discussion. But after a while, aren't you just tired of the people who keep pushing back? You are right, they are wrong. It's obvious to anyone who has put the effort in.
It's also useful for figuring out what I think and how best to express that. Sometimes I get really great replies too - I compared ethical LLM objections to veganism today on Lobste.rs and got a superb reply explaining why the comparison doesn't hold: https://lobste.rs/s/cmsfbu/don_t_fall_into_anti_ai_hype#c_oc...
Yes and no. Knowing the terminology is a shortcut to making the LLM use the correct part of its "brain".
Like when working with video, if you use "timecode" instead of "timestamp", it'll use the video production part of the vector memory more. Video production people always talk about "timecodes", not "timestamps".
You can also explain the idea of red/green testing the long way without mentioning any of the keywords. It might work, but just knowing you can say "use red/green testing" is a magic shortcut to the correct result.
Thus: working with LLMs is a skill, but also an ever-changing skill.
Why can't both be true at the same time? Maybe their problems are more complex than yours. Why do you assume it's a skill issue and ignore the contextual variables?
On the rare occasions that I can convince them to share the details of the problems they are tackling and the exact prompts they are using it becomes very clear that they haven't learned how to use the tools yet.
I'm kind of curious about the things you're seeing since I find the best way is to have them come up with a plan for the work they're about to do and then make sure they actually finish it because they like to skip stuff if it requires too much effort.
I mean, I just think of them like a dog that'll get distracted and go off doing some other random thing if you don't supervise them enough and you certainly don't want to trust them to guard your sandwich.
Repairable laptops don't reduce e-waste. You replace the mainboard and then what? You have a spare mainboard that sits there collecting dust. The best way to prevent e-waste is to build durable laptops that last a lifetime. Like Dell, HP and Lenovo have been doing for years (while also being very repairable at the same time).
We have open source documentation and CAD around the Mainboards to enable people to reuse them as single board computers or mini PCs after upgrading them out of their laptops. Even if the original owner of the Mainboard has no use for that, the functionality means it has resale value for others to use, reducing waste.
Nice for experimentation, but if you want a daily driver that lasts for years: Dell Latitude (now Dell Pro), HP EliteBook or Lenovo ThinkPad. These are laptops literally built to last, and they will last a decade with ease. Higher segments are of course better than lower segments, but in general they are very, very good if you stay away from the lowest tier.
Agreed. Trackpads on Windows are very good (approaching Mac quality), but on Linux it's hit and (mostly) miss. GNOME gestures are borderline unusable: sometimes GNOME forgets how many fingers I'm using and every single-finger mouse movement suddenly becomes a gesture; I have to retry the workspace-switch gesture because the first two attempts fail; it gets worse with more windows open; there's no back-swipe gesture in Chrome; and so on. Basic stuff that is annoying in everyday use. Flawless mouse/touchpad support is not too much to ask.