That's the issue: people just copy and paste code from LLMs thinking "yeah, looks fine to me". It might be a skill issue, but personally it takes me a while to understand the code it's giving me, and even longer to figure out how to actually implement it with all the edge cases that might come up.
Is it important if it's occasionally hallucinating?
It's not like you should blindly throw the code in; you should run it and verify it.
The more common the work you're doing, the less likely it is to hallucinate. Plus, you can ask it to stick to whatever arbitrary coding standards you want so it's more readable to you, and a rewrite to remove a wrong library takes an extra couple of seconds per method/function.
Also, it's not like Stack Overflow or other non-generated resources don't occasionally "hallucinate" too. It's not unusual for the second- or third-highest-voted answers on SO to be followed by the comment "This doesn't work because XYZ".
That’s why you take a quick glance at the answer, then read the comments, then do a deeper analysis. It takes something like 10 seconds, since just about every good answer I find is only one or two paragraphs.
Yeah, I agree. I think the time spent verifying should vary based on the complexity and sensitivity of what you're looking at, but you never really get away from it.
I think my issue with LLMs is aimed more at the people who would never have done the bare minimum verification anyway.
You have to expend mental effort to think through your solutions either way; I guess it’s pick your poison, really.