Hacker News | beej71's comments

This just seems like we're asking for it. We have historical examples of polio ripping through the US causing tremendous amounts of damage. We're going to have an unvaccinated populace and someone's going to get exposed.

And yet here we have a whole bunch of white guys tearing up the Constitution.

> The Supreme Court has told Trump to pound sand as often as it's upheld his policies.

Has it? Last I saw, they had overturned nearly 90% of lower-court rulings to rule in Trump's favor, and a huge portion of those were on the shadow docket.

They also said it's fine to gift the justices, just not before they make a ruling.

And they gave the President a lot more immunity than he previously had.

If they're not actually corrupt, they look exactly as if they are.


psunavy03 is lying to you. Trump is very obviously not losing in front of the supreme court 50% of the time.

The thing is, right now we have very little evidence that there is any significant mail-in voting fraud.

But we do have a fair amount of evidence that there is suppression of in-person voting.

So neither of these systems is perfect, but we should go with the one that gives us the most accurate legitimate vote.

Someone else posted a list of ways that in-person voting would be more acceptable, e.g. having a large window to cast ballots. But instead, we see moves in the other direction, trying to restrict the window in which we can cast ballots.

You put a free ID in the hands of every legitimate voter and give them enough time and opportunity to vote, and then I will consider in-person to be on par with mail-in.


> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.

IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.

Curricula have to be modified significantly for this to work.

I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)


> powerful learning accelerator

You got any data on that? Because it's a bold claim that runs counter to all results I've seen so far. For example, this paper[^1] which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...

[^1]: https://doi.org/10.1093/pnasnexus/pgaf316


Only my own two eyes and my own learning experience. The fact is students will use LLMs no matter what you say. So any blanket "it's bad/good" results are not actionable.

But if you told me every student got access to a 1-on-1 tutor, I'd say that was a win (and there are studies to back that up). And that's one thing LLMs can do.

Of course, just asking your tutor to do the work for you is incredibly harmful. And that's something LLMs can do, as well.

Would you like to have someone available 24/7 who can give you a code review? Now you can. Hell yeah, that's beneficial.

How about when you're stuck on a coding problem for 30 minutes and you want a hint? You already did a bunch of hard work and it's time to get unstuck.

LLMs can be great. They can also be horrible. For the last thing I wrote in Rust, I could have used an LLM and learned nothing. It would have taken me a lot less time to get the program written! But that's not what I did. I painstakingly used it to explore all the avenues I did not understand, and I gained a huge amount of knowledge writing my little 350-line program.


I don't think that study supports your assertion.

Parent is saying that AI tools can be useful in structured learning environments (i.e. curriculum and teacher-driven).

The study you linked is talking about unstructured research (i.e. participants decide how to use it and when they're done).


You can no-true-Scotsman it, but that study is a structured task. It's possible to generate an ever-more-structured tutorial, but that's asking ever more from teachers. And to what end? Why should they do that? Where's the data suggesting it's worth the trouble? And cui bono?

Students have had access to modern LLMs for years now, which is plenty of time to spin up and run a study...


To quote the article:

"To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals."

And that is also our goal as instructors.

I agree with that study when using an LLM for search. But there's more to life than search.

The best argument I have to why we should not ban LLMs in school is this: students will use it anyway and they will harm themselves. That is reason enough.

So the question becomes, "What do instructors do with LLMs in school so the LLM's effect is at least neutral?"

And this is where we're still figuring it out. And in my experience, there are things we can do to get there, and then some.


Comparing that study to how any classroom works, from kindergarten through high school, is ridiculous.

What grade school classes have you ever been in where the teacher said "Okay, get to it" and then ignored the class until the task was completed?

I'm not saying it's not a Scotsman: I'm saying you grabbed an orange in your rush to refute apples.


Because if an ICE agent violates your constitutional rights, you have zero recourse. There is no freedom.

If you're not afraid of that, you've got your head in the sand.


In my experience (programmer since 1983), it's massively faster to leverage an LLM and obtain quality code when working with technology that I'm proficient in.

But when I don't have expertise, it's the same speed or even slower. The better I am at something, the faster the LLM coding goes.

I'm still trying to get better at Rust, and I'm past break-even now. So I could use LLMs for a speed boost. But I still hand-write all my code because I'm still gaining expertise. (Here I lean into LLMs in a student capacity, which is different.)

Related to this, I often ask LLMs for code reviews. The number of suggestions it makes that I think are good is inversely proportional to the experience I have with the particular tech used. The ability to discard bad suggestions is valuable.

This is why I think being an excellent dev with the fundamentals is still important—critical, even—when coding with LLMs. If I were still in a hiring role, I'd hire people with good dev skills over people with poor dev skills every time, regardless of how adept they were at prompting.


Lots of things aren't fearsome until they're pointed at you.

> at least the next 5 years

That's not much of a flex. The people who are worried about China taking the lead are looking at velocity and acceleration, not position.


Yes, we did. And that discussion had a most definite conclusion.

