Hacker News | new | past | comments | ask | show | jobs | submit | j2kun's comments | login

Classic HN comment: ignore the article and respond directly to its title

Well I read the article discussing pypi packages but I think for a lot of people it’s more single use tools. My little apks are ugly and buggy but work for me

They were not. The rule now is that they have to go into a special bag that cannot be opened while school is in session. Before they could be left in a backpack and snuck out or used between classes.

The legislature (of states and the federal government) routinely passes laws explicitly giving the head of state the power to make decisions like this without passing a law. The most recent one in Oregon about schooling was SB 141.

+1, PageRank was taken from academia. They even cited it in their original work. Funny how the origins of these things get forgotten.

Beyond the content, I have to say I love the aesthetic vibe of this website.

The article is full of PR-speak. What is really going on in this law?

Because it was the "hackers" (Musk) that created this situation, and the "hackers" (DOGE staffers) that participated.

This is a bit of a straw man. The harms of AI in OSS are not from people needing accessibility tooling.


I disagree. I've done nothing to argue that the harm isn't real, to downplay it, or to misrepresent it.

I do agree that, at large, the theoretical upsides of accessibility are almost certainly overshadowed by the obvious downsides of AI. At least for now, anyway. Accessibility is a single instance of the general argument that "of course there are major upsides to using AI," and there's a good chance the future only gets brighter.

My point, essentially, is that I think this is (yet another) area of life where you can't solve the problem by saying "don't do it," and enforcing it is cost-prohibitive. Saying "no AI!" isn't going to stop PR spam. It's not going to stop slop code. What is it going to stop (see edit)? "Bad" people won't care, and "good" people (who use or depend on AI) will contribute less.

Thus I think we need to focus on developing robust systems around integrating AI. Certainly I'd love to see people adopt responsible disclosure policies as a starting point.

--

[edit] -- To answer some of my own question, there are obvious legal concerns that frequently come up. I have my opinions, but as in many legal matters, especially around IP, the water is murky, opinions are strongly held at both extremes, and all too often having to fight a legal battle *at all* is immediately a loss regardless of outcome.


> I've done nothing to argue that the harm isn't real, to downplay it, or to misrepresent it.

You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society. I'd say that that is downplaying and misrepresenting the issue. You even go so far as to say

>Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

These aren't balanced arguments taking both sides into consideration. It's a decision that your mindset is the only right one and anyone else is opposing progress.


> are worth the downside of collapsing society.

At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

>It's a decision that your mindset is the only right one and anyone else is opposing progress.

So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?


> At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

IMO you can blame this on ML and the ability to microtarget[1] constituencies with propaganda that's been optimized, workshopped, focus grouped, etc to death.

Proto-AI got us there, LLMs are an accelerator in the same direction.

[1] https://en.wikipedia.org/wiki/Microtargeting


welp, flip another one from the "they definitely could do this and might be" pile to the "they've already been doing this for a long time" pile


Sure. I always said AI was a catalyst. It could have made society build up faster and accelerated progress, definitely.

But as modern society is, it is simply accelerating its low-trust factors and collapsing jobs (even if it can't do them yet), because that's what was already happening. But hey, assets also accelerated upward. For now.

>So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?

Religion is a very interesting factor. I have many thoughts on it, but for now I'll just say that a good 95% of religious devotees utterly fail at following what their relevant scriptures say to do. We can extrapolate the meaning of that in so many ways from there.


>You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society.

No, literally, he didn't.


Yes, I literally quoted it.


You quoted him and then put words into his mouth based on your own strongly held beliefs. Words he neither said nor implied.

It's absolutely not a straw man, because OP and people like OP will be affected by any policy which limits or bans LLMs. Whether or not the policy writer intended it. So he deserves a voice.


He doesn't think others deserve a voice, so why should I consider his?


The fact that you are engaging in this thread shows me you have considered my opinions, even if you reject them. I think that's great, even in the face of being told I advocate for the collapse of civilization and that I want others to shut up and not be heard.

It is a bit insulting, but I get that these issues are important and people feel like the stakes are sky-high: job loss, misallocation of resources, enshittification, increased social stratification, abrogation of personal responsibility, runaway corporate irresponsibility, amplification of bad actors, and just maybe that `p(doom)` is way higher than AI-optimists are willing to consider. Especially as AI makes advances into warfare, justice, and surveillance.

Even if you think AI is great, it's easy to acknowledge that all it may take is zealotry and the rot within politics to turn it into a disaster. You're absolutely right to identify that there are some eerie similarities to the "guns don't kill people, people kill people" line of thinking.

There IS a lot to grapple with. However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity. I also disagree that AI in any form is our salvation and going to elevate humanity to unfathomable heights (or anything close to that).

But, to bring it back to this specific topic, I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.


Sure. I don't necessarily think your opinion is radical. But it's also important to consider biases within oneself, especially when making use of text as a medium where the nuance of body language is lost.

The main thing that put me off on the comment was the outright dismissal of other opinions. That's rarely a recipe for a productive conversation.

>However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity.

I don't think it's unique. It's simply a catalyst. In good times with a system that looks out for its people, AI could do great things and accelerate productivity. It could even create jobs. None of that is out of reach, in theory.

But part of understanding the negative sentiment is understanding that we aren't in that high trust society with systems working for the citizen. So any bouts of productivity will only be used to accelerate that distrust. Looking at the marketing of AI these past few years confirms this. So why would anyone trust it this time?

Rampant layoffs, vague hand waves of "UBI will help" despite no structures in place for that, more than a dozen high-profile kerfuffles that can only be described as grifts that made millions anyway, and persistent lobbying to try and make it illegal to regulate AI. These aren't the actions of people who have the best interests of the public masses in mind. It's modern-day robber barons.

>I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.

I don't have a hardline stance on how organizations handle AI. But from my end, I hear that AI has mostly led to being a stressor on contributors trying to weed out the flood of low-quality submissions. AI or not (again, AI is a catalyst, not the root cause), that's a problem for what's ultimately a volunteer position that requires highly specialized skills.

If the choice comes down to banning AI submissions, restricting submissions altogether with a different system, or burning out talent trying to review all this slop, I don't think most orgs will choose the last.


> These use cases are like blaming MySQL for storing the lat/long of the school.

A storage layer versus a decision making system? What a ridiculous comparison.


Oregon has some decent things going for it. Multnomah county is rolling out Preschool for All and it's wildly popular. I know lots of people who were going to move, but stayed in Oregon just because they got into the early lottery for it.


There’s no way preschool for all is broadly popular.

It soaks the “rich” with an income threshold that isn’t indexed to inflation and kicks in at an income level where preschool is still a major affordability challenge.

And then you pay PFA and don’t get preschool for your kid because we’re still years away from having enough seats for everyone.

So it is preschool for some (multco paying for seats in existing preschool, aka kicking your kid out of their preschool spot) paid for by the broad middle class.

Even Kotek was ragging on it.

2020's 125k/200k thresholds should be today's 150k/250k thresholds. They are not.

https://www.opb.org/article/2025/06/26/kotek-multnomah-count...
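A rough sanity check of that indexing claim, as a sketch. The ~23% cumulative CPI inflation multiplier for 2020 to 2025 is an assumption for illustration, not an official figure:

```python
# Hypothetical check of the threshold-indexing claim above.
# The 1.23 multiplier (~23% cumulative CPI inflation, 2020 -> 2025)
# is an assumed value, not an official statistic.
inflation = 1.23
thresholds_2020 = {"single": 125_000, "joint": 200_000}

# Scale each 2020 threshold and round to the nearest thousand.
adjusted = {k: round(v * inflation, -3) for k, v in thresholds_2020.items()}
print(adjusted)  # roughly 154k single / 246k joint, near the 150k/250k cited
```

Under that assumed multiplier, the adjusted figures land close to the 150k/250k numbers mentioned above.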


This is all a temporary problem. PFA will roll out to everyone, income thresholds can be (and are) renegotiated, and as someone who has a large PFA tax burden, I'm happy to pay for it even if my kids will age out before I get the benefit. I have never met anyone outside of ranting internet commenters who is actually mad about this situation.

Establishing free universal child care as the norm that everyone agrees we have to find a way to provide is the real virtue here. Detractors like you are missing the forest for the trees.

