I only see two outcomes for this problem: an internet of verified identities (start by uploading your ID card), or a paid internet, where it doesn't matter who you are, but since you're going to pay for that email or that Reddit account, the probability that it's AI spam is greatly reduced.
I want cool cryptography where I can, e.g. verify where I'm writing from and what my age is without giving away any other information.
Or if I want, I can verify that I'm myself, and eschew anonymity, and certain platforms should only accept contributions from people who don't hide their identity.
People in the town square only see my face, they do not automatically have my name, birth date and ID available unless I give it to them or they go to lengths obtaining those (il)legally.
Anonymity is important for many things. But on the flip side it's responsible for many of the issues with the internet today, because it makes moderation pretty much impossible (anyone can always just create a new account).
What we're missing is a way to have cryptographically secure pseudonymity: you log in to a website, you don't give any information whatsoever, but you cannot make two different accounts.
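As a minimal sketch of what "one account per site, no other information" could mean mechanically: derive each site's pseudonym deterministically from a single user-held secret. All names here are hypothetical, and a real scheme would need a trusted issuer (plus blind signatures or zero-knowledge proofs, so the issuer can't link pseudonyms either) to stop one person from holding many secrets.

```python
import hashlib
import hmac

def site_pseudonym(master_secret: bytes, site: str) -> str:
    """Derive a stable per-site pseudonym from one master secret.

    The same (secret, site) pair always yields the same pseudonym, so a user
    cannot register twice on one site, but pseudonyms for different sites
    cannot be linked to each other without knowing the secret.
    """
    return hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()

# Hypothetical: a secret issued once per person by some trusted authority.
secret = b"issued-once-per-citizen"

forum_id = site_pseudonym(secret, "example-forum.org")
shop_id = site_pseudonym(secret, "example-shop.com")

assert forum_id == site_pseudonym(secret, "example-forum.org")  # stable: one account max
assert forum_id != shop_id                                      # unlinkable across sites
```

The hard part this sketch skips is exactly the second sentence above: binding "one secret" to "one human" without revealing which human.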
Most likely because your second sentence is impossible in one way or another.
Even if it's some kind of government-issued key, governments can't be trusted not to create imaginary people and hand them out to companies like Palantir for large-scale population manipulation.
I can imagine a government creating a moderate number of fake profiles for use by police and intelligence services, and honestly I'm fine with that. But creating a ghost population for propaganda purposes is entirely different, and if you live in a country where you can't trust your government not to do something that bad, you're already screwed.
In any case, it is still better than the status quo where even foreign authoritarian states can do that in countries where the local government wouldn't.
Do you propose to only let people from a whitelist of countries use the internet? Because many countries would have no qualms giving their troll farms a bunch of fake electronic IDs.
It wouldn't be the internet as a whole, and instead be done at the individual service level (potentially with big platforms being regulated in what they accept).
And indeed, it is to be expected that some countries will be banned from most of the internet, or at least get a read-only version of it, because their digital credentials aren't deemed trustworthy enough. Not unlike how the travel visa system works nowadays.
“A zero-knowledge rollup (zk-rollup) is a layer-2 scaling solution that moves computation and state off-chain into off-chain networks while storing transaction data on-chain on a layer-1 network (for example, Ethereum). State changes are computed off-chain and are then proven as valid on-chain using zero-knowledge proofs.”
It's kind of bizarre that Zoom is still bothering to keep the lights on at Keybase when it's been completely fossilized for six years now. The writing is so obviously on the wall that nobody should be relying on it for anything, and yet they just won't let it die.
It's not fossilized, it's just that no one uses it. Put hot chicks on there or make it mandatory for logging into Slack and suddenly everyone will be using keybase.io, and honestly I think web of trust is a good idea and if a webapp can make it seem easy or intuitive then I'm all for it.
We're scratching our heads wondering why there's no forward motion when it's simply that no one is pushing it.
They haven't added or really changed anything since the acquisition AFAICT, it's just trucking along exactly as it was the day Zoom bought them out. Twitter account proofs were broken by the API changes years ago and nobody is at the wheel to fix or even just deprecate them.
Switzerland just voted recently to officially implement Selective Disclosure JWT, which does exactly all that. Social network registration can ask "are you 18?" and run with that - and only that. Or the club entrance. Or whatever, because it's all controlled by yourself in your app.
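The mechanism behind selective disclosure can be sketched without any JWT machinery: the issuer signs only salted hashes of the claims, and the holder reveals just the one disclosure the verifier asks for. This is a simplified illustration of the SD-JWT idea, not the spec; the signature over the digests is omitted and all data is made up.

```python
import hashlib
import json
import secrets

def make_disclosure(claim: str, value) -> tuple[str, str]:
    """Return (disclosure, digest). The issuer signs only the digest."""
    salt = secrets.token_hex(16)  # fresh salt so digests can't be brute-forced
    disclosure = json.dumps([salt, claim, value])
    digest = hashlib.sha256(disclosure.encode()).hexdigest()
    return disclosure, digest

# Issuer: the credential carries a digest for every claim.
disclosures = {}
digests = []
for claim, value in [("name", "Alice"), ("birth_date", "1990-01-01"), ("age_over_18", True)]:
    d, h = make_disclosure(claim, value)
    disclosures[claim] = d
    digests.append(h)

# Holder: show the bouncer only the age_over_18 disclosure, nothing else.
shown = disclosures["age_over_18"]

# Verifier: the disclosure hashes to a digest the issuer vouched for;
# name and birth date stay hidden behind their salted hashes.
assert hashlib.sha256(shown.encode()).hexdigest() in digests
_, claim, value = json.loads(shown)
assert (claim, value) == ("age_over_18", True)
```

The salt is what makes the undisclosed claims safe: without it, a verifier could hash guesses like `["birth_date", "1990-01-01"]` and check them against the credential.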
That seems like a good idea. The question is how the JWT is generated. A standard one would be more akin to a traditional crypto keypair. That is a "signal" key insomuch as it tells us who controls an account. It can't tell us the owner is the controller and that is the current weakness of crypto right now. To know the owner, we need another type of keypair to go alongside the traditional kind. That would be a "tone key" and is generated by a refreshing seed derived from the entropy of long-running, unfakeable conversations. The same way a friend might recognize us as being ourselves.
But you don't need to prove to everyone else that you are yourself, do you? You are only asked whether you're 18; the bouncer doesn't care about your name. So you can still hold someone else's phone (just like you could hold someone else's ID last summer) and fake their answer.
A paid option doesn't really deter this behavior, it encourages it - a botter will see a price tag on a "real" account (see what happened to Twitter's blue checkmark sub) and go "oh goody, I can pay for people to think I'm real".
If you make the price high enough sure, but I'm unsure you can find the right price to simultaneously 1) deter bot traffic and 2) be appealing to actual users.
in other words, it just becomes the cost of doing business.
the individual user is now priced out and cannot speak candidly and anonymously, while large, wealthy orgs simply price that into their market-capture and consensus-building techniques
I'm trying to imagine this new paid app from different angles and in different versions, i.e. a new Reddit... Pay to be in there, get paid for being in there, only humans can be in there, ads pay for humans being there, humans use some government online ID system, karma systems improve so that only humans are rewarded, Voight-Kampff captchas, humans mail the app their DNA to verify their identity, humans log in at a 24/7 street login post (think phone booths)... I just don't see any good, unbreakable, viable and/or sustainable way. We just need to get used to coexisting with bots everywhere while we adjust our expectations and social codes. Fast forward until AI is massively on the streets and indistinguishable from us physically (or very distinct and fascinating), and all supposing that we can keep them under control...
Dead internet is the prequel to dead world, let's seize the opportunity to learn how to coexist with synthetics and develop the code that will make life with a higher intelligence species possible on Earth. And remember, we humans vary widely, and just like there are people happy to share LinkedIn slop today, there will be humans gladly living surrounded exclusively by overpowering synthetics. So lower your expectations for universal solutions and focus on niche.
I see a simpler outcome, smaller communities where you can verify humans are human. I've already started doing this, and mostly with people that already live in my community.
The corporate internet was never good to begin with, it was just forced on the masses.
If by worked you mean "worked so well they replaced all the big actors" then sure, nothing has worked.
But plenty has worked on a smaller scale. Raph Levien's Advogato worked fine.
There's also a reason most new social networks start up as invite only - it works great for cutting down on spam accounts. But once they pivot to prioritizing growth at all costs, it goes out the window.
PGP is niche. This would be far more mainstream. If you applied it to HN I could probably verify > 50 people already. For PGP I wouldn't know anybody...
Someone, somewhere, is salivating at the idea of combining both ideas: a paid-for digital ID service that you can use as authentication for the web.
Actually, now that I think about it, social media platforms already started this with the paid blue badge for verification, which is also a monthly subscription. But it's for their respective platform only, not universal.
Isn’t this what Worldcoin is? Definitely not a fan of the project, but I think the general goal is to get people to verify they are human and then somehow “waves hands blockchain” that verification can be carried with them around the internet.
Would that work, though? Unless it checks your pulse every 30 minutes, I don't see how that would make things better. Bots would use stolen IDs. It would probably only contain the problem at a smaller scale.
There's definitely a price where it doesn't scale and that price is almost certainly lower than what people would be willing to pay once for themselves.
It would have to integrate with some kind of official government ID, so that there can be extremely serious criminal penalties for ID theft. But that's something for the next republic, because the current one's justice system is unlikely to be up to the task.
Neither of those solves it, just tries to conserve the status quo.
The issue, as I understand it, is literally a new Eternal November, just that instead of “noobs” there are “clankers” this time.
Personally, I don’t give a flying fuck about things like gender, organs (like skin or genitalia) or the absence thereof, or anything alike when someone posts something online, unless the posted content is strongly related to one of those topics. Ideas matter no matter who or what produces them. Species fits into the same aspects-I-don’t-care-about list just fine - on the Internet nobody knows^W cares you’re a dog. Or a bunch of matrices in a trench coat. As long as you behave in a socially appropriate way.
The problem with bots is that they’re not just noobs - unlike us meatbags, they don’t just do wrong and stupid things, they can’t possibly learn to stop (because models are static). Solving that, I think, is the true solution, the one that brings the Internet back to life. Anything else seems to just address symptoms, or mere correlates of them.
(Yeah, I’m leaning towards technooptimist and transhumanist views - I was raised in a culture that had a lot of those, was sold a dream of progress that transcends worlds, and haven’t found a reason to renounce it. Your mileage may vary.)
There, sadly, needs to be some gatekeeping and then it can work.
For example, I've been a member for years of a petrolhead forum where it works like that: a fancy car brand, with lots of "tifosi" (and you don't necessarily want all these would-be owners on the forum). To be part of the forum you must be introduced by other members who have met you in real life and who confirm that you did show up with a car of that brand.
If you're not a "confirmed owner", you can only access the forum in read-only mode.
It's not 100% foolproof but it does greatly raise the bar.
It's international too: people do travel, and they organize meetups / see each other at cars and coffee, etc.
Or take a real extreme, maybe the most expensive social network: the Bloomberg terminal. People/companies paying $30K or so per seat each year probably aren't going to let employees hook an LLM up to chat for them and risk screwing their reputation. Although, I take it, you never know.
It is the way it is but gatekeeping does exist and it does work.
Yes but I think bots can be very good, and many people have legitimate online-only relationships. It gets hairy quickly, with real users getting culled and bots slipping through.
Also, if the bots are smart, they'll add real people too and take them down with them.
Yeah, that’s the trade-off of this approach. Lobste.rs already uses it:
https://lobste.rs/about#invitations
The comments are considerably better. I’m not even a member but get more out of reading those comments than hn, and I’ve worked at multiple YC’s. This place is not what it used to be.
I'm assuming there's tracking on the invites. So a recursive kick on X and all who X invited would still do the trick. If an IP address appears more than 5 times in an invite tree, ban the /24 or ASN if not from a friendly country for 10 minutes or other reasonable timeframe.
Getting unique IPs in any country you want is trivial for anyone but people building toy bots.
How far up the tree do you kick? Going too far up means malicious people can "sabotage" by botting to get a huge swath of legitimate users banned.
Going too shallow means I just need to create N+1 degrees of separation between myself and my bot accounts.
Inviting people who invited bots could also hurt your "social credit" score in various ways.
Your tree could for instance be pruned - you can still invite people, but the people you invited can no longer invite people.
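The two moderation moves described here - the recursive kick and the softer prune - are easy to express over an invite tree. A toy sketch with made-up account names:

```python
# invited_by: child -> parent; None marks a founding account.
invited_by = {"root": None, "a": "root", "b": "a", "bot1": "b", "bot2": "bot1"}

def invite_subtree(user: str) -> set[str]:
    """Everyone reachable through `user`'s invites, including `user` itself."""
    out = {user}
    frontier = [user]
    while frontier:
        cur = frontier.pop()
        for child, parent in invited_by.items():
            if parent == cur and child not in out:
                out.add(child)
                frontier.append(child)
    return out

# Recursive kick: ban a bad account and everyone it transitively invited...
banned = invite_subtree("b")
assert banned == {"b", "bot1", "bot2"}

# ...while the inviter one level up is only pruned: "a" keeps their account
# but loses the right to invite, limiting the sabotage blast radius.
pruned = {invited_by["b"]}
assert "a" in pruned and "a" not in banned
```

The pruning variant is what caps the sabotage problem above: a bot wave costs its subtree plus one level of invite rights, not the whole ancestor chain.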
There are not a lot of sites which have tried this and failed. Those which have tried to be even a little bit clever about it, have succeeded pretty well (Advogato was a really early example).
What there have been, are sites which rejected such restrictions after a while, because they would rather have a big number to show to investors than real people. Many have even run the fake accounts themselves (e.g. Reddit).
>Where you need an invite to comment also solves the problem. You need to know a real human to get access.
Bittorrent trackers, as absolutely retarded as they are, have performed this experiment for us, and the lesson we're supposed to learn is that this does not work. Someone, somewhere, eventually has an incentive to invite the wrong sort, which because of the social-network graph math means "soon". Once that happens, that bot will invite 10 trillion other bots.
Absolutely. If anything, private torrent trackers and NZB indexers are proof that it works overwhelmingly well.
The few I'm part of all have a real community (like in the net of old), civil conversation, and verified, quality materials being shared. Almost everybody behaves and doesn't abuse the invite system, because nobody wants to lose their access to such a wonderful oasis among the slop web. It's a great motivator to stay decent and follow the rules. When things go bad, it's usually not because of malice, but because someone got their account stolen. Prune the invitee tree and things are mostly under control again.
Entire trees of invitees, going back months and years, are pruned. Mercilessly, indiscriminately, and self-servingly for the few people privileged enough that they are above suspicion. And if you're unlucky to be on the wrong side of it, there's nothing like an appeals process.
>and doesn't abuse the invite system
That's wild.
>When things go bad, it's usually not because of malice,
I never said it was malice. It's because the system itself is pathologically flawed and there's no way to make it work.
Honestly the $10 barrier to SomethingAwful back in the day (and I guess now since it’s still around) definitely made a huge difference. I hate the idea of subscribing to a site like HN or Reddit… but one time $10 to post? I’d accept that if it meant less bots.
Odds are it would harm real discussions more than it would harm bot spam.
The bots exist for a reason, usually to covertly advertise a product, and by themselves already cost money to run. Someone looking to astroturf their AI B2B SaaS would probably be more willing to pay $10 to post than a random user from a less wealthy country who just wants to leave a comment on an interesting discussion.
I would probably not pay $10 to post on HN, but many spammers who expect some kind of tangible return would pay that, so the fee just makes the problem worse.
The spammers wouldn't pay it once though - the idea is that it's a good way to scale moderation. Each time an admin needs to ban a user there is a $10 subsidy supporting that action - and if the bots come back then they get to pay $10 to be banned again.
Assuming the money isn't wasted and is actually used to fund moderation, $10 is probably comfortably above the cost to detect and ban most malicious users.
There are large swaths of spammers that indeed would not pay it. On the other hand, there are plenty of NGOs that would pay it without a second thought to promote specific topics and dogpile on others. Those are the movements I would expect AI to take over, if it hasn't already. AI does not sleep; humans do. AI won't miss the comments that groups believe need to be amplified or squelched.
Yeah, I love HN, but I wouldn't pay, and I know many if not most other people wouldn't either. It would increase quality for a while, for sure, but what happens a year or two down the road? It would kill the user count, reduce comments, and become less valuable over time.
reminds me of Bill Gates in the 90s when asked about email spam. He said it would make sense to make an email cost like 1 cent so the spammers can't spam as much but this didn't sit right with the mindset of the people at the time.
Also, while real people probably would not be willing to pay to E-mail, spammers who are making money would pay and consider it a cost of doing business. So the fee is having the opposite of its intended effect.
I don't think the current firehose model of spam would be sustainable anymore, though. Those spammers send millions of mails a day. Even with a 1-cent cost, they'd have to be much more selective about their address lists, given the low success rate. It may not solve the problem, but I'm almost sure it would help a little. It would also be an additional qualitative barrier for crime-linked spam such as phishing mails, because they'd have to find a non-traceable way of paying, which is not trivial and always carries a slight risk of being identified anyway.
Hashcash was a proof-of-work system that would have put a computational tax on email. I don't know what kept it from getting more traction other than simple chicken-and-egg network effects, but it's a good idea, and worth resurrecting.
TLDR: Mail storage is the sender's responsibility. The message isn't copied to the receiver. All the receiver needs is a brief notification that a message is available.
Sounds like a horrible system where you retain many of the problems of email (you still need to deliver notifications) with new surveillance, persistence, and mutability problems layered on top.
We need something else: an "extreme" (~$1) fine that anyone can claim from any sender who bothered them, no questions asked. Spammers would stop overnight. This would work for phone spam as well.
I read about an idea for an incentive/check system like that before. Something like: make the cost 10c instead of 1c, but implement a system where recipients can mark mails as confirmed "wanted" mail, upon which the sender would be reimbursed 9c. Increasing the cost for unsolicited mails while keeping the cost low for well-behaved newsletters.
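The arithmetic of that deposit-and-refund scheme fits in a few lines. This is a toy model of the incentive, with illustrative names and amounts (10¢ escrowed per mail, 9¢ refunded when the recipient marks it wanted), not any real payment system:

```python
SEND_DEPOSIT = 10  # cents escrowed on every send
REFUND = 9         # cents returned when the recipient accepts the mail

class Mailer:
    """Tracks a sender's balance under the deposit/refund scheme."""

    def __init__(self, balance: int):
        self.balance = balance
        self.escrow = {}  # message id -> deposit currently held

    def send(self, msg_id: str) -> None:
        self.balance -= SEND_DEPOSIT
        self.escrow[msg_id] = SEND_DEPOSIT

    def recipient_verdict(self, msg_id: str, wanted: bool) -> None:
        self.escrow.pop(msg_id)
        if wanted:
            self.balance += REFUND  # wanted mail nets out to 1 cent
        # unwanted mail forfeits the full deposit

m = Mailer(balance=100)
m.send("newsletter-42")
m.recipient_verdict("newsletter-42", wanted=True)   # costs 1 cent
m.send("spam-1")
m.recipient_verdict("spam-1", wanted=False)          # costs 10 cents
assert m.balance == 89
```

The point of the asymmetry is that unsolicited mail costs ten times what welcome mail does, while a well-behaved newsletter stays nearly free.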
it's also something that was in my mind when i wrote about those two options. I still keep this idea in the back of my head since those days (i'm old enough to remember when gates had this atrocious, yet interesting idea).
payment would need a delay too.
Pay $10 and then wait a week or so for the payment to clear without it being reversed. Hopefully that stops the card stealers from dumping as much as possible before getting booted.
Could we just add complex and varied captchas to the comment and posting forms?
That's not a bad idea, sending mail could simply be an authorization for a $1 or $10 charge. And if the receiver said the message was unwanted, then the charge would go through.
There's just the pesky problem of incentives on the other side of the coin - who gets the $? The spammee? But there would be enshitification issues like:
1. Those who are incentivized to take as big a cut as possible.
2. Those who would put it in their EULA that you must accept their spam and not chargeback or else you lose access to something you value like their services (EULA Ransom... not much different to today "accept our EULA or lose access to what you've already paid for!")
I'm sure there are many other perverse incentives which would creep in.
Maybe some proof-of-work scheme where uploading content requires the uploader to solve a cryptographic puzzle, hence reducing the overall number of posts? The PoW difficulty would have to be tuned so that it isn't too expensive for an individual, but makes mass uploading via bot farms uneconomical.
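A minimal sketch of such a puzzle, in the spirit of Hashcash: find a nonce so the hash of the post falls below a difficulty-dependent target. Minting is expensive, checking is one hash; the function names are illustrative.

```python
import hashlib
from itertools import count

def mint_stamp(payload: str, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 with the payload has
    `difficulty_bits` leading zero bits; ~2**difficulty_bits hashes on average."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check_stamp(payload: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, however expensive minting was."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# The stamp is bound to the exact post text, so it can't be reused for spam.
nonce = mint_stamp("my comment text", difficulty_bits=16)
assert check_stamp("my comment text", nonce, 16)
```

Tuning is then just picking `difficulty_bits`: each extra bit doubles the expected minting cost while leaving verification unchanged.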
Good one, honestly didn't think about it. But visual or other kind of human-accessible captchas can be solved by bots. My suggested PoW would be computational.
You could have easily said this twenty years ago when photoshopped photos were going viral on the early internet. Turns out people are completely fine with ai content and photoshop.
I have not seen or heard of a single person who is excited about AI generated blog posts, or TikToks, or commercials, or images. In fact it’s the opposite, the internet coined the term AI slop, and my non-internet addicted friends hate the fact that chatGPT is killing the environment.
The only people I’ve ever seen champion AI are the few who are excited by the bleeding edge, and the many many peddlers
The most common people just seem to be the elderly who don't care / don't know any better. The same ones who told us never to believe anything from the internet. They seem to be hooked on weird AI jesus facebook posts, daily AI generated motivational content, talking to the chatbot in Whatsapp, etc.
There are probably more than 10^17 AI model executions occurring per day. I know in ye olde HN there are many Purists that are Too Good For AI, but the majority of the human race is consuming AI at a blinding rate, and if they really didn't like it, they would stop.
> and if they really didn't like it, they would stop.
I can’t really articulate why, but this doesn’t feel true to me. There are plenty of things humans do especially at scale that we don’t like, or we do that we don’t like others doing, and don’t stop
>The "Moloch problem" or "Moloch trap" refers to a scenario where individual, rational self-interest leads to a collective outcome that is disastrous for everyone. It describes competitive, zero-sum dynamics—often called a "race to the bottom"—where participants sacrifice long-term sustainability for short-term gains, resulting in a loss for all involved.
Hence why we have to keep feeding the orphan crushing machine.
And how much of that consumption is voluntary or willful? I don't want AI slop in my search results or in my forum discussions; it muddies the water with shallow-at-best information, often in excessively verbose ways that help hide the subtler falsehoods it picked up.
Your comment doesn't make sense because the fact that "dead internet" has been coined since then (along with the popularization of "slop" and "hallucination") means there is a line and we have crossed it. Denial doesn't stand up to any scrutiny.
It's too bad we weren't more skeptical about the ways emerging technologies would eventually be used against us. Some warned about it but many (including me) ignored them. Perhaps we could be forgiven for that naivete, but there's no excuse to be ignorant of what's going on now.
I pay for my ISP and the financial institution the money comes from has age verification
Social media, HN and the rest of internet first business can go broke
I don't see anyone out there propping me up directly. Why would I give a crap if some open source hacker or Etsy dealer doesn't have a home next month? Yeah, I don't, because they don't care back in the same way.
Thoughts and prayers everyone else but your effort is clear, not going to be 1984'd into caring for people who clearly don't care back.
Third option: a web-of-trust that allows you to see the vectors required to connect you to a given commenter, and which of your known friends and friends-of-friends has already attested their humanity.
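Finding those "vectors" is a shortest-path search over the trust graph: who attested whom, and what chain connects me to a given commenter. A small sketch with a hypothetical attestation graph:

```python
from collections import deque

# Who has personally attested whom as human (made-up data).
trust = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["stranger42"],
    "bob":   [],
}

def attestation_path(graph: dict, src: str, dst: str):
    """Shortest chain of attestations from src to dst via BFS, or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            path = []
            while cur is not None:   # walk back along recorded predecessors
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in graph.get(cur, []):
            if nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None

assert attestation_path(trust, "me", "stranger42") == ["me", "alice", "carol", "stranger42"]
```

Showing the user that chain ("alice vouched for carol, who vouched for stranger42") is exactly the friends-of-friends attestation display described above; a real system would also weight edges by how much each attester is trusted.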
i've been through a few hype cycles as well, but this one looks just as big as the invention of the internet, at the very very least (IMHO it's much much more than that).
My way of coping with it is to just go with the flow and learn all the new techniques there are to learn, until the machine replaces us all.
random fact: Duke Nukem had this weird configuration bug where maxing sensitivity on the Microsoft SideWinder strafe axis could make you go much faster than normally possible by moving diagonally (the strafe + forward inputs summed into a larger vector). That led to everyone in our cybercafe buying one :)))
sidenote: are we still going to see new languages appear after AI becomes the one that writes the code?
I'd say that for a new language to appear in that new world, it would need to offer new compile-time properties that AI could benefit from. Something like expressing general program properties / invariants that the compiler could check and the AI could iterate on.
looking at the feature parity page, i realized how big the openclaw ecosystem has become. It's completely crazy for such a young project to be able to interface with so many subsystems so fast.
At this rate, it's going to be simply impossible to catch up in just a few months.
And i'm looking forward to none of them.