Hizonner's comments

Lot of unsupported "impossibles" in there.

If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want and it's the world as it should be.

If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.


The legal infrastructure for banking and securities ownership has long had defaults for liability assignment.

For securities, if I own stock outright, the company has to indemnify me if it executes a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker, and getting an MSG in turn requires a lengthy banking relationship and probably showing up in person.

For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.

As far as I know, these liabilities are never waived. Basically, for sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it is backed by total liability on some capitalized party for fraud.

These defaults are probably unknown to most people, even those with large amounts of securities. People simply expect the system to work, because it has been set up this way.

Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.


> Clearly a large number of programmers have a bent [...]

Perhaps programmers have a clear idea of what's given up when you do things your way.

I'm not sure you do.


> Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.

But that is expensive, so my impression is that for non-sizable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to try to outsource as much liability as possible onto end users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted.

b) is how we end up with developer registries and remote attestation. And the sad thing is, it scales well - if device and OS vendors cooperate (like they do today), they can enable "endpoint security" for everyone who seeks to externalize liability.


Yeah, "memory" just means "context pollution", at least for the general chat interface.

You have to trust someone to verify age.

You don't have to trust somebody not to track how the resulting credential is used. And that is what "zero knowledge" means. It means that after you finish the protocol, nobody has learned anything but what they were supposed to learn (in this case, "the person at the other end of this connection is over 18"). If it leaks anything else about the person, it's not zero knowledge. If somebody learns which of the issued credentials was used, it's not zero knowledge. If parties can collude to get information they're not supposed to get, it's not zero knowledge.

It's a technical term of art, not some politician's bullshit. And it isn't complicated to understand.
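To make the unlinkability property concrete, here is a toy sketch of one classic mechanism, a Chaum-style RSA blind signature: the issuer signs the user's credential without ever seeing it, so it cannot later recognize which issued credential is being presented. Everything here is illustrative with textbook-sized parameters, not a real deployment (which would use a modern blind-signature or anonymous-credential scheme):

```python
# Toy Chaum-style RSA blind signature. The issuer signs a credential
# it never sees, so the signature it later encounters in the wild
# cannot be linked back to any issuance session. Textbook-sized
# parameters, for illustration only -- nowhere near secure.

p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 2753         # private exponent: e*d == 1 (mod (p-1)*(q-1))

def blind(m, r):
    # User blinds the token m with a random r coprime to n.
    return (m * pow(r, e, n)) % n

def issuer_sign(blinded):
    # Issuer signs blindly; it learns nothing about m itself.
    return pow(blinded, d, n)

def unblind(s_blinded, r):
    # User strips the blinding factor, leaving a plain signature on m.
    return (s_blinded * pow(r, -1, n)) % n

def verify(m, s):
    return pow(s, e, n) == m % n

token = 1234   # the user's secret credential
r = 2001       # blinding factor, gcd(r, n) == 1
sig = unblind(issuer_sign(blind(token, r)), r)
assert verify(token, sig)
```

Because the issuer only ever sees the blinded value, even full collusion between issuer and verifier cannot match the presented (token, sig) pair to a specific issuance session. That is the standard the term "zero knowledge" implies.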


IPFS doesn't even try to do any kind of anonymity or censorship resistance. In a practical sense it's probably worse than BitTorrent, although neither one of them is up to the task. Actually resilient data distribution is hard, and I don't think there are any systems that have all the needed elements.

... and if you create one, they can, and it's starting to look like they will, outlaw using it, regardless of what you use it for.


I should have said "I2P" instead of "IPFS".

> Government can track all salts for your tokens, site can collect all salts, they can compare notes.

That is not zero knowledge. Given that actual zero-knowledge systems are well understood, the only reason to deploy a system that allows that would be if you planned to abuse it.
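A minimal sketch of why such a scheme fails: if both the issuer and the site record the same stable value (the salt), deanonymization is just a database join. The names and hashes below are entirely made up for illustration:

```python
# Hypothetical logs. The issuer knows salt -> identity; the site
# logs which salts were presented to it. "Comparing notes" is a join.
issuer_log = {"a1f3": "alice@example.com", "9bc2": "bob@example.com"}
site_log = ["9bc2", "a1f3"]  # salts seen by, say, an adult site

deanonymized = [s for s in site_log if s in issuer_log]
identities = [issuer_log[s] for s in deanonymized]
```

Any stable per-token identifier visible to both parties has this property, which is exactly what the blinding in a real zero-knowledge scheme is designed to prevent.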


What is your definition of zero knowledge?


By this definition BBS+ signatures are ZK.

Zero knowledge in such a system requires a minimum of three independent parties. There are quite a few solutions out there; I think the most developed ones are online voting systems, because tracking and deduplication are essential there.

They set the bar for "perfect" impossibly high in order to make it the enemy of the good, and to fight against any progress toward keeping children out of adult spaces.

That being said, it's my personal opinion that I'd love to simply have my device store a token and send it to any site when requested. I'd then like those sites to give me toggles to remove all non-verified content - and therefore my internet experience could be sans-juvenile squeakers.


"Society" isn't demanding anything. A vocal minority of idiots, unfortunately overrepresented among the kind of people who tend to run for office, is demanding things, 95 percent based on stupid delusions and childish prejudices.

You're assuming that that "message" would persuade anybody.

It'd be more likely to make more people do it.


You know what's really cheap and scalable? Not doing such moronic shit at all.

Colorado is cordially invited to eat shit.
