Hacker News

> The answer is that it just takes a lot of people.

The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee. If the scammer has a good enough system, they'll do this one time with one person and then move on to the next one, so now you need to verify that all your verifiers are in fact perfect in their adherence to the rules. Now you need a verification system for your verification system, which will eventually need a verification system^3 for the verification system^2, ad infinitum.



This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.

At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.


> This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.

Worldwide and over history, this behaviour has been observed in elections (gerrymandering), police forces (investigating complaints against themselves), regulatory bodies (Boeing staff helping the FAA decide how airworthy Boeing planes are), academia (who decides what gets into prestigious journals), newspapers (who owns them, who funds them with advertisements, who regulates them), and broadcasts (ditto).

> At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.

JP Morgan and Chase would sue you after the fact if they didn't like it.

Unless the billboard's owners already had a direct relationship with JP Morgan and Chase, they wouldn't have much way to tell in advance. And if they did have such a relationship, they might deny the billboard to perfectly legal adverts that are critical of JP Morgan and Chase and its business interests.

The same applies to web ads; the primary difference is that each ad is bid on in the blink of an eye as the page opens in your browser, which makes it hard to gather evidence.
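That bidding flow can be sketched as a toy sealed-bid auction, run once per page load. This is a simplified illustration, not any real ad exchange's API; the names `Bid` and `run_auction` are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    advertiser: str    # whoever claims to be bidding; unverified in this toy model
    amount_cents: int  # price offered for this single impression

def run_auction(bids: list[Bid]) -> Optional[Bid]:
    """Pick the winning ad for one page load.

    In real exchanges this whole round trip finishes in roughly the
    time the page takes to render, which is why a viewer has little
    chance to capture evidence of which ad ran and who paid for it.
    """
    if not bids:
        return None
    return max(bids, key=lambda b: b.amount_cents)

winner = run_auction([
    Bid("legit-bank", 120),
    Bid("lookalike-scammer", 150),  # nothing here checks identity, only price
])
print(winner.advertiser)  # highest bid wins: "lookalike-scammer"
```

The point of the sketch is that the auction selects on price alone; any identity check has to happen somewhere else, before or after the fact.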


> The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee.

Again, the newspaper model already solves this. Moderation should be highly localized, done by people from the communities whose content they moderate. That maximizes the chance that a moderator's values align with the community's. It's harder for bad actors to hide in small groups, especially when they can be named and shamed by people they see every day. Managers, coworkers, and the community itself are the "verifiers."

Again, this model has worked since the beginning of time and it's 1000x better than what FB has now.



