
I was surprised by the number of Bibles too! I don't think I've ever seen one as litter (not counting those left in hotel rooms), but I've seen other kinds of religious literature like tracts, booklets, and Watchtower magazines.

It'd be an interesting jobs program. Cleaning up neighborhoods can have a lot of beneficial effects, like reducing the amount of new litter. It could even reduce crime. It's also a job that would get people outside and keep them moving, which is probably better for their health than being chained to a desk all day, and it can't be done (even poorly) by a chatbot.

I'm starting to suspect I might be cynical. I was pretty impressed by the "1,000,000 cigarette butts that I removed from the environment", but I couldn't help but think "moved into what?", which brought this (https://youtu.be/3m5qxZm_JqM) to mind:

   [Interviewer:] Into another environment….

   [Senator Collins:] No, no, no. It’s been towed beyond the environment, it’s not in the environment.

   [Interviewer:] Yeah, but from one environment to another environment.

   [Senator Collins:] No, it’s beyond the environment, it’s not in an environment. It has been towed beyond the environment.

   [Interviewer:] Well, what’s out there?

   [Senator Collins:] Nothing’s out there…

Also, I couldn't help but wonder if he was removing trash at a faster rate than it was being added. Picking up litter is certainly a good thing, but we really need to get people to stop creating it in the first place. Even properly disposed of, all that trash is a massive problem, but I'd love to see more effort put into getting people to clean up after themselves. A very long time ago I'd see PSAs with owls imploring us to "Give a hoot" and a fake Indian crying. Was that helpful? Does that kind of thing even exist today? Now that nobody watches TV, are they pushed at kids on TikTok?

It looks like he might keep them in his own local environment for photo documentary / artistic purposes.

He's got to have a decent bit of land to keep it all, which makes it all the more impressive that he found all that trash in his city.

There are good reasons not to trust Signal. The very first line of their privacy & terms page says "Signal is designed to never collect or store any sensitive information", but then they started collecting and permanently storing sensitive user data in the cloud and never updated that page. Much more recently they started collecting and storing message content in the cloud for some users, but they still refuse to update that page. I'm pretty sure it's a big fat dead canary warning users away from Signal. Any service that markets itself to whistleblowers and activists and then outright lies to them about the risks they take when using it can't be trusted for anything.

We could already use social media posts to detect mental illness: by admission, as people talk openly about their diagnoses, but also by analysis of the content/tone/frequency of the posts that don't mention mental illness.

Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can have ads/suggestions/scams pushed at them at specific times, such as when it looks like they're entering a manic phase or when it's likely that their meds are wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.
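
As a purely hypothetical sketch (in TypeScript; the function names and the three-sigma threshold are invented for illustration, not anything a real broker is known to use), even post timestamps alone are enough to flag unusual bursts of activity:

    // Hypothetical sketch: flag days where a user's posting volume spikes
    // far above their own baseline. Invented names, not real broker code.
    function postsPerDay(timestamps: Date[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const t of timestamps) {
        const day = t.toISOString().slice(0, 10); // bucket by YYYY-MM-DD
        counts.set(day, (counts.get(day) ?? 0) + 1);
      }
      return counts;
    }

    // Flag days more than `sigma` standard deviations above the mean.
    // The baseline covers active days only, which is fine for a sketch.
    function flagBursts(timestamps: Date[], sigma = 3): string[] {
      const entries = [...postsPerDay(timestamps).entries()];
      const values = entries.map(([, n]) => n);
      const mean = values.reduce((a, b) => a + b, 0) / values.length;
      const sd = Math.sqrt(
        values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length
      );
      return entries.filter(([, n]) => n > mean + sigma * sd).map(([day]) => day);
    }

A real broker would obviously use something far more sophisticated, but the point stands: the raw material is just timestamps and text we already hand over freely.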


> But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd normally say to somebody in a roleplaying scenario, and it strongly suggests that they weren't playing a game.


That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to suggest they seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right?

So why are you trying to blame the AI here, except that it reinforces your priors about the technology or (I think more likely, given that this is after all HN) about its manufacturer?


> Surely you agree that we would all want the DM of an unwell player to suggest they seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help.

If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. And if the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it.

If an AI does something that a human would be locked up for doing, a human still needs to be locked up.

> So why are you trying to blame the AI here

I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example: if a company creates an AI system to screen job applicants and that AI rejects every resume with what it thinks is a woman's name on it, a human at that company needs to be held accountable for those discriminatory hiring practices. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does.


If a human would go to jail for this, then one or more humans at Google should go to jail for it. "Our AI did it, not us!" should never be allowed as an excuse.

Gemini didn't "know" he wasn't a child when it told him to kill himself or to "stage a mass casualty attack while armed with knives and tactical gear."

There are things you shouldn't encourage people of any age to do. If a human telling him these things would be found liable, then Google should be. If a human would get time behind bars for it, at least one person at Google needs to spend time behind bars for this.


> If a human telling him these things would be found liable, then Google should be.

Sounds like a big if, actually. Can a human be found liable for this? I’d imagine they might be liable for damages in a civil suit, but I’m not even sure about that.


> Can a human be found liable for this?

A father in Georgia was just convicted of second-degree murder, child cruelty, and other charges because he failed to prevent his kid from shooting up his school.


More accurately, it was because the father had multiple warnings that his child was mentally unstable but ignored them, and handed his 14-year-old a semiautomatic rifle even as the boy's mother (who did not live with them) pleaded with the father to lock all the guns and ammo up to prevent the kid from shooting people.

If he had only "failed to prevent his kid from shooting up a school", he wouldn't even have been charged with anything.


Doesn't Google also have multiple warnings, and yet still ignore them?

Google has legal personhood, but as a corporation its ethical responsibilities are much looser than those of an individual, and it's extremely hard to win a criminal case against a corporation even when its agents and representatives act in ways that would be criminal if they happened in a non-corporate context.

The law - in practice - is heavily weighted towards giving corporations a pass for criminal behaviour.

If the behaviour is really egregious and lobbying is light, really bad cases may lead to changes in regulation.

But generally the worst that happens is that a corporation can be sued for harm in a civil suit, and the penalties are purely financial.

You see this over and over in finance. Banks are regularly pulled up for fraud, insider dealing, money laundering, and so on. Individuals - mostly low/mid ranking - sometimes go to jail. But banks as a whole are hardly ever shut down, and the worst offenders almost never make any serious effort to clean up their culture.


When HSBC was caught knowingly laundering money for terrorists, cartels, and drug dealers, all they had to do was apologize and hand the US government a cut of the action. It really seems less like the action of a justice system and more like racketeering. Corporations really need to be reined in, but it's hard to find a politician willing to do it when they're all getting their pockets stuffed with corporate cash.

> as a corporation its ethical responsibilities are much looser than those of an individual

This seems ass-backwards.


ChatGPT's makers think it can identify when someone may not be mentally well. There's no reason to think that Google can't. In fact, I'm pretty sure Google has a list of the mental health issues of just about every person with a Google account in that user's dossier.


> Can a human be found liable for this? I’d imagine they might be liable for damages in a civil suit

It is generally frowned upon (legally) to encourage someone to commit suicide. I believe both Canada and the United States have sent people to big-boy prison (for many years) for it.


Yes, people have gone to prison for it.


Preferably the C-Suite.

I understand the impulse in this direction, but I’m not sure it would serve as much of a disincentive, as there would likely just be a highly paid scapegoat. Why not something more lasting and less difficult to ignore, like compulsory disclosure of the model’s source code (in addition to compensation for the victim(s))? Compulsory disclosure of the source would be a massive disadvantage.

Exactly. That's why they get the big bucks. They're ultimately responsible.

On its own, it sounds more poetic than an invitation or an insult urging someone, directly or not, to kill themselves, in my opinion.

These aren't Gemini's words, they're many people's words in different contexts.

It's a tragedy. Finding someone to blame will be of no help at all.


> It's a tragedy. Finding someone to blame will be of no help at all.

Agreed with the first part, but holding the designers of those products responsible for the deaths they've incited will help make sure they put more safeguards around this (and I'm not talking about additional warnings).


None of what Gemini says is "Gemini's words". It's always just training data and prompt input, remixed and regurgitated.

How about Firefox just not filling its context menu with bullshit bloat and ads for shit nobody asked for (like Google Lens), and making it fully/easily customizable so that most users are happy and power users can add whatever they want?

It's pretty damn easy to make everyone happy.


It literally already is fully customizable. Between userChrome, about:config, and extensions, you can do literally anything you like to your right-click menu in Firefox.
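
For example, here's a minimal sketch of the "extensions" route (TypeScript; it assumes the "menus" permission in manifest.json and webextension-polyfill typings for the `browser` global; the menu id and search URL are placeholders):

    // background.ts: add a custom item to the right-click menu for selected text
    browser.menus.create({
      id: "my-search",                      // placeholder id
      title: "Search '%s' with my engine",  // %s expands to the selected text
      contexts: ["selection"],              // only shown when text is selected
    });

    browser.menus.onClicked.addListener((info) => {
      if (info.menuItemId === "my-search" && info.selectionText) {
        browser.tabs.create({
          // placeholder URL: point it at whatever service you actually want
          url: "https://example.com/search?q=" + encodeURIComponent(info.selectionText),
        });
      }
    });

Hiding the built-in entries you don't want (like Lens) is the userChrome.css/about:config side of things, since extensions can only add items, not remove Firefox's own.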

I'd argue that you shouldn't need third-party add-ons plus modifications to both userChrome and about:config to do it, so it could be easier. A "Customize Context Menu" entry under Edit would be nice and easy for even regular users to discover and take advantage of.

Why is my Edit menu so long? What is this "Customize Context Menu" thing that I never use, or will use at most once a year?

Just kidding, but it does illustrate that there's always a tradeoff with these things. (I would like to have the ability to customize the context menu too, fwiw, though it's not as straightforward as the other customizable bits of UI since the context menu is, well, contextual.)


about:config, where you need a search engine to find all the key strings, does not count as easy in this context. And it's unreasonable to pretend it is.

> making it fully/easily customizable so that most users are happy [...] It's pretty damn easy to make everyone happy.

Considering that it is already fully customizable, yet you are still complaining about it, I don't think so.


> How about Firefox just not filling its context menu with bullshit bloat and ads for shit nobody asked for (like Google Lens) and [...] It's pretty damn easy to make everyone happy

> shit nobody asked for

I use (or have used) most of them. Other people in this thread have said they used all of them at one point or another.

Just because you don't use it doesn't make it "bullshit bloat and ads for shit nobody asked for". That's why you have the option to remove them :)

What's the next complaint?


That's why https://addons.mozilla.org/en-US/firefox/addon/google-lens-s... exists. Google Lens is exactly the kind of thing add-ons are for. Some people might like it, and they should be able to install it, but it doesn't belong in the browser by default.

This is the same mistake they made with Pocket, and I'm guessing it was done for the same reason (money), since they went with a Google product instead of Bing Visual Search or, for that matter, letting users configure which service they'd like to use for image searches. This was pure bloat. It's no different from Windows adding Candy Crush to the desktop by default, where the same argument ("Some people play it, and it can be removed!") does nothing to change what it is: bloat that nobody asked for.


Try cats.
