I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to discuss with another human, whether it's an embarrassing question or just things you don't want people to know about you. So trying to add a guardrail for everything that could reflect poorly on a chat agent seems like it would reduce its utility. I think people have trouble talking about suicidal thoughts with real therapists because, AFAIK, therapists have a duty to report self-harm, which makes people less likely to bring it up.

One thing I do think is dangerous with the current LLM models, though, is the sycophancy problem. Like, ChatGPT is constantly saying "Great question!". Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places. I just worry that these things trying to be agreeable lets people walk down paths where a human would be like "ok, no".
> One thing that I think is dangerous with the current LLM models though is the sycophancy problem. Like, all the time chatGPT is like "Great question!"
100%
In ChatGPT I have the Basic Style and Tone set to "Efficient: concise and plain". For Characteristics I've set:
- Warm: less
- Enthusiastic: less
- Headers and lists: default
- Emoji: less
And custom instructions:
> Minimize sycophancy. Do not congratulate or praise me in any response. Minimize, though not eliminate, the use of em dashes and over-use of “marketing speak”.
Yeah, why are basically all models so sycophantic anyway? I'm so done with getting encouragement and appreciation for my choices even when they're clearly wrong.
I tried similar prompts but they didn't really work.
> Like, all the time chatGPT is like "Great question!".
I've been trying out Gemini for a little while, and quickly got annoyed by that pattern. These models are overly trained to agree maximally.
However, in the Gemini web app you can add instructions that get inserted into each conversation. I've told it not to assume my suggestions are good by default, but to offer critique where appropriate.
And so every now and then it adds a critique section, where it states why it thinks what I'm suggesting is a really bad idea or similar.
It's doing a good job overall, and I feel something like this should have been the default behavior.
I assume so, just haven't tried the others yet. Main point was rather that the model can behave differently if the provider wanted, without any additional training.
But having lived through the 80s and 90s and the satanic panic, I gotta say this is dangerous ground to tread. If this had been a forum user, rather than an LLM, who had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.
The only reason we're talking about this is that anything related to AI gets eyeballs right now. Meanwhile, our youth suicide epidemic outweighs other issues that currently get far more attention and money.
Some of us have, and some of us still use it. The functionality and the need for an archive not subject to the same constraints as the wayback machine and other institutions outweighs the blackhat hijinks and bickering between a blogger and the archive.is person/team.
My own ethical calculus is that they shouldn't be ddos attacking, but on the other hand, it's the internet equivalent of a house egging, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.
Regardless, maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?
Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.
My guess is that you’ve not had your house egged, or have some poverty of imagination about it. I grew up in the midwest where this did happen. A house egging would take hours to clean up, and likely cause permanent damage to paint and finishes.
Or perhaps you think it’s no big deal to damage someone else’s property, as long as you only do it a little.
They just wrote a paragraph arguing that evil is easy, convenient, and provides value; that the evilness of others legitimizes their own; that the impossibility of absolute moral purity makes one small evil deed indistinguishable from being evil all the time; dismissed trying to avoid evil as stupid; claimed that only those with unachievable moral purity should be allowed to lecture about ethics; and literally gave a shout-out to hell. I don't think property damage is what we need to worry about. Walk away slowly and do not accept any deals or whatabouts.
I’d be happy if people stopped linking to paywalled sites in the first place. There’s usually a small blog on the same topic, and ironically the small blogs posted here are better quality.
But otherwise, without an alternative, the entire thread becomes useless. We’d have even more RTFA, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.
Many of us aren't, and it's why it's hard to blame the businesses like OpenAI for doing nothing.
The parent's jokey tone is unwarranted, but their overall point is sound. The more blame we assign to inanimate systems like ChatGPT, the more consent we furnish for inhumane surveillance.
Why? Because you can’t guilt trip me into submission I need to be removed? And because I don’t buy media’s blatant abuse of the situation I lack empathy?
False equivalence; a hammer and a chatbot are not the same. Browsers and operating systems are tools designed to facilitate actions, not to give mental health opinions on free-text inquiries. Once it starts writing suicide notes you don’t get to pretend it’s a hammer anymore.
I think the distinction is a bit more subtle than "designed to facilitate actions", which you could argue also applies to an LLM. But a browser is a conduit for ideas from elsewhere or from its user. An LLM... well, kind of breaks the categorization of conduit vs originator, but that's sufficient to show the equivalence is false.
The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil.
I just did a BOTE calculation for my iPhone (A17 Pro chip; GPU rated at 4 Tflops). According to the sales blurbage in TFA, the Cray 1 performed at 80 Mflops. (Yes, that is OBVIOUSLY not comparing apples to Apples -- pun intended). Unless I've dropped a decimal point, my iPhone is (capable of) 50,000 times the floating point speed of a Cray 1.
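A quick sanity check of that ratio, using only the two figures quoted above (4 Tflops for the A17 Pro GPU, 80 Mflops for the Cray 1 per the sales blurb):

```python
# Back-of-the-envelope flops comparison, figures taken from the comment above.
a17_pro_flops = 4e12  # A17 Pro GPU, rated ~4 teraflops
cray1_flops = 80e6    # Cray 1, ~80 megaflops per the sales blurb

ratio = a17_pro_flops / cray1_flops
print(f"iPhone-to-Cray-1 speed ratio: {ratio:,.0f}x")  # 50,000x
```

So no decimal point was dropped: 4e12 / 8e7 is indeed 50,000, caveats about comparing peak GPU throughput to a 1976 vector machine notwithstanding.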
The doco mentions "left" and "right" mouse. I have the ctrl-click already mapped to right mouse on my trackpad. Before I take the plunge, how well does this work with a trackpad on a MB Air?
Such things have been measured and mapped for quite some time.
The grandfather of all modern gravimeters was invented in 1936 by LaCoste and Romberg.