Hacker News | new | past | comments | ask | show | jobs | submit | login
oldfrenchfries's comments

The link is not working, but I found it myself. Great point, thanks for sharing.


This is great, thanks for sharing!


That's a fair point on the title. I used "Yes-Men" as a colloquialism for the "sycophancy" described in the Stanford paper, but "overly affirming" or "sycophantic" is definitely more precise and neutral. I can't edit the title anymore, but I appreciate the catch.


Don’t apologize to these types of people. It will only make your problem worse as now you’re an admitted offender. Ignore them or better yet laugh at them to put their insane ideas back on the margins where they belong.


All good. I thought it was a gendered reference and learned that it isn't. My bad.


New title: "LLMs treat you like a Billionaire; you're not"


There is a striking data visualization showing the breakup advice trend over 15 years on Reddit. You can see the "End relationship" line spike as AI and algorithmic advice take over:

https://www.reddit.com/r/dataisbeautiful/comments/1o87cy4/oc...


More interesting, IMO, is the general trend that started long before LLMs. The fact that "dump them" is the standard answer to any relationship question is a meme by now. The LLMs appear to be doing exactly what one would expect them to be doing based on their training corpus.


"There is more than one fish in the sea" has been relationship advice for centuries. It might be about being dumped, but I've also thought it useful for considering dumping somebody too.


No, that's not it. We're talking about posts like "we had a silly little quarrel about something that would take fifteen minutes to clear up and leave both of us happy if we both just tried to adult a bit," and commenters being adamant that deleting gym and facebooking up and so on is clearly the only choice. Most of said commenters are probably not in any position to give anyone relationship advice.


> The LLMs appear to be doing exactly what one would expect them to be doing based on their training corpus.

That is not how full LLM training works. That is how base model pretraining works.


Yes, yes, I know. But how many people have AI companies hired to do RLHF who actually have the expertise to adjust them away from biases like this? As opposed to paying a dollar per day to a bunch of poor people in Africa?


if things are so bad that you’re posting on reddit then breaking up is usually the best answer.


I see this being said often but I don't understand.

A lot of people posting there are young and may well be in their first relationship. It makes sense for them to ask a question in the community they spend the most time in, which is Reddit.


Most people overshare on reddit and it's completely unrelated to the seriousness of the situation.

It's also a meme that people will ask the dumbest, most trivial interpersonal conflict questions on Reddit that would be easily solved by just talking to the other person. E.g. on r/boardgames, "I don't like to play boardgames but my spouse loves them, what can I do?" or "someone listens to music while playing but I find it distracting, what can I do?" (The obvious answer of "talk to the other person and solve it like grownups" is apparently never considered).

On relationship advice, it often takes the form "my boy/girlfriend said something mean to me, what shall I do?" (it's a meme now that the answer is often "dump them").

If LLMs train on this...


the year is 2015

smart phones took over the world, social networks happened.

Turns out they are the best sterilizer humans ever invented.

I just wrote a blog post: https://blog.est.im/2026/stdin-09


This is the correct take. The advice predated the LLM boom: the models were trained on the 'dump them' advice and proceeded to reinforce it. So why did the relationship advice change so dramatically in the first place? I'd speculatively attribute it to the disinformation campaigns of that period, which were and still are grossly underestimated.


Not sure what sorts of disinformation campaigns you're referring to...

There is something more interesting to consider however; the graph starts to go up in 2013, less than 6 months after the release of Tinder.



Isn't the fact that a person is asking an AI whether to leave their partner in itself an indication that they should?

EDIT: typo


The idea that asking implies a yes is actually a pretty common logical fallacy. In relationship science, we call this "Relational Ambivalence," and it's a completely normal part of any long-term commitment.


>asking an AI whether to leave your partner

is that what they're asking though? because "relationship advice" is pretty vague


That's a good point. If an AI responds to "what should I get my boyfriend for Christmas?" with "You should leave him," that's a very different issue.


How is it an indication? I think people on here don't realize that most people don't think things through as much as (software) engineers do.


In my local(?) community (like in my city, not my industry) there is a saying "if you had to ask for relationship advice, then you probably should break up".

There is some rationale to that. People tend to hold onto relationships that don't lead anywhere for fear of "losing" what they "already have". It's probably a comfort-zone thing. So if someone is desperate enough to ask random strangers online about a relationship, it's usually because of some unresolvable issue that would leave both parties better off if they broke up.


> So if one is desperate enough to ask random strangers online about a relationship

I'd be more inclined to ask random strangers on the internet than close friends...

That said, when my SO and I had a difficult time, we went to a professional. For us it helped a lot. Though as the counselor said, we were one of the few couples that came early enough. Usually she saw couples well past the point of no return.

So yeah, if you don't ask in time, you will probably be breaking up anyway.


I would speculate that, if a couple goes to a professional for help, they have much better chances than asking on a random forum online...


> relationships that don't lead anywhere

Relationships are not transactions that are supposed to "lead somewhere".


You’re being a bit pedantic here; “leading somewhere” is accepted shorthand for a lasting, satisfying relationship that is good for both parties.


Relationships aren't transactional. This isn't a business deal.


Most people engage in romantic relationships because they'd like to find someone to marry and settle down with. Nothing but respect for the people who've thought it through and decided that's not for them, but what's much more common is failing to think it through or worrying it would be awkward/scary/"cringe" to take their relationship goals seriously.

That's what people are pointing to when they talk about relationships not "leading anywhere". If you want to be married in 5-10 years, and you're 2 years into an OK relationship with someone you don't want to marry, it's going to suck to break up with them but you have to do it anyway.


Maybe I'm too much of a hopeless romantic, but in my perspective and experience, when someone is good for you, you'll fight for that relationship regardless of what others say. Conversely, when you're actively asking for, and willing to consider, "leave" advice from someone who isn't a very close friend or a therapist, it's likely you're looking for external validation for what you've already essentially decided.


Wait, other people don’t make decision trees and mind maps and pro/con lists and consult chatbots before making decisions? Are they just flying through life by the seat of their pants? That doesn’t seem like a very solid framework for achieving desired outcomes.


I heard about someone once who could decide whether to buy a new t-shirt in less than 3 months.


No, but it is an indication of brain rot to ask a question seriously and at the same time treat the conclusion as foregone. It is a symptom of our current childlike generations: the moment anything becomes difficult or unpleasant, one should apparently quit. Surely this kind of resiliency is what got humanity so far.


I didn't imply it's a "foregone conclusion", but just said it's an indication - in the sense of increasing the likelihood. Just like a person asking an AI "what does it feel like to bleed out?" could be them researching for a novel, but is nevertheless an indication of a potential serious issue.


Or that people are using AI to write perfectly calibrated ragebait that gets upvoted with a bunch of genuine human clicks.


Is this comment human hallucination? You can clearly see the trend is always going up. It only went down a bit during Covid.


This new Stanford study, published on March 26, 2026, shows that AI models are sycophantic: they affirm the user's position 49% more often than a human would.

The researchers found that when people use AI for relationship advice, they become 25% more convinced they are 'right' and significantly less likely to apologize or repair the connection.


To be fair, the average therapist is also pretty sycophantic. "The worst person you know is being told by their therapist that they did the right thing" is a bit of a meme, but it isn't completely false in my experience.


No, the meme is that the average therapist can be boiled down to "well, what do you think?" or "and how does that make you feel?" (of which ELIZA, the original therapist chatbot, was perhaps an unintentional parody). Even this cartoonish characterization demonstrates that the function of therapists is to get you to question yourself so that you can attempt to reframe and re-evaluate your ways of thinking, in a roughly Socratic fashion.


It was entirely intentional. The Rogerian school of psychotherapy stereotyped by “how does that make you feel” was popular at the time and the most popular ELIZA script used that persona to cleverly redirect focus from the bot’s weaknesses in comprehension.
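For the curious: the Rogerian trick is basically keyword matching plus pronoun reflection. A toy Python sketch of the idea (this is an illustration, nothing like Weizenbaum's actual DOCTOR script; all the rules and phrasings here are made up):

```python
import re

# Pronoun swaps so "my job" gets mirrored back as "your job".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# A couple of illustrative keyword rules; the real script had many more.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

# The famous catch-all when no keyword matches.
FALLBACK = "How does that make you feel?"

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Mirror the user's own words back as a question, or fall back."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return FALLBACK
```

The grammar falls apart on inputs like "my boyfriend said something mean" (it produces "Tell me more about your boyfriend said something mean."), which is exactly the comprehension weakness the therapist persona papers over: a non-committal question always sounds plausible.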

