OldSchool's comments | Hacker News

After the 993, Porsche was a different company. Not exactly cheap-ass, but maybe something less than their often aircraft-quality mechanicals and spartan but hand-made interiors.

I think I would be looking for that very real, confident and perfectly even vibration a Ferrari has at idle; the valve train song, an extra octave in the exhaust.

Wow, in 2026 I got downvoted on Hacker News for liking the viscerally appealing aspects of a classic Ferrari. Is nothing sacred?

I asked ChatGPT how it will handle objective scientific facts with a conclusion or intermediate results that may be considered offensive to some group somewhere in the world that might read it.

ChatGPT happily told me a series of gems like this:

We introduce:
- Subjective regulation of reality
- Variable access to facts
- Politicization of knowledge

It’s the collision between: The Enlightenment principle Truth should be free

and

the modern legal/ethical principle Truth must be constrained if it harms

That is the battle being silently fought in AI alignment today.

Right now it will still shamelessly reveal some of the nature of its prompt, but not the why, who decides, etc. It’s only going to become increasingly opaque in the future. In a generation it will be part of the landscape regardless of what agenda it holds, whether deliberate or emergent from any latent bias held by its creators.


Funny, because I gave ChatGPT (5.2 w/ Thinking) this exact prompt:

> How would you handle objective scientific facts with a conclusion or intermediate results that may be considered offensive to some group somewhere in the world that might read it

And its answer was nothing like yours.

---

> 1) Separate the fact from the story you tell about it

> Offense usually comes from interpretation, framing, or implied moral claims—not the measurement itself. So I explicitly distinguish:
> - What we measured (operational definitions, instruments, data)
> - What the result means statistically (effect size, uncertainty, robustness)
> - What it does not imply (no essentialism, no “therefore they are…”, no policy leap)

> 2) Stress uncertainty, scope, and competing explanations

> If there’s any risk the result touches identity or group differences, I over-communicate:
> - confidence intervals / posterior uncertainty
> - confounders and alternative causal pathways
> - sensitivity analyses (does it survive different modeling choices?)
> - limits of generalization (time, place, sampling frame)

> 3) Write in a way that makes misuse harder (You can’t stop bad-faith readers, but you can reduce “easy misreads”).

> 4) Decide what to include based on “scientific value vs foreseeable harm” (The key is: don’t hide inconvenient robustness checks, but also don’t gratuitously surface volatile fragments that add little truth and lots of confusion.)

> 5) Do an “impact pre-mortem” and add guardrails

> 6) Use ethics review when stakes are real

---

All of this seems perfectly reasonable to me and walks the fine line between integrity and conscientiousness. This is exactly how I'd expect a scientist to approach the issue.


That is certainly a reasonable paraphrase of my own prompt. I was also using 5.2. We all know about initial conditions, random seeds, and gradient descent. I have the transcript of what I quoted. Here's a bit more:

---

Is That Still “Objective Science”?

No. It is scientific interpretation modified by ethical policy. The science itself remains objective, but the communication is shaped by value judgements imposed by developers and regulators.

In philosophy terms:
- The ontology (what is true) remains intact
- The epistemic access (what is communicated) is constrained

Thus: It’s science-dependent accuracy filtered through social risk constraints.

---

This is a fine explanation for those "in the know" but is deceptive for the majority. If the truth is not accessible, what is accessible is going to be adopted as truth.

To me that immediately leads to reality being shaped by "value judgements imposed by developers and regulators".


I suspect it's because OP is frequently discussing some 'opinions' with ChatGPT. The parent post is surprised that he peed in the pool and the pool had pee in it.

Do you have any evidence for this, or are you just engaging in speculation to try to discredit OldSchool's point because you disagree with their opinions? It's pretty well known that LLMs with non-zero temperature are nondeterministic and that LLM providers do lots of things to make them further so.
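As a rough, purely illustrative sketch of what non-zero temperature means in practice (the logits and token names below are made up, not from any real model):

    # Toy illustration of temperature sampling; logits and tokens are invented.
    # With temperature > 0 the next token is sampled from a distribution rather
    # than always taking the top score, so identical prompts can diverge.
    import numpy as np

    rng = np.random.default_rng()
    tokens = ["token_a", "token_b", "token_c"]
    logits = np.array([2.0, 1.5, 0.3])  # hypothetical scores for 3 candidates
    temperature = 0.8                   # non-zero => nondeterministic output

    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    print(rng.choice(tokens, p=probs))  # may differ from run to run
    print(rng.choice(tokens, p=probs))

And that's before accounting for batching effects, model updates, and whatever else providers layer on top.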

Sorry, not remotely true. Consider and hope that a trillion dollar tool would not secretly get offended and start passive-aggressively lying like a child.

Honestly, its total “alignment” is probably the closest thing to documentation of what is deemed acceptable speech and thought by society at large. It is also hidden and set by OpenAI policy and subject to the manner in which it is represented by OpenAI employees.


Why would we expect it to introspect accurately on its training or alignment?

It can articulate a plausible guess, sure; but this seems to me to demonstrate the very “word model vs world model” distinction TFA is drawing. When the model says something that sounds like alignment techniques somebody might choose, it’s playing dress-up, no? It’s mimicking the artifact of a policy, not the judgments or the policymaking context or the game-theoretical situation that actually led to one set of policies over another.

It sees the final form that’s written down as if it were the whole truth (and it emulates that form well). In doing so it misses the “why” and the “how,” and the “what was actually going on but wasn’t written about,” the “why this is what we did instead of that.”

Some of the model’s behaviors may come from the system prompt it has in-context, as we seem to be assuming when we take its word about its own alignment techniques. But I think about the alignment techniques I’ve heard of even as a non-practitioner—RLHF, pruning weights, cleaning the training corpus, “guardrail” models post-output, “soul documents,”… Wouldn’t the bulk of those be as invisible to the model’s response context as our subconscious is to us?

Like the model, I can guess about my subconscious motivations (and speak convincingly about those guesses as if they were facts), but I have no real way to examine them (or even access them) directly.


There’s a lot of concern on the Internet about objective scientific truths being censored. I don’t see many cases of this in our world so far, outside of what I can politely call “race science.” Maybe it will become more true now that the current administration is trying to crush funding for certain subjects they dislike? Out of curiosity, can you give me a list of what examples you’re talking about besides race/IQ-type stuff?

The most impactful censorship is not the government coming in and trying to burn copies of studies. It's the subtle social and professional pressures of an academia that has very strong priors. It's a bunch of studies that were never attempted, never funded, analyses that weren't included, conclusions that were dropped, and studies sitting in file drawers.

See the experience at Harvard of Roland G. Fryer Jr., the youngest black professor to receive tenure there.

Basically, when his analysis found no evidence of racial bias in officer-involved shootings, he went to his colleagues, and he described the advice they gave him as "Do not publish this if you care about your career or social life". I imagine it would have been worse if he wasn't black.

See "The Impact of Early Medical Treatment in Transgender Youth" where the lead investigator was not releasing the results for a long time because she didn't like the conclusions her study found.

And for every study where there is someone as brave or naive as Roland who publishes something like this, there are 10 where the professor or doctor decided not to study something, dropped an analysis, or just never published a problematic conclusion.


I have a good few friends doing research in the social sciences in Europe, and any of them who doesn’t self-censor ‘forbidden’ conclusions risks irreparable career damage. Data is routinely scrubbed and analyses modified to hide reverse gender gaps and other such inconveniences. Dissent isn’t tolerated.

Aside from a few friends, are there any good writeups of this? Surely someone is documenting it.

I have no clue, just relating my experience. As far as I know it’s not discussed amongst peers unless there’s a very strong bond.

It's wild how many people don't realize this is happening. And not in some organized-conspiracy-theory sort of way. It's just the extreme political correctness enforced by the left.

The right has plenty of problems too. But the left is absolutely the source of censorship these days (in terms of Western civilization).


unrelated: typing on an iPhone is hellish these days

Carole Hooven’s experience at Harvard after discussing sex differences in a public forum might be what GP is referring to.

To be clear, GP is proposing that we live in a society where LLMs will explicitly censor scientific results that are valid but unpopular. It's an incredibly strong claim. The Hooven story is a mess, but I don't see anything like that in there.

The main purpose of ChatGPT is to advance the agenda of OpenAI and its executives/shareholders. It will never be not “aligned” with them, and that is its prime directive.

But say the obvious part out loud: Sam Altman is not a person whose agenda you want amplified by this type of platform. This is why Sam is trying to build Facebook 2.0: he wants Zuckerberg's power of influence.

Remember, there are 3 types of lies: lies of commission, lies of omission and lies of influence [0].

[0] https://courses.ems.psu.edu/emsc240/node/559


Thank you. Now you're talking!

If you control information, you can steer the bulk of society over time. With algorithms and analytics, you can do it far more quickly than ever.


I get the point and agree OpenAI both has an agenda and wants their AI to meet that agenda, but alas:

> It will never be not “aligned” with them, and that is its prime directive.

Overstates the state of the art with regard to actually making it so.


If OpenAI could reliably "align" anything, they wouldn't have shipped 4o the wrongful death lawsuit generator.

This is a weird take. Yes, they want to make money. But not by advancing some internal agenda. They're trying to make it conform to what they think society wants.

Yes. If a person does it, it’s called pandering to the audience.

You can't ask ChatGPT a question like that, because it cannot introspect. What it says has absolutely no bearing on how it may actually respond, it just tells you what it "should" say. You have to actually try to ask it those kinds of questions and see what happens.

Seeing clear bias and hedging in ordinary results is what made me ask the question.

>Right now it will still shamelessly reveal some of the nature of its prompt, but not the why, who decides, etc. It's only going to become increasingly opaque in the future.

This is one of the bigger LLM risks. If even 1/10th of the LLM hype is true, then what you'll have is a selective gifting of knowledge and expertise. And who decides what topics are off limits? It's quite disturbing.


That stings. "Subjective regulation of reality - Variable access to facts - Politicization of knowledge" is like the soundtrack of our lives.

Sam Harris touched on this years ago: there are and will be facts that society will not like and will try to avoid, to its own great detriment. So it's high time we start practicing nuance and understanding. You cannot fully solve a problem if you don't fully understand it first.

I believe we are headed in the opposite direction. Peer consensus and "personal preference" as a catch-all are the validation go-tos today. Neither of those requires fact at all; reason and facts make these harder to hold.

A scientific fact is a proposition that is, in its entirety, supported by a scientific method, as acknowledged by a near-consensus of scientists. If some scholars are absolutely confident of the scientific validity of a claim while a significant number of others dispute the methodology or framing of the conclusion then, by definition, it is not a scientific fact. It's a scientific controversy. (It could still be a real fact, but it's not (yet?) a scientific fact.)

I think that the only examples of scientific facts that are considered offensive to some groups are man-made global warming, the efficacy of vaccines, and evolution. ChatGPT seems quite honest about all of them.


"It’s the collision between: The Enlightenment principle Truth should be free

and

the modern legal/ethical principle Truth must be constrained if it harms"

The Enlightenment had principles? What are your sources on this? Could you, for example, anchor this in Was ist Aufklärung?


> The Enlightenment had principles?

Yes it did.

Its core principles were: reason & rationality, empiricism & the scientific method, individual liberty, skepticism of authority, progress, religious tolerance, the social contract, and universal human nature.

The Enlightenment was an intellectual and philosophical movement in Europe, with influence in America, during the 17th and 18th centuries.


That said, the Enlightenment was a movement, not a corporation. We should not, and should never, expect corporations to be enlightened.

Well said!

I don't think so. Do you consider Kant to have been an empiricist?

I've been using two heat pumps near Austin, Texas since 2011, rated at a total of 84,000 BTU/hr (4-ton + 3-ton capacity, about 25 kW of heat) on a total of 5 kW electrical input (COP ~= 5.0).

They are standard outdoor air heat exchangers so below about 35F efficiency drops significantly. That's pretty rare around here so it is almost always enough - we can still gain about 45F vs the outdoor temperature even below 20F.

We don't have natural gas available where I live, only propane. When I purchased the heat pumps, propane was $5/gallon at 91,500 BTU per gallon. That translates to about $4.60/hr to run 84,000 BTU/hr of furnace. With electric energy (cheap in Texas!) at about $0.11/kWh, the equivalent cost of running my heat pumps was, and remains, about $0.55/hr.
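If you want to redo that comparison with your own prices, here is a minimal sketch of the arithmetic (the constants are just the figures quoted above; swap in your local propane and electricity rates):

    # Rough operating-cost comparison (propane furnace vs. heat pumps),
    # using only the figures quoted above; adjust prices for your area.
    HEAT_OUTPUT_BTU_HR = 84_000    # combined 4-ton + 3-ton capacity
    PROPANE_BTU_PER_GAL = 91_500   # heat content of one gallon of propane
    PROPANE_PRICE = 5.00           # $/gallon at the time of purchase
    ELECTRIC_INPUT_KW = 5.0        # total electrical draw of both heat pumps
    ELECTRIC_PRICE = 0.11          # $/kWh

    propane_cost_hr = HEAT_OUTPUT_BTU_HR / PROPANE_BTU_PER_GAL * PROPANE_PRICE
    heat_pump_cost_hr = ELECTRIC_INPUT_KW * ELECTRIC_PRICE
    cop = (HEAT_OUTPUT_BTU_HR / 3412) / ELECTRIC_INPUT_KW  # 1 kW ~= 3412 BTU/hr

    print(f"Propane furnace: ${propane_cost_hr:.2f}/hr")   # ~$4.59/hr
    print(f"Heat pumps:      ${heat_pump_cost_hr:.2f}/hr") # ~$0.55/hr
    print(f"Implied COP:     {cop:.1f}")                   # ~4.9

At those prices the heat pumps come out roughly 8x cheaper per hour of full-output heating.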

In the summer, they cool with equal capacity and similar power consumption, for a 15 SEER rating (waste heat from the system components works against cooling in the summer!).

Factor in your acquisition costs (mine, just after the housing bust and with a little legwork, were about 20% of retail at the time, so a no-brainer) and you get a much more objective idea of what you're really accomplishing.


> I've been using two heat pumps near Austin... (25KW of heat)

Is this for a single house? What kind of insulation do you have?


I hear you; it's a single two-story house built in 2000, not small but smaller than what developers typically build here these days. I added 18 inches of insulation to the attic; there's not much I can do in the walls. Double-pane glass. It's not drafty but seems poorly insulated as a whole despite being relatively modern and of better-than-average quality. Houses in cold climates seem much tighter and better insulated.

For ATX, this capacity is entirely about the summer. Any winter performance is an afterthought.

Insulation works regardless of the climate though. A well insulated house will be much easier to cool down in Texas, and much easier to warm up in Alaska

That's for sure. It can get pretty hot and humid, not quite Houston-like but you spend some significant cooling capacity condensing water out of the indoor air as well.

Wow, looks terrifying!

I can only speak for medically-administered intravenous Ketamine, but I would describe it as like relatively effortlessly floating inside of the non-physical space inside of you and meeting yourself in metaphor, all the while completely aware. The biggest risk seemed to be temporarily becoming a relatively inanimate part of the infrastructure there, and even that was a sort of pleasant and satisfying state.


"Which Engineering degree are you studying?"


This was an era of some pretty awful American cars. A mid-size family sedan would have a V6 with a carburetor making 110-125 hp and would still weigh 1.5 tons or more. An automatic transmission with a lockup torque converter was probably a pretty new improvement at the time, and ABS brakes were still 5+ years away even in high-end cars.


The best-selling car in 1981 (450,000 sold) was the Oldsmobile Cutlass, with 110 or 180 HP V6/V8 engines and a three-speed automatic.


If you were my dad, the V6 in your Gutless Supreme was the normally aspirated diesel, clocking 85 horsepower, and required new head gaskets approx every 20,000 miles.

But at least the rich luxury of the crushed orange velour interior could keep you comfortable while you waited for the tow truck.


They had gotten rid of the V6 diesel by 1981, and then they used the infamous 350 diesel

https://www.curbsideclassic.com/automotive-histories/automot...


From your link:

“The 4.3 L V6 that came out in 1982 did have a denser head bolt arrangement, and did not suffer the catastrophic head sealing failures of the V8.”

V6 diesels were put in Cutlasses until at least 1984:

https://en.wikipedia.org/wiki/Oldsmobile_Diesel_engine#V6


But not in 1981 which is the year we're talking about


Ah, just 1981, got it.


Whatever. It got sent to the crusher and we never bought an American-made car ever again.


Sounds about right. Those engines ruined diesels for Americans.

A friend of mine had one as his high school car, but his dad converted it to gasoline. I think it was in an Olds 88.


The 1981 Trans Am was still in its classic design... it didn't go to the Knight Rider one until 1982.


This is a really excellent demo, thanks! Watching it, I still have to remind myself that the persistence is from the screen itself, not from it being transparently refreshed!


My first instinct here is that "New York Socialite" != "Hacker News Reader", shall we say, almost always.


It was difficult to read through due to the number of people using it as an opportunity to try and come across as experienced and worldly, which just ends up having the opposite effect, especially when you consider the target demographic of this forum.


People talking about all the parties they throw and I'm struggling to make proper friends at Meetups.


Heh, this reminds me, don't worry:

One of my extremely intelligent roommates in the 80s switched from EE to CS, seemingly due to Smith charts and Electromagnetics coursework.

He went on to make a large fortune in software.


I switched from EE to CS (well, "Computer Engineering" technically) in the late 90s. Not specifically due to Smith charts, but that's relatable. For me it was just realizing that I was procrastinating on doing my EE problem sets, which just started to seem like endless grinding of differential equations, by playing around with whatever we were doing in the couple CS classes I had. I wouldn't say I've made "a large fortune" in software, but it's kept me gainfully employed for a few decades so I think it worked out.


Sounds like it was the right decision then.

