simondotau's comments | Hacker News

There are no reliable statistics on how often human drivers bump into static objects at 1 mph, but I am quite certain it's more often than every 229,000 miles.


Remarkable, since the goal is clearly stated and the language isn’t tricky.

Well, it is a trick question, because it's nonsensical.

The AI is interpreting it in the only way that makes sense: the car is already at the car wash, so should you take a second car to the car wash 50 meters away, or walk?

It should just respond, "This question doesn't make sense; can you rephrase it or add more information?"


What part of this is nonsensical?

“I want to wash my car. The car wash is 50 meters away. Should I walk or drive?”

The goal is clearly stated in the very first sentence. A valid solution is already given in the second sentence. The third sentence only seems tricky because the answer is so painfully obvious that it feels like a trick.


Where I live right now, there is no washing of cars, as it's -5°F. I can want as much as I like. If I went to the car wash, it'd be to say hi to my friend Jimmy, who lives there.

---

My car is a Lambo. I only hand wash it, since it's worth a million USD. The car wash across the street is automated. I won't stick my Lambo in it. I'm going to the car wash to pick up my girlfriend, who works there.

---

I want to wash my car because it's dirty, but my friend is currently borrowing it. He asked me to come get my car as it's at the car wash.

---

The original prompt is intentionally ambiguous. There are multiple correct interpretations.



I disagree. I think it should answer with a simple clarifying question:

Where is the car that you want to wash?


Why would you ask about walking if it wasn't a valid option?

You'd never ask a person this question with the hope of having a real and valid discussion.

Implicit in the question is the assumption that walking could be acceptable.


I think... You are relatively right!

Or maybe the actual AGI answer is `simply`: "Are you trying to trick me?"


Are you legally permitted to drive that vehicle? Is the car actually a 1:10th scale model? Have aliens just invaded earth?

Sorry, but that’s not how conversation works. The person explained the situation and asked a question; it’s entirely reasonable for the respondent to answer based on the facts provided. If every exchange required interrogating every premise, all discussion would collapse into an absurd rabbit hole. It’s like typing “2 + 2 =” into a calculator and, instead of displaying “4”, being asked the clarifying question, “What is your definition of 2?”


And even then, it would point to a heavy skew toward American culture, with the implicit assumption that there must be multiple cars in the household.

How is the question nonsensical? It's a perfectly valid question.

Because validity doesn't depend on meaning. Take the classic example: "What is north of the North Pole?" This is a valid phrasing of a question, but it is meaningless without extra context about spherical geometry. The trick question under discussion is similar, in that its intended meaning is contained entirely in the LLM output.

There's nothing syntactically meaningless about wanting your car washed.

I wasn't under the impression anyone was discussing car washing.

>>>>>>> Still fails the car wash question

>>>>>> Remarkable, since the goal is clearly stated

>>>>> Well it is...non-sensical...the car is already at the car wash

>>>> How is the [car wash] question nonsensical?

>>> Because validity doesn't depend on meaning.

>> There's nothing syntactically meaningless about wanting your car washed.

> I wasn't under the impression anyone was discussing car washing.

Maybe you replied to the wrong post by mistake?


I was not replying to your remark, but rather to a later comment regarding "validity" vs. "sensibility". I don't see where I made any distinction concerning wanting to wash cars.

But now I suppose I'll engage your remark. The question is clearly a trick in any interpretive frame I can imagine. You are treating the prompt as a coherent reality, which it isn't. The query is essentially a logical null set. Any answer the AI provides is merely an attempt to bridge that void with hallucinated context, and it certainly has nothing to do with a genuine desire to wash your car.


I agree that it doesn't break any rules of the English language, but that doesn't make it a valid question in everyday contexts.

Ask a human that question randomly and see how they respond.


Can you explain? I can't see any way in which this question doesn't make sense.

Because to 99.9% of people, it's obvious and fair to assume that the person asking this question knows you need the car in order to wash it. No one could ask this question without knowing that, so it implies some trick layer.

The question isn't nonsense, it just has an answer which is so obvious nobody would ever ask it organically.

I would drive the car to the car wash, because I want to bring the car wash home and it's too heavy for me to carry all the way home.

You grunt with all your might and heave the car wash onto your shoulders. For a moment or two it looks as if you're not going to be able to lift it, but heroically you finally lift it high in the air! Seconds later, however, you topple underneath the weight, and the wash crushes you fatally. Geez! Didn't I tell you not to pick up the car wash?! Isn't the name of this very game "Pick Up The Car Wash and Die"?! Man, you're dense. No big loss to humanity, I tell ya.

    *** You have died ***
 
 
In that game you scored 0 out of a possible 100, in 1 turn, giving you the rank of total and utter loser, squished to death by a damn car wash.

Would you like to RESTART, RESTORE a saved game, give the FULL score for that game or QUIT?


One of the Robotaxi “crashes” was actually a moving bus colliding into a stationary Robotaxi.

That's even more convincing. I wouldn't want to be in the Robotaxi that's getting hit by a bus.

This is the sole reason I don't pay for YouTube. Why would I pay money to make it even more addictive, when what I want is to make it less addictive?

Your framing fits well for the Nexus era and even the earliest Pixel iterations, where Google’s hardware largely functioned as a reference implementation and ecosystem lever, nudging OEMs into making better devices.

However, the current Pixel strategy appears materially (no pun intended) different. Rather than serving as an “early adopter” pathfinder for the broader ecosystem, Pixel increasingly positions itself as the canonical expression of Android—the device on which the “true” Android experience is defined and delivered. Far from nudging OEMs, it's Google desperately reclaiming strategic control over their own platform.

By tightening the integration between hardware, software, and first-party silicon, Google appears to be pursuing the same structural advantages that underpin Apple’s hardware–software symbiosis. The last few generations of Pixel are, effectively, Google becoming more like Apple.


I think "judge AI" would be better if it also had access to a complete legislative record of debate surrounding the establishment of said laws, so that it could perform a "sanity check" whether its determinations are also consistent with the stated intent of lawmakers.

One might imagine a distant future where laws could be dramatically simplified into plain-spoken declarations, to be interpreted by a very advanced (and ideally truly open-source) future LLM. So instead of 18 U.S.C. §§ 2251–2260, the law could be as straightforward as:

"In order to protect children from sexual exploitation and eliminate all incentive for it, no child may be used, depicted, or represented for sexual arousal or gratification. Responsibility extends to those who create, assist, enable, profit from, or access such material for sexual purposes. Sanctions must be proportionate to culpability and sufficient to deter comparable conduct."

...and the AI will fill in the gaps.


...and the people who train the AI will have been entrenched as the de facto rulers of the realm.

No. No, thank you.


Obviously both will exist and compete with each other on the margins. The thing to appreciate is that our physical world is already built like an API for adult humans. Swinging doors, stairs, cupboards, benchtops. If you want a robot to traverse the space and be useful for more than one task, the humanoid form makes sense.

The key question is whether general purpose robots can outcompete on sheer economies of scale alone.


There are a sizeable number of deaths associated with the abuse of Tesla's adaptive cruise control with lane centering (publicly marketed as "Autopilot"). Such features are commonplace on many new cars, and it is unclear whether Tesla is an outlier, because no one is interested in obsessively researching cruise-control abuse among other brands.

There are two deaths associated with FSD.


I doubt you’d wish for it if the price was the limitations and incompleteness of MenuetOS. While hobby operating systems are great for exploring possibilities, you quickly bump into an endless stream of tiny frustrating inconveniences.

    an endless stream of tiny frustrating inconveniences
That sounds suspiciously like OS X, especially the newer releases.

Try being frustrated by a really basic USB stack with no hot-plug support. Or by no Unicode support, no vector fonts, and no multi-user security model. Is that frustrating enough for you? Okay, how about no process isolation or memory protection? Does that sound like macOS to you?

Pretty much. A modern OS is more than just a window and file manager; 90% of the work is in all the edge cases that we take for granted.
