
> Thank you, Elon - you saved a lot of money and ruined decades of car UX

The motivation behind a single touchscreen (from the horse’s mouth) is the gradual move to full self driving. Given that all Teslas are capable of fully autonomous driving, pending software, it doesn’t make sense to pollute the car interior with controls that will become obsolete during the lifetime of the car, not even accounting for extra cost associated with manufacturing dozens/hundreds of extra parts.

Now if others are copying it without understanding why it’s done that way, it’s on them.



The way I see autonomous driving is through reality, not through fanboyism or 'tech fetishism'.

The current stable driver-assistance systems must be secondary to real driving. They are capable of critical prevention and effective safety nets, and when done properly they reduce risk tenfold. The idea of beta-testing on public roads while creating consumer disinformation is outrageous and must be stopped.

It is beyond belief how soft regulations are toward applications of technology with a direct effect on human lives. My area of expertise is UI/UX, and the simplest logic possible is called an ergonomic prerequisite.

Every system must exist at 100% functionality before any control surface is introduced. Every control surface must be human-oriented. In car UX, the main function is driving: everything that reduces the risk of losing focus on the road is good UX; everything that creates distraction and complication is not only bad UX, it is dangerous.

In-car touch interfaces are exciting for people who hate driving and love their phones. That is the main selling point. They are the result of marketing-driven design, with enormous production savings (from removing physical controls for dedicated functions) as a business motivation.

This will not work long-term. The idea of vehicles as a SaaS platform will not work either. It will be CX hell and people will reject it.


> The idea of beta-testing on public roads while creating consumer disinformation is outrageous and must be stopped.

Everyone driving the FSD Beta knows pretty well what they're doing and what the whole thing means, hence the very strict criteria the drivers must meet to have that feature enabled, so I don't know what disinformation you're talking about.

Calling it outrageous, dangerous, etc. doesn't make it so, and evidently, after many months of FSD Beta, there has yet to be a single reported accident resulting from its use, in a world where every Tesla accident makes the front page of the news. Is it an accident waiting to happen? Well, every time you get into your car, you're entering the accident lottery. It's all about the likelihood of getting into an accident, and so far the numbers are in Tesla's favor.

And how do you imagine a system whose improvements rely heavily on real-world data is meant to obtain it without large-scale participation? Have a hundred employees doing rounds on a test track?

The way I see autonomous driving, through being well informed rather than speaking from outdated intuitions: Tesla is pushing the envelope just enough to make meaningful progress toward full self-driving while not overly putting people at risk.

> It is beyond belief how soft regulations are toward applications of technology with a direct effect on human lives.

Heavily regulating something that's in its infancy is the surest way to kill it. Those who are constantly bitching about why robo-taxis aren't here yet, even with lax regulations, can be certain to keep dreaming of them for the rest of their lives should walls go up too early.

> In-car touch interfaces are exciting for people who hate driving and love their phones.

Not only. I drive an M3 and I love driving it. I don't use my phone, nor do I feel the urge to. I enjoy the lack of clutter, and the beautiful large screen when I need to use it. Every other car I sit in feels like a 60s Russian rocket and stresses me out. Even Tesla's own Model S and Model X feel too cluttered now with the extra driver's dash screen.

> This will not work long-term. The idea of vehicles as a SaaS platform will not work either. It will be CX hell and people will reject it.

Ye, that's what they said about SaaS on the Internet. Who would, in their right mind, give up control of their service and store their data and code at the mercy of a third party? We saw how that turned out.


> Everyone driving the FSD Beta knows pretty well what they're doing and what the whole thing means.

An example by analogy: if I were in the position of creating automation technology, I would implement the proven practices of the aerospace industry, where you have plenty of innovation and an effective regulatory process (with some exceptions in recent years, hello Boeing). Testing autonomous vehicles on public roads is dangerous and unproductive in every way (except for gathering data cheaply).

> And how do you imagine a system whose improvements rely heavily on real-world data is meant to obtain it without large-scale participation? Have a hundred employees doing rounds on a test track?

Exactly. Last time I checked, R&D was a corporate responsibility, not public data exploitation or "free work". People don't get it at all. If you use my driving data, you must pay me or give me some incentive. I work for money, and you are selling an expensive product, so how about half the price?

> Heavily regulating something that's in its infancy is the surest way to kill it. Those who are constantly bitching about why robo-taxis aren't here yet, even with lax regulations, can be certain to keep dreaming of them for the rest of their lives should walls go up too early.

Again, your argument is weak. Example: Texting and Driving statistics: https://www.simplyinsurance.com/texting-and-driving-statisti...

The technology must be produced and tested to death before it is introduced to public roads. Touch interfaces implemented without any concern for ergonomic prerequisites are a proven distraction and risk.

> Ye, that's what they said about SaaS on the Internet. Who would, in their right mind, give up control of their service and store their data and code at the mercy of a third party? We saw how that turned out.

Again with the big and bold SaaS brush; yep, I see how this is turning out. You can check the sales stats of NAS servers in recent years, or the resurrection of the flip phone.

People are stupid, until they are not. Information security and control will be the defining factors for businesses in the near future.

Yes, some people got madly rich on marketing Cloud and AI scams and "user generated content", so what?

This sounds like the Davos idea of the future: you will own nothing and you will be happy.

Sorry, but not at all.

SaaS services have use cases. Moving everything to the SaaS model is clear madness.


> Testing autonomous vehicles on public roads is dangerous and unproductive in every way

Where are the accidents after many months? None. So no, it's not dangerous just because you think it is.

> Exactly. Last time I checked, R&D was a corporate responsibility, not public data exploitation or "free work". People don't get it at all. If you use my driving data, you must pay me or give me some incentive. I work for money, and you are selling an expensive product, so how about half the price?

Oh, so it's a money problem now. If Tesla paid people for beta testing, it'd be fine? So you're OK with people allegedly (which I don't agree with anyway, though most of the rest of you do) risking each other's lives as long as they get paid.

Well, you can say the same about Apple, Google, and pretty much any other company that runs beta testing. People rush to do it for free and wait in line. They are compensated by getting access to the latest and greatest without waiting.

You think getting paid means they are not exploited? How about paying you for your kidney, is that OK? You don't seem to understand what being exploited means.

> The technology must be produced and tested to death before it is introduced to public roads. Touch interfaces implemented without any concern for ergonomic prerequisites are a proven distraction and risk.

Stop with the blanket statements. Where are your numbers showing that Teslas are getting into more accidents? You don't have them, because not only do they not exist, the opposite is true: Teslas get into far fewer accidents than the general fleet (those numbers are in Tesla's safety report).

> You can check the sales stats of NAS servers in recent years, or the resurrection of the flip phone.

You look at a jump in NAS servers and flip phones and conclude that they're going to take over from SaaS and slabs? Go look at the relative jump and magnitude of SaaS growth too.

> Yes, some people got madly rich on marketing Cloud and AI scams and "user generated content", so what?

Ye, the fact that one person with a few bucks can now access what was only available to a few multinationals a decade ago is a scam. Do you also think the Earth is flat and COVID is a hoax?

> Moving everything to SaaS model is clear madness.

If you're actually trying to do something of value, and someone takes a huge load off your shoulders so you can get there faster, you'd be a fool not to take it. Time is the ultimate currency here, and anything that buys you time is a win. So in principle, SaaS, IaaS, and any other XaaS is a win.

The only time it's not a win is if one is either mandated to keep sensitive data in-house, or runs such a large or custom workload that no other provider can economically meet it.


Man, I see you don't understand my argument at all. You think that I care about Tesla? Tesla is just a recent example of pushing the limits of regulation.

No. I care about human-centered design. I care about responsible technology without data exploitation. I care about privacy and transparency. Your arguments are apologetic in my view and confirm the current state of affairs. As a consumer, I reserve my right to vote with my wallet. And I will never succumb to the perverse product-design philosophy that the SaaS model is trying to propagate outside its effective and usable territory.

> If you're actually trying to do something of value, and someone takes a huge load off your shoulders so you can get there faster, you'd be a fool not to take it. Time is the ultimate currency here, and anything that buys you time is a win. So in principle, SaaS, IaaS, and any other XaaS is a win.

There is no "professional" lingo that can hide the core business ethics behind this race to "eat and dominate the world". Faster? For what? Maybe tomorrow you will rationalize mass control via chip implants, because it is "progress" and it gets you faster toward the ultimate life goal: corporate profits.

What a dream. We will see how these "futuristic projections" unfold. Remember: implementation is the key to success. :)


> I care about human-centered design. I care about responsible technology without data exploitation. I care about privacy and transparency.

That is idealism: definitely necessary in small quantities, but what moves things forward is pragmatism. The former is more of a guide than a prescription. E.g. human-centered design is costly: when you're pushing the envelope, you can't have everything at the utmost polish (and a mere glance at any part of the history of new tech should make that obvious).

Responsibility without data exploitation: again, pragmatism. If Tesla is still doing this in 10 years, then ye, I'd find it questionable, but right now Tesla is the only entity on which to build hope, if you actually believe in and care about what they're trying to achieve. If you don't, or think it's all for Elon's pocket, then you're mistaken (with plenty of proof around) and I hope you spend some time challenging your beliefs.

Privacy: this one's actually a balancing act. If you want perfect privacy, go into the woods and hunt. In the SaaS case, you trade some privacy for time and money saved.

> Your arguments are apologetic in my view.

I'm not sure how to express it without it sounding that way!

> Faster? For what? Maybe tomorrow you will rationalize mass control via chip implants, because it is "progress" and it gets you faster toward the ultimate life goal: corporate profits.

Pick 10 things your life depends on today (some hints: medicine and medical tech, transport, ...) and you'd be hard pressed to find any that don't trace back to so-called "corporate profits" in capitalist nations. It's not the ultimate goal, as you put it; it's the requirement to be in the game. If you want to make large-scale impact, you need capital, which comes from investors whom you need to keep happy. That's not to say no one goes in for the money, but if you spot a smart person earning it the hard way, it should be a good signal that they're not doing it for the money (hint: leading the first publicly traded American car company in over half a century, while every other one in the country has gone bankrupt; there are many easier paths to making a buttload of money faster).


I am not at all an idealist. I am a pragmatist to the core. Data is the new petrol. And exactly as in the early days of petrol, there are "entrepreneurs" and others who have no idea of the value of personal data.

Moving forward, everyone will have this idea and will not share freely as they do now. People don't care "because they have nothing to hide"; wait and see what they do when they realize how much someone is profiting off this. People are stupid until they are not.

Second point: security. All our software is a big security risk. We don't have a single OS created with a memory-safe language in mind. In the future everything will be hackable and exploitable. You can check what is going on in the Apple camp today. Not tomorrow.

There are powers in the corporate world who don't want to see this as a problem; they care about marketing, sales, and profit. All of this will crumble to the ground.

In the near future the threats will be bigger; that's why your idea is "optimistic" and mine is realistic.

On human-centered design... I am not interested in arguing at all.


> Given that all Teslas are capable of fully autonomous driving, pending software

They are not. Please stop propagating this falsehood.


They are, and it's true, even TODAY, and I'm going to repeat it every chance I get, because I've learnt that it takes repetition to correct misunderstanding and ignorance! Just go on youtube and search for FSD Beta 10. Yes, it makes mistakes, and yes, it has a long way to go before you can sleep at the wheel, but it's doing door-to-door driving today across the entire US, like no other car even comes close to doing.


FSD is level 2 self driving by Tesla’s own admission to CA DMV. So no, they are not even close to being “full self driving” capable.


The only difference between level 2 and up is whether the system needs attention should it make a mistake, and at the moment Tesla must have a fully attentive driver at all times. The problem with the SAE levels is that level 2 covers such a broad range of autonomous capability that calling a car level 2 is almost meaningless. There are cars at level 2 manufactured a decade ago, and Tesla FSD is also level 2, yet the latter may as well be a spaceship in comparison. Knowing there are 5 levels and Tesla is at level 2 does not in any way give one a notion of how autonomous the car is, only that it may make mistakes x% of the time.

An analogy with SaaS: imagine a full-featured service that's available 99% of the time being dismissed as just another toy service, only because it's not available 99.99% of the time. That's the situation with Tesla, more or less. Granted, the march toward more 9s is going to take some time.

EDIT: I looked up "Tesla admitting level 2" and it's in regard to their *current* Autopilot software, which has been around for years, not FSD Beta, which is currently rolled out to only a few thousand people in the US. So it seems you're not understanding the difference between Autopilot and FSD Beta. I recommend again that you watch "FSD Beta 10" videos on Youtube, since you evidently have not.

EDIT: removed hostile comment


> Knowing there are 5 levels and Tesla is at level 2 does not in any way give one a notion of how autonomous the car is

It literally does. The SAE levels [1] are very clear in terms of who is in control, human supervision/takeover expectations, and the Operational Design Domain. It's not at all about mistake percentage, so the SaaS analogy is irrelevant.

At the end of the day, if you're still at level 2, you're not fully autonomous, because you can't take passengers from point A to point B while handling mistakes, including safely pulling over when necessary. Tesla is nowhere close to doing this.

[1] https://www.sae.org/blog/sae-j3016-update


> It’s not at all about mistake percentage

Yes it is, and SaaS is very relevant. The very page you've linked lists all sorts of "features" up to level 2, but level 3 and up is all about whether the driver should take over if the car asks them to. In fact, re-reading the levels, FSD Beta is at level 3 because:

- You are NOT driving when the features are active (the car accelerates, stops, turns, gives way, negotiates with traffic, waits for pedestrians, changes lanes, follows navigation, etc.).

- When the feature requests it, you must drive. Check: FSD Beta will prompt the user to take over when it can't figure out the way forward.

- These features drive under limited conditions and won't do so if those conditions aren't met. Check too: FSD Beta only becomes available when it has a good sense of the environment.

Well, thanks for the refresher. So FSD Beta is clearly at level 3 now.


> The very page you've linked lists all sorts of "features" up to level 2, but level 3 and up is all about whether the driver should take over if the car asks them to.

Thanks for making my point for me and conceding autonomy is, in fact, determined by who's responsible for safe operations and not just mistake percentage improvement.

> In fact, re-reading the levels, FSD Beta is at level 3 because:

> - You are NOT driving when the features are active (the car accelerates, stops, turns, gives way, negotiates with traffic, waits for pedestrians, changes lanes, follows navigation, etc.).

Clearly, you either didn't read the chart fully or didn't comprehend it, because you are going against Tesla's own claim that FSD is level 2. You also spectacularly missed the biggest reason why it's still level 2.

SAE level 2 in the chart says:

> You are driving whenever these driver support features are engaged — even if your feet are off the pedals and you are not steering

> You must constantly supervise these support features; you must steer, brake or accelerate as needed to maintain safety.

Last I checked, FSD requires constant driver supervision and hands on the wheel. Tesla is very clear about this. So it is, in fact, level 2 by definition.

The key difference with level 3 is that it doesn't require constant supervision — meaning a driver can read a book or watch a movie — and the system will alert in advance, within a reasonable amount of time, for the driver to take over. Of course, nobody can really determine what a "reasonable amount of time" is for a safety alert, which is why level 3 remains the most ambiguous of all the SAE levels.

You should really try to inform yourself of SAE level basics before spreading misinformation about a safety critical system.


Christ you are insufferable. The thing can drive. We all understand that it requires an attentive driver. There's nothing to argue about.


You seem to be hung up on SAE definitions and missing my point, which I'm going to reiterate, maybe from a different angle.

Moving from supervising the car to not supervising the car isn't a binary flip where suddenly, in one software revision, you can take a nap when yesterday you couldn't. It's a spectrum. And that's why SAE levels 2 and 3 describe this poorly (not to mention how poorly level 2 itself is defined: it covers a huge range of functionality, from something a bunch of cheap sensors can achieve with a graduate level of CS knowledge, to what Tesla has achieved with FSD Beta, which has required custom computers, millions of miles of driving data, and some of the biggest brains in the AI world).

Since it's a spectrum, the only variable changing is how likely it is that you, as the supervisor, need to take over control *because* your car either made a mistake or was about to. That's all it reduces to: how often your car makes a mistake. That's all level 3 and up is. All the descriptions and charts do nothing but fog this up, which is unfortunate. Once you have a car that makes very few mistakes, you don't need to supervise it, because the probability of it making a mistake is lower than yours as a human driver, at which point it's a better driver than you anyway.

You can of course argue that reduction in mistakes is itself functionality. Well, I make a distinction between continuous minor refinements and major enabling technology, like vector reconstruction of 3D space from camera images, or AI-based route planning, which, given more data, can plan better.

As far as I understand from Tesla's progress, they merely need to cover ever more corner cases to go up the levels.

And on the topic of supervision: whether you have to keep your hands on the wheel or otherwise supervise the car has a lot to do with policy and regulation. You could have a car today that is safe enough to drive while you sleep, and good luck trying to sell it without telling customers they must stay alert. This basically makes level 3, as defined by SAE, subject not only to the actual capabilities of a car but also to the regulatory environment in which it's sold.


> As far as I understand from Tesla's progress, they merely need to cover ever more corner cases to go up the levels.

I think this is the crux of the disagreement here. You say they merely need to cover more corner cases, while many (including myself) think that this endless list of corner cases is the primary, almost insurmountable problem.

From what I've seen of Tesla FSD (and competitors), these systems do pretty well in highly structured, orderly environments in clear desert weather. To deal with chaos in a blizzard and the like, we're going to need far more than a few tweaks. At this point, none of these companies is even testing in extreme environments. They're still trying to stop their cars from hitting pedestrians in well-lit areas at known crossing points. [1]

--

[1] https://news.ycombinator.com/item?id=28566376


> They are, and it's true, even TODAY

That’s a very strong claim that needs to be substantiated with evidence.

Do you have evidence of a ‘Full Self Driving’ Level 5, Robotaxi without a steering wheel made by Tesla Inc. existing on the roads today?


FSD Beta (not Tesla Autopilot) drives door to door right now. Go on youtube and search for "FSD Beta 10" to get a feel for it. It makes enough mistakes that you need an attentive driver who can take over, so you still need a steering wheel. Levels 3 through 5 are all about ever less driver intervention, but as far as functionality goes, Teslas can drive autonomously on highways and in urban settings across the entire US.


'Go on youtube and search' is not evidence.

I was under the assumption that 'Level 5' robo-taxis were coming in 2020, but it seems we got something else called 'FSD' that still requires driver intervention and is admittedly 'Level 2' per Tesla Inc. itself.

So where are the Level 5 approved Tesla robo-taxis?


> 'Go on youtube and search' is not evidence.

Yes it is. I even told you what to search for exactly and I've already pasted direct links to some of them in a sibling post.

> I was under the assumption that 'Level 5' robo-taxis were coming in 2020

Be prepared to be disappointed again and again when it comes to cutting edge of tech. You're not waiting for the next Xbox.


> Yes it is. I even told you what to search for exactly and I've already pasted direct links to some of them in a sibling post.

No, it is not. Even if one did search, there is not one video showing what Tesla Inc. promised or predicted in 2020, which was a Level 5 approved Tesla Robotaxi.

> Be prepared to be disappointed again and again when it comes to cutting edge of tech. You're not waiting for the next Xbox.

Did you not realise that you just admitted you don't know where the Level 5 approved Tesla Robotaxis that were supposed to be due in 2020 are?


> They are, and it's true, even TODAY

Tesla made that claim 5 years ago:

https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

But it wasn't. And it still isn't.


> Given that all Teslas are capable of fully autonomous driving

accidents. All Teslas are capable of fully autonomous accidents.

https://www.youtube.com/watch?v=jrWH_0YA5XM 'Tesla Autopilot Hits a Deer (and I think it will happen again.)'


You will never hear about all the times it has saved people's lives and avoided accidents, so until you show me that it has in fact increased accident rates, a link to a video means nothing, unless you are a journalist and your newspaper is on the verge of going under, in which case you need some juicy drama to sell ads.


You drank quite a lot of that kool-aid, huh


That's what well informed looks like from the outside!


Video of a Tesla emergency-braking or avoiding a pedestrian that ran in front of it would go viral instantly. Can you show me a few? Maybe one?


Oh whoops, I posted the wrong list of videos (those are FSD Beta ones). For emergency braking and "maneuvering" out of the way, the Whaam Baam Tesla Cam channel has you covered. Here are a few videos with some examples (and no, they do not go viral; no accident and no drama does not sell ads):

https://www.youtube.com/watch?v=HSeIOg4SyOg

https://www.youtube.com/watch?v=krKkuIEMXHI

https://www.youtube.com/watch?v=ZebkknqGkxE

https://www.youtube.com/watch?v=q0_eKr8jnNo

https://www.youtube.com/watch?v=pHM6xY2Ys4U

Oh, and I have personally avoided an accident so far, when a whole bunch of cars on the highway slammed on the brakes and Autopilot sounded an alarm and braked. So there you go.


Seems you didn't understand me. A "pedestrian" is not a car. I meant avoiding hitting living things, not property.

1 Mountain lion runs across; can't see the car slowing down at all, despite what the description text says. The cat barely makes it and the car keeps going.

2 Bird almost hits the windshield. The owner's own description says he took over.

3 Property and "funny home videos".

4 Well, what do you know. The EXACT SAME situation as in the Jessa Jones clip happens at @2:20: "all of a sudden a deer ran out onto the road but autopilot didn't react at all, gracie's applied the brakes only moments before impact but it was too late". A living creature hit, a $4500 bill, and 2 weeks without the car. @6:30 it plows into road debris roughly the size and shape of a human lying on the road. @9:00 finally a human, on a crosswalk, 50 meters ahead. @11:00 a coyote, zero reaction, and the human takes over to avoid an accident.

5 @3:30 At first it looks like the Tesla finally detects an animal on the road, but no. It simply reacts to the car in front (which is braking for the animal) and starts accelerating as soon as the car in front moves on, almost driving over the small deer in the process.

Tesla seems to be really good at spotting cars, and might even have a special case for humans on crosswalks, but every single clip with an animal ends with an accident or a human taking over. Clip 4 @6:30 shows what would happen if an unconscious person fell on the road.

Off topic: I love clip 4 @11:50. Hitting a small wooden post at 20 mph = totaled Tesla :o


You are literally confirming what my grandparent post said:

Where do you hear about all the times Teslas have saved lives? You don't, because even with evidence thrown in their faces, people only list the fuck-ups.

I rest my case.


Not driving into a pedestrian slowly walking on a crosswalk is not saving lives. That was the _only_ example of a Tesla reacting correctly to a living thing; all the others ended in accidents, death, or a human taking over. And those were your "good" examples.


> And those were your "good" examples.

That was me searching for a minute. If you actually cared to learn something, you'd spend a few hours of your own time looking for what's actually true. If I were to go out of my way to show good examples (which I'm not going to do for your lazy ass), you would say something like "oh great, my 10-year-old nephew can do that".

So I'm not gonna bother with you pal — I'll convince people on the fence, they'll take care of convincing you.

Enjoy your video games.


It's not about humans being able to "do the same"; it's about Tesla not being programmed _at all_ to avoid anything other than cars and maybe pedestrians on crosswalks. It doesn't consider any other obstacles on the road and plows into them.



These are FSD Beta videos — I posted the saves videos in a sibling reply.


I'm sure that's their cover story. But in reality they aren't anywhere close to the level of self-driving needed where you could give your full attention to the touchscreen.


Sure, hence "pending software". And if you happen to follow their progress on youtube (just search FSD Beta 10), you'll see just how advanced it already is. And it's getting better at an exponential rate.


> Given that all Teslas are capable of fully autonomous driving, pending software

All new ones are. There are still old ones out there that didn't ship with the HW3 computer and haven't yet had it retrofitted.



