
I have made this prediction multiple times throughout the last 8 years based on my knowledge of robotics. I have argued with family members who are VCs as well as enthusiastic technical observers based on simple first principles reasoning.

It is highly unlikely that completely autonomous vehicles will be present on the streets - interacting with humans, cars, and random events in real time - within the next decade. What is more feasible, and more likely to occur, is that large chunks of trucking become "automated" in the form of a hyper-advanced cruise control, while still lacking Level 5 autonomy over the whole route. Understanding why requires a deeper look at where this technology came from: Stanley, the robotic car built by Thrun's Stanford team, which Google later hired to build what became Waymo's tech. Thrun notes:

> “In the last Grand Challenge, it didn't really matter whether an obstacle was a rock or a bush, because either way you'd just drive around it," says Sebastian Thrun, an associate professor of computer science and electrical engineering. "The current challenge is to move from just sensing the environment to understanding the environment."

http://news.stanford.edu/pr/2007/pr-junior-021407.html

More precisely, to interact and function in the wild, the robot needs to understand, distinguish, and model the properties of wildly different entities. And these entities aren't just other people - they are everything in its environment, including random events: people walking into traffic, objects falling onto the road, sudden rain and hail, or just potholes.

This is not a problem that can be trivially solved with better sensors or cheaper LIDAR. It is a conceptual problem that cannot be trivially brute-forced or simulated in advance, since each unique permutation is just different enough to confound existing models. Even if such confounding events are six sigma in nature - a probability of 0.00034% for every mile driven - then at 3.22 trillion miles/year (https://www.npr.org/sections/thetwo-way/2017/02/21/516512439...) Americans will experience 10,948,000 anomalies per year. Or about 20.8 events per minute.

20.8 anomalies per minute is a pretty big number. That's a potential accident every 3 seconds or so. And that's a six sigma estimate. It is hard to overstate how hard six sigma performance is for the current generation of autonomous vehicle hardware.
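The arithmetic behind those numbers can be checked directly (both inputs are the assumptions stated above, not measurements):

```python
# Back-of-the-envelope check of the anomaly estimate above.
# Assumptions: 0.00034% anomaly chance per mile (a six-sigma-ish rate)
# and 3.22 trillion vehicle-miles driven in the US per year.
p_anomaly_per_mile = 0.00034 / 100        # = 3.4e-6
miles_per_year = 3.22e12

anomalies_per_year = p_anomaly_per_mile * miles_per_year
minutes_per_year = 365 * 24 * 60          # 525,600

print(f"{anomalies_per_year:,.0f} anomalies/year")                # ~10,948,000
print(f"{anomalies_per_year / minutes_per_year:.1f} per minute")  # ~20.8
```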

You might dispute the numbers, but they are meant as an illustration and an intuitive explanation for why this problem is Hard with a capital H. Unlike some professionals, I don't believe Strong AI is necessary to solve this problem, but Weak AI is still a form of AI - technology that's just out of reach today.

At this point, people counter with Google’s cars. After all, seeing is believing, and haven’t they been seen to run successfully for millions of miles at this point?

Yes, but... Waymo/Google's success is hard to replicate. No one else can bend the rules far enough to match their head start.

Most of us forget that before starting the project, Google used LIDAR to create highly detailed maps of every environment their cars would ever operate in. Engineers then recorded standard human driving on those routes over and over, across different hours. These elements were then combined in their model to pre-compute routes before any real-world driving happened.

Google was able to achieve its miraculous results because of its ownership of GMaps and Street View, and the resources to deploy a fleet to map everything in advance. No one else has these resources. No one else can acquire them cheaply. And even Alphabet, a trillion-ish dollar corporation, can't replicate this package for the entire world. Maybe someone will figure out a way to create precise, sub-eighth-of-an-inch 3D models of environments using drones and satellite images, but until then this technology is too expensive to deploy.

It is hard to overstate how important a distinction this is. The "only" thing a Google/Waymo robot does in real time is check the difference between the recorded dataset and whatever the real-time LIDAR + camera feed is saying. In other words, for every point of the route, Google has generated an expected set of values, and any deviations from these are what the car uses to make real-time decisions. https://spectrum.ieee.org/automaton/robotics/artificial-inte...
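That "diff against a prior map" idea can be sketched loosely. Everything here - the function, the point format, the threshold - is invented for illustration; it is not Waymo's actual pipeline:

```python
# Illustrative sketch only: flag live LIDAR returns that differ from a
# pre-computed map. Names and the 0.25 m threshold are made up.
import math

def deviations(expected_points, observed_points, threshold=0.25):
    """Flag observed returns that differ from the prior map by more
    than `threshold` meters; everything else is already 'known'."""
    flagged = []
    for exp, obs in zip(expected_points, observed_points):
        dist = math.dist(exp, obs)       # Euclidean distance in meters
        if dist > threshold:
            flagged.append((obs, dist))  # candidate obstacle or change
    return flagged

# Static scenery matches the prior map within tolerance; one point
# (say, a new obstacle) does not, so only it needs interpretation.
prior = [(0.0, 0.0, 0.0), (5.0, 2.0, 0.0), (9.0, 1.0, 0.0)]
live  = [(0.0, 0.0, 0.0), (5.0, 2.1, 0.0), (7.5, 1.0, 0.0)]
print(deviations(prior, live))  # only the third point is flagged
```

The point of the sketch: the hard perception problem is reduced to interpreting a handful of deviations, rather than the whole scene.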

However, even this partial solution is tempting, because it works. But sadly, that's not the whole story. What gets lost in the re-telling is that this technique means the cars can't work in unstructured environments. They are unsafe for most roads, especially uncontrollable, unpredictable, and narrow residential roads. It's why these cars operate inside strict geo-fences even when they're ferrying passengers around, as in Phoenix, where the car seems to mostly serve the downtown area with very little residential coverage (caveats apply; I couldn't find a map).

But, even getting this far is a huge technical achievement, even if it's not the fundamental breakthrough we've been looking for.

We should be prudent, but optimistic. There are a lot of smart people out there who are working on this problem right now. They are figuring out really clever ways to deal with these edge cases and they will solve this problem. Even if it's not on the hype cycle's schedule.

With that in mind, I'd like to suggest that a better model for thinking about driverless vehicles is another Google service: Translate. It's quite good, but when was the last time someone translated an entire book with it (and it came out readable/sensible)? The system's performance - due to its statistical roots - seems to be fundamentally asymptotic w.r.t. the data presented to it. It can't be improved beyond a certain point even if you throw more data at it. I suspect the current generation of driverless vehicles will be the same. http://people.csail.mit.edu/brooks/papers/elephants.pdf http://arxiv.org/pdf/1604.00289.pdf

Before the needed breakthrough happens, it is likely that the cutting edge won't advance beyond Level 3 in the next decade:

> I would guess Tesla’s position on this would be that most of the time, yes, you can rely on it, but because Tesla has no idea when you won’t be able to rely on it, you can’t really rely on it.

https://spectrum.ieee.org/cars-that-think/transportation/sel...

And that's okay. We might not have driverless cars by 2020, but we probably will by the mid-2030s, and that sounds pretty darn fantastic to me.



> Americans will experience 10,948,000 anomalies per year. Or, about 20.8 events per minute.

One possibility is that driving is simply too challenging in the general case.

Note one of the problems discussed in the article: that self-driving algorithms can't cope with bad weather such as snow, when road markings are invisible. Humans also can't cope with this situation. We don't drive with strict adherence to the rules of the road; for example, the tracks left by the car in front of you are far more important than any buried lane marking.

If we demand (for example) that autonomous vehicles cause less than one accident per 1 million km driven, then some situations that humans brave on a daily basis (thunderstorms, blizzards, icy roads, residential streets with children around) may never meet that standard even with a "perfect" autonomous system. Thus far, we've papered over the contradiction because we have human drivers to blame.


> Humans also can't cope with this situation.

What exactly does that mean? People will happily drive on I85/95 in Atlanta in torrential downpours where the lane markings are almost impossible to see. For the most part they don’t crash, because they know where the lanes should be. Humans are unmatched at filling in missing information like that. People drive in rain, sleet, snow. And they still manage to go 500,000 miles between crashes on average.


In Atlanta people are accustomed to that situation, but in places like Los Angeles people slow to a crawl in a drizzle with otherwise zero traffic. In places with snowstorms, people still regularly lose traction and spin off the road, or start sliding backwards down a hill.

I remember a picture from a couple of years ago, from a snowstorm that hit the south - could have been Georgia, even - that looked like a scene from an apocalyptic movie, complete with a flipped-over car burning in the distance.

Some people crash; some people do fine with skill. Will a self-driving car be able to counter-steer out of an ice-induced drift and safely regain traction the way an experienced human driver can? That's all up in the air right now.


Subsurface mapping is worth keeping an eye on. [1] It offers a more-or-less instant solution to most of these concerns, if it can be made to work.

[1] http://news.mit.edu/2017/lincoln-laboratory-enters-licensing...


> Yes, but... The issue of using Waymo as an example is that Waymo/Google bent the rules to achieve these numbers.

Why does it matter whether or not Waymo uses maps, so long as they can solve the problem of cheaper, safer, more reliable transportation between point A and point B? Yes, L5 autonomy is significantly harder and has huge advantages, but true L4 autonomy (which I do not believe currently exists) is still revolutionary. You're completely focused on the robotics challenge without seeing that the actual problem is just getting someone where they want to go. If they can solve that problem, it's not cheating - they just figured out an easier way to do it.

That said, I think L4 is still more than a decade away from where it can compete with Uber and Lyft.


Assuming a 0.00034% chance of an anomaly for every mile driven, we have 340 anomalies per 100 million miles driven.

According to AAA, drivers between the ages of 25 and 29 have 526 crashes per 100 million miles driven.

Humans are incompetent drivers to an extent we would never accept for robots. If we tried releasing autonomous vehicles that were even 100x safer than humans, we would end up banning them for a generation.


That's not how the math works. At a 0.00034% chance of an anomaly every mile, you have a roughly 50% chance of an anomaly every 200,000 miles: (1 - 0.0000034)^200,000 ≈ 0.5. Humans go 500,000 miles between crashes. Of course, an anomaly doesn't necessarily mean a crash, but if you don't require the human operator to be paying attention all the time, it could well lead to one with high probability. Indeed, because an anomaly would likely confuse a whole bunch of cars on the road at the same time, it could well lead to catastrophic and cascading failures - unless you posit inter-car communications technology which doesn't yet exist.


Considering the average person drives 10,000 miles per year, that means the average person could live to 10,000 and not have a single crash.

> Humans are incompetent drivers

No, not really. Humans are actually such good drivers that computers have absolutely no chance of even coming close. Computers can't even stay running that long without crashing, never mind actually driving a car.


> Considering the average person drives 10,000 miles per year, that means the average person could live to 10,000 and not have a single crash.

Your math is a little off. 100,000,000 / 526 is a crash every 190,000 miles. So every 19 years, not 10,000 years.
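Spelled out:

```python
# AAA figure from upthread: 526 crashes per 100 million miles,
# at an assumed ~10,000 miles driven per year.
miles_per_crash = 100_000_000 / 526
print(f"{miles_per_crash:,.0f} miles per crash")          # ~190,114
print(f"{miles_per_crash / 10_000:.0f} years per crash")  # ~19
```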

I guess I've been unlucky. In probably 500,000 miles of driving in my life, I've been rear-ended 3 times. Fortunately all at low speed, by distracted drivers, e.g. a mom with screaming small children.

That doesn't even count the time I tried to turn right from the left lane. It didn't end well for my car and was the only accident I was at fault for. And there were a few other accidents as well.

Perhaps "crash" is defined as including an injury? That makes the statistic more believable, because nobody ever got hurt in any of my misadventures. All my "crashes" just involved car body damage.


I didn't look into their methodology, but I would guess that crash was implicitly defined as reported crash. I've had a fair number of low speed collisions where we never bothered to report it.



