Rodney Brooks is misleading. He says things like, "It was actually done in Germany in 1987," but neglects to mention that the road had no other cars on it.[1] He also claims that the first autonomous coast-to-coast drive happened in 1995, but that project autonomously controlled only the steering, not the gas or brakes, and 150 miles of the trip were driven by humans.[2]
Modern autonomous vehicles are much more impressive. They were successfully navigating urban environments 12 years ago.[3] Waymo's cars have driven over 10 million miles, and their disengagement rate is once every 11,000 miles.[4] We're at the point where no major breakthroughs are needed, just incremental improvements. That's what's different from earlier eras of autonomous vehicles.
> In the 1980s, a vision-guided Mercedes-Benz robotic van, designed by Ernst Dickmanns and his team at the Bundeswehr University Munich in Munich, Germany, achieved a speed of 39 miles per hour (63 km/h) on streets without traffic.
Think about what that means. Waymo has been doing this for many years, drives in basically ideal conditions (Phoenix rather than Philadelphia), and still disengages at a rate that amounts to about once a year for a typical personal vehicle. That’s not enough for “real self-driving,” because at that disengagement rate the human must be actively engaged the whole time. Human drivers go about 500,000 miles between crashes. (And that’s not 500,000 miles driving through Phoenix. That includes miles driven through places like DC, where freeways have no acceleration lanes. That includes drunk drivers and teen drivers. You can’t control the other people on the road, but if you don’t drive distracted or drunk, don’t speed, and so on, I’d bet you can expect to go at least a million miles between crashes.) Disengagement rates would have to improve by a factor of roughly 50 before a human could stop paying attention the whole time while still achieving an acceptable level of safety.
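Back-of-envelope, the arithmetic above looks like this (using the figures quoted in this thread; the ~13,000 miles/year typical-driver figure is my own assumption, not from the thread):

```python
# Figures quoted in the thread:
MILES_PER_DISENGAGEMENT = 11_000   # Waymo's reported rate
MILES_PER_HUMAN_CRASH = 500_000    # rough human crash rate cited above
TYPICAL_ANNUAL_MILES = 13_000      # assumed typical personal-vehicle mileage

# How often a typical driver would see a disengagement:
disengagements_per_year = TYPICAL_ANNUAL_MILES / MILES_PER_DISENGAGEMENT
print(f"~{disengagements_per_year:.1f} disengagements per year")  # roughly once a year

# Improvement factor needed to match the human crash rate:
factor = MILES_PER_HUMAN_CRASH / MILES_PER_DISENGAGEMENT
print(f"~{factor:.0f}x improvement needed")  # ~45x, i.e. roughly a factor of 50
```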
A disengagement doesn't necessarily mean that the car would have crashed. There are amusing disengagement stories, such as a cyclist doing a track stand and causing the car to get stuck.[1] Many disengagements are false positives. Think of how often you've done the equivalent for a human driver: telling them to slow down or watch out. For me it's certainly more often than once every 11,000 miles.
Given that humans aren't great drivers, you'd think that after 10 million miles Waymo would be at fault for some crashes. But only one of Waymo's crashes was even partially their fault. One of their cars was moving at 2 mph to get around some sandbags in the road and hit a muni bus that was trying to squeeze by at 15 mph.[2] Waymo has since tweaked their software to account for aggressive bus drivers. That's over 10 million miles and only one collision that was even partially their fault. That sounds pretty safe to me.
And regarding weather: though their pilot program is in Phoenix, Waymo doesn't just drive in places with nice weather. They've been testing in Michigan for the past two winters.
A disengagement generally means a human test driver had to take control. It’s not just telling a human driver to watch out; it’s taking the wheel from them. Disengagements wouldn’t necessarily have led to a crash, but there’s a pretty good chance many would have. If even 10% of disengagements would’ve led to a collision, you’re still not close to a good human driver.
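For concreteness, here's the arithmetic behind that 10% figure (the conversion rate is a hypothetical guess, not a measurement):

```python
# Hypothetical: what crash rate is implied if some fraction of
# disengagements would actually have led to a collision?
MILES_PER_DISENGAGEMENT = 11_000   # Waymo's reported rate
CONVERSION_RATE = 0.10             # assumed fraction that would have crashed
MILES_PER_HUMAN_CRASH = 500_000    # rough human crash rate cited above

implied_miles_per_crash = MILES_PER_DISENGAGEMENT / CONVERSION_RATE  # 110,000 miles
gap = MILES_PER_HUMAN_CRASH / implied_miles_per_crash                # humans ~4.5x better
print(f"Implied: one crash per {implied_miles_per_crash:,.0f} miles "
      f"({gap:.1f}x worse than the average human)")
```

Even at a generous 10% conversion, the implied rate is several times worse than an average human driver, let alone a careful one.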
One crash in 10 million miles isn’t as great as it sounds. First, it’s a nearly meaningless number, because a human is intervening every 11,000 miles; it’s not a true measurement of what a purely autonomous collision rate would be. Second, humans crash once every 500,000 miles, and that’s under the full gamut of circumstances (drunk drivers, unfamiliar roads, teenagers, etc.). Waymo is running with trained drivers in thoroughly mapped test areas in a place with easy traffic and weather. You’d expect humans doing nothing but running the same routes over and over in that carefully geofenced area to do better than one collision per 500,000 miles. (Especially with someone looking over their shoulder, as the self-driving cars have!)
>> A disengagement doesn't necessarily mean that the car would have crashed.
Conversely, a lack of disengagements doesn't mean that the car is driving safely. Short of the human driver actually taking control, there is simply no way to know how close to an accident the car came.
Like I say above, disengagements don't really tell us anything about the car's real world driving ability. They're just a silly proxy mandated by bureaucracy, and only a very weak measure of real progress.
Yes, of course there has been progress since the '80s and '90s. Things would be really bad otherwise.
>> Modern autonomous vehicles are much more impressive.
It really depends on what you consider impressive. The DARPA Grand Challenge involved one road loop with stunt drivers instead of real traffic. These are still strictly controlled, laboratory conditions that tell us nothing about the ability of robot cars to operate in the real world.
Waymo's disengagement rates don't really say anything, either. Perhaps Waymo is now driving its cars in easier conditions after noticing that they tended to disengage too often. What we know for sure is that Waymo doesn't have autonomous cars - and neither does anyone else. If they did, they'd be out on the streets without safety drivers, counting autonomous miles, not miles without disengagement.
According to the post you link, reporting disengagement rates is required, but the fact that Waymo chooses to advertise theirs as a measure of their cars' improvement tells me that they have no real results to show, and instead tout a meaningless proxy just to make people believe that they are further ahead on the road to autonomy than they really are.
> the fact that Waymo chooses to advertise theirs as a measure of improvement of their cars tells me that they have no real results to show
It tells me that's the weakest number they can possibly report. If I were Google and I knew I had the strongest ML teams in the world by miles, the strongest internal results by miles, I would say as little as possible for as long as possible. Get as far ahead as possible.
I think the Alphabet board learned their lesson about announcing early. They still shut down products, of course, but so do Intel, Facebook, and every other company. And I think they're learning their lessons about sales (Cloud is hiring 10k salespeople) and customer support (I'm sure they're painfully aware of the issues).
Btw, this is from the same source as your link no 1. above ("Prof. Schmidhuber's highlights of robot car history"):
>> 1995: UniBW Munich's fast Mercedes robot does 1000 autonomous miles on the highway - in traffic - no GPS!
>> Dickmanns' famous S-class car autonomously drives 1678 km on public Autobahns from Munich to Denmark and back, up to 158 km without human intervention, at up to 180 km/h, automatically passing other cars
A most impressive result, easily the equal of modern results - but in 1995. That puts your claim that "incremental improvements" are all that's needed in perspective. Major breakthroughs are needed.
The 1995 result was nowhere close to what we have today. 158km was the maximum distance between disengagements. Waymo's average distance between disengagements is over 100x that, and they're going on more than just highways.
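The "over 100x" figure checks out on the numbers quoted (though note it compares Waymo's average gap against the 1995 run's best single stretch):

```python
# Sanity check of "over 100x": Waymo's ~11,000 miles between
# disengagements vs. the 158 km best stretch from the 1995 run.
KM_PER_MILE = 1.609
waymo_km = 11_000 * KM_PER_MILE   # ~17,700 km between disengagements
ratio = waymo_km / 158            # vs. the 1995 best stretch
print(f"~{ratio:.0f}x")           # a bit over 110x
```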
>> 158km was the maximum distance between disengagements.
That was 158 km at 180 km/h on an autobahn with no upper speed limit and with unrestricted traffic.
Anyway, like I say in my other comment, Waymo's disengagements mean nothing because, unlike Dickmanns' autobahn experiment, there is no one watching the performance of their cars other than Waymo employees. As far as anyone can tell, their impressive disengagement record is the result of their cars being driven in the mildest, friendliest conditions possible. It certainly seems that way, taking into account where they drive their cars - in sunny, peaceful Phoenix, AZ, and then only on roads they've actually mapped.
Put these two things together and it's obvious that Dickmanns' experiments were run in conditions as close to the real world as possible, whereas Waymo consistently keeps its cars in closed, controlled, simple environments that tell us nothing about their capability in the real world.
But they're still using easy environments. Not sure a German motorway without speed limits is easier to navigate than what Waymo faces.
I doubt that maximizing the miles between disengagements should be our goal. The goal should be for the car to face the worst conditions imaginable (snow, ice, dirt roads, other drivers ignoring traffic) and somehow manage to survive in those situations.
A German motorway without traffic is very easy to navigate, but a typical day-to-day situation involves several kinds of cars driving at different speeds and engaging in various maneuvers like overtaking, exiting, merging, and switching lanes. There are also traffic jams and heavy-traffic situations.
If one wants to drive dynamically, one has to overtake and switch lanes quite a lot, which makes this challenging. If one is content with driving like a snail, one can stick to the first lane, which is pretty simple and could be managed even by a so-called self-driving car.
The usual roads are probably very easy; they have wide, clear markings. The problem arises with road works, which mess up the markings and lane widths in all possible ways. During summer they are very frequent, and I bet they were the cause of the human interventions in those old experiments.
No problem, disengage there. The problem seems to be that there's no safe way to disengage.
Tesla crashes kill people because of this. (Ding ding, you have one second before impact.) Waymo probably wants to nail city driving first because it's much lower speed, and safer.
A limited access road is absolutely easier to navigate than a city. Once you can stay in one lane with sufficient following distance, you're done. The problem space is tiny.
If you have a lane. A gravel road with two-way traffic (not that uncommon in rural areas) only works because drivers communicate their intentions (especially if one needs to wait at a point where the road is wider). Such roads don't work by fixed rules.
Or nothing is needed other than cranking up the safety factor: let me read a book while driving, and if anything out of the truly ordinary happens, start to slow down. Much better than what Tesla does (keep velocity and wish the human godspeed - what could go wrong!?).
I don't think his point is that autonomous vehicles haven't made any progress, but rather that the rate of progress has been pretty slow if you set the start point in the '80s/'90s rather than the last 10 years or so that the public usually thinks of. Of course, the rate of progress isn't necessarily linear, especially given the rapid advancement in machine learning, so there's no reason to assume any further progress from here on out will take just as long. Even so, I don't know that it has been adequately demonstrated that AVs perform as well as (if not better than) humans, given the vast, vast variety of unideal situations human drivers have been subjected to over the past century.
1. https://en.wikipedia.org/wiki/History_of_self-driving_cars#1... has this quote:
> In the 1980s, a vision-guided Mercedes-Benz robotic van, designed by Ernst Dickmanns and his team at the Bundeswehr University Munich in Munich, Germany, achieved a speed of 39 miles per hour (63 km/h) on streets without traffic.
2. http://www.cs.cmu.edu/afs/cs/usr/tjochem/www/nhaa/nhaa_home_...
3. https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2007)
4. https://medium.com/waymo/an-update-on-waymo-disengagements-i...