In the aviation community, there are major concerns over pilots becoming over-reliant on cockpit automation instead of flying the jet.

Asiana 214 [0] is a classic example: a perfectly good airliner flown into a seawall on landing.

In the Boeing 777, one example of the (auto)pilot interface surfacing safety-critical information is the stall speed indication on the cockpit display [1], warning the pilots if they are approaching stall speed.

Hopefully Tesla will optimize the autopilot interface to minimize driver inattention, without becoming annoying.

[0] https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214

[1] http://imgur.com/bGsFTCG



In aviation, autopilots became successful because the tolerated human-machine handoff latency is relatively large: despite how fast planes fly, the separation between them and other objects is large, and there is usually still time (seconds) to react when the autopilot decides it can't do the job (https://www.youtube.com/watch?v=8XxEFFX586k)

On the road, where relative separation is much smaller (and there has even been talk of self-driving cars reducing following distances significantly, which just scares me more), the driver might not have even a second to react when he/she needs to take over from the autopilot.
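
To put rough numbers on that, here's a back-of-the-envelope sketch; the speeds, separations, and reaction times are illustrative assumptions, not measured figures:

    # Rough handoff-margin arithmetic (illustrative numbers only).
    def handoff_margin_s(speed_m_s, separation_m, reaction_time_s):
        """Seconds of slack between 'autopilot gives up' and 'too late'."""
        time_to_obstacle_s = separation_m / speed_m_s
        return time_to_obstacle_s - reaction_time_s

    # Airliner: ~250 m/s, but kilometers of separation from other traffic.
    print(handoff_margin_s(250.0, 9000.0, 1.5))  # 34.5 s of slack

    # Highway car: ~30 m/s with a one-second following gap (30 m).
    print(handoff_margin_s(30.0, 30.0, 1.5))     # -0.5 s: no margin at all

The aircraft leaves tens of seconds for the human to take over; the car can leave negative margin, i.e. the handoff is already too late the moment it happens.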


In other words:

The driver might need to react before the autopilot even realizes it needs to react (let alone before a human could respond).

There are two things that I take away from this.

* The autopilot should probably just keep going (or bring the car to as controlled a stop as possible).

* It should also collect more data so it can warn the driver further in advance.
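
A minimal sketch of that fallback policy, in Python. Everything here is hypothetical: the confidence score, thresholds, and action names are invented for illustration and are not anything Tesla actually ships:

    # Hypothetical graceful-degradation policy (invented names/thresholds).
    from enum import Enum, auto

    class Action(Enum):
        CONTINUE = auto()         # confidence fine: keep driving
        WARN_DRIVER = auto()      # confidence degrading: alert early
        CONTROLLED_STOP = auto()  # confidence gone: slow down, stop safely

    def fallback_action(confidence, warn_at=0.7, stop_at=0.4):
        # Never just disengage abruptly; warn early to buy the driver time.
        if confidence >= warn_at:
            return Action.CONTINUE
        if confidence >= stop_at:
            return Action.WARN_DRIVER
        return Action.CONTROLLED_STOP

    print(fallback_action(0.9))  # Action.CONTINUE
    print(fallback_action(0.5))  # Action.WARN_DRIVER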


My conclusion is that "autopilot" is insufficient in this context, and that a fully automatic AI driver is needed.


My understanding of the Asiana crash was that the autopilot would have landed the plane fine, and that it was the humans turning it off that caused the problem.

Your point is still valid, but perhaps we are approaching a time when relying on the automation beats all but the best human pilots (Sully, perhaps).


The Asiana pilots were not able to fly a coupled (automatic) landing due to the ILS glideslope being out of service.

The pilots were under the mistaken impression that the aircraft would automatically spool up the engines if it became too slow. This safety feature didn't engage, for an obscure technical reason. Even on a manual visual approach, the pilot can still use the autothrust for landing.

A more rigorously trained pilot (e.g., Capt. Sully) would have aborted the approach and performed an immediate go-around upon getting below the glidepath (or too slow) below a certain altitude (e.g., 400 ft above ground level).

The rules requiring a go-around (or missed approach) apply to a fully automated approach and landing just as much as to a manually flown one.


The Air France 447 accident is a better-fitting example of the pitfalls that can arise in complex "humans-with-automation" systems.

There, automation lowered both the standard of situational awareness and fundamental stick-and-rudder skills. Then, when a quirky corner case occurred, the pilots did all manner of things wrong: so much so that they amplified a condition from "mostly harmless" to fatal for all aboard.

Vanity Fair has a nice piece on this accident that's easy to dig up. Good read.


I heard the Airbus steering setup (separate, uncoupled sidesticks) noticeably added to the problems. One pilot pulled up as hard as he could while the other one thought he was pushing down, making the confusion that much worse.


That's true, but it was well known (and trained on), so I'd categorize that domain as "how the machine responds when your hands are on the controls," which is nearly a synonym for the "stick and rudder skills" category I cited.

Sure, to nearly every pilot that behavior is wacky, but it shouldn't have been a surprise for more than an instant to pilots who were "operating as designed."

It seems there's no free lunch: when skills atrophy as a natural response to helpful automation, some other skills must advance if the goal of an ever-improving error (accident) rate is to be achieved.



