Lance Eliot Contributor
When you were a young child, you might have placed your hand near a hot stovetop or tried to grab the handle of a pot or pan that had been on the stove, and instinctively snatched back your hand. Having felt the intense heat, your autonomic nervous system kicked in to save you from a severe burn. At a later age, you likely used conscious reasoning to anticipate that a stovetop can be extremely hot, or that the handle of a boiling pot or pan can be scalding, and so you cognitively make a mindful decision to avoid touching those potentially third-degree-burn-producing artifacts.
There is a delicate balance and tight dance between your instinctive reactions and your mindful reasoning, and at times the two might even be at odds with each other.
I remember one time I saw a pan on a stove whose contents were starting to ignite, potentially leading to a deadly fire, and I quickly reasoned that the most expedient act was to grab the pan by the handle and get it off the stovetop, yet I risked burning my hand in doing so. My instincts fought against my reasoning: instinctively my hand did not want to grasp that scorching hot handle, but I did so anyway, averting a larger disaster.
When it comes to key aspects of driverless autonomous cars, the same kind of delicate balance and tight dance takes place between automated instinctive features and a kind of more elaborated AI reasoning of sorts, particularly when it comes to braking an in-motion self-driving car.
The recently filed Tesla crash lawsuit involving the death of a human driver in a Tesla Model X provides a case in point on this kind of matter.
Uber Incident As Exemplar Of System Instinct Versus Reasoning
Let’s first review the Uber self-driving car deadly incident as a forerunner case for some insightful lessons.
You might recall the headline-making case last year of the Uber self-driving car incident that led to the unfortunate death of a pedestrian. In my initial analysis, posted shortly after the incident was reported in the media, I predicted that systems-related confounding issues might have led to the brakes not being applied in a timelier manner. When the NTSB report was released, my prediction was characterized as prescient, since indeed a system-related braking aspect was involved.
It turned out that the Uber self-driving car engineers had previously disabled the inherent Volvo emergency braking capability, generically often referred to as Automatic Emergency Braking (AEB).
Why would such a seemingly oddball and risk-heightening act have been undertaken? Because the self-driving car engineers had devised their own AI-led emergency braking system and were concerned that having essentially two kinds of braking systems in the driverless car would lead to erratic behavior.
Imagine you were helping a novice teenage driver learn to drive a car, and suppose the car was equipped with two sets of brakes, one for you as the front seat passenger coaching the teenager, and then the usual brakes accessed by the skittish novice driver. At any time, either of you could potentially apply the brakes. If you ponder this for a moment, you realize that it can lead to a great deal of confusion, and possible calamity, since either of you might suddenly slam on the brakes, catching the other one off-guard.
There’s an added twist from a systems perspective.
Automatic Emergency Braking can be implemented as an essentially simplistic and almost instinctive approach, detecting what it believes is the presence of an object ahead and, based on a quick-and-dirty calculation, urgently applying the brakes if it guesses that the object is going to get struck by the car. Alternatively, Automatic Emergency Braking can be a more “reasoned” capability, involving a more robust AI system that assesses a wide variety of factors and sensory data, trying to arrive at a more intricately derived decision about hitting the brakes.
The more instinctive kind of AEB tends to be a faster form of responsiveness, yet it can also lead to falsely applying the brakes when the situation might not truly warrant it. The AEB that involves more of an AI analytic approach tends to be more well-rounded, but it can chew up precious time and, as such, lose moments that might have been spent actually braking the car.
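To make the instinctive flavor concrete, here is a minimal sketch of the kind of quick-and-dirty calculation such a system might perform: if the time-to-collision with a detected object (distance divided by closing speed) drops below a fixed threshold, brake immediately. The threshold value, function names, and units are illustrative assumptions for this article, not drawn from any production system.

```python
TTC_THRESHOLD_S = 1.5  # hypothetical reaction window, in seconds


def instinctive_aeb(distance_m: float, closing_speed_mps: float) -> bool:
    """Return True if the brakes should be applied urgently.

    distance_m: meters to the detected object ahead.
    closing_speed_mps: how fast the gap is shrinking, in meters/second.
    """
    if closing_speed_mps <= 0:
        # The object is not getting closer; no emergency action needed.
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < TTC_THRESHOLD_S
```

Note how little the rule considers: no traffic behind the car, no assessment of whether the “object” is a plastic bag or a pedestrian. That simplicity is exactly what makes it fast, and also what makes it prone to false alarms.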
As a human driver, you at times need to make the same kind of instantaneous decision. There’s a possum in the roadway: do you jam on the brakes and hope to stop before you slam into the animal? Meanwhile, perhaps there are other cars behind you that will ram into your car, and so maybe it is “safer” to not reactively apply the brakes and instead proceed ahead. Instinct versus reasoning.
You can have both types of AEB on a self-driving car, though it is somewhat akin to having two masters, analogous to the novice teenage driver having access to the brakes while the watchful parent simultaneously does too.
Should the AI robust version of AEB be able to override the more simplistic instinctive AEB version?
Should the instinctive version be able to invoke itself even if the more elaborate AI capability has not yet ascertained whether urgent braking is a valid action to undertake?
It can be a bit of a conundrum.
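One way to picture the conundrum is as an arbitration policy between the two subsystems. The sketch below encodes just one hypothetical policy: the fast instinctive verdict wins while the slower reasoned system is still deliberating, but a completed reasoned verdict overrides it. The names and the policy itself are assumptions for illustration; a different automaker could plausibly choose the opposite precedence.

```python
from enum import Enum


class Decision(Enum):
    BRAKE = "brake"
    NO_BRAKE = "no_brake"
    PENDING = "pending"  # the slower AI analysis has not finished yet


def arbitrate(instinctive: Decision, reasoned: Decision) -> Decision:
    """Resolve conflicting verdicts from the two AEB subsystems.

    Policy sketched here: trust the fast instinctive verdict until the
    reasoned system produces an answer, then let the reasoned answer win.
    """
    if reasoned is Decision.PENDING:
        return instinctive
    return reasoned
```

Under this policy, an instinctive “brake!” takes effect immediately, yet can later be countermanded; the reverse policy would never let the reasoned system veto an emergency stop. Either choice embodies a judgment call about which failure mode (false alarms versus delayed braking) is worse.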
Tesla Crash Lawsuit And The Automatic Emergency Braking System
In the Tesla crash lawsuit that involved the death of the driver, Walter Huang, there is a claim that the “2017 Model X was designed, built, and introduced into the stream of commerce without having been equipped with an effective automatic emergency braking system.” Furthermore, the claim asserts that “Notwithstanding the fact the Tesla Model X vehicle was marketed and sold as a state-of-the-art automobile, the vehicle was without safe and effective automatic emergency braking safety features that were operable on the date of this collision.”
In my earlier posting about the lawsuit boom that I’ve predicted will emerge in the autonomous car realm, I emphasized that one area of close scrutiny in such cases will be what the automaker did or did not do in terms of the design, building, and fielding of their driverless car features.
This will undoubtedly force to the surface many of the engineering choices being made about self-driving cars, choices that are essentially unknown to the public right now, taking place in backrooms and development labs, and that will potentially be revealed once these lawsuits play out.
Having been an expert witness, I can also predict that the question of what was feasible at the time of an incident will become paramount, including what other automakers and tech firms were placing into operation at the time, and how the maker of the driverless car abided by or diverged from what was considered “standard” practice.
There’s another angle to keep in mind, especially about crashes involving Level 2 and Level 3 semi-autonomous cars, which involve co-sharing of the driving task with a human driver and are decidedly not true Level 5 fully autonomous cars: namely, what the human driver knew or didn’t know about the AEB, its capabilities and status, and whether the human and the AEB worked at odds with each other.
Some Level 2 and Level 3 semi-autonomous cars even allow the human driver to turn off the AEB, which might be sensible for a human driver who doesn’t want an automated system to suddenly be making braking decisions. On the other hand, if the human driver has disabled the feature, perhaps their life or the lives of others might have been saved if the AEB had been activated.
You can bet that lawsuits will be considering this aspect of the co-sharing relationship, plus whether the automaker sufficiently informed the human driver about the risks in turning off the AEB or in allowing the AEB to remain on. Some say it’s a darned-if-you-do, darned-if-you-don’t kind of predicament.
We are entering an era of semi-autonomous cars that portends a plethora of risks whenever there is a co-sharing relationship between a human driver and the automation, at times sadly leading to an indelicate dance and untoward, out-of-balance results.