The study, called Habits, Attitudes, and Expectations of Regular Users of Partial Driving Automation Systems, is based on survey responses from a pool of 604 drivers who own cars with one of the three most common L2 semi-automated driving systems, broken down like this: General Motors Super Cruise (200), Nissan/Infiniti ProPILOT Assist (202), and Tesla Autopilot (202).

What Level 2 Driving Automation Systems Actually Do

Now, before I get into the results of this study, let’s just make sure we’re all on the same page regarding exactly what each of these driver-assist semi-automated systems actually does. Fundamentally, they all control speed and braking, dynamically keeping a set distance from the car ahead like adaptive cruise control, and they do some steering, keeping the car centered in a lane and following road curves. Some systems, like Tesla’s Autopilot and GM’s Super Cruise, do a good deal more, allowing for lane changes and some degree of navigating on streets other than highways. What all these systems have in common is that they are very much not actually self-driving systems, and the person in the driver’s seat must remain alert and ready to take control at any moment, because any of these systems could disengage or make a poor decision at any time, with zero warning. The fact that the driver must remain on constant alert, even when the semi-automated system appears to be doing most of the work associated with the driving task, is at the heart of the problem this study reveals, and, frustratingly, is something that researchers who study automation have known about since at least 1948.
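To make that concrete, here’s roughly the whole job one of these systems is doing, over and over, many times a second. This is just a toy sketch to illustrate the idea; the variable names, gains, and dead-simple proportional logic are my own assumptions, not anything from an actual automaker’s codebase.

```python
# Toy sketch of what a Level 2 system does every control cycle: adjust speed to
# hold a following gap (adaptive cruise) and nudge the steering toward the lane
# center (lane centering). Purely illustrative; names, gains, and the simple
# proportional logic here are assumptions, not any automaker's real code.

def l2_control_step(ego_speed, set_speed, gap_to_lead, desired_gap,
                    lane_center_offset, heading_error):
    # Adaptive cruise: slow down if the gap to the lead car is too small,
    # otherwise track the driver's set speed.
    if gap_to_lead < desired_gap:
        target_speed = ego_speed - 0.5 * (desired_gap - gap_to_lead)
    else:
        target_speed = set_speed
    accel_command = 0.3 * (target_speed - ego_speed)

    # Lane centering: steer proportionally to how far off-center the car is
    # and how much its heading diverges from the lane's direction.
    steer_command = -0.08 * lane_center_offset - 0.5 * heading_error

    # Crucially, this is more or less the entire job. Nothing in here understands
    # stopped firetrucks, debris, or a lane that simply ends; that remains the
    # driver's responsibility, every second the system is engaged.
    return accel_command, steer_command
```

That’s it: hold a gap, hold the lane. Every weird edge case beyond that is still on you.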

The Problem

The problem even has a name, taken from N.H. Mackworth’s 1948 study called The Breakdown of Vigilance during Prolonged Visual Search: the Vigilance Problem. Essentially, the vigilance problem is not a technological one; it’s a human one. Given a system that operates almost independently but requires passive oversight by a human, that human will not be good at paying attention to the system, and as a result will likely not be ready to take action when needed. In the case of these semi-automated driving systems, the situation is actually worse, because the naming, marketing, and perception of these systems suggest more capabilities than they actually have, which makes people even less vigilant. Nissan’s ProPILOT Assist is a bit of an exception to this, as its marketing and implementation do a lot less to suggest capability, and the numbers from this IIHS study back this up, as we’ll see soon.

What Drivers Think They Need To Do When Using Automated Driver Assist Systems

In fact, let’s look at some numbers right now, starting with this really basic question: Are drivers using these systems comfortable with not paying attention to the road while the systems are engaged? Here’s what the study found: Holy crap, over half of Super Cruise users didn’t think they needed to “watch what was happening on the road”? It’s worth noting that Super Cruise is the only one of these three systems that requires no physical contact with the steering wheel when engaged, relying instead on a camera-based system to track the driver’s eyes. Tesla owners, at 42%, aren’t much better, but Nissan owners actually are, at only 12%. The study explains a bit:

What The IIHS Study Reveals Drivers Are Doing Instead Of Driving

So what the hell are these “non-driving-related-activities” that people are doing instead of paying attention to the road? Here’s what the study found:

Some of these seem like no big deal: eating and/or drinking is something we’ve all done while driving, same with talking to passengers, talking on a cellphone call via Bluetooth, or looking at scenery. Those are things people have been doing since before cars even shifted themselves, let alone drove themselves in any sort of way. Other activities on the chart are a lot more alarming: per the survey, 49% of Super Cruise, 44% of Autopilot, and 19% of ProPILOT Assist users say they’ve been texting, which isn’t great. But 19% of Tesla Autopilot users have apparently used a laptop or tablet? And 20% watch videos? 18% read a book? What the hell? And 10% of Autopilot users admitted to sleeping? Sleeping! The fuck is wrong with these people?

The chart also notes the difference in what people do with the systems on versus off, and comes to the very obvious conclusion that drivers are much more likely to do stupider things when their semi-automated systems are on: Interestingly, Super Cruise users were the most likely “to characterize phone and other peripheral device use, watching videos, grooming, reading, and having hands off the wheel as safe to do while using the system,” which I would suspect is related to the fact that Super Cruise does not require contact with the steering wheel. ProPILOT Assist users were the least likely.

Attention Reminders In Level 2 Systems

All these systems have attention reminders: if the system thinks you’re not paying adequate attention, as measured by either steering wheel torque sensors or eye-tracking cameras, it gives the driver a warning, and, if that warning isn’t heeded, the semi-automated driving assist system may be suspended or disabled. The study surveyed drivers’ “perceived annoyance” at these reminders:
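For what it’s worth, the escalation pattern these reminders follow is simple enough to sketch out. Here’s a generic version of the warn-then-lock-out logic described above; the stages, timing thresholds, and names are my own assumptions for illustration, not any automaker’s actual implementation.

```python
# Generic sketch of attention-reminder escalation: the system watches for driver
# attention (wheel torque or an eye-tracking camera), warns when attention lapses,
# and eventually locks the assist out. Stages, timings, and names are assumed
# purely for illustration.

from enum import Enum, auto

class ReminderState(Enum):
    MONITORING = auto()     # driver appears attentive
    VISUAL_ALERT = auto()   # first, gentle reminder
    AUDIBLE_ALERT = auto()  # escalated, harder-to-ignore reminder
    LOCKED_OUT = auto()     # assist suspended for the rest of the drive

def update_reminder_state(state, driver_attentive, seconds_inattentive):
    if driver_attentive:
        # Confirmed attention (hands on wheel, eyes on road) resets the
        # escalation, unless the driver has already been locked out.
        return state if state is ReminderState.LOCKED_OUT else ReminderState.MONITORING
    if seconds_inattentive > 15:
        return ReminderState.LOCKED_OUT
    if seconds_inattentive > 8:
        return ReminderState.AUDIBLE_ALERT
    if seconds_inattentive > 3:
        return ReminderState.VISUAL_ALERT
    return state
```

The “perceived annoyance” the study asked about is basically how drivers feel about bumping into those thresholds.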

Sometimes ignoring these warnings can cause the driver to be locked out of the driver assist system entirely if they don’t respond to the prompts quickly enough, and we get some fascinating details about the lockout scenarios Super Cruise users reported, maybe with more specificity than you’d expect: Look at that; IIHS has recorded two dropped-food incidents, one a taco-class incident and one a burger-class incident. Fascinating.

In the attention reminder context we again see differences between the systems, though for the most part, drivers seem to understand the need for these reminders. Nissan ProPILOT Assist users got the fewest reminders, and the study has some good insight into why (emphasis mine):

The Benefit of Cooperation

Unlike Autopilot or Super Cruise, Nissan’s system is designed to be cooperative, and doesn’t disconnect when the driver gives steering, throttle, or brake inputs. It’s not modal, like the other systems, and as such the driver is never in a position where they would “give up” control of the car to the machine. While this seems like a less advanced way of doing things, I think for semi-automated L2 systems it makes much more sense, because the real problems arise when the driver thinks the machine has more control than it actually does. If you never imply that the car is in complete control, with the driver only monitoring, and instead always keep the driver an active participant in the loop, then many of the vigilance problem issues simply won’t happen. The study notes this difference between ProPILOT Assist and the other two systems:

Perhaps counterintuitively, the more work the semi-automated system does, the less drivers seem to want to take control when needed, for fear of making the system disengage. While I don’t think this study has definitively proven this to be the case, it nevertheless is a pretty alarming concept: driver reluctance to take control so as not to cause the system to disengage has a lot of potential to be dangerous. Drivers should feel no hesitation about taking control if they feel uncomfortable in any way.
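If it helps, here’s a very simplified sketch of the difference between a modal system and a cooperative one. This is my own illustration of the concept, not Nissan’s (or anyone else’s) actual control code.

```python
# Simplified contrast between "modal" assist (the system disengages the moment the
# driver steers, handing the whole job back) and "cooperative" assist (driver input
# is blended with the system's correction, which stays active). Illustrative only.

def modal_assist(assist_steer, driver_steer, assist_engaged):
    # Modal behavior: any meaningful driver steering input kicks the system off,
    # and the driver suddenly has full control back, whether they expected it or not.
    if assist_engaged and abs(driver_steer) > 0.05:
        assist_engaged = False
    steer = assist_steer if assist_engaged else driver_steer
    return steer, assist_engaged

def cooperative_assist(assist_steer, driver_steer):
    # Cooperative behavior: the driver's input always counts, and the assist just
    # adds its correction on top, so there's never a moment of "giving up" control.
    return driver_steer + assist_steer
```

In the cooperative version, the driver never has a reason to hesitate before steering, because steering doesn’t switch anything off.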

The Unexpected Effects Of Camera-Based Driver Monitoring Systems

While driver monitoring systems are getting better, even the camera-based ones are not foolproof, and this study also noted Tesla drivers using a water bottle to fool the steering wheel torque sensors. Additionally, camera-based driver monitoring systems that don’t require holding the wheel have another disadvantage: they further the illusion that the system is more capable than it actually is, which permits dangerous behavior like not looking at the road (emphasis mine again):

The study also notes that we’re still in the early stages of adoption of semi-automated systems, and as a result these findings may be skewed toward specific “early adopter” behaviors: What I find alarming about the early-adopter idea is that early adopters, of anyone, should be the most aware of a system’s technical abilities and limitations, shouldn’t they, since they’re so damn interested? Or could this just mean that the eagerness of an early adopter is enough to make them victims of their own wishful thinking about the capabilities of these systems?

Perception And Marketing Is A Big Deal

Of course, a lot of the issue is with how these systems are marketed and portrayed; names like Autopilot and a heavy emphasis on hands-free driving do lead to a lot of overestimation of what these things can do:

There’s also some interesting data about the demographics of the groups in the study, none of which is all that surprising: GM’s system, showing up mostly on Cadillacs, skews old. Teslas skew younger and overwhelmingly male, and the Nissans are the group with demographics closest to the general population. Again, not shocking.

What is a bit shocking – well, actually, it’s not shocking, because what’s happening is exactly what we’ve known happens with systems that do most of the work and still somehow try to demand constant vigilance: It doesn’t work. Maybe alarming is a better word, because these systems are already on the roads, they’re continuing to be built and sold, and they are all, I think, fundamentally, inherently broken.

The Takeaway

This isn’t something that needs a technological fix, because it’s not a technological problem. It’s a human problem, and it doesn’t matter how many over-the-air updates you send out or what revision the latest Tesla Full Self-Driving Beta is up to, because if any of these systems still demands that a person be ready to take over without warning, they’re doomed. The better the system is at driving, the worse it will be at keeping people ready and alert, so there’s a bit of a paradox here, too. The more featured the system, the more advanced it is, the more it does, the less it demands from the driver, the worse it actually is. Because if it still may need you to intervene at any moment, the less engaged you are, the worse the result will be if anything goes wrong.

That’s why, of all of these, Nissan seems to be on the most workable track: keep the interaction between human and machine cooperative, with neither one solely in charge at any given time, and you can still get the safety and reduced-stress benefits of semi-automated assisted driving with fewer of the inherent dangers caused by vigilance-problem-related issues. Level 2 semi-automated systems are flawed, and I think the IIHS made my point for me, so thanks.

I know someone who is an engineer and fully understands the limitations of FSD Beta, but still straps a wrist weight to his Model 3’s steering wheel so he can go hands-free and stream shows on his commute to and from work every day. If someone with some level of understanding does this, you can only imagine what others are doing. Or just look at the survey results above.
People are getting injured and dying because of intentionally vague product descriptions intended to sell half-baked systems that are years from actually living up to their names. It’s dishonest, irresponsible, and dangerous.

Right now, most drivers encounter mild hazards on every single journey and navigate them without too much thought. Vehicles stopped in weird places, people driving erratically, inattentive pedestrians: we all learn the actions to take when we see an upcoming threat by doing them over and over and over again. How automatic is it to cover the brake and the horn when you see a car nudging just a little further into an intersection than you’d like? When someone a few hundred yards ahead of you on the freeway screeches to a halt, it’s basically automatic to get on your own brake and have a finger ready to poke the four-way button to warn the person behind you. 99.9% of the time, we practice these actions in low-stakes situations, where the worst thing that could happen might be a fender-bender. In other words, we tolerate a low ambient level of risk, but the payoff is that essentially THE SAME actions are also effective when something potentially way more dangerous is about to happen. We change lanes under ‘normal’ conditions frequently, so it’s stressful, but within the realm of possibility, to make a quick lane change to avoid colliding with the truck that pushed a little too far into a turn, or the granny dropping her walker into the crosswalk early. If you’ve never made a quick lane change in hairy traffic, or never had the guy in front of you drop his sandwich WITHOUT Autopilot, what are the odds of taking the right actions at the right time when it really matters? Soldiers, firefighters, basically anyone doing performance-critical work practices over and over and over again, just to make sure they’re ready when the chips are down.

The upshot is that the better the systems get, the more catastrophic the consequences when they do encounter something they can’t handle. So what do we do? Require road tests every time you renew your license? I don’t know, but I think it’s a big problem!
