Viral Video of Cop Struggling to Pull Over AV Raises Questions on Tech’s Readiness

A viral video shows a police officer struggling to pull over a driverless vehicle. Is autonomous vehicle technology ready for the mainstream?
Written by Andrew Kidd
Reviewed by Kathleen Flear
If a cop pulls over a car and there's no one driving, who gets a ticket?
One police officer in San Francisco recently wondered the same thing after pulling over a vehicle only to find that it was driving itself.
As USA Today reports, a viral video showed a San Francisco police officer’s confusion after pulling over an autonomous vehicle for driving without its headlights on.
The officer pulled over a self-driving car from the San Francisco-based company Cruise for driving at night with its headlights off. The car, however, apparently had a different understanding of the stop. When the officer approached the vehicle and then walked back to consult his partner, the car took off, crossing another intersection before the officer pulled it over again.
Cruise ended up not receiving a ticket for its driverless vehicle, with the company attributing the headlight issue to “human error.”

Opening Pandora’s box

Cruise has previously published a video giving law enforcement and other first responders a walkthrough on how to safely interact with a Cruise autonomous vehicle. The cars’ audio sensors are programmed to react to police and other emergency sirens, causing them to yield to first responders.
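Cruise hasn’t published the code behind that behavior, but as a purely hypothetical sketch (every name and threshold below is invented for illustration), the yield-to-sirens logic can be pictured as a small state machine fed by an audio classifier’s confidence score:

```python
from enum import Enum, auto

class DriveState(Enum):
    NORMAL = auto()
    YIELDING = auto()  # slowing and edging toward the curb

SIREN_ON = 0.9   # invented confidence threshold to start yielding
SIREN_OFF = 0.2  # invented threshold to resume normal driving

def update_state(state: DriveState, siren_confidence: float) -> DriveState:
    """Yield when the classifier is confident a siren is nearby;
    resume normal driving only once the siren has clearly faded."""
    if siren_confidence >= SIREN_ON:
        return DriveState.YIELDING
    if state is DriveState.YIELDING and siren_confidence < SIREN_OFF:
        return DriveState.NORMAL
    return state

# Example: a police car approaches, then passes.
state = DriveState.NORMAL
for confidence in [0.1, 0.5, 0.95, 0.97, 0.4, 0.1]:
    state = update_state(state, confidence)
    print(f"siren confidence {confidence:.2f} -> {state.name}")
```

The gap between SIREN_ON and SIREN_OFF is a deliberate hysteresis band, so the car doesn’t flip-flop between yielding and driving as a siren’s volume wavers.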
But as the bystander video shows, even the best-laid plans on paper can be foiled by reality. The testing of autonomous vehicles has raised quite a few questions over the past few years, but primarily: can we trust autonomous vehicles to react appropriately, and safely, in real-world situations?

What is the trolley problem?

One thought experiment posed by ethicists since the early 20th century to demonstrate the moral ambiguity of no-win situations is the Trolley Problem. In this (completely hypothetical) experiment, an unstoppable trolley is sent down a track that forks in two up ahead. If it maintains its current course, it will run down a person trapped on the track ahead.
As the bystander in this situation, you are standing next to a switch that could redirect the trolley down the other track. The twist? There’s another person trapped on that track. The ethical dilemma is this: do you throw the switch to keep the first person from being run down, killing the second, or do you stand by and do nothing, ensuring the first person’s death?
Some versions of this thought experiment replace one of the strangers with a child or a loved one. Others place five people on the main track, giving you the utilitarian choice of diverting the trolley to save five lives at the expense of one.

The trolley problem and AI

AI ethicists have likewise debated how well this thought experiment applies to autonomous vehicles, and how such vehicles weigh the value of human life when faced with a similar no-win situation: the unavoidable crash.
Using the Trolley Problem as a setup, imagine an autonomous vehicle driving along a two-lane road with sidewalks on either side when another vehicle, driven by a human, pulls out in front of it. The AV is unable to come to a complete stop in time to avoid striking the vehicle. Does its onboard “brain” allow it to swerve onto a sidewalk, putting any pedestrians there at risk, or does it brake and accept the collision?
As Heather M. Roff of the Brookings Institution writes, the computational decision-making of artificial intelligence exists in a sort of gray area of “overlapping probability distributions,” rather than the often black-and-white thinking of humans.
Autonomous vehicles, even with all their sensors, won’t have complete knowledge of their environment the way, say, a chess supercomputer has complete knowledge of the board it plays on.
Rather than waiting for their “opponent” to make a move and then calculating all possible moves that could be made in response in a closed environment, autonomous vehicles will need to be capable of learning from previous inputs to determine the best possible course of action in a chaotic world.
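To make Roff’s point concrete, here is a minimal sketch, with entirely made-up object classes, probabilities, and costs, of what decision-making over overlapping probability distributions might look like; no real AV stack is this simple:

```python
# Perception can't say what the object ahead *is*, only how likely
# each possibility seems. These numbers are invented for illustration.
object_belief = {"pedestrian": 0.62, "cyclist": 0.28, "debris": 0.10}

# Assumed cost of each maneuver given what the object really turns out
# to be (higher = worse outcome). Also invented.
action_costs = {
    "hard_brake":  {"pedestrian": 1.0, "cyclist": 1.0, "debris": 1.0},
    "swerve_left": {"pedestrian": 0.2, "cyclist": 5.0, "debris": 0.5},
    "maintain":    {"pedestrian": 9.0, "cyclist": 8.0, "debris": 0.1},
}

def expected_cost(action: str) -> float:
    """Weight each outcome's cost by the probability of that outcome."""
    return sum(p * action_costs[action][obj] for obj, p in object_belief.items())

# Pick the maneuver with the lowest expected cost under uncertainty.
for action in sorted(action_costs, key=expected_cost):
    print(f"{action}: expected cost {expected_cost(action):.2f}")
```

Notice that the “right” answer changes entirely with the numbers you plug in, which is exactly where trolley-style value judgments sneak into the math.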
In practice, the greatest demonstrated danger of autonomous vehicles is their failure to detect and avoid pedestrians. Such was the case in the 2018 death of Elaine Herzberg, who became the first recorded pedestrian fatality at the “hands” of a driverless vehicle after being struck by an Uber autonomous test car in Arizona.
Granted, the backup driver behind the wheel was charged with negligent homicide, which could indicate the crash was avoidable with an attentive organic brain manning the vehicle.

Young people don’t seem to mind these potential challenges

While the moral dilemmas of trusting human life to machines have sparked debate among ethicists and not-always-ethical automakers, young Americans still seem keen on getting behind the (not?) wheel of an autonomous vehicle.
According to survey data published by Jerry, 30% of Generation Z drivers expect to ride in an autonomous vehicle within the next 5 years and are confident in the near-term success of autonomous vehicle technology, while older generations are less confident.
Where do you stand on the debate around self-driving cars?