Florida drivers may wait longer than predicted for autonomous vehicles to become common on the roads if one computer science professor's analysis turns out to be correct. According to the professor, the major error made by those researching and building autonomous vehicles is that the vehicles are being taught to imitate human driving behavior, which he says inherently makes them less safe.
Aviral Shrivastava is an associate professor of computer science at Arizona State University's Ira A. Fulton Schools of Engineering. Arizona is where a self-driving Uber vehicle killed a pedestrian in March. Video shows that the road was dark and the pedestrian was not in a crosswalk. According to Shrivastava, the problem in that accident was that the car behaved like a human driver, proceeding along the road despite the darkness. An autonomous car, he says, should only travel at a speed that allows it to stop within its range of vision. In other words, the car should have been driving as though there were an obstacle just beyond the area its sensors could detect.
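The professor's rule can be made concrete with a little physics: the vehicle's stopping distance (reaction distance plus braking distance) must not exceed its reliable sensing range. The sketch below solves that constraint for the maximum safe speed. The function name and the numbers (a 7 m/s² deceleration, a 0.5-second system latency, a 50-meter sensing range) are illustrative assumptions, not figures from the article or from Shrivastava's research.

```python
import math

def max_safe_speed(sensor_range_m: float,
                   decel_mps2: float = 7.0,
                   latency_s: float = 0.5) -> float:
    """Largest speed (m/s) at which the car can stop within its sensed range.

    Constraint: sensor_range >= v * latency + v**2 / (2 * decel)
    Solving this quadratic for v and taking the positive root gives:
    """
    a, t, d = decel_mps2, latency_s, sensor_range_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# Illustrative scenario: roughly 50 m of reliable sensing on a dark road
v = max_safe_speed(50.0)
print(f"max safe speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

Under these assumed values the safe speed comes out around 23 m/s (roughly 83 km/h); a shorter sensing range, as on a dark road, forces a proportionally lower speed rather than the human habit of maintaining pace.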
His own research focuses on similar types of autonomous systems and on guaranteeing how a system will behave. For example, he says his research is looking into how to build a car whose brakes can be applied within a millisecond of detecting an obstacle. He also points out that a single accident involving an autonomous vehicle could destroy the industry.
Even fallible autonomous vehicles do not engage in the dangerous behaviors many people do while driving, such as driving while drowsy or drunk, or distracted behaviors such as texting behind the wheel. When people cause motor vehicle accidents by engaging in these types of behaviors, they may be responsible for paying the medical expenses and other costs incurred by the drivers and passengers who are injured. If the insurance company of the driver who caused the accident will not pay these costs, the injured person may file a lawsuit.