Our series on self-driving cars continues today. Our last two posts covered the dangers presented by autonomous vehicles and the issues that Uber's self-driving car fleet has encountered so far. Today, we want to talk about Waymo, formerly Google's self-driving car division. One of Waymo's self-driving vehicles was recently involved in an accident for a reason similar to the problem outlined in our last post.
In many accidents involving self-driving cars, a human driver rear-ends the autonomous vehicle. But some crashes happen under different circumstances. In the Waymo crash, the human driver behind the wheel of the self-driving car (currently required by law) switched off self-driving mode in an effort to avoid a collision, but an accident still occurred. It was minor, and no injuries were reported.
This is nearly identical to the Uber crash, and it again raises questions about liability. An interesting note in our source article concerned the engineers who work on Waymo's self-driving technology: they have to run thousands of simulations before the system can properly predict and react to the millions of scenarios that can occur out on the road.
This prompts a further question: if engineers program specific reactions into these autonomous vehicles, and an accident occurs as a result of those programmed decisions, could the engineers be held responsible for the wreck?