Self-driving cars rely on internet-connected computing systems to navigate from place to place. These sophisticated systems allow the car to, among other things, respond immediately to map updates, but being internet-connected also makes these vehicles vulnerable to cyberattack. To protect self-driving systems, engineers at Google Brain – Google’s artificial intelligence division – recently built artificial neural networks that can learn to protect their communications from attackers.
A Tesla Model S at a Supercharger station
In a previous post (Car Hacking Development: Hackers Take Remote Control of a Tesla), I wrote about how security contractors at Keen Security Lab uncovered vulnerabilities in the Tesla Model S that allowed a team of hackers to take control of the car’s autopilot mode from up to twelve miles away. Vulnerabilities like this put the car’s passengers at risk of abduction or accident. Although Tesla used Keen Security Lab’s report to close loopholes in its code and to strengthen its autopilot protocols, the need for strong, innovative security solutions for self-driving cars is clear.
In a report published this October, Google Brain computer scientists Martin Abadi and David Andersen presented an innovative method of protecting communications: adversarial neural cryptography, a technique that could be applied directly to self-driving car security.
A representation of machine-learning concepts, such as a decision tree
Artificial neural networks, like the ones Abadi and Andersen propose, are networks of computing units connected in a way that loosely resembles the neural makeup of a brain. As information passes through the network, each unit enables or inhibits the flow of information to certain areas of the network. Neural networks can solve problems that traditional computer programs cannot: rather than following hand-written rules, the network learns, adjusting the strength of its connections as it is trained on examples.
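As a toy illustration (my own sketch, not code from the Google Brain paper), a single computing unit weighs its incoming signals and either passes a strong signal onward or suppresses it, depending on the connection weights it has learned:

```python
import math

def neuron(inputs, weights, bias):
    """One computing unit: weigh the incoming signals, sum them,
    then squash the total through a sigmoid so the output lies
    between 0 (inhibited) and 1 (fully enabled)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Strongly positive weights let the signal flow onward...
print(neuron([1.0, 1.0], [4.0, 4.0], -2.0))   # close to 1.0
# ...while negative weights inhibit it.
print(neuron([1.0, 1.0], [-4.0, -4.0], 2.0))  # close to 0.0
```

Training a network means nudging those weights and biases, across thousands of units, until the whole system produces useful outputs.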
In their report, Abadi and Andersen discussed the ways artificial neural networks can learn how to encrypt data in a larger system with multiple agents. (Self-driving cars use a multi-agent system to navigate and, eventually, to avoid each other on the road.)
A neural network communicating with a second neural network can, they discovered, learn to encrypt its communications against a third-party network that is trying to eavesdrop. This could protect the integrity of a self-driving car’s location, destination, and route from hackers.
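To make the three roles concrete, here is a heavily simplified sketch. In the actual paper, Alice, Bob, and Eve are neural networks trained against one another, and the encryption scheme is learned rather than specified; in this toy version a fixed XOR with a shared key stands in for whatever transformation the networks would learn, purely to show the setup and its objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def alice(plaintext, key):
    """Alice maps (plaintext, key) -> ciphertext. In the paper this is a
    learned network; a fixed XOR stands in for it here."""
    return np.bitwise_xor(plaintext, key)

def bob(ciphertext, key):
    """Bob shares Alice's key, so he can invert her transformation."""
    return np.bitwise_xor(ciphertext, key)

def eve(ciphertext):
    """Eve sees only the ciphertext. If the scheme is good, her best
    learned strategy is no better than random guessing."""
    return rng.integers(0, 2, size=ciphertext.shape)

plaintext = rng.integers(0, 2, size=16)  # a 16-bit message
key = rng.integers(0, 2, size=16)        # secret shared by Alice and Bob

ciphertext = alice(plaintext, key)
bob_errors = int(np.sum(bob(ciphertext, key) != plaintext))
eve_errors = int(np.sum(eve(ciphertext) != plaintext))

print(bob_errors)  # 0 bits wrong: Bob recovers the message exactly
print(eve_errors)  # around 8 of 16 bits wrong: chance-level guessing
```

Adversarial training pushes in both directions at once: Alice and Bob are rewarded as Bob’s error falls, while Alice is also rewarded as Eve’s error rises toward chance.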
Advancements in artificial neural networking could also benefit self-driving car technology in other ways. For example, self-driving cars should eventually be able to sync with one another to coordinate merging onto a busy freeway or to redirect traffic patterns to minimize congestion. Right now, almost all of the other cars on the road aren’t self-driving, so they behave in ways self-driving cars can’t predict. Artificial neural networks may let self-driving cars predict and react to being suddenly cut off by another driver.
Artificial neural networks could also let a self-driving car decide whom, and in which situations, to allow to override its controls. According to a piece in the MIT Technology Review, for example, self-driving cars still can’t tell a police officer redirecting traffic from any other pedestrian on the road. Although a comical photo of a Google self-driving car prototype getting pulled over by police for driving too slowly circulated on the internet in 2015, engineers are still testing ways to have self-driving cars respond appropriately to police and emergency vehicles.
Eventually, artificial intelligence like artificial neural networks could also let self-driving cars make their own moral decisions on the road.
Car swerving dangerously
Cars could, for example, eventually decide for themselves whether to swerve dangerously to avoid an animal on the road. I’ve written about the ethics of automation several times (The Ethics of Automation: When Self-Driving Cars Decide Who Lives and Who Dies), but always with the assumption that humans would decide what kind of moral creatures self-driving cars would be. Artificial neural networks advance, very slowly, the possibility of the car making that decision for itself.
Please share, tweet and add to this story. Thanks