Car Hacking Development: Hackers Take Remote Control of a Tesla
A few weeks ago, the independent security research team Keen Security Lab managed to hack into the autopilot mode of a Tesla Model S and take control of the vehicle from a computer as far as 12 miles away. Researchers at China-based Keen Security Lab spent several months uncovering multiple vulnerabilities that put the Tesla Model S at risk of car hacking.
In this video, Keen Security Lab researchers explain how they compromised the CAN bus – which controls many of the vehicle's systems – and took control of the Model S's brakes. The researcher-hackers also unlocked the car's doors remotely and opened its trunk. Less critically, they turned the windshield wipers on and off and adjusted the seat settings. They did all of this from afar, and while the vehicle was in motion.
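To see why controlling the CAN bus means controlling the car, it helps to know how small and simple CAN messages are. The sketch below packs a classic CAN frame into the 16-byte wire layout Linux's SocketCAN uses; the arbitration ID and payload are purely illustrative, not Tesla's actual message IDs.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame into SocketCAN's 16-byte layout:
    a 32-bit arbitration ID, a 1-byte data length code,
    3 padding bytes, and an 8-byte payload field."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical frame: 0x123 is an illustrative ID, not a real Tesla one.
frame = pack_can_frame(0x123, b"\x01\x00")
print(len(frame))  # every classic SocketCAN frame is 16 bytes
```

Because a CAN node simply broadcasts frames like this one with no built-in authentication, any component that an attacker compromises can inject commands that the brakes, locks, or wipers will treat as legitimate.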
It’s what those resistant to self-driving cars fear the most: a vehicle that’s out of the passengers’ control and, instead, in the control of someone with malicious intentions. While designers, engineers, investors, and technology experts all agree that automated cars are, overall, safer than those driven by humans, successful car hackings, such as this one, cause the public to raise a collective eyebrow.
Whether they’re used overtly as weapons, as in the Bastille Day truck attack in Nice, France, or accidentally so, cars can kill. In the United States, the number of fatal crashes in proportion to the number of miles that drivers travel annually has decreased steadily since the early 1900s, but road traffic accidents still claim around 30,000 lives each year.
Safety is a central concern in car development in general, but especially in the development of self-driving cars. Automated vehicles are an unknown quantity, and what is unknown frightens many. As rare as actual incidents of car hacking are, they are nevertheless a public relations headache for Tesla and other developers.
It is standard practice in the technology industry for security firms like Keen Security Lab to disclose any security vulnerabilities they find to the product’s developer. The developer can then take the proper measures to fix faulty code. Keeping with this practice, Keen Security Lab sent Tesla a comprehensive list of the Model S’s vulnerabilities.
In the past, security firms that disclose product vulnerabilities have sometimes been met with resistance from the products’ developers. Security firm IOActive, for example, claims SimpliSafe ignored its attempts to make contact regarding a vulnerability that made it possible for hackers to replay security-system PINs to disarm any of the 300,000 SimpliSafe systems that protect American homes.
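The class of flaw described here, a static code that can be captured once and replayed forever, is easy to illustrate. The sketch below is a hypothetical model, not SimpliSafe's actual protocol: a fixed-PIN check accepts any replayed capture, while an HMAC-based challenge–response ties each attempt to a fresh nonce, so a recorded answer is useless later.

```python
import hashlib
import hmac
import secrets

SECRET = b"shared-device-key"  # hypothetical pre-shared key

def static_check(sent_pin: bytes, real_pin: bytes) -> bool:
    # A fixed PIN on the air: capture it once, replay it forever.
    return hmac.compare_digest(sent_pin, real_pin)

def respond(challenge: bytes) -> bytes:
    # Keyfob side: answer a fresh nonce with an HMAC over it.
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def challenge_check(challenge: bytes, response: bytes) -> bool:
    # Base-station side: only the response to *this* nonce is accepted.
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# The replay problem: a captured static PIN still works later.
captured = b"1234"
assert static_check(captured, b"1234")

# With challenge-response, an old recording fails against a new nonce.
old_nonce = secrets.token_bytes(16)
old_response = respond(old_nonce)
new_nonce = secrets.token_bytes(16)
assert challenge_check(old_nonce, old_response)
assert not challenge_check(new_nonce, old_response)
```

The design point is simply that freshness defeats replay: because the base station never issues the same nonce twice, eavesdropping on one exchange gives an attacker nothing they can reuse.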
Self-driving cars are, of course, far higher stakes than home security systems. According to Keen Security Lab, Tesla responded quickly to its vulnerability report and took immediate action to fix the Model S’s faulty code. For companies that take the security of their product seriously, vulnerability reports are, if not celebrated, at least seen as helpful tools with which to build safer products.
Despite the rarity of car hackings, many believe that all automated cars should have some way to manually override the system in case of an emergency. Manual override, which the Tesla Model S does have, could protect car passengers from malicious car hijackings, for example. Manual override has already become policy in some places: the UK Code of Practice for self-driving cars, for example, requires that any car tested on public roads must have a manual override option.
Please add to this conversation or share the article with friends.