Vehicle Cybersecurity: Mitigating the Threat of Connectedness

Earlier this year, authors David Morris, Garikayi Madzudzo and Alexeis Garcia-Perez released a report on threats to vehicle cybersecurity, noting that the systems in automobiles are as vulnerable as any other connected device.

In large part, that is true. Cars are becoming more and more connected, and with that connectedness comes the threat of attacks on vehicle cybersecurity. According to Phys.org, “Additional feature and function in a connected car brings with it digital security risks and vulnerabilities that could expose critical vehicle systems to those who might exploit them for illegal activity.”

According to a new report, “Securing Connected and Autonomous Cars for a Smarter World”: “At present cars have approximately 100 ECUs and more than 100 million lines of code, which provides a massive attack surface for hackers. Hackers can exploit and gain access to any vulnerable peripheral ECU, such as Bluetooth, to take control of critical core ECUs that control the brakes or engine.”
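The attack path the report describes, pivoting from a vulnerable peripheral ECU such as Bluetooth to critical ECUs over the in-vehicle network, is easier to picture with a small sketch. The snippet below shows one simplified countermeasure: a domain gateway that only forwards allowlisted message IDs onto the critical bus. The segment names, message IDs and the forward_to_critical_bus helper are illustrative assumptions, not details taken from the report.

```python
# Illustrative sketch only: a simplified CAN-style gateway that enforces an
# allowlist of message IDs per bus segment. IDs and segment names are
# hypothetical, not drawn from the report or any real vehicle.

from dataclasses import dataclass


@dataclass
class CanFrame:
    arbitration_id: int   # 11-bit CAN identifier
    data: bytes           # up to 8 bytes of payload


# Which message IDs each segment may send onto the critical bus. An
# infotainment/Bluetooth ECU may publish media status but never brake or
# engine commands.
ALLOWED_IDS = {
    "infotainment": {0x3A0, 0x3A1},         # e.g., media/display status
    "powertrain":   {0x0C0, 0x0C1, 0x1D0},  # e.g., engine and brake control
}


def forward_to_critical_bus(segment: str, frame: CanFrame) -> bool:
    """Return True if the frame may cross onto the critical bus."""
    allowed = ALLOWED_IDS.get(segment, set())
    if frame.arbitration_id not in allowed:
        # A compromised peripheral ECU trying to inject a brake or engine
        # command is simply dropped at the gateway.
        print(f"blocked 0x{frame.arbitration_id:03X} from {segment}")
        return False
    return True


if __name__ == "__main__":
    # A frame the infotainment segment legitimately sends.
    print(forward_to_critical_bus("infotainment", CanFrame(0x3A0, b"\x01")))
    # A spoofed brake command injected via a compromised Bluetooth ECU.
    print(forward_to_critical_bus("infotainment", CanFrame(0x0C1, b"\xFF")))
```

In a real vehicle this filtering is typically done in hardware or firmware on a central gateway ECU, but the principle is the same: a peripheral compromise should not translate into authority over safety-critical controllers.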

The concern with connected cars is the potential scale of each attack. Until now, the attacks reported on connected cars have been done by white hat hackers. 

Evolving Threats in Connected Cars

Threats to connected cars have not evolved so much as the potential attack surface has expanded. As cars become more connected, the threat of not only an attacker discovering and then exploiting a vulnerability but also of the car itself posing risks becomes a greater concern, particularly in light of the fatal crash involving one of Uber’s autonomous test cars.

The reported hacks on connected cars have been the work of ethical hackers, but “those attacks are becoming more and more sophisticated with an agenda behind them,” said David Barzilai, chairman and co-founder of Karamba Security.

One of the best-known attacks is the Jeep Cherokee hack by white hat researchers, which resulted from a single vulnerability exploited in the vehicle’s entertainment system.

Because 1.4 million vehicles all used the same entertainment system, Chrysler was forced to recall all of those vehicles. And while there is no evidence that attackers are targeting specific individuals, it is possible for a rogue state to go after individual targets—think political figures, world leaders and others.

One interesting and likely unexpected twist to the business of hacking connected cars, Barzilai said, is that those attacks may not serve traditional agendas for traditional hackers.

A recent vulnerability in a Tesla car was disclosed by researchers at China’s Keen Security Labs, which is owned by tech giant Tencent. The vulnerability was fixed, then the Tesla car was hacked again by Keen Security Labs. “A few months after, Tesla is selling 5 percent of its shares to its fourth largest shareholder, Tencent,” Barzilai said.

Keen Security Labs then reported 13 vulnerabilities to BMW, the product of more than a year spent probing the automaker’s connected cars. “Why would Keen Labs put a team with BMW for over a year to find vulnerabilities in their connected cars? The more connected a car is, the more exploitable it is, and the more lucrative targets they become—even for ethical hackers,” Barzilai added.

What’s different about connected cars is that a single vulnerability puts an entire fleet of cars at risk, which is why Barzilai said, “Developers should do things better. They need to look more carefully for hidden security vulnerabilities in their code.”

Is Secure Coding Enough?

The industry can’t possibly fight developers’ nature; it is human nature to make mistakes, and bugs inevitably exist somewhere in the code. “What makes a car more secure is that the vehicle is not user-changeable,” Barzilai said.

Look, for example, at the programming habits of Microsoft developers, who years ago were not so security-focused. Microsoft then required its developers to code more carefully, and it now regularly reviews code with tools that highlight bugs for remediation before products go to production.

“Yet, even with such stern measures and best practices, Microsoft releases a monthly Patch Tuesday. It says something about the inevitability of risk that such a mighty and excellent organization in software development keeps delivering bug fixes,” Barzilai said.

How to Build Better Systems

Car manufacturers, for the most part, are concerned about risks to the mass fleet, not to one car, said David Kennedy, CEO at TrustedSec. “It’s the technology inside the car that needs to be protected, which means having a team come in and tear apart the system to find the vulnerabilities so that the area of entry for attackers is isolated.”

More often than not, the entertainment system is the main entry point for attacks on vehicle systems. Most of the software in cars, though, is not written by the automakers themselves; aside from Tesla, car companies are manufacturers first, not software developers. As a result, the systems in their cars come from multiple suppliers and development teams.

That’s why the industry has a long way to go. Typically, the process is to test reactively after the car has been designed. “The problem then can’t be fixed until the next generation. It’s about flipping the model, and while the industry is so far behind, it also recognizes the importance of changing. But, changing a process is a challenging problem,” Kennedy said.

Another approach to ensuring vehicle cybersecurity is to proactively develop solutions by looking at how vehicles differ from servers. “If you unbox data center servers, after one week they are different because system admins deploy different software and each is deployed differently. But, vehicles from the same production line must run according to their factory standards,” Barzilai said.

The idea is that the car maker is the only entity able to make the changes. “It is possible to harden the controllers that may be hacked, and those that have the external interface can be hardened according to factory settings. If an attacker is exploiting a bug to change the system’s behavior, like a change in runtime not delivered by the vendor, that change is blocked,” he continued.

Once the exploitation attempt is identified, it is blocked because it is recognized as an externally driven attempt to deviate from factory settings. In this way, the bug can’t be exploited; and if a bug can’t be exploited, the attack is prevented, negating the need to disclose any vulnerability.
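As a rough illustration of that idea, the sketch below assumes that “hardening according to factory settings” boils down to comparing a controller’s code against a manifest of digests fixed at production time and flagging anything that deviates. The function names and file layout are hypothetical; this is not Karamba’s actual implementation.

```python
# Illustrative sketch only: enforcing a "factory settings" allowlist for an
# ECU's executable code. A runtime change not delivered by the vendor shows
# up as a digest mismatch and would be blocked.

import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Digest of a component's on-disk code."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_factory_manifest(paths):
    """At production time, record a digest for every approved binary."""
    return {str(p): sha256_of(p) for p in paths}


def find_deviations(manifest):
    """Return components whose code no longer matches factory settings."""
    deviations = []
    for path, expected in manifest.items():
        p = Path(path)
        if not p.exists() or sha256_of(p) != expected:
            deviations.append(path)  # change not delivered by the vendor
    return deviations


if __name__ == "__main__":
    # Simulate a controller image with one "binary".
    with tempfile.TemporaryDirectory() as d:
        fw = Path(d) / "gateway.bin"
        fw.write_bytes(b"factory firmware v1.0")
        manifest = build_factory_manifest([fw])

        assert find_deviations(manifest) == []        # matches factory settings

        fw.write_bytes(b"tampered firmware")          # attacker-modified code
        print("blocked:", find_deviations(manifest))  # deviation detected
```

In a production controller the check would run in, or be enforced by, the ECU itself and would block the deviating code from executing rather than merely reporting it, which is the behavior Barzilai describes.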

Kacy Zurkus


Prior to joining RSA Conference as a Content Strategist, Kacy Zurkus was a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibitions' security portfolio. Zurkus was a regular contributor to Dark Reading, Infosecurity Magazine, Security Boulevard and IBM's Security Intelligence. She has also contributed to several industry publications, including CSO Online, The Parallax and K12 Tech Decisions. During her time as a journalist, she covered a variety of security and risk topics and spoke on a range of cybersecurity topics at conferences and universities, including SecureWorld Denver, NICE K12 Cybersecurity in Education and the University of Southern California. Zurkus has nearly 20 years of experience as a high school English teacher and holds an MFA in Creative Writing from Lesley University (2011), a Master's in Education from the University of Massachusetts (1999) and a BA in English from Regis College (1996).
