
Poulin: The Economics of IoT Fear and Uncertainty

September 18, 2017 | Embedded Systems and Internet of Things
By Chris Poulin, IANS Faculty


You’re driving your new car, a 2022 Electron Motors Trajectory class SUV – or rather it’s driving you. You’re answering email, texting your spouse and watching videos, all while commuting in your zero-emission, rolling cocoon. You weren’t sold on the concept of fully autonomous vehicles at first, but after a few years of road testing, you’re on board.

Suddenly, the car jerks to the right, disengaging you from your immersion in Episode 2 of the Lost reboot. No big deal, you think. It does that sometimes. Another car may have merged into your lane too quickly and cut yours off (those ridiculous throwbacks who insist on driving themselves; you can’t figure out if it’s quaint or technological treason). Or, sometimes the self-driving algorithm misinterprets a dip in the road as a curve – maybe you should have bought a car from a maker that embraced LIDAR instead of just video for road analysis. Still, you can’t help but keep in the back of your mind the possibility of someone hacking your vehicle.

Information security (and now device security) is an underappreciated field. In the perfect case, nothing happens. It’s anticlimactic and difficult to justify the cost of building security in, bolting security on, and implementing security controls for the operating environment. I believe that’s one reason we resort to fear and uncertainty (I’ve never understood where doubt fits into the FUD equation, but that’s for another day).

A recent Gartner survey showed that 55 percent of people won’t ride in a fully autonomous vehicle, citing fears of both technology failures and cybersecurity. Those fears are due, in part, to security researchers seeking fame in outlets such as the Today Show and Wired. In contrast, the research that laid the foundation for the 2015 proof-of-concept hack has remained largely within academia and the automotive community.

But beyond the myopia of the single-minded security community, there’s often a benefit to technology. In the case of self-driving cars, it’s the potential to save lives. A National Highway Traffic Safety Administration (NHTSA) report concluded that 94 percent of road accidents are caused by human error, mainly inattention and distractions, speed and illegal maneuvers, and alcohol and lack of sleep. Technology is immune to the foibles of humans, and even if a car does someday feel the need to tweet about how much it loves the latest Katy Perry song, it is capable of true multitasking; humans just believe we are.

Driverless technology is purported to drastically lower road deaths, saving almost 33,000 lives annually in the U.S. alone. In addition, autonomous vehicles alleviate congestion, save fuel and reduce parking needs, at an estimated cost savings of over $121 billion per year. Add that to the $400 billion annual cost of accidents, extrapolate those savings worldwide, and the benefits could ultimately exceed $1 trillion annually.
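As a rough back-of-the-envelope check, here is a minimal Python sketch of that arithmetic. Only the $121 billion and $400 billion figures come from the estimates above; the worldwide scaling factor is an assumption for illustration, not a sourced number.

```python
# Back-of-the-envelope check of the cost figures cited above.
us_congestion_fuel_parking_savings = 121e9  # estimated annual U.S. savings (from the article)
us_annual_accident_cost = 400e9             # annual U.S. cost of accidents (from the article)

us_total = us_congestion_fuel_parking_savings + us_annual_accident_cost
print(f"Estimated annual U.S. benefit: ${us_total / 1e9:.0f}B")  # ~$521B

# Hypothetical worldwide scaling factor (assumption for illustration only):
# even a modest ~2x multiplier pushes the total past $1 trillion per year.
worldwide_multiplier = 2.0
print(f"Illustrative worldwide benefit: ${us_total * worldwide_multiplier / 1e12:.1f}T")
```

The point of the sketch is simply that the trillion-dollar figure is a multiple of the U.S. estimate, not a separately measured number.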

It’s important to note, though, that these figures are predicated on total adoption of autonomous vehicles – which is unlikely to come to pass for at least a decade, if ever – as well as infrastructure upgrades and the evolution of self-driving technology to achieve near perfection in decision making.

Automotive isn’t the only industry affected by fear and uncertainty, however. Recently, a pacemaker was recalled due to vulnerabilities in the communications between caregivers and the equipment used for home monitoring. The remedy was a firmware update, which requires a visit to a physician. The risks of an update include loss of device settings (requiring reprogramming), loss of diagnostic data, reversion to the original firmware due to a failed update and – the worst case – a complete loss of device functionality. This last risk has a 0.003 percent chance of occurring, which works out to about 14 patients out of an estimated 465,000 who have the vulnerable pacemaker.
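That estimate follows directly from the recall figures; a minimal Python sketch of the arithmetic, using only the values cited above:

```python
# Expected worst-case failures from the pacemaker firmware update (figures from the article).
affected_patients = 465_000   # estimated patients with the vulnerable pacemaker
failure_rate = 0.003 / 100    # 0.003 percent chance of complete loss of device functionality

expected_failures = affected_patients * failure_rate
print(f"Expected complete-failure cases: ~{expected_failures:.0f} patients")  # ~14
```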

Weighing the Risks and Rewards

The question then becomes: how many are at risk for exploitation by an attacker with a motive to harm the patient? FUD advocates will claim that even though there have been no attacks on these devices to date, that is not a predictor of what the future holds. A more measured response may be to encourage patients with a high public profile (i.e., those who are more likely to be targeted, such as political figures or business leaders) to get the update, and for others to weigh the likelihood of exploitation.

I’m not suggesting that patients shouldn’t get the update – in a perfect world, everyone should receive the update and there would be zero chance of failure – but the public should be armed to make an informed decision, not simply messaged the Chicken Little scenario.

The other side of the coin is that if Valasek and Miller hadn’t publicized their findings, automakers may not have felt the urgency to address vehicle cybersecurity. Many of the issues were discovered by researchers at the University of California at San Diego and the University of Washington back in 2010 and 2011, and socialized with the automotive community.

Automakers may have been taking steps to address the vulnerabilities, but they certainly weren’t talking about it publicly. It takes years to retrofit or redesign vehicles, so by 2020, for example, some vehicles may have closed off the majority of attack vectors, and the public would be none the wiser (perhaps increasing the overall acceptance of autonomous vehicles).

In the end, we have a responsibility in the security community to be wise about what we disclose and how we disclose it. Self-promotion at the expense of adopting technology that has the potential to save hundreds of thousands of lives is a poor trade-off. There’s a cost/benefit crossover point that needs to be evaluated.

And if we’re worried about the remote potential for exploitation, then it’s up to us to be early adopters and work with the manufacturers to make the products as secure as possible for the general public. Fear as a tool needs to be left to the autocrats.

***

Chris Poulin is Director of IoT Security and Threat Intel for Booz Allen Hamilton's Strategic Initiatives Group, where he is responsible for building countermeasures for threats to the Internet of Things. He has a particular focus on connected vehicles, as well as researching and analyzing security trends in cybercrime, cyber warfare, corporate espionage, hacktivism, and emerging threats.

