Is It Possible That Driverless Cars Could Be Hacked To Cause Accidents?

Yes. But not necessarily in the way you imagine. In theory, any device connected to the internet can be ‘hacked’, but this question opens a slightly wider debate on whether driverless and autonomous cars can be ‘fooled’ into causing accidents, damage and injury.

Speaking to Business Insider, cybersecurity expert Jonathan Petit explained how he managed to 'spoof', or confuse, the expensive and sophisticated LIDAR detection systems that driverless cars rely on, using a self-assembled device costing just $43 combined with a laser pointer. A device like this allows a 'hacker' to alter the vehicle's behaviour on the road, which could lead to accidents and, in some cases, the ability to redirect the vehicle's overall route, opening the door to hijackings and robberies.

In a slightly less serious but still alarming incident in 2015, two hackers were able to take control of a Jeep Cherokee's Uconnect system, allowing them to change the car's air-conditioning settings and radio volume and, more worryingly, to disable the car's transmission. After a demonstration of this, Fiat Chrysler recalled 1.4 million vehicles to add anti-hacking software.

It's not just driverless cars that are at risk; today's internet-connected cars can also be targeted.

In today's legal environment this poses an interesting question: what recourse might an injured party have after a crash involving an autonomous or hacked vehicle?

Presently, the Automated and Electric Vehicles Bill 2017 is moving through Parliament, with various amendments passing between the House of Lords and the House of Commons. Originally there was talk of insurance claims becoming more of a 'product liability' issue centred on the autonomous vehicle manufacturer, as some investigations into Tesla vehicles have led commentators to suggest, but the UK government has decided that 'driverless accidents' ought to be covered within the existing motor insurance settlement framework. Assuming the bill passes without further radical change, we may find that claims involving autonomous cars are resolved in much the same way as they always have been: dealt with by the usual panel of UK insurers.

That said, while this conversation focuses on driverless cars, the reality is that for at least another decade there will be a mix of autonomous and 'normal' cars on the road, and collisions between the two are likely. So while human error will be responsible for some crashes, others could be instigated by hackers, who pose a threat to all road users.

Insurers themselves have responded, suggesting that drivers will continue to need just one policy, extended to permit the use of an 'automated mode' wherever road regulations allow it. Insurers also say that a 'driver' will not be unfairly held responsible for car accident claims caused by autonomous system failures; instead, insurers will claim the resulting costs from the car manufacturer if a fault can be proven and demonstrated.

With the above insurance framework in mind, there's still a sizeable responsibility on the part of manufacturers to defend against malicious cyber-attacks that might compromise user safety.

Fortunately, it's felt that as autonomous car technology develops, it will become harder to hack as a natural consequence of system complexity. Presently, attackers need only manipulate select signals to achieve their desired outcome, but as systems become better at visualising the world around them, hackers may need to invent entire 'fictitious worlds' to confuse the systems of the future, which, arguably, is much harder to do.
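To illustrate why manipulating a single signal stops working as systems grow more complex, here is a minimal, purely illustrative sketch (not taken from any real vehicle software): a detection is only acted upon when a quorum of independent sensors agree, so a spoofed object injected into one sensor feed is simply discarded. The function name, sensor names and quorum rule are all assumptions invented for this example.

```python
# Illustrative sketch only: a toy multi-sensor cross-check, not a real
# autonomous-driving stack. An object must be reported by at least
# `quorum` of the sensor feeds before the vehicle treats it as real.

def confirmed_detections(lidar, radar, camera, quorum=2):
    """Return the objects reported by at least `quorum` of the three feeds."""
    votes = {}
    for sensor_objects in (lidar, radar, camera):
        for obj in set(sensor_objects):
            votes[obj] = votes.get(obj, 0) + 1
    return {obj for obj, count in votes.items() if count >= quorum}

# A spoofed 'phantom car' appears only in the (laser-fooled) LIDAR feed...
lidar_view = {"pedestrian", "phantom_car"}
radar_view = {"pedestrian"}
camera_view = {"pedestrian"}

# ...so it fails the quorum and is ignored, while the real pedestrian,
# seen by all three sensors, is confirmed.
print(confirmed_detections(lidar_view, radar_view, camera_view))
# prints {'pedestrian'}
```

Under this kind of cross-checking, an attacker would have to fool several independent sensors consistently at once, which is closer to the 'fictitious world' scenario described above.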

Hacking has yet to be the reported cause of any ‘driverless’ car fatalities, but as the use of autonomous vehicles becomes more widespread, the prevention of driver accidents and injury will be more closely intertwined with the advancement of cyber-security and insurance policy-making.