We see a constant stream of stories about self-driving cars and the progress companies are making with their prototypes. Many people are now genuinely enthusiastic and optimistic about the future of this technology and its ability to make transportation simpler, more accessible, and more convenient.

While we’re intensely focused on hyping up these autonomous beasts, we often forget to look at the downsides that might come up along the way. Throughout the testing phases we are quick to mention all the positives and sometimes fail to give the drawbacks the attention they deserve.

On 6 August 2017, Engadget published a report showing that hackers had successfully confused a self-driving car into reading a stop sign as a speed limit sign. One can only begin to imagine the trouble we would get into if this were a widespread phenomenon. For this reason we must take a step back and acknowledge the challenges that testers still face with autonomous vehicle technology.

Confusing Signs

Self-driving vehicle technology is still at a very early stage. There is no electronic chain signaling like you would experience with modern railroad systems; instead, almost everything is done visually. Dedicated signaling would require a significant investment in infrastructure so ubiquitous that it would be easy to vandalize.

Since you’d seldom find a mayor willing to commit to a high-maintenance investment like this, we probably won’t be seeing that kind of technology anytime soon. The point is that these cars have to navigate their environments with the same tool humans have used for their entire existence: a pair of eyes.

Unlike humans, however, cars have a very primitive way of analyzing their environments. The pattern-seeking algorithms written into these machines are still at a stage where they can easily be thwarted by human intervention, as we saw in the piece linked earlier. If we want to prevent vehicles from crashing, we need to make absolutely sure that they can recognize when a sign is ambiguous, stop, and wait for human input before proceeding. In the aforementioned case, this didn’t happen. The car simply saw the stop sign as a speed limit sign and kept on going.
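
To make that idea concrete, here is a minimal sketch of a “reject ambiguous readings” policy, written in Python. Everything in it is hypothetical (the threshold value, the handle_sign function, the label names); a real perception stack is vastly more complex, but the principle is the same.

```python
# Hypothetical sketch: refuse to act on low-confidence sign readings.
# The threshold, function, and labels are illustrative, not from any real system.

AMBIGUITY_THRESHOLD = 0.90  # assumed minimum confidence to act on a sign

def handle_sign(probabilities):
    """Return the sign label to act on, or "REJECT" if the reading is ambiguous.

    `probabilities` maps sign labels to the classifier's confidence scores.
    """
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    # Below the threshold, fall back to a safe behavior (slow down, ask a
    # human) instead of committing to a possibly spoofed sign.
    if confidence < AMBIGUITY_THRESHOLD:
        return "REJECT"
    return label

# A tampered stop sign the model half-reads as a speed limit sign:
print(handle_sign({"stop": 0.48, "speed_limit_45": 0.52}))  # -> REJECT
# A clean, unambiguous reading:
print(handle_sign({"stop": 0.97, "speed_limit_45": 0.03}))  # -> stop
```

The design choice here is what statisticians call a reject option: below some confidence level, the safest decision is no decision at all.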

These kinds of glitches can lead to catastrophic circumstances in a world where self-driving cars are ubiquitous.

If It Connects, It Can Be Hacked

No matter how shielded a system is, if it connects to the Internet in one way or another, hackers will find some way to tamper with it. This is especially true if multiple runtimes in that system have access to the Web.

Google accounts seldom get hacked, but that is because the company constantly updates its systems to ensure that it stays ahead of the curve. Virtually every compromise of a Gmail account can be attributed to a mistake by the user, and the same could be said about self-driving vehicles: people could naively hand over access to their cars to a hacker. And things get much uglier as we explore this scenario further.

This scenario prompted the UK government to issue stricter security guidelines for “smart” vehicles, although there are serious doubts about whether the regulations will accomplish much beyond burdening up-and-coming companies that want to get their cars on the market.

The idea here is that companies are already aware of the potential points of failure in their vehicles, but they still need to test their models rigorously. They also need to make the software upgradeable, which in turn means future-proofing the hardware. We’ve already seen that this isn’t easy with smartphones and other devices, which end up obsolete after a certain number of years. With a vehicle, people’s lives and a valuable piece of property are at stake.
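
Part of keeping software upgradeable safely is making sure an update really came from the manufacturer before installing it. Here is a minimal sketch of that check, assuming the Python cryptography package and an Ed25519 signing key held by the manufacturer; install_update and the overall update format are hypothetical.

```python
# Hypothetical sketch: only install firmware whose signature verifies
# against the manufacturer's public key. Uses the `cryptography` package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def install_update(firmware: bytes) -> None:
    ...  # hypothetical: hand the verified image to the actual updater

def verify_and_install(firmware: bytes, signature: bytes,
                       public_key: ed25519.Ed25519PublicKey) -> bool:
    """Install `firmware` only if `signature` checks out; return success."""
    try:
        # Raises InvalidSignature if the image was tampered with or was
        # signed by anyone other than the holder of the private key.
        public_key.verify(signature, firmware)
    except InvalidSignature:
        return False  # refuse the update and keep the current firmware
    install_update(firmware)
    return True
```

Signed updates don’t make a car unhackable, but they close off the most obvious route: pushing malicious code through the same channel the manufacturer uses for fixes.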

One way to deal with this is to make the cars modular, so older vehicles can receive hardware upgrades that keep them current. This is by far the easiest solution and requires only a few simple design adaptations. The second way is to overprovision the hardware so that its maintenance lifecycle outlasts the expected lifespan of the vehicle. The problem is that buyers then pay a heftier initial price for their roadsters. For both the consumer and the manufacturer, the first method makes more sense.

All in all, we have to recognize that self-driving cars are far from a mature technology. To reach that stage we face some formidable challenges that require careful decision-making and planning for the road (pun intended) ahead.

What other challenges do you think autonomous vehicle manufacturers will face in the future? Let’s discuss them over some coffee in the comments!

Miguel has been a business growth and technology expert for more than a decade and has written software for even longer. From his little castle in Romania, he offers cold and analytical perspectives on things that affect the tech world.
