Will Self-Driving Cars Be As Futuristic As They Are in the Movies?
Autonomous vehicles have been a pop-culture staple for decades, and their presence in films has only grown over the years. From the original Batmobile and Herbie the Love Bug all the way to the more modern self-driving cars of Minority Report and I, Robot, vehicles capable of chauffeuring their passengers without any human assistance have inspired filmmakers for generations.
Now that self-driving cars are no longer the stuff of fiction, how will they measure up to their depiction in film? Let’s take a look.
Self-Driving Vehicles Can Change Everything
Car ownership has been an integral part of the American Dream since the assembly line revolutionized the car industry and made motor vehicles affordable for most people. However, this idea that cars represent a certain amount of American freedom is being challenged in a sense by the rise of the self-driving car. The increasing presence of self-driving cars on the roadway has led to a huge debate on how they will affect American culture and even jobs as they continue to become more prevalent.
The film that immediately comes to mind when talking about how automated vehicles might have an effect on jobs in the U.S. is the 1990 classic Total Recall, in which taxis have been outright replaced by the Johnny Cab. The Johnny Cab is an automated vehicle with a robotic torso sitting in the driver’s seat, an amalgamated simulacrum of everything we have come to expect from a taxi driver: a conversational, sarcastic stand-in for a human behind the wheel.
This speaks to a deeper idea: even when we are eventually cruising around with no control over the vehicle ourselves, we will still yearn for some modicum of human connection as we do so. Even in The Fifth Element, the taxis driven by Korben Dallas, played wonderfully by Bruce Willis, are shown to be perfectly capable of operating without a human present, yet a human “driver” is still a part of the experience.
Part of this apparent need for at least the idea of a human in the driver’s seat might stem from a collective fear of being out of control. While companies like Waymo, Tesla, and Uber all have vehicles that can drive themselves, the public still harbors fears regarding not only the safety of the vehicles themselves but also the potential for hackers to breach their security systems and wreak havoc. This has pushed the companies leading the development of self-driving cars to become more vigilant about cybersecurity in autonomous vehicles, both to assuage the fears of the public and to keep those fears from derailing the progress of autonomous vehicles.
Trusting The New Method Of Transportation
Though companies like Volvo already have a prototype for a car so advanced that riders will be able to forget about driving entirely, free to nap or watch a movie while they ride, society might have a ways to go before it trusts autonomous vehicles enough to do so. One way to increase trust in self-driving cars is to actually allow riders to form a bond with their vehicles. Even today, countless drivers can point out their car’s “personality” in the various hiccups it displays while driving. Many people go so far as to name their cars and grow seriously attached to them.
Considering that this personification of vehicles is already the norm without any sort of self-driving or autonomous functionality, it isn’t much of a leap to think that the self-driving cars of the future might have an actual personality. Like the aforementioned Johnny Cabs of Total Recall or Benny the Cab of Who Framed Roger Rabbit? fame, self-driving vehicles with a bit of personality go a long way toward endearing themselves to human riders. While we may not see vehicles develop a witty rapport with their owners a la Knight Rider, there is a chance that we will see a future where our vehicles do interact with us in some way.
A handful of car companies are planning to make artificial intelligence (AI) standard in their vehicles down the road, and Honda has gone so far as to unveil a concept car that very much echoes the Knight Industries Two Thousand (KITT). Honda’s New Electric Urban Vehicle, or “NeuV,” is a self-driving concept car that makes use of a talking assistant, the Honda Automated Network Assistant (HANA). HANA’s entire purpose is to act as a rolling personal assistant, gathering data about driver and passenger preferences and behavior in order to make suggestions based on what it collects.
Though it is unlikely that HANA or any other AI in a vehicle will have the dry wit or jovial nature of KITT within the next couple of decades, making the self-driving car experience more personal is a step in the right direction of creating a level of trust between rider and vehicle.
Safety And Liability
Ultimately, the main obstacle preventing autonomous vehicles from becoming the norm on the roads and highways of the U.S. comes down to safety and liability. Whenever I think of safety measures in self-driving vehicles, or really any futuristic vehicles, I always think of the scene from Demolition Man in which Sylvester Stallone is spared a gruesome death in a violent crash by the comical deployment of “SecureFoam.”
While it is, of course, important that our vehicles protect us while we are inside them, the true measure of safety will come from how safe they are for pedestrians and other drivers. Regardless of how connected we become to self-driving vehicles or how much we trust them with our own lives, it will all boil down to how safe these vehicles will be for everyone on the road, not just those riding within them.
Human error causes more than 90% of all car crashes on the road today, so having autonomous vehicles take over the duty of driving for us would seem like a no-brainer. However, self-driving vehicles raise unique questions of liability when they cause accidents, some of them fatal. Under current laws, drivers can be charged for driving while distracted, driving aggressively, or driving under the influence of drugs or alcohol. Since fully autonomous vehicles take the human element out of the equation, questions remain as to exactly who should be blamed if a serious accident occurs.
Another potential problem arises when we consider the classic “trolley problem,” a moral quandary in which the ethics of choosing what lives to spare comes into question. How can we be sure that an autonomous vehicle will make the appropriate ethical choice when faced with a dilemma in which it must choose between two outcomes that involve the loss of human life?
Scientists at MIT have spent a significant amount of time studying this particular issue and have developed an experiment called Moral Machine, which crowdsources ethical choices from millions of people across the globe. The findings of Moral Machine have made the problem even more frustrating, as ethics and morality fluctuate widely across cultures, economic classes, and geographical locations. Individualistic societies like France and the U.K. are far more likely to spare young lives, while collectivist societies like China and South Korea swing the other way, prioritizing the lives of the elderly. Across many different question sets, there were wild fluctuations in what was deemed morally reprehensible, and that in and of itself makes programming morality into automated vehicles extremely challenging.
The future has arrived with self-driving vehicles, and in many ways they have both exceeded and fallen short of the predictions made by film and television over the years. Generally, the roads are much safer when humans aren’t involved in driving, but we have yet to reach the level of technological advancement that would allow for a utopian vision of highways and streets full of driverless vehicles. Whatever happens down the road, we can only hope that scientists and engineers get to work on putting sassy AIs in self-driving cars, because that is the real promise that needs keeping.