Elon Musk is right. Driverless cars will arrive by 2021

Tristan Greene

Writing about AI is either an act of sharing hope or one of spreading dismay. People either get excited about the imminent arrival of driverless cars, or become terrified that the machines are inching ever closer to putting us all in danger.

No matter how you feel, autonomous vehicles are probably closer than you think.

While some feel driverless cars are decades away, the reality is they’re nearly here. The pessimistic predictions tend to stem from three major concerns:

  1. Technological: Is AI smart enough to drive already?
  2. Political: Isn’t the government standing in the way of this?
  3. Ethical: How does a car choose between killing pedestrians and killing its passengers?

Technology

Overcoming technological limitations is the most complex of the three, but not because science is hard. It’s because marketing is hard, and financing these efforts requires tiny startups to be thrust suddenly into the realm of Ford and Toyota.

We’re not talking about a room of people in lab coats, each with a name tag showing which company they work for, inside a big building labeled “AI science.” Instead, it’s a race, and sharing research takes time.

There are hundreds of AI companies great and small working on the problem. Microsoft just committed itself to being an AI company. Ford practically abandoned its entire business plan in favor of becoming a manufacturer and operator of driverless vehicles. Technologically speaking, the experts think we’re only a few years away.

Elon Musk thinks we’ll get there by 2021, a statement he doubled down on by saying Tesla would have the technology by next year, with expectations for government approval by 2021. Ford’s new CEO said he plans on deploying self-driving cars by then and Toyota, in the same report, says it expects to field them by 2021 as well.

They’re already being tested: self-driving cars have been on public roads for a while now. Sometimes people even pretend to be driverless cars, but otherwise they already exist. So if we’re almost there, technologically speaking, then surely it’ll be decades before governments can legislate the rules of autonomous vehicles. Lives are at stake, right?

Politics

The legislation, in the US, is actually going better than most people could have predicted. So far, federal regulatory efforts have passed in the House with full bipartisan approval, and the Senate is expected to follow. The current legislation, to be clear, says that manufacturers can field up to 100,000 driverless vehicles ahead of full state and federal safety guidelines.

There’s more work to be done, and the legislation hasn’t been a slam-dunk. Transportation unions got involved, and there’s now a 4,536 kg (10,000 lbs) weight limit baked into the bill, meaning no big rigs. Otherwise, there’s a lot of reason, in the US, to be optimistic about the political outlook for the immediate implementation of autonomous vehicles.

Unless, of course, the ethical concerns become too great. Perhaps the average consumer simply isn’t ready for driverless cars. Studies, however, indicate we are.

Ethics

A recent study conducted by Erie Insurance asked 3,000 people about self-driving cars, and it seems like people are warming up to the idea. Cody Cook, Erie Insurance vice president, said:

According to the National Highway Traffic Safety Administration (NHTSA), human error is a factor in 94 percent of car crashes. While we believe that fully autonomous vehicles will greatly reduce that number, it’s hard to predict how soon they will be widely available. Current technology is going a long way to keep us safer on the road, but the last thing we want is for people to become over-confident as this technology continues to evolve. Unfortunately, our survey finds that many people are getting ahead of themselves—making plans for what they’ll do in the car instead of paying attention to the road.

Who among us hasn’t tried to imagine what our commute would look like if we could just hand off all responsibility to a computer?

One of the foremost experts on the ethical concerns surrounding AI is Oxford University Professor Nick Bostrom, founder of the Future of Humanity Institute. TNW talked to him about these particular issues with AI previously. He believes if technology can save millions of lives a year, we don’t have to be so concerned with extremely rare circumstances that we allow progress to stall.

If human error plays a role in 94 percent of car accidents, then it’s a problem AI could potentially eliminate.

The outlook for self-driving cars is good, if you’re into that sort of thing.

It doesn’t seem like everyone in America is going to have a driverless car in their garage by 2021. In fact, most people will probably experience autonomous vehicles through companies like Uber, or Ford’s future fleet that’ll work a lot like Uber.

2021 might be a tad optimistic, but it seems we’re closer to 2021 than to ‘decades away.’