With driverless technology on the rise, one can only wonder about the safety of these new autonomous vehicles. Are they safe? Do they create more distractions? Could they be hacked? Just like any new gadget, technical issues may arise, so it is important to stay conscious of the “what-ifs.”
If you’re a movie buff like me, then you have probably seen every robot movie. Needless to say, they all share a common plot: the world has changed, and society has embraced a new breed of artificial intelligence, also known as robots! What starts as a great idea by some unsuspecting scientist soon becomes too much to handle. The robots begin to learn, adapt, and grow, and as a result they become more and more powerful.
Now I’m not saying that autonomous vehicles are the robots that will take over the human race, but one could wonder where artificial intelligence will lead us next. Self-driving cars are being programmed to make quick decisions on the road. After all, you never know what deer, tree, car, or child will jump in front of your car at any given moment.
Many companies have even started testing driverless cars, including Uber, Waymo, General Motors, Toyota, and Tesla. While some companies like Waymo and Tesla have gotten great feedback, others, such as Uber, haven’t had the same luck. Just recently, Uber’s driverless car struck and killed a female pedestrian in Arizona during a test drive. While there was a driver behind the wheel, the vehicle was in auto-mode at the time of the crash.
There is evidence suggesting that autonomous vehicles could reduce annual car accidents by up to 90%. That’s huge! But have you ever thought about what happens when an unavoidable crash occurs? Think about all the split-second decisions we make while driving. Could these occurrences be unpredictable? What instincts will a driverless car have?
One of the most common misconceptions about autonomous vehicles is that they are simply preprogrammed with fixed algorithms that tell them what to do in any given situation: for example, slow down at a yellow light, stop at a stop sign, or swerve right when the car ahead brakes suddenly.
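To picture what that purely rule-based approach would look like, here is a minimal sketch. The event names and the rule table are made up for illustration; no real vehicle exposes an interface like this.

```python
def decide(event: str) -> str:
    """Map a perceived road event to a hard-coded driving action.

    This is the 'preprogrammed rules' picture: every situation the
    engineers thought of gets a fixed response, and anything else
    falls through to a default.
    """
    rules = {
        "yellow_light": "slow_down",
        "stop_sign": "stop",
        "car_ahead_braking": "swerve_right",
    }
    return rules.get(event, "keep_driving")

print(decide("stop_sign"))       # stop
print(decide("clear_road"))      # keep_driving
```

The weakness is exactly the one the paragraph above hints at: any event missing from the table gets the default action, no matter how dangerous the situation actually is.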
Here’s a moral dilemma to think about: if a child suddenly runs into the street without giving the vehicle enough time to brake, does the car veer left and cause an accident, veer right and possibly injure pedestrians, swerve into a tree and kill the passenger, or keep straight and injure the child? In those split seconds, there isn’t much time to calculate an ethical conclusion.
In reality, these vehicles rely heavily on machine learning and pattern recognition, the same techniques behind much of modern artificial intelligence. As the car drives and observes everyday driving behavior, it adapts and learns to make better decisions. Through training, simulation, and mimicking real drivers’ actions, self-driving cars can actually begin to “think” for themselves. Cool, huh?
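The idea of mimicking real drivers’ actions can be sketched in a few lines. Here a toy “learner” stores logged situations from a hypothetical human driver (speed and distance to an obstacle, both invented numbers) and, for a new situation, copies the action from the most similar logged one, a nearest-neighbor lookup, which is pattern recognition at its simplest.

```python
import math

# Hypothetical logged demonstrations from a human driver:
# (speed_mph, distance_to_obstacle_ft) -> action taken
demonstrations = [
    ((30.0, 200.0), "keep_driving"),
    ((30.0, 40.0), "brake"),
    ((60.0, 100.0), "brake"),
    ((15.0, 300.0), "keep_driving"),
]

def nearest_action(speed: float, distance: float) -> str:
    """Copy the action from the most similar logged situation (1-nearest-neighbor)."""
    def similarity_gap(example):
        (s, d), _action = example
        return math.hypot(s - speed, d - distance)
    return min(demonstrations, key=similarity_gap)[1]

print(nearest_action(32.0, 45.0))    # brake (closest to the 30 mph / 40 ft example)
```

Real systems use vastly richer sensor data and trained neural networks rather than a four-row lookup, but the principle is the same: the behavior comes from observed examples, not from a hand-written rule for every case.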
Despite these hiccups during the testing period, companies remain confident about the integration of self-driving cars. It is no surprise that these vehicles may be joining us on the road very soon, possibly as early as 2020.