
By Tyler Schnoebelen, June 21, 2016.

You can turn right on red in Iowa. Except not where I was last night, from Washington Street on to Linn, which I only realized as I read the “no right on red” sign mid-turn. You’re definitely not supposed to turn left on red, which is what I did a few blocks earlier going from Iowa St. to Clinton. I have no excuse except—I’m not kidding—my mind was preoccupied by thoughts about self-driving cars.

Johnson County, Iowa was one of the first places in the US to officially welcome the testing and operating of self-driving cars. Iowa has already had self-driving tractors for about two decades—20-ton machines that steer themselves through the fields to plant and harvest. Johnson County is also home to the National Advanced Driving Simulator, which has been operating since the early 1990s.

The number you regularly see for the market size of self-driving technology is $40 billion by 2025. There are two ways of looking at the effect. The first mindset imagines only slight changes to transportation networks: cars will increasingly shift from waking up drowsy drivers to being on autopilot during typical commutes so that people can read their text messages without swerving into another lane. Alternatively, once everyone can call up self-driving taxis, who-owns-how-many-cars could shift dramatically. And the more self-driving vehicles there are on the road, the more there can be global route optimization, not just for commutes but for deliveries.

There are a bunch of players working on self-driving besides John Deere. Self-driving cars are now a reality, with companies like Mercedes, BMW, and Tesla having already released, or soon to release, features that give the car some ability to drive itself. It’s estimated that by 2020 there will be 10 million cars on the world’s roads with some self-driving capability.

Volvo, Toyota, BMW, and Tesla say they’ll have fully autonomous vehicles ready between 2018 (Tesla) and 2021 (BMW). GM invested $500M in Lyft and bought Cruise Automation, a 40-person autonomous vehicle technology company, for over $1B. Toyota holds the most autonomous driving patents.

But car manufacturers have healthy competition from outside the automotive industry as well. Google’s self-driving cars have logged over a million miles. Many industry observers believe Apple’s Project Titan is about building an electric self-driving car. Baidu, the search giant in China, plans to have fully autonomous cars available by 2019 after a recent successful 18-mile test drive through Beijing. Back stateside, Uber is now testing a Ford Fusion driverless car on the streets of Pittsburgh, so that eventually when you open their app, the car that comes to get you may have driven itself to you.

Tractors versus cars: training in the field

There are obstacles in a cornfield—fence lines with posts or poplars for wind breaks, curves in the topography, creeks and deer. But a 100-acre square is simpler than the middle of Iowa City, where college students are notorious for darting across streets at all hours. Self-driving tractors can get by with GPS and farmers can put transponders in the field that lasers can bounce off of. Self-driving cars require a lot more sensors and computing to figure out which of the many things that could be happening in the environment around them actually are.

To succeed at self-driving cars, you have to succeed at interpreting the world. That means you need a huge amount of training data. In addition to gathering camera and sensor data, you need to know if squiggles of pixels indicate a pedestrian or a statue. What our eyes can do easily—distinguishing the edge of one thing from another—is not as straightforward for a computer.

Self-driving cars don’t like faded lane markers, damaged street signs, or the variety of traffic lights that you find all over America and many other countries that don’t take standards or consistency as seriously as, say, Sweden.

That said, Mercedes’s 2017 E-Class sedans have 23 sensors that work to keep them safe on roads without lane markings. In addition to cameras, there’s radar and lidar, which get information back by bouncing radio waves or light off other objects. Having 64 lasers create a 3D map is useful but expensive—the roof of Google’s self-driving car has a Velodyne system that costs $75,000. There are cheaper options already and more on the way, though they still have a ways to go to get to the $100 price point that automobile manufacturers would want.
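The basic geometry behind those laser sensors is simple: time a pulse’s round trip, halve it to get one-way range, and combine range with the beam’s pointing angles to get a 3D point. Here’s a minimal sketch of that conversion (toy numbers, not any vendor’s actual data format):

```python
import math

C = 299_792_458  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one laser return into an (x, y, z) point in meters."""
    r = C * round_trip_s / 2          # half the round trip is the one-way range
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A pulse that comes back after ~66.7 nanoseconds hit something about 10 m away.
x, y, z = lidar_point(66.7e-9, azimuth_deg=0, elevation_deg=0)
```

Sweep 64 of these beams around the car many times a second and you get the point-cloud “3D map” the article describes.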

Pedestrian protection

The National Advanced Driving Simulator has special expertise in studying driver distractions and I was able to take a spin in its earliest incarnations in 1992. My favorite part was that if you drove off the road at just the right angle you could exploit a bug in the software and get the car to fly. That made it a lot easier to avoid virtual pedestrians.

Existing technology can already avoid collisions with pedestrians as long as the car isn’t going above 18 mph. Between 18 and 37 mph, systems can “mitigate,” though not fully avoid, a collision. In March, 20 automakers agreed to make this technology standard in all vehicles by 2022. And for what it’s worth, Google has recently patented a kind of adhesive to stick pedestrians to the car after a collision to keep them safer. You may want to go see an artist’s rendering of that.

The artificial intelligence systems behind self-driving technologies need to know something about what they’re seeing. Outlines and edges are partly detectable by just throwing enough examples at a deep learning system. Here’s how Google’s artificial intelligence, Deep Dream, sees a few famous pedestrians.


[Image: Deep Dream’s rendering of the famous pedestrians]

Actually, this would be a pretty reasonable thing for a car to see, provided it doesn’t treat a dog-headed John Lennon any differently than it’d treat a long-haired John Lennon. (That is, provided it hasn’t learned “run over monsters.”) But this image also shows you the importance of the right training data. The Internet is full of dogs and cats, so that’s what Deep Dream commonly hallucinates.

So you’d prefer to train cars on representatives of what they will encounter. Except that cars don’t see a picture—they see picture after picture after picture as they move down the road.

Looking at this as a machine learning problem, you really don’t want any false negatives—in other words, no pedestrians you should’ve recognized but didn’t. But even false positives are dangerous. If other humans are driving nearby, paying close attention, and your self-driving car abruptly stops for no discernible reason—that can be dangerous, too.
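That tension is just the decision threshold of a detector. Lowering the threshold catches more real pedestrians (fewer false negatives) at the cost of more phantom stops (more false positives). A toy sketch with invented scores and labels:

```python
# Invented detector scores: higher means "more likely a pedestrian".
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
is_pedestrian = [True, True, False, True, False, False]

def count_errors(threshold):
    """Count misses (false negatives) and phantom detections (false positives)."""
    fn = sum(1 for s, y in zip(scores, is_pedestrian) if y and s < threshold)
    fp = sum(1 for s, y in zip(scores, is_pedestrian) if not y and s >= threshold)
    return fn, fp

print(count_errors(0.5))   # strict: misses the 0.40 pedestrian -> (1, 1)
print(count_errors(0.25))  # lenient: no misses, but more phantom braking -> (0, 2)
```

No single threshold makes both error counts zero here, which is why real systems pour so much effort into better training data rather than just tuning the cutoff.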

Because there’s so much going on, the first thing you want to do is get rid of all of the easy stuff: the sky isn’t usually interesting, for example, and it’d be a waste of computing time and power to worry about it much. But there’s a lot of other stuff happening that requires interpretation. If you want a job as a “driver” in Google’s self-driving cars, you’ll sit behind the wheel for six to eight hours, getting $20/hr. But there’s another requirement: the ability to type 40 words a minute so you can give detailed reports to Google engineers about what you’re encountering.
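Discarding the sky can be as crude as cropping: if you assume the horizon sits near a known row of the camera frame, everything above it can be dropped before the expensive processing runs. A sketch under that assumption (real systems estimate the horizon rather than hard-coding it):

```python
def drop_sky(image_rows, horizon_row):
    """Keep only the rows at and below an assumed horizon.

    image_rows: list of pixel rows, row 0 being the top of the frame.
    """
    return image_rows[horizon_row:]

# Toy 8-row frame: 3 rows of sky on top, 5 rows of road below.
frame = [["sky"] * 4 for _ in range(3)] + [["road"] * 4 for _ in range(5)]
roi = drop_sky(frame, horizon_row=3)
# The detector now only examines the 5 road rows.
```

Even this trivial crop removes three-eighths of the pixels in the toy frame before any detector runs.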

It’s fairly normal for Americans to log 20,000 miles of driving in a year—so you might think that Google’s million-plus miles of data-gathering are ample. But humans are extraordinary learners and information processors even compared to state-of-the-art machines.

Etymologically, the difference between “self-driving car” and “autonomous vehicle” is something like ‘wheeled thing that urges itself forward’ versus ‘carrying thing that follows its own laws’. Artificial intelligence systems inherently follow their own laws based on their training data. This is, of course, overridable.

UPS has spent hundreds of millions of dollars to get algorithms to find better ways of traveling their 55,000 routes. If these optimized routes can reduce each driver’s path by one mile a day, they save $50 million a year. UPS drivers are not required to use the suggested paths, but they do have to write up a report if they don’t. This Wall Street Journal article by Steven Rosenbush and Laura Stevens notes several drivers feeling like the optimized routes don’t really make sense, or make them do things like back up or turn left. Even with all the human heuristics built into the UPS system, what a machine sees as optimal is not always what feels right to a human.
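The claimed savings imply a back-of-envelope cost per mile. Assuming each of the 55,000 drivers saves one mile on each of roughly 250 working days a year (the working-day count is my assumption, not UPS’s figure), the $50 million works out to a few dollars per mile:

```python
drivers = 55_000
miles_saved_per_driver_per_day = 1
working_days_per_year = 250        # assumption for the estimate
annual_savings = 50_000_000        # dollars, the figure quoted above

miles_saved = drivers * miles_saved_per_driver_per_day * working_days_per_year
cost_per_mile = annual_savings / miles_saved
print(f"{miles_saved:,} miles saved -> about ${cost_per_mile:.2f} per mile")
```

At those assumed numbers, that’s roughly $3.64 per mile—fuel, time, and vehicle wear bundled together—which is why shaving even one mile per driver matters at UPS’s scale.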

It may be worth noting that a couple of years ago, the University of Iowa surveyed a few thousand people and found that half of them didn’t recognize the tire pressure symbol on their dashboards. Other research shows that 40% of gas use is for finding a parking spot and that over 90% of accidents can be attributed to human error.

Self-driving cars aren’t and won’t be perfect, and many people associate their own autonomy with driving cars themselves. For an incrementalist, something like Toyota’s Guardian Angel feature will save lives without forcing people to give up control.

Who wins?

One argument against self-driving cars is that people aren’t ready for them–for example, someone at Tesla first alerted me to this video. Observing the evolution of the self-driving market over the next few years will be fascinating as we watch the interplay between technology, economics, the environment, and legal and social norms. Companies such as Toyota are taking a multidisciplinary approach by bringing together the geeks, suits, and wonks at the Toyota Research Institute.

Instead of explicitly predicting winners, let’s follow the data: the companies with the largest volume of high-quality image training data greatly increase their odds of winning. The cars and trucks that make the earliest big splash in the market will have a leg up on everyone else. Because a lot of this data is unstructured, CrowdFlower has a front-seat view, helping a number of the players accelerate the structuring of their image training data.

Let’s wrap up with another view of the data beyond Ringo and his squad. In the image below, the problem is knowing which shape is a car versus a shadow cast by an overhanging tree. Where does the car end and the shadow start? Where does the car start if it’s only partially visible? To answer these questions reliably and at scale, these companies have to label every pixel in hundreds of millions of images.




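Pixel-level labeling of that kind (semantic segmentation) means every pixel in the frame gets a class. A toy sketch of what one labeled frame looks like, with invented classes and layout, and how you’d measure how much of the frame each class covers:

```python
from collections import Counter

# Toy 4x6 label mask: each cell holds the class of one pixel.
R, C, S = "road", "car", "shadow"
mask = [
    [R, R, C, C, S, S],
    [R, R, C, C, S, S],
    [R, R, R, C, S, R],
    [R, R, R, R, R, R],
]

counts = Counter(px for row in mask for px in row)
total = sum(counts.values())
print({cls: round(n / total, 3) for cls, n in counts.items()})
# The hard labeling problem described above is exactly deciding, pixel by
# pixel, whether a dark region belongs in "car" or "shadow".
```

Human annotators producing masks like this—at hundreds of millions of images—are what turn raw camera footage into usable training data.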
Over the next few weeks I’ll write about the application of machine learning to other emerging markets such as drones and conversational bots, which have similar problems and possibilities.

Thanks! Comments, thoughts welcome. :)

Originally posted here
