What is Level 3 Autonomous Driving?

Level 3 driving automation represents a huge leap forward in the classification of autonomy. It’s known as “conditional driving automation.”

What is Level 3 Automation?

At level 3, the car’s systems perform the entire dynamic driving task (DDT) within the domain the car is designed to operate in. The human driver is only responsible for the DDT-fallback: the car essentially “asks” the driver to take over when something goes wrong or when it is about to leave the zone where it is able to operate.

If something does go wrong in a level 3 car, it will alert the human driver that they must take over driving, and it will continue to operate for at least a few seconds while it waits for the driver to do so.

In some cases, the car will be able to maintain control of the DDT even when something goes wrong, without needing the human driver to intervene. For example, if the car detects that a system has failed and a shoulder is available on the road, the car will simply pull onto the shoulder rather than request that the human driver take over.

Read about DDT in the article on level 0.

Read about DDT-fallback in the article on level 2.
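To make this fallback logic concrete, here is a minimal sketch of how a level 3 system’s decision might be structured. It’s written in Python purely for illustration; the function and field names (like shoulder_available) are hypothetical, not taken from any real vehicle’s software.

```python
# Hypothetical sketch of level 3 DDT-fallback handling -- not any
# manufacturer's actual implementation.

from dataclasses import dataclass

@dataclass
class FailureEvent:
    failed_system: str        # e.g. "radar"
    shoulder_available: bool  # is there a shoulder the car could pull onto?

def handle_failure(event: FailureEvent) -> str:
    """Decide how a level 3 car responds when one of its systems fails."""
    if event.shoulder_available:
        # The car can resolve the situation itself, so it keeps the DDT
        # and pulls over rather than asking the driver to intervene.
        return "pull onto the shoulder"
    # Otherwise, alert the driver and keep driving for a short grace
    # period (at least a few seconds) while waiting for the takeover.
    return "alert the driver and wait for them to take over"

print(handle_failure(FailureEvent("radar", shoulder_available=True)))
```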

The other large distinction between levels 2 and 3 is that at level 3 the car’s systems perform the object and event detection and response (OEDR). The car itself is now responsible for perceiving everything that is happening around it and responding to all of it. This is a crucial point, and it’s the reason that you’ll currently see asterisks on almost all documentation about driving automation in cars today, which are at level 1 or 2. Car manufacturers make it clear that the human driver is still responsible for paying attention to her surroundings — and ultimately for what the car does.

At level 3, this is no longer true in the same way. That is, at level 3, the driver is responsible for responding to alerts that the car makes, rather than constantly monitoring the environment (as one must at levels 1 and 2) because the car is now responsible for doing that.

This is an important change. The driver now has to be receptive to what the car is perceiving, rather than perceiving the surroundings herself. She must also respond to any physical signs that the car is failing (even if the system doesn’t alert her). For example, if a tire blows and the system somehow doesn’t give an alert that this has happened, the driver is still expected to notice that the tire has blown based on how the car is moving and take over driving.

While levels 1 and 2 are broadly defined as “driving automation systems” (lowercase), level 3 is the first level that can be described as an Automated Driving System (ADS).

What is an Automated Driving System (ADS)?

Automated Driving System (ADS) is a shorthand term that is used only for levels 3, 4, and 5 of driving automation. It refers to the collection of hardware and software that performs the entire dynamic driving task (DDT). In the case of levels 3 and 4, the ADS operates within a limited and defined scope, technically referred to as the operational design domain (ODD). A level 5 ADS can drive anywhere, and therefore does not have a limited scope.
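One way to picture an ODD is as a set of conditions the ADS checks that it is operating within. The sketch below is only illustrative; the particular fields (road type, speed, daylight) are assumptions chosen for the example, not part of the SAE definition.

```python
# Illustrative only: an operational design domain (ODD) modeled as a simple
# set of constraints. Field names and limits are hypothetical.

from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    road_types: set           # e.g. {"divided_highway"}
    max_speed_kph: float
    daylight_only: bool

    def allows(self, road_type: str, speed_kph: float, is_daylight: bool) -> bool:
        return (
            road_type in self.road_types
            and speed_kph <= self.max_speed_kph
            and (is_daylight or not self.daylight_only)
        )

# A narrow, traffic-jam-style level 3 ODD might look something like this:
odd = OperationalDesignDomain({"divided_highway"}, max_speed_kph=60, daylight_only=True)
print(odd.allows("divided_highway", 45, is_daylight=True))  # True
print(odd.allows("city_street", 45, is_daylight=True))      # False
```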

Level 3 is special in that it’s the only ADS level that expects the human driver to be ready to take over driving in the event that a system fails. If the car becomes unable to continue to its destination because of some malfunction, the driver is alerted to take over and then achieve the minimal risk condition.

What is Minimal Risk Condition?

Minimal risk condition is essentially a phrase that means doing what is necessary to reduce the possibility of a crash when the car can’t complete its trip.

Depending on the situation, achieving the minimal risk condition might mean stopping in place, pulling onto the shoulder, changing lanes to avoid something in front of you, or any number of other maneuvers.
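As a rough illustration (again with hypothetical condition names rather than a real policy), the choice of maneuver could be sketched like this:

```python
# Hypothetical sketch of picking a maneuver to reach the minimal risk
# condition. The conditions and options are illustrative, not exhaustive.

def minimal_risk_maneuver(shoulder_available: bool, traffic_is_stopped: bool) -> str:
    if shoulder_available:
        return "pull onto the shoulder and stop"
    if traffic_is_stopped:
        # In a standstill, stopping in place may already be the lowest-risk option.
        return "stop in place"
    return "slow down with hazard lights on and come to a stop"

print(minimal_risk_maneuver(shoulder_available=False, traffic_is_stopped=True))
```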

Are there any level 3 cars on the road today?

In short, no.

A few years back, Audi proclaimed that they’d offer level 3 automation in their 2017 Audi A8 by way of their “Traffic Jam Pilot.” However, several obstacles arose as the technology was being developed. While the technology is arguably available, laws have not yet been put into place to properly deal with the new level of autonomy. Similarly, insurance companies don’t yet know how to deal with these types of systems. Non-standardized infrastructure (like road signs, lane markings, etc.) has also been raised as a major problem. Other research cites the lack of required redundant backup systems in current level 3 hardware and software as the real issue.

And then there’s the human factor. As mentioned in my article about level 2 automation, one serious consideration about driving autonomy is how human drivers will use (or potentially abuse) the technology. Driver monitoring systems are in place in both level 2 and level 3 cars to ensure that the driver is engaged in the necessary ways (eyes on the road, hands on the wheel, etc.), but can we really trust drivers not to get too comfortable with letting the car handle more than it’s meant to?

Level 3 is particularly tricky because it actually allows drivers to take their eyes off of the road. They only have to be ready to take over at the car’s request. Lawmakers are understandably worried about putting drivers into cars that they don’t fully understand, telling them they don’t have to pay attention all the time, and then totally relying on them to not literally be asleep at the wheel when the car asks them to take over.

These issues that arise at level 3 are precisely why car manufacturers have begun marketing their features as level 2+. I wrote about level 2+ in my article on level 2 automation. It should be noted here that SAE International doesn’t recognize level 2+, as it makes the conversation confusing and therefore defeats the whole purpose of an automation taxonomy. (This is the same reason that levels of autonomy are mutually exclusive — you couldn’t have a car with one feature at level 2 and another at level 3, for example. The levels were created to make the discussion of automation easier, not more complicated.)

Some car manufacturers may consider skipping level 3 altogether to focus on developing cars with level 4 automation — expressly to remove the unpredictable human element from driving.

I’ll write about this more extensively in my article on level 4, but it’s worth noting here that — while there aren’t any level 3 cars on the roads today because of the reasons mentioned here — there are actually some level 4 cars on (a few) roads.

Specifically, Alphabet’s company, Waymo, already has level 4 cars on roads in Arizona as part of their pilot program.

When can we expect level 3 cars to hit the market in the U.S.?

Most of the leaders in autonomous vehicles estimate that they’ll take the next leap forward in either 2020 or 2021. However, since the issues preventing the cars from being on the road today are not primarily technological, it’s really difficult for anyone to estimate when this will actually happen. At this point, we’re taking the word of car manufacturers themselves, which have historically proven to be overly optimistic.

Again, it wouldn’t be completely surprising if technological advances outpace legal and policy changes, which could mean that level 3 cars never hit the roads in large numbers. Instead, level 4 cars may be favored because they remove the gray area around human and vehicle interaction and responsibility that exists at level 3.

What would push a car to level 4 driving autonomy?

An ADS would be considered to have level 4 driving automation if it does everything that a level 3 car can do but does not rely on the human driver for the DDT-fallback.

For a car to be a level 4 system, it has to be capable of taking the human driver out of the equation altogether within the zone in which it is expected to operate.

In the case of Waymo’s pilot, they’ve done just that by offering a truly driverless service (what they refer to as a “rider-only” service) that is confined to specific roads in Phoenix, Arizona.
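As a closing recap of the distinctions walked through in this article and the previous ones, here is a small, purely illustrative sketch of who handles OEDR and the DDT-fallback at each level. The field names are mine, not SAE’s formal terms.

```python
# Purely illustrative recap of the distinctions described in this article.
# Field names are made up for readability; this is not SAE's formal notation.

LEVELS = {
    2: {"oedr": "human driver", "fallback": "human driver",            "odd": "limited"},
    3: {"oedr": "the ADS",      "fallback": "human driver, on request", "odd": "limited"},
    4: {"oedr": "the ADS",      "fallback": "the ADS",                  "odd": "limited"},
    5: {"oedr": "the ADS",      "fallback": "the ADS",                  "odd": "unlimited"},
}

for level, roles in LEVELS.items():
    print(f"Level {level}: OEDR by {roles['oedr']}, "
          f"fallback by {roles['fallback']}, ODD {roles['odd']}")
```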
