
Level 3 Autonomy Is Confusing Garbage


The Society of Automotive Engineers has done many wonderful things in its 117 years of existence, like making everybody’s horsepower drop when the industry switched from SAE gross to SAE net power ratings in the early 1970s, or forming the Fuels and Lubricants committee in 1933. More recently, we’re talking about the SAE because of its classification of driving automation levels, which range from 0 to 5 and are used to categorize automated vehicles. The problem is that this classification system really isn’t well understood, and as the first cars that claim to be SAE Level 3 arrive on the market, it’s becoming important to highlight that some levels, Level 3 especially, are severely, perhaps even dangerously, confusing.

The SAE Automated Driving Levels

One crucial problem that I’ve discussed before is that the classification system used by the SAE is fundamentally flawed. The level system suggests some sort of qualitative, hierarchical ranking, with higher levels being more advanced, when that really isn’t the case at all. For example, here is the current SAE J3016 Levels of Driving Automation chart:

[The 2021 SAE J3016 Levels of Driving Automation chart]

If you actually look at what the criteria are for each level, you’ll realize that these are not gradations of a machine’s ability to drive, but rather descriptions of how the driving task is shared between the human and the car.

In Levels 0 through 2, the human is always, unquestionably in charge. This is obvious in L0 and L1, because if the human is not in charge for any period of time, then you’re going to crash, which makes it pretty obvious who should have been running the show.


Level 2 has the human in charge, but the car can be performing so many aspects of the task of driving that the human’s role changes from one of actively controlling the machine to what is known as a vigilance role, where the human is monitoring the behavior of the car, and must be ready at any moment to take over should anything go wrong.

My opinion on L2 is that it is inherently flawed, not for any specific technical reason, but because conceptually it puts humans into roles they’re just not good at, and there’s the paradox that the more capable a Level 2 system is, the worse people tend to be at remaining vigilant, because the more it does and the better it does it, the less the human feels the need to monitor carefully and be ready to take over. So the fundamental safety net of the system is, in some ways, lost.

I’m not the only one who feels this way, by the way; many people far smarter than I am agree. Researcher Missy Cummings goes into this phenomenon in detail in her paper Evaluating the Reliability of Tesla Model 3 Driver Assist Functions from October of 2020:

Drivers who think their cars are more capable than they are may be more susceptible to increased states of distractions, and thus at higher risk of crashes. This susceptibility is highlighted by several fatal crashes of distracted Tesla drivers with Autopilot engaged (12,13). Past research has characterized a variety of ways in which autonomous systems can influence an operator’s attentional levels. When an operator’s task load is low while supervising an autonomous system, he can experience increased boredom which results in reduced alertness and vigilance (ability to respond to external stimuli) (14).

I’ve written about my problems with L2 plenty, though. I’d like to grow a little in my feckless griping and complaining and move on to Level 3, which is, excitingly, even more confusing and deeply flawed than Level 2!


The Conceptual Problems with Level 3

Level 3 is worth talking about now because cars that claim to have L3 automation, like the Mercedes-Benz EQS with the Drive Pilot system, are now set to come to America this year, so it seems like the peculiar traits and issues associated with Level 3 should be widely discussed before cars operating under L3 parameters actually start to be deployed on roads in quantity.

According to the SAE standard, L3 is actually an automated driving system, which means that the person in the driver’s seat does not always need to pay attention to the driving task. The SAE calls this “Conditional Driving Automation,” and the chart states:

“You are not driving when these automated driving features are engaged – even if you are seated in ‘the driver’s seat.’”

The chart then adds this qualifier block of text for L3:

“When the feature requests, you must drive.”

My problem with L3 is that the SAE has set up a standard for carmakers to use that is extremely vague and woefully incomplete. The difference between zero driver demand (“you are not driving”) and absolute attention (“when the feature requests, you must drive”) is vast, and the circumstances that trigger this transition are quite unclear.


For something to be considered an L3 system, how much time must elapse between a “request” to drive and getting an inattentive driver to realize they need to start driving, assess the situation around them, and actually take over? What if the person in the driver’s seat, who was informed they do not need to do anything related to driving, is asleep? What if they’re drunk? What if they’re making out or wrestling with an unruly ferret or have their hands full of sloppy chicken wings?

It does not seem reasonable at all that you can tell a human in a car that they do not need to drive or even pay attention to what the car is doing, the way they must in Level 2, and then simultaneously expect them to take over upon request while the car is in motion on an active road.
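Just to put some rough numbers on that gap, here’s a quick back-of-the-envelope sketch. The takeover times are purely my own illustrative assumptions (the SAE standard, notably, doesn’t specify any), but the arithmetic makes the point:

```python
# Rough illustration: how far does the car travel while a disengaged "driver"
# figures out they're supposed to be driving again? Takeover times are assumed
# for illustration only; J3016 doesn't define them.
speeds_kph = [60, 100, 130]        # 60 km/h is the German L3 cap; the rest are ordinary highway speeds
takeover_times_s = [4, 10, 30]     # assumed: glancing up, groggy, asleep/wrestling a ferret

for v_kph in speeds_kph:
    v_ms = v_kph / 3.6             # convert km/h to m/s
    for t in takeover_times_s:
        print(f"At {v_kph} km/h, a {t}-second takeover covers roughly {v_ms * t:.0f} m")
```

Even at the low German speed cap, a thirty-second takeover is half a kilometer of road the car has to handle entirely on its own.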

I’ve just asked a lot of questions there, so it makes sense to try to find someone who can answer them; luckily, we know someone: an automated driving engineer with years of experience who currently works for a major automaker. Even though his current position at the OEM isn’t directly part of its AV program, he prefers to remain anonymous at this time, but I can assure you we have confirmed his credentials.

I relayed my questions about L3 and asked him whether my assessment of the clarity and specifics of this level was accurate, at least as he understood them.

“Absolutely. Level 3 is hot garbage, and potentially incredibly dangerous.”


While I can’t say I’m surprised, it’s not exactly fun to hear that from an expert.

The Level 3 Standard Is Not Clear About How or When to Hand Off Control to a Human

Our source agreed that the basic conditions of Level 3 are confusing at best, and confirmed that there really are no clearly defined parameters for what a successful handoff should be, how long it should take, or when and how it should happen.

He did mention that some potential failure situations do have a good degree of built-in failovers, and used the example of bad lane markings. If the lane markings disappear, most AV systems are able to extrapolate at least 50 to 120 meters ahead, giving about three seconds of continued safe driving, even with only vision data.

Other sensors that are fused into the overall perception system can help, too, such as GPS data and even following a car in front, and these together can buy up to 30 seconds of safe driving. Sensors can also self-report as they become obscured or marginalized in some way, so ideally there should be some degree of warning before they are determined to be inoperable.
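If you want to sanity-check those figures, the arithmetic is simple enough. Here’s a minimal sketch using the engineer’s 50-to-120-meter extrapolation range; the speeds are my own assumed highway values:

```python
# Minimal sketch: seconds of driving that extrapolated lane geometry buys.
# The 50-120 m range is the figure quoted above; the speeds are assumed.
extrapolation_range_m = (50, 120)

for v_kph in (100, 120, 130):
    v_ms = v_kph / 3.6
    low_s, high_s = (d / v_ms for d in extrapolation_range_m)
    print(f"At {v_kph} km/h: {low_s:.1f} to {high_s:.1f} seconds on extrapolated lanes")
```

At around 120 km/h the upper bound works out to roughly 3.6 seconds, which lines up with the “about three seconds” figure our source mentioned.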

This is all good to hear, but it assumes that the takeover request happens because of some criteria or situation that is recognized and processed by the car’s sensors and related computing hardware. If that’s the case, then that’s the best possible scenario for a handoff. In reality, though, there is absolutely no guarantee a given automated driving system will even fully know it needs help in every situation.


While poor road markings and sensor obscuring are well-understood examples that are being compensated for, the real world is full of unexpected situations that can confuse an AV. Sometimes even expected things, like, say, the moon, can throw an AV for a loop, and in those cases where the need for handoff isn’t recognized by the car, there really aren’t any good, clear procedures for how to best respond.

Maybe you’ll realize something is going wrong in time, if you’re paying some kind of attention. Maybe you won’t. If I had to guess, chances aren’t great you’d be paying attention because, remember, an L3 system tells you you don’t have to.

Simultaneously telling a human in the driver’s seat that they do not need to pay attention unless, you know, they do, is a recipe for trouble. There’s no way this contradictory approach can end well, especially because the L3 specification makes absolutely no mention of what sort of failover procedures should be in place when the human is not able to take over in time–and that amount of time, again, isn’t clear at all.

Everyone is Just Guessing

I asked our automated driving engineer about this, and while he said that “processes are being designed,” he also added that the fundamental parameters of Level 3 are “poorly designed,” and as a result “everyone is just guessing” when it comes to how best to implement whatever L3 actually is.

“There is absolutely a lot of guessing going on,” he added, noting that there simply isn’t any real well of L3 driving data to work with at this time.


 

What Should Happen When A Handoff Fails?

Ideally, an automated vehicle should be able to get out of active traffic lanes and park itself somewhere as safe as possible when it needs a human to take over but the human does not do so.

At this point in automated vehicle development, no car on the market does this; the only currently implemented solution is for the car to slow to a stop in an active traffic lane and sit there, something no driver in their right mind would do.

This isn’t a viable solution moving forward.


I asked our automated driving engineer about this as well, and he told me that while engineers have of course been talking about getting the cars off the road in situations where a successful handoff doesn’t happen, he had not encountered any serious work on actually making it happen. That doesn’t mean no one is trying (I suspect there are efforts to figure it out), but our engineer felt that currently

“There isn’t an aggressive, industry-wide push to figure it [getting the car to a safe shoulder] out, because most companies don’t care about failover states as much as getting the cars to just stay in their lanes and not crash into anything.”

The vagueness of L3 and the lack of any serious development around what AVs should do in situations where they are no longer functioning properly reveal a fundamental flaw in how all automated vehicle development is currently progressing: at this moment, most development effort is being put into how these systems should work, when the truth is that we’re at the crucial moment where effort needs to be directed toward determining how these systems will fail.

That’s what really matters right now; we can’t make any real progress toward vehicles that never require human intervention until we can make vehicles (or, more importantly, overall systems for vehicles) that can manage them when they fail.

The Regulations That Allow L3 Cars on the Road Are Very Limiting

Previously, Mercedes-Benz had told me that UN regulations prevented them from doing anything but coming to a controlled stop in an active lane of traffic, pointing to this passage:


5.1.5. If the driver fails to resume control of the DDT during the transition phase, the system shall perform a minimum risk manoeuvre. During a minimum risk manoeuvre, the system shall minimise risks to safety of the vehicle occupants and other road users.

[…]

5.5.1 During the minimum risk manoeuvre the vehicle shall be slowed down inside the lane or, in case the lane markings are not visible, remain on an appropriate trajectory taking into account surrounding traffic and road infrastructure, with an aim of achieving a deceleration demand not greater than 4.0 m/s².

Okay. I don’t think parking in an active traffic lane on a highway should be considered a “minimum risk manoeuvre,” even if they do spell “maneuver” all weird. It seems the German government would agree, as it has recent automated driving regulations that address this in a far more viable way: Level 3 systems are only permitted for highway travel, and then only at speeds of 60 kph (37 mph) or less–essentially, high-traffic, low-speed situations, like driving on the 405 in Los Angeles at pretty much any time.
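For a sense of scale, that 4.0 m/s² ceiling is a fairly gentle stop, a bit under half a g, and basic kinematics (the speeds here are my own picks for illustration) shows what it means in practice:

```python
# Quick kinematics sketch of a stop under the regulation's 4.0 m/s^2
# deceleration ceiling. The speeds are chosen for illustration.
MAX_DECEL = 4.0  # m/s^2, per the UN regulation quoted above

for v_kph in (60, 130):
    v = v_kph / 3.6                          # m/s
    time_to_stop = v / MAX_DECEL             # seconds spent slowing to a standstill
    dist_to_stop = v ** 2 / (2 * MAX_DECEL)  # meters covered while doing it
    print(f"From {v_kph} km/h: about {time_to_stop:.0f} s and {dist_to_stop:.0f} m to stop")
```

From 60 km/h that’s around 4 seconds and 35 meters; from ordinary autobahn speeds it’s closer to 9 seconds and 160 meters, all of it spent decelerating in a live lane before the car finally just sits there.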

The 2021 law does require that faster automated driving and higher-than-L3 systems actually deal with non-reacting humans in a more reasonable way:

If the driver or the technical oversight does not react, the system is able (unlike in level 3) to autonomously place the motor vehicle in a minimal risk condition (e.g. stop the vehicle on a hard shoulder).

See the difference? One is a minimal risk maneuver–the car doesn’t take risks in the process of coming to a stop, but after that maneuver, all bets are off, because the car is potentially sitting there, blocking a lane on the highway. The new law specifies a minimal risk condition, which is where the car is effectively out of harm’s way. That’s so much better.
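If it helps, here’s a toy sketch of that distinction. This is entirely my own illustrative framing, not code or language from either regulation, but it captures the difference between describing how the car stops and describing the state it has to end up in:

```python
# Toy model of "minimum risk manoeuvre" vs. "minimal risk condition."
# Illustrative only; not actual regulatory or production logic.
from dataclasses import dataclass, replace

@dataclass
class VehicleState:
    speed_ms: float        # current speed, m/s
    in_active_lane: bool   # still occupying a live traffic lane?

def minimum_risk_manoeuvre(state: VehicleState) -> VehicleState:
    # UN-regulation style: the requirement is about the *process* --
    # decelerate gently (<= 4.0 m/s^2) within the lane. The end state
    # can still be a stopped car blocking live traffic.
    return replace(state, speed_ms=0.0)

def in_minimal_risk_condition(state: VehicleState) -> bool:
    # German-law style: the requirement is about the *outcome* --
    # stopped AND out of active traffic (e.g. on a hard shoulder).
    return state.speed_ms == 0.0 and not state.in_active_lane

after_stop = minimum_risk_manoeuvre(VehicleState(speed_ms=16.7, in_active_lane=True))
print(in_minimal_risk_condition(after_stop))  # False: stopped in-lane isn't good enough
```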

So, at this moment, Mercedes-Benz is able to release a vehicle it can claim L3 automation on, because that L3 automation is limited to low-speed, low-complexity situations where the inherent flaws of L3 itself aren’t likely to be as pronounced.

But it’s important to remember they are still very much there. L3 is not a well-thought-out level, period. The human-machine interaction it specifies is at best vague and confusing, and at worst has the potential to put people in dangerous situations.


Our engineer also had a lot to say about the 37 mph limit that’s currently the law. He explained that such a limit effectively restricts these L3 systems to traffic jams, and mentioned that he felt that in these situations

“The driver doesn’t get to relax. They have to monitor constantly if the system is off or on–going over 37 mph will turn it off, for example–and the opinion of the engineers I work with is that no one will want this, given the cost.”

If there were any real overarching, industry-wide plan for automation, this would be the point where the technology, procedures, and regulations that deal with how these systems can fail safely, and in as controlled a way as possible, would be under active development. But they’re not.

Why Is It Like This?

The engineer we reached out to offered a bit of an explanation for why there was no overarching plan to figure out these hard but crucial issues at the beginning, and he said it had to do with how the industry tends to use something known as agile software development.

He described this as an “iterative approach to get to a minimum viable product, then improve the software with updates after the fact.” He also noted that a lot of the sticky safety and failover/handoff issues had yet to be resolved because so much of the industry is “still working on smooth lane changes.”


This agile approach may work great for the rapid development of software for our phones, but the difference between your phone crashing and your car crashing I think is generally agreed to be pretty significant.

Our automated driving engineer does think that some level of automated driving that approximates what the Level 3 standard describes is coming, but, realistically, he says that it’s at least “five years out. That’s for highway speeds, on highways only. I don’t see it sooner than that.”

It’s not like doing the hard, unglamorous work of managing AV failure situations is impossible, because it’s not. The auto industry works because of incredible numbers of standards and practices and procedures, in the ways cars are built, sold, licensed, inspected, and the infrastructure they operate in.

We have traffic laws and road markings and traffic signals and safety equipment on cars, from bumpers to airbags to seat belts to those taillights I fetishize. If we’re really serious about automated driving, it’s time to figure out how we want to deal with these machines when they go wrong, and part of that process may very well be taking a hard look at automation levels like Level 3 and acknowledging the harsh reality that any system that can both demand nothing and then everything from a human is doomed to failure.

 


 

Comments
Dest
2 years ago

Ah blaming agile, classic. There’s nothing in agile that says you have to release your several ton death machines onto the public. As if a waterfall approach would be yielding any better results than we have now.

Arrest-me Red
2 years ago

I looked quickly at the cover photo and thought, “Why is an Amish person driving a Merc?” Then I saw it was the seat back. However, my stupid male brain keeps going – Amish?

_CB_
2 years ago

As a resident of rural Saskatchewan, I’ve often wondered: how the heck are these gonna handle rural grid roads? We have thousands of kilometres of roads that are dirt/gravel with absolutely no lane markings. In the winter, they get covered in packed snow. Same with the roads of my village, which doesn’t even have lane markings.
I don’t think we’ve planned for a lot of use cases on this!

Dingleberryhandpump-
2 years ago
Reply to  _CB_

Without lane markings, it just moves to the right as far as it considers safe. Unmarked roads actually work pretty well with today’s tech.

Snow is a real issue though. Not only does it hide where the road ends, it also likes to cover up things like cameras and lidar sensors…

Jason Hinton
2 years ago
Reply to  _CB_

AI isn’t going to handle that, and it doesn’t need to. That is why Level 5 is a pipe dream.

staaaab
2 years ago

I’ve really enjoyed driving with adaptive cruise control recently, but it definitely was a learning curve – you really have to learn the situations where this thing will fuck up – and it does fuck up a lot (BMW). Most common are tight turns, where it will just not see the car in front anymore and accelerate at full throttle (even nicer when you forget to turn it down after leaving a highway and it’s still set to 100 mph in the city). Other times it slams the brakes because it sees a weirdly parked car on the side of the road. Other times it won’t brake when someone enters your lane, because it only reacts when that car arrives at 12 o’clock, which often would be too late.
Depending on the situation, when it will brake for standing traffic in front can differ greatly, so I always hover my foot over the pedal just in case.

I now know pretty much when it’s just best to disengage it proactively, but I don’t fool myself into thinking this thing has any safety advantages over driving yourself.

Dave Beth
2 years ago

I have maintained, and still maintain, that the driving force behind autonomous vehicles is not safety and never has been safety. Every time anyone mentions that they don’t like these vehicles, invariably someone will bring up how much safer they supposedly are versus human drivers. I don’t think most people care as much about that as they say they do. Not at all.

What people want is time. They want the time back that they are forced to spend behind the wheel. They want to be productive, or watch a movie, or talk on the phone, or tweet, or whatever else they’d otherwise be doing. People see the act of driving, especially in traffic, as a tedious and unnecessary part of their day. Long gone are the days where people just gleefully drove long distances as the default mode of vacation travel. Commutes are more rage and anger than cruising down the road.

People are willing to put up with a lot of unsafe and hazardous situations if they feel that they are getting their time back. Just watch how many people are willing to drive on the shoulder, through parking lots, swerve around vehicles, and other things that shave off just seconds from their travel times. Are these people really that concerned about safety? They’re concerned about time.

People also seem to think of the autonomous driving levels as some smooth escalation. Cars don’t graduate from one level to the next. There is no amount of data that will make a Level 2 car into a Level 3 car. They are different systems that react in different ways. All of the non-participating drivers out there who have to interact with these experiments are doing so unwillingly. It seems like the public population of non-voluntary participants should get some say in whether they are allowed on public roadways too.

Jason Hinton
2 years ago
Reply to  Dave Beth

The driving force behind autonomous driving is replacing commercial drivers. That is where there is the payback needed to justify the huge upfront costs required for a true Level 4 system.

Ron888
2 years ago

Not so much related to this article but the site in general:
YOU GUYS ARE KNOCKING IT OUTTA THE PARK!!
I expected little more than Jason and David’s usual fun articles, but another genuine car engineer? A real car designer? A Navy pilot?
This is flat out overachieving 😀
Freaking awesome content guys

Ric Troll
2 years ago

I don’t even use cruise control so I am not interested in any feature that encourages hands-off or disengaged behavior.

If the goal is to have a (train) car that you don’t have to drive, the cars will need to communicate to each other and the infrastructure will need to really change.

Given the present course, I predict Senate hearings and much grandstanding.

TriangleRAD
2 years ago

Point of order: ALL ferrets are unruly, all the time.

Bite Me
2 years ago

Having a whole system centered around competition is going to continually bite us in the ass as we become more and more reliant on technology. Humans are bad enough at being consistent and predictable drivers, now we’re gonna get proprietary OEM trademarked emergency stops on the highway that vary from model to model. Kias slowly scoot to the shoulder, Chevys just come to a halt and put on the flashers, Tesla aims for the semi trailer with the best ride height for decapitation, Ford inflates a bubble around the car, Mercedes turn 45 degrees to take up two lanes to stop in and sends out a drone that alerts a Certified Roadside Assistant 45 minutes away. Exciting times!

SonOfLP500
2 years ago

People are being led down the garden path by having stages between no automation and full automation, like having stages to jumping over a chasm. On this side of the chasm are Levels 0 and 1, with the driver in control all the time, and on the other side is Level 5, a vehicle that can completely control itself in all mixed traffic scenarios. Any level that requires the driver to decide whether they or the vehicle are in control in an emergency is jumping partway across the chasm.
The Unruly Ferret Test should be the baseline.

Tom Jennings
2 years ago

Umm, the very goal is stupid. From a transportation POV self driving is at best a distraction.

It’s taken as a given, some path of development ordained by… who and why?

It’s just not that useful or worth the noise.

And of course you’re right re: vigilance. It’s a thing we are inherently bad at and to rely on it is folly.

Technology is something humans do, it’s not exalted or magical. There’s nothing automatic about increased complexity. We are collectively really terrible about choosing paths and have let the ones with the most money win.

Ben
2 years ago
Reply to  Tom Jennings

“It’s taken as a given, some path of development ordained by… who and why?”

The answer is Big Tech. Some techbros realized that if they could crack AVs they stood to make boatloads of cash. What they didn’t realize is exactly how hard that was going to be. Now that they’ve raised billions in venture capital, they can’t change their story so they keep pushing it to make sure the VC gravy train runs on schedule.

Rafael
2 years ago
Reply to  Ben

That is a very interesting take. They are both hostages of and believers in their own bravado

MacAttac
2 years ago

The question I have when it comes to self driving is more in line with what we’ll be able to do once we actually achieve it. If tesla comes out with a level 5 cybertruck next week and I go buy one, what do I tell the cop when I get pulled over for taking a nap and eating hot wings in the driver’s seat

Rafael
2 years ago
Reply to  MacAttac

Eating while asleep is a choking hazard; the cop was right in pulling you over, sir!

Fix It Again Tony
2 years ago

Shouldn’t there be some sort of standardized test that the car has to pass to get labeled level whatever?
Otherwise you can just say that your car that needs driver intervention 99.99999999999999% of the time is a level 3 car.

pip bip
2 years ago

self driving is an expensive boondoggle that will never fully arrive. ever.

Kurt Hahn
2 years ago

I think the question about what happens if the driver doesn’t take over is a bit of a red herring (I hope I’m using this correctly…). I can only think of one (or two) reasons why a driver wouldn’t take over: he’s either totally incapacitated, or he’s so deep into something else that he’s literally oblivious to the world around him. Both of these things are just as likely to happen with or without driver assistance. And in both of these cases, the alternative (no driver assistance whatsoever) would be a vehicle driving straight until it hits an obstacle. I agree that coming to a stop in a possibly busy lane is dangerous, but still much less so than the vehicle being totally uncontrolled.

Doctor Nine
2 years ago

I was looking at the SAE specification, and noticed this little gem:

“.. system can drive everywhere in all conditions…”

I mean, just look at that. It’s so dumb I can’t even.

They need to at least write the specification correctly, if they want engineers to be able to demonstrate the spec. You can’t drive in a Cat 5 hurricane or autopilot your way through a tornado. There will be no autonomous lava crossings. This is just a ridiculously poor SAE classification system. Somebody please talk to them. They need to put an adult in charge.

Halftrack_El_Camino
2 years ago

I’m perfectly fine with automated driving starting out with traffic jams. Driving in traffic jams blows, and if I could just chill out and let the car handle it that would be a big win. It also seems like a situation where there could be a decent amount of handover time—just because traffic broke the 37 mph barrier doesn’t mean the car will immediately crash or anything, so there should be time to get the driver’s attention. After traffic jams, let’s do highway cruising in clear traffic. After that, low-speed urban driving.

I do agree that the lack of a graceful failover mechanism is a big problem. If the car can’t pilot itself to safety, it should at least have to be able to put on a big light show to alert other drivers that it’s stopped in the middle of the road. I’m talking like, ambulance-level flashy flashies, and it should be going crazy on the inside too in order to try and get the driver to wake the fuck up. It should also broadcast a signal to other semi-autonomous vehicles, and after a minute or so it should call 911 and broadcast its location to EMS. If you let that happen without there being some kind of actual emergency, you should be on the hook for making a false 911 call.

Thomas Larsen
2 years ago

Maybe also, until self-driving cars are able to drive without making occasional choices like a mostly-blind 100-year-old on acid, it could be mandated they must have a light – say a solid green roof mounted light visible 360 degrees – automatically switch on whenever the vehicle is operating in automated mode, to alert drivers.

Then once (or not unlikely *if*) this is all figured out and automated cars are eventually able to drive smoothly and seamlessly with other traffic, dogs, kids, bikers, drunk and stoned pedestrians, and the odd escaped birthday balloon, at all times and not require driver input, ever, the rule could be reversed to apply to manually controlled vehicles (if they’re still allowed on public roads by then), for the same reason.

Factoryhack
2 years ago

So, I have level 2 in my Stelvio (which seems kind of silly since the whole selling point of an Alfa Romeo is to enjoy driving the damn thing because it’s Italian and endowed with super special Italian steering feel), but that’s beside the point.

The level 2 actually works pretty good. The lane centering is dead on, even in gradual turns, and the adaptive cruise does exactly what it’s supposed to do, etc.

EXCEPT, when it suddenly decides to just drop out, shit can hit the fan really quickly if you’re not paying attention.

The whole idea of having to just put a finger on the steering wheel sounds good on paper but is a horrible idea in the real world.

You absolutely need to have a reasonable grip on the steering wheel at all times, because when the system randomly decides it doesn’t have a clear enough lane marker, or the turn is too tight, it turns off RIGHT EFFING NOW with absolutely no warning, which usually happens on a curve in heavy traffic.

We have a long way to go before we can start reading and let the car do the driving at any level.

Alfa Romeo Tango
2 years ago
Reply to  Factoryhack

Can confirm. I sell cars that have level 2 as an option, so I drive them quite a bit and am trained on how to demonstrate them, and it’s so spotty. You can know what to watch for, when it “should” give up, how tight a bend it’ll take, and it’ll just decide to go straight at a bend with clearly defined markings all around and a car to lock onto ahead. We really considered a Stelvio; glad the Macan Sport Edition we found doesn’t have adaptive cruise.

Thomas Larsen
2 years ago

As a guiding standard I think the SAE J3016 is just fine. It’s clear what it covers and doesn’t. It’s not the standard’s problem that marketing people abuse it.

The real problem is the lack of rules about what’s allowed on the road, and consequently, how car manufacturers go about implementing this. As long as anything goes, manufacturers will go for what can be done (the fastest), which is patching additional functionality onto old-fashioned cruise control and calling it “AI”.

And, as the article says, they do this with a traditional MVP (minimum viable product) agile development approach which, by the inherent nature of the process, does not take edge cases or even major fundamental topics into consideration, since those are simply deemed “not part of the MVP” (meaning “we’ll get to it later, once we’ve got these other things working. Maybe. If there’s a demand for it and we’re not distracted by anything else for the next decade”).

As I see it there are only two ways to fix this, and both unfortunately require firm regulation.

1. Automated driving (for example, defined as “any function where steering is done other than by direct control by the driver”) is permitted only on dedicated roads which have been prepared with the necessary infrastructure to support this, and only by cars whose implementation has been evaluated and approved by an independent auditor appointed and accredited by the road authorities.

I.e. a “regulated level 3” where the government defines a set of specific operation and handover conditions for the “automation supported road”. On that road the car will operate as level 4, while it will be level 0 or 1 outside those roads.

2. Automated driving on non-dedicated roads is permitted only by cars whose implementation has been evaluated and approved by an independent auditor appointed and accredited by the road authorities. This implementation must take a top-down approach to emulate the perception and decision process of the human it is replacing, to eliminate the problem of edge cases and allow the car to handle any unforeseen situations in a safe manner.

I.e. level 5 only, potentially with a limited manual control option to allow for very low speed manual maneuvering for parking, servicing, etc. on private property and for emergency roadside recovery (like moving out of the way after a crash, loading onto and off a flatbed for towing, etc).

Either method should only be approved for public use once it has been proven, to the auditor’s satisfaction, that the required driving can be performed without human interaction in all licensed situations. Level 2 steering (such as lane centering, etc.) and “unregulated” Level 3 are expressly not allowed under any circumstances.

OFFLINE
2 years ago

So, engineer here, but not on SAE level (whatever) stuff: the incremental approach is how we’re going to get there. If you’re expecting a big ol’ blob of “Here’s level (whatever)” to drop, it ain’t happening. Why? Too many variables. And by variables I don’t mean that the cheap way to solve a lot of issues — paint the damn roads — isn’t feasible. It’s the standards of the project team, and that includes the standards bodies. Committees produce compromises, and compromises are known to not produce excellence. You’re never going to get a good standard before the problem is solved. Someone will solve the problem and then the standard will get written.

So no matter how we visualize this going, there’s going to be a lot of trial and copying of each other’s work. Agile development works really well for this type of thing. I think the best metric to track progress is, “Is this safer than the average driver?” The best way to track that is with accident statistics and associated factors. Honestly, the average driver ain’t that great, and as a society we’re aging, so it’s going to get worse. For folks concerned that the new technology is out on the road with them, I’d advise them not to check how bad the average driver is. Or hide under their beds if required.

I currently have experience with two of the systems out there: Tesla (full Autopilot) and Ford (BlueCruise). Both have their flaws, but after driving cross country with both, I’d pick AP. It’s better at anticipating issues (like moving out of the entry lane when approaching an overpass) and more consistent in how it handles exceptions. It’s also usable in more places. BlueCruise is nifty when it works, but it stops working in very strange ways. AP has gotten much better since I bought the car; I haven’t seen many updates to BlueCruise. What have other folks experienced?

Thomas Larsen
2 years ago
Reply to  OFFLINE

I disagree. Agile development works really *terribly* for this type of thing, *unless* it is done as a method to meet a set of rigid standards that have been defined outside the economic interests of the developer (in this case by the society in which the cars will operate, or, if you prefer, “the government”).

It’s all about the MVP.

The MVP must be set by us, not the car manufacturers. Like emissions, lights, brakes, airbags, windscreen wipers, crash standards, etc. etc.

OFFLINE
2 years ago
Reply to  Thomas Larsen

It already is. MVP = not getting sued out of existence. We set that. You’ll find a lot of this pervades AI research for mission critical tasks.

Thomas Larsen
2 years ago
Reply to  OFFLINE

That just covers the not-crashing-into-too-many-women-pulling-a-bike-across-a-dark-road scenario. And only barely that. Gratuitously crashing into police cars, ambulances, and white semi trailers should already be a sued-out-of-existence scenario, but here we are.

What lawsuits are even worse at resolving is all the near-misses, the erratic hesitating for no good reason, and all the other unpredictable behavior that, while not actually injuring anyone, makes all other road users unsure of what the vehicle is doing.

People do those things too, but they at least risk being pulled over, given a sobriety test, and carted off to the slammer to sleep it off. None of that seems to be happening with automated cars. So, no, the system is unfortunately not working.

Thomas Larsen
2 years ago
Reply to  Thomas Larsen

[OT comment] Spelling and writing coherent sentences is hard. I really wish there was an option to edit posts (incidentally likely another MVP “casualty”, so maybe not so off-topic after all :-).

OFFLINE
2 years ago
Reply to  Thomas Larsen

Unfortunately, I think it’s the best we’re going to get. I don’t see any other way that will work, because someday, no matter how much testing we do, they have to let the cars onto the road. That’s when you find out which cases you didn’t cover.

Citrus
2 years ago
Reply to  OFFLINE

One of the problems with the “safer than the average driver” argument is that proponents never really explain HOW it’s safer than the average driver. In what metrics is it doing better? What are the issues the average driver has that this solution fixes? What are the shortcomings compared to the average driver, such as surprising edge cases? Is the behavior predictable to other drivers on the road? I would be interested in seeing this! But I usually just see “it’s safer!”

But the real question, is it actually safer? For that you have to do a ton of statistical rigor that a lot of the boosters straight up haven’t. For example, statistically, there are substantially fewer accidents with AutoPilot active per miles driven than the national average. See, it’s safer right? Except, what I haven’t been able to find is the same statistical rigor applied to the actual driving situations and whether or not AutoPilot is safer than average in the exact same situations. After all, AutoPilot is only really usable in relatively stable situations – Full Self Driving might be tested on city streets, but AutoPilot is largely designed for places like highways, which are relatively low in accident rates anyway because of fewer variables a driver has to account for.

And this post isn’t saying it is, or isn’t, safer. It’s saying I haven’t seen data that actually can be used to illustrate whether or not it is, and a lot of the data presented needs way more context than people like to give.

OFFLINE
2 years ago
Reply to  Citrus

A fair point. It is within the bounds of a government to set those metrics. I’d like to see the raw data too. The “what does it mean” question that the SAE tried to address is mostly secondary to safety metrics, and won’t be resolved until someone solves the problem. To Torch’s point: it doesn’t mean much. But does it have to, as long as it’s driving safely?

Note: Autopilot works pretty well for non highway situations too. Driving down a busy street in traffic? AP works well. So my n=1 experience is that it’s safer, but I pay attention too.

Vetatur Fumare
2 years ago
Reply to  OFFLINE

There is another question everyone seems to skip: is Level 5 autonomy at all desirable? All it will do is produce infinitely more traffic. Examples:
All those a-holes who remote start their cars and let them idle for fifteen minutes will put their car on AutoPilot while they are at the Gym downtown, looping around the block ad infinitum rather than pay for parking. My friend Maude will send an empty car from Manhattan to East Hampton (and back) with her daughter’s second iPad that she left in her room on the Upper West Side.

Ivan256
2 years ago
Reply to  OFFLINE

An incremental approach might be how we “get there,” but these don’t have to be the increments.

Level 3 is just bad. Humans are fundamentally incapable of doing what Level 3 requires. Some people might be able to pull it off some of the time. But NOBODY can do it all of the time.

Additionally, the way these levels are defined isn’t a nice linear progression. Each level is exponentially more difficult than the one before it. So what are we supposed to do incrementally between level 3 and level 4? It will probably take 10x as long or more to get there than it will to go from level 2 to level 3….

And then there’s this: “The best way to track that is with accident statistics and associated factors.”

That’s reprehensible. That means your plan is to experiment with the technology on bystanders and judge it after the fact based on its kill rate?

This shit needs to be banned until the people planning to profit off of it can prove themselves trustworthy. They certainly haven’t yet….

Pietro Sparacino
2 years ago

so much of the industry is “still working on smooth lane changes.”

We’re doomed.

These car companies are rushing to get a sexy self driving car on the market, and then beta testing the safety features in the real world with real humans who didn’t sign up for this. It shouldn’t be allowed.

Alfa Romeo Tango
2 years ago

And I’ll add to that before the weird nerds kick in with “you agree to beta test when you buy the car,” no one else on the road has agreed to be a part of this beta test. They should be putting those engineers’ brainpower towards perfecting passive safety systems that still fail regularly before even touching autonomy.

OFFLINE
2 years ago

I didn’t agree to any of the bad behavior I see on a drive. Yet here we are.

Ivan256
2 years ago
Reply to  OFFLINE

Was the bad behavior legally sanctioned?

Did somebody change the rules so that suddenly two wrongs make a right?

Der Foo
2 years ago

If I’m not even able to understand the SAE Levels well enough to contemplate the differences while walking and NOT run into walls, am I still allowed to operate a L2 or L3 vehicle?

Andrew Wyman
2 years ago

There are also the problems of pattern recognition. Computers are great at this, but only for specified parameters. Meanwhile my brain can abstractly connect two items. So we need a lot more learning in that area.

Drew
2 years ago

The levels of automation are garbage. If I told someone my car was capable of Level 2 automation, they would laugh. But it meets the criteria, despite the requirement for near constant input and the lane centering being rather limited.
Just like EVs, the better solution is better public transit and less focus on car ownership. Especially since this means you aren’t even enjoying driving.

Mr.Asa
2 years ago

“There isn’t an aggressive, industry-wide push to figure it [getting the car to a safe shoulder] out, because most companies don’t care about failover states as much as getting the cars to just stay in their lanes and not crash into anything.”

So how do we, as citizens sharing these roads but not necessarily buying these cars, fix this? What’s the process to get the automakers to change their tune?
About the only way that I know of for a corporation to change its tune is a massive upwelling of negative sentiment (and that just isn’t happening), or a lawsuit, and without an actual crash and the death of someone, that’s going to be ignored.

We can lobby our congresspeople and Senators, cause that works so well the rest of the time.

Are we just stuck in this dystopia until people die?

OFFLINE
2 years ago
Reply to  Mr.Asa

Nope. Lawsuits are going to be how this is determined. It’s a classic function of introducing negative feedback, and that’s how it’s going to happen.

Mr.Asa
2 years ago
Reply to  OFFLINE

So we can sue now and claim that this is unsafe and they are releasing a dangerous product? Before someone dies? That lawsuit won’t make any headway.

OFFLINE
2 years ago
Reply to  Mr.Asa

I hear you. The fact is, you *can* sue if you think something is unsafe before someone dies. You’re going to have to prove it in front of a court though.

The very challenge is this: What’s unsafe? A computer has WAY faster reflexes than we do. What *we* think is unsafe may be completely okay. Example: in the ’70s and ’80s, there was a rash of F-111 crashes at Mountain Home Air Force Base in Idaho. It turns out that the ground-following autopilot in the F-111 was much faster than human reflexes. Pilots were disengaging the autopilot when they were heading into a mountain and didn’t have reflexes fast enough to pull up, where the computer did. Boom. So what’s less safe: you thinking you know what the computer is going to do, or your assumption about what you can do?

Thomas Larsen
2 years ago
Reply to  OFFLINE

Safer is one thing, the perception of justice is quite another. Unless the automated car is literally flawless it is going to be a problem.

If some driver injures or kills someone close to me, and assuming it’s their actual fault and there’s at least some negligence involved, I want that person to suffer. Economically, sure, but physically in particular. I want them to go to jail.

The law doesn’t work that way today with automated cars. If the car is at fault, will the software engineer who wrote the code, the engineer who was supposed to review it, their manager, the company CEO, and the chairman of the board, all of whom are in their own way responsible by their choices, actions, and non-actions for the product being released, ever go to jail?

Of course they won’t. Just look at Boeing, which killed 346 people with the 737 MAX MCAS f-up with no significant consequences at all for anyone in charge.
