I know that amongst hardcore proponents of automated driving, if you dare criticize the state of AVs or express skepticism about the future of AVs, you’ll often be branded a luddite who just wants to see humanity reduced back to mud farming and burning dung for fuel. I think that accusation will be hard to level at the authors of a TechBrief about the state of automated vehicles: the Association for Computing Machinery (ACM). Founded in 1947, it’s the world’s largest scientific and educational computing society.
These are hardcore computer nerds, and they can probably do complex binary math in their heads and yell out Z80 opcodes when they have orgasms. They’re the opposite of luddites. So when they say that “fully automated AVs may never be able to operate safely without a human’s active attention” I think it makes sense to listen to them.
This assessment comes from a TechBrief recently released by the organization, one that states the problem it’s addressing as:
Deficiencies in critical testing data and automated vehicle technology are impeding informed regulation and possible deployment of demonstrably safe automated vehicles.
They’re noting that both testing data and the fundamental AV tech are lacking, which is getting in the way of developing AV regulations and deploying such vehicles. They also note some very interesting and impactful “policy implications”:
• Regulators should not assume that fully automated vehicles will necessarily reduce road injuries and fatalities.
• It is unclear that fully automated vehicles will be able to operate safely without a human driver’s attention, except on limited roadways and under controlled conditions.
• Improved safety outcomes depend on appropriately regulating the safety engineering, testing, and ongoing performance of automated vehicles.
(I do appreciate that they set the timeline for the start of AV development to 99 years ago, when the Houdina company tested a radio-controlled car, shortly before the famous escape artist Harry Houdini trashed their offices. Yes, that’s all true.)
That first implication is a real blow to the biggest claim made by AV proponents, that computer-driven cars will be much safer than human-driven ones, a claim that, really, we have no hard evidence for or against at this moment. The third implication is really common sense: if we want safe AVs, we need to pay a lot of attention to how we regulate testing, safety engineering, and performance. That feels almost obvious, and yet we’re not really doing that.
But it’s that second implication that I think is actually the most important one, because I think within it lies the key to how we can actually have a functional system of automated vehicles. The implication first notes that it is not clear – and I think that’s almost an understatement – whether we can have AVs that will operate without a human’s supervision and attention. Then it states “except on limited roadways and under controlled conditions,” and I think that’s the part most major AV developers – at least in America – seem to be ignoring.
If we go back and think about the SAE’s Levels of Driving Automation, we can tell that the ACM’s TechBrief is saying we don’t know if AVs can ever move past Level 2 – always supervised – though they could possibly hit Level 4, full, human-attention-free automated driving, but only in controlled situations and areas.
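For reference, since these level distinctions carry most of the argument here, this is a rough, hypothetical sketch – my own paraphrase in code form, not official SAE J3016 wording and not anything from the TechBrief – of what each level means for human attention:

```python
# Rough, unofficial paraphrase of the SAE driving automation levels.
# The names and descriptions here are illustrative, not the standard's text.
SAE_LEVELS = {
    0: ("No automation",          "human drives and pays attention",             "anywhere"),
    1: ("Driver assistance",      "human drives and pays attention",             "anywhere"),
    2: ("Partial automation",     "human must supervise at all times",           "wherever the system engages"),
    3: ("Conditional automation", "human must be ready to take over when asked", "limited conditions"),
    4: ("High automation",        "no human attention needed",                   "only within a defined operational domain"),
    5: ("Full automation",        "no human attention needed",                   "anywhere, anytime"),
}

def needs_human_attention(level: int) -> bool:
    """Levels 0 through 3 still lean on a human; 4 and 5 don't, within their domains."""
    return level <= 3

for lvl, (name, attention, where) in SAE_LEVELS.items():
    print(f"Level {lvl} ({name}): {attention}; operates {where}")
```

The ACM’s point, as I read it, is that the jump from “must supervise” to “no attention needed” may only ever happen inside that “defined operational domain” part of Level 4, not out in the anywhere-anytime world of Level 5.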
The problem is that most AV companies, or companies trying hard to develop AVs, like Tesla, seem to be focused on achieving Level 5, which is where the car can drive anywhere, anytime, all by itself. I think the focus on Level 5 as the end goal is actually going to be what keeps AVs from becoming viable at all.
It’s sort of that old “perfect is the enemy of the good” problem; trying to make something that works perfectly in all contexts and situations is such a colossal problem to solve that it may prove not even worth solving. That’s because if we can develop special AV-friendly roadways and zones, making the very road infrastructure a willing and helpful participant in the automated driving task, we can accomplish most of what we really want AVs for – letting you sleep or not pay attention on long drives or in slow, tedious traffic, helping people who are unable to drive enjoy some of that mobility, increased safety, all that – in most of the places that actually matter, instead of beating our collective heads against the big, hard wall that is Level 5 autonomy.
Look, if you want total driving freedom to drive anywhere, anytime, no matter what, there’s already a solution: drive your own damn car.
If you want your car to drive for you in the most common areas you’re likely to drive, for at least significant parts of your driving needs, then I think we can make some form of Level 4 autonomy work.
The focus on Level 5 robotaxis, for example, like what Elon Musk has promised on the given-some-history-unsettling date of August 8 of this year:
Tesla Robotaxi unveil on 8/8
— Elon Musk (@elonmusk) April 5, 2024
…I’m not sure is even a goal that makes any sense. I’m also very skeptical we’ll see a fully functional robotaxi in early August from Tesla, but, you know, weirder things have happened?
I think a far better use of everyone’s time and resources would be to come together and really establish some good standards for what Level 4 AVs could look like, what sort of infrastructure help could be desirable and feasible, and get all these various companies on the same page, working to develop a unified autonomous driving system as opposed to everyone trying to make sky-based-pie Level 5 automated cars that will not be viable for many decades to come, if at all. But the good news is we don’t really need those to get most of the AV benefits that matter.
The ACM’s TechBrief comes up with some very rational conclusions:
• Automated vehicle technologies must be subject in development to the same rigor, testing, and assessment standards as all other complex safety-critical systems.
• Claimed benefits and risks of automated vehicle technologies must be rigorously and empirically verified before basing policy and regulation upon them.
• Automated vehicle safety policy should address critical safety data, cybersecurity, and machine learning issues.
These all make sense. They’re not the sexiest part of AVs to focus on, but this is the kind of thing that actually matters – the sorts of problems that need to be addressed if we’re to actually make any of this a viable part of life at all.
And remember, this isn’t me saying this; it’s the biggest group of computer geeks in the world. I suggest reading the brief; it’s short and gives a really clear-eyed look at the actual state of AVs, free from all the hype.
I’m late, I know. But I have to give my 2 cents. As long as the DRIVER is the backup system, this will never work. If the driver isn’t focused on the task, then the driver will never be able to deal with an emergency.
I’m much more comfortable with the computer being the backup system for the attentive, task-oriented driver. My car has activated the impact reduction system twice. Each time I was already on the brake.
If you don’t want to drive, don’t drive. Why would someone who doesn’t want to drive even want to own a car?
It amazes me how every pundit on this issue manages to get it wrong. The people (Musk) saying it’s ready to be deployed are obviously and, frankly, criminally wrong. But the people saying it will never work are also wrong.
They’re assuming a continuous iterative development forever, as though 300 years from now the 8000486 will work essentially in the same way as computers today, only much faster. They fail to understand that at some point, artificial intelligence actually will be intelligent because it will be capable of comprehension. Once that happens, cars will be truly autonomous but it won’t matter because we’ll all be out of work and won’t be able to afford to drive anywhere.
Is Tesla going to get us there? Hell no, they can’t even make an accelerator pedal right. But at some point a company that isn’t run by a syphilitic ass-pimple is going to pull it off and change the world.
I’ve been trying to explain this to people for years! My dad works with a quantum computing professor who told us, “if you could get AVs the awareness of a squirrel, that would be a monumental leap forward from current technology”…
Now, do you want something that is less aware than a squirrel driving on open roads? I don’t.
You can make the case for controlled-environment use.
Now, for those of you who like tin-foil hats, this one is for you!
*puts tin-foil hat on
I believe the push for fully autonomous vehicles is directly linked to the current “gold rush” on personal data.
The reason being, your car has “the 4th screen”, and this is the one screen in your life that is not actively marketing to you.
The 4 screens in most peoples’ lives: TV, computer, phone, car infotainment.
IF they would like people in cars to fully interact with an infotainment screen, they need to take driving away from the user. That way they can interact with the screen without worry. Then manufacturers can use the screen to suggest things to you (like GM’s system suggesting that there’s a Starbucks ahead) or have you use apps / your phone on it and capture your personal data. And then sell your personal data to the highest bidder.
I think that’s the real motivation behind the push for AV’s.
*takes hat back off
Tesla Robotaxi unveil on 8/8
My hot take: Mr. Musk did not state a year. So if the robotaxi is unveiled on 7/8/2136, that would technically be a month ahead of prediction.
August 3008?
For me, I feel Level 5 would require human-level AI. Which leads to the question of ethics: would it be right to make something that intelligent and then tell it, you must do as I say, you belong to me? Sounds like slavery to me.
Actually, our system has some critical 1s and 0s flip every nanosecond, so it’s technically a new, separate AI at any given moment. Nobody suffers for long enough to be quantifiable.
Interesting. But I would still feel bad. Humans, well, humanize. Well, at least the humane ones.
Oh, yeah. Wasn’t disagreeing with you. Was just an abstract idea for “getting around” that.
Lol. Yup. But my philosophy is simple: we are still wrapping our collective heads around human rights, so let’s not make AI so advanced that we have questions about its rights. That and the whole Skynet problem…
This will not surprise anyone who read “Robot, Take the Wheel” by “famed automotive expert” Jason Torchinsky.
I discovered my programmers were using only the core 8080 instruction set for backward compatibility. I am confident that with the full Z80 set, full self driving can be achieved. – Elon Musk
My take is level 5 is a pipe dream outside of controlled access roadways (i.e. the US Interstate system and similar) – that will simplify the environment enough for the computer to reliably handle all the variables. Level 4 will forever be a disaster waiting to happen, as the human won’t be paying enough attention to take over when needed.
I’m also guessing Tesla’s FSD will never be properly implemented on the current generation of their cars. Cameras alone aren’t an adequate sensor suite.
Maybe I should have posted this on that hot takes article… Let me grab my thermal underwear.
I think you mean Level 3 will forever be a disaster waiting to happen, and I agree with that. Level 4 will not require the human to take over driving on short notice.
Probably. It’s been a while since I read through the definitions of each level. Either way, I don’t have much faith in automated driving systems in cars. I had a rental with an automatic braking system that went off when it didn’t need to, and almost got rear ended. My wife’s car’s adaptive cruise drives me crazy, as I frequently find myself doing 5 to 10 mph under the speed limit because it slowed me down way too far behind somebody driving well under the speed limit. I’m subconsciously waiting to get close enough to pass, but never get there.
Technology is great when properly applied (ahem, automatic transmissions) but putting too much drive assist into the act of driving will make things less safe, at least for the foreseeable future.
thank you for your homage to the bug-eyed Sprite, the purest of all sports cars
I hope the self-driving dreams die out the way the flying car dream did 60 years ago. All this money could be invested in better public transit instead. All the people who don’t want to drive could use that if it were a viable option, and all the people who do want to drive could drive on roads with less congestion and fewer folks who don’t want to be there, making it less stressful.
It is always so weird to me when people don’t want robust public transit and walkable cities. Nobody wants to drive downtown, and if I don’t have to think about practicality as much I’m going to get a way more interesting car than I have now – I love my car, make no mistake, but imagine if I didn’t have to buy a daily driver and could instead get something cooler. I looooved when I could walk to work.
That it has been framed as “taking away your car” instead of “taking away the worst parts of driving” is really infuriating.
Wait, what? The flying car dream is dead? Maaannn…
The biggest limitation towards more public transit is “the public”. Have you met them? Ugh.
This is the real reason.
Level 4 driving seems possible; from what I can tell, we’re not far off right now. Isn’t that kinda what SuperCruise is?
Level 5 may never happen, but that’s hard to define, because it will never be possible for the computer to drive in ALL conditions: there are already plenty of situations where it is not safe for a person to drive, at all, and you just have to wait out a storm. So if the computer can drive, without supervision, in all places and all situations EXCEPT a snowstorm, do you call it Level 4 or Level 5?
Hey, you’re alive! I miss highly opinionated RootWyrm comments.
I freely admit that I am not especially knowledgeable on self driving, but isn’t SuperCruise hands free in mapped zones, making it necessarily Level 3+? The SAE chart makes it pretty clear that if you’re steering, it’s level 2 or less, and if your hands are off the wheel, it’s level 3 or higher.
So sounds like our difference comes exclusively from working by different definitions. If you’re not using the SAE definition from the chart, like Torch is, what definitions are you using?
BolShevik it will never work because:
1. Computer programming is: if X happens, do Y. Fine, but it can’t take into account: if that object then does A, do C, unless object M does T. You need to program ALL eventualities.
2. Computer programmers are hacks. The biggest marketing scammers ever. They have over promised and underdelivered in EVERYTHING they have done.
3. The programmers do not drive. How does one program the right responses when they have no idea what the proper responses are?
4. Road conditions cause things to happen. Forget how many cameras, lasers, etc. Your back end slides out – how do you program how to recover from a water slide, a snow slide, a black-ice slide, and a power slide differently?
5. Insurance tracks hard braking, etc. – how does it measure needed braking vs. unnecessary braking? You can blame all humans, but looking for the perfect AI driver is an impossible dream.
Just a few issues.
You’re extremely uninformed about the state of software engineering.
And so is the oldest, most educated group ever. Did you read the article, or just want to seem stupid?
I read the article and the ACM paper, and I’ve been a professional software engineer who’s built ML and AI systems since before your Vehicross came out. Maybe lay off the intoxicants before commenting, OK?
That escalated quickly like a battery fire!
I thought it worth noting that none of your specific statements are found in the TechBrief. Only your conclusions are the same. Your foundations are drastically different.
These are hardcore computer nerds, and they can probably do complex binary math in their heads and yell out Z80 opcodes when they have orgasms.
Um, it’s a little disconcerting you know me that well. Speaking as a lifelong Z80 fan whose career was inspired by that processor, I find it quite amusing that the man who gets his jollies from an Apple II holds up the Z80 as the exemplar of pleasure. I mean, you’re right that the 6502 just doesn’t have the same sex appeal, but still.
My debutante software project was a Z80 assembly language CP/M BIOS for a home-built computer. I learned two things. One, I could probably make a living slinging code, and Two, I wanted to marry the girl who wirewrapped the processor board.
Hey, I thought about marrying the first girl to “wirewrap my processor board”! (Is that what the kids are calling it?). She was into it too! So hot.
Since I did marry that girl, tonight I’ll be asking, “Hey babe, wanna wirewrap my processor board?”
I’m still a 6502 man but I have plenty of Z80 fondness
You know what? I primarily blame Musk for the myth that we are anywhere near Level 5. He made so many BS claims about AP and FSD that I think otherwise rational people bought into it. I suspect that eventually morphed into the group think that allowed the burning of billions in the race to eat some robotaxi sky pie.
I’m selling shares of various bridges around the world. If you are a Tesla owner who has paid for AP or FSD, then you’re entitled to a 50% discount on all share purchases. Contact me now for further details about this amazing opportunity. Don’t miss out!
I think that without human-level AGI, Musk’s plan of vision-only autonomous driving is never going to happen. It’s odd that Musk says “humans don’t need more than vision, so that is enough input,” which ignores the big brain on Brad. Without human reasoning, it is impossible to program for edge cases, and neural nets only work as well as their training data and, by definition, can’t interpret edge cases. That, and cameras need to be kept clear, which is nigh impossible 8 months of the year in Canada.
I think with multiple sensors and graceful ways to fail, one day we might get reliable enough autonomous driving in some use cases. Eventually this may evolve to all operational domains but that’s a long ways off.
Whenever people talk about “edge cases” in terms of autonomous vehicles I feel like they’re ignoring that driving is all edge cases. Even on a very routine drive you’re probably seeing something you haven’t seen before, and even if you don’t realize it, it might be something that throws the computer.
Plus the idea that humans only need eyes is functionally insane, we’ve got all sorts of senses that help us while driving.
You have all kinds of senses that help you while driving, but really vision and proprioception are the only ones that you need, and the second one is not important in most driving (it becomes immensely important in things like drifting).
I sometimes use my hearing and sense of smell while driving, but I can’t think of many situations where it’s actually required.
Car horns and emergency vehicles come to mind for hearing, for smell I guess gas leaks are an obvious one (but ideally the car should know before I am able to smell it).
If hearing input is relevant while you are driving, something wrong may be occurring; be alert and keep listening.
If smelling input is relevant while you are driving, something wrong is probably happening; explore the issue, maybe pull over.
If tasting input is relevant while you are driving, something wrong is *definitely* happening. Get the eff out of your car!
Teslas also incorporate IMU telemetry (thankfully). There was talk of the cars’ cabin mics being put to use, but that likely hasn’t yet happened.
He really went with two 8s, huh? He has people comparing him to Henry Ford (but the bad parts) and he goes with the numbers that will get your plate flagged by the California DMV* because of certain connotations. Nobody around him went, “Elon, maybe given the world environment, at least some of which you’re responsible for right now, maybe pick a date that isn’t going to raise an eyebrow?”
*The California DMV bot on bsky is a source of joy for me.
This should be part of the minimum standard for ALL policy & regulation. Good intentions don’t equal good results!
I’ve said it before and will continue to say it: the only way to have fully-autonomous driving is if there was a system like in I, Robot (the Will Smith movie). All cars are connected to a central server/supercomputer that controls every car on the road, no humans doing the driving.
This is still not all of the problem though – even if the cars are all automated and synchronized you still have to deal with unknown objects entering the roadways.
Exactly – just this week I’ve had to avoid deer, the guy in front of me avoiding some deer, a dead deer that someone didn’t avoid, at least two asshats from Coraopolis (so many asshats…), more flood debris than I can count because it hasn’t stopped raining in weeks, and about 50 potholes.
I’ve often had the same thought that it makes a lot more sense to have autonomous driving systems take over when you, say, get on an entrance ramp to a limited-access Interstate-level highway. Compared to safely navigating city streets and intersections, the act of driving at a steady speed down an interstate is less complex by orders of magnitude. That situation is also where autonomous driving provides the greatest benefit. It seems like a no-brainer to focus on getting AVs to work in that situation first.
Let’s say, hypothetically, that the very first widespread AV adoption program was limited to the right or second-from-the-right lane on non-urban interstates. In this situation, AVs might be programmed to maintain a certain distance apart and the same speed, leaving plenty of room for manually driven vehicles to cross over that lane between the exit lane and the left lanes. Even if regulations required AVs to maintain the speed limit in that lane, how many people would gladly accept a slightly longer trip if it meant they could sleep, watch YouTube, or catch up on The Autopian on the way?
Here’s the problem, though: non-AV cars and drivers.
At 70 mph I like to stay at least 6-7 car lengths behind the other cars. This always just opens the door to crappy, rude drivers thinking 1 car length is for them to fit into and tailgate the person. I see it happen every day on my freeway commute. Sucks to be a safe driver surrounded by morons.
Or repurpose the already-existing separated HOV lanes as Level 4 vehicle lanes. Once you leave the protected lane, you need to take over driving controls.
Yeah but it’s very difficult to monetize this and that’s why the car companies are spending inordinate amounts of money trying to solve the more difficult problems.
Why is it difficult to monetize? You have a captive audience inside a self-driving car, often for hours at a time. Partnerships with streaming services and other entertainment options are a no-brainer.
I just don’t think people are going to pay more than kind of a marginal amount of money for Super Cruise compared with full automation. Maybe the automaker can squeeze an extra $1k or $2k out of the transaction at best.
That’s just not in the same ballpark with what they could get out of full automation.
Thankfully we don’t have them in the ATL area, but I have been visiting the Phoenix area and I see Waymo Jags all over there. For some quasi-irrational reason, these things really piss me tf off whenever I see them.
I walk around in Tempe during vacation and those things are all over the place. I give them a wide berth. With drivers, I can make eye contact so I know they have seen me before I cross the street. These things? Who knows.
I think for limited-access highways and closed-loop systems, these can be made to work.
“For some quasi-irrational reason, these things really piss me tf off whenever I see them”
Because, at least subconsciously, you realize they should not be allowed on public roadways, and that they represent a total failure of oversight, as this report describes.