I’ve been thinking a lot about GM’s abandonment of Cruise and its robotaxi program this week. Something fundamentally had to change and it wasn’t just GM’s priorities. How did a company go from a forecast of generating billions to costing billions so quickly?
John Krafcik, former CEO of Waymo and Hyundai, has a theory, backed up by at least one former Cruise exec. The basic idea is that how you think about safety will ultimately guide how successful you are. In a race to commercialize, Cruise set its safety targets too low and the price of revising them was too high, thus GM pulled the plug.
This is relevant this morning because there’s a report that says the Trump Administration is going to waive safety reporting standards for autonomous vehicles, something Tesla wants quite badly.
It’s just one of many changes coming with this new President, and automakers are doing their best to act like it’ll be fine, including at Ford. Do you know who else seems fine? Carlos Tavares, who says his departure from Stellantis was “amicable” and that, given the chance, he wouldn’t do much differently.
GM ‘Had No Chance To Catch Waymo’
I wonder what it says about me that I continue to use Twitter even though it’s a way worse product than it used to be. Is it because I have so many followers? Laziness? Force of habit?
I ask this question for two reasons. First, because in thinking about humans and machines it’s worth recognizing that human beings are not logical. Second because, unfortunately, I’m going to have to chase people across both Instagram’s Threads and Bluesky for this story, which is annoying.
Let’s start with this thread from robotics engineer Adam Cook on Bluesky:
1/10 I am not as “bullish” on the business of self-driving car fleets as some are out there… but indeed… this is an embarrassment.
GM should be embarrassed.
Detroit is embarrassed here.
And it comes from Kyle Vogt’s incompetence on how to build a safety culture and GM’s negligence on oversight.
— Adam Cook (@motorcityadam.com) December 11, 2024 at 9:15 AM
The theme of his thread is that Cruise founder Kyle Vogt wasn’t interested in building a safety culture that would ultimately be successful. Specifically, this struck me:
6/10 The development of a systems safety lifecycle and a safety culture is THE value in any organization that develops safety-critical systems.
That’s it.
Not the “AI”.
Not the whatever.
The maturity of safety lifecycle.
BECAUSE it is that maturity that provides quantification of BUSINESS RISK.
— Adam Cook (@motorcityadam.com) December 11, 2024 at 9:15 AM
That sounds right to me and, if that’s the case, the cost of making Cruise successful would be enormous. John Krafcik, the former CEO of Hyundai here in North America and CEO of Waymo during its rollout, said basically the same on Threads:
View on Threads
This is echoed by former Cruiser, and our ex-boss, Ray Wert over on Threads again:
View on Threads
Ok, enough of the embeds.
In this way of looking at the world, once GM realized that Cruise could not deliver the expected safety level and would need more money to get there, it suddenly became untenable.
But “safety” isn’t a binary concept. There’s no specific agreed-upon level of safety for autonomous cars. Is a vehicle that keeps its occupants safe at the expense of everyone around it safe? How many points is an injury prevented worth relative to a death prevented?
The big challenge for automakers is not just building cars that are “safe” but cars that are “safe enough” for people to accept. How safe is that? Normal human drivers get in fatal crashes all the time and it’s not a national conversation every time it happens, but it’s somehow a big deal when it’s a robotic car (as happened with Uber’s autonomous test vehicle). Even if a Level 4 robotaxi is safer than a human driver, that probably isn’t enough.
This is why I’m not as hyped about Tesla’s Cybercab as other people. I’m not convinced that the company’s safety level will hold up to a nationwide driverless rollout, though to even get close to that it’ll have to overcome a lot of regulatory hurdles.
Trump Administration Reportedly To Cut ADAS Crash Reporting Requirements
Oh, hey, look at that. The Trump transition team wants the new administration to remove the same ADAS (Advanced Driver-Assistance System) crash reporting system that Tesla CEO Elon Musk, who spent more than a quarter of a billion dollars on President Trump’s reelection, wants to get rid of.
Weird coincidence!
From Reuters, who broke the story:
The recommendation to kill the crash-reporting rule came from a transition team tasked with producing a 100-day strategy for automotive policy. The group called the measure a mandate for “excessive” data collection, the document seen by Reuters shows.
Neat.
A Reuters analysis of the NHTSA crash data shows Tesla accounted for 40 out of 45 fatal crashes reported to NHTSA through Oct. 15.
Among the Tesla crashes NHTSA investigated under the provision were a 2023 fatal accident in Virginia where a driver using the car’s “Autopilot” feature slammed into a tractor-trailer and a California wreck the same year where an Autopiloted Tesla hit a firetruck, killing the driver and injuring four firefighters.
In Tesla’s defense, no automaker seems to like this reporting requirement and most would rather it be gone, so this isn’t necessarily a case of Musk trying to get an advantage over his competition (as with removing the IRA tax credit). Some also think Tesla is simply more diligent about reporting crashes than other automakers, which skews the data against it.
One of the other big changes that Tesla wants is a national framework for driverless car regulations as opposed to the onerous state-by-state approach. That is not a particularly divisive viewpoint as having each state set its own policy for these systems is annoying and likely stifles innovation.
Ford CEO Thinks Company Is Ready For A Trump Administration
It’s not clear to anyone what President Trump will actually do with tariffs, autonomous cars, or anything. As with any politician, there’s a gap between campaign promises and political reality.
A lot of Trump’s messaging focused on American jobs and, if you’re the CEO of Ford, that’s not a terrible thing to hear. At least that’s what CEO Jim Farley told reporters in a scrum this week:
“After 120 years, we’re pretty experienced with policy change,” he said. “I think Ford is very well-positioned.”
He went on to say:
- “We have the highest number of U.S. employees of any car company.”
- “We have the largest number of production of U.S. vehicles.”
- “We have the largest exports from the United States of vehicles.”
- “We have hybrid and electric, so people can choose.”
Ford has, indeed, gone through many eras of politics. The company itself is political; its leaders and employees have historically leaned Republican, though a quick glance at patterns of political giving at Ford shows the usual corporate skew toward incumbents.
Carlos Tavares Is Doing Great, Don’t Worry
Don’t cry for Carlos Tavares; the truth is he never left you. The deposed Stellantis CEO, pictured above on the beach, is doing great, according to an interview in the Portuguese newspaper Expresso.
That’s a paywalled article, but you can read the highlights here:
Tavares told the newspaper that the main concern had been to “protect the company so that a difference in points of view wouldn’t create the risk of misaligning the company”.
“A company that has 250,000 employees, revenues of 190 billion euros, 15 brands that it sells all over the world, is not a company that can be managed with a lack of alignment – which immediately has an impact on strategic management,” he added.
Asked if he felt hurt by the outcome, he replied: “No, not at all”. He said he would act the same way if he could go back in time.
See, it’s all good.
“When you’re facing a storm, you have to steer the boat according to the waves. You can’t have a discussion about the best way to face them.”
Wise words.
What I’m Listening To While Writing TMD
I’m a longtime Tyler, The Creator fan, and not just because he has excellent taste in cars. Honestly, the LaFerrari here isn’t even his most interesting automotive choice for one of his projects (his pink Abarth is untouchable). It’s a new album and I’ve been enjoying “Noid,” so I’ll go with it for today.
The Big Question
How safe does an autonomous car need to be?
Before I can decide how safe autonomous cars have to be, I’ll have to start seeing some apples-to-apples comparisons. No more comparing nearly-new cars with modern safety features, operating only on well-marked roads in good conditions, against everything else on every other road in everything from sunshine to blizzards.
Something I hadn’t thought of before: that Tesla robotaxi could actually have a good use case in Musk’s tunnels. It wouldn’t be a big-volume business, but at least they wouldn’t be on public roads. The funny question is, in that case, why make it so aerodynamic?
Can someone from the manufacturer’s C-suite be imprisoned when it kills someone?
I’m on-board after that happens.
We’re on board with board members too, not just C-peeps.
Jails are already too overcrowded. I say exile them to Russia.
To paraphrase the old IBM quote: a computer can never be held accountable, therefore a computer must never make a life-and-death decision.
Solve the accountability problem and people will accept self driving cars
In a related vein, we screwed up bigtime as a country when we gave corporations the rights of a person without any of the responsibilities/liabilities. We completely decoupled individual decisions (CEOs/executives) from consequences or liability, and here we are.
I argue that is the single most damaging thing that started the downfall of this country.
If the recent events have taught us anything, it is that the c-suite only gets justice when it is meted out vigilante style.
How safe? I guess as safe as I am or safer, though my wife would disagree. (She says I’m a terrible driver, but who put the dent in the Bolt fender???)
For a car to truly be that good, though, I’m thinking real AI would be needed. As in: there’s a squirrel in the road, but if I slam on my brakes the car behind me could rear-end me, so does the squirrel die today? Also, squirrels are quick, so maybe he can dodge me. But then what if it’s a little kid in the road? Gotta lock ’em up.
As a tangent: we were driving in our previous neighborhood years ago and saw a toddler strolling across the street. We weren’t anywhere near hitting him, but we stopped in the middle of the street and put on our hazards, then went knocking on doors to find the parent, who then freaked (they were visiting and the toddler got out through the garage). But in that scenario, how do you stop the car suddenly and tell it “hazard mode” or whatnot? There are just too many variables, and again you’d need true AI that can think like a person to help (not fully control) what’s going on. Are there other toddlers around? Is a parent going to come running after them?
No way that is Carlos Tavares, unless he has gained 50 kg in a fortnight… AI strikes again. Wonder how long before the Carlos in the photo sues for defamation…
Tesla accounting for 40 out of 45 fatal automated-driving-system crashes is absolutely insane. I get that there are a gazillion of those stupid things out there, but holy shit.
Nice to hear that Ray Wert is still out there doing stuff.
The Orange asshole doesn’t even know what he’s going to do from minute to minute. Between his dementia and being so easily bought by the highest bidder there’s no way of knowing his psychotic agenda. We’re in for a hell of a ride.
Autonomous vehicles need to be significantly less risky (no such thing as safe) than human drivers or what’s the point? The problem arises when trying to prove risk reduction. How can manufacturers prove their vehicles are – let’s say – only 20% as likely to get in an accident as human drivers for an equal population of vehicles driven without tracking and reporting? Are we supposed to take their word for it? Fat chance. Personally, I’d prefer autonomous vehicles to be no more than 10% as risky as human drivers and I’m going to need to see the proof before I cut them loose on the roads because even that level of “safety” will decline in mixed operation with autonomous/human combined traffic.
I think society at large will want to see safety similar to other forms of mass transit before giving up control. If autonomous driving can provide safety levels similar to air travel or rail, then society will accept it. This (of course) means they would have to be orders of magnitude safer than an average driver.
If my childhood taught me anything, it’s that I need to “Avoid the Noid” so I’m really conflicted about this album.
Until they’re K.I.T.T. levels of capable I’m really not that interested in AVs. It’s always bugged me that EVs and AVs seem to be so conflated. Remember the first VW ID concepts had steering wheels that disappeared into the dash because they were all going to be autonomous by the time the cars were released?
I don’t believe AVs can get to an acceptable threshold until we figure out a few things. V2I and V2V infrastructure and protocols need to be agreed on and built out before we can get to a high enough success rate on a large scale. I doubt an AV is going to be able to truly account for all vehicles around it via camera/LiDAR if an evasive maneuver is needed.
And until we get to 100% of vehicles on the road being AVs, they’re going to have to account for the human factor of driving. The adaptive cruise control in my car leaves a large opening that people will (and definitely do!) take advantage of if they want to make a pass. I’ll “blame” Mazda’s programming of the system, but it’s very aggressive about making sure the prescribed following distance is met. Although I’ve used better adaptive systems, all of them leave that window that will be taken advantage of by a human driver (or even another AV if it believes it should be in a different lane). If a human element is still going to muck things up, how are the systems programmed to solve the Trolley Problem?
I have no doubt we’ll get there eventually, but it’s not right around the corner like Elon wants everyone to believe.
Let’s start with “equally as safe, on average, in all conditions.” There’s been a lot of data cherry-picking going on when companies (cough cough, Tesla) claim their autonomous systems are safer than humans. If your data comes overwhelmingly from highway cruising in clear weather, you’re already comparing against the safest baseline, but they’re happy to compare with overall human-driven crash statistics that include the conditions they won’t even try to handle.
I think the real question isn’t a matter of safety analysis; it’s a question of liability. What’s the first question any time anything goes wrong? Who’s to blame? Something went wrong, so who let it happen? I’m not saying it’s right, but I am saying that’s how we’re wired. We don’t know how to handle the idea of NOT assigning fault to a person. So who is to blame if something goes wrong? Who is the victim and who is the perpetrator? Unless we can come up with a way to deal with that, the answer to your question is 100% safe. The faults that lead to casualties need to be statistically insignificant.
Yep, until this is sorted, all discussions of safety thresholds are on hold.
If it’s the auto companies liable, they won’t release anything until they know it won’t cost them more in claims.
If it’s the drivers, the big insurance companies will be making the same calculations.
Either way, I don’t think it’s going to be decided by the government.
This is exactly it. I don’t think there’s a quantifiable level of “safe” that is a tipping point. It’s about responsibility when something goes bad.
In particular, who gets sued, and how much of a pain in the ass it will be to resolve these things above and beyond how it is today with human drivers.
How good do autonomous cars have to be? Damn near perfect. We accept that we are all human and we understand that state of being. We also need a target for our emotions when things go wrong. So who do I blame when something DOES go wrong? Who holds the liability? Whose fault is it? We need, as humans, the ability to assign those things to someone. Not a computer, not a corporation. It’s that same need that is at the root of the unfortunate murder of a CEO recently. We, as humans, want to hold another human accountable for our suffering if a human was at all involved. And humans make the cars and the programming.
To build off this extremely good point: legally, we need liability answers and a chain of individuals being legally liable, as with similar things. As it sits, companies (namely Tesla) hide behind basic disclaimers that users barely comprehend, which put every bit of liability on the consumer rather than the manufacturer of the hardware and software. I point out Tesla as egregious because Musk promises the moon, the company regularly and actively misleads the public about its systems’ capabilities, and then it points the finger at dead consumers post-crash and claims it did nothing wrong. Until some legal framework has been cemented, I don’t think the majority of the public will accept AVs as valid.
They need to be perfect. If these companies are spending insane amounts of money to make cars supposedly “drive themselves,” then I want zero crashes caused by the car not knowing what to do in a certain situation. A car dragging a lady 100 feet underneath it, Teslas repeatedly slamming into emergency vehicles? Are you fucking kidding me?
I sincerely doubt you can get to that 100% threshold until all cars on the road are AVs. One chucklehead human driving can bring huge unpredictability to the scene.
That’s the problem with AVs: it’s either all or nothing to prevent tragedy, and they can pry my steering wheel from my cold, dead hands.
It’s worse than that. All would have to be AVs, but also no pedestrians allowed, somehow getting rid of wildlife, and weather…
The thing with driving is that it’s all edge cases, and everything is unpredictable.
Given the season, if an autonomous car can prevent Grandma getting run over by a reindeer, then it’s safe enough…
Does Santa have autonomous reindeer?
xkcd: Horses
“If you tried to ride into a tree, the horse could be like “No.””
“How safe does an autonomous car need to be?”
Safer than 85 percent of human drivers to start and work upwards from there.
The nice thing about AVs is they record events of a crash and ALL AVs can learn from those mistakes so improvements can happen very quickly. Humans, not so much.
As a practical metric? They need to be cheaper to insure for the same coverage.
“How safe does an autonomous car need to be?”
Are we talking robotaxis or are we talking cars I own?
Robotaxis, eh, good enough is good enough, as far as I’m concerned. I probably won’t be using them enough to move my care-o-meter.
But a car I own? It had better be able to drive better than me. Otherwise, I’m not accepting the liability if it crashes.
But in both cases, if I’m in an autonomous vehicle and it hits something… https://youtu.be/2g5Hz17C4is?si=XS2kKz4jqKQlpVOb
This is going to be the great Tesla robotaxi con: you can buy it but can’t drive it and you will still be liable for it.
We can’t see it from down here, but this is all just part of a very big brain strategy to bring down the price of eggs. Don’t worry guys, it’s gonna happen. It’s hard to bring things down once they’re up, but the very stable geniuses are on it. Eggs will be cheaper soon
I just paid $2 for a dozen extra large eggs at the Kroger this week.
How much cheaper do they need to be?
Whatever the price of eggs was when Reagan was president.
You made me look it up. Eggs were 86 cents in 1994 so $1.83 adjusted for inflation.
Problem is they are expecting that 86 cent price.
The circles in the Venn diagram of magats and thinking/acting rationally are light years from touching.
I agree with you but…take a chill pill.
That comment was after I’d already taken 5.
They should be free for the magats and $85 a dozen for everyone else. Oh, and minorities are not allowed to buy them.
I just want more hot-breakfast options without eggs.
This! This right here.
I’ve never been a huge egg fan but there’s something off about eggs I don’t make at home. Like the eggs on an egg mcmuffin just seem sketch.
Only the Karens know.
When your mother is hit by a car and lying in the street, then hit and dragged by an autonomous car – You may have a more definite opinion on this matter.
https://www.theverge.com/2024/9/30/24258445/cruise-nhtsa-fine-robotaxi-pedestrian-drag
My mother is already dead. Happy?
Way to completely miss the point.
You missed mine too.
Sorry I overreacted, Urban Runabout; you accidentally hit a nerve.
What I was trying to point out is that for a certain percentage of people, they will never be safe enough. We could be driving around in Nerf-mobiles and they would still be scared.
Thus the joke of asking the Karens.
If you want reality, it doesn’t matter what we think. The bureaucrats will decide for themselves based on their data and the influence of those in political power, not the readers and commenters of a car site.
The questions posted are always just a thinking exercise and not meant seriously. Sarcastic jokes should always be expected.
So who in your more definitive opinion is the greater danger:
The AV that pulled over as soon as possible after having a pedestrian thrown against it
or
The meat bag driver that hit the pedestrian hard enough to launch her into the AV and then fled the scene and is still at large today?