I’ve been thinking a lot about GM’s abandonment of Cruise and its robotaxi program this week. Something fundamental had to change, and it wasn’t just GM’s priorities. How did a company go from forecasts of generating billions to costing billions so quickly?
John Krafcik, former CEO of Waymo and Hyundai, has a theory, backed up by at least one former Cruise exec. The basic idea is that how you think about safety will ultimately guide how successful you are. In a race to commercialize, Cruise set its safety targets too low and the price of revising them was too high, thus GM pulled the plug.
This is relevant this morning because there’s a report that says the Trump Administration is going to waive safety reporting standards for autonomous vehicles, something Tesla wants quite badly.
It’s just one of many changes coming with this new President, and automakers are doing their best to act like it’ll be fine, including at Ford. Do you know who else seems fine? Carlos Tavares, who says his departure from Stellantis was “amicable” and that, given the chance, he wouldn’t do much differently.
GM ‘Had No Chance To Catch Waymo’
I wonder what it says about me that I continue to use Twitter even though it’s a way worse product than it used to be. Is it because I have so many followers? Laziness? Force of habit?
I ask this question for two reasons. First, because in thinking about humans and machines, it’s worth recognizing that human beings are not logical. Second, because, unfortunately, I’m going to have to chase people across both Instagram’s Threads and Bluesky for this story, which is annoying.
Let’s start with this thread from robotics engineer Adam Cook on Bluesky:
1/10 I am not as “bullish” on the business of self-driving car fleets as some are out there… but indeed… this is an embarrassment.
GM should be embarrassed.
Detroit is embarrassed here.
And it comes from Kyle Vogt’s incompetence on how to build a safety culture and GM’s negligence on oversight.
— Adam Cook (@motorcityadam.com) December 11, 2024 at 9:15 AM
The theme of his thread is that Cruise founder Kyle Vogt wasn’t interested in building a safety culture that would ultimately be successful. Specifically, this struck me:
6/10 The development of a systems safety lifecycle and a safety culture is THE value in any organization that develops safety-critical systems.
That’s it.
Not the “AI”.
Not the whatever.
The maturity of safety lifecycle.
BECAUSE it is that maturity that provides quantification of BUSINESS RISK.
— Adam Cook (@motorcityadam.com) December 11, 2024 at 9:15 AM
That sounds right to me and, if that’s the case, the cost of making Cruise successful would be enormous. John Krafcik, the former CEO of Hyundai here in North America and CEO of Waymo during its rollout, said basically the same on Threads:
[Krafcik’s Threads post is embedded here.]
This is echoed by former Cruiser, and our ex-boss, Ray Wert over on Threads again:
[Wert’s Threads post is embedded here.]
Ok, enough of the embeds.
In this way of looking at the world, once GM realized that Cruise could not deliver the expected safety level and would need more money to get there, it suddenly became untenable.
But “safety” isn’t a binary concept. There’s no specific agreed-upon level of safety for autonomous cars. Is a vehicle that keeps its occupants safe at the expense of everyone around it safe? How many points is an injury prevented worth relative to a death prevented?
The big challenge for automakers is not just building cars that are “safe” but cars that are “safe enough” for people to accept. How safe is that? Human drivers get into fatal crashes all the time and it isn’t a national conversation every time it happens, yet it’s somehow a big deal when a robotic car is involved (as happened when Uber’s autonomous test car killed a pedestrian in 2018). Even if a Level 4 robotaxi is safer than a human driver, that probably isn’t enough.
This is why I’m not as hyped about Tesla’s Cybercab as other people. I’m not convinced that the company’s safety level will hold up to a nationwide driverless rollout, though to even get close to that it’ll have to overcome a lot of regulatory hurdles.
Trump Administration Reportedly To Cut ADAS Crash Reporting Requirements
Oh, hey, look at that. The Trump transition team wants the new administration to remove the same ADAS (Advanced Driver-Assistance System) crash reporting system that Tesla CEO Elon Musk, who spent more than a quarter of a billion dollars on President Trump’s reelection, wants to get rid of.
Weird coincidence!
From Reuters, who broke the story:
The recommendation to kill the crash-reporting rule came from a transition team tasked with producing a 100-day strategy for automotive policy. The group called the measure a mandate for “excessive” data collection, the document seen by Reuters shows.
Neat.
A Reuters analysis of the NHTSA crash data shows Tesla accounted for 40 out of 45 fatal crashes reported to NHTSA through Oct. 15.
Among the Tesla crashes NHTSA investigated under the provision were a 2023 fatal accident in Virginia where a driver using the car’s “Autopilot” feature slammed into a tractor-trailer and a California wreck the same year where an Autopiloted Tesla hit a firetruck, killing the driver and injuring four firefighters.
In Tesla’s defense, no automaker seems to like this reporting requirement, and all of them would rather it be gone, so this isn’t necessarily a case of Musk trying to get an advantage over his competition (as with removing the IRA tax credit). Some also think Tesla simply reports crashes more diligently than other automakers do, which makes the raw numbers look worse for Tesla and skews the data.
One of the other big changes that Tesla wants is a national framework for driverless car regulations as opposed to the onerous state-by-state approach. That is not a particularly divisive viewpoint as having each state set its own policy for these systems is annoying and likely stifles innovation.
Ford CEO Thinks Company Is Ready For A Trump Administration
It’s not clear to anyone what President Trump will actually do with tariffs, autonomous cars, or anything. As with any politician, there’s a gap between campaign promises and political reality.
A lot of Trump’s messaging focused on American jobs and, if you’re the CEO of Ford, that’s not a terrible thing to hear. At least that’s what CEO Jim Farley told reporters in a scrum this week:
“After 120 years, we’re pretty experienced with policy change,” he said. “I think Ford is very well-positioned.”
He went on to say:
- “We have the highest number of U.S. employees of any car company.”
- “We have the largest number of production of U.S. vehicles.”
- “We have the largest exports from the United States of vehicles.”
- “We have hybrid and electric, so people can choose.”
Ford has, indeed, gone through many eras of politics. The company itself is political, and while its leaders and employees have historically leaned Republican, a quick glance at patterns of political giving at Ford shows the usual corporate skew toward incumbents.
Carlos Tavares Is Doing Great, Don’t Worry
Don’t cry for Carlos Tavares; the truth is he never left you. The deposed Stellantis CEO, pictured above on the beach, is doing great, according to an interview in the Portuguese newspaper Expresso.
That’s a paywalled article, but you can read the highlights here:
Tavares told the newspaper that the main concern had been to “protect the company so that a difference in points of view wouldn’t create the risk of misaligning the company”.
“A company that has 250,000 employees, revenues of 190 billion euros, 15 brands that it sells all over the world, is not a company that can be managed with a lack of alignment – which immediately has an impact on strategic management,” he added.
Asked if he felt hurt by the outcome, he replied: “No, not at all”. He said he would act the same way if he could go back in time.
See, it’s all good.
“When you’re facing a storm, you have to steer the boat according to the waves. You can’t have a discussion about the best way to face them.”
Wise words.
What I’m Listening To While Writing TMD
I’m a longtime Tyler, the Creator fan, and not just because he has excellent taste in cars. Honestly, the LaFerrari here isn’t even his most interesting automotive choice for one of his projects (his pink Abarth is untouchable). He has a new album out, and I’ve been enjoying “Noid,” so I’ll go with it for today.
The Big Question
How safe does an autonomous car need to be?
We can fine the distracted driver and put interlocks on the drunk’s steering wheel; don’t tell me we shouldn’t want to know why your proprietary algorithm wanted to see highway-worker blood on traffic cones.
The issue is transparency, liability, and repeatability. An accident investigator does a recreation and field interviews, and examines the equipment itself, both organic and inorganic. Autonomous-driving companies deliberately make their products and methods as black-box as possible, both to prevent competitor theft and to elevate their stock price in this age of vapor valuation. The AI models are in many cases inherently black boxes; you can only give the feedstock data a rough eyeball to see whether garbage-in, garbage-out applies. Sensor data is too dense to keep for long and would scale out of control with mass adoption; data-center power demand would eclipse transportation’s even sooner.
So we have tech “disrupters” who don’t want anyone seeing what they’re doing, finger-pointing manufacturers, who are in turn finger-pointing drivers for trying to get exactly what they paid for, and none of the parties can say what is happening. How about this? When an accident occurs, all parties are compelled to appear before a judge and explain in full detail how their part failed and how exactly it will be remedied, in full, never to happen again. If their system refuses this retrospection, it cannot be used on public highways, and any further usage requires open-source software licensing only.
Melon Husk/Leon Mush dipshit should be in jail w/ Trump. I hope melonhead and Tesla get sued into oblivion for murdering people with all this stupid shit like “electric” doors and “autonomous driving” BS, and for everything else behind the pile of lawsuits against them. These deaths never should have happened in the first place.
I will NEVER, EVER EVER trust autonomous cars. EVER. I will NEVER, EVER EVER have an EV. EVER. Over my dead body. EV’s are TRASH and are not real cars.
Gasoline forever!
The second story is why I will never trust the first story’s product. Such things demand strong oversight from both the company and the government, and we live in a nation that values neither.
I would need to see a few things:
The only way I trust autonomous cars is if the corporations designing and programming them are held accountable for crashes.
As long as they can dodge culpability for shoddy sensors or programming, they can’t be trusted with human lives.
One other thought: safety is good, but sometimes there is no perfect answer to a scenario and all the choices are bad (see: the trolley problem). Or sometimes all the choices are fine but different, and maybe local custom differs from place to place (see: New Jersey drivers in New York and vice versa, and whatever the hell they call driving in Boston).
If the AV can’t be perfect, being predictable and identifiable as an AV would be nice.
Commercial aviation is as safe as it is because it is so extremely regulated.
Given the fact that an entire political party is committed to the proposition that there is a God-given right to shift risk and responsibility to other people rather than take responsibility for it, automobile safety in general is pretty much a non-starter in the United States.
The quantification of safety risk is exactly the game, set, and match of it all, and I’m glad someone finally articulated it.
The industry that most typified this until recently was aviation. Up until the early 2000s, commercial airline crashes were far more common than they are today.
Even given Boeing’s incredible breach of safety with the MAX and its recent quality issues, commercial aviation is so safe in the United States that the accident and fatality rate is effectively too low to calculate.
Autonomous vehicles should be striving for something approaching this, or at least have a frank and open discussion as to what the safety baseline should be.
I think that part of the problem is that General Motors has a well-deserved reputation for ignoring edge cases. I don’t know if they are that much worse than other automakers, but they have certainly developed a reputation for ignoring situations that are never supposed to happen. That, combined with AI systems trained on “real world” data, which is also notably lacking in edge cases, is a bad combination.
CEO Jim Farley told reporters in a scrum this week:
“We have trucks and cars, so people can choose.”
Oh wait, he didn’t say that?
Farley is either dyeing his hair or wearing a rug, and it doesn’t look good on him.
Men (and women too, imo, but it’s not my place to say) should embrace their real selves more.
You don’t mention men’s hairstyles on the Autopian.
Autonomous cars need to be perfect citizens, not necessarily in their skill, but in their behavior. It’s unacceptable to have programs like Tesla’s that peek out and try to skip traffic like an impatient human; a robot should never misbehave.
Robots can’t get angry, bored, uncomfortable or feel any other emotions that would cause a human to disobey the rules.
They must maintain a driving-school-perfect following distance to the car in front at all times, stop in the most efficient way, and never make risky turns, block drivers out of their lane, swerve to take an exit at the last moment, or try to get over when blocked out. If they miss a turn, they can simply go around and do it another way, and this costs nothing because the occupants are free to be productive and/or entertained inside, completely divorced from the act of driving.
I don’t expect autopilot to pull an F1-worthy trail-braking S-turn on the limit of traction to avoid a pedestrian, I don’t expect perfect performance, but I do expect perfect behavior. As long as these are neural networks or learning models, learning from people and footage, it’ll be impossible to guarantee that they follow the rules of robotics, because it’s impossible to fully understand the parameters.
Safety-critical programs need to be manually programmed and tested the expensive way.
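(For what it’s worth, here’s a rough sketch of what a hand-written, testable behavior rule could look like, as opposed to a learned one. Python, purely illustrative; the two-second headway threshold and all function names are my own assumptions, not anything from an actual AV stack.)

```python
# A deterministic, hand-coded safety rule: enforce a minimum following
# distance using the classic two-second rule. Because there's no learned
# model involved, the behavior is fully specified and unit-testable.

def min_following_gap_m(speed_mps: float, headway_s: float = 2.0) -> float:
    """Minimum acceptable gap to the lead vehicle, in meters."""
    return speed_mps * headway_s

def gap_is_safe(gap_m: float, speed_mps: float) -> bool:
    """True if the current gap satisfies the two-second rule."""
    return gap_m >= min_following_gap_m(speed_mps)

# Explicit rules make edge cases trivially checkable:
assert gap_is_safe(gap_m=60.0, speed_mps=27.0)      # ~60 mph, 60 m gap: fine
assert not gap_is_safe(gap_m=30.0, speed_mps=27.0)  # same speed, 30 m gap: too close
```

The expensive part the commenter alludes to is exactly this: every behavior has to be specified, coded, and verified by hand instead of being absorbed from training data.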
This is all true, but as long as 90% of drivers follow common-sense unwritten rules rather than the actual law, signage is bad enough that following it literally would stop traffic entirely, and, worst of all, pavement markings seem designed to cause collisions, strictly following the rules is not going to work.
Oh, and turn signals and “eye contact.” The signal-to-noise ratio on turn signals is in the weeds, and I don’t know where to start on “eye contact.”
Fixing signage and pavement markings would be a start.
Absolutely. As long as the operating environment remains problematic, self-driving vehicles will remain non-viable.
Well, you stated humans aren’t logical, then proceeded to prove it by posting an illogical article. I would say an autonomous car’s safety needs are a moving target, and even then these cars won’t sell. The car needs to be significantly safer than the driver feels they themselves are, and since 99% of drivers think they are better than they actually are, the autonomous car needs to be far safer still, and be able to convince buyers it is that much better.

If I were developing this market, knowing I could only build a small number of cars at first, I would do it very differently. I would get the bona fides and approval to develop a self-driving car for drunk drivers with multiple offenses, or for the worst drivers who can’t stop having accidents. Then I’d get approval to lease the cars to people whose only other option is not driving at all, and use the results to push out to seniors losing their licenses, obese people who don’t fit behind the wheel, etc.

But really, until you as a manufacturer agree to cover accidents, no one is buying the self-driving car. Being responsible for the results of an object programmed by people who will not stand behind their work is a reason not to buy it. A self-driving car is not a purchase where you go all in for one reason; it is an onion. You solve one layer and another appears, and there are many layers, grasshopper.
Tavares made what, 40 million euros last year or whatever and probably got his full golden parachute for the firing. Of course he doesn’t care.
Am I the only one who thinks he looks like Jon Lovitz in that pic? For a second, I thought it WAS Lovitz, and we were just being trolled.
Maybe a company that large shouldn’t exist in the first place. Maybe we don’t need 15 brands consolidated under the auspices of someone who clearly doesn’t understand even half of them.
I work in industrial controls; we have PLCs that do the same thing day in and day out and still sometimes get confused and act up. If I can’t trust a computer to work 100% of the time in a controlled environment, I can’t trust a car to reliably make decisions 100% of the time in environments with constantly changing variables, without some kind of redundancy.
NASA’s approach of three computers acting as a decision-making council would work, but might be too expensive.
Side note, do automakers collect environment data and track how drivers respond to help create driving models? Collecting that kind of data from all available cars (a la captcha) would probably help AVs make decisions more confidently as a human would. GFunk mentioned the deer problem and I can’t think of any more realistic way to train a car for that.
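(On the “council” idea above: the classic pattern is triple modular redundancy, where three independent channels compute the same answer and a voter masks the odd one out. Here’s a toy Python sketch; the names and the quantized “steer” outputs are invented for illustration, not any real AV interface.)

```python
# Toy triple-modular-redundancy voter: three independent channels
# compute the same decision, and the majority answer wins.

from collections import Counter

def majority_vote(channel_outputs: list[str]) -> str:
    """Return the value that at least two of three channels agree on.

    If all three disagree, a real system would treat that as a fault
    and drop into a safe fallback state (e.g., slow down and pull over).
    """
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority; channels disagree, enter safe state")
    return value

# One flaky channel is masked by the other two:
print(majority_vote(["steer_left", "steer_left", "steer_right"]))  # steer_left
```

The catch, as the commenter notes, is cost: you’re paying for three compute stacks (ideally with independent sensors and software) to get one decision.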
Yeah, but training on real world data has serious blind spots.
Google how Margaret Hamilton‘s daughter saved Apollo 11.
Side note, do automakers collect environment data and track how drivers respond to help create driving models?
My understanding is that’s exactly what Tesla is doing. The problem is that to use such a massive amount of data you’re pretty much required to do machine learning, and as we should all be aware now that has some pretty serious limitations (see also: hands in AI-generated images).
A couple of days ago I was coming home with quite a bit of snow falling but absolutely nothing on roads but road salt and liquid water. My Camry pitched a fit because (I’m guessing) salt spray covered one of its sensors, shutting down multiple safety systems. I was more than capable of driving the car myself (clear roads, clear windshield, etc.), but the car couldn’t do its thing. Does an AV pull over and force me to find/clean its sensors along the side of the highway? Can it find the side of the highway? Will every resident of Western PA spend all winter along the side of the highway?
Another example: a deer running parallel to me on the side of the road yesterday morning. Will the AV recognize that it is an aggressively stupid, potentially rutting animal that might dart out and rapidly turn into a torpedo that rams the side of my car, or a suicide bomber that dives in front of it, or will it think it is a (presumed) rational human being? I know to slow down and keep an eye on that antlered menace until it turns and goes back into the woods (and to watch out for more, because there are always more), but I don’t yet trust a computer on wheels to know the difference.
Finally, I’ve said this before and will say it again – computer programmers are people and make mistakes, have bad days, slack off, etc. just like the rest of us. If your latest half-assed “customers = beta testers” update screws up or bricks my laptop or my phone, it is an inconvenience. Cars are different.
Bottom line: If you’re taking away my steering wheel and pedals, it better be just about perfect. Seeing the shit coming and being unable to do anything about the shit will make me very, very angry as I’m hitting the shit.
Agreed. 100%
I had the same question as I was driving my Mach E up to Erie. Snow covered the front radar and the car wouldn’t even do dumb cruise control, let alone BlueCruise.
Well, in the case of animals reacting to something large moving toward them, millions of years of evolution have hardwired them to react unpredictably at the last possible moment, because that gives them a 50% chance of survival compared to acting predictably with a 0% chance of survival. Of course, a big factor is that millions of years of evolution have taught predators that the winning strategy is to seem to be going just past their prey, seeming to ignore it, and then veer off at the last moment to intersect with it, so it’s all logical behavior on the part of the animals.
Of course, there are other animals that know all about cars and are just running down the side of the road because it’s easier than running across open countryside.
We have AVs that are 99% safe and have solved the accountability/liability problem. They’re called trains.
Bingo!
Besides, if you’re that obsessed with having a safe personal car, just go old school, skip the computers and add a fourth row of seating to your SUV. That logic has worked perfectly for a while now. /s
I’m not getting in any vehicle that crashes 1 out of 100 times. That said, I’m pretty sure trains are well above 99% safe. 😉
Just to get nerdy for a minute, part of the discussion needs to be that the average driver is way worse than the median driver. If self-driving cars were as good as the median driver in all situations and there were a good way to handle liability, I’d probably be OK with them. But if they’re only as good as the average, absolutely not.
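(To put toy numbers on that: crash risk is heavily right-skewed, so a handful of terrible drivers drag the mean well above the median. The rates below are invented purely for illustration.)

```python
# Invented crashes-per-million-miles figures for ten drivers: most are
# fine, two are awful, and the awful ones inflate the mean.

from statistics import mean, median

crash_rates = [0.5, 0.6, 0.7, 0.7, 0.8, 0.9, 1.0, 1.1, 4.0, 9.7]

print(f"median driver: {median(crash_rates):.2f}")  # 0.85
print(f"average:       {mean(crash_rates):.2f}")    # 2.00
```

On these numbers, a robotaxi that merely matches the “average” driver is more than twice as risky as the typical one, which is exactly the commenter’s point.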
Given the insane level of driving I see on a daily basis by “human-controlled” vehicles today, a lobotomized squirrel in a box would be a better, safer driver. How many people died on US roads in 2023?
40,990 according to the NHTSA
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813561
So…the car company most widely known for driver aids (Tesla) also has the highest deaths per mile driven, per the article below. Training people to not be able to drive by having blind spot monitoring, lane keep assist, adaptive cruise control, proximity sensors, etc. makes vehicles more dangerous. Training drivers how to drive makes vehicles safer.
https://www.roadandtrack.com/news/a62919131/tesla-has-highest-fatal-accident-rate-of-all-auto-brands-study/