At the beginning of this month, there was an unfortunate accident in San Francisco where a normal, human-driven vehicle struck a pedestrian. That's already bad, but it gets worse: the human-driven car sped off, and the impact threw the woman into the path of a Cruise autonomous vehicle, which made an emergency stop, but not before running her over and trapping her under the rear axle. Cruise and Waymo are currently the only two licensed driverless taxi services in America, and now that list is down to one, because the California DMV has just suspended Cruise's license to operate driverless vehicles. A major reason for this decision was that Cruise withheld crucial video of the October 2 trapped-pedestrian incident, according to reporting by Aaron Gordon at Vice.
The text of the DMV’s suspension news release gives some important details:
The California DMV today notified Cruise that the department is suspending Cruise’s autonomous vehicle deployment and driverless testing permits, effective immediately. The DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department’s satisfaction. This decision does not impact the company’s permit for testing with a safety driver.
So, it looks like Cruise is still permitted to operate their vehicles if they have a human safety driver ready to take over if needed.
The specific reasons for the suspension are listed as well:
Today’s suspensions are based on the following:
- 13 CCR §228.20(b)(6) – Based upon the performance of the vehicles, the Department determines the manufacturer's vehicles are not safe for the public's operation.
- 13 CCR §228.20(b)(3) – The manufacturer has misrepresented any information related to safety of the autonomous technology of its vehicles.
- 13 CCR §227.42(b)(5) – Any act or omission of the manufacturer or one of its agents, employees, contractors, or designees which the department finds makes the conduct of autonomous vehicle testing on public roads by the manufacturer an unreasonable risk to the public.
- 13 CCR §227.42(c) – The department shall immediately suspend or revoke the Manufacturer's Testing Permit or a Manufacturer's Testing Permit – Driverless Vehicles if a manufacturer is engaging in a practice in such a manner that immediate suspension is required for the safety of persons on a public road.
These are pretty severe: "the Department determines the manufacturer's vehicles are not safe for the public's operation" is a pretty clear statement, but more disturbing is this one: "The manufacturer has misrepresented any information related to safety of the autonomous technology of its vehicles," which refers to the withholding of the video.
Person trapped under cruise vehicle https://t.co/IsNvQgLQlF
— FriscoLive415 (@friscolive415) October 3, 2023
The missing video in question has to do with what happened after the Cruise AV came to its first emergency stop. The video of the incident that Cruise provided to the investigation apparently only showed events up to the point of the emergency stop, when the pedestrian was first pinned under the car. After the emergency stop, the Cruise vehicle attempted a "pullover maneuver while the pedestrian was underneath the vehicle," meaning the vehicle pulled out of the active traffic lanes. Normally, that's a very good idea, but in this case it involved dragging the person under the car for 20 feet at seven mph, which is a significantly worse idea.
According to Vice, the DMV only learned that footage of the pullover maneuver existed from "another government agency," prompting the DMV to ask Cruise for the footage, which was then provided.
Vice reached out to Cruise spokesperson Hannah Lindow, who has yet to respond regarding the video footage that was not initially provided.
Cruise did provide a statement on ex-Twitter, which I’ll quote here:
“We learned today at 10:30 am PT of the California DMV’s suspension of our driverless permits. As a result, we will be pausing operations of our driverless AVs in San Francisco. Ultimately, we develop and deploy autonomous vehicles in an effort to save lives.
In the incident being reviewed by the DMV, a human hit and run driver tragically struck and propelled the pedestrian into the path of the AV. The AV braked aggressively before impact and because it detected a collision, it attempted to pull over to avoid further safety issues. When the AV tried to pull over, it continued before coming to a final stop, pulling the pedestrian forward.
Our thoughts continue to be with the victim as we hope for a rapid and complete recovery.
Shortly after the incident, our team proactively shared information with the CA DMV, CPUC, and NHTSA, including the full video. We’ve stayed in close contact with regulators to answer questions and assisted the police with identifying the vehicle of the hit and run driver.
Our teams are currently doing an analysis to identify potential enhancements to the AV’s response to this kind of extremely rare event.”
The DMV's order of suspension describes the omission:
On October 3, 2023, representatives of the Department of Motor Vehicles and the California Highway Patrol met with representatives from Cruise to discuss the accident. During the meeting, the department was shown video footage of the accident captured by the AV's onboard cameras. The video footage presented to the department ended with the AV's initial stop following the hard-braking maneuver. Footage of the subsequent movement of the AV to perform a pullover maneuver was not shown to the department and Cruise did not disclose that any additional movement of the vehicle had occurred after the initial stop of the vehicle. The department only learned of the AV's subsequent movement via discussion with another government agency. The department requested Cruise provide a copy of the video with the additional footage, which was received by the department on October 13, 2023.
The reasons why Cruise may have decided not to show that particular part of the footage aren't known exactly, as Cruise isn't talking about it in detail yet, but to an outside observer, it's pretty telling. The initial part, where the AV made an emergency stop to avoid hitting the person? That's good footage to show, from Cruise's perspective, because it shows the AV behaving as it should, taking whatever steps necessary to avoid harming a person; the fact that the person was hit can't be blamed on the car. That's just physics.
What happened next, though, is extremely important, because it reveals one of the biggest unsolved problems of automated driving: driving isn't just a lot of sensory inputs and mechanical/physical reactions. It also requires a lot of general awareness of surroundings and situations, which often blurs into understanding social contexts and a huge variety of human communication and behaviors. Even something as brutally simple and basic as "if there is a person stuck under the car, do not drive" is a nuance of human society that this Cruise vehicle failed to grasp.
Automated driving means a car must integrate into a mass of messy and wildly varied human behavior, because at this moment, cars are still extensions of us and our bodies, magnifiers of actions we take and decisions we make. An AV that obeys the rules of the road and masters the mechanics of driving has a good foundation for the driving task, but, especially in a crowded urban environment like San Francisco, that doesn't get it all the way there.
The Cruise AV's problem here was pretty simple: it's a machine with no self-awareness, no knowledge of what it's actually doing. That means when things aren't happening as expected, it lacks the ability to reason through the new circumstances and come to a sensible decision, so it falls back on what it knows, which, in this case, meant that a person got dragged 20 feet across the pavement.
As it stands, Cruise is suspended from driverless operations, and as for the future, the "DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department's satisfaction."
I’m curious to see if this will affect how developers approach the scope of what AVs need to do in the future. As strange as it sounds, knowing not to drag someone trapped under the car is a pretty nuanced task for artificial intelligence. I just hope these blurrier, less clearly defined requirements of a functional automated vehicle are taken as seriously as the more obvious challenges.
“The reasons why Cruise may have decided to not show that particular part of the footage aren’t known exactly”
As an ex-security-camera installer, I'm surprised the telltale video wasn't immediately "lost" via any number of excuses. I have personally witnessed business owners and CEOs purposely destroy incriminating evidence within mere seconds of having me replay it for them. "Oops! I hit the wrong key!" they say, and laugh. After that, it's any excuse to cover their ass.
Make no mistake: all those cameras out there are not for public benefit.
From the video it looks like SFPD still has some Crown Victorias out and about. That’s cool.
There was a crystallizing moment for me in the discussion of Automated Cars back in March. It was outlined in a Jalopnik article based on a tweet from Matt Farah, citing data that human drivers have 5,250,837 crashes per 2,903,622,000,000 miles driven (almost 3 trillion miles!). Do the math and that's 99.999819% crash-free.
The Automated Car industry has constantly stated that their cars will be safer than human-driven cars. Fair enough; one day they may be. But getting to 6 Nines of reliability will take far longer than most projections suggest.
For a little reference from my world of IT and uptime stats: an IT system that delivered 6 Nines of uptime during one year would only be allowed approximately 30 seconds of downtime to the user per year. Without a nearly unlimited budget, that is virtually impossible. Even Amazon Web Services only offers 99.99% uptime in its SLAs.
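If you want to sanity-check those numbers yourself, here's a quick back-of-the-envelope version in Python (the crash and mileage figures are the ones cited above; the 6 Nines math is just standard availability arithmetic):

```python
# Back-of-the-envelope check of the crash-free and uptime figures above.

crashes = 5_250_837
miles = 2_903_622_000_000

crash_rate_per_mile = crashes / miles              # ~1.8e-6 crashes per mile
crash_free_pct = (1 - crash_rate_per_mile) * 100   # ~99.999819%
print(f"Humans are {crash_free_pct:.6f}% crash-free per mile driven")

# "6 Nines" = 99.9999% availability; see how much downtime that allows in a year.
seconds_per_year = 365 * 24 * 60 * 60
allowed_downtime_s = seconds_per_year * (1 - 0.999999)
print(f"6 Nines of uptime allows ~{allowed_downtime_s:.0f} seconds of downtime per year")
```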
Humans are very good at taking the information learned in one situation and applying it to another that isn't identical. We are wonderfully adaptable creatures. Automated Cars have a lot of catching up to do, even with all the progress made so far.
BTW, don’t use an X-formerly-Twitter link for video.
For the increasing numbers of people who aren't signed up for it, the link may or may not hit a hard signup wall, seemingly based on what kind of mood Elon's in.
This likely isn’t noticeable to actual journalists who need to maintain accounts for professional reasons, at least until the rest of us agree on a replacement platform, so I thought I’d call it out.
“The cover-up is always worse than the crime,”
Not always 100% true (the incident here was pretty awful), but seriously, who thought that this wouldn’t be the response and who could have possibly thought they could cover up the video? The mental gymnastics some folks go through to convince themselves that no one will bother digging into something like this and finding the coverup attempt must truly be Olympic level.
In AV (and AI) circles people really underestimate just how good people are at driving or performing other tasks that were invented by people for people to do. Successful performance in 99% of scenarios is one thing, but sometimes that last 1% is what really matters.
But I thought that consumer ready full autonomous vehicles were just 10 years away.
I still firmly believe that we will never see consumer-available full AV vehicles within any of our lifetimes, especially now that the free money is drying up.
What percentage of human drivers would understand what is happening if a person fell under their car? What percentage of those would make the immediate right decision? Expecting driverless cars to be perfect or make the same decisions as the best human on their best day is a ridiculous standard. On aggregate, they might someday be better than the average of all human drivers and that will hamper adoption because most humans think they are better than average.
That said, Cruise hiding the information is clear-cut and worthy of punishment.
“Expecting driverless cars to be perfect or make the same decisions as the best human on their best day is a ridiculous standard.”
Kinda disagree. Isn’t that the whole point? Obviously perfection is impossible but IMO I would expect driving decision-making at least as good as the top couple percent of drivers.
Oh, and a person would probably hear the screaming.
Certainly making self-driving as good as the top few percent of drivers is the goal and no one knows if it will ever get there. The real question is when is it good enough to start using? When it is better than the average driver? Top 25%? Top 5%? If it never gets deployed it will never become good enough.
Eh, I don't even think there should be a discussion about it until they are at least average. But they also need to completely eliminate "edge cases". There are MILLIONS of "edge case" situations that occur each day. It is not acceptable for an AV to be doing stuff like this that only the most senile drivers would do.
If there are massive safety shortcomings, there is ZERO reason to implement AVs on a personal transit scale. Trains are a way safer solution that are already run autonomously on at least small lines (such as airport shuttle trains).
Of course, L2 systems are a different argument entirely; IMO, as long as they are vigilant about keeping the driver aware, they are only a positive.
You cannot eliminate all edge cases. They are largely the situations you haven't planned for. These systems are trained on a known set of situations; sometimes they can ad-lib correctly based on that training, and other times they cannot. It's really a limitation of current AI. It's possible that we don't currently have the technology to make them good enough and that another leap in AI is required. It's also possible that we could get to "good enough", but there will always be edge cases, and they may end tragically; hopefully less so than with humans behind the wheel.
The issue is that there are a ton of these edge cases that aren’t hard for a human to solve. Stuff like “the truck was on its side so it became invisible and I just ran into it” or “I ran over a human so I kept going.” I would agree that it’s a limitation of current AI, which is sorta lacking the “I.” I am no expert though.
Edge cases are tough for AI, as d0nut explains. We look at them and wonder at the stupidity. However, they are edge cases, which by definition are rare, and humans make lots of mistakes in edge cases too, some of which AI might handle better. Ultimately it is a statistical game. If AI handles the normal stuff better than 95% of humans but the edge cases only 25% as well, it might still be safer per mile to let AI drive (see the toy math below). At least bad drivers might be safer.
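To make that concrete, here's a toy calculation. The 95%/25% figures come from the comment above; the crash rates and the edge-case share of miles are numbers made up purely for illustration, and "better than 95% of humans" is loosely read as a 95% lower crash rate:

```python
# Toy per-mile safety comparison under made-up assumptions.

human_normal_rate = 1e-6   # assumed human crashes per "normal" mile
human_edge_rate   = 1e-5   # assumed human crashes per edge-case mile (10x worse)
edge_share        = 0.01   # assumed fraction of miles that are edge cases

ai_normal_rate = human_normal_rate * 0.05   # 95% fewer crashes on normal miles
ai_edge_rate   = human_edge_rate * 4        # "25% as good" read as 4x the crash rate

human = (1 - edge_share) * human_normal_rate + edge_share * human_edge_rate
ai    = (1 - edge_share) * ai_normal_rate + edge_share * ai_edge_rate

print(f"human: {human:.2e} crashes/mile")   # ~1.09e-06
print(f"AI:    {ai:.2e} crashes/mile")      # ~4.50e-07, safer despite worse edge cases
```

With these particular assumptions the AI comes out more than twice as safe per mile, but make edge cases more common or more deadly and it loses; the whole argument hinges on numbers nobody has yet.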
No, if the goal and stated aim of AV technology is to be better than human drivers, then “expecting driverless cars to be perfect or make the same decisions as the best human on their best day” is the only reasonable standard.
I would argue that the best human on their best day makes the best possible decision available so you are asking for perfection. Once self-driving is better than 50% of drivers (and IMO it is not), then it could save lives if deployed to the proper half of the population.
I think a high percentage of drivers would understand that having a person under their car means it’s a bad time to drive, even slowly. And a driver would likely get out to assess the situation.
And a human driver would understand that blocking the street when a human life is endangered makes sense. Pulling over could further endanger that person even if they weren’t dragged along. They could easily be run over again if you do not ensure other motorists know to stop.
My parents have a roughly 1/4-mile gravel driveway. About 10 years ago, my younger brother's friend was being dropped off by his mom. My ~1-year-old black Lab/chow mix was running in front of her car (a 2000-era Grand Am) and barking at it, a behavior we were trying to correct that he learned from his older "siblings". Somehow he ended up underneath her front bumper. She proceeded to drag him almost the full length of the driveway before slamming on the brakes, at which point he pulled himself out from under the car and ran off into the woods. Her reasoning was, "I realized he went under the car and was scared he was stuck, so I floored it hoping he would come out from underneath." So, at least in my experience, 0% make the right decision. For those who will ask: the dog was mostly fine, with some scrapes and missing fur, but he did walk with a limp on one leg for a few months, and it came back as he got older. Also, he did not stop chasing cars on the driveway, although he stayed further back from them.
“As strange as it sounds, knowing not to drag someone trapped under the car is a pretty nuanced task for artificial intelligence.”
That’s EXACTLY the problem. There are hundreds of unprogrammable nuanced choices to be made by drivers every day.
Didn’t anyone else take philosophy and study this scenario?
You are driving a gasoline tanker down a hill; the brakes fail and nothing will stop the truck. You ARE going to hit something. You have three choices- a hospital on your left, a strip club on your right, and a Kindergarten straight ahead. Which do you choose?
There is no good answer, of course; someone is going to die, and you must choose.
Now program that into your AI vehicle. If you can't, perhaps the technology isn't ready for use except in very limited circumstances.
Most people would aim at the strip club. However, the radio announcer then starts talking about how that particular strip club is a non-profit that raises funds for orphans, the hospital next door is a private one that caters exclusively to mob bosses, and the kindergarten is an expensive private school attended by the children of 3rd-world dictators.
In this scenario are we in the US? If so, I would go for the hospital every single time.
No hesitation. The kindergarten is full of kids who haven’t had time to turn evil yet and any strip club is just an honest business providing honest services. Hospitals on the other hand are the Earth’s Mos Eisley franchises. Especially the religious “non-profit” ones, which is most of them.
I remember taking part in an online study that tried to crowdsource that kind of decision, years ago. iirc it was explicitly stated that they were looking for what behaviour to program into robots including and specifically self-driving cars. They had all sorts of “hit the elderly person or the school child?” decisions to make.
Honestly? At the end of the day I'd want my car to prioritise the safety of my passengers and me. Imagine your car driving you and your family into a ditch because it figured your chances were better than those of the drunk dude stumbling into the road. Thanks, but no thanks.
Not that I would want an AI to choose, but I’d much rather put myself and my passengers into a ditch in the relative safety of a vehicle than strike a pedestrian. Of course, a human can look at a lot of context to make a decision. A cliff on one side is very different than a ditch. A potential head-on collision could put even more people at risk of death or serious injury, even though the occupants of both vehicles have the protections afforded by their safety features. I don’t trust our tech to see the whole picture and make a good choice.
This kind of reasoning is what led that guy to plow into the back of a dozen cars on I-70 instead of ditching the truck and risking a possible rollover. These situations are rarely as black-and-white as the hypotheticals make them sound.
Mandatory relevant comic
https://www.smbc-comics.com/comic/self-driving-car-ethics
“You are driving a gasoline tanker down a hill; the brakes fail and nothing will stop the truck. You ARE going to hit something. You have three choices- a hospital on your left, a strip club on your right, and a Kindergarten straight ahead. Which do you choose?”
I choose to go a little bit to the right and squeeze between the Kindergarten and the strip club.
Sidenote: Where is this place that has a strip club next to a Kindergarten?
Found the human! Can’t wait to see the code you write for the car!
Gotta love the vaguely positive-sounding PR-speak of "pulling the pedestrian forward". They should adapt it into a slogan.
Cruise AV Vehicles: Pulling You Forward
As punishment, Cruise’s CEO was sentenced to registering an out-of-state vehicle at a DMV office without an appointment every week for 12 months.
You’re cruel.
And fair.
Death penalty sounds more merciful.
A bunch of the dialogue around this is about the safety of the car and the car's decision-making, which is irrelevant to why Cruise got their license pulled. Cruise got their license pulled because they lied to the DMV, full stop.
Whether or not the vehicle is safe is secondary. It’s important, but the sin here is the coverup, and revoking their right to operate is absolutely correct in this situation. I don’t think we should be beta testing autonomous vehicles on public streets, and I know enough other software engineers to know I sure won’t be trusting them anytime soon, but all of that can be argued by reasonable people. What can’t be argued is that if we’re going to allow these cars on the road, the companies have to be 100% transparent, full stop. Cruise wasn’t, so they cannot be allowed to operate their cars on public streets.
I do wonder how expensive the settlement will be with this victim. I’m sure all the lawyers of the land are knocking on her door. After all, a pedestrian surely didn’t sign a waiver like passengers would be required to do.
If the award or settlement is high enough it could solve the issue of beta testing these systems on public streets.
Oh, thank goodness they can still test with a human safety driver. That will certainly avoid any circumstance where a pedestrian is struck and/or killed by the "autonomous" vehicle /s (story 1, story 2).
And why does it take an accident like this for the government (state or federal) to stop fellating the goddamn tech corporations and letting them play games with people's lives? How about building a test city and testing every single possible situation and condition before releasing these into the wild? I know, that's a rhetorical question…
Because this accident happened, and the era of free money is over. With interest rates where they are, you just can't look past these things anymore.
Bribing SF supervisors is getting more expensive
Whatever happened to just plain ole automated highway convoy? I just want the assist on long, boring drives. Give me convoy!
10-4, Rubber Duck
Mercedes-Benz has Drive Pilot for exactly that.
Let's be fair: what the car should do under these circumstances was never considered. Probably many other situations haven't been considered either. That's why driverless cars are not a good idea.
I don’t even know what I should do if someone throws themselves underneath my car, even after hearing this story.
Stop immediately, call 911, and render whatever assistance you feel qualified to offer
Not if I am on a busy road and about to get smashed myself! There is a lot more to unpack here than just that simple solution. I am sure the lawyers will have fun with this and make a lot of money.
All I know is Don’t Back Up!
When I was in driving school they told us that if we ran someone over we should let off the brake if we can’t stop in time so the wheels don’t lock up and drag them.
Pretty sure I wouldn’t be able to do that instinctively but I also hope to never find out.
You always make it worse when you try to hide something. Cruise could have provided all of the video from the start and pointed out how bizarre the situation was, and that a human driver in the same situation might have tried to pull over without realizing the person they hit was under the vehicle. When people are freaked the eff out by something, they often go into autopilot and lose full awareness of the situation. Once you get caught withholding the facts, though, your credibility goes right out the window. It makes me wonder what else Cruise has seen in their footage and data records that they'd prefer to keep quiet.
Huh
Any bets on how this would be presented as a captcha so we can all help the machines learn? My first thought is that it won't be, due to the (presumed) scarcity of pictures of humans in situations where our choices would actually help the learning. Not articulating this well, but I'm assuming people know what I'm asking here.
I work with fairly simple machines way below any level of AI, so am genuinely asking others’ more-informed opinions.
Undercarriage cameras that stop the car if any human-sized object is under it. With a remote operator to handle false alarms.
Good point: way, way better to occasionally have a self-driver immobilized by a deer or a traffic cone than to have another human being dragged under one. That's nightmare fuel right there.
I like Space’s camera idea, and would add a few things :
1. An algorithm to detect "ah, crap, I just ran over something" using accelerometers to detect the jolt (rough sketch below).
2. Detect light impacts to the car using microphones. After an incident, these would help the car know to stay stopped when someone hits it with their hand/foot/face.
3. Use exterior microphones to process voice commands after an emergency.
These all come with more vulnerabilities, which will be a pain to deal with; glad it’s not my job!
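For fun, here's what the bare bones of idea 1 might look like. Everything here is invented for illustration: the threshold, the window size, and the fake sensor samples are placeholder assumptions, not anything a real AV stack uses:

```python
# Rough sketch of idea 1: flag a likely "ran over something" event from a
# vertical-accelerometer stream. Thresholds are made up for illustration;
# a real system would need tuned, validated values.

from collections import deque

JOLT_THRESHOLD_G = 1.5   # assumed: vertical spike suggesting the car rode over something
WINDOW = 20              # samples kept for a simple moving baseline

class JoltDetector:
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def update(self, vertical_accel_g: float) -> bool:
        """Return True if this sample looks like a run-over jolt."""
        baseline = sum(self.recent) / len(self.recent) if self.recent else 0.0
        self.recent.append(vertical_accel_g)
        return abs(vertical_accel_g - baseline) > JOLT_THRESHOLD_G

# Usage: feed samples in; on a jolt, the AV should stop and stay stopped
# until a human (remote or otherwise) confirms it is safe to move.
detector = JoltDetector()
for sample in [0.0, 0.1, -0.1, 2.4, 0.2]:   # fake data; the 2.4 g spike trips it
    if detector.update(sample):
        print("Possible run-over event: stop and hold for human review")
```

The hard part, of course, isn't detecting the spike; it's deciding when it's safe to move again, which is exactly where Cruise's car failed.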
Good idea with the accelerometers; the existing SRS sensors could probably be repurposed as inputs for self-driving vehicles.