
Internal Report Shows Cruise Didn’t Think Its Robotaxi Dragging A Pedestrian Was A Big Enough Deal To Fix The Cars


In October of last year, GM’s autonomous car division, Cruise, had its self-driving permits revoked by the state of California due to an incident that happened on October 2, 2023. In the incident, a pedestrian was hit near the intersection of 5th and Market streets in San Francisco. The pedestrian was initially hit by a human-driven Nissan, which launched the person into the path of an autonomously-piloted Cruise Chevy Bolt, which did make an attempt at an emergency stop. Unfortunately, the person was trapped underneath the Cruise AV, which then attempted a “pullover maneuver,” dragging the pedestrian under the car for about 20 feet and making their injuries significantly worse. Today, GM released a report prepared by Quinn Emanuel Trial Lawyers called REPORT TO THE BOARDS OF DIRECTORS OF CRUISE LLC, GM CRUISE HOLDINGS LLC, AND GENERAL MOTORS HOLDINGS LLC REGARDING THE OCTOBER 2, 2023 ACCIDENT IN SAN FRANCISCO, and while much of the report is about Cruise’s response to the incident and its subsequent hiding of crucial information from the media, it also reveals information that highlights some of the issues and reasons why this sort of disaster happened at all.

We covered the cover-up and the media/regulatory handling of the incident earlier today; what I’d like to do now is talk about the parts of the report that seem to confirm some speculations I made a number of months ago about the fundamental, big-picture causes of why the accident occurred, because I think it’s important for the automated vehicle industry at large.


Cruise has had a permit to operate AV robotaxis in California without human safety drivers since 2021, and as of 2022 had a fleet of 100 robotaxis in San Francisco, expanding to 300 when they got approval for nighttime operation. The robotaxis have had incidents before, but none as serious as the October 2 event with the pedestrian. The person lived, by the way, just so you’re not wondering. So that’s good, at least.

The report describes what happened in much more detail than had been previously known, and it’s some pretty grim stuff. The timeline breaks down like this:

On October 2 at 9:29 pm, the initial impact between the Nissan and the pedestrian happens. Within a second of this, the pedestrian hits the hood of the Cruise AV, then falls to the ground. The Cruise AV, noting that an impact has happened, undertakes a “pullover maneuver” to get off the road, normally a good idea, but not this time, since the pedestrian, trapped under the car, is dragged about 20 feet.


At 9:32, the Cruise AV “transmits a medium-resolution 14-second video (“Offload 2”) of collision but not the pullover maneuver and pedestrian dragging.” At 10:17:

“Cruise contractors arrive at the Accident scene. One contractor takes over 100 photos and videos. He notices the pedestrian’s blood and skin patches on the ground, showing that the Cruise AV moved from the initial point-of-impact to its final stopping place.”

That all sounds pretty bad, of course, with the blood and skin patches. Pulling out of the active traffic lane is generally a good plan, but not if you’re going to be dragging a person, something that any human driver who had just smacked into a human would be aware of.

The report covers a lot more details that were previously not known. For example, the normal distance for the pullover maneuver seems to be 100 feet; only 20 feet were covered because of this:

“The AV is programmed to move as much as 100 feet but did not do so here because the AV detected an imbalance among its wheels, which then caused the system to shut down. Specifically, a diagnostic indicated there was a failed wheel speed sensor. This was triggered because the left rear wheel was spinning on top of the pedestrian’s leg. This wheel spun at a different speed than the others and triggered the diagnostic, which stopped the car long before it was programmed to stop when engaged in its search for an acceptable pullover location.”

So, the robotaxi had some idea that things weren’t right because of an imbalance among its wheel speeds, but the reason wasn’t some technical glitch; it was that the wheel was spinning on the person’s leg.
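
To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of wheel-speed plausibility check the report describes. Everything in it, the names, the threshold, the ratio-to-median test, is an illustrative assumption on my part, not Cruise’s actual diagnostic code; it just shows how one wheel spinning on an obstruction stands out from the other three.

```python
# Hypothetical sketch of a wheel-speed plausibility diagnostic.
# All names and thresholds are illustrative assumptions, not Cruise's code.

def wheel_speed_fault(speeds_mps, tolerance=0.15):
    """Return True if any wheel deviates from the median wheel speed
    by more than `tolerance` (as a fraction of the median)."""
    ordered = sorted(speeds_mps)
    median = (ordered[1] + ordered[2]) / 2.0  # median of the four wheels
    if median < 0.5:  # near standstill, the ratio test is meaningless
        return False
    return any(abs(s - median) / median > tolerance for s in speeds_mps)

# A wheel spinning on an obstruction reads differently from the other three,
# which is roughly what halted the pullover maneuver after only 20 feet:
print(wheel_speed_fault([4.0, 4.1, 4.0, 6.2]))  # True  -> fault, stop the car
print(wheel_speed_fault([4.0, 4.1, 4.0, 3.9]))  # False -> keep driving
```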


This grisly anomaly was noted another time in the report, in a section that confirmed that at least the legs of the person were visible to the AV’s lidar systems:

“In the time immediately prior to impact, the pedestrian was substantially occluded from view of the lidar sensors, which facilitate object detection and tracking for the collision detection system. Only the pedestrian’s raised leg, which was bent up and out toward the adjacent lane, was in view of these lidar sensors immediately prior to collision. Due to a lack of consistent detections in this time frame, the tracking information considered by the collision detection system did not reflect the actual position of the pedestrian. Consequently, the collision detection system incorrectly identified the pedestrian as being located on the side of the AV at the time of impact instead of in front of the AV and thus determined the collision to be a side impact. After contacting the pedestrian, the AV continued decelerating for approximately 1.78 s before coming to its initial stop with its bumper position located forward of the Nissan. The AV’s left front wheel ran over the pedestrian and triggered an anti-lock braking system event approximately 0.23 s after the initial contact between the pedestrian and the AV’s front bumper.”

It’s worth noting that the AV stopped not because it was ever “aware” there was a person trapped beneath it, but because the fact of a person being trapped beneath it caused an unexpected technical fault, i.e., the wheel speed sensor diagnostic.
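
The side-impact misclassification described in the quote above is also easy to picture in code. Here is a hedged sketch, with invented names and geometry rather than anything from Cruise’s stack, of how a collision classifier that trusts a stale last-known track position will place the impact wherever the track happens to be, not where the person actually is:

```python
# Illustrative only: how stale tracking data can misplace an impact.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float      # meters forward of the AV's center
    y: float      # meters left of the AV's center
    age_s: float  # seconds since the last confirmed detection

def classify_impact(obj, half_length=2.3, half_width=1.0):
    """Label a collision using the track's last known position, which is
    only as current as the most recent consistent detections."""
    if obj.x > half_length:
        return "front"
    if abs(obj.y) > half_width:
        return "side"
    return "unknown"

# Occlusion meant the last consistent detections placed the pedestrian
# beside the AV, so a frontal collision was scored as a side impact:
stale = TrackedObject(x=1.0, y=1.8, age_s=0.9)
print(classify_impact(stale))  # "side" -- the wrong call the report describes
```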

It also appears that cameras detected the pedestrian’s body as well:

“The pedestrian’s feet and lower legs were visible in the wide-angle left side camera view from the time of the collision between the pedestrian and the AV through to the final rest position of the AV. The ADS briefly detected the legs of the pedestrian while the pedestrian was under the vehicle, but neither the pedestrian nor the pedestrian’s legs were classified or tracked by the ADS after the AV contacted the pedestrian.”

So, the person’s legs were visible to both the lidar and at least one camera on the AV, but the AV did not attempt to identify just what those legs were, and even failing an identification, it didn’t flag the unknown objects sticking out from underneath the car as something worthy of note or alarm.
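
The rule the AV apparently lacked would not need to be exotic. A minimal sketch, assuming hypothetical detection fields and a made-up post-impact policy (none of this is from Cruise’s software): after any impact, anything unclassified inside the vehicle’s footprint vetoes further movement and hands control to a human.

```python
# Hypothetical post-impact safety gate; the field names are assumptions.

VEHICLE_HALF_LENGTH_M = 2.5
VEHICLE_HALF_WIDTH_M = 1.2

def post_impact_action(impact_detected, detections):
    """After an impact, refuse to move while any unclassified detection
    sits within the vehicle footprint; hand off to a human instead."""
    if not impact_detected:
        return "continue"
    for det in detections:
        inside = (abs(det["x"]) < VEHICLE_HALF_LENGTH_M
                  and abs(det["y"]) < VEHICLE_HALF_WIDTH_M)
        if inside and det.get("label") in (None, "unknown"):
            return "hold_and_call_remote_assistance"
    return "pull_over"

# Legs visible to lidar and a camera but never classified would land here:
print(post_impact_action(True, [{"x": 0.6, "y": -0.9, "label": None}]))
# -> "hold_and_call_remote_assistance"
```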


Cruise does have humans who check up on the robotaxis, especially if something like an impact is noted. The report mentions that

“According to the Cruise interviewer’s contemporaneous notes, one Remote Assistance operator saw “ped flung onto hood of AV. You could see and hear the bumps,” and another saw the AV “was already pulling over to the side.”

It is not clear why the Remote Assistance operator didn’t do anything to halt the pullover maneuver, or even if there would have been time to do so. Also unsettling is this chart of questions Cruise put together to prepare for expected inquiries from the media:

[Chart: the questions and answers Cruise prepared for anticipated media inquiries]

What’s interesting here is just how much the AV did seem to know: the chart says the AV “detected the pedestrian at all times” and “the AV detected the pedestrian as a separate object from the adjacent vehicle as soon as it made contact with the ground.” It also notes that a human driver would not likely have been able to avoid the impact – definitely a fair point – but neglects to mention anything about dragging the person under the car after the impact.

And this leads us to that fundamental problem I mentioned from earlier: the problem with AVs is that they’re idiots. Yes, they may be getting pretty good at the mechanics of driving and have advanced sensory systems with abilities far beyond human eyes and ears, but they have no idea what they’re doing or where they are. They don’t know they’re driving, and while they can pinpoint with satellite-given precision where they are on a map with their GPS abilities, they have no idea where they are, conceptually.


These limitations are at the heart of why this happened, and why it would never happen to a human, who would see a pedestrian smack onto their hood and immediately think holy shit, I just hit somebody oh god oh god I hope they’re okay I better see how they are and so on. The AV has no ability to even conceive of such thoughts.


In fact, the AV doesn’t even seem to have an ability that four-month-old human babies have, called object permanence. I say this because, if Cruise claims the AV knew about the pedestrian and knew that the car hit the pedestrian, how could it somehow forget about the very existence of that pedestrian when it decided to undertake the pullover maneuver? A human would know that the person they just hit still exists, somewhere in front of the car, even if they can’t see them at that moment, because objects don’t just blink out of existence when we don’t see them.

In this sense, the Cruise robotaxi and a two-month-old baby would fall for the same trick of hiding a ball behind your back: both would believe that ball no longer existed in the universe, and that daddy is a powerful magician.

Object permanence may not seem like something that would necessarily be required to make a self-driving car, but, as this event shows, it is absolutely crucial. It’s possible such concepts do exist in the millions of lines of code rattling around the microchips that make up the brains of AVs, but in this case, for a human being lying prone under a car, their legs visible to at least one camera and the lidar, the concept does not appear to have been active.
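
For what it’s worth, trackers usually approximate object permanence with something called track coasting: when detections stop, the tracker predicts the object forward on its last motion estimate instead of deleting it. Here is a minimal sketch of the idea; the parameters and names are my assumptions, not any production AV’s:

```python
# Illustrative track-coasting sketch: "object permanence" for a tracker.
from dataclasses import dataclass

@dataclass
class CoastedTrack:
    x: float               # last estimated position, meters
    y: float
    vx: float              # last estimated velocity, m/s
    vy: float
    missed_s: float = 0.0  # time since the last matching detection

MAX_COAST_S = 5.0  # keep "remembering" the object this long without detections

def step(track, dt, detection):
    """Advance one tracker cycle. `detection` is an (x, y) tuple or None."""
    if detection is not None:
        track.x, track.y = detection
        track.missed_s = 0.0
    else:
        track.missed_s += dt
        if track.missed_s > MAX_COAST_S:
            return None  # only now does the object "cease to exist"
        track.x += track.vx * dt  # coast: the object must still be somewhere
        track.y += track.vy * dt
    return track

# A pedestrian who disappears from view at the bumper gets coasted to a spot
# under or ahead of the car instead of being dropped when detections stop:
ped = CoastedTrack(x=2.5, y=0.0, vx=-1.0, vy=0.0)
for _ in range(10):  # one second of occlusion at 10 Hz
    ped = step(ped, 0.1, None)
print(ped)  # still tracked, now estimated at roughly x = 1.5 m
```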


This is all connected to the bigger idea that for AVs to be successful, they need to have some general concept of the area around them, a concept that goes beyond just the physical locations of cars and obstacles and GPS data. They need to know, as much as possible, the context of where they are, the time of day, what’s likely to be happening around them, how people are behaving, and whether there is anything unusual like police barricades or a parade or kids in Halloween costumes or a group of angry protesters, and on and on.

Driving is a social undertaking as well as a mechanical one; it involves near constant, if subtle, communication with other drivers and people outside cars; it involves taking in the overall mood and situation of a given area. And, of course, it involves understanding that if a person smacks into the front of your car, they’re very likely on the ground right in front of you.

These are still unsolved problems in the AV space, and based on some of the reactions of Cruise employees and officials as seen in this report, I don’t get the sense that solving them is a priority. Look at this:

“Cruise employees also reflected on the meeting in subsequent debrief discussions. In one such exchange, Raman wrote: “do we know for sure we didn’t note that it was a person.”

My issue here is that the question asked by Prashanthi Raman, vice president of global government affairs, seems to be very much the wrong question, because no answer there is going to be good: if it [the person who was hit and dragged] wasn’t noted as a person, that’s very bad, and if it was, that’s even worse, because the car went ahead and dragged them 20 feet anyway.

Even more unsettling is this part of the report:


“The safety and engineering teams also raised the question whether the fleet should be grounded until a “hot fix”—a targeted and rapid engineering solution—could be developed to address how to improve the ability of Cruise AVs to detect pedestrians outside its nearfield and/or underneath the vehicle. Vogt and West decided that the data was insufficient to justify such a shutdown in light of the overall driving and safety records of Cruise AVs. Vogt reportedly characterized the October 2 Accident as an extremely rare event, which he labeled an “edge case.”

This is extraordinarily bad, if you ask me, partially because I think it hints at a problem throughout the AV industry, from Tesla to Cruise to Waymo to whomever. It seems Cruise at least considered rushing through some sort of fix, a patch, to improve how Cruise AVs detect pedestrians and objects/people that may be lodged underneath the car. But ex-CEO/President Kyle Vogt called the incident an “edge case” and declined to push through a fix.

This assessment of wrecks or other difficult incidents as “edge cases” is, frankly, poison to the whole industry. The idea of an edge case as something that doesn’t have to be worried about because it’s not common is absurd in light of life in reality, which is pretty much nothing but edge cases. The world is chaotic and messy, and things you could call “edge cases” happen every single day.

A pedestrian getting hit is not, in the context of driving, an edge case. It’s a shitty thing that happens, every single day. It’s not uncommon, and the idea that a vehicle designed to operate in public won’t understand the very basic idea of not fucking driving when a human being is trapped beneath it is, frankly, absurd.

Pushing a problem aside as an “edge case” is lazy and will impede the development of automated vehicles more than anything else.

I’m not anti-AV. I think there will be contexts where they can be made to work well enough, even if I’m not sure some near-magical Level 5 cars will ever happen. But I do know nothing good will happen if companies keep treating automated driving as purely a tech challenge and ignoring the complex situational awareness challenges of automated driving, challenges that include at least some attempt to understand the surrounding environment in a deeper way and, yes, implementing systems that will prevent AVs from driving if you’re stuck under them.


Related:

Cruise Stopping Its Driverless Taxi Service Reveals What Self-Driving Cars Need To Focus On

A Video Showing A Police Officer Yelling At An Autonomous Car Has Me Worried About Robocar Emergency Overrides

GM’s Cruise Robotaxi Company Was Terrified Of The Media: Internal Report

87 Comments
Hugh Crawford
9 months ago

Well it’s better than some human drivers
https://en.m.wikipedia.org/wiki/Murder_of_Gregory_Glenn_Biggs

Ranwhenparked
9 months ago
Reply to  Hugh Crawford

Except Chante Mallard was perfectly aware of what she had just done and knew exactly how to properly respond, and could have at any point, but made the conscious decision not to. A Cruise vehicle might have just assumed Biggs’ body wasn’t anywhere around anymore, due to his bloody face blocking the forward camera.

Basically, Mallard was evil, Cruise is really, really stupid (and stupid can be just as dangerous, just different root cause and motivation)

Hugh Crawford
9 months ago

“My issue here is that the question asked by Prashanthi Raman, vice president of global government affairs, seems to be very much the wrong question, because no answer there is going to be good: if it [the person who was hit and dragged] wasn’t noted as a person, that’s very bad, and if it was, that’s even worse, because the car went ahead and dragged them 20 feet anyway.“

Well, it’s exactly the correct question to be asking if you’re trying to solve the problem. Should everyone in the organization be asking themselves that question? Probably yes.

Cal67
9 months ago

You talk about the AV knowing, having a general concept, and forgetting. None of this is possible because the program does not think. It is not sentient. It is programming that responds in predetermined ways to specific inputs or series of inputs. I’m willing to bet that if Cruise released a logic diagram (if they have one), nowhere on that diagram would there be a series of inputs that involved hitting a pedestrian, checking the subsequent location of the pedestrian, and, if the location could not be verified, stopping immediately and calling emergency services.

Hugh Crawford
9 months ago
Reply to  Cal67

Yeah, I was going to mention egregious anthropomorphizing, which is rampant in discussions of AI.

Amberturnsignalsarebetter
9 months ago
Reply to  Cal67

hitting a pedestrian, checking the subsequent location of the pedestrian, and if the location could not be verified, stopping immediately

Part of the problem is that in the real world there are so many variables that affect the sentient response to similar scenarios – stopping immediately after hitting a pedestrian makes sense most of the time, but might not always be the ‘right’ answer (e.g. you’re blocking a school bus that’s halfway through crossing the railroad, and half a million tonnes of highly flammable toxic chemicals are barreling towards the scene of the accident) – sometimes there might not even be a ‘right’ answer, just lots of different ‘wrong’ ones.

Hugh Crawford
9 months ago

See “trolley problem”

Actually, in San Francisco a few weeks earlier there was an incident with some sort of traffic anomaly. I believe it was fire engines parked on both sides of the street, with fire hoses crossing the street, where some autonomous cars couldn’t figure out what to do and came to a dead stop in the middle of an emergency situation, which apparently caused great havoc.

Highland Green Miata
9 months ago

There is an alternative to real-world testing of these systems, and it’s simulation. Sim platforms for training autonomous vehicle software have existed for some time, and they can be configured for all kinds of scenarios, including edge cases. The benefit of using a sim platform is that you can test system reaction to scenarios that would be otherwise untestable in the real world (like what happens if you hit a pedestrian). And these platforms have the advantage of running the simulations at faster-than-realtime speed, so you can test the software in many different situations over millions of miles in a fraction of the time of driving a real vehicle. Sim platforms like this have existed for many years. https://hexagon.com/products/virtual-test-drive Yes, you still need some real-world testing, but to say that edge cases couldn’t have been anticipated or tested is nonsense.

Mike F.
9 months ago

OK, these cars are machines that ultimately do what they’re programmed to do (a simplification to some degree, but…). A fault in the programming has been demonstrated: the car can recognize that there’s a human under it, yet it does not immediately stop when this happens. Irrespective of whether or not this is an edge case, it needs to be fixed. Doesn’t matter if it’s a one-in-a-million thing. As autocars get more miles under their belts and their systems get more refined, it will be nothing but edge cases that are revealed. Fixing them is required if these companies want to maintain any sort of public trust. Dismissing someone who is lying in the hospital, recovering from horribly painful injuries, as an edge case isn’t going to fly with a public that’s already very skeptical of the technology.

Nvoid82
9 months ago

“But I do know nothing good will happen if companies keep treating automated driving as purely a tech challenge and ignoring the complex situational awareness challenges of automated driving”

This is a concept I see repeated in discussions about autonomous driving, and it is not true. Situational awareness and social understanding are a technical problem, and the unwillingness to reckon with them as such is part of the reason we continue to struggle with autonomous vehicles.

People are not magic. Social and situational awareness might require more, or more expensive, hardware, or new techniques for developing understanding, but it is 100% something that can be developed. The issue is one of will and resource allocation. As in this instance: there could’ve been a production stop to add more sensors and increase awareness. There could be more work done on the decision-making side of the vehicle, where it responds differently if a collision or braking event is detected within X time of seeing a pedestrian. Machine learning tools already exist for segmenting entities out of a scene and classifying them; it’s one of the more common first projects in machine vision.

“Vogt and West decided that the data was insufficient to justify such a shutdown in light of the overall driving and safety records of Cruise AVs. Vogt reportedly characterized the October 2 Accident as an extremely rare event, which he labeled an “edge case”

This is the problem with autonomous driving. These kinds of issues aren’t unsolvable tech problems, but they are problems that decision makers have decided aren’t worth solving.

Joe The Drummer
9 months ago

I will say it again: Make AVs pass the state driver’s license exam in literally any of the fifty states of the Union before allowing them to travel the streets. Once they do, I will finally trust them as much as I trust any 16-year-old brand new driver. Which is not saying much.

The point being, why do cars that drive themselves get allowed on the road, when they don’t have to prove themselves fit to drive the same way human drivers do? Humans call “beta testing” a learner’s permit. Make AVs get one.

Cal67
9 months ago

And even with that learner’s permit, the person with it can only drive with another person with a full driver’s license in the passenger seat.

Barry Allen
9 months ago

Sounds nice, but I’m pretty sure any of these vehicles could pass the driver’s test in any state, barring the written portion since they have no hands. They’re all competent at the basic “rules of the road”, they’re not going to speed or run a light or change lanes without signaling. It’s the other stuff where it gets hard.

Joe The Drummer
9 months ago
Reply to  Barry Allen

I wonder how dragging a pedestrian under your vehicle while driving on a learner’s permit affects your path to a full driver’s license, though?

EmotionalSupportBMW
9 months ago

Forgive me as I’m not a computer wizard, but isn’t object permanence borderline impossible for computers? It’s not a binary choice. The computer can understand object there/not there. When you get to “not there, but somewhere over there,” you get a multiple-choice riddle that requires thinking critically about the object and the space it inhabits. I guess a computer with a complex understanding of space, and of object size and speed, could narrow down the possibilities. However, it’s still beholden to a fairly limited number of possibilities due to the inherent nature of everything having to be a yes/no question, which is an incredibly inefficient way to process an environment you are both moving in and controlling your own movements in, and thus likely caps your processing power while searching for an object that may or may not be there.

Joe The Drummer
9 months ago

Yeah, there’s “there but gone,” “might be there, sometimes,” and countless other permutations.

The best differentiation between computer intelligence and human intelligence I ever heard went something like this: Computers are binary and only know zero or one, on or off, yes or no. Human intelligence adds, “Well, sorta, maybe, depending.” And that’s the dimension they’re still ironing out.

Double Wide Harvey Park
9 months ago

That’s not how things work anymore. Modern machine learning systems are very good at the “well, sorta, maybe, depending.” What they’re not very good at yet is deciding what to do in ambiguous cases. Humans aren’t either but we’re better at it 🙂

59turner
9 months ago

Exactly. AI has an indecision problem, combined sometimes with a hallucination problem. When something is both 95% chance a truck and 95% chance a truck picture on a billboard, what do you do? These kinds of hallucinations happen.

I believe this is exactly why we developed emotions. You need a way to respond when a rock you step over turns into a snake. You need fear to make you move to make up lost time. You need a way to respond when you see a snake and it is a rock. You need humor/laughter to defuse the situation. I see emotions as a check on what we are currently expecting versus what we ‘see’. When there is dissonance we need a way to quickly resolve them. But emotions are funny because you need to carry both ideas of snake and rock until they get resolved with further information.

AVs don’t know what to do when they are confused and don’t know how to get unconfused.

Barry Allen
9 months ago

You’ve mostly got it. Plus the architecture of current AI doesn’t really lend itself well to this.

Lightning
9 months ago

I listened to an interview with Philip Koopman, a Carnegie Mellon autonomous safety expert/engineer, on the Smoking Tire Podcast yesterday. He had some interesting things to say about autonomous vehicles that were nice to hear from an expert coming from outside the industry/a watchdog like Torch. There was a lot to digest, but some of my takeaways were: manufacturers are always shifting the blame to the drivers or victims, so you can’t really trust how safe they claim their systems are; FMVSS is only followed to the letter maybe 10% of the time because no one is monitoring; the Mercedes turquoise lights thing and Level 3 driving isn’t about taking responsibility/the blame for the results of potential collisions, just their product defect if/when it happens; and lots of other things. Jason and crew, check him out. I see he doesn’t have a lot of auto journalist follows on X except Zack Klapman and Alex Roy.

Cheap Bastard
9 months ago

These limitations are at the heart of why this happened, and why it would never happen to a human, who would see a pedestrian smack onto their hood and immediately think holy shit, I just hit somebody oh god oh god I hope they’re okay I better see how they are and so on. The AV has no ability to even conceive of such thoughts

Neither do DUIs, psycho/sociopaths and some elderly drivers.

Double Wide Harvey Park
9 months ago

1. Yes, it is an edge case
2. No, you can’t treat it the way you’d treat an edge case in a web application where someone’s last name is 2 letters long

Those two things are true, and the SV ethos that has indisputably worked well in a number of areas and changed the world for the better emphatically does not apply in meat space where people’s safety is in play.

I’m one of those SV types and most of us get it, but the type A sociopaths like the Cruise CEO, Travis from Uber, Musk, and many others don’t, and they really need to be smacked down.

Forrest
9 months ago

I worked as an engineer on autonomous cars in Silicon Valley for a few years. I reached a similar conclusion, and I decided to move into other areas of robotics.

Cheap Bastard
9 months ago

“something that any human driver that just smacked into a human would be aware of.”

I don’t think this is a given. Even if it is, in many countries the human’s response might be to make sure that person is dead:

https://www.snopes.com/fact-check/chinese-drivers-kill-pedestrians/

In this case the AV did exactly what I would hope of any responsible human driver: it pulled over as quickly as possible. It also sensed something was wrong, whereas a freaked-out human might not have, or might have caused that trapped person underneath a LOT more injury before realizing there was a bigger problem, and upon doing so might well have fled the scene.

If anything this AV was MORE responsible than many shitty human drivers.

Josh
9 months ago
Reply to  Cheap Bastard

Came here to say this. The number of CDL holding semi drivers I’ve seen on video pushing or dragging another car while completely unaware of it and the crappy human driving I’ve seen makes me think that while this was – in hindsight – not the right response, many licensed drivers would’ve made similarly bad decisions.

Cheap Bastard
9 months ago
Reply to  Josh

Like the one that hit the same person and took off.

John Galt
9 months ago
Reply to  Cheap Bastard

Ahhh.. the “whatabouts” have arrived.

You can throw all those drivers in jail to think about the consequences of their inaction/negligence.

The AI corp CEO just shrugs and says “whoops, edge case bro. Can’t do anything about that. Sometimes our cars will just murder people.” And that CEO got paid more for their minute of time saying that than an average person makes in a year.

Cheap Bastard
9 months ago
Reply to  John Galt

“You can throw all those drivers in jail to think about the consequences of their inaction/negligence.”

No you can not. The proof is the fact that the Nissan driver is not sitting in jail right now awaiting trial for felony hit and run, nor are untold numbers of other hit-and-run drivers since the dawn of the automobile, some of whom have died of old age having gotten away with manslaughter or murder.

The AI? If desired, a bad AI can be eliminated with a few keystrokes. You can also reprogram it to do better, which is more than can be said for many humans.

“The AI corp CEO just shrugs and says “whoops, edge case bro. Can’t do anything about that. Sometimes our cars will just murder people.” And that CEO got paid more for their minute of time saying that than an average person makes in a year.”

Uh whatever. You clearly have your own weird agenda here.

John Galt
9 months ago
Reply to  Cheap Bastard

What do you mean “no you can not?” The justice system does it every damn day. The word “can” does not suggest it happens every time. It suggests that the ability to do so is there. Whether or not that power is exercised is of course a matter of discussion.

My weird agenda is that rich techbros should not be able to externalize all the risks for testing their shitty products onto a populace who has no way to opt out of the tests. These CEOs, their companies, and the shareholders should be forced to pay the full, real, development costs of their experimental products.

Cheap Bastard
9 months ago
Reply to  John Galt

No you can not “throw all those drivers in jail to think about the consequences of their inaction/negligence”. Lots of people get away with that crime thanks to the only witness being unable to testify due to being dead.

“My weird agenda is that rich techbros should not be able to externalize all the risks for testing their shitty products onto a populace who has no way to opt out of the tests”

Funny, I don’t recall an opt out of all the risks of sharing the road with inexperienced human drivers either. Or with foreigners unfamiliar with local laws. Or impatient jerks. Or the senile.

That’s what insurance is for. In the case of AI require any company to carry a sizable policy or put up a sizable bond to cover whatever might happen.

Mechjaz
9 months ago

It doesn’t make it better, but it is an edge case. Certainly not the language I would use talking about a human being hit and dragged under a car, but at the same time, in the hardware and software suite, it is a relatively rare case.

The part that absolutely doesn’t get them off the hook is exactly what Torch said: driving is a social undertaking, not purely a mechanical one. When missing an edge case means letting someone use an invalid character for a username, that’s an oopsie. When missing an edge case means someone gets hurt or killed, you’re not ready to launch whatever it is you’re planning on launching, and it should open individuals to applicable civil or criminal charges.

Matthew Rigdon
9 months ago

Saw this on Threads and I commented there, but I’ll add it here.

We regularly hand a piece of paper over to thousands of 15-year-olds every year who have never been behind a wheel and set them loose on the streets with just an adult in the passenger seat. We’re so used to it that we just don’t bat an eye. In rural Texas I knew kids who got licenses at 13 or 14 if they could prove hardship. We let children drive in this country.

Like many who love and write about cars, I’m a white male. I’ve been driving for over thirty years and I think that I could have handled this situation without anyone getting hurt, but that’s because I probably overestimate my own abilities (as many white men do). Fact is, my reflexes are slowing down and there’s just as good a chance that I’ll black out, freak the hell out, or do something else to make it worse. All sorts of human things that humans do every day in situations like this.

We should be doing something to make self-driving cars better, but we won’t because the best thing would be to force all of these companies to share data and work together, but in this country we’d rather have a system where the company that kills the fewest people wins because ‘freedom’ and ‘profits!’

Sort of off-topic, but if you want to help the most pedestrians you’d be pushing to ban these behemoth electric vehicles from the road, or at least force anyone who buys one to get a Class B or higher license, because if a Cybertruck hits me when I’m walking, I’m probably going to die no matter who is controlling the vehicle.

Hoonicus
9 months ago

“After contacting the pedestrian, the AV continued decelerating for approximately 1.78 s before coming to its initial stop”

This is where it should have phoned home, activated flashers, and basically gone into panic mode. It’s hard to fault its actions up to this point, but it’s imperative for all providers to not allow what happened next!

I’ve previously posted on multiple occasions that I totally agree with Jason on doubting the tech can ever achieve zero-fault status. Until the providers’ CEOs are held to the legal ramifications that a human driver would be, zero-fault status is imperative to operate on public roadways.

Double Wide Harvey Park
9 months ago
Reply to  Hoonicus

That’s the thing–operating the vehicle is only part of it. Maybe stopping would have been worse, e.g. if the wheel was sitting on the person’s head. The right solution would probably have been to get out of the car and assess.

Oldhusky
9 months ago

This question of ‘edge cases’ and the extent to which reality, as Jason provocatively puts it, is “nothing but edge cases” is pretty interesting. It’s kind of a shorthand for saying that the world with humans acting in it cannot be fully quantified, certainly not to the degree that it could be made predictable, as in a sort of natural-science positivist model of reality. This is grounds for a fundamental theoretical skepticism about the entire project of autonomous vehicles. It’s possible that the technology will show this position to be at least partially, possibly mostly, wrong over time. That’s basically an empirical question that can only be answered in time.

It’s in the present, where that answer remains totally up in the air, and regular bystanders are made to be participants in a potentially deadly experiment that they have not consented to, that some really difficult judgments must be made. Here, it is the question of what standards to hold these machines and, of course, their corporate overlords to when things go wrong. More than one commenter has noted that a human being might be no better, and this was my reaction as well. Like Greensoul, I live in Texas, where the motorists I share the roads with on a daily basis are truly frightening, sober or impaired.

But an important question is whether AVs should be held to the same standard as human motorists. This is basically a sort of moral quandary that I’m not sure how to approach. It doesn’t seem right to hold AVs to the same standard as the rest of us meat puppets, for at least two reasons: 1) the claim is that the robots are (or will be) better drivers than humans, and 2) the people affected by this experiment have not consented. These things are on goddamned public roads, after all. You and I basically cannot avoid encounters with them, even if we want to. This makes me think that perhaps the standards for safe operation should be higher for AVs than for people in terms of culpability or, basically, the idea that they have really meaningfully fucked up in their task at motoring.

It reminds me of debating police violence with conservative friends–yes, people (this is almost always racially coded) perpetrate violence against one another at very high rates. But the cops are agents of the state, and so even a single instance of unjustified violence is unacceptable. The comparison isn’t quite so stark where it comes to AVs and human motorists, but I’m not sure they should be equivalent either.

Joe The Drummer
9 months ago
Reply to  Oldhusky

Which all goes to show, IMO, that we are still years if not decades or centuries away from AV tech being fit for public safety. Which leads me to another question: how come all the autonomous vehicle experimentation I hear about is in public on the street, and not in closed commercial settings, where it would not only be a great financial boon, but much safer by orders of magnitude, even when it screws up?

Why are we not hearing of AV innovation, say, at a gravel quarry, running backhoes and loaders and dump trucks within the yard? AV mowers tending the grounds at a golf course? Hell here’s one: have any of y’all ever had the distinct pleasure of trying to move a wheelbarrow full of wet concrete from the concrete mixer to where you want it dumped, without accidentally dumping it out somewhere in between where you really didn’t want a load of wet concrete to be? How about an autonomous wheelbarrow to take care of that for you?

John Galt
9 months ago

Autonomous vehicles have been a thing in large mine operations for some time. You don’t hear about it because, due to the way mining finances work, operations are allergic to large capital investments that were not part of the original mine plan. This means that AVs are either used from the start, or phased in gradually to newer parts of operations in a way that does not disrupt current labor relations. Also, mines don’t tend to use only AV fleets, but a mix.

And the biggest reason…
Mines, unlike silicon valley startups, have production schedules, production targets to make, delivery contracts, and profit targets. Missing any one of those means investors scatter like the wind.

05LGT
9 months ago

I remember reading that somewhere has a legal system that usually makes it less expensive to kill a pedestrian than to injure one, and that this has resulted in more people than you want to believe dragging a victim or backing back over them again. Since it was about China and I get information in the US, I don’t really know if I trust it.

Ranwhenparked
9 months ago
Reply to  05LGT

The US does have a system where leaving the scene of an accident is usually a lesser penalty than DUI, so drunk people do often just drive away and go home to sleep it off, so they’re sober by the time the cops find them.

Andy Individual
9 months ago

The grisly descriptions here should be added to an AI model, then put in Unreal Engine and sent to the regulators. ‘Cause that’s how the tech bros would do it.

Jayson Elliot
9 months ago

One question that I haven’t been able to find an answer to – what happened to the pedestrian? Did they live, hopefully? Were they able to make a full recovery?
None of that has anything to do with Cruise’s handling of the incident, I’m just worried about the person who was hit.

Andy Individual
9 months ago
Reply to  Jayson Elliot

Do you really think that’s relevant to the discussion? /s

Guido Sarducci
9 months ago

I, for one, do really think the outcome for the individual is central to the discussion. This discussion would not be taking place but for the lack of object permanence and other situational awareness which the autonomous self-driving software/sensors are incapable of understanding and applying.

Jason is spot on that this situation is absurd. A motor vehicle has the potential to be a dangerous weapon, and needs to remain under the control of the (human) driver at all times. Driving is indeed a social situation, and software / AI will never replicate or replace the capabilities of a human.

05LGT
9 months ago
Reply to  Jayson Elliot

“The person lived, by the way, just so you’re not wondering, so that’s good, at least.”
Jason wrote that ~3 paragraphs in.

Morgan van Humbeck
9 months ago

This article implies daddy is not a powerful magician. I don’t know about you, but I will not stand for this slander

Totally not a robot
9 months ago

Fun fact, object permanence is also why peekaboo works so well with kids. As soon as your face disappears behind your hands, you have vanished from existence until you magically reappear from somewhere.

On second thought, that might be one of the most traumatic experiences for most kids.

Andy Individual
9 months ago

Is this how face palms work? Just back out of the situation?

Double Wide Harvey Park
9 months ago

You may have found the root of Ted Bundy’s behavior.

Joe The Drummer
9 months ago

Tell me about it. My uncle “got my nose” in 1975 and never gave it back. Then he passed away without telling me where he hid it. I’ve been looking for my nose for nearly 50 years. It’s been a struggle.

Morgan van Humbeck
9 months ago

Such an innocent game. Nobody stops to think about the consequences

Totally not a robot
9 months ago

I may be a bit contrarian, but I have to admit to feeling actually a bit more safe and secure sharing roads with these robo-taxis (mostly Waymo these days). I can’t ever really be sure that a human driver sees me or acknowledges my presence as a pedestrian, but at least robo-taxis have more sensors beyond the visible spectrum, and they’re *in theory* supposed to be programmed to avoid hitting things.

Cheap Bastard
9 months ago

Robotaxis don’t hit people on purpose. Some horrible human drivers do.

Attila the Hatchback
9 months ago

The main issue here is the lie / cover-up from Cruise’s side of things right after it happened. What happened to the woman was terrible, but it is indeed a crazy and unfortunate corner case in terms of vehicle safety. The initial hit of the woman/jaywalker would most likely never have happened if the human-driven car was autonomous.

It is true that perception & planning systems in autonomy tend to have limited memory and understanding of events versus humans, but the autonomous systems do have 100% attentiveness which is something that humans fail at all the time.

Remember there are >1.6M humans injured by vehicles every year, and >40K humans killed by vehicles in the US every year. We’ve come to accept this as normal/acceptable. Autonomous vehicles are the best solution to this problem. Despite being a ‘car guy’, I still look forward to some point in the future when self-driven cars will be like horses — an old mode of transportation that people typically just use for fun on tracks and off-road.

Space
9 months ago

They say the cover up is worse than the crime. But only if you get caught, and then only if you aren’t powerful enough to control the watchers.

Greensoul
9 months ago

Not all humans would stop. Here in Texas a few weeks ago, a drunk guy hit a pedestrian so hard the pedestrian flew through the windshield and was half in the car, half on the hood. Not only did the driver not stop, he drove on about 35 miles with the victim stuck through his windshield. The drunk guy then pulled into a 24-hour fast food joint and passed out in the parking lot. When the cops got there, the driver claimed he thought that he had hit a deer. Sickening.

Double Wide Harvey Park
9 months ago
Reply to  Greensoul

The worst part is that the pedestrian had to listen to bro country for 35 miles

Greensoul
9 months ago

According to the coroner he was probably spared the bro country and was hopefully listening to harp music as it was determined he died upon impact.

Jason Masters
9 months ago

I’m not usually an AV apologist, but this is absolutely the correct way to handle this. A one-in-a-million event occurred and the AV system reacted to it the best it knew how, in 2 seconds and 20 feet. And it was a quarter second between impact and ABS event, which means a human would just be starting to widen their eyes in reaction to what just happened, while the AV system faulted because it knew something was up. I very much doubt most drivers would have handled it any better. If it were a human, we’d be hearing “it just happened so fast.”

Jack Beckman
9 months ago
Reply to  Jason Masters

I’m not so sure. If *something* hit the hood of my car and bounced off, I’m stopping immediately, even if I don’t know what it was, until I’m sure it’s safe to continue. I’m not driving another 20 feet.

I’m also not so sure this is a “one-in-a-million” occurrence. People are hit by cars every day, and often by more than one if the street is busy.

Jason Masters
9 months ago
Reply to  Jack Beckman

While they didn’t specify, the 0.23-second delay between impact and ABS event would put the AV at about 12-13 mph when it happened. According to a sleazy injury law site, “At 20 mph, once the brakes are applied, it takes approximately 19 feet to stop,” which means a significant chunk of that 2 seconds and 20 feet was the car stopping. I’m not saying it can’t be better, but it’s not immediate and never will be. I’m just saying the system failed pretty gracefully considering the circumstances.

Jayson Elliot
9 months ago
Reply to  Jason Masters

Let me know how you feel about it when it’s someone you love that gets killed by an AV that could have let them live, except an engineering manager didn’t want to put the extra hours into coding for an “edge case.”

59turner
9 months ago
Reply to  Jason Masters

This is absolutely the wrong take. The AV industry’s first claim is that they will make roads safer. They will do this by reducing the number of accidents. If they claim that accidents are outliers, then they are ignoring the whole point, which is that they are supposed to be making roads safer. It can’t be both ways. And if they are not trying to make roads safer, what is their goal? We need AVs to be better than humans during accidents or they are pointless.

Cheap Bastard
9 months ago
Reply to  59turner

AVs are already better than humans since AVs don’t go out of their way to mow down pedestrians just for shits and giggles. Humans do.

59turner
9 months ago
Reply to  Cheap Bastard

How do you know that the Cruise AV wasn’t giggling?

Cheap Bastard
9 months ago
Reply to  59turner

Because it stopped. Giggling humans keep going. IIRC so do demonically possessed vehicles.

See Christine, Maximum Overdrive, The Car,..

59turner
9 months ago
Reply to  Cheap Bastard

You fell for it. That is exactly what it wants you to think.

Cheap Bastard
9 months ago
Reply to  59turner

If it’s THAT devious we’re all screwed!

AI by Skynet, coming to a Chevy near you.

Mike Harrell
9 months ago

“…because objects don’t just blink out of existence when we don’t see them.”

Clearly you’ve never dropped or set down anything in my garage.

Totally not a robot
9 months ago
Reply to  Mike Harrell

*Cue 10mm socket joke*

Andy Individual
9 months ago
Reply to  Mike Harrell

My siblings and I started buying AirTags for my mom so she could find her beer. She could never find her phone, so they were useless. It just became cheaper to leave beers in random places around her home.

I accepted the same fate and now leave a pair of reading glasses, a slot, Phillips, and Robertson screwdriver, and a tape measure in every room of my home. It kind of works…
