I’ll be honest with you: I don’t want our site to become a place that posts about every Tesla Cybertruck wreck just because (and David’s review of the truck was actually fairly positive; it’s an impressive machine), not even the wrecks that may be at least partly caused by Tesla’s Level 2 semi-automated driving software, still confusingly named Full Self-Driving (FSD). But sometimes there’s just a perfect storm of irony and eye-rollery and embarrassing fanboyism and powerful, important lessons about the sometimes serious limitations of Level 2 driving systems. This is one of those times, thanks to a viral tweet that hits all of these key points and comes with a dramatic picture, too. It’s really the total package. We have to talk about it, at least a little.
I should mention that while the Cybertruck here looks pretty badly messed up and is likely totaled, nobody was actually hurt, so that’s good. Modern cars are very good about not killing the beings inside them in wrecks, and we’re seeing a great example of that here.
In fact, that’s one of the key points of the tweet, which has been seen by 2.4 million people:
Here’s a bigger screenshot of the text:
![Ct Tweet 1](https://images-stag.jazelc.com/uploads/theautopian-m2en/ct_tweet_1.jpg)
Complimenting The Car That Helped Cause The Crash To Keep The ‘Haters’ Away
What many seem to find amazing, and the reason the tweet is going viral, is that the poster seems worried about what the dashcam footage of his wreck will do for the “bears/haters” after his car crashed itself into a pole.
There’s an awful lot here to unpack. Of course, the first thing that jumps out to most people is how the Cybertruck owner thanks Tesla for “engineering the best passive safety in the world.” I don’t know if that’s actually true, though the Cybertruck does have some impressive passive safety features, and like many modern vehicles, it is generally safe in a wreck, and it is genuinely amazing that the owner here came out without a scratch. And then in the next sentence the owner states that the Cybertruck “failed to merge out of a lane that was ending (there was no one on my left) and made no attempt to slow down” which resulted in the Cybertruck hopping a curb and smacking right into that large streetlight pole, as you can see in the photograph included with the tweet.
Notice I said “owner” there instead of “driver,” because that leads to the next, far more important point. There’s the irony, of course, of thanking Tesla for making a vehicle that’s so safe in a wreck, and then going on to mention that the very same vehicle caused the wreck. It’s very good at dealing with the problem it created!
It’s Proof That These Level 2 Systems Are Problematic
Now, where this gets more complex, and leads to the owner/driver distinction I mentioned before, is that exactly who or what was in control of that 6,000+ pounds of Cybertruck is not clear. The tweet definitely makes it seem like Tesla’s FSD software (noted in the tweet to be version 13.2.4, which is less than a month old) was in full control of the car and made the decisions that led to crashing into that light pole. And the fact that, despite the car heading towards the pole, the human inside did nothing to stop the situation pretty clearly shows they were not in control or paying attention.
This, of course, is precisely how one is not supposed to use a Level 2 system, which requires constant driver vigilance, and is also the fundamental reason why all semi-automated driving systems are doomed: because people are terrible at using them. And, even worse and more paradoxically, the better a job the Level 2 system does, the worse it is for people to actually use and the more likely it is to be abused, as we see in this situation.
By the way, if you don’t remember what those automated driving levels are, here’s the chart. Remember, the levels don’t indicate how good or advanced the system is, but rather dictate the relationship between the human and the machine when it comes to how the driving responsibility is shared:
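Roughly, per SAE’s J3016 definitions:

- Level 0: No automation. The human does all the driving; the car may warn or momentarily assist.
- Level 1: Driver assistance. The system handles steering or speed, but not both at once.
- Level 2: Partial automation. The system handles steering and speed together, but the human must supervise every moment (this is where Autopilot and FSD live).
- Level 3: Conditional automation. The system drives within limited conditions, and the human must take over when prompted.
- Level 4: High automation. The system drives itself within limited conditions, with no human takeover required there.
- Level 5: Full automation. The system drives itself everywhere, under all conditions.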
All Tesla FSD systems on the road today are Level 2 systems, meaning the human is always responsible, no matter how well the system works or how long it’s been since your last intervention. Under FSD, you have to be ready to take over at a moment’s notice, unless you’re cool with, say, smashing your $100,000 stainless steel truck into a huge post.
Other tweets by the owner seem to suggest a pattern of taking FSD’s name literally and just treating it like an actual Level 4 or 5 self-driving system – which, again, it isn’t – as you can infer here:
Sometimes I decide to go somewhere and turn on @Tesla FSD and then I forget where I decided to go and then it starts turning into Taco Bell or whatever and I'm like wtf is it doing and then I'm like oh right Taco Bell
thank you for coming to my TED talk
— Jonathan Challinger (@MrChallinger) January 6, 2025
So, the Twitter-poster here gets in his Cybertruck and turns on FSD and is so distracted he doesn’t even remember where he asked it to go? Even when there’s Taco Bell at the end? Bizarre.
This is, of course, precisely the worst way to use any sort of Level 2 driving-assist system and remains the biggest issue with such systems. People are genuinely confused about just how much a driver-assist system does, and even when they’re not confused, people still get incredibly distracted when using driver-assist systems that they’re supposed to be monitoring, because of course they do.
We’ve known this would be a problem since 1948, when Mackworth’s famous study “The Breakdown of Vigilance during Prolonged Visual Search” demonstrated that intense and prolonged vigilance over a mostly independently-operating system is simply not something people are good at. At all. Tesla’s FSD not only calls itself a “Full Self-Driving” system, which sets up unrealistic expectations, but it’s also generally quite good at what it does, which means it is very easy for people to end up in the same situation as Mr. Pole-Pegged Cybertruck over here: too comfortable and then ultimately fucked.
Because no matter how good FSD may be most of the time, it’s not actually a Level 4 or 5 system, and even if it’s been doing fantastic for months, you have no idea when it may suddenly decide to not realize a lane is ending and crash into a pole. I know it’s tempting for ardent defenders of FSD or other automated driving systems to blame these sorts of wrecks on lighting or weather or confusing environmental setups, calling them “edge cases” and suggesting that these circumstances are uncommon, and not worth making a big fuss about.
But here’s the thing about “edge cases”: the world is full of edge cases. That’s what reality is! Messy, chaotic edge cases, with some veneer of order slapped down over everything. Look at where this wreck happened, for example, on that map of a section of Reno, Nevada there. That’s not a particularly challenging area, really. It looks like it was dark, but we’re not talking about some labyrinthine narrow, crowded streets in an ancient European city or anything like that: these are big, well-lit roads that aren’t even particularly dense, traffic-wise. I’m sure the FSD instance in that very same Cybertruck has navigated far more complex situations than this (FSD is a thoroughly impressive system). But, this time, for whatever arcane reason made up of ones and zeros, it got confused and drove right smack into a pole, like an idiot.
The owner should have been able to see this coming with plenty of time to react. If I understand what happened correctly, then the wreck is his fault, no question. He was using this software in a stupid way, and thankfully that stupidity didn’t hurt anyone else. But while the owner is likely entirely to blame, at the same time, that blame is shared with the very concept of a Level 2 driving system itself, which has been shown time and time again to foster this sort of dangerous inattention.
Yes, it appears our Cybertruck-lover wasn’t being particularly smart here, but he was guided to that idiocy by a technology that is technically very advanced, but practically crude and foolish. You can’t ask a machine to take over 90% or more of a task and then expect a person to remain alert and vigilant, at least not all the time. This wreck seems stupid on pretty much every level possible, the driver’s stupidity and the inherent stupidity of Level 2 semi-autonomy working hand-in-hand to make sure this person no longer has a working car.
We’ve reported on these sorts of wrecks before, we’ve told you about the studies and findings and research, and we will continue to do so, because wrecks like this show that the message is still not being heard. FSD, no matter what version number, no matter how few interventions you saw on some YouTube video of a drive, does not drive a car by itself. There still must be a human there, ready and able and willing to take over when it screws up. Which it may never do. Or it may do in the next minute. The point is, you just don’t know.
It’s sort of ridiculous that we’re still letting all this happen without really trying to find better solutions for the goal of self-driving. Sure, L2 is a great way to get self-driving data, but the way these systems are set up is kind of backwards. When the concept is that the machine is in more control and the human is watching or observing or monitoring, the tendency will always be less vigilance on the human’s part. If this were reversed, and the human were always in control with the software ready to assist in emergencies or to compensate for driver failure, you could get all the data without introducing these vigilance-induced problems.
Of course, that’s a lot less sexy, and people want to not pay attention if they don’t have to. So here we are.
(Topshot: X, Jonathan Challinger)