I think I understand why this video that’s going viral right now has gotten such a grip on so many people’s attention: it’s a cartoon trope, made real. And that’s always a good time. If someone could synthesize a way to walk off a cliff and not fall until you realized you were hovering in mid-air, for example, that would be huge. But what YouTuber (not like a potato, like a person who makes videos) Mark Rober has done is the next best thing: he’s built a full-size version of the painted-road gag from the Road Runner and Wile E. Coyote cartoons.
I should note what this video is really about, if you look past all the cartoony frippery and the remarkable mid-video distraction involving sneaking a LIDAR rig onto multiple Disney World rides, including Space Mountain and the Haunted Mansion – a detour that’s absolutely worthwhile and that I’m surprised didn’t end up as its own video. It’s about the differences between an all-camera-based driving sensor system and a LIDAR (light detection and ranging)-assisted sensor array.


If anything, the classic Warner Bros painted-road gag is sort of the perfect way to distinguish between LIDAR- and camera-based systems. Oh, and if you somehow were denied a childhood and don’t know the cartoon trope I’m talking about, it’s this:
Oh man, that’s a classic. And Mr. Coyote is quite an effective and efficient artist, I have to hand it to him, even if his paintings do sort of violate everything I thought I understood about the nature of reality.
Anyway, here’s Rober’s video, so you can see it all for yourself; there are some things I want to talk about from the video, too:
I think what I personally find so engaging about this video is how well the fake-road-wall was executed. I mean, look at this:

Damn, that’s a mighty fine fake road wall, there. And thought was put into the potential smashing-through, which I respect. At the risk of spoiling things, here’s the Tesla smashing through the wall:

…and the aftermath:

As you can tell, the breakaway portion was made to give a very nice cartoony jagged-edged hole. That’s some top-notch real-world cartoon prop work!
Okay, now let’s address some of the issues with this video, and cover a bit of why all of the hardcore Tesla people are so very cross about it. Now, I’m not usually one to side with the glassy-eyed, drool-providing Minions of Musk, but I think there are some valid arguments that, in some ways, this video makes Tesla look a bit worse than it actually needed to.
I say this for a few reasons; first is that, fundamentally, this isn’t a LIDAR vs Tesla issue; it’s a LIDAR vs camera issue, and the video does make this clear:

The issues that the Tesla had would be the same ones any automated or semi-automated driver-assist system would have had because, at least in the case of the big fake road wall, what’s being presented is a two-dimensional representation of the road ahead – and that’s what camera-based systems see, all the time. Cameras capture two-dimensional images and have to infer depth and distance from visual cues, unlike LIDAR systems, which actually shoot lasers out and measure how long they take to bounce back, giving a true three-dimensional representation of the immediate environment.
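Just to make that difference concrete, here’s a tiny illustrative sketch – mine, not Tesla’s code or any lidar vendor’s – of what a LIDAR unit is fundamentally doing on every pulse: timing how long the laser takes to come back and turning that into a distance. A camera frame simply has no equivalent number to compute.

# A minimal illustrative sketch (not anyone's production code): a camera frame
# is just a grid of colors with no range data, while a lidar return gives you
# distance directly from the round-trip time of a laser pulse.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def lidar_range_m(round_trip_time_s: float) -> float:
    # The pulse travels out and back, so the one-way distance is half of
    # (speed of light x elapsed time).
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A return that arrives about 66.7 nanoseconds after the pulse went out means
# the surface is roughly 10 meters away, no matter how convincingly a road has
# been painted on it.
print(f"{lidar_range_m(66.7e-9):.1f} m")  # -> 10.0 m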
Where a camera system may see a big photo of the road on a wall as the road, a LIDAR system sees, well, a wall:

LIDAR can’t see color or what may be printed on the surface of an object, but it sure as hell can tell where an object is, how big it is, and what shape it is (see above).
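Here’s an equally hedged sketch of why that matters for the painted wall – the thresholds and the looks_like_a_wall function are made up for illustration, not pulled from any real perception stack. Road returns hug the ground and spread out in distance; wall returns stack up vertically at roughly the same range, painted scenery or not.

# Illustrative only: points are (forward_m, left_m, up_m) lidar returns from
# the area ahead of the car. Real systems are far more sophisticated; these
# thresholds are invented for the example.

from statistics import pstdev

def looks_like_a_wall(points_xyz):
    forward = [p[0] for p in points_xyz]
    up = [p[2] for p in points_xyz]
    # A wall: nearly constant forward distance, with returns well above the road surface.
    return pstdev(forward) < 0.2 and max(up) > 1.0

painted_wall = [(10.0, x * 0.5, z * 0.5) for x in range(-3, 4) for z in range(5)]
open_road = [(5.0 + d, 0.0, 0.0) for d in range(20)]

print(looks_like_a_wall(painted_wall))  # True
print(looks_like_a_wall(open_road))     # False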
Almost any non-LIDAR-based car would have done just as badly here: a Ford Mach-E, a Lexus, a Cadillac, a whatever. But a Tesla is definitely going to get more clicks, and YouTubers aren’t dummies, at least not about that stuff.
Of course, you could argue that the use of a Tesla in a LIDAR-versus-camera comparison makes sense, because Tesla’s reclusive, hermit-like, borderline silent CEO, Elon Musk, has opinions about LIDAR that he likes to share, like this quote from earlier this year:
“Obviously, humans drive without shooting lasers out of their eyes. I mean, unless you’re Superman…. humans drive with eyes and a neural net and a brain neural net, sort of biological… the digital equivalent of eyes and a brain are cameras and digital neural nets or AI,”
Now, the idea that “humans drive without shooting lasers out of their eyes” is objectively accurate, so good job noticing that, Mr. Musk, but it’s also an insipid analogy. Just because humans have some limitations doesn’t mean our technology has to be bound by those limitations as well. Humans also don’t have electric motors or wheels or lights or any number of other parts of a car, and we’re fine adding those to our arsenal of tools to go faster, and more comfortably and safely, than we can on our own.
And also, driving isn’t what Superman uses his eye-lasers for. Those are for welding shit and burning holes in things.
But, other than that, this isn’t specific to Tesla.
A lot of hardcore Tesla geeks have taken issue with the fact that Rober decided to use Autopilot, Tesla’s original Level 2 driver-assist system, instead of FSD (Full Self-Driving), their more recent and more advanced Level 2 driver-assist system. This has, of course, caused a lot of outrage among Tesla fans, with some even alleging an outright attempt to deceive and suggesting Rober may become the target of a lawsuit:
I don’t really think that’s likely, and, honestly, I’m not sure how much of a difference FSD would have made for the Wile E. Coyote test; maybe it would have done better, maybe the same, but ultimately it barely matters, because gigantic paintings or photographs of roads covering a massive wall are just not driving hazards we generally need to worry about.

Other tests that were performed in the video, for foggy conditions or harsh rain? Sure, those are worth testing, and perhaps FSD could have performed better, but since both of those conditions fundamentally impair any camera’s ability to view a scene, that’s hardly guaranteed.
So, overall, I think this video had one real point: LIDAR can, in some situations, provide a better understanding of a car’s surroundings than cameras alone. I think the true path forward will incorporate both technologies, and Tesla, while not especially worse than any other camera-based system in these circumstances (and likely better than many), probably shouldn’t be so adamantly against LIDAR.
The video was fun, and maybe that’s enough. I mean, of course it’s not, the world being what it is and people being who they are and Tesla being Tesla, but one can dream.
Topshot: Warner Bros and Mark Rober/YouTube
The tags at the bottom of this article are fantastic.
There’s a reason Teslas drive into the back of emergency vehicles: those vehicles have high-vis stripes painted up their sides, and the system can’t tell the markings from road markings. The reason it’s almost always a Tesla is that they’re the vast majority of camera-based cars on the road, and the drivers have been intentionally misled into believing they’re more capable than they are.
There is a reason most Chinese ADAS setups use lidar. It also just so happens that lidar robot vacuums don’t crash as much, or fall down the stairs, either. Optical sensors don’t do depth perception well.
A human, with just their two eyes, would likely have seen the 2-D wall for what it was despite the image of a road continuing on. That’s because perspective continuously shifts as you move, but the image on the wall would remain static. A human would notice that something was off immediately.
Run the test again with just human drivers and no cameras or lidar – no driver assists at all. I’ll bet an alert driver in decent seeing conditions would see the fake wall almost immediately 100% of the time.
That guy has managed to get on Warner Brothers, Disney, and Musk’s enemies list with one video. That’s quite an accomplishment. The Space Mountain scan was so cool I could almost overlook the backward hat.
Humans get confused and run into things all the time. Being as good as humans isn’t even close to adequate.
Stereoscopy might have worked on the cartoon wall, but probably not in the fog and the rain. Adding some structured illumination by projecting an array of infrared spots, in conjunction with a camera and ranging by triangulation, would be cheap and much more effective.
I take slight issue with this quote from the article: “The issues that the Tesla had would be the same ones any automated or semi-automated driving assist systems would have had.”
The video never claims that these issues were inherent to Tesla cameras specifically.
Sure, but that’s what viewers are going to take away from the video, given how it was packaged: that this is a Tesla problem.
Does anyone else find it unsettling how much Musk apparently knows about human biology? Like, how long have his people been on our planet?
Dr. Chuck Jones was trying to prepare us and we didn’t listen! This is going to become a common problem and we will look back on this with regret.
Musk is just upset his car failed a trompe l’oeil test.
It hurts to say this, but Musk is right, in principle. Cameras can see in 3D. You just need two of them and the software to properly interpret the data.
Also the Superman character isn’t actually a human.
This is the guy who thinks the name of Harrison Ford’s character was “Blade Runner.” And that he would drive an ugly stainless steel truck instead of a bitchin’ flying police car.
Deckard would have driven whatever his owners/bosses told him to.
They just had to use styrofoam. They better have picked up every last nanoparticle of it.
Lawsuit hell, Elon’s probably angling to get him deported as we speak.
Idk I’m down with anything that pushes Tesla stock down.
It was an interesting video. While the wall was the big finale, the real damning failures were the rain and fog.