Here at The Autopian, we have some very stern rules when it comes to the use of Artificial Intelligence (AI) in the content we produce. While our crack design team may occasionally employ AI as a tool in generating images, we’ll never just use AI on its own to do anything – not just for ethical reasons, but because we often want images of specific cars, and AI fundamentally doesn’t understand anything. When an AI generates an image of a car, it has no idea if that car ever actually existed or not. An AI doesn’t have ideas at all, in fact – it’s just scraped data being reassembled by a glorified pile of if-then-else commands.
This is an even bigger factor in AI-generated copy. We’ll never use it because AI has no idea what the hell it’s writing about, and so has no clue if anything is actually true, and since ChatGPT has never driven a car, I don’t really trust its “insights” into anything automotive.
These sorts of rules are hardly universal in our industry, though, so if we ever wanted confirmation that our no-AI-copy rule was the right call, we’re lucky enough to be able to get such reassurance pretty easily. For example, all we have to do is read this dazzlingly shitty article re-published over on Yahoo Finance about the worst cars people have owned.
Maybe it’s not AI? Maybe this “Kellan Jansen” is an actual writer who actually wrote this, and in that case, I feel bad both for this coming excoriation and about whatever happened to them to cause them to be in the state they seem to be in. The article is shallow and terrible and gleefully, hilariously wrong in several places.
I guess I should also note that we don’t use AI because the 48K Sinclair Spectrum workstations we use here don’t quite have the power to run any AI. Well, we do have one AI that we use on them, our Artificial Ignorance system that we employ to get just that special je ne sais quoi in every post we write. Oh, and our AI (Artificial Indignation) tools help with our hot takes, too. So, two.
Okay, but let’s get back to the Yahoo Finance article, titled “The Worst Car I Ever Owned: 9 People Share Which Vehicles Aren’t Worth Your Money,” a conceptually lazy piece that just repackages the responses to a Reddit post called “What’s the worst car you have personally owned?” – making this story basically a re-write of a Reddit post. Based on the results, it seems like the Reddit post was fed into whatever AI half-assed its way through generating the article.
The results are, predictably, shitty, but also still worthy of pointing out because come on. There’s this, for example:
2023 BMW BRZ
BMWs are a frequent source of frustration for car owners on Reddit. Just ask user “Hurr1canE_.”
They bought a 2023 BMW BRZ and almost immediately started experiencing problems. Their turbo started blowing white smoke within two weeks of buying the car, and the engine blew up within 5,000 miles.
The Reddit user also had these issues with the car:
- Air valve
- Computer software and control module
- Transmission
Other users mention poor experiences with BMW X3s and 540i Sport Wagons. It’s enough to suggest you think carefully before making one of these your next vehicle.
The fuck? What is a BMW BRZ? This is such a perfect example of why AI-generated articles are garbage: they make shit up. Maybe that’s anthropomorphizing the un-sentient algorithm too much, but the point is that it’s writing, with all the confidence of a drunk uncle about to belly-flop into a pool, about a car that simply does not exist.
And, if you look at the Reddit post, it’s easy to see what happened:
The Redditor had their current car, a 2023 [Subaru] BRZ in their little under-name caption (their flair), and the dumb AI processed that into the mix, and, being a dumb computer algorithm that doesn’t know from cars or clams, conflated the car being talked about with the one the poster actually owns. You know, like how a drooling simpleton might.
There’s more of this, too. Like this one:
Ah, yes, the F10 550i. So many of us have been burned by that F10 brand, have we not? Or, at least, we would have, if such a brand existed, which it doesn’t. What seems to have happened here is that the AI found a user complaining about a “2011 F10 550i” but didn’t know enough to realize the user was talking about their BMW 5 Series. Yes, F10 refers to the 5 Series cars made from 2010 to 2016, but nobody would refer to the car that way, out of context, in a general-interest article on a financial site without mentioning BMW, would they? I mean, no human would, but we don’t seem to be dealing with a human, just a dumb machine.
Even if we ignore the made-up car makes and models, the vague and useless issues listed, and the fact that the article is nothing more than a re-tread of a random Reddit post, there’s no escaping that this entire thing is useless garbage, an unmitigated waste of time. What is learned by reading this article? What is gained? Nothing, absolutely nothing.
And it’s not like this is on some no-name site; it was published on Yahoo! Finance, well, after first appearing on GOBankingRates.com, that mainstay of automotive journalism. It all just makes me angry because there are innocent normies out there, reading Yahoo! Finance, maybe with some mild interest in cars, and now their heads are getting filled with information that is simply wrong.
People deserve better than this garbage. And this was just something innocuous; what if some overpaid seat-dampener at Yahoo decides that they’ll have AI write articles about actually driving or something that involves actual safety, and there’s no attempt made to confirm that the text AI poops out has any basis in fact at all?
We don’t need this. AI-generated crapticles like these are just going to clog Google searches and load the web up full of insipid, inaccurate garbage, and that’s my job, dammit.
Seriously, though, we’re at an interesting transition point right now; these kinds of articles are still new, and while I don’t know if there’s any way we can stop the internet from becoming polluted with this sort of crap, maybe we can at least complain about it, loudly. Then we can say we Did Something.
(Thanks, Isaac!)
Ha, Yahoo pulled the article, sadly. Wanted to use it as an example of how AI is propagating misinformation.
I counted at least seven things wrong with that Yahoo! Finance article on my AI-generated hand.
My concern is primarily Microsoft shoving copilot into everything as it flails for relevance. An omnipresent statistical autocorrect platform that copies everything you put in your clipboard sounds like a great attack vector.
This is exactly why I’ve replaced Google with DuckDuckGo for my searching. Big sites like Yahoo, CNN and CNet (rip old good CNet) are barfing up tons of “consumer advice” type articles and getting primo ranking on Google, so much so that I can’t even really find the original research they’re ripping off anymore. DDG doesn’t really have that problem yet, but I’m worried. A firehose of trash is getting unleashed upon the world and we’re not ready.
The thing about the current crop of “AI” systems isn’t that they sometimes hallucinate, it’s that hallucinating is What They Do, and the things they hallucinate are true a surprising amount of the time. It’s not a bug, it’s intrinsic to how they operate – they’re guessing the next word (or pixel or whatever), and then taking that guess and guessing the word that would come after that, and then guessing the word that would come after that. It’s all hallucinations, just sometimes they’re similar enough to reality to be useful.
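The guess-a-word, append-it, guess-again loop described above can be sketched in a few lines of Python. This is a toy illustration, not how a real LLM works internally: the hand-written `BIGRAMS` table here is a hypothetical stand-in for what a neural network learns from scraped text, but the one-guess-at-a-time generation loop is the same shape.

```python
import random

# Toy "model": for each word, some words that might follow it.
# A real LLM learns probabilities over tokens from scraped text;
# this hand-written table is just a stand-in for illustration.
BIGRAMS = {
    "the": ["car", "engine", "turbo"],
    "car": ["blew", "is"],
    "engine": ["blew", "is"],
    "turbo": ["blew"],
    "blew": ["up"],
    "is": ["fast"],
}

def generate(start, max_words=6):
    """Guess the next word, append it, then guess the word after that."""
    words = [start]
    while len(words) < max_words:
        options = BIGRAMS.get(words[-1])
        if not options:  # nothing plausible comes next: stop
            break
        words.append(random.choice(options))  # every step is a guess
    return " ".join(words)

print(generate("the"))
```

Note that nothing in the loop checks whether the output is *true* – each word is only ever chosen because it plausibly follows the previous one, which is exactly why the output is "all hallucinations" that merely sometimes line up with reality.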
The basic rule of thumb for using these systems is if getting a shitty answer fast is useful, they’re useful – if you need quality, they’re useless.
It’s only a matter of time before it does something like this somewhere that matters with someone’s real name –
“John Smith said he personally planned the insurrection” – and then the dipshit having AI do his work for him is going to get sued.