The illusion of truth online
We all know that the internet is full of lies, half-truths, and misinformation. But we trust Google because we believe it’s basically a machine that can sift through all of humanity’s knowledge and separate fact from fiction. Except… it can’t. That’s not even what it’s supposed to do.
Google doesn’t know the truth, and it’s not even trying. Its only goal is to give you an answer as fast as possible. Which means what you get isn’t the true answer - it’s the most common answer to your exact question.
Which is a very different concept.
The plastic problem (or not)
You’ve probably heard this: thanks to microplastics in the water we drink and the air we breathe, the average human ingests about a credit card’s worth of plastic every week. Horrifying, right?

Now Google that exact question: “Do we ingest a credit card worth of plastic every week?” - boom, every result says yes. AI Overview too.

Now change the wording slightly: “Is it false that we ingest a credit card of plastic every week?” Suddenly, Google agrees it’s false. AI Overview too. The search results flip.

Nothing changed except the question. And that’s because Google isn’t weighing evidence or fact-checking. It’s just mirroring the most common answer phrased the way you asked.
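To see why phrasing alone can flip the winner, here is a deliberately crude sketch (my own toy model, not Google’s actual algorithm): rank pages purely by how many words they share with the query. A page that repeats the myth overlaps best with the myth-phrased question; a page that debunks it overlaps best with the doubt-phrased question.

```python
# Toy "most popular match" retrieval: score each page by word overlap
# with the query. This is an illustrative assumption, not how Google works.
def score(query: str, page: str) -> int:
    q = set(query.lower().replace("?", "").split())
    p = set(page.lower().split())
    return len(q & p)  # shared words between query and page

pages = [
    "yes we ingest a credit card worth of plastic every week",         # repeats the myth
    "it is false that we ingest a credit card of plastic every week",  # debunks it
]

def top_result(query: str) -> str:
    # Return the page with the highest overlap score.
    return max(pages, key=lambda page: score(query, page))

print(top_result("Do we ingest a credit card worth of plastic every week?"))
print(top_result("Is it false that we ingest a credit card of plastic every week?"))
```

Same two pages, same "facts" available - but each question surfaces the page that echoes its own wording.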
For the record? The truth is way less dramatic. If you do actual research - dig through studies, sources, and math - you’ll find we ingest about a credit card’s worth of plastic every 23,000 years. Not every week. Not every month. Every twenty-three thousand years. Which means you’d need to slice that card into over a million pieces and eat one sliver a week to keep pace. Suddenly… not such a big deal.
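The back-of-envelope math is easy to check yourself. The 23,000-year figure comes from the text above; the ~5 g card mass and 80-year lifespan are my own illustrative assumptions.

```python
# Sanity-check "a credit card's worth every 23,000 years".
# CARD_MASS_G and the 80-year lifespan are assumed round numbers.
CARD_MASS_G = 5.0
YEARS_PER_CARD = 23_000
WEEKS_PER_YEAR = 52

weeks_per_card = YEARS_PER_CARD * WEEKS_PER_YEAR   # ~1.2 million weeks per card
grams_per_week = CARD_MASS_G / weeks_per_card      # weekly intake implied by the figure
lifetime_fraction = 80 / YEARS_PER_CARD            # share of one card eaten in 80 years

print(f"{weeks_per_card:,} weeks per card")
print(f"{grams_per_week * 1e6:.1f} micrograms per week")
print(f"{lifetime_fraction:.2%} of a card per lifetime")
```

That works out to roughly 4 micrograms a week - over a million slivers per card - versus the myth’s five grams a week.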
Google isn’t broken - we just expect too much
So what am I saying here? Never trust Google? Not exactly. For most trivia questions, it does a great job. Wikipedia is still one of the most reliable resources on the web. But it’s important to understand what Google actually is: a machine for retrieving the most popular answer, not the most accurate one.
And this goes double for ChatGPT and other AIs, which have quickly become “the new Google” for millions of people. But here’s the uncomfortable truth: ChatGPT isn’t a thinking machine. It’s a large language model - an autocomplete system on steroids. It doesn’t know anything, and it has no concept of truth. It just predicts what words should come next in a way that sounds smart.
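"Autocomplete on steroids" isn’t just a metaphor. Here is a minimal sketch of the idea, assuming a toy bigram model (real LLMs are vastly larger and subtler, but the spirit is the same): the model has no notion of truth, only of which word most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": for each word, count which words followed it
# in the training text, then always predict the most frequent continuation.
# A crude stand-in for next-word prediction - frequency, not facts.
corpus = ("we ingest a credit card of plastic every week "
          "we ingest a credit card of plastic every day").split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def autocomplete(word: str) -> str:
    # Most common continuation wins, regardless of whether it's true.
    return next_counts[word].most_common(1)[0][0]

print(autocomplete("plastic"))  # predicts "every" - common, not checked
```

Feed it a myth often enough and it will happily complete the myth, because completing is all it does.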
Don’t believe everything you hear (still)
That old saying - “Don’t believe everything you hear on the internet” - is more relevant than ever. Because the internet is very good at repeating myths until they become “truth.”
The real truth? It’s out there. But it takes more than one question to find it.