
Why the Internet Broke AI: The Problem with Google Search in 2024

Google has dominated the search engine game for decades, but 2024 brought a shift. ChatGPT and other AI tools emerged, offering quick, summarized answers to your queries. It sounded like a dream come true: an AI that could cut through the noise and deliver the information you need without endless scrolling. But what happens when the AI is trained on... well, the internet?


The initial concept was simple. Instead of sifting through countless blue links on Google, why not have an AI provide the best answer in seconds? An AI-powered search assistant could revolutionize the way we access information, saving time and energy. But, as they say, the road to disaster is paved with good intentions.

The internet is an endless repository of information—some of it reliable, much of it not. The AI quickly began to reflect the darker corners of the web. Instead of offering accurate answers, it started parroting misinformation, satire, and wild tips from Reddit and beyond.

Examples?

  • It confidently declared that Barack Obama was Muslim.

  • It insisted that Celine (a cleaning agent) could be used in cooking.

  • It suggested eating rocks for vitamins and minerals.

  • It even recommended adding glue to pizza, pulling the idea from a decade-old Reddit joke by user foxsmith.

AI learns by consuming massive amounts of data from the internet. But the internet isn’t curated for truth—it’s filled with memes, satire, and outright lies. Without proper filtering and context, the AI couldn’t distinguish fact from fiction. And users trusted it, assuming its responses were vetted.


The glue-on-pizza recommendation became a viral example of how wrong things could go. Internet sleuths traced the absurd suggestion back to a Reddit post from years ago. The AI had no way to understand the post’s context or humor, so it presented it as a serious tip. People laughed—and worried.

As if glue on pizza wasn’t enough, the AI also claimed that parrots could cook. While the idea was amusing, it pointed to a deeper problem: the AI’s inability to verify or contextualize the information it processed. Internet jokes and misunderstandings became “facts” in the AI’s world.

Developers are scrambling to improve how AIs filter and verify information. Fact-checking algorithms, trusted data sources, and human oversight are being integrated to prevent future disasters. The lesson? AI isn’t a magic fix—it’s a tool that needs careful guidance.

AI can be incredibly useful, but it’s not perfect. Always cross-check critical information, especially when it sounds odd or too good to be true. As these tools evolve, users and developers must work together to create a smarter, more reliable AI future.

Stay tuned,

BREEFX ✨

P.S. If you enjoyed this story and found it interesting, why not share it with a friend?

If you’re that smart friend, subscribe here!
