In recent weeks, Los Angeles has become not only the center of heated political protests but also a digital battlefield where misinformation spreads at breakneck speed. Thousands of residents have taken to the streets to protest the Trump administration’s immigration policies and the growing number of ICE raids. But the story hasn’t played out solely on the ground; it has exploded across social media, where AI chatbots like Grok and ChatGPT have inadvertently stoked confusion and controversy.

On the WIRED podcast “Uncanny Valley,” Zoë Schiffer (WIRED’s director of business and industry) and senior politics editor Leah Feiger discuss the rapid and dangerous rise of AI-driven misinformation during the protests, tracing how these automated tools twist narratives and why it’s more urgent than ever to pay attention.

The LA Protests: From Streets to Social Platforms

The first waves of protest in LA were smaller and more localized than viral posts suggested. As ICE activity ramped up, isolated demonstrations were exaggerated online until social media portrayed Los Angeles as descending into outright chaos. Tensions reached a new high when President Trump dispatched the National Guard, igniting a heated debate over federal power versus states’ rights. Arrests surged, and disturbing images, some real and others of doubtful authenticity, circulated widely, fueling anxiety and outrage.

But the real-world turmoil was mirrored, and even amplified, online. As people scrambled to verify what they were seeing and hearing on social media, many turned to AI chatbots for instant answers, sometimes with unintended consequences.

Chatbots Step In and Confidently Mislead

The rise in public skepticism, with people wanting to check whether an image or video is real, is a sign of improving media literacy, but AI chatbots often fall short of providing reliable answers. Leah Feiger pointed out that bots like Grok and ChatGPT are not up to the challenge of fact-checking real-time, rapidly evolving events. As old protest footage, manipulated images, and even AI-generated videos spread, these chatbots frequently deliver confident but incorrect explanations.

For example, a widely circulated photo showed National Guard troops sleeping in a California facility, shared by Governor Newsom with pointed criticism about their living conditions. As doubts about the image’s authenticity swirled online, both Grok and ChatGPT mistakenly insisted the photo was from Afghanistan, not LA. This error was quickly exploited by conspiracy theorists, causing even more confusion among the public.

AI chatbot “hallucinations,” in which bots invent plausible but false information, can actually worsen the disinformation problem. Unlike traditional search engines, which at least point to their sources, chatbots deliver wrong answers with absolute certainty, making it even harder for average users to know what’s true.
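To make that failure mode concrete, here is a minimal sketch of the naive “ask the chatbot” fact-check workflow. It assumes the OpenAI Python SDK and an API key in the environment; the prompt and model name are illustrative, not drawn from the original story. The point is that the response is plain prose with no verifiable citations or confidence signal, so nothing in it distinguishes a grounded answer from a confident hallucination.

```python
# Minimal sketch of a naive "ask the chatbot" fact-check.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the prompt and model name
# are illustrative assumptions, not from the WIRED story.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A photo circulating online shows National Guard troops sleeping "
    "on a floor. Was it taken in Los Angeles this month, or somewhere "
    "else? Cite your sources."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# The API returns fluent prose, not verifiable citations. Even when
# asked for sources, the model may fabricate them, and the response
# object carries no field that separates a grounded answer from a
# confident hallucination.
print(response.choices[0].message.content)
```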

The Next Step: AI-Generated Video and Media

It’s not only AI’s text-based errors causing trouble. Video platforms like TikTok are now filled with AI-created clips, such as the notorious fake “National Guard soldier Bob,” that spread inflammatory narratives and rack up millions of views before being debunked. Even after such content is fact-checked and taken down, a lingering suspicion persists: many users are now convinced that genuine removals are actually government or media cover-ups, intensifying cynicism and distrust.

The core problem is that AI-powered information ecosystems bypass traditional filters for accuracy. As platforms reduce the size of their human moderation and fact-checking teams, and as social media networks like X (formerly Twitter) reward sensational content, it’s easy to see why truth so often loses out.

Why Better Media Literacy Still Isn’t Enough

Feiger and Schiffer note that even good-faith users seeking the truth can be outpaced by fast-evolving AI deception. If the misinformation vortex during the George Floyd protests in 2020 was bad, today’s environment is even worse, with algorithmic “gatekeepers” now mediating most fact-checks.

At the same time, platforms like X have slashed their moderation workforces and loosened standards, leading to a flood of false and shocking viral posts. Content creators chasing platform bonuses are incentivized to produce rage-bait and divisive posts, even as ordinary users struggle to distinguish fact from noise.

The upshot: as chatbots become more entwined with social media, fact-checking, and search tools, their mistakes can fuel a feedback loop of misinformation, with real-world consequences for democratic debate, protest movements, and public trust.

The Road Ahead

As tech companies double down on AI, the LA protests are a warning: without robust safeguards, our “truth tools” may turn into engines of mass confusion. The best defense lies in relentless skepticism from platforms and creators alike, and a renewed commitment to accuracy over engagement.

Source:
https://www.wired.com/story/uncanny-valley-podcast-the-chatbot-disinfo-inflaming-the-la-protests/
