Google’s AI Believes in “John Backflip” and Other Satirical Legends: A Gymnast’s Joke Exposes AI Flaws
In a world where artificial intelligence increasingly shapes how we access information, a satirical TikTok video by American gymnast Ian Gunther has exposed glaring flaws in Google’s AI-generated search summaries. The incident, which involves a fictional medieval European gymnast named “John Backflip,” has become a cautionary tale about the limitations of AI in discerning fact from fiction.
The Curious Case of John Backflip
Six months ago, Google’s Gemini AI model made headlines for suggesting glue as a pizza ingredient. Now, it’s back in the spotlight for another blunder. Until recently, if you searched “who invented the backflip,” Google’s AI would confidently inform you about John Backflip, a supposed medieval European gymnast said to have invented the move in 1316. A follow-up search for “who is John Backflip” would yield even more details, crediting him as the pioneer of the gymnastic skill.
The catch? John Backflip doesn’t exist. The source of this information is a TikTok video created by Ian Gunther, a former Team USA gymnast and 2023 NCAA champion. Gunther, known for his satirical takes on gymnastics, made up the story entirely as a joke. “I like making satirical videos on gymnastics, and you can get away with goofing and making some jokes about the sport,” Gunther told Forbes.
From TikTok to Google’s AI
The video, which Gunther posted after a training session at his Bay Area gym in May 2023, was meant to be a lighthearted parody. “Most skills in gymnastics are named after the first person to compete it internationally,” Gunther explained. “I went home and made a goofy backstory for someone inventing the backflip.”
In the video, Gunther also humorously credits other fictional gymnastics pioneers like “Henry MuscleUp,” “Richard Presshandstand,” and “Alfonso El Grip.” While the video gained modest traction on platforms like TikTok and YouTube, Gunther had mostly forgotten about it until he received a text in July. “It was a screenshot of the AI, and I was like, ‘Oh no, what have I done? Am I the spreader of disinformation?’” he recalled.
AI’s Struggle with Satire
Screenshots of Google’s AI summary about John Backflip quickly went viral on Reddit and other social media platforms, even prompting a corrective tweet from dictionary publisher Merriam-Webster. While Google has since updated its AI Overview to note that the John Backflip story is an internet meme, it still parrots Gunther’s video verbatim in some searches.
When Forbes put the same questions to rival AI tools from OpenAI, Anthropic, and Perplexity, each either recognized the joke or acknowledged that there is no reliable record of who first performed a backflip. Gunther himself remains baffled by how his video ended up as a source for Google’s AI. “They probably shouldn’t use my videos as any sort of source,” he said.
Why Did This Happen?
The John Backflip incident highlights several vulnerabilities in Google’s AI Overview design. According to a May blog post by Google VP Liz Reid, such errors often stem from “data voids,” where there is little reliable content on a topic, or where the only available content is satirical. “In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies,” Reid explained.
To mitigate these issues, Google has introduced safeguards around AI Overviews. These include spotting nonsensical queries, reducing reliance on humorous or user-generated content, and adding a warning label that reads, “Generative AI is experimental.”
Google’s Response
Google spokesperson Olivia O’Brien defended the AI Overview feature, stating, “The vast majority of AI Overviews are high quality, with their accuracy rate on par with other search features like Featured Snippets.” However, incidents like the John Backflip story have raised questions about the reliability of AI-generated summaries, especially when they draw from user-generated or satirical content.
The Bigger Picture
The John Backflip saga is just one of several high-profile missteps by Google’s AI. Other notable errors include claims that eating rocks has nutritional benefits and that Barack Obama was the United States’ first Muslim president. These mistakes have not only made Google the subject of online jokes but have also fueled broader doubts about the reliability of large language models.
Lessons Learned
For Gunther, the experience has been both amusing and surreal. Currently competing at a gymnastics meet in Prague, Czechia, he joked about the possibility of his satirical creation becoming a part of history. “I hope in schools one day they will be teaching John Backflip,” he quipped.
While the incident may seem like a harmless joke, it underscores the importance of critical thinking and fact-checking in the age of AI. As technology continues to evolve, so too must our ability to discern fact from fiction, whether it’s about glue on pizza or a medieval gymnast named John Backflip.
Key Takeaways
- Google’s AI Overview mistakenly credited a fictional character, John Backflip, as the inventor of the backflip, based on a satirical TikTok video.
- The incident highlights vulnerabilities in AI systems, particularly when dealing with “data voids” or satirical content.
- Google has introduced safeguards to improve the accuracy of its AI-generated summaries, but challenges remain.
- The story serves as a reminder of the importance of critical thinking and fact-checking in the digital age.
As AI continues to shape how we access and interpret information, incidents like this serve as a humorous yet important reminder of its limitations. Whether you’re searching for the inventor of the backflip or the nutritional value of rocks, it’s always a good idea to double-check your sources.
Originally Written by: Matt Novak