Fable’s AI-generated end-of-year reading summaries veered into bigotry

Fable’s AI Misstep: A Lesson in Human Oversight

One might assume that a fundamental lesson of any business education is to avoid offending your customer base, a principle about as basic as “profits should go up” and “regulations are bad.” Yet Fable, a social app designed for book enthusiasts and binge-watchers, recently stumbled over this tenet in spectacular fashion: its attempt to emulate Spotify’s popular year-end summaries with AI-generated reading recaps turned into a public relations debacle.

Fable’s error stemmed from an overreliance on artificial intelligence. The app used OpenAI’s software to generate playful summaries of users’ reading habits, but instead of light-hearted, engaging content, the AI produced offensive and inappropriate remarks. One user received a summary stating, “Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?” Another was told their reading choices “make me wonder if you’re ever in the mood for a straight, cis white man’s perspective.”

The AI’s insensitivity was not limited to race. A user who read books about people with disabilities was told their choices “could earn an eye-roll from a sloth,” and other inappropriate remarks touching on disability and sexual orientation were reported as well. In response to the backlash, Fable quickly disabled the feature and promised an investigation, as reported by the New York Times; the company later committed to removing all AI-driven features from the app.

This incident serves as a stark reminder of the importance of human oversight in content creation. Despite the promises of AI technology, it is clear that these systems are not yet capable of replacing human writers and editors. As my colleague Calvin eloquently stated on a recent episode of the Lit Hub podcast, “A lot of the things being offered are AI glosses on extant things, because something that AI simply cannot do is stuff that even the worst writer in your worst creative writing workshop can do, like remember the name of the main character from paragraph to paragraph.”

Large language models (LLMs) have repeatedly demonstrated their limitations, particularly in generating content that is free from bias and bigotry. Wired’s coverage of the Fable controversy highlighted previous instances where AI tools have produced biased outputs. For example, OpenAI’s Dall-E generated images of nonwhite people when prompted with “prisoners” and white people when asked for “CEOs.” AI search engines have also been known to reshare racist content about the supposed genetic superiority of white people, and facial recognition technology has struggled to accurately identify Black individuals.

Despite these well-documented issues, companies like Fable continue to gamble with AI technology. The question remains: why take such risks? If AI is truly as advanced as its proponents claim, it should be thoroughly tested and proven before being deployed to the public. We do not have to accept these shortcomings as inevitable. Companies should exercise caution and prioritize human oversight to ensure their products are both effective and respectful.
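What might that oversight look like in practice? Below is a minimal sketch, in Python, of one common pattern: generate a summary, screen it automatically, and hold anything doubtful for a person to review rather than publishing it straight to users. Everything here is hypothetical (the prompt, the model choice, the review step); Fable has not published details of its pipeline, and this is not a reconstruction of it.

```python
# A hypothetical moderation-gated pipeline: draft with a model, screen the
# draft, and let a human make the final call on anything flagged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_summary(reading_list: list[str]) -> str:
    """Ask the model for a short, friendly recap of a user's reading year."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; Fable's actual choice is unknown
        messages=[
            {
                "role": "system",
                "content": (
                    "Write a short, friendly recap of this reader's year. "
                    "Never comment on race, disability, gender, or sexuality."
                ),
            },
            {"role": "user", "content": "Books read: " + ", ".join(reading_list)},
        ],
    )
    return response.choices[0].message.content


def needs_human_review(text: str) -> bool:
    """Screen the draft with OpenAI's moderation endpoint before it ships."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged


summary = draft_summary(["Beloved", "Sula", "The Bluest Eye"])
if needs_human_review(summary):
    print("Held for human review:", summary)  # a person decides, not the model
else:
    print("Published:", summary)
```

Even this is not a cure-all: an automated filter would likely have passed the condescending remarks Fable’s users actually received, which were snide rather than overtly toxic. That gap is exactly why the final judgment belongs with a human, and why a feature like this should be tested on real reading lists before launch.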

  • AI-generated content can lead to unintended bias and offensive remarks.
  • Human oversight is crucial in content creation to avoid such pitfalls.
  • Companies should thoroughly test AI technology before public deployment.

In conclusion, Fable’s experience underscores the need for a balanced approach to technology integration. While AI offers exciting possibilities, it is not yet a substitute for human creativity and judgment. Businesses must recognize the limitations of AI and ensure that their products reflect the values and expectations of their customers.

Originally Written by: James Folta
