Google has apologized after an AI-generated news alert about the BAFTA racial slur incident included the N-word in a push notification sent directly to users’ phones.

The alert linked to an article in The Hollywood Reporter headlined “How the Tourette’s Fallout Unfolded at the BAFTA Film Awards.” Beneath it, Google’s automated prompt invited readers to “see more on,” followed by the slur spelled out in full.

Instagram user Danny Price flagged the notification, calling it “absolutely f****d” and adding, “What an interesting Black History Month this has turned out to be.” The screenshot circulated quickly. Google later told Deadline: “We’re very sorry for this mistake. We’ve removed the offensive notification and are working to prevent this from happening again.”

A mistake, they say.

The Incident That Started It

The alert stems from the BAFTA ceremony where John Davidson, a Tourette’s campaigner seated in the audience, involuntarily shouted the slur as Michael B. Jordan and Delroy Lindo walked on stage.

The BBC left the slur audible on iPlayer for 15 hours before removing it. At the same time, the broadcaster edited out a portion of Akinola Davies Jr.’s speech, including his “free Palestine” remark.

That contrast matters.

BAFTA has now announced what it calls a “comprehensive review.” BAFTA’s president, Prince William, attended the ceremony. He has not issued a personal statement.

The Consequences Are Already Playing Out

Once the BBC aired the slur, it left the realm of a live incident and entered the permanent archive of the internet. Clips are now circulating across YouTube, X, TikTok, and fringe commentary channels. The word is being replayed for clicks. Monetized. Used as bait. That is not theoretical.

Large online personalities are already packaging the moment into outrage thumbnails and culture-war content. Screenshots show creators framing the situation as “emotional damage” content, complete with exaggerated graphics and inflammatory captions. In comment threads, users have posted openly racist and ableist language while claiming to defend one group or attack another.

This is exactly the danger critics warned about.

The debate quickly shifts into racism versus ableism. Black audiences and people with Tourette’s end up fighting each other online. Meanwhile, the institutions at the center of the incident step back and issue carefully worded statements.

And now Google’s AI has amplified the slur again — pushing it onto phones at scale.

Amplification Is the Real Problem

When a broadcaster airs a racial slur, even in a complex context, the responsibility does not end with a later edit. The clip becomes searchable and will be weaponized against the Black community.

Now add algorithmic distribution.

Google’s alert did not contextualize the word. It repeated it. That means people who never watched the BAFTAs, never followed the controversy, suddenly received the slur as a headline fragment on their screen.

That carries real consequences in the United States, where racial tensions already run high and where past political rhetoric has included racist depictions of figures like Barack and Michelle Obama. Add in online culture-war channels that profit from outrage, and the clip becomes fuel. It becomes validation for those looking to deepen division. It becomes a weapon.

And that is why the lack of a clear, immediate apology on stage also matters. When a word with that history is broadcast globally without swift, visible acknowledgment, others fill the vacuum. Commentary channels step in.

Some online figures are already exploiting the incident to drive traffic and engagement, framing it as evidence of broader social decline or using it to mock both Black communities and disabled communities in the same breath.

To reiterate, this is not about punishing a medical condition. It is about understanding how institutional decisions and algorithmic systems multiply harm.

Google is not alone in AI missteps. Apple scrapped its AI news alerts last year after high-profile factual errors. But pushing a racial slur as part of a notification is not a minor glitch. It is a failure of safeguards around one of the most loaded words in the English language.
