
Meta AI and WhatsApp Under Fire After Users Report “Offensive Jokes” About Islamic Figures Including Prophet Muhammad (PBUH)

KARACHI — A controversy over Meta AI has erupted across Pakistan after multiple users reported that Meta’s artificial intelligence system is generating offensive jokes referencing sacred Islamic figures, including Prophet Muhammad (PBUH) and Hazrat Bilal (RA). The backlash intensified after several office workers experienced the same pattern, raising questions about Meta’s safeguards, content moderation, and the possibility of deeper algorithmic flaws.

The controversy surfaced when several users, including corporate professionals using Meta’s improved search function, noticed a disturbing pattern: when they selected the suggested prompt “I want to hear a joke,” the AI repeatedly responded with jokes referencing Islamic figures.

What initially appeared to be a random error quickly raised alarms when multiple colleagues at the same workplace reported identical experiences — the AI using sacred religious references in humorous or casual contexts, something that goes against Meta’s own global safety policies.

“I simply clicked the option to hear a joke, nothing more,” said one user, describing the shock he felt when the AI’s output referenced Prophet Muhammad (PBUH). “To check if it was aware of what it was saying, I asked whether the joke was related to any religion. Instead of clarifying, it gave me another joke — and this time it was about Hazrat Bilal (RA). That’s when I realized it wasn’t just a slip.”

Others in his office experienced the same, suggesting the issue might not be isolated. Screenshots shared among colleagues showed similarly phrased jokes, all tied to Islamic figures. “The pattern was too consistent to ignore,” another user said, adding that the team felt deeply hurt and confused about why an AI from one of the largest global tech companies would output such content.

A Sensitive Line Crossed

Globally, major AI platforms — including Meta, Google, and OpenAI — enforce strict content filters disallowing disrespectful, comedic, or casual references to religious personalities. Meta itself claims its AI is trained to avoid generating harmful, biased, or culturally insensitive content.

Yet, according to several user accounts, the AI repeatedly defaulted to Islamic references when asked to tell a joke. While no visible pattern indicated the AI targeted other religions in similar prompts, the repeated Islam-specific jokes triggered questions about the system’s training data and content moderation settings.

Experts say such behavior may stem from several factors:

Training Data Contamination:
Large language models learn from massive datasets scraped from the internet. If unfiltered, religious references — including jokes created by online users — may slip into the model’s responses.

Prompt Misinterpretation:
AI systems often incorrectly match user prompts to internal content categories, producing unrelated or inappropriate responses.

Regional Model Behavior:
Sometimes AI models respond differently by region due to localized training data, popularity of topics, or cultural patterns in user-generated content.

But intentional targeting? According to AI analysts, that remains unlikely. “AI does not have intent. It mirrors patterns,” explained one Karachi-based AI researcher. “However, the responsibility lies with the company to ensure those patterns do not produce offensive outcomes. If multiple users are getting the same kind of joke, that means the filtering has failed.”
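The failure the researcher describes — outputs on a sensitive topic repeatedly slipping past a safety layer — can be illustrated with a deliberately simplified sketch. This is not Meta’s actual moderation system: real platforms use trained classifiers rather than keyword lists, and every name and term below is a placeholder invented for this example.

```python
# Illustrative sketch only: a minimal keyword-based output filter of the
# kind platforms layer on top of model generation. All names and terms
# here are hypothetical placeholders, not Meta's real implementation.

BLOCKED_TERMS = ["prophet", "sacred figure"]  # placeholder sensitive terms

def passes_safety_filter(response: str) -> bool:
    """Return False if the model output mentions a blocked term."""
    lowered = response.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def safe_reply(model_output: str,
               fallback: str = "Sorry, I can't joke about that.") -> str:
    # If the filter trips, suppress the generation and return a refusal.
    return model_output if passes_safety_filter(model_output) else fallback
```

If a filter of this kind is missing a term, mis-localized, or simply not applied to a given prompt path, the same category of offensive output will recur for every user who triggers it — which matches the consistent pattern the users described.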

Public Reaction: Shock, Anger, and Questions for Meta

On Pakistani social media, the issue is beginning to gain traction, with users demanding Meta explain how such sensitive content bypassed safety layers. Some users argue that if the same joke were generated about figures from other religions, it would have created global uproar.

“This is a direct insult to our faith,” wrote one user on X (formerly Twitter). “Meta needs to answer how their AI is allowed to joke about Prophet Muhammad (PBUH).”

Others called it a “dangerous oversight” and “deeply irresponsible,” warning that AI-generated religious insensitivity could spark unrest, especially in countries where faith is central to identity.

Glitch or Gross Negligence?

Meta has not yet issued an official statement regarding the incident. Technology experts believe the situation may stem from an oversight in content moderation or a gap in region-specific testing. “It may be a glitch, but if a glitch can disrespect sacred personalities, the system is broken,” commented a digital safety expert.

Several posts across Facebook, X, and WhatsApp groups called upon authorities to launch a formal inquiry into Meta’s content moderation practices, its AI training filters, and its compliance with local cultural and religious sensitivities. Some users urged the Pakistan Telecommunication Authority (PTA) to consider issuing a notice to Meta, warning that failure to prevent such incidents could result in penalties, restrictions, or further regulatory action.

Focus Pakistan

focuspakistanofficial@gmail.com
