Exposed: Meta sanctioned violence-inciting political advertisements in India

22nd May, 2024

Meta, the parent company of Facebook and Instagram, has been implicated in approving AI-altered political ads during India’s election, which propagated disinformation and incited religious violence, as reported exclusively by the Guardian.

The report revealed that Facebook approved ads featuring derogatory language against Muslims in India, including inflammatory statements such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.” These ads also contained Hindu supremacist rhetoric and false information about political figures.

One approved ad falsely accused an opposition leader of plotting to “erase Hindus from India,” displaying a Pakistan flag alongside this claim.

These ads were created and submitted through Meta's ad system by India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, to test the efficacy of Meta's filters for harmful political content during India's six-week election.

The report highlighted that the ads were based on real hate speech and disinformation prevalent in India, showcasing social media’s role in amplifying harmful narratives.

The ads were submitted during the voting period, which started in April and concluded on June 1. The election will determine if Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party (BJP) will secure a third term. Modi’s government has faced criticism for promoting a Hindu-first agenda, resulting in increased persecution of India’s Muslim minority.

The BJP has been accused of using anti-Muslim rhetoric to garner votes, with Modi himself referring to Muslims as “infiltrators” during a rally, although he later denied targeting Muslims.

The social media site X was also ordered to remove a BJP campaign video accused of demonizing Muslims.

Researchers submitted 22 ads in various Indian languages to Meta; 14 were approved, and a further three were approved after minor tweaks. The researchers deleted the ads before they could be published. Despite Meta's pledge to prevent AI-manipulated content during the election, its systems failed to flag any of them as AI-generated.

Five ads were rejected for violating Meta’s hate speech and violence policies, but the 14 approved ads also breached Meta’s policies on hate speech, bullying, misinformation, and incitement. Maen Hammad from Ekō accused Meta of profiting from hate speech, stating that supremacists and autocrats exploit Meta’s platform to spread violence and conspiracy theories without consequence.

Meta also failed to recognize that the 14 approved ads were political in nature, even though they targeted political parties and candidates opposing the BJP. Under Meta's policies, political ads must go through a specific authorization process, yet only three submissions were rejected on those grounds.

This lapse meant the ads could have breached India's election rules, which ban political advertising during the 48 hours before polling; the ads were submitted while voting phases were underway.

In response, a Meta spokesperson emphasized the requirement for ad authorization and adherence to laws, and stated that violative content, including AI-generated ads, is removed. Meta also mandates disclosure of AI use in political ads in certain cases.

Previous reports by ICWI and Ekō indicated that “shadow advertisers” aligned with political parties, especially the BJP, spent significant amounts on unauthorized political ads during the election, often promoting Islamophobic and Hindu supremacist narratives. Meta denied most of these ads violated their policies.

Meta has faced accusations of failing to curb Islamophobic hate speech and violence-inciting posts, which have led to real-life riots and lynchings in India. Nick Clegg, Meta’s president of global affairs, acknowledged India’s election as a significant test and claimed extensive preparations had been made.

Despite this, Hammad stated the report’s findings demonstrate the inadequacies of Meta’s mechanisms in tackling the surge of hate speech and disinformation during critical elections.