Meta’s Oversight Board has announced that it will review Meta’s handling of two cases involving explicit AI-generated images of female public figures that circulated on Facebook and Instagram. One of these cases featured a sexually explicit deepfake of an Indian public figure.
Established by Meta, the Oversight Board operates independently to evaluate Meta’s content review processes. The board selected these cases based on their significance, focusing on “highly emblematic cases” to assess whether Meta’s decisions align with its stated values and policies.
In a blog post, the Board explained that the selected cases were chosen due to their potential impact on a large number of users worldwide, their importance to public discourse, or their relevance in questioning Meta’s policies. Additionally, the Board emphasized the importance of respecting the rights of individuals depicted in such content and avoiding further harassment by refraining from naming, sharing private information, or speculating on their identities.
The first case involved a sexually explicit AI-generated image, created with deepfake technology to resemble a prominent Indian public figure, posted by an Instagram account. Although a user reported the content to Meta as pornography, the report was automatically closed without review within 48 hours. The user’s subsequent appeal to Meta was likewise rejected, and the content remained online. Only after the user appealed to the Oversight Board did Meta acknowledge its error in leaving the content up and remove the post for violating its Bullying and Harassment Community Standard.
The second case concerned a sexually explicit image posted on Facebook, also generated with AI to resemble an American public figure. This case was escalated to Meta’s policy and subject matter experts, who removed the image for violating the Bullying and Harassment policy, specifically its prohibition on “derogatory sexualized photoshop or drawings.” The image was also added to Meta’s Media Matching Service Bank, an automated enforcement system designed to identify and remove images already determined to violate Meta’s policies.
The Board selected these cases to assess Meta’s policies and enforcement practices for explicit AI-generated imagery: the first to evaluate how effectively Meta’s automated systems curb the spread of such content, and the second to examine whether Meta protects women consistently across the globe, a concern highlighted by Oversight Board co-chair Helle Thorning-Schmidt.
In analyzing the nature and severity of the harms that AI-generated explicit content causes women, especially public figures, the Board will evaluate Meta’s enforcement of its Bullying and Harassment policy and its use of Media Matching Service Banks. The Board has invited public comments from stakeholders until April 30. It has the authority to issue policy recommendations to Meta, to which Meta must respond within 60 days, though these recommendations are not legally binding.