Meta Platforms’ Oversight Board, an independent group funded by the social media company, is currently assessing how the company handles two AI-generated images of female celebrities circulating on Facebook and Instagram.
The board described the images but withheld the celebrities’ names to prevent further harm; it is studying the cases to evaluate Meta’s policies and enforcement against sexually explicit deepfakes.
Advances in AI technology have made it possible to generate fake images, audio, and video that are difficult to distinguish from authentic material. This has fueled a rise in sexualized deepfakes circulating online, which primarily target women and girls.
Earlier this year, Elon Musk’s social media platform X temporarily blocked searches for Taylor Swift after struggling to contain the spread of fake explicit images of her.
Leaders in the industry are calling for laws to address the spread of harmful deepfakes and to hold tech companies responsible for preventing their misuse.
The Oversight Board’s evaluation includes cases such as an AI-generated nude image resembling a public figure from India posted on Instagram by an account that exclusively shares such images of Indian women.
Another case involves an image posted in a Facebook group showcasing AI creations, showing a nude woman resembling an American public figure being groped by a man.
Meta initially removed the image of the American woman for violating its harassment policy but left the image of the Indian woman online until the board selected the case for review.