Clegg said Meta has seen these safeguards “prevent misuse of our platforms during recent major elections in Nigeria, Thailand, Turkey and Argentina, as well as this year’s state and local elections in the United States.” He said each election presents its own challenges, but the company is confident that “our comprehensive approach puts us in a strong position to protect the integrity of next year’s elections on our platforms.”
What Meta is up to
Meta is clearly concerned about AI being used in political advertising. For this reason, starting in 2024, all political advertisers will be required to disclose, in certain cases, whether they used AI or other digital technologies to create or alter a political or social ad. This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, that has been digitally created or altered to portray a real person saying or doing something they did not say or do.
The requirement also applies if an ad depicts a realistic-looking person who does not exist or a realistic-looking event that did not occur, alters footage of a real event, or depicts a realistic-looking event that allegedly occurred but is not a genuine image, video or audio recording of that event, Clegg said in the blog post. All political advertisers must clearly indicate in the ad whether AI was used to generate any part of its content.