During the 2022 US midterm elections, a manipulated video of President Joe Biden circulated on Facebook. The original footage showed Biden placing an "I voted" sticker on his granddaughter's chest and kissing her on the cheek. The doctored version looped the footage to make it appear he was repeatedly touching the girl, with a caption that labeled him a "pedophile."
Meta left the video up. Today, the company's Oversight Board, an independent body that reviews the platform's content moderation decisions, announced that it will review that call, in an attempt to push Meta to address how it will handle manipulated media and election disinformation ahead of the 2024 US presidential election and more than 50 other votes to be held around the world next year.
"Elections are the underpinning of democracy, and it's vital that platforms are equipped to protect the integrity of that process," says Oversight Board spokesperson Dan Chaison. "Exploring how Meta can better address altered content, including videos intended to deceive the public ahead of elections, is even more important given advances in artificial intelligence."
Meta said in a blog post that it had determined the video did not violate Facebook's hate speech, harassment, or manipulated media policies. Under its manipulated media policy, Meta says it will remove a video if it "has been edited or synthesized…in ways that aren't apparent to an average person, and would likely mislead an average person to believe a subject of the video said words that they did not say." Meta noted that the Biden video did not use AI or machine learning to manipulate the footage.
Experts have been warning for months that the 2024 elections could be made more complicated and more dangerous thanks to generative AI, which allows for more realistic faked audio, video, and imagery. And though Meta has joined other tech companies in committing to trying to curb the harms of generative AI, the most common approaches, such as watermarking content, have proven only somewhat effective at best. In Slovakia last week, a fake audio recording circulated on Facebook in which one of the country's leading politicians appeared to discuss rigging the elections. The creators were able to exploit a loophole in Meta's manipulated media policies, which do not cover faked audio.
While the Biden video itself is not AI-generated or manipulated, the Oversight Board has solicited public comments on this case with an eye toward AI, and is using the case as a way to examine Meta's policies around manipulated videos more deeply.