A misleading seven-second clip of President Biden could reshape Facebook’s misinformation policies ahead of the 2024 election, but the platform — and the American electorate — are running out of time.
The Oversight Board, the external advisory group that Meta created to review its moderation decisions on Facebook and Instagram, issued a decision on Monday concerning a doctored video of Biden that made the rounds on social media last year.
The original video showed the president accompanying his granddaughter Natalie Biden to cast her ballot during early voting in the 2022 midterm elections. In the video, President Biden pins an “I Voted” sticker on his granddaughter and kisses her on the cheek.
A short, edited version of the video removes visual evidence of the sticker, sets the clip to a song with sexual lyrics and loops it to depict Biden inappropriately touching the young woman. The seven-second clip was uploaded to Facebook in May 2023 with a caption describing Biden as a “sick pedophile.”
Meta’s Oversight Board announced that it would take on the case last October after a Facebook user reported the video and ultimately escalated the case when the platform declined to remove it.
In its decision, issued Monday, the Oversight Board states that Meta’s choice to leave the video online was consistent with the platform’s rules, but calls the relevant policy “incoherent.”
“As it stands, the policy makes little sense,” Oversight Board Co-Chair Michael McConnell said. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”
McConnell also pointed to the policy’s failure to address manipulated audio, calling it “one of the most potent forms of electoral disinformation.”
The Oversight Board’s decision argues that instead of focusing on how a particular piece of content was created, Meta’s rules should be guided by the harms they are designed to prevent. Any changes should be implemented “urgently” in light of global elections, according to the decision.
Beyond expanding its manipulated media policy, the Oversight Board suggested that Meta add labels to altered videos flagging them as such instead of relying on fact-checkers, a process the group criticizes as “asymmetric depending on language and market.”
By labeling more content rather than taking it down, the Oversight Board believes that Meta can maximize freedom of expression, mitigate potential harm and provide more information for users.
In a statement to TechCrunch, a Meta spokesperson confirmed that the company is “reviewing the Oversight Board’s guidance” and will issue a public response within 60 days.
The altered video continues to circulate on X, formerly Twitter. Last month, a verified X account with 267,000 followers shared the clip with the caption “The media just pretend this isn’t happening.” The video has more than 611,000 views.
The Biden video isn’t the first time that the Oversight Board has sent Meta back to the drawing board on its policies. When the group weighed in on Facebook’s decision to ban former President Trump, it decried the “vague, standardless” nature of the indefinite punishment while agreeing with the choice to suspend his account. Across cases, the Oversight Board has generally urged Meta to provide more detail and transparency in its policies.
As the Oversight Board noted when it accepted the Biden “cheap fake” case, Meta stood by its decision to leave the altered video online because its policy on manipulated media — misleadingly altered photos and videos — only applies when AI is used or when the subject of a video is portrayed saying something they didn’t say.
The manipulated media policy, designed with deepfakes in mind, applies only to “videos that have been edited or synthesized… in ways that are not apparent to an average person, and would likely mislead an average person to believe.”
Critics of Meta’s content moderation process have dismissed the company’s self-designed review board as too little, far too late.
Meta may have a standardized content moderation review system in place now, but misinformation and other dangerous content still move more quickly than that appeals process — and much more quickly than the world could have imagined just two general election cycles ago.
Researchers and watchdog groups are bracing for an onslaught of misleading claims and AI-generated fakes as the 2024 presidential race ramps up. But even as new technologies enable dangerous falsehoods to scale, social media companies have quietly slashed their investments in trust and safety and turned away from what once appeared to be a concerted effort to stamp out misinformation.
“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said.