Tech giants sign voluntary pledge to fight election-related deepfakes

Deepfake or Deep Fake Concept as a symbol for misrepresenting or identity theft or faking identification and misrepresentation in a 3D illustration style.

Image Credits: wildpixel / Getty Images

Tech companies are pledging to fight election-related deepfakes as policymakers amp up pressure.

Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe and IBM signed an accord signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs and Stability AI and social media platforms X (formerly Twitter), TikTok and Snap, joined in signing the accord, along with chipmaker Arm and security firms McAfee and Trend Micro.

The undersigned said they’ll use methods to detect and label misleading political deepfakes when they’re created and distributed on their platforms, sharing best practices with one another and providing “swift and proportionate responses” when deepfakes start to spread. The companies added that they’ll pay special attention to context in responding to deepfakes, aiming to “[safeguard] educational, documentary, artistic, satirical and political expression” while maintaining transparency with users about their policies on deceptive election content.

The accord is effectively toothless; its measures are voluntary, and some critics may say it amounts to little more than virtue signaling. But the fanfare reflects the tech sector's wariness of regulatory scrutiny around elections, in a year when 49% of the world's population will head to the polls in national elections.

“There’s no way the tech sector can protect elections by itself from this new type of electoral abuse,” Brad Smith, vice chair and president of Microsoft, said in a press release. “As we look to the future, it seems to those of us who work at Microsoft that we’ll also need new forms of multistakeholder action … It’s abundantly clear that the protection of elections [will require] that we all work together.”

No federal law in the U.S. bans deepfakes, election-related or otherwise. But 10 states around the country have enacted statutes criminalizing them, with Minnesota’s being the first to target deepfakes used in political campaigning.

Elsewhere, federal agencies have taken what enforcement action they can to combat the spread of deepfakes.

This week, the FTC announced that it’s seeking to modify an existing rule that bans the impersonation of businesses or government agencies to cover all consumers, including politicians. And the FCC moved to make AI-voiced robocalls illegal by reinterpreting a rule that prohibits artificial and prerecorded voice message spam.

In the European Union, the bloc's AI Act would require all AI-generated content to be clearly labeled as such. The EU is also using its Digital Services Act to force the tech industry to curb deepfakes in various forms.

Deepfakes continue to proliferate, meanwhile. According to data from deepfake detection firm Clarity, the number of deepfakes created has increased 900% year over year.

Last month, AI robocalls mimicking U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election. And in November, just days before Slovakia’s elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.

In a recent poll from YouGov, 85% of Americans said they were very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.

Hundreds of AI luminaries sign letter calling for anti-deepfake legislation

Abstract digital human face.

Image Credits: Hiretual

Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While this is unlikely to spur real legislation (despite the House’s new task force), it does act as a bellwether for how experts lean on this controversial issue.

The letter, signed by more than 500 people in and adjacent to the AI field at time of publishing, declares that “deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes.”

The signatories call for the full criminalization of deepfake child sexual abuse materials (CSAM), regardless of whether the figures depicted are real or fictional; for criminal penalties in any case where someone creates or spreads harmful deepfakes; and for developers to prevent harmful deepfakes from being made using their products in the first place, with penalties if their preventative measures prove inadequate.

Among the more prominent signatories of the letter are:

Jaron Lanier, Frances Haugen, Stuart Russell, Andrew Yang, Marietje Schaake, Steven Pinker, Gary Marcus, Oren Etzioni, Genevieve Smith, Yoshua Bengio, Dan Hendrycks and Tim Wu.

Also present are hundreds of academics from across the globe and many disciplines. In case you're curious, one person from OpenAI signed, a couple from Google DeepMind, and none at press time from Anthropic, Amazon, Apple or Microsoft (except Lanier, whose position there is non-standard). Interestingly, the signatories are sorted in the letter by "Notability."

This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU's willingness to deliberate and follow through that spurred these researchers, creators and executives to speak out.

EU proposes criminalizing AI-generated child sexual abuse and deepfakes

Or perhaps it is the slow march of KOSA (Kids Online Safety Act) toward acceptance — and its lack of protections for this type of abuse.

Or perhaps it is the threat of (as we have already seen) AI-generated scam calls that could sway the election or bilk naïve folks out of their money.

Or perhaps it is yesterday’s task force being announced with no particular agenda other than maybe writing a report about what some AI-based threats might be and how they might be legislatively restricted.

As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying, “Maybe we should, you know, do something?!”

Whether anyone will take notice of this letter is anyone's guess. No one really paid attention to the infamous letter calling for everyone to "pause" AI development, though this one is a bit more practical. If legislators decide to take on the issue (an unlikely event, given that it's an election year with a sharply divided Congress), they will have this list to draw from when taking the temperature of AI's worldwide academic and development community.

AI-generated Biden calls came through shady telecom and Texan front ‘Life Corporation’

Hundreds of creators sign letter slamming Meta's limit on political content

Instagram logo reflected

Image Credits: LIONEL BONAVENTURE/AFP / Getty Images

If you haven't been seeing much political content on Instagram lately, there's a reason for that. In March, Instagram and Threads instituted a new default setting that limits the political content you see from people you're not following.

Hundreds of creators, convened by GLAAD and Accountable Tech, have signed an open letter demanding that Instagram make the political content limit an opt-in feature rather than enabling it by default.

“With many of us providing authoritative and factual content on Instagram that helps people understand current events, civic engagement, and electoral participation, Instagram is thereby limiting our ability to reach people online to help foster more inclusive and participatory democracy and society during a critical inflection point for our country,” the letter reads.

The letter’s signatories include comedian Alok Vaid-Menon (1.3 million followers), Glee actor Kevin McHale (1.1 million), news account So Informed (3.1 million), activist Carlos Eduardo Espina (664,000), Under the Desk News (397,000) and other meme accounts, political organizers and entertainers.

Instagram’s definition of political content leaves a lot of room for interpretation, which stokes further concern among these creators. It describes political content as anything “potentially related to things like laws, elections, or social topics.”

The letter points out that this “endangers the reach of marginalized folks speaking to their own lived experience on Meta’s platforms” and limits the conversation around topics like climate change, gun control and reproductive rights.

For political creators, these limits can also impact their livelihood, since it will be harder to reach new audiences. While Instagram isn’t particularly lucrative (there’s no regular revenue share with creators), building a following on the platform can lead to other financial opportunities, like brand sponsorships.

As election season looms in the U.S., Instagram’s decision to distance itself from politics could seem like a way to do damage control — Meta has a less-than-stellar track record when it comes to its role in elections. But Meta could be creating even more problems by siloing its users into political echo chambers, where they’re never exposed to any information from people outside their existing circles.

“Removing political recommendations as a default setting, and consequently stopping people from seeing suggested political content poses a serious threat to political engagement, education, and activism,” the letter says.

How to turn off Instagram’s political content filter

TechCrunch Minute: You’re likely seeing less news and politics on Instagram. Here’s why