AI-Generated Fake Biden Robocalls Lead to Indictment of Political Consultant
Steven Kramer, a political consultant, was indicted for using AI-generated robocalls to impersonate President Biden and suppress votes.
Steven Kramer, a Democratic political consultant from New Orleans, faces serious legal trouble after allegedly using artificial intelligence to create and distribute thousands of fake robocalls imitating President Joe Biden. The calls went out to New Hampshire residents in January, shortly before the state's Democratic primary, and were designed to suppress voter turnout.
Details of the Indictment
The New Hampshire Attorney General’s Office announced that Kramer has been indicted on multiple charges related to the scheme: 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate. The indictment alleges that Kramer, who was working for rival candidate Dean Phillips, used AI voice-cloning technology to generate the robocalls. The calls, mimicking President Biden’s voice, urged voters to save their votes for the November election rather than participate in the primary.
The message was deliberately misleading: voting in the primary would not have prevented anyone from voting in November, and the false suggestion that voters needed to “save” their ballots was a straightforward attempt to sway the outcome by depressing turnout.
The Federal Communications Commission (FCC) has also taken action against Kramer, proposing a $6 million fine for violations of caller ID rules stemming from the deceptive calls. Lingo Telecom, the phone company that transmitted the calls, faces penalties of its own: the FCC has proposed a $2 million fine against the carrier for improperly labeling the calls with the highest level of caller ID attestation (A-level under the STIR/SHAKEN framework), which made it harder for other providers to flag them as potentially spoofed.
Repercussions and Reactions
Attorney General John Formella emphasized the importance of this case in deterring future attempts to interfere with elections using artificial intelligence or other means. He expressed hope that the enforcement actions would serve as a strong deterrent to anyone considering similar tactics.
In his defense, Kramer claimed his intention was to highlight the dangers of AI in politics, describing the scheme as an act of civil disobedience meant to draw attention to how easily AI-generated content can mislead voters. He argued that his actions brought significant media and regulatory attention to the issue, though that defense has not spared him from the charges or the proposed fine.
The incident has amplified concerns about the misuse of AI-generated content in elections. The Biden campaign has acknowledged the threat and assembled a team to counter malicious deepfakes and other forms of digital manipulation that could mislead voters ahead of the November election.
More broadly, the growing sophistication of AI technology and its capacity to produce convincing deepfakes poses a serious challenge to election integrity. Media outlets reported in March on a rise in AI-generated deepfakes this election season, underscoring the need for voters to get better at identifying such content. In response to these concerns, twenty of the largest technology companies pledged in February to combat the deceptive use of AI in this year’s elections.