Experts say the Kremlin could incorporate artificial intelligence (AI) into influence schemes aimed at manipulating November’s presidential election.
The U.S. Department of Justice last week revealed indictments that are part of an ongoing investigation into alleged Russian government plots to influence American voters through a variety of disinformation campaigns.
U.S. Attorney General Merrick Garland announced a major crackdown on influence operations pushed through state-run media and other online platforms – part of a campaign called “Doppelganger.” He focused on employees of the Russian state-controlled media outlet RT, but other indictments released this week revealed the wider scope and complexity of Russia’s initiatives.
The U.S. also seized more than two dozen internet domains related to the operation and announced the establishment of an Election Threats Task Force, which includes FBI Director Christopher Wray and top Justice Department officials, according to CBS News.
“This is deadly serious, and we are going to treat it accordingly,” Garland said while announcing the indictment alongside Wray on Wednesday.
Those indictments described the alleged use of AI tools to create social media profiles “posing as U.S. (or other non-Russian) citizens” and to create the impression of “a legitimate news media outlet’s website.”
“Among the methods Doppelganger used to drive viewership to the cybersquatted and unique media domains was the deployment of ‘influencers’ worldwide, paid social media advertisements (in some cases created using artificial intelligence tools), and the creation of social media profiles posing as U.S. (or other non-Russian) citizens to post comments on social media platforms with links to the cybersquatted domains,” the indictment stated.
The U.S. Department of the Treasury expanded on these allegations in an announcement designating 10 individuals and two entities through its Office of Foreign Assets Control, a move accompanied by visa restrictions and a Rewards for Justice offer of up to $10 million for information relating to such operations.
The Treasury reported that Russian state-sponsored actors have used generative AI deep fakes and disinformation “to undermine confidence in the United States’ election process and institutions.”
The Treasury named the Russian entities Autonomous Non-Profit Organization (ANO) Dialog and ANO Dialog Regions as using “deep fake content to develop Russian disinformation campaigns,” including “fake online posts on popular social media accounts … that would be composed of counterfeit documents, among other material, in order to elicit an emotional response from audiences.”
In late 2023, ANO Dialog allegedly “identified U.S., U.K. and other figures as potential targets for deepfake projects.” The “War on Fakes” website served as a major outlet for disseminating this disinformation, and the campaign also used bot accounts that targeted voting locations in the 2024 U.S. election.
In an interview with PBS NewsHour, Bulgarian investigative journalist Christo Grozev revealed that complaints over the “global propaganda effort by Russia” – which the Kremlin was “losing to the West” in the early months of the invasion of Ukraine – prompted the decision to use AI and “all kind of new methods to make it indistinguishable from the regular flow of information.”
“They plan to do insertion of advertising, which is in fact hidden as news, and in this way bombard the target population with things that may be misconstrued as news, but are in fact advertising content,” Grozev explained.
“They plan to disguise that advertising content on a person-to-person level as if it is content from their favorite news sites,” he warned. “Now, we haven’t seen that in action, but it’s an intent, and they claim they have developed the technology to do that.”
“They’re very explicit that they’re not going to use Russia-related platforms or even separate platforms,” he added. “They’re going to infiltrate the platform that the target already uses. And that is what sounds scary.”