
Thursday, May 30, 2024

'OpenAI says its tools were used in foreign influence campaigns'

OpenAI said Thursday that it has seen several foreign influence campaigns tap the power of its AI models to help generate and translate content, but has yet to see novel attacks enabled through its tools.

Why it matters: Supercharging misinformation efforts has been seen as a key risk associated with generative AI, though it has been an open question just how the tools would be used and by whom.

Driving the news: OpenAI said in a new report that it has seen its tools used by several existing foreign influence operations, including efforts based in Russia, China, Iran and Israel.

  • For example, the Chinese network known as "Spamouflage" used OpenAI's tools to debug code, research media and generate posts in Chinese, English, Japanese and Korean.
  • The Russian "Doppelganger" effort, meanwhile, tapped OpenAI models to generate social media content in several languages as well as to translate articles, generate headlines and convert news articles into Facebook posts.
  • An Iranian operation known as the International Union of Virtual Media used OpenAI tools to generate and translate long-form articles, headlines and website tags.
  • An Israeli commercial company called STOIC ran multiple covert influence campaigns around the world, using OpenAI models to generate articles and comments that were then posted to Instagram, Facebook, X and other websites.
  • OpenAI said it also detected and disrupted a previously unknown Russian campaign dubbed "Bad Grammar" that operated on Telegram. The effort targeted Ukraine, Moldova, the Baltic States and the United States and used OpenAI models both to debug code for a Telegram bot and to create short, political comments in Russian and English.

What they're saying: "While all of these operations used AI to some degree, none of them used it exclusively," said Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team.

  • "Instead, AI generated material was one of many types of content they posted alongside more traditional formats like manually written texts or memes copied from across the internet," per Nimmo.

The big picture: OpenAI's report comes ahead of a wave of global elections, including the U.S. presidential election.

  • In all, more than a billion people around the world are headed to the polls just as generative AI chatbots continue to become more widely available and easier to use.

Between the lines: Nimmo said that while AI is helping these campaigns produce text faster and with fewer language errors, the toughest part of a foreign influence operation remains getting its content to spread into the mainstream.

  • All of the operations OpenAI identified were rated low in severity because their content showed no signs of spreading organically.

The intrigue: It's unclear, though, whether OpenAI is seeing all the ways its tools are being used to aid in such operations.

  • Bad actors can use generative AI to quickly spin up fake news sites, whether to generate the misinformation itself or the legitimate-looking news stories that serve as cover. One Russian fake news operation, run by an American, reportedly used OpenAI's tools this way.
  • Plus, attackers could be relying on others' generative AI, especially open-source tools that have fewer guardrails and whose use might be harder for outside groups to detect.
  • "What you have is what we've got so far," Nimmo said. "One of the reasons it felt important to put this report out was to say, here is what we have observed and to kind of fill in the blanks about what might be happening."

Yes, but: OpenAI stressed that AI is also giving new tools to defenders aiming to spot and disrupt coordinated attacks.

https://www.axios.com/2024/05/30/openai-misinformation-china-israel-russia-iran
