OpenAI bans multiple accounts found to be misusing ChatGPT


Sam Altman and OpenAI
(Image credit: Shutterstock/PatrickAssale)

  • OpenAI has banned accounts using ChatGPT for malicious purposes
  • Misinformation and surveillance campaigns were uncovered
  • Threat actors are increasingly using AI for harm

OpenAI has confirmed it recently identified a set of accounts involved in malicious campaigns, and has banned the users responsible.

The banned accounts involved in the ‘Peer Review’ and ‘Sponsored Discontent’ campaigns likely originate from China, OpenAI said, and “appear to have used, or attempted to use, models built by OpenAI and another U.S. AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles”.

AI has facilitated a rise in disinformation, and is a useful tool for threat actors looking to disrupt elections and undermine democracy in unstable or politically divided nations – and state-sponsored campaigns have used the technology to their advantage.

Surveillance and disinformation

The ‘Peer Review’ campaign used ChatGPT to generate “detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services”, OpenAI confirmed.

As part of this surveillance campaign, the threat actors used the model to “edit and debug code and generate promotional materials” for suspected AI-powered social media listening tools – although OpenAI was unable to identify posts on social media following the campaign.

ChatGPT accounts participating in the ‘Sponsored Discontent’ campaign were used to generate comments in English and news articles in Spanish, consistent with ‘spamouflage’ behavior, primarily using anti-American rhetoric, probably aiming to spark discontent in Latin America, namely in Peru, Mexico, and Ecuador.

This isn’t the first time Chinese state-sponsored actors have been identified using ‘spamouflage’ tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low-quality and containing false information.

You might also like

  • Take a look at our picks for the best AI tools around
  • Check out our recommendations for the best malware removal software
  • Norton boosts AI scam protection tools for all users
Ellen Jennings-Trace
Staff Writer

Ellen has been writing for almost four years, with a focus on post-COVID policy whilst studying for BA Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.

