OpenAI: GenAI's election disinformation impact limited

CybersecurityHQ News

Welcome reader to your CybersecurityHQ report

Brought to you by:

Cypago enables strategic decision-making through a full Cyber GRC product suite, helping you avoid reputational damage, financial losses, and erosion of client trust.

OpenAI - Influence and cyber operations: an update

In October 2024, the cybersecurity landscape continues to be shaped by the growing use of artificial intelligence (AI) in both defensive and offensive cyber operations, as well as in influence campaigns. OpenAI's latest report describes how state-linked actors and criminal networks have attempted to exploit its AI models, outlining the key trends, challenges, and disruptions observed throughout the year.

AI in Elections

With over 2 billion voters expected to go to the polls in 50 countries this year, there is heightened concern about the potential misuse of AI in election-related influence operations. The report highlights how OpenAI and similar companies are continuously working to detect and disrupt these efforts. So far, AI-based election manipulation attempts have not gained significant traction or viral engagement. Operations identified in Rwanda, the United States, and Europe have largely failed to influence large audiences, suggesting that AI is not yet a game-changer in election interference. However, ongoing vigilance remains essential to safeguard democratic processes.

AI in Cyber Operations

Threat actors have been found using AI models to assist in various stages of cyber operations. For instance, “SweetSpecter,” a China-based adversary, attempted to use AI to conduct reconnaissance, develop malware, and evade detection while launching phishing attacks against OpenAI employees. Although the attacks were unsuccessful, the case highlights how AI can support offensive cyber operations, even if it does not yet provide capabilities far beyond what is already possible with traditional tools.

Another notable case involves “CyberAv3ngers,” an Iranian group affiliated with the Islamic Revolutionary Guard Corps (IRGC), using AI for reconnaissance and scripting in attacks targeting industrial control systems (ICS) and programmable logic controllers (PLCs) in critical infrastructure. Their efforts to exploit known vulnerabilities in water systems and energy grids illustrate the potential dangers of AI in the hands of sophisticated adversaries.

Covert Influence Operations

Several covert influence campaigns using AI-generated content have been disrupted, including operations originating from Russia, Iran, and other actors. One of the most significant cases involves a Russian network, “Stop News,” which targeted audiences in West Africa and the UK by generating AI-created articles, images, and comments. This operation, though prolific, failed to gain widespread engagement or influence, indicating the challenges threat actors face in achieving success even with AI-enhanced content creation.

Another operation, “A2Z,” focused on praising Azerbaijan and criticizing its political opponents in multiple languages across social media platforms. While these AI-generated comments were sophisticated and multilingual, the network similarly failed to gain significant traction, drawing low engagement across platforms.

Single-Platform Influence Campaigns

Smaller-scale influence efforts were also detected, including a network generating comments to criticize the Anti-Corruption Foundation in Russia and another set of accounts spamming gambling links via direct messages on X (formerly Twitter). These operations underscore the variety of ways threat actors are experimenting with AI to support their agendas, whether for political manipulation or financial gain.

The Role of AI in the Information Ecosystem

AI’s role in the broader information ecosystem is increasingly pivotal, particularly as threat actors leverage AI models in intermediate phases of their operations—such as creating personas, generating content, and refining their attack strategies. The report emphasizes that while AI can improve efficiency, it has not yet led to groundbreaking advances in the creation of malware or viral disinformation campaigns. Instead, it offers incremental improvements in adversarial tactics, techniques, and procedures (TTPs).

On the defensive side, AI companies are also making progress. New AI-powered tools have allowed investigators to compress complex analytical tasks from days to minutes, improving their ability to detect, analyze, and disrupt malicious activities. This capacity is becoming increasingly important as threat actors continue to evolve their use of AI in cyber and influence operations.

Future Outlook

Looking ahead, the report stresses the importance of continued investment in AI-powered defenses, collaboration across industry peers, and proactive disruption strategies. AI’s role in both cyber operations and influence campaigns is still evolving, and while it has not yet drastically altered the threat landscape, its potential remains significant. Companies like OpenAI are committed to staying ahead of these threats by continuously improving their detection, investigation, and disruption capabilities.

In conclusion, AI is a double-edged sword in the world of cybersecurity. While it empowers defenders with more advanced tools, it also provides adversaries with new methods to enhance their operations. The report serves as a timely reminder that staying ahead of these trends requires constant vigilance, collaboration, and innovation in both AI and cybersecurity practices.

Upgrade your subscription for exclusive access to member-only insights and services.

Stay Safe, Stay Secure.

The CybersecurityHQ Team
