OpenAI released a report stating that its AI models have been used in attempts to influence elections, leading to the takedown of more than 20 operations engaged in such activity. The report describes a growing pattern of OpenAI's models being used to disrupt elections and spread political misinformation. Bad actors, often state-sponsored, use the models to generate content for fake social media personas and to reverse-engineer malware. OpenAI has disrupted campaigns linked to Iran, Venezuela, Israel, and Rwanda. Even so, these campaigns gained minimal traction, suggesting how difficult it is to sway public opinion with AI-powered misinformation. OpenAI emphasizes the need to collaborate with stakeholders to address the threat and build robust security defenses. The World Economic Forum likewise stresses the need for AI regulation and strategic partnerships to deploy ethical AI systems.



Other News from Today