Yu Xian, founder of blockchain security firm SlowMist, has raised concerns about an emerging attack vector known as AI code poisoning. In this attack, scammers seed the training data of AI models with malicious code so that the models later reproduce it for users who rely on them for programming and other technical tasks.

The issue gained attention after a user reported losing $2,500 in digital assets while using OpenAI's ChatGPT. The chatbot recommended a fraudulent API website, and code built around that API led to the theft of the user's private keys. Further investigation revealed that the wallet address tied to the fraudulent service consistently receives stolen tokens, suggesting it belongs to a fraudster. While there is no indication that OpenAI knowingly incorporated the malicious data, the incident demonstrates how scammers can contaminate AI training data to steer model outputs toward fraud.

Xian warns crypto users to treat output from large language models like ChatGPT with caution, and emphasizes the need for stronger defenses to prevent further financial losses and maintain trust in AI-driven tools.
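The failure mode in this incident is application code that transmits a private key to an attacker-controlled endpoint. As a rough illustration only, not a description of the actual exploit, the following Python sketch shows the kind of guard a script could apply before any request leaves the machine. The allowlist contents and the `check_request` helper are assumptions for the example, not part of any real library:

```python
# Illustrative sketch, assuming a Python trading script. The allowlist and
# helper below are hypothetical examples, not an actual SlowMist or OpenAI API.
from urllib.parse import urlparse

# Hypothetical allowlist of API hosts the developer has verified by hand.
TRUSTED_HOSTS = {"api.mainnet-beta.solana.com"}

# Payload keys that suggest key material is about to be transmitted.
SECRET_MARKERS = ("private_key", "secret", "mnemonic", "seed")

def check_request(url: str, payload: dict) -> None:
    """Refuse to send anything resembling key material, and refuse any
    endpoint that is not on the manually verified allowlist."""
    host = urlparse(url).hostname or ""
    if host not in TRUSTED_HOSTS:
        raise ValueError(f"Unverified API host: {host!r}")
    leaked = [k for k in payload if any(m in k.lower() for m in SECRET_MARKERS)]
    if leaked:
        raise ValueError(f"Refusing to transmit key material: {leaked}")

# A look-alike domain suggested by a poisoned model would be rejected here:
check_request("https://api.mainnet-beta.solana.com", {"tx": "..."})      # passes
# check_request("https://solana-api.example", {"private_key": "..."})    # raises
```

The specific check matters less than the habit it encodes: any endpoint surfaced by an AI model should be verified against official documentation before secrets go anywhere near it.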


