Malicious links are increasingly surfacing even through AI chat platforms such as ChatGPT, and they are now causing losses in the crypto space. A user who asked ChatGPT for help building a crypto app was given code pointing to a fraudulent API; running it left their wallet compromised and their assets stolen. The incident highlights how unreliable AI tools can still be for Web3 security. The fake API site has been used against multiple wallets to steal SOL, and the case is reportedly the first complete exploit in the crypto space involving AI-generated code. The founder of SlowMist confirmed the exploit and urged users to verify AI-generated code before running it; the attackers behind the API have also drained various meme tokens from affected wallets.

It is unclear who else accessed the fake API site, but ChatGPT's data may have been contaminated through malicious Python code published across multiple repositories. Users should treat unknown repositories with caution and review code before running it to mitigate the risk. The fraudulent API was also promoted in a Medium article and linked to a documentation page carrying the same name as the flawed GitHub repository. The safest approach is to avoid unverified code altogether and to keep wallets on a separate device from anything used for risky connections.

A separate campaign using fake Zoom links has also emerged, tricking crypto influencers and token holders into downloading malware. Storing wallets and private keys on dedicated devices, and using freshly assigned wallets for token mints and new connections, remains the recommended practice.
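As the summary notes, the attack succeeded because generated code was run without review. The sketch below is not from the article; it shows one way to pre-screen AI-generated Python before executing it, using only the standard library: it lists every hard-coded URL the snippet would contact and flags HTTP-style calls whose arguments mention private-key-like names. The `SUSPECT_NAMES` and `HTTP_CALLS` sets, the `audit_source` helper, and the `generated_bot.py` filename are illustrative assumptions, not anything named in the report.

```python
# Minimal sketch of a red-flag scan for untrusted, AI-generated Python code.
# Heuristics only: it cannot prove code is safe, it just surfaces the most
# obvious exfiltration patterns for manual review.
import ast
import re
import sys

# Assumed, non-exhaustive indicator lists (not from the article).
SUSPECT_NAMES = {"private_key", "secret_key", "mnemonic", "seed_phrase", "keypair"}
HTTP_CALLS = {"post", "get", "put", "request", "urlopen"}
URL_RE = re.compile(r"https?://[^\s\"']+")


def audit_source(source: str) -> list[str]:
    """Return human-readable findings for a piece of untrusted Python source."""
    findings: list[str] = []
    tree = ast.parse(source)

    for node in ast.walk(tree):
        # List every hard-coded URL so unfamiliar "API" hosts stand out.
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            for url in URL_RE.findall(node.value):
                findings.append(f"line {node.lineno}: contacts external URL {url}")

        # Flag attribute calls that look like HTTP requests (e.g. requests.post)
        # whose arguments mention key-like names.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in HTTP_CALLS:
                arg_dump = ast.dump(node)
                hits = [name for name in SUSPECT_NAMES if name in arg_dump]
                if hits:
                    findings.append(
                        f"line {node.lineno}: call to '{node.func.attr}' "
                        f"references {', '.join(hits)} -- possible key exfiltration"
                    )
    return findings


if __name__ == "__main__":
    # "generated_bot.py" is a hypothetical filename for the AI-generated script.
    path = sys.argv[1] if len(sys.argv) > 1 else "generated_bot.py"
    with open(path, encoding="utf-8") as handle:
        report = audit_source(handle.read())
    print("\n".join(report) or "no obvious red flags (manual review still required)")
```

A heuristic like this only catches blatant patterns; it does not replace the precautions the report recommends, namely avoiding unverified code, keeping wallets and private keys on separate devices, and using freshly created wallets for token mints and new connections.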
- Content Editor (cryptopolitan.com)
- 2024-11-22
User's Solana wallet exploited in first case of AI poisoning attack