Original title: Can AI bots steal your crypto? The rise of digital thieves
Original translation: 0x deepseek, ChainCatcher
In an era of rapid development in cryptocurrency and AI technology, digital asset security faces unprecedented challenges. This article reveals how AI bots, with their capacity for automated attacks, deep learning, and large-scale penetration, have turned the crypto field into a new criminal battlefield. From precision phishing to smart contract vulnerability harvesting, from deepfake fraud to adaptive malware, attack methods have outgrown the limits of traditional human defense. In this contest between algorithms, users must stay wary of AI-enabled digital thieves while making good use of AI-driven defense tools. Only by balancing technical vigilance with sound security practices can wealth be defended amid the turbulent waves of the crypto world.
TL;DR
AI bots can evolve on their own and execute crypto attacks automatically and at massive scale, with efficiency far beyond human hackers
In 2024, a single AI phishing campaign caused nearly $65 million in losses, and fake airdrop websites can automatically drain connected wallets
GPT-3-level AI can analyze smart contract code for vulnerabilities; a flaw it flagged in demonstrations mirrors the one exploited in the $80 million Fei Protocol attack
AI builds prediction models from leaked password data, cutting the time a weak-password wallet can resist brute force by 90%
Fake CEO videos and audio produced with deepfake technology have become a new social engineering weapon for coaxing transfers
AI-as-a-service tools such as WormGPT have surfaced on black markets, letting non-technical criminals generate customized phishing attacks
The BlackMamba proof-of-concept malware uses AI to rewrite its code in real time and evaded mainstream security systems entirely in tests
Hardware wallets keep private keys offline, effectively defending against 99% of remote AI attacks (users who held their own keys were spared in the 2022 FTX collapse)
AI social botnets can manipulate millions of accounts at once; deepfake Musk videos have fronted fake giveaways, and one AI-assisted romance-scam ring took in more than $46 million
1. What is an AI bot?
AI bots are self-learning software that can automate and continuously optimize cyberattacks, making them more dangerous than traditional hacking methods.
At the heart of today’s AI-driven cybercrime are AI bots—self-learning software programs designed to process vast amounts of data, make independent decisions, and perform complex tasks without human intervention. While these bots have become a disruptive force in industries like finance, healthcare, and customer service, they have also become a weapon for cybercriminals, particularly in the cryptocurrency space.
Unlike traditional hacking methods that rely on manual operations and technical expertise, AI bots can fully automate attacks, adapt to new cryptocurrency security measures, and even optimize strategies over time. This makes them far superior to human hackers who are limited by time, resources, and error-prone processes.
2. Why are AI bots so dangerous?
The biggest threat of AI cybercrime is scale. A single hacker trying to break into an exchange or trick a user into handing over their private keys has limited capabilities, but AI bots can launch thousands of attacks simultaneously and optimize their tactics in real time.
Speed: AI bots can scan millions of blockchain transactions, smart contracts, and websites within minutes, flagging vulnerable wallets, DeFi protocols, and exchanges (a scanning sketch follows this list).
Scalability: While a human scammer might send hundreds of phishing emails, an AI bot can send personalized, carefully crafted phishing emails to millions of people in the same amount of time.
Adaptability: Machine learning enables these bots to evolve from each failure, making them harder to detect and stop.
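To make the speed point concrete, here is a minimal sketch, assuming the web3.py library and a placeholder RPC endpoint, of how block-by-block contract discovery can be automated. It illustrates the scanning loop only, not any real bot's code; defenders can run the same loop to monitor their own deployments.

```python
# Minimal sketch of automated chain scanning (assumes web3.py; RPC_URL is a placeholder).
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"  # placeholder Ethereum RPC endpoint
w3 = Web3(Web3.HTTPProvider(RPC_URL))

def new_contracts_in_block(block_number: int) -> list[str]:
    """Return addresses of contracts deployed in the given block."""
    block = w3.eth.get_block(block_number, full_transactions=True)
    deployed = []
    for tx in block.transactions:
        if tx["to"] is None:  # contract-creation transactions have no recipient
            receipt = w3.eth.get_transaction_receipt(tx["hash"])
            deployed.append(receipt["contractAddress"])
    return deployed

# A bot loops this over every new block and feeds each address into
# automated analysis; a monitor can watch for its own contracts the same way.
print(new_contracts_in_block(w3.eth.block_number))
```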
This automation, adaptability, and capacity to attack at scale have led to a surge in AI-driven crypto scams, making protection against crypto fraud more critical than ever.
In October 2024, the X account of Andy Ayrey, the developer of the AI robot Truth Terminal, was hacked. The attacker used his account to promote a fraudulent meme coin called Infinite Backrooms (IB), causing the market value of IB to soar to $25 million. Within 45 minutes, the criminals sold their positions and made a profit of more than $600,000.
3. How do AI bots steal crypto assets?
AI bots not only automate fraud; they are becoming more intelligent, precise, and difficult to detect. The following are the dangerous types of AI scams currently used to steal crypto assets:
AI-powered phishing bots
Traditional phishing attacks are not new in the crypto space, but AI has made them even more threatening. Today’s AI bots can create messages that are highly similar to official communications from platforms like Coinbase or MetaMask, and collect personal information from leaked databases, social media, and even blockchain records to make the scams extremely convincing.
For example, in early 2024, an AI phishing attack against Coinbase users defrauded victims of nearly $65 million through fake security alert emails. And after the release of GPT-4, scammers set up a fake OpenAI token airdrop website that automatically drained users' assets once they connected their wallets.
These AI-enhanced phishing messages typically contain no spelling errors or clumsy wording, and some campaigns even deploy AI customer service bots that ask for private keys or 2FA codes under the guise of "verification." Back in 2022, the Mars Stealer malware could steal private keys from more than 40 browser wallet extensions and 2FA apps, often spreading through phishing links or pirated software.
AI vulnerability scanning bots
Smart contract vulnerabilities are a gold mine for hackers, and AI bots are exploiting them at an unprecedented rate. These bots constantly scan platforms such as Ethereum or BNB Smart Chain, looking for vulnerabilities in newly deployed DeFi projects. Once a problem is detected, they automatically exploit it, usually within minutes.
Researchers have demonstrated that AI chatbots, such as those powered by GPT-3, can analyze smart contract code to identify exploitable weaknesses. For example, Zellic co-founder Stephen Tong demonstrated an AI chatbot that detected a vulnerability in a smart contract’s “withdrawal” function, similar to the vulnerability exploited in the Fei Protocol attack that caused $80 million in losses.
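To show how mechanical the first pass of such scanning can be, here is a deliberately naive sketch: it flags a withdrawal function in which an external call precedes the balance update, the classic reentrancy shape. The Solidity snippet and the pattern check are invented for illustration; they are not the Fei Protocol code or any production scanner, and real AI analysis goes far deeper than string matching.

```python
import re

# Invented Solidity snippet with the classic reentrancy shape:
# the external call happens before the sender's balance is reduced.
SOLIDITY_SNIPPET = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;  // state updated AFTER the external call
}
"""

def call_precedes_state_update(source: str) -> bool:
    """Naive check: does an external call appear before the balance update?"""
    call_pos = source.find(".call{value:")
    update = re.search(r"balances\[msg\.sender\]\s*-=", source)
    return call_pos != -1 and update is not None and call_pos < update.start()

if call_precedes_state_update(SOLIDITY_SNIPPET):
    print("possible reentrancy: external call precedes balance update")
```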
AI-enhanced brute force attacks
Brute force attacks used to take a long time, but AI bots have made them incredibly efficient. By analyzing previous password breaches, these bots can quickly identify patterns to crack passwords and seed phrases at record speeds. A 2024 study of desktop cryptocurrency wallets (including Sparrow, Etherwall, and Bither) found that weak passwords significantly reduced resistance to brute force attacks, highlighting the importance of strong and complex passwords for protecting digital assets.
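For a rough sense of why weak passwords fall so fast, the sketch below computes a naive upper bound on a password's brute-force search space. The example passwords are arbitrary, and the estimate is generous: models trained on breach data prune the effective space far below this bound, which is exactly how pattern-based cracking beats dictionary-style passwords.

```python
import math
import string

def naive_entropy_bits(password: str) -> float:
    """Generous upper bound: length x log2(size of character pool used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

# Arbitrary example passwords; real strength is lower than these bounds
# suggest whenever the password follows a human-predictable pattern.
for pw in ["hunter2", "Tr0ub4dor&3", "correct horse battery staple"]:
    print(f"{pw!r}: ~{naive_entropy_bits(pw):.0f} bits (naive upper bound)")
```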
Deepfakes
Imagine seeing a video of a trusted cryptocurrency influencer or CEO asking you to invest, except the video is completely fake. That's the reality of AI-driven deepfake scams. These bots create hyper-realistic videos and recordings that trick even savvy cryptocurrency holders into transferring funds.
Social media botnets
On platforms like X and Telegram, a large number of AI bots are spreading cryptocurrency scams on a large scale. Botnets such as “Fox 8” use ChatGPT to generate hundreds of convincing posts hyping scam tokens and responding to users in real time.
In one case, scammers abused Elon Musk and ChatGPT’s names to promote fake cryptocurrency giveaways — complete with deepfake videos of Musk — to trick people into sending money to the scammers.
In 2023, researchers at Sophos discovered that crypto romance scammers were using ChatGPT to chat with multiple victims at once, making their affectionate messages more convincing and scalable.
Similarly, Meta has reported a sharp rise in malware and phishing links disguised as ChatGPT or other AI tools, often tied to cryptocurrency fraud schemes. In romance scams, AI is powering so-called pig-butchering operations: long-running cons in which scammers cultivate a relationship and then lure victims into fake cryptocurrency investments. In 2024, a high-profile case surfaced in Hong Kong, where police busted a gang that had defrauded men across Asia of $46 million through AI-assisted romance scams.
4. How AI-powered malware is fueling cybercrime against crypto users
Artificial intelligence is teaching cybercriminals how to break into crypto platforms, enabling a cohort of less-skilled attackers to launch convincing attacks. This helps explain why crypto phishing and malware campaigns have grown so large: AI tools let bad actors automate scams and continually iterate on what works.
AI is also enhancing malware threats and hacking tactics aimed at cryptocurrency users. One concern is AI-generated malware: malicious programs that adapt themselves to evade detection.
In 2023, researchers demonstrated a proof-of-concept called BlackMamba, a polymorphic keylogger that used AI language models (like the technology behind ChatGPT) to rewrite its code each time it was executed. This meant that every time BlackMamba was run, it generated a new variant of itself in memory, helping it evade detection by antivirus and endpoint security tools.
In tests, industry-leading endpoint detection and response systems failed to detect the AI-crafted malware, which, once activated, could covertly capture everything a user typed, including cryptocurrency exchange passwords and wallet seed phrases, and send that data to the attacker.
While BlackMamba was just a lab demo, it highlights a real threat: Criminals can use artificial intelligence to create shapeshifting malware that targets cryptocurrency accounts and is harder to catch than traditional viruses.
Even without fancy AI malware, threat actors are taking advantage of AI’s popularity to spread classic Trojans. Scammers often set up fake “ChatGPT” or AI-related apps that contain malware, knowing that users may let down their guard because of the AI branding. For example, security analysts have observed fraudulent websites impersonating ChatGPT sites with a “Windows Download” button; if clicked, it quietly installs a cryptocurrency-stealing Trojan on the victim’s machine.
In addition to the malware itself, AI has lowered the technical bar for hackers. In the past, criminals needed some coding knowledge to create phishing pages or viruses; now, underground AI-as-a-service tools do most of the work.
Illegal AI chatbots such as WormGPT and FraudGPT have appeared on dark web forums, generating phishing emails, malware code, and hacking tips on demand. For a fee, even non-technical criminals can use these AI bots to create convincing scam websites, create new malware variants, and scan for software vulnerabilities.
5. How to protect your cryptocurrency from attacks by AI bots
AI-driven threats are becoming more advanced, so strong security measures are essential to protect digital assets from automated scams and hacks.
Here are the most effective ways to protect your cryptocurrency from hackers and defend against AI phishing, deepfake scams, and vulnerability-exploiting bots:
Use hardware wallets: AI-driven malware and phishing attacks mainly target online (hot) wallets. A hardware wallet such as Ledger or Trezor keeps your private keys completely offline, making it nearly impossible for hackers or malicious AI bots to reach them remotely. During the 2022 FTX collapse, for example, people who used hardware wallets avoided the heavy losses suffered by users who kept funds on the exchange.
Enable multi-factor authentication (MFA) and strong passwords: AI bots can crack weak passwords using machine learning models trained on leaked credential data to predict and exploit vulnerable logins. To counter this, always enable MFA through an authenticator app such as Google Authenticator or Authy rather than SMS-based codes: attackers have repeatedly used SIM-swap attacks to intercept SMS verification.
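Authenticator apps implement time-based one-time passwords (TOTP, RFC 6238). A minimal sketch with the third-party pyotp library shows why this resists SIM swapping: the shared secret is enrolled once on your device and never travels over the phone network.

```python
import pyotp  # third-party library: pip install pyotp

# The secret is generated at enrollment (usually shown as a QR code)
# and stored only on the device and the server, never sent via SMS.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                      # 6-digit code, rotates every 30 seconds
print("current code:", code)
print("verifies:", totp.verify(code))  # server-side check against the same secret
```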
Beware of AI-driven phishing scams: AI-generated phishing emails, messages, and fake support requests are nearly indistinguishable from real ones. Avoid clicking on links in emails or direct messages, always manually verify website URLs, and never share private keys or seed phrases, no matter how convincing the request may seem.
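One habit that automated phishing struggles to beat is strict host checking. The sketch below compares a link's actual hostname against a personal allowlist and flags punycode labels, which often hide lookalike characters; the allowlisted domains and test URLs here are illustrative assumptions, not an authoritative list.

```python
from urllib.parse import urlparse

# Example allowlist: maintain your own list of exact hostnames you trust.
TRUSTED_HOSTS = {"www.coinbase.com", "metamask.io"}

def looks_legitimate(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Punycode labels ("xn--") often disguise homoglyph lookalike domains.
    if any(label.startswith("xn--") for label in host.split(".")):
        return False
    return host in TRUSTED_HOSTS

# Illustrative test URLs: a real host, a subdomain trick, a punycode lookalike.
for url in [
    "https://www.coinbase.com/login",
    "https://www.coinbase.com.security-alert.example/login",
    "https://xn--metamsk-9fb.io/claim",
]:
    print(url, "->", "ok" if looks_legitimate(url) else "SUSPICIOUS")
```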
Verify identities carefully to avoid deepfake scams: AI-powered deepfake videos and recordings can convincingly impersonate cryptocurrency influencers, executives, or even people you know. If someone asks for money or promotes an urgent investment opportunity via video or audio, verify their identity through multiple channels before taking action.
Stay up to date on blockchain security threats: Regularly monitor trusted blockchain security sources such as Chainalysis or SlowMist.