Since its initial discovery in May 2024 by the Sysdig Threat Research Team (TRT), LLMjacking has rapidly evolved, posing a significant risk to organizations relying on large language models (LLMs). The latest wave of attacks has set its sights on DeepSeek, a fast-growing AI model that cybercriminals exploited within days of its release.
Understanding LLMjacking
LLMjacking involves the theft of API keys and cloud access credentials to run costly AI models without authorization. Stolen credentials are often resold on underground markets or used to power illicit AI services, leading to massive financial losses for legitimate users.
Why DeepSeek Is the Latest Target
DeepSeek released its DeepSeek-V3 model in December 2024, and it gained immediate popularity. Within days, attackers had integrated it into unauthorized proxy services. A similar pattern followed in January 2025 with DeepSeek-R1, illustrating how cybercriminals track and exploit new AI models as soon as they gain traction.
The Role of OpenAI Reverse Proxies (ORP) in LLMjacking
Attackers frequently stand up OpenAI Reverse Proxies (ORPs), which pool stolen API keys behind a single endpoint while masking the operators' IPs behind dynamic domains. Access to these proxies is then monetized, often sold on underground marketplaces.
For example, an ORP at vip[.]jewproxy[.]tech was found selling access for $30 per month. In just a few days, its logs showed millions of tokens processed, amounting to tens of thousands of dollars in unauthorized cloud fees. Some of the most expensive AI models, such as Claude 3 Opus, accounted for nearly $39,000 in stolen usage.
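To put those figures in perspective, here is a minimal sketch of the cost arithmetic, assuming Claude 3 Opus's published per-million-token pricing; the token split is a hypothetical volume chosen to show how quickly proxied traffic reaches the reported totals:

```python
# Pricing assumptions: Anthropic's published Claude 3 Opus rates
# at the time (USD per 1M tokens); swap in current figures as needed.
INPUT_COST_PER_M = 15.0   # $ per 1M input tokens (assumed)
OUTPUT_COST_PER_M = 75.0  # $ per 1M output tokens (assumed)

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the API bill in USD for a given token volume."""
    return (input_tokens * INPUT_COST_PER_M
            + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Hypothetical split: ~2B prompt tokens plus ~120M completion tokens
# relayed through a proxy lands right at the ~$39,000 reported.
print(f"${usage_cost(2_000_000_000, 120_000_000):,.0f}")  # $39,000
```

At those rates, a busy proxy only needs to relay a couple of billion prompt tokens before the victim's bill reaches tens of thousands of dollars.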
Tactics Attackers Use to Evade Detection
Cybercriminals are leveraging thriving underground communities on Discord and 4chan, where they exchange tools and techniques. Key methods include:
- TryCloudflare Tunnels: Attackers hide malicious proxy activity behind the dynamic, throwaway hostnames of Cloudflare's free tunnels (a detection sketch follows this list).
- Obfuscation Techniques: Some proxies hide their front ends with CSS tricks or gate them behind password authentication to avoid discovery.
- Automated API Key Theft: Stolen credentials are tested and categorized before resale or further exploitation (see the key-checking sketch below).
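Because TryCloudflare tunnels always resolve under the trycloudflare.com apex, defenders can hunt for them in DNS or proxy logs. A minimal sketch, with a hypothetical log format and hostnames:

```python
import re

# TryCloudflare hands out random subdomains under trycloudflare.com,
# so that apex is a reliable string to hunt for in egress logs.
TUNNEL_RE = re.compile(r"\b[\w-]+\.trycloudflare\.com\b", re.IGNORECASE)

def find_tunnel_domains(log_lines):
    """Yield TryCloudflare tunnel hostnames seen in DNS/proxy logs."""
    for line in log_lines:
        yield from TUNNEL_RE.findall(line)

# Hypothetical log excerpt for illustration only.
logs = [
    "2025-01-30T12:00:01 query A example.com",
    "2025-01-30T12:00:02 query A quiet-river-demo.trycloudflare.com",
]
print(list(find_tunnel_domains(logs)))  # ['quiet-river-demo.trycloudflare.com']
```

Not every tunnel is malicious, but an unexpected trycloudflare.com lookup from production infrastructure is worth investigating.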
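The key-testing step also leaves a fingerprint defenders can hunt for: bursts of cheap, read-only API calls from unfamiliar sources. A minimal sketch of the checker pattern, assuming an OpenAI-style /v1/models endpoint; the key shown is a placeholder:

```python
import requests

def key_is_live(api_key: str) -> bool:
    """Probe an OpenAI-style API with a cheap, read-only request.
    Attacker tooling triages stolen keys this way; defenders can
    hunt for the same fingerprint in their provider's access logs."""
    resp = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key works; 401 means revoked or invalid.
    return resp.status_code == 200

# Placeholder value, not a real credential.
print(key_is_live("sk-EXAMPLE-NOT-A-REAL-KEY"))
```

A sudden run of list-models calls against your account, especially from new IP ranges, is a strong signal that a leaked key is being validated.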
How Security Teams Can Defend Against LLMjacking
To mitigate the risk of LLMjacking, organizations should implement the following best practices:
- Secure API Keys: Store credentials in vault solutions such as AWS Secrets Manager or Azure Key Vault instead of source code or config files (see the retrieval sketch after this list).
- Enforce Least Privilege Access: Restrict each key to only the users and applications that genuinely need it.
- Monitor API Usage: Use anomaly detection to spot unauthorized activity, such as sudden spikes in token volume (a minimal sketch follows below).
- Regularly Rotate Keys: Automate credential rotation to shrink the window during which an exposed key remains useful.
- Conduct Routine Security Scans: Use tools such as TruffleHog and GitHub Secret Scanning to detect exposed credentials before attackers do.
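As an example of the first recommendation, here is a minimal sketch of fetching a key from AWS Secrets Manager with boto3 at call time; the secret name is hypothetical, and standard AWS credentials are assumed to be configured:

```python
import boto3

def get_llm_api_key(secret_id: str) -> str:
    """Fetch an API key from AWS Secrets Manager at call time so it
    never has to live in source code, config files, or images."""
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return resp["SecretString"]

# Hypothetical secret name; store your real key under it beforehand.
api_key = get_llm_api_key("prod/llm/provider-api-key")
```

Keeping retrieval at call time also makes rotation painless: updating the secret in the vault takes effect without redeploying code.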
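And for usage monitoring, a crude baseline comparison illustrates the idea. Real deployments would lean on provider-native telemetry, but the median-based check below (with an invented usage history) shows why LLMjacking spikes are easy to surface:

```python
from statistics import median

def flag_usage_spikes(daily_tokens: list[int], multiplier: float = 5.0) -> list[int]:
    """Return indexes of days whose token usage exceeds `multiplier`
    times the median baseline. The median is used instead of the mean
    so a single huge spike cannot drag the baseline up after itself."""
    baseline = median(daily_tokens)
    return [i for i, n in enumerate(daily_tokens) if n > multiplier * baseline]

# Invented history: a steady baseline, then an LLMjacking-style spike.
history = [90_000, 110_000, 95_000, 105_000, 100_000, 4_800_000]
print(flag_usage_spikes(history))  # [5]
```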
Conclusion
As AI adoption accelerates, so do the threats targeting these technologies. LLMjacking is a fast-evolving cyber risk, with attackers continuously adapting their methods to exploit new models like DeepSeek. Security teams must remain proactive, implementing strong access controls and continuous monitoring to safeguard AI infrastructure from unauthorized use and financial loss.