DeepSeek, the maker of open source AI models whose rapid rise has shaken the industry, is raising significant concerns across the security sector, particularly at OpenAI. The DeepSeek competitor has discussed the potential for DeepSeek’s models to undermine the security of AI applications, which could have far-reaching consequences for technology as we know it.
Satnam Narang, Senior Staff Research Engineer at Tenable, commented:
“DeepSeek has taken the entire tech industry by storm for a few key reasons: first, they have produced an open source large language model that reportedly beats or is on-par with closed-source models like OpenAI’s GPT-4 and o1. Second, they appear to have achieved this using less intensive compute power due to limitations on the procurement of more powerful hardware through export controls.”
OpenAI’s apprehensions stem from a fundamental belief that security must be at the forefront of AI development. With DeepSeek’s capabilities, there’s a risk that malicious actors could manipulate AI systems. Imagine an AI model designed to assist in healthcare being compromised: such an event could harm patient care and erode trust in the technology.
Countries that prioritise robust security measures and AI governance frameworks will likely dominate the future of AI technology. As nations race to develop advanced AI systems, those that overlook the threat posed by tools built on models like DeepSeek may find themselves at a disadvantage. This creates a competitive landscape where the balance of power could shift dramatically, influenced by how effectively countries manage security risks.
Narang went on to say, “The release of DeepSeek-V3 and its more powerful DeepSeek-R1 as open source large language models increases accessibility to anyone around the world. The challenge, however, is that unlike closed source models, which operate with guardrails, local large language models are more susceptible to abuse.”
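To make the distinction Narang draws more concrete, here is a minimal sketch, not drawn from the article, of the two deployment routes: a hosted, closed-source API where the provider can apply server-side moderation, and a locally run open-weight model where no such external layer exists. The prompt is illustrative, and the local model shown is a small placeholder checkpoint rather than a DeepSeek model.

```python
# Sketch: hosted API with provider-side guardrails vs. local open-weight
# inference with none. Model names and prompt are placeholders.

from openai import OpenAI                 # hosted API client
from transformers import pipeline         # local open-weight inference

prompt = "Explain how phishing emails are typically structured."

# Hosted route: the provider can screen inputs and outputs server-side
# before any completion is returned.
client = OpenAI()
moderation = client.moderations.create(input=prompt)
if moderation.results[0].flagged:
    print("Request blocked by provider-side guardrails.")
else:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

# Local route: once the weights are downloaded, nothing external filters the
# prompt or the output; any safety behaviour lives only in the model itself.
# "distilgpt2" is a small placeholder model, not a DeepSeek checkpoint.
local_model = pipeline("text-generation", model="distilgpt2")
print(local_model(prompt, max_new_tokens=80)[0]["generated_text"])
```

The point of the contrast is not that local models are inherently malicious, but that the enforcement point disappears: whatever filtering a hosted provider performs has no equivalent once the weights run on hardware the operator controls.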
To illustrate, consider the case of a nation that invests heavily in AI research but neglects security protocols. If a security breach occurs due to vulnerabilities exposed by DeepSeek, the repercussions could tarnish that nation’s reputation in the global arena. Conversely, countries that proactively address these threats can enhance their standing as leaders in the AI sector.
Assessing security risks is not a mere exercise in caution; it’s a necessity. AI systems are increasingly integrated into critical infrastructure, from finance to transportation. A breach could lead not just to data theft but also to the manipulation of essential services. For instance, if an AI system controlling a power grid were compromised, it could result in widespread outages affecting millions. We’ve already seen comparable disruption with the CrowdStrike incident and, to a smaller degree, the Optus outage in Australia.
Industry experts are continuously discussing the importance of a multi-faceted approach to security. It’s not enough to implement basic safeguards; we must also anticipate potential threats and develop strategies to mitigate them. This includes investing in research to understand the evolving landscape of AI vulnerabilities and fostering collaboration between governments, private sectors, and academia to share knowledge and best practices.
“Large language models with cybercrime in mind typically improve the text output used by scammers and cybercriminals seeking to steal from users through financial fraud, or to help deploy malicious software. We know cybercriminal-themed tools like WormGPT, WolfGPT, FraudGPT, EvilGPT and the newly discovered GhostGPT have been sold through cybercriminal forums,” Narang added.
Transparency in AI development processes can significantly enhance security. By openly sharing information about vulnerabilities and security measures, developers can create a community of practice that collectively strengthens the resilience of AI systems.
“While it’s still early to say, I wouldn’t be surprised to see an influx in the development of DeepSeek wrappers, which are tools that build on DeepSeek with cybercrime as the primary function, or to see cybercriminals utilise these existing models on their own to best fit their needs,” Narang continued.
OpenAI recommends a tier-based framework for AI regulation, which would categorise AI applications based on their potential risks and impacts. This approach allows for tailored regulations that address the specific vulnerabilities of each tier. For instance, AI systems used in healthcare or critical infrastructure would fall under stricter regulations compared to those with lower risk profiles. By implementing this framework, we can ensure that the most sensitive applications receive the attention they require, thereby reducing the likelihood of abuse involving models like DeepSeek.
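As a rough illustration of how a tier-based scheme might be expressed in practice, the sketch below maps application domains to risk tiers and the controls each tier might require. The tier names, domains and controls are invented for illustration and are not OpenAI’s actual proposed categories.

```python
# Hypothetical sketch of a tier-based risk classification, assuming three
# tiers and a simple domain-to-tier mapping; none of these names come from
# OpenAI's real framework.

from enum import Enum


class RiskTier(Enum):
    HIGH = "high"        # e.g. healthcare, critical infrastructure
    MEDIUM = "medium"    # e.g. financial services, hiring tools
    LOW = "low"          # e.g. entertainment, productivity assistants


# Illustrative mapping of application domains to tiers.
DOMAIN_TIERS = {
    "healthcare": RiskTier.HIGH,
    "power_grid": RiskTier.HIGH,
    "lending": RiskTier.MEDIUM,
    "recruitment": RiskTier.MEDIUM,
    "gaming": RiskTier.LOW,
}

# Controls a stricter tier would require before deployment.
TIER_CONTROLS = {
    RiskTier.HIGH: ["independent security audit", "incident response plan",
                    "human oversight of model outputs"],
    RiskTier.MEDIUM: ["internal red-teaming", "abuse monitoring"],
    RiskTier.LOW: ["basic acceptable-use policy"],
}


def required_controls(domain: str) -> list[str]:
    """Return the controls this hypothetical scheme expects for a domain."""
    tier = DOMAIN_TIERS.get(domain, RiskTier.MEDIUM)  # default conservatively
    return TIER_CONTROLS[tier]


if __name__ == "__main__":
    for domain in ("healthcare", "gaming"):
        print(domain, "->", required_controls(domain))
```

The design choice the sketch highlights is proportionality: obligations scale with the tier an application falls into, rather than applying one blanket rule to every AI system.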
Industry experts echo OpenAI’s concern that a proactive stance is essential. They argue that we must move beyond reactive measures and instead cultivate a culture of security awareness within the AI community. This involves investing in research and development focused on identifying potential threats before they materialise. Collaboration between government entities, private companies, and academic institutions will be crucial in creating a united front against emerging threats.