Since it first appeared late last year, ChatGPT has quickly built an extensive user base keen to put the evolving tool through its paces.
In response to a plain language prompt, ChatGPT can generate anything from a business proposal or marketing message to a poem or humorous story.
Its capabilities have also captured the attention of software developers. They’re attracted by the way in which the tool can generate fully featured code essentially at the press of a button.
Unfortunately, however, security experts have been quick to point out that, in many cases, the code being produced was poor quality, vulnerable and, in the hands of those with little security awareness, could cause an avalanche of insecure apps to hit unsuspecting consumers.
Meanwhile, there are also those who have enough security knowledge to use the tool for nefarious purposes. Phishing, deep fake scam videos, and malware creation are now achievable much faster, and with lower barriers to entry.
Poor Coding Patterns Dominate Its Go-To Solutions
With ChatGPT trained on decades of existing code and knowledge bases, it’s no surprise that for all its marvel and mystery, it too suffers from the same common pitfalls people face when navigating code. Poor coding patterns are the go-to, and it still takes a security-aware driver to generate secure coding examples by asking the right questions.
Even then, there is no guarantee that the code snippets given are accurate and functional from a security perspective. The technology is also prone to ‘hallucinations’, even at times making up non-existent libraries when asked to perform some specific JSON operations.
This, in turn, could lead to ‘hallucination squatting’ by threat actors, who would be all too happy to spin up some malware disguised as the fabricated library recommended with full confidence by ChatGPT.
Ultimately, we must face the reality that, in general, the IT industry has not expected developers to be sufficiently security-aware, nor has it adequately prepared them to write secure code as a default state. This will be evident in the enormous amount of training data fed into ChatGPT, and we can expect similar lacklustre security results from its output – at least initially.
Developers would have to be able to identify the security bugs and either fix them themselves or design better prompts for a more robust outcome.
The first large-scale user study examining how users interact with an AI coding assistant to solve a variety of security-related functions — conducted by researchers at Stanford University — supports this notion.
A road paved with good intentions
It should come as no surprise that AI coding companions are popular, especially as developers are faced with increasing responsibility, tighter deadlines, and the ambitions of a company’s innovation resting on their shoulders.
However, even with the best intentions, a lack of actionable security awareness when using AI for coding will inevitably lead to glaring security problems. All developers with AI/ML tooling will generate more code, and its level of security risk will depend on their skill level. Organisations need to be acutely aware that untrained people will certainly generate code faster, but so too will they increase the speed of technical security debt.
Naturally, just as junior developers undoubtedly increase their skills over time, you can expect AI/ML capabilities to improve. A year from now, it may not make such obvious and simple security mistakes.
However, that will have the effect of dramatically increasing the security skill required to track down the more serious, hidden, non-trivial security errors it will still be in danger of producing.
While there has been considerable talk of ‘shifting left’ for many years the fact remains that, for most organisations, there is a significant lack of practical security knowledge among the development cohort. Organisations must focus on finding ways to bridge this gap through ongoing education.
Currently, many organisations are ill prepared for many security bugs that are already in the wild let alone new ones generated by AI tools. When you add in emerging AI-borne issues such as prompt injection and hallucination squatting that represent entirely new attacks, the challenge becomes even more acute.
It’s clear that AI-powered tools will deliver significant benefits to software developers in coming years. However, it’s vital to be aware of – and mitigate – the security pitfalls that might be encountered along the way.