Artificial intelligence (AI) engines are appearing everywhere, with each new model and version seemingly offering more powerful and impressive capabilities across a widening range of fields.
It's been suggested that AI should be deployed to write code, and some models have already demonstrated proficiency across a multitude of programming languages. However, the idea that AI could take over the jobs of human software engineers is overstated.
All of the top AI models operating today have demonstrated critical limitations when it comes to advanced programming, not least of which is their tendency to introduce errors and vulnerabilities into the code they generate at breakneck speed.
While it's true that AI can save overworked programmers some time, the future will likely be one where humans and AI work together, with trained personnel applying critical thinking and precision to ensure all code is as secure as possible.
The Ticking Time Bomb of Unsupervised AI Code Writing
Highly sought-after developer jobs are unlikely to be replaced by AI anytime soon. But AI has other limitations too, the most critical being its inability to consistently write secure code. It is prone not only to introducing errors and vulnerabilities into the code it creates, but also to repeating the same mistakes until someone more security-aware corrects it.
AI models must be trained on large volumes of existing code, and that training data is bound to contain vulnerabilities, errors and exploitable patterns on which the AI may base its future output. That is a dangerous window of opportunity for a threat actor.
Most AI tools are not transparent about their decision-making process, so it's anyone's guess whether they have started to favour vulnerable code when completing development tasks. And if a tool does, it will repeat those errors time and again unless its behaviour is corrected. That is one of the main reasons AI tools are prone to generating inaccurate and insecure code.
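To make the risk concrete, here is a minimal, hypothetical sketch of the kind of flaw that can be reproduced from insecure training examples: building a SQL query by string concatenation, which opens the door to SQL injection, alongside the parameterised alternative a security-aware developer would insist on. The function names and schema are illustrative, not drawn from any specific model's output.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # A pattern often copied from insecure example code: user input is
    # concatenated directly into the SQL string, so an input such as
    # "x' OR '1'='1" changes the query's logic and dumps every row.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # The corrected pattern: a parameterised query lets the database
    # driver treat the input strictly as data, never as SQL syntax.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Both functions satisfy the same development task, which is precisely why an opaque model may favour either one; only a reviewer who recognises the pattern will reliably catch the difference.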
However, too many articlesโmostly from companies that make AI and AI toolsโeither downplay the severity of the problem or misrepresent how dangerous AI's tendency to generate insecure code can be once applications are deployed into production environments.
The Importance of Learning Secure Code Development
Employing AI in a partnership role with human developers can reap impressive gains in productivity and efficiency, but only if developers are highly trained in recognising secure coding patterns and the vulnerabilities associated with insecure code.
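As a second illustrative example (again hypothetical), consider the kind of pattern recognition this training builds: generated code often reaches for Python's predictable random module when producing security tokens, where a trained developer would substitute the cryptographically secure secrets module.

```python
import random
import secrets

def make_reset_token_insecure() -> str:
    # random is a non-cryptographic PRNG: its output can be predicted
    # from previously observed values, so an attacker could forge
    # password-reset tokens generated this way.
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def make_reset_token_secure() -> str:
    # secrets draws from the operating system's CSPRNG and is designed
    # for exactly this purpose: unguessable security-sensitive tokens.
    return secrets.token_hex(16)
```

Both versions return a plausible-looking 32-character token, so nothing in the AI's output signals the weakness; recognising it is a learned skill.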
The way forward is to train developers in cybersecurity best practices so they can, in turn, help to train and correct the actions of their new AI partners. And the good news is that most developers want to learn more about cybersecurity.
The recent State of Developer-Driven Security 2022 survey found that the overwhelming majority of software engineers saw the value of cybersecurity, even though only eight percent said that writing secure code and keeping vulnerabilities out of programs was easy.
Most also expressed a willingness to learn more about cybersecurity, and those skills will be increasingly taxed as developers shift from writing all of the code themselves to also working with code created by their new AI partners.
But the skills required are so complex that typical check-the-box training won't be effective in this new world of human and AI partnerships. Instead, organisations must provide developers with comprehensive, Agile-based training that teaches them how to apply security best practices to their work.
All of those elements are part of the pillars of learning set up by Secure Code Warrior to ensure that developers can embrace and use secure coding. Those skills will only become more important as less precise, error-prone AI assistants start writing large blocks of code, requiring human intervention and contextual critical thinking to ensure everything is correct and free of vulnerabilities.
The Future Is a Human-AI Partnership For Secure Code Development
Having AI assist developers with their work is a key component of the future of software development. However, given the many limitations of AI when trying to create secure code, the best path forward is establishing a close partnership between AI and developers.
In such a partnership, developers skilled in secure coding practices remain fully in charge of ensuring that all code is as secure as possible, whether it's written by humans or generated by AI coding tools.
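One final hypothetical sketch shows the kind of contextual check that human review adds. Generated file-handling code frequently omits path validation, allowing directory traversal; the reviewer's fix pins access to an allowed base directory. The directory and function names below are illustrative assumptions.

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads")  # illustrative upload directory

def read_upload_insecure(filename: str) -> bytes:
    # Generated code often joins user input straight onto a base path;
    # a filename like "../../etc/passwd" escapes the upload directory.
    return (BASE_DIR / filename).read_bytes()

def read_upload_secure(filename: str) -> bytes:
    # Resolve the full path and verify it is still inside BASE_DIR
    # before reading, rejecting traversal attempts outright.
    target = (BASE_DIR / filename).resolve()
    if not target.is_relative_to(BASE_DIR.resolve()):
        raise ValueError("path escapes the upload directory")
    return target.read_bytes()
```

Whether the unvalidated version is acceptable depends entirely on where the filename comes from, which is exactly the kind of application context an AI assistant lacks and a human reviewer supplies.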
It will take some work to achieve the best possible results from human and AI partnerships, but the rewards are more than worth the effort. Training developers to create secure code is the lynchpin to realising the potential of AI assistants.