Overcoming The Complexity And Security Issues Posed By AI Coding Tools
Posted: Wednesday, Jun 18

The rapid adoption of artificial intelligence (AI) coding assistants has transformed software development, offering much-needed relief for developers grappling with growing workloads and tight delivery timelines.

These generative AI tools have been widely embraced for their ability to speed up code generation and streamline development processes.

However, the initial promise of increased efficiency has been followed by a surge in unintended consequences, particularly in the realm of cybersecurity. What began as a productivity boon is now complicating an already challenging security landscape.

Developers are now facing an expanding attack surface, made more difficult to defend by the accelerated pace and volume of code produced with AI assistance. Experts warn that while these tools excel at generating code quickly, they often do so without the contextual understanding required to ensure security best practices, especially when used by developers with limited security expertise.

The result is a growing number of vulnerabilities being introduced into codebases at an unprecedented rate, adding pressure to teams already struggling to maintain secure and stable applications. As AI-generated code becomes more prevalent, organisations will need to reassess their development and security strategies to keep pace with this evolving technological frontier.
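To illustrate the kind of flaw that can slip through when code is generated without security context, the hypothetical Python sketch below contrasts a database query built by string concatenation (open to SQL injection) with a parameterised version. It is an illustrative assumption about a common failure pattern, not an example taken from any particular AI assistant's output.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical of quickly generated code: the query is assembled by string
    # concatenation, so a crafted username can rewrite the SQL (injection).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping, closing the hole.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions look equally "correct" to a developer focused on delivery speed, which is why reviews and policies that specifically target security are needed to catch the first variant.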

The current software environment has grown out of control from a security standpoint, and the trend shows no signs of slowing. However, there is hope for overcoming the twin challenges of complexity and insecurity.

Organisations must ensure strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control.

Complexity and maintenance issues

When OpenAI’s ChatGPT brought generative AI into the mainstream in November 2022, developers were quick to take advantage. By June 2023, 92% of US developers were using AI tools for work or personal use, according to a GitHub survey[1]. Developers mostly saw accelerated code creation as beneficial, and using AI tools quickly became routine.

However, subsequent surveys[2] found that although about three-quarters of developers said AI-generated code was more secure than code written by humans, AI was nevertheless introducing errors into more than half of its code.

What’s more, 80% of developers were ignoring secure AI coding policies, passing up any chance of catching those mistakes as they happened.

More recent research[3] sheds light on how AI-generated code increases complexity and compounds the challenge of maintaining and securing software late in the software development lifecycle (SDLC). GitClear analysed four years of changed code – about 153 million lines – created between January 2020 and December 2023, and found alarming results concerning code churn and the rate of copied or pasted code.

“Code churn” – code that is changed or updated within two weeks of being written – was projected to double between 2021 and 2024, with the 2021 baseline set before the onslaught of AI tools came into play.

During the same period, the amount of copy/pasted code increased faster than code that had been updated, deleted, or moved, indicating movement away from DRY (Don’t Repeat Yourself) practices, a trend that invariably leads to an increase in software flaws.
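As a simple, hypothetical illustration of why drifting away from DRY matters (not drawn from the GitClear data), the Python sketch below shows the same validation logic copy/pasted per field versus a single shared helper. With duplication, a bug fix or security hardening has to be found and applied in every copy; with the shared helper, it lands once.

```python
# Copy/pasted variant: identical validation logic duplicated per field,
# so any flaw in it must be fixed in every copy.
def validate_email(value: str) -> str:
    if not value or len(value) > 255:
        raise ValueError("invalid email")
    return value.strip().lower()

def validate_username(value: str) -> str:
    if not value or len(value) > 255:
        raise ValueError("invalid username")
    return value.strip().lower()

# DRY variant: one helper shared by all callers, so a correction or
# tightened rule applies everywhere at once.
def validate_field(value: str, field: str, max_len: int = 255) -> str:
    if not value or len(value) > max_len:
        raise ValueError(f"invalid {field}")
    return value.strip().lower()
```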

Both bad practices amplify the complexity of applications, which drives up support costs while increasing the difficulty of securing software. The speed of software production, accelerated by AI, puts more vulnerabilities into the pipeline before they can be fixed, which also considerably lengthens the time it takes for security to catch up.

AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk.

Creating a security-first culture

Organisations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority.

Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore. Secure software is at the foundation of a business’ productivity, reputation, and viability, making a commitment to a robust security culture a necessity.

Implementing an education program to upskill developers on writing secure code and correcting errors in AI-generated or third-party code can prevent those increasingly common defects from entering the pipeline, reducing complexity while improving software security.

Companies need to invest in programs that provide agile, hands-on, continuous learning and give security a prominent place among their key performance indicators. A learning program should establish a baseline of the skills developers need, and it should include both internal and industry benchmarks to gauge progress.

A crucial aspect of education is knowing that the program is working: that developers have absorbed their new skills and are applying them consistently.

The advantages AI tools deliver in speed and efficiency are impossible for time-crunched developers to resist. But the complexity and risk created by AI-generated code can’t be ignored either.

Organisations need to thoroughly upskill their developers so that they can work with security professionals to nip software security problems in the bud. Only by managing developer risk can the challenges of complexity and insecurity be overcome.

The result will be better code that supports superior business outcomes.

[1] https://github.blog/news-insights/research/survey-reveals-ais-impact-on-the-developer-experience/

[2] https://snyk.io/reports/ai-code-security/

[3] https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality

Matias Madou
Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realised that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, BlackHat and DefCon.