Australia’s FSIs Can Lead The Way In Secure AI-augmented Software Development
Posted: Tuesday, May 14


Large banks and other financial services organisations have proven to be early adopters of AI.

In Australia, each of the ‘Big Four’ banks – Westpac, CBA, ANZ and NAB – has invested in AI-assisted coding tools to help deliver new features to customers faster.

That aligns with the results of a survey by GitHub, which found that more than nine out of 10 developers already use AI coding tools, citing advantages including productivity gains (53%), freedom to focus on creative rather than repetitive tasks (51%) and preventing burnout (41%).

While generative AI’s code-writing ability has begun to realise its potential, its expanding use in the sector raises a closely related issue: security.

As early adopters of new technologies, financial institutions need to balance productivity gains against the secure and responsible use of AI. The quality and security of code produced or suggested by AI are paramount to effective operations, yet this is an area where organisations can too easily fall short.

As AI becomes an integral part of code delivery in banks and other FSIs, the volume of code produced will increase, and so will the number of errors, flaws and vulnerabilities – unless these are corrected early in the software development lifecycle.

The responsibility for running those quality checks will initially fall squarely on development teams.

Checking AI’s Output

AI and developers can work together very productively, but only if developers are trained well enough to ensure that AI is generating secure code.

Developers need more than a bare-minimum, checkbox approach to learning how to use AI. They require precision training to truly grasp security best practices in real-world settings, so they can not only write secure code themselves but also ably supervise the work of their code-writing AI assistants.

Without adequate oversight, AI’s coding mistakes can spread quickly. During training, an AI model ingests thousands of code examples, and there is no guarantee that those examples are free of errors. Because AI models are not transparent about how they produce their output, errors surface only after the fact – and the AI will repeat them until they are corrected.

An early study by AI researchers found that AI-generated code introduced significant flaws, including cross-site scripting (XSS) vulnerabilities, susceptibility to code injection attacks and new classes of vulnerability specific to AI-generated code, such as those associated with prompt injection.
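To make the code injection class concrete, here is a minimal Go sketch of the kind of pattern such studies describe – a query built by string concatenation, alongside the parameterised form a security-aware developer should expect instead. The function and table names are hypothetical, invented purely for illustration; this is not code from the study itself.

```go
package accounts

import "database/sql"

// findAccountsUnsafe shows the flaw: the caller-supplied owner value is
// concatenated directly into the SQL string, so input such as
//   x' OR '1'='1
// changes the meaning of the query (classic SQL injection).
func findAccountsUnsafe(db *sql.DB, owner string) (*sql.Rows, error) {
	return db.Query("SELECT id, balance FROM accounts WHERE owner = '" + owner + "'")
}

// findAccountsSafe shows the fix: a parameterised query keeps user input
// as data, never as executable SQL. (Placeholder syntax varies by driver;
// "?" is used by the MySQL and SQLite drivers.)
func findAccountsSafe(db *sql.DB, owner string) (*sql.Rows, error) {
	return db.Query("SELECT id, balance FROM accounts WHERE owner = ?", owner)
}
```

A reviewer who has internalised this pattern can reject the first form on sight, whichever tool suggested it.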

The financial industry’s ability to make effective use of AI relies on ensuring that the code being generated is secure from the outset. To do that, FSIs need to ensure they have highly trained engineers who will closely oversee AI code writing, deliver effective contextual prompts, identify errors and quickly correct them.

Implementing An Effective Training Solution

Developers working with AI-generated code will need to sharpen their existing skills – and acquire some new ones – in security best practices and in spotting the poor coding patterns that can lead to exploitable vulnerabilities.

Properly trained developers will be able to spot an AI model’s missteps before deployment and enhance the advantages of using AI to accelerate development.

For example, a useful training exercise is to prompt an LLM (large language model) to change the content of a real code snippet in order to modify its function. The AI responds by producing a code block – but that block is susceptible to cross-site scripting (XSS). Training of this kind ensures that the developer can recognise the vulnerability, as in the sketch below.
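As an illustration of what such an exercise might surface, here is a minimal, hypothetical Go sketch – not output from any particular model. The first handler reflects user input into the page unescaped; the second uses contextual escaping to close the hole.

```go
package greet

import (
	"fmt"
	"html/template"
	"net/http"
)

// greetUnsafe is the kind of block an AI assistant might plausibly
// return: the "name" query parameter is written into the HTML response
// unescaped, so a crafted link such as /greet?name=<script>...</script>
// executes in the victim's browser (reflected XSS).
func greetUnsafe(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "<h1>Hello, %s!</h1>", r.URL.Query().Get("name"))
}

// greetTmpl escapes interpolated values contextually, which is what a
// trained developer should reach for instead.
var greetTmpl = template.Must(template.New("greet").Parse("<h1>Hello, {{.}}!</h1>"))

func greetSafe(w http.ResponseWriter, r *http.Request) {
	if err := greetTmpl.Execute(w, r.URL.Query().Get("name")); err != nil {
		http.Error(w, "template error", http.StatusInternalServerError)
	}
}
```

The training value lies in the developer spotting, unprompted, why the first handler must never ship.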

As this example demonstrates, the skills required to operate in an environment of AI pair programming are complex, and they cannot be acquired through standard, static training methods alone.

Instead, FSIs adopting AI for code generation or developer augmentation will benefit from offering development teams a more complete program of agile training – one that takes a hands-on approach to secure coding and has been shown to significantly reduce the number of vulnerabilities in software.

Agile training should be tailored to the programming languages developers will encounter, no matter how niche: whether they work on a legacy COBOL codebase or on modern apps written in Google Go, they must be security-aware. Ideally, it should also deliver advanced content in formats suited to each developer’s preferred learning style – visual, auditory or verbal, as well as directly hands-on – and at a pace that fits individual developers and their work schedules.

The training should also be tailored to the specific roles and needs of employees. A platform can use a feedback loop to improve content and to recognise when a developer is weak in a certain area, so content can be automatically targeted to address it.

In Summary

Agile training in secure coding best practices can provide the foundation for secure, trustworthy applications built in collaboration with AI – which not only reduces an FSI’s risk but also feeds the institution’s ongoing productivity and success.

Matias Madou
Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realised that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations. When he is not at his desk as part of Team Awesome, he enjoys being on stage presenting at conferences including RSA Conference, Black Hat and DEF CON.