Since the release of ChatGPT in late 2022, artificial intelligence (AI) tools have become widely used by software developers.
According to research by the Australian Government’s Department of Industry, Science and Resources[1], 35% of small and mid-sized businesses are already using AI tools, and usage is expected to continue to increase. Applications include everything from marketing automation and fraud detection to data and document processing.
Larger firms are also embracing the technology. Banks, manufacturers, and utility companies are using AI tools to streamline processes and reduce costs.
Interestingly, many developers, buoyed by the resulting surge in their productivity, are taking part in what’s called ‘shadow AI’: using the technology without the knowledge or approval of their organisation’s IT department and/or chief information security officer (CISO).
This trend should come as no surprise: motivated employees tend to seek out technologies that maximise the value they deliver while cutting down the repetitive tasks that get in the way of more challenging work. After all, this is what AI is doing not only for developers but for professionals across the board.
The unapproved usage of these tools isn’t exactly new either. Similar scenarios have played out with shadow IT and shadow software as a service (SaaS).
However, even when staff circumvent company policies and procedures with good intentions, in a “don’t ask/don’t tell” manner, they are (often unknowingly) introducing risks and adverse outcomes through their use of AI. These risks include:
- Blind spots in security planning and oversight: Because CISOs and their teams are unaware of shadow AI tools, they cannot assess or help manage them.
- Vulnerable code: AI assistants can introduce insecure code that exposes or leaks data outside the organisation (see the short sketch after this list).
- Compliance shortcomings: Unvetted AI usage can fall out of step with regulatory requirements.
- Decreased long-term productivity: While AI delivers an initial productivity boost, it frequently introduces vulnerabilities, and teams wind up working backwards on fixes that should have been addressed from the start.
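To make the ‘vulnerable code’ risk concrete, here is a minimal, hypothetical Python sketch of a pattern AI assistants sometimes suggest: building a SQL query by string interpolation, which opens the door to SQL injection, shown alongside the parameterised alternative a security-aware review would insist on. The function and table names are illustrative only.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often seen in hastily accepted AI suggestions: splicing user
    # input directly into the SQL string, which allows SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver handles the value safely.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    # A crafted input the unsafe version happily splices into the query.
    payload = "' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, payload))  # returns every row
    print("safe:  ", find_user_safe(conn, payload))    # returns nothing
```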
Bringing AI out of the shadows
What’s clear is that AI on its own is not inherently dangerous. It’s the lack of oversight of how it is implemented that reinforces poor coding habits and lax security measures.
Under pressure to produce better software faster than ever, developers may cut corners on – or skip entirely – reviewing code for vulnerabilities from the outset. And, again, CISOs and their teams are kept in the dark, unable to secure tools they aren’t even aware exist.
However, CISOs can bring AI-assisted coding out of the shadows, allowing staff to get the most out of its productivity benefits while avoiding vulnerabilities. This is done by embracing the technology – as opposed to blanket suppression – and pursuing the following three-point plan to establish reasonable guardrails and raise security awareness among software development team members:
- Identify AI implementations: CISOs and their teams should map out where – and how – AI is deployed throughout the software development lifecycle (SDLC). They should consider who is introducing these tools, what security skills they have, and what steps are being taken to avoid unnecessary risks. By mapping out the SDLC, security teams can pinpoint which phases – such as design, testing or deployment – are most susceptible to unauthorised AI usage. (A simple inventory sketch follows this list.)
- Encourage a ‘security-first’ culture: It’s essential to drive home the message that a “proactive protection” mindset from the very beginning will actually save development time in the long run rather than adding to workloads. To get to this state of optimal and safe coding, team members must commit to a security-first culture that does not blindly trust AI output. With this culture fully taking hold – strengthened by regular training – these professionals will acknowledge that it really is best to ask for permission rather than forgiveness. They’ll understand that they need to let CISOs know what they want to use and why.
- Incentivise for success: When developers agree to take AI out of the shadows, they are adding value to their organisation. That value should be rewarded in the form of promotions and more appealing, challenging projects. By establishing benchmarks to measure team members’ security skills and practices, CISOs will be able to identify those who have proven themselves as candidates for greater responsibilities and career advancement.
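As a starting point for the ‘identify AI implementations’ step, the sketch below shows one hypothetical way to record where AI tools have been observed across the SDLC and to flag those the security team has not yet reviewed. The schema and field names are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolUsage:
    # Hypothetical record for one AI tool observed in the SDLC.
    tool: str               # e.g. a code assistant or test generator
    sdlc_phase: str         # design, coding, testing, deployment, ...
    introduced_by: str      # team or individual who adopted it
    approved: bool = False  # has the CISO's team reviewed it?

@dataclass
class AIInventory:
    entries: list = field(default_factory=list)

    def add(self, usage: AIToolUsage) -> None:
        self.entries.append(usage)

    def unapproved(self) -> list:
        # Shadow AI: everything in use that security has not yet reviewed.
        return [e for e in self.entries if not e.approved]

if __name__ == "__main__":
    inventory = AIInventory()
    inventory.add(AIToolUsage("code-assistant-x", "coding", "platform team"))
    inventory.add(AIToolUsage("test-gen-y", "testing", "QA team", approved=True))
    for entry in inventory.unapproved():
        print(f"Unreviewed: {entry.tool} ({entry.sdlc_phase}, {entry.introduced_by})")
```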
AI creates a powerful new skill set
With a security-first culture fully in play, developers will view the protected deployment of AI as a marketable skill and respond accordingly. CISOs and their teams, in turn, will be able to stay ahead of risks instead of being blindsided by shadow AI.
As a result, organisations will benefit from having their coding and security teams working closely together to ensure software production is better, faster, and more effective.
[1] Exploring AI adoption in Australian businesses, Australian Government, Department of Industry, Science and Resources