Generative AI: Putting it to Work to Improve IT Security
Posted: Wednesday, Jun 21


Since being unveiled to the public a little over six months ago, the generative AI platform ChatGPT has focused attention on the potential of artificial intelligence.

Seen by some as a powerful tool and others as a direct threat to their jobs, ChatGPT can complete an array of tasks in a matter of seconds. Users can request anything from the drafting of articles on particular topics to help with formulating a business plan.

What is less well known, however, is the role this new technology can play in improving levels of IT security within organisations. Added to an existing protective framework, it has the potential to improve the ability to spot and neutralise threats.


Understanding generative AI

Before considering its role in security, it’s worth taking a moment to understand exactly what the technology is and how it works. Essentially, generative AI is a tool that uses large volumes of data — such as text, images, and code — to create something new.

Most generative AIs follow one of three techniques: foundation models, generative adversarial networks (GANs), and variational autoencoders (VAEs). ChatGPT uses a foundation model.

Foundation models are based mainly on transformer architectures, a type of deep neural network that computes numerical representations of large training data sets. ChatGPT applies this technique to language data to create natural-sounding text based on the model’s analysis of how words are typically used together.

The tool doesn’t understand the inputs or outputs in the same way a human does, which is why it sometimes returns text that, while grammatically and syntactically correct, is factually inaccurate.
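
To make the next-word idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 standing in for ChatGPT’s much larger, non-downloadable model; it prints the tokens the model considers most likely to come next.

```python
# Minimal next-token prediction sketch (GPT-2 as a freely available stand-in).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Phishing emails often ask the recipient to"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)                    # the five most likely continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p:.3f}")
```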

Generative adversarial networks, in contrast, are based on two neural networks that work, as the name implies, in opposition. One, called the generator, specialises in generating objects of a specific type, such as images of faces or animals. The other, the discriminator, learns to evaluate them as real or fake. After a few million iterations, a GAN can produce deceptively realistic results.
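
As a hedged illustration of that adversarial loop, the PyTorch toy below swaps images for a simple one-dimensional distribution so it stays short and runnable: the discriminator learns to label real samples 1 and generated ones 0, while the generator learns to fool it.

```python
# Toy GAN: the generator learns to mimic a 1-D Gaussian, N(3, 0.5).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0       # "real" data
    fake = G(torch.randn(64, 8))                # generated data

    # Discriminator: label real as 1, fake as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make the discriminator label its output as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should cluster near 3.0
```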

Like GANs, VAEs are built from two neural networks: an encoder and a decoder. The encoder creates a compressed version of an object that retains its core characteristics. These representations can then be mapped onto a two-dimensional space where similar objects are clustered. New objects are generated by decoding a point in that space, say, one lying between two existing objects.
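
A skeletal PyTorch version of that encode/decode round trip might look like the following; it is untrained, so the decoded output is meaningless until the model is fitted to real data, and the dimensions are illustrative.

```python
# Minimal VAE skeleton: encode two objects, decode the point between them.
import torch
import torch.nn as nn

LATENT = 2                                      # two-dimensional latent space

class VAE(nn.Module):
    def __init__(self, dim=784):                # e.g. 28x28 images, flattened
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)        # mean of the latent code
        self.logvar = nn.Linear(128, LATENT)    # log-variance of the latent code
        self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                 nn.Linear(128, dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation

    def decode(self, z):
        return self.dec(z)

vae = VAE()
a = vae.encode(torch.rand(1, 784))              # compressed code for object A
b = vae.encode(torch.rand(1, 784))              # compressed code for object B
new_object = vae.decode((a + b) / 2)            # decode the midpoint: a "new" object
```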

 

Putting generative AI to use in security

Security specialists have already been attracted by the potential benefits these tools can deliver. One example is deploying a chatbot as a first line of IT support.

An AI-powered chatbot is able to correlate multiple tickets and help security teams spot potential signs of a compromise more quickly than would otherwise be possible.
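
A minimal sketch of how such a chatbot might be wired up, assuming the OpenAI Python client (v1-style API); the model name and ticket data are illustrative only.

```python
# Hedged sketch: ask a language model to correlate open support tickets.
from openai import OpenAI

client = OpenAI()                               # reads OPENAI_API_KEY from the environment

tickets = [
    "T-1041: user reports MFA prompts they did not initiate",
    "T-1042: laptop fan at 100%, unknown process 'svchosts.exe' running",
    "T-1043: password reset requested from an unusual location",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",                        # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a first-line IT support assistant. Group related "
                    "tickets and flag any pattern that could indicate a "
                    "compromise, for escalation to the security team."},
        {"role": "user", "content": "\n".join(tickets)},
    ],
)
print(response.choices[0].message.content)
```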

Generative AI tools can also be put to work writing code and evaluating binaries or stack traces to discover software flaws. This, in turn, can speed up development cycles and put better protective measures in place sooner.
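
In the same vein, a sketch of stack-trace triage using the same assumed client, this time asking for a structured reply that a build pipeline could parse; the input file and model name are hypothetical.

```python
# Hedged sketch: submit a crash stack trace for automated triage.
from openai import OpenAI

client = OpenAI()
stack_trace = open("crash.txt").read()          # hypothetical crash artefact

response = client.chat.completions.create(
    model="gpt-4o-mini",                        # illustrative model name
    messages=[{
        "role": "user",
        "content": "Analyse this stack trace. Reply as JSON with keys "
                   "'likely_cause', 'possible_vulnerability_class' and "
                   "'suggested_fix':\n" + stack_trace,
    }],
)
print(response.choices[0].message.content)
```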

Meanwhile, GANs and VAEs could be used to generate synthetic data to help train other AI models. This could be beneficial if an organisation has insufficient real data, or where that real data contains protected information such as health records or financial data.
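
One way to put this into practice is the open-source SDV library, whose CTGAN synthesizer is GAN-based. The sketch below is hedged: the input file is hypothetical, and the API shown matches recent SDV 1.x releases.

```python
# Hedged sketch: GAN-based synthetic tabular data with the SDV library.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer

real = pd.read_csv("claims.csv")                # hypothetical table of sensitive records

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real)            # infer column types automatically

synthesizer = CTGANSynthesizer(metadata, epochs=300)
synthesizer.fit(real)                           # learn the joint distribution

synthetic = synthesizer.sample(num_rows=5000)   # statistically similar, no real records
synthetic.to_csv("claims_synthetic.csv", index=False)
```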

GANs also have potential applications for security researchers and red teams, who can use them to generate synthetic biometric data for highly sophisticated penetration tests.


Taking pre-emptive measures

Security researchers could also use GANs in the same way attackers might: creating new forms of malware that can evade detection, or reverse engineering the algorithm used in a phishing filter.

At the same time, generative AI tools could also be used to find the most efficient layout for components on a microchip, or to minimise latency in an organisation’s network architecture, thus improving overall defences.

However AI is deployed in an organisation, security departments will need to determine whether sensitive data may be at risk. Assisting them today are solutions able to offer visibility into the devices and users on a network that are connecting to external AI-as-a-service (AIaaS) domains, as well as insight into how much data employees are sharing with these services and the types of data and individual files involved. This will enable enterprises to strengthen their overall risk mitigation and compliance posture and reduce the threat of potential data leakage.
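
As a simplified illustration of that kind of visibility, the sketch below scans a hypothetical web-proxy log for traffic to known AIaaS domains and totals the bytes each user has sent; the log format and domain list are assumptions, not any vendor’s actual product.

```python
# Hypothetical sketch: flag and size uploads to AI-as-a-service domains.
import csv
from collections import defaultdict

AIAAS_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai"}  # illustrative list

uploads = defaultdict(int)                      # (user, domain) -> total bytes sent

with open("proxy.log") as f:                    # assumed CSV: user,domain,bytes_sent
    for row in csv.DictReader(f, fieldnames=["user", "domain", "bytes_sent"]):
        if row["domain"] in AIAAS_DOMAINS:
            uploads[(row["user"], row["domain"])] += int(row["bytes_sent"])

for (user, domain), total in sorted(uploads.items(), key=lambda kv: -kv[1]):
    print(f"{user:<20} {domain:<20} {total / 1e6:8.2f} MB sent")
```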

It’s clear that generative AI has much to offer when it comes to improving IT security. With the technology evolving at a blistering rate, the ways in which it can be put to work will continue to increase.

Rohan Langdon