The Voice of Cyber®

KBKAST
Episode 334 Deep Dive: Brad Jones | Securing AI Deployments and Mitigating LLM-Powered Attacks
First Aired: September 17, 2025

In this episode, we sit down with Brad Jones, CISO at Snowflake, as he unpacks the evolving challenges of securing AI deployments and defending against large language model (LLM) powered attacks.

Brad explores the complexities enterprises face in keeping up with the rapid pace of AI innovation, especially as traditional policy frameworks struggle to adapt. He outlines the growing use of LLMs in both consumer and enterprise environments, the unique risks of agentic workflows, and the blurred boundaries between public and private AI deployments.

He also highlights the increased sophistication of social engineering threats fueled by LLMs and discusses strategies for observability, governance, and keeping security teams ahead of the curve in a fast-changing landscape.

 

Brad serves as the Chief Information Security Officer and has been with the company since 2023. Prior to joining Snowflake, Brad was the CISO and VP of Information Security at Seagate for over six years. Before his tenure at Seagate, Brad oversaw Information Security at Synopsys and SanDisk. Additionally, he has actively participated in a number of customer advisory boards and is currently part of the CISO Advisor Council at NightDragon. Brad earned his Bachelor of Science in Mechanical Engineering from the University of California, Davis.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Brad Jones [00:00:00]:
I think the pressure on enterprises to be leveraging AI in general to gain efficiency and the broad changes of features and capabilities within these tools is driving a curve that is hard for more rigid policies. And those people are developing those policies to really keep up with that pace.

Karissa Breen [00:00:36]:
Joining me now is Brad Jones, CISO at Snowflake. And today we’re discussing securing AI deployments and mitigating LLM powered attacks. So, Brad, thanks for joining me and welcome.

Brad Jones [00:00:51]:
Oh, thank you. I’m glad to be here.

Karissa Breen [00:00:52]:
Okay, so I really want to start perhaps Brad, with a little bit of lay of the land here around current landscape, so specific to large language models, also known as LLMs. For people who aren’t familiar. What are you sort of seeing? Because this is an interesting interview, so I’m keen to get into this, but I want to sort of start there first.

Brad Jones [00:01:09]:
You know, as a security practitioner, we have to think of this in a couple different angles. So one is probably most obvious is, is more of the consumer space, what people know of like ChatGPT or Cloud, where they’re interacting with them, maybe in their personal lives or maybe bringing them into an enterprise environment through shadow or unsanctioned measures. There is the broader deployment in an enterprise of some level of LLM or agentic workflows. And then as a security practitioner, we also have to think of the threat actors and how they’re leveraging some of these tools to help in social engineering attacks or more quickly pivot from vulnerabilities into exploits. So it’s a pretty broad spectrum of things that we have to think of. You know, a lot of people’s probably primary exposure is more of the consumer side of things, but we’re seeing more and more that it’s penetrating into the enterprise environment either through sanction tools or building robust applications that may have a LLM or kind of chat interface, natural language model interface to various tools or data sets?

Karissa Breen [00:02:19]:
Yeah. So Brad, you make some interesting points there. Would you say as well, given like the whole LLM large language model thing, do you think it’s something that people as people as insecurity folks are trying to just get their head around how it works, what are the risks, what’s going to happen? Is there still a lot of that in terms of new territory for people or how does that sort of sit with you?

Brad Jones [00:02:39]:
I think it’s a rapidly evolving space so if you go beyond purely the large language models, how they can be used in these agentic frameworks, there’s not a lot of policy or regulation or standards that is keeping up with the pace. We have things like the ISO 42001 standard. That’s a high level how you should be thinking about creating policies around the usage of AI within a platform or in your enterprise. But you know, it’s evolving at such a fast pace. We’re having to look at various thought leadership organizations of publishing, how to frame the broader problem. I think OWASP has put out some really good collateral over the past month plus of really framing the broader problem of not just LLMs, but the whole agency workflows and the different areas that you need to be focused on. You know, they carve out things like coding. AI tools are very different than creating a, you know, a chat application in an enterprise environment.

Brad Jones [00:03:45]:
That’s very different than using a, you know, ChatGPT or Claude in your business environment. A lot of focus around defining some of the problems around some of the interoperability tools like MCP or agent to agent protocols. It’s a very dynamic environment and evolving landscape at a very rapid pace. When I talk to other CISOs, they’re in the same boat of trying to keep up and trying to have policies, guardrails and guidelines that help them guide them. But they also understand that they need to be almost rethinking every three months the strategy on it because it’s such an evolving landscape.

Karissa Breen [00:04:23]:
Yeah, okay, that’s, that’s going to be tough. So would you say, given your experience, what you’re seeing, what do you think people just don’t really get about large language models at the moment in terms of securing them, Is there anything that people are sort of overlooking, like you said, like there’s no real handbook, there’s no rule rules or policies, regulations at the moment. So relatively early days. So how are people feeling about this?

Brad Jones [00:04:45]:
There’s no avoiding that there. You know, AI in general is not going to be used and leveraged to gain efficiencies either in people’s personal lives or in the enterprise environment. So from a security practitioner, we can’t be the no, we need to be the yes and organization that’s figuring out yes, these things can add benefits to the business. That’s undeniable. But how do we do that in a way that we can protect the enterprise? All of these large language models or agentive workflows at the core, they’re going to need access to Data and systems and ensuring that we have good guardrails, guidelines, trust boundaries with those is going to be imperative to make sure that when they’re deployed in an enterprise environment, they’re done in a way that you still have the same controls and governance over them.

Karissa Breen [00:05:33]:
Okay, so let’s go into the guardrails. So this is something that I’m hearing a lot. I literally ran an interview yesterday, same sort of guardrail situation. A lot of people still don’t know what they don’t know. So how is it gonna. Is it sort of gonna be a bit of a pendulum that’s gonna swing each side until companies, whether it’s more specific to a company with their individual guardrails, it’s like, okay, this is how we’re gonna handle things. Is it gonna be a bit more trial and error? Are people going to get burnt from certain things? Because you just don’t know. Right.

Karissa Breen [00:05:58]:
So. And then it may pendulum sort of fall in the middle or how does that sort of look in your eyes?

Brad Jones [00:06:04]:
I think in every organization it’s going to be an active document that’s under constant revision. We need to put some hard boundaries around things that shouldn’t be done and that may be certain outcomes in the business. We shouldn’t depend on agency workflows or LLMs. That could be things like HR decisions or financial reporting that, you know, there may not be a confidence level in the ability or the accuracy of the results. There’s going to be a large gray area that most people are working with and over time that, you know, hopefully people are shrinking the cloud, clearly allowed and clearly not allowed such that gray area is very small. There’s an aspect of the outcomes that leveraging an LLM, be it what data it has access to or what data it could expose, that’s fundamental to, you know, this. And it’s not something new with AI, it’s just something that AI has a broader capacity to potentially expose. So people need to think about if it’s accessing data, and especially in a corporate environment, confidential data, that there’s a lot of rigor around understanding what are the potential outcomes that you want to protect against.

Brad Jones [00:07:18]:
And over time, there’s going to be more and more products or capabilities that enter into the marketplace to help practitioners get a better visibility or put some more of this governance structure around it.

Karissa Breen [00:07:30]:
So can I ask more of a rudimentary question? So we look. You said before like active document, right? Which I get. I’ve historically come from a GRC sort of background and you know, we develop all these policies, but then people just wouldn’t adhere to the policy anyway. It’s like, oh, I forgot, couldn’t be bothered. It’s so long forgot about the thing existed. No one’s looked at it. And I know you said active document, but will that still help people follow those guardrails or do you still think people are just going to, you know, do whatever they want anyway? And we’re already seeing this with ChatGPT and all these other things that people are trying to prevent because people uploading sensitive information, etc. How’s that going to sit there? How are we going to be.

Karissa Breen [00:08:06]:
I don’t want to use the word police, but how are we going to make sure that even if we’ve got the active document that’s going to keep evolving, people are still trying to do the right thing?

Brad Jones [00:08:14]:
Sure. So we’re thinking about our AI policy as a very high level set of principles and then we’ll have lots of modular documents that we can update as new features and capabilities get added. It mentioned that OWASP put out some good papers recently. They have one on the state of agenda security and governance and they even mention that rigid policies are not going to be able to keep up with the pace of innovation or changes. That you know, at some point your policy is going to be more like a dashboard that when you’re seeing deviations from known standards, that you’re quickly acting upon it. Now, having a broader set of policies is a good idea. So Snowflake went out last summer and achieved ISO 42001. And part of that was that we had to create a broad set of policies.

Brad Jones [00:09:05]:
Those were primarily focused around the Snowflake platform and how we protect these foundational models, how we have transparency of data, cards of how those models are trained from the foundational models coming from the OpenAI’s, the Claude, the Deep Seq, but as well as when we provide refined models that we have that transparency in there of how we address those, how we maintain the rigor of understanding what training data went into a given model or refining a model. But I think if you look at it more broadly in the enterprise environment, not looking at it from a platform provider, you mentioned people uploading sensitive documents to something like a ChatGPT. There’s a very big difference. If someone’s using a personal instance of ChatGPT versus an enterprise managed version, the enterprise managed version you have, you could argue that’s part of your control boundary. You probably have more visibility. So providing guidelines around what should be used and what shouldn’t be used in an environment. So we’ve set up some policies around that to say if you are uploading general public information or asking questions or asking for a clarification around a term that’s reasonable to do in ChatGPT, but you should not be uploading sensitive documents that are internal documents. That is where that boundary is clearly defined for us right now.

Brad Jones [00:10:31]:
I think one of the challenges the enterprise are going to experience is the large foundational model providers are quickly rolling out features at a rapid pace there. You could argue they’re in somewhat of an arms race of being the first to release a new feature. They’re not necessarily focused on the enterprise level controls, and that’s putting a particular burden on enterprises to figure out how to put compensating controls or uploading policies or updating policies or providing more guidance or getting more observability to understand when people are bypassing or not adhering to policies.

Karissa Breen [00:11:09]:
Okay, I know we’re going to get into that in a moment in terms of private versus public. But before we do that, going back to your comment, Brad, before around rigid policymakers. So if we look at a historical policy writer, the rigidity of these people is we stick to the policy, we do the policy, we implement the policy, we police the policy. So is that going to rattle folks now? Because it’s like, well, it’s going to be forever changing, right? Like we’re not going to be doing the same things the way you used to write them and implement them and govern them and all this sort of thing. So is that going to rattle a few feathers now for these folks internally or perhaps.

Brad Jones [00:11:42]:
But I don’t think we’ve seen a technology curve like we’ve seen with the AI curve. When cloud came about, there was a slow build to it and people adjusted policies and got comfort level with it. I think the pressure on enterprises to be leveraging AI in general to gain efficiency and the broad changes of features and capabilities within these tools is driving a curve that is hard for more rigid policies. And those people are developing those policies to really keep up with that pace. That’s one of the reasons why we’ve taken the stance of creating a very modular framework and policy framework for AI that we understand. The broader concepts of the policy may not change, but some of the specifics may change when things like browser plugins that can browse or scrape your web pages are now released. That’s something that’s a completely different functionality or variant that wasn’t covered in the previous broader policy document. So I think teams just need to be agile with this because the rate of change is so prolific.

Karissa Breen [00:12:53]:
So I want to slightly switch gears and let’s get it back into the whole public versus private LLM debate. And you mentioned it before. Right. So it’s like, well, if you’re uploading an internal document to internal capability, that’s fine, but if you’re doing it externally, that’s a problem. But you mentioned before because these things are evolving so quickly, perhaps that external foundational providers has a better capability. So we’re probably going to start to see people going, yeah, but if I just use the new version of ChatGPT, it’s way better than the internal one that we’ve got. And so that rapid keeping up with that is going to be hard. So I’m keen to get into this a little bit more because I find this really interesting.

Brad Jones [00:13:30]:
If you’re looking at those providers as being completely external to your environment, that’s always going to be a problem of your trust boundaries of where data is going. One of the reasons that we feel Snowflake is well positioned in this is we have access to those foundational models within a security and trust boundary and governance boundary within the Snowflake environment that customers can feel confident in their ability to leverage their sensitive data using the latest foundational models and bringing those models to where their data lives. Anytime that you’re sending data to other services or interconnecting services, there’s always challenges in having that unified governance and control.

Karissa Breen [00:14:17]:
Yeah. Okay, so in terms of then. But the behavior of people, right. And like I’ve had a lot of discussions recently in terms of people can find a faster way to do something, they’re going to do it right. And it’s. They don’t think like security people, just like an average employee. So how does that sort of work behaviorally?

Brad Jones [00:14:36]:
Well, I think if you look broadly back over the past 10, 15 years, the concept of shadow IT is generally the source of that, is IT teams not providing the tools that teams need. So I think it’s important for IT teams to be part of the business discussions and understanding where the needs of the business are and providing those tools when they’re needed in an environment that has the right governance and oversight. So if you’re in the space where you don’t have a team that can scale up and understand those needs, those are the companies that are going to run into more problems.

Karissa Breen [00:15:14]:
All right, this is interesting. So then because of the advancement of these external providers, do you think that it’s going to be hard now for enterprises to really keep up with that in terms of, well, these people are rolling this stuff out super fast, right? So how is the velocity of these internal LM sort of capabilities going to keep up or how do you sort of see that unfolding now?

Brad Jones [00:15:38]:
Well, so certainly at Snowflake, we’re leveraging Snowflake for most of our business processes. We are the largest Snowflake user and we use that, the platform for every aspect of our business, be it security or finance or you name it. All of our data is in there and we have a good level of trust of being able to leverage our data in our Snowflake environment because the security team has a good view of the governance model, the security controls, and that we’re bringing those foundational models to the data and being able to leverage it in that manner. Manner. That’s something that I think differentiates what Snowflake can provide because we’re already the trusted provider of the hosting the data, those large data lakes or data warehouses. And bringing those foundational models gives the users the ability to innovate and leverage these tools in a secure environment. If you look more broadly at some of the things like the coding tools that are taking off, like the cursors and the Claude code, it is imperative that the IT teams and the security teams are part of those conversations. And as well as putting good guardrails and guardline guardrails and guidelines in the usage of those tools, have a good level of observability of their enterprise environment to understand what new tools are being introduced that haven’t come through the right processes and validation and enterprise level governance over those tools.

Brad Jones [00:17:07]:
So I think observability is going to be a key part that every IT team or every security team is going to have to put into their environment to understand when tools are being used that aren’t being managed in the right way and being introduced in the environment in a shadow IT sort of manner.

Karissa Breen [00:17:23]:
So now just to go into this a bit more, I want to maybe get your thinking on perhaps some of the strategic considerations enterprises must assess when choosing between the public and LLM sort of side of things and the services as well as like internal deployment. So how do people sort of sit back and say, well, given what we’re doing, we may be able to use a little bit of both. But these are the. To your point, these are the guardrails.

Brad Jones [00:17:46]:
I think you’re probably never going to get in a complete environment that you’re only going to use internal tools. I think that would be silly to assume that there won’t be use cases where some of those external tools are valid or useful or created different automation or productivity gains that can’t be gained in more of a enterprise managed internal environment. I think it’s imperative though that teams are looking at procuring enterprise managed versions of those tools where they can have that viewpoint of controls, governance, observability when interacting with LLMs. That observability side is critical to understand how people could be misusing or abusing those tools, to understand how you can put additional, say either policy statements or control mechanisms in place, things like an LLM gateway to be a man in the middle to try and observe those conversations or flag when there’s abuse or detect when there are people trying to manipulate the system. As I said, it’s such a rapidly expanding space. Some of these tools exist, some of them are tools that teams will have to build themselves. But no doubt as the environment and the ecosystem evolves, there’ll be more and more tools that teams can use to help in these various areas.

Karissa Breen [00:19:06]:
So Brad, going back to your point before, around catching people misusing or abusing, do you have any sort of examples of what that would look like? Perhaps?

Brad Jones [00:19:13]:
Well, so if you look at some of the security articles that have come out even in the last month, there’s a lot of threat actor manipulation of say documents where you could do rag poisoning, introducing additional commands or prompts hidden within documents. There could be people trying to get to data that normally they wouldn’t have access to, being able to trick that LLM that may be over provisioned or has too much permissions to get to a broader set of data. So I think it’s imperative for teams to be monitoring, especially their internally built tools of is there any instances of people trying to manipulate the system to try and get to data that they normally wouldn’t have done? You know, that’s a new avenue that is now part of the remit of most secure teams is doing that testing, doing that penetration testing or abuse testing of any of these LLMs that are introduced into an environment. There are challenges with the large foundational model. So when ChatGPT 5 came out, it lost some of its protection capabilities that were there in the 4.0 model. Right. And that was quickly observed in the broader security community. That was a step back and as I said earlier, that puts more of the burden on the enterprise practitioners, be it or security, to put those additional controls and do validation testing on these when New large models are introduced.

Karissa Breen [00:20:40]:
So perhaps for people who are not familiar what is you said before lost some of its protection capabilities. What does that look like?

Brad Jones [00:20:46]:
So people were able to manipulate GPT5 in ways that they hadn’t been able to manipulate 4.0. So some of the guardrails that OpenAI had put in, you maybe argue it was more of a hardened model and that quickly got exposed. Within 24 hours, people were able to manipulate it in ways that they hadn’t been able to do in 4.0 with some of the hardening has gone there. And you know, this is the challenge that, as I said, there’s an arms race with these foundational model providers. They’re trying to get things out as quickly as possible. It puts that burden back on the enterprise to do this level of validation. There’s plenty of security researchers are out there, they’re doing a lot of this hard work. But there’s also internally, you can’t do test once and assume that the same controls are in the next model.

Brad Jones [00:21:34]:
It has to be continuous testing and validation.

Karissa Breen [00:21:36]:
So are they going to put these sort of protection capabilities in place now.

Brad Jones [00:21:40]:
Or do you know, anytime there’s research that comes out, I’m sure it feeds back quickly into the system. Any time there’s a change in a version, in any sort of software. Right. There is a validation step that needs to happen within security teams or IT teams. You can’t assume that something new wasn’t introduced in a new version. And that’s why we do testing and we do this validation. I think AI is just going to force a more rigorous continuous testing cycle. Frameworks are being developed to help do that.

Brad Jones [00:22:11]:
Regular testing. You have a standard set of things that you look for. There are companies out there providing services like this that an enterprise can use their services to continuously test and validate. When a new model comes out that, hey, it passes all these same tests that we were successful in preventing before. As I said, it’s a very evolving landscape.

Karissa Breen [00:22:33]:
So do you ever have people or clients coming to you saying, hey Brad, I’m already stressed out as it is doing just the basic stuff like patch management. And now we’ve got to do constant testing and validating on these models or that’s going to add even more pressure to these already stressed out, you know, overworked security teams?

Brad Jones [00:22:50]:
Yeah, certainly. I mean, it’s a yes. And so you can’t say no, you’re not going to stop this train. It’s left the station. So I think most security teams are figuring out how they’re going to ramp up their resources or tools that they need to have in their environment to do this continuous testing.

Karissa Breen [00:23:06]:
And would you say, given the current climate that is heavily now considered to be like, well, we have to do this regardless, like you said, it’s a yes and doesn’t really matter how you feel about it. It’s here, we need to keep doing it. Is this now become more of the priority in terms of the totem pole in security sort of businesses at the moment, or internal security businesses?

Brad Jones [00:23:25]:
I would say most organizations, this is one of their primary things that they’re thinking about, of how they wrap their head around the entirety of AI, not only from the protection perspective of what additional tasks they’re going to need to do, but how they can leverage the technology itself to increase their efficiencies in different areas or leverage the technology itself to have some level of oversight over AI. So you’re going to have, you know, more and more agents monitoring agents for different purposes, and one of those purposes is probably going to be heavily focused around the security and validation of, you know, that environment.

Karissa Breen [00:24:00]:
You said the sentence wrap their head around. What are some of the common questions that people sort of are asking? Like you said, the whole how do we constantly test and validate? Is that sort of the main sort of line of questioning? Or like, hey, I don’t get any of this at all. What are some of the things that you’re hearing out there in the market?

Brad Jones [00:24:14]:
A lot of the focus right now is more of the agentic workflows more than primary the models themselves of how these things can be interconnected. If you look at something like mcp, which Anthropic released in November December, has become the de facto standard for how you allow these agency workflows or large language models to connect to legacy tools. That’s been described as the USB of agentic workflows. It in and of itself has a lot of challenges that a lot of the security functionality wasn’t built natively into the protocol. Things like authentication or plugin standards. There’s a lot of lessons that holistically we’ve learned over the past 30 years of how to craft good APIs or interfaces that seem to have been skipped in that implementation. And that’s just one example of how practitioners are challenged with understanding the broad set of the landscape. In May, there were some standards that were ratified or pushed to the Linux foundation around agent to agent communication.

Brad Jones [00:25:19]:
This expanding connectivity fabric adds more challenges to security practitioners to figure out what good looks like in their environment. And it’s incumbent on those teams to be educated and keeping up of what’s coming down the pipeline or what’s being newly introduced. And I think that rapid pace, as I talked about before, is a very different paradigm than most security teams had been dealing with in the past. There wasn’t wholesale changes of how cloud works. You know, there’s well established tools to help you look at configuration management or monitoring of those environments. And most of traditional workflows have been very prescriptive workflows. Automation used to be very, very much. You go from step one to step two.

Brad Jones [00:26:03]:
There’s an outcome in each of these steps. When you’re introducing agents in these LLMs, there’s reasoning models and they’re not necessarily always prescriptive of how they go about addressing those sort of things. And I think that’s adding to this complexity of what security teams need to think about and focus on.

Karissa Breen [00:26:21]:
So what would you say good sort of looks like, as you mentioned before, or generally speaking?

Brad Jones [00:26:27]:
So I think every team is probably challenged with the fact that the train has left the station. Security teams need to be putting those guardrails and guidelines and laying the track in front of the train. And I think as teams get more mature in this, they’ll be able to get further and further out in front of the train, laying around more policies and process and guardrails and guidelines and best practices. I think it’s also very incumbent on IT teams and security teams to define what good looks like in their environment. And that could be platforms that are approved, like, hey, come play on our field. We’ve built an environment that you can use that meets your needs, rather than every team going out figuring out their own way of doing things. So the more that teams can get into standardized tool sets and frameworks is going to help them longer term at being able to herd the cats of AI, if you will.

Karissa Breen [00:27:18]:
So going back to maybe agentic AI for a moment now, would you say historically in security it’s always about. We’ve historically liked to have that control, that governance sort of layer that we’ve had. But now things are. You said everything’s rapidly evolving. It’s almost spiraling out of control at this point. How do you think security teams are feeling with sort of relinquishing some of that control? So I’ve just, I’ve literally had a bunch of interviews this week of people saying, hey, we’re leveraging AI to do this, this and this. If the AI can’t figure out the answer, then a Human then intervenes. But there’s still a lot of things that are being done in the background without any human intervention.

Karissa Breen [00:27:52]:
How do you think that sort of sits with a security person, that that control is not with people as much as it used to be?

Brad Jones [00:27:59]:
Well, I think it’s still incumbent on business to decide which outcomes they still need the human in the loop for. So in all of our areas, within our broader IT teams or our finance teams, we’re just, we’re kind of defining crawl, walk, run stages of what we’re comfortable with, how AI is being used, it’s very different. If AI is going to create a report for you that you can leverage to help you out with a customer meeting, it’s very different. AI is going to be making a financial statement. So having those understandings of things that we’re comfortable with, AI taking action or delivering some outcome on and the ones where we’re not comfortable with. And as I said, that’s kind of defining the clearly okay realm. The clearly not okay realm. And that gray area is going to be probably pretty broad for most organizations.

Brad Jones [00:28:47]:
And over time they need to shrink that gray area to clearly have articulated what the organization has a comfort level with, those outcomes and where there are areas where they’re not comfortable with the outcomes and humans have to be in the loop for.

Karissa Breen [00:29:02]:
So then what are your thoughts on. I mean, you raise a great point. Right, so what are your thoughts then on. Because security people have got a long list of stuff to do and perhaps that they’ve got the AI in the background making the decisions and maybe it’s like, well, yes, I should sort of human and loop. Maybe I need to intervene. But today, feeling a bit tired today, couldn’t be bothered doing the thing and I’m just going to run for it because I want to maybe get a couple of hours back on a Friday and head into my weekend. Do you think in terms of behaviorally, as much as we don’t want people to do that, they may start doing it anyway? I don’t want to use the word lazy, but maybe overwhelm. So it’s like, yeah, I’ve got to cut a few corners.

Karissa Breen [00:29:40]:
Where am I going to cut them because I’ve got so much to do anyway.

Brad Jones [00:29:43]:
Well, I think every team needs to look at where there is a cost or resource benefit for leveraging AI technologies. And I don’t think that should be individuals making decisions. There should be broader discussions on that. We’ve already leveraged it in a number of areas where we saw highly repetitive Easily tunable responses. For instance, we get say 1000 customer questionnaires a quarter around security questions. And the challenge in this area is despite having all of these certifications, you get a bespoke set of 400 questions that are unique that an internal procurement or legal or security team has come up with. We’re able to leverage that about 95% accuracy looking at our publications, our documentations, our internal policies or certifications to be able to answer 95% of those questions with a high rate of confidence. We still include a human in the loop to do that last step validation to make sure that there hasn’t been anything that’s been answered incorrectly.

Brad Jones [00:30:44]:
But we’ve already seen massive benefits and resource time in that something like that is probably low risk. In other areas, we’re looking at AI to be more of sidecars running along. In our IR team of looking at the cases, could we have done more? Did we follow our standard processes? Those are probably low risk that it’s helping us out, getting more insight into what we’re doing. As we get further down that path of to a walk or a run state, we may get more comfortable with AI doing that first set of triage and handing off to a human with a lot of that grunt work done for them. I think in every organization there needs to be a rationalization of is there a clear benefit for using the technology and there needs to be a cost benefit, you know, using AI and you know, expensive GPU resources at the back end or non zero. So every organization needs to be able to rationalize that, you know, they’ve got a net benefit from it, both from a resource and a cost perspective.

Karissa Breen [00:31:42]:
So Brad, if we were sort of just to zoom out now for a moment, I know we’ve spoken a lot about the current sort of climate, but what are some of the other strategies for defending against LLM related threats sort of moving forward, even if it’s like to early point something that’s coming down the pipe perhaps that you could foresee, or what is it that you can sort of suggest here today for people to be mindful and perhaps consider so.

Brad Jones [00:32:06]:
From the pure security threat landscape, certainly threat actors are using LLMs to create better social engineering attacks. There’s been recent publicly acknowledged threat actor activity that has hit some pretty large organizations and there’s clear evidence that They’ve been leveraging LLMs to help improve their ability in these social engineering attacks to sound more authoritative or to create that greater sense of urgency. So the human firewall from a security threat landscape is going to be increasingly important to make sure that employees are educated, that these attacks, while not fundamentally different, are going to get better and more sophisticated. In the past, things like a phishing email, we could point to people and saying like there’s obvious spelling mistakes or poor grammar. Those sort of like easy indicators have gone away. So people need to be smarter in training your human firewall to understand that what you looked for before are probably not going to be key indicators of that. So you have to have that extra level of skepticism or scrutiny when you’re looking at these things. When you look at the large language models themselves, it’s very different at using one of the large foundational models that probably has more rigor and more testing and more security researchers looking at it versus a random, you know, model that you pick up off hugging face.

Brad Jones [00:33:33]:
So I think those are broader policies and enterprise decisions of where their comfort level or where their risk level is in these various areas that’s going to be increasingly important for people to have strong principles or guidelines that they’re adhering to.

Karissa Breen [00:33:49]:
And do you think these guardrails and guidelines is going to be quite specific to each company rather than like your standard, I don’t know, like NIST framework where it’s got like, like general sort of things that you can apply. Do you think the specificity for AI, large language models etc in companies needs to be that bespoke rather than just a general rule of thumb? Because everything we discussed today, there’s so many different requirements, complexities to these things. Right?

Brad Jones [00:34:12]:
It has to be more prescriptive. If you look at like the NEST frameworks, they’re generally saying you should have a policy and consider these things. I think enterprises are going to have to take more specific prescriptive decisions on these things and what their, where their comfort level lies. NIST generally gives you an idea of areas where you should look. It’s not telling you what the right answer is. And so I do think it’s going to be bespoke to every enterprise of what they’re comfortable with and what they want to leverage in their enterprise environment.

Karissa Breen [00:34:42]:
So what do you sort of think moving forward now with, given everything we’ve spoken about, is there any sort of hypothesis that you, given your role and your experience, what you’re seeing day to day, what can we start to expect now, moving forward and beyond?

Brad Jones [00:34:55]:
I think right now what we’re seeing is especially in the agentic workflows we’re still seeing, there’s probably more realization of very specific task oriented, single agent systems. The promise of these multi agent systems and doing incredibly complex tasks and handing off to other agents I don’t think has materialized in any meaningful way yet. Yet. But I do certainly think that is on the horizon. The tools and frameworks are getting easier and easier to use. If you go back a year, it was your data science team was the only team that could understand how to use the tools and the frameworks. It’s really changing in the fact that a lot of these things are getting easier and easier to manage in say enterprise environments with, you know, open source frameworks. Every SaaS vendor out there seems to have some agentic capabilities that they’re adding into their platforms.

Brad Jones [00:35:52]:
So I think the future will be that we will get to more of these robust agentic workflows. But I think right now where we’re seeing reality is more of these very cast specific single agent systems.

Karissa Breen [00:36:05]:
And the other thing as well, Brad would be back sort a couple of years ago. It’s like you need to, as a, as a board, you need to have some person with a cyber background. Are we going to start to see people with an AI background now that needs to be part of a board and make up a board and like, yes, security is still going to be there, but are we, are we going to start to see that sort of come through nowadays?

Brad Jones [00:36:24]:
No, I, I don’t foresee that in the immediate future being a requirement. And if you look at, you know, some of the policies around cybersecurity expertise on the board, a lot of that was backed off that you didn’t need a security professional or you didn’t need a CISO necessarily that people could argue a CIO has enough broad understanding and understands the concepts. So I think we’re a long way off having AI experts being a requirement of boards. I think security for the near term is going to be that main point or main participation member in those board meetings, be it a cyber board or an audit committee, that’s going to be the person that’s bringing up the risks and what the company’s doing around controlling AI in their environment.

Karissa Breen [00:37:14]:
So perhaps to your point. So if you look at this CIO for a moment, they’re going to have enough general knowledge on like AI, generally speaking to how to communicate that back as well as like security and regulation and compliance and all those sort of things. Is that sort of where you see it sitting?

Brad Jones [00:37:28]:
Well, I think that’s where the board makeup has, from a regulatory standpoint has settled on for now a lot of the things that were coming down the pipeline two years ago was very you had to have very specific security knowledge and a lot of those didn’t come to fruition the same way that was in the initial draft documents there. I think the CIO is probably going to have a broad enough understanding of the business value of AI and we’ll be bringing in the security experts to talk about the risk side of things. The CISOs of the world are the people that can help understand risk and educate on risk, and then generally they can bring that up to a board level to align on risk alignment or risk tolerance in various areas.

Karissa Breen [00:38:11]:
So Brad, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?

Brad Jones [00:38:16]:
I think, you know, the the broader AI revolution is undeniable and I think every day I’m amazed at the capability, but also worried about the what could go wrong with it. It’s pretty natural for the security practitioner to think about the negative or how these things can be misused or abused or what could go wrong. But I think it’s a very exciting time. Right? I don’t think we’ve seen this sort of paradigm shift in the broader environment maybe since the birth of the Internet, but the rapid pace of change is something we haven’t seen before. And so every day I’m excited and a little worried about.

Share This