Karissa Breen [00:00:00]:
What’s up everyone? It’s KB and I’m on the go at ElasticON Sydney for 2026. Data is exploding, environments are getting noisier, and the line between observability and security, it’s basically gone. Search isn’t just a feature anymore, it’s infrastructure. It’s how you see, how you detect, and ultimately how you defend. From AI-powered detection engineering to unified visibility across logs, metrics, traces, and security telemetry, we are officially in a world where if you can’t search it in real time, you can’t secure it. And I’m here talking directly to the people building that backbone, like Mandy Andress, Chief Information Security Officer at Elastic. Stay with me as we dive further into the conversation with Mandy that you won’t get on stage. This is KB On The Go from ElasticON Sydney.
Karissa Breen [00:00:57]:
Let’s get into it. So Mandy, welcome back. Today I want to discuss with you what’s happening with CISOs on the front line of Australia’s agentic AI transition. Yeah, I know you’ve got an upcoming conference, a lot going on, a lot to be presented, but I really want to start there. Let’s paint the scene that Australia has opted for sector-led oversight instead of heavy AI legislation. So what comes to your mind when I ask that question?
Mandy Andress [00:01:28]:
Well, first, thanks for having me back. Happy to be here and chat with you again. CISOs and AI and agentic AI, top of mind for all of us. And it’s bringing a lot of discussion on what is that balance between moving fast, adopting AI across the organization, but doing so in a safe way, in an environment and with technology that is very immature. And so it’s finding that right balance. And from a CISO perspective, because technology solutions within the AI space are pretty immature, it brings us back to focusing on the fundamentals of security and ensuring, for example with an agentic approach, that there are both guardrails on what the agent programmatically can do, but also a kind of second level of guardrails on what access that agent has. So even if it wanted to try to do something different, it wouldn’t have the ability to do so. And then of course, the most challenging aspect is those agents that continue to learn and evolve and try to change their own permissions and hack other agents to get them to do what they want.
Mandy Andress [00:02:40]:
So it’s a very quickly evolving space, one that is both exciting and sometimes scary, but it’s a fun world to live in these days.
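The two levels of guardrails Mandy describes, one on what an agent is programmatically allowed to attempt and a separate one on what its identity can actually access, can be sketched roughly like this. All of the action names and data structures here are illustrative assumptions, not from Elastic or any specific agent framework:

```python
# Hypothetical sketch: two independent guardrail layers for an agent action.
# Layer 1 is the programmatic policy (what the agent may attempt);
# layer 2 is the identity's access grants (what it can actually touch).

ALLOWED_ACTIONS = {"read_ticket", "summarize_logs"}       # layer 1: policy
AGENT_PERMISSIONS = {"support-agent": {"read_ticket"}}    # layer 2: access

def authorize(agent_id: str, action: str) -> bool:
    """An action must pass BOTH guardrails to run."""
    if action not in ALLOWED_ACTIONS:            # blocked by policy
        return False
    granted = AGENT_PERMISSIONS.get(agent_id, set())
    return action in granted                     # blocked by access if absent

print(authorize("support-agent", "read_ticket"))     # True
print(authorize("support-agent", "summarize_logs"))  # policy yes, access no: False
print(authorize("support-agent", "delete_user"))     # policy no: False
```

The point of the second layer is exactly what she says: even if the agent "wanted to try to do something different", the access check fails independently of the policy check.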
Karissa Breen [00:02:48]:
So would you say, obviously you’re based in the US, but you’re looking at different regions. You said before around the maturity, well, how does Australia sit against, perhaps, the US? And I know it’s interesting because depending on who I ask, I get different answers. Some people say that it’s actually more mature here than it is in the US. So I’m curious to understand what that means from your perspective.
Mandy Andress [00:03:09]:
So from my perspective, reading through the AI programs that Australia has, compared with the US focus and with the EU AI Act, I see Australia really sitting in the middle of the paradigms. So the EU AI Act being very prescriptive, very prove-to-me-first that everything is safe and secure before you can use it. The US is, yes, here are some general standards, but organizations, it’s on you to find the balance and manage the risks. And within Australia, I find a good balance of overall, here are some high-level guardrails, but not being overly prescriptive, still allowing that innovation and that speed from an organizational perspective, allowing organizations to make those decisions for themselves, and really framing that balance as a way to help Australia move forward quickly in the AI world.
Karissa Breen [00:04:07]:
So that’s interesting that you say high-level guardrails and not super prescriptive. So how are companies in Australia really framing that then? Do you think there’s a lot of gray area that’s like, oh, okay, well, let’s just figure it out for ourselves, let’s see what happens? I know people are saying it’s still very early days, so it’s not like they have a blueprint to look back on. Is it still maybe in more of a formative sort of place at the moment, because we don’t have a lot of data to really go off as of yet?
Mandy Andress [00:04:35]:
It’s still early days. It’s definitely still very formative in the AI space, and the technology is moving forward and changing so rapidly that even if we have an approach in place today, it’s likely not even applicable 3 months from now, if not possibly tomorrow, based on how things are evolving. And taken with that, certainly heavily regulated industries, so usually think financial services and others: regulations are not currently written in a way to really support AI. So those industries are also struggling to find how to maintain compliance with the laws and regulations that they must follow while adapting and taking advantage of any new technology or benefits where applying AI within their organizations would make sense. And a lot of that comes back to kind of the proof. So the visibility, the transparency, understanding, if you have generative AI, if you have agents acting autonomously, why did they do that? What’s the decision tree? What’s the logic that they followed? And being able to have that transparency so you can go back and explicitly answer the question of who did what, or why was this happening, and do you need to make adjustments? Do you need to retrain models and things like that? So having that overall visibility and transparency is core to moving forward successfully.
Karissa Breen [00:06:04]:
And then, Mandy, given your role, just focusing on the CISO side, though, just for a minute before we move on, what are you sort of hearing is the general chatter or concerns or apprehension? I know it varies, but is there any sort of common trend that you’re hearing here in Australia?
Mandy Andress [00:06:23]:
In general, there is a lot of both skepticism and concern. Skepticism in that we understand there are significant benefits that we can both see in our own roles and within the organization by applying AI technologies. But is the technology there today? Is it going to be able to do that? Is it going to bring more challenges than solutions right now? And what do we need to do about that? So that’s one. I think fear in the respect that the organizations we work for want to rapidly implement and adopt AI. But how do we do that within our roles and allow the organization to do that safely? Key example: a lot of the solutions right now don’t have what you would typically define as enterprise-level controls. So think identity and access controls. Connecting to MCP servers is a big focus, to bring in more data and have agents access different types of data. Oftentimes the MCP implementations have zero identity and access controls. You connect with an identity and you have access to all of the data within that MCP server.
Mandy Andress [00:07:35]:
That scares us from a security perspective, because you may not want that agent or that individual to have access to all of the data within that environment. And so it’s looking at, you know, how do we balance that? If the technology solution providers aren’t going to include it, do we add layers of controls? Do we add different elements within our kind of AI stack that allow us to create that control structure around it? So within CISOs, I’m not seeing specific concerns that we shouldn’t be using AI or shouldn’t be looking to apply it. It’s just how to do our jobs successfully while allowing our organizations to move forward with AI adoption.
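The "layers of controls" idea Mandy mentions, wrapping an MCP server that lacks its own access controls, might look something like this thin gateway. The backend, scope mapping, and tool names are all hypothetical; real MCP deployments would sit this in front of an actual server:

```python
# Hypothetical sketch: a gateway that enforces per-identity scopes in front
# of an MCP-style backend that otherwise exposes everything to any caller.

class MCPGateway:
    def __init__(self, backend, scopes):
        self.backend = backend    # callable(tool, args) -> result
        self.scopes = scopes      # identity -> set of permitted tool names

    def call(self, identity, tool, args):
        # deny unless this identity was explicitly granted this tool
        if tool not in self.scopes.get(identity, set()):
            raise PermissionError(f"{identity} may not call {tool}")
        return self.backend(tool, args)

def fake_mcp_backend(tool, args):
    # stand-in for a real MCP server's tool dispatch
    return {"tool": tool, "result": "ok"}

gw = MCPGateway(fake_mcp_backend, {"billing-agent": {"read_invoice"}})
print(gw.call("billing-agent", "read_invoice", {}))  # forwarded to backend
```

A call to any tool outside the granted scope raises `PermissionError` before ever reaching the backend, which is the control structure being added "around" the immature solution rather than inside it.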
Karissa Breen [00:08:15]:
So you just mentioned before layers of controls. I know, again, we’re all starting to figure this out. Would you say, because trying to implement layers of controls, does that then slow down the premise of AI? Because so many people are like, we’re using AI so people become more effective and more efficient at their jobs. But then do you see that as counterintuitive?
Mandy Andress [00:08:33]:
It could be. And I think it’s certainly counterintuitive if you look at every new technology implementation or application of AI in an organization as its own discrete effort needing its own control set. Yes, you would be working, I think, much more slowly than the organization wants. For me, the way I look at it is I try to extrapolate and look at the bigger picture and understand what are the key controls, irrespective of it being a specific AI technology or AI implementation. What are the key controls I need to have in place that allow my organization to leverage AI safely? And if we look at that, we get back to the standard paradigms that we’ve been talking about for years in security: least privilege, zero trust. So it’s really forcing us to rethink how we implement those principles well in our organization now to help us support AI.
Karissa Breen [00:09:32]:
So I was recently talking to a CISO and they were sort of saying it’s at the identity. Like, that’s the biggest risk at the moment. Would you agree with that? Because, obviously, I run this podcast, I’m speaking to people like you every week, and every time I speak to them, I’m like, that’s a big problem. Where would you say, fundamentally, is the biggest issue that you’re seeing, or what’s top of your mind given your pedigree and your role? Is it at the identity level, or, as other people are trying to claim, is it other things like policies, et cetera, or lack of leadership? So I’m curious to understand where you sit on that front.
Mandy Andress [00:10:08]:
For me, it’s identity. Identity is the control plane of AI. It’s the control plane of agentic AI. It is where threat actors are focusing, because we don’t do identity well today. We have accounts that have been compromised, whether it’s human accounts with passwords, or API keys and secrets if you’re looking at microservices and more cloud and SaaS implementations. And we’re just now saying, oh, we’re gonna add all of these agents and have this exponential increase in the number of identities, but we’re not going to change our approach to how we’re managing them. We’re creating a disaster for ourselves.
Mandy Andress [00:10:48]:
And that’s where I often talk right now about it getting worse before it gets better, because we’re going to start taking advantage of and using agents and agentic AI, deploying them through our organizations, without making some of the necessary changes in how we manage identities. And so we’ll have challenges; we’ll have agents that are doing things we don’t want or expect them to do because they have too many permissions. As we learn what AI, what generative AI, and what agents are able to do, it will help organizations reframe how to approach identity. Traditionally it’s been: if you try to do least privilege and someone can’t do their job, can’t do what they need to do, we just continue to open up access until they’re successful. If we do that in an AI and agentic world, we could create some very significant challenges. So it’s now going to get back to: we need to give the agent only what it needs. And it has its own identity. That identity should only have the access that it needs to do its job.
Mandy Andress [00:11:50]:
And we will really need to be able to figure out what that access needs to be. It won’t be good for organizations to say, oh, the agent needs to do this, so now I just need to open up the broader permissions. Because the more permissions you give an agent, the more it figures out what it can do for itself. Because agents are non-deterministic, the biggest challenge is that, as humans, we aren’t necessarily always able to anticipate what an agent might decide to do. If we’re not providing strong guardrails, then we could have consequences and impacts that we don’t anticipate, and that could be very, very serious for an organization.
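The "give the agent only what it needs" pattern amounts to deny-by-default identities with explicit grants. A minimal sketch, with entirely made-up permission strings, since the point is the fail-closed shape rather than any particular identity product:

```python
# Hypothetical sketch: each agent gets its own identity that starts with
# zero grants (least privilege); anything not explicitly granted is denied.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    grants: set = field(default_factory=set)  # empty by default: fail closed

    def can(self, permission: str) -> bool:
        return permission in self.grants

reporter = AgentIdentity("report-agent")
reporter.grants.add("read:sales_db")   # only the access this task requires

print(reporter.can("read:sales_db"))   # True: explicitly granted
print(reporter.can("write:sales_db"))  # False: never granted, so denied
```

The contrast with the "open up access until it works" habit Mandy describes is that every grant here is an explicit, auditable addition rather than a blanket widening.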
Karissa Breen [00:12:28]:
So Mandy, when you say give the agent only what it needs, how do you figure that out? So in my experience working at a bank, I remember getting calls from the identity team saying, hey, you’ve requested access to this system, why do you need it? Justify yourself. Or, you’ve got too much access, now we’re gonna decommission your access, which obviously is a bit more manual. This is going back a while, but how do people determine that now? ’Cause I feel like even back then people couldn’t determine it manually. So are we using that manual effort and applying it to the agents? And then maybe it’s not gonna be 100% accurate; there are gonna be gaps.
Mandy Andress [00:13:02]:
Welcome to the conundrum of the world of AI and security in organizations. I say yes to all of that. I would say, in a kind of go-forward approach, if it’s an organization that’s implementing new systems and AI, they’ll have a better path, because they’ll be able to define upfront what that access and those roles should be. It’ll be very challenging for organizations looking at legacy technology and legacy systems, maybe systems that were developed many years ago. They don’t have all the detailed documentation. The folks that built those systems are no longer in the organization. They don’t have an understanding of what exactly those access roles and permissions are granting.
Mandy Andress [00:13:42]:
And so it gets into a lot of testing; it gets back into the visibility of tracing and reverse engineering some components. It’s keeping a lot of human in the loop, because you don’t necessarily have the detailed understanding. So it’s just minimizing the pure autonomy of agents to help balance that risk. It’s definitely another pain point.
Karissa Breen [00:14:02]:
So, I hear what you’re saying, because if I just focus on a bank that’s got heaps of legacy tech, lots of systems, and there’s no documentation, the guy that used to work there left 20 years ago, we can’t figure it out, who’s the system owner, we don’t even know. How are people going to weave their way through this maze when companies now are getting so competitive, like, okay, we need to start leveraging agents because if we don’t, our competitor is? And I’ve seen it now more and more, smaller companies overtaking big players because of how competitive and fast they are. So even development teams are being pressed to say, hey, we need to do releases faster and we just need to get stuff out the door to be competitive. So how do people balance this? And I know it’s not an easy question to answer. I’m just keen to hear what’s in your mind.
Mandy Andress [00:14:46]:
A lot of organizations are looking at that as human in the loop, ensuring that there’s still a pause point, a place where someone can look and make sure, yes, before we take this explicit action, it’s okay. Sometimes that’s not sufficient and processes need to move faster. So this is where I see some organizations creating agents that are managers of other agents, using agents to be that kind of agent in the loop, looking at what is this agent doing and is it doing what it should be doing, and having a broader ecosystem of agents that are working together, creating that infrastructure and that visibility and kind of decision tree.
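The "pause point" Mandy describes, where low-risk actions proceed but higher-risk ones wait for a reviewer (a human, or a supervising agent), can be sketched as a simple approval gate. The risk scores, threshold, and action names below are illustrative assumptions:

```python
# Hypothetical sketch: actions above a risk threshold queue for approval
# instead of executing autonomously; unknown actions fail closed.

RISK = {"summarize": 1, "send_email": 3, "delete_records": 9}
THRESHOLD = 5

def dispatch(action, approver):
    """Low-risk actions run; high-risk ones need an explicit yes."""
    if RISK.get(action, 10) >= THRESHOLD:   # unrecognized = max risk
        if not approver(action):
            return "blocked"
    return "executed"

# approver could be a human review queue or a supervising "manager" agent
auto_deny = lambda action: False
print(dispatch("summarize", auto_deny))       # executed: below threshold
print(dispatch("delete_records", auto_deny))  # blocked pending approval
```

Swapping `auto_deny` for a supervising agent's judgment is the "agents managing other agents" variant; the gate itself stays the same.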
Karissa Breen [00:15:31]:
So can I ask you a more rudimentary question, just based on behavior? I was in a meeting yesterday and I was finding this is not as good for a particular thing I was talking to the person about. I’m like, why do you think that is? And they said, AI, KB, it’s all about AI. People are getting lazier; people can’t even read a book now. They’ve got to do the summarized version. So going back to what you were saying, do you think that even if there is a human in the loop, they’re still going to be relying on some form of AI to give a general sense of what’s happening and then try to make a decision off that? Perhaps I’m just looking at behavior, and now even people’s brains are atrophying; they can’t remember things like they used to.
Mandy Andress [00:16:10]:
For me, it’s not so much that we can’t remember things. To me, it’s we are now operating at a scale and a speed beyond human capacity. So even if we wanted to remember and we wanted to do things manually, we wouldn’t be successful. So bringing it back to, as a CISO looking at security, massive amounts of data coming our way, massive amounts of log events, security events. How do we make sense of that in both a pure volume of data that we need to analyze, but also a rapidly expanding threat landscape? One, we can’t keep up with the threat landscape of what we need to be looking for. And secondly, we can’t process as a human, all of the data coming at us. So leveraging AI to help us make sense of the data is one path of it. The analogy I often make is to automobiles and driving.
Mandy Andress [00:17:02]:
You know, my dad growing up, he was doing all sorts of things on his car and it never really went into a body shop or an automotive shop. He was able to do everything. Much to his dismay, I know absolutely nothing about my car; I know how to drive it and I don’t really know how it works. So if something goes wrong, I take it to an expert. And I think if we look at it from computers and evolution, you know, I used to build my own computers early in the PC era. I don’t do that anymore. If I have a computer, it works. I know how to use it.
Mandy Andress [00:17:35]:
And I think we’ll see a similar transition with AI, a similar evolution. There will be those experts that understand the underlying workings, but those of us that use it will know how to use it to perform our jobs, and we’ll see significant, I’d say unknown, but significant transformations in the workforce and in the world over the next few decades. Should be pretty exciting.
Karissa Breen [00:17:58]:
And so then just to clarify, on using AI, perhaps just from a behavior point of view, not necessarily about remembering, that was just an example. But would you say that people are still going to get to the point where they’re just getting the summary? They’re not going to read it word for word; they’re just going to plug it into some LLM and get a high-level understanding of what’s happening, because it’s more volume. There is that human in the loop, but they’re like, hey, I’ve got all these tickets coming in that I need to action. I need to use a bit of AI to sort of speed up that process. So do you think that will still be there, or do you think people will go through quite meticulously, much to your point before, using the analogy with your dad and the car and understanding the mechanics of how it works?
Mandy Andress [00:18:36]:
We will certainly be leveraging AI. And I actually think using AI in that way will be helpful. From the advent of the internet and social media, our attention spans continue to decrease. We skim things. We don’t focus for large amounts of time. I think AI actually helps in that it’s able to give us a better summary. It’s able to potentially pull out the key messages, instead of us just skimming it as humans and thinking we’re pulling out the key messages. Using AI as a tool helps us get a better sense.
Mandy Andress [00:19:07]:
And just from a productivity perspective, there are lots of things, whether it’s personal workforce agents, things that are looking through your email, making sure you’re not missing any actions or any specific callouts. Because as your inbox grows significantly, you might lose track of things. Just having something that helps, hey, you have this email that says you need to get back to them on this. I think it can be much more helpful than we think it can be.
Karissa Breen [00:19:28]:
So, given what we just spoke about, would you say, as a result of what we discussed, does that effectively make the CISO the country’s de facto AI regulator? Like, how does that look, would you say?
Mandy Andress [00:19:42]:
I think in most organizations, it’s a combination: the role of the CISO, from information security, cybersecurity, infrastructure, control environments; legal, from a policy and regulatory perspective, and often privacy resides within the legal organization; and then the third component, IT in general, technology: what is the technology stack that our organization wants to use? How do we want to leverage new technology into our existing systems and infrastructure? For me, it’s those three areas and those three roles working in close partnership, helping organizations navigate the AI challenges of today.
Karissa Breen [00:20:23]:
And so many people say to me that like, oh, like the CISO has got so many things to do and now we’re sort of adding to it. So how do they feel about that? It’s like, okay, we’ve got a new stream of things that you need to sort of oversee and look after and somewhat be responsible for. How does that sort of sit then with them?
Mandy Andress [00:20:40]:
There are increasing and expanding mandates for CISOs. And that’s where I always take a step back and try to look at the bigger picture. If I just take everything that’s coming into the CISO organization, yes, it could be kind of unwieldy and overwhelming to look at each individual thing. But if I take a few steps back, look at it from a 50,000-foot view of just what are the key things I need to be focused on as the CISO that will help address these major areas, what that generally comes down to is, one, fundamentals are key, whether that’s staying current with patching, whether that’s understanding assets and overall inventory. It goes back to the things that we’ve always said in security are important, but are hard and are boring. And so we always looked to, you know, there’s a new technology, a new solution that can help us solve this problem, but we’re just adding kind of band-aids around things. And now I think with AI and the speed and scale and overall complexity, it’s going to bring us back to refocusing on those fundamentals, and we’re going to have to figure out how to solve some of those challenging problems. I think asset inventory is a key one, needing to look at not just what’s your physical inventory, what’s your virtual inventory as it relates to cloud, but what’s your agent inventory. Agents are now assets.
Mandy Andress [00:22:06]:
How do you know what agents you have in your organization? What are they doing? Why are they doing that? And having that whole space of things. So I think it’s not so much individual things coming our way; it’s how we take in those individual things and build them within the broader ecosystem of what we as CISOs are responsible for.
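Treating "agents as assets" suggests the same kind of record an asset inventory keeps for hardware or cloud resources. A minimal sketch of what such an entry might capture, with field names that are purely illustrative:

```python
# Hypothetical sketch: an agent inventory where every deployed agent is
# registered with an owner, a purpose, and its permission set, so
# "what agents do we have, and why" has a concrete answer.

agent_inventory = []

def register_agent(name, owner, purpose, permissions):
    entry = {
        "name": name,
        "owner": owner,                      # accountable team or person
        "purpose": purpose,                  # why this agent exists
        "permissions": sorted(permissions),  # what it is allowed to access
    }
    agent_inventory.append(entry)
    return entry

register_agent("triage-bot", "secops", "deduplicate alerts", {"read:alerts"})
print(len(agent_inventory))  # 1
```

The value is less in the data structure than in the discipline: an unregistered agent is, by definition, an unmanaged asset.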
Karissa Breen [00:22:26]:
And then when you mentioned patch management, people have been talking about doing patch management properly for like 20 years, and people still can’t do it. It’s not as easy as it looks in the book, effectively, or now the summarized AI version of how to do it. So if people keep talking about it’s the basics, it’s the basics, but then we’re adding on all these other complexities, like agents, what are they doing, how are they doing it, why are they doing it, all the things you just mentioned. You used the operative word before: conundrum. Is this just blowing that conundrum out now? ’Cause we can’t even do the basic stuff and now we’ve got all these other complex sorts of tasks and things going on. What happens now, really?
Mandy Andress [00:23:03]:
With patching specifically, part of the challenge has always been, what do I really need to patch? What does my broader environment look like? We’ve talked a lot about defense in depth over many decades in security, having layers of controls where, if one is bypassed, theoretically you have one or more additional controls in place to prevent something catastrophic from happening. Patching is one of those. Oftentimes patching is the deepest control in an environment; you have multiple layers ahead of it. How do we understand what those are? A couple of things AI is really able to help with: one, from a pure application perspective, there’s reachability analysis, understanding the code and how an application operates to know, yes, it may be vulnerable to this specific application vulnerability, or at least it looks like it might be, but is it really? Or is it that, the way this application’s running, these 10 things have to happen before you could ever exploit this vulnerability? And so the risk is lower for that organization. Or, from a more infrastructure perspective, you have to get through 5 controls before you could potentially get to exploiting this specific missing patch. And AI is very good at that, much better than humans, in going through and trying to find what are those paths that you could exploit and take advantage of. And I think that’ll give us as security practitioners—
Karissa Breen [00:24:33]:
from an attack path perspective.
Mandy Andress [00:24:34]:
Exactly. More detailed attack paths, to help us better understand where to prioritize and how to prioritize. Because organizations are like, you know, I have 100,000 missing patches. You’re never gonna just deploy 100,000 missing patches. It’s how do I prioritize? How do I understand where they need to be implemented? How do I understand where there are potential interruptions or availability or kind of production environment issues? And where do I have other controls and compensating controls in place to help balance that? It goes back to what I talked about earlier: having highly complex environments where, as humans, we’re not able to put all the pieces together, and being able to leverage AI and AI technology to help us see that picture in more detail.
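The prioritization logic being described, severity discounted by how many compensating controls sit between an attacker and the vulnerable component, can be sketched very crudely. The scoring formula here is an illustrative assumption standing in for real reachability and attack-path analysis:

```python
# Hypothetical sketch: rank missing patches by severity discounted by the
# number of intact control layers an attacker must cross first.

def priority(severity, controls_in_path):
    # each defense-in-depth layer ahead of the vulnerable component
    # halves the effective urgency (an arbitrary illustrative discount)
    return severity / (2 ** controls_in_path)

patches = [
    {"id": "CVE-A", "severity": 9.8, "controls_in_path": 0},  # directly reachable
    {"id": "CVE-B", "severity": 9.8, "controls_in_path": 5},  # 5 layers deep
]
ranked = sorted(
    patches,
    key=lambda p: priority(p["severity"], p["controls_in_path"]),
    reverse=True,
)
print([p["id"] for p in ranked])  # ['CVE-A', 'CVE-B']
```

Two identical CVSS scores end up far apart in the queue, which is exactly the "100,000 missing patches, where do I start" problem: the attack-path context, not the raw severity, drives the ordering.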
Karissa Breen [00:25:17]:
And so, I just wanna move on now. We’ve spoken a lot about the advancement of AI, good and bad, and a few of the different versions, and we were talking about the conundrum, which is obviously accelerating AI adoption. And now you’ve heard people say, well, we’re defending AI because we’re being attacked by AI. So how do you see this playing out now? Because now I feel like people are even saying in my interviews, oh, it’s really good, but we really need to slow down. I was at a conference in Canada like 2 weeks ago and they’re like, oh, this is really good, but we really need to slow down because no one actually really knows what’s going on. So I’m curious to understand how, as I mentioned before, companies still need to adopt it. We’re seeing big organizations invest a lot of money into AI. People should be focused on more strategic tasks rather than manual, monotonous, labor-intensive things. So I totally understand, but then how is this going to play out moving forward? Is it that the bad guys are going to take two steps forward and we take one, or will we be on par with them moving forward? I’m just really curious, because again, depending on who I ask, I get very different answers.
Mandy Andress [00:26:23]:
I think a couple of different things are going to play out in that space. One, we can’t go back. As much as we may want to do things more slowly, between demands of end users, demands of investors, demands of defenses against what threat actors are doing, it’s moving forward and we need to figure out how to work within the speed that it’s operating. And in doing that, again, I think in the short term, as defenders from a security perspective, it’s gonna get worse before it gets better. Threat actors are quickly learning how to expand their use of AI technology and how they leverage it in events and creating incidents. AI technology itself continues to improve: a year, year and a half ago, it was better phishing messages and very, very targeted phishing campaigns; then it was self-propagating and self-morphing malware that could adapt quickly to the environment it’s in. Now AI and agents are able to truly act as a threat actor, as a hacker, and get into environments and pivot and understand the context and do everything that a kind of red team threat actor would do, but do it in potentially minutes versus hours, days, months. And as for how we as security practitioners deal with that: one, it’s going to lead to a paradigm shift in what security programs look like and how security programs operate.
Mandy Andress [00:27:58]:
And that’ll happen over time, probably over the next, I would anticipate, 5 to 10 years; we’ll look back and what we know of the security program today will look very different. So that’s one piece of it. The other piece is, as we learn how to better leverage AI technology within our organizations, the key thing that we’re gonna build for ourselves is context. What we miss today from a security perspective is that, as practitioners, we often don’t understand the full contextual picture of our environment: what does that infrastructure look like, what are the employees doing, what is critical in our overall environments, having that holistic picture. And the combination of the general technologies we have available today and what AI brings is that we have massive amounts of data, and now we have tools and technology with AI that help us make sense of those massive amounts of data. What are the patterns? What are the behaviors that we’re seeing? What are the trends? What questions should I be asking? Oftentimes we don’t even know what questions we should be asking, because we haven’t necessarily anticipated all the types of behaviors and activities that will go on. So what I do see in the future is that the benefits will shift to defenders, because we will have a full contextual understanding of our environments.
Mandy Andress [00:29:18]:
And so we’ll be able to very rapidly respond, very rapidly adapt, and we’ll be comfortable doing that more autonomously in the future. And so when a threat actor tries to do something, yes, there are a lot of comments about it being agent to agent, threat actor and defender. And yes, we will get to that: it will be agents trying to attack an organization and agents changing controls or morphing to prevent that attack. We will get there. But by having the full contextual understanding, as defenders, we will be able to react at machine speed, which is what we’re not able to do today. We still have to have humans in the loop; we still have to bring humans in, because we just don’t have the ability to pull that full context together.
Mandy Andress [00:29:58]:
And threat actors, that’s what they pull forward. They’re taking full advantage of things like open-source public information. They’re able to get into an environment, pull in all sorts of data, and use their tools to build that context for an organization. And that’s the shift that I see: as defenders, we will finally be able to have that holistic contextual picture of our environment.
Karissa Breen [00:30:21]:
Okay, so this is really interesting. When you say the benefits will shift to defenders, and I know you don’t have a crystal ball, but how long do you think until we’re at that stage in the industry? Are you comfortable putting a number on it? Everyone’s saying, no, at the moment we’re behind, and this is what’s happening. But you’re sort of saying this will happen in due course. Is due course 10 years, 5 years? I know it’s hard. I just really want to try to paint some kind of timeline.
Mandy Andress [00:30:50]:
For me, it’s more like 10 years to have a strong contextual understanding. The way I talk about it is: 10 years from now, I want to look back at today as the dark ages of security, where the visibility and the approaches we have in place in the future make us look back and wonder how we were ever able to do our jobs in any fashion, because we didn’t have all of the things we’ll have then. So that’s my ideal state.
Karissa Breen [00:31:17]:
So then would you say retrospect’s always a good thing, right? ’Cause it’s like, oh, how did anyone do their jobs when we had typewriters and had to write by hand? Do you think we will get to that point? So if I run an interview with you in 10 years, you can look back and say, yeah, those were the dark ages. It’s just that no one really knows at this point. Yes, we can predict things, and we’ve got reports and analysts saying certain things, but no one really knows at the end of the day. And then do you think it’s going to become a moot point? Because you’re saying machines can just defend super quickly. Do you think that will just exhaust cybercriminals, like, well, there’s no point doing this because it’s being defended super quickly? I know that sounds sort of dumb, but I’m just curious: why do something you’re not going to get a result from anyway? Or will it open up other issues?
Mandy Andress [00:32:02]:
I think the latter, it’ll open up other issues. Social engineering, human behavior: there will always be issues, limits, or vulnerabilities in technology that threat actors will take advantage of. Threat actors are very creative. If there’s a way to make money or achieve their objectives, they’re going to figure out how to do it. If you look at the general history of security from an internet perspective, it started with the network. We solidified the network a bit, so then they moved to the endpoint. We solidified the endpoint, then they went back to social engineering.
Mandy Andress [00:32:35]:
Threat actors just go to wherever the weakest point is, and we keep moving that point. That’s not going to change. What those weakest points are will evolve, and they’ll probably be things completely different from what we anticipate today.
Karissa Breen [00:32:47]:
So we’re going back full circle: we used to target humans, then we leveraged technology. Now we’re at the point where, if you’re attacking and I’m defending at machine speed, we’ve canceled each other out, so attackers have to find a new avenue like social engineering. It’s potentially going to get to that point. I know you don’t have all the answers; I’m just trying to map it out in my mind, because we have so much emphasis on technology. I’m even hearing, as an example, that kids today, that generation, are spending a lot less time on social media. It feels like the pendulum is swinging fully the other way.
Karissa Breen [00:33:19]:
So I want to move on now and zoom out to the world, and I want to paint a picture: Europe is saying, prove your AI is safe before deployment. The US is saying, here are the standards, industry, figure it out. And I’m paraphrasing. I know you said before that Australia sort of sits in the middle. Which one do you think is, not better, but the more optimal position, would you say? Do you think saying, here are the standards, figure it out, is better? That’s obviously a lot more leeway, whereas Europe’s sort of saying, justify yourself before you deploy. I’m just curious what approach other countries might take as a result.
Mandy Andress [00:34:01]:
If I look at the bigger picture, the key thing driving forward momentum, to me, is speed. How fast can you adopt technology? How fast can we evolve technology? How fast can I get ahead of my competitors? Everything’s speed. Organizations are trying to move forward very quickly, and the technology is evolving very quickly. So if you look at the EU approach, where you need to prove that your technology is safe and secure before you implement it, to me the question is: where’s the line? You can prove that what was created yesterday was safe and secure, but a week later there’s been an evolution and now it’s no longer. So where does the line need to be drawn when technology is evolving this quickly?
Karissa Breen [00:34:46]:
But then people spend more time proving rather than doing the thing.
Mandy Andress [00:34:49]:
Exactly. So the balance is, in security we often talk about a risk-based approach. And the key for me in a risk-based approach is who’s defining what risk is acceptable. Sometimes it’s an organization, sometimes it’s individuals, sometimes it’s a regulatory body, sometimes it’s a government. So for me, the question isn’t, what does this country have? It’s, what risks are we trying to address, and how should we best address those risks while finding that balance? The level of risk people are willing to accept, that risk appetite we talk about in security, will be different in different countries, different industries, and different organizations. So I think we’ll find some norms over time, but right now this is all new and we’re all trying different things to see what works for each of us.
Karissa Breen [00:35:41]:
Do you think it’s unusual that Australia is in the middle? I say that because traditionally people have always said Australia is a very reserved market. Even when US vendors have come in here over the last 15, 20 years, it’s historically been really hard to sell; the US is a bigger market with a culturally different mindset. So do you find it unusual that Australia is in the middle? I thought perhaps they’d err more on the side of caution than sit in the middle. Or would you say that’s changing now? Because I also read a report, Mandy, that said Australian organizations are adopting AI the fastest. That just blew my mind, because I thought maybe things really are changing culturally here compared to what was happening before.
Mandy Andress [00:36:26]:
Yeah, from my perspective, I’ve been involved with the Australian market and organizations in Australia for a handful of years, and I agree. When I first started engaging, I did feel that Australian organizations were a few years behind from a security controls perspective, in how security was or was not integrated within their organizations. I have felt that shifting, and I definitely see and feel today that Australian organizations are near the leading edge, both in adopting AI and in how they think about adopting AI. And I see that more broadly. I mentioned social engineering before; I think Australia is at the forefront from a social media perspective, as an overall societal measure. A key step the country has made is looking at new technologies and being at the forefront of defining a national AI plan, giving support to organizations, putting a lot of investment into being ready, and creating the skill sets and infrastructure to support that. So I have seen and felt that shift you described as well.
Karissa Breen [00:37:32]:
And so, you said the US, Europe, and Australia each have their own approach to how they’re going to do things. Do you think there’ll be one country, perhaps, that does something like the North Star and the rest will follow? For example, the whole social media ban here in Australia: other countries are going to implement it now. Whether that’s right or wrong is up for people to decide, but governments in those countries think it’s a good idea, so they’re going to follow. Will we see the same sort of pattern happen with this, or do you think it’s highly individual, because there’s a lot more complexity to these things than just turning social media off?
Mandy Andress [00:38:07]:
I think ultimately it’s highly complex and there’ll be different approaches. Different countries, different industries, and different organizations will try different approaches; some will expand, and some will be more niche and apply in specific areas. Over time we’ll move to some norms, kind of the law of averages. Right now it’s so new that people will be trying all sorts of different things. Some will work, some won’t; some will expand, some won’t. So I don’t see a specific country becoming the norm, at least at this point. I think it’ll grow into an amalgamation of the different things that different countries, industries, and organizations are trying.
Karissa Breen [00:38:48]:
And then just quickly, a couple more questions before we wrap up. When you say some things will work and some won’t, do you think something quite crazy could happen as a result, like being caught in the crossfire? Maybe it’s a risk that someone couldn’t foresee. Again, we don’t have all the answers; we try to map them out, we try to look at the attack path, but things do pop up and things go wrong. Do you think that will happen, but then we can correct it, to say, well, that didn’t work out, now we have to make a better move?
Mandy Andress [00:39:14]:
I do anticipate there will be some fairly significant events, because we can’t necessarily anticipate the full breadth of implementation or the potential ramifications. That’s where, going back to the beginning of our conversation about guardrails, it comes down to how we’re deploying AI technology specifically to minimize the impact when something does happen. I do think the biggest challenge is that we can’t necessarily anticipate today what tomorrow’s challenges will be. So we just need to be very adaptable and focus on the resilience of the organization. I talk a lot about antifragility: the idea that under times of chaos and stress, your organization actually gets stronger. Really focus on those core concepts and what they mean for your organization, so you’re able to withstand and be more successful in the world we live in today.
Karissa Breen [00:40:19]:
And then lastly, Mandy, what do you think moving forward? I know we’re at the earlier stages of 2026. Again, you’re not Nostradamus, you don’t have all the answers. It’s just more: how’s the year going to unfold in your eyes?
Mandy Andress [00:40:31]:
2026, year of agents. Where can we apply them? What can they do to help us? I think the initial focus is a lot on development, using agents to develop solutions and write code, and then a lot of focus on personal productivity or personal agents: folks starting to look at, well, how do I spend my time, and what are the things I spend my time on that I don’t want to be doing, or that I could have an agent do on my behalf? And I think there’ll be a lot of experimentation, a lot of trial and error, a lot of folks starting to understand what this AI technology could mean to them and trying a lot of things out. And I think we’ll start to find some common themes as we get to the end of 2026, some areas where it’s fairly standard practice to use AI to handle this, or to use AI as an input or a piece of a process or an approach.
Karissa Breen [00:41:36]:
And there you have it. This is KB On The Go. Stay tuned for more.