
KBKAST
Episode 251 Deep Dive: Mandy Andress | Charting the Path of AI Innovation and Security
First Aired: March 27, 2024

Mandy Andress is currently the CISO of Elastic and has a long career focused on information risk and security. Prior to Elastic, Mandy led the information security function at MassMutual and established and built information security programs at TiVo, Evant, and Privada. She worked as a security consultant with Ernst & Young and Deloitte & Touche, focusing on energy, financial services, and Internet technology clients with global operations. She also founded an information security consulting company with clients ranging from Fortune 100 companies to start-up organizations.

She is a published author; her book Surviving Security ran to two editions and is used at multiple universities around the world as the textbook for foundational information security courses. Mandy has also tested and reviewed information security products for multiple publications, as well as serving as the author of the weekly InfoWorld security column. She has been a sought-after expert in the field, speaking at signature security conferences such as Black Hat and Networld+Interop. In addition, she has taught a graduate-level Information Risk Management course for UMass Amherst in the College of Information and Computer Sciences.

Mandy has a JD from Western New England University, a Master's in Management Information Systems from Texas A&M University, and a B.B.A. in Accounting from Texas A&M University. Mandy is a CISSP, a CPA, and a member of the Texas Bar.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Mandy Andress [00:00:00]:
Think there's a lot of debate right now on speed of innovation and how do we do that in a mindful way, in an ethical way? And we look at a lot of things that we're trying to address in that world. And I believe that, at least at the moment, there will be a bit more of a practical approach moving forward. On the flip side of that, certainly from a security perspective, threat actors aren't taking that same approach. They are quickly researching and understanding what they can do and how they can leverage it. And so it's going to be, again, that balance of how do we move forward comfortably, safely, as a society, but knowing that there will be parts of society that don't follow those rules, and how do we balance that?

Mandy Andress [00:00:54]:
I thank you

Karissa Breen [00:00:55]:
for being silent.

Mandy Andress [00:00:56]:
The primary target for ransomware campaigns

Karissa Breen [00:00:58]:
is security and testing and performance and scale

Karissa Breen [00:01:02]:
And who can actually automate those. Take that data and use it. Joining me today is Mandy Andress, CISO from Elastic. And today, we're discussing how to securely integrate enterprise data with OpenAI and other LLMs. So, Mandy, thanks for joining, and welcome.

Mandy Andress [00:01:19]:
Thank you. Great to be here.

Karissa Breen [00:01:21]:
Now, look, this is such a big topic, and I was literally just in an interview before this talking about AI. And I really maybe wanna start just with your view on, like, where people are sort of at on this from your perspective, and what you're hearing from customers and the broader community.

Mandy Andress [00:01:37]:
Yeah. I think AI has certainly been a hot topic, and everyone's been looking into it. And a lot of the folks that I'm speaking with, whether it's other CISOs or folks looking at how they could utilize AI within their business, a lot of folks are in the "let's try to figure out where it makes sense for us to use it" phase. A lot of investigation, a lot of learning, a lot of trying. And so there's a lot of different things happening in that space these days to see what customers react to, what provides value, and how they can best leverage it.

Karissa Breen [00:02:08]:
Yeah. So okay. So a couple of things then on that point. You said best leveraging it. I believe, and maybe you have a better view on this than me, do you think that's the part that people are still confused on, like, how to leverage it? Because it's one of the things I'm seeing in interviews, but then also in content from, you know, other media publishers out there and just people on social media and friends. Do you think that's still a question mark for a lot of organizations and a lot of people?

Mandy Andress [00:02:33]:
I think there's still a lot of questions on what is the true current capability of AI and Gen AI, when we take into account some of the security concerns and privacy and data risks that go with it, but more specifically, how could it work within their environment? So we hear a lot of discussion on customer service, customer interactions; customer support is a significant use case that a lot of organizations are trying Gen AI technologies out on. Security as well, being a CISO: a lot of additional analysis, faster analysis, and improved analysis that we're researching and trying out to see if it will be able to be a benefit to us. And beyond that, there's just a lot of creativity and a lot of interest. And so I think there's still a lot of outstanding questions, as you referenced, just on how we could use it, but I think a lot of that will become clearer over the next year, at least some of the initial use cases. And then I'm always fascinated by the creativity of what people are able to find and use technology for.

Karissa Breen [00:03:39]:
So let's follow the creativity comment along a little bit more, then. What do you sort of envision happening over the next 12 months? And you are right, I think there are still a lot of outstanding questions, probably because it's still relatively early days, even though AI has sort of been around. But as it becomes a little bit more ubiquitous, of course, people are asking those questions, and perhaps if we sort of roll it out to more consumer-based, sort of everyday people, perhaps those are the questions that are coming through from an organization standpoint. But where do you sort of see the creativity side of things panning out over the next 12 to 18 months?

Mandy Andress [00:04:11]:
I think the creativity is going to continue to significantly increase. I listen to a lot of podcasts, and I've heard a lot of podcasters utilizing Gen AI and making it available to their listeners to be able to search through or get information from the back catalog and previous podcasts that they have had. Folks are looking at it to improve how they research, or, I should say, start to research, to see how that could expand all of the areas that they would look into and give them insight into some pieces or sources that they did not have any visibility into before. And that runs from some of the most basic uses to significant implementations from larger organizations, whether it's consumer-facing or business-to-business interactions. To me, it's either gonna go one of two ways over the next 12 to 18 months. It's going to be a significant amount of growth in implementation and speed. A year ago, we didn't anticipate that we were gonna be talking this much about Gen AI. And if that speed continues, there's a lot of things we're gonna be talking about a year from now that we just can't predict and can't see coming our way. So it's either gonna take that route, or we're gonna hit that point where, okay.

Mandy Andress [00:05:31]:
We talked about this a lot. It’s really interesting, but we’re just not quite finding the use cases yet. I don’t think there’s gonna be a lot of middle ground.

Karissa Breen [00:05:38]:
So when you say middle ground, what do you mean by that?

Mandy Andress [00:05:40]:
More of the: we're implementing it, it's beneficial, we're leveraging it, we have good uses for it, and this is how it's going to work in our environment. So I think we're going to have folks that are continuing to try new things, continuing to have those creative ideas, or it's going to be, hey, we have a lot of ideas and the technology is just not quite there yet for what we wanna do, which will drive all the further innovations that we know will be coming.

Karissa Breen [00:06:11]:
I'm curious just to go back a step on your example around Gen AI and podcasting, doing a podcast myself. So what does that sort of look like from your perspective? How can people start leveraging that? I found that really interesting, and I'm keen to hear more.

Mandy Andress [00:06:25]:
Yeah. The ones that I've heard talking about it and utilizing it, they are just utilizing chatbots, whether it's ChatGPT or whatever model they want, or building their own off of their library and back catalog. And they have a website related to their podcast, so they're just putting it there and allowing people to search through and get the information, some behind subscriber walls if they have that set up with their podcast. But it's a way to, you know, we talk about data and search being the core of how we understand and sift through information. And looking at what AI has provided to us, it's made natural language processing much more realistic and reasonable, the ability to interact with the system much more similar to how we interact human to human. And so it's not having to necessarily understand syntax and rules and certain parameters, like you would with a prior generation of search. It's just, you know, asking a question like you would of another person.

Mandy Andress [00:07:31]:
You know, I have three kids. They ask me lots of questions every day, and it's now easier for them just to go type their questions in and get an initial answer.
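(To make the back-catalog idea above concrete, here is a minimal sketch of semantic search over transcript chunks, assuming the OpenAI Python SDK and an API key in the environment; the chunk texts and model name are illustrative, not anything Mandy named.)

```python
# Minimal sketch: semantic search over podcast transcript chunks.
# Assumes `pip install openai numpy` and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

chunks = [
    "Episode 251: Mandy Andress on securely integrating enterprise data with LLMs.",
    "Episode 240: ransomware trends and what defenders can automate.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus = embed(chunks)  # embed the back catalog once, up front

def search(question, top_k=1):
    q = embed([question])[0]
    # These embeddings are unit-length, so a dot product is cosine similarity.
    scores = corpus @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:top_k]]

print(search("Which episode covered enterprise data and LLMs?"))
```

The same index could sit behind a chatbot on a podcast's website, including behind a subscriber wall.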

Karissa Breen [00:07:39]:
Yeah. Absolutely. And I think that's the power of these large language models, because you can ask those sort of everyday questions rather than asking in, like, a certain syntax, which maybe is more technical and everyday people just won't know how to actually pose it. Is that where we're gonna see sort of a shift now, even on the chatbot front? Like, that's really interesting, something I need to look into myself. But does that mean that people will just do that in lieu of, like, listening to, for example, this podcast directly, when they can just get the synopsis? Or how do you sort of see that panning out?

Mandy Andress [00:08:12]:
I don't see it as a replacement. I see it as, I know for myself, there's podcasts that I love to listen to. I listen to all of them. But there's things that I remember hearing, but I don't necessarily remember exactly which episode. And so if I wanna go back and share it with a colleague or someone that I think would find it interesting, something like the ability to search for exactly where the topic was covered helps speed up that process and makes it so much easier, rather than trying to sift back through lots of podcasts to figure out the exact episode where I heard that information I wanted to share.

Karissa Breen [00:08:46]:
Let's, like, focus more now on the company data side of things. I mean, organizations leveraging the AI component of it. What are you sort of seeing in that space?

Mandy Andress [00:08:58]:
I see a lot of interest. A lot of organizations these days have tremendous amounts of data, and they want to make use of it. They want to understand how they can take advantage of all of that data and continue to further the success of their business. And so it's a combination of looking at: do I utilize public LLMs, things that have been trained on more broadly, publicly available data? Do I want to build my own, or do I need to build my own with my internal company data and models off of that? So a lot of it is use-case dependent, on how they want to move forward and what they're trying to achieve. It is also risk based: what is the risk appetite of the organization and the concerns related to data protection or privacy, and general issues like prompt injection and model poisoning, what their appetite is there and how much control they want over that. And then tied into all of that is the amount of compute power that's sometimes necessary depending on the size of your data pool, and if it's significant, the significant computing power that you'll need to build those models and do that analysis.

Mandy Andress [00:10:09]:
And then, how do you augment all of that? You know, looking at vector databases, being able to add those components and help further manage things. So if you don't wanna build your own model, vector databases, of which the Elasticsearch platform is one, allow you to do retrieval augmentation. You can pull something from a public model, like ChatGPT or OpenAI, and then you can augment that with more specific, company-relevant information. So you're able to take advantage of what's public and what's already out there and trained, and then you can add into that your company-specific data, which helps you be much more precise and much more applicable to your specific environment and use case.
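(A rough sketch of the retrieval augmentation flow described above, assuming the official Elasticsearch and OpenAI Python clients; the index name, field names, and model choices are hypothetical, and the internal documents are assumed to have been embedded into the index beforehand.)

```python
# Sketch of retrieval-augmented generation: pull company-specific context
# from an Elasticsearch vector index, then augment a public LLM with it.
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")
llm = OpenAI()

def answer(question: str) -> str:
    # Embed the question with the same model used to index the documents.
    q_vec = llm.embeddings.create(
        model="text-embedding-3-small", input=[question]
    ).data[0].embedding

    # k-NN search against previously embedded internal documents.
    hits = es.search(
        index="company-docs",
        knn={"field": "embedding", "query_vector": q_vec,
             "k": 3, "num_candidates": 50},
    )["hits"]["hits"]
    context = "\n".join(h["_source"]["text"] for h in hits)

    # Hand the retrieved internal context to the public model.
    chat = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided company context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```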

Karissa Breen [00:10:55]:
So one of the use cases I wanna focus in on now for a moment is, and you've heard it yourself, obviously, like, employees quite happily uploading sensitive data into ChatGPT in order to perform their job better and faster, which I 100% get. And, you know, that's the path we're going down; it's not a replacement, it's gonna be a tool. Perhaps people didn't think about what they were doing when they were uploading sensitive information into ChatGPT, for example. So my question would be, what's sort of your view on how to ensure people within companies are working within the realms of OpenAI, for example? Because, like, at the end of the day, people aren't necessarily thinking like a security person, especially if you're talking to some finance folk who's, you know, just trying to do their job faster so they can get home on time, etcetera. And everyone's saying we should be leveraging AI to do our jobs better and more accurately. But how does that then look? And I know it's still relatively early days.

Karissa Breen [00:11:54]:
There are frameworks that are being developed at the moment, but there's still a lot of unanswered questions out there. So what can people sort of do today to make sure, like, there are sort of guardrails around sensitive information being uploaded into ChatGPT?

Mandy Andress [00:12:09]:
So I'll talk about this in the context of employees, or users in general, utilizing public models like ChatGPT to help, whether it's starting a report or doing some research. And a couple things tied to that. One, I don't see all the AI or Gen AI security issues as anything that's new. It's largely not that different from similar issues that we've had to deal with as security practitioners over the last 20 years. And the area that I most directly equate it to: I was at the Black Hat conference, the big security conference that happens in Las Vegas each year, and the big topic that year was what was called Google hacking. And this was 15 years ago or more. And Google hacking was running specific searches and using data that was indexed into Google to identify sensitive company information, system configurations, data that could be utilized to further research and plan an attack, if that was what you were looking to do.

Mandy Andress [00:13:17]:
And I don't see today's, you know, Gen AI as that much different from the Google hacking. With Google hacking, it was, you know, controlling the index and what Google was indexing off of your site. And so with Gen AI, it's looking at what are the ways that we can manage the input and the training. So with ChatGPT, you can make the selection that the data you submit is not utilized in training; it's making sure that that type of configuration is enabled. A big piece of it is awareness for your users and what the impact could be, how that information could be utilized. Because the key thing is you can block things like ChatGPT in your corporate networks, but you can't control very easily what folks are doing on their personal tablets, mobile devices, their home laptops, home computers. It's very, very accessible. And the biggest component is awareness and education and helping everyone understand what could happen and why it's important and how that could affect them personally.

Mandy Andress [00:14:25]:
If something happens that's built on that information and it impacts the overall success, or the ability of the company to continue with its business model.

Karissa Breen [00:14:35]:
So just going back to the awareness piece, I hear what you're saying. Do you think that there are people out there that just don't think? Well, hey, I don't know, arbitrarily, I'm not in accounting or anything and have never planned on moving into that field, so perhaps I'm speaking out of turn here. But in terms of, like, I've got all this data, I'm uploading it to ChatGPT because, I don't know, I wanna get the median wage across our company, for example. Do you think people wouldn't think, hey, this is probably not a good idea? Or am I just looking at it purely through the security lens, and that's what's in my DNA? And so maybe I'm not the right person to ask that question. I'm just always curious, though. Like, do you think people are aware of what they're doing, but they think, hey.

Karissa Breen [00:15:16]:
This is gonna increase, like, my productivity, so I just don't care. I'm just gonna forfeit the security side of it.

Mandy Andress [00:15:23]:
I do think there's an amount of unintended consequences. So individual pieces of data that are being searched on and utilized, that go into training, in and of themselves are not potentially an issue. But when you're working at the scale that these LLMs are working at, there's so much information available that you can't predict, or sometimes even comprehend at the scale some of this is working on, what may be in there and what connections could be made that you just can't anticipate. So that's where the best practice is to avoid having any company information utilized to train models, so that some of those unintended consequences are not as readily available and successful for someone trying to search on different components. And then it's understanding more clearly, for your organization, what those key data points are, whether it's the intellectual property for your organization, the sensitive data, the personal information, and where that is stored, and making sure the controls and the protections are in place, along with the understanding of the impact of using that type of information in anything that is not company specified, to help continue to build that education and understand the impact.

Karissa Breen [00:16:48]:
Yeah. Okay. That's an interesting point. So then going back to your original point around how do they control what people do on their home laptops and phones and all that type of stuff. So what would be your recommendation? Because you're right, but that doesn't mean we solve the problem, though, telling someone, hey, don't do that on your laptop, for example.

Karissa Breen [00:17:07]:
Even explaining to them the repercussions, people are still gonna do it regardless, for whatever reason that is. It could be unintentional. It could be, hey, I'm over this company, I don't care. So the motivations are varying, but I'm curious to know what does that process then look like? Because it's something that I think a lot of people out there are wanting to know in terms of, well, I can safeguard it from an internal perspective, but I can't control what Karissa Breen does on her laptop on Friday night.

Mandy Andress [00:17:36]:
Yeah. And I think that's one of the most fascinating aspects of all the conversations that are happening today. We don't necessarily have specific answers for that. There's a lot of discussion on what's copyright, what's created, and what's potentially trade secrets. And I think there's going to be a significant regulatory and legal side of this that we're just starting to see come out, that puts a little bit of accountability, or much more accountability, on the organizations building the models to make sure what data they're utilizing, and a way to remove data from the models if it's found to violate copyright or trade secrets. And that's gonna be very, very interesting to watch over the next few years.

Karissa Breen [00:18:19]:
So would an example be: I'm in accounting, downloading a large file at a random time of day, like Friday night, for example, when no one's probably really doing that, and that would then trigger something to say, oh, this Karissa Breen lady is appearing like a rogue employee, because she's doing something which is kind of not really in business hours: Friday night, large file being downloaded. What is she gonna do with it next? In terms of proactive measures, is that gonna probably be the easiest thing as of right now to decrease people potentially getting that information onto their personal laptop and then just, you know, going to ChatGPT and trying to increase their productivity, which in their mind is a good thing, but also poses a bigger security problem? Would you say that's gonna be the easiest way, though, moving forward?

Mandy Andress [00:19:08]:
For companies, yes. And that's not any different. That specific risk with Gen AI isn't any different from what we need to worry about with the broader insider threat categories. So someone's downloading that file; they could email it using their personal account, they could download it somewhere on a personal device if they're able to, or access personal email on their work device. They could put it in a share somewhere, whether that's public or sharing it with a competitor. So it's not just the Gen AI piece of it that is an issue. It's the broader data protection issues. And we've seen a lot of work.

Mandy Andress [00:19:46]:
There's the DSPM industry, data security posture management. We've seen a number of new technologies coming out trying to improve on what we tried before with DLP, data loss prevention, where we weren't overly successful in being able to take full advantage of that technology. But that, more broadly, is the company perspective: how to understand and manage data. And for ourselves, we've only made that challenge greater. We have not put data in fewer places. We have put data in more places. We have such complex technology environments when you're looking across the use of hyperscalers, the use of SaaS, and the low cost of storage. It's so helpful, and it seems so good, to contain and retain significant amounts of data.

Mandy Andress [00:20:40]:
But then how do you avoid that data going into ChatGPT, going to competitors, being shared publicly accidentally? And that's a broader security concern that we're still trying to tackle and find the best way to handle for ourselves.
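(As a toy illustration of the scenario raised earlier, a large download at an odd hour, here is a hedged sketch of the kind of rule involved; the event schema and thresholds are invented, and real insider-threat or DSPM tooling would baseline per user rather than hard-code limits.)

```python
# Toy rule: flag unusually large downloads outside business hours.
import pandas as pd

events = pd.DataFrame([
    {"user": "kbreen", "bytes": 4_200_000_000, "ts": "2024-03-22 21:14"},  # Fri night
    {"user": "kbreen", "bytes": 12_000_000,    "ts": "2024-03-20 10:02"},  # Wed morning
])
events["ts"] = pd.to_datetime(events["ts"])

off_hours = (events["ts"].dt.hour < 8) | (events["ts"].dt.hour >= 18)
weekend = events["ts"].dt.dayofweek >= 5
large = events["bytes"] > 1_000_000_000  # anything over ~1 GB

print(events[large & (off_hours | weekend)])  # flags the Friday-night download
```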

Karissa Breen [00:20:55]:
I think you're right, especially on the insider threat point of view. And, again, I don't expect you to have all the answers. It's more so, like you said, no one really knows; it's just more so having these conversations. But then just going back a moment to the legal side of things, what does that sort of look like? How do you see that unfolding? Like, Karissa Breen takes a bunch of accounting information, she puts it into ChatGPT, the company finds out, I get prosecuted.

Karissa Breen [00:21:20]:
What does that look like?

Mandy Andress [00:21:22]:
Yeah. I'm not sure what it exactly looks like. I know right now there's a lot of focus on, you look at The New York Times in the US, which is claiming copyright infringement on models being trained with their articles and their data without their permission. Sarah Silverman, the comedian in the US, is also claiming copyright infringement. And so that's going to be, I think, the initial piece of tackling what is fed into the models and what approval needs to be gained before that can happen. What I do see potentially down the road is a mechanism for when companies identify something. So, similar to today with Google and other websites, if we find something that is out there, whether it's domain impersonation, brand impersonation, or trademark infringement, we have mechanisms to request takedowns and to remove that, whether from the hosting provider or the site. And I anticipate there will be similar things in the ChatGPTs and LLMs of the world.

Mandy Andress [00:22:26]:
The ability to have a type of legal process to request that the data is removed. The hows and the specifics of that, I think, are all things that will be figured out over the next handful of years. But I don't necessarily see us reinventing something at the moment.

Karissa Breen [00:22:44]:
When you say figured out over the next few years, totally get it. It makes sense. But what do we sort of do now in this sort of weird time where it's like, hey, this thing is clearly here, we don't really have answers, we're trying to get the answers, and people are still gonna do the wrong thing regardless? What are some, like, sort of, hate to say it, but, like, Band-Aid solutions

Karissa Breen [00:23:01]:
that people can sort of just start implementing today? And I know it's not an easy answer. It's just more so, do you have any insight on that front of what people can start doing, Mandy?

Mandy Andress [00:23:10]:
Yeah. It goes back to a lot of the data protection components that we talked about. So having those types of controls, whether it's web browser extensions for your employees that can mask data, hide data, or disallow certain data going into websites. More broadly, controlling from a company perspective where information can be accessed from: not allowing services or sites to be accessed from personal devices, only accessible from company-owned devices. And making sure that any vendor you are working with that is interacting with an LLM, so if you're using some type of vector database to augment, or any other data source, that there's masking or that data is somehow being anonymized. So you're still getting the value of the technology and the analytics, but your data is not specifically being fed into the public LLMs.

Mandy Andress [00:24:12]:
Those would be the first two areas that I would really focus on.
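(A minimal sketch of the masking idea, assuming a simple regex redaction pass before a prompt leaves the company boundary; real deployments rely on DLP, proxy, or browser-extension tooling, and these patterns are illustrative only.)

```python
# Minimal redaction pass before a prompt leaves the company boundary.
# These regexes are illustrative, not a complete PII catalogue.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: Jane (jane.doe@example.com, SSN 123-45-6789) raised a ticket."
print(mask(prompt))
# Summarize: Jane ([EMAIL], SSN [SSN]) raised a ticket.
```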

Karissa Breen [00:24:16]:
And then, in terms of maybe more broadly, what do you think is gonna happen in the next, like, sort of 12 months? Now I know, like, these questions are gonna be answered. And like I mentioned earlier, in the EU I think they've put together some sort of AI framework that people are referring to now to get ideas, etcetera. Still, some of them are baked, some of them are half baked. Would you say, just with your sort of role at the moment, that people are quite worried about this or concerned? Because, again, like, no one likes knowing that we've got a problem we don't have an answer to. No one likes feeling like that.

Mandy Andress [00:24:48]:
Yeah. There's definitely some significant amount of concern. And I was reading a book recently about the history of automobiles and was chuckling when I read about what were called the red flag rules. So at least in some parts of the world, when an automobile was on the road, when they were first coming out, there weren't many on the roads, still a lot of horses, there had to be a person either walking or on a horse holding a red flag to let everyone know that this, you know, new automobile was coming behind and to watch out. And if you go from there to how our automobiles work today, largely computerized, up to self-driving, the speeds at which we drive on roads today. And I take that and use it as an analogy for AI and what we're doing. And I look at where we are today with Gen AI as: we're very cautious. We're not entirely sure where it's gonna go. It looks kind of interesting.

Mandy Andress [00:25:50]:
We are finding some good uses for it. We think there can be more, but we're not entirely sure where that's gonna go. And I equate it significantly to the automobile analogy. And so when we talk about concerns, yes, there are current concerns today, as there are with any new technology, and we need to be mindful, and we need to be careful.

Mandy Andress [00:26:10]:
But I also think there's significant opportunity in what we will be able to do. And we're at the point where data, the amount of data that we have, and how we're using that data, it's at a scale far beyond what humans can comprehend and analyze for themselves. And with technologies like machine learning and Gen AI, we're going to be able to gain insights and make use of data in ways that we can't even anticipate today. And those are the things that really excite me. I've always been a lover of technology and how we can use it to improve and help ourselves and the world around us. And I think this is another iteration of that.

Karissa Breen [00:27:01]:
I'm definitely optimistic when it comes to this. I love AI. I love what it can do. Before we get into that, I'm just really curious now as to your thoughts on privacy. A lot of privacy people out there are talking about, well, you know, we have to maintain our privacy. But, look, it depends on who you ask, and I'm definitely not a privacy expert, but I've interviewed a lot of them. How do you think they feel about all the data and everything that's going on? And, I mean, they're still pushing for all this privacy stuff. But as far as I'm concerned, if you're operating on the Internet, which is effectively most people, privacy is a really hard thing to sort of maintain.

Karissa Breen [00:27:35]:
Do you have a view on that?

Mandy Andress [00:27:36]:
Yeah. Similar to you, I work with a lot of privacy professionals; I've not focused directly on privacy. What I find potentially most challenging is the topic we touched on before, in that you can have discrete data points that by themselves, or a couple of them together, don't impact privacy, but you don't necessarily know what other data points, whether it's other organizations or other places in your own organization, are putting data in to where you could create a privacy issue by accident. And for me, that's some of the key things that I see privacy folks really concerned about. Again, those unintended consequences. How do we ensure that data in an LLM just can't be used to track someone down that should not be identified? I think those are some very interesting privacy cases and concerns that'll be interesting to watch.

Karissa Breen [00:28:34]:
So I wanna switch gears now and focus on the opportunities that are out there. AI has never been more ubiquitous than it is today. I mean, I'm loving it. I think it's great. It's definitely helped me in terms of my workload; especially being in media, just getting the summary of something is really great for me to be able to read more articles perhaps and just get the key points. But from your perspective, what do you think the opportunities are? And I'd like you to express them in detail, because perhaps this is the part, going back to the first part of our conversation, around some of those outstanding questions that people have, because so many companies out there are talking about all the bad side of AI. Whether they're the right people to convey that or not, that's a separate matter. It's just more so how you're seeing it from your view.

Karissa Breen [00:29:16]:
I definitely have a view, but I'm really keen to hear what it is in terms of opportunities that people can learn from, that can actually change perhaps their perception on AI and Gen AI.

Mandy Andress [00:29:35]:
So for me, machine learning and AI technology has been used in security tools for a number of years. It first started when we moved from signature-based antivirus tools into more behavior-based anti-malware tools. So that was the beginning of it, and it's been spreading across all different types of areas in the security field. What I'm really optimistic about with Gen AI, and where that evolves, is the broader context that you are able to gain. And by context: we have a lot of capabilities to understand this user does these things and behaves this way, this system does these things and behaves this way. But it's much harder to understand all of the interactions between multiple users and multiple systems, and really understanding what behavior is what we typically see, and then what activities are anomalous and the things that we might want to investigate. And, you know, as an example, a lot of the threats today are focused on finding valid user accounts and credentials. And so for most of our traditional security detection measures, this would look just like a user logging in as they normally would.

Mandy Andress [00:30:55]:
But what if suddenly they're starting to attempt to access a system that they've never tried to access before? We won't necessarily see that in a lot of today's security tooling and setup. Or you see they access a system that they are allowed to access on a regular basis, but from there, they suddenly move elsewhere, you know, a production system that they shouldn't even be trying to access. All of that analytics and analysis and understanding is something that's very achievable now with the large language models and all of the kind of AI capabilities. And that's the piece that I'm really excited about: giving defenders that broader context and understanding of what's happening in their environment, to be able to much more quickly identify activity that is anomalous or just not standard behavior.
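(A bare-bones sketch of the "valid credentials, anomalous behavior" detection described above: baseline which systems each account normally touches, then flag first-time access. The event schema is hypothetical; production tooling adds time decay, peer-group comparison, and scoring.)

```python
# Baseline which systems each account normally touches; a valid login to a
# never-before-seen system is a candidate for investigation.
from collections import defaultdict

baseline: dict[str, set[str]] = defaultdict(set)

def learn(user: str, system: str) -> None:
    """Record normal user-to-system access during a learning window."""
    baseline[user].add(system)

def is_anomalous(user: str, system: str) -> bool:
    """Valid credentials, but a system this user has never accessed."""
    return system not in baseline[user]

for user, system in [("kbreen", "crm"), ("kbreen", "wiki")]:  # historical logs
    learn(user, system)

print(is_anomalous("kbreen", "crm"))         # False: business as usual
print(is_anomalous("kbreen", "prod-db-01"))  # True: first-time access, investigate
```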

Karissa Breen [00:31:50]:
So, going on the defender side of things for a moment. Do you think, as you would know, alert fatigue's a massive thing. I'm not saying, like, people should just sit back and not look at things more closely and let the AI do everything for you, because obviously there's anomalies with that and things not always being accurate. But do you think it gives, like, people like a defender a bit of a break? To be like, well, maybe it just helps, arbitrary number, 10%, so I get a bit of breathing space. Because as we know, people are burnt out. They're tired. Alert fatigue's real.

Karissa Breen [00:32:21]:
Perhaps their concentration for doing these types of things is going down because they're so exhausted. So do you think AI is gonna help with giving people a little bit of relief in their day-to-day jobs?

Mandy Andress [00:32:30]:
Oh, absolutely. And I think that's probably the largest use case in security today: the focus on security operations and helping augment and support analysts. So whether that is utilizing Gen AI or AI assistants, Elastic is one example with the security AI Assistant, that's able to pull context from your environments. So you see this type of detection come through, and it's pulling different contextual information from your environment, whether that is information from your asset database, your CMDB. All those things that analysts complain about having to do manually today, this is one way to start to pull all that context together automatically. But it's also able to pull in this type of event and this type of activity, breaking down the specific steps: this is the mitigation action that you should consider.

Mandy Andress [00:33:24]:
And being able to pull all of that through a combination of open LLMs, the likes of ChatGPT and OpenAI, and your internal company technology stack and configuration. It's really able to help your SOC analyst understand what actions they need to take, versus right now, where they spend a lot of their time gathering data and information to try to get to that analysis. And once they get to that analysis point, it's like, well, I have to move on to the next alert; there's not a lot of focus, or it then moves up to a level 2 analyst. And what I really like about the AI technology is it will allow analysts to spend their time on the critical thinking and the more high-value, human-centric analysis and understanding of: alright, what's the user impact of this? How would this work in our environment? Do we need to take immediate action? But all of the data gathering and things that we spend a lot of our time on today would already be completed on our behalf.
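(A rough sketch of that assistant workflow, assuming the OpenAI Python client: enrich an alert with asset context from a stand-in CMDB, then ask an LLM for impact and mitigation steps. The lookup, schema, and model name are hypothetical; Elastic's assistant, as Mandy describes, does this inside the security tooling itself.)

```python
# Enrich an alert with asset context automatically, then ask an LLM for
# likely impact and first mitigation steps.
from openai import OpenAI

llm = OpenAI()

CMDB = {  # stand-in for a real asset database
    "srv-042": {"owner": "payments-team", "tier": "production"},
}

def triage(alert: dict) -> str:
    asset = CMDB.get(alert["host"], {})
    prompt = (
        f"Alert: {alert['rule']} on host {alert['host']}.\n"
        f"Asset context: {asset}\n"
        "Summarize the likely impact and list the first mitigation steps "
        "an analyst should consider."
    )
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(triage({"rule": "Credential access from new geo", "host": "srv-042"}))
```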

Karissa Breen [00:34:30]:
Yeah. Totally in agreement with you on the critical thinking side of it. I think that's something of paramount importance that people should be focusing on. So, in terms of everything discussed today, and I know it's always hard to go into a super amount of depth, we've gone into enough depth looking at both sides: where do you think we go from here as an industry as of today? And if we come back and have the same conversation in a year, where do you sort of see us as an industry and as a society moving towards?

Mandy Andress [00:34:54]:
I think there's a lot of debate right now on speed of innovation and how do we do that in a mindful way, in an ethical way. And the conversations that are happening, I'm pleased to see. They're not necessarily things that we saw in the early days of the Internet and the growth of social media, and we look at a lot of things that we're trying to address in that world. I see those conversations starting with AI much sooner. Saying, hey, this could happen, we need to understand how to protect against that. Or they were talking about the copyrights.

Mandy Andress [00:35:30]:
We're talking about privacy. And I believe that, at least at the moment, there will be a bit more of a practical approach moving forward. On the flip side of that, certainly from a security perspective, threat actors aren't taking that same approach. They are quickly researching and understanding what they can do and how they can leverage it. They have their own ChatGPTs for the dark web. They have all of their tools to help with creating phishing messages that are much more bespoke, much more targeted, no longer having all of the language and grammar and punctuation errors. And so it's going to be, again, that balance of how do we move forward comfortably, safely, as a society, but knowing that there will be parts of society that don't follow those rules. And how do we balance that? It's what we talk about for the Internet.

Mandy Andress [00:36:26]:
It's what we talk about with any new technology. And I think that will continue to be the focus of the conversation for the next 12 to 18 months, and we'll continue to see that growth and that change. On top of that, I would like to start to see much more global conversations as well. There's a lot that's country or region specific. I think looking at this more globally would help as well.

Karissa Breen [00:36:53]:
So, Mandy, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?

Mandy Andress [00:36:57]:
One, thank you for having me. It's been a great chat. I have been a technology lover from my early, early days, and I'm always fascinated by what capabilities we have. And I'm always fascinated by the creativity of how folks see how you can leverage and take advantage of technologies. I look at Elasticsearch, created when Shay started to write a tool for his wife to help manage recipes, and that's turned into a very large global organization that's used by over half of the Fortune 500 and runs from the ocean to Mars. I look at that, and it's a fantastic world, and I'm really excited to see where it goes next from a use-of-technology perspective, while we balance the downsides.

Karissa Breen [00:38:04]:
Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit kbi
