The Voice of Cyber®

KBKAST
Episode 271 Deep Dive: Nathan Wenzler | The Dangers of Public Sector Employees Using AI Tools Privately
First Aired: August 07, 2024

In this episode of KBKast, Nathan Wenzler, Chief Security Strategist at Tenable, joins us once again to discuss why the accuracy and legitimacy of the data in back-end databases is critical to getting reliable responses from AI tools. We explore the shift towards reliance on AI and the associated concerns about data integrity. Nathan emphasizes the need for purpose-built AI tools to ensure data accuracy, especially within government organizations. We also look at the potential for AI to automate low-level tasks, the value of AI as skills augmentation rather than job replacement, the challenge of balancing AI innovation with security concerns, and the need for practical implementations of AI tools to mitigate risk.

Nathan Wenzler is the Chief Security Strategist at Tenable, the Exposure Management company. He has over two decades of experience designing, implementing and managing both technical and non-technical security solutions for IT and information security organizations. He has helped government agencies and Fortune 1000 companies alike build new information security programs from scratch, as well as improve and broaden existing programs with a focus on process, workflow, risk management and the personnel side of a successful security program.

As the Chief Security Strategist for Tenable, he brings his expertise in vulnerability management and Cyber Exposure to executives and security professionals around the globe in order to help them mature their security strategy, understand their cyber risk and measurably improve their overall security posture.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Nathan Wenzler [00:00:00]:
If an organization is taking the stance of, well, I’ll just wait for a regulation to come about, and then I’ll do the bare minimum to become compliant with that regulation, I could not support that approach right now. It’s all moving too quickly. It’s moving too quickly at a scale that we haven’t really seen in technology before. And you can’t wait a year or two or three to start to make a decision about this. The decisions need to be made now. It’s going to have to be more than the bare minimum.

Karissa Breen [00:00:49]:
Coming back on the show today is Nathan Wenzler, Chief Security Strategist from Tenable. And today, we’re discussing the dangers of public sector employees using AI tools privately. So, Nathan, welcome back.

Nathan Wenzler [00:01:00]:
Thank you for having me. So so happy to be here.

Karissa Breen [00:01:03]:
Okay. So I really wanna talk about the dangers because, again, it depends on who you ask, who you speak to, if you read different forums, social media. I mean, I trawl a lot of this stuff to do research and to sort of understand what people are saying out there, but I wanna hear your thoughts on the obvious dangers of public sector employees using AI tools.

Nathan Wenzler [00:01:25]:
Yeah, for sure. And I appreciate you talking about all the sort of commentary and discussion going on out there, because I do feel that there’s a lot of hype and a lot of misplaced fear when it comes to AI tools. I think it’s really important to talk about that as well, because fear of things that aren’t necessarily a problem distracts us from the actual dangers and risks involved in using these tools. So to give you an example, you know, I hear a lot of chatter about people being very concerned that attackers can compromise the algorithms, that they can compromise, you know, the engine itself so that it hallucinates more, gives up false information, that kind of thing. Quite frankly, that’s just not a real attack vector. I mean, it would be a very complicated sort of thing and a lot of effort for an attacker to, you know, break into an environment, get to an AI application, and then subtly manipulate the underlying gen AI engine to make it spit out bad results. There are far easier ways that these tools can be compromised, and I think that’s really where we have to be more mindful about it.

Nathan Wenzler [00:02:37]:
So as an example, we’re concerned about trust. We’re concerned about the tools providing users the right answers. That comes from the back-end data. That comes from what data sources the AI tools are leveraging to give us those answers. If those datasets get compromised, that’s where you start to have real danger, and it’s certainly much easier to compromise a database than it is to try to rewrite an application. So, you know, when we start to talk about the dangers of using these kinds of tools, what it really comes down to is: where are the risks that could compromise the trust that users place in these tools to help them answer the questions they’re asking or to perform the tasks that they’re trying to perform? Anywhere that those areas of trust can be compromised, that’s where we have to really focus. So that’s good data security.

Nathan Wenzler [00:03:33]:
That’s understanding the source of the data we’re using for these tools. Are we using a public dataset that could probably be easily poisoned with a lot of bad information, or are we using a very small private dataset where we can control what data is there, what information is there? We can validate it. We can secure it. We can make sure the integrity of the data stays good. Those are the kinds of areas we have to really focus on if we want to avoid those kinds of dangers from the applications themselves. So I think that’s a big part of the equation: we get fixated on the front end of it, but it’s really more of a data security kind of issue. The other obvious danger then is just misuse. We’ve already seen some instances of this in the private sector, but there’s a lot of concern from government agencies as well, especially when we get into military, energy and national security areas where there’s very, very sensitive and classified information.

Nathan Wenzler [00:04:31]:
A lot of the concern is just user error. Government employees might accidentally take private information and plug it into one of the public AI tools, and now that data is out in the wild and other people could potentially access it. And so there’s a lot of concern there as well about user training. Are we educating our users about the proper use of these tools? When is it okay to use them? When is it not okay? And really trying to put in good controls around the safe use of approved AI tools while ideally making sure that our users aren’t circumventing all of that and going out to public datasets with really, really crucial and critical information. So there are a lot of moving parts to the dangers of all these AI things. I think it’s just, again, really important to focus in on the real problems and not get too caught up in the hype of these very scary sounding algorithm compromises or other almost movie-level kinds of adventurous hacking that people are afraid is gonna happen when there are far more practical dangers to be concerned about.

Karissa Breen [00:05:42]:
Okay. I wanna get into this a bit more. Now you raised something which is interesting around the right answers. So I’ll give you some context, and then I’ll ask the question. So, basically, I got back from Las Vegas, went to a conference, interviewed the head of AI. Guy was brilliant. And he talks a little bit more about what you were saying around the right answers. Right? So just say hypothetically. Okay.

Karissa Breen [00:06:02]:
The sky is obviously blue. But what if, like, we write all this content out there that said, you know, the sky is pink, and then people use, you know, that dataset? So if I then typed into ChatGPT, well, what color is the sky? And it came up with pink, because there’s all this inaccurate information that’s out there, or propaganda that exists, as you know. What does that then mean? And the reason I ask that is, one, journalism’s decreasing. Right? Two, things are being written by AI all the time, whether it’s accurate or not accurate. How do you then look at that more closely to say, hey, that’s an accurate piece of content versus not? Right? Like, it could be in my crazy head that maybe I genuinely think the sky is pink. Right? So how does that then translate into these are the right answers? Who’s gonna be auditing that stuff?

Nathan Wenzler [00:06:49]:
I mean, it’s a great question, and that’s exactly the kind of danger that I was sort of alluding to about the data side of it. Right? Data poisoning, or filling a dataset with a lot of bad information, is, in my opinion, the greatest danger when we’re leveraging these kinds of tools because, again, it’s an attack on trust. If I’m an organization, I take a lot of really good steps to put in a gen AI knowledge base kind of thing for my users. If you have a question about what to do, go to the little chatbot, type in the question, and the gen AI stuff can easily explain to you what you should do. Well, if my back-end database is compromised and filled with a bunch of bad data, think about what’s gone on. I’ve told my employees, here’s an application you can trust. When it gives you an answer, use the answer. But then when the data is compromised and those answers are wrong, what’s the user left with? The user’s been told to trust it.

Nathan Wenzler [00:07:51]:
They believe that the information is good. And so you’ve already set people into a place where they’re not necessarily going to question it or validate the information as sort of a first response. Their first response will be to just sort of trust it, even if it might seem a little bit off. So it’s a real problem for organizations that have taken the step to say, we’re not gonna leverage public datasets because we know that data is just crazy. Right? There are lots of comments about what color the sky is, and it’s very difficult to understand what’s real and what’s not. So we’re gonna have an internal, controlled AI setup, and we’re gonna have it be very useful to our users. But if that controlled dataset gets poisoned, now you’re essentially attacking the trust of the users, and it’s so much easier then to cause them to make mistakes, to cause them to click on malicious links, to give up their usernames and passwords. There are just so many scenarios where an attacker can manipulate that trust because of the data poisoning that it becomes a much more effective sort of attack than almost anything else.

Nathan Wenzler [00:09:05]:
So this is, you know, the big thing that organizations have to take into account: how do we ensure that whatever the back-end database is stays correct, that the data is sound, that we’re not compromising the integrity of that data, so that we know that when our users ask questions of it, the gen AI tools are giving them legitimate responses? That’s really the goal that we have to work towards. And I think a lot of organizations are still trying to figure that out, frankly.
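
To make that concrete, here is a minimal sketch, not from the episode and not any particular product, of the kind of back-end integrity check Nathan describes: every knowledge-base record is hashed and compared against a separately protected, approved manifest before the AI tool is allowed to serve answers from it. The file layout, names and fields are hypothetical.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical layout: each knowledge-base record is a JSON file, and a
# separately protected manifest.json maps file names to known-good SHA-256
# hashes captured when the data was last reviewed and approved.
KB_DIR = Path("knowledge_base")
MANIFEST = Path("manifest.json")


def record_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of one knowledge-base record."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify_knowledge_base() -> list:
    """Compare every record against the approved manifest.

    Returns a list of problems (tampered, missing or unexpected records);
    an empty list means the dataset matches what was last validated.
    """
    approved = json.loads(MANIFEST.read_text())
    problems = []

    for name, expected_digest in approved.items():
        path = KB_DIR / name
        if not path.exists():
            problems.append(f"missing record: {name}")
        elif record_hash(path) != expected_digest:
            problems.append(f"hash mismatch (possible tampering): {name}")

    for path in KB_DIR.glob("*.json"):
        if path.name not in approved:
            problems.append(f"unexpected record not in manifest: {path.name}")

    return problems


if __name__ == "__main__":
    issues = verify_knowledge_base()
    if issues:
        # In practice you would alert and stop serving answers from this data.
        print("Knowledge base failed integrity check:")
        for issue in issues:
            print(" -", issue)
    else:
        print("Knowledge base matches the approved manifest.")
```

The point is simply that the data feeding the model gets treated like any other asset whose integrity you verify before you trust what comes out the other end.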

Karissa Breen [00:09:35]:
Well, I was just gonna ask, how do you do that effectively? And how do you know? Because you’re gonna get to a point. Right? Okay, look at before the Internet started up, before Google was around. You had to go to the encyclopedia and all these things. Right? Kids these days wouldn’t even know what that is. Now if I wanna know something, I’ll just Google it. Now, sure, there could be maybe some of the, you know, the answers that are wrong or incorrect or whatever. But then I’m gonna get to the stage where it’s like, if I can just ask, I don’t know, ChatGPT or, you know, a language model, whatever it is, a question, and it gives me an answer.

Karissa Breen [00:10:09]:
Am I really going to question it? Like, it’s more about the mindset. Like, why would someone in the next generation sit there and question it when this is all that they know? Right? It’s not like back in the day where we were looking through encyclopedias and then there were all these references and all that. All that’s gone out the window. So how do you ensure it’s accurate, though? How do you maintain that integrity of that data?

Nathan Wenzler [00:10:31]:
Well, I think what we’re gonna start to see in a lot of cases is an acknowledgment of what we saw initially with things like ChatGPT, where the promise was that all of the information that’s out there on the Internet is within scope of this database, and you can ask it any question and it’ll answer any question. Well, we’ve seen exactly what kind of chaos comes from that. The hallucinations, the crazy information, the crazy answers. It’s become, I think, pretty common knowledge that you can’t just trust whatever comes out of those public datasets, because people have seen firsthand that the answers are pretty wild, or can be at least. So I think what we’re gonna see is this trend towards not trying to leverage AI tools to be the answer to all questions, but to leverage very purpose-built, specific AI tools that can answer particular questions. And when you do that, you’re gonna be able to create a dataset that’s much more focused, that’s more aligned with a single purpose or a single topic, and that will become much easier to manage from both an integrity standpoint and also just a verification standpoint.

Nathan Wenzler [00:11:48]:
You know, an example that I often talk to people about is a help desk, a knowledge base for a help desk. Users always have questions, right? They wanna know how do I change my password, or how do I request a new piece of software. These are all very common help desk type questions, and you could build a dataset to answer those questions without answering things like what color is the sky or how many fish are in the sea. A help desk dataset would be much smaller. It’s going to be a much more finite set of answers and data. That becomes a lot easier for organizations to ensure that the information that’s there is accurate and aligned with the purpose. And then when users are leveraging those LLMs or, you know, the equivalent ChatGPT-style interface, they’re gonna be in a much stronger position to be able to trust what comes out of it, because the data is very focused on just the thing they’re trying to solve. Especially for government organizations, I think that’s where you’re going to see the biggest shift in how they use these kinds of tools: to do these purpose-driven kinds of knowledge bases, or themes or topics, so that they can manage the accuracy and the validity of the data better.
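
As a rough illustration of that help desk example (not code from the episode, and with all questions, answers and the keyword scoring invented for the sketch), a purpose-built tool can be as simple as a small, internally reviewed dataset plus a retriever that only ever answers from it. In a real deployment the retrieved entry would be handed to a generative model as its only context.

```python
# A small, curated help desk knowledge base: every answer has been written
# and reviewed internally, so there is no public data to poison.
HELP_DESK_KB = [
    {
        "topic": "password reset",
        "question": "How do I change my password?",
        "answer": "Open the self-service portal, choose 'Reset password', "
                  "and follow the MFA prompt. Passwords must be 14+ characters.",
    },
    {
        "topic": "software request",
        "question": "How do I request a new piece of software?",
        "answer": "Submit a request through the service catalogue; your manager "
                  "and the security team approve it before installation.",
    },
]


def _words(text: str) -> set:
    """Lowercase the text and strip basic punctuation for crude matching."""
    return {word.strip("?.,!") for word in text.lower().split()}


def retrieve(query: str):
    """Return the curated entry that shares the most words with the query."""
    query_words = _words(query)
    best, best_score = None, 0
    for entry in HELP_DESK_KB:
        score = len(query_words & _words(entry["topic"] + " " + entry["question"]))
        if score > best_score:
            best, best_score = entry, score
    return best


if __name__ == "__main__":
    hit = retrieve("How can I change my password?")
    if hit:
        # In a gen AI front end, hit["answer"] would be passed to the model as
        # the only context it may answer from, keeping responses on-topic.
        print(hit["answer"])
    else:
        print("No curated answer found; route to a human agent.")
```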

Karissa Breen [00:13:06]:
Okay. So just to press on a bit more, you said purpose-built to answer particular questions. Can you give me an example? Like, how particular or specific are we talking here?

Nathan Wenzler [00:13:15]:
Oh, it’s gonna be as particular or specific as the data behind it. So, I mean, I can give you an example. We do a little bit of this at Tenable, quite frankly, but we look at vulnerability data. Vulnerability data can be really complex and technical. We talk about why a particular piece of software is vulnerable, or how the vulnerability is exploited, or how you remediate the problem. There’s a lot of data associated with vulnerabilities. And if you were to build a dataset that just contains the information about vulnerabilities and essentially nothing else, you could build a gen AI interface on top of that that would look at all that data. It would look for patterns, look for commonalities. It’s going to start to really parse out that dataset of vulnerability information to understand how do I fix this vulnerability that one of my tools told me about.

Nathan Wenzler [00:14:19]:
You could just ask the tool that, how do I fix this, and hit enter. And it’s gonna be able to tell you, oh, here’s all the remediation advice about that. And if you’re not clear about what that means, you could ask it further. And as long as the data about all of the various pieces of the vulnerability is there, the user is going to have a very quick and powerful way to get that information out in an easy-to-digest sort of way, a very fast way, so that they can start to address the problem. So I think there are a lot of use cases in this regard where we could build, especially for security practitioners, these kinds of purpose-built interfaces to do threat research, to do asset research, to understand what’s connected on our networks, and be able to ask questions like who owns this server or who’s logged into this. There are just a lot of ways that can be done, but that’s the kind of specificity of the data that we can leverage. It’s not gonna answer the broader questions we were talking about, like the color of the sky, but it can help us with that investigative need that a lot of security practitioners have. So that’s the kind of thing you’re gonna see, I think, more and more commonly, across different areas of study, across very specific security specializations.

Nathan Wenzler [00:15:37]:
That’s where we can see some wins with these kinds of tools.
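
In the same spirit, and emphatically not a Tenable product or API, a narrowly scoped vulnerability assistant can be pictured as nothing more than curated remediation facts keyed by vulnerability ID; the identifier and fields below are made-up placeholders.

```python
# Illustrative, purpose-built dataset: nothing here but vulnerability
# remediation facts, so the tool simply cannot wander off-topic.
# "CVE-0000-0001" is a deliberately fake placeholder identifier.
VULN_KB = {
    "CVE-0000-0001": {
        "summary": "Placeholder remote code execution flaw in a web framework.",
        "remediation": "Upgrade the framework to the patched release and "
                       "restart the affected services.",
        "workaround": "Block the vulnerable endpoint at the WAF until patched.",
    },
}


def how_do_i_fix(cve_id: str) -> str:
    """Answer one narrow question ('how do I fix this?') from curated data."""
    entry = VULN_KB.get(cve_id.upper())
    if entry is None:
        # Refusing to guess is the point: no curated data, no answer.
        return f"No curated guidance for {cve_id}; escalate to the vuln team."
    return (f"{cve_id.upper()}: {entry['summary']}\n"
            f"Fix: {entry['remediation']}\n"
            f"Interim workaround: {entry['workaround']}")


if __name__ == "__main__":
    print(how_do_i_fix("cve-0000-0001"))
```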

Karissa Breen [00:15:41]:
So do these tools sort of exist today? Because, like, when I was doing this sort of stuff, I mean, this was going back, like, a decade ago, I didn’t have any of this, and I used to pull together reports. So it would be nice if I could have leveraged some type of, you know, AI capability, or else I was doing all this stuff manually. Right? Like, that was time consuming and hard, and you miss things. So are we gonna start to see this more? And I know people are gonna say, oh, AI is taking my job. Well, not really, because you’re gonna be able to do better things that are, like, you know, better outputs for companies. Right? Like, I would rather AI do real low-level stuff and then pay that person to do more, you know, critical thinking tasks and strategic tasks. Right? So what do you think now, moving forward as of today? Like, what do you see with your role, with customers that you’re talking to? Where are we gonna get to with all of this?

Nathan Wenzler [00:16:29]:
Well, first of all, let me just address that: I fully agree with you, by the way, that AI is not replacing anyone’s job. I know there’s a lot of fear and hype in the industry about that. It’s just not real. It’s exactly as you say. We need people who can make really sound risk decisions and get organizations moving to mitigate or remediate these problems before breaches happen. Like, that’s the whole point of cybersecurity: to harden the environment so that we experience fewer data breaches.

Nathan Wenzler [00:17:01]:
And if these tools can help you make that decision faster, you’re actually more empowered and more valuable in the organization than when you’re doing it manually and spending hours or days sifting through spreadsheets trying to figure it all out. So I think it’s a really important point you’ve brought up, that this is not a job replacement function. This is a skills augmentation. This is going to make us better at what we’re actually here to do, which is mitigate risk, not just sort through data and spreadsheets. It’s really, really critical that people understand that. And going forward, I think that’s really what you’re gonna see more organizations start to focus on. When we cut through the hype and the buzzwords that are used out there, and there are a lot, let’s be fair, there are a lot of companies that have tacked on the letters AI to every single thing that they do. And so it can seem like there’s just a lot of fraud, essentially, around what AI can or can’t do.

Nathan Wenzler [00:18:03]:
When we cut through that and you start to see it as this kind of analysis tool, as this way to expedite how we get information, how we absorb information, that’s the power. That’s where you’re gonna find the benefits to your organization. You do get to make those risk decisions faster and more accurately. And so as more tools integrate these kinds of functions into them, and as we educate users that this isn’t the all-encompassing apocalypse of security as we know it, that it’s really just another tool in the toolbox that can help you do your job better, I think you’re going to see a much bigger return from the implementation of these kinds of tools. In a lot of cases it comes from vendors; again, like my own company, we leverage some of these kinds of capabilities already in our tools, specifically to help people ask those kinds of questions or to query the datasets. So you’re definitely already seeing more and more companies incorporating it in a very practical sort of way. And as that gets adopted through more organizations, the benefits are gonna become much more obvious.

Karissa Breen [00:19:11]:
Okay. I wanna switch gears now and talk about regulation and government sort of taking that more into their realm. Now I’m gonna ask a harder question because, like, governments in the past are not great at doing, like, the best of things. Right? So now we’re asking them to do something that’s pretty complex and saying, okay, you guys should just regulate it. So how does that work? Because, again, I get it. Like, the onus shouldn’t be on, you know, each one. You can’t regulate things unless it’s, you know, backed by the government, etcetera, or independent bodies. I get that.

Karissa Breen [00:19:42]:
But that’s probably not gonna happen anytime soon, and I would challenge the capability and the competency of that being a thing. So I’m really curious now. I mean, from the research that I do, the reconnaissance that I see from what people are saying on Twitter, or X, or whatever you wanna call it, I’m looking at what people in the market are sort of saying, going back to your earlier thoughts around the commentary that’s out there. So how does this work?

Nathan Wenzler [00:20:13]:
It’s a really good question, and I think the honest short answer is we don’t really know yet. Regulation, when it comes to the use of software, security tools, anything like this, look, there’s been a lot over the last 20 years. Right? Like, any number of audit requirements, compliance requirements in healthcare and finance, and government has stepped in in a number of places to establish requirements for basic cyber hygiene, for basic security matters, to ensure that these industries or these areas are essentially safe. But I think one of the challenges and the tricks here is that we have to sometimes think about why government steps in in these cases. What I see a lot of in public is people saying, well, you know, we don’t need government oversight. We don’t need all these regulations. Just let the market decide. Like, companies should recognize that cybersecurity is a good business practice, and if they don’t invest in that, then they get data breaches, they get compromised, users and customers will move on to other companies, and they’ll be out of business because nobody wants to do business with someone who’s not secure.

Nathan Wenzler [00:21:23]:
Well, in practice, has that actually happened? I mean, look at any of the major breaches we’ve seen over the last couple of decades: those companies are still in business, and in many cases their stock prices are higher than ever. So relying on organizations to just sort of do the right thing because it’s the right thing to do hasn’t really panned out. We just haven’t seen that broadly, and it has forced the issue in a lot of cases for governments to have to step in and say, well, listen, people’s information is being compromised, individuals are losing money, they’re being attacked by these folks, they’re having their money stolen. We have to step in to protect individuals because the market hasn’t really gone as far as we had hoped. So it’s a two-sided problem, and I think each side likes to blame the other a little bit. Companies don’t like being told what to do or having mandates, and government is, as you mentioned, not always the greatest when it comes to defining those things.

Nathan Wenzler [00:22:26]:
But neither side is really getting it right. So, you know, we have a lot of work to do; I think that’s part of the reality of this sort of thing. We’re going to need some amount of regulation when it comes to AI usage, and especially when it comes to sort of trust and validation for public datasets. And we’re already seeing, with some of the major players, Google, Microsoft, you know, OpenAI, all of these, that there are starting to be a lot of questions about how are you protecting people’s personal information? How are you ensuring that data can’t be stolen or manipulated? Those are kind of the right questions to be asking, and it may require regulation, maybe. This may also be a place where these organizations just need to step up and do the right thing and implement really, really strong controls to protect their users and their customers. So I think you’re gonna see both. In the AI space, you’re going to see regulations come about.

Nathan Wenzler [00:23:24]:
I know a number of countries are discussing that right now and trying to figure out what they can legislate and what they can require. Internally, a lot of government agencies are, of course, creating policies that their users can’t, or should not ever, use public-facing gen AI services like ChatGPT and all the rest. There’s a lot happening all at once, but I don’t think there’s a single right answer in this particular case. And how it evolves in a lot of ways is gonna depend on the companies behind these big gen AI platforms. How they take the next steps in their security practices, I think, is gonna dictate a lot of how much the government’s going to have to regulate, and what depth they’re gonna have to require to ensure the trust of these systems. It’s really complicated, so it’s a hard question to answer.

Karissa Breen [00:24:18]:
But how long do you think this will take until it’s, like, implemented and, like, this is a thing? Because it’s not so easy to just ring-fence this problem. Right? Because it’s everywhere. So how do you do that effectively? You know, there’s no rule book of, like, oh, this is how we’re gonna do it. So this could take years.

Nathan Wenzler [00:24:35]:
Yes. It absolutely could, especially if we’re gonna rely on regulation to be the answer for that. So this is where it becomes imperative that individual organizations step up and do more for themselves. Right? They need to look at the real risks that these platforms can cause. They need to look at the way that it could affect their businesses. If users are shipping off intellectual property into these databases, if users are making corporate decisions based on bad information or misinformation, or government agencies are compromising secrets because, you know, we’ve copied and pasted them into an interface somewhere, there are a lot of places where this can go wrong. And if an organization is taking the stance of, well, I’ll just wait for a regulation to come about, and then I’ll do the bare minimum to become compliant with that regulation? Well, I can’t.

Nathan Wenzler [00:25:31]:
I could not support that approach right now. It’s all moving too quickly, as you’ve pointed out correctly. It’s moving too quickly at a scale that we haven’t really seen in technology before. And you can’t wait a year or two or three to start to make a decision about this. The decisions need to be made now. It’s going to have to be more than the bare minimum. We really have to get better about driving strong policy around what can or can’t be used within organizations, and ensuring proper security controls around the things we do use. So that’s application access, that’s data access, all of your standard access control kinds of things.

Nathan Wenzler [00:26:11]:
You need validation processes for the data to ensure that what your users are getting is real. There’s a lot of work to be done here, and I don’t think it’s necessarily unknown. It’s just that we have to start doing it. That, I think, is the challenge for a lot of organizations: they’re just not yet doing what they need to do to protect themselves.

Karissa Breen [00:26:33]:
So you mentioned before, companies need to be doing more. What do you think they should be doing, though? In terms of, like, no one wants to have more work on their plate. Right? Like, we’ve got enough things to do, Nathan. We’re trying to keep our heads above water. We’re trying to do real basic stuff like patch management right, and now we gotta think about these other things?

Nathan Wenzler [00:26:50]:
This is the core of risk management. How much risk is introduced by allowing unfettered use of gen AI systems? That’s not a question that’s gonna be answered the same way by any two organizations, but that is the question you have to answer. And so, yes, we all have a lot of work to do, but from a security standpoint, you know, we’re here to help advise the business about where to focus, on the areas that put us most at risk. And if you’re an organization that deals with a lot of critical data, protected personal information or healthcare information, intellectual property, classified information, whatever it happens to be, this can introduce a lot of risk to your organization, maybe more than some of the other things you’re already working on. It is something that has to be answered, and it’s going to be answered a little bit differently by everyone. And, you know, the other thing to remember, much like I said a moment ago, is that these tools are essentially just applications like most any other application. There’s a front-end user interface.

Nathan Wenzler [00:27:51]:
There’s a back-end database. So in terms of what to do, we kind of already know. Right? Good application security is really your first major step here. Make sure the user interface can only be accessed by authorized people. Make sure the database can only be accessed by authorized people. These kinds of controls are fundamental to any application, including an AI-based application. There might be some extra steps for data validation. There might be some extra steps in terms of, you know, moving away from a public dataset and leveraging your own internal, well-controlled set of data.

Nathan Wenzler [00:28:31]:
That’s a little bit of work there too. But I don’t see this as really all that different from any other application that we secure in our organizations. So, yeah, it’s gotta be factored in like any other risk factor. And if it puts your organization at risk, you’ve got to find the resources to put good controls in place.
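
To make the "good application security" point a little more concrete, here is a minimal, hypothetical sketch of the same pattern you would apply to any internal application: an authorization check in front of both the chat interface and the data behind it, plus audit logging. The role names and the stand-in answer function are invented for the example.

```python
# Hypothetical role model: only help desk staff may query the assistant,
# and only data stewards may modify the knowledge base behind it.
ROLE_PERMISSIONS = {
    "helpdesk_user": {"ask"},
    "data_steward": {"ask", "update_kb"},
}


class AccessDenied(Exception):
    pass


def require(role: str, action: str) -> None:
    """Raise unless the caller's role is allowed to perform the action."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise AccessDenied(f"role '{role}' may not perform '{action}'")


def answer_from_internal_kb(question: str) -> str:
    # Stand-in for retrieval and generation against the controlled dataset.
    return "This is where the internal, validated answer would come back."


def ask(role: str, question: str) -> str:
    """Front-end entry point: authorize, log, then query the internal tool."""
    require(role, "ask")
    # Audit logging is as much a control as the check itself.
    print(f"AUDIT: role={role} asked: {question!r}")
    return answer_from_internal_kb(question)


def update_kb(role: str, record: dict) -> None:
    """Back-end entry point: only approved stewards can change the data."""
    require(role, "update_kb")
    print(f"AUDIT: role={role} updated record {record.get('id')}")


if __name__ == "__main__":
    print(ask("helpdesk_user", "How do I reset my password?"))
    try:
        update_kb("helpdesk_user", {"id": "faq-1"})
    except AccessDenied as err:
        print("Blocked:", err)
```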

Karissa Breen [00:28:51]:
Just following that train of thought, what do you sort of see as the largest risk to sort of government agencies more specifically?

Nathan Wenzler [00:28:57]:
I think the real risk is the use of the public datasets, you know, the ChatGPTs and Bards and all of the other sort of public-facing things out there. When government agencies have protected and classified kinds of information, and a user makes an error, you know, they copy and paste information out into ChatGPT that they shouldn’t, that can put a lot of people at risk depending on the agency and the services. So I see that as the bigger concern for government. It’s human error. It’s, if you wanna call it, insider threat. It’s not necessarily malicious insider threat, but it is the use of those public datasets to either have information from your secured areas get out, or bad information coming in that feeds the users and causes them to make really bad decisions, which, depending on what level of government you’re in, could be very, very harmful to a large group of constituents. So it’s having good policies in place around that, making sure your training and education is there.

Nathan Wenzler [00:30:06]:
It’s a lot of monitoring to make sure that your users aren’t circumventing what controls you have and using these public things. And frankly, you have to be prepared for it to happen anyway. If there’s anything we’ve learned over the last 20 or 30 years or so, it’s that when there’s some kind of new shiny technology out there, users will find a way to use it. The reality is we can put a lot of controls in place, and we have to, but we still have to be prepared for people to use ChatGPT from their phones or leverage it from their own personal systems at home, or whatever the case might be. So there’s a lot more work that kinda has to be done in terms of monitoring and ensuring that everyone understands the risks, especially around protected data. But that’s gonna have to be the focus for government agencies, I think, as a first and foremost kind of thing at this point.
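
One hedged illustration of that monitoring piece, not a specific product and far simpler than real data-loss-prevention tooling: a screen that flags obviously sensitive markers in a prompt before it is allowed to go out to an external, public AI service. The patterns below are placeholders only.

```python
import re

# Placeholder patterns for material that should never leave the environment.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b(top secret|classified|protected)\b", re.IGNORECASE),
     "classification marking"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible national ID number"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"), "private key material"),
]


def screen_prompt(prompt: str) -> list:
    """Return the reasons a prompt should be blocked from public AI tools."""
    return [label for pattern, label in SENSITIVE_PATTERNS if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Draft an email summarising this Protected briefing for the minister."
    findings = screen_prompt(prompt)
    if findings:
        # Block, log, and point the user at the approved internal tool instead.
        print("Blocked before reaching the public service:", ", ".join(findings))
    else:
        print("Prompt passed screening.")
```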

Karissa Breen [00:30:54]:
So even if people understand the risk, right, do you think people care? Now, I say that, and what I mean by that is, if I could reduce the time... like, obviously, I’m an entrepreneur, it’s a little different, but if I’m putting myself in someone else’s shoes, if I could reduce the time of doing my work because I can use ChatGPT to do it for me, and maybe, I don’t know, it reduces 20% of my work life, for example, am I more inclined to do that irrespective of the risk? Because I think that’s people’s mindset.

Nathan Wenzler [00:31:20]:
It very much is. And in a lot of cases, it might be appropriate to do that. But if we’re talking about, you know, government especially, there are so many areas where you’re dealing with very sensitive datasets. You’re dealing with, again, military, classified information, financial information, healthcare. There are so many areas of government that deal with really, really sensitive information. So that does become a question for the organization, essentially: is the efficiency gain that you might see for your users worth the risk of exposing all of that data if somebody makes a mistake? And I suspect in those areas where the data is more secretive or classified, that answer is gonna be no, because the risk is too great. So this is gonna have to be an exercise that a lot of organizations go through, and this is part of the education process. You may have to help your users understand this.

Nathan Wenzler [00:32:13]:
Like, look, you work for defense. We just cannot risk any of these things getting out, and so, no, these tools won’t be used, or they will only be used in this context, or whatever the policy decision is. That may have to be the decision in some of these organizations in order to ensure that the data isn’t compromised or, you know, doesn’t escape the organization and get, sort of, stolen, if you will, through these public datasets. So there’s no single answer here. I mean, every organization is gonna have to make sort of a different call about this and be mindful of the fact that users do make mistakes, or they sometimes have good intentions, and those good intentions can lead to really problematic security incidents, and we’ve got to be prepared to deal with that as well. But that’s part of the whole process we’re wrestling with right now.

Karissa Breen [00:33:06]:
And you’re absolutely right. Makes sense. Right? If you’re dealing with sensitive information, you’ll think twice about it, but maybe that’s because of just the pedigree that you and I, and other security people out there, have come from. But what about the average person? Like, Mildred is, like, doing a job that, you know, maybe doesn’t come from our sort of background. Maybe it wasn’t their intention. They just didn’t really think it through. Right? Didn’t think this could be a problem until it is. Right? Like, again, maybe the intention was to do the right thing, and it didn’t quite turn out that way, and then there’s a problem.

Karissa Breen [00:33:37]:
That’s often where you and I have both seen things go wrong. It’s not like someone’s intentionally being like, I’m going to purposely, you know, ruin it for everyone. It just may have been a mistake. They didn’t think it through.

Nathan Wenzler [00:33:48]:
Well, that’s exactly what I said: it’s still a form of insider threat, it’s just not a malicious insider threat. Accidents are legitimate risks. Accidents can cause legitimate security incidents. I mean, this is something that’s not new to AI tools. Let’s take a step back here for, you know, a bit. Let’s think about when wireless access was pretty new and such a kind of wow sort of thing. A lot of organizations did not implement any form of wireless network because they were concerned about the security risks and the harm that it could cause.

Nathan Wenzler [00:34:28]:
And what ended up happening? Well, users wanted to use wireless. It was cool, right? So they’d bring their own access points from home, plug them into corporate networks, and configure them to just broadcast wireless access in a totally unsecured way directly into a corporate network. Call it shadow IT, call it whatever term we wanna use, but this is a problem that’s been going on for a long time. Even in that example of wireless, it was done with good intention, right? I can use my laptop from a different office because it’s wireless. I could use it somewhere down the hall. I don’t have to be tied to my desk. I can do more work in more places. Yeah.

Nathan Wenzler [00:35:08]:
That’s all good intention, but it was a big security risk. We’re essentially talking about the same thing here. People have really good intentions: hey, it can write this email for me. I’ll just copy and paste all this really secret data into it, and it’ll write the email that I have to send internally. Yeah. Your intention’s good.

Nathan Wenzler [00:35:26]:
It saves you a lot of time, but you’re putting the organization at risk. So what’s happening, and the concern around this, isn’t new. What is new is the scale and speed and scope of all of this, because AI has become kind of ubiquitous. It’s accessible everywhere. I can get it from my phone. I can get it from home. I can get it from anywhere. So it is a little bit more of a complicated problem to manage just because of that accessibility.

Nathan Wenzler [00:35:53]:
Fundamentally, we’re talking about security incident response and how you’re prepared for when those accidents happen. What do you do about it? How do you recover? How do you isolate the damage? How do you, you know, deal with the user in question? Do you have a, you know, way to handle that side of it from an HR perspective? There are a lot of different factors, but it’s no different than any other sort of accidental insider threat that we’ve been dealing with for decades now.

Karissa Breen [00:36:22]:
So then, sort of, this is my next point around, obviously, we wanna encourage innovation and AI, and we’ve sort of discussed the benefits of it, within reason, to the point where you’re not, you know, jeopardizing the company and putting things at risk. So how do you find the balance between that, though? Because, of course, we want people to do more meaningful tasks and critical thinking, strategic tasks, and we can eliminate or automate, you know, using AI, etcetera, some of the more menial tasks. But then, obviously, you don’t wanna push it to the point where it’s like, oh, I just exposed a whole bunch of, you know, trade secrets. I shouldn’t have done that. How do you find the equilibrium?

Nathan Wenzler [00:36:56]:
Well, I think, again, going back to what we talked about a little bit earlier, this is the place where moving away from leveraging broad public datasets like a ChatGPT inside a corporate environment, and moving to smaller, controlled, internal-only datasets and that kind of purpose-built tooling for a particular task or a particular function, that’s going to be the way we can balance the security needs. You can ensure that, you know, if I have an internal tool and the database is all internal to me, and it’s not available or accessible to the public Internet or any other user out there, it’s not great if my user copies and pastes classified information into it, but at least I know that data and everything in it is still contained within my environment. I still have some amount of control there. So the movement, I think, over the next, you know, couple of years, roughly, is multiple smaller implementations of focused AI usage that is well controlled, like we control any other application. We’ll move away from leveraging the public datasets and the public tools for corporate benefits. We’ll use these private, controlled versions for our corporate work or for our government work, so that we can ensure that all the security controls we need can be in place.

Nathan Wenzler [00:38:21]:
And I think that’s gonna be the way that everyone’s gonna have to manage this going forward.

Karissa Breen [00:38:26]:
So where do we go from here, Nathan? How do you think we move forward? Because as we’ve sort of spoken about a lot, the undertone of this interview is: it’s hard. We don’t have all the answers. You know, we’re trying. It could take some time. But what do you see, like, practically, as next steps?

Nathan Wenzler [00:38:40]:
I think that’s already happening to some extent. I think that people are realizing the dangers, and they’re starting to ask the questions about how to best secure these things. And the way to do that, again, is this smaller, sort of focused, purpose-driven tooling. So some of that may come from your vendors and partners that you’re already using; they may be leveraging it in that way. Some of it may be internally developed. If an organization is large enough and they have a very particular need and a very specialized dataset, they might leverage a gen AI engine and set of tooling to build something internally. But I think those are really the next steps here.

Nathan Wenzler [00:39:21]:
The next steps are understanding where it can help in that very practical kind of way, and then looking at the best ways to implement that, whether that’s through a third party, a trusted vendor or a partner, or building it yourself. Those are sort of the next steps here in terms of how you’re going to leverage it going forward. Like you would for any other business tool or security product, look at the need, look at the place where it can help you most, and then implement accordingly from there. And like I said, from the organizations I talk to all over the world, that’s already happening. We’re already seeing folks training and educating their users away from the public-facing things, or putting policies in place that say, you know, within this organization we don’t use those tools, and they’re starting to leverage more things internally because of the security controls that they can put around it. So we’re already heading down the right road. It’s just gonna have to continue to play out for the next bit of time while everyone implements and embraces that.

Karissa Breen [00:40:22]:
So, Nathan, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?

Nathan Wenzler [00:40:27]:
Yeah. I honestly think the most important thing is that it’s really important right now for everyone to cut through the hype. I mean, we’ve heard so many horror stories about how AI is going to allow attackers to be unstoppable. We’ve heard AI is going to replace every security practitioner on the planet, that it’ll just do our jobs for us. There’s a lot of buzzwords and rhetoric and hype around it, and it’s really, really important that people cut through that and start to see that this is a tool like any other tool. It can be very powerful when it’s implemented correctly, so let’s talk about the best ways we can leverage this particular tool in our toolbox to be more efficient, to better understand risk, to, you know, understand my inventory, my asset space, better, whatever the need is. I think the more that people get to a practical place about this, the easier it will be to understand the problem, the easier it will be to put the right security controls in place, and the better position they’re gonna be in to actually get the benefits from it without panicking and just being in this sort of constant state of putting out fires around it. We can get to a place where it actually is beneficial and does a lot of good for organizations.
