The Voice of Cyber®

KBKAST
Episode 359 Deep Dive: Omar Khawaja | Data Intelligence for Cybersecurity
First Aired: March 18, 2026

In this episode, we sit down with Omar Khawaja, Vice President of Security and Field CISO at Databricks, as he explores the intersection of data, AI, and cybersecurity defense. Omar addresses the real fatigue facing CISOs amidst rising AI hype, emphasizing that combining high-quality data with AI—not just AI alone—is pivotal to effective cyber defense. He shares insights on the growing need for organizations to get their data in order, challenges in adapting operating models for AI, and the importance of reducing security tool sprawl through robust, unified platforms. Omar also discusses the increasing role of AI agents in automating routine tasks, the evolving skills required to leverage AI securely, and why mature frameworks and a growth mindset are critical as organizations navigate the complexities and risks of AI adoption.

Omar Khawaja is the VP, Field CISO at Databricks where he gets to work with CISOs to help them securely shepherd their organisations’ data+AI journey. He leads Databricks’ Field Security practice globally, teaches at Carnegie Mellon’s CISO program, sits on the boards of HITRUST and FAIR Institute, spent 9 years as CISO of a $26B enterprise and is leading a team that developed an actionable AI security framework for 11,000 enterprise data platform customers at Databricks.

Vanta’s Trust Management Platform takes the manual work out of your security and compliance process and replaces it with continuous automation—whether you’re pursuing your first framework or managing a complex program.


Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Omar Khawaja [00:00:00]:
The challenge with leveraging data and AI when you’re trying to do it in a hurry and you don’t have the ability to build the requisite skill sets to actually get the outcomes from it, that only increases and exacerbates that sense of fatigue. We get fatigued and tired when we feel like we’re not getting an outcome. When we’re working hard and we get an outcome, that outcome, that benefit actually energizes us. And offers a feedback loop for us to keep going. But the reality is in the world of AI, in order to get to the outcomes, there’s a lot of work that needs to be done on the data engineering side, on the governance side.

Karissa Breen [00:00:55]:
Joining me now is Omar Khawaja, Vice President of Security and Field CISO at Databricks. And today we’re discussing how data and AI is every organization’s strongest cyber defense. Omar, thanks for joining me and welcome.

Omar Khawaja [00:01:13]:
Delighted to be here. Thanks for having me, Karissa.

Karissa Breen [00:01:15]:
Before we got on the call, I’d been watching a few of your interviews, and you explain things quite meticulously, so I’m really keen to get into that side of things today. But maybe let’s start with: talk to me about how AI is every organization’s strongest cyber defense. Isn’t every vendor sort of saying that? And I say that because I’m interviewing people like you every week, and I do believe everyone’s saying the same sort of thing, right? So I’m keen to maybe get a different perspective from yourself.

Omar Khawaja [00:01:47]:
If there was one nuance I would offer from that, I would say our point of view is not that AI is going to solve it, that AI is the best defense, but we think data plus AI is likely the best defense. And there are some times where you absolutely want to use AI. However, at the very least, you know, before you start to take a hammer at what looks like not a nail, it’s important to make sure that AI is going to be the right solution. And so you do a little bit of a problem fit test. If the problem that you’re looking at doesn’t have a lot of ambiguity, doesn’t have a lot of variability, there’s not a whole lot of unstructured data involved, it doesn’t require much reasoning, it doesn’t require generation, chances are picking AI as the solution to your problem is probably not going to be the right call. In that case, if your data is mostly structured, you don’t have a lot of ambiguity, you don’t have a lot of variability, and there’s no reasoning or generation required, you’re much better off using more traditional analytic techniques and business intelligence techniques to mine your data and extract value. So for us, we really believe that fundamentally what you need is to bring data intelligence to solve your problem. And data intelligence means that you have the entire arsenal of tools available to you and you pick the tool that’s the best fit for the problem that you’re actually trying to solve.
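The problem fit test Omar describes can be encoded as a toy decision rule: if none of the signals that favor AI are present (ambiguity, variability, unstructured data, reasoning, generation), traditional analytics is likely the better tool. A minimal sketch; the names (`Problem`, `best_fit`) are illustrative, not a Databricks API:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    """Signals Omar lists that make a problem a good fit for AI."""
    ambiguous: bool
    variable: bool
    unstructured_data: bool
    needs_reasoning: bool
    needs_generation: bool

def best_fit(p: Problem) -> str:
    """Pick AI only when at least one AI-favoring trait is present;
    otherwise fall back to traditional analytics / BI."""
    if any([p.ambiguous, p.variable, p.unstructured_data,
            p.needs_reasoning, p.needs_generation]):
        return "AI"
    return "traditional analytics / BI"
```

A structured, unambiguous reporting problem would route to BI, while a problem over unstructured text requiring reasoning would route to AI.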

Karissa Breen [00:03:17]:
Do you think people perhaps, and like, as I said, every vendor is saying AI and all of this sort of stuff, right? And then people are saying, like, oh, well, maybe I’m confused. Do you believe, given your role and the caliber of person that you are, that people are a little bit fatigued with the whole AI conversation as well? Because I do believe that everyone’s got their own version, and that’s why I’m going to get your version of it as well, Omar. Is that something you’re hearing a little bit from some of your customers? Do you think they’re tired of it?

Omar Khawaja [00:03:54]:
Yeah, I mean, Karissa, the fatigue is real, and the fatigue is real especially for parts of organizations. And if we were to talk about the average CISO’s organization, I’d say they are feeling the fatigue because they’re hearing about AI from a couple of angles. One angle is: every part of the business that has a use case that they think AI can solve, the CISO’s organization has to figure out how to enable the secure and responsible and ethical deployment of that AI. They’re being barraged with AI use cases on an ongoing basis. And then of course, when it comes to being able to solve their own problems within the security organization, particularly on the SecOps side, oftentimes AI is being sold as the solution. What’s happening though, Karissa, is that the average CISO org is overwhelmed. They were overwhelmed before AI became a big thing.

Omar Khawaja [00:04:55]:
And then AI came in and introduced even more VUCA. I don’t know if you’ve heard of that acronym before. VUCA is volatility, uncertainty, complexity, and ambiguity. It’s the way the world is trending from a macro perspective: you add in the technological changes, you add in customer expectations, you add in globalization, and all of those things collectively increase the amount of V, U, C, and A in the environments that we’re in. And cyber teams are particularly impacted by increasing VUCA. So cyber teams are underwater. They don’t have the time or the capacity to go and learn a whole new discipline, AI and the world of data being a fairly new discipline for the majority of security teams. Now, there are maybe the 5% of security teams at the margins that have very high proficiency and acumen around data and AI, and they’ve got data and AI experts on the team. But the vast majority of security organizations don’t have the luxury of having data and AI talent in their organization.

Omar Khawaja [00:06:15]:
So they’ve been working really hard to figure out how to bring in some of those skill sets. And the challenge with leveraging data and AI when you’re trying to do it in a hurry, without the ability to build the requisite skill sets to actually get the outcomes from it, is what you can imagine: it only increases and exacerbates that sense of fatigue. We get fatigued and tired when we feel like we’re not getting an outcome. When we’re working hard and we get an outcome, that outcome, that benefit, actually energizes us and offers a feedback loop for us to keep going. But the reality is, in the world of AI, in order to get to the outcomes, there’s a lot of work that needs to be done on the data side, on the data engineering side, on the governance side. And if an organization’s data estate is very fragmented, then getting value from its data takes quite a bit of checking the boxes and the prerequisites before the AI can start to automatically solve problems. That is what’s causing many CISO organizations to experience fatigue.

Omar Khawaja [00:07:27]:
The ones that said, we are not going to assume that there are shortcuts and easy buttons. We’re going to go through the process. We are going to get our data in order. We are going to make sure it’s well governed and organized, and we’re going to build some of that expertise within, or we’re going to build strong relationships with the data team in our enterprise. Those teams are the ones that are making a lot more headway.

Karissa Breen [00:07:55]:
Okay. So, I wanna get into this a little bit more. This is quite interesting. So you said getting the outcome. Would you say the industry, generally speaking, at the end of last year, 2025, could sit back and be like, hey, I think I got an outcome?

Omar Khawaja [00:08:11]:
Yeah, I’d say, you know, generally speaking, Karissa, with the exception of digital natives or technology organizations, as you get to larger organizations that aren’t technology organizations, I would say the majority feel like they did not make as much progress with their AI initiatives and get as many outcomes as they wish they would have by now. Some of that is explained very nicely by Amara’s Law. And Amara’s Law basically says in the short term, we overestimate how much we can do, and over the long term, we underestimate what we can do. And so we’re at this lull, this inflection point, that we’re going through right now where we’re feeling like, is this technology all that it’s cracked up to be? And a lot of organizations are going to end up deciding, you know what, maybe this was hyped. And certainly from a marketing perspective, there are corners of the market that are hyping this and making it seem like it can solve more problems with less effort than is actually the case. But the other piece of it is, if I have not been nearly as successful with my outcomes, it’s actually very tempting for me to look outside of myself, to say it’s more the technology and it’s less me. The organizations that do that will find themselves falling a little behind versus the organizations that say, you know what, if we didn’t meet our expectations, can we look within to see what are the things we can do to better meet our expectations? Now, that may be picking the right vendors and technologies and holding them accountable. That may be some training in the organization.

Omar Khawaja [00:09:56]:
A big piece of it is going to be looking at your own operating model internally and saying, you know what? Our operating model worked fine pre-AI. However, our technology, our governance, our risk, our security, our privacy, our operating model as it relates to AI isn’t really optimized for the world of AI, and we need to make some material updates to it. The organizations that are taking that approach will, over the midterm and long run, end up getting significant outcomes from AI.

Karissa Breen [00:10:30]:
So then do you also think that perhaps, and you mentioned it before, it appears to people that we haven’t made as much progress? So do you envision, Omar, that coming into 2026 now, at the end of the year, all those people you’re speaking to on the frontline can sit back and say, hey, I think we actually have done a lot more? Because it’s new and people are trying to understand it, there’s a lot of people going on and on about it, and as you said, there’s a bit of fatigue out there. So do you think it’ll be a little bit more like compound growth? The first couple of years you’ve been trying to test it out, learn about it, and now, coming into 2026, a lot of those feelings, that perception of not really having gotten the outcomes, will probably change towards the end of the year? I’m really just curious.

Omar Khawaja [00:11:26]:
I think it will. And, you know, it will happen for a lot of organizations initially; they are going to make that transition. But there are going to be some laggards that may not make that transition to that sentiment by the end of 2026. Some of them may bleed into 2027 before they get there.

Karissa Breen [00:11:46]:
Okay, so I want to talk about data now for a moment, given that’s sort of the core of what we’re going to discuss today as well. We’ve obviously seen a phase through the last 10, 15 years where it was about getting all this data, all this intelligence. I worked for a bank before on the practitioner side, and it was like, gather all the information so we can sell banking services to people. So, like, Karissa, she can buy a loan or whatever it is, because we’ve got the intelligence on her. And then, if you look at cloud, there were the days when we saw people say, we’ve got rid of all this data because we don’t want it, because of all these breaches and holding all this PII. And now we’re seeing people and companies being heavily held accountable, especially obviously now in the US, but I’m an Australian, and being heavily fined for just unnecessarily having all of this data, right? So talk to me a little bit more about the prevalence of data that’s required for the AI side of things. Because I’m hearing from a lot of people as well saying, hey, we’re looking at you.

Karissa Breen [00:12:48]:
You mentioned before structured and unstructured data, all this type of stuff. But the part that I’m interested in is companies now trying to get to the place of getting rid of the data because they feel it could be a risk in case they get breached, and then they have to go and explain to some government body why they had all this unnecessary data about customers like myself.

Omar Khawaja [00:13:14]:
Yeah. On the one hand, that does feel logical: if we have a lot of data and data is getting breached, then is the value of us keeping the data potentially negative compared to the value of us getting rid of it? It wouldn’t cost us anything to get rid of the data, and we’d probably save some money on storage. But if retaining the data ends up increasing our data breach risk, the cost of that could be in the single-digit or multi-digit millions very quickly. And I think for a lot of organizations, getting rid of data may be the right answer, assuming that they feel they don’t have a path to extract value from that data, to monetize it in some way, either for competitive benefit or for expansion, or for increasing profitability or reducing risk. There’s got to be some business value to that data. What we see with many of our customers is they want to be data and AI companies, right? It doesn’t matter if they’re retailers, if they’re in the biotech space, if they’re in the aerospace space. They realize that the leaders in each of those respective industries in the next 4 or 5 years are likely all going to be ones that have doubled down on data and AI. So the leaders are likely not going to say, you know what, it’s cheaper for me to eliminate this data because I’m afraid of a data breach.

Omar Khawaja [00:14:51]:
For the leaders in every industry, what we are hearing from them, and I’ve been on, I think, 3 calls today just about this, is on the one hand they’re very aware and attuned to the fact that if there is a data breach, it is going to have an immense negative impact on their organization, their reputation, their brand, their customer trust, and so on. However, their conclusion is not in some ways the easy way, which is, well, let’s not do this, let’s get rid of our data. Their next step is: how do we make sure we secure this data such that we have confidence that the likelihood of a data breach happening to this data is very low? That’s the approach the most mature organizations, many of our customers, are taking. They want to leverage data and AI. They are very risk-aware and risk-conscious, and they’re going through the efforts to make sure that their data is well governed and well secured. Some of that means reducing the inherent complexity of their sprawling and sometimes fragmented data estate. So one of the reasons our customers love us is because they can unify a lot of their data, whether it’s structured or it’s unstructured or it’s streaming.

Omar Khawaja [00:16:09]:
In the Databricks environment, because it’s all built on open source, they get the extensibility and the freedom to do the things they need to do without fear of lock-in.

Karissa Breen [00:16:20]:
Yeah, so I think a lot of the things you’re saying are really interesting. And one of the things that I want to get into a little bit more, because I do believe this is a really big topic: why do you believe data is now more powerful than traditional security tools?

Omar Khawaja [00:16:33]:
Some of this, I would say, Karissa, is because of the maturity of the many other tools that we have. I’d say, you know, 10 years ago, it felt like from a security perspective, we didn’t have all the tools we needed to protect the different assets, the different technologies that we had on-premise, in the cloud, at rest, in transit, in databases, in applications, on servers, on endpoints, on mobile. And so we were sort of filling up the slate to say, let’s make sure we’ve got appropriate controls to get coverage across all of the places where our data flows and where we have digital processes that the organization is reliant upon. You know, where we are now, the problem is very seldom, I don’t have the right technology, I don’t have the right tool, I don’t have a way to protect X or Y or Z, to detect X threat, to protect against Y vulnerability. That’s seldom the case.

Omar Khawaja [00:17:33]:
So it’s not as much about what I need to do being the gap. The underlying gap, as I have many conversations with CISOs, is almost always: we’re not quite sure how to do this at the scale, with the uncertainty, with the level of change and variability and diversity. How do we do that? So scanning systems and identifying vulnerabilities is not really that hard. Fixing vulnerabilities is a little bit hard. Fixing vulnerabilities in 10 systems is not hard at all. Fixing vulnerabilities in 1,000 systems is kind of hard. But what do you do if you have a million systems? What do you do if instead of having 3 log sources, you have 3,000 log sources? So what we need to do, which is run the vulnerability scan, fix the vulnerability, collect the logs, write detections: the what is not hard. If you ask a security professional in any department, on the identity team, on the SecOps team, on the data protection team, can you do X and Y and Z tasks, they say, individually, none of these tasks are all that complicated, because at this point we’ve figured this out. We’ve had a phenomenal streak of innovation within the world of cyber over the last couple of decades. But what’s really, really hard is the fact that I have 300 SaaS vendors and I’ve got 27 teams that are building applications internally, and they’re all disparate.

Omar Khawaja [00:19:05]:
And I’ve got multiple IaaS cloud providers, in multiple regions and in multiple tenants, and I keep doing M&As. When you layer that on, that is where the complexity is. And the grand unifier across all of that is: how do we organize the data? How do we build vector databases? How do we build the strong knowledge graphs that we need to start to make sense of all of this? And all of those solutions are really data and AI solutions.

Karissa Breen [00:19:39]:
Okay, this is really important. So one of the things, as you would know, is there’s lots of tools, right? Like many companies had all these point solutions, and now they’re at the point where they’ve just got too many and they’re trying to consolidate, as you would have heard, like platformization, etc. One thing that I’ve noticed, even in the last 5 or so months: I recently attended a conference and a company came out and said, look, we’re doing all these things. And I’m like, hang on, they’re doing all these things. It pretty much makes these other point-solution vendors out there redundant, or almost redundant, right? So what I’m seeing, and maybe you can talk about this a little bit more, is there’s a lot of overlap and it’s all really confusing. I see more of these vendors out there doing a lot of these different capabilities, and they’re not as specific as they used to be, right? I’m just seeing a lot more of that coming through in the last sort of 6 to 12 months. So I’m keen to get your thoughts on that, because customers are saying, and I’ve interviewed them, hey, we can’t have all of these tools.

Karissa Breen [00:20:47]:
The other thing they’re saying is they’re probably not using it. They’re wasting money on stuff they’re not even using. They didn’t even know they had it, because the person who bought it left the company 10 years ago and they’re still paying this vendor all this money. And I really think there is something there; when I get into the customer’s mind, it’s something I’m hearing a lot when I’m out in the field talking to people.

Omar Khawaja [00:21:06]:
I just read a paper and there’s a great word for this; it’ll come to me in a little bit. But maybe the best metaphor for this is when you’re earlier in life and you’ve just graduated college and you finally have a real paying job and you’re in your own apartment or you get a home, and you’re buying items and you look at these empty walls and you feel like you should have something hanging on them. And you get the paintings and you get the furniture and you get the window dressings and you buy all of this stuff. And for the longest time, you just keep feeling like, if I bought one more thing, my life would be closer to being complete. And then you get to a point in your life where you realize the only way for the quality of my life to improve is not by buying more things and adding them to my home, but by removing things. Having fewer things in my home would actually improve my quality of life. When it comes to security tools, many organizations are now in that season of their program where, in order to get more value from the security program, they actually need fewer tools. They don’t need more tools, because they might have the same capability embedded in 2, 3, 4 tools, or they may have tools that provide such a narrow sliver of a capability that the upkeep and the maintenance of it and the integration of it and the training of the personnel on it is just not worth the effort. So, you know, the more that organizations move to platforms, and particularly platforms that are open, that get to leverage the entirety of their enterprise data, the more they’re getting to data intelligence. And that’s exactly what data intelligence for cyber is giving you: that strong platform that spans across your enterprise. You may have some individual tools here and there that plug in, but the reality is many of the capabilities and tools that you likely need are already built into that one platform.

Omar Khawaja [00:23:17]:
And so the idea is, going back to VUCA, you’re really trying to address the C and the A, the complexity and the ambiguity, by picking a robust platform that’s built for the future: not the best cyber platform to do data and AI, but the best platform to do data and AI, period.

Karissa Breen [00:23:37]:
I like your example about the house. I think that makes it easy to understand and digest. So, that whole adage about history repeating itself. Okay, just sit with me on this one. Back in the day, I don’t know, like 20 years ago, companies, big banks and stuff, would just outsource all their capability to, like, IBM, for example. Then we’ve obviously seen the rise of point solutions, and customers are sort of handling 50 to 200 tools or something. Are we going to start to see, as history repeats itself, a return to the IBM sort of days? And what I mean by that is, is it just going to be sort of outsourcing? Because, as I mentioned before, there are a lot of these companies out there where, well, you kind of don’t need 5 tools, because this company is actually doing, like, 5 or 6 of the things that you’re after anyway.

Karissa Breen [00:24:24]:
So are we going to start seeing customers shedding themselves from the tools? I know we sort of are seeing that generally, but is this something that’s going to become more prominent now in 2026 and beyond? Because why should customers pay for stuff they’re not using or getting value from really at the end of the day?

Omar Khawaja [00:24:41]:
If we think about many of the capabilities and things that we had as bespoke, ad hoc, add-on tools 10, 15 years ago: over this period of time, a lot of that has been built into the operating system. And then what happened to the operating system? It got virtualized. And then what happened? It got moved to the cloud. And then what happened? Most of us, from an enterprise perspective, are not thinking as much about operating systems. We’re thinking about containers, and even that gets virtualized. And we’ve moved up the stack from infrastructure as a service to platform and to software as a service. And every time.

Omar Khawaja [00:25:19]:
We move up the stack, what we’re choosing to do is abstract out the things that are lower level, and we’re relying on someone else to take care of that, because they can likely do it at scale, they can do it cheaper, they can do it more effectively. That frees us up to do things that are closer to delivering value to the business, things that are unique to us and only we can do. So I think that is a very healthy approach: to say, what are the things that I can shed and someone else can do? Because the list of items in my backlog is increasing much faster than my capacity is. So the only way for me to address the high-priority items in my backlog is to take some of the things that are taking up current capacity, realize others can do that well, and move that over to others so I can focus on the things where only I can uniquely add value.

Karissa Breen [00:26:20]:
So then here’s another question before we sort of move on to AI agents. So when I messaged you on LinkedIn about AWS re:Invent, there were that many vendors on the show floor. I was so overwhelmed when I went for a walk on the show floor, for example. So whilst I get the sentiment, and maybe I’m looking at it closer than I have before, I feel like there are more people and more vendors out there than I’ve seen before. So it’s like, are we really trying to reduce tools and vendors, but then they just keep spawning more? Like, I don’t know. What do you realistically think is going to happen?

Omar Khawaja [00:26:57]:
Given the complexity of the underlying technology that we’re securing and the level of creativity and innovation among the threat actors, the number of things that we need to protect and the number of things that we need to protect from have been increasing exponentially. And at the intersection of every one of those is potentially an opportunity for a startup to create a product to uniquely and best solve it. However, if you look at what is happening, most of these startups either get acquired within 2, 3, 4 years, or they end up in the margins, or they end up going out of business. And so ultimately, what’s been happening is more and more of the security capabilities have been shifting to larger platforms. So CISOs have to really be thinking about what are the core platforms they need to rely on, just like HR teams realized a while ago they need an HR information system, and finance teams realized a long time ago that they needed ERP. What does that start to look like in the space of cyber and security? We think at the core of that is going to be some kind of a data intelligence platform. And on top of that, you absolutely will plug in different capabilities. A lot of that will be aligned to the specific risk profile of the organization.

Karissa Breen [00:28:22]:
And I’ve heard other people say that, like, they’ll either have an M&A or, you know, they’ll go out of business. Given everything you mentioned before, do you think we’ll just get to the point in the industry where we’ll have a couple of big players and that’s kind of it? Then we’re going to have all these startups, new sorts of capabilities that come out; you’re always going to have that. But do you think these big businesses, like your Ciscos and Oracles and friends, will just absorb these companies, and then it’ll just be like, oh, these are the top 10 big players, we go to them and we get everything from there?

Omar Khawaja [00:28:58]:
Size is an advantage and it is a moat. And as cyber risk continues to be listed in most research studies as the number 1 or number 2 enterprise risk, leaders are going to become more hesitant to work with smaller organizations. They’re likely going to want to work with organizations that have a track record, that have been doing this for a while and built up a large, broad set of capabilities that are going to address many of their needs, versus addressing just a sliver of their needs. So on the one hand, I think the big players will likely get bigger, but given that the needs and the demand aren’t going to slow down, there will be, for the foreseeable future, a very healthy market of cyber startups that will continue to flourish.

Karissa Breen [00:29:49]:
So Omar, I want to switch gears and talk about AI agents. Again, everyone’s talking about them and so are we. So I really want to hear your version of what you guys are doing in this space. And do you think people, and maybe this is an Australian term, might sort of kick back too much, put their feet up, because we’ve got all this other stuff happening? And I know that’s generally not what’s happening, but I’m just trying to paint a picture, because I think it’s important for people to understand that whilst there’s still a lot of work to do, hey, we can get agents to do some of the more menial tasks that perhaps don’t offer a lot of value, for example. So I’m keen to get your sort of lay of the land here.

Omar Khawaja [00:30:31]:
I’ll make the distinction between agents and non-agentic AI, and the value that agents bring that is additive to traditional non-agentic AI. What agents allow is the ability to take actions and the ability to call functions, many of which are going to be calls to deterministic systems. So, you know, Karissa, if you recall a year and a half, two years ago, there was a lot of talk about AI hallucinations. Now there’s still some of that talk, but much of it has receded. In large part, that’s because of the shift to agentic. I’ll give you a really simple example of how I think about agentic. Up until maybe a couple of years ago, when I would sit down with my wife on the weekends and we would go through the schedule for the upcoming week, every now and then my wife would say something like, hey, I’ve got a girls’ dinner on Wednesday night, so I’m not going to be home. You’re going to have to feed the kids and bathe the kids and put them to sleep.

Omar Khawaja [00:31:38]:
And my answer would almost always be, yeah, honey, that’s fine. That makes sense. And then the night before, she would remind me, hey, I’m going to be out tomorrow night, so you’ve got this, right? And I would hesitate, and then I would say, actually, I have something tomorrow night. And we would have this conversation where she would say, but you said you would be able to make it on Wednesday night and do this, we had this conversation. And I’d say, I don’t remember having that conversation, and I actually have something Wednesday night. I’ve got this client meeting, or I’ve got this commitment, or I’m traveling, what have you.

Omar Khawaja [00:32:12]:
And so what we realized through that process, or what I realized, she already knew, is that my probabilistic human brain, the analog to the LLM, is imperfect. Now, I wasn’t lying. I wasn’t willfully making up that I was free on Wednesday, I genuinely believed I was free on Wednesday. So now we have a new rule in our house: when we sit down and review the schedules, my wife says we’re only going to do it once you take out your calendar and look at it. And so now when she asks me the same question, are you available on Wednesday, my probabilistic brain decides to run a query, a tool call, against a deterministic system, namely the calendar app that is in my pocket. And what that does is: my probabilistic brain, married with the deterministic calendar that is 100% accurate, means all of a sudden my accuracy has gone way up. More importantly, the marital discord has gone way down.
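The pattern Omar describes, a probabilistic model deferring to a deterministic system of record via a tool call, can be sketched in a few lines. This is an illustrative toy, not anything from Databricks; the names (`check_calendar`, `am_i_free`, the `CALENDAR` data) are all hypothetical.

```python
# Illustrative sketch: an agent answers a scheduling question by querying
# a deterministic tool instead of "remembering" (i.e., guessing).
from datetime import date

# Deterministic system of record: the calendar app in your pocket.
CALENDAR = {
    date(2026, 3, 18): ["client meeting, 6pm"],
}

def check_calendar(day: date) -> list[str]:
    """Tool call: look up the authoritative source, never guess."""
    return CALENDAR.get(day, [])

def am_i_free(day: date) -> str:
    # The probabilistic model's only job is to decide *to call the tool*;
    # the answer itself comes from the deterministic lookup.
    conflicts = check_calendar(day)
    return "free" if not conflicts else f"busy: {', '.join(conflicts)}"

print(am_i_free(date(2026, 3, 18)))  # busy: client meeting, 6pm
print(am_i_free(date(2026, 3, 19)))  # free
```

The accuracy gain comes from the division of labor: the model routes the question, the deterministic system answers it.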

Omar Khawaja [00:33:18]:
So when we think about agentic, that’s what it enables. It enables this ability for the AI to gain better access to the authoritative sources, the applications, the databases, the structured and the unstructured data, to go do the analysis, do the summarization, do the contextualization, whatever function it needs to, and to get answers that are going to be much more likely to be the right answers. Now, there are 3 challenges that we see with agents. One is that there are too many manual knobs. The second is that it’s very difficult to evaluate whether an agent is performing well or not. And then there’s this constant trade-off between cost and quality. So with Agent Bricks, those are the three things that we’ve worked really, really hard to address. So we’re taking much of that complexity out so we can reduce the knobs.

Omar Khawaja [00:34:13]:
We can choose the best models and techniques for your task. We can auto-generate custom benchmarks and evals and synthetic data, so we make the model evaluation process much easier, and then we can help optimize for lower cost without sacrificing quality. So the idea is our customers focus much more on how do we get to value and a lot less on how do we configure this.

Karissa Breen [00:34:38]:
Okay, so what’s come up in my mind as you were speaking: if you look at the newer generation, like Gen Z, they probably haven’t had the tenure. I mean, I’m a millennial, and, you know, there’s everyone else above the millennial sort of range. Will they have the capability, though, to discern, okay, even if I look at it, it all looks right? Because everyone’s saying to me, like, KB, we still have to govern all of this and we have to check everything’s in line, you know, checks and balances, right? But then if you’ve never really done something before, I mean, I don’t know anything about beekeeping. I could watch TikTok and YouTube and read all the stuff about beekeeping, but I’d still be a bit nervous, right? Because I haven’t really been out in the field and become a beekeeper myself. So do you think, if we look down the line at all this new talent coming through now, will they have the capability to know, yes, this makes sense, it checks out, when they haven’t really done it before? Whereas, like, historically as practitioners we had to learn all of these things a little bit more manually. And I know it makes sense because we want to do things faster and better and all of that, it’s just more that I’m hearing this come through a little bit more in recent interviews, towards the end of last year, people sort of saying they’re concerned that the talent down the line may not really have the capability to discern things. So I’m just curious to hear your thoughts on that.

Omar Khawaja [00:36:03]:
Karissa, I’d maybe go a little bit further and say the current talent in organizations doesn’t know how to discern that either, because AI and agentic are new for pretty much everyone. So for instance, on security teams and risk teams and legal teams, when asked whether or not the enterprise can proceed with an AI or agentic use case, the easiest and most convenient answer for them to deliver is no. Or the second best answer is maybe later, come back in a year. And we see that in enterprises over and over again. The most mature enterprises are saying, this is important. We have to figure it out. It’s not going to be easy because it’s new for everyone.

Omar Khawaja [00:36:50]:
You know, typically in an organization, whenever you have some complex piece of technology, you bring in the senior enterprise architect or the senior security architect, and almost always they have 20+ years of experience. Which is why they can look at an architecture, they can look at a use case, and they can distill from it what are the threats, what are the risks, what is going to work, what isn’t going to work, how do we address it. They can see through all of that because they have decades’ worth of experience. There’s no one that has decades of experience with agentic and with AI. And as we look at survey after survey, and we see this in customer after customer, the number one reason organizations don’t want to move forward with AI or with agentic is because they don’t feel confident that they can adequately secure it. And so we’ve worked really hard over the course of the last 2 and a half years to solve that specific problem. And instead of saying, you have to figure this out through 5 years of experience and then you’ll be an expert, what we did is we said, what are the synapses that aren’t firing? What are the dots that aren’t connecting? What are the reasons the instincts have not kicked in for folks to be able to analyze, risk assess, and threat model these systems, and determine with high efficacy whether they will be high quality or whether they will result in inadvertent losses for the organization? And we put multiple frameworks together to focus on just that. One of them is the Databricks AI Security Framework, which is an open, vendor-agnostic framework.

Omar Khawaja [00:38:33]:
And it starts by saying, what is AI? What is agentic AI? So there’s clarity on what the system is, and we define it as 12 components. Then we describe what are the things that can go wrong, what are the risks associated with these 12 components: here are the 62 risks. And then we define what are the mitigations in place for each of these: we identify 64 controls, and we map each one to a risk and each one to a component. So now someone that is either brand new to the enterprise, brand new to technology, or just brand new to this specific technology, namely agentic AI, can leverage that framework as a basis for how to scrutinize a use case. And with confidence, they can say, we know how to securely deploy AI in our environment. So that is the key, because what we’ve now done is we’ve enabled the instincts to apply for AI and agentic, just like they’ve applied for the many other technologies that came before it.
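The mapping structure Omar describes, controls mapped to risks, risks mapped to components, can be sketched as a small queryable data model. The component, risk, and control entries below are hypothetical placeholders for illustration, not actual entries from the Databricks AI Security Framework; only the idea of the mapping comes from the episode.

```python
# Illustrative sketch of a component -> risk -> control mapping that lets a
# reviewer ask "which controls mitigate the risks of this component?"
from dataclasses import dataclass

@dataclass(frozen=True)
class Risk:
    rid: str
    component: str      # which system component the risk belongs to
    description: str

@dataclass(frozen=True)
class Control:
    cid: str
    mitigates: str      # risk id this control maps to
    description: str

# Placeholder entries; a real framework would enumerate all of them.
RISKS = [
    Risk("R1", "model serving", "prompt injection via user input"),
    Risk("R2", "training data", "poisoned or low-quality training records"),
]
CONTROLS = [
    Control("C1", "R1", "input filtering and guardrails on served endpoints"),
    Control("C2", "R2", "lineage tracking and validation of training data"),
]

def controls_for_component(component: str) -> list[str]:
    """Return the control ids that mitigate the given component's risks."""
    risk_ids = {r.rid for r in RISKS if r.component == component}
    return [c.cid for c in CONTROLS if c.mitigates in risk_ids]

print(controls_for_component("model serving"))  # ['C1']
```

Because every control traces back through a risk to a component, a reviewer new to the technology can scrutinize a use case systematically rather than relying on instinct.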

Karissa Breen [00:39:37]:
So really what you’re saying is it doesn’t really matter whether you’ve got 40 years of experience or you’ve got 2, you could just come out of university slash college, where all the knowledge is sort of the same, and it’s like an equal playing field.

Omar Khawaja [00:39:52]:
If anything, from my experience, you know, people like me that have decades of experience, we are disadvantaged because we have decades worth of learnings and assumptions and paradigms that have been baked into our psyche that we assume this is how technology works. But AI, because of its probabilistic nature, deviates quite significantly from all of the deterministic technologies that I grew up with in the first 25 years of my career. So if I’m brand new, I don’t have anything to unlearn. All I have to do is learn. But if I’ve been in the technology game for 20, 25 years, I actually first have to unlearn stuff and then I’ve got to learn new stuff.

Karissa Breen [00:40:37]:
So are people unlearning stuff now? Is that what we’re moving towards?

Omar Khawaja [00:40:41]:
They are learning, but I also have to unlearn if I have sort of this curse of knowledge, because my assumptions of how things work are not related to AI, but I project them onto AI and assume AI is going to work that way, and it’s just not the case. But if I am learning from scratch, then I’m not making any assumptions about how AI works, so I’m going to learn much faster.

Karissa Breen [00:41:05]:
So if you’re saying, because you’ve got all these decades of experience, perhaps you’ve got all these assumptions, which I understand. And sometimes, even generally speaking, companies are like, oh, I want to hire someone that’s, you know, new and hasn’t had much work experience, so I can teach them more, as opposed to someone else that thinks they know it all. So then people at your level, are they a disadvantage to the company if they’re leading the business at a senior level and they’ve got all these perhaps misconceptions and assumptions? And I’m not saying you, I’m just using you as an example so I don’t pick on anyone else, because typically you’re just not gonna get 20 years.

Omar Khawaja [00:41:46]:
I’d say many organizations are at a disadvantage when they have leaders that aren’t willing to give space and go in with an open mind, to say, we have to go learn this, we have to go figure this out. What’s happening in many organizations is that the primordial response to something novel is fear. And when we have fear, there’s really one of three reactions: it’s fight, it’s flee, or it’s freeze. And we see quite a bit of that now. It’s a lot better now than it was two years ago. But on a weekly basis, I work with organizations where the data teams and the technology and business teams are saying, how do we get our security organization to actually review this and provide us guidance? I was on a call with probably a Fortune 100 organization earlier this week, and we asked the CISO, we said, hey, you did not approve this particular AI use case, and the team was unable to provide any reasons behind it.

Omar Khawaja [00:42:56]:
And the CISO said, well, that’s not good. We need to be able to provide some reasons why we think this is insecure. The challenge is, if I don’t understand what AI is, I’m fearful of it, and I’m not able to put into words exactly why I’m fearful of it. So we have to develop a taxonomy and very specific words to say, I’m fearful of it because of X or Y or Z. Which is what the Databricks AI Security Framework aims to do by saying, here are the 62 risks associated with AI. So instead of saying it’s scary, instead of just saying it hallucinates, because that happens to be the one thing I’m aware of, and when I just say that, I’m falling for what psychologists call the availability heuristic, I need to be able to do a more comprehensive analysis, one that is believable by the business. But the business teams are losing trust in the security orgs, because the security orgs are usually pretty confident in their analysis, but with AI, they’re still playing catch-up.

Karissa Breen [00:43:57]:
So then, Omar, to conclude our interview, obviously we’re at the start of the year 2026. I mean, if you and I speak again at the end of the year, what do you think is sort of going to happen between now and then? And I know that, like, you’re not Nostradamus, you don’t have a crystal ball, but it’s more, what do you think about on December 31st? And you’re probably not thinking about work, but what is it that you think, like, hey, that sort of vision came true in the industry, or that things like your hypothesis happened, for example? I’m keen to understand that.

Omar Khawaja [00:44:29]:
I think there’s going to be a lot more organizations through the course of this year that are willing to accept that they have a lot more to learn, and that are then going to, in meaningful ways, take the steps to say, we can’t just copy-paste how we’ve done things with other technologies, project that onto AI, and expect it to work. We tried that for 2 years, 3 years. We didn’t get the right results. Let’s go and do this the right way. It’s going to be more involved. We’re going to have to roll up our sleeves. We’re really going to have to understand a lot more.

Omar Khawaja [00:45:07]:
It’s going to involve more change, but the benefits at the end of that are going to be more than worth it, both personally for the individuals going through that journey and certainly for the organizations that are going to benefit from the outcomes. I think that is going to happen with a lot more organizations. I think the hubris is going to start to go down and more of the growth mindset is going to start to emerge.

Podcast Voice-over [00:45:38]:
This is KBKast, the voice of cyber.

Karissa Breen [00:45:42]:
Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit kbi.media to get access today.

Podcast Voice-over [00:45:51]:
This episode is brought to you by MercSec, your smarter route to security talent. MercSec’s executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out more at mercsec.com today.

Share This