The Voice of Cyber®

KBKAST
Episode 365 Deep Dive: Ashley Rose | Human Risk – The Next Frontier
First Aired: April 29, 2026
Ashley Rose is the CEO and Co-Founder of Living Security, where she is building the future of workforce security through AI-native Human Risk Management (HRM). Her work sits at the intersection of AI, cybersecurity, and business transformation—helping enterprises turn human and workforce risk into a measurable, manageable business outcome.
Since founding Living Security in 2017, Ashley has led the company through rapid growth, raising more than $25M for product development and scale, and driving consecutive years of revenue acceleration. Today, her focus is on helping CISOs and security and risk leaders move beyond traditional awareness to a data-driven, predictive model that reduces real risk and supports organizational growth.
Ashley speaks regularly at industry forums including EWF, Security ISACs, and other security and leadership conferences, sharing practical insight on topics such as human risk, AI in the enterprise, and building security programs that executives and boards actually care about. She also contributes thought leadership to outlets such as Forbes and other publications.
At her core, Ashley is a builder—of companies, products, teams, and categories. She is committed to creating a diverse and inclusive organization that reflects the communities Living Security serves, and to leading with transparency, curiosity, and accountability.
Ashley holds a BBA from the University of Michigan and is a serial entrepreneur with a background in tech and product management. She founded Living Security on a simple belief: when you empower people, they become your strongest security asset—not your weakest link.
Vanta’s Trust Management Platform takes the manual work out of your security and compliance process and replaces it with continuous automation—whether you’re pursuing your first framework or managing a complex program.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

[00:00:00] Ashley: I would say if security awareness and training worked, we’d have fixed the problem by now. And the industry knows this. I think it’s time to say that out loud and to really recognize it.

And so the future is really not blaming the human. That’s not a strategy. Right? It’s not a security strategy. CISOs as a primary target

[00:00:27] Ashley: for ransomware campaigns, security training and performance, risk and compliance. We can actually automate that, take that data and use it.

[00:00:39] Karissa: Joining me now is Ashley Rose, CEO and co-founder at Living Security. And today we’re discussing why human risk really is the next frontier in cybersecurity. So, Ashley, thanks for joining me and welcome.

[00:00:50] Ashley: Yeah, Karissa, thank you so much for having me.

[00:00:52] Karissa: Okay, so this interview is really interesting to me because there’s a couple of things. You know, I’ve been in this space for a while, you know, over a decade, and the same sort of phrases just keep resurfacing. And one of those would probably be, you know, people really are the weakest link. But what’s interesting is that’s been the narrative in the cybersecurity space, or more broadly the tech space, for 20 or so years. So why do you think nothing’s really improved since then?

[00:01:21] Ashley: Yeah, so over the last couple of decades, you know, we saw the emergence of, you know, a pretty fast-growing and large category: security awareness and training.

But security awareness and training was really the industry continuing to try to teach and educate, without ever actually effectively measuring the result, measuring the behavior of the user and whether or not it was improving, whether it was changing. And so the output that we leveraged to essentially try to measure whether or not this was effective was really those breach statistics, right: 80 to 90% of cybersecurity incidents and breaches are a result of some sort of human-based action, a human decision.

And so we had these two very disparate, or I would say conflicting, signals. We saw security awareness and training growing at a fast pace, yet human-initiated cybersecurity incidents continued to remain, you know, the number one cause of breach.

And so, double-clicking into that, what you actually recognize is that security awareness and training was actually never designed for risk reduction.

It was designed for a compliance checkbox. So those are two completely different jobs, and they have different outcomes.

And so our unique insight was that without data on human behavior, security teams were actually really flying blind, and they were unequipped to essentially improve what they didn’t measure. And so that narrative, as you suggested in the question, continued to stay the same because vendors were benefiting from the same teaching curriculum being recycled every year.

It was a compliance mandate. Every company, every organization had to have security awareness and training to check that box. And so the market kept growing despite the fact that the incidence of breaches was not reducing. Therein lies the problem. Let’s point the finger back at the people and say, stupid users, people are the weakest link. But the real problem was the lack of context, signals and, ultimately, personalized guidance.

[00:03:41] Karissa: Okay, this is super interesting. So a couple of things are coming to my mind. The first would be the security awareness. Now, even when I started doing what I’m doing, like as in today, that was a big thing at conferences, like security awareness. But would you say that the awareness is definitely there? Like, I do a lot of recon work, even on public platforms like Instagram, TikTok, just to see what the average American or Australian is saying. I would think generally people probably are aware of scams, you know, online safety, security. Would you agree with that statement?

[00:04:15] Ashley: Now more than ever before, the general public is aware of scams, things like phishing emails, the fact that some of these are fraudulent. But we’re also all aware that we should be flossing our teeth twice a day.

[00:04:30] Karissa: Right.

[00:04:31] Ashley: And so. But does that actually mean it’s getting done? Right. And I would, I would venture to say no. Right. We are aware that smoking cigarettes. Right. Drinking alcohol, these things have harmful effects on our health. Right. That we should be going to the gym, but is it actually getting done?

The answer is most often no. And so to answer your question, yes, there is more awareness, but it doesn’t actually mean that the behavior of the individuals changes. And when you’re in the moment trying to get a job done and, you know, you’re rushing because you have a hundred different tasks to accomplish, you get hit with a quick email, oh, it’s your CFO, you know, they’re asking you to, you know, do a quick wire transfer, like, reply, or whatever the request is from the social engineer. It doesn’t mean that that individual is going to, like, stop and actually make the right decision in the moment. And that’s just, you know, the discrepancy that we’re seeing. That’s the state of things.

[00:05:28] Karissa: Yeah. And you’re totally right around flossing teeth, going to the gym, smoking, etc.

So then what’s the point? I mean, it depends who you ask. Like, I’ve spoken to so many people, some are for it, some are against it. Some don’t believe in security awareness at all.

So I want to unpack this in a bit more detail, because the part that I find interesting is: do you think it’s a little bit of virtue signaling? It’s an easy answer. So what I mean by that is, when people are doing these presentations or on a panel, I feel like it’s an obvious answer to say, oh, well, people aren’t aware.

So I don’t know, like, sometimes I just feel like people just say that because it’s like they don’t have anything else to say or they don’t know the answer. So just say, oh, well, people aren’t aware. That’s why we have incidents.

[00:06:10] Ashley: There’s many ways to kind of tackle that question. So I would say, look, you know, one side of things is that it’s easier to point at a phished employee than it is to redesign the security culture. I mean, that’s just the state of things. Right? So it’s a scapegoat, I would say, in some aspects.

Additionally, most security leaders were trained in tools and technology, not behavioral science. Right. Or how to work with and how to change human behavior. And it’s really not a knock, because that’s where we’re seeing, you know, a lot of security leaders today come from. It’s a more technical background, but it is a gap that we can close. I think what we’ve seen is that the CISOs who are saying, you know, my users are the problem are the CISOs who are ultimately losing their jobs after the next breach. And so the really great security leaders are now asking, what is my human risk posture today? And so they’re grasping for, they’re desiring, that visibility. They want to understand, where are the 10% of my users that are driving 80% of my risk? And I don’t throw that out, you know, blanketly. Like, we actually looked across a hundred of our customers and anonymized data within our system. And just to give you some context, our human risk management platform touches 12 to 15 to 20 different security systems across organizations. So the most mature organizations that have very mature tech stacks are feeding their data into this platform. And so we had a third party analyze this and said, like, please help us. You know, we have some gut instincts here, but can you look across the customer base and give us, you know, some kind of average stats? And on average, we were seeing that a majority of users, a majority of employees, are actually making the right decisions on a pretty consistent basis. They’re using MFA, they’re resetting their passwords, they’re reporting phishing emails, they’re visiting the right websites, like, they’re doing the right things to help protect the organization. And it’s actually this 10% of users that need some extra love, care and attention. And so if we can get the data right for those CISOs that I said are asking for it, that are saying, what is the state of my human risk posture today?
They’re actually looking and saying, well, help me find those users that need, you know, whether it’s extra training, you know, a next-level set of controls, and let me get more prescriptive and adaptive in my program, the same way that I’m doing within my, you know, my email security program, within my web security program, within my endpoints. I want the data to understand what’s going on so I can be more strategic in how I respond and how I intervene.

[00:09:01] Karissa: Okay, so this is interesting. So you’ve got a cohort of people, 100% of people, and then you’re focusing on your 10%. Those people would be pulled out and put into a separate program, whatever that looks like.

Because the other thing is you don’t want the other people in the company to get disgruntled. And what I mean by that is I recently went to Disney World, and it’s like they explain the rules, then someone literally just does it wrong. They have to re-explain it. So you have to wait another five minutes, and you’ve already been waiting 90 minutes at this point. So it’s like you start to get agitated by your peers. But it’s like, look, you couldn’t follow the most basic instructions. And it’s not necessarily that we have to outcast these people in the company. For someone who has worked in large enterprises, you’ve got to do the thing and it’s super basic.

It’s like, could this be a little bit more advanced, considering I’m working in the security division? But they’ve just sort of blanketed it across everyone, and I think it then does a disservice to people, who become disengaged with the whole program in the first place.

[00:09:56] Ashley: That’s exactly right. Right. You’re describing the methodology right behind a human risk management platform. Right. Security awareness and training: the goal is to get checkbox compliance. So what’s the easiest way to do that? Treat everyone the same, make it easy and straightforward to click through some sort of training curriculum, you know, check the box, call it done, report up and say, great, we’re done, we’re secure. Human risk management says no. Right. The janitor that clicked that phishing email over there is actually a different risk level to the organization than the person that’s sitting over there in finance, you know, that is clicking phishing links, browsing bad websites; they’re not using MFA and they haven’t reset their password in years, you know, whatever that is. So what we’re doing here is we’re actually taking, you know, we actually have a couple hundred signals that we’re pulling into the product, and that’s actually looking across both the behavior of the user, so, as you said in Disney World, the person that’s not following instructions for whatever, you know, security requirement or policy that’s been put into place. But we’re also overlaying that with two additional signals that really matter.

One is threat. So we want to know who in our organization is under the most threat. That could be that they’re getting, you know, a higher amount of malicious or potentially malicious email coming into their inbox, so there’s a greater number of opportunities for them to fall victim. It may also mean that, hey, this person’s credentials were involved in a breach, and so we saw that their password was out on the dark web, and so they may be vulnerable to a password spraying attack, for instance. The third is the identity of the user. So I mentioned kind of the janitor to the finance executive, right? We need to know what systems you have access to, and do you have, you know, access to a lot of sensitive data? Because if you are to become compromised, your identity is compromised, then the blast radius of that attack is going to have greater outcomes, right, than maybe your counterpart’s down the hall. And so it’s the combination of these three signals, the behavior, the threat, and the identity of the individual, that we’re able to run through an algorithm and produce that risk score, so that the person that is of higher risk to the organization across those three factors, they’re the ones that are going to be engaged in, whether it’s more comprehensive training, you know, whether they’re enrolled in a more frequent phishing simulation campaign, maybe it’s that they are dropped into, you know, a stricter MFA group, right, they’re having to MFA in more frequently or reset their password on a more frequent basis. And then the other employees that are making the right decisions, maybe they pose less risk to the organization.

We actually want to figure out how we can let them work the fastest. How do we actually reduce the amount of friction that’s applied? And that could mean that, hey, we’re not going to give them additional training because they don’t need it. Or, you know, maybe there are actually some controls that are lessened for this group so that they can run. Maybe these are the individuals that are allowed to be exploring with more AI tooling, whereas their counterparts have a more locked-down system. Because, hey, if you are compromised, or we’ve seen that you’re handling data outside of policy, then you haven’t earned, like, the right to essentially, like, you know, test and trial and experiment. And so these are the types of, like, adaptive decisions that can come out of a very, like, data-driven approach to managing human risk.
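The behavior-threat-identity scoring Ashley describes can be sketched as a simple weighted model. To be clear, the signal names, weights, and 0–100 scale below are illustrative assumptions for the sake of the example; the episode does not disclose Living Security’s actual algorithm.

```python
# Hypothetical sketch of a three-signal human risk score.
# Weights and the 0-100 scale are illustrative assumptions,
# not Living Security's real scoring model.

def risk_score(behavior: float, threat: float, identity: float,
               weights: tuple[float, float, float] = (0.4, 0.35, 0.25)) -> float:
    """Combine three normalized signals (each 0.0-1.0) into a 0-100 risk score.

    behavior -- risky actions observed (phishing clicks, policy violations)
    threat   -- exposure (malicious email volume, breached credentials)
    identity -- blast radius (access to sensitive systems and data)
    """
    for signal in (behavior, threat, identity):
        if not 0.0 <= signal <= 1.0:
            raise ValueError("signals must be normalized to [0, 1]")
    wb, wt, wi = weights
    return round(100 * (wb * behavior + wt * threat + wi * identity), 1)

# The finance executive: risky clicks, high threat exposure, broad access.
finance_exec = risk_score(behavior=0.8, threat=0.7, identity=0.9)
# The janitor: the same click behavior, but low exposure and narrow access.
janitor = risk_score(behavior=0.8, threat=0.2, identity=0.1)
```

With these (assumed) weights, the finance executive scores far higher than the janitor despite identical click behavior, which is exactly the distinction Ashley draws between the two.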

[00:13:33] Karissa: Yeah, totally. Because it’s like I said before: if someone’s already, like, good at it, or is above the, you know, the cohort of people, why should they have to keep doing it again? Because it’s like, well, there are other people that perhaps need a bit more help, which is totally fine. So then my next question would be, and I’ve had this experience myself and I’m curious to hear your thoughts: I’ve got emails before, it’s like, Chris is on the non-compliant list again. And I think I’ve shared this with you before, Ashley. It’s like, they did this security awareness program, I thought it was super cringe and awkward, so therefore I wasn’t engaged. And when you’re not engaged, you don’t do the thing. But then when you don’t do the thing, you’re on a non-compliant list. And when you’re on a non-compliant list, it’s like literally everyone knows about it. And whilst I didn’t care at the time, maybe other people would see it as, oh, well, now I look like the loser, because now I’m on, you know, the bottom percentage. Like, no one really wants that. So are there ways in which companies can do it a little bit more discreetly, where no one really needs to know if you’re not top of the class, so to speak, on security awareness stuff, for example, or human risk management, to use your vernacular?

[00:14:43] Ashley: One of the things that we offer, and we actually see as quite effective, is sort of instilling, like, building in this, you know, social-proof kind of competitive spirit. So we actually have a lot of customers that like to send out scorecards to the individuals in the organization, and that’s not just, did you complete your training, but it could be, you know, another set of factors. For instance, if you’re trying to roll out some sort of new technology and you’re trying to drive adoption, maybe it’s passwordless login, for instance, or a password manager. We’ve seen that when you put these things on a scorecard and you actually create that feedback loop to that employee, they can kind of see where they stand in the organization. And this isn’t like, oh, you know, Chris, you’re number 50, and then, like, Lucy down the hall is number 48, so Lucy’s better than you. That’s saying, hey, you’re number 50 out of 120, you know, and it’s really just holding up that mirror to the individual. And that actually is very motivational, and we’ve seen people respond quite well, as long as they have very clear next steps of ways that they can improve.

And so, you know, it’s kind of like giving them that report card and then saying, okay, well, here are, like, the three assignments that are still, like, not submitted, but if you go and submit the assignments, then you can, you know, continue to climb up in the ranking. I think as long as you, you know, provide that feedback to the user and then you give them a way to, like, control their destiny, we’ve seen this be very effective in driving the compliance behavior that we need. So that’s first and foremost. So we are a believer in, like, providing that type of, like, scorecarding and feedback. And we’ve done the same thing with departments as well. So we found that leaders in the organization, they’re increasingly competitive as well. That’s how some of them have gotten to where they are. And so they don’t want their department on the bottom of that list, especially when it’s going in front of a CISO or a CEO, right? And so they’re very motivated to then figure out, you know, how do I help and support my team to improve their security behavior? And this is a way that CISOs are essentially democratizing security, where we’re not, you know, just kind of isolating it within a security team and saying, security team, like, you are the only ones responsible for improving here. But, like, how do we actually deputize the rest of the leaders in the organization, all the way down to the individuals? So first and foremost, I think this stuff can be good and it can be helpful, and it works. I would say, on the other side of your point, we are a big believer in, hey, if you’re going to make someone do something because there’s a compliance reason, you know, how do we actually, like, you know, reduce the sort of suckiness, you know, of that ask, of that requirement?
And so if we can figure out ways to make training more relevant, more personal, make it, you know, presented in a way that that individual really wants to consume the content and they want to learn, then people understand that this is a need, it’s a requirement on the business, so they understand the why behind it. Maybe it doesn’t, like, suck so bad anymore, and so they’re, like, more likely to engage. One of the ways that we’ve done this is turning some of those annual compliance trainings into almost more of, like, a Netflix-style series.

And so there’s, like, a storyline, and, you know, people get engaged with that. We’ve even had employees come back and they’re saying, hey, you left me on a cliffhanger, like, I want to know how this ends, like, when does the next series come out? And we do actually measure this star rating from the employees, because we said, you know, we recognize, like, you’re busy, you have a job to do, we’re asking you to do something that the company needs from you. Maybe you care, maybe you don’t, but how do we just, you know, kind of help the medicine go down a little bit smoother, right?

[00:18:45] Karissa: Handling sensitive health data? You already know security and compliance aren’t optional. Whether it’s ISO 27001, SOC 2 or GDPR, Vanta helps you build trust while staying focused on patient outcomes. Their platform automates up to 90% of the work, so you can hit your compliance goals faster and scale safely. Visit vanta.com/kbcast, that’s V-A-N-T-A dot com forward slash kbcast, to learn more.

So just to clarify, Ashley, when you said the scorecard, do you mean that that’s individually sent and it’s sort of like a credit score so it’s not like everyone’s getting it as well?

[00:19:26] Ashley: That’s right. That is exactly right. So in some organizations they’re sending it out to every employee. It’s almost like a credit score; it’s called, like, their human risk score, or it can be culturally branded. Some of our customers have, you know, hybrid programs, whether it’s, like, themes, or they have different mascots or logos or namings, whatever; we let them integrate that into the scorecard. Other customers don’t go down to the individual level, but they do go down to the manager, the department level. And then it’s up to the security leader to engage with that department leader to figure out, hey, you know, we’re seeing your department, like, fall behind. Here are some gaps, here are some areas of weakness. Like, how can we partner together to figure out how we improve, right? What additional tools can we offer as a way to help and assist you in improving your score?
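As a rough illustration of the scorecard mechanics described here, private individual feedback plus department-level rollups, a minimal sketch might look like this. The names, score values, and ranking scheme are assumptions made up for the example, not the product’s actual implementation.

```python
# Illustrative sketch: private rank feedback plus department rollups.
# All data and field names here are hypothetical examples.

from collections import defaultdict

def individual_feedback(scores: dict[str, float], person: str) -> str:
    """Return 'number N out of M' feedback without naming any peers."""
    ordered = sorted(scores.values(), reverse=True)
    rank = ordered.index(scores[person]) + 1  # ties share the better rank
    return f"You're number {rank} out of {len(scores)}"

def department_averages(scores: dict[str, float],
                        departments: dict[str, str]) -> dict[str, float]:
    """Roll individual scores up to department level for leader scorecards."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for person, score in scores.items():
        buckets[departments[person]].append(score)
    return {dept: round(sum(v) / len(v), 1) for dept, v in buckets.items()}

scores = {"chris": 62.0, "lucy": 71.5, "ben": 88.0}
departments = {"chris": "marketing", "lucy": "marketing", "ben": "finance"}
```

The point of the design is that each employee only ever sees their own rank ("number 3 out of 3"), while leaders see department averages, which matches the "holding up the mirror" approach without publicly naming who is bottom of the class.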

[00:20:14] Karissa: Okay, so that part I get. So then maybe let me rephrase my question, historically speaking. So, for example, just say you were my manager and I did something wrong, and it’s like you’ve now been CC’d into the email. What do you think of that approach? Because I’ve had that before. And again, whilst I didn’t care, I think other people were like, oh, now, you know, Joan, Margaret and Ben know that I’m non-compliant, because all the senior managers have been copied in the email to say they haven’t done the training.

Because I’m sort of of the opinion that you do attract more with honey than you do with vinegar. So then what’s your view?

[00:20:49] Ashley: Yeah, I think that’s right. So it’s got to be the carrot and the stick. And so that’s what we talked about with the individual scorecards; that presents a very, like, individualized honey approach, right? So you can improve your score, and you can see your score go up, and, like, there’s sort of intrinsic motivation there for many people, because they want to be on the top of the list, they want to be recognized. A lot of companies are also doing, like, security champions programs, and so, like, training completion could be one of those signals that goes into deciding whether or not someone’s a security champion. But look, at the end of the day, we have to recognize that being out of compliance is also a risk for the business.

And so what businesses have found is that individuals respond at a much higher rate and frequency to direct line managers than they do even the CEO. If the CEO asks an employee to do something, they’re actually less likely to do it than if it just comes from their direct boss. So that’s why companies opt for the management CC, because what they’re really asking is, hey, manager, can you help me get my job done, right? The goal is we need to be compliant. So I think it needs to be both. I’m definitely not, like, opposed to the CC of the manager, but I do think, again: how do we help the medicine go down a little bit smoother?

You know, how do we use a little, you know, a little sugar here? How do we motivate the employee? How do we make it less sucky? How do we make it more engaging to go through? How do we shorten it up and make it more relevant to your day-to-day role? Let’s do all the things and give that individual the opportunity to, you know, help the company, right, with their initiative of compliance. And then, you know, as a last resort, if we need to incorporate manager feedback, I think that’s great and I think it’s fine. The other thing to recognize here, though, is you may also be responding to training fatigue, right? So if you’re asked to do something, let’s call it, like, it’s short, it’s sweet, it’s one time a year, you know it’s coming, you know, there’s an expectation, and then you, outside of the compliance requirement, are traditionally acting in a very, like, vigilant and secure way, and then you’re kind of left alone to do your job. You’re probably more likely to engage: let’s go through that training, because I know this is my one-and-done time; unless I need additional reminders or, you know, I see, you know, my behavior sort of moving in the opposite direction, I’m likely going to be left alone and continue to do my job. But if you’re continuously getting hit with more asks, right, more training, more requirements, because they’re trying to produce, like, a blanketed, always-on, year-round program, then you may start to get frustrated, and you’re then less likely to essentially comply in those moments. And so I think taking that personalized approach is also helpful, because then you as an individual realize, like, oh, I’m only being asked to do this when it really matters, when I, like, really need to be a team player in this moment.

[00:23:55] Karissa: So then my other question would be around, as you know, at the moment, like, everything’s super competitive. Competitive meaning we’ve got to release things faster, we’ve got to get these features out, and companies now are trying to win, you know, the whole AI race, etc. So how do people manage doing their human risk management training when it’s, hey, I’m so under pressure at the moment, because the company’s putting me under pressure, because we’ve got to deliver all these things, because they’re so fearful of falling behind?

[00:24:24] Ashley: Yeah, this is a great point, and something that we talk about and work with our customers on all the time. So when we coined the term, essentially when we renamed this category Human Risk Management, that was really intentional.

We came out with this term before Gartner and Forrester and even other competitors recognized the name human risk management, and the reason was because, truly, this is a risk management conversation and decision.

What does that mean?

It means that there are going to be some risks that are acceptable or actually worth taking.

And I think we’re in one of those moments right now, where, you know, we’re in an AI race. We know that, competitively speaking, we need everyone on our team to be adopting AI at a pace of technology adoption that we’ve, I don’t think, frankly, ever seen before. Right. As a society.

And if we don’t do that, we’re going to risk profits, we’re going to risk competitive positioning. You know, we may have declining sales, all of those things and ultimately like shareholder value.

Right. Which is what the board and the CEO and the leadership team should be caring about. And so there are going to be some risks that are worth taking. And that for a lot of organizations right now means hey, we have a pretty open policy for AI tool exploration.

And so they’re letting their team members work with Anthropic and Claude and OpenAI and ChatGPT and Perplexity and Gemini and all the tools. Right. As well as, like, other purpose-built, maybe vertical or, you know, job-specific tools, whether that’s in go-to-market sales and marketing or customer support. They’re accepting that risk. Right. And so then the security team has to ask, well, okay, how can I help drive the adoption of AI in a more secure way without creating too much friction where my teams are slowed down? Because that’s actually, you know, the antithesis of what the business cares about right now; we’re working in opposition to it. And so we’re seeing a lot of organizations deploy things like, you know, we’re not going to be, like, blocking you from access, but we’re going to start creating some, like, redirect nudges. And what does a redirect mean? It means, okay, maybe we have a sanctioned tool, some sort of AI tool sanctioned by the business; it’s a closed model, we have an enterprise subscription to it, we know the data is not being shared. Great.

So when we see people that are accessing other tools, they may get a little nudge pop up in their Slack or Microsoft Teams that says, hey, we love that you’re helping to drive the organization’s AI initiative. Did you know that we have full access and paid access to this tool? Go ahead and create an account.

So it’s helping to drive and nudge that employee, like, in the right direction for secure AI adoption, would be an example. You know, other things, obviously, like DLP-type tools, allow you to, like, interact with the models or different, you know, different tooling. But what we’re really looking for is, you know, sensitive data upload or sensitive data sharing outside of policy. And so those may be a block, or they may have some sort of, like, nudge or training to it. So I think all these tools are great. What we’re really trying to build to with all of this is, you know, how do we build security into the workflow of the individual? And so we’re not having to call them out to say, hey, go take this additional training so that you can learn the right way to engage with these tools. What we’re actually doing is, like, letting them run and then helping to, like, nudge and maybe keep them within boundaries, within their workflow. And so that’s a much different, I would say, human-centric approach to security than, you know, what you were talking about before, which was, in the past: okay, we’re going to be, like, rolling out an AI policy before we let anybody engage with AI. We’re going to take them and send them through, like, a 30-to-60-minute training course. And it’s like trying to teach somebody to swim without water.

It’s like, hey, here’s a whiteboard. Let me share with you all of the technical techniques to swim versus hey, just get in the pool.

We’re going to teach you to swim while we’re in the water together. And I think that’s again a much more human-centric and ultimately successful approach.

[00:28:51] Karissa: Yeah, that’s an interesting point. Okay, I want to talk about your earlier comment, making the medicine go down easier. So I want to talk about what, historically, people have been doing that has produced the opposite reaction, where people say, no, I don’t want that at all.

And you mentioned just before the 60 minute training; keeping people’s attention now is super hard. So what other things would you say, given your experience in this space, have people done that perhaps they need to stop doing, like, right now?

[00:29:19] Ashley: Yeah, I can give you a few examples, many of them things we’ve deployed as a vendor in the space. So gamification and immersive learning have been very effective in engaging people. And one of our first products out there was actually a cybersecurity escape room.

And so you’re joining as a team in a virtual environment. Actually, before the pandemic it was physical, so it was in person. But post pandemic, with everyone spread out, they’re joining together in a Zoom-like environment.

And you’re tasked with a mission. Your mission is to go solve this puzzle, and you’re going to work together as a team while you’re learning about cybersecurity principles and solving this problem.

And so you can provide hands-on, again, immersive opportunities to really learn these fundamentals. And people like working alongside their coworkers, they like to work together as a team, they like to be immersed in a story. They’re getting trained without even realizing it, right? Because they’re going through this and racing against the clock, and there’s probably some sort of incentive or prize for the winning team.

So applying these behavioral science principles, the things I just mentioned, creates a more engaging experience that people are more willing to participate in. That’s one example.

Other things we’ve seen work out there: not shoving, I would say, animated training down professionals’ throats. A lot of these animations can be very cringy, and you’re like, hey, why am I watching a cartoon? I’m a grown adult. But to each their own preference. We’ve seen people respond really well to humans, being able to interact with humans, or even graphics sometimes; those graphical stories are things people respond really well to. And the other thing is, just get to the point, right? Make it short, tell me what I need to get done and why, and let me move through it rather quickly. So very short micro-learnings are another area we’ve seen. And then, like I said, only really asking people to do it when it matters and making sure it’s contextually driven. So if you saw that an individual was really struggling to, I don’t know, the easiest example would be to recognize phishing, right? Or maybe what data is appropriate to share; they’re struggling to remember their data handling policy. Well, are we going to send them through a 30 to 60 minute training course on all 20 cybersecurity fundamentals that we train on? No, let’s actually engage with them in real time. Not 30 days, 60 days later, but in the moment when it matters, on the exact behavior that we’re looking to curb, that we’re looking to improve upon. So, hey, we saw X, Y and Z. We want to assume positive intent here; we don’t want to assume that they’re acting maliciously or being stupid. Assume positive intent.
Hey, we saw that you were doing X, Y and Z. Did you know that here are some really specific ways you could have identified this phishing email earlier? Here are some indicators that we could observe in the future.

Whatever it is, like I said, just make it more personal, contextualized, in the moment, and make it short. And then treat people like adults and let them move on.

[00:32:55] Karissa: What is up with the cartoons?

[00:32:58] Ashley: So first of all, it historically has been much easier and cheaper to produce that type of training video. And so for companies that were just trying to start up and get something out there, that was the approach. They were able to hire people, whether on Upwork or Fiverr or with their own design team, and they could produce these and ship them out rather quickly. It was much harder to go hire a production crew, script it out, and get an actor in. That took a lot more time and money, and frankly, the time thing matters, because when you’re trying to be responsive to what’s going on within the industry, you want to get content out there quickly, right? So we’re not waiting. Now more than ever before, AI is getting more sophisticated and it’s looking really good; the quality is improving at a rate I’ve never seen before. So I think you can actually have the best of both worlds now, and it’s something we’ve incorporated within our products and platform: AI content generation, whether it’s an avatar or a graphics-driven format. The other aspect is, what about podcasting, right? We’re here doing this podcast because we’ve seen that many people like to learn through listening. Or maybe other people like to read, so you’re producing an article. If you can look for different ways of producing and then consuming the content in a way that resonates for the consumer, for the employee, you’re going to go much further toward your goal. So I don’t think we need to stick to those hour-long videos. And again, we definitely don’t need to be producing cartoon-driven content anymore in an AI world.

[00:34:46] Karissa: And then what are your thoughts on the way companies talk to their employees? I’ve seen companies talk to employees, especially around security for some reason, whether it’s through a cartoon or otherwise, in a way that speaks to them a little bit like children.

Do you think that’s off putting though, for people?

[00:35:04] Ashley: Absolutely, absolutely. I said this before in my earlier response: we’re all adults here. We’ve all interviewed, we’ve been hired to do a job.

And I expect to be treated like an adult, like a professional. So yeah, this ultimately comes back to that cultural question that we started with, right? When you enter the field, enter the arena, with the thought process in your mind that people are stupid, people are the weakest link, that’s going to inform the way that you talk, the way that you train, the way that you educate.

If you take the approach that people could actually be our strongest front line of defense, that they could be an active part of my security strategy, then it looks different.

I’m going to figure out what people are struggling with, who the people are that are struggling, and then I’m going to come alongside them to help empower and improve their security knowledge and ultimately drive the right behavior. And simultaneously, I’m going to look for as many ways as possible to actually bake security into the workflows and processes so that I’m not asking people to make a hundred different decisions as part of their job. And people will appreciate that, right? And then for the remaining 5 to 10% that people do ultimately need to engage with, treat them like professionals, meet them where they are, bring contextual information, bring data-driven information, and give them opportunities to learn, improve, and keep getting stronger.

[00:36:46] Karissa: Okay, so given where we are in 2026, with AI making things a lot easier to produce, is this way of trying to educate, where maybe the intent’s there but the execution is poor, starting to phase out? Are the cartoons out the door, along with the way we’ve been talking to our employees, who are professionals, not children? Or where are we sitting in this space at the moment?

[00:37:13] Ashley: Yeah, again, great question. So we’re seeing a couple trends happen right now within our industry.

So you see these sort of legacy security awareness and training providers that are bolting AI onto their platforms. And what that means is that it’s really the same old workflows of the solution with AI just added as a feature. And so in many of these vendors’ product offerings, you will probably see some of that still remain going forward. Ultimately, I think that’s going to make them less competitive, and more AI-native companies are really starting to emerge.

And so what does AI native mean?

It means that AI is in the architecture and it redesigns how these solutions are able to predict, guide, act. And that’s the framework that we’ve actually created for our own solution: Predict, Guide, Act. For us, that means we actually have Libby, who’s sort of an underlying AI agent. Think of her like a 24/7 analyst that’s in your data all the time.

And she’s not just surfacing a dashboard; she’s making the connections between those signals, right? Behavior, identity and threat, to identify who’s at risk today, and then suggesting or taking action automatically, autonomously. And so the platform itself is really learning, whether through every simulation or every training, how people want to engage, how they’re learning, what’s actually working. When we deploy an intervention, whether that’s training or a control, the platform is able to learn from that information and update in real time. And so the next set of recommendations, the next set of content, the next set of interventions that comes out is the one most likely to drive improved behavior. That’s where I see the future of the market going. It’s really creating a more adaptive and autonomous human risk management program for customers, one that not only adapts to the requirements, the policies, the behavioral risk of the organization, but is also able to adapt to the employee’s preferences for how they like to learn and engage. And so it creates a much more personalized experience for the employee. And I think the things you referred to up front, the cartoons, the kind of childish language, that stuff goes away. But you have to have the right underlying architecture as a platform and a solution to be able to do that at scale and to be able to iterate and adapt to those preferences.
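The learn-from-every-intervention loop described here resembles, in miniature, a multi-armed bandit: deploy interventions, record whether the targeted behavior actually improved, and increasingly favor what works. The sketch below is purely illustrative, not the vendor's actual architecture; the intervention names, the `InterventionSelector` class, and the simulated success rates are all invented for the example.

```python
import random

# Illustrative epsilon-greedy bandit: choose the next security intervention
# based on which past interventions actually improved the targeted behavior.

INTERVENTIONS = ["micro-lesson", "in-the-moment nudge", "manager follow-up"]

class InterventionSelector:
    def __init__(self, epsilon: float = 0.1, seed: int = 42):
        self.epsilon = epsilon          # exploration probability
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.trials = {name: 0 for name in INTERVENTIONS}
        self.successes = {name: 0 for name in INTERVENTIONS}

    def choose(self) -> str:
        # Mostly exploit the best-performing intervention, sometimes explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(INTERVENTIONS)
        def success_rate(name: str) -> float:
            return self.successes[name] / self.trials[name] if self.trials[name] else 0.0
        return max(INTERVENTIONS, key=success_rate)

    def record(self, name: str, behavior_improved: bool) -> None:
        # Feedback signal: did the targeted behavior actually improve?
        self.trials[name] += 1
        if behavior_improved:
            self.successes[name] += 1

selector = InterventionSelector()
# Simulated (invented) feedback: nudges work more often than the others.
true_rates = {"micro-lesson": 0.3, "in-the-moment nudge": 0.7, "manager follow-up": 0.4}
for _ in range(500):
    choice = selector.choose()
    selector.record(choice, selector.rng.random() < true_rates[choice])
print(selector.trials)
```

Over many rounds the selector concentrates its trials on whichever intervention has delivered the most behavior improvement, which is the gist of "the next set of interventions is the one most likely to drive improved behavior."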

[00:40:06] Karissa: Yeah, that’s interesting. Well, I’m interested to see what’s going to unfold for the rest of the year. So, Ashley, to close, what would be your final statement for today? What would you like to leave our audience with?

[00:40:17] Ashley: I would say if security awareness and training worked, we’d have fixed the problem by now. And the industry knows this. I think it’s time to say that out loud and to really recognize it.

And so the future is really not blaming the human. That’s not a strategy, right? It’s not a security strategy. We need to leverage the Predict, Guide, Act framework to understand who in my organization needs help, be more intelligent in how we approach that, be more human in our approach, and ultimately we’ll be able to start changing the metrics that got us where we are. 80 to 90% of breaches are human-initiated; I want to see those numbers go down.

[00:41:06] Karissa: This is KBKast, the voice of Cyber.

[00:41:10] Karissa: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.

[00:41:18] VO: This episode is brought to you by MercSec, your smarter route to security talent. MercSec’s executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently.

Find out [email protected] today.
