The Voice of Cyber®

KBKAST
From The SimSpace Summit 2026 – KB On The Go | Rushell Hopkins and Stanley McChrystal (Part 2)
First Aired: March 06, 2026

Karissa Breen [00:00:10]:
Welcome to KB On The Go. I’m coming to you from my new place of residence, Orlando, Florida. And today I’m being hosted at the SimSpace Summit. Cybersecurity is hitting a breaking point: compliance checklists, tabletop exercises, and confidence claims aren’t enough anymore, especially as AI accelerates both attack and defense. This summit is about something different: proving readiness under real pressure, real tools, real teams, real-world chaos. Today, I’m speaking with leaders and former US government officials pushing cyber training, testing, and validation out of theory and into reality. Because when the next incident hits, what matters isn’t what looks good on paper, it’s what actually holds up.

Karissa Breen [00:00:57]:
Stay with me, we’re diving into the conversations that matter. This is KB On The Go from the SimSpace Summit 2026.

Karissa Breen [00:01:04]:
Let’s get into it. Joining me now in person is Rushell Hopkins, Professor of Computer Science and Cybersecurity at Florida SouthWestern State College, and today we’re discussing the future of the cyber workforce. So, Rushell, thanks for joining and welcome.

Rushell Hopkins [00:01:23]:
Well, thanks for having me.

Karissa Breen [00:01:24]:
Okay. So Rushell, I’m really interested in the work that you do. And when we were talking before, you were describing how things are nowadays, and I think it’s really interesting to explore that a little bit more. So I want to start with your view on the growing concern about cognitive atrophy in the younger generation. What are your thoughts here?

Rushell Hopkins [00:01:46]:
Absolutely. One of the things I also didn’t share with you is I’m part of a cohort or consortium called the AAC&U, which is the American Association of Colleges and Universities. And I’m in this cohort where we’re trying to bring AI into higher education and kind of look at what that’s going to do. I share concerns with many of the educators in what they’re calling cognitive offloading or cognitive atrophy in our younger generation. In cybersecurity, I tend to have really remarkable, creative, compassionate, and technically advanced students. But what I’m seeing and the shift that I’m seeing is that these students are using AI at a level where it’s eroding their patience, their deep focus, and their willingness to wrestle with the deeper problems. And learning, especially in cybersecurity, requires discomfort. We have to think outside the box.

Rushell Hopkins [00:02:33]:
It really requires us to sit with something, and if we don’t understand it, we are breaking it down and we’re building up that mental endurance, right, to solve it. And when their answer is just one click away, right, to these problems, that muscle greatly weakens.

Karissa Breen [00:02:50]:
This is where I think it gets really interesting as well, because I’m a millennial, and even when we were learning things, it was still fundamentally different. It doesn’t feel that long ago, but when you compare what you’re describing to when I came up through the ranks, it does feel a lot longer. So I’m curious: what does this mean now for how people are actually learning things? You mentioned before 15-minute to 20-minute blocks before you had to say, right, we’re gonna get up, go for a walk, we’re gonna do something else. That’s a very short period of time when you think about it. What’s going on here?

Rushell Hopkins [00:03:32]:
So there’s a lot, there’s a lot of things. People are starting to do a lot of research on attention span, right? And I don’t wanna go too far down that road because I don’t have any degrees in psychology; I teach computer science and cybersecurity. But I’ve watched a lot of content, and I don’t mean social media content, I mean research, where shows like Cocomelon, right, that we put our kids in front of, the screens are changing every 1 to 1.5 seconds. You barely get to 2 seconds before there is a color change, a scene change, or something’s happening. And what’s happening, from a very young age, when we put children in front of TVs, is we are literally breaking down their attention span from birth. Okay? So we’ve already done a number on these college kids; by the time they get to me, they don’t really have the same attention span that I have. I’m a Gen Xer, right? I wasn’t subjected to a lot of TV or these flashing lights or Cocomelon. We just didn’t have that, right? So my ability to sit with something is a little bit different than the younger generation’s.

Rushell Hopkins [00:04:37]:
What I’m finding is that it’s relatively new. Like you said, over the past 5 years, I’m just seeing it. I can see it. When I’m looking at a classroom, I can tell within the first 5, 10, 15 minutes that I need to either go down a rabbit hole or tell a joke, and I call it microdosing my lectures. My students are no longer able to sit through an hour and a half, 2-hour, 3-hour lecture, like the ones I sat through when I went through college. So what I’ll do is I’ll talk for 15 minutes about a topic, fundamental cybersecurity foundational language, and then I’ll tell a joke or I’ll be like, you know, this one time, and I can see I can bring them back in. The world’s trained their attention differently. So I would say that I see shorter segments as on-ramps, right, to that deeper learning. And then things like SimSpace, right, why we’re here: I will microdose that lecture and then we’ll jump right into a hands-on.

Rushell Hopkins [00:05:33]:
Right? So instead of listening, now I’ve got them acting. And so I’ll do a 15-minute lecture. Hey, let’s solve this knowledge check. Hey, let’s, you know, let me talk about this foundational skill and then let’s jump on and do that. My goal is to stretch their attention span, not succumb to it, right? So I think it’s really about growing that sustained concentration over time, like an athlete, right? Building that muscle memory.

Karissa Breen [00:05:57]:
So one of the things I’m curious to hear, whilst you were speaking: you said before you would sit through lectures of an hour and a half, and now it’s these 15, 20-minute bursts before you have to do something else. Do you think the outcome’s still the same though, in terms of the learning and the knowledge?

Rushell Hopkins [00:06:13]:
This is really probably the first year, maybe year and a half that I’ve been seeing this. So I don’t have a really good litmus test yet in terms of what the outcome’s gonna be and how much they’re retaining. I would say that retention, again, when I was going through school, there wasn’t this one-click-away answer, right? It wasn’t this solution set at my fingertips, right? Everything had to be figured out. And so those lectures, I would take notes, I would be on it. I didn’t have a device, right? It would be me just writing and studying. And now my students are recording it or they’re just going back to it at a later date, or they’re using agentic AI to answer the questions. So I don’t really know. The answer to that is going to come down the road.

Rushell Hopkins [00:06:55]:
Sadly, I’m not really optimistic about that answer, to be honest, but I think we need to give a little bit more time to see how these students are coming out workforce ready. It’s one thing to take a course and pass it. It’s another to take the course, pass it, and actually be employable and have employable skills where you retain the knowledge when you get out. I don’t know what that’s going to look like yet, or the answer to that yet.

Karissa Breen [00:07:16]:
And speaking of retention of knowledge, one thing, and you would know this more than I would, Rushell: writing things down actually imprints something more in your brain than typing it. So do you think, even with all these things that you’re explaining, it’s still perhaps not being imprinted as this is how we do the thing? It comes back to your point before about muscle memory, because perhaps people aren’t taking notes and highlighting and going to the library and using encyclopedias like we used to do. It’s not like that anymore. Everything’s so instant, so quick, and everyone wants the next thing nowadays. Curious on your thoughts.

Rushell Hopkins [00:07:56]:
So technology is both a gift and a risk, right? It gives my students access to tools that I could only have dreamed of when I was in their position, but it can also be a substitute for thinking, right, if future professionals simply accept what a system tells them. Instead of challenging it, thinking outside the box, my students are just accepting the output of the AI. Do you see what I’m saying? And so we are not creating a group of cyber defenders. We’re really creating a group of individuals that rely on AI to defend.

Karissa Breen [00:08:29]:
So I have a question then, going back to the AI and discerning the answer. There’s a group of people in the world that believe the Earth is flat. So if you asked, is the Earth flat, and hypothetically ChatGPT said yes, people would believe it. They wouldn’t go, I’m gonna challenge that and think, is it true or not?

Rushell Hopkins [00:08:47]:
So that’s one of the things I challenge my students to do: a lot of critical thinking, right? Because it all boils down to critical thinking. It’s about challenging the process, challenging the answer. I’d love to tell you I get a lot of students that do that, but sadly I don’t see a lot of that spark, that willingness to do a deeper dive to confirm. I think social media has really created this environment where students are more willing to accept data or information that fits their narrative rather than doing the deeper dive and saying, well, if this happened and this happened, then that couldn’t have happened, right? They’re not willing to put in the research or the work. They just simply fall victim. And the General gave this analogy about the blind men and the elephant: you have a bunch of different blind men touching different parts of an elephant. One man is holding the tail and he says, this is a snake. Another gentleman is touching the legs and he’s like, well, this is a tree.

Rushell Hopkins [00:09:40]:
Well, with all the social media and the disinformation, how do you convince students that the totality is an elephant when they’re only being fed pieces of the whole? Does that make sense? My students are seeing things and they’ll say, well, that guy says it’s a snake, so it’s a snake. And that guy says it’s a tree, so it’s a tree. They’re not looking at the information as a whole and coming to a conclusion. They’re letting the information that they’re given dictate the outcome, or their theory.

Karissa Breen [00:10:10]:
Then if we zoom out and look at that as a theory, how does that help? Like, as cybersecurity, when I was a practitioner by trade, curious, ask questions, break things. That’s how I got into it because I’m curious. And it’s ironic now I’m a media person in this space because I’m genuinely curious. I want to understand how you think. So how do we get to a point then? Because we’re going to need, like, eventually my generation, we’re all going to retire. We’re going to need these younger folks to do the work. So how does that stack up?

Rushell Hopkins [00:10:40]:
I think that’s one of the ones that keeps me up at night. I would have told you 5 years ago that we were churning out some pretty amazing students; I was really doing good as a force for good, right? Because we do absolutely need these generations to protect our nation. I’m supposed to be churning out workforce-ready students. A lot of that’s why I brought SimSpace on, right, to start giving them more real-world, hands-on scenarios and getting out of that lecture environment, right, where I’m just verbatim spitting out information. If we think about all these attack surfaces we’re looking at now, especially in Southwest Florida, you know, we need to look at our hospitals, our power grids, our schools, our military. It’s going to depend on them, not algorithms. And I worry about the future of AI in education. That’s a lot of why I joined that consortium, looking at how AI is brought into higher education, not just in cybersecurity, but as a whole, because we still need nurses and doctors. And one of the philosophies that we have is what we call the pixie dust approach to AI.

Rushell Hopkins [00:11:35]:
Which is all these different educators sprinkling AI into their curriculum or into their course development, while some are what we call conscious abstainers, where they choose to not allow the students to use AI in the classroom. Neither approach is great, right? We don’t have any type of structure. We don’t have any rules. This AI right now is kind of the Wild West, right? It’s like putting the genie back in the bottle at this point. And then in higher education, you have a lot of people that are kind of sitting out their career, they’re in that retirement phase, right, with maybe an older pedagogy or older ideas. And then you have other individuals like myself teaching cybersecurity and technology, where AI is embedded in everything that we do. So our students have to come out and be workforce ready with this heavy AI.

Rushell Hopkins [00:12:19]:
And so one of the concepts we’re talking about is, how do we shoot that gap? How do we teach them how to use AI to better themselves in their career field without actually doing that cognitive offload? That’s really where you have to teach them the difference between when to use it and how to use it. And the when to use it is the most important, because as educators, we use it for cognitive offload all the time. I have Grok, I have Gemini, they’re helping me build curriculum, they’re streamlining my mapping. Course mapping takes a lot of time, and AI does a lot of the heavy lifting on things that I don’t really need to do myself. And so we need to teach our students that yes, AI is incredibly helpful and it can take a ton of workload off of us. But we have to make sure that we’re using the AI in a way where we’re still learning the skill to become a subject matter expert in our industry, while using it to take off some of the stuff that we don’t really need to learn.

Karissa Breen [00:13:13]:
So one thing I’m curious to understand from your perspective, Rushell: recently I interviewed the former Deputy Director of the NSA, George Barnes, and he spoke a lot about the new generation and what concerns him. And maybe you can answer this. He said he’s concerned that people aren’t taking up roles in this field, and also, from a government perspective, in service to our nation, because being a YouTuber or a TikToker, those are the roles people want over protecting the nation. What do you think, from a public-private sector perspective?

Rushell Hopkins [00:13:43]:
Yeah, that’s something we definitely wrestle with.

Karissa Breen [00:13:44]:
And it’s—

Rushell Hopkins [00:13:45]:
I’m not gonna lie, I would teach a lot of gen ed courses where I had students in different disciplines, right? And one of the courses that I taught was computer literacy. Most students, when they come in, think, I don’t need this, I’m computer literate. And they really have no idea what goes on in the background, right? They don’t really know what goes on behind the algorithms and the social media, how those algorithms get into their phone, the security behind it, and why social media is eroding our students’ concentration. They don’t see that aspect. And there’s one assignment I would give my students, a security module, that asks them to go through their phone and find 3 of the most commonly used applications on their phone. I want them to read the terms of service and tell me what they learned. And after they read the terms of service and learned what these apps are using in terms of their personal data, how do they feel about it, and are they still going to use it or are they going to delete it? The overwhelming majority of our students said, wow, I didn’t realize how intrusive they were or how much information they were gathering. I don’t feel comfortable with it, but I’m still gonna keep it ’cause I like watching puppy videos.

Rushell Hopkins [00:14:53]:
So this is where I get nervous, and this is where I guess a little bit of the pessimism comes in, because in cybersecurity, we know that the end user is our biggest threat. Our end users are all these demographics outside of cybersecurity, right? People going into the corporate world, marketing, hospitals, nurses, everybody is a user on our networks. And so if at this level we can’t get them to understand basic cybersecurity concepts and realize what they’re giving up, and they always say, if you’re not paying for the product, you are the product, right? If I can’t get them at this level to take accountability and responsibility and understand the threat landscape, I don’t know what that’s gonna look like for us. And that’s the part that keeps me up at night: how do I be that water droplet, that ripple effect, invoke change, and get some of these kids to understand the repercussions of their actions? So I don’t know what that’s going to do for our workforce. I think it also creates a conundrum. I’m a PhD student, and my dissertation is focusing on how to shorten the cybersecurity skills gap using agentic AI. It’s a self-fulfilling prophecy. If we have students that aren’t workforce ready, that are outsourcing their critical thinking to AI, it just makes sense for companies to use AI to close that cybersecurity skills gap, because the students coming out of higher education are cognitively offloading to their agentic AI and making their agentic AI more capable than they are to join the workforce.

Karissa Breen [00:16:16]:
So then how can anyone complain and say, well, we don’t have a job and there are no entry-level jobs? That shouldn’t be a valid complaint then.

Rushell Hopkins [00:16:22]:
You’re right.

Karissa Breen [00:16:26]:
Joining me now in person is retired General Stanley McChrystal, former commander of Joint Special Operations Command, and today we’re discussing winning the cyber war without a map. So Stan, thanks for joining and welcome.

Stan McChrystal [00:16:37]:
Thanks for having me.

Karissa Breen [00:16:38]:
Okay, so before we got on the record today, I was saying I’ve been watching some of your YouTube videos, some of your interviews, just to understand a little bit more about how you’ve previously been interviewed and to get more familiar with your approach to certain things. So I want to start today, Stan, with something you’ve said: that modern warfare has moved from fixed battlefields into the everyday, fast-moving tech world. What I’m curious to know is, how does this directly mirror what cyber defenders are dealing with today?

Stan McChrystal [00:17:08]:
If we talk about war, it used to be sort of geographically bounded. If you go way back, it was where the armies were. And then that expanded: the armies could have a greater effect on the countryside as they marched through it. Then we got to world wars, and suddenly the ability to reach with military means like submarines or aircraft basically brought the war close to civilians in a pretty real way. Now the war can be everywhere, because information technology allows you to get both information and disinformation everywhere, and cyber activities allow you to have impact everywhere. So whereas it might not have been possible to reach across an ocean and bomb an enemy’s factory, now we can reach across an ocean and attack it with cyber pretty simply. The reality is the battle space, as the military would call it, has just expanded.

Stan McChrystal [00:18:06]:
It’s almost limitless now. There’s no front line, no middle, and no rear anymore. I don’t think we’ve really felt that yet. I don’t think we’ve digested the meaning of that, because we still have an idea that if something is kinetic and it blows up, then it’s war. And if it’s cyber, then it’s maybe espionage or maybe crime or maybe just irritating. But we haven’t had power stations turned off in the United States or Great Britain yet. We haven’t had that real impact felt on everyday lives from enemies yet. They’ve done some of it in Ukraine, but we think of that as battlespace.

Stan McChrystal [00:18:48]:
So I think we are mentally not yet prepared for the fact that war is and can be everywhere, which then expands the idea of who’s a combatant. And then the question is, is someone sitting on a keyboard targetable kinetically, meaning if you hack me, can I bomb you? Is that fair? And I think we haven’t come to grips with that philosophical question yet, but we’re getting close because there will be an appetite to do that.

Karissa Breen [00:19:20]:
Yeah, okay, so this is really interesting. I’ve come from a cyber background as a practitioner before doing this sort of stuff today. And what’s interesting, interviewing people across the globe, is seeing how cyber traverses into kinetic warfare. Of course there’s a rise in geopolitics, but you said before you can attack from anywhere, right? And whilst you said nothing major has happened, and I hate to raise it, do you think we’re unfortunately going to start seeing these sorts of incidents now?

Stan McChrystal [00:19:51]:
I think we will. I think in the past, what we’ve seen in the case of big powers is cyber-enabled killing, meaning we will find a targetable person by cyber, but then we will kill them kinetically. And that feels traditional. Cyber was part of it, maybe the critical part, but the reality is we used traditional means. What if we turn off their pacemaker? What if we disable the hospital? What if we cause water or food to be contaminated? All of which I think we’ll be able to do through cyber. Then we’re killing people, or we’re causing really damaging effects, and we didn’t do anything that felt traditional.

Stan McChrystal [00:20:39]:
We’re going to have to come to grips with, if you hit the keyboard, is that the same as pulling the trigger? And I don’t think right now we think of it that way.

Karissa Breen [00:20:48]:
Do you think as well it’s because out of sight, out of mind? What I mean by that is, I worked in a bank as a practitioner. Someone’s money gets stolen, we would just replenish it back in your bank account. It’s a bit different to back in the day when people could physically go to a bank and, you know, rob some cash and run away. Do you think it’s because people can’t see it as much? Perhaps it’s not tangible. You can’t feel the impact?

Stan McChrystal [00:21:09]:
I think that’s right. I think we can’t feel the impact. But of course, if somebody effectively bilks you out of a large amount of money, you will feel the impact at some point because you’ll have less. But I think that’s right. It’s different than being mugged on the street where someone takes your purse or wallet, or even having your home invaded and your jewelry and money stolen; then you feel violated. But I think that as people start to experience identity theft and other crimes, you will feel violated in the same visceral way, which will start to cause people to want to respond in the same visceral way.

Karissa Breen [00:21:46]:
So respond with force? We don’t, right? Yeah, because I’ve always wondered this as well, because if someone upsets me online, for example, it’s not the same as someone coming up and punching me in the street, right? So where do you think the shift’s going to happen, or what do you think the catalyst is? Is it going to be, unfortunately, one of these incidents where people lose their life as a result of cyber warfare that connects into kinetic warfare, and then something really bad happens?

Stan McChrystal [00:22:11]:
Of course I’m not sure, but my sense is it will be a cumulative number of events like that that get so painful, like identity theft or even just extreme inconvenience. So, for example, suppose someone disables TSA at the airport and you go to the airport and you can’t board your flight because no one can get through TSA. I think people will get much more angry far quicker than we have so far, particularly if it’s en masse. If someone hacks your account, I feel sorry for you, but I’m glad it wasn’t me. If they hack a bunch of us together, we are outraged and we’re willing to do something. I think something at scale, or cumulatively at scale, enough things happening, and people are going to want to act. And in some cases, they’re going to act kinetically.

Karissa Breen [00:23:03]:
So given your background, do you think people are starting to really focus on this now, like governments, etc., on how people may start to respond this way? Do you think it’s on the radar?

Stan McChrystal [00:23:13]:
I think theoretically it is, but practically it’s not, because we’ve been warned about this for so long. We know so many cases of it. We’ve seen big hacks of government data. We’ve seen big hacks of business data. We’ve seen individuals suffer different cyber impacts, but it hasn’t affected our lives. Look at any of the big cases and you go, wow, that’s bad, but it really didn’t change my life at all. I think one that turns out to have a big military effect or a big financial effect, that will be the tipping point.

Stan McChrystal [00:23:48]:
And I think it has to come pretty soon. And it doesn’t have to come from a nation-state. In fact, it’s probably likely not to, as a nation-state can be held at risk. You can threaten a nation-state. You can’t threaten an individual that doesn’t care. So I think it’s more likely that someone is able to do the equivalent of a weapon of mass destruction through cyber, but without, you know, the national assets. And we just haven’t seen it. I’m frankly surprised that we haven’t seen that yet.

Karissa Breen [00:24:17]:
So think back to the Colonial Pipeline attack; obviously that caused outrage for people. Do you think that even with something like that, there’s going to be this flow-on effect, a group of people agitating, concerned, starting to act in a certain way? Or, to your point earlier, if water gets contaminated, people start dying because they’re drinking it, and then emergency services go down and people can’t get through. From my understanding, interviewing people like yourself, Stan, it can happen quite quickly. Not like weeks, like hours.

Stan McChrystal [00:24:53]:
Yeah, I think it comes down to whether you, and I use this word carefully, inconvenience our society enough. I would argue that if you were able to turn the electricity off in a city for 12 hours, and people didn’t know when that electricity would come back on, you’d start to see society break down. If it was longer than that, it would be even greater. So I think you’d have this groundswell response. Colonial Pipeline inconvenienced some people, but the resilience of our society, in fact, mitigated much of it. And that’s a good thing. We want our society to be very resilient, but we don’t want to be so resilient that all of these things can happen and we start to just accept them. We’ve got to understand we must be resilient to survive them, but there’s got to be a decision made on the part of society about what’s acceptable, what isn’t, and what we are actively going to try to stop.

Karissa Breen [00:25:53]:
So one thing I’m curious about then: you said before we don’t want to accept it because we’re so resilient. Even with cyber breaches, people seem a little bit desensitized, like, oh well, another breach happened. Do you think this will become, oh well, another power plant blew up and people died? Do you think it will unfortunately become like that because so much of this could be happening?

Stan McChrystal [00:26:13]:
I think it could, but I don’t think so. I think once you get into those things that affect people directly, that will be the point where it tips. For example, right now, we know that there are these scam centers in some of the Asian countries where they have literally created little factories of people who reach out around the world and do scamming. We know it. We even know where they are, but we don’t do anything directly. I think the difference would be, if those kinds of activities rise to enough inconvenience, suddenly you get a strike on them, either a cyber strike or a kinetic strike, and you say, we’re not putting up with that. And if the host country refuses or is unable to do something about it, suddenly that’s a new dynamic, because you’re violating national sovereignty to pursue things.

Stan McChrystal [00:27:06]:
But yet, if you think that those nations are unwilling to do that, then you say that they don’t deserve their sovereignty if they’re unwilling to respect international law.

Karissa Breen [00:27:16]:
So with that thought for a moment: one thing I’ve observed, speaking to ex-law enforcement folks, is that if I commit a cyber crime in a country that doesn’t have a treaty with the US or Australia or whoever, it’s hard to really go after me. First of all, it’s hard to track them down, and then it’s really hard to do anything. So what happens now? You can’t really bring them back and do something. There’s no treaty.

Stan McChrystal [00:27:43]:
Yeah, that’s going to be, I think, a bold new frontier. You know, we just went and took the president of Venezuela out of his palace because he was doing things we didn’t like. Now, you could argue it was geopolitical, but our argument was he had done illegal things. I think the respect for national boundaries could change. Think of the Predator strikes that occurred inside sovereign countries like Pakistan against terrorism. I think we could easily see an appetite grow for doing operations inside sovereign countries that are either unable or unwilling to take action, where we suddenly don’t respect national boundaries as much as before. I’m not advocating this, because there are a lot of complications that come with it. Once you violate national sovereignty, you take away from them their perceived responsibility to act. They go, okay, if you’re going to go after people in my country, we’re not going to help you. That’s a different relationship than the international community that we want.

Stan McChrystal [00:28:50]:
But I think that once you get cyber activities that reach a certain level of damage, the appetite for that will be so great that it will force the issue. Now, I’d like to believe that international alliances, the United Nations, NATO, other bodies could be very helpful there, because international law is critical here; we have the ability for crime to reach so far. So international law, in my view, becomes increasingly important. We’re not there yet, but I think we’ll get there. And as you described earlier, if someone commits a cybercrime from country X and we identify them, you know, we still say, well, they’re guilty of cybercrime. They didn’t kill anyone. They didn’t hold a gun on someone. We almost put them in a different category, like white-collar crime.

Stan McChrystal [00:29:40]:
I think we’ll stop doing that. I think we’ll start saying it’s the impact that you create.

Karissa Breen [00:29:45]:
Do you think, from your experience, Stan, it’s better to go on the offensive or on the defensive? There’s good and bad to both, but I’m curious to hear your thoughts.

Stan McChrystal [00:29:53]:
Obviously, what we’ve largely done is the defensive. We’ve done it imperfectly, but it’s getting better, and people are becoming more resilient because more organizations get better at it and we can take a punch. I think there’s going to have to be a more aggressive offensive part, but it is going to have to be very controlled. I was with a corporation, it’s been a decade now, when they were exploring developing their own offensive capability against actors around the world. They stepped back and didn’t do it. But their theory was that they would have the equivalent of mercenaries, so that anyone who came after their organization was going to be punished offensively.

Stan McChrystal [00:30:35]:
And again, they stopped. But I can understand the appetite for doing that. I can understand, say, big banks getting together and saying, we’ve got to stop this stuff, and deciding to create or hire capabilities to do it because national governments are unwilling or unable. They say, well, we’ve got to protect the international financial system. And they’re not wrong, but now you’ve created extralegal military or law enforcement capabilities. And again, that’s problematic.

Karissa Breen [00:31:07]:
So speaking of problematic, what about now adding AI into the mix, accelerating a lot of these problems you’ve discussed here today?

Stan McChrystal [00:31:13]:
Yeah, we’re not ready for AI. It’s funny, we weren’t ready for social media either. We got social media, we didn’t understand how powerful it would be, and it changed our society, and we’re still not mature enough to deal with it. We are polluting a lot of young people’s minds and whatnot. What seemed like a good thing, and I didn’t see it coming, has turned out to have a downside that’s so huge. I think AI is similar. As mankind, we’re just not ready. Maybe we are in the Prometheus phase.

Stan McChrystal [00:31:47]:
Somebody goes and takes fire and suddenly gives man fire, and we’re not ready for it because we don’t have the rules in place. The whole idea of a capitalist economy is that each person does their best. They operate, hopefully, within rules, but they do their best to get the most they can. In an earlier age, where you were reliant on your personal wits or the strength of your body, the difference between you and me couldn’t be that much. In an AI world, the difference between you and me could be incalculable. Suddenly you could gather wealth, you could gather influence, you could gather power, and I could have none. You essentially can win Monopoly; you get everything on the first move of the game.

Stan McChrystal [00:32:36]:
And AI offers that potential for individuals, for companies, and for nations. The problem is we’re still programmed to pursue that, because, you know, you pursue your best interest. That has worked well in capitalist economies because it made people work hard and be effective. I don’t know that it works when suddenly people are empowered to a degree we never thought imaginable. So I’m worried that we’re going to get not just wealth inequality; we’re going to have inequality of everything, at a massive scale. We’ll have a small group of empowered people who control everything. And we say, well, that’s not possible. Well, it is, I think.

Stan McChrystal [00:33:25]:
And at least I worry.

Karissa Breen [00:33:27]:
I think it’s interesting. So we weren’t ready for social media. Even if you look back to when the internet started, people were saying we weren’t ready for that. How are governments, society, companies, private, public, whatever, going to police all of this? What I mean by that is, if I go through a red light here, I’m going to get fined. There are repercussions. But someone might think, well, I don’t like Karissa Breen’s comment or the story that she’s written about Stan, so I’m going to say something horrible to her. There are no real repercussions.

Karissa Breen [00:33:55]:
Oh, but it was online. And now we’re adding AI into it, so we could be fighting people with AI and all of the things. Are people going to start to become deluded about what’s real and what’s not real, what’s a fake, fabricated world? How do we find that line? Is there a line?

Stan McChrystal [00:34:13]:
I think they are already challenged by that. At least the people who are informed enough know that much of what they read is not true and is designed to influence. A lot of people still aren’t really aware, but increasingly they are. Where I think there’s hope, and I’ve done some thinking about this, but I’m not an expert, is the potential to use AI for governance. Because if you think about it, a lot of the problems in our governments are inefficiency, but also corruption and things like that. We all believe a representative democracy is a good thing, but AI could do a lot to make things fair. You create AI, you set up rules and understanding, and you say, we’re not going to do something that’s not fair.

Stan McChrystal [00:35:02]:
And you put policies into AI and it comes back and says, that’s not fair. Suddenly it’s the equivalent of having wise men or women, you know, great people, the gods, who say, okay, here are the rules of the road. Now, the danger there is it’s almost like an old science fiction movie where you have this machine that gives everybody the rules. But perhaps we could use AI to take some of the uncertainty and foolishness out of what we see. What if AI fact-checks everything that politicians say and do? In a best case, what that would do is shape them to stop lying, because every time they talk, if a red light goes off over their shoulder that says this person’s lying, boom, it’s hard on them. If we could use it for things like that, then there’s some potential that we shape ourselves into better behavior. You hope we don’t become lab rats that only do what the machine tells us to do, but we need something that brings us back to what is true.

Stan McChrystal [00:36:13]:
Remember the old alternate facts kind of thing? We’ve got to have something that says, no, there aren’t alternate facts, there are facts. There are interpretations, and we accept that, but some things are true and some things are not.

Karissa Breen [00:36:26]:
So then on that note, and maybe to conclude today’s interview: given everything you’ve talked about, that there’ll be a few players who own the power and the distribution, do you think those companies, people, whoever, will use that AI to their advantage or to disadvantage people? Like you said, it actually could be used for good and maybe it gets distributed evenly. Or do you think perhaps these people, these companies, would be more selfish in looking after themselves, so there’s only the top 10% and the rest is a huge polarization?

Stan McChrystal [00:36:57]:
I think what we’re already seeing is they use it for themselves, for individuals and for the power of corporations. I mean, if you look at the conduct of some of the major entities, at the end of the day, they’re about the survival and profitability of the entity. They’ve got a lot of verbiage and narrative about trying to make the world a better place, but they’re making it very much better for them, and a little bit, theoretically, better for everyone else. And then you look at the oligarchs, and I use the term pejoratively, intentionally. We have created a class of absurdly wealthy people who live absurdly wealthy lifestyles, and I don’t believe that’s healthy for any society. I think that’s not sustainable in a healthy way long term.

Stan McChrystal [00:37:52]:
And so that’s either going to get worse if it’s not addressed, or we’re going to have some kind of climactic crash-up about it.

Karissa Breen [00:38:00]:
And then lastly, Stan, what would you like to leave our audience with today? Perhaps one takeaway that you’re thinking about.

Stan McChrystal [00:38:06]:
I think we need to think broadly about cyber. Nuclear weapons emerged at the end of the Second World War, but it took about 15 years before a doctrine of nuclear weapons really emerged, where people started to say, okay, a handful of nations have the ability to destroy the world. How are we going to manage these? How are we going to think about them? You started to have the doctrines of mutually assured destruction and mutual deterrence, all those things which people hate, but they were actually necessary adaptations to the reality that a small group of nations could destroy the world at any moment. And it worked. We need the equivalent for the march of technology, because first it was cyber and information technology, and I would argue that got ahead of us, and now it’s artificial intelligence. We are going to need to develop a doctrine of thought, a common understanding: what’s right, what’s wrong, what’s not allowed. And we’re going to have to do it pretty quickly.

Stan McChrystal [00:39:14]:
And it’s not just a few legal rules. It is, you know, how we think broadly about it. And so society’s got to mature faster than we have been maturing.

Karissa Breen [00:39:26]:
And there you have it. This is KB On The Go. Stay tuned for more.

Vanta’s Trust Management Platform takes the manual work out of your security and compliance process and replaces it with continuous automation—whether you’re pursuing your first framework or managing a complex program.


Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Cybersecurity is hitting a breaking point. Compliance checklists, tabletop exercises, and confidence claims aren’t enough anymore, especially as AI accelerates both attack and defense. In this bonus episode, KB sits down with Rushell Hopkins, Professor of Cybersecurity and Computer Science at Florida SouthWestern State College, and Stan McChrystal, CEO and Chairman at McChrystal Group. Together they discuss the future of the cyber workforce and winning the cyber war without a map.

Rushell Hopkins, Professor of Cybersecurity and Computer Science, Florida SouthWestern State College

Rushell Hopkins is a Professor of Cybersecurity and Computer Science with more than thirty years of combined experience spanning higher education, military service, and the technology sector. She serves as the lead architect of Florida SouthWestern State College’s Associate in Science in Cybersecurity program, designing a curriculum grounded in applied, workforce-aligned instruction and industry relevance.

Her expertise includes cyber range operations, security operations center (SOC) pedagogy, and experiential learning models aligned to national cybersecurity workforce frameworks. Hopkins has played a central role in integrating artificial intelligence into cybersecurity education, ensuring students graduate with practical, future-ready skills that reflect the evolving threat landscape and employer demands.

She is currently ABD in Information Systems with a concentration in Cybersecurity. Her doctoral research focuses on the use of artificial intelligence to enhance cybersecurity education and strengthen workforce readiness.

Stan McChrystal, CEO and Chairman, McChrystal Group

Stan McChrystal founded McChrystal Group in January 2011 to deliver innovative leadership solutions to businesses globally in order to help them transform and succeed in challenging, dynamic environments. As Founder and a Partner, he advises senior executives at multinational corporations on navigating complex change and building stronger teams.

A retired four-star general, Stan is the former commander of US and International Security Assistance Forces (ISAF) Afghanistan and the former commander of the nation’s premier military counter-terrorism force, Joint Special Operations Command (JSOC). He is best known for developing and implementing a comprehensive counterinsurgency strategy in Afghanistan, and for creating a cohesive counter-terrorism organization that revolutionized the interagency operating culture.

Throughout his military career, Stan commanded a number of elite organizations, including the 75th Ranger Regiment. After 9/11 until his retirement in 2010, he spent more than 6 years deployed to combat in a variety of leadership positions. In June 2009, the President of the United States and the Secretary General of NATO appointed him to be the Commander of US Forces Afghanistan and NATO ISAF. His command included more than 150,000 troops from 45 allied countries. On August 1, 2010 he retired from the US Army.

In 2013, Stan published his memoir, My Share of the Task, which was a New York Times bestseller. He is also the author of Team of Teams: New Rules of Engagement for a Complex World, a New York Times bestseller in 2015. His latest book, On Character, was an instant New York Times bestseller. He previously served as a senior fellow at Yale University’s Jackson Institute for Global Affairs, where he also taught a course on leadership. He is a sought-after speaker, giving speeches on leadership and team dynamics to organizations around the globe.

A passionate advocate for national service and veterans’ issues, Stan previously chaired the board of the Service Year Alliance, advocating for a future in which a year of full-time service—a service year—is a common expectation and opportunity for all young Americans.

Stan is a graduate of the United States Military Academy at West Point and the Naval War College. He also completed year-long fellowships at Harvard’s John F. Kennedy School of Government and the Council on Foreign Relations.

 
