The Voice of Cyber®

KBKAST
Episode 253 Deep Dive: Mike Hanley | The Role of AI in Addressing Software Security Challenges
First Aired: April 05, 2024

Mike Hanley is the Chief Security Officer and SVP of Engineering at GitHub. Prior to GitHub, Mike was the Vice President of Security at Duo Security, where he built and led the security research, development, and operations functions. After Duo’s acquisition by Cisco for $2.35 billion in 2018, Mike led the transformation of Cisco’s cloud security framework and later served as CISO for the company. Mike also spent several years at CERT/CC as a Senior Member of the Technical Staff and security researcher focused on applied R&D programs for the US Department of Defense and the Intelligence Community. When he’s not talking about security at GitHub, Mike can be found enjoying Ann Arbor, MI with his wife and eight kids.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Mike Hanley [00:00:00]:
I think anytime you have a no in security, as a security leader or practitioner, you have to step back and really look at why that happened. There probably were a number of missed opportunities to be on the same page about what’s important to the organization from a security standpoint and what the business was trying to accomplish. So, again, I think that all goes back to just communication, and I think a modern team in security, independent of AI, has to be communicating with their developers and figuring out how to work together about what’s important and generally being on the same page. Because, again, I think anytime you get to a no, there’s usually something that was missed there well before that.

Karissa Breen [00:00:57]:
Joining me today is Mike Hanley, chief security officer from GitHub. And today, we’re discussing how AI makes the promise of shift left a reality. So, Mike, thanks for joining, and welcome. I’m super excited to have this conversation with you, so I really wanna get straight into it.

Mike Hanley [00:01:12]:
Yeah. Happy to be here with you today. Thanks for the invitation to be on the show.

Karissa Breen [00:01:15]:
Now I’m curious. Okay. I wanna start with the phrase shift left. Now I wanna start with this because, you know, depending on who you talk to, people’s eyes seem to glaze over or they seem to be, I don’t know, even agitated by the term. So do you think, in your experience, that people just throw around the term shift left a lot? Because I’m seeing it, you know, obviously, we run a media company, I’m seeing it a lot just in, like, day to day articles as well. So what are your thoughts then on that?

Mike Hanley [00:01:42]:
Yeah. I think shift left really is one of those information security terms that’s probably been overused in the last, you know, decade plus. You could also probably bucket some other things in there, like zero trust, where they’ve meant a lot of different things to a lot of different people at various times. But I think if you pull back from that, what people have tried to communicate by saying shift left is the idea that you’re giving helpful security feedback to developers as early as you can. And most of the time, that’s meant you have some tool that integrates with your CI/CD framework, and you get feedback sometime after you’ve written your code, through the test and the sort of analysis process that occurs later. And that’s, you know, that’s largely been the best that the industry has been able to do for quite some time. But what is exciting that’s happening now, that we’re on the front end of, is AI is actually really gonna completely redefine shifting left. Because if you think about it, rather than feedback at test time, right, which is after you finished whatever you were doing, whatever task you were doing, whatever project you were working on.

Mike Hanley [00:02:48]:
Now what we’re actually saying is AI, and having your pair programmer right there with you, who’s an expert on security and has the benefit of all the things that the model’s been trained on, can actually give you real time security feedback as you go, when you’re actually bringing the idea to code through your editor. And it doesn’t get any further left than that in terms of shifting security left. So I’m really, really excited about that as a new horizon that we’re getting into now.
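To make the older model concrete, here is a minimal Python sketch of what test-time shift left has typically looked like: a check wired into CI that scans changed files after the code is already written and fails the build on risky patterns. The script, the patterns, and the branch name are illustrative assumptions, not any particular product; the contrast being drawn is that an AI pair programmer surfaces this kind of feedback in the editor, before the work is done.

    # ci_security_check.py -- a sketch of "shift left" as it has usually
    # worked: security feedback arrives at test time, in CI, after the
    # code is already written. Patterns and branch name are illustrative.
    import re
    import subprocess
    import sys

    RISKY_PATTERNS = {
        r"\beval\(": "eval() on dynamic input is a code-injection risk",
        r"subprocess\.(call|run)\(.*shell=True": "shell=True invites command injection",
        r"(?i)password\s*=\s*['\"]": "possible hard-coded credential",
    }

    def changed_files() -> list[str]:
        # Files modified relative to the main branch (assumes a CI checkout
        # where origin/main has been fetched).
        out = subprocess.run(
            ["git", "diff", "--name-only", "origin/main...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [f for f in out.stdout.splitlines() if f.endswith(".py")]

    def main() -> int:
        findings = []
        for path in changed_files():
            try:
                text = open(path, encoding="utf-8").read()
            except OSError:
                continue  # deleted or unreadable file; skip it
            for lineno, line in enumerate(text.splitlines(), start=1):
                for pattern, why in RISKY_PATTERNS.items():
                    if re.search(pattern, line):
                        findings.append(f"{path}:{lineno}: {why}")
        for finding in findings:
            print(finding)
        return 1 if findings else 0  # nonzero exit fails the CI job

    if __name__ == "__main__":
        sys.exit(main())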

Karissa Breen [00:03:16]:
Okay. That’s really interesting. I love new horizons. So from just a human being point of view, if we even think about the whole, you know, secure by design originally, everyone talking about this, and then obviously we got to DevSecOps or SecDevOps, depends on who you speak to. That was a bit of a change from a mindset perspective. So now we’re sort of talking on the AI front in terms of redefining the whole shift left approach. How do you think people are gonna now respond to this? Because, again, like, I’m always very optimistic. I love technology.

Karissa Breen [00:03:47]:
So for me, it’s, like, makes sense, but I’m curious then, with your knowledge and your insight, do you think people would be more receptive towards this? Or again, does it go into that old mindset around, okay, well, I need to approach this new paradigm completely differently now?

Mike Hanley [00:04:02]:
Yeah. Well, I’m with you, Karissa. I tend to be an optimist about these kinds of things and what they can lead to from a productivity, from a security, from an experience standpoint going forward. And I’m very, very bullish and very optimistic about what AI is gonna do for security. But you raised a good point, which we’re seeing, and it’s gonna be one of the headwinds for AI to sort of manage through: a lot of people wanna invent a new framework for assessing AI tools, or there’s a lot of uncertainty about what the future looks like because this is a rapidly changing space. And, you know, the advice that I’ve been giving people is, for a lot of tools that are built with AI in mind to help get some task done, you really already have a lot of the same, you know, vendor review processes, the same playbooks for thinking about, do you trust the vendor? Do you trust the tool? Do you understand where your data is going, etcetera? A lot of that stuff you’re already doing for third party vendor risk management. So I just try to remind people that just because AI and the experiences that we’re building with AI are new doesn’t mean that we have to completely reinvent how we think about assessing vendors and tools.

Mike Hanley [00:05:08]:
Again, for example, if you’re bringing in basically any other tool into your environment, one of the questions you’re gonna ask is, who has access to my data? Where does it go? How is it used? How is it secured? You actually need to ask the same questions of vendors that are building tools that happen to be powered by AI as well. So I encourage people to, you know, use what’s on the truck, if you will, from a process standpoint to evaluate those opportunities and try to help people get over that initial, well, what are all the things that could go wrong, which is obviously part of what you wanna evaluate. But really focus on what can go right for your business from adopting AI, especially for a place like developer tools. And the numbers speak for themselves. I mean, we’re seeing pretty phenomenal improvements, not just in developer productivity, but also from a security standpoint. And I think when people sort of start to weigh that out and actually look at those things, it helps have a more reasonable conversation, not one that’s just based on uncertainty about what AI can bring in the future.

Karissa Breen [00:06:01]:
Okay. So there’s a couple of things that you said that I wanna get into a little bit more when you said having a reasonable conversation. So what does reasonable look like to you?

Mike Hanley [00:06:08]:
I mean, I think reasonable to me looks like using a lot of the same plays that you already probably use in your company. Right? Like, your company has security policies in place. You have expectations on what type of data you trust with what types of vendors. You ask for data flow diagrams. You seek to understand the security practices associated with the vendor that you’re working with. You know, we have standards. We have templated security questionnaires. I mean, the industry as a whole has a litany of these kinds of things that we can leverage to help assess the trustworthiness of a vendor or a tool and help us make an informed decision about whether we wanna use them or not.

Mike Hanley [00:06:44]:
And the reality is that works well for tools powered by AI and well for tools that are not powered by AI. And I think that can help you evaluate what, if anything, is unique about AI, both from a potential risk trade off but also from a benefit standpoint to the organization, and stay focused on where those trade offs are, not on reinventing a special process just because it happens to be AI. And the good news is I think we see, like, a lot of great resources that are being made available to help people understand, you know, that there are actually plenty of very successful initial use cases for AI that can help people prove out those benefits. And the existing third party vendor risk management type approach that we have is something that people can just leverage as standard, off the truck, as well.

Karissa Breen [00:07:29]:
Okay. So I wanna go back a step when you mentioned before sort of assessing vendors and tools, etcetera, and then asking the right questions like, well, who has access to the data? Is it secure? All of those types of things you just mentioned, and that makes sense from a security perspective. But for a, I don’t wanna say the word, average developer, from a development point of view, even if you go back historically, maybe those questions haven’t been as ingrained as they are for a security practitioner. So are you starting to see a shift between, like, developers asking those questions more and more? Like, even myself going back, you know, 10 years ago, like, developers weren’t really asking those questions. Now, nowadays, it’s becoming more prominent because we’ve got people like you and me sort of banging on about it a lot. But are you starting to see that coming through a little bit more, just in the day to day conversations?

Mike Hanley [00:08:13]:
Yeah. I mean, I think so. It depends on the organization and their size and what kind of functions they have. But, you know, if I’m talking to an average sort of midsize commercial customer, you know, they might have somebody who works in procurement. They might have a legal team involved. There’s somebody in IT who needs to run the tools, and then there’s the developer who ultimately wants the experience. And what I find is there’s a particular amount of, like, the time to wow, if you will. Right? Like, what’s the sort of wow moment in terms of how you demonstrate the overwhelming benefit to the business? And a lot of the developers can show numbers, for example, that they’re writing code 55% faster, or that in certain languages the AI tools might be helping them write 60% more of their code, or they’re using AI to fix bugs that they didn’t know how to fix before or didn’t have expertise on, or are getting better suggestions than what they got before.

Mike Hanley [00:09:03]:
And these are, you know, not small incremental improvements. Right? I mean, these are very, very significant things that you can actually quantify in terms of productivity. Right? I mean, if you’re writing code 55% faster, you can tie a dollar value to that, generally speaking. And I think that helps the developers have a conversation with the nontechnical folks who are involved in a procurement process, to better understand what the upside to the business looks like. And, you know, I think one of the things that’s key to building trust with AI, again, it’s new, there’s a lot of uncertainty, is also just being clear and transparent about what it does. So, you know, my recommendation to vendors out there that are building tools that are based on AI is: don’t keep any secrets. Just share.

Mike Hanley [00:09:45]:
What are you doing? What kind of models are you using? How are they trained? How do you store data? What options are you giving customers in terms of, you know, do you store data or not? Or how do you use any data or telemetry that you get? And I think that, you know, with that working in the open, the responsibility that goes into being a pioneer in that space is gonna be key to helping build trust as well, especially for maybe people who aren’t as bullish or optimistic about it as we are, or people who are just trying to run through, you know, these very, very standard business processes to help assess risk. Whatever the case is, I think the transparency there from the vendor community is, like, very, very important to complementing the enthusiasm and then also sort of the objective stats of the performance of the tools that you might get from the developers.

Karissa Breen [00:10:27]:
Yeah. That’s an interesting point around building trust because, depends on who you ask, there’s this conundrum around, oh, AI is bad, or no, it’s good. Obviously, we’re optimistic on this interview. So just going back to that then for a moment, and you said, you know, being clear, upfront, transparent about what it does, all of those types of things. So then I’m curious to know, why wouldn’t someone be clear and transparent upfront? Or would you say that people perhaps just don’t know those answers, so they just don’t say anything?

Mike Hanley [00:10:53]:
GitHub’s approach has been to lead with publishing everything that we’ve got and be clear about, you know, what we’re doing and how we’re doing it. But I think in other organizations, it may simply be that not everybody necessarily understands. I mean, look at 2023, all the new product offerings and new start ups and sort of really the advent of generative AI in a much more mainstream setting. Right? I mean, everybody remembers the moment that they had their first experience with something like ChatGPT. That was only a year ago. I mean, a little over a year ago. Right? Maybe 15 months ago now. So the world has changed a lot in a very short period of time.

Mike Hanley [00:11:29]:
So I think as people have tried to develop new experiences very quickly on top of some of these capabilities, it’s probably the case that not every organization necessarily has taken sort of full stock of that. But, you know, my advice for organizations that haven’t done that is take the time to catch up. Make sure that you understand what you’re doing. Lead with that transparency. Again, I think it helps especially for overcoming objections, or at least having an open dialogue with some of these other teams that are focused on managing risks. But, you know, again, this is gonna be a very fast moving space, not just right now, but for the next several years. I mean, if you look at how much has changed in, again, the last 15 months, and then you sort of project that forward and expect at least that rate of change, if not more, and I’m expecting more personally, you know, we’re all gonna have to be racing to make sure that we keep up to date information about what’s going on, not just keeping information out there, period.

Mike Hanley [00:12:21]:
So that’s definitely going to be a challenge overall in the space: continuing to build trust in AI while keeping up with that pace of change.

Karissa Breen [00:12:27]:
But I guess you guys, I mean, at GitHub, have sort of done it from the beginning. Right? So I guess that’s not necessarily an issue for you. But then I’m curious, going back to the trust side of it, because everyone talks about building trust, which is a hard thing to sort of answer. But as you were speaking, what was coming to my mind, Mike, was, well, you’re being transparent from the beginning. People can access it. They can read it. They can absorb it. They have all that information there.

Karissa Breen [00:12:51]:
Kinda like you’re not hiding anything. So would you say that because of that, that’s what’s enabled sort of the brand and the trust factor, as opposed to perhaps other tools out there that maybe aren’t as transparent about things? And then if so, is it something that perhaps other companies can look to employ, around the same sort of transparent strategy?

Mike Hanley [00:13:10]:
Yeah, absolutely. I absolutely think it’s been a contributor to the success of Copilot. And, you know, I frequently go have meetings and sit down with customers to talk about GitHub Copilot and, you know, how we’re using it internally at GitHub or, you know, generally what capabilities it could bring to bear for their organizations. It’s very, very common that I’m not just talking to, you know, my counterpart who might be running engineering or security at the customer. It’s often, you know, you have folks there from the legal team and the procurement team. And generally speaking, you know, we’re able to sort of get quickly to the how can we help them, because we’ve sort of already answered everything else that’s out there.

Mike Hanley [00:13:47]:
And, again, I think that’s a great approach for people to just proactively take when they’re building these new products: again, lead with transparency, lead with what you’re doing and how you’re doing it, get all those questions up front, and then that helps you very quickly get to the, great, now how can we help you as an organization?

Karissa Breen [00:14:05]:
You made a comment before, and you said, what can go right with AI? So, again, we’re optimistic, so I’m really keen to get into that, because there is a lot of articles and doom and gloom and all that type of stuff out there, but, you know, I’m very much a proponent of AI as a tool. And to your point before, around 55% faster. Like, those are huge numbers. That’s not like 1 or 2%. Like, that’s a lot. So I’m then curious to know, like, the benefits, and from your perspective, like, what is going right at the moment?

Mike Hanley [00:14:34]:
Well, I think, you know, the first real tool on the scene here that’s gotten major traction has obviously been GitHub Copilot. And so we’ve been in a fortunate position to see what that rapid adoption looks like, and that’s, you know, already now the most popular, most widely adopted AI developer tool out there. There’s more than a million paid users using that tool already. But that’s just one use case. And I like what you said a minute ago, which is thinking about AI as a tool. Well, if you have a tool belt or a toolbox, you have lots of different tools to do lots of different jobs. And for us, GitHub Copilot, the pair programmer, is just one of those opportunities. And we’re also doing other exciting work, like, for example, in the security space, we recently released some enhancements to our code scanning tools that now use the power of AI to automatically suggest fixes for findings when the code scanning tool runs across your code.

Mike Hanley [00:15:34]:
And this is really powerful, because if you think about it, you know, it’s great that your pair programmer is helping you as you’re bringing the idea to code in the editor. But what about all that other code that you’ve written, you know, since the dawn of time, that didn’t have the benefit of having an AI pair programmer over your shoulder, or didn’t have the benefit of whatever you’ve learned through your experience, or from the developer who no longer works at the company who wrote that code 10 years ago? Well, the cool thing is now code scanning doesn’t just tell you that you have a bug. It gives you a relevant, well explained, clear fix based on what it knows, not just about that bug, but the context around it. And that is really powerful if you think about helping organizations manage things like technical debt and vulnerabilities. Like, if you don’t just get better at writing new code but also get better at fixing your old code, you see the advantages of this really start to compound quite quickly. And that’s only just 2 things that we’ve talked about. I mean, what if it’s also writing docs to describe your code faster? What if it’s better summarizing your pull requests? What if it’s giving you a narrative summary of the contents of a repo so that you don’t need to sort of poke around until you figure it out for yourself?
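For readers who want to poke at the code scanning findings mentioned here, below is a small sketch that lists a repository’s open code scanning alerts through GitHub’s public REST API. The owner, repository, and token are placeholders; the AI-suggested fixes described above surface in the code scanning and pull request experience, not in this raw listing.

    # list_code_scanning_alerts.py -- sketch: pull open code scanning alerts
    # for a repo via GitHub's REST API. OWNER/REPO and GITHUB_TOKEN are
    # placeholders; the token generally needs the security_events scope
    # (or public_repo for public repositories) to read alerts.
    import os
    import requests

    OWNER, REPO = "your-org", "your-repo"  # hypothetical repository
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/code-scanning/alerts"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    }
    resp = requests.get(url, headers=headers, params={"state": "open"})
    resp.raise_for_status()

    for alert in resp.json():
        rule = alert["rule"]
        loc = alert["most_recent_instance"]["location"]
        print(f"#{alert['number']} [{rule.get('severity')}] {rule.get('description')}"
              f" at {loc.get('path')}:{loc.get('start_line')}")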

Mike Hanley [00:16:46]:
I mean, all of these things are gonna just completely transform the developer experience. And, again, the neat thing is we’re in such early days, and these are already such powerful use cases. That’s, again, part of why I’m so excited, not just about the, you know, the overall productivity implications, but I do think that AI will fundamentally reshape how we think about security. Because the power and the benefit of it is so obvious even in these early days that it is worth figuring out any of the other challenges that come our way, because this is probably one of our best bets at figuring out the slew of technical debt that the Internet and all modern commercial entities and governments are built on. Right? Like, we know that in a lot of cases, we’re relying on systems that are a decade or decades old in many cases, and human powered refactoring isn’t gonna get us out of that problem. So I’m very, very excited about the opportunity for that, not just in the commercial space, but also in open source, where, again, most of the code that we all depend on to some degree or another already exists, and the opportunity for AI to go also help creators, developers, and maintainers in those spaces improve their projects and give them some of the support that they need there as well.

Karissa Breen [00:18:00]:
Okay. So the part that I wanna press on a bit more, which was really interesting, is what you said around, like, old code. So, for example, I’ve seen in organizations before, it’s like, oh, there’s literally one guy, he’s the only guy that actually knows about this code. What happens if that guy leaves? What happens then? And what you’re saying means that there’s built in redundancy, because we don’t need to just ask the same person that’s worked there for 40 years, hey, like, what’s going on with this? When, you know, you’re talking about Copilot, like, it can do it for you: summarize things, explain things, make sense of the docs, those types of things.

Mike Hanley [00:18:38]:
Yeah. I mean, the interesting piece of what you’re describing there is, you know, for so long, organizations relied on tenure and institutional knowledge and community knowledge within organizations about how things worked, or how they were manufactured, or who wrote the code. And I think what’s interesting is, you know, when you think about the power of some of the AI or, you know, generally just LLM backed technologies that we’re seeing enter the market, they have the ability to synthesize some of that context much faster than you could get on your own, even sitting down with whoever wrote that code 10 years ago. And there’s the ability to sort of look at what’s happening around you, or the context in which you might be asking a question, or even interact with the code in a way other than just kind of, you know, clicking through the editor and going line by line. Right? The idea that you can even ask a question in a natural language way, right, through chat, which has obviously become one of the most popular experiences for people interacting with AI apps over the course of the last, you know, year, year and a quarter. This is a complete game changer. Right? Because you don’t have to rely on that individual anymore. And the idea that you can just ask your assistant, your AI powered assistant, directly, that I think is a big, big game changer.

Mike Hanley [00:19:48]:
And what’s neat is this gives organizations an incredible amount of agility. Right? Like, if you’re a security engineer coming in to do a review, and the feature team is working on something that they’re really busy with and doesn’t have time to necessarily sit down and take you through it all, great. You could just load up that repo and ask some questions about the code to get started from, you know, something like GitHub Copilot Chat. And then you probably have most of the context that you need to head off to the races with your review. That really is exciting to me as well, because it does give people fluidity. It provides for internal mobility. I think it increases the amount of access that people have, the opportunities to be developers. It changes the game for getting into development in the first place.

Mike Hanley [00:20:29]:
Right? I mean, some people will learn to code now by asking natural language questions to a chatbot. Like, that’s phenomenal, and that’s remarkably accessible for people with different learning styles where maybe the existing ways of getting into programming or becoming a developer didn’t work well for them. So very, very bullish on what that’s gonna do, not just for teams, but just access for people generally who wanna get into development and wanna create.

Karissa Breen [00:20:55]:
Yeah. That’s a good point. And that was something I was gonna ask you about, around skills. So you’ve obviously sort of summed it up around, it’s gonna lower the barrier for entry. But then what are people out there then worried about? Because I’ve heard from multiple people, like, they’re worried about AI, like, taking jobs, but then, you know, again, it’s the tool side of it. But then to your point, it means that we can potentially get more people in on this front. So what would be people’s reservations then on that side of things?

Mike Hanley [00:21:22]:
I think it’s, you know, back to what we talked about a little earlier, where I think there’s just uncertainty about what the future holds, and I think, understandably, some people come at uncertainty from a place that might take a glass half empty view of it. You and I seem to be glass half full on this, but I understand everybody’s kind of looking at these things from a different place. But my view on this, and I’ve seen this shared very broadly from a lot of other people, is AI will probably accelerate job creation and accelerate opportunity in places like development. Because, again, the world at this point has been thoroughly eaten by software. And we know that even, for example, in security, you know, near and dear to us, the job shortage that’s reported annually in places like the United States, where I am, is generally measured in the hundreds of thousands of unfilled cybersecurity job needs that exist out there. And we frequently point at, you know, a lack of qualified candidates or a skill shortage or a training shortage or whatever the excuse is that year when you see those reports. Well, if it’s now easier to become a security professional, or if AI helps you with some of that security context, or helps people get into the security field who wouldn’t normally have had access to it or wouldn’t normally have had a pathway or an interest to get into that, then that’s great, because it’s going to help more people get into those spaces who weren’t there before. So my view is, again, it’s just another way for people to get into development, but it’s also creating new work and new opportunities for people to solve problems that will emerge as well, just as AI continues to grow.

Mike Hanley [00:22:53]:
So we mentioned a minute ago the legacy code, the decades’ worth of code that the world is built on. Challenges that we haven’t addressed yet but that we will eventually need to get to are things like, what are we gonna do about moving off of a language like COBOL? Or what are we gonna do about the unmaintained open source projects that are running, you know, core parts of the Internet? I mean, these are interesting questions that AI and communities and the public sector and the private sector and academia are gonna need to come together to figure out. But AI is the common element there that I think is gonna be a force multiplier that wasn’t in the picture a few years ago and is now, one that can help us manage some of those problems at the scale of things like the core open source components that power everything from your Tesla to your refrigerator to your iPhone.

Karissa Breen [00:23:40]:
So I wanna switch gears just subtly, and maybe let’s talk through, again, benefits and your thoughts on how AI can then be leveraged for secure software development. I think the skills one is massive. Again, that’s something that’s coming up a lot in my interviews now, but from how you’ve answered that, I think that’s the way people need to be thinking along those lines. So I guess it’s listening to people like yourself to actually explain, like, this is actually going to enhance people, and all types of people getting into the field. So now I’m keen to hear more on, yeah, the secure software development side of things.

Mike Hanley [00:24:13]:
Yeah. I mean, you know, and to go back to the skills piece for just a brief moment, I mean, security is a big discipline. Like, I don’t know anybody who’s good at everything in security. I mean, it’s just too broad of a space. We have specialties. We have areas that we have experience within. You know, some people have spent time in cryptography. Some people have spent time, you know, as a pen tester.

Mike Hanley [00:24:33]:
Some people have spent time in, like, security design and UX research. And all of this kind of comes together to be the broader landscape that’s security. But, you know, I haven’t met anybody yet who’s good at all of it. It’s just too big of a space. But, again, this is a benefit where, if you have some reference or context in some areas, AI can certainly help you with filling in a little of the rest, or it can be a tool to help supplement where you may not have that experiential knowledge per se. But, you know, to get to the SDL, or the, you know, the secure software development life cycle piece, in a little bit more detail, I think the prior point is relevant, because, you know, the job skill shortage is one of those classic things that we point to when we say, like, well, there’s not enough security people to do the reviews or sit over your shoulder and watch developers while they work. I would assert we actually don’t want that at all, and I don’t think developers want somebody shoulder surfing them from the security team the entire time that they’re trying to work. I think everybody, you know, the developers, generally speaking, want good security outcomes.

Mike Hanley [00:25:32]:
But as an industry, we have failed to put them consistently in places that work best for developers. If you think about your average security tooling experience today, and I’ll call back to the shift left conversation we had at the beginning, it’s not easy for developers to interact with security tooling. Most of the time, security experiences are not designed with developers in mind. They have to get out of whatever they’re doing, go to some system that the security team runs that they weren’t, you know, involved in selecting or configuring, get some findings and reports, and go figure out how to deal with that the best they can. Then they’ve got a red team report coming back saying that they’ve got, you know, 15 vulnerabilities that they need to fix. And we’re constantly asking people to react to security information. And I don’t think that this is a great experience for developers.

Mike Hanley [00:26:19]:
It’s where we are as an industry, but I don’t think it’s a great experience for developers. But the AI powered experience of more continuously embedding security feedback in every single part of the experience, and doing so with great context, that’s where the real power is. So if you think about, you know, something like GitHub Copilot, which we mentioned, we’re able to give you feedback about what you’re doing in the moment. That’s not something that any security team is really equipped to do anywhere today. Right? So that’s already a big game changer. Like, there just aren’t shops that have people shoulder surfing every developer, all the time, while they’re doing their work. So that’s kind of thing 1.

Mike Hanley [00:26:57]:
And the security review process: typically, you know, you run scanners, and you get back results and findings. And then, as a developer, you need to sort through all that noise. Well, if the AI is just simply telling you, hey, we found 10 things. We automatically fixed 6 of them. These 4, you need to take a look at. We have good suggestions for 3 of them that, you know, are accepted 98% of the time when we give them. And then on this last one, you know, we think this will work.

Mike Hanley [00:27:20]:
Try it. And if not, we’ll give you the next suggestion. That will completely reimagine that entire test experience. Right? Because you’re giving all the context, and you’re explaining the bugs and the fixes based on the rest of what the developer is doing. I mean, that is, you know, like a dream experience, I feel like, for getting security feedback at that stage. And then, you know, when you think forward to sort of then operating whatever it is that you built, we’re already seeing opportunities to integrate skills or ecosystem extensions where AI can help sort of recognize what’s happening in the environment and suggest reactions to it. Right? So I think the incident response loop will potentially get tightened up pretty significantly as well through AI. That will also be a very big game changer.
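As a thought experiment, the triage flow described here might be summarized along these lines. Everything in this sketch is hypothetical, invented purely to illustrate the shape of the experience: findings arrive pre-sorted by what the tooling already fixed or can confidently suggest, rather than as one undifferentiated pile of results.

    # triage_summary.py -- hypothetical sketch of AI-assisted finding triage.
    # The Finding shape, fields, and thresholds are all invented for
    # illustration (requires Python 3.10+ for the "str | None" annotation).
    from dataclasses import dataclass

    @dataclass
    class Finding:
        rule: str
        auto_fixed: bool          # tooling already applied a safe fix
        suggestion: str | None    # proposed patch, if any
        acceptance_rate: float    # historical rate this suggestion is accepted

    def summarize(findings: list[Finding]) -> str:
        fixed = [f for f in findings if f.auto_fixed]
        confident = [f for f in findings
                     if not f.auto_fixed and f.suggestion and f.acceptance_rate >= 0.9]
        review = [f for f in findings if f not in fixed and f not in confident]
        return (f"We found {len(findings)} things. "
                f"We automatically fixed {len(fixed)} of them. "
                f"{len(confident)} have suggestions accepted at least 90% of the time. "
                f"{len(review)} need your review.")

    if __name__ == "__main__":
        demo = [Finding("sql-injection", True, "use query parameters", 0.99),
                Finding("xss", False, "escape rendered output", 0.98),
                Finding("weak-hash", False, None, 0.0)]
        print(summarize(demo))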

Mike Hanley [00:28:05]:
So I see, really across the whole spectrum of things, there’s huge opportunity for improvement. Right? Like, you’re seeing better signal at the onset when somebody’s writing something. You have the potential to simplify something like vulnerability management, which is traditionally just really hard for a lot of organizations to get right. You’re giving much more clear, crisp feedback about security findings, and even suggesting fixes, if not automatically just fixing them for people. And then you’re helping assist with the sort of deploy, operate, and incident respond pieces of that as well. That’s a pretty big set of changes across the board, and those are just, you know, a few examples that we’ve touched on. So I’m actually really excited about that. Now, security teams are gonna obviously need to adapt to that as well and think about, you know, where that shifts their attention and focus.

Mike Hanley [00:28:47]:
But that is a great problem to have, because, you know, organizations just don’t get that kind of coverage today across their SDLC. Even the most sort of well resourced organizations would struggle to get close to that.

Karissa Breen [00:28:58]:
That’s an excellent response, because what was coming to my mind, it’s going back to the shoulder surfing, which is really annoying, and I can talk about this because I’ve worked in a penetration testing team internally myself, and these were some of the issues that we used to run into, because then it kinda feels like, oh, like, you’re policing us. And, you know, culturally, it creates a problem. With that in mind, would you start to see the culture shifting? Because, again, like, we don’t need to go and ask someone a question when you can ask it yourself. And therefore, you’re removing that sort of embarrassment of asking someone a question because you don’t really know. You know, there’s been multiple cases of people just never really asking stuff because they never wanted to feel embarrassed because they didn’t know something. Would you start to see the culture within security teams, okay, let’s just focus it internally for a moment, change, because now that’s being removed, and the whole shoulder surfing thing just won’t be a thing moving forward with leveraging AI?

Mike Hanley [00:29:56]:
Yeah. I mean, you know, if you look at 10, 15 years ago, right, I mean, I think a lot of security teams had very much the guns, gates, and guards approach, the sort of more traditional views of security. Right? Like, the department of no, if you will. And I think especially in the last 10 years, my hope is that more and more security teams have moved to being the department of yes and, right, where they are more proactively engaged with the business, trying to help them get to the outcomes that they want to achieve. And, by the way, you can manage risk while also being approachable and being accessible to other teams, not just developers: finance, the legal team, you name it. Like, the security team should be seen as a resource, for sure, to be helping with all these problems. But especially with, you know, the advent of AI tooling, I think it’s even more important that security teams in particular think about their role as a business enabler, because you can’t have what you just said happen, where people don’t go to the security team. While AI is an extremely powerful tool, you know, that’s going to continue to present, again, in a rapidly moving setting and context. It’s gonna continue to represent an opportunity for security teams and developer teams to actually work more closely together and communicate more frequently, because it’s gonna solve some problems, and it’s gonna free up time to focus on things that most security teams haven’t had time to focus on in the past. Or it’s gonna present opportunities for security teams to learn new tools themselves, or new skills themselves, or adapt to having AI in their own workflows.

Mike Hanley [00:31:25]:
So I think that idea, that the culture of the security team needs to be transparent, needs to be focused on communication and being clear about what the priorities are of the business, and working with other teams, like, my hope is most teams were moving in that direction anyway over the course of the last 10 years. But because the pace of development is going to just radically accelerate with AI, like, the security teams that don’t adapt to that are actually probably going to make their organization struggle more, because they will be more out of tune and more out of touch with how the rest of the business is trying to operate in an AI first world. And I think this is one of those things where, like, we won’t go back to where we were previously. I mean, AI and adopting AI tools, not just for developers, but just generally, we won’t go back to a day where that’s not happening, because it’s such a differentiator from a productivity standpoint. And, you know, in a competitive economy where there’s lots of innovation happening, like, you need to be able to compete in this space. So I think it’s a matter of time as most organizations think about what the place is for them to adopt it, and how they do it, and at what pace. But that’s going to necessitate, again, that change from a security mindset. Because if the organization takes off in one direction with AI and the security team’s not figuring out how to do it in a way that’s safe and that manages the risk appetite of the organization, they risk getting left behind.

Mike Hanley [00:32:48]:
And then once they’ve adopted it, if they don’t stay in communication with their developers, they’re not gonna be able to figure out how to continue to best meet their needs. Because, again, if you look at how much the space has changed just in the last 15 months since ChatGPT came on the scene, if you’re not talking to your developers, or you’re not talking to the other consumers of AI tooling in your organization, you’re not gonna be able to keep up with the rapid pace of change, the new opportunities that they’re gonna wanna see. And, ultimately, that’s bad business. I think it’s really, really important that security teams are in touch with, communicating with, accessible to, and resources for, ultimately, their teammates and other functions in the business.

Karissa Breen [00:33:25]:
Yeah. That definitely makes sense. I think maybe my question was around just historically there being bad blood, though that’s maybe not the way to position it. But just in my experience of people being less inclined to wanna speak to us, because they kinda felt like we were helicoptering them and we were the police, telling them, no, you can’t do that. And maybe the dynamic, I think, will then change over time perhaps. So it was probably more from that point of view. Do you think things will start to be more receptive then between those 2 teams?

Mike Hanley [00:33:54]:
Sure. And, again, my hope, I think, is that that was changing already. But I think anytime you have a no in security, as a security leader or practitioner, you have to step back and really look at why that happened. Because if you have to say no, or block a deployment, or stop something from shipping, generally in that situation there probably were a number of missed opportunities to be on the same page about what’s important to the organization from a security standpoint and what the business was trying to accomplish. So when I reflect on the times that I’ve been in situations like that, they are always missed opportunities to be in a shared understanding about what’s important to the org. So I think when it comes to, like, you know, developers don’t want helicopters around them watching everything that they’re doing, it’s like, yeah, well, really what you’re saying is, how do we make a great developer experience? Because if you’re a software developer, you wanna develop software, and, you know, you probably came to work to create something, and nobody’s comfortable when they’re being hovered over.

Mike Hanley [00:34:55]:
But security can solve problems like this with thoughtful design, and part of thoughtful design is talking to the developers about, hey, we’re here from the security team. Here’s what’s important to us. Here’s the risks that we’re trying to manage. I wanna learn more about what you do and the tools that you use and how you work, so that I can manage these risks to the business and also help you be a happy developer. And what’s interesting is, like, again, when you look at what we’re already seeing with the tools that are out there today, I mean, the feedback on things like GitHub Copilot is not just that it’s making developers more productive or helping them get to better security outcomes, but they’re also happier. Like, developers will actually say that they are happier in their jobs as a result of having access to GitHub Copilot as their AI pair programmer. So, back to your question, if you can have better security, if you can have better developer productivity, and everybody’s happier about it, and this is a product of communicating about what’s important to everybody and how the tools fit together so that everybody gets done what they need from their respective stakeholder positions, like, this is a great outcome, because it means you’re all on the same page and everybody’s rowing the boat in the same direction.

Mike Hanley [00:36:00]:
So, again, I think that all goes back to just communication, and I think a modern team in security, independent of AI, has to be communicating with their developers, figuring out how to work together, and generally being on the same page about what’s important. Because, again, I think anytime you get to a no, there’s usually something that was missed there well before that.

Karissa Breen [00:36:22]:
So in terms of moving forward, do you have any sort of hypothesis around even the next 12 months? As you mentioned before, like, things are, you know, increasing at a rapid speed. In the last 15 months, we’ve seen massive change, even with ChatGPT emerging, being more ubiquitous. What are your thoughts then, like, even in 12 months, we come back, we do another interview, you know, on the whole AI game and the shift left, just purely based on your experience and any sort of insights that you have, just to leave our audience with today?

Mike Hanley [00:36:51]:
Yeah. I mean, I think the grand challenges that I’m interested in are some of the at scale problems that AI will lend itself to, being more helpful to us than solutions that we’ve had in the past. And an example that I frequently point to is, DARPA announced at Black Hat this past summer here in the States, in August, that they were gonna have their AI Cyber Challenge, you know, looking at how AI models can help solve large problems in software security. Like, for example, how do you help make a big dent in the corpus of technical debt that exists in open source software that we all depend on? I’m excited to see what comes from things like that, where we’re taking big bets at, okay, we’ve created these great individual developer experiences and tools, but what if we stomp out, like, an entire class of problems through a bigger industry effort? That would be really interesting and exciting. And if you sort of project forward from what we know over the course of the last, say, 2 years, where we’ve seen advancement in models, we’re seeing lots of specialized models coming out now that are small and serve sort of particular purposes. So we’re seeing more and more options become available.

Mike Hanley [00:38:04]:
We’re seeing more and more tools become available. I think a year from now, I would love to see some actual meaningful progress, where we see some real breakthrough opportunities to fix some of the bigger software security challenges at scale, whether it’s fixing a bunch of projects, whether it’s, you know, a dramatic improvement in, again, like, a pair programmer like Copilot, through model advancement or through advancements in the experiences. I do think, if we have this conversation again a year from now, we’ll have made meaningful progress on some of those items, and I’m really excited to sort of see which ones come to fruition. But, again, when you see things like DARPA putting a challenge forth and saying, hey, come solve a really meaty problem with AI, the pace of progress that’s happening in the industry right now suggests that we will probably see some solutions to some of these big problems in the next few years, if not the next few quarters. And I think there’s never been a better time to be in security, because buckle up. It’s gonna be an exciting next couple months and next couple years for all of us.

Karissa Breen [00:39:01]:
So, Mike, really quickly, do you have any closing comments or final thoughts?

Mike Hanley [00:39:04]:
Yeah. I think my advice is, you know, if your organization is asking about AI today and what you should be looking at, I think just be open, go learn, talk to people who are in the space who are already doing it. But, ultimately, you know, talk to your lawyers, talk to your IT practitioners, talk to your finance folks, figure out what’s important to your organization. Try to find that opportunity where, you know, you might be able to show how AI can help make your organization better. And specifically, if you’re listening to this podcast and you’re in security, find some opportunities to make your developers happier and to make your security team happier. There’s plenty of wins, I think, at the intersection of that. And experimenting with those, I think, could be a great project for you in 2024.

Karissa Breen [00:39:50]:
Thanks for tuning in. For more industry leading news and thought provoking articles, visit kbi.media to get access today.
