The Voice of Cyber®

KBKAST
Episode 340 Deep Dive: Paul Davis | JFrog’s First Step Towards AppTrust and DevGovOps
First Aired: October 29, 2025

In this episode, we sit down with Paul Davis, Field CISO at JFrog, as he explores JFrog’s approach to building trust in software development pipelines and the evolution towards DevGovOps. Paul shares his perspective on elevating trust from the granular level of software releases to the broader application layer, emphasising the need for consistent, automated, and reliable methodologies in development. He discusses the critical role of automation in balancing speed and security, tackling tool sprawl, and mitigating risks posed by open source dependencies. The conversation touches on the realities of legacy tech debt, the challenges of integrating and consolidating security tooling, and the importance of having a single source of truth.

Paul is an experienced IT Security Executive who, as Field CISO at JFrog, works to help CISOs, IT execs, and security teams enhance protection of their software supply chain. Additionally, he advises IT security startups, mentors security leaders, and provides guidance on various IT security trends.

Vanta’s Trust Management Platform takes the manual work out of your security and compliance process and replaces it with continuous automation—whether you’re pursuing your first framework or managing a complex program.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Paul Davis [00:00:00]:
I have this terrible saying because I’m in security and I manage my paranoia, which is: people say trust but verify. I can’t do that. In software development pipelines, we have verify, verify, verify, and then maybe trust, because I’m always paranoid. I have to keep watching out for new issues, new problems, new vulnerabilities. So I have to build that pipeline to be trusted.

Karissa Breen [00:00:34]:
Joining me now is Paul Davis, Field CISO at JFrog, and today we’re discussing JFrog’s steps towards AppTrust and DevGovOps. So, Paul, thanks for joining me and welcome.

Paul Davis [00:00:49]:
It’s a pleasure. Thank you for having me.

Karissa Breen [00:00:51]:
Okay, so there’s a lot of DevSecOps, or people say SecDevOps, depending on who you ask and what the phraseology is. I want to start, perhaps more specifically, with AppTrust. I want to start there because the word trust is an interesting one, and depending on who you ask what it means to them, the definition changes. So I’m keen to understand: what does AppTrust mean to you, Paul, and then more broadly, what does it mean to JFrog?

Paul Davis [00:01:16]:
So you’re right, trust is a big thing. It’s been shown to have an impact in so many ways, both internally inside an organization and externally with its users, whether it’s customers or partners or whatever. But the thing is, if you trust something, which in today’s world is very hard, you’re more likely to use it, you’re more likely to trust the person. A trusted advisor, for example: you trust them and what they’re going to tell you, et cetera. So trust is a big thing, and building that trust is so, so critical. And when it comes to the world of software, I want to be able to trust the software. I want to be able to know that it’s going to work consistently. I know when I open it up it’s going to perform.

Paul Davis [00:01:58]:
I trust it. And unfortunately, in today’s world, loyalty is hard. We both have multiple ride apps on our phone because you want one or the other, et cetera. So the experience of trusting it and knowing you’ll be able to get something when you want it is so, so key. Now, the other side of it is that to build trust, it’s not just the developers, it’s not just security, it’s everybody involved, right? And so typically we’re talking about DevSecOps or DevOps or MLOps or MLSecOps or, you know, SecOps, et cetera. There’s a ripple effect when somebody releases a piece of software saying, I want this to go into production. All the controls we have around compliance and regulations and testing and all that sort of stuff are all about making sure that the end product that’s usable by other people is trusted.

Paul Davis [00:02:47]:
Now, typically in the world of development, and I talk as a sort of recovering, reluctant developer, we deal with releases. And in today’s world, we don’t deal with monolithic applications and single things. They’re usually multiple pieces that fit together. So when we’re down in the weeds of development, it’s, oh, this release is coming in. But when we talk to the executives, when we talk to our users, we don’t talk about that. We say, hey, this application has this new feature, we fixed this bug. So AppTrust is a way of pivoting to say that this whole application, with all these pieces that make up this software, is trusted.

Paul Davis [00:03:23]:
And with that you say, well, how do you know it’s trusted? Well, you have to have a predictable, reliable, consistent way of developing software, designing software, testing software, rolling into production. And so you need to make sure you’ve got control gates for all of that stuff, right? And so AppTrust is really moving the trust conversation from what I call down in the weeds of development up into the world of: do I trust this application? Do I trust this service sitting on the Internet? That’s what it comes down to. So AppTrust for me is about elevating the conversation so that it’s relevant to everybody and gives us this assurance, this sort of certification, saying, this has gone through everything and I know I’m going to have a better experience for all the people that use this.

Karissa Breen [00:04:06]:
So there’s a couple of things that I want to get into a little bit more. You said we want to be able to use something when we want it, which is just the day and age that we’re in. In terms of business more generally speaking, people will go elsewhere if they can’t get it super fast, super quick, it’s not working, et cetera. And you said before you’re a recovering developer. If you’ve been on the App Store, you’ll have seen comments like, oh, the app sucked and I couldn’t use it, and therefore I’m not using it again. So I think we’re in this really aggressive era of business nowadays, which probably in turn puts a lot more pressure on developers to be able to ship this stuff out a lot quicker, right? So in terms of the way business is moving now, how does that sort of sit with you in terms of still maintaining that trust?

Paul Davis [00:04:50]:
So, as I mentioned before, there’s this whole thing of having a consistent, reliable, predictable methodology, approach, pipeline for your software, and what you’re trying to do is balance agility, with features, with bug fixes, with innovation, and getting something that’s safe. Now, nowadays, you know, we’re not typically doing annual releases. There could be quarterly releases, weekly releases. We have Agile, where we’re doing releases every 30 minutes. The only way you can do all those things consistently is to automate, right? We have to automate as much as possible so the human’s not getting in the way. That means we’ve also got to put controls in place that are really effective at making sure that we’re testing everything, right? Especially in the world of development where, you know, a lot of times when developers are building software, they are typically required to specify the version of a library, the open source library they want to use, right? And sometimes they cheat and put “latest”. Now, to be honest, as a CISO, this scares me, because it means that every time somebody goes and builds a package, it could be different from the last time, because the speed and volume of changes and versions out in the open source repos is crazy.
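
To make the “latest” problem concrete, here is a minimal sketch, assuming a Python project with a requirements.txt (the file name and the exact-pin rule are illustrative examples, not JFrog tooling), of a pipeline check that fails the build when a dependency is not pinned to an exact version:

```python
import re
import sys

# Match an exact pin such as "requests==2.32.3"; anything looser
# ("latest"-style bare names, >=, ~=) can resolve differently per build.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9.+!*-]+$")

def find_unpinned(requirements_path: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    unpinned = []
    with open(requirements_path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments and blanks
            if line and not PINNED.match(line):
                unpinned.append(line)
    return unpinned

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
    offenders = find_unpinned(path)
    if offenders:
        print("Unpinned dependencies (build may differ between runs):")
        for dep in offenders:
            print(f"  {dep}")
        sys.exit(1)  # non-zero exit fails the pipeline stage
```

Run as an early CI step, a check like this keeps a floating requirement from quietly producing a different build each time the package is rebuilt.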

Paul Davis [00:05:59]:
So it means if I’m rebuilding this for whatever reason, I have to have consistent testing. And those test tools could be things like analysis of the packages. It could be checking for vulnerabilities, or if it’s AI, checking that the guardrails are working correctly, et cetera. The only way you can do that at speed, consistently and reliably, is by automating that pipeline, automating and integrating those test results into it. So when I talk about that AppTrust, we have a thing called a trusted app flag, right? It means that everything that I required to be tested has been tested, and it wouldn’t have gone into production unless it had passed those tests. So when I’m moving at that sort of speed, as a developer I want to know as much as possible. Talk about the one thing developers hate doing: it’s fixing old code. They like to innovate, they’re artists, and I’m not saying that sarcastically. I’m saying that because it is a very creative art, hacking code to make it do the things people ask you to make it do.
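
As an illustration of the trusted app flag idea, here is a minimal sketch of a promotion gate; the evidence-file layout and the check names (sca_scan, secrets_scan, and so on) are hypothetical stand-ins, not JFrog’s actual schema:

```python
import json

# Hypothetical evidence format: each pipeline stage appends its result here.
REQUIRED_CHECKS = {"sca_scan", "secrets_scan", "license_check", "unit_tests"}

def is_trusted(evidence_path: str) -> bool:
    """Promotion gate: every required check must be present and passing."""
    with open(evidence_path) as fh:
        # e.g. {"results": [{"check": "sca_scan", "passed": true}, ...]}
        evidence = json.load(fh)
    passed = {r["check"] for r in evidence["results"] if r["passed"]}
    missing = REQUIRED_CHECKS - passed
    if missing:
        print(f"Blocking promotion; failed or missing checks: {sorted(missing)}")
        return False
    return True

if __name__ == "__main__":
    # The release only carries the trusted-app flag if the gate passes.
    print("trusted" if is_trusted("evidence.json") else "not trusted")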

Paul Davis [00:06:58]:
But the one thing you don’t want to do is go back and have to fix a vulnerability, an issue, et cetera. So if I can get the developers to have that information, that means that they are then committing more trusted code into the central repo, like Artifactory, and therefore the downstream ripple effect means it’s less likely to get rejected as it goes through the tests and into production. So for me, that whole consistency of how it impacts the whole pipeline and all the different groups is why trust is a big internal thing. But I have this terrible saying because I’m in security and I manage my paranoia, which is: people say trust but verify. I can’t do that. In software development pipelines, we have verify, verify, verify, and then maybe trust, because I’m always paranoid. I have to keep watching out for new issues, new problems, new vulnerabilities. So I have to build that pipeline to be trusted.

Karissa Breen [00:07:50]:
So the operative word that you used, well, there were a couple of words, but the one that stood out to me was automate. So how are people’s mindsets from a security point of view? When we’re automating things, does that give people anxiety? And the reason why I say that is because it’s about relinquishing control. Security, historically, is about having control. And now, because of how the market’s moving with AI and automation and getting stuff faster and faster, we have to be able to relinquish a little bit of control to automate things, to make them go faster, make our life easier. But how does that sit with you? Or more broadly, Paul, how does that sort of sit with the security leaders that you’re speaking to?

Paul Davis [00:08:25]:
So the first thing is just that we have to have a confidence factor, okay? We have to be confident that we’ve put the controls in place. I am terrible for the cliches. It’s the things that I don’t know about that wake me up at 3 o’clock in the morning in a cold sweat. The things I know about, we’ve put the appropriate controls in place, whether it’s I’m fixing the code, the application’s not accessible, or I’ve got some other tools like IDS or firewalls or other solutions in place to protect against these attacks, et cetera, or detect strange activities. But the one thing about us as security people is we are very boring; we are very much process driven. I call us process dweebs. We like consistency. So if I can get a set of tools that I have configured to perform all the tests I want performed, to ensure that I’m not including, for example, malicious code, I’m not including secrets, I’m not including tokens, I’m not using a bad configuration, that sort of thing.
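
For a flavour of what one of those pipeline tests can look like, here is a tiny illustrative secrets check in Python; real scanners such as gitleaks or trufflehog ship far more rules, and these three patterns are only examples:

```python
import re
from pathlib import Path

# A few example token shapes; production scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk the source tree and report (file, rule name) hits."""
    findings = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    for file, rule in scan_tree("src"):
        print(f"{file}: possible {rule}")
```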

Paul Davis [00:09:24]:
If I can have the tests in that pipeline that are doing it for me, then I’ve got higher confidence that I’m going to be safer in production. And there’s a funny word there: I use the word safer because we can never 100% guarantee that something is not going to be hacked in the future. Unfortunately, that’s the reality of today’s world. It’s just the way things are: the speed, the innovation, and also because it’s humans coding and we tend to make mistakes or slip up occasionally, just like AI does as well. So if I can have that trust in place, I’m reducing my risk window, or risk factor, there, so I’m not really relinquishing control. There’s an investment at the front, but then you’ve got that consistency. But you can’t just sit back and say, oh, everything’s perfect, because it’s a moving window. So I also have to be able to detect exceptions.

Paul Davis [00:10:15]:
I also have to handle exceptions, right? There might be a situation where you’ve got a dubious package, or it might be a package that’s old that the business insists upon using, and we have to grant them an exception, and we have to be able to track that exception, document the risk with the business and say, look, we’re going to do this because we’re in the business of doing business. If we were totally secure, we’d never be in business. So we have to balance that. So that whole thing of relinquishing control, I don’t see it as relinquishing control. If you have all the tools integrated properly, I actually get greater visibility, right? That’s one of the things: give security people data, they love it.

Paul Davis [00:10:54]:
If I’ve got greater visibility, then I can see when something goes wrong and how I could fix it in the future. So, you know, it is that sort of balancing act between defining controls, and you have to vary the controls according to how far along the pipeline you get. Developers need to be able to experiment, just like machine learning data scientists; they like to experiment. But when they find the fixed formula, you lock it in place, and that’s the one that gets tested, and that’s where you apply the stricter rules around compliance and regulations and security tests and evidence files and SBOMs, all that sort of stuff. So, yeah, I think I’m willing to relinquish control, but I’m not taking my eyes off the road, you know what I mean?

Karissa Breen [00:11:38]:
So maybe it’s more of a feeling. But to go back to your point around the tools being integrated properly: well, what happens if they’re not?

Paul Davis [00:11:46]:
Well, then, there are two scenarios I typically find in the world. Either there is a gap, and so you really need to step back and assess your entire life cycle of how you’re developing software, from design and curation all the way through coding, through packaging, up through distributing it, making sure it doesn’t get compromised by man-in-the-middle attacks. So you have to assess all the tools at those stages and make sure you’ve got coverage. And in today’s world, I do want layers of defense. I don’t want to just rely upon one point in the life cycle to determine what’s bad. I like multiple tools, right? It’s terrible, but there are different tools with different expertise and different sorts of capabilities.

Paul Davis [00:12:30]:
You need all of this, so it’s very difficult in today’s world to have one thing that does everything. So what you have to do is create an ecosystem of compatible things that you can integrate together into your pipeline, so that when the software is transitioning through those stages, it’s being tested and also validated, right? So that’s one problem: the gaps. The other problem I find is that they have too many tools doing the same thing, because software development and security have operated kind of separately, just like ML, and we’ve tended to develop our own tools, our own favoritisms, et cetera. However, they’re typically doing the same thing. So there also have to be some very, dare I say, heart-rending decisions about consolidating tools down to a set that makes sense, that provides the functionality at the right level. It’s never going to be a perfect match because we all have little quirks. That’s why we have multiple pieces of software.

Paul Davis [00:13:21]:
But if I can get that software to work in a sort of symbiotic relationship, and we’re sharing data and using the data in multiple places by multiple teams, and I can streamline the tools, then I’m actually reducing complexity. If I can reduce the number of tools, I’m reducing complexity, which also means I’m reducing risk. And you might be saying, but Paul, what if I have this magical tool that does this and only does this? Then maybe you keep that one. But everybody having their own favorite tools and generating the same data doesn’t make a lot of sense.

Karissa Breen [00:13:52]:
Yeah. And I’ve heard that from a lot of the internal CISOs that I’ve spoken to and interviewed. They’re saying, we’re trying to reduce tool sprawl. We’re trying to reduce the number of tools because, like you said, there’s probably four or five of them doing the same sort of thing, and then there’s the cost associated with that, and then the risk. So we sort of started with one tool, then we’ve gone to all the tools because there’s one tool that does something super specific that we absolutely need. But now it’s like, okay, we’re trying to reduce the number of tools.

Karissa Breen [00:14:18]:
How do you sort of manage that? It’s like, okay, we need a few tools. We’re just going to have to accept that we don’t need all of them, but we need a few, so not just the one. To your earlier point, it’s going to reduce the risk. People are very big on interoperability nowadays. So how is the whole tool sprawl conversation going at your level with the people that you’re speaking to?

Paul Davis [00:14:37]:
So, as you indicated, the executives and everyone want to reduce complexity, because all of a sudden we’re seeing software as being really critical to the business, to the organization. Without software, it stops, right? And ironically, people are discovering that when the ability to promote software into production breaks, it’s a big problem, because everything stops and you’re wasting a lot of money. So the thing that I talk to them about is, okay, first of all, they have to do an inventory of what tools they have. And then what they have to do is have a mechanism to be able to hang the tools all together. So you’ve got something like a highway running along, and the software traverses along that highway, and you have vendors’ tools that plug into that infrastructure, that road, and do the checks at each stage.

Paul Davis [00:15:26]:
Like speed cameras. It’s probably a terrible analogy if you get caught by speed cameras, but you know what I mean? It’s testing at each stage, right? And then it also means I can get the developers to do tests, because it’s not just an automated thing. If I can get developers to check their code for vulnerabilities before they submit it, we’re making everybody’s life easier. But things slip up, things change over time. So I need additional, you know, continuous monitoring, as I put it: monitoring not just code that’s going to production, but code that’s already in production. So I have to have a platform. And that’s why a platform play is really important, because it helps provide a guiding roadmap from beginning to end and you can attach your pieces.

Paul Davis [00:16:06]:
It means then you can end up with a single point of truth, which I think is so, so important. Take for example this recent thing with the npm attack, right? Everyone was running around trying to find out, with the npm attack, which libraries are being used everywhere, right? So you’ve got SHA-256s you’re hunting for. How do you know what’s in production, and how do you know if an application actually uses that tool or that open source library, that npm package, right? So all of a sudden we’re having to rush around, scanning our production infrastructure, scanning our repos. All that takes time. Well, if I could just have one place where I have all that data, I can say, hey, show me which applications have this vulnerability. It says these applications, okay? And you look at the criticality: the financial impact, what regulations apply, are they customer facing, are they internal, what’s the criticality level? And then also you can say, hey, this application has a problem. And the application owner is usually the business owner. It’s not normally the person that wrote the code because, as I said, there are multiple components that go into making that application.
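
A minimal sketch of that single point of truth, assuming a toy inventory mapping applications to their components (in practice this would be assembled from SBOMs held in one platform; the shape and package names here are made up):

```python
# Hypothetical inventory shape: application -> list of (package, version).
INVENTORY = {
    "payments-api":    [("left-pad", "1.3.0"), ("requests", "2.32.3")],
    "customer-portal": [("chalk", "5.3.0"), ("left-pad", "1.3.0")],
    "internal-batch":  [("numpy", "2.1.0")],
}

def apps_using(package: str, bad_versions: set[str] | None = None) -> list[str]:
    """Return applications that include the package (optionally only bad versions)."""
    hits = []
    for app, components in INVENTORY.items():
        for name, version in components:
            if name == package and (bad_versions is None or version in bad_versions):
                hits.append(app)
                break
    return hits

if __name__ == "__main__":
    # During an npm-style supply chain incident, triage becomes one lookup
    # instead of a scramble to rescan every repo and production host.
    print(apps_using("left-pad", bad_versions={"1.3.0"}))
    # -> ['payments-api', 'customer-portal']
```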

Paul Davis [00:17:10]:
You go, oh, it’s George and Susan over here. They maintain this application. They’ve got to fix their code, they’ve got to upgrade their package or fix the call they’ve used, because you don’t always have to upgrade. But that ability to get things flowing through is so, so important. And that’s why this ability to consolidate and streamline is important, but it does require you to have a guiding path, a north star, a path that you can attach all the other pieces to, because then you get consistency. And it also lets me drive through to one thing which I really love, which is reducing the paths into production. Because a lot of people say, Paul, where do I start, right? You could start with the basic things: stopping bad stuff coming in, right? Bad open source packages. Things like JFrog’s Curation does that. But I’ve also got controls in my pipeline where I can put checks and balances in place to say, don’t allow it.

Paul Davis [00:18:08]:
If I can have the same tools, the same processes, the same processing, whether it’s change controls, business approval, risk management, et cetera, at that gate into production, and I can get all the software going through there, then that’s a good place to start as well.

Karissa Breen [00:18:22]:
And so by doing all the things you just mentioned here today, Paul, and given how aggressive business is getting now, with staying ahead and being number one and trying to maintain that moat around the business so a competitor doesn’t swim up beside them and overtake them, do you think that having these tools is going to allow developers to ship this code quickly but securely?

Paul Davis [00:18:43]:
Yes; very simple statement. Because otherwise I’d be crazy. The thing is, if we can automate things. So imagine somebody wants to use a new version of software, okay, and they’re in a situation where normally they would have to get an exception approved. That’s a manual process, so it takes a few days, right?

Paul Davis [00:19:04]:
Or they just use it and guess that it’s okay. I can say that I’ve done the pip installs, the npm installs, et cetera, on my own code in the past, and I haven’t given a damn about the security or the licensing. You know, is it copyleft? Is it permissive? Has it got a CVE? I’ve just used it because I’m focused purely on getting it working, right? So I tend to ignore that stuff. But if I can get the developer to sort of become almost like a security practitioner, and I’m terrible for doing this, but get them the information, they’ll say, why do I want to do this? I don’t want to be a security professional. I’m a developer, I code. Well, I’m actually making your life easier, because the code isn’t going to come bouncing back later because it’s been rejected by one of the tests further down the pipeline. It also means you’re less likely to have incidents in production because somebody has hacked your code, you know, through one of the OWASP attacks and things like that.

Paul Davis [00:19:52]:
So it’s that whole thing. So I think that does accelerate things, and the automation means that we spend less time doing boring, repetitive stuff and more time doing the real work of looking at the more complicated, more challenging goals, which is what our brains are suited for.

Karissa Breen [00:20:08]:
You know, and I do understand your comment around developers saying, hey, I’m a developer, I’m not a security person, which makes sense. But then also, by enabling this, you’re making sure you’re not building on super dodgy code as well. So all of this is really interesting. And then I want to zoom out and talk about the trust factor a little bit more, because again, it’s something that I’m constantly hearing. I mean, we’ve spoken about trust for the last 15-plus years, but I want to get deeper into this because I think this is really, really important. And now, with everything that’s going on, with all these breaches and things that are happening in terms of businesses being disrupted and not able to continue, people are on edge when it comes to trust.

Paul Davis [00:20:51]:
Yeah, yeah. I mean, that’s why, ironically, we have regulations. It’s a knee-jerk reaction by some regulatory body to make sure that we are doing things the right way, that we’re custodians of data, we’re custodians of software, et cetera. From that perspective, that’s what the trust factor really comes down to, right? Because, and it’s kind of sad, in today’s world we expect everything to work perfectly. When a piece of software doesn’t work and we, you know, sit there on our iPhone or Android frantically tapping the same button expecting a different result because there’s a bug, it’s a thing.

Paul Davis [00:21:25]:
And so people expect to have 100% trust. But as I said, the reality of the world is nothing’s perfect, you know. Everything’s different, everybody’s an individual. We’re not all the same. So when I talk about the trust factor, it’s about being able to demonstrate, when somebody says, how do you know this is good? How do I know this is a safe application? How do I know how you’re processing the data? I could say, trust me, right? Ironically, in today’s world you’ll say, well Paul, you know, you’re pushing out thousands of applications or, you know, solutions or releases every day. How can I trust you? How on earth can one person say that? And as a security person, I get asked, is this safe? And I somewhat sarcastically say, to the best of my knowledge, at this moment in time, I believe we are safe, because I can’t anticipate all the pieces. But when I was running operations, I used to get these phone calls at 7 o’clock in the morning, every morning, saying, hey Paul, what’s our threat level for the whole organization? I used to say, we’re at a yellow, I’m cautious, we’re not under attack, but I think we’ve got enough protection, and I’m monitoring things. And one time they said, Paul, this must be some magical spreadsheet you have where you’re getting all this data.

Paul Davis [00:22:43]:
I went, no, it’s basically my brain getting all the threat intelligence information from my threat team. My security team is telling me what’s going on. I’m seeing the activities, I know which groups are going after us, et cetera. And so I make that judgment of trust. And that was okay, but I had lots of other people helping me, and in the same way it’s lots of other tools helping me give that trust factor. So for me, the trust factor is a wonderful thing that we aim for, but I think we have to be realistic that it’s not going to be perfect. And this is the other side of being a CISO: you have to be ready for when it goes wrong, right? If you don’t plan for a software breach, a data breach, et cetera,

Paul Davis [00:23:22]:
you don’t have your incident response plans written out and you don’t know who to engage. And when there’s a software problem and you say, who wrote this thing? It takes you days to work it out. One customer told me it takes them five days to find out who wrote a piece of software. That’s just far too long during an incident, right? So the other side of it is to anticipate. And that’s also why that single point of truth is valuable, because I can say, who wrote this application? When was it last released? Was a change made? And as a CISO, sometimes you don’t just do security incidents, you also do outages. So knowing when something was released, knowing what changed, knowing who approved it, talking to the business owner, that’s all part of it. But the trust factor is so fundamental, and we are so driven to want it, that sometimes we’re too trusting.

Paul Davis [00:24:11]:
And that’s why I do the verify, verify, verify, then trust. But most of the world, which isn’t paranoid like security people, doesn’t do that. So we have to demonstrate it. So for me, the trust factor is partly proving it through tools, through results, and partly the credibility of the organization.

Karissa Breen [00:24:26]:
So you said before, who wrote this thing? And that is so relevant, because how many times have we sat there saying, who owns this tool? What is this tool doing? Why are we paying for this tool? Or who wrote this thing? Well, what happens when you’ve got a big enterprise that’s got all this random code stuck together? Who knows who wrote it? It was probably, you know, John from 40 years ago who stuck something together with duct tape and sticky tape, right? So how do you retrofit some of these problems? Because if you and I were to go start a new business today, we’d set it up beautifully and it’d look amazing. But some of these companies have got a lot of legacy technology and debt, and who knows what they’ve done in the past. How do they get to a point where they’re comfortable knowing who owns it and they’ve got the visibility in order to protect it? And we know how that all works. But what does that then look like? Because with a lot of these big enterprises, this is where we start to see the unraveling of multiple problems, because it comes back to who wrote the thing.

Paul Davis [00:25:22]:
Yeah, it can be fun. It sometimes can be a bit like being Sherlock Holmes, where you have to be a detective, especially if you don’t have the change records, especially if you don’t know who wrote it. And as you said, if you’re going back years, it’s a real nightmare. And that’s why this is really important as well: we have this concept in JFrog where we talk about being able to generate the SBOM. This is the manifest of all the things the software is built from, the components of it. But there’s another side to it as well, which is evidence files. And evidence files are things like the results of the tests.

Paul Davis [00:25:53]:
But it also should include things like the change control window. For example, we mentioned AppTrust. One of the great things, and it sounds so simple and crazy, is that one of our new ecosystem partners is ServiceNow, right? So imagine you’re trying to roll out a piece of software and it needs an approval from the business, who are typically operating in ServiceNow. And the system will say, ah, hold on a second, we haven’t got an approval. It’ll automatically generate the request to the business owner, or whoever needs to approve it, to say, hey, can I do this? That ticket, that request, will be created in ServiceNow.

Paul Davis [00:26:28]:
They will then acknowledge it and accept it. And we can do all the full ServiceNow workflows and the escalations and the SLAs. And when they come back and say yes, it automatically comes back to the build engine in JFrog and says, great, you can go now. That’s automation. And that ability to drive that through, that permission, comes into it for me. So, you know, I now know somebody approved it.

Paul Davis [00:26:51]:
I can now say, hey, no rogue apps that haven’t been approved are going to end up in production, hopefully. I’m trying to make it less process orientated, less human driven, but at some points people have to give an approval or an acceptance. But it is that process of melding the pieces together to get back to that trust factor. You know, if this business owner just says yeah and they don’t understand the true implications, or they don’t take it seriously, they’ll soon learn on the next incident when someone asks, why did you approve this? So there’s that traceability. But also, ironically, and this is one of the sad parts of this world, did you know that 80% of the code used in software is typically written by people we don’t know? Open source libraries. We don’t know those people, but we trust them. So a lot of the time, issues and vulnerabilities come from outside. You know, we do our own software programming bugs and loop problems, and good old OWASP Top 20 or Top 10 comes into play. But we’ve also got to have the ability to understand the risk posed by that 80% of things that could also cause more problems.

Paul Davis [00:27:57]:
And knowing then, okay, who requested this package? Who allowed this package into our environment? Right. You know, we have a category called malicious. We’d never let malicious packages into production. We’d never let developers touch malicious packages, because developers are literally being attacked now as well. So the “who” is the developer, the authorizer, the infrastructure management people, those third parties, and, dare I say, in the future with agentic AI, non-human identities, NHIs.

Karissa Breen [00:28:26]:
So you talk about open source repos, for example. I mean, I’ve had someone on the show just saying, absolutely, don’t do it at all. But that’s not realistic for certain projects, right? So how do we blend, to your point? We don’t want to drown people in boring processes, but how do we approach it realistically, given that we may have to turn to GitHub and friends to get stuff from, perhaps if we’re building something innovative and we’re under pressure to get things out the door? Because, you know, some people in this space at times can be unrealistic. We can’t just say, oh no, absolutely not.

Karissa Breen [00:28:56]:
But it’s also about mitigating the risk. So how do you sort of find that equilibrium?

Paul Davis [00:29:02]:
Yeah, it is a risk calculation, right? You could decide, and there are a couple of organizations, highly sensitive ones, that don’t use any external libraries. They write it all themselves, or maybe they actually take the code and rescan it and clean it. And there are some vendors out there also that clean libraries, et cetera. But it is a risk calculation. The first starting point is to say, what are my basic criteria for what I will allow to be used inside my environment? What policies should I have in place to actually allow, you know, developers or data scientists to use this stuff? And you start with, hey, do I want malicious? No, I don’t. Nobody should have any reason.

Paul Davis [00:29:43]:
Except, that is, your security research team playing around with malicious libraries, right? Then you’ll say to yourself, well, hold on a second. With this recent incident, it typically took the security community a few days to detect and alert on the bad stuff, as one might say. So actually, I don’t want any developers to play with stuff that is, say, younger than five days, right? You’ve immediately started putting policy in place. Then you say, I want to start protecting my IP, so I don’t want to allow copyleft licenses unless there’s an exception request, right? So you start putting this policy in place, and you gradually put those controls in place. And it’s very difficult in today’s world to say, I’m not going to use a third party package. For example, I program in Python, right? I hate JSON. I know everybody loves JSON, but for some strange reason, parsing JSON in Python gives me heartache. I’m incompatible with it, but I have to use it. So I use third party libraries to help me traverse through JSON, to do things, right? If I had tried to write my own, I would have given up. So I go and do it. If I’m trying to innovate and develop leading edge software, then I want to speed things up.
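
Sketching that policy ladder in code, assuming illustrative package metadata fields (this is not JFrog Curation’s actual schema): block malicious outright, quarantine anything younger than five days, and hold copyleft licenses for an exception request:

```python
from datetime import datetime, timedelta, timezone

MIN_AGE_DAYS = 5
COPYLEFT = {"GPL-3.0", "AGPL-3.0"}  # blocked unless an exception is on file

def evaluate(pkg: dict, has_exception: bool = False) -> tuple[bool, str]:
    """Apply the curation policies in order; return (allowed, reason)."""
    if pkg["malicious"]:
        return False, "malicious: always blocked, for everyone"
    age_days = (datetime.now(timezone.utc) - pkg["published"]).days
    if age_days < MIN_AGE_DAYS:
        return False, f"only {age_days} days old: inside the {MIN_AGE_DAYS}-day quarantine"
    if pkg["license"] in COPYLEFT and not has_exception:
        return False, f"{pkg['license']} is copyleft: exception request required"
    return True, "allowed"

if __name__ == "__main__":
    brand_new = {
        "name": "shiny-lib",
        "published": datetime.now(timezone.utc) - timedelta(days=2),
        "malicious": False,
        "license": "MIT",
    }
    print(evaluate(brand_new))  # (False, 'only 2 days old: ...')
```

The ordering is the design point: the cheap, non-negotiable rule runs first, and the rules that allow business exceptions come last.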

Paul Davis [00:30:53]:
So I’ll go and use those libraries, and I rely upon the community and the threat and security research teams to actually go and do the assessment and tell me. But I have to have the controls in my pipeline to decide, am I going to allow this bad thing in? And it could also turn bad later on, so how do I do continuous monitoring to see if a vulnerability or bug was discovered later? It’s that kind of assessment. Take the world of AI: if we weren’t willing to use other people’s code, we would never have got excited about ChatGPT and asked it questions we shouldn’t be asking.

Karissa Breen [00:31:25]:
You know, and that’s the point about being realistic. And there are probably government agencies that might not use external code, for example, right? But in an everyday business, to get ahead, we’re not going to do it all from scratch; that takes time, cost, money, resources that we don’t have. So it’s about saying, hey, there are going to be some risks, we need to assess them, we need to make sure it’s not malicious or injecting something that we don’t need. That’s the part where I really want to focus, because it’s about being realistic in this day and age, about being competitive in the market, getting stuff out there, building better features. Because now, with everything that’s been going on with breaches and customer trust, if a site’s down for an hour, people are completely outraged and they’re already on Twitter talking about how much these guys suck. So do you think businesses are aware that yes, whilst we do need to be secure, we still need to work at a level of velocity that ensures we’re maintaining our competitive edge? How does that conversation sit with you? Because I’m just seeing it get more and more aggressive nowadays. Big businesses that were really killing it a couple of years ago are now taking a backseat while these smaller up-and-comers, whether they’re tech vendors or whoever, are starting to overtake because they can move a lot faster.

Paul Davis [00:32:42]:
I always say if you’re not developing your product, it becomes stagnant and you can’t move. You have to innovate. I mentioned AI a couple of times; if you don’t have AI in your product, you’re already behind. And the smaller, agile players, ironically, they’re taking risks. I’ve been involved in a number of startups, and at the beginning we tend to just want to get the solution built so we can demo it to people, right? And then after that people say, well, can we use it? And all our security controls go flying out the window. It’s only when you start working with experienced startups that you build the framework for ensuring the security of your software from the very beginning, so that you make sure you’re not including bad things in your software, you’re not exposing stuff. But I think it’s a reality of today’s world that we embrace change and innovation and realize that you have to take risks. It’s like banks have budgets for fraud, shops have budgets for shrinkage, shoplifting.

Paul Davis [00:33:38]:
Ironically, in security we don’t have that buffer. We’re supposed to be perfect, we’re supposed to protect against everything. But the reality is we can’t. So being realistic means you have to have enough controls to cover the serious events that could impact your business at a damaging level. But at the same time, don’t get in the way. Security does have this reputation of saying no; that’s kind of an old fashioned model. Really, what we should be doing is saying, okay, how can we work around this problem? How can we let you still innovate but manage the risk? Could I put some other tools in place to mitigate the risk? Or, executive, look, this is a problem, and you want to do it. I’m trying to articulate what the risk is, but you really want to do it.

Paul Davis [00:34:20]:
And I’m doing my best to tell you, well, this is as much as we can protect, but are you telling me we don’t have a choice? In certain cases I’ve literally got them to sign a piece of paper to accept that risk because it scared me so much. You know, ironically, it worked out. I’ve had incidents through my life in security, and it’s proven there’s no such thing, if you handle it well, as a bad incident. Because the lessons learned and the improvements that come as a result of it always improve everybody, not just security, but the whole organization.

Karissa Breen [00:34:50]:
And given what you just said, because you’re right, obviously it needs to be calculated. We can’t just randomly go and do these things. But we do need to innovate. So would you say, generally speaking, given your role and how you see what’s happening out there, would you say enough companies are risking it for the biscuit out there, or would you say they’re still on the conservative side in terms of innovation?

Paul Davis [00:35:11]:
Unfortunately, I’m still seeing a lot of businesses who are almost in denial about the risk because they’ve become overwhelmed. It’s not that they didn’t have the best intentions. I talk about security tech debt: this is tech debt to do with their software supply chain, around bugs (we don’t use the word bugs with developers for security issues) that end up in production, because they just can’t keep up with the volume, because they haven’t put the right controls in place to proactively prevent them in the first place. And that’s both with new code, ironically, and legacy code. The problem is, I think companies are realizing this is business impacting, but it’s such a huge transformation. I’m working with major organizations, helping them transform and mature their pipelines so they can become more agile, more effective, less onerous in their process of checking their software, and helping them whittle down that backlog which has been building for years. It’s kind of scary how big the volumes are. Unfortunately, I don’t want to say it’s a perfect world.

Paul Davis [00:36:09]:
It’s actually quite a bad world at times when you understand the state of things. But everybody has the best intentions, everybody knows we have to fix it. You just have to have a good plan for how you get out of that dark place, as one might say.

Karissa Breen [00:36:19]:
So when you’re working with private enterprises, I guess it’s one thing to do a bit more risky business. But what about government entities? They’re known for being old, slow, not innovating, scared. Well, I mean, at least, I’m an Australian, I live in the US now, but that’s how a lot of people perceive those government agencies, right? So what’s their position then, to innovate and do things a bit differently to stay modern as well?

Paul Davis [00:36:42]:
I mean, I actually was attending a government conference a few years ago and got told off for saying that the pressures on the enterprise, the commercial world, are actually impacting the government too. We have different rules about people, resources and stuff like that, but the same risks around security exist, you know: being hacked, being hit with denial of service, being compromised, being exposed. We do have more rigorous controls, because, you know, the financial sector and the government are very, very tightly regulated, because there’s a lot of risk. But at the same time, you know, their constituents expect them to have a mobile app for parking and stuff like that. And they leverage external groups who are using open source software to deliver the capabilities. When you do your tax returns, you’re using a website that’s built on quite a lot of open source. So there is an expectation, probably a higher expectation, on government organizations to be more secure and more safe.

Paul Davis [00:37:37]:
Now, the reality is it’s tough. I think the same challenges exist. It doesn’t mean that they have more resources or more people. They’re just as constrained as anybody else, and they’re facing the same challenges about how do I ensure that what I’m delivering to my constituents is safe and protects data. And I’m talking about the public side of government; the military is a whole different game, though in some cases they rely on some other controls which sometimes might not be good. But it’s all the same challenge of, I need to keep up with the data, because there’s so much data out there. We’ve got structured and unstructured data that can be leveraged to help us make better decisions.

Paul Davis [00:38:18]:
Whether you’re commercial or government, that data helps you. To get those insights, you need to process that data better, you need to be able to massage it, and you need to be able to do it consistently and reliably and prove it. Otherwise you get called up before Congress and asked to explain why it went wrong. So I think the pressure is the same. In some ways it feels like the government has a bigger challenge than commercial because, as you said, they are slower, they are bigger, and resistance to change can be thought to be there. But I work with many of them, and that need to keep up and innovate is there, and some of the programs they’re working on are absolutely amazing, bleeding edge around data analysis, stuff like that. So it’s good. But in my humble opinion, we face the same challenges: how do I leverage code, how do I separate it, how do I air gap it, and should I or should I not? You know, with open source software, do I need to write my own, and how do I do that? Same problems and different problems, but all are big challenges for all organizations.

Karissa Breen [00:39:14]:
So then what’s DevGovOps?

Paul Davis [00:39:17]:
So this is that process of automating and tracking, which is interesting. When people are building software, there’s always some aspect of a regulatory framework, whether it’s SLSA, or S2C2F from Microsoft, or PCI, or financial regulations, or the EU with the AI Act and GDPR and all that stuff, or the US equivalents. Together we have these things, and we need to show quickly and effectively that our software has passed all the tests we said we were going to do. So DevGovOps is the ability to actually do that. Remember I talked about the evidence files? If I can put controls in place that say this piece of software is never going to end up in production unless it has performed all the tests that I documented I was going to do to comply with this regulation, then it’s never going to end up there unverified. And I can tag that application as requiring this particular policy, whether it’s PCI or GDPR, whatever it is, right? And then, when it comes to checking those results, I can automate.

Paul Davis [00:40:23]:
First of all, I’m automating, so I’m enforcing those controls: nothing goes into production unless it’s passed. Then, when you come to be audited and assessed, you can say, hey, you don’t need to dig through logs. Here are all the results, here are all the evidence files, here are the controls, here are the exceptions. All of a sudden, being assessed for that compliance is easier, because we’ve embedded it into the pipeline of development, so it becomes automated and it becomes enabled. That ability to start assessing compliance at the application level matters because, as I said, for PCI you might have one module which handles credit cards, but a lot of times there are other regulations, SOC 2, whatever it is, that require you to certify the application has followed a process. And that ability to break compliance down into different pieces, but also to manage governance of compliance across the application’s lifecycle, makes it more relevant. So it’s going to speed up the ability to be assessed, it’s going to reduce the workload around actually enforcing compliance, and it’s also going to give us the ability to start identifying exceptions to the process.

Paul Davis [00:41:30]:
You know, so for me, DevGovOps is embedding the automated testing, the tests that we want to do to ensure compliance, into the process. So it’s transparent and it’s painless, but we gain the visibility, we gain the data, and we can actually prove it, as they say.
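
As a sketch of how that tagging might work in principle (the framework names, evidence labels, and mappings below are illustrative, not a real compliance catalogue), an application declares which regulations apply to it, and the gate refuses promotion until every piece of required evidence is on file:

```python
# Illustrative mapping of framework tags to the evidence each one requires.
FRAMEWORK_EVIDENCE = {
    "PCI":  {"sca_scan", "pen_test", "change_approval"},
    "GDPR": {"data_flow_review", "sca_scan"},
    "SOC2": {"change_approval", "access_review"},
}

def audit_gate(app_tags: set[str], evidence_on_file: set[str]) -> dict[str, set[str]]:
    """Return missing evidence per tagged framework; an empty dict means promotable."""
    gaps = {}
    for tag in app_tags:
        missing = FRAMEWORK_EVIDENCE[tag] - evidence_on_file
        if missing:
            gaps[tag] = missing
    return gaps

if __name__ == "__main__":
    gaps = audit_gate({"PCI", "SOC2"}, {"sca_scan", "change_approval"})
    if gaps:
        # e.g. {'PCI': {'pen_test'}, 'SOC2': {'access_review'}}
        print(f"Blocked: missing evidence {gaps}")
    else:
        print("All declared frameworks satisfied; promotion allowed.")
```

The same record that blocks promotion doubles as the audit trail, which is the “you don’t need to dig through logs” point above.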

Karissa Breen [00:41:46]:
And then lastly, Paul, what’s one thing that you’d like to leave our audience with today? Any final thoughts?

Paul Davis [00:41:52]:
Oh, there are so many. But the thing for me is to start looking at the bigger picture. Start looking at how you are managing your risk at the different levels of software development. And I think start defining your end to end journey, because when you start doing that, you can show it to people who don’t understand development and they understand it, and they understand where an npm package fits. I was talking to a customer recently and they said, I spent five days explaining to security operations what an npm package is. It’s like, whoa.

Paul Davis [00:42:31]:
But if we can start communicating and sharing and showing that it’s a shared responsibility, and that we can all work together to generate more secure, trusted applications, then the world’s going to become a better place.
