Thomas Kinsella is the Co-founder and COO of Tines, a no-code automation platform for security teams. Before Tines, Thomas led security teams in companies like Deloitte, eBay, and DocuSign. As COO, Thomas is responsible for customer success, professional services, and more. Thomas has a degree in Management Science and Information Systems Studies from Trinity College in Dublin.
These transcriptions are automatically generated. Please excuse any errors in the text.
You are listening to KBKAST, the cybersecurity podcast for all executives. Cutting through the jargon and hype, to understand the landscape where risk and technology meet. Now, here's your host, Karissa Breen.
Joining me today is Thomas Kinsella, cofounder of Tines, who is based in Ireland, if you can't already tell by your accent. So, Thomas, thanks for joining. I'm really excited to have you here today. I actually really like what you guys are doing. And on that note, I think I saw something in the media yesterday that you guys did a major Series A or Series B capital raise. So well done, and it's good to see companies outside the mainstream areas like the United States really coming into the foreground.
Thomas Kinsella (01:05)
Yeah. Thank you so much. It's great to be on. Thanks for highlighting the news. Yeah, it's a great piece of news, great achievement for the company. We've been working really hard over the last 18 months, growing our customer base, improving the product, hiring a lot of people on the team. It's not an amazing environment out there for startups, so it's a real testament to some of the work that the team has been doing that we've managed to raise some money. But realistically, it's just exciting to be able to get back and actually focus on making sure our customers are successful. But, yes, we're delighted with the news.
Well done. That's not an easy achievement, especially in this climate. So I want to start from your perspective. When we caught up a few weeks ago, we spoke about the future of operations, with respect to your standard on-prem Active Directory and automation. Talk to me a little bit more about this. What are your thoughts?
Thomas Kinsella (01:54)
Yeah, I think it's probably important to baseline a little bit on where I think operations are at the moment, and then talk about the future of security operations. There are some challenges that security as an industry has had for the last couple of years that are still definitely true. We're still facing a huge shortage of security talent; there are still millions of jobs out there that are unfilled, a huge opportunity for anybody to get into the industry, and we need to do a lot better at that. We're still facing too many alerts: the vast majority of security teams have a bunch of tools, not all of them talk to each other, and even when they turn on a good tool, it's hard to stay on top of all of the work. There are still a whole lot of vulnerabilities out there, and I think we still have to assume that most organisations at some point are going to be compromised. Now it's about how quickly you can respond rather than whether or not you'll be compromised: how quickly you can contain, and what your response actions are as soon as you detect something.
Thomas Kinsella (02:51)
I think there are a few other things that are still definitely true as well. Most people are realising that just looking at dashboards isn't security. But when we're thinking about the future, there are a whole lot of paradigm changes that are taking place now and that have taken place over the last few years. First of all, a lot of companies are moving towards the cloud, whether it's migrating on-prem assets into the cloud or building a brand new company with a cloud-first strategy. I think spending on cloud security services is expected to increase 30% to 40% year over year, which is absolutely enormous. And the vast majority of organisations are using cloud services, whether it's Okta or AWS. In terms of the future, we're just not good at responding to those cloud security alerts yet. So when I think about where security is going, we're continuing to get more and more data, we're continuing to deal with the challenge of not enough people, and we're continuing to deal with the challenge of too many alerts.
Thomas Kinsella (03:49)
So I think we have to change the way we're approaching security. In the future, certain things are going to be non-negotiable. Every security team is going to be a partner to the business. Things like two-factor authentication are going to be mandatory, security awareness is always going to be there, et cetera. We absolutely have to focus on risk assessments. But I really think that security is going to move towards a much more operational model. What I mean by that is that it's no longer possible to triage absolutely every single alert immediately. We're moving away from a model where every single alert goes into a ticket and a human being views it and says yes or no. I think you're going to have to correlate absolutely everything. Data is already eating the world, and data is going to be where it's at. And then, similarly, I think everything is going to be measurable. As I said, it's moving more towards operations. It's no longer about, hey, did we detect this new vulnerability; the metrics are going to be how quickly you can respond, how quickly you can write a new detection, and what percentage of your assets are going to be covered by that.
Thomas Kinsella (04:54)
And, yeah, I think we're going to see companies struggle to stay on top of some of these new paradigms, but there's a huge opportunity for people. There are a lot of really, really smart folks out there. It's just a challenge that we're going to have to face as an industry.
So you raise a good point around cloud security alerts and you say, we're not really good at handling that. Why do you think that is? Is it because there's just so many that it's just infinite and we can't deal with it? Or what's the reasoning?
Thomas Kinsella (05:21)
I don't think it's that there are so many that we can't deal with them. I think that as an industry, there have been some organisations that have moved towards the cloud, but we just haven't developed that shared knowledge of what best practise looks like. So if you go to a security conference, you can learn a lot about the latest attacks and the latest breaches. You'll read a tonne and you'll hear a tonne about the Optus breach, the Uber breach. But what you won't hear a lot about is some of the much more challenging parts of securing your cloud environment. You're much more likely to hear a talk or presentation about here's how you secure your Active Directory, here's how you secure your on-prem Exchange, or here's the latest type of malware that's running PowerShell. You're way less likely to hear folks talk about cloud security misconfigurations, or talk about, hey, here's how we secure our S3 buckets in AWS. And I think people are very comfortable with the older world. People have trained, people have gotten degrees in forensics, degrees in security, and those haven't caught up.
Thomas Kinsella (06:25)
So people are still super familiar and super comfortable talking about those older-style environments. And then there are plenty of startups that are doing a great job, but we just haven't focused as much on saying this is actually just as critical, that the attack surface has expanded to the cloud, because we don't have a lot of experts that have done it in two or three different companies. There's not that depth of experience where people are able to share their knowledge and make it more common, make it more accepted. The other part is that it is still very new. People don't have experience dealing with every single type of alert as they would in an older environment, and there aren't as many training grounds where you can see exactly what an alert would look like. With standard malware reverse engineering, or with threat intel, you can say, hey, have I seen this IP address? Or what would a detection look like if I execute this malware on this particular device, in this particular sandbox? Would my detections, would my team, be able to pick this up?
Thomas Kinsella (07:26)
That's not the same in the cloud. There are no real sandboxes out there right now which are saying, hey, your EC2 instance hasn't been secured properly or has misconfigured permissions, or maybe your disk hasn't been encrypted and therefore you've got some other risk, or maybe your storage buckets are public. It's not something that people are training on. It's not something people are thinking about as much. You can accidentally reveal your entire AWS infrastructure by just leaving a key there, and I think that's just not as much a part of people's thinking these days.
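As a hedged illustration of the kind of check Thomas is describing, a baseline script could flag world-readable S3 ACL grants. The grant dictionaries below mirror the shape returned by S3's GetBucketAcl API, but the example data and the function are illustrative assumptions, not anything mentioned in the conversation:

```python
# Flag S3 ACL grants that expose a bucket to the whole world.
# Grant dicts follow the shape of S3's GetBucketAcl response.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the grants that make a bucket readable by everyone."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]

# Example: one world-readable grant, one normal owner grant.
grants = [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
]
risky = public_grants(grants)
```

In practice you would feed this from a real ACL fetch (for instance via boto3's `get_bucket_acl`), but the filtering logic is the same either way.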
Yeah, that's fair. I understand what you're saying, and I guess it's going to take maturation of the industry until we have sort of stock-standard ways of doing things, or best practise. What's your view on best practise, or how to go about it at a high level?
Thomas Kinsella (08:15)
Yeah, I think the very first thing, as always, is we should be looking to our people. There are plenty of good tools and plenty of good courses, but we should absolutely be training up our team to say, okay, this is how you do it, or these are some courses, this is a baseline of what we can do. There are actually great tools out there, both public and private. For these clouds, whether it's GCP with Google's cloud security posture tools or AWS with GuardDuty, you should absolutely just be turning these things on at a minimum when you're starting, and absolutely observing those alerts. It's not so much about making sure that everything is secure or detecting a breach. At this point, it's more about knowing a little bit more about your environment, poking around in it, seeing what sort of misconfigurations there are, making sure that your security team are aware of them, and then they can push the information to the engineering team. Again, if you don't have any logs stored for six months and an incident occurs, same as in a standard behind-your-firewall environment, you're not going to be able to detect anything and you're not going to be able to understand what happened.
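To make the "turn it on and observe" step concrete, here's a minimal sketch of bucketing exported GuardDuty findings by severity. The thresholds follow AWS's documented bands (7.0 and above high, 4.0 to 6.9 medium, below 4.0 low); the finding titles and the helper functions are illustrative assumptions, not part of any product:

```python
def severity_band(severity):
    """Map GuardDuty's numeric severity to its documented band."""
    if severity >= 7.0:
        return "high"
    if severity >= 4.0:
        return "medium"
    return "low"

def bucket_findings(findings):
    """Group exported findings by severity band for a first-pass triage."""
    buckets = {"high": [], "medium": [], "low": []}
    for f in findings:
        buckets[severity_band(f["Severity"])].append(f["Title"])
    return buckets

# Hypothetical findings, as they might look after export to JSON.
findings = [
    {"Title": "Unusual console login", "Severity": 5.0},
    {"Title": "EC2 instance querying a known C2 domain", "Severity": 8.0},
    {"Title": "Port probe on an unprotected port", "Severity": 2.0},
]
triage = bucket_findings(findings)
```

Even a pass this simple lets a team start with the high band rather than trying to triage every alert immediately, which is exactly the operational shift described above.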
Thomas Kinsella (09:21)
But right now, you need to know if somebody has stood up an instance, like an EC2 instance, that is misconfigured, or if somebody has stood up an S3 bucket that's public, or if that bucket isn't encrypted. And the way you can do that is there's a bunch of tools that you can just plug in, and they'll start telling you about these misconfigurations instantly. Tools like Lacework, Wiz or Orca will say, hey, we've detected some suspicious traffic to an IP address for the very first time, or somebody has logged into this device for the very first time. At this point, for a lot of security teams, I don't even think it's about detecting bad; it's about knowing what a baseline is, so that you can start understanding a little bit more about that environment, and then you can take those next steps. I don't think auto-remediation is something you should be immediately thinking about in cloud security right now either. But auto-contacting users or auto-creating tickets is definitely something that you can do, where you're contacting somebody and saying, hey, we noticed you just logged into this device for the very first time.
Thomas Kinsella (10:23)
Or, we noticed sudo was run on this particular box for the very first time. Can you tell us a little bit more about that? Was that deliberate? So I think the first step is just trying to become a little bit more familiar with that environment: turning on logging and those really basic detections or basic tools so that you become more familiar with it, then getting more comfortable with contacting users, and making sure that your engineering team are comfortable talking to you and know that you're able to help them when they're standing up some of these environments as well.
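The "first time we've seen this" checks Thomas mentions boil down to keeping a baseline of previously observed combinations and flagging anything new. A minimal, illustrative sketch, with an invented event shape rather than any particular tool's schema:

```python
def first_seen_events(events, baseline=None):
    """Return the events whose (user, action, host) combination has not
    been seen before, along with the updated baseline set, so each new
    combination is flagged exactly once."""
    seen = set(baseline or ())
    novel = []
    for event in events:
        key = (event["user"], event["action"], event["host"])
        if key not in seen:
            seen.add(key)
            novel.append(event)
    return novel, seen

# Hypothetical event stream: a repeated sudo and a first-time login.
events = [
    {"user": "joanne", "action": "sudo", "host": "web-01"},
    {"user": "joanne", "action": "sudo", "host": "web-01"},
    {"user": "thomas", "action": "login", "host": "laptop-7"},
]
novel, baseline = first_seen_events(events)
```

Each entry in `novel` is the kind of event you'd auto-create a ticket for or send a friendly "was this you?" message about, rather than auto-remediating.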
So would you say, Thomas, that people out there don't know much, or anything, about their cloud environment? I mean, that's sort of like me not knowing where the kitchen is in the apartment that I live in. So I'm just curious to know, why is that the case?
Thomas Kinsella (11:15)
I wouldn't say that, in many ways. There are plenty of organisations that are doing an extremely good job at securing this, and there are a whole lot of security companies who've raised funding and are doing a great job at enabling people to get a handle on their estate very, very quickly. When you plug them in and turn them on, your dashboard will light up with alerts. I think it's more that there are a whole lot of organisations that aren't considering it as a completely separate attack surface, and aren't as concerned, or maybe aren't as aware, that it's just as vulnerable. To answer your question, though, the barrier to entry to spinning up a new environment is very, very small. Anybody can go and sign up for an AWS account and start serving data. That's exactly what it was designed for, right? So that somebody could get started really quickly. The challenge is that in this day and age, when people are getting pressure from the business to move very fast, you can move very fast.
Thomas Kinsella (12:19)
And whereas previously you'd have to stand up hardware in order to stand up a product or a website or something like that, nowadays you can set up an account and start serving data to customers immediately, without having taken those steps to secure it. So the ability for anybody in the organisation to stand something up really quickly without securing it is a real problem. The second part is, because you're moving very quickly, you may not know your whole estate. We all know about shadow IT. Everybody has heard of that second Slack instance if you're using Teams, or the use of DocuSign if you're using HelloSign, or something like that, because people think it's easier. It's the same in the cloud: people are prepared to stand up completely separate environments that were originally test environments, and all of a sudden those get promoted, and the discussions around securing them aren't happening. I think that's part of the whole shift-left movement, though, the whole shift-left ideology: that realistically, security should be involved at the very start of pretty much every project, and you need to understand how things are being set up.
Thomas Kinsella (13:23)
You need to understand what your engineering teams' or your product teams' plans are so that you can bake in those security controls and say, actually, you shouldn't be setting this up, or you need our help in setting this up, or we need to lock it down, or we've got a centralised Exchange which has all these policies, so you're automatically not going to be able to make some of the mistakes that somebody else has made in the past. If security gets involved at the design stage, then you're making a lot more progress, and you're shifting some of the problems from an incident to a simple design decision. When it's gone down the road, the environment is already stood up, it's already talking to customers, and then you're told, oh, actually, you need to change the architecture. Then you're in, I'm not going to say massive trouble, but you're facing a much bigger challenge.
Do you think as well, hypothetically, about shadow IT? You're just spinning up a cloud environment. Just say I'm working in an organisation, I do something I'm not supposed to do, or I'm doing something that no one really knows about, and then I leave and it's just sort of sitting there and no one knows about it. Do you find a lot of cases like that as well?
Thomas Kinsella (14:28)
Yeah, absolutely. Again, that's shadow IT, but that happens in every sort of environment: somebody has written a script, unbeknownst to everybody else, that's supporting a lot of the company or a reasonably critical process, but it hasn't been checked into a git repo, so it's not actually formally supported or formally scanned, and nobody knows exactly what it's doing. That definitely happens in the cloud, where some organisations have spun up separate environments that are doing one or two things. The examples that I've heard about, and been involved in myself, are things like, it's not important, it's non-core, it's only the marketing site. The marketing site isn't super important if it gets compromised; there's not much customer data on there. But then it turns out the marketing site is able to accept credit cards, or it turns out that the marketing site is where people are redirected to after something's happened, so you've got referral logs and things like that. So there can definitely be instances where smaller projects are stood up that seemingly don't matter as much, but that are actually very critical to the organisation.
Thomas Kinsella (15:33)
And the other part about it is that, as you say, that scope can increase, right? It can be originally set up as a small project, nothing too important, we don't need to go through the formal security process. And then all of a sudden it becomes a lot more important, because at one point somebody needs to move very fast, and even though it was in that separate environment and didn't follow our security controls, it was in DigitalOcean instead of AWS or Azure, now all of a sudden it's being used for something a little bit more important. Yeah, we do see those environments get set up, or test environments that contain real customer information when they shouldn't, or they should be deleted and they're not. And you'll see incidents related to that quite frequently as well. Again, I don't think there's anything malicious about what anybody's doing. I think it's just that there aren't good enough guidelines for organisations, and security, unfortunately, is not getting involved often enough in the process.
So as a manager or an executive or someone listening to this, they're like, oh, you can't really physically have the control that you'd like over all of your employees. You can't necessarily lock everything down, and you don't have oversight into every single thing that they're downloading or creating or spinning up. People have got BYODs now, they've got air-gapped machines, they're off network, potentially, if they're doing red teaming and stuff like that. How do people get some level of control? Because a lot of the time, which is what you've just said, it's not everyone's intention to have these things sitting out there. But, you know, people forget, or people move on, people get fired. They just had this test environment sitting out there with customer data in it, and all of a sudden we've got a breach. How do people handle this?
Thomas Kinsella (17:13)
I've been at plenty of conferences recently talking to a whole lot of CISOs who share these challenges. First of all, there's no easy answer, but I think the role of the CISO, the role of those managers or directors, involves a lot more communication rather than just building up the programme. It involves partnering with the engineering team, partnering with the IT team, talking to them and building those relationships so that they feel comfortable coming to you. A lot of the time, security is seen as the bad guy in the room, the bad cop. We've all gotten involved in enough of these incidents, working in industry, where we're like, what just happened? We need to lock this down immediately. How is it possible that we allowed this to happen? Whereas if we come instead with a little bit more of an operational model of, hey, this happens, and it's not good, but we know that this is going to happen and we know that this is not the first time, people are going to become a little bit more comfortable with talking about it and saying, actually, hey, I need a little bit of help with this. This is bad, but I'm okay with talking about it.
Thomas Kinsella (18:14)
So I think it's about building those relationships very early on, and if you burn them, and that happens as well, just reaching out and trying to improve them, so that nobody feels bruised by their interactions with security. I've had plenty of incidents where somebody reports a phishing email, somebody reports that they've clicked on a link or that they've downloaded some software, and oftentimes you're able to lock that system down. But what's really valuable is when, as a next step, you go and see: did anybody else do the same thing, or did anybody else receive that same email? And on plenty of occasions, you will see that other people have. So you encourage an open and honest conversation with folks, saying, hey, it's okay, I want you to report. It's more valuable to me that you tell me that something has gone wrong, or that you suspect that something is not secure, now, rather than me finding out about it in two or three months' time, when actually I find out about it on Twitter. If you make it so that people feel comfortable talking to you and disclosing things to you, then part of your job is cleaning up these environments, saying, hey, it's okay, we can migrate this, rather than turning this off immediately and disrupting the business.
Thomas Kinsella (19:26)
Coming into it with a much more partnership attitude, you're going to find that people are reasonable as well. Nobody wants to be the reason for something bad happening to the organisation. So, in my experience, people will be a lot more open if security is a proper partner to the business, concerned not just about their own stress levels when an incident happens, but actually concerned about the top line and the business reality.
Would you say, historically, security hasn't been looked upon as a partner? And I say this because I've been in a position before where everyone's like, oh, you're security, you're here to tell me what I can't do. And unfortunately, people can go around and say, oh well, you can't do that, Thomas, and then you end up doing it anyway. It's almost like when you're a teenager and your parents say, no, you can't go to that party, and then you sneak out and go anyway, because you're like, well, stuff you, I'm doing it anyway. So do you think there's a little bit of that in there? Because they may be sick to death of people coming around acting like the police, acting like they run the whole show, and as a result they feel angry and annoyed because you're holding up their project or whatever it is, and then they just end up doing what they want to do anyway.
Thomas Kinsella (20:37)
Yeah, 100%, right. I think that's happened with every security organisation, and it's really hard to change, because as a security professional, when you see something going wrong or an incident occurs, the next week or two weeks or month is going to be really hard. And it's not just hard on you; it's hard on your team, on your peers, and on your reputation as well. For most security folks it won't affect them too badly, but at the same time, it's hard to see that incident happen, especially when there's an element of, we knew this was going to happen. And yeah, if you approach somebody and you scold them, you're definitely not going to get the best response out of them. Instead, as you've identified, come at it saying, actually, hey, here's a suggestion, and then shift left in that environment: get involved in the process earlier on and make sure that you're being very reasonable about it. At some point, ultimately, every single thing is a risk, but security is part of the business; it's a risk-based decision. At some point somebody is going to say, you know what, we don't need two-factor authentication here because it's going to slow us down too much.
Thomas Kinsella (21:42)
Or maybe it's okay to take this small risk that something bad is going to happen here, or we need to move really fast and we can secure something later, and that's okay as well. Security has to be comfortable with that. But if the team comes in with an absolute attitude of saying no, you're just going to rub people up the wrong way. It doesn't help anybody, it burns those bridges, and security doesn't come out looking better. In reality, you're going to be more secure if you come with that more collaborative attitude. And it's not to say that saying no isn't the right call in some circumstances. It's more that if you want to build up those relationships, you have to come with a much more practical, business-first approach.
So where do you think this scolding sort of behaviour came from? Why do you think people historically have gone around and scolded people, oh, you did that wrong, Thomas? Where does that come from?
Thomas Kinsella (22:33)
I think from a security point of view, it's a little bit borne out of frustration, but it's also borne out of the hope, or the misguided hope, that people will follow the rules if you tell them strictly, you absolutely have to do it this way. From a security practitioner's point of view, it is really hard when that incident happens. And if you know, or predict with a reasonable likelihood, that something's going to go wrong, "pre-grieve" may not be the word, but you know that there's going to be some pain, so you're going to shout from the rooftops about it. But again, you have to overcome that and say, actually, this is something that we're comfortable bringing you to the table and having a reasonable conversation about, not telling you what to do, but rather everybody realising that everybody is on the same page. There's nobody that wants the organisation to be breached. Everybody wants what's in the best interest of the company. So, yeah, if you look at security awareness training, it's really interesting when you look at some of the stats around those organisations that punish people for clicking on those links.
Thomas Kinsella (23:33)
First of all, people are going to click on links anyway. At some point, people are going to make those mistakes. So being absolute about it doesn't help: situations where anybody who clicks on a link gets fined, or anybody who clicks on a link is not eligible for a monthly or annual bonus. I've heard of that happening in some extremely large organisations. Instead, encourage people to report, gamify it, and explain, hey, this is the reason that we're doing this. Make it part of their role, where they're thinking about security. And highlight the wins: not "the reason we didn't get breached", but "we were able to prevent this as a result of this particular thing happening". So somebody patched, and we've seen other organisations get compromised; highlight that and say, thank you so much for patching, or thank you so much for reporting, thank you so much for putting in place these policies. That's how you're going to win, that's how you're going to win friends, and that's how you make sure that your organisation stays secure. But I think scolding comes from frustration, and probably a little bit from fear of what's going to happen.
So it's repeatedly saying, no, Joanne, you can't do that. No, Joanne. No, Joanne. And then eventually they start to get upset, because on the twentieth time it's, you can't do it that way because of these reasons, and it's what we're trying to do in terms of protecting the asset, or whatever it is, the company. And then eventually, I think, people start to lose their manners and become impatient, because they've gone over things so many times and people just aren't listening.
Thomas Kinsella (24:56)
Yeah. Say instead, like, hey, Joanne, talk to me a little bit more about what you're trying to achieve. What are you trying to do here? What's your aim? What are you working on? Get into her head and say, okay, this is important and we understand why you want to do this. We understand you're expecting this invoice, and that's why you're clicking on this link, or downloading this file, or setting up this environment. Here's a way of doing this securely, and we can help you do that and train you to do that. It'll take more time in the short term, because you're going to be spending time with Joanne. But in the long term, Joanne is going to think of security not as the bad guys but as a partner, and hopefully the next time she's standing up an environment or downloading something, she'll think a little bit more about it and say, actually, maybe I should get in touch with security. Saying no to somebody just doesn't endear you to them.
That's how you frame it. I would say maybe that's where people historically have gone wrong: not actually framing it like, hey, what are you trying to achieve? What are you trying to do? Let me help you. Rather than, no, you can't do that, because no one enjoys that.
Thomas Kinsella (25:53)
No, nobody enjoys it, not even the security team. It's not a fun experience being on the other end either, because you're rubbing people up the wrong way and they're not going to react positively. But there is nothing like the feeling of getting those detections or preventing an incident as a result of partnering. That's a great feeling. Security analysts, security engineers, they feel emboldened when that happens. Security is a mission, right? You're never secure, you're only securing. It's rare that you get those wins, but when you do, it's a great feeling and a testament to the work that they're doing. So being able to showcase the wins is really important for the morale of the security team as well, so that they're not just the defenders. It's really hard being on the defence all the time, really hard just trying to prevent. If you're able to proactively take away some of the challenges from your team, and if you're able to proactively prevent those incidents or put in place better processes, you're going to feel like your job is a lot more worthwhile and that things are getting better.
Thomas Kinsella (26:49)
You're improving. You're actually making a more impactful risk-reduction effort in your organisation, which is great. That's what every security professional actually wants to do. They want to reduce that risk, they want to add value. They don't just want to say no. I don't think that's what anybody signs up to do.
So if we zoom out: we've spoken about the tools you mentioned before and end-to-end best practice, we've spoken about how to have the right conversations in terms of what to start saying versus stop saying, and we've spoken a little bit more about getting further alignment around security being in the conversation early. What other advice would you have? If you want to look at securing the cloud holistically, is there anything else you'd like to add?
Thomas Kinsella (27:28)
I think it's in terms of securing the cloud, yeah, but it's also just in terms of security more generally: it's a really hard environment right now to hire and retain good people on the team. As I said, there are millions of open security jobs out there that somebody can move to. So if you've got people that are saying no all the time, if you've got people who are just detecting bad all the time, they're not necessarily growing in their career, and they may very well just be overwhelmed and decide to move on to another organisation. So I think we have to look at our security programmes and make sure that we're treating the teams and the individuals correctly. And unfortunately, as we talked about before, a lot of analysts do get burned out. I think there are a few different things that we can do about it. When you ask them, and at Tines we've done some surveys around this, as have plenty of other organisations, part of the reason that people feel burned out is that they are dealing with a constant change in technologies and they are dealing with way too many alerts.
Thomas Kinsella (28:32)
So I think giving people a little bit more time where they're able to focus on the fun stuff, so focus on actually building new detections, making detection and triage fun again, but also enabling them to automate some of their workloads so they can get away from that mundane response work. That will help your environment stay more secure, but it'll also help the analysts and the engineers be more fulfilled in their jobs, so that they'll stay, they'll add value to the organisation, they'll feel much more productive, and they'll feel like much more valued members of the team.
Okay, I want to press on this a little bit more. I have been seeing things recently coming out around fatigue and burnout across every sort of security discipline. You spoke before about alert fatigue, for example, but then you also spoke about the broader challenges around burnout. So if I'm a leader, what sort of conversations should I be having with my analysts, with my staff, to ensure they aren't leaving straight away? And much to your point around new technologies as well, actually, when I'm thinking about it, you said they're changing technologies all the time. Is that because they're getting better tech, or because they're swapping, or they're adding things to their stack? Why is that?
Thomas Kinsella (29:46)
Yeah, it can be a little bit of everything, right? It can definitely be because they're getting better tech, or it can be because they're suddenly setting up a cloud environment, or the IT team have changed from Okta to OneLogin or something like that, or from Teams to Slack, or Salesforce to HelpSpot. And now all of a sudden you're dealing with trying to build out new detections for those environments. But I think when we talk to security leaders and we ask them, okay, what do you think you should be doing, a lot of them are still focusing on the same metrics. They're still focusing on mean time to respond, mean time to remediate, mean time to investigate. But when you're trying to keep your team happy, it's not just about the number of alerts. You should be measuring team performance in terms of burnout, in terms of your employees' mental health as well. And that matters not just for the employees, but for you. If you've got one employee who's dealing with 20% of the alerts on a ten-person team, they're clearly doing way more. But you've also got a potential single point of failure there.
Thomas Kinsella (30:40)
If they're responsible for a certain type of alert and they're the only person that knows about it, they're also more burned out. So if somebody is not taking their time off, if somebody is working all hours and always working overtime, or if they're taking on more tickets than anybody else, that's a leading indicator of churn. So the first thing is measuring who's doing most of the work. The second is establishing a baseline: how many alerts are we able to actually deal with without our employees spending 100% of their time on responding to alerts, or 100% of their time on tasks that aren't necessarily as fulfilling? Then measuring the time off that they're taking, establishing one-on-one rituals where you're checking in with them, making sure that they're doing okay, making sure that they feel supported. And once you establish that baseline, you can improve on it. I think the second thing that you have to do is make the security team feel the fun. And some of that is that win we were talking about earlier with Joanne: saying no to Joanne is not great, that's not a good experience.
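Those leading indicators lend themselves to simple measurement. As a rough illustration only (the data shape, function name and 20% threshold here are assumptions for the sketch, not from any particular product), a first pass in Python might look like:

```python
from collections import Counter

def alert_load_report(alerts, analysts, threshold=0.2):
    """Flag analysts handling a disproportionate share of alerts.

    `alerts` is a list of (alert_id, analyst) tuples and `analysts` is the
    full team roster, so quiet team members show up in the report too.
    Both shapes are illustrative assumptions, not a real tool's schema.
    """
    counts = Counter(analyst for _, analyst in alerts)
    total = len(alerts)
    report = {}
    for analyst in analysts:
        handled = counts.get(analyst, 0)
        share = handled / total if total else 0.0
        report[analyst] = {
            "alerts": handled,
            "share": round(share, 2),
            # One person owning well over their even share of a team's
            # alerts is both a burnout risk and a single point of failure.
            "overloaded": share > threshold,
        }
    return report
```

The same idea extends to tickets closed, overtime hours or untaken leave; the point is establishing the baseline first so you can see it improve.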
Thomas Kinsella (31:43)
But saying yes to Joanne, saying, hey, this is fab, Joanne has come to us proactively, and celebrating things like that. Or making the triage process fun: this alert has come in, let's see if we can figure out what wider phishing campaign this email corresponds to, or let's see if we can figure out more detections for this. Basically allowing them to have more creative, high-impact fun while working on those projects. It's not always possible, but it's definitely something that can be done. I think you need to design your team, design your tools, design your processes around minimising the bad, that repetitive manual work, and maximising the great, that creative and high-impact fun work. And again, security automation can definitely play a part in that, because you're allowing the analyst who's fed up with a process, but who knows that process cold, to get rid of it and focus on the much more fun aspects.
So would you say, with the alert fatigue from an analyst perspective, that because they are fatigued, perhaps they're missing things, or things fall by the wayside, or there are gaps, because they're so used to it, they're seeing things all the time? It's sort of like driving: they always say you've got to stop every few hours because you're tired and fatigued, right? And that's how people end up having car accidents and dying, because they're just that tired that their brain isn't focused on there potentially being a risk in front of them, or they fall asleep at the wheel or something like that. So it's the same type of fatigue and risk. No one's dying from these things, I hope, but it's the same, I guess, cognitive impact of the fatigue.
Thomas Kinsella (33:29)
Yeah, definitely. And it comes out in a few different ways. So obviously, as you said, it could just be that they don't recognise somebody using sudo in this particular situation for the first time. If it's an engineer, that's not particularly unusual, so it's absolutely fine. But then all of a sudden we see Steve do it, and Steve works in customer support. It's probably a little bit more unusual that they've used that tool, that they've run something suspicious. So being able to gather that context, or hold that context in your head, is really important for the analyst. And if they're burned out, it's going to be hard to do that. The second part of it is, to your point about driving: when you're driving, you're using that second half of your brain. You usually don't make mistakes, but when you're tired, you definitely do, right? And it could just be that you typo something, or you don't check that IP address, or you miss those two or three steps in the process that are actually really critical, because you've got too many other alerts to deal with.
Thomas Kinsella (34:25)
So if you've done five or ten different things, that context switching is always really hard, and humans are not great at following the same process time over, time over, time over again. They'll try to find some optimisations, but they'll also try to notice patterns, even if those patterns don't exist. And I think that when you have analysts that are burnt out, they're going to miss things. It's really easy to put the blame on the analyst and say, I can't believe they didn't recognise this. But if an analyst doesn't recognise something or misses something in the process, it's because they're tired, or because the process should be automated, or because they haven't been trained properly, or they just don't have enough time, and they say, actually, it's very likely this is a false positive, I've seen this alert 20 times. But sometimes you've got alerts that are designed to only be true positives 5% of the time. Somebody logging in from a suspicious IP address in the Middle East somewhere: if the company is based in Australia, most of the time it's just somebody travelling through or on holidays in the Middle East. But one in ten, one in twenty, one in a hundred times...
Thomas Kinsella (35:35)
That is bad. And as a result, the analyst may end up closing it, saying, ah, look, I've got way too many alerts, it's likely that's a false positive. Whereas in reality you should be following that process: contacting the user and saying, hey, do you recognise this activity? Were you on holidays? Is that person on PTO in your HR system? Alert fatigue is when those alerts start getting missed, and that's when that breach can happen, unfortunately.
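That kind of repeatable first-pass check is exactly the sort of thing automation can encode so the process gets followed every time, tired analyst or not. A minimal sketch, assuming hypothetical `hr_pto_lookup` and `notify_user` callables wrapping an HR system and a chat tool (neither name reflects a real vendor API):

```python
def triage_suspicious_login(alert, hr_pto_lookup, notify_user):
    """First-pass triage for a suspicious-login alert.

    `alert` is assumed to be a dict with "user" and "source_ip" keys;
    the two callables are placeholders for whatever integrations your
    own environment provides.
    """
    user = alert["user"]
    if hr_pto_lookup(user):
        # Travel is a plausible explanation, so annotate and close.
        return {"status": "closed", "reason": "user on PTO, travel plausible"}
    # Otherwise ask the user directly rather than guessing.
    notify_user(user, f"Did you just sign in from {alert['source_ip']}?")
    # Never silently drop the alert: escalate so the process that
    # catches the one-in-a-hundred true positive is always followed.
    return {"status": "pending_user_confirmation"}
```

The design choice worth noting is that the automation only ever closes an alert on positive evidence; everything else stays open for a human, which is what keeps the 5% true-positive case from being swallowed by fatigue.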
Wow. Okay. So we started off with securing the cloud and then we've moved over to the analyst side of things: alert fatigue, of course, and changing multiple technologies and systems. Yes, I find that annoying and overwhelming myself. You have to learn a whole new system, which of course takes extra time, and then when people are tired and fatigued, they miss things. And when they miss things, we have errors. And when we have errors, we have breaches.
Thomas Kinsella (36:28)
Yeah, exactly. There are a whole lot of ways that you can solve those challenges, but it's up to security leaders to enable their team and to make sure that they're doing work they feel valued for, and that they're able to learn on the job and grow on the job. Fortunately, there are really smart security leaders that are doing this, that we can learn from, that are doing it the right way and measuring the right things. There are a lot of really good folks doing this, and I think, belatedly, there has been a huge focus on it, which is great to see.
So in terms of measuring the right things, what are those things people should be measuring, from your perspective?
Thomas Kinsella (37:03)
I guess it depends on exactly what we're trying to accomplish. When we're talking about the analysts and what they're doing, there are a few different things that you can be measuring. Obviously you should be measuring your mean time to detect and mean time to respond. You should be measuring things like how much time your analysts are taking, and whether too many alerts are being actioned by a particular analyst or a particular engineer. You should hopefully have a reasonable idea of your triage and your data feed health: am I still receiving detections? Am I still receiving logs for this particular log source? Are my detections still working? Testing out your detections. But those metrics don't actually tell us if we're maturing and improving our security posture. I don't think so. I think we should also be measuring a whole load of other things around how useful these detections are, or how useful each author's detections are. Obviously standard things, like how effective our patching is, what percentage of our known bugs are actually patched. But the thing that I'd really be focusing on, and that I think you have to be thinking about, is your speed to respond to new attacks.
Thomas Kinsella (38:14)
So we've seen brand new Exchange vulnerabilities come out recently. Now, it should be reasonably easy to say, hey, are we vulnerable to this? But the question is, hey, are we vulnerable? And even if it's not quite Exchange, how quickly can we analyse our environment, compile that vulnerability and risk status, and then share it and understand, hey, are we actually affected? And then the next question is, how quickly in our environment can we build a new detection for that? So mean time to detect, mean time to respond, that's fab, but the real question is, when something comes along, how quickly can I accurately tell if I'm going to be affected? And then, if I am going to be affected, or even if I'm not, how quickly can I build out a detection for that? Those are two things that every CISO should know about. If we're moving fast, and bringing it back to what we were talking about earlier, we detect that somebody is using a separate AWS instance, or, I don't know why they do this, but plenty of people do, a separate Slack account for communications.
Thomas Kinsella (39:18)
The question is, how quickly can you add a new log source to your environment? And then the same with other things: say you get new detections, or a new report of the IOCs that were involved in that Optus breach, for example. How quickly can we sweep our entire environment to see if we have been affected by this, across our workstations, across our servers, across our cloud environment? Those are the metrics that, if I'm trying to measure the success of my programme, I'd be really focused on getting a handle on. And then you can break it down even further by looking at those exact same statistics for each domain: for my front-facing assets, or my DMZ, or my crown jewels, et cetera. We have a lot of reasonable metrics, but you really want to know how quickly you can respond, how quickly your team can respond, and how effectively your team is able to operate when something goes wrong.
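As a toy illustration of the sweep idea above (a real sweep would query EDR, server and cloud telemetry through their own APIs; the flat list of log lines and the plain substring match here are deliberate simplifications for the sketch):

```python
def sweep_for_iocs(log_lines, iocs):
    """Scan a sequence of log lines for known indicators of compromise.

    Returns one hit record per (line, IOC) match so responders can see
    both where an indicator appeared and which indicator it was.
    """
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for ioc in iocs:
            # Substring match stands in for the IP/domain/hash matching
            # a production sweep would do per telemetry source.
            if ioc in line:
                hits.append({"line": lineno, "ioc": ioc})
    return hits
```

The metric the conversation is pointing at is not this loop itself but the wall-clock time from "IOC report published" to "sweep complete across every domain", which is worth recording each time a sweep is run.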
So, in terms of final thoughts or closing comments, do you have any to share with the audience today, Thomas? I think you've been quite detailed in your responses, and we've looked at this from multiple angles as well: not just focusing on the technology itself, but also the behaviours that people can start changing internally, from a leadership perspective but also at an employee level, and how we speak across our business. Is there anything you'd like to leave our audience with today?
Thomas Kinsella (40:33)
The only thing I'll say is that there are a lot of people doing a great job, and people like me will come on and say, hey, here's what you should be doing. Just be a little bit wary, not necessarily of folks like me, but of the kind of security Instagram influencer effect: you'll hear a lot of people claiming to have these incredible detections or claiming that they're doing everything perfectly. It's not always the case, and there's no such thing as a silver bullet. There are a lot of people that don't have an incredible security programme and are just getting started. And that's okay. You are where you are. The fact that you're listening to this podcast is absolutely fab. You can just take those first steps, and those first steps are going to be building those relationships with your peers in other parts of the business, and then looking at your team and saying, okay, how are they doing? How am I able to get the best out of them? But don't be put off by thinking, I'm so far behind. In many ways, there are plenty of organisations that claim to be doing an incredible job that aren't, or that are showcasing only their absolute best, as most people on Instagram do.
Thomas Kinsella (41:38)
And just don't be afraid, it's okay. There are plenty of organisations out there. The security community is absolutely fab. If you want to reach out to me and have a chat about things like security automation, we're more than happy to have that conversation. There's a really good community, and a lot of people will have your back if you approach them and are honest about the situation. The final thought is, even though I've shared a lot of, I hope, not-too-scary things, a lot of challenges that people are facing, there's a really positive future, and there are great people in the security industry who are dealing with these challenges and helping people face them.
Well, the grass isn't always greener on the other side, as they say. So thanks very much, Thomas. Thanks for your time, for sharing your thoughts and your insights, and I can't wait to get you back on the show.
Thomas Kinsella (42:19)
Thank you for having me on. It's been great.
Thanks for tuning in. We hope that you found today's episode useful and you took away a few key points. Don't forget to subscribe to our podcast to get our latest episodes. This podcast is brought to you by Mercksec, the specialists in security, search and recruitment solutions. Visit Mercksec.com to connect today. If you'd like to find out how KB can help grow your cyber business, then please head over to KBI Digital. This podcast was brought to you by KBI Media, the voice of Cyber.