Introduction (00:00)
You're listening to KBKast, the cyber security podcast for all executives, cutting through the jargon and hype to understand the landscape where risk and technology meet. Now, here's your host, Karissa Breen.
Karissa (00:46)
Hey, everyone, it's KB. Thank you for being an active listener on the show. And thanks to you, we've been downloaded in 63 countries. If you haven't already, please ensure that you follow and subscribe to the show for the latest updates. Now, time to get to the interview. Joining me today is Peter Bauer, CEO and co-founder of Mimecast, and today we're discussing the state of the cybersecurity threat landscape from your perspective. So, Peter, thank you so much for joining. I know you're traveling and you're out and about at the moment, and you've got a lot going on, so I really appreciate you making the time to talk with me today. I want to start with a little bit more about you, your experience, and what you're seeing in your position, because as an executive you are looking across multiple different industries and multiple different organisations. So what worries you the most about cyber crime in the coming years? I ask this because we've seen GPT emerging, as well as AI and all these other things, but I really want to hear it from you and what you're seeing at a very high level.
Peter Bauer (01:51)
Yeah, it's really interesting, being a cybersecurity executive, responsible for a cybersecurity company that looks after over 40,000 organizations around the world and their most mission critical communication channel, email, and all the millions of end users that sit behind those mailboxes, who are potential allies to the cybersecurity program but could also potentially be the weakest link in the chain. When you think about where the vulnerability is and what could be coming down the track, I think mentioning ChatGPT and generative AI generally is a really interesting one, because these technologies are designed to improve productivity, and you have to think, what could they do to improve the productivity of adversaries? Now, in cybersecurity, one of the things we're very familiar with is the machine generated attack: the machine generated attack that either focuses on machines, DDoS or directory harvesting, or that focuses on attacking humans. So maybe credential harvesting or spam or phishing attacks that are generated by kits. And maybe they might have some level of targeting. It could be a theme. Think about fires in Australia, and then there's a scam to do with fundraising relief programs and so on.
Peter Bauer (03:09)
But they're quite generic. If you think about what some of the most effective attacks are, they're generally human to human attacks. Somebody does some research and they might know a few things about a target: maybe that they're on vacation somewhere, that they just started a new job in a particular company, that their kid goes to a certain school, or that they've purchased a house or are in the process of purchasing one. They can glean this information through social media, and there's also a vast treasure chest of data available to adversaries, thanks to numerous breaches that have occurred. So that human to human attack can be extremely effective, extremely targeted, and those deceptions work very well. Now, the mitigating factor has been that those are quite expensive to pull off, because there's somebody sitting behind a chair putting that thing together and figuring it out, and so they can't really be deployed at scale. Now, enter ChatGPT and generative AI, and think about what a technology like that could do to gather context and to create deceptive content at scale, content that can be used by a cybersecurity adversary to operate at a much, much higher volume, to cast the net a lot wider with a much higher quality form of cyber attack. Think about what they could get in return in terms of the volume of credentials, the volume of funds, the volume of compromised individuals who might share information unwittingly.
Peter Bauer (04:43)
So it really is scary. And from a defender's point of view, the technologies that we use to examine this content and interrogate communications and purported identities just need to keep getting smarter and operating at greater speed all the time, to be able to deal with what's potentially coming down the track.
Karissa (05:06)
So there are a few things in there which are really interesting. You make a great point around ChatGPT, for example, around enhancement of productivity, which I get. But then there's the other side of it, which you touched on, which is the weaponization of ChatGPT. The thing that's been coming out recently, which is the part that I'm paying attention to as a security person and from a media perspective, is using ChatGPT to write malware. Back in the day, you had to somewhat understand how to write malware. Now we're saying, well, ChatGPT can write that for us, and as you said, that can then be deployed at scale. So I'm curious to know from your perspective, Peter. Historically, cybersecurity people were, and still are, trying to keep their heads above water. Now they're like, oh, okay, ChatGPT has just launched into the market, and it's going to be able to write malware for us, and it's going to be able to deploy malware, et cetera, at scale. So what does this mean, though?
Karissa (06:07)
Because people were struggling before, and now we've just added in a big range of complexities, which is going to make it even harder. I haven't really spoken to anyone on the show about it yet, so I'm curious to hear your thoughts.
Peter Bauer (06:20)
To be fair, it's a concern. But to me, it's less of a concern than the ability to perform what would previously have been a human to human attack, because today we're quite good at dealing with machine type attacks, malware type attacks that are polymorphic, for example. There's already malware that changes, so we've moved on from being signature based or hash based alone to detect malware. We already use machine learning. We know how to examine files that have malicious content, and we know how to do that pretty fast, pretty effectively, pretty reliably. It's not foolproof, but I'm not certain that having AI build malware is as much of a game changer, because the payloads still look like malware, and the pathways in are still vulnerabilities in systems, unpatched environments and so on. The human interface, though, is a different issue. Human to human attacks are deceptions. They don't look like attacks; they look like legitimate mail, but something subtle about that mail has been changed that makes it fraudulent, that makes it inauthentic, that gives funds, information or credentials to the wrong person. And so it's a very different game, because when that lands in front of somebody and they fall for it, the consequences can be high.
Peter Bauer (07:49)
And then deploying the malware, or whatever comes beyond that point, is significantly easier, whether it was generated by AI or in a more traditional way. So I worry about that. It's a little bit like, do you watch Antiques Roadshow?
Karissa (08:04)
Yes, I do.
Peter Bauer (08:05)
The plot there is that people show up with a thing they found in the basement, and the expert examines it, turns it over 17 different ways, and then finds this marking which is either there or it isn't, and decides, okay, this is not an authentic Louis the Fourteenth piece, or whatever. So they've got an expert eye that's looking for very subtle signs, going through things that an untrained person wouldn't know to look for, to determine authenticity and value. It's the same approach that we have to be able to apply at scale when examining communications coming at people: we've got to look for very subtle signs to be able to assess authenticity and risk, because you can't just stop communications. You have to allow an enormous amount of communication to flow freely inside and outside of an organization. But you're looking for very, very subtle signs, signs that will become more and more subtle, that indicate intent, that indicate authenticity, that compare with other things that we may have seen. So the expert capability in the system has to be there. And obviously, the more valuable something is purporting to be, or the stronger the intent to acquire something is perceived to be, the higher the stakes are.
Peter Bauer (09:29)
And it's going to become, I think, a really interesting game of assisting human beings to make smarter calls about how they respond to and participate in digital communications in a corporate environment.
Karissa (09:41)
Okay, so there are a few things I just want to press on a little bit more, which were interesting. First of all, I like your analogy about Antiques Roadshow. I have seen it, and occasionally people do win the jackpot with something they thought was a piece of junk they'd found. But beside that point, just going back to the scale side of things, so I get this right: what you're saying is you're not worried so much about ChatGPT from a scale perspective, because most organizations at the moment have the environment to be able to deal with scale anyway. I used to be a practitioner in a bank. We would have thousands and thousands of things hitting us a day, and a lot of those were just handled automatically, because we could tell what they were. Is that what you're saying from that scalability perspective? Because people are set up to deal with it anyway, it's just going to be like, oh, what's 10,000 more?
Peter Bauer (10:24)
What I mean is malware at scale. So we can deal with malware and attacks that go after machines, or vulnerabilities in machines, because those are easy to observe. Now, we benefit today from the fact that the proportion of mail that is not good, but is also human generated, highly targeted, designed to get through filters and designed to confuse or deceive a recipient, is small relative to good mail. We have technologies that can detect those, but they're also expensive technologies to operate and run, and sometimes you're really reliant on training end users to spot things as well. Or we use intelligent bannering to provide an indicator to the recipient: hey, do you know that no one in your organization has ever replied or sent an email to this address? Do you know that the display name on this is very similar to somebody else you communicate with, but it's a different email address? Or this domain was recently created. So there's color coded advice, certainly in the Mimecast implementation of this, that guides the user. Now, if you're in a world where you get, let's say, I don't know, a thousand emails and two of them are these types of things.
Peter Bauer (11:50)
Well, that's one level of problem. But what if a hundred of them were convincing pieces of content like this? Right now it's, say, two out of a thousand. I don't know what the exact number is, but let's say it's two out of a thousand, because it's quite expensive for an attacker to create those. They're going to choose targets very carefully, they're going to do their research, but they have to actually do work. There's a high cost. But if that cost drops by many orders of magnitude, and computers can go and do the research, comb social media, comb dumps of personal data, craft these things and learn what people are more susceptible to at the time, then suddenly you're dealing with a much higher ratio of this type of content. I think that is going to change the level of threat that exists in the environment if we're not able to answer it effectively with technology.
Karissa (12:44)
So then the other thing as well, and maybe you know more about this, I'm just asking the question: I think, from memory, people were saying that with ChatGPT, say you're writing an article, there are certain syntax patterns such that if you run some tool over it, it would be able to detect that it's AI generated content purely from the syntax, from how it's written. So could that then be applied from an email perspective, potentially?
Peter Bauer (13:07)
So go back to Antiques Roadshow: you start to know the common traits of forgery artists. Well, back in 1960, there was this guy, and he used to use green paint from this particular source, so it isn't the original. It's that mentality of saying, okay, we now know this. It doesn't mean it's going to work all of the time, but it's this level of expertise and knowledge that systems have to acquire and develop over time to say, that's an indicator of risk. And it may not be conclusive, because there may be perfectly benign marketing emails generated by GPT that you may want to get. There may be expert content created by generative AI. You don't want to just convict everything because it comes from that environment. So you might need combinations of indicators. You might want to warn a user: this content appears to be created by generative AI; this address has never been used before; you have no relationship with this person or their organization; the IP address this was sent from has been associated with malicious activity in the past. These are all indicators from which either the technology itself can say, well, the risk is too high.
Peter Bauer (14:24)
We're going to block this. Or you provide that context, that advice, to the user so they can make a decision. Do they trust it? Do they act on this thing, or do they just hit delete? Or do they report it and say, no, I've looked at this too and I think it's problematic, and I want to make our defensive machine smarter by referring it for further analysis. Here's my opinion, because this thing looks like a phish.
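To make that idea concrete, here is a minimal sketch of how indicators like the ones Peter describes, an unknown sender, a look-alike display name, a newly registered domain, a flagged sending IP, text that looks machine generated, might be combined into a block, warn or allow decision. The signal names, weights and thresholds are hypothetical illustrations only, not Mimecast's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class EmailSignals:
    no_prior_relationship: bool   # recipient has never sent to or received from this address
    lookalike_display_name: bool  # display name matches a known contact, but the address does not
    domain_age_days: int          # how recently the sending domain was registered
    sender_ip_flagged: bool       # sending IP previously associated with malicious activity
    ai_generated_score: float     # 0..1 output of a hypothetical "looks machine generated" classifier

def assess(signals: EmailSignals) -> str:
    """Return 'block', 'warn' (deliver with a banner), or 'allow'."""
    score = 0.0
    if signals.no_prior_relationship:
        score += 1.0
    if signals.lookalike_display_name:
        score += 2.0
    if signals.domain_age_days < 30:
        score += 1.5
    if signals.sender_ip_flagged:
        score += 2.5
    # AI-looking text on its own is not conclusive, so it only nudges the score.
    score += signals.ai_generated_score

    if score >= 4.0:
        return "block"   # risk too high, never reaches the mailbox
    if score >= 1.5:
        return "warn"    # deliver with color coded advice so the user can decide
    return "allow"

# Example: an unknown sender on a week-old domain gets a warning banner, not a block.
print(assess(EmailSignals(
    no_prior_relationship=True,
    lookalike_display_name=False,
    domain_age_days=7,
    sender_ip_flagged=False,
    ai_generated_score=0.3,
)))  # -> "warn"
```

The point of the sketch is the combination: no single indicator convicts a message, but several weak ones together either block it outright or surface advice to the recipient.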
Karissa (14:48)
I want to switch gears now and focus on the executive side of things. You yourself are an executive, so you understand how executives think. And as you know, CISOs have constant competing pressures and budgets which have traditionally been stretched, and they're getting stretched further every year. Then, with the addition of the wave of recent breaches here in Australia, would you be able to share some of your insights on how boards are responding and supporting all of this? Do you have any insights to share with our audience today?
Peter Bauer (15:19)
Yeah, it's an interesting one, because generally I think we're all used to IT, and certainly cybersecurity, being considered somewhat of a cost center. I've actually just come from a lunch here today with a bunch of our customers and one of our great reseller and consulting partners in the region, and we had quite an interesting discussion. Towards the end of the lunch, I said, folks, I know you may not want to tell a vendor and a partner this, so just pretend you've had more wine than you actually had, but how are budgets working nowadays? Have you felt a difference moving from '21 and '22 into '23, with the general macro environment? What are CFOs saying to you? What's the environment like? I was frankly expecting to hear a lot more stories of pressure on cost, and I'm sure in many areas of business that is true. But certainly in Australia, what I was hearing was that in the wake of the Optus breach and the other major breach on the medical provider side, the feeling from boards and from senior execs, who might control purse strings but not be responsible for IT, is: are we doing enough?
Peter Bauer (16:39)
Are we protected? Are we spending enough? Have we got the right tools? Are we in good shape, such that we're not going to be somebody who's in the headlines? And I think it's cyclical. It comes and it goes. People forget, the news cycle moves on. But certainly where we are right now, one of the benefits, and it's cold comfort, frankly, of having a high profile breach is that it does create an impetus, an awareness and a desire to invest in areas that are very important but can be overlooked in the rush toward other aspects of the business, investment or product development or applications that the business might want. Cybersecurity is so important, and if boards go through periods and phases where they'll over-emphasize it, let's grab that with both hands, move the ball forward and make the firm stronger.
Karissa (17:35)
I'm not surprised by that comment, given the people you were speaking with today, because there was something I was reading: PwC produced a report saying that the majority of CEOs at major companies in Australia are worried about cybersecurity. So now they're like, oh, okay, we've really got to look into it because of the breaches. Do you think someone has to fall victim to a breach before people pay attention? What I mean by that is, hypothetically, say Medibank and Optus hadn't had their breaches. Do you think that if you'd asked the people at lunch today, they would have given you the same answer if the breaches hadn't occurred? I know you mentioned before that it was the impetus, but do you think that someone has to lose in order for everyone else in the industry, and I don't want to use the word win, but to get that awareness?
Peter Bauer (18:25)
Yeah, I think unfortunately that is the case, because how do we assess risk? We can imagine risk, we can conceptualise it, but at the end of the day, how do we assess it? Really, by its impact on somebody else. If 10 people had died of COVID, how would we have concluded that we needed to protect ourselves from it? But when you know that millions of people died from COVID, you kick into gear and you have a different action plan. So unfortunately, that is, I think, how people quantify risk: what's the scale of the impact on others, and therefore how many calories should I burn to avoid that fate myself? If it's remote, you don't. I mean, if you've never met someone who's won the lottery, do you buy lottery tickets? I don't know, maybe you do, maybe you don't. But if you know 10 people that have won the lottery, my God.
Karissa (19:24)
You'd be buying it. You'd be running out that door to get those lottery tickets.
Peter Bauer (19:29)
Yeah, exactly.
Karissa (19:32)
So hypothetically, say 20 more breaches happen in Australia; hopefully that's not the case. Do you think the budgets will just keep going up and up, similar to people running out the door to get those lottery tickets? Do you think we'll see that? I know that's a hypothetical question, and I know you're not Nostradamus, but I'm just genuinely curious.
Peter Bauer (19:52)
Yeah. The sad thing is that during the time we've taken to record this discussion, there have probably been 20 breaches. They're just not high profile. They're more incremental gains of access, or compromises that have happened in corners of Australian businesses, because adversary activity is constant. Some of it goes undetected, some of it gets detected and resolved, remediated, stopped quickly. So 20 may be on the low end. But I think your question is, at what point are there diminishing returns from just throwing money at the problem? I don't know the answer to that. I do know that money isn't necessarily the only limiting factor. There are challenges around skills. There are challenges around the shift to new technologies that introduce new vulnerabilities, so there's some level of inevitability there, some increased exposure. And to what extent can more money and more cybersecurity tools mitigate that? Maybe the money has to be spent on building more secure technologies that we use. Maybe the money has to go into supply chains. Maybe we just have to avoid using certain types of technologies that expose us to risk, if we can.
Peter Bauer (21:27)
We dump an enormous amount of information about ourselves willingly across multiple social media platforms, through the various forms of freedom of expression that we enjoy, without always weighing the security implications of those things. So the lack of budget can certainly cause problems, and the lack of skill can certainly cause problems or make it much more difficult to deal with them. But the way the world is does set us up for some level of risk here, and so we should really be examining what the implications of our behaviors and our choices are from a security perspective. Again, maybe it comes back to your prior question: until enough people can see enough harm done to themselves as a consequence of things they've shared and exposed on social media, maybe people will just not be as willing or interested in making the change. They don't see a need for that trade off.
Karissa (22:21)
Yeah, and you're right. And 20 high profile ones was probably what I was more thinking of, which you picked up on. I think it's definitely changed over the years. Even when I was doing executive reporting for a bank, we used to cover all these things to give that awareness, not from a FUD perspective, but more like, hey, did you know that this was a thing? We had the Target breach, and then we had the Ashley Madison one. So we were giving that awareness. But now it's 10x that. And when it's in your own backyard, and a big player like Optus, which is owned by Singtel, a billion dollar company, is getting breached, I think that's what started to sound the alarms for other people and got them asking those questions. But I really appreciate you sharing your insights on that. So, as Mimecast is a global company, you have great insights, and I want to understand from you, Peter, what are some of the current threats you're seeing globally, in your position and from the customers you talk to? Is there anything you can share?
Peter Bauer (23:23)
We've built a business whose identity is very much around our work in email security, and that's super important. It's the number one attack vector. It's the common culprit that shows up; it's such a useful tool in an adversary's kit bag, the pathway to the mind and the machine of every employee at the company. But that's really an intersection. The work surface is what adversaries are after. They're after this intersection of people, communications, and data. They don't necessarily think about that attack surface in categories, like, if you're familiar with Gartner, the magic quadrants or the boxes that a Gartner might think about: this thing is going to defend endpoints, or this thing is going to defend mail, or this thing is going to defend networks. The adversary sees it all as one environment that they need to penetrate, and they inherently have a systems thinking approach to it. So it's really important for us as defenders not to get too tied up in our boxes or our categories or our definitions, and more to think about adversary behavior and how you stifle their ability to accomplish their outcomes. How do you follow the sequence of events and make sure that you have mitigations, detections, and ways to reverse them out at various of those points?
Peter Bauer (24:53)
And so a lot of our strategy, our branding and our positioning is around this concept of Work Protected, which is both the noun and the verb: the work is protected, and you're able to work in a protected way. People, communications, and data. It's really about recognizing this global phenomenon, and email is right at the core of it. But there's a lot more to it: security awareness training; dynamic policy based on an individual's risk; leveraging human beings, if they see something, say something, so that you can train machines to be better at detecting things in conjunction with humans. And then all of these detective technologies, the things we spoke about: we have to be good at looking at content and meaning, interrogating things, looking for unique forms of malware, blocking technologies. And then understanding, if something gets in, what is the blast radius that thing has? Who's got it? Did anyone click on it? How quickly can we get it out? And how do we share that intel through APIs with some of these other systems, like endpoint security and some of the other Gartner Magic Quadrant category products that our customers might have, so that the defensive radius is expanded and made stronger a lot faster, based on what we see coming at our customers through this high intensity attack surface with email at the centre of it.
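As a rough illustration of that blast radius idea, here is a minimal sketch of the workflow: once a message is judged malicious, find everyone who received it, check who clicked, pull it back from mailboxes, and share the indicator with other tools over an API. The data structures and function names here are entirely hypothetical, not any vendor's actual API.

```python
from typing import Dict, List, Set

# Hypothetical delivery and click telemetry, keyed by message id.
DELIVERIES: Dict[str, List[str]] = {"msg-123": ["alice@corp.example", "bob@corp.example"]}
CLICKS: Dict[str, Set[str]] = {"msg-123": {"bob@corp.example"}}

def remove_from_mailbox(message_id: str, mailbox: str) -> None:
    # Placeholder for a post-delivery removal call to the mail platform.
    print(f"removed {message_id} from {mailbox}")

def share_indicator(indicator: Dict[str, str]) -> None:
    # Placeholder for pushing the indicator to endpoint or SIEM tools over an API.
    print(f"shared indicator {indicator}")

def contain(message_id: str, sender_domain: str) -> List[str]:
    """Pull the message back everywhere and return the recipients who clicked."""
    exposed = []
    for mailbox in DELIVERIES.get(message_id, []):
        if mailbox in CLICKS.get(message_id, set()):
            exposed.append(mailbox)  # these users may need follow-up, e.g. credential resets
        remove_from_mailbox(message_id, mailbox)
    share_indicator({"type": "domain", "value": sender_domain})
    return exposed

if __name__ == "__main__":
    print(contain("msg-123", "evil.example"))  # -> ['bob@corp.example']
```

The useful output is not just the clean-up itself but the list of exposed users and the shared indicator, which is what lets other controls tighten quickly after one detection.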
Karissa (26:21)
So, Peter, as you were talking, what came up in my mind, because you mentioned the lunch and the customers you speak to, and you're an executive yourself so you get other executives, is this: if you had to name the top three concerns from the people you're speaking to in terms of cybersecurity, would you be able to illuminate what they are? And again, there's no right or wrong answer; I'm just curious and want a bit of a barometer of where people are at.
Peter Bauer (26:50)
A statement I've heard maybe too many times is a feeling from people that they might be tool rich, but capability poor. They've bought a lot of things, but they haven't yet been able to turn them into a system, and part of that is people and process. They have the technology, but not the people and processes in place to make it all work together, to feel confident that it can be a reliable defense against creative attackers. I think that's a concern. So somehow finding a way to get to a system, or a mesh of systems if you like, that's the first one. I think the second one is that we're living in an environment where, some would say, globalization has peaked, and now we're in a period of geopolitical tension and rivalries. Some of that is kinetic conflict, which is unfortunate, and some of it is digital. But we're clearly in a new phase of conflict, and of nation states' willingness to go to the brink and to jettison relationships and alliances that would otherwise have been a lot more productive. So what does that mean for companies caught up in these things? Governments can make all sorts of statements, but the government doesn't administer my firewall, let alone buy it for me.
Peter Bauer (28:11)
The FBI may be able to supply some intel or a tip-off about certain things, or they can arrest some culprit at an airport at some point in time, but companies have to provide their own digital armed forces in this regard. So people are concerned about that landscape; that's definitely a factor. And I think the third thing is the current economic environment and the skills in the market. It comes down to limited financial resources. Companies are all competing for cybersecurity talent. They've got a lot of complexity in the environment, they want the best talent, but they can't build up massive teams because there are economic pressures on them, and there's a limit to how many people have these skills and this expertise. So there's a need to rely on service providers and partners, and there's a growing sense that they don't know what they don't know. It's very, very difficult to hire and retain, over a long period of time, a top class expert team of cyber folks in every organization that needs one. Of course, if you're a major government agency or a very large bank, you're going to find a way to afford these people and retain these people.
Peter Bauer (29:40)
But there's a massive midmarket of companies who, frankly, just don't feel like they've got enough fingers to stick in the dike to keep the thing secure. So that's a real concern.
Karissa (29:55)
Yeah, those are great. And I do agree with you on the geopolitical one, which is something I've spoken about on the show before, and I definitely believe it's a real concern. So in terms of any final thoughts or closing comments, Peter, is there anything specific you'd like to leave our audience with today?
Peter Bauer (30:11)
It's easy to think of the world today in static terms and to assume it's going to stay roughly as it is on a perpetual basis. But look at where we are today compared to where we were just 10 years ago in terms of the proliferation of mobile devices, in terms of new application architectures, and in terms of something that barely existed even 10 years ago: the massive centralization of every company's applications into something like Office 365. In 10 short years, we've basically emptied our server rooms and moved all of our applications to a very small number of application providers, whether it's Microsoft or maybe AWS. It's a massive centralization, consolidation, concentration. And there are great benefits to that. There are efficiencies, and there are all kinds of cool new things that are possible. But it's worth observing the shifting risk profile that brings with it, and, as people who are paid by our respective organizations to ask the question, what could possibly go wrong, it's our job to think about that and to say, well, there's a lot of good involved, but what could go wrong? And then think about mitigation strategies for that. I'd encourage your listeners just to process that.
Peter Bauer (31:40)
And it's not a thing to do immediately, but just bear it in mind as we continue this shift to massive centralization, concentration and dependency in core areas across everything, with everyone's eggs in these same baskets. In and of itself, that's not bad, but we've got to have mitigation strategies. We've got to be wide eyed, and we've got to educate our stakeholders about the potential risks that can occur. And unfortunately, as we discussed earlier in our conversation, sometimes we only learn through the feeling of pain what the risk was, and then we respond to that pain, either by observing it in others or experiencing it ourselves. I would just note that when you have this huge concentration and centralisation, the impact of that pain can be very widespread and can be felt very quickly. So it might be prudent to think about mitigations. And obviously, that's a fair amount of the work we do, with tens of thousands of our customer organisations being on something like Office 365, providing layered security and risk mitigation capabilities just to help them be more confident and secure while they embrace some of these new ways of consuming IT. That would be my parting thought.
Peter Bauer (32:59)
Karissa, thanks for spending time talking with me.
Karissa (33:03)
No, thank you for your time. And I will say that, yes, a lot has changed in the last 10 years, and who knows what's going to happen in the next 10. Maybe you'll have to come back on and do a "where are they now" for the next 10 years. Everything changes. It perplexes me some days how fast things just evolve and grow. So I really appreciate your time.
Peter Bauer (33:21)
Yeah, who knows? It's quite possible, Karissa, that a copy of this podcast will be featured on Antiques Roadshow in 10 years' time.
Karissa (33:28)
Maybe, yes. Antiques Roadshow. Gosh, that makes me feel really old. Appreciate that, Peter. But yeah, thank you so much for coming on the show today and sharing your thoughts and insights. I hope you're enjoying your time down here in Australia.
Peter Bauer (33:41)
Wonderful. Thanks, Karissa. Great to spend time with you.
Karissa (33:44)
Thanks for tuning in. We hope that you found today's episode useful and you took away a few key points. Don't forget to subscribe to our podcast to get our latest episodes. This podcast is brought to you by Mercsec, the specialists in security, search, and recruitment solutions. Visit Mersec.com to connect today. If you'd like to find out how KBI can help grow your cyber business, then please head over to kbi.digital. This podcast was brought to you by KBI.Media, the voice of cyber.