Graeme Neilson [00:00:00]:
I do feel that password managers in the cloud, from a sort of operational security point of view, is a fundamentally bad idea. I mean, you’ve just got a huge target in plain sight sitting there that everyone is going to want to breach, because then you’ve breached many systems rather than one. So the efficiency for the attacker with these aggregate providers is enormous.
Karissa Breen [00:00:39]:
Joining me now is Graeme Neilson, founder and researcher at Siege. And today we’re discussing how the security industry ignores the halting problem. So, Graeme, thanks for joining me and welcome.
Graeme Neilson [00:00:48]:
Hi, Karissa. Glad to be here.
Karissa Breen [00:00:50]:
Okay, so halting problem, I’m really curious to understand what do you sort of mean by that?
Graeme Neilson [00:00:56]:
That’s a fundamental theorem in computer science, from around the time of Alan Turing, when computers were first conceived. He had the whole idea of a Turing machine, where you have some tape with some symbols on it. You manipulate the symbols, you have some memory and you output some information. There may be some transitions within the Turing machine, and effectively all modern computers, phones, are Turing machines. What the halting problem states is that given a Turing machine and an arbitrary input, you can never prove in general whether any particular input, i.e. program, will halt or run forever. What that translates into, effectively, is that if you consider, you know, protocols, programs you might write, you can think about the input to that program that’s provided by, say, a user or an attacker effectively as a program, because any program you write for a Turing machine can be executed by any other Turing machine.
Graeme Neilson [00:01:56]:
That’s one of its properties. So when you accept input and process it, you’re potentially always running into the halting problem, depending on, I guess, how complex your language is, what you’re processing. But effectively, the halting problem is the reason we have bugs in programs. It’s almost a way of stating that, for example, if you have some loop, you won’t know whether that loop will ever finish or not given arbitrary input. You can test some inputs, obviously, you can test some variables and see whether the program halts under those conditions, but you can’t prove that it will halt or not halt for all possible inputs. So in terms of security, that means you’ll always have bugs.
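The point about testing some inputs but never proving all of them can be made concrete with the Collatz function, a classic illustration (my example, not one Graeme names): every starting value anyone has tried eventually halts, yet no proof exists that the loop halts for all inputs.

```python
def collatz_steps(n, max_steps=10_000):
    """Iterate the Collatz rule from n; return the number of steps
    taken to reach 1, or None if we give up after max_steps."""
    steps = 0
    while n != 1:
        if steps >= max_steps:
            # Did it just need more steps, or does it run forever?
            # We cannot tell, which is exactly the halting problem.
            return None
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Testing specific inputs is easy...
assert collatz_steps(6) == 8
assert collatz_steps(27) == 111
# ...but no amount of testing proves the loop halts for *every* n:
# whether it does is the still-open Collatz conjecture.
```

Every individual input can be checked, but the universal claim ("this loop halts for all inputs") stays out of reach, which is the gap Graeme is pointing at.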
Karissa Breen [00:02:39]:
Okay, so this is interesting. So let’s get into this a little bit more. So you’re sort of saying people are ignoring the halting problem, so therefore we have defects, vulnerabilities, bugs, etc. So then why are people ignoring it? Is it because now we’ve got to, you know, ship stuff faster? I was at a conference the other day and that was the whole conversation: shipping faster than, I mean, the majority of people have ever seen, right? But even if we go back historically and look at, you know, computer science, for example, this was happening back then, and now there are sort of more holes everywhere.
Graeme Neilson [00:03:15]:
It’s been understood for a long time. And I guess my contention is that the security industry as a whole is ignoring it, in terms of, you know, vendors, people selling you security products, security solutions, even advice around how to develop programs properly. People tend not to consider the halting problem. They tend not to think about it. I mean, it’s there in the background. There are some systems that use formal verification, which allows you to actually prove what a program will do. But those use cases are pretty small: space, military, you know, critical systems. And they’re not as flexible, obviously; a formally proven system is quite constrained.
Graeme Neilson [00:03:52]:
So I guess my point is not so much that, you know, computer science and people who use computers ignore it, but when people are giving you advice or trying to sell you security solutions, I would say they ignore the halting problem. They imply that they can give you better security by, say, buying their box and sticking it in front of your box: that box processes all the bad stuff and you’re safe. What the halting problem would tell you is that actually you’ve just put another full attack surface in front of your attack surface, neither of which can be proven to be secure, and therefore there will be more bugs, there will be more issues. So I feel they’re being a bit disingenuous. There’s a pretense that, you know, users are to blame for security incidents or developers are to blame for writing poor software, whereas in fact those people are powerless. This is something fundamental about what computers can and cannot do. There is no way to change that.
Karissa Breen [00:04:49]:
Okay, so I want to talk about users now for a moment. When you and I spoke before the interview, like a week ago, you sort of shared a couple of things with me, one of which was awareness programs. You’re against them. I think "against" is probably a strong word. Maybe you are, but tell me more about this, because, I mean, I’ve had people really pro awareness programs, I’ve had other people say absolutely not, but I’m really keen to hear what’s on your mind.
Graeme Neilson [00:05:15]:
Awareness programs. Well, again, I feel that’s a little bit of a pretense. You know, email phishing is a good example of this. You have a protocol which, at the time it was designed, was fine: share information, send emails. Now, of course, there are security implications of accepting emails, reading emails, people trying to con you, phishing. And to try to train people to not use email and computers and the Internet as designed, "do not click on links", seems to me ludicrous. The people sending phishing emails are doing what people have done for centuries. They’re trying to con people.
Graeme Neilson [00:05:51]:
They are playing with people’s psychology. There are lots of contextual reasons why people might fall for those kinds of emails, those kinds of cons. But people fall for those kinds of cons on the phone, or they fall for them in the street when they meet people. To try to train people out of some fundamental psychology of human nature seems futile. I mean, how long have we had security awareness programs and phishing training? Does it work? Has it stopped phishing? I would say no.
Karissa Breen [00:06:19]:
Okay, this is really interesting. Okay, so you’re right, we’ve had it for years. I mean, I’ve worked in companies before and I’m like, this is an awful training program. The other thing as well, I mean, I was speaking to a CIO, I don’t know, maybe nine years ago, and they’re like, yeah, but we do these things, KB, but no one actually explains it. It’s like, "don’t do it", rather than explaining it. So what do you think needs to happen? Because at the end of the day, like you said, these people are going to keep sending these phishing emails, they’re going to keep trying to con people out of their money. They’re going to keep doing it. But then everyone, and I hate to use the term, the whole awareness thing, people say, oh, you know, the awareness, the awareness. So then how do we overall try to stop people from getting scammed, conned, whatever, when these things are going to keep coming and, you know, maybe some of these trainings are not effective? Talk me through it.
Graeme Neilson [00:07:10]:
Well, I mean, the fundamental problem with phishing is, you know, sender verification. How do you know who sent you the email? I mean, that’s a fundamental problem of email. It’s not a fundamental problem of people. It’s a problem of email. And there are plenty of secure messaging systems around nowadays that we could use. You know, the fact that all businesses use email is because it’s convenient, it’s historical. We have, like, a lot of sunk cost in email. But using a communication medium that doesn’t allow verification of the sender, again, seems foolish.
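That missing sender verification is visible in the message format itself: the From header is just text the sender chooses, and nothing in SMTP or the email message format checks it. A minimal sketch with Python’s standard library (all addresses are invented):

```python
from email.message import EmailMessage

# The From header is a claim, not a verified identity: the protocol
# carries whatever the sender writes here. (Addresses are made up.)
msg = EmailMessage()
msg["From"] = "ceo@example.com"   # claimed, never proven by SMTP itself
msg["To"] = "cfo@example.com"
msg["Subject"] = "Re: the deal"
msg.set_content("Please wire the funds today. Keep it quiet until signing.")

# A receiving client typically displays the From header as-is.
print(msg["From"])
```

Retrofits such as SPF, DKIM and DMARC bolt domain-level checks on afterwards, but they authenticate sending domains rather than people, and deployment is uneven, which is roughly the point Graeme is making.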
Graeme Neilson [00:07:44]:
I don’t know, maybe I’m being naive, but it seems obvious to me. I mean, okay, I understand people aren’t going to, you can’t just stop using email overnight. But if you are wanting to communicate with people in a secure fashion and not be subject to those cons... I mean, you’re effectively allowing everyone on the planet to directly contact you and try to con you. And for businesses, it’s not sustainable, surely. You know, all the spates of targeted phishing emails against, you know, CFOs, when attackers know the CEO is traveling, asking to transfer money for some deal that is, you know, not yet signed, so don’t talk about it.
Graeme Neilson [00:08:25]:
You know, we’re all aware of all these targeted attacks like that.
Karissa Breen [00:08:29]:
Okay, I want to keep following this because I hear what you’re saying and I want to explore it. So, okay, sometimes when I’m emailing people, because obviously we’re in media, we primarily email external people all day, there are a lot of people where, when their email comes back to me, it says, like, this email is an external email, be careful, all those sorts of things. So do you think that’s enough? I know it’s sort of counter to what you’re saying, but at least it’s giving something.
Karissa Breen [00:08:56]:
Some of them are quite, like, highlighted in yellow or red. It’s quite bold, in your face, actually annoying sometimes to read. It’s a bit distracting. So does doing that sort of thing in some way help solve the problem?
Graeme Neilson [00:09:06]:
I don’t think so. I mean, the way email works, you’re always emailing people outside your organization. That’s one of its fundamental reasons for existing, isn’t it? I get these myself: every time I send an email to an external organization, I get a warning. So I just don’t see the warning anymore. My brain just turns it off. It’s just part of the page. Now, I’m not proposing here that we necessarily try to re-engineer email.
Graeme Neilson [00:09:28]:
I just think, I guess, the awareness programs should really be about understanding what email can be used for, and basically how you cannot trust any email. So internal email, sure, but for external email I think you need other processes in place to ensure that communicating with external parties is not putting you at risk. So, you know, you understand the limits of what an email can tell you in terms of who you’re talking to and who or where they might be. I think that would be a better approach.
Karissa Breen [00:10:01]:
So, okay, where my mind’s going is, because what you’re saying makes sense, right? So, for example, travel on a plane, right? Should it be up to me as the traveler, the customer, the passenger, to be like, oh, you know, is this plane secure? And look at everything that’s happening in the news at the moment. Is it on me? Because I’m not an engineer and don’t have anything to do with aviation, so why should it be on me to check, is this plane going to be okay? So I’m using that as an example of how sometimes in security we just think, oh, well, you know, Graeme should have known, because he did the awareness training. And I just think it’s not these people’s profession, it’s probably not what they’re interested in, yet we’re still trying to hand over the blame and be like, well, the user didn’t think it through, therefore it’s their problem. We’ve just seen this over the years, and we’ve tried to, you know, patch it with technology and more technology and all the things, and it’s still, well, it’s the user’s problem. I’ve done so many of these interviews and I’m just trying to really get a gauge on, well, what do we do moving forward? Because it is unfair to expect people to know all the things about cyber security, right?
Karissa Breen [00:11:07]:
Or else that’s exhausting. Or else we’d have to know everything about construction and cars and vehicles and all of these other things, right? So how is it fair?
Graeme Neilson [00:11:15]:
Absolutely. And I think maybe, you know, as computers become more ubiquitous, more embedded in everything and less a separate entity in themselves that you use, we need to move more towards dealing with these security issues analogous to, say, safety. Because, as you point out, I don’t know anything about planes. I shouldn’t have to know about a plane before I get on a plane. I buy lots of electrical equipment. Electricity is dangerous, yet I don’t need to know anything about it to use devices safely. I feel that should be the same approach for computers. You shouldn’t be having to prod yourself, stick a fork in the wall socket, to become aware of the dangers of electricity, which is effectively what anti-phishing or security awareness programs do.
Graeme Neilson [00:12:01]:
I think so.
Karissa Breen [00:12:02]:
Why haven’t we gotten to this point? Because it’s not like computers were invented yesterday. They’ve been around for a long time. We’ve got a lot of smart people out here. So do you think, and this is potentially a theory, do you think that companies out there are like, well, if I sell you this thing, I’m distracting you from fixing the root problem, because I can sell you this other thing to plug the gap that was there already? Or what do you think’s happening here?
Graeme Neilson [00:12:28]:
Well, I think it’s the technical debt of the Internet, obviously. It was built as a sharing network. You know, you can think of it as a giant copying machine, if you like. It obviously came from DARPA and the military, but initially it was a university network. And I guess, you know, having grown up as the Internet came into existence, I saw how it went from a sort of network of sharing and knowledge to one where, quite rightly, people were like, well, hey, we can use this to run a business, I can make some money, I can provide a service. But a lot of the protocols were not designed with security in mind. So there’s been a lot of backwards fixing of, for example, even HTTP. You know, there’s no idea of a session in HTTP. So all business on the Internet using HTTP is a kind of retrofit of security to allow those things to happen.
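The "no idea of a session in HTTP" point is easy to make concrete: each HTTP request stands alone, so web applications retrofit continuity with a cookie whose value is nothing more than a key into server-side state. A minimal sketch of that retrofit (the names and structure are mine, not any real framework):

```python
import secrets

# Server-side session store: the cookie value is just an opaque key into it.
sessions = {}

def login(username):
    """Create server-side state and return the opaque token that would be
    sent back to the client in a Set-Cookie response header."""
    token = secrets.token_hex(16)
    sessions[token] = {"user": username}
    return token

def handle_request(cookie_token):
    """HTTP itself carries no identity between requests; the only
    continuity is whatever the client echoes back in its Cookie header."""
    session = sessions.get(cookie_token)
    return session["user"] if session else "anonymous"

token = login("alice")
print(handle_request(token))   # the cookie round-trips the session
print(handle_request(None))    # no cookie, no session: a stranger again
```

Everything about that continuity, including its security (token theft, fixation, expiry), is an application-level retrofit rather than part of the protocol, which is why session handling has been such a rich source of web bugs.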
Graeme Neilson [00:13:21]:
And again, obviously you can’t just fix everything and turn it all off and change to a new protocol. It’s not feasible. But we do have to start addressing these issues quite soon. I think the longer we wait, the harder it’s going to get.
Karissa Breen [00:13:33]:
Yeah. Okay. And look, the Internet, right, it’s really built out of, like, sticky tape, duct tape. It’s, you know, not the most robust thing.
Graeme Neilson [00:13:40]:
I’m surprised it works most of the time, to be fair.
Karissa Breen [00:13:42]:
It’s just one of those things that people nowadays just don’t really think about. Like, I can just go on the Internet. They don’t really think about the mechanics, how it’s built, etc. Right? So, and this is going to be a hard question, but like you said, we’ve sort of just kept building stuff on stuff, and it’s a rickety sort of bridge we’re walking on at the moment, and it’s going to be hard to knock the whole thing down and rebuild it. So what do we do to reinforce it, to make it stronger?
Graeme Neilson [00:14:09]:
For me, the line is, as we discussed just earlier, that computers are becoming more ubiquitous, embedded in everything, smaller and everywhere, and computers are now interacting with the physical world, or influencing the physical world. Stealing money on the Internet, or defacing websites, or stealing digital information is one class of problems. But once all the potential security issues that we have in the digital realm are able to be exercised in the real world as well, in terms of smart cars, smart construction vehicles, all the IoT devices, supposedly smart devices in your home, I think at that point, as this is happening, is where we have to start drawing some lines in the sand. And I think those devices that interact with the world have to have much stronger, I guess you would term it safety rather than security. I think that might be the way to term it. It is security, but looking at it through a safety lens.
Graeme Neilson [00:15:12]:
So trying to persuade vendors, governments, people to think about it like that may get us some traction in terms of fixing these things.
Karissa Breen [00:15:21]:
Okay, so I want to zoom out for a second. So, quick update: I have now Googled, just to give you some insight. The first standalone electric toaster, called the Eclipse, was made in 1893, well over 100 years ago. And it looks pretty dodgy. You can look it up afterwards. You know, it looks like something you’re 100% going to burn your hand on; something’s going to happen.
Karissa Breen [00:15:43]:
Right? But look at the toaster now, right? It’s completely different. I mean, look, unless you’re putting, like, a knife or something in the toaster once it’s turned on, but other than that, it’s not like the one I’m looking at. So the reason why I’m bringing that up is, are we going to have to wait another hundred years? I mean, you and I won’t be here to have that conversation, of course, but do you think we have to wait that long? To your point, the computers are going to get smaller, they’re going to get better. Will we stop having these sort of fundamental, flawed problems that we’ve had historically over the last 20, 30, even 50 years, since some of the first computers started to come out?
Graeme Neilson [00:16:15]:
Well, that’s my hope. I think it’ll be quicker than the hundred years. There are some really pressing issues coming up at the moment, particularly around credentials. Look at breaches and credentials. I did some research recently where I was going on dark web forums and seeing what data is available, and there’s a recent piece of research, I think it came out today actually, on people discovering info stealer logs and how many actual username and password pairs have been breached. You know, we’re into the billions. So once everyone’s credentials are compromised, at that point I kind of feel maybe we are freed up to fix things. I’m being a little bit flippant. But, you know, it used to be that attackers would brute force credentials, and now they just have them.
Karissa Breen [00:16:59]:
But we must be getting close to it, because, like, how many breaches? I’ve been in multiple major breaches, like, multiple times. Yeah, so this is where it gets interesting. Okay, so then I’m curious to understand: all these breaches have been happening, especially here in Australia, like in 2022 there were a fair few of them. And there were people online, because I like to do research as well and see what the average person is saying, and someone’s like, oh well, who cares? I was in the first, second, third breach or whatever it was. So then I’ve asked people on the show, do you think they’ve become desensitized? People have said yes, et cetera.
Karissa Breen [00:17:28]:
But do you think it’s going to get to a point where it’s like, well, no one cares, my stuff’s out there? And then therefore we’re in an even worse position as a security industry than before, because no one cares. Which means businesses, yes, they’re regulated and all those things, and they’re going to get pinged by the government to some degree, but they may have less impetus to want to do anything, because they’re like, well, no one really cares anyway. So it sort of always feels like, are we going backwards? Because no one seems to care that much.
Graeme Neilson [00:17:57]:
I think that not caring from users is simply powerlessness, though. I think their only option is to not care, because the breaches are happening and they have to use these online services to do their life admin. I mean, you can’t not have online banking. Health, you know, is online. Interacting with the government is online. You can’t avoid it.
Graeme Neilson [00:18:18]:
And they are not the ones that are being directly breached. It’s all the services, the databases, that are breached. So the users themselves have basically very little power, I would say. And so I think their not caring is simply: I can’t do anything about this, so why care? What can I possibly achieve by caring?
Karissa Breen [00:18:39]:
So do you think companies are thinking about that? Well, we made a mistake, and the people are powerless. And I get that, you’re right, and we’re forced. And that’s why I keep saying to people, they’re like, oh, you know, my privacy and all these things, and I’m like, yeah, but if you want to operate in today’s society, you’re going to be on the Internet, which means you have to accept the risk of potentially being in a breach, and that loss of privacy.
Graeme Neilson [00:19:00]:
You have no idea of how that company is storing your data or how it’s using it. Again, it comes back to: the user really shouldn’t be the focus of security. It needs to be the systems that we’re using. Businesses, I feel, sometimes care about security because of the people in them or the particular nature of their business, or they’re forced to because of compliance, potentially. But there are plenty of companies who I think take this sort of wildebeest approach. We’re running around, crossing the river out on the savannah.
Graeme Neilson [00:19:28]:
As long as we’re not the oldest or the slowest at the back, then there’s enough targets that the attackers will get someone else and we’ll be fine. We’ll have our business and we’ll do our exit before any devastating events happen.
Karissa Breen [00:19:41]:
And would you say that’s sort of the general mindset at the moment?
Graeme Neilson [00:19:43]:
Oh, it varies enormously. I mean, again, it all comes down really to the people in those companies. And that can change as those people change. I feel, you know, companies’ security profiles change. Look at Microsoft, for example. They started off very, very weak on security. They suffered a lot of breaches, generated a lot of exploits for attackers. Then they had to take security seriously, and they took it very seriously and improved enormously.
Graeme Neilson [00:20:07]:
And then recently I feel they’ve been kind of going the other way again. You know, I think it requires constant effort, again because of the halting problem in the systems you’re using, to maintain your security.
Karissa Breen [00:20:17]:
So what do you think? Okay, given what you do and your experience doing a lot of research, what do you think people, people as in companies, are most upset about at the moment? And I want to use that sort of word because obviously I’m on, you know, socials, reading what people are saying: they’re frustrated with a vendor, or with users, or they can’t get enough money, or, you know, all of the things. But what would you attribute as people’s main frustration at the moment?
Graeme Neilson [00:20:44]:
For business at the moment, because of the sort of political situation in the world and what’s going on, there’s an enormous amount of political denial of service going on. I call it political because there’s no ransom request, there’s no communication. It’s not even necessarily public infrastructure that’s being attacked. It’s simply denial of service to attempt to damage business. What I’m hearing at the moment from Australia and New Zealand is just a huge amount of that causing impact to them, basically.
Karissa Breen [00:21:16]:
Okay, so on the user side, then. I was, like, scrolling through Instagram reels, and there’s this random video. I didn’t save it, and it made me laugh so much. There’s this woman in her car, completely almost having a meltdown. She’s like, I’m so sick of using multi factor authentication. I just can’t do it anymore. I’m over it. I’m sick of it.
Karissa Breen [00:21:34]:
Like, she was really raging hard. So, you know, I get it, right? Like, we’re trying to be secure, trying to do these things, but then obviously it just annoys people. People just want to log in. They don’t want to do the whole, oh, I’ve got to go to my authenticator app and do all these other things, right? It annoys them. So I would say that’s probably the main consensus of people’s frustration from a user’s perspective.
Graeme Neilson [00:21:55]:
I was considering that, actually, the other day. I spend an inordinate amount of time logging in, constantly, just all the time. And it is very frustrating. And because I’m a security person, everything’s two factor, because it has to be. As we talked about with breaches, you know, your credentials are out there. So that’s definitely a frustration. And again, that’s possibly why people are sort of turned off by security, or talk of security, or being more secure. They just feel it’s going to cause them more frustration and more effort, which is a problem as well.
Graeme Neilson [00:22:23]:
And I think the sort of implementation by Google and Apple where they allowed you to have passkeys rather than passwords, I think that was a slightly missed opportunity. It seems the implementation of those wasn’t perfect. Well, it wasn’t ideal, and they’re attempting to lock people into their little ecosystems. Because I think passkeys would be a good solution for the password problem, and for multifactor authentication, and for clicking on motorcycles or buses, especially when the machines are better than humans at the CAPTCHAs these days.
Karissa Breen [00:22:54]:
So, totally get it. So, okay, so then, I mean, passwords infuriate me, right? It’s like, oh, you shouldn’t have the same password. It’s like, yeah, but it’s some site, something I need to log into that I don’t really care about, right? And then it’s so hard to remember. Okay, so then people would say, okay, well, KB, you can get a password manager. But then what gets me about password managers is that, as you’ve seen, they’ve just been breached anyway.
Karissa Breen [00:23:16]:
So it’s like I’m paying for a password manager for a service to help me with not remembering all my passwords, and then it gets breached. So what’s happening?
Graeme Neilson [00:23:25]:
Well, the whole halting problem’s happening. All the protocols, all the languages we’re using: their complexity is not being considered, and therefore we are introducing bugs. Because in all these breaches, the people building the systems don’t want to be breached. Obviously, these are unintentional bugs, unintentional security bugs, due to the tools that people have used to build their products. You know, it’s software art, it’s not software engineering. We call it software engineering, but there’s no proving of anything. It’s simply how the developers decide to write the code.
Graeme Neilson [00:23:57]:
I do feel that password managers in the cloud, from an operational security point of view, is a fundamentally bad idea. I mean, you’ve just got a huge target in plain sight sitting there that everyone is going to want to breach, because then you’ve breached many systems rather than one. So the efficiency for the attacker with these aggregate providers is enormous.
Karissa Breen [00:24:18]:
Then the other thing, which was really interesting, and I know we’ve covered a lot of ground already, but one thing that you raised with me was: we have all this technology, we’ve got more vendors than we’ve ever seen before, right? But then, as you’ve rightly pointed out throughout this interview, there are more breaches, more vulnerabilities, more defects than ever. And I know we’ve discussed the halting problem, but outside of that, how do we have more money being invested into technology than ever before, and still have more issues than before?
Graeme Neilson [00:24:48]:
Well, that’s a good question. The more code you write, the more bugs you have. That’s just a fact. No one’s writing perfect code, no matter how good they claim to be.
Karissa Breen [00:24:57]:
I don’t want to let this thought leave my mind. Do you also think it’s because of, like, a lower barrier to entry now? With everything now, with AI and stuff like that, people who are not, like, a developer, quote unquote, by trade are just shipping code. So therefore the code that’s being written isn’t the best, or the most secure, or whatever it is. And then it’s being pulled from open source repos, so it’s like, well, do we really know what’s in this code? Or do you think there are just more problems than before, than just Graeme developing the code securely and then shipping it? Talk me through that.
Graeme Neilson [00:25:30]:
Yes, and I think it’s the inherent complexity that’s hidden from you, to some extent, when you write software now, where you’re importing other people’s libraries, which import other people’s libraries, you know, sort of ad infinitum. I think the interaction and the complexity of those things leads to bugs. A lot of bugs are in the interfaces, so the parsing of data, the passing and communication of data, and obviously that’s happening a lot more. And I guess, you know, the first programs were machine code. That was the code; the code you wrote, bit for bit, was what ran. Then we moved to assembly language, where you’re almost writing machine code, but it’s slightly abstracted. And then we have object oriented languages and declarative languages, and now AI writing code.
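The "bugs live in the interfaces" observation ties back to the halting-problem framing from earlier: a parser can pass every input its author tried and still misbehave on inputs nobody tried. A tiny invented example:

```python
def parse_port(address):
    """Parse a 'host:port' string and return the port number.
    Works on every input the author happened to test..."""
    host, port = address.split(":")
    return int(port)

# The tested cases pass:
assert parse_port("example.com:443") == 443

# ...but arbitrary input finds the cracks:
# parse_port("example.com")  raises ValueError (only one field after split)
# parse_port("::1:8080")     raises ValueError (IPv6 yields too many fields)
```

The author tested some inputs and shipped; an attacker, supplying the input, effectively chooses which untested path runs, which is exactly the asymmetry Graeme described at the start.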
Graeme Neilson [00:26:20]:
So all through those steps, we are abstracting away from the actual bytes running on the CPU. There’s more code interpreting your code, if you like. So again, there’s more opportunity for bugs. I have a horrible feeling that there’s a bit of a tsunami of AI-generated code bugs that we haven’t quite seen yet. I suspect that might happen in the next year or two, depending on how successful people actually are at shipping product if they’re, you know, just using AI to write code. I’m doubtful that that is completely doable at present, so we might be saved from that.
Karissa Breen [00:26:55]:
But then would you say as well, people could sort of hit back and say, well, you know, you’ve got things like SBOMs, so you can look at the transparency, the compliance. You can go through it with a fine-tooth comb, you know what I mean? Do you think people would respond that way, to be like, well, you can look at the, you know, detailed list of the ingredients in this particular code?
Graeme Neilson [00:27:15]:
You can try to look at the list of ingredients in that code. If it’s the same ingredients tomorrow, that would be good; it may not be. You know, the ingredients may have changed slightly. And if you’ve ever looked at an SBOM for a significant product, I mean, how are you going to assess whether the vulnerabilities within the SBOM are actually going to be exercised by your code? So, for example, if you have an SBOM with a whole lot of vulnerabilities in some libraries, there’s no guarantee that you’re actually touching those code paths, that those vulnerabilities are relevant to you, and it’s very hard to determine that. So I’d put SBOMs, to some extent, in the same category as lists of vulnerabilities. You know, people scan themselves, have lists of vulnerabilities, they have an SBOM with some vulnerabilities, they put them on a risk register, they determine the likelihood is low, they kind of polish them and put them in a drawer. I’m not sure lists of vulnerabilities, and generating more lists of vulnerabilities, is particularly useful.
Graeme Neilson [00:28:13]:
I don’t think we have a problem with finding vulnerabilities or exploiting vulnerabilities. We need to get to the causes rather than, you know, looking at the symptoms all the time.
Karissa Breen [00:28:23]:
So what do you think needs to happen, like, long term? So ultimately, you know, we discussed a little bit of history, what’s happening, some of the issues, but how do we sort of get there? And I mean, there’s no, like, perfect place. I understand that, I get that. Right. Like, even if you look at cars at the minute, you’ve got people who are so pro, like, no, I’m never going to, you know, drive an electric vehicle, an EV, and then automated cars and all these sorts of things, and you’ve got your hardcore, no, I drive this style of car with diesel, for example. So what do you think needs to happen in the tech space, the security space? Do we just keep going as we’re going, or do you think there’ll be an inflection point? And what I mean by that is, to some degree, when OpenAI really launched ChatGPT hard in 2022, that was a bit of an inflection point for the industry. Right.
Karissa Breen [00:29:09]:
So people sort of change their mindset. Do you think something like that will come along, which will change our overall thinking, or what do you think needs to happen here?
Graeme Neilson [00:29:17]:
I feel now that it’s going to be a little more like maybe the aviation industry was. You know, a lot of people died in the early days, and when I say early days, I mean, you know, the 60s, 70s, for actual aviation safety to improve. So I’m thinking that maybe computers and security is something similar. As we pointed out, computers are driving cars, or health devices potentially, or with AI-generated code, that maybe a lot of people get hurt, and that’s the inflection point. I’m kind of slightly pessimistic about humanity’s ability to actually change things for the good of all. We’re all quite self-centered at the moment. So if it’s not happening to you, as we’ve said, people aren’t too concerned.
Karissa Breen [00:30:00]:
So what do you think people are concerned about now? When I ask you that question, what I mean is this, and I mean this is just a general sort of view: at the end of the day, people are concerned for themselves, and then that extends to, well, I’ve got to keep my job, I’ve got to hit my KPIs, got to make sure we’re hitting our targets, you know, we’ve got to make sure we don’t get breached. Because if we get breached, I’m not going to hit my KPI, and there goes my holiday, you know what I mean? Like, there are all these things. Right, but what would you sort of attribute the main concerns to? Because if people don’t have to do something new or whatever, the majority of people just don’t want to. Right, and I get that, and there’s risk. But what do you think people are worried about? Do you think we’ll get past that point as an industry as well? Or, as you said, only once something catastrophic occurs, like lots of people start dying, which could happen.
Karissa Breen [00:30:51]:
Where do you think that sort of sits now in terms of people’s mindset? Tell me what you think again.
Graeme Neilson [00:30:56]:
Good question. If you know someone, for example, you’ve met someone who, due to some breach, has been scammed, actually lost money or been affected in some way, they are very concerned. Whereas other people, I think, are fairly relaxed; yeah, I guess they feel there’s nothing to worry about. But it’s a little like risk assessment. You know, everything’s unlikely until it happens. And then when it happens you’re like, oh shit, that’s really bad, you know, and you knew it was going to be really bad if it happened.
Graeme Neilson [00:31:21]:
But until it happens, it’s not really sort of viscerally real to you. You don’t consider it in the same way. So I’m not sure. I think users, as you pointed out, are more frustrated with security, unless they’ve actually been conned or scammed or, you know, had some impact as a result of a breach, or if your email is compromised, for example. Again, it’s a very emotional sort of effect; you know, like somebody burgling you is quite emotionally hard, someone’s been raking through your information or your physical stuff.
Graeme Neilson [00:31:52]:
There’s a very different concern and assessment after that event as opposed to before. Yeah, I think most people find security a pain in the ass and wish it would go away. They don’t want to log in, they don’t want to have to do captchas. You know, it just becomes. It stops them doing their job. It’s a hurdle. Well, I say that unless they’ve had an event, in which case they typically have a much different view of the world and security.
Karissa Breen [00:32:16]:
Just going back to passwords for a second. I mean, again, I can’t deal with passwords as a consumer. So I was always curious. I don’t work in big corporations now, but in the past, like, you’d see so many people down at the help desk like, oh, I can’t log in, I’m going to reset my password. And I was curious to understand.
Karissa Breen [00:32:32]:
I wonder how many hours of productivity and resources have been spent on IT help desks helping people reset their passwords, or someone can’t log in to do something because of a password. In terms of hours, resources, money, of just trying to get into a system to do work.
Graeme Neilson [00:32:49]:
I think if you could solve the password problem, you’d probably be very wealthy, Karissa. It’s a fundamental problem. And as I say, I think passkeys were good. I think they’ve just not been implemented well. They’ve been implemented to, I guess, be a differentiator for the business, and they want to lock people into the ecosystem. So I think if we had a kind of open. Well, we have an open standard, but if we had a sort of open implementation of passkeys, I think that would help.
Graeme Neilson [00:33:15]:
Because you want the machines to do that authentication for you. As we all know, humans are terrible at remembering complicated passwords and why should they?
Karissa Breen [00:33:26]:
Right. Like you had been saying.
Graeme Neilson [00:33:27]:
Yeah, it’s crazy. And all you’re doing nowadays with password managers is basically making the machine generate it and then copying and pasting it. So passkeys should be what we’re doing. We just need an implementation of them, or a way that forces vendors to make it easy to transfer them, I guess, from service to service. Although again, I guess that provides attack surface as well. So there’s always some conflict between business and security as well, I guess, in that respect.
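The "machine generates it, you copy and paste it" point above can be sketched in a few lines. This is a minimal illustration of the generation step only (no vault, no encryption at rest, which real password managers also handle); the function name and length are chosen for the example, not taken from any product.

```python
import secrets
import string

def generate_password(length=24):
    """Generate a random password a human never has to remember.

    Uses the cryptographically secure `secrets` module rather than
    `random`, which is not suitable for security purposes.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(len(pw))  # 24 characters, copied and pasted rather than memorised
```

The whole human role here is clipboard transport, which is why Graeme argues passkeys, where the machine also does the authentication exchange, are the more honest design.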
Karissa Breen [00:33:57]:
So then to unpack this a little bit more. So there are people out there I’ve been speaking to, and their theory, and I mean, it depends on who you talk to. I mean, I’m speaking to certain people at all different levels, and what they’re talking about is always interesting and important. Now people are saying the fundamental security issue is at the identity level. Would you agree with that?
Graeme Neilson [00:34:16]:
Definitely, definitely. I think trying to differentiate between humans and bots, so identifying who you are, I think is absolutely fundamental. Yeah. And this is where there’s a lot of tension for me anyway, where you don’t necessarily want a single identity, and you don’t necessarily want a national identity that allows surveillance. So you need some kind of. Multiple identities are nice.
Graeme Neilson [00:34:47]:
I mean, even with different emails and passwords, you know, if one is breached, it’s not devastating for you. So, like, people already have multiple kinds of identities. And as you even stated yourself, you know, you have some services that you don’t really care about. You know, your identity on a recipe site is different from your identity at the bank, and that’s a good thing. So there’s a problem with folding all those into one identity, which I think we should avoid, because when your one identity is compromised, then you’re fucked, to put it mildly.
Karissa Breen [00:35:18]:
True, true.
Graeme Neilson [00:35:19]:
I think multiple identities that also preserve privacy, that’s where I feel we should be going. That’s where the problems are. That’s what you want to solve. That’s what I would like to solve.
Karissa Breen [00:35:30]:
So where would you say as an industry we are in this identity journey? Right.
Graeme Neilson [00:35:34]:
I don’t think we’re anywhere. I think we’re just all over the place. It’s email, which, you know, as we pointed out, isn’t the best, most secure system. Email is what’s used as identity at the moment, and it’s okay, but when it’s breached, that’s it. Everyone gets their password resets sent to their email. Now there are a number of systems I’ve noticed that, rather than having any kind of password, when you log in, you simply get a token emailed to you and you then use that, you know. But all that’s really doing is pushing the identity problem to your email, which isn’t necessarily that secure.
Graeme Neilson [00:36:08]:
I mean, it all depends on how strong your passwords are.
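The email "magic link" flow Graeme describes can be sketched to show his point: the token machinery itself can be done carefully, yet the whole scheme still reduces to "whoever controls the inbox is the user." This is a hedged illustration, not any particular service's implementation; the names, TTL, and in-memory store are all invented for the example.

```python
import hashlib
import secrets
import time

TOKEN_TTL = 15 * 60  # seconds a link stays valid (illustrative value)
_pending = {}        # sha256(token) -> expiry timestamp

def issue_token():
    """Mint a one-time login token; in a real system this is emailed out."""
    token = secrets.token_urlsafe(32)
    digest = hashlib.sha256(token.encode()).hexdigest()
    _pending[digest] = time.time() + TOKEN_TTL  # store only the hash at rest
    return token

def redeem_token(token):
    """Accept a token once, before expiry; anything else is rejected."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    expiry = _pending.pop(digest, None)  # single use: removed on first attempt
    return expiry is not None and time.time() < expiry

t = issue_token()
print(redeem_token(t))  # True: valid, unused, unexpired
print(redeem_token(t))  # False: already consumed
```

Note that nothing here authenticates the person; it authenticates possession of the email account. The token is random, hashed at rest, short-lived and single-use, and the identity problem has still just been moved one hop upstream.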
Karissa Breen [00:36:12]:
But then the other thing, just to extend that more: then there are machine identities. Right. So now that’s another problem. So we’ve got, like, the physical human identity, and then we’ve got machine identity, and there are thousands of them, to the point where people don’t even know what machine identities are in their environment. Right. So then, like, we’ve got another problem.
Graeme Neilson [00:36:30]:
In cloud service providers, there’s a lot of confusion when they write software around confusing, say, the phone and the user. And what I mean by that is they assume, for example, if you want to have an Uber account, you know, they link it to your phone. You can only use the app on the phone; whatever the phone is, is the identity.
Karissa Breen [00:36:52]:
Yeah, that’s annoying as well though.
Graeme Neilson [00:36:54]:
Yeah, the phone is the identity, and they don’t actually understand that you can create lots of other identities that are not actually phones, but use the API in a similar way and look like phones, look like people. So there’s confusion there around that. And a lot of people, I guess, their whole life is in their phone to some extent. I guess they are their phone, but it’s not necessarily unique or indivisible.
Karissa Breen [00:37:14]:
So what do you think? Like, if you were to hypothesize about identity, like, do you think, when we’re moving forward, and this can sound so bad, but it’s all I can think of right now: when we’re born, is it like we’re gonna get a token? Like, here you go, Graeme, here’s your blue token, and this is your identity, just with how the world works. Right? Because, like, even with phones, it still gets me. Right? Like, people can easily just port your number. Like SIM porting, like, how easy is that?
Graeme Neilson [00:37:38]:
Yeah, that’s it.
Karissa Breen [00:37:39]:
And no one’s thinking about fixing it?
Graeme Neilson [00:37:41]:
No, again, because phone companies have to allow you to port your SIM across providers, is what they would claim. And again, that’s the reason I think it’s a problem. I don’t think the solution to that is technical. I don’t even think the problem is technical. The problem is, help desks are there to help. So why would you be surprised that when someone phones up and asks for help, they give you help? And again, blaming the help desk or, you know, the people who are doing the SIM port.
Graeme Neilson [00:38:11]:
Again, it seems to me a bit disingenuous. It’s, you know, that is their job. It’s what they do daily. To ask them to sort of spot someone trying to port a SIM illegally. There just needs to be a different process for porting SIMs.