The Voice of Cyber®

Episode 143: Arun Vishwanath
First Aired: November 16, 2022

Dr. Arun Vishwanath studies the “people problem” of cybersecurity.

His research focuses on improving individual, organizational, and national resilience to cyber attacks by focusing on the weakest links in cyber security—Internet users.

His particular interest is in understanding why people fall prey to social engineering attacks that come in through email and social media, and on ways we can harness this understanding to secure cyberspace.

Dr. Vishwanath is an alumnus of the Berkman Klein Center at Harvard University.  He was a tenured associate professor at the University at Buffalo and was faculty at Indiana University, Bloomington.  He serves as the CTO of Avant Research Group (ARG)—a Buffalo, New York based cyber security research and advisory firm, where he consults for major corporations and government agencies on issues ranging from cybersecurity to consumer protection. He also serves as a distinguished expert for the NSA’s Science of Security & Privacy directorate.

Dr. Vishwanath’s research on improving cyber resilience against online social engineering has been funded by the National Science Foundation. He has published close to 50 articles on technology users and cybersecurity issues and his research has been presented to principals at national security and law enforcement agencies around the world. He has also presented his work at leading global security conferences, multiple times by invitation at the US Senate/SSA and House, as well as four consecutive times at BlackHat.

Here’s the link to the book and the discount code. Anyone ordering the book with a US address can get 15% off The Weakest Link by entering the discount code “READMIT15” here:

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Introduction (00:15) You're listening to KBKast, the cybersecurity podcast for all executives cutting through the jargon and hype. Do you understand the landscape where risk and technology meet? Now, here's your host, Karissa Breen. Karissa (00:30) Joining me today is Arun Vishwanath. He's the founder and the chief technologist from Avant Research Group. Now, today we're talking about your newly published book, The Weakest Link. We will be linking in the show notes where you can buy the book if you are based in the United States, and then you get an additional discount. But for people who are not based in the United States, we will also put that link in as well. So, everyone, thanks for joining. Arun Vishwanath (00:54) Hey, Karissa, it's great to be here. Very excited to talk to you. Karissa (00:58) So, for those who are unfamiliar with your book, talk to me a little bit more about what you wrote about. And also, I want to understand from you, what was the genesis behind it? Arun Vishwanath (01:06) All right, so the book basically deals with social engineering, right? And everybody knows, I mean, this is potentially the biggest factor out there when it comes to cyber attacks. So whenever we talk about cyber attacks, invariably when it is targeting users, it's a social engineering attack. And social engineering is a very broad vector. It encompasses messaging, email, telephony, phishing, social media. Pretty much every modality that you can find on the internet is something a social engineer can target. And so what the book basically does is it allows an IT person, an organisation, just any person who is interested in security, to prospectively examine who is at risk from social engineering, how much of a risk they are, and why. It answers three important questions, right? Who is at risk? Which is, who are the weak links in the organisation's security? Who are the strong links? It's a question we don't really have an answer to yet.
The book provides that answer. It tells you how much of a risk there is. Can we quantify the likelihood of these individuals letting a social engineer penetrate the system? And finally, why are they at risk? Arun Vishwanath (02:21) What's the reason? Why this individual? If it's you or I or anyone in the audience, why are they at risk? Is it something they do? Is it some way in which they think? Or is it something that they do technically? Is it the behaviour? Is it the technology? Is it a cognitive process? The book allows you to do all of this. That's why it's called The Weakest Link: how to diagnose, detect and defend against social engineering. So that's the synopsis of the book. Karissa (02:46) So did you just wake up one day saying, I'm just going to write a book and it's going to be called The Weakest Link, or how did you sort of come to the conclusion? Arun Vishwanath (02:53) Pretty much, right. Yeah, I wish that was how it was, but I'm a cognitive behavioural scientist by training. I spent a good 20 years of my career as an academic, a social scientist, a tenured professor, researching technology users, and basically studying how people use technology, why people use technology, and the way in which you could get them to do things using technology that they might not even think of doing, right. About a decade or so ago, when I was in the middle of studying this, we received in my university one of the first, back then, social engineering spear phishing attacks. This was basically an email of the kind that is commonplace now, an email asking all the students at the university to change their password. And when I went back to the IT department and said, hey, you know who sent this? The answer was no. Nobody knew who sent it and nobody really cared why it was sent and nobody did anything about it. We don't know how many people changed their passwords, we don't know how many people fell for that attack.
And given what I was studying, I recognised something. Arun Vishwanath (03:58) Hey, there's a bad guy trying to do exactly what I was trying to do, for different purposes. And so that became an area that I started exploring in my research. And what started then, almost a decade plus ago, led to where we are with the book. And as I went forward with the research, the attacks started. And if you remember, back in about November 2014, we had this massive attack on Sony Pictures in the United States. And when that attack happened, I was already about five or six years into doing these attacks on a limited basis, using different subject pools and studying them. So when I saw what had happened with Sony Pictures, I knew exactly what the vector was. And I at that time reached out to CNN and I said, hey, this is something I've been studying for almost a decade by then. Would you really like to know what these guys are doing and what's coming next? And that started a process of writing in the media that eventually became the book we're talking about today. Karissa (04:55) Well, that's awesome. I think that's a really great sort of journey. And I did see that school attack down in Los Angeles as well recently. Okay, so there's a couple of things that come to my mind, but before we do that, I want to speak to you about, in cyber, we often say human beings are the weakest link. It seems to be a common phraseology that we use in this space. But then it appears in my mind that many aren't really taking a step back and understanding human beings, how they work, what drives them, etc. With your background, especially being a social scientist for two decades, why do you think this is the case? And I guess my second point to that, something that you just spoke about before, was you said no one cared about the attack, and then no one did anything about it. Which sort of leads into my question, right? Arun Vishwanath (05:47) Yeah, absolutely. And it's still the case, right?
I mean, it's changing, but it's a very important thing to think about, which is, why don't we study this? Why don't we have answers to the question of why people fall for social engineering attacks? So what's been going on for the last, like, 20 years? To give you some perspective on it, one of the first social engineering attacks, the spear phishing attacks that are so common now, came out in 1996 on AOL. And back when that attack happened, AOL basically emailed all the users and told them to be more careful. Fast forward to 2022 and we're doing exactly the same thing. They trained their users; we're still training users. And we've been doing Security Awareness Month since 2004, and we're still doing it. And just last month, two months ago, Microsoft was hacked by spear phishers. So something's not right. We've been ignoring something; something's not working. And why is that? And part of the reason for that is, technically, the entire cybersecurity world, the whole world of technology, is dominated by a different paradigm, right? It's a paradigm that is run by people who are engineers. It's an engineering paradigm. Arun Vishwanath (06:55) And the engineering paradigm has given us great technology, great phones, great tech. But the engineering paradigm is also something that looks at users as just a small, little component in the larger scheme of technology. And so the interest is not in studying users, but in adding technology to fight a problem in technology. So what they're trying to do is make more endpoint security. And the focus has been on endpoint security, has been on vendors, has been on developing better technology, hoping that when we have enough technology, the user will somehow become safe. And as the saying goes, you can make a weapon as safe as you want, but even a kitchen knife in the hands of a madman can be very lethal. The technology can only do that much. You need to study people. You need to understand the user.
And we have not done that. We've not spent the time doing it. And like I said, it's been close to 20 plus years. Two decades is a lifetime when it comes to technology, right? It's more than a few lifetimes when it comes to technology. We're still doing the same thing we did, and we're still falling for the same attacks. Arun Vishwanath (08:01) And now, mind you, the attacks that happened on Microsoft, on Nvidia, on Samsung in February of this year, all of these are major tech companies. Or Okta, which is an identity management company at the forefront of cybersecurity. Microsoft, which is known to take cybersecurity very seriously. Every one of them got hacked in February of this year. And the best part is that they were all hacked by a bunch of teenagers using social engineering. Nothing has changed. And I say nothing has changed because teenagers have been hacking and using social engineering from the days of AOL. Karissa (08:37) Yeah, interesting points. And this is where people get divided a little bit, because you are right, it's been two decades and nothing has really changed. I mean, I've been sitting there speaking to people like yourself on the show, in the community, saying that we're just trying to put Band-Aids over top of people with technology. But then when you speak to someone who's on the engineering side, they're like, oh, but this is the way it's going to be. They don't give the respect to the people that understand people. So how do we work together effectively? We've got, let's call it, the non-tech side and the technical side. For argument's sake, they equally try and do the same thing, but they're not really working together. So how do we get better alignment, from your perspective and with your research? Arun Vishwanath (09:26) Let's understand the paradigm here, right?
So what I've been calling for, and there's an article that I've written that's coming out next week that also talks about this, is putting people first. Right? I'm talking about inverting the existing paradigm, right? So what's the existing paradigm? The existing paradigm is to create a product and to go out there and test it on people, right? That's how technology is developed, right? You develop a smartphone and then you go out there and present it to people. It's no different from how you develop a new recipe when you're Starbucks or when you're McDonald's, right? Because you can't imagine products. It makes sense, right? You can't imagine an interface, so you have to see it. So we've used the same paradigm for developing security technology. We have said, here's a security technology, or here's a security awareness programme, or here's a security certification. Now go use it, or let's test it on people. The problem is, security is not something you need to imagine, like a cup of coffee at Starbucks or a new recipe, a new frappuccino. It's something that's layered on top of an existing behaviour. So when we talk of social engineering coming in through messaging, messaging is something people already do. Arun Vishwanath (10:35) Same when it comes in through the phone or through an email. Email is something people already have. It's like you're not trying to teach people how to imagine a car, you're trying to teach them how to wear a seatbelt. So you have to build it around the user. So what we need to do is kind of invert that paradigm when it comes to cybersecurity. We've got to start with the people first, rather than putting people as just another input in the larger cycle of interface, hardware, software. And when you invert that paradigm, what you basically do is you say, okay, why are people vulnerable? What is it that people are doing that doesn't work? Or what is it that they're doing that does work? Diagnose the problem and build security around it.
We did that with seat belts. That's how seat belts were designed. They weren't designed first and then tested on people; they were designed with people in mind from the get-go. And they were designed in Buffalo, New York, where I'm at. And so the idea is, if you invert that paradigm, you basically start incorporating user inputs and the study of people all throughout the security cycle, rather than it just being something that is added at the end. Arun Vishwanath (11:41) And when you do that, you come up with a holistic product that actually works on the first day. This could be a technical solution, this could be an awareness solution, and this could be a combination of both. And that's how you change this paradigm. Karissa (11:53) Okay. So I'm with you. I totally hear what you're saying. How do we start with people first? Like, if someone is listening to this saying, oh, actually, I'm not putting people first, what do we do? Arun Vishwanath (12:03) Well, the first point of this is we have to change the paradigm, right? We have to appreciate that what we're doing right now doesn't work, right? That's your first step. Awareness is the first step. Karissa (12:13) But do you think people have resigned themselves to the fact that it's not working, or do you think that they think it works still, in your experience? Arun Vishwanath (12:21) I think people think it's going to work eventually, right. Which is why we're still doing awareness training 22 years later. Karissa (12:28) So when you say eventually, if you and I speak again in 2050, is eventually still in that cycle? Yeah. Arun Vishwanath (12:36) We will be doing the exact same thing, because we already know it doesn't work. So let's take security awareness training as a concept, right, as a solution, a so-called solution that's really a product. It's a solution everybody rolls out; every corporate out there implements some form of security awareness training.
Every academic study has shown that it doesn't work. And if it does work, it has a very short-term impact, yet everybody does it. In fact, we spend a whole month in October doing just security awareness, and we've been doing the same thing even though we know it doesn't work. So the first step is to say, okay, it doesn't work. Because unless you're honest with yourself, it's like AA, right? You come out and say, hey, you know what? I've got a problem; help me deal with it. Unfortunately, we haven't even gotten to that first step. That's your first step. The first step is saying, hey, this is not working. The next step is to say, okay, why is it not working? What's happening here? Is it the product? Is it the solution? Or is it something in the way in which the user is oriented to the technology? Arun Vishwanath (13:34) That's the problem. That's your diagnostic stage. Once you do the diagnostic stage, then you have the answers. And the diagnostic stage is not a black box. Humans are not a black box. In fact, the answers to why people fall for social engineering, to why people are not secure or not resilient, are already there. In my book, I explain the cognitive psychology and the behavioural psychology behind human vulnerability to social engineering, to basically all forms of attacks that target users. And I also give you the diagnostic approach. And it's not complicated. It's actually less complicated than going to the doctor and getting your blood pressure checked. It's actually a very simple process. And it essentially boils down to asking a few questions after you do a pen test, not any pen test, but a certain form of a pen test. And how you do it is in my book. Once you do that, you can actually estimate, you can project their risk, and you can explain why they are at risk. And then you can build the defences around it.
Arun Vishwanath (14:43) And to the last point, the defences don't always have to be more training. Because right now what's happened is we begin with this idea that training is the solution, and then we end up with training telling us that, hey, we need more training. And then it's a cycle that's been going on for the last two decades. Karissa (15:00) Okay, you make great points. I'm just trying to distil this down in my mind and just map it out. So what you're saying is we don't really need security awareness training? Arun Vishwanath (15:11) No, I'm not saying that. I'm saying that what we're doing as security awareness training is limited and flawed. Karissa (15:19) Right, so you're saying the method, the. Arun Vishwanath (15:21) The method we're doing, yeah, let's talk about that. Right. Why don't we talk about that? So what are the issues with how we approach security awareness? There are five issues with it. Let's talk about that. The first is: what is security awareness? There's no standard for what security awareness is. What is the goal? There's really no standard for when it has been achieved. When are people security aware? Ask any security awareness company, any professional who does security awareness: hey, what would happen when everybody is security aware? Will everybody be protected? Karissa (15:56) So have you asked someone that, though? Like, hey, what happens when your whole company or 50,000 people are security aware? And what do people say? Arun Vishwanath (16:06) Well, that they can't all be secured. There will always be a weak link; there will always be someone who's not security aware. Well, then the question is, what's the objective of security awareness when we don't know what it means? What does security awareness mean? Does it mean you're knowledgeable, or does it mean you just know about it? Now, you and I, we've travelled extensively. I've gone to Indonesia and given presentations, and people know about Nigerian phishing attacks there.
I've gone to parts of South America and given talks on security awareness, and people know about Nigerian attacks there, too. Many of the people in the world might not be able to really tell you what's in Nigeria or where Nigeria is on the map, but people are aware of phishing attacks. So what is the goal of security awareness? We don't really know. It can't be, hey, let's make everybody a computer scientist, because that's too high a bar. And even if everybody becomes a computer scientist, let's not fool ourselves. Those guys in Microsoft who fell for the phishing attack, those guys in Nvidia, the guys in Samsung, many of them were also highly trained. Arun Vishwanath (17:15) Maybe some of them were even computer scientists who fell for it. So the first issue we have is there's no standard. There really is no standard for what it is, when it is achieved, or what it's meant to achieve. We don't even know what it's meant to do, right? The second is: security awareness is not a solution, it's a product. It's a product that comes in a package from firms; it's vendor driven; it's the marketplace at work. No harm in that. But the problem with products is that products need to be sold, and products lead to the sale of more products. So inherent to the product cycle is the idea that, hey, if customers don't need your product anymore, they're not going to be using it. So they keep coming back and giving you more and more training. Which is how we came to what is called pen testing, which is: we send a mock phishing test, and we keep doing it. And every time somebody fails, you keep doing more of it. Right? Which brings us to the next point. Invariably, somebody always fails. I've done these phishing tests in companies in various parts of the world, in Asia, Europe and North America. Arun Vishwanath (18:21) Somebody always fails. For these tests there is a ceiling effect. What do I mean by a ceiling effect?
There is a number below which you cannot get the number of people who fall for a pen test. This happens all over the world. I know it happens in Australia too. So you have a third problem. The fourth problem you have is there is no standard for a test. We don't know how to make a pen test. Now, a lot of people say they do. And I've talked to companies that make pen tests, and many of them go and say, hey, you know what, we saw this attack and we came and created a test like it. Well, that's not preparing people. You're basically just following the hacker. That doesn't prepare people for what is coming; that prepares people for what has happened. Right? And then the last problem is, when you do a pen test and you get some data, no one can tell you why someone fell for that phishing test. In the pen test, is it because of something people did by mistake? Did they click on a link by mistake? Arun Vishwanath (19:18) Did they not think about something? Did they do it inadvertently? Did they do it deliberately? There's no way to tell. So if there's no way to tell in a pen test why someone falls, how can you explain to me why someone falls for a real attack? If you can't even tell me in a simulated environment why an Arun or a Karissa or a John or a Jack fell for a phishing test, you can't tell me why. Is it a thought process? Is it a mistake? Is it deliberate? What was the thought behind it? Then how can you tell me, when someone falls, whether or not someone's going to fall for a real attack that's targeting them? So you have these five problems, right? These are major issues. We've got vendors and vendors and vendors out there doing this, but none of them can answer this. Now, we answer all of this in the book. The book addresses all of this, right? That's one of the things that the book provides, right? What it does is, the book is not a product. It's not trying to sell you a licence.
Arun Vishwanath (20:25) It shows you how to create a phishing test, prospectively create it. Not just follow a hacker and copy their attack, but actually create a test. It shows you how to establish the test baseline so you understand how to quantify that test. It shows you how to measure risk, so it shows you how to measure Karissa's propensity to fall for that particular test. And then it explains why you fell. It explains the mind behind the machine, right? And so we try to address all of this using the science of cognition and behaviour as the undercurrent of it. That's what this book does. Right? So these are the problems and these are the solutions. That's what The Weakest Link does, the book. Karissa (21:10) So in the book you talk about how there just isn't a lot of human science in the study of human factors, but there is lots of data around software bugs, for example, as you've spoken about. So tell me a little bit more about this. Does this go back to your engineering paradigm? Like, we're so focused on looking at the bugs because we've got engineers. Arun Vishwanath (21:30) That's right, it goes back to the engineering paradigm. And I'm a big history buff. I read a lot of history, and the best example of a paradigm comes from tea. So think about how tea is consumed. People in the UK drink a lot of tea. People in India drink a lot of tea. And the tea bag was invented around the 1960s. Microwaves were invented in the seventies, right? But tea bags could not be put in microwaves, even though people were using microwaves to warm up the water, up until 2010 or 2020. You know why? Because all the tea bags had staples in them. It was only recently, in the last ten years, that tea bags came without staples. This is a lock-in to a paradigm of doing something that you've been doing for 30, 40, 50 years. And this is precisely what's happening with this human factors approach in computing, right?
The idea of human factors in computing: human factors is a term that people in security use a lot, but it's actually a term that comes from the days of machine operatives. It's an industrial engineering term. It comes from the days when people used to work on factory floors and assembly lines. Arun Vishwanath (22:41) It didn't matter who the person was; we didn't study people, we studied productivity. And we trained people back then to basically increase their output by doing what were called, back then in the 1940s and 50s, time and motion studies. We used to study how quickly they could move, how much time they took to keep things moving down the assembly line in factories like the Ford Motor stamping plants. And that's the approach to human factors that's been applied and taken into the world of security, where we really don't care about people; we treat people as operators, right? We call them users. We really don't even call them people. We call them users, and then we don't study anything about them other than how they are clicking, how quickly they're clicking. It's a time and motion kind of study; that's all it is. And then we presume that that explains their motivation, their behaviour, their thoughts, their feelings. So you have so many different names for all these different kinds of computer bugs: the Mandelbug, the control flow bug, the Schrödinbug. But when it comes to people, there's just human factors. There's no personality, there's no affect, there's no human cognition, there's no individuality, there's no motivation; none of that matters. Arun Vishwanath (23:54) And then you wonder why users, why people, continue to be the weakest link: because we have really not studied them. We have really not incorporated anything about them into the construction of the technology or the security technologies that are out there. Karissa (24:10) So how do we get out of this rut? We spoke about the engineering paradigm.
How do we move forward from this? How do we get out of that? Because, like you said, you and I are going to speak in 2050 and you're going to say the same thing has been going on the last 30 years, Karissa, right? Arun Vishwanath (24:24) And like I said, we have to flip that paradigm, right? We have to recognise that security technologies are not the same as computing technology. They're not the same, right? So I use the analogy of the seat belt. The seat belt is a layer to an existing process; it's not the process. Seat belts are not why you drive cars. Seat belts are what you do when you drive cars. Security technologies are very similar. Security is something you layer on top of an existing process. That process and the user have to be incorporated into the design of a security technology. And that's where we have to begin to incorporate the study of people into pretty much every part of this. We have to begin by diagnosing, which means understanding what makes them move, what makes them do things. Is it thoughts? Is it behaviours, actions, feelings? What is it? Right? And the science of that already exists. It's not new. We've been studying people in cognitive science and behavioural science for the last 100 plus years, right? We've been studying human psychology for 100 plus years. All we need to do is start incorporating it. And that's the important thing. Arun Vishwanath (25:35) Let's start incorporating. We have to flip that paradigm around. And the first part of flipping that paradigm is to accept that the current paradigm is flawed, insufficient, and not working. Otherwise we can keep on doing this and going around in circles. Karissa (25:48) Do you think people know it's flawed, though? But again, it's like anything. Like, for example, people that say, I should lose weight: they probably know they're overweight, but they can't be bothered getting to the gym, getting a trainer and all that.
It's just easier to sort of be complacent and just keep doing the same thing. Do you think there's a bit of that in there? Arun Vishwanath (26:05) Well, it's because the way paradigms work is that they're a lock-in. How does that lock-in happen? You have people teaching people; it becomes a discipline. So engineers train other engineers, who train other engineers, who join the workforce and take that training with them. And so it kind of feeds itself, and that's how a lock-in exists for hundreds of years, right? It's a way of thinking. It socialises people into thinking about problems in a certain way. Which is the tea bag example, right? What is it that happened for 40 years, where people were using microwaves and people were drinking tea? Almost 95%, 96% of the UK was drinking tea using tea bags, and about 80% of them had microwaves. How come nobody put the two together and said, hey, wait a minute, why do we have these staples in every tea bag? Because of the lock-in. You were thinking about things a certain way, making machines a certain way, people were buying it and just doing the same thing over and over again. And that's how a lock-in happens, right? Arun Vishwanath (27:13) It can go on for hundreds of years until someone gets inside and disrupts it. And so this book, and what you're doing and I'm doing, is this process of disruption. And to your question, do people know it's flawed? Of course they don't. Which is why people like you and I are important, right? We live in the age of disruption, and technology is the disrupting tool. But the problem is, when technology is the disrupting tool, the people who create technology don't like to be disrupted themselves. Which is the irony of it, isn't it? It's like the biggest paradox out there: the people who develop technology, who are the disruptors, don't like change.
I mean, we see this over and over again, even with people who create some of the greatest technology that has ever been created. You look at the development of Apple, and there's a paradigm within which they are also locked. It's working for them, but is it going to work forever? Time will tell, right? Because we've all bought into that paradigm. But how long does that last? And will we even realise it, and will they be willing to change at all? Arun Vishwanath (28:19) We've seen this with great companies like Sony, for instance, when they were in their heyday. Remember the Walkman? Where's that now? Right? So you realise these products also get into a lock-in. And today, cybersecurity is in a lock-in, where the talk about culture, the talk about users, has been left peripheral. Now, one caveat there, or one thing that I have to say, is it's gotten better in the last five, six years. And part of the reason it's gotten better, what I mean by better is, people are more open to talking about what you and I are talking about. People are more open to listening to the need to change security culture in organisations. That wasn't the case in 2014 when I was writing about it. In 2014, when I wrote about social engineering and spear phishing and Sony Pictures, nobody cared much. In 2013, in 2010, when I was studying social engineering, people in my university were laughing at me, calling me the phishing nut. The joke at the time was, what is phishing? Because everybody was studying social media. Facebook was the thing that people were studying back then. Arun Vishwanath (29:31) And I remember this conversation at a conference where somebody asked me why I was wasting my time doing this when a spam blocker would come out one day and make all my work irrelevant. So has that changed? It has, right? We're talking about social engineering today more than we ever did. We're recognising the problem.
And I think the bad guys out there are to thank for that, because they've been relentless. Karissa (29:58) Okay, here's another question for you. Do you think, as well, that some of this is maybe because it's about financial gain? What I mean by that is, like that old saying, don't fix what's not broken. If someone acknowledges that we don't need this, or that this is broken, there are people who are winning from it, right? People are selling things off the back of it, whether it's training, whether it's a product or a solution. So, for example, say I'm selling diet pills and I know they don't work. If people are not questioning me on it, I'm not going to go out there and say, hey, they don't work, or else I'm going to lose my revenue. So do you think there's a little bit of maybe selfish financial gain in this whole paradigm? Arun Vishwanath (30:40) Yes, there is a lot of it, actually. There is a lot of it. In fact, the entire world of cybersecurity is run by vendors. You don't see academic institutions touting solutions, right? So, for instance, when you go to any security conference, like Black Hat, it's a vendor conference. I mean, the vendor room is overflowing with vendors. And the reason is that we have left security to the free market. And there's nothing wrong with that, right? Capitalism is great. I love capitalism, right? But there are some things that capitalism doesn't do well. Astrology is a good example, psychics are a great example. I mean, you don't want to leave that to the free market. Bad science is another one of these areas where I don't think the free market does well. And that's what this is, right? These are products, like I said, and once the products are created, the people who created them want to make sure that the product is the only thing sold, as the next big silver bullet. And we have yet to find one.
And so, yeah, there is a huge financial interest behind keeping security awareness going, and the use of security awareness invariably leads to you needing more security awareness. Arun Vishwanath (31:59) Every one of their pen tests basically tells you how much more pen testing you need to do. So it is a product with a licence, a perpetual licence. And like I said, ask them when that licence would stop, when would I not need that security awareness package? And there's no answer to that. And so this is why, here, you have a book which gives you the answer. If you don't have a security awareness programme, the book teaches you how to create one from scratch without needing to use a vendor. It teaches you how to create a pen test, how to run that pen test, and how to actually test the awareness levels of your users and diagnose who's at risk and why. And if you do have a security awareness programme right now, the book teaches you how to improve on what you already do. So you really can use it both ways. What I'm trying to say with the book is, hey, we have vendors out there, which is great, but the product is inadequate as it stands right now, and we can improve upon it. There's a better way of doing it. Arun Vishwanath (32:59) In fact, a right way of doing it. We don't need to throw the baby out with the bathwater. We can actually make this work better. Karissa (33:09) So one of the things I want to understand from you is, for the people listening who are inspired by what you've said, what are some elementary ways that people can start understanding human beings? Like, as of tomorrow? Arun Vishwanath (33:21) Look, if you want to understand people: like I said, I've been an academic for 20 years, studying people for 20 years. I distil a lot of that knowledge in the book. In fact, I dedicated an entire chapter, without getting too much into the science of it, to explaining how people use technology, right?
You need that knowledge. So if you really want to understand your users, you have to begin with the right knowledge, because you need a framework to understand them, right? So, for instance, remember those things that come out on the internet where there's a hidden image? There's an animal, and they're like, oh, spot the hidden deer in the image, or spot the hidden animal in a particular graphic. If you don't know what you're looking for, it's very hard to know what to look for. And once I show you the animal, your mind is going to hone in on it every time you see that particular image. So what you need is a framework that can orient you to understanding the user. Without the framework, you're basically just shooting in the dark. You're just looking at a screen and a bunch of images. Arun Vishwanath (34:33) So the first step to understanding the user is to get the science behind cognition and behaviour. Now, I have a whole chapter dedicated to it. You could go out there and take a course on human psychology, human behaviour, but a very simple way of doing it is to read the chapter in the book. Once you understand what makes people do things online, whether it's risky behaviours or safe behaviours, then the question is, how do you measure them? And again, in The Weakest Link, I discuss the measurement approach: how do you go out there and measure them? There's a certain science to it, a certain process to it. It's actually way easier once you understand it. It's like baking a cake, right? It only has five or six ingredients, but if you know when to put those ingredients in, you'll get a cake. If you just start mixing them up and doing everything any which way, you'll end up with something that's inedible. In the same way, you need a framework, you need to understand the science behind it, and you need to follow the steps that are laid out for doing this.
Arun Vishwanath (35:37) So if you want to do this in your company, it doesn't cost you a lot of money to do it, and it doesn't cost a lot in time. But you need the right framework to understand this. You have to understand the right framework. And this is why I say: buy the book, read the chapter, and we can talk about it. But basically, you have to be able to measure human cognition and human behaviour. You have to understand how people think, how they process information, what they believe in, and the kinds of behaviours and habits they have. And once you can measure those four or five things, then you can diagnose risk. Then you can understand users. You can understand them at a level that nothing we're doing right now can help you reach. Karissa (36:16) Wow. Thanks for sharing that. I think that's helpful for our listeners who want to get started on this. But I've got an interesting question for you, Arun, because you've studied people for 20 years, and people fascinate me as much as they rattle me. So I'm curious to know: what do you like most about people, and what do you dislike most about people, from your research and two decades of understanding human beings? Arun Vishwanath (36:44) That's a great question. I don't think anybody has ever asked me that. Karissa (36:47) Well, I'm glad. Arun Vishwanath (36:48) I don't think anybody's ever asked that. It is a great question. The thing that fascinates me about people is that no matter who we are or where we are, there's more similarity and overlap in the way we are programmed, whether it's the way we think, the way we are motivated, or the way we act, than we realise. In fact, I would go so far as to say that almost 80%, and I would even go higher than that, 90% or even 95%, of most of our thoughts and behaviours. And I'm making a big distinction here between cognition and behaviour.
And within cognition, how we process information, the beliefs we have, and how we act are actually very, very similar. What varies is the context in which we use them. So that similarity is really fascinating, because it's very predictable. And when it comes to technology, that 80% similarity and overlap goes up as high as 95%, even 98%. And I'll tell you why, which is why I study people as computing technology users. The reason it's so high is that platforms all over the world are universally similar. So even though human societies are very different, right, I mean, Australians are different from, say, Indians, who are different from Americans, and even though we all speak English, there are geographical differences. Arun Vishwanath (38:16) But when it comes to technology, we're all using very similar browsers and very similar operating systems, so there's more similarity in the contextual conditions. Therefore the predictability of our thoughts and actions is actually much higher when it comes to technology. And that's what fascinates me. So much is so predictable, and people don't realise how predictable they are: how predictable their motivations are, for instance, how predictable their thought processes are. Which is what makes a risk assessment actually easier when it comes to technology, even though any engineer would say it's the opposite, that humans are probabilistic, stochastic and unpredictable. It's actually the opposite of that. Humans are unpredictable, I agree, but humans are unpredictable as human beings in everyday contextual life, not when it comes to technology, because you're using an iPhone, presumably, that is very similar to the iPhone I'm using. So it has already dictated, or put in place, the boundary conditions on what you can do. And so you layer human similarity of thought processes on top of a platform that's already universally similar, and you get very high similarity, very high predictability.
On the flip side, that is also what makes social engineering so scalable and powerful. Arun Vishwanath (39:31) This is why you can have a social engineer sitting in India attacking you, or sitting in Russia attacking all of us, anywhere in the world, because they are able to take advantage of the exact same processes and flip them around. You're using the same technology, so they use the same vector. We use the same language, so they use the same language. And because you're similarly predictable, regardless of whether you're in Australia or in the United States, they're able to use the same hook to get people. That's what makes it so fascinating. Karissa (40:03) Okay, so here's another question for you. I probably know the answer to this. For so many people, the phone is the last thing they look at, or touch, before they go to sleep, and the first thing they look at when they get up in the morning. I guarantee that's you too, Arun, and anyone listening. And if it's not you, I want to hear about it, because that would just blow me away. Arun Vishwanath (40:29) No, I think that's all of us. Karissa (40:31) Including you. Yes, I kind of got the vibe that you'd be like, no, I wouldn't look at my phone, because you know so much about human beings. But you're absolutely right. Name a person who says otherwise and I think they're lying to you. Someone once said they don't look at their phone for an hour after they get up. I don't think so. Arun Vishwanath (40:48) I find it hard to believe too, because the phones are programmed to make you do it. There's a reason your alarm is on your phone today. Karissa (40:56) So I think it plays into the habits of people. I mean, there's 8 billion people apparently on the Earth now. Arun Vishwanath (41:02) Yeah.
And they're trying to make you sleep with your watch on now. Karissa (41:05) I've got a watch on as well. Arun Vishwanath (41:08) So you can, quote unquote, track your sleep, right? Humans have been sleeping since the time they were apes. But, oh no, we now need to track our sleep, right? So suddenly there's a new use case for wearing a watch, even to bed. Karissa (41:22) Well, it's a selling point. There you go. People are buying watches and doing these things right now because they think they need to track it. Arun Vishwanath (41:30) And I did that once. Okay, I'm guilty of doing that once, recently. I just did it to see: what's my sleep pattern? Yes, I just wanted to see my sleep pattern. And guess what I learned. I mean, there was an interesting insight I got. I wake up at an ungodly time, 4:30 in the morning, because it's a great time to write. So I wake up very early, and at 4:30 all my notifications started coming into my watch. So even though I was up before my alarm, now I had every notification that had come in during the night. I was like, wow. And suddenly you're awake, and suddenly you're back in that world of technology. That was the first and only time I wore my watch to bed, and I was like, okay, I'm not tracking my sleep. Karissa (42:19) There's a habit. Arun Vishwanath (42:20) But I do check my phone. Karissa (42:22) Everyone does. They're lying if they say they don't. People will check their phone before they look at their husband or wife. That's fine, that's fine. Arun Vishwanath (42:29) But I'll give you another one about habits that most people don't know. Okay, I'll give you another one. All right. Now, I wrote about habits. I hate to say this, but I was one of the first people to write about technology habits, cellular phone habits, mobile habits and social engineering.
My research was among the first to write about it, to define it, and to make it an imperative to research. I'll give you one thing about habits that most people don't realise. If you want to see a habit, everybody says, I get up in the morning and look at my phone. That's a great example, but that's private behaviour. I'll give you a public behaviour of habit. Next time you're anywhere, in an airport, or even with your spouse, or if you're in the car, pick up your smartphone and look at it. Then put it down and look around you. And you'll notice that people will immediately pick up their phones and look at them. Karissa (43:15) Why? Because they can't handle you just sitting there looking at them. Arun Vishwanath (43:18) No, it's something almost like yawning. It's almost contagious. Karissa (43:23) It's like infectious. Arun Vishwanath (43:24) Yes, well, it's contagious. What happens is, you do this in the airport, you go to Starbucks, or a coffee shop, what have you. Look at your phone, put it down, and look next to you at that person. Whoever it is, male or female, will look at their phone and put it down. There's a social signal that everybody sends, which is: I have a phone too, or I have something to look at. Karissa (43:46) So you're saying if you pulled out your phone and I saw you, I would then do it. But I think it's about awkwardness, because no one wants to look at anyone, so they just go, I'm going to go on my phone. Yeah, yeah. Arun Vishwanath (43:57) It's either that. Well, then why does your spouse do it? Or why does somebody sitting in the car next to you? Karissa (44:01) I think it's a habit. No joke, I watched something yesterday. Arun Vishwanath (44:05) There you go. That's why I said it's a habit. Karissa (44:08) I don't think you're thinking about it. Arun Vishwanath (44:10) You're not, which is the definition of a habit. It's a non-conscious behaviour that's basically done because the technology exists.
Karissa (44:19) So here's the other thing that gets me. I saw something on television yesterday. I don't know if you have any research or a stance on this. Apparently, over an average human lifespan, we will now spend, in our digital world, 17 years on our phone. 17? Arun Vishwanath (44:36) Wow. Karissa (44:36) 17. Arun Vishwanath (44:38) I believe you. Karissa (44:39) But then you spend like a quarter of your life sleeping. Arun Vishwanath (44:43) Yeah, but if you put screen time instead of phone time, I mean, that would be pretty much like 80% of your life, 35 of whatever years you have. It's like twice as much. I believe you. I believe that's fair to say. Karissa (44:55) Well, I thought about this the other day. If you're not looking at your phone, you're looking at your laptop. If you're not looking at your laptop, you're looking at your iPad. If you're not looking at your iPad, you're looking at your television. If you're not looking at that, you're looking at your watch. You can't get away from it. And sometimes I think my eyes hurt, and then I'll watch television. Arun Vishwanath (45:10) Coming back to platforms, right? These are all platforms. There's a universality to them, which is kind of really bizarre, isn't it? Whether you're in Sydney, or Dubai, or Singapore, or Adelaide, or LA, or New York, you're doing essentially the same thing. Everybody is on some kind of YouTube, on a browser. Everybody gets that smartphone experience. They all have TikTok of some sort, or Instagram or Facebook or LinkedIn. There's a similarity to what you're doing. Now, I don't mean you, but a lot of people do this, right? Today you go somewhere, and your proof of going is the picture that you took, that selfie that you took in the place. It's like a proof, right? I live about 30 miles away from Niagara Falls, and whenever I drive by it, I see all these people taking selfies by it.
I was like, wow, here is one of the wonders of the world. You take a selfie, you miss most of it, but you've captured it on the camera. Nothing wrong with that. All I'm saying is, wow. We spend all our time not just on screens, but looking at the world through a viewfinder. Arun Vishwanath (46:28) And that's actually something that fascinates me quite a bit: how our mark of achievement, our mark of doing something, is basically social media, when we put it out there. People I know go far enough to photoshop themselves into places. Remember on YouTube, all these kids who went nowhere, but they put out these videos saying they were here and they were there, when really they were nowhere? They were just green-screening the whole thing. Well, if you have to do that, what does it tell you about the human condition? We've gone way past needing technology, to being technology, to basically just experiencing the world through technology. Karissa (47:12) Well, I don't know anyone who does that. Well, I mean, I wouldn't know, maybe the Photoshop is so good. Like, even the FaceTune and all these, you know what I mean? That's why I always say social media is pretty fake as well. Everyone's going to give the highlight reel, they're going to make themselves look the best in the photo, they're never going to put a bad photo up. But what, in your research, in your time, rattles you the most about people? Arun Vishwanath (47:36) What rattles me the most is the amount of non-conscious behaviour that people engage in, right? See, the human mind is incredibly capable of doing some very complex things without thinking. And that's what worries me the most too, right? The more technology we have: you look at the new electric cars that are out there, all of them have huge monitors, huge screens, and people are driving as they're doing all these other things.
There are so many distractions, so many non-conscious things that we do. When I give a talk, I always ask people: do you remember how many stop lights or stop signs you stopped at before you came here? And most people cannot. They have to think about it. And even when they think about it, they don't remember, because you do so many things almost non-consciously. And it's kind of fascinating, because when you drive, or I drive, or you walk on the street, you are essentially leaving yourself susceptible to every other person's non-conscious behaviour going right, aren't you? And somehow it all works, right? 8 billion people, and somehow we're not all ploughing into each other in our cars. Arun Vishwanath (49:01) But I know people who are essentially driving and texting and messaging. I've done it too, and I keep reminding myself that I shouldn't be doing it. And sometimes I slip into it, and I'm like, oh my gosh, I can't believe I just did that. And this happens, right? And that's what boggles my mind: that we actually depend on everybody else getting it right, including ourselves. Karissa (49:27) Yeah, that's interesting, because I have noticed that sometimes when I'm driving, I'm like, oh my gosh, I didn't even notice. I was just so on autopilot, because I go the same way. I mean, that's what's different when you go to another country: you're a bit more aware of things because you haven't been there, and you're opening your mind up a lot more. But if you're just going around the same places you always go, you don't really think about it. You're not noticing other people or what they're doing, right? Arun Vishwanath (49:52) When you go to a different country, you're in beginner's mind, right? You're back in beginner's mind. But beyond that moment, the human brain is so capable of automating stuff.
This is why, again, I go back to the book. What I talk about in the book, and what I help measure through the approach in the book, is not just the conscious actions of people, but the non-conscious habitual behaviours as well, along with the conscious thoughts behind them. Remember, we think thousands of things but don't enact all of them. And there are lots of things we do which we never think about. We need to capture both when it comes to any human behaviour or any human action, be it in the real world or in our world, the cyber world. We need to be able to capture the entirety of it. We can't just say, hey, you know what, let's only measure all the clicks. This is one of the big issues that I have with big data today. All the big data approaches that are out there depend on data that's basically collated from search engines or what have you. Arun Vishwanath (50:53) But all of this data is people typing and clicking. It's action data, behaviour data. It doesn't tell you how they are thinking, it just tells you what they're doing. But we think about a lot more things than we do. We think thousands of thoughts and we don't enact all of them. And so a lot of that cannot be measured with big data. And then there are also a lot of things that we do which we don't think about, which is your whole world of habits, and we need to be able to capture that too. And so when it comes to cyber and cybersecurity and social engineering, going back to, let's say, a phishing pen test that somebody runs, we need to be able to say, okay, why did this happen? Let's say Karissa did this. Did she think about it and then not know, which is a knowledge issue? Or did she not even think about it and just act, which is a habit issue? Or did she think about it but do it anyway, or avoid it, which is conscious avoidance?
Arun Vishwanath (51:59) Each of these gives you a different answer, right? Or is this something she believed? Is there a belief in her mind that made her make a mistake? For instance, I've done these surveys all over the world. In fact, I have a quiz coming out in a couple of days, and I'll share that with you, where we measure this idea called cyber risk beliefs. All of us, you, me, everyone who is on the Internet, everyone listening to this, we have these beliefs about technology that we form, right? Let me ask you this. What do you think is safer, a PDF document or a Word document? What do you think is safer to open? Karissa (52:35) Like a Google Doc, or like a Word doc type of thing? Like Microsoft? Arun Vishwanath (52:40) Yeah, somebody sends you an email, and there's one with a PDF document and another one with a Word document. Which one is safer to open? Karissa (52:47) From a security perspective? Safer to open? However, which one? Google Doc? Arun Vishwanath (52:54) No, not Google Doc. I'm giving you two. Karissa (52:56) Oh, sorry. Microsoft doc over a PDF. Yeah. Arun Vishwanath (53:01) Which one? Karissa (53:02) Microsoft doc? Arun Vishwanath (53:04) Why Microsoft? Karissa (53:05) Well, I mean, a PDF could be filled with malware or whatever. You don't know, right? But to stop someone potentially changing something, you want a PDF, because if it's just a Word doc, it's easy to change, easy to manipulate. It depends which way you look at it. Arun Vishwanath (53:23) All right. When I've done this exact same survey all over the world, invariably the answer always is that a PDF is safer, globally. Karissa (53:32) I'm not surprised. But they're probably not security people either, right? Arun Vishwanath (53:36) And the reason is exactly what you just said, right? They say, hey, I can't edit it, therefore it's safe.
So they make this cognitive leap from their experience with usage to the security of a document type, right? We know it's flawed. You know it's flawed; your answer was spot on. I know it's flawed. But they don't. And that's called a cyber risk belief, right? A cyber risk belief is not always flawed, but these are beliefs that we all hold in our minds. And there are many such beliefs that even you and I hold, and everybody who uses technology, even, you know, Elon Musk, holds these views about what's secure and what's not, what's safe to do and what's not. So some people will say, oh, is an email client safer than using a web-based portal for access? Is public WiFi safe? Is a private connection safe? Is a VPN safe? Is SSL secure? And so on. So there are a lot of these beliefs that we form, and I have a measure that captures most of them, the ones that matter, the most important ones. Arun Vishwanath (54:50) And these risk beliefs also dictate what you do when you get hit by a social engineering attack, right? But they have a different impact. Notice that these are consciously done. So you may think about something and do it, but you may have thought about it wrong, because you have an inaccurate belief about it. So we have to measure those. And again, in The Weakest Link, I teach you how to do that, and it arms you with the knowledge to measure them. But again, you see how the ideas we form about technology are based on our limited experience with it. And you and I, even as security people, have a very limited experience with technology, because today's computing technology is incredibly complex. What you see is just one end of a very large chain of processes. I'll give you an example. Every time you get an email from a listserv, there's an unsubscribe button at the bottom, right? There's a hyperlink that says unsubscribe from this email.
Now, many of us say, okay, let's just hit unsubscribe. But what is to stop that unsubscribe button from leading us to a malicious website? Nothing. Arun Vishwanath (56:00) Exactly. It's just pure trust. Karissa (56:03) Yeah. And habit. Arun Vishwanath (56:04) Right. And it's the same thing with essentially every graphical user interface that's there on the Internet. That's the thing with computing technology: the Send button doesn't really send an email out. It triggers a series of processes that you don't see, but we are programmed to just think, Send button, press send. A great example of how this can be used by the bad guys: I checked into a hotel about two years ago, before the pandemic, in Washington DC. When I checked in, the WiFi login asked you for your Gmail email address. And what was really interesting was that it also asked you to enter a password. The only difference was that the password was supposed to be the password to the WiFi system. Do you understand what they did? The WiFi login portal at the hotel asked you for your Gmail address and a password, but the password you were supposed to enter was theirs, not yours. And I went down and asked the person at the desk, how many people do you think entered their email password? Arun Vishwanath (57:21) And he was like, you have no idea. And I was like, wow, this is such a poor design. And then I took it back to my lab and I recreated it. And out of 500 subjects, 70% of the individuals who participated in that study entered their Gmail password. Karissa (57:41) Because they didn't read it properly. They just said, oh, okay. They didn't think about it. They didn't think about the potential ramifications. Arun Vishwanath (57:48) No one stopped, because it's a habit.
No one stopped for a second and said, wait a minute, why would they ask me for my Gmail password? Because whenever you enter your Gmail credentials, you enter your password. Karissa (58:01) Yeah, I would question it, but I question everything. Arun Vishwanath (58:03) I know, but you're in security. You've seen enough of it. Karissa (58:06) Yeah, true. Everyone's guilty until proven otherwise, in my books. Arun Vishwanath (58:09) Yes, that's right. And that's one of the primary measures, if you read the book. The primary measure is this measure of whether the individual pauses, right? Is the individual pausing? And what is it that causes that pause? It's suspicion, right? Did they become suspicious? That's the primary measure we use to ascertain whether or not this individual is going to be higher in their risk threshold. Karissa (58:45) Well, I think we could absolutely go on and talk about this for hours. I think this is interesting. I love understanding humans, I love talking to people like yourself about this, and I love security. So thanks very much, Arun, for going through some of your theories, answering those questions, which were interesting, and sharing this synopsis of your book. We will include the link to your book in the show notes. And of course, if you read it, get in touch with Arun, tell him what you think, and leave a review on Amazon. And yeah, I can't wait to get you back to talk further about human beings and why we do what we do. So thanks for making time and thanks for coming on the show. Arun Vishwanath (59:24) Hey, thank you so much. It was a total pleasure. Great fun. Thank you so much, Karissa. I appreciate it. Karissa (59:29) Thanks for tuning in. We hope that you found today's episode useful and you took away a few key points. Don't forget to subscribe to our podcast to get our latest episodes.
This podcast is brought to you by MercSec, the specialists in security search and recruitment solutions. Visit them to connect today. If you'd like to find out how KBI can help grow your cyber business, then please head over to KBI.Digital. This podcast was brought to you by KBI Media, the Voice of Cyber.