The Voice of Cyber®

KBKAST
From Cisco Live 2024 Melbourne – KB On The Go | Raj Chopra, SVP & Chief Product Officer, Security Business Group, Cisco, Angelique Medina, Head of Internet Intelligence at Cisco ThousandEyes, and Matt Caulfield, VP of Product for Duo and Identity, Cisco
First Aired: December 11, 2024

In this episode, KB sits down with Raj Chopra, SVP & Chief Product Officer, Security Business Group, Cisco, Angelique Medina, Head of Internet Intelligence at Cisco ThousandEyes, and Matt Caulfield, VP of Product for Duo and Identity, Cisco on the ground at Cisco Live 2024 in Melbourne. Together they discuss Cisco’s vision of being an AI first company, data in flight, and the concept of ‘identity is the new spam’.

Raj Chopra, SVP & Chief Product Officer, Security Business Group, Cisco

Raj Chopra is SVP and Chief Product Officer of the Cisco Security Product Management organization, where he leads strategy and execution for Cisco Security and SD-WAN products, ensuring comprehensive security for all users, from any device to any network or application.

Raj is a seasoned executive with a proven record of delivering market-leading innovation in security with a strong focus on user-first experiences. He is a strong advocate of supporting diverse teams and equitable environments that bring out the best in everyone.

Prior to taking on this role, Raj led product, design, and strategy for Proofpoint’s flagship Email Security portfolio, helping grow that business nearly 3-fold in 4 years.

Before Proofpoint, Raj was part of the founding team of Netskope. He built both the product and new market category of CASB (Cloud Access Security Broker) and spurred its growth into SASE (Secure Access Service Edge) and SSE (Security Services Edge). During his career, he launched more than a dozen cybersecurity products, including several in the past 15 years that rose to $250M+ in product revenue.

Raj holds an MBA from the Haas School of Business, UC Berkeley, and a BS in Computer Science from the National Institute of Technology (NIT), India, and has a growing eagerness to go deeper into his yoga practice.

Angelique Medina, Head of Internet Intelligence, Cisco ThousandEyes

Angelique Medina is Head of Internet Intelligence at Cisco ThousandEyes, where she reports on all things Internet related, from BGP routing and outages to the performance of edge and cloud-based services. She has more than a decade of experience in the networking industry.

Matt Caulfield, Vice President of Product for Duo and Identity, Cisco

Matt Caulfield is VP of Product for Duo and Identity at Cisco, where he leads Cisco’s strategy and thought leadership in all things identity. Previously, Matt was the Founder & CEO of Oort, a venture-backed Identity Threat Detection & Response (ITDR) pioneer, which was founded in 2019 and acquired by Cisco in 2023. Matt has a technical background and is an expert in identity, networking, cloud, and security domains. Until 2018, he led the Cisco Boston Innovation Team focusing on new product initiatives.


Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Karissa Breen [00:00:14]:
Welcome to KB On the Go. This week, I’m coming to you from Cisco Live 2024 at the Melbourne Convention and Exhibition Centre, where AI is taking center stage in driving the future of technology. Here in Melbourne, we’re surrounded by the buzz of innovation and industry leaders, all exploring how Cisco’s latest technologies are enabling us to work faster, safer, and smarter. Stay tuned for the inside scoop from some of the world’s leading experts presenting at Cisco Live 2024 here in Melbourne as KBI Media brings you all of the highlights. Joining me now in person is Raj Chopra, SVP and Chief Product Officer, Security Business Group at Cisco. And today, we’re discussing being an AI first company. So, Raj, thanks for joining and welcome.

Raj Chopra [00:01:09]:
Thank you.

Karissa Breen [00:01:10]:
Okay. So Cisco’s Security Cloud Vision as an AI first company. What do you mean by that?

Raj Chopra [00:01:17]:
So I’m gonna give you my tagline, and then I’ll hopefully explain a little bit. To be, in today’s world, a successful networking company, you need to be a security company. Because last time I checked, nobody wanted an insecure network. To be a security company that accounts for all of the different exceptions that exist in our daily lives, enterprise or personal, you need to be an AI company, because the complexity is too big for you to capture the tribal knowledge in some sort of logic expressions. So you need the benefit of machines and hence AI. And then to be a successful AI company, you need to be a data company. And hence the asset that we all know as Splunk, now part of the Cisco portfolio. So the arc really starts from us being a successful networking company, leading us all the way to being very thoughtful, very methodical about data.

Raj Chopra [00:02:18]:
And in between is the is the work that we’re doing with AI.

Karissa Breen [00:02:22]:
Okay. So I heard Dave West on the main stage yesterday talking about how, to be this company, it has to be a data company, a security company. Makes sense. What do you mean by methodical data?

Raj Chopra [00:02:32]:
Yeah. Being methodical about the data means that there is a large volume of context that needs to be brought to bear. So I’ll give you an example, a made up example, but an example nonetheless. You start to see, let’s say, some activity on the network that you’re not quite sure about. Is it really Raj accessing the board materials? Tomorrow is the earnings announcement. Is it really Raj accessing this? That’s one part of the context, which is what we are seeing right now in the systems. But the larger context may be, Raj also booked a flight on United Airlines and he’s in Australia. By the way, 2 weeks ago, he got a new phone, so he has a new authenticator app. He has failed 7 logins for a variety of reasons over the past month. Should Raj have access to the board materials, not can Raj have access to the board materials.

Raj Chopra [00:03:29]:
Today’s security constructs ask and answer the question, can Raj access board materials, based on my job role, based on whatever. But the real question to ask is, should Raj be able to access the board materials, given all of the complexity of my activity over the last days, weeks, months? And that is what the context, and the methodical bringing together of the data to develop that context, helps us provide an answer to: the question of should Raj access this material, not can Raj access this material.
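Raj’s “can” versus “should” distinction can be pictured as a policy that layers contextual risk signals on top of a plain entitlement check. The sketch below is purely illustrative: the signal names, weights, and threshold are hypothetical and don’t reflect any actual Cisco product logic.

```python
# Toy sketch of "can" vs. "should" access decisions.
# All signal names and thresholds here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class AccessContext:
    role_allows: bool          # the classic "can" check: RBAC says yes/no
    new_device_days: int       # days since the user enrolled a new device
    failed_logins_30d: int     # recent failed login attempts
    geo_matches_travel: bool   # current location consistent with booked travel

def can_access(ctx: AccessContext) -> bool:
    # Traditional check: role/entitlement only.
    return ctx.role_allows

def should_access(ctx: AccessContext) -> bool:
    # Context-aware check: start from "can", then weigh risk signals.
    if not ctx.role_allows:
        return False
    risk = 0
    if ctx.new_device_days < 14:
        risk += 1                      # brand-new authenticator is a risk signal
    if ctx.failed_logins_30d >= 5:
        risk += 1                      # repeated failures raise suspicion
    if not ctx.geo_matches_travel:
        risk += 2                      # location doesn't match known travel
    return risk < 2                    # hypothetical threshold

# Raj's example: the role allows access, but a brand-new phone plus
# 7 failed logins tips the risk score, so "should" says no even
# though "can" says yes.
ctx = AccessContext(role_allows=True, new_device_days=7,
                    failed_logins_30d=7, geo_matches_travel=True)
print(can_access(ctx), should_access(ctx))  # True False
```

In practice the point is that the two functions consult different amounts of context: the first only an entitlement, the second a blend of signals that has to be assembled methodically from many data sources.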

Karissa Breen [00:04:04]:
So what happens if people don’t have methodical data on their side?

Raj Chopra [00:04:11]:
You end up, with the best of efforts, you end up making substandard decisions. And what that leads to is risk in the day to day activities of that particular user, because you’re not exactly keeping track of whether this access is fit for purpose, this activity is fit for purpose, but you’re doing something today just because it was done yesterday. So you end up making substandard decisions.

Karissa Breen [00:04:51]:
Okay. So I wanna get into something now that is quite topical, which is responsible AI. Everyone’s talking about it. Depends on who you ask. I wanna know, how is Cisco approaching responsible AI? And then I wanna zoom out to what does this actually mean, because there’s different definitions of it. But also, there are now examples, you know, the driverless car, you know, the whole biases that come into certain things. That I find a very interesting area.

Raj Chopra [00:05:15]:
Yes. So first, responsible AI is a principle by which we’ve been developing AI based products in Cisco for years. There is a published manifesto. You can Google or search for Cisco and responsible AI, and it takes you to the trust portal. You can go through all of the details, the principles that we follow. And these are reflected in our development methodology. So this is not a statement of intent. This is a reflection.

Raj Chopra [00:05:49]:
This is a codification of our development practices. Right? So this is not a sort of airy fairy statement, but how we develop software. Case in point, I’ll give you a couple of data points. I think Dave also mentioned this, but we do today billions of minutes of translation, spoken word translation into written text, in Webex. Billions of them every month. There are exactly 0 bytes of meeting minutes that are used for training the models that convert. Because the spoken word in a meeting is specific to the customer.

Raj Chopra [00:06:34]:
It is not for us to use that material for training our models. Every bit of training that we do for these models, which can now translate from spoken word into written text from 120 different languages, is done without using a single byte of customer voice data. Every bit of the data is synthetic. So we never ever use customer data to train our models to do transcription from spoken word to written. And this is at a very, very large scale. Right? Now fast forward to some of the other things that we’re doing as we’re developing capabilities, let’s say, in a product that we call XDR, extended detection and response, that gets used in what is called the security operations center, the SOC. Now in that, we do want to use the fact that Cisco Talos has done incident response for thousands of customers over tens of years. The best practices, not names of customers, but best practices, have been distilled into playbooks.

Raj Chopra [00:07:36]:
So what happens when certain incidents show up? If some activity happens, what’s the corrective action for us to take? Mhmm. Right? This is fully anonymized. It’s not like it has attribution to a particular customer. That data, that anonymized data, is something that we have blended into our AI models. So that Talos tribal knowledge, so to speak, which was captured in documents and PDFs and whatever over the course of thousands of incident responses, is now available to every single one of our customers, because this is a playbook that they can utilize. Anonymized data being used practically for all of our customers. Right? That’s another example of using data responsibly. Then we have bespoke models, which is our own documentation, which is public documentation.

Raj Chopra [00:08:26]:
But using that, so that when somebody looks up, how do I change, I don’t know, the NTP setting, some acronym setting in my product? This is public information. Right? And so if you search for it right now, the best answer you’re gonna get is, well, go to this manual. Right? Some manual. And then you’re coming to page 47, section 3 dot whatever, and, like, great. Thank you very much. Instead of you being sent down this rabbit hole, from this public information, it literally just tells you, here are the steps you need to follow, and below it is an attribution. If you really want to go read it on your own, here is a link. Click it, and you can get to that detail.

Raj Chopra [00:09:12]:
But it takes away the toil. Right? And maybe some people will do that, maybe some people won’t. People just need to form their confidence, so maybe in the early days they will. So I’m giving you 3 models, where one is completely synthetic data being used, right, to build, train, and fully serve our models. Second is anonymized content that is used for the general practice and benefit of the customer base. And then there is public information, which is out there in the domain, that is being brought together in a way that becomes very usable without the toil of having to thumb through manuals and whatever else people end up doing today.

Karissa Breen [00:09:53]:
And that would be the benefit of having the source so you could cross check it. Because, again, if there was so much data out there that said the

Raj Chopra [00:09:59]:
You want it to be explainable. If somebody asked the question, why are you saying this? Why are you telling me this? You need to be able to say, here’s why.

Karissa Breen [00:10:08]:
So the part that’s interesting about this is, and I’ve spoken to many people about this, one of whom is on the World Economic Forum AI board and spoke to me about this when I was in Vegas recently, around hallucinations. But then, for example, going back to the source, imagine if there were 50,000 sources out there that said the sky is orange. And therefore, when you ask ChatGPT, for example, what is the color of the sky, it came up with this, and then it led back to the source, which is fabricated. Yeah. How are people gonna be able to discern that? And I know that they’re trying to validate certain sources, but how do we all make sense of that? Because this is where it gets interesting, especially for the younger generation, who won’t be able to understand and discern, to do their own research and, you know, to be able to look at the source and validate it or not.

Raj Chopra [00:10:59]:
So this is the third leg of the whole thing. So we talked about data, then we talked about AI and training AI models, and then the third thing is how do you defend these models? K. So you mentioned the younger generation. Like, if this is my source of information and it’s hallucinating consistently and persistently, how do I know any better? Yeah. So there are typically 3 phases in which models are developed. The first part is, when I’m building these AI models, am I training it on diverse enough data? Next to you, there are, what, Granny Smiths and some other apples sitting. Right?

Karissa Breen [00:11:41]:
Yeah. Yeah.

Raj Chopra [00:11:42]:
Yeah. Right? So these are round and they’re colorful. If all I trained my AI model on was round colorful things, and I presented it a tennis ball, it would say that is also an apple. True.

Karissa Breen [00:11:56]:
It’s the shape. The shape.

Raj Chopra [00:11:58]:
Right? Because AI is very different than how traditional software has been written. The way traditional software was written,

Karissa Breen [00:12:04]:
If this, then that.

Raj Chopra [00:12:05]:
If this, then that. I give this input, I get an output. I don’t know the output, but I know the input and I know my logic. In AI, it’s flipped. I already know the answer. I have to now build my hypothesis that this answer shows up consistently when I present a new input. K? Let me say it slowly one more time. Right? So, if this then that, is exactly what you said.

Raj Chopra [00:12:31]:
Right? Correct thing. So I write my logic. I give it input. It will consistently give me an output. I don’t know the output, but it’ll consistently give me the same output. K? AI is different. I already know the answer. K? Now I need to give it enough input and work my hypothesis such that it shows this output consistently.

Raj Chopra [00:12:54]:
So what needs to happen if we were looking for apples, right, we’d have to present it oranges and tennis balls and basketballs and whatever, round, circular, colorful things, enough of them so that when a new input is provided, it doesn’t hallucinate it as seen before and the weights have been appropriately assigned.
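Raj’s apple and tennis-ball point can be made concrete with a toy nearest-neighbour classifier: trained only on round, colourful fruit, it calls a tennis ball an apple; add diverse counter-examples and it stops. The features here (roundness, colourfulness, fuzziness) are invented purely for illustration.

```python
# Tiny 1-nearest-neighbour sketch of the apple/tennis-ball example.
# Hypothetical features: (roundness, colourfulness, fuzziness), each 0..1.

def nearest_label(train, x):
    # 1-NN: return the label of the closest training example.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda item: dist(item[0], x))[1]

# Training set 1: only round, colourful fruit -- not diverse enough.
narrow = [((1.0, 0.9, 0.0), "apple"), ((0.9, 1.0, 0.0), "apple")]

# Training set 2: the same apples plus counter-examples (tennis balls).
diverse = narrow + [((1.0, 0.9, 1.0), "not apple"),
                    ((0.9, 0.8, 0.9), "not apple")]

tennis_ball = (0.95, 0.85, 0.95)   # round, colourful, and fuzzy

print(nearest_label(narrow, tennis_ball))   # apple -- the mistake Raj describes
print(nearest_label(diverse, tennis_ball))  # not apple
```

The same model code gives the right answer only once the training data contains enough “round colourful things that are not apples,” which is the diversity point being made above.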

Karissa Breen [00:13:16]:
Well, that’s complicated though.

Raj Chopra [00:13:18]:
But and here comes the part. It is complicated for you and me, but it’s not complicated for

Karissa Breen [00:13:24]:
machines. Well, that is true. Yes. Yep.

Raj Chopra [00:13:27]:
And that is the power of AI that remember, now we’re talking about I don’t know what the input is gonna be. I do not control the input. Unlike software development where I control the input. This could be literally anything. I might be walking down the aisle here and there is a certain round colorful thing. Is this an apple? I don’t know what it is going to be. Right? So first thing that you need to make sure is that has this model been shown enough diverse data that it’s not going to hallucinate? Is it proper?

Karissa Breen [00:13:59]:
But how would you define diverse enough?

Raj Chopra [00:14:02]:
So the way it happens is when you present, so there are actually scores. I’ll not go into that. But there are measurement scores. It is a very deep topic. It’s called Elo, E-L-O. You can look it up. But that’s how you measure whether the outputs are good or not from this. But whether it has been trained on enough diverse data is a matter of how large your model needs to be.

Raj Chopra [00:14:24]:
So for ChatGPT, it needs to be in trillions of parameters. Right? But say you had a drone that was going around the oil storage systems. Right now, there is a person, once a month, who rappels up and down on one of those oil containers next to the airport to look if the rivets are leaking. It’s not an easy job. Right? But once a month they climb up and down and take pictures. And here is a drone that goes around, takes pictures, like 5 pictures every second, and you just compare. You can do it every day. You can do it 5 times a day. That model needs to be very small. All it needs to know is how do the rivets look, this is the oil container, blah blah blah.

Raj Chopra [00:15:06]:
It’s not very large. So you can make it really small. You don’t need to train it on a bunch of things other than all the crusty, peeling paint, bad rivets, this and the other. But you don’t need to tell it, like, how does a submarine look? How does the sky look? How does whatever? Right? It’s irrelevant. It’s irrelevant. It’s a small problem set. So you can have small models, and you can have large models. Small models, you don’t need to train on data that diverse, because it’s fit for purpose.

Raj Chopra [00:15:33]:
Right? Large models are more generic. You ask me anything. Translate this to that. Whatever. Right? So the diversity of training data that is required for it to not hallucinate is a function of how generic versus how specific you want a model to be. Yes?

Karissa Breen [00:15:57]:
Yeah. Wow. Okay. Yeah. That’s interesting.

Raj Chopra [00:16:00]:
K. So now bringing it back to how to be secure so that this is fit for purpose. One is making sure that the data it is being trained on is good, diverse enough that it gets to the right Elo counts. The second is there are biases that creep in. Because if all we showed this model of apples, right, all we showed were fruits, that would be inadequate. We need to have lots and lots of different kinds of things in there. We need to show it berries, and we need to show it, I don’t know, watermelons, for it to have enough of a diverse set that it has seen. So it doesn’t start to say, if it’s small and this size, then it’s an apple. Right? So you need to ensure that it is fit for purpose.

Raj Chopra [00:16:47]:
If it starts suddenly feeding you images of leaves and trees and chairs, then one’s like, what? That’s nonsense. Right? So that’s the second part. So, build, train. And then the third one is where you start to ask questions and get answers back that need to be fit for purpose. So what does that mean? I have this model, and I’m gonna maybe use the drone example a little bit rather than the apples one. In the drone example, I’ve got this model and I start to ask, like, show me how to make a bomb. Should it answer that question? Never. Never.

Raj Chopra [00:17:22]:
Right? Then it goes around and says it says, give me the best paint that I can use for this oil tanker, oil container, whatever. Should it answer that question? I don’t know. Maybe it should. Maybe it shouldn’t. Maybe it’s biased. Maybe it’s giving me the cheapest one. Maybe it’s giving me the most expensive one. I don’t know.

Raj Chopra [00:17:41]:
Like, I don’t know. But I can’t determine that. The person who’s building that application can determine whether this model should answer that question or shouldn’t answer that question. Right? So there are certain things that it should never answer. Show me how to make a bomb. Never. But can or should this model answer the question, what’s the best paint to use? Maybe. Maybe not. I don’t know.

Raj Chopra [00:18:02]:
But somebody needs to define whether this is fit for purpose to answer or not fit for purpose to answer. That’s the application developer. The person who made that application says, I am responsible for the upkeep of these oil tankers, or containers, whatever. That’s what they do today. If I’m the person who builds the application, I determine whether it is serving the purpose or not serving the purpose.

Karissa Breen [00:18:32]:
Joining me now in person is Angelique Medina, Head of Internet Intelligence at Cisco ThousandEyes. And today, we’re discussing data in flight and the risks. So, Angelique, thanks for joining, and welcome.

Angelique Medina [00:18:42]:
Thank you. It’s great to be here.

Karissa Breen [00:18:44]:
Okay. So let’s start. For people who are not familiar, what’s your definition of data in flight?

Angelique Medina [00:18:49]:
Well, when we think about data in flight, we think about all of the traffic that’s flowing across the many, many networks that not only make up the Internet, but also private networks as well. So when a lot of people think about the Internet, they almost think about it as a utility, you know, where it’s just sort of the pipe that’s going from one place to the other. But the reality is that it’s actually many thousands of networks that are interconnected together. And the way that these networks interconnect with one another is really based on a system of trust. So you’re actually talking about traffic that’s changing potentially many many different hands from going from one place to another.

Karissa Breen [00:19:25]:
Do you think as well, in your experience, Angelique, that people just treat the Internet like, I can just start up Google and then I don’t have to worry about what happens in the back end?

Angelique Medina [00:19:34]:
I think that’s the way that many people think about it. And it’s very easy to think about it that way, because it just sort of works. You know, you type something into your browser and then almost instantaneously you just get a response. It comes up and you don’t really think about everything that’s happening under the hood. But the reality is that first, you know, you are effectively translating the domain name that you pop into your browser into an IP address. So there is a system that performs that, so you can do a DNS lookup. So you’re connecting over a network to those servers. You get a response, then you are connecting over again many more networks to the actual web servers for the application or site you’re trying to reach. Those web servers may be connecting to many more things on the back end to fetch data. So again, there’s so much connectivity all happening under the hood. It happens very fast, but it’s actually highly interconnected and interdependent.
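The chain Angelique walks through (resolve a name, reach a web server, fan out to back-end dependencies) can be pictured with a deliberately simplified simulation. The hostnames and address below are fictional stand-ins using documentation IP space, not real infrastructure.

```python
# Simplified simulation of a single page load:
# browser -> DNS -> web server -> back-end dependencies.
# Names and the IP (203.0.113.10, documentation space) are fictional.

DNS = {"shop.example": "203.0.113.10"}                  # name -> IP
WEB = {"203.0.113.10": ["api.example", "cdn.example"]}  # server -> back ends
BACKENDS = {"api.example": "inventory data",
            "cdn.example": "images"}

def load_page(hostname: str) -> list:
    ip = DNS[hostname]                 # step 1: DNS lookup, over one set of networks
    fetched = []
    for dep in WEB[ip]:                # step 2: the web server fans out to back ends
        fetched.append(BACKENDS[dep])  # step 3: each dependency crosses more networks
    return fetched

print(load_page("shop.example"))  # ['inventory data', 'images']
```

Each dictionary lookup stands in for a traversal of one or more real networks, which is the point: a single “instantaneous” page load depends on several independent systems being up and reachable.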

Karissa Breen [00:20:28]:
So interdependency, obviously, can create issues of downtime, things not working, as we’ve seen in recent times with interdependencies of systems that we’re reliant on. What sort of things concern you around that then, in terms of interdependency?

Angelique Medina [00:20:45]:
Yeah. I mean, it’s very interesting, because when you think about a lot of the trends over the last, say, 10 years, there’s a lot of concentration of services in actually quite a few parties, if you will. So you think about the big public cloud providers, you think about the major CDN providers. Oftentimes when something goes wrong with any service, whether it’s a platform service or a network within any of those providers, it can have a very broad ripple effect, because you may not even be hosting your application or services in a cloud environment. It just may be that you might be using one of their platform services under the hood. And when you think about, you know, availability of an organization’s services, absolutely, there’s a security element to this. Oftentimes, when you think about security issues and folks who may want to impact an organization’s services, a lot of times they’re trying to impact their availability. And so, you know, availability can be really important to think about from multiple angles, not only in terms of protecting against things that might be outside, but also in ensuring rigor internally so that you’re not impacted regardless.

Karissa Breen [00:22:03]:
Yeah. And I guess just following that a little bit more in terms of availability. Imagine if the internet just stopped working.

Angelique Medina [00:22:08]:
Yeah. I mean, it is a pretty frightening thought, for sure. And, you know, we have seen a lot of instances in which, like, there have been very broad outages, you know, or there’s been widespread disruption. And even think about things like submarine cables. You know, we’re here in Australia now. It used to be that there were, you know, only a few cables, even, for example, on the west coast of Australia. And there were several more added over the last 5, 6 years. And so there’s greater redundancy, but you still have to think about that, because I don’t know that a lot of people realize that, whether it’s submarine cables or terrestrial networks, a lot of internet providers and cloud providers all use the same pipes effectively at the end of the day. And so if something were to happen to those, it can be very broad in terms of the disruption.

Karissa Breen [00:23:01]:
I wanna ask you now about data sovereignty. So in Australia, it’s a big conversation. There’s probably a lot of companies out there that wouldn’t actually know where their data is being stored. So what’s your view then on that?

Angelique Medina [00:23:15]:
Yeah. I think that’s a very interesting one. We’ve seen that also in Europe as well, where there’s considerations around that. And it really is in fact the case that organizations need to be very mindful of this. Because even if they’re working with a very large organization that might have data centers in different regions, it’s not always the case that you can assume that your data is going to be moved into the correct region. And so you really have to understand where it’s moving at any given time, effectively having a paper trail. And the way that we think about this is you really need to understand, across every single network, every single router, across the Internet, all the way through to the destination, where is your traffic at any given time? Because we’ve seen this quite a lot, where oftentimes traffic can go out of region when it really shouldn’t.

Angelique Medina [00:24:08]:
Not only in terms of where it’s destined, but also just sort of incidentally routed. And we’ve seen this with things like route leaks, where it’s very much an accident that traffic might get handed off where it shouldn’t. And oftentimes, a lot of organizations don’t even realize that their traffic is being misrouted until something very public happens. So an example of this is, like, a few years ago, there was a very small service provider in Nigeria, and they started accidentally advertising themselves as a route to Google services. Now that, you know, doesn’t seem like a terribly bad thing. They couldn’t necessarily handle all that traffic, so that wasn’t great. But notwithstanding, one of their peers was China Telecom, who was right kind of neighboring them. And the thing is that China Telecom doesn’t pass Google’s traffic.

Angelique Medina [00:25:03]:
So they just started dropping all of Google’s traffic. And that was really the first indication that folks had that there was this incorrect route propagating across the Internet. Now, if China Telecom hadn’t been dropping their packets, then folks may not have known for some time. So that is sort of an example of how a lot of these things happen much more frequently than folks realize. And oftentimes, it’s only if there’s a performance issue or something quite catastrophic that clues people in to this happening. But again, it doesn’t always, you know, manifest in that way.
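A route leak like the one described can be simulated in a few lines: with no origin validation, a router simply prefers the shortest AS path it hears, so a wrongful announcement with a shorter path wins. The prefix and AS numbers below are illustrative stand-ins (documentation address space and private-use ASNs), not the actual networks from the anecdote.

```python
# Minimal simulation of a BGP route leak: a router that trusts every
# announcement and prefers the shortest AS path will happily replace a
# legitimate route with a leaked one. Prefix and ASNs are illustrative.

routing_table = {}  # prefix -> AS path currently preferred

def announce(prefix, as_path):
    # Simplified best-path selection: shortest AS path wins, no
    # origin validation at all -- trust, not verification.
    current = routing_table.get(prefix)
    if current is None or len(as_path) < len(current):
        routing_table[prefix] = as_path

# Legitimate route: the prefix owner (AS 64500) is reached via two transits.
announce("203.0.113.0/24", [64501, 64502, 64500])

# Leak: a small AS (64510) wrongly announces the same prefix with a
# shorter path, so the router now prefers it and traffic is misrouted.
announce("203.0.113.0/24", [64511, 64510])

print(routing_table["203.0.113.0/24"])  # [64511, 64510]
```

As in the Nigeria incident, nothing in this selection logic checks whether the announcing AS actually owns the prefix; the misdirection only becomes visible when the traffic is dropped or performance degrades.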

Karissa Breen [00:25:40]:
Would you say as well, I mean, with security and IT in general, like, people are just struggling to keep their head above the water and keep the lights on? So it’s like, unless they have to look at something, to your point around performance issues, no one’s looking.

Angelique Medina [00:25:52]:
Yeah. A 100%. And I think that, you know, the other thing too is that even though as users, you know, we think about the Internet as, like, okay, well, this has been part of our lives for a long time, when it comes to enterprises, they have historically had a very different kind of approach to how they network. So they had, you know, managed connectivity between, say, their data centers and their branch offices. And, you know, they may have funneled all traffic through, you know, a central place that enabled them to kind of have their firewalls all there, and to filter traffic, and to ensure that they were very safe. So they weren’t as heavily dependent on Internet connectivity. So a lot of these issues and concerns are relatively new to a lot of enterprise IT operators.

Angelique Medina [00:26:39]:
And so this is kind of a new world, where they really have to now understand that the Internet functions very differently than an internal private network does. I mean, when you think about it, the Internet was actually founded as, like, an academic network. And it was built on this chain of trust. You just sort of agree to peer with another provider and exchange traffic. It was never meant to have the same security mechanisms that we think are really, really important today.

Karissa Breen [00:27:08]:
I’ve had other people I’ve spoken to that I know saying that, you know, the Internet is still put together by sticky tape and duct tape.

Angelique Medina [00:27:14]:
Absolutely. Absolutely. That’s a good way of putting it.

Karissa Breen [00:27:17]:
Would you say that people’s version of the Internet is like, it’s this perfectly well oiled machine? But in reality, if you have a look behind the curtain to your earlier points, it can be a disaster.

Angelique Medina [00:27:28]:
Yeah. Absolutely. You know, I think that we’ve made some strides in terms of route security. There’s been some initiatives around RPKI, where folks can effectively, you know, have service providers ensure that it’s not the case that just anybody can say that they’re Google, or say that they’re Facebook, and try to take their traffic. There have been strides, but the interesting thing about the Internet is that it really is a community effort, if you will. Because I can say, hey, you, service provider, make sure that, you know, this route that you’re receiving, go check it and make sure that it’s something that belongs to whoever’s advertising it. But it’s really up to that service provider to do it. So it requires a combination of really every service provider on the internet, as well as the application providers too, working together to make this work. And so it’s gonna take time to get there, but there has been some progress.
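The RPKI idea Angelique mentions can be sketched as a lookup against published Route Origin Authorizations (ROAs) before a route is accepted. This is a simplification (real origin validation also handles maxLength and covering prefixes), and the ROA entry below is made up for illustration.

```python
# Sketch of RPKI route origin validation: check the announcing origin AS
# against a table of Route Origin Authorizations. Entries are made up;
# the prefix is from documentation address space.

ROAS = {"198.51.100.0/24": 64496}  # prefix -> authorised origin AS

def validate(prefix: str, origin_as: int) -> str:
    authorised = ROAS.get(prefix)
    if authorised is None:
        return "unknown"            # no ROA covers this prefix
    return "valid" if authorised == origin_as else "invalid"

print(validate("198.51.100.0/24", 64496))  # valid   -- legitimate owner
print(validate("198.51.100.0/24", 64512))  # invalid -- would-be hijacker
print(validate("192.0.2.0/24", 64500))     # unknown -- no ROA published
```

The community-effort point shows up in the last case: validation only helps where prefix holders have published ROAs and where each receiving provider actually chooses to check and drop the invalid routes.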

Angelique Medina [00:28:28]:
But in the meantime, you know, obviously I think enterprises need to be much more mindful that this is not the same as an enterprise private network.

Karissa Breen [00:28:39]:
So maybe, Angelique, can you talk to some of the risks associated with data route changes? Or maybe start by setting the scene on, like, what is a data route change, for people who are perhaps unfamiliar? And then the risks associated with that.

Angelique Medina [00:28:50]:
Yeah. So routing on the Internet is really interesting. Let’s say that I’m Google and I’m advertising my routes to my service through maybe many different service providers that are connected to me. And then it’s kind of like a game of telephone. Those service providers then announce to their peers, and those peers announce to their peers. It really is sort of like, hey, I’m advertising further out. Now, if I’m a service provider, I may have different options to get to, say, Google. You know, I have one provider here, one provider over there.

Angelique Medina [00:29:26]:
And the decision that I make on that comes down to a number of factors. Some of them might be commercial. You know, maybe it’s cheaper for me to send it one route than the other. Maybe one route is shorter, so that might be preferred. But in terms of route changes, again, I’m really dependent on the providers who are sending their advertisements to me. And if something comes through to me that is illegitimate, you know, effectively there might be somebody who is trying to spoof, to say that they are someone and that they have a route to the service, and it really is up to the provider whether they accept it and send it on, which we see quite often. Then all that traffic can just go to them. I mean, that’s almost how it works, you know.

Angelique Medina [00:30:17]:
Well, it is how it works, you know. And so it’s kind of interesting when we think about those route changes. It really is about, do you trust an advertisement or a route that you’ve got from somebody next to you? And because of that, routes can really change on the fly. They’re quite dynamic. And like I said, they could change for legitimate reasons. You know, maybe there’s a provider who thinks, I have a shorter route to the service. Or maybe I’m taking an alternate path because I have more favorable commercial arrangements with them. They can really just fluctuate quite a lot.
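The two selection factors Angelique mentions, commercial preference and path length, map onto the first steps of BGP best-path selection. A toy sketch (simplified and illustrative only; local preference is the policy knob that typically encodes commercial arrangements):

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: list        # ASNs the advertisement has traversed so far
    local_pref: int      # policy preference, often reflecting commercial deals

def best_route(candidates: list) -> Route:
    """Toy BGP decision process: highest local preference wins;
    ties are broken by shortest AS path."""
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

routes = [
    Route("192.0.2.0/24", as_path=[64500, 64501, 15169], local_pref=100),
    Route("192.0.2.0/24", as_path=[64502, 15169], local_pref=100),           # shorter path
    Route("192.0.2.0/24", as_path=[64503, 64504, 64505, 15169], local_pref=200),  # preferred peer
]
```

Note that local preference beats path length: a commercially preferred route wins even when a shorter path exists, which is part of why routing can shift for reasons invisible from outside.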

Angelique Medina [00:30:56]:
But because of that, it’s really important that organizations have continuous visibility, because sometimes those changes are legitimate and sometimes they’re not. And so, because of that very fluid nature of routing across the Internet, it is one of these things that, again, requires a lot of vigilance to ensure that traffic is going where it should at any given time.

Karissa Breen [00:31:23]:
So what happens when it’s not legitimate?

Angelique Medina [00:31:26]:
Well, I guess a few things can happen. One, as in the example I gave you earlier, if it’s not a legitimate route, it could impact performance. Maybe traffic is going through networks that it really shouldn’t, networks that don’t have the capacity to carry that traffic. So it could cause loss. It could cause a very degraded experience for users. It could be something like the example I shared about China Telecom, where the traffic is just black-holed because they drop that type of traffic. That is quite a common one in a BGP hijacking scenario, where somebody might be advertising routes that they don’t own; because they don’t actually have that destination server in their network.

Angelique Medina [00:32:09]:
The traffic will just get dropped when it hits their network. But other times we see instances like, a few years ago, AWS’s Route 53 DNS service was hijacked. Now, that was a very sophisticated hijack, in that the attackers effectively compromised a small service provider in North America. And then, using that service provider’s systems, they started advertising themselves as AWS’s DNS service. The reason they were doing that was that they were filtering for anybody who was requesting the IP addresses for an Ethereum cryptocurrency site. They would then serve them an illegitimate IP address to go to. Right? And then, if they went to that IP address, it was spoofing the Ethereum service.
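One defensive takeaway from the Route 53 episode is to independently sanity-check that the answers your resolver hands back fall inside the prefixes the real service is known to use. A hedged sketch, with entirely made-up "known good" data:

```python
import ipaddress

# Hypothetical allow-list of prefixes the legitimate service announces.
# In practice this would come from the provider's published ranges.
KNOWN_GOOD = [ipaddress.ip_network("198.51.100.0/24")]

def answers_look_legitimate(resolved_ips: list) -> bool:
    """Return False if any resolved address falls outside the expected
    prefixes, as happened when hijacked DNS served attacker-controlled IPs."""
    return all(
        any(ipaddress.ip_address(ip) in net for net in KNOWN_GOOD)
        for ip in resolved_ips
    )
```

This is monitoring, not prevention: it flags a suspect resolution after the fact, which is still far earlier than waiting for users to report stolen funds.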

Karissa Breen [00:33:03]:
And they’re putting their details in and...

Angelique Medina [00:33:05]:
Exactly. And so, you know, that is a serious security concern, because now you’re effectively back to this spoofing thing. And if somebody is not very vigilant as a user... you have to remember that, even as somebody like Ethereum, you’re kind of leaving your users vulnerable if you’re not ensuring that their traffic is going to the right place. So it can be quite serious in its consequences.

Karissa Breen [00:33:32]:
I’ve seen another example. You probably know more than me, but I think it was a company in the US. They sold, like, these really nice coats. And then, apparently, people were calling that company directly saying, hey, I never got my coat. Like, I paid you all this money. Where is it? And that’s when they started to understand, actually, something’s wrong here. Yeah. And by then it’s too late.

Angelique Medina [00:33:55]:
A hundred percent. Yeah. I mean, yes, there’s a whole range of possibilities in terms of somebody trying to impersonate you, whether they’re impersonating a brand or otherwise. But quite often, as I mentioned, these can have pretty significant financial consequences for an organization, reputational consequences. Because, you know, in that case, obviously, these particular individuals considered the company to really be at fault. Right? And that can be difficult to repair when you’re talking about a reputation, a brand reputation.

Angelique Medina [00:34:30]:
Right? And so it really runs the gamut: significant financial, reputational, and privacy considerations. These are all things that, you know, there’s certainly a lot for enterprises today to consider, especially from a security standpoint. But even in this particular case, there was no compromise of the actual Ethereum servers. Right? Their internal systems weren’t compromised. That’s what’s so interesting about this type of attack. But they were being impersonated.

Angelique Medina [00:35:05]:
These folks were able to convince people that they were them. And so that constituted a security issue nonetheless. So it’s a very different sort of thing. A lot of folks really focus on, hey, I’m gonna batten down the hatches. I’m gonna ensure that my systems are very secure. And that’s one of the reasons why, as it gets increasingly harder to penetrate somebody’s systems, it might in fact be easier to just try to impersonate them and see where you get with that. Right? And so it’s a very interesting landscape when you think about not only the Internet itself but how the Internet operates.

Angelique Medina [00:35:40]:
There are some very fundamental systems, like DNS, that are foundational to how the Internet works. And they are often the target: when we see DDoS attacks, they’re very often aimed at things like DNS. Because if you take down DNS, you’re not just taking down a single organization. You’re potentially taking down hundreds or thousands of organizations. And so those are the things you kind of have to think about. Like, where are those fault lines, those points where you really have to ensure that you’re redundant, that you’re working with a party you have a lot of trust in, or that your own systems are well secured.

Karissa Breen [00:36:28]:
Joining me now in person is Matt Caulfield, VP of Product for Duo and Identity at Cisco. And today we’re discussing ‘identity is the new spam’. So, Matt, thanks for joining, and welcome.

Matt Caulfield [00:36:37]:
Thank you very much, Karissa. Happy to be here. Happy to talk about identity.

Karissa Breen [00:36:40]:
Okay. So on that note, what is ‘identity is the new spam’? What does it mean to you, and what’s your version of it?

Matt Caulfield [00:36:46]:
Yeah. Identity is the new spam is kind of a catchy phrase. Right? What does that mean? Maybe 15 years ago, spam was a big problem for companies and for individuals. I remember it personally: having a Yahoo mailbox or a Gmail mailbox was a big deal. Then along came all these email security companies and kind of cleaned that up. It’s not really an issue we talk about anymore. Identity today is very similar, in that you can think of attackers as constantly knocking on the front door of all these accounts, whether they’re your personal accounts or your corporate accounts.

Matt Caulfield [00:37:15]:
They’re trying to make their way in almost in the same way that attackers used to send loads and loads of spam and phishing attacks via email. Now that’s happening through the identity vector.

Karissa Breen [00:37:24]:
Okay. So I have to ask. Look, identity in my experience is not... I mean, you obviously care a lot about identity, but do you think it’s one of those things that just gets relegated a lot? Like, we’ve got these other cool things like SIEMs and SOCs, and they seem to take center stage. Identity doesn’t strike me as having center-stage, main-character energy.

Matt Caulfield [00:37:45]:
It does get relegated a lot, and it shouldn’t be, to be honest. I usually say, look, network security is still important today, but it really came into its own about 20 years ago, when we got next-gen firewalls and network security really flourished. Ten years ago, you know, maybe it was endpoint security.

Matt Caulfield [00:38:01]:
There are a bunch of vendors who came out around that time and really flourished. This decade, the 2020s, I think, is the era of identity, where people started waking up to the fact that hackers are no longer hacking in. They’re just finding ways to log in. They don’t need to go through the trouble of finding a zero-day exploit in your operating system or in your router or in your network. They’re simply logging in. And so identity, we’re finding, is moving in terms of ownership from where it’s been traditionally, which is part of IT, into being under security. And so I think a lot of organizations are realizing that identity is just as important a pillar of security as network security or endpoint security.

Matt Caulfield [00:38:41]:
Identity security is is there as well.

Karissa Breen [00:38:44]:
Is that one of the things that makes this so challenging? In terms of, like, people are still using VPNs. People are still trying to figure out how to do MFA. So, like, let’s even go back to the consumer. How many times do people have to reset their password? They forget it. It’s not strong enough. Like, that’s annoying. It creates friction. Right?

Matt Caulfield [00:39:01]:
Yes.

Karissa Breen [00:39:01]:
So what’s your view, then, on identity? Like, it’s an easy thing for us to sit out here in these nice comfy chairs and talk about it, but in reality, it’s not as easy to implement. Right?

Matt Caulfield [00:39:10]:
Yeah. For that very reason, which is that, unlike networking or endpoints, there’s a real human factor in this one. Because this is: how are you verifying that a person is who they say they are? How are you making sure they have access to the things they’re allowed to have access to? There’s that human element, which makes it very difficult for both individuals and organizations to implement identity security. And so every company we talk to is at a different point; they’re all at different parts of the maturity curve when it comes to identity. Some are just now trying to, as you said, adopt MFA. And they’re just getting there. And some of the studies that we’ve done through Talos and through Duo have shown that we’re doing pretty well in terms of MFA adoption. You know, most companies now have rolled out MFA to most of their users.

Matt Caulfield [00:39:56]:
That’s great.

Karissa Breen [00:39:57]:
Most companies in North America or globally would you say?

Matt Caulfield [00:39:59]:
I’d say globally.

Karissa Breen [00:40:00]:
Sure.

Matt Caulfield [00:40:00]:
Certainly, there are many who haven’t, and we like to think Duo makes that easy. But we’ve also found that even if you’ve rolled out basic multifactor authentication, maybe through an SMS on your phone or a Duo authentication application where you can press an allow button when something comes in, we’re finding that attackers have found ways around those. Those are easy to phish. I can call you up and say, hey, I’m from the help desk. I’m trying to fix a problem with your account. You’re gonna get a code on your phone.

Matt Caulfield [00:40:27]:
Could you just read that back to me? And most people will fall for something like that, and they’ll hand over that six-digit code. Or, hey, you’re about to get a notification on your phone. Could you just press the allow button, or read off the code for me, so I can understand who it is that’s trying to log in? These are ways attackers are using social engineering to bypass MFA even when you have it. So we’re starting to see the adoption, especially in more sophisticated organizations that care about identity security, of phishing-resistant authentication. That would mean biometrics, using your fingerprint or Face ID; using device trust, so you can only log into certain applications from your work laptop because it’s kind of assigned to your account. So it’s this combination of biometrics and passwordless logins, so you can’t be phished as easily or give your password away.

Matt Caulfield [00:41:11]:
FIDO2 cryptographic keys: all these things are coming together now for sort of a second wave of MFA. Now, to your point, though, not everybody’s even gone through the first wave.

Karissa Breen [00:41:20]:
First wave meaning MFA, or 2FA?

Matt Caulfield [00:41:22]:
Yeah. Just the first wave, 2FA.

Karissa Breen [00:41:24]:
Yeah. Wow. So using that example, that’s when people get desensitized: so many notifications coming through, someone just keeps pinging, pinging, pinging. Eventually, you’re like, okay, allow, because my kids are going crazy in the background, my wife’s yelling at me. I’m just gonna press allow, and then we’ve got a problem.

Matt Caulfield [00:41:39]:
Totally. Yes. So MFA flooding and MFA fatigue: these are two sides of the same coin. Flooding would be me sending you multiple MFA push notifications, and you see them on your phone, and you press deny, deny, deny, deny. And eventually, it’s 3 AM, you know, kids are screaming, whatever’s happening, and you relent and press allow. The other form of that is MFA fatigue, which is that we’re trained to just, by habit, press the allow button when we see it on our phone.

Matt Caulfield [00:42:08]:
We oftentimes don’t look at or scrutinize the details behind it and just say, oh, you know, it’s the middle of the workday, I’ve seen this before. There must be some application that’s trying to reauthenticate. I’m just gonna go press the allow button.
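The flooding pattern Matt describes is visible in push logs as a burst of prompts to one user in a short window. A simplified sketch, with invented thresholds (real products like Duo apply their own heuristics):

```python
from datetime import datetime, timedelta

def is_push_flood(events, window_minutes=10, threshold=5):
    """events: list of (timestamp, outcome) push prompts for one user.
    Flag when too many prompts land inside any sliding window,
    regardless of whether the user pressed deny each time."""
    times = sorted(t for t, _ in events)
    window = timedelta(minutes=window_minutes)
    for i, start in enumerate(times):
        if sum(1 for t in times[i:] if t - start <= window) >= threshold:
            return True
    return False

t0 = datetime(2024, 12, 11, 3, 0)  # a 3 AM burst, as in Matt's example
flood = [(t0 + timedelta(seconds=30 * i), "deny") for i in range(6)]
```

A deployment would typically auto-lock further pushes for that user once a flood is detected, removing the "eventually relent and press allow" failure mode.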

Karissa Breen [00:42:20]:
Exactly. And I think that’s something that happens even in my own company sometimes. I’m like, oh, who’s trying to access X system? And sometimes, you know, the IP address isn’t necessarily related to where I’m situated. But typically I do ask, and it is someone legitimately trying to get into the system. So let’s go back to the friction. Obviously, we still need to ensure that we’re identifying who the person is and authenticating them properly. But you mentioned before, you know, biometrics and passwordless. I’ve spoken at length on the show about that. What’s gonna happen, then, to the future of, like, password managers? What does their future look like?

Matt Caulfield [00:42:53]:
Yeah. I think there will always be a long tail of applications that will still be disconnected from the rest of the identity ecosystem, and the only way to manage authentication for them might be a username and password. My hope is that we only use passwords, and maybe a password manager, for those. But the vast majority of applications, say 99% of the ones we use in the corporation, should be tied back to your single sign-on system. So you don’t need password managers. You simply have a single password for your corporate account, and you can log in.

Karissa Breen [00:43:25]:
Using biometrics and etcetera.

Matt Caulfield [00:43:27]:
Plus biometrics and multifactor authentication and all those good things. Yes.

Karissa Breen [00:43:32]:
But then I hear the consumer side of it. People saying, like, yeah, but then I have to deal with all these things: whether it’s a token, the thing on your phone that you allow, or, you know, a text message, an email. People seem to get annoyed by that. So how do we implement identity security where it’s actually secure, but also not impacting the user? Because again, you know, as security professionals, and I was one historically, you’re there to serve the business rather than practicing security all day. Right? So where do you find that equilibrium between security and, of course, allowing the business to operate?

Matt Caulfield [00:44:09]:
Right. Right. Yeah. There’s sort of a thin line between security and usability.

Karissa Breen [00:44:13]:
Yes.

Matt Caulfield [00:44:13]:
We like to paint a picture that Duo is right at that intersection; it’s the brand that’s kind of known as being both usable and secure by default. But it’s a very thin line to walk. It’s very easy to fall into being too secure, with too much friction, which takes you away from usability. If you stray too far into the usability side, you start losing some aspects of security. So it’s about finding that right balance. One of the technologies I’m most excited about is passkeys, just because this is something that consumer applications and consumers can adopt in their everyday life to unlock applications using a fingerprint or Face ID on their phone rather than using a password at all. And we’re starting to see passkeys show up in the enterprise world as well, where we can use them as a way to log in to enterprise applications.

Matt Caulfield [00:44:57]:
And it’s very low friction. It’s easier than it is to use a password because you’re just using a fingerprint.

Karissa Breen [00:45:02]:
And I guess the parallel to be drawn would be when you open up your iPhone with your face. And how annoying is it to have to put my code in if it doesn’t recognize my face, for example?

Matt Caulfield [00:45:12]:
Exactly. So I think we’re gonna start to see more companies and more vendors leverage those technologies that are already built into your mobile device. And the technology is very good; protection against deepfakes and some of these AI attacks is built into it. We just need to leverage it in corporate and consumer settings, not just for unlocking your phone, but for doing many other things.

Karissa Breen [00:45:31]:
But when we’re doing these sorts of initiatives, Matt, you have to do it bit by bit. You can’t just roll it out and then see what happens. So what would be your approach to, say, okay, we’ve got to do the first wave and then the second?

Matt Caulfield [00:45:41]:
Yes. It’s a journey. So everybody is in a different place. Some people are just starting on that multifactor authentication journey, that identity security journey, and some people are further along. I’d say, look, if you don’t have any form of 2FA or MFA on your accounts, start with that. Some MFA is better than no MFA. But once you’re there, once you have basic MFA, make sure you’re connecting all of your applications to your single sign-on system. You don’t want a bunch of islands of identity that you’re managing individually.

Matt Caulfield [00:46:09]:
It should all be tied back to your single directory, your single sign-on system, whether you’re using Duo or Microsoft or Okta or Ping. It doesn’t matter. Just connect it back to the SSO system. And then from there, you can start getting a little bit more advanced. You can do things like start automating the identity lifecycle. So when someone joins the organization, they automatically get access to the applications they need. And when they leave the organization, for whatever reason, those applications get taken away and their account is deactivated. These are, you know, basic blocking-and-tackling things.
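The joiner/leaver automation Matt outlines, in practice usually driven from HR data via SCIM provisioning into the SSO directory, reduces to two operations. A minimal sketch; the role-to-application mapping and directory shape are hypothetical:

```python
# Hypothetical mapping from job role to the apps that role should get.
# Real deployments derive this from HR attributes and group policy.
ROLE_APPS = {
    "engineer": {"email", "git", "ci"},
    "finance": {"email", "erp"},
}

def on_join(directory: dict, user: str, role: str) -> None:
    """Joiner: activate the account and grant the role's entitlements."""
    directory[user] = {"active": True, "apps": set(ROLE_APPS.get(role, {"email"}))}

def on_leave(directory: dict, user: str) -> None:
    """Leaver: deactivate and strip every entitlement immediately."""
    directory[user] = {"active": False, "apps": set()}
```

The point of automating both sides is the leaver path: orphaned accounts with lingering access are exactly the "islands of identity" problem Matt warns about.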

Matt Caulfield [00:46:36]:
And then from there, start adopting sort of second-wave multifactor authentication: biometrics, passwordless, phishing-resistant.

Karissa Breen [00:46:43]:
So going back to when someone starts at a company, how do you manage, like, privileged account management, for example? So I come in and I randomly have access to a system I shouldn’t have. I mean, I’ve worked in enterprises before. It’s like, why does Karissa have this level of access? We need to go and investigate. There are whole teams of people looking at this stuff manually. What’s going on there?

Matt Caulfield [00:47:01]:
Yes. So there’s a lot of exciting innovation in that space. Cisco recently announced Identity Intelligence back in February, which is a product that I’m intimately involved with. It takes a data-first approach to identity security. So, yes, we need to put stronger MFA in place. Yes, we need stronger authentication. But you can’t assume that’s always gonna work or that you’ve set it up properly. You need a compensating control there that’s providing monitoring.

Matt Caulfield [00:47:25]:
So what Identity Intelligence does, and what we see a lot of organizations starting to do, is collect this treasure trove of information that’s inside the identity systems. These single sign-on systems spit out hundreds and thousands of logs every minute of people logging in. We can see which user is logging into which application, using what device, through what network. All that information we can start processing with AI to understand: all right, this looks good and normal and secure, or it doesn’t. And if it doesn’t, we can raise a flag and send it to the SOC, and they can take action.
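A toy version of the log-driven monitoring Matt describes: score each SSO login event by how many of its attributes are new for that user. Field names and the flagging threshold are invented for illustration; this is not Identity Intelligence's actual model.

```python
def score_login(history: list, event: dict) -> int:
    """Count how many attributes of this login are unseen for the user.
    A SOC might flag events scoring 2 or more for review."""
    score = 0
    for attr in ("device", "network", "app"):
        seen = {e[attr] for e in history}
        if event[attr] not in seen:
            score += 1
    return score

# Hypothetical login history for one user, distilled from SSO logs.
history = [
    {"device": "laptop-1", "network": "office", "app": "email"},
    {"device": "laptop-1", "network": "home", "app": "crm"},
]
suspicious = {"device": "unknown-pc", "network": "foreign-vpn", "app": "finance"}
```

Real systems weight these signals (impossible travel, device posture, privilege of the target app) rather than counting them equally, but the shape of the problem is the same: compare each event against an established baseline.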

Karissa Breen [00:47:56]:
And then what happens? It isolates that person immediately, or, like, what?

Matt Caulfield [00:48:00]:
It depends on the company. Some people don’t want to get into a career-limiting move where they automatically turn off access for the CEO. So there’s usually a human in the loop these days, but I think we’ll get to the point eventually where these things are more automated. We can at least automate reauthentication today. Like, hey, we saw something weird, let me just try to reauthenticate you, maybe force you to log in one more time. I think eventually we might get to the point of quarantining an account automatically, if we’re certain that it looks like it was compromised.

Karissa Breen [00:48:29]:
So, for example, if they know I’ve gone on vacation and it looks like someone’s leveraging my credentials in a weird place to log into something that’s privileged, like a finance back end or something like that, you were saying that my account would automatically be quarantined. Is that right?

Matt Caulfield [00:48:45]:
Yes. Exactly.

Karissa Breen [00:48:46]:
Or you said you’re moving, sorry, towards that automatically being quarantined.

Matt Caulfield [00:48:49]:
Today, we have a human in the loop for all of these things just because, like I said, you don’t wanna lock out the wrong person at the wrong time. But you can imagine, yeah, an attacker signing in while you’re on vacation. You know, maybe you’re in finance and your quarterly earnings are in a couple of days, and they go into the finance system, take that information out, and do a little bit of insider trading. That’s kinda scary, and you don’t want that to happen. So you need to be monitoring when an account is taken over and what it’s accessing, and look for those anomalies. Like, okay, this is strange. This person’s logging in from a new device, from a new location, to an application that they shouldn’t have access to.

Matt Caulfield [00:49:20]:
You know, maybe during quarterly earnings things are more locked down, and they’re trying to access the finance system. Maybe that shouldn’t be happening. So let’s raise a flag on that and have somebody take a look at it.

Karissa Breen [00:49:29]:
So going back to, you know, all the logs and the data, or ‘day-ta’, speaking to an American. What I’m hearing you say is you take a blueprint of someone, like, in the type of role that I do, and you say, all right, these are all the systems you typically get, this is the type of access that you get. Are you sort of gonna be mirroring it off Karissa’s identity, based on someone else already working there? Is that how that works in terms of the PAM side of things, but then also managing if someone’s stealing my credentials, etcetera?

Matt Caulfield [00:49:56]:
Right. So we’re looking for patterns. We look at a lot of information: who you are, what the attributes about you are, what department you’re in, who your manager is, what your role or job title is in your organization. We’ll also look at the policy associated with your account. So if we look into the identity systems, we can see, okay, Karissa has access to applications A, B, C, and D. And we can also see your behavior: you usually access application A on Mondays and B on Tuesdays. We can take the information about who you are, what you have access to, and what you’re doing with that access to create what you’d call the blueprint. We call it a user 360 view: understanding you from all angles, from a bunch of different systems, in terms of who you are, what privileged access you have, and what regular access you have.
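The "user 360" blueprint Matt describes boils down to three layers: who you are (attributes), what you may access (policy), and what you actually do (behavior), with deviations flagged differently depending on which layer they break. A hypothetical sketch, not Duo's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class UserBlueprint:
    department: str
    entitled_apps: set                                  # from identity-system policy
    usual_apps: set = field(default_factory=set)        # learned from behavior logs

    def check(self, app: str) -> str:
        """Classify an access attempt against policy and behavior."""
        if app not in self.entitled_apps:
            return "violation"   # outside policy entirely: investigate
        if app not in self.usual_apps:
            return "unusual"     # entitled but out of pattern: maybe reauthenticate
        return "normal"

# Illustrative user: entitled to email and the CMS, but only ever uses email.
kb = UserBlueprint("media", entitled_apps={"email", "cms"}, usual_apps={"email"})
```

The two-tier response mirrors what Matt says later: "unusual" might just trigger a step-up reauthentication, while "violation" goes to a human in the loop.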

Matt Caulfield [00:50:40]:
So if we see a deviation from that, we can flag it.

Karissa Breen [00:50:43]:
What I’m getting as well from what you’re saying is this will reduce, like, credential stuffing, for example?

Matt Caulfield [00:50:48]:
I don’t know if it will reduce attackers attempting credential stuffing, but I do think it will help us detect attempts at credential stuffing. And we’re already seeing these capabilities built into tools like Duo to detect credential stuffing more readily. Faster, right when it’s happening, rather than after the fact. Credential stuffing is, you know, a great way to take over the first-factor credentials for an account. Hopefully somebody has put multifactor auth in place. But if we can say, hey, we see a lot of credential stuffing in your organization, it helps motivate the need for implementing stronger forms of multifactor as well.

Karissa Breen [00:51:22]:
And do you come across that often in your experience?

Matt Caulfield [00:51:24]:
Quite a bit. We’ll run identity assessments. So as part of our identity security offerings, we do this identity security assessment where we’ll come in and take 30 days of data. It’s all read-only, so we’re just pulling data in and analyzing it. And oftentimes we do find attacks in the wild. We don’t always find a smoking gun of a compromised account, but we can give you a profile: these are the types of attacks that your organization faces, and from where. So we see credential stuffing, we see session hijacking, we see users sharing authenticators, sharing devices, and we paint a picture for the customer of, this is the type of thing you’re up against.

Karissa Breen [00:52:00]:
So, Matt, from your experience... I don’t know, like, again, as I said earlier, it’s easy for us to have this chat, but it’s harder to implement. Why would you say identity is still so challenging?

Matt Caulfield [00:52:12]:
Yeah. Like, why haven’t we fixed this quite yet? And why is it so hard to get right?

Karissa Breen [00:52:17]:
I was thinking that, but you said it. But yes. Yeah.

Matt Caulfield [00:52:19]:
It’s difficult. Part of it is the human element, which we mentioned already: we’re not dealing with patching a server. Right? A server can’t really argue with you when you’re patching it. People are a little bit more difficult to deal with, and it’s difficult to change their behavior. That human factor is probably the hardest thing: getting people to adopt more mature ways of doing authentication and to change their day-to-day behavior. So we need tools that are very easy for them to adopt and that feel very natural as part of their day-to-day work. So that’s a piece of it. I think the other thing is that attackers continue to evolve.

Matt Caulfield [00:52:54]:
Matt Caulfield [00:52:54]:
So just as much as the tools are evolving, becoming more user friendly and more secure by default, you know, passwordless and things like that, the attackers are evolving too. With the pandemic, you suddenly had companies enabling people to work from home. They were already doing that, but in effect they were also enabling attackers to work from home. You have all these remote access technologies where anybody can access anything from anywhere. Well, suddenly, anybody means anybody. Anybody on the Internet can access anything from anywhere. And so we’ve seen a rise in identity-based attacks just because this is the easiest way for attackers to get in. And we’re seeing more and more of that lately, especially with people on LinkedIn. You’re able to do reconnaissance: hey, I’m gonna search for everybody in this organization who’s part of the IT department, who has a Microsoft certification or a Cisco certification.

Matt Caulfield [00:53:42]:
And that’s gonna lead me to the network admin or maybe the identity admin pretty readily, and then I can go target them with social engineering. I can pretend to be their relative or a coworker or somebody from the help desk or the IT department, call them up, and try to harvest their credentials so that I can use them.

Karissa Breen [00:53:59]:
So I just wanna end on maybe one last question. You said, obviously, the challenges come down to people and how they are; you know, we can’t configure human beings as much as we’d like to. You said you want things to feel more natural. What does natural look like in your eyes?

Matt Caulfield [00:54:11]:
Right. Ideally, you know, your ideal work day: I sit down at my desk, open my laptop, either show my face to the camera or put my fingerprint on the fingerprint scanner, and that’s it. Throughout the rest of the day, I’m logged in. The trust is established from 9 AM or 8 AM or 7 AM, whenever you start your day, and it carries with you throughout the day. Of course, if you walk away from your system and lock it, you’ll have to reauthenticate when you come back to it. But other than that, we don’t want every little application throughout our day interrupting our flow. You know, nothing drives me more crazy than when I’m trying to do something and suddenly I get distracted.

Matt Caulfield [00:54:47]:
And then I can’t remember what I was doing in the first place. So staying secure while staying out of the way is the ultimate goal. And Duo has that capability with something called Passport, which we announced back at RSA in May, which makes it so you can do exactly that. You log in with a fingerprint to your laptop, and then throughout the day you’re not reauthenticated again. It’s remembered.

Karissa Breen [00:55:08]:
Just a rudimentary question. I’m just curious to know: if you’re counting 50,000 seats and everyone’s getting logged out, what would be the reduction in productivity? People going down to the help desk: I’ve forgotten my password, I don’t know, now I’m over it, I’m gonna go get a coffee, I’m constantly being logged out. I mean, I’ve done it, but imagine multiplying that by 50,000 people; you know, 20 seconds here and there can add up.

Matt Caulfield [00:55:30]:
Exactly. Yeah. Five seconds here, ten seconds there. It adds up dramatically throughout the day. We’re starting to do those types of studies. I don’t have an answer for you yet, but my expectation is that it’s not trivial. There are probably millions of dollars a year wasted in the average company on just trying to log in and dealing with authentication issues.

Matt Caulfield [00:55:49]:
That’s a guess, but I wouldn’t be surprised.

Karissa Breen [00:55:53]:
And there you have it. This is KB On The Go. Stay tuned for more.
