The Voice of Cyber®

KBKAST
KB On The Go: Zenith Live 2024 (Part 2)
First Aired: September 19, 2024

In this bonus episode, we’re joined by Claudionor Coelho, Chief AI Officer, and Deepen Desai, Chief Security Officer & Head of Security Research at Zscaler, as they share the latest in zero trust networking and AI security to protect and enable organizations. Claudionor discusses the societal implications of AI, the fears of obsolescence, and the generational changes in communication, providing a comprehensive look at the future of AI in both the digital and human landscapes. Deepen highlights the potential of AI to transform cybersecurity through initiatives like Zscaler’s “copilot” technology, the use of predictive models to foresee and mitigate breaches, and the pivotal shift from reactive to proactive cybersecurity measures, underscoring the necessity of a zero trust architecture to minimize breach impacts.

Claudionor Coelho, Chief AI Officer, Zscaler

Claudionor Coelho brings a wealth of expertise to help Zscaler deliver a competitive technology advantage through the development of AI and ML innovations. Prior to joining Zscaler, Coelho served as the Chief AI Officer and SVP of Engineering at Advantest, where he spearheaded the development of a Zero Trust private cloud solution tailored for the semiconductor manufacturing market. Before Advantest, Coelho was the VP/Fellow of AI and the Head of AI Labs at Palo Alto Networks, where he led the charge in AI, AIOps, and Neuro-symbolic AI, an advanced form of AI that enables reasoning, learning, and cognitive modeling, to help revolutionize time series analysis tools on a massive scale. Coelho’s career also includes vital roles in ML and Deep Learning at Google, where he developed a state-of-the-art Deep Learning technology designed for automatic quantization and model compression, which played a pivotal role in the search for subatomic particles at CERN.

Deepen Desai, Chief Security Officer & Head of Security Research, Zscaler

As Chief Security Officer & Head of Security Research at Zscaler, Deepen Desai is responsible for running the global security research operations as well as working with the product group to ensure that the Zscaler platform and services are secure. Deepen has been actively involved in the field of cybersecurity for the past 15 years. Prior to joining Zscaler, he held a security leadership role at Dell SonicWALL.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Karissa Breen [00:00:16]:
Welcome to KB On the Go. And today, I’m on the go in sunny and hot Las Vegas with Zscaler, and I’m reporting on the ground here at the Bellagio for the Zenith Live conference. Zenith Live is the premier learning conference where experts converge to share the latest in zero trust networking and AI security to protect and enable organizations. I’ve got a few executive interviews up my sleeve, so please stay tuned. Joining me now in person is Claudionor Coelho, chief AI officer from Zscaler. So, Claudionor, thanks for joining, and welcome.

Claudionor Coelho [00:00:45]:
Thank you very much.

Karissa Breen [00:00:46]:
So yesterday, you presented. Talk to me a little bit more about what you presented on.

Claudionor Coelho [00:00:50]:
I did a presentation on how we can use generative AI and graph neural networks to improve cybersecurity. One of the things that we did is that recently we announced the copilot at Zscaler, and some of the technologies that I presented yesterday were used to build the copilot technology.

Karissa Breen [00:01:10]:
So when it comes to AI in general, and I’ve spoken a lot about this on my podcast, do you think people are fearful of it? Because maybe it’s mainstream media, who aren’t technologists at heart, engendering a lot of negative energy towards it. So would you say that people just generally seem worried about it?

Claudionor Coelho [00:01:32]:
That’s a very good question. I’m a member of the World Economic Forum AI Group, and last year they were discussing, like, everyone, government officials, people, they were worried that AI, specifically large language models, ChatGPT-type technology, was going to take over the world. People are realizing that that’s not going to happen anytime soon, because large language models by themselves, I usually joke, are like expensive toys, because they hallucinate a lot, and that makes it very hard to create, like, a trustworthy product based on them. But when you add all the infrastructure that you created before, like tools, algorithms, connections to other systems, then it makes a really powerful asset for a company.

Karissa Breen [00:02:22]:
When you said anytime soon, when’s anytime?

Claudionor Coelho [00:02:25]:
So I still believe that we have a long way to go. Anytime soon is basically like this. If you just take the LLMs themselves, they do not have the capability to retrain themselves. You need a whole software system at the back to connect to search, connect to other things. Suppose, for example, you have a company that went to IPO last month. The LLM’s cutoff date was before that. That means that all the training data they used to create the LLM happened before the IPO of this company. So if you ask the LLM any questions about it, it will say, I don’t know that information, because my cutoff date was before that time.

Claudionor Coelho [00:03:07]:
However, if you start connecting search engines to the LLM, and you use the LLM as your next-generation user interface, then it makes something really powerful. We call them AI agents, and they have, like, a much better capability to answer questions and to interact with systems. And I think that is really the advantage of large language model based systems as opposed to just the LLMs.
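
To make the agent idea above concrete, here is a minimal sketch of the "LLM plus tools" pattern Claudionor describes, where a search step supplies facts newer than the model's cutoff date. The helpers search_web and call_llm are hypothetical stand-ins for a real search API and a real LLM client:

    def search_web(query: str) -> str:
        # Placeholder: a real implementation would call a search API and
        # return result snippets. Canned text keeps the sketch runnable.
        return "AcmeCo completed its IPO last month at $30 per share."

    def call_llm(prompt: str) -> str:
        # Placeholder: a real implementation would call any chat LLM.
        return f"(LLM answer grounded in the prompt below)\n{prompt}"

    def answer_with_retrieval(question: str) -> str:
        # 1. Retrieve fresh context the model's training data cannot contain.
        context = search_web(question)
        # 2. Ask the model to answer from that context, not from its memory.
        prompt = ("Answer the question using only the context below.\n"
                  f"Context: {context}\nQuestion: {question}")
        return call_llm(prompt)

    print(answer_with_retrieval("What was AcmeCo's IPO price?"))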

Karissa Breen [00:03:34]:
So can I just ask, don’t you think, like, where we are with AI in general and what people think, whether it’s negative or whether it’s positive, wasn’t it sort of inevitable that we were going to get here eventually? Are people surprised, would you say, that we’re here, looking at even the last 20 years and how much technology has evolved?

Claudionor Coelho [00:03:53]:
I’m going to quote Gartner. I was at Gartner watching a presentation, and they said LLMs are here to stay. So if you think about how people developed software systems before ChatGPT, like November 30, 2022, people used to have a PM that would imagine how you would interact with the software, and they would create the user interfaces and the flow for the user interfaces, and if you wanted to use the system in any way that was different from that, then you’d have to adapt yourself to the way that the person who designed the software wanted you to interact. With generative AI or large language models, you can actually detect the user intent, and that is a huge advantage, because now, instead of having the person adapt to the way that the software works, the LLM detects what you want to do and adjusts dynamically how the software will work to answer your question, and that’s what the real advantage is.

Karissa Breen [00:04:58]:
So from your perspective, being part of the World Economic Forum, what do you think people are fearful of? Now, I keep asking this because I’m very much for it, but then, you know, as I mentioned before, there’s, like, mainstream media, etcetera, that sort of comes in and tries to create this story that, like, you know, the world’s over and robots are taking over. Like, what do you think scares people? And I have some context around that, but maybe you answer that question first.

Claudionor Coelho [00:05:22]:
People are fearful of the unknown. They don’t know what’s going to happen in the future, and one of the things that I usually say is this: in 1970 or ’72, someone went to New York, to Madison Square Garden, took a brick, something very large, and started talking into it. It was the first cell phone conversation in the world, and if you had asked the people around, do you want a cell phone? They would say, why do I need a cell phone? And it took, like, maybe 20, 30 years until the cell phone industry matured, and then we have, like, iPhones now. What’s happening right now is that the speed at which this technology is advancing is much faster than anything else we have seen before. And that’s why people are scared, because it took, like, 30 years for people to get acquainted with the cell phone. And remember, if you just think about the cell phone, it created business for app development, it created new opportunities even for people who sell cases for iPhones, okay, or cell phones. But right now, people don’t know how long it’s going to take until this technology matures, or if it’s going to mature or not, and which opportunities it will create, but there are a lot of opportunities.

Claudionor Coelho [00:06:40]:
I was looking a few weeks ago, and if you look at Anthropic, Anthropic is the maker of one of the large language models. They had a job posting for prompt engineers, saying, if you’re looking to work with us as a prompt engineer, we are requesting that you have two, three years of experience. Remember, two, three years ago there was no prompt engineer, there was no large language model, no ChatGPT. And they were saying you don’t need to have a degree, you don’t have to have more than two, three years of experience, but they were looking for people. So that shows that it just created opportunities for a new market, for new programs, for new qualifications for people, that did not exist before.

Karissa Breen [00:07:25]:
Sure. Absolutely. Okay. So there’s a couple of things in there. So would you say that people’s fears would be fear of the unknown, but also the speed? Because, as you were saying before, originally, yes, things would come out, but not at the same velocity as now. So then what are the opportunities that do exist? And you’re right, like, even when the Internet started, there was no Google; all of that sort of started up in the nineties when the Internet started. So it has created jobs, like Zscaler and all that, all this time ago. So what are the opportunities that do exist that you foresee?

Claudionor Coelho [00:07:57]:
So first of all, you have to understand that whenever a new technology comes in, it’s used both for good and for bad. So imagine that I can turn on hallucination. Large language models are known to hallucinate, which means they generate, like, completely out-of-the-blue answers for you. You could call that creativity; when you want them to take some action, they call it hallucination, and it’s bad, but hallucination in itself is not a bad thing. In my talk yesterday, I was saying, let’s suppose you want to create a new product in health care, and I was even joking, saying I’m not going to be talking about cybersecurity, because if it gives me a good idea, I’m not going to mention it to you here in my presentation. But if you go to an LLM and turn on hallucination, and you say, give me ideas for products and names in the health care industry and explain to me why this name is a good name. And it gave me very good names. One of the names, I would have to look at my presentation here, but it gives really credible names and marketing names for a product that would probably be a hit in the market. So it makes our creativity go anywhere you want, because you can evaluate opportunities and you can evaluate scenarios much quicker than you could before.
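
For readers who want to try this, "turning on hallucination" corresponds loosely to raising the sampling temperature. A quick sketch using the OpenAI Python client; the model name and prompt are just examples, not what was used in the talk:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",      # example model; any chat model works
        temperature=1.3,     # higher temperature = looser, more inventive output
        messages=[{
            "role": "user",
            "content": "Give me ten product names for a new health care "
                       "product and explain why each name is good.",
        }],
    )
    print(response.choices[0].message.content)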

Karissa Breen [00:09:17]:
Where do you think we’re going to traverse to now? As you said, the velocity is there; things are increasing much faster than before. What do you see in the next 12 months, realistically, but then also broaden that into the next 5 years in terms of AI? Like, what are the things that we’re going to see, even at a consumer level?

Claudionor Coelho [00:09:33]:
So let’s split this into two parts. Number one, large language models. Number two, deep learning, including graph neural networks. For example, a few months ago, Google DeepMind created a startup to work on new drug discovery, because they have been using graph neural networks and deep reinforcement learning to search for new drug compounds. Remember that whenever you buy a new drug, or if you watch an advertisement for new drugs on TV, they basically say how wonderful the drug is, and then at the end they say this drug, although it’s wonderful, may kill you, may give you some side effect, and the list of side effects is very extensive. So there is a need to search for new drugs in the market that will reduce the side effects, and that’s why Google basically created, or funded, this new company just to search for new types of drugs. And you can imagine that those types of systems, because they’re operating maybe at the atomic level to search for new compounds or new connections between drugs or molecules, they can search for new materials, they can search for new drugs, they can search for, I don’t know, medical supplies, tools, and things like that. Okay? That’s point one.

Claudionor Coelho [00:10:52]:
Now you need to interact with those systems, and what is the best way for you to interact with those systems other than spoken language or written language? So you can interact with those systems saying, oh, you know what? I did not really like the way that this molecule is behaving. Let me try it a little bit in a different way, and the system starts doing the thinking by itself. So you can think of this as, like, a copilot that helps you think about the problem that you’re trying to solve.

Karissa Breen [00:11:22]:
Isn’t that better though? Because why would you wanna use your own computational power when you have a machine to do it for you?

Claudionor Coelho [00:11:27]:
Yeah. And that’s one of the real advantages of this technology, because that’s how we are going to see it being used in the near future. New materials, new drugs, new ways to solve problems, or new products in the market. So you’re going to be accelerating the way we can introduce or solve real problems.

Karissa Breen [00:11:45]:
Do you think as well that, in terms of people’s skill sets, people are sort of, I don’t want to say the word lazy, but maybe people are like, oh, well, now I’ve got to upskill again when I just went to university, or college, or whatever you call it here, for, like, 4 or 5 years, and now I’ve got to do something again. Do you think people just aren’t motivated to go out and learn a new skill set because their current skill set could potentially become obsolete?

Claudionor Coelho [00:12:07]:
That’s one of the fears people have, that if the machine is doing so well for you, then people start getting, let’s call it, numb, for lack of a better term. And there is this movie called Idiocracy that talks about the IQ of people going down because the machines started doing everything for them. But in fact, on the other side, you can imagine that people will just find new ways or start solving additional problems. The way that I think about it is that AI, large language models, and deep learning in general are going to become your exoskeleton. I don’t know if you ever watched the movie Aliens, but there is this part in the movie where Sigourney Weaver puts on an exoskeleton to fight an alien, and she knew that she would not have enough strength to fight the alien without the exoskeleton, but the exoskeleton, which at the time was a mechanical piece, would help her have, like, superhuman strength. So the way that I like to think about this technology is that it’s going to enable you to have superhuman strength.

Karissa Breen [00:13:09]:
So do you think people’s IQs are going down, would you say?

Claudionor Coelho [00:13:13]:
No. I mentioned that movie because that is, like, what people fear: that it’s going to take over so much of the intelligence, or the jobs, even most of the jobs that people do, so that people will not have anything to do. Okay?

Karissa Breen [00:13:30]:
What about the fact that we’re reliant on it, though? So, for example, nowadays no one goes anywhere without their phone. You’ve got your wallet, you’ve got your navigation, you’ve got talking to someone. So we are reliant on a phone, but back in our parents’ days there were no phones and stuff like that. You just had to know where you were going, and you had to pray that the person you were meeting up with was there on time, and all that type of stuff. So now, are we going to become more reliant on this technology, to the point we can’t live without it?

Claudionor Coelho [00:13:58]:
So let me give you, like, an example. When I was a kid, that’s when VHS, the videotape, came out. At that time, my father bought a video cassette machine. It was one of the first in the family, and I subscribed to a company where you could rent the tapes to watch at home, and I had an aunt who basically came to me and said, I don’t know why you’re spending money on this. People will never replace the movie theater with just watching movies at home. And now we have Netflix. So it changed the way that people do business. I still value human interaction a lot. I think the one thing people have to understand is that human interaction is really what people need to focus on.

Karissa Breen [00:14:42]:
Do you think people are focused on that though?

Claudionor Coelho [00:14:44]:
Sometimes, no. Because if you look at video games, I see that the new generation of kids try to spend more time on video games than actually going out and spending time together. But I wish, and I think people eventually are going to realize, that interaction is important, human interaction.

Karissa Breen [00:15:00]:
So I’m a millennial, but the generation below me is gen Z. So would you say that their human interaction level is not the highest in comparison to other generations? And so what does that then look like?

Claudionor Coelho [00:15:13]:
Not that it’s the highest, but it’s different. Each generation basically changes the way that people interact with each other. And it’s very funny, because my wife was complaining to me recently that the kids these days only use, like, chat, Discord, or WhatsApp to talk to other people, and that did not happen before. And then I showed her a picture from 1930 of people on the subway reading newspapers.

Karissa Breen [00:15:38]:
Yeah. Oh, yes. I think I’ve seen that, and now it’s replaced with a phone.

Claudionor Coelho [00:15:42]:
Yes. So it changes with the technology.

Karissa Breen [00:15:44]:
So what excites you the most about AI?

Claudionor Coelho [00:15:47]:
It’s the possibilities. Right now, for example, Natalia was congratulating me because I was a seed investor in a company in Brazil 10 years ago, and the company has just been acquired. It was the first deep learning company in Brazil, the one I invested in. And at that time, we had to rebrand the company, to create scenarios and do a lot of, like, what-if analysis. And right now, if I have, like, ChatGPT, or if I have large language models, I can start doing that analysis and basically running several scenarios in parallel. And although they will hallucinate, and I will throw away maybe half of the stuff that comes back to me, some of the scenarios they generate for me are actually going to be good enough that I can follow up on them. And just this opportunity, this possibility of large language models and generative AI being able to do this analysis in a much faster way, it’s, like, incredible.

Karissa Breen [00:16:47]:
In terms of people listening, and going back to your comment before around leveraging large language models, or AI more specifically, as a copilot, what would you say to people to start thinking more along those lines? Not necessarily replacing, but a tool that’s assisting. Do you have any advice for people?

Claudionor Coelho [00:17:07]:
Do not fear it. Embrace it, because I think in the future it’s not going to be about AI replacing you; it’s going to be, like, if you don’t use this tool to your advantage, then you’re going to become obsolete.

Karissa Breen [00:17:23]:
Do you think people are aware of that?

Claudionor Coelho [00:17:25]:
The next generation is aware of that. The new generation, like generation Z or alpha, they are aware of that because they’re using it extensively. I teach at a university in the Bay Area too, and I actually tell my students, whenever you’re going to write a report, ask ChatGPT, or ask a large language model, to help you write the report. It’s not for it to write the report entirely, because it’s going to hallucinate badly, but if you do it in a controlled way, it’s going to generate a lot of content for you that you can utilize later on.

Karissa Breen [00:18:00]:
What do you mean hallucinate badly? What does that mean?

Claudionor Coelho [00:18:02]:
So, that’s a very good point. My wife is a medical doctor, and she was asking me to show her how large language models could help her create papers. She gave it a topic and asked it to generate references. None of those references actually existed. It made up those references. I told her, whenever you ask a large language model to create references for you, you have to double-check, because you may find out that they are invented. They’re made up.
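
One cheap safeguard against invented references, sketched here against the public Crossref API: a DOI that cannot be resolved was almost certainly made up. Note this only catches fabricated DOIs, not real papers cited for the wrong claim:

    import requests

    def doi_exists(doi: str) -> bool:
        # Crossref returns 200 for registered DOIs and 404 for unknown ones.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    for doi in ["10.1038/nature14539", "10.9999/probably.invented"]:
        status = "found" if doi_exists(doi) else "NOT FOUND, likely hallucinated"
        print(doi, "->", status)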

Karissa Breen [00:18:34]:
Wow. So you’re saying it’s a bit of a wild west at times.

Claudionor Coelho [00:18:38]:
Yeah. That’s why I said that you have to be very careful when you build a copilot, to constrain hallucination as much as possible.

Karissa Breen [00:18:45]:
What do you think about these large media outlets that are now complaining, trying to, you know, sue OpenAI over leveraging their content for large language models? What do you think about that?

Claudionor Coelho [00:18:56]:
That’s a very, very good question. You have to understand, in part of the presentation that I gave to some universities recently, I told them that written content has eroded badly in the past few years. This is, like, the fourth copilot I’ve worked on in my life, and think about this. It used to be that whenever people would write a technical document, or any kind of document, they would start by thinking about section topics, then they would write the topic sentences, then they would expand the topic sentences into paragraphs, and that’s how you would write any kind of document. Right now, I have seen documentation that says, the next picture shows everything you need to know about whatever the topic is. They show a very complex picture, almost like an IKEA assembly manual, and you cannot extract any information from that picture, because it is just too complex. And large language models rely more and more on well-written pieces for training. Remember, LLMs communicate with you through spoken or written language, so they need to be trained on very well-written documents.

Claudionor Coelho [00:20:15]:
And you cannot train them without well-written documents, and some people are saying that they’re going to run out of training text to train the next generation of large language models by the end of this year or next year.

Karissa Breen [00:20:26]:
So would you say that media outlets are within their rights to feel violated by this?

Claudionor Coelho [00:20:33]:
To a certain degree, yes. I don’t have, like, a solution to this, whether they should use the content or not, whether they should pay fees for it. But to a certain degree, this is the same problem the search engines have had with the media. Remember, the media companies complained that if you go to one of the search engines and you search for news, you may end up with a news story that nobody paid for, and they complained that it was reducing their revenue. So to a certain degree, it’s the same problem that has been going on in the search engine business. I think in Australia they were discussing that a few

Karissa Breen [00:21:12]:
They turned it off. I think it was, like, a year ago; they turned it off through social media, and then people complained, so then they turned it back on again pretty quickly.

Claudionor Coelho [00:21:20]:
But it’s the same problem. Okay? It’s access to information, and to tell you the truth, I think we have to somehow give credit to the person, especially because, in the case of large language models, they can leak the training data. Depending on how you ask a question to a large language model, it can start spitting out the training data that it was trained upon.

Karissa Breen [00:21:42]:
But what if I was, in your terms, hallucinating in my thoughts and created a media site that tried to convince everyone that the sky was purple, and then, you know, large language models are trained off of that, which is in fact fabricated?

Claudionor Coelho [00:21:57]:
Depending on how you write the prompt, if you tell the large language model, I want you to explain to me why the sky is purple, it’s going to give you reasons why the sky is purple.

Karissa Breen [00:22:06]:
What I’m saying is that it’s got to train off the content that’s out there. So if I created, you know, another media outlet that was complete garbage and made no sense at all, are you saying that some of these large language models could be trained off that?

Claudionor Coelho [00:22:19]:
So if you look at how people train large language models, there are two ways you can train them. You can train on web-scale data, all the text that is available in the world, and they usually do very little content classification. The hope is that if you have garbage, as you said, like badly written text, and you have good text, you have much more good text than bad text. But it’s still going to be trained on both of them. And then you fine-tune the large language model later on, only on good, high-quality text. There’s another type of large language model being trained right now, Phi-3 from Microsoft being one of them, where they train the models from the start with only high-quality text.

Karissa Breen [00:23:04]:
How do you define high quality text though?

Claudionor Coelho [00:23:06]:
You need a lot of people to actually evaluate it, and to basically tag the text: this is high quality, this is not high quality.

Karissa Breen [00:23:15]:
Who? What type of people?

Claudionor Coelho [00:23:16]:
They usually hire people to do the analysis, like contractors. For example, at some point OpenAI was even hiring people in different countries to tag the text for them as good quality or bad quality.

Karissa Breen [00:23:30]:
Is there, like, a framework that they follow to determine quality versus not high quality?

Claudionor Coelho [00:23:35]:
That’s a very good question. I don’t have the details on that, but I’m assuming that they have some internal guidelines for people to say this is good quality or bad quality. Of course, you can also do that using a large language model; maybe they have a two-stage process where the large language model first gives you an assessment, and then a person just says, yes, it’s true, or no, it’s not true.
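
As a rough illustration of such a two-stage process (the grading rule and corpus here are invented; a real pipeline would prompt an actual LLM for the first pass):

    def llm_assess(text: str) -> str:
        # Stage 1 placeholder: a real system would ask an LLM to grade the text.
        return "high quality" if len(text.split()) > 8 else "low quality"

    def human_confirm(text: str, proposed: str) -> str:
        # Stage 2: a human reviewer accepts or flips the model's assessment.
        answer = input(f"Label '{proposed}' for: {text[:50]}... correct? [y/n] ")
        if answer.strip().lower().startswith("y"):
            return proposed
        return "low quality" if proposed == "high quality" else "high quality"

    corpus = [
        "The sky is purple because of a conspiracy.",
        "Rayleigh scattering preferentially disperses shorter wavelengths, "
        "which is why the daytime sky appears blue to human observers.",
    ]
    kept = [t for t in corpus if human_confirm(t, llm_assess(t)) == "high quality"]
    print(f"{len(kept)} of {len(corpus)} passages kept for training")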

Karissa Breen [00:23:57]:
Because then, going back to my example before around the sky being purple, it would then say this is low-quality content.

Claudionor Coelho [00:24:05]:
Someone would basically tag it as, this is not truthful, and then they would discard that piece of text.

Karissa Breen [00:24:10]:
Are those frameworks or those sort of parameters, are they publicly available?

Claudionor Coelho [00:24:15]:
Probably not. Okay. Sometimes what they do is disclose the training set. So they don’t tell you how they got to the training set, but they disclose the training set, or the algorithms used to search for the training set, but not how they qualify it as high quality.

Karissa Breen [00:24:32]:
Joining me now in person is Deepen Desai, CSO and head of security research from Zscaler. So Deepen, thanks for joining and welcome.

Deepen Desai [00:24:39]:
Thank you for having me.

Karissa Breen [00:24:41]:
So I know the last few days you’ve had a couple of sessions, so maybe run over what you’ve discussed.

Deepen Desai [00:24:45]:
One of the sessions that I delivered was a main stage keynote where the talk was focused around AI and cyber. It covered both innovation on the bad guy side, how they’re leveraging AI to target enterprises, and also how vendors, including us at Zscaler with the Zero Trust Exchange, are embracing AI across the platform to counter that. I did go through a few interesting innovations. One is something that we announced late last year called Breach Predictor. This is a product where we’re trying to combine generative AI with multidimensional predictive models to flag potential breach-like scenarios before they progress further. And the goal over here is, again, to harness the power of generative AI to prevent a breach before it progresses further. So that was one of the innovations that I talked about. I also demoed a generative AI driven attack where the threat actor just provides a single prompt.

Deepen Desai [00:25:46]:
Everything else is fully automated and dynamic in nature, handled by the rogue GPT variant that was being leveraged by the threat actor.

Karissa Breen [00:25:55]:
Can we talk about the predictive side of it? How does that work? Because there are lots of vendors coming out now saying we can do this and that, and we’re integrating gen AI. What does that really mean for people, though?

Deepen Desai [00:26:05]:
AI/ML has been around for several years. What has changed in the last couple of years is generative AI. The advent of generative AI definitely increases your ability to process vast amounts of data, and there is also this thinking element where you’re able to predict. So the way to think of what we’re doing is that we’re combining the generative element with the existing predictive element. The goal over there is, based on the intelligence that the team has compiled over the last 10 years, about 10,000-plus potential breach-like scenarios, to use that to train this AI breach prediction recommendation engine. Now, with all the real-time traffic that we’re seeing in an organization, we take those transactions and feed them into this engine, and it’s able to point out, okay, this is where there is a high probability of an actor or an attack campaign moving from stage A to stage B of a previously seen attack. So this is not a one-to-one match; these are variations of things that we have seen in the past.
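
A toy sketch of that matching idea: known breach scenarios as ordered stage chains, with observed telemetry compared against each chain to estimate how far along a campaign appears to be. The chains, stage names, and events are invented; Zscaler's actual engine is trained on its threat intelligence, not a lookup like this:

    # Invented examples of multistage breach-like scenarios.
    KNOWN_CHAINS = {
        "ransomware-style": ["phishing", "credential_theft",
                             "lateral_movement", "exfiltration"],
        "supply-chain-style": ["trojanized_update", "beaconing", "exfiltration"],
    }

    def stages_reached(observed: set, chain: list) -> int:
        # Count how many leading stages of the chain appear in the telemetry.
        count = 0
        for stage in chain:
            if stage not in observed:
                break
            count += 1
        return count

    observed_events = {"phishing", "credential_theft"}  # example telemetry
    for name, chain in KNOWN_CHAINS.items():
        done = stages_reached(observed_events, chain)
        if done:
            nxt = chain[done] if done < len(chain) else "complete"
            print(f"{name}: stage {done}/{len(chain)}, next likely: {nxt}")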

Karissa Breen [00:27:11]:
So from a research perspective, are you working on any research piece at the moment? Anything you can share in terms of insights?

Deepen Desai [00:27:17]:
I lead a team of global security experts called ThreatLabz. On an annual basis, there are 5 reports; I would encourage you all to take a look at them. They’re under research on the zscaler.com website. The most recent one, which will be coming out next month, will be on ransomware. Ransomware is an area the team tracks in a lot of detail. There are a lot of ransomware threat families where we know how they’re operating, the ransomware-as-a-service model, for instance.

Deepen Desai [00:27:46]:
A couple of things that we have seen over the last few years: ransomware started with encrypting the files and demanding ransom, then they added the exfiltration piece. Now, over the past couple of years, we’re seeing them not even encrypting files. They’re just exfiltrating files from your environment. The volume is very, very high in many of the cases, and the amount of ransom that we’re seeing these folks able to get out of the victim is significantly higher than what we used to see before, because of the type of victims that they’re going after and because of the type of data that they’re able to steal. I think last year the highest amount that we saw was around 35 to 40 million. This year, and this is probably not out there yet, it will come out with the report, it’s 75 million in a single attack that was collected by a ransomware operator, and again, this is purely because of the type of information that they’re able to get from victims.

Deepen Desai [00:28:45]:
Based on that, they’re able to get these types of ransom amounts paid out.

Karissa Breen [00:28:49]:
What’s the type of information that people are taking?

Deepen Desai [00:28:51]:
Without going into too much detail on the specific case, think about defense-related information, or drug information from a large pharmaceutical company, which is, you know, the next-level thing that the company is betting on. You’re seeing a lot of the health care sector getting caught up as well; that’s where there is patient information. Again, there is a lot of IP involved. That’s what these guys go after, and then they try to get large ransom payments out of it.

Karissa Breen [00:29:20]:
So you’re saying your current plan is to predict these breaches. How do you do that, though?

Deepen Desai [00:29:27]:
Right. So, look, ransomware is just one threat category. There are many other threat categories. What we have done is we have documented, like I was describing earlier, 10,000-plus multistage attack chains that are known breach-like scenarios. We’re then leveraging that in the product to train an LLM that we’re calling AI Breach Predictor. The goal over there is to flag what stage of a breach-like scenario an organization is at. Then we take a look at how the organization’s security controls are configured, and based on that, and based on the amount of activity we have seen in the environment till then, we’re using compute to figure out the probability of the next stage in that environment. A simple example: say you know a threat actor is using TLS to do certain activity, and the organization is not doing TLS inspection.

Deepen Desai [00:30:25]:
The probability of that next stage happening in that environment, and of them not catching or blocking it, is close to 100%. Right? Now that increases the overall probability of the entire breach scenario as well, by a certain percentage.
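
A worked toy example of that chaining: when a missing control (no TLS inspection) makes one stage near-certain to succeed undetected, the estimated probability of the whole scenario rises accordingly. All numbers are illustrative, not Zscaler's model:

    stages = [
        ("initial access via phishing",    0.30),
        ("C2 over TLS, no TLS inspection", 0.99),  # near-certain to go unblocked
        ("data exfiltration",              0.60),
    ]

    p_scenario = 1.0
    for name, p_next in stages:
        p_scenario *= p_next
        print(f"after '{name}': cumulative scenario probability = {p_scenario:.3f}")
    # With TLS inspection enabled, the middle factor might drop to ~0.2,
    # cutting the end-to-end estimate by roughly a factor of five.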

Karissa Breen [00:30:39]:
So are you saying that companies are gonna be reliant then on this predictive breach capability?

Deepen Desai [00:30:46]:
The goal over here is this: a lot of the modules that you see out there were always reacting when things happened. Sure, you would want to block known bad stuff, but there are always going to be those unknown unknowns. We need to go in the direction of having this proactive, preemptive, predictive approach, where you’re trying to get ahead of the unknown unknowns that we will see with AI-driven attacks.

Karissa Breen [00:31:11]:
But haven’t we been trying to get ahead of the unknown unknowns for years? And we haven’t quite gotten there?

Deepen Desai [00:31:16]:
Absolutely. Unknown unknowns, by definition, you’re not going to get ahead of them. That’s where having these preemptive modules comes in, combined with another important thing that I covered in the keynote: zero trust. Now, that term has been heavily used and abused, but if you think about it, if a true zero trust architecture is implemented, you’re able to contain the blast radius from an asset or an identity that gets compromised by these unknown unknown attacks. When I say unknown unknown, these are vectors that we don’t know of. AI doesn’t think like a human. If it’s an AI-driven attack, it will figure out ways that we haven’t thought of before. So what can you do? You use AI to fight that AI, which is what I’m describing on the breach predictor side, but you can’t wait for that perfect AI solution.

Deepen Desai [00:32:02]:
In addition to using these preemptive modules, you should invest in zero trust architecture. You’re basically shutting down a lot of these vectors for the bad guys, whether it’s a human-driven attack or an AI-driven attack. Your goal is to contain that blast radius to as small an asset volume as possible.

Karissa Breen [00:32:21]:
On the AI side of things, you’re right, but cybercriminals could equally use AI to, you know, attack us. So how do we get to an equilibrium here? Because, again, it’s a double-edged sword.

Deepen Desai [00:32:33]:
It is a double-edged sword. In fact, we’re already seeing them use AI to attack enterprises, whether it’s on the phishing side or the deepfake side; you’re seeing a lot of that in the news. What we’re going to see in the near future, and that was actually one of the demonstrations I did as part of my keynote, I actually showcased what a futuristic attack would look like, where all the AI needs is a prompt. The prompt that I gave to the AI module, we call it rogue GPT, was: target a company named Unlocked AI that recently invested $2,000,000,000 in AI/ML initiatives, with the goal of exfiltrating data from their dev and production AI/ML environments. That’s the only prompt that was given. Everything else after that was automated by the GPT module, and it’s able to think and reason.

Deepen Desai [00:33:24]:
So the first identity that it was able to compromise belonged to a finance person. Now, a finance person shouldn’t have access to the AI/ML environment. So the GPT then dynamically uses that identity to target an AI/ML employee, because an email coming from a finance person internally to the AI/ML employee would make more sense. So the point I’m trying to make is that we are already seeing them use it to a certain extent, but these end-to-end attacks are what we’re going to see in the near future. And that’s where, when I say you need to use AI to fight AI, yes, that’s very, very important to level the playing field. But then, if you use a true zero trust architecture, you’re able to also contain a lot of these new attack vectors that you’re going to see as these automated and dynamic AI attacks come into play.

Karissa Breen [00:34:17]:
What do you mean by true zero trust? You’re right, zero trust is a term that’s thrown around a lot, different vendors saying, you know, zero trust this and that, with their own versions of it. I think I spoke to your colleague yesterday about the definition of zero trust according to Zscaler. What does true zero trust mean? You said that a lot.

Deepen Desai [00:34:36]:
Yeah. So, the way the product got built on the Zscaler side, it actually aligned perfectly with NSA’s zero trust architecture definition that came out 10 years later. As per NSA’s zero trust security definition, you should never trust and always verify. You should explicitly verify with least-privilege access, and you should assume breach scenarios: what would happen if this device that you’re using were to get breached? What’s the blast radius? You’ve heard me repeat that term multiple times as well. So with those three fundamental principles in mind, when we devised the product, the platform, we looked at four important stages of the attack. What are you doing to eliminate your external attack surface? What are you doing to prevent compromise by applying consistent security to all your devices, no matter where they are, whether they’re in the office, whether you’re traveling, whether you’re at a conference like Zenith Live? It should be the same. The third stage is pretty big in terms of whether it’s a true zero trust solution or not.

Deepen Desai [00:35:43]:
This is where you prevent lateral propagation. The part that comes in over here is user-to-app segmentation, and the way we have done it is that we don’t bring users onto the same network as the application. So think about this device as an application that’s sitting in an application environment, and this is your user. The way we have done it, your application makes an inside-out connection to the Zscaler cloud, and the user connects to the Zscaler cloud. Once we authorize and authenticate the user, based on the policy that the organization has set, we’ll stitch these two connections together using a mutual TLS tunnel. Now, that’s what I call zero trust, because you’re not bringing the user onto the same network as the application, which is what the legacy architecture does, whether it’s VPN or firewall.

Deepen Desai [00:36:33]:
If you bring the user onto the same network as the application, no matter what type of ACLs you are deploying, the attackers will find an indirect path to the application. Right? I don’t have access, but this user has it; let me target that user and then get around it. So user-to-app segmentation is very, very critical. So that’s the third stage, prevent lateral propagation. And then, finally, the fourth stage: every attack is after your data.

Deepen Desai [00:36:58]:
So what are you doing to prevent data loss from your environment? All data egressing your devices should go through full TLS inspection, applying your EDMs, IDMs, custom dictionaries, and AI-driven data classification. Your goal over there is to make sure your data is not leaving your environment, which is extremely important when it comes to the modern ransomware attacks that we’re seeing.
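
A bare-bones sketch of the inside-out brokering Deepen describes (conceptual only, not Zscaler's implementation): both the app connector and the user dial out to a broker, which stitches the two sockets together, so the user never lands on the application's network. A production broker would authenticate both sides and run everything over mutual TLS first:

    import socket
    import threading

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until the source closes.
        while (data := src.recv(4096)):
            dst.sendall(data)
        dst.close()

    def accept_one(port: int) -> socket.socket:
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        return conn

    def broker() -> None:
        app_side = accept_one(9001)   # app connector dialed OUT to us
        user_side = accept_one(9002)  # user dialed out to us (after auth)
        # Stitch the two outbound connections into one logical tunnel.
        threading.Thread(target=pipe, args=(app_side, user_side),
                         daemon=True).start()
        pipe(user_side, app_side)

    if __name__ == "__main__":
        broker()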

Karissa Breen [00:37:23]:
On the policy side of things, I think I spoke to your colleague about this as well, having worked at an enterprise myself historically. We’ve got all these policy writers, but who’s adhering to it? Who’s implementing it? Who’s following it? Who’s governing it? Who even knows about it? So what happens with what you’re saying if people are not doing that at all? Isn’t there sort of a defect there?

Deepen Desai [00:37:43]:
You make an excellent point. In fact, in some of my sessions I’ve been sharing, literally, a playbook that we have vetted against a lot of these ransomware attacks that are seen out there. And in many of the customer scenarios, we see this playbook being successful in booting out guys like Scattered Spider and, you know, BlackCat, which is now disbanded. For the TTPs that we see over and over again in many of these attacks, if you follow that segmentation playbook, you will be able to defend against them. Now, in order to help organizations, what we have done is, again, integrate AI into the product. So the way to think of it is: you purchased the Zscaler platform, you’re using Zscaler Private Access to perform segmentation in your environment, but you don’t know what users need access to what applications. We’re leveraging AI to study three months of your data, and then that AI module will recommend that this group of users should be allowed access to this group of applications, based on the historical data that we saw.

Deepen Desai [00:38:46]:
We take into account those weekend updates or software updates; all of those are factored into the model. It will also tell you that, based on the type of assets that this group of users is accessing, they belong to the engineering group. So not only is it tagging who should access what, but it is also tagging: this looks like a development application, this looks like a development grouping. So we’re further simplifying the process of implementing zero trust segmentation using AI.
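
A toy sketch of that recommendation step: mine a window of access logs for which groups actually touched which internal apps, then propose least-privilege allow rules instead of wildcard policies. The log format and grouping rule are invented for illustration:

    from collections import defaultdict

    # (user, department, application) tuples observed over ~3 months.
    access_log = [
        ("alice", "eng",     "git.internal"),
        ("bob",   "eng",     "git.internal"),
        ("bob",   "eng",     "ci.internal"),
        ("carol", "finance", "erp.internal"),
    ]

    apps_by_group = defaultdict(set)
    for user, group, app in access_log:
        apps_by_group[group].add(app)

    for group, apps in sorted(apps_by_group.items()):
        # Everything not recommended stays denied: never trust, always verify.
        print(f"recommend: allow group '{group}' -> {sorted(apps)}")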

Karissa Breen [00:39:16]:
Okay, that’s interesting. So what do you think about moving forward, in order for people to start implementing this approach? Because, again, it’s easy for us to sit up here and say that, but when you’ve got, like, legacy systems, years and years of data, and things are everywhere, it’s not as easy to do. So what do you see people doing moving forward? Because it makes sense, but it’s not as easy to implement.

Deepen Desai [00:39:43]:
It’s definitely not easy. So look, I always call out zero trust transformation as a journey. It’s not a flip of a switch where you did this and now you’re zero trust. So, for the four stages that I described, you need to go through that journey. The number one piece of the playbook that I shared is that it’s high time you just eliminate your inbound VPNs for remote access. VPNs are being exploited over and over again by bad guys to gain entry into your environment. So get rid of external VPNs. At the bare minimum, make sure you’re not able to get to your crown jewel applications using those VPNs.

Deepen Desai [00:40:19]:
So number one is that. Number two is to prioritize user-to-app segmentation. Make sure you’re not using wildcard policies allowing all users access to all applications, because then you’re basically providing the attackers a way into your environment. Number three, you should prioritize proactive security layers that are part of a zero trust platform. And what do I mean by that? I mean things like cloud browser isolation and inline sandboxing, where the goal is to protect against unknown, net-new payloads, net-new malware attacks. Because if you think about it, when you use things like browser isolation, no active content ever lands on the user’s system. It’s a stream of pixels that they’re seeing if they’re going through a browser isolation chamber. So it’s a journey.

Deepen Desai [00:41:15]:
Start with reducing your external attack surface. Make sure you prioritize user-to-app segmentation, and then make sure you’re applying consistent security policy, with TLS inspection and proactive security layers like inline sandboxing and browser isolation.

Karissa Breen [00:41:32]:
So can I just ask, on the VPN side of things, why are organizations still using them? I mean, they’re pretty prevalent in large enterprises as well.

Deepen Desai [00:41:39]:
No, so look, we are all used to doing certain things a certain way. It’s been, what, three or four decades now that we’ve been doing networking a certain way? VPN was built for that older time. It’s hard to move away from something that you’re so used to, whether it’s VPN or firewall. So it requires that culture shift, that mindset shift, in order to move away from it. Now, what is making it more and more obvious over the last 6 to 9 months is that there have been so many zero-days coming out. The most recent one was, what, I think last week, with FortiGate.

Deepen Desai [00:42:16]:
Again, I’m not trying to name vendors here. It’s not the vendors; I mean, even Zscaler products will have issues. There will be vulnerabilities, but it’s the architecture. If an architecture is flawed, a threat actor will enjoy huge ROI on a successful vulnerability exploit. A VPN is sitting out there saying, I’m here, come connect to me. If there is a vulnerability that a threat actor is able to exploit, they’re able to get inside the corporate environment and do a lot of damage. So the ROI for a threat actor is pretty high.

Deepen Desai [00:42:48]:
So, coming back to your question, it’s a mindset shift, but with what’s happening over the last 6 to 9 months, that transformation, that shift, is accelerating. The number of people moving away from VPN is definitely increasing.

Karissa Breen [00:43:05]:
And there you have it. This is KB on the go. Stay tuned for more.
