The Voice of Cyber®

KBKAST
From Microsoft AI Tour 2024 – KB On The Go | Bret Arsenault, Janice Le, and Chris Lloyd-Jones
First Aired: March 26, 2025

In this bonus episode, we sit down with Bret Arsenault, Corporate Vice President and Chief Cybersecurity Advisor at Microsoft; Janice Le, GM of Microsoft Security, Compliance, Identity & Privacy; and Chris Lloyd-Jones, Head of Architecture & Strategy in the Office of the CTO at Avanade. Together they discuss Microsoft’s Secure Future Initiative (SFI), securing AI, and the culture and change program needed with AI.

Bret Arsenault is the Corporate Vice President and Chief Cybersecurity Advisor at Microsoft. With over 30 years at the company, he leads global efforts in information security, compliance, and business continuity. Bret oversees a team dedicated to protecting Microsoft’s assets and advises Fortune 100 leaders on cybersecurity strategies. He is also the Chairman of Microsoft’s Information Risk Management Council and a founding member of the Executive Security Action Forum (ESAF).

Janice Le is the General Manager of Microsoft Security, Compliance, Identity & Privacy. Based in the San Francisco Bay Area, she leads a global team dedicated to safeguarding Microsoft’s customers and their data. With a strong background in software development and cybersecurity, Janice drives innovation and strategic initiatives to enhance security and compliance across Microsoft’s vast ecosystem.

Chris Lloyd-Jones is the Head of Architecture & Strategy in the Office of the CTO at Avanade. He leads strategic initiatives and architectural frameworks to drive innovation and digital transformation. With a strong background in technology and leadership, Chris plays a crucial role in shaping Avanade’s technological direction and ensuring alignment with business goals.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Karissa Breen [00:00:15]:
Welcome to KB on the Go. Today, we’re coming to you with updates from the Microsoft AI Tour, on the ground at the International Convention Centre here in Sydney. Listen in to get the inside track and hear from some of Microsoft’s global executives. You’ll get to learn more about the exciting SFI and MSTIC cybersecurity solutions in depth, and you’ll be hearing from a select few Microsoft partners. We’ll also be uncovering exactly how the Australian Federal Police are leveraging AI to detect crime and keep people in our community safer, plus much, much more. KBI Media is bringing you all of the highlights. Joining me now in person is Bret Arsenault, Corporate Vice President and Chief Cybersecurity Advisor at Microsoft. Today, we’re discussing the Secure Future Initiative, also known as SFI, and the learnings from a year on.

Karissa Breen [00:01:16]:
So Bret, thanks for joining and welcome.

Bret Arsenault [00:01:18]:
Thank you. Thank you for having me, KB.

Karissa Breen [00:01:20]:
Okay. So Bret, obviously, before we started, we were talking a little bit about your tenure at Microsoft, which is quite a long time. So perhaps talk to us a little bit more about SFI. I mean, there’s a lot of acronyms.

Bret Arsenault [00:01:32]:
Sure.

Karissa Breen [00:01:33]:
Tell us a little bit more about what this means.

Bret Arsenault [00:01:35]:
Yeah. No. I think it’s great. Kidding aside, like, I’ve had five different careers, just at the same company. So it’s been fantastic. I think when we think about SFI: fundamentally, every year we look at, you know, what’s going on with the threat landscape, the technology landscape, and the regulatory landscape, and we decide how we wanna go address that and the work we’re doing to protect ourselves, in my role as the CISO, as well as protect our customers. And so SFI really came down to, as we look at, you know, the technology shifts that are going on, mobile, cloud, and now AI as the new platform. And we look at the increasing regulatory pressure.

Bret Arsenault [00:02:13]:
Added to that, a threat landscape that I think everyone’s very well aware of, growing in sophistication. It was an effort to sort of rethink the way we wanna go build our software as a company. It really came down to three things. One, we really wanna make sure that we think about how we’re gonna embrace and use AI to make a better and more secure world. Two, fundamental engineering changes in how you build software and services. And then three, really, we need to work on this regulatory harmonization globally, because I think that many of the companies I’ve met, including here in Australia, are really under a lot of pressure with their regulators. So that’s really what the fundamental framework of SFI is. And there’s a lot of focus, I think we see in the press, around the engineering part of it, the product engineering.

Bret Arsenault [00:02:56]:
And that’s really broken down into three simple things: secure by design, secure by default, and secure in operations. It’s not just that you wanna ship things out there; you wanna make sure they aren’t insecure when you do. So we’re actually changing the level of defaults, so that when things come out of the box, they turn on things like multi-factor authentication by default. And making sure that things don’t drift once you implement them. I think that’s probably the easiest way to break it down. Too much?

Karissa Breen [00:03:21]:
No. That’s perfect.

Bret Arsenault [00:03:22]:
Okay. Great.

Karissa Breen [00:03:22]:
So one of the things I wanna talk about with you is, I’ve been watching the event that you recently had in Vegas. Satya was up there on stage saying that security is the number one priority at Microsoft. And obviously, SFI is backing that up. Now, I know that we’re not gonna have a lot of time, but I’m keen to talk through the three principles that this is anchored on. So could you talk through them and what they mean?

Bret Arsenault [00:03:47]:
Well, I think the biggest three principles, for the engineering part, which is what you’re referring to, are this idea of secure by design, secure by default, and secure in operations. And so, across six pillars of technology that any company would work with: everything we build at design time, including threat modeling, including code analysis, all those things, they’re all in there by design. Then, when we ship things, we do more and more things secure by default. So, as I mentioned, turning two-factor on by default, services that ship with, you know, old legacy protocols deprecated by default, using the highest level of security. You know, the new thing we’re doing with Windows 11 is being able to run Windows 11 as a standard user, finally. All those things we’re doing by default. And then there’s all the operational rigor that goes into that. And so, ensuring that when people get these, they have to opt out instead of opt in for the security components.
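
To make the opt-out-instead-of-opt-in idea concrete, here is a minimal sketch (our illustration, not Microsoft code) of secure-by-default configuration; the field names are hypothetical:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ServiceConfig:
        """Hypothetical service settings: every security control defaults ON,
        so weakening anything is an explicit, visible opt-out."""
        require_mfa: bool = True               # multi-factor auth on by default
        allow_legacy_protocols: bool = False   # old protocols deprecated by default
        min_tls_version: str = "1.2"
        run_as_standard_user: bool = True      # least privilege by default

    secure = ServiceConfig()                            # the default path is the secure path
    risky = ServiceConfig(allow_legacy_protocols=True)  # opting out is explicit and auditable

The design choice mirrors what Bret describes: the safe configuration requires no action, and any drift away from it shows up in code review.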

Karissa Breen [00:04:39]:
And you mentioned before, because of AI now really coming into the fold, we’re in the AI era, as people would say. So do you think that was probably the main catalyst for implementing SFI? But also, you know, we had cloud adoption and that caused people to change, and then we had COVID, and then we’ve seen AI coming into it. So what would be your thoughts? I mean, obviously, you’ve been in the business a long time, and I’m curious, since you’ve seen the evolution.

Bret Arsenault [00:05:03]:
I have. I think every time there’s a tech change, and since I’m as old as I am, I’ve seen them all: from mainframe to PC, from PC to networked PC, from networked PC to Internet, from Internet to cloud, add mobile.

Karissa Breen [00:05:14]:
Mhmm.

Bret Arsenault [00:05:15]:
And so when you see these platform shifts, they fundamentally change the way you build software.

Karissa Breen [00:05:19]:
Sure.

Bret Arsenault [00:05:20]:
And the way you deliver services. We don’t deliver disks anymore. Right? It’s all on consumption and usage. The network doesn’t have the same control effectiveness, because people are connecting directly from their laptop to cloud services, whether it’s Salesforce or 365 or Google. And so you really need to rethink the way you’re gonna protect things in that environment. AI was not the catalyst for it, but it was certainly a large contributor. Because it gives us new capabilities, both as a threat and, more importantly, as capabilities for productivity and how we go secure in that model as well. So it was definitely part of the conversation, but it wasn’t the only motivation for doing it.

Bret Arsenault [00:05:55]:
But just to be clear, I do think that AI gives the defenders the upper hand, an asymmetric model that we didn’t have before when fighting against adversaries. I’m super excited about that.

Karissa Breen [00:06:08]:
Well, the reason why I asked you that question, Bret, is, would you say, in terms of the shift, as you just mentioned before around the Internet era and cloud and all that, do you think what we’re dealing with today has probably been one of the largest shifts in terms of how companies are approaching building software and engineering?

Bret Arsenault [00:06:25]:
Well, what’s gonna be the shift, or the outcome? You know, I look at the mobile phone that I have. I mean, it was seven years to get that device to a hundred million users, and it was less than seven months for a hundred million people to download ChatGPT. Right? So this adoption is so fast. And I think about the mobile phone: it was a phone. Today, I would argue most people don’t use it as a phone at all, and they use it for everything. And so we’re at the very tip of what’s gonna happen with AI. So it’s pretty fascinating. I mean, I’m excited to see the things that are gonna be possible that before you couldn’t even do.

Karissa Breen [00:06:58]:
And I’m curious to know, in terms of the adoption, like you said, it was so rapid, as opposed to when the Internet came out. People were pretty apprehensive. They were worried. They’re like, oh, well, you know, the Internet is not gonna be a thing, I think I read in the papers. So why do you think the adoption was super fast, from your experience?

Bret Arsenault [00:07:14]:
Well, I think there’s a general trend curve across almost all technologies: you’re getting a compression of adoption. Part of that, though, is because of the availability of it. Right? Because you have infrastructure, particularly in cloud services. Even the Internet, as an example: that device you’re sitting on now, you’re on wireless, I’m assuming. And we think, oh, wireless, everyone is wireless. I was with Dell recently, and we went through this: it took twenty years for wireless PCs to eclipse wired PCs.

Bret Arsenault [00:07:42]:
Twenty years.

Karissa Breen [00:07:43]:
Wow.

Bret Arsenault [00:07:43]:
Right? But you had to have the card. At one point, a network card, and you’ll find this hard to believe, was $700. And so you have to have the infrastructure, you have to have the support. And that’s what I think is amazing about AI. In particular, our approach

Karissa Breen [00:07:58]:
What?

Bret Arsenault [00:07:58]:
is this diffuse technology. When you have data centers around the world, you can get it to every corner super fast. And I think it had real value. People saw things that you just could never do before.

Karissa Breen [00:08:11]:
So now that we’re, I think, slightly over a year since the launch of SFI, I’m curious to explore: what are the key learnings from that that you can share with us today?

Bret Arsenault [00:08:22]:
Well, I think there’s two sets of learnings, interestingly. There’s always the shiny part of security that we read about, you know, that journalists write about and that you have on your show, and they’re all really exciting, like the adversaries. But I think there’s also what I call the pedestrian part of the job, and that still holds true. It turns out, you know, we look at password attacks going, in four years, from 730 password attacks per second to over 4,000. That’s pretty scary. But on the flip side, if all you do is turn on free two-factor authentication, you will not be impacted by that.

Bret Arsenault [00:08:59]:
Even if you use a complex password or a simple one. So there’s a lot of hygiene things you still need to go do that are true even for AI. Make sure your things are current. Make sure you have the right identity in place. Make sure you use strong authentication. Least privilege. All those things still apply. But it is true that we have much more sophisticated adversaries, and many more adversaries, than we had in the past.

Bret Arsenault [00:09:20]:
And I think the progress and the learning for us was when you do this, it’s not just the technology changes you have to do. There’s a cultural part of it

Karissa Breen [00:09:28]:
that you have

Bret Arsenault [00:09:28]:
to do to embrace it. And then there’s having the mechanisms in place to ensure that you’re doing all the right things. And probably, maybe more importantly, we call them paved paths. To do things at scale: you can have everyone go patch their machine, but you come back a month later, and you’re gonna do it again.

Karissa Breen [00:09:47]:
Right. You

Bret Arsenault [00:09:47]:
have to build a paved path for your developers

Karissa Breen [00:09:49]:
Sure.

Bret Arsenault [00:09:50]:
And for your operations and engineering teams, so they fall into the pit of success. They can’t not fall into the pit of success.

Karissa Breen [00:09:56]:
Okay. And their

Bret Arsenault [00:09:56]:
stuff just works. That’s where the secure by default and secure in operations come from. So we’ve learned that, especially on the identity side: we’ve gotta get this identity infrastructure right, for your users, for your services, for your cloud services, and your service principals. As I always say, hackers don’t break in, they log in.

Karissa Breen [00:10:15]:
So I wanna focus on the cultural side of it now. It’s come up a lot in my interviews with people: you know, we’ve gotta get the right culture. But what does that mean for you specifically?

Bret Arsenault [00:10:24]:
I’ve been through a lot of cultural changes that often come with leadership changes. I would say one of the things is also a learning: security teams do good security work. And I think we had a very good security-aware culture for at least the last ten or fifteen years.

Karissa Breen [00:10:44]:
Sure.

Bret Arsenault [00:10:44]:
But a security culture is very different than a security first culture.

Karissa Breen [00:10:48]:
Okay.

Bret Arsenault [00:10:48]:
Which is when you say, I’m gonna make a trade-off and I’m gonna actually delay the shipment of my product, which is what I’m paid to do as an engineer, because I didn’t meet the security bar. Or I tell my direct line there might be a problem. And so having the support from executives all the way to the new hire that just started last week, you have to have that to make it security first. I do think it’s important to note, I’ve seen this trend more so here in Australia, to be honest, in the last little while.

Karissa Breen [00:11:16]:
Really?

Karissa Breen [00:11:16]:
Okay.

Bret Arsenault [00:11:16]:
Where people are saying, well, if we’re doing security first, how are you doing anything else? And so, I’m not sure what your day looks like, but I’m assuming you have more than one priority in a day.

Karissa Breen [00:11:26]:
That’s true. Yes.

Bret Arsenault [00:11:27]:
Exactly. And so security may be our highest priority, but we still have to ship great software, make sure that AI works, make sure the customers that we serve, consumers, enterprises, governments, can all be the most productive they can be. So security first, yes, that is the priority. But we still are doing all the work and everything else. And so we call it the genius of the and. Right? It’s not this or that. It is bundled.

Karissa Breen [00:11:50]:
That’s an interesting observation. So I’ve come from a security background myself. I worked for one of Australia’s largest banks

Bret Arsenault [00:11:56]:
Yes.

Karissa Breen [00:11:56]:
Before moving into doing this type of work. And you said before, around security first: one of the problems that we used to face is going to engineers, and they obviously get stressed and say, well, our project’s now gonna be delayed because we haven’t thought about security. And I mean, this is probably going back around ten, twelve years ago. So obviously, now things are taking a shift to more of, you know, a security-first culture. When would you say that shift started to happen? Like, people were aware of it, but they obviously used to see us as the bad guys, the people that were slowing down their projects. Now things are starting to change, but when would you say that shift started to really become a little bit more ubiquitous?

Bret Arsenault [00:12:33]:
For us, it probably started nine years ago, when we started this path of trying to make everyone aware of what security was. And since you have the background in security: do you know the difference between requested and required? It was requested to do security, but we made it required via paved paths. So that, for example, you can do a pull request, but you can’t do a push of your code into a build unless it passes the security gates and bars. That’s required versus requested. But doing that in a way that is not “thou shalt.” There’s a lovely quote, and I think security people have struggled with this in the past, the famous quote: if you tell me, I’ll forget. If you teach me, I may remember.

Bret Arsenault [00:13:20]:
But if you involve me, I will learn. So being more involved with the teams, and being a business leader rather than a tech person running around, makes sense. And then providing these paved paths: I’m not saying you have to go do a hundred things; I’m giving you a library that just does this for you, which they don’t wanna build anyway. They’d love not to have to build that library themselves. And so I think being more of a service provider is really helpful in that scenario. It also helped that the support came from the very top in the company

Bret Arsenault [00:13:49]:
all the way down through, and we have mechanisms to measure. But I think culture is really more a reflection of behaviors. You don’t say, this is our culture, and tomorrow it’s your culture. It’s the set of behaviors that exhibit what that culture is supposed to be.
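
As one illustration of the required-not-requested gate Bret describes, here is a minimal sketch of a pre-merge security check. It is written in Python and shells out to two real open-source scanners, pip-audit and bandit, as stand-ins for whatever gates a team actually mandates; the script itself and its wiring into CI are hypothetical:

    import subprocess
    import sys

    def run_gate(name: str, cmd: list[str]) -> bool:
        """Run one security check; a nonzero exit code means the gate failed."""
        print(f"Running gate: {name}")
        return subprocess.run(cmd).returncode == 0

    # Hypothetical paved-path gates: dependency audit plus static analysis.
    gates = {
        "dependency-audit": ["pip-audit"],            # flags known-vulnerable packages
        "static-analysis": ["bandit", "-r", "src"],   # flags insecure code patterns
    }

    failed = [name for name, cmd in gates.items() if not run_gate(name, cmd)]
    if failed:
        print("Security gates failed: " + ", ".join(failed))
        sys.exit(1)  # CI treats a nonzero exit as a blocked merge: required, not requested
    print("All security gates passed; the merge can proceed.")

Run as a mandatory CI step, the push simply cannot complete until the gates pass, which is the difference between requested and required.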

Karissa Breen [00:14:02]:
Well, I think it’s interesting because, again, when you’re telling people, like, you have to care about security, I think it started to create more of this disconnect between security teams and the technical teams throughout the business, or even the business itself, because it was just people telling them and barking orders at them. So maybe, as the culture has shifted, you’ve seen the shift. I think it’s gonna continue to get better, because historically, people would just not enjoy speaking to security teams at all. And you said before that you’re seeing it more in Australia in terms of culture. Why do you think that is?

Bret Arsenault [00:14:32]:
I think more people here were just asking us about it. They were worried about their priorities, that they had everything they needed, not just the security things. So I think they’re just very aware of what they need. It’s a good point, though, on this change. I can illustrate with an example that was mind-shifting for me, globally shifting. We knew two-factor auth was a good thing to do. And so we kept pushing two-factor auth everywhere.

Bret Arsenault [00:14:54]:
And I remember people would see us coming, because you had to have a card reader, you had to have a special badge with chips on it. It had a lot of infrastructure and friction for our clients. And we even created a thing called the virtual smart card to get rid of the physical part, thinking that would be better. One day, a very smart person said to me, hey, from a design principle, like human-centered design: what are you really trying to go do? What are you really trying to achieve? And I said, I’m trying to get rid of the password. And they said, great, make your vision “eliminate passwords.” And I said, that’s just words. It’s gobbledygook.

Bret Arsenault [00:15:28]:
I’m a math person. It’s words. And it turns out, when we said, hey, what do you have to do to eliminate passwords, it fundamentally changed the way we build software. And so that’s when we developed Hello. And that’s what we did. And by the way, we got into this model where users love not having passwords. So when you have something that users love and IT trusts, that’s where you hit the nexus. And so when you take that design principle, of how you can help people make their lives better, whether it’s an engineer or a user, and do it through technology. I know it’s just words, but it made such a huge difference.

Bret Arsenault [00:16:01]:
It was really unbelievable. And I remember people pushing so hard against us doing 2FA, and now they send me notes like, hey, I got asked for a password. This is wrong. What’s happening? You got rid of all of our passwords. And I mean, that’s the golden day for a CISO.

Karissa Breen [00:16:17]:
Bret, how do we find the equilibrium between wanting our company to be secure, but also not, you know, introducing so much friction that people can’t do anything?

Bret Arsenault [00:16:25]:
Correct.

Karissa Breen [00:16:26]:
How would you find that balance with your experience?

Bret Arsenault [00:16:29]:
I think the thing I was mentioning, about how you think about it from a design principle, helps. And I think the paved paths really help, where, I mean, you say, listen, I’m not gonna just force all this extra work on you. If you use my paved path, you’ll be more efficient, more productive. And frankly, I think the use of AI and some of the things that we’re doing, just think about writing code and using GitHub Copilot, and, like, suddenly, in a miraculous way, you’re writing code 30% faster. And you’re getting use of libraries that are already preconfigured and you know are secure. There’s free static code analysis if you’re using our public repositories. So, you know, you drive that part of it down and you get broad adoption. At the end of the day, you just make people work well.

Bret Arsenault [00:17:11]:
There are still sometimes where you’re gonna have that, you know, the idea is not to say no, but to say how. Sometimes you might have to say no way, no how, but hopefully never.
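
A tiny example of the kind of preconfigured, known-secure library Bret alludes to: a paved-path helper (hypothetical, ours) that hands developers a safe primitive so they never reach for the wrong one by accident:

    import secrets

    def new_session_token(nbytes: int = 32) -> str:
        """Hypothetical paved-path helper: issue a session token from the
        standard library's CSPRNG (secrets), so callers never use the
        predictable random module for security material by mistake."""
        return secrets.token_urlsafe(nbytes)

    print(new_session_token())  # e.g. a 43-character URL-safe string for 32 bytes

The developer gets one obvious call that is faster than hand-rolling, and the secure choice is the default.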

Karissa Breen [00:17:19]:
We are running out of time, and I know you’re busy. However, just to wrap up the conversation for today: what can people expect from SFI moving forward? And, you know, when we do this again maybe next year, what are some of the things you see happening on the horizon?

Bret Arsenault [00:17:34]:
I think that you’ll continue to see more and more understanding and evolution of how to use AI to better protect the workloads that people are running and the data that people are using. I think you’ll see a lot more energy, other companies starting to adopt some of the cultural norms that we’re driving, like executive compensation being tied to security. And where you have, like in our system, my favorite thing in our system today: every employee, twice a year, is asked, and you know the employee surveys you did at the bank, they’re very valuable, but there’s always one question that’s the most valuable, which is: can I do my best work here? And that will generally tell you where you are as an organization. From a security perspective, we have a similar question, which is: are you supported to make the security trade-off? That’s one.

Bret Arsenault [00:18:16]:
It’s so simple. And when 238,000 people answer that question, you get pockets of, wow, these people are really supported, what are you doing right? And, we may have a problem over here, we should go address it. Is there a leadership issue, or some other issue? Let’s not assume something’s wrong. Let’s go dissect and understand and learn from it. I think that’ll be great. And I think you’ll just see more and more software ship by default with security capabilities that are not high-friction user experiences, both for the IT department as well as for the user. And I’m hopeful for government to adopt it.

Karissa Breen [00:18:49]:
So, Bret, do you have any closing comments or final thoughts you’d like to leave our audience with today?

Bret Arsenault [00:18:53]:
You know, I would just say, more just from my experience of being in Australia, it’s been fascinating. Beyond the business, what global travel gives you is: you assume certain things, and you learn things. The people are amazing. That was something else. But I think it’s really important to understand this is a team sport. And the security team’s job is not just to secure the company; it’s to make sure everyone understands that. It is gonna take great collaboration and cooperation between the private sector and the public sector. One of the things I heard here a lot is, you know, you had at least three or four new regulations this year. Some of the state laws may not align a hundred percent.

Bret Arsenault [00:19:27]:
True. All normal. But I think we need to really work on how we can work together to make sure that we’re providing regulation and support that holds bad guys accountable, supports the innovation we wanna see in AI, but also, fundamentally, is not just a checkbox exercise. It’s actually helping serve our constituents. That would be my global wish list.

Karissa Breen [00:19:53]:
Joining me now in person is Janice Le, General Manager of Microsoft Security. And today, we’re discussing securing AI and AI in security. So, Janice, thank you for joining and welcome.

Janice Le [00:20:03]:
Super happy to be here. Thanks for having me.

Karissa Breen [00:20:06]:
So let’s start right there. Tell me more about your thoughts around securing AI.

Janice Le [00:20:11]:
The first thing I’ll say is that it’s actually not just about securing AI. It’s about securing and governing AI. And that’s, quite honestly, the first concern that a lot of our customers have. It’s not so much about protecting it against the bad guys. It’s about how do you govern the use of AI, and how do you ensure that your use of AI is compliant.

Karissa Breen [00:20:36]:
What would you say, from your experience, about how to govern the use of AI? Would you say people are still trying to figure out what that looks like? Because obviously, in terms of being more ubiquitous in the market, AI has really emerged since 2022, even though it’s been around for a while. But, you know, people are still trying to understand what it looks like within their organization. Do you have any thoughts on that, Janice?

Janice Le [00:20:57]:
Yeah. I’d first say that, you know, we need to acknowledge that the adoption of AI, and GenAI in particular, is happening at a rate that we’ve never seen before with any other technology in our lifetime. And the speed of that alone is something that, you know, a lot of organizations are struggling with, because while there’s a lot of excitement, there’s also a lot of unknown. What I am seeing, though, is the quick organization of teams coming together across entire companies, and that’s something that was not seen with other technologies. In order for any organization to adopt AI and be able to leverage it to its fullest potential, it involves the entire organization, from the CEO on down. And they have to come together in a way that, you know, other technology adoptions have not forced them to come together. So I would say it starts there, with the different functional leaders, from the head of technology to the head of even HR, to the heads of governance and legal and compliance, to other functional business leaders, coming together and aligning on what is the objective, what are the risks, and how do we want to collectively govern and guide the use of AI in a safe and secure way.

Karissa Breen [00:22:26]:
Let’s switch gears slightly. I’m aware that there are simple steps in planning effective security for AI. So maybe walk us through: what are they?

Janice Le [00:22:34]:
Yeah. You know, I would break it down and just say: you can’t manage what you don’t monitor. So the first step is discover. Understand what AI apps are being used and are being created within your organization. I don’t think there’s a single customer that I’ve talked to that doesn’t have users who were very quick to adopt consumer GenAI like ChatGPT and others. And so there’s a lot of that usage happening by employees in the workplace that we have to be completely aware of. And then there are, of course, teams that are out there wanting to deploy their own AI so that they can improve their customer experiences and be more productive in their jobs. So discovery is step one.

Karissa Breen [00:23:22]:
Yep.

Janice Le [00:23:22]:
Then two is establishing governance, sort of the rules of the road on what data can be used, and what data can’t be used, by whom. And so that’s the preparation of data: making sure that your data is properly classified and labeled so that it doesn’t get misused by GenAI. And then the third step is protecting it. And that’s protecting it from not just the bad guys, but protecting it from being, you know, misused or manipulated by people who don’t always have bad intent. You know, they’re just very creative in looking for the answers that they seek.
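
The discover step can start as simply as mining existing egress or proxy logs. A minimal sketch, assuming a hypothetical log layout of (timestamp, user, domain) columns and an illustrative watch list of consumer GenAI domains:

    import csv
    from collections import Counter

    # Illustrative watch list; a real one would be maintained and much longer.
    GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

    def discover_genai_usage(proxy_log_csv: str) -> Counter:
        """Count hits to known GenAI domains per user in a proxy log."""
        hits = Counter()
        with open(proxy_log_csv, newline="") as f:
            for timestamp, user, domain in csv.reader(f):
                if domain in GENAI_DOMAINS:
                    hits[(user, domain)] += 1
        return hits

    # usage:
    # for (user, domain), n in discover_genai_usage("proxy.csv").most_common():
    #     print(user, domain, n)

That inventory then feeds the governance step: deciding which of these uses to sanction, and what data rules apply to each.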

Karissa Breen [00:24:01]:
Okay. So you made a couple of interesting points, and I wanna explore a little bit more with you. So in terms of adoption, you said before, you know, it was quick. So now that we’re in this AI era, and we’ve previously seen, like, cloud adoption, would you say, with your experience and your pedigree, that the adoption of AI has been a lot faster than we’ve ever seen with the others, cloud, Internet, etcetera?

Janice Le [00:24:23]:
Yes. And I’ll add that not only is it fast, but what’s also different is that it is not reckless. So with other technologies that we’ve seen being adopted, be it, say, enterprise Wi-Fi when it came about, then mobile, BYOD, then cloud, there was a lot less coming together across the organization and thinking about how do we truly minimize risk and be compliant at the same time. So we’re seeing a lot of that happening within companies, where there is a consciousness of doing it safely and securely. I would like to think that a lot of it is because, when we innovated with AI at Microsoft, we did it within the framework of responsible AI, and that is the only way that we could introduce GenAI into the market. You could not introduce GenAI without a framework like that, and customers have embraced it.

Karissa Breen [00:25:26]:
So it’s an interesting point that you raised. Security was obviously peppered throughout the keynote. In May 2024, Satya came out and said, you know, security is our number one priority, and I’ve been reading some literature online. Talk us through what that’s actually meant for Microsoft. Like, what can people expect now moving forward? Everything you just said, how do we tie it up in a bow?

Janice Le [00:25:47]:
What we’re doing with the Secure Future Initiative is probably the biggest change management effort that any organization could ever undertake. And effective change management is the harmony of people, process, and technology and tools. So that’s what the Secure Future Initiative has done, which is elevating security to be the top priority that drives change across people, process, technology, and tools. And we benefit from an innovator standpoint, because of what we learn securing our own organization. Microsoft has a very large digital estate. Right? So we have a lot to protect, and there’s nobody else at our scale. There’s nobody else really who sees the types of threats that we see. But we learn from the way that we have to protect ourselves and what we know about the threat landscape, and that then informs all of the innovation that we put out, including our own security products, but also all of our software and cloud services, as well as our AI.

Janice Le [00:26:50]:
So all of our innovation benefits from those learnings that we can now surface in a much faster way given that security is a top priority as a company.

Karissa Breen [00:27:00]:
Fantastic. The initiative, I know, has just ticked over to be one year old. And I know there’s a fair few people behind it. I’m not sure of the exact numbers, but I know it’s quite a lot. And you mentioned before the operative word, change management. It’s hard to get one person to change, let alone a whole company. And I think someone said it earlier, you know, the company is 50 years old. It’s not an easy thing to do in terms of, you know, the processes and the culture that’s been engendered.

Karissa Breen [00:27:28]:
So how do you do that effectively?

Janice Le [00:27:30]:
Well, so that number, by the way, it’s roughly 34,000. It’s the

Karissa Breen [00:27:34]:
equivalent of 34,000. Yes.

Janice Le [00:27:35]:
Yeah. The equivalent of 34,000 full-time engineers, all prioritizing security in their day jobs. So they’re still creating products. They’re still running their functions. But they have to make security a top priority, a top consideration, in the jobs that they continue to do. And it’s hard to drive change unless it happens at every single level. And that starts with Satya. It starts with him and the board and his leadership team on down. And, you know, it all goes back to just the idea that humans need the right incentives.

Janice Le [00:28:12]:
Yeah. And one of the things that we’ve done to make security a top priority at all levels, at every layer, is it’s now a core priority for every single employee. In our annual review process, we have to talk about what we’ve done to help the company be more secure. That is a part of our performance metric.

Karissa Breen [00:28:37]:
Okay. There’s another interesting thing in there as well. So going back to the engineering side of things: when you go and do a computer science degree, historically, security wasn’t taught. Right? I know this from being in the industry myself, and now flipping over to interviewing people like you. We’ve had to retrofit talking about security and changing that mindset, which is probably why you’ve backed the initiative that you’re doing within Microsoft. So what do you think now, moving forward, with engineers? Do you think that for a new wave of engineers, security is going to be a main priority? Because historically, dealing with engineers, it wasn’t. It was about functionality.

Janice Le [00:29:15]:
Absolutely. And I come from the same background too.

Karissa Breen [00:29:19]:
Okay.

Janice Le [00:29:19]:
And I know that, you know, when you’re learning how to write applications and software, you’re taught how to create the function and do it fast and effectively and not use too many resources. Right? But given the change that we’re driving, there’s a concept called secure by design.

Karissa Breen [00:29:36]:
Yes.

Janice Le [00:29:37]:
Right?

Karissa Breen [00:29:37]:
Yes.

Janice Le [00:29:38]:
And secure by design is one of the three principles that we are now all leaning into: secure by design, secure by default, and secure in operations. And, by the way, those are not concepts that Microsoft invented. Those are industry concepts, spearheaded by, you know, the powers that be that govern national security and security regulations. And so secure by design and secure by default are principles that we hope are gonna find their way into your traditional computer science curriculum

Karissa Breen [00:30:16]:
Right.

Janice Le [00:30:16]:
So that emerging programmers can learn that you can still build secure software and do it quickly and innovatively, especially if you have the right tools. There are more and more tools now available to software developers so that they can shift left with security principles.

Karissa Breen [00:30:36]:
Sure.

Janice Le [00:30:36]:
And not just design it with security in mind, but also ship it with the right security defaults turned on to help users be secure from the start versus giving users an option to turn on security.

Karissa Breen [00:30:51]:
So I wanna flip over to the last part of the interview, because I know that you’re so busy and we’re short on time. But I wanna talk through with you, Janice, AI in security. So we’ve done securing AI; now it’s AI in security. I’m flipping it on its head. So walk me through it.

Janice Le [00:31:07]:
So the same idea of Copilot helping everybody be productive, you know, writing summaries, or understanding what happened in a meeting, or looking at which emails they should prioritize when they get flooded with hundreds of emails in their inbox: Copilot in the security context offers the same benefits, but to security practitioners. One of the biggest problems that security practitioners have is what we call alert fatigue. Which is all of these alerts coming from all these different systems, and they just don’t know what to do with them. Right?

Karissa Breen [00:31:40]:
So you get to a

Janice Le [00:31:41]:
point where you just ignore them.

Karissa Breen [00:31:43]:
Yes.

Janice Le [00:31:43]:
What Security Copilot can do is actually prioritize the things that you shouldn’t ignore. So help you find the signal in all the noise, so that you can focus on the right things. That’s one: helping you prioritize. And then two is helping you shortcut those mundane tasks. Most security practitioners will tell you that they spend a majority of their time looking through logs and lists and data in order to figure out what the heck is going on.

Karissa Breen [00:32:13]:
Of course.

Janice Le [00:32:14]:
Copilot can summarize all of those activities and events for them, turning what would have taken hours and days into seconds and minutes. And that is a huge time saving on, I’ll say, low-value work that security practitioners no longer have to worry about doing.
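
Janice is describing product behavior rather than an API, but the prioritization idea itself is easy to sketch. A toy, generic triage score (not Security Copilot’s actual logic) that surfaces the alerts least safe to ignore:

    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        severity: int           # 1 (low) .. 5 (critical)
        asset_criticality: int  # 1 .. 5, value of the affected asset
        correlated: bool        # seen alongside other suspicious activity?

    def triage_score(a: Alert) -> int:
        """Toy scoring: weight severity by asset value; boost correlated alerts."""
        score = a.severity * a.asset_criticality
        return score * 2 if a.correlated else score

    alerts = [
        Alert("edr", 4, 5, True),
        Alert("firewall", 2, 1, False),
        Alert("identity", 5, 4, False),
    ]
    # Work the queue highest score first instead of in raw arrival order.
    for a in sorted(alerts, key=triage_score, reverse=True):
        print(triage_score(a), a.source)

The value Janice points to goes further: an LLM can also read the surrounding logs and summarize why an alert matters, which a fixed formula like this cannot.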

Karissa Breen [00:32:33]:
In terms of productivity, how can we see people moving forward, like, optimizing that? Because you’re right, it makes sense. People are tired. And as a result, when you’re tired, you’re not making the best decisions. So what’s your view on that then, Janice, in terms of people now getting more of their time back to do more of the critical thinking?

Janice Le [00:32:54]:
Yeah. So this is where we see the huge shift from low-value tasks to high-value tasks. And that’s the thing that, you know, a practitioner in any function should embrace, because Copilot isn’t there to replace anyone’s job. Copilot is there to do the things that, quite frankly, humans shouldn’t, you know, their time is better spent doing other, more important things. And so I see it as an opportunity for us to do the thing that we entered our fields to do: to make an impact, to leverage our creativity, to leverage our intelligence. But we get caught in all these mundane tasks that do the opposite. They don’t leverage our creativity. They don’t leverage our intelligence and our ability to reason, which is far better than machines can.

Janice Le [00:33:44]:
So that’s where, you know, I think we need to just decide on what are those tasks that we are willing to let go of. Right? And leave it up to Copilot to do, so that we can do the thing that, you know, we entered a certain field to do.

Karissa Breen [00:34:02]:
In terms of Copilot from a security perspective, what excites you the most? Maybe if you had to pick one thing, what excites you the most?

Janice Le [00:34:10]:
When I think about the threat landscape and the number of threat actor groups that have exploded onto the scene because of ransomware as a service, the attackers outnumber defenders by 10x, if not more. So what AI and Copilot can do for us is help us build out the army of defenders. With Copilot, we have an unlimited army of defenders that can be at our side, because, you know, it’s really gonna be hard otherwise to outnumber all of the attackers that continue to go out there. So that’s what excites me the most: being able to level the playing field and tip it in the favor of defenders versus the attackers.

Karissa Breen [00:35:01]:
So, Janice, one last question for you: do you have any closing comments or final thoughts you’d like to leave our audience with today?

Janice Le [00:35:09]:
Yeah. The final thing is, you know, just to the point I was making about the small group of defenders against this massive universe of attackers: security is a team sport. And while we provide some technologies that can help our customers be more safe and secure, we can’t do it alone. And so it is super important for all defenders to come together, you know, all vendors to come together, to offer our customers simpler solutions that work better together, and treat it like the team sport that it is. Because there’s a saying in psychology that external threats create internal cohesion, and we need more internal cohesion within our industry.

Karissa Breen [00:36:03]:
Joining me now in person is Chris Lloyd-Jones, Head of Architecture and Strategy in the Office of the CTO at Avanade. And today, we’re discussing the culture and change program needed with AI. So, Chris, thanks for joining and welcome.

Chris Lloyd Jones [00:36:15]:
Thank you.

Karissa Breen [00:36:15]:
Okay. So, Chris, let’s start right there. People talk a lot in the industry about culture. So I’m curious to understand from you. Walk me through your thinking or your approach or what comes to mind around culture for AI.

Chris Lloyd Jones [00:36:30]:
Okay. For sure. So I think when people hear AI, they think of this as a big technology change piece. But actually, if we were to step back twenty-four months, to when ChatGPT, when all of these technologies came out into the market: well, AI isn’t new. We’ve had machine learning. We’ve had other technologies. And all these new technologies have been about how you bring everyone in an organization along. So culture, to me, is about how do you empower people, how do you train people, and how do you enable people to make the most of these new tools that you’re providing them.

Karissa Breen [00:37:03]:
Okay. So because it is new, there’s no real, like, blueprint. It’s not like we’ve done this before necessarily. How do you train people?

Chris Lloyd Jones [00:37:10]:
So we may not have done this type of technology before, but think about ChatGPT. Most people have a phone in their pocket. Most people have access to these technologies. So digital natives are learning how to prompt, how to ask information of AI. But if you were to ask, and I heard a great anecdote yesterday: someone was speaking to their mom, and their mom went, well, I want to create a CV to apply for a job. And this other person went, okay, well, just pop it into ChatGPT. And they popped it into ChatGPT, and the mom came back and went, that is absolutely rubbish.

Chris Lloyd Jones [00:37:43]:
And the response was, it is just nonsense. Gobbledygook. And that demonstrates that it’s not just about having access to these tools; it’s about providing instruction on how to prompt and how to use them in a sensible and informed way. So at Avanade, we think about, number one, what are the guardrails? What are the ways in which you can effectively roll these tools out to people whilst providing them with the training to make them useful? It’s not just about rolling out AI and cutting heads, cutting workers. That’s not what this is for. It’s about how can you make your employees engaged? How can you make them feel happy at work? And how can you make them more effective in what they do?

Karissa Breen [00:38:18]:
Okay. So in terms of the guardrails, that’s an interesting point, because you’re right. It’s not just about, okay, we’ve installed the thing or we’re doing the thing, and that’s it. Would you say that’s perhaps where people fall down, in understanding the problem and how it works, and how we can effectively leverage it? Because it’s still early-ish days. So what would you say, in your experience, people overlook at times?

Chris Lloyd Jones [00:38:40]:
Okay. Well, I think of four phases of AI. So there’s tinkering with AI: the proofs of concept, the proofs of value. The second stage is trying to make AI useful within an organization. So that’s point solutions, that’s solving problems here and there. The third area is maybe scaling that, taking a business process and scaling it end to end. And the fourth is total transformation.

Chris Lloyd Jones [00:39:03]:
I’m seeing very few organizations at phases three or four. Most organizations are at phase one and two. And what they’re overlooking is that this isn’t a tech change problem. It’s a people change opportunity.

Chris Lloyd Jones [00:39:14]:
So I’ve talked about training, and from a guardrail perspective, it’s more, well, if you’ve all got ChatGPT or other tools in your pocket, how do you make use of them? If you all have access to tools like M365 Copilot, how do you enable your data, your business operations, to be connected to those systems? It’s about thinking about governance, the use cases, and the prioritization in a way that you can make sure your organization is still compliant.

Karissa Breen [00:39:39]:
So, Chris, I wanna explore a little bit more about the people side of it. Now, people are creatures of habit. And what I’m hearing from the chatter in the industry, but also even on the floor today, speaking to people like yourself, is that the adoption of AI was significantly faster and more prevalent than in other areas like the cloud, Internet, etcetera. So why would you say, from a people perspective, people are more willing nowadays to adopt AI than perhaps in other big transformations that we’ve seen in the last ten, twenty, thirty years?

Chris Lloyd Jones [00:40:11]:
So I wouldn’t necessarily say that people are more willing to adopt AI. One: you look at machine learning, okay, we’ve had that for decades. You look at ChatGPT: models that could answer questions, large language models, have been around for a good few years.

Chris Lloyd Jones [00:40:27]:
In 2021, we had GPT-2.

Karissa Breen [00:40:30]:
Mhmm.

Chris Lloyd Jones [00:40:30]:
What ChatGPT did is it fundamentally took this AI and packaged it up so that, in your pocket, you almost have this magic unit that can answer things. So I think people are trying to adopt these tools if they think they can solve a problem that they might not otherwise have been able to solve. They’ve got someone that will continuously, forever, listen to them and answer questions. Now, organizations want to adopt this technology because they believe it can help them solve productivity challenges, it can help them solve cost challenges, or it can help them to grow. But once you start using the AI, that doesn’t mean that you have the skills to be proficient or to make the most use of it. And that comes back to what I mentioned before about the need to upskill.

Karissa Breen [00:41:07]:
Okay. So you raise a great point around packaging it up. So I was recently watching yourself on YouTube, but also Satya and his response around Copilot being like the UI. So going back to ChatGPT, obviously, it’s created that interface for people to ask questions, where it was a little bit more complex before. So because of that, is that where the adoption is? It’s a lot easier for people than having to think it through. I’ve seen some of the demos today already, which make things significantly easier even from an engineering development perspective. You don’t even really need to understand certain languages anymore, specifically Python. It’s doing it in the background.

Karissa Breen [00:41:43]:
So in terms of training, how can people start to understand what this looks like within their organization? Do you have any insight there on that front?

Chris Lloyd Jones [00:41:51]:
Yeah. For sure. So Satya talks about chat as the new interface for the UI, and I think that’s certainly true for where we are today.

Karissa Breen [00:41:58]:
Right.

Chris Lloyd Jones [00:41:58]:
But these tools are going to start to be embedded more into what we do day to day. So you open up a legal contract, and it’s been pre-marked-up for you with what you need to review.

Karissa Breen [00:42:07]:
Mhmm.

Chris Lloyd Jones [00:42:07]:
So chat, to me, is a transitional phase and not where it will end up. I personally believe in the concept of ambient AI, and that being infused into what we do day to day. In the same way that, at one point, spell check was considered AI; today it’s just a button that we click and we don’t think about it. So going back to governance and training and how we can adopt this in an organization. Number one, Avanade rolled out what we call the School of AI. We trained every single employee in our organization on what AI is and what it can do. We defined responsible AI principles and digital ethics, so that people knew: if I use this tool, this is how I can use it responsibly. Recognizing that people will use tools like ChatGPT or GitHub Copilot.

Chris Lloyd Jones [00:42:46]:
Then number two, we rolled out the prompting: how can you engage with AI? And number three, we explained how to know what you don’t know. If you know what AI can do, you know when you hit the limitations of the tools that you have, so you can remain ethical, you can provide answers that are of high quality, and you’re not just regurgitating information that might be made up. Knowing that AI thinks in different ways, and therefore you need to be careful and have a critical mind.

Karissa Breen [00:43:11]:
So I do wanna get into the responsible AI side of things, but before we do, going back to the prompting side and what you just discussed: with, you know, a fancy new UI, it’s a lot easier for people. So what would you say companies are coming to you and asking questions around now? Is it still, like, hey, how can we use this UI to ask the right questions to increase our productivity? I mean, I’ve heard a little bit in the sessions today about how much Copilot specifically can increase people’s productivity, but there are still a lot of questions around, well, how can I leverage this within my company internally?

Chris Lloyd Jones [00:43:44]:
So we’ve had Copilot for a number of years, and don’t get me wrong, Copilot is a brilliant product, but I think the tenor of the conversation I’m having has changed. A couple of years ago, this was proof of concept or proof of value: just testing the organizational technology, rolling out tools like GitHub, and just seeing, do they work. I think now we’re starting to see organizations that have proved the concept. They’ve proved the fact that AI can have value. They’re now looking at: okay, I’m ready to scale and implement this into my business processes. How do I think about principles? How do I think about business process engineering? We had an enterprise architecture, as we did for mainframe and cloud.

Chris Lloyd Jones [00:44:20]:
What do I now need to do for my enterprise architecture for AI? They’re really thinking about this in more of an enterprise fashion.

Karissa Breen [00:44:26]:
Okay. So scaling AI, I’ve heard that thrown around a little bit today.

Karissa Breen [00:44:31]:
What does it actually mean?

Chris Lloyd Jones [00:44:33]:
So, to me, scaling AI means it’s production grade. And I’m gonna break that down, because I know that’s a very high-level answer. So, a year ago, we had a number of tools so I could get data into, to get a bit techy here, a cognitive search index.

Karissa Breen [00:44:47]:
Okay.

Chris Lloyd Jones [00:44:47]:
I could take data and put it in a format that an AI could query.

Karissa Breen [00:44:51]:
Okay.

Chris Lloyd Jones [00:44:51]:
But someone might have to export data from a database into a spreadsheet and upload it somewhere else. Well, then you’ve lost that chain of custody. If the data gets changed in your sales system, someone would then have to re-update the AI. So that’s: how do you get your data from A to B in a way that’s consistent and safe?
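
The from-A-to-B step Chris describes can be automated against Azure AI Search, the service behind the cognitive search index he mentions. A minimal sketch using the real azure-search-documents Python SDK; the endpoint, index name, and document shape are hypothetical:

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient  # pip install azure-search-documents

    # Hypothetical service details; the index must already define these fields.
    client = SearchClient(
        endpoint="https://example-search.search.windows.net",
        index_name="sales-knowledge",
        credential=AzureKeyCredential("<api-key>"),
    )

    def sync_row(row: dict) -> None:
        """Push one changed record straight from the source system into the
        index, so nobody exports spreadsheets and the chain of custody holds."""
        client.merge_or_upload_documents(documents=[{
            "id": str(row["id"]),
            "content": row["description"],
            "last_modified": row["updated_at"],
        }])

    # e.g. call sync_row(...) from the sales system's change feed or webhook.

Wired to the source system’s change events, the index stays current without anyone manually re-uploading data.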

Chris Lloyd Jones [00:45:12]:
Then, in the past, we rolled out chatbot interfaces, where someone can ask a question of their data. Well, a lot of people don’t just want answers to questions. They want to solve problems.

Karissa Breen [00:45:16]:
Sure.

Chris Lloyd Jones [00:45:16]:
So now I think people are thinking about this as service design. They’re starting to think about what the holistic experience is. I’m not just gonna add another sparkly button to my UI. What problem am I actually solving? So, for example: do I need AI, or should I cut this out? Am I doing this for the sake of it? And then finally, again going back to governance: is this a use case that’s really adding value? Is this a use case which is responsible? Do I have the right data stewards involved? It’s about taking all of the bits and pieces that you do with a prototype, throwing them away, and thinking about whether this is ready for prime time.

Karissa Breen [00:45:51]:
So going back to your comment beforehand, they wanna solve problems. When you’re saying they wanna solve problems, people obviously do wanna do that, but is it more so how to go about navigating it? Is that the part people are still unsure how to do? And so, your earlier comment at the top of the interview around the guardrails: is that what’s missing, would you say, the guardrails?

Chris Lloyd Jones [00:46:12]:
I think the guardrails are missing, but I don’t think that’s the whole picture. So when you talked about solving problems: people can identify the use cases in their organization where they want to roll it out, but then they might go, okay. Say you want to optimize your sales process. You want to look at which leads you might want to speak to. That data could be in Dynamics, it could be in Salesforce.

Karissa Breen [00:46:29]:
Sure.

Chris Lloyd Jones [00:46:30]:
How do I connect to that source system in a way that’s repeatable? In the past, if you were doing reporting in, say, Power BI or Tableau, well, there was a data team that sat in the middle there. They were the intermediary. They made sure that if you asked a question, everyone else got the same answer. One person measured sales in the same way that another person did. If you’re developing an AI system, all those same tools need to be in place, and that becomes your guardrails. And that’s why these proofs of concept sometimes fail: because, yes, we want to prove the tech works, but now we need to go back and do all the same things we would have done when we rolled out Power BI and reporting.

Karissa Breen [00:47:03]:
That’s interesting because I actually was a reporting analyst using Tableau. Yeah. So I do understand that. But I used to spend a lot of time doing manual stuff.

Chris Lloyd Jones [00:47:11]:
Yeah. Yeah.

Karissa Breen [00:47:12]:
Building dashboards, pulling all the data sources together, just trying to figure out from the business what is the problem that I’m solving. What do you wanna know?

Chris Lloyd Jones [00:47:19]:
And then you’d have a team send you Excel spreadsheets going, well, I’m getting this number, and you’re like, no, it’s this number. That’s the standardization.

Karissa Breen [00:47:26]:
Yes. And then you get a call from the CISO saying, hey, I’ve just presented, and I believe the number’s wrong in the meeting. Yep. I mean, it’s not necessarily an easy problem to fix. So where are people sort of starting? Because, again, it depends on what organization you have. There’s a lot of data, a lot of sources being fed in.

Karissa Breen [00:47:42]:
How do you get that standardization?

Chris Lloyd Jones [00:47:44]:
So organizations can start at the top, or they can start at the bottom. It needs to be bottom up and top down. And I don’t just mean that as a kind of buzzword. So talking about bottom up, organizations generally are either highly centralized, so they’ve got a lot of control over their systems of record. They might go, my employee data is in Workday, for example. My expense data is over here.

Karissa Breen [00:48:04]:
Sure.

Chris Lloyd Jones [00:48:05]:
And in those organizations, it’s relatively easy to identify the data steward, and then you might be focusing more on organizational change. But in multinationals that might span, say, Australia, New Zealand, Europe, generally, they’re gonna be a lot more federated, and that means you need to benchmark.

Chris Lloyd Jones [00:48:20]:
Do you have the basics in place? Have you already identified your systems of record? You could be using three different travel booking systems, and then you might want to identify the domains you care about, your HR, your travel. Once you’ve got your systems of record and you’re speaking the same language and you’re speaking in the same way, then you can start to think about AI. So you can’t just jump to AI to be successful. You have to have a data foundation in place, strong data stewards, strong business use cases to enable you to make those changes.
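
The benchmarking Chris describes can start as something as simple as an inventory of domains against their systems of record, flagging where several compete. A hypothetical Python sketch; the domains and system names are invented:

```python
# Hypothetical inventory: domain -> systems currently holding that data.
systems_of_record = {
    "employee": ["Workday"],
    "expenses": ["Concur"],
    "travel": ["SystemA", "SystemB", "SystemC"],  # three booking tools
}

def benchmark(inventory: dict[str, list[str]]) -> None:
    """Flag domains with no single system of record: the basics to fix
    before layering AI on top."""
    for domain, systems in inventory.items():
        if len(systems) == 1:
            print(f"{domain}: OK, system of record is {systems[0]}")
        else:
            print(f"{domain}: {len(systems)} competing systems {systems}, "
                  "pick one system of record first")

benchmark(systems_of_record)
```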

Karissa Breen [00:48:47]:
But wouldn’t you say people are jumping to AI to be successful?

Chris Lloyd Jones [00:48:51]:
No. I don’t think people are. I think people roll out Copilot and they might make their knowledge finding better. But that might indicate that maybe their knowledge architecture wasn’t great in the past, so Copilot’s just an incredibly great search engine. When they want to take it to the next level of productivity such as sales or invoicing, at that point, you can’t escape the fact that you need to improve your data stewardship and your data quality. From the conversations that I’ve heard today, everyone is finding Copilot a great way to draft emails and a great way to find information, but only the organizations that invest in that data platform are making sales more efficient or invoicing more efficient or helping to serve their employees.

Karissa Breen [00:49:29]:
So people often speak about, like, structured data and then unstructured data. So talk to me a little bit more about that. How does that then look fitting into the AI beast?

Chris Lloyd Jones [00:49:39]:
So AI, when we’re talking about it today, we generally mean generative AI, so the generation of text or images. But AI in the past meant machine learning: the forecasting and the analytics, looking back and analyzing what’s happened to date. Now, when I’m talking about AI in the context of making this more effective, you need to consider all of it, because traditional machine learning will still come into play for how you analyze your structured data. For example, if you’re an investment firm and you want to build a virtual agent that can help you identify customers where there might be an opportunity to optimize a portfolio, you’re gonna need to use machine learning to identify maybe the right trades to make, to cluster the data, and then you feed that through as unstructured data. So you change it from numeric data to data that the large language model can actually analyze to make those decisions. Both of those data formats are important. Large language models are like programming through language, commonly English, but unstructured data is ultimately what they require.
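
As a rough sketch of that structured-to-unstructured handoff, here is the pattern in Python with scikit-learn: cluster numeric portfolio features first, then render the result as text for a large language model. The features, segments, and prompt are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Structured data: one row per customer (volatility, cash weighting).
portfolios = np.array([
    [0.12, 0.30],
    [0.11, 0.28],
    [0.45, 0.05],
    [0.47, 0.04],
])

# Step 1: classic machine learning over the numeric data.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(portfolios)

# Step 2: turn the numeric result into text an LLM can reason over.
lines = [
    f"Customer {i} is in segment {label} (volatility={vol:.2f}, cash={cash:.2f})."
    for i, ((vol, cash), label) in enumerate(zip(portfolios, labels))
]
prompt = (
    "Given these customer segments, suggest which customers to "
    "contact about rebalancing their portfolio:\n" + "\n".join(lines)
)
print(prompt)  # this text is what a large language model would receive
```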

Karissa Breen [00:50:38]:
So now I wanna flick over and talk about trusting AI.

Chris Lloyd Jones [00:50:43]:
Yeah.

Karissa Breen [00:50:43]:
And earlier in the interview, you were talking about that, and then we’ll get to responsible AI. I know I asked that before, but talk a little bit more about what that means for you when I ask you that question.

Chris Lloyd Jones [00:50:53]:
So for me personally, when I think about trusting AI, I think about a number of different things. I think number one, is this going to actually help me do anything I wanna do? Can I rely on the response that I get back, and can I make decisions from it?

Karissa Breen [00:51:07]:
Right.

Chris Lloyd Jones [00:51:07]:
And say I’m booking a flight and I ask when the next flight time is, if I can’t trust that information is correct or accurate, then that’s been a waste of my time

Karissa Breen [00:51:17]:
True.

Chris Lloyd Jones [00:51:17]:
And that was a pointless AI interaction. So that comes back to what we talked about earlier on about measuring value. There’s a value gap. I talked about data as the data gap. The third gap I think of is that trust gap. And one, do you know you’re speaking to AI? This is a very personal example. I had five instances of this car product charge my credit card the other day, and I was really frustrated. And I raised a ticket on this website

Chris Lloyd Jones [00:51:42]:
And got an automated response from this chatbot by email going, we’ve logged it, we’ve already issued you a refund. I’m like, no, you haven’t. I raised it again. And I was getting more and more frustrated. And for me, that’s trust, because the AI isn’t listening to me. It’s trust because it wasn’t immediately obvious, until I Googled it, that

Chris Lloyd Jones [00:51:59]:
this was an AI chatbot. And it wasn’t immediately obvious to me that I was getting a good answer. So a lot of organizations need to think carefully, because if you are looking to replace activities that a human would have done with AI, you need to make sure that you’re meeting the same standard.

Karissa Breen [00:52:14]:
So, on to trust. You said, like, you know, for example, if it’s a large language model, you’re pulling all these sources in. It’s like, you know, the sky’s blue, but maybe there’s some people out there that say the sky’s orange. And it comes up and it says the sky’s orange. How will people... now, I’m a millennial, so I’m not a Gen Z, so I’ve obviously been around when, you know, the internet first came out, etcetera. So I think I’m privileged. But how are people, moving forward, going to be able to discern, and not just accept that something’s true because that’s what the AI said?

Karissa Breen [00:52:42]:
And I know that sort of comes into the responsible side and the ethics side of it. What does that then look like? And will people have to sit there and question everything to discern whether something doesn’t look right, or just go, if it’s telling me, it’s fine? I’m really probably more worried about the generation beneath us and moving forward.

Chris Lloyd Jones [00:52:58]:
I think that’s actually a really good point, and I think there’s two phases to that. I think realistically, if we think about where we are today

Bret Arsenault [00:53:04]:
Right.

Chris Lloyd Jones [00:53:04]:
That is where we are. We have to use critical thinking to discern information that might be real from information that might be false. And that’s not just about the large language model making things up. That’s also about the use of AI to disseminate misinformation online and steer conversations. There’s a recent study using GPT-4 to curate someone’s timeline on a social media network. They provided people with a default feed and two curated versions, one relatively left wing, one relatively right wing, and they found that people’s views would shift if they weren’t told it was AI, and that within the day, their opinions could be changed. Now, that’s just one study. But going forward, there needs to be a partnership, from my perspective, between the media, the state, and private organizations to solve this.

Chris Lloyd Jones [00:53:46]:
And this isn’t just, like, made-up theoretical stuff that should happen. There’s an organization called C2PA, the Coalition for Content Provenance and Authenticity. They’re an open standards organization that has built standards to certify that the information from the microphone you’re using hasn’t been altered between when we recorded this and when this goes out. Or that this photo was taken here and you were seeing the real thing.

Karissa Breen [00:54:09]:
Right.

Chris Lloyd Jones [00:54:09]:
Or that a human wrote this and you’re reading this. And we’re starting to see those standards being adopted by TV channels. If you’re on Google, if you’re on Bing, it now indicates if the image you’re seeing is AI generated. And I do think we need those standards to have trust, but that’s gonna take some time to be built.
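
The real C2PA format is much richer, but the core mechanism, hash the media and sign the hash so any later alteration is detectable, can be illustrated with standard-library Python. This is a simplified illustration of the idea, not the actual C2PA manifest:

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # in C2PA this is a real certificate

def sign_media(media: bytes) -> str:
    """Publisher side: bind a signature to the exact bytes captured."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    """Consumer side: any alteration after signing changes the hash,
    so the signature no longer matches."""
    return hmac.compare_digest(sign_media(media), signature)

recording = b"...raw audio from the microphone..."
sig = sign_media(recording)
print(verify_media(recording, sig))          # True: untouched
print(verify_media(recording + b"x", sig))   # False: altered in transit
```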

Karissa Breen [00:54:26]:
And it’s not an easy thing to solve, and I’ve spoken about it with other people on the show historically, but also today. So I wanna talk to you a little bit more about hallucinations. So going back to my point around the sky is orange, like, that’s a hallucination. That’s not true. But then who gets to decide? Well, maybe it is true, because maybe I’m colorblind. So how does that then look in your eyes, Chris?

Chris Lloyd Jones [00:54:49]:
Well, think about what hallucination is. Another way of thinking about it, which I quite like, is groundedness, because hallucination gives the impression that the AI is thinking like a human might. So an AI that’s making stuff up is ungrounded. It’s gone off the context it was trained on. And this happens because large language models, as is in the name, are trained on large bodies of text. So they’ve been trained on what’s online, what’s in encyclopedias, and what’s in other private datasets, and they are grounded based upon the prompt that you give them. So if you tell an AI, pretend you’re, I don’t know, you’ve got an IQ of 180, or you’re from Star Trek.

Chris Lloyd Jones [00:55:21]:
You might get very factual responses, or they might be grounded in sci-fi. If you tell an AI something along the lines of, I don’t know, you’re an eight year old, you’ll get childish responses. The AI will only act based on what it’s seen other people do online. That’s why, if you are very polite to an AI, if you say please and thank you, you get better responses, because on online forums, that’s how people respond to each other. So you can actually prompt an AI to act in a certain way. Now, we have to recognize that AI is super powered predictive text.

Chris Lloyd Jones [00:55:51]:
It’s like typing on your phone. It’s just predicting the next sequence of words. And therefore, I think the organizations implementing AI are responsible for what they produce. But as consumers of these systems, we still have to engage our critical faculties in order to make sure that we are getting value from them, because no system’s infallible.
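
The "super powered predictive text" framing can be made literal with a toy bigram model: count which word follows which in some training text, then always emit the most common successor. Real large language models are vastly more capable, but the predict-the-next-word loop is the same shape:

```python
from collections import Counter, defaultdict

corpus = "the sky is blue and the sea is blue and the grass is green".split()

# Count which word follows which across the training text.
successors: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return successors[word].most_common(1)[0][0]

word = "the"
for _ in range(4):
    print(word, end=" ")
    word = predict_next(word)
# Prints "the sky is blue": a plausible continuation, but the model isn't
# reasoning about skies or seas, just echoing frequencies it has seen.
```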

Karissa Breen [00:56:10]:
As an industry, we’re not fully there on how to manage it, how to put the guardrails in, the ethics, the responsibility of AI. This is growing quite substantially, and there’s not just one use case for it. I mean, it’s super powerful, but it’s like a double edged sword.

Chris Lloyd Jones [00:56:26]:
Yeah. Yeah. Yeah.

Karissa Breen [00:56:26]:
So how do we sort of manage that going forward, where you still need to leverage this, with all the use cases we spoke about here today at Microsoft, but it can also damage us as well? What’s your view on that?

Chris Lloyd Jones [00:56:39]:
So I think the genie is out of the bottle. We have to think about how we govern it. There may be negative impacts, and in an ideal world, we would be able to mitigate them all. But realistically, the technology is going to be implemented, and that means we need to think about the people aspect of this, the process aspect of this, and the tech aspect. So Microsoft has released a number of different responsible AI tools. So you can scan the text that’s coming out for forms of potentially explicit content, potentially racist content, and other forms. And that’s a tech mitigation. We can make our AI more deterministic.

Chris Lloyd Jones [00:57:16]:
So when you ask a question

Bret Arsenault [00:57:17]:
Mhmm.

Chris Lloyd Jones [00:57:18]:
You get the same response every single time. That makes it easier to govern. We can track the inputs and the outputs. Okay. We can adjust our processes so that AI is being used for areas within our level of risk, and so that a human is always in the loop. So if I am a radiologist or if I am a cardiologist, AI might direct me to the red flags that I need to look at on a particular scan, but it isn’t making the final decision. And that, I guess, goes back to Copilot.

Chris Lloyd Jones [00:57:44]:
Copilot is a copilot. It isn’t an autopilot. Human brains need to remain engaged. And before, you mentioned that you’re a millennial. I’m a millennial. Thinking about the next generation coming up, who’ve had ChatGPT since day one of going through university

Janice Le [00:57:59]:
Yeah.

Chris Lloyd Jones [00:57:59]:
For me, the people that work in the tech space today are craftspeople. They know what good looks like. If they’re using GitHub Copilot, they can go, great answer, not a great answer. If I’ve had it from day one, I need support to identify what good looks like so I can become a craftsperson, build my skill, and not just rely on the AI and the answers it comes up with.
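
A minimal sketch of the two tech mitigations Chris mentions above, scanning generated output for harmful content and pinning responses down so they repeat, assuming Python with the openai and azure-ai-contentsafety packages. The endpoint, key, and model name are placeholders, and the audit trail is reduced to a print:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
safety = ContentSafetyClient(
    endpoint="https://example-safety.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<content-safety-key>"),
)

def governed_answer(question: str) -> str:
    # Temperature 0 plus a fixed seed nudges the model toward the same
    # answer for the same question, which makes the system easier to audit.
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=0,
        seed=42,
    )
    text = response.choices[0].message.content or ""

    # Scan the generated text for hate, sexual, violence, self-harm signals.
    result = safety.analyze_text(AnalyzeTextOptions(text=text))
    if any(c.severity and c.severity > 0 for c in result.categories_analysis):
        return "[response withheld by content filter]"

    # Track inputs and outputs so a human stays in the loop on review.
    print(f"AUDIT question={question!r} answer={text!r}")
    return text
```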

Karissa Breen [00:58:19]:
So, Chris, we’re running out of time. However, do you have any closing comments or final thoughts you’d like to leave our audience with today?

Chris Lloyd Jones [00:58:26]:
Yes. If there’s one thing I think we should think about and keep an eye on, it’s the sustainability of AI. I’m really pleased with the commitments that the hyperscalers are making to the sustainability of AI. But ultimately, AI uses a significant amount of energy, and I think, working with organizations like the Green Software Foundation and others, we should be making sure we continue to think about the land use, the water use, and the environmental impacts, so that AI helps us solve challenges and doesn’t create more.

Karissa Breen [00:58:52]:
And there you have it. This is KB on the go. Stay tuned for more.
