The Voice of Cyber®

KBKAST
From Microsoft AI Tour 2024 – KB On The Go | Mick Dunne, Ben Lamont & Helen Schneider, and Leigh Williams
First Aired: March 28, 2025

In this bonus episode, we sit down with Mick Dunne, Chief Security Advisor at Microsoft; Ben Lamont, Chief Data Officer, and Helen Schneider, Commander, ACCCE and Human Exploitation, both of the Australian Federal Police; and Leigh Williams, Chief Information Officer, Information and Technology Executive at Brisbane Catholic Education. Together they discuss the function of the Customer Security Officer team, how the AFP is using AI to protect Australia and its people, and the impact AI is having on education.

Mick Dunne heads the new Customer Security Officer team across Asia, part of a global team of over 40 former CISOs, CTOs and deeply experienced SMEs. They are focused on providing trusted, deep expertise and advice to customers and Microsoft area leadership, and on feeding back into key strategic investments and the product roadmap. Prior to Microsoft, Mick was the CISO at AustralianSuper, bringing a long history as a security leader; AustralianSuper was also one of the first organisations to adopt Security Copilot.

Ben Lamont is the Chief Data Officer at the Australian Federal Police (AFP). In this role, he is responsible for developing and implementing the AFP’s technology strategy and data management initiatives. Ben’s work focuses on addressing capability gaps and leveraging opportunities to enhance the AFP’s operational effectiveness. His leadership ensures that the AFP remains at the forefront of technological advancements in law enforcement.

Helen Schneider is a Commander with the Australian Federal Police (AFP). She leads the Australian Centre to Counter Child Exploitation (ACCCE), which focuses on combating online child sexual exploitation and abuse. Commander Schneider has been instrumental in coordinating significant operations, such as Operation Bakis, which led to the arrest of numerous offenders and the rescue of children from harm. Her work involves collaborating with both national and international law enforcement agencies to tackle complex and sensitive cases, ensuring the safety and protection of children.

Leigh Williams is the Chief Information Officer at Brisbane Catholic Education. With a career that began in teaching, Leigh has held various leadership roles, including CEO, Executive Director, and COO. She oversees digital, information, and IT infrastructure for hundreds of locations and over 13,000 staff. A passionate advocate for digital innovation and education, Leigh is a published researcher and has led keynotes and workshops globally. She holds multiple post-graduate qualifications in Education, IT, Leadership, Management, and Business.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Karissa Breen [00:00:15]:
Welcome to KB on the Go. And today, we’re coming to you with updates from the Microsoft AI Tour on the ground at the International Convention Centre here in Sydney. Listen in to get the inside track and hear from some of Microsoft’s global executives. You’ll get to learn more about the exciting SFI and MSTIC cybersecurity solutions in depth, and you’ll be hearing from a select few Microsoft partners. We’ll also be uncovering exactly how the Australian Federal Police are leveraging AI to detect crime and keep people in our community safer, plus much, much more. KBI Media is bringing you all of the highlights. Joining me now in person is Mick Dunne, Chief Security Advisor, Asia Pacific from Microsoft. And today, we’re discussing the function of the Customer Security Officer team.

Karissa Breen [00:01:09]:
So, Mick, thanks for joining and welcome.

Mick Dunne [00:01:11]:
Yeah. Thank you very much. Thanks for having me.

Karissa Breen [00:01:13]:
So you’re new to the role. Walk us through; you’ve come from the customer side now into vendor land.

Karissa Breen [00:01:19]:
I wanna start there because of the quite interesting observations you shared with me prior to our chat today.

Mick Dunne [00:01:25]:
Yeah. Oh, for me, it was thinking about my career and where I was going. So I’d been in a CISO role for the last five and a half years, and prior to that, I’d spent a long time in senior security roles. So the question for me was do I continue where I was, or do I think about another CISO role in a new organization? And to be honest, that felt a little bit like Groundhog Day. I wasn’t suffering from burnout or anything like it; I was really enjoying my role. But then this Microsoft opportunity came up, and I thought it was a great opportunity to use my skills and experience in a different way. But then also, with the timing around the Cyber Safety Review Board report and Microsoft’s commitments around the Secure Future Initiative, I thought, what better time to move into an organization that’s really, really central to the global ecosystem. And I thought, you know, this is a chance to just do something quite different and to be in an organization that’s really central to what’s going on.

Karissa Breen [00:02:22]:
And it’s good because, like I said, you’ve come from a completely different pedigree. Yep. You bring in a different dimension perhaps Yep. And a different perspective which adds significant value, in that you may understand more nuanced things from being on that customer side perhaps.

Mick Dunne [00:02:38]:
Yeah.

Karissa Breen [00:02:38]:
So I wanna talk a little bit more about the new customer security officer or CSO team and what does it actually mean?

Mick Dunne [00:02:46]:
Yeah. So it was really interesting to me in the recruiting process that what Microsoft were actually after was people with experience, and particularly people with that empathy for the customer. So we’ve been in the hot seat. We know what it’s like. We know what the customer’s trying to do. We understand the customer’s language. So the global team is new. However, historically there have been similar roles within Microsoft for quite some time.

Mick Dunne [00:03:12]:
And speaking with Bret Arsenault yesterday, he said, well, actually, he started in a similar role probably about twenty years ago. So it’s one of those evolutions where Microsoft recognizes that where we have people like me in place, there’s a stronger and deeper connection with our customers. And, of course, that enables the Microsoft sales machine to go and do what it does. But we also play a role in building that long term strategic trust, thinking about, you know, what are our customers doing? What are their tactical challenges? What are their more strategic challenges? And then being able to sort of bring some of those insights back into the broader Microsoft machine. But it’s just really, really critical for us. And what I find is when I go and speak to customers, there’s an advantage. I know many through the community. But then also, when you sit there and have a conversation and you share the fact that we’re not remunerated by sales, which is pretty rare inside Microsoft, then customers are suddenly like, great.

Mick Dunne [00:04:08]:
I’m not talking to a salesperson. I’m talking with someone who understands the challenge, and then we can actually have some really deep and meaningful conversations.

Karissa Breen [00:04:16]:
Okay. So there’s a couple of things that are really interesting that I wanna get into. So you’re not talking to a salesperson. So are you saying that people are instantly disarmed?

Mick Dunne [00:04:24]:
Yeah. Look, to a degree. And, obviously, you know, they’re not just disarmed by the fact that I’m not a salesperson. You know, they also know my background; usually part of the introduction is, you know, former CISO. Sure. I joke about being a recovering CISO. And, you know, look, I understand the challenge. The area that I look after, you know, in Australia and New Zealand is focused around what Microsoft calls enterprise commercial, so all our major organizations and financial services.

Mick Dunne [00:04:52]:
So I’ve worked in, you know, those sectors. I’ve worked in a range of sectors. So to be able to go and have those conversations with the background that I’ve got is, you know, really, really valuable.

Karissa Breen [00:05:02]:
And you said before, Mick, that with your background, you inject some of the insights back into, to use your words, the Microsoft machine.

Mick Dunne [00:05:10]:
Yeah. So part of the expectation around the role is coaching internally. So we will spend time, you know, and part of the Secure Future Initiative is the expectation that everyone in Microsoft sees security as priority number one. People’s performance is measured on that. So part of the role we play is actually educating people about what the Secure Future Initiative is. We’re also helping teams understand, you know, what is a day in the life of a CISO? What does it look like for their teams, their heads of, or their general managers, their leadership team? And helping them understand that maybe I don’t need to go and soak up the CISO’s time. Maybe there’s other ways to access the organization. How do I go and have that conversation? How do I do that in a meaningful way that resonates with the customer organization rather than turning up and saying, hi.

Mick Dunne [00:06:00]:
I’m from Microsoft. I’ll solve all your problems. And it doesn’t always land well. And by the way, I’ve been on the other end. I know what that conversation sounds like. So that trust, that empathy, is really, really critical. And often we’ll interject with the team when they say, hey, I’m about to go and approach a customer.

Mick Dunne [00:06:18]:
I’m gonna do it in this way, and I’ll go, that’s not gonna land well. You need to think about this. You need to understand the CISO perspective and then reconsider how you might go and engage with that organization.

Karissa Breen [00:06:28]:
Okay. So a couple of things in here that I wanna talk through. So you mentioned the word empathy. Yep. So what does empathy look like in your eyes?

Mick Dunne [00:06:35]:
I’ve said this internally at Microsoft: my measure of success is if I’m enabling a CISO and their team in a customer organization to be more successful. So that might be through the Microsoft capability. It might be just through sharing some experience or a way I’ve approached the problem. It may be through an introduction. So the team that I’m part of is a global team. There’s about 44 of us globally. We’ve come out of CISO roles.

Mick Dunne [00:07:02]:
There’s some former CTOs and CIOs in the group as well. Right. We’ve had some people that have had deep experience in public policy. So we’ve got this range of skills and a range of industries in the group. So, again, sometimes the power is in an introduction, to say, actually, I don’t really know your sector, but I know someone in the team who does. Or even helping with introductions back into the central Microsoft CISO organization, or even direct referrals into our product engineering groups where they can talk with, you know, a deep subject matter expert. So often, I think that’s the way to really, really help. As I said, we’re not sales remunerated.

Mick Dunne [00:07:39]:
If I’m talking about a particular product, then I’m not really doing my job. Right. We’re there to talk about the problem at the high level. And then through introductions, through engagement, referring people, often that’s the way to give some organization the help that they need.

Karissa Breen [00:07:54]:
So from your point of view, what do you think it is that people have perhaps missed in the past, like, on the vendor side? Because you said before, like, you know, what does a day in the life of a CISO look like? Yeah. You know, perhaps people don’t have your background. So maybe they’re not even acutely aware that they’re missing something. Is there anything you can share?

Mick Dunne [00:08:10]:
I think it’s interesting. And COVID really highlighted it; you know, there was a lot of pressure on sales teams. So there was a lot of direct reach out, and, you know, cold calling potentially reached a peak. I was getting phone calls in the middle of the night from vendors offshore that were trying to access the market, which is pretty offensive if you ask me. And then you’d get the call from the vendor that you’d never spoken with before, and they’d be telling you that they’ve got the solution to your problem. But they don’t know you. They don’t know your organization. They don’t know your priorities.

Mick Dunne [00:08:39]:
They’re assuming that you don’t have a plan. So, you know, the fact that I’ll turn up tomorrow and say, hey, I can sell you this, that’ll solve a problem, completely ignores that I’ve got a budget. I’ve built a plan. I’ve sought funding for particular reasons. I’m trying to close control gaps and manage risk. So this idea that I can turn up as a vendor and I’ll sell you something in the next couple of months because I’ve got a sales deadline or a target completely misses the point. So understanding, you know, planning cycles, realizing that, you know, I’ve got a plan that’s very clear. I’ve told some vendors in the past, you know, yep.

Mick Dunne [00:09:15]:
I like what you’re offering, but it’s in year three of my plan. So come back in two years and we can talk, which is pretty hard for some vendors to hear, but the industry is maturing. The expectations on a security group are higher than ever from a business perspective. We don’t have unlimited budgets. We’ve got to be really clear about our strategic plans, what we’re addressing and why, and where that’s adding value to the business. And that doesn’t always align with what a seller wants to hear when they’re knocking on your door. So I don’t know if I’ve answered the question there, but it’s always very interesting when you get these calls, and it’s like, that doesn’t land well. So those strategic insights and what we can offer and thinking about our organization, we’re not really focused on the current financial year.

Mick Dunne [00:09:59]:
Mhmm. You know, we will get rolled out in support of some tactical initiatives. We might help the bigger Microsoft machine get access to a customer, but that’s not our priority. It’s about building those longer term relationships, where if Microsoft was to see a benefit, it could be twelve or eighteen months away.

Karissa Breen [00:10:16]:
The interesting thing that you shared about the CSO team is that you said the people that they’ve hired, like yourself, have actually come from industry or government or somewhere Yep. That adds that dimension perhaps that maybe other vendors don’t. So what do you now see moving forward with the team? I know you said it’s nearly established, but is there anything you can share moving forward as we enter into 2025?

Mick Dunne [00:10:39]:
It’s actually been a real joy to join the team. You know, we’ve all got a level of imposter syndrome. I had the opportunity to go to Redmond for an onboarding thing and meet all these people in the room from Fortune 500 companies, you know, massive companies from all around the world. And everyone’s been trying to solve the same problems. So that was really good. And then to hear the different perspectives in that community was so helpful. And the depth of experience across the group is so valuable. But then the ask is that we challenge Microsoft.

Mick Dunne [00:11:11]:
So, you know, we challenge the perspectives that Microsoft are working on, challenge the assumptions. So Microsoft has deep relationships with many, many organizations, talking with Judson and the like. But bringing our own perspective into the organization, making our own connections within the organization, and challenging the thinking of, you know, a significant software vendor in the world playing a critical role in infrastructure. And by the way, at a critical time for Microsoft in the world from a security perspective, it’s a great time to go, you know what, you could think about this slightly differently. And I think that’s the opportunity that we have.

Karissa Breen [00:11:50]:
So how do you go about challenging a big machine like Microsoft in a way that is conducive to the culture, or so that people don’t maybe feel offside by, yeah, here’s Mick, he’s just come in and he’s saying this. What would be your way to go about that, to find that balance perhaps?

Mick Dunne [00:12:08]:
Yeah. So part of our role, and it’s, you know, called out, is that we’re expected to bring our own observations back: what do we see as missing in the market, what are some of the use cases that aren’t being fulfilled. And likewise, when we’re out there talking with our customers, we’re getting feedback all the time. So it’s an expectation of our role that we bring that feedback in, and then we will take that feedback through to product engineering groups or into the office of the CISO. And then any work is gonna go through a prioritization about, you know, is this a feature request? Where does that fit into our priorities? So we’re still, I suppose, in that not quite storming and norming phase, but we’re absolutely thinking about, you know, how do we bring that value to the organization. Culturally, Microsoft is really open. I’ve been surprised about how good the culture is, how open the culture is to feedback. And then coming in at the level we have, we’ve been treated really, really well in terms of, you know, we’ve been brought on for our experience.

Mick Dunne [00:13:08]:
There’s a value afforded to that experience, and people wanna hear from us, and they wanna learn from us. So that’s been great. But we’re still sort of shaping up how it’s gonna work, and we’ll refine that over time. But the function’s probably, I’d say, twelve months old, although many of us have come on since. You know, I’m six months in. And we’ve still been hiring since then.

Karissa Breen [00:13:29]:
Why are you surprised?

Mick Dunne [00:13:32]:
Because culture is hard. And big organizations will have subcultures that, you know, might exist around a team or might exist, you know, under a particular divisional leader. So it’s exceeded the expectation. And I came in with pretty open expectations. I’ve never worked inside a vendor before. But to see the level of effort and attention that they give to culture, the number of mandatory training courses I’ve had to go through, some of which are repetitive of what I’ve done in other large organizations, but from some I’ve learned so many new things coming into this organization. And, you know, the focus around diversity and inclusion, the focus around wanting people to be heard, you know, such a focus around growth mindset. Sure.

Mick Dunne [00:14:18]:
It’s actually real. It’s not just the glossy brochure that goes out in the public statement. You come inside and you see it every day. And that’s been quite eye opening. And the fact that a major organization is doing that, and this is where, you know, I’m referring to the Secure Future Initiative Sure. And the cultural change around security. Every organization is thinking about how do I improve security culture. Microsoft is doing it at a scale that’s never been seen before.

Mick Dunne [00:14:46]:
They’re doing it in a way that maybe hasn’t been tried before. And when I’m talking with customers, there’s a high level of interest to understand, well, what is Microsoft doing? How are they approaching this? How might my organization think about trying to drive something similar? So interesting.

Karissa Breen [00:15:01]:
Curious to know, you said what’s missing in the market? So what is missing?

Mick Dunne [00:15:07]:
Simplicity is missing.

Helen Schneider [00:15:08]:
Sure.

Mick Dunne [00:15:09]:
I think that’s the real challenge. So, again, if we go back to your question earlier about the vendors, everyone’s got a solution to your niche problem and they’re coming in and, you know, with this little tool, I’ll solve that.

Karissa Breen [00:15:20]:
You mean point solutions?

Mick Dunne [00:15:21]:
Yeah. Point solutions. And then you’re left with this integration challenge.

Karissa Breen [00:15:25]:
Sure.

Mick Dunne [00:15:25]:
So I don’t think that there’s much missing. There’s always gonna be a new solution to the emerging problem. But what is missing, and, you know, a number of organizations, Microsoft included, are sort of going down this platformization approach.

Karissa Breen [00:15:43]:
Sure.

Mick Dunne [00:15:43]:
It’s not gonna solve every challenge, but I think we’ve gotta work towards simplifying the conversation around security. Certainly, when we talk with our business leaders and our boards, you know, they don’t wanna hear about the complexity of the problem. They wanna hear things in simple terms. So the simpler we can make this problem, without dumbing it down to nothing

Helen Schneider [00:16:07]:
Sure.

Mick Dunne [00:16:07]:
I think that is the challenge. But I don’t think we want for much. I think one of the things that I call out is that, you know, we do talk about cyber burnout. It’s a real challenge in this industry. But in some ways, we’ve got all the things that we’ve been asking for. So years ago, we weren’t getting the support. We weren’t getting the executive level engagement. We weren’t talking to our boards.

Mick Dunne [00:16:28]:
We weren’t getting funding. To a degree now, we’ve got all of those things. And in some ways, it’s overwhelming. But we’ve got to think about how do we change our language? How do we communicate in simple terms? How do we take advantage of the opportunity that we’ve got and make the most of that on behalf of our organizations? And that’s somewhat of a new challenge.

Karissa Breen [00:16:48]:
In terms of customers in Australia, is that the same sort of chatter that you’re hearing? I interviewed someone on the customer side who talks about customization Yeah. Reducing tools, complexity, more integration, interoperability. Yeah. Is that the same sort of thing you’re hearing across some of the customers that you’re speaking to in Australia or Asia Pacific?

Mick Dunne [00:17:05]:
Yeah. Absolutely. And cost. Cost is always a challenge. So with the global economy the way that it is and some uncertainty, security teams are still being asked to, you know, manage costs and be more effective. So some of the platform plays, and by the way, you know, if you’re a major organization, and I heard Bret Arsenault say this yesterday, there’s no expectation that you would use Microsoft end to end for absolutely everything. Sure. You know, that’s not a reality.

Mick Dunne [00:17:30]:
In fact, you know, we don’t even have capabilities for every requirement. But depending on where your organization sits and your level of maturity, you know, there’s value to be had from looking at that platform approach. You take away many of those integration challenges that really bring organizations unstuck. And if you can simplify your environment, then that means you can simplify the training requirements for your team. You can make their life easier. So if you think about it from your team perspective and your people perspective, how do you make their life easier? How do you allow them to focus on the really challenging problems rather than the day-to-day mundane stuff that cyber teams spend a lot of time on, which doesn’t allow them to get to those higher order problems. So, not to jump to AI, but there’s a bit of excitement about what we can get out of some of the technologies that are coming along that can shift the way that we work and move the humans to higher level activities.

Karissa Breen [00:18:25]:
So you’re focusing on cost a little bit more. So we spoke before: obviously, people have got all these point solutions that perhaps aren’t integrated, aren’t being leveraged properly, it’s not end to end. Now I know, like you said, you can’t get the one vendor that does everything. Yeah.

Mick Dunne [00:18:38]:
Yeah.

Karissa Breen [00:18:38]:
Maybe you could reduce it.

Mick Dunne [00:18:39]:
Yeah.

Karissa Breen [00:18:40]:
But would you say there’s a lot of money being spent on point solutions that maybe aren’t helping at all, aren’t moving the needle?

Mick Dunne [00:18:46]:
Well, yeah. I mean, often you’ll talk with teams and you’ll find that they’ve got a capability, and then you tease into it: well, how much of this capability are you actually using? Are you able to use it proactively? Are you using it reactively? Are you using the full capability? Where are the overlaps in that capability with other products you might have? And when you start digging in, you’ll find out that maybe we’re not using all the capability. Maybe we’re only using it reactively. So really, you know, when you’re talking, and I used to do this with my team all the time, challenge them about what they’re actually using. But then I think the thing that we often forget is that there’s an overhead with every vendor. So if you’re in a highly regulated industry, there’s your vendor governance, your vendor oversight. But even go back to when you’re doing your market scan and you’re looking at a million tools to find the Rolls Royce tool, then you gotta go through legal and procurement. You gotta do a negotiation to get there.

Mick Dunne [00:19:35]:
There’s a cost that we often don’t quantify in all of that effort. Whereas, if you’ve got the ability to use some of your platform vendors Right. And there’s others outside Microsoft, then you’ve got an existing contract, you know. So then you can sort of limit the activity to, well, actually, I wanna look at the capability of the product. Is it good enough? Does it meet my use cases? Yes, it does. And then you’ve got that simplicity from the legal and procurement process, which means instead of maybe waiting three months in some organizations, or even six to twelve months in others, before you can go through all those governance hoops to even start to deploy a capability, you can actually move to close down a risk exposure in a quicker way. So I think that’s a real challenge.

Mick Dunne [00:20:20]:
And often, security teams aren’t always recognizing the overhead that comes in the background that does cost an organization money. It may not directly cost the security function, but it’s certainly a business cost.

Karissa Breen [00:20:32]:
So, Mick, we are running out of time. So just to close up, are there any closing comments or final thoughts you’d like to leave our audience with today?

Mick Dunne [00:20:39]:
Oh, look, I still think it’s just such an exciting industry. And I mean security specifically. We’ve got a great challenge. As much as I’ve stepped out of the CISO role, it was a job that I loved. I love the people in the industry, but I also love what we can do as security leaders to help grow those people, to help them think about the problem on a wider scale. So for me, that’s the excitement. And now I get to do that, I suppose, on a larger level than just doing it in my organization or sharing across, you know, my former industry sector. Now I get the chance to go talk to different people in a range of industries.

Mick Dunne [00:21:15]:
I get to learn from them. Sometimes I get to drop a nugget that someone really values, and that’s nice. But I suppose it’s just that ongoing learning journey for me. So I’m having fun, but I’m getting to see a whole bunch of things and obtaining a perspective that I didn’t always have before.

Karissa Breen [00:21:29]:
I personally appreciate your perspective, so thank you so much for your time. I really appreciate it.

Mick Dunne [00:21:32]:
Thank you. Enjoyed the chat.

Karissa Breen [00:21:39]:
Joining me now in person is Ben Lamont, Chief Data Officer of the Australian Federal Police, and Helen Schneider, Commander, ACCCE and Human Exploitation, also at the AFP. And today, we’re discussing how the AFP is using AI to protect Australia and its people. Ben, Helen, thanks for joining and welcome.

Ben Lamont [00:21:57]:
Thank you.

Ben Lamont [00:21:58]:
Good to be here.

Karissa Breen [00:21:59]:
Okay. So, Ben, I wanna start with you first. So walk us through how the AFP is using AI to protect Australia and its people.

Ben Lamont [00:22:06]:
Yeah. Look. I think giving a bit of context is probably the easiest way to start. We’ve got a huge amount of data that’s in front of us. We’ve got a huge amount of jobs that are coming into the AFP, so really we have no choice but to lean in, because it’s beyond the human scale and becoming more and more beyond the human scale. So AI is really the solution to a lot of these issues that we’re facing with the amount of data that we’re dealing with and the complexity of that data.

Ben Lamont [00:22:28]:
So we’re leaning pretty heavily into AI, and doing it in a responsible and ethical way, because we police by social license, so it’s really key that we do that. But we’re using AI across our business, mostly for lower cognitive tasks like translation and transcription, for processing large amounts of video that we’ve collected lawfully, and for our telephone intercepts and others. So I think what’s really key for us is that we’re pushing it across all of that area where we’ve just got a deluge of data and just need to have some processing of that data. Mhmm. And then having the human in the loop of that process and, you know, I can go into more detail about that, but really making sure that the prediction that an algorithm makes is separated from the decision of a human being. So

Karissa Breen [00:23:16]:
Before I get to you, Helen, I wanna go back to you, Ben, just for a moment. Typically the people that I interview are businesses. You know, I’ve previously worked in a bank, and it’s one thing when you lose your money, for example: you can get it back. But when you’re dealing with the work that you’re both dealing with, it’s a little bit different. There’s a lot more risk when you’re dealing with, you know, people’s lives. So how does that factor in when you’re talking about, you know, the prediction side of things, in terms of the AI potentially hallucinating, coming up with something, and verifying to make sure, well, does that make sense with the response I’m getting? Talk me through that.

Ben Lamont [00:23:49]:
Yeah. So that’s a really key component to this: we have to have assurance in those processes. You know, we’re very good at using scientific methodology in our forensic area, for example, so we’re taking that same type of conceptual process and building it out. So generative AI, we wouldn’t use that where there’s an absolute key risk, because it could hallucinate, or it could be that we have false negatives and miss stuff. So we would use more of a traditional kind of algorithm there, where we can have way more assurance on the inputs and the outputs and the work that that algorithm would do against that dataset. And then going back to the original dataset, so that kind of lineage: where we may transcribe a telephone intercept, for example, we wanna make sure we can go back to the original source and listen to that before we make any decision that has an impact on someone’s liberty or an outcome of a case. So

Karissa Breen [00:24:41]:
Because I guess it’s not like, you know, as people would know, you just sit back and go, okay, it’s gonna do it all for me. But what was coming to my mind, Ben, as you’ve been speaking, is it’s kinda like a conveyor belt; it’s making that a lot faster. By the time the product comes out, it’s just increasing that production line because you’re leveraging the AI and Copilot to do that.

Ben Lamont [00:24:58]:
Yeah. Exactly. And without that, it is to the point where we would need thousands more people to actually allow that to happen. So this is actually giving us a force multiplier and giving us an ability to look at more data than we would have been able to get through otherwise.

Karissa Breen [00:25:14]:
So, Helen, I wanna flip over to you now, and I want to discuss how the work at the AFP is addressing challenges related to, you know, gen AI and deepfakes, which I wanna get into a little bit more with you, but also specifically what you’ve been doing on the ACCCE initiative. So walk us through that. What does that look like?

Helen Schneider [00:25:32]:
Well, just to give you some context, the Australian Centre to Counter Child Exploitation, the ACCCE, really leads the national coordination of online child sexual exploitation referrals that come into Australia. And we coordinate those out to our state and territory police and to members in the AFP. So, as Ben has described, the data we’re seeing there has massively increased. In the last financial year, we received just over 58,000 reports into the ACCCE, and that was an increase of 18,000 from the previous financial year. So as you can see, the data that we’re having to deal with, and it’s not just the volume, is really increasing for our investigators. So the issue is that we’re starting to really see AI-generated content in all of this material as well.

Karissa Breen [00:26:26]:
Okay.

Helen Schneider [00:26:29]:
So there’s some real, I guess, challenges for our victim identification capability, where, you know, we don’t wanna be wasting time looking at images where there may not actually be a real child, and then not looking at the images where there are. So with the photorealism of AI, that is one of our risks. So whilst AI poses a growing threat for us, it can also be a solution to some of our problems. So we’re looking at partnering with industry in relation to how we can improve our processes to deal with the scale of material that comes in. And we’re also looking at how to save our investigators from having to look at volumes and volumes of this material. So we’re looking at tools, AI tools, that will prevent that exposure for our members, help them process material that comes in in huge volumes, and potentially help us intervene quicker in the process. So if we can predict behaviors that might be equivalent to online grooming or something like that, then we can disrupt that. So there’s real opportunities for us with the deployment of AI tools.

Helen Schneider [00:27:54]:
But as we’ve said, we fully operate by social license. So we do have strong oversight in Australia, and we really work closely as a law enforcement agency to make sure that we’re employing responsible AI.

Karissa Breen [00:28:24]:
So a lot of people I’ve interviewed on my show, this one that you’re on today, I’ve asked them, like, you know, how are we managing deepfakes? Like, from a social perspective, people now try to point at, you know, Facebook and Meta and friends, but then, like, well, we can’t monitor it all. It’s too much. And to your point earlier, you know, people sitting there manually looking at it does quite a lot of psychological damage to people. So, I mean, how can people start to discern if it’s AI generated or not AI generated? Because, again, I mean, you guys obviously have the investigative background. You could probably say, well, you know, it looks suspicious because of these reasons. But the average person doesn’t have the capability or the nous like you both have to be able to discern that. And equally, it’s quite exhausting if you had to look through every single image online to say, is it fake or not fake.

Helen Schneider [00:29:14]:
I think, you know, some of them are obvious. Sure.

Karissa Breen [00:29:18]:
Of course.

Helen Schneider [00:29:18]:
I think, you know,

Helen Schneider [00:29:19]:
we all read things on social media and go, well, that’s probably a deepfake. I think for us, a key part of the work we do through the ACCCE is all around prevention, particularly because we see a lot of our, you know, children in Australia, you know, they’re on social media, and we can see online child sexual exploitation occurring, you know, through things such as financial sextortion and the use of, you know, manipulation of photos that are quite benign. I think, you know, one of the big things for us is making sure children really understand how these accessible technologies that they use themselves can be used for nefarious purposes, and, I guess, you know, building that critical thinking around the impact of that. And one of the things that we talk about a lot is the fact that people might not realize, but if you use, you know, an AI tool, for example, to turn a completely benign image into a sexualized image of an individual, it’s still child abuse material regardless. Right.

Helen Schneider [00:30:32]:
Okay. I think, you know, sometimes people might think, well, if I generated it myself, it’s just me, or, you know what I mean.

Karissa Breen [00:30:38]:
Or sketched it. That’s another one I’ve heard.

Helen Schneider [00:30:40]:
Yeah. Yeah. So I think it’s really important: prevention, education, awareness is becoming a real capability in its own right as we tackle some of the challenges with technology.

Karissa Breen [00:30:50]:
And would you both say, because of the AI and technology, that conveyor belt getting a bit faster, that could be the change to preventing a lot of these crimes? It could be that couple of seconds, perhaps, that could make a difference. Is that what we’re gonna start to see more of in terms of, you know, leveraging AI and how you’re going about it, etcetera? Do you have any sort of predictions on that front?

Ben Lamont [00:31:15]:
I think from that point of view, we have a criminal cohort who are very early adopters of technology. They have been throughout most of society, and that continues. AI is no different, so we need to counter that with AI, because of the speed and harm that can be done. With criminal use of AI, they have a longer reach and a shorter turnaround time, so we have to deal with that with AI. So that means we need to lean in on it and start countering it that way. And I think, going to your point around seeing fakes, you know, there was a lot of work that was just done in the media around Cyber Monday and Black Friday, looking for scam websites and everything that people have faced, and a lot of messaging around that. That is the same again. They can put those up way quicker, and we need to counter that with AI, not just the AFP, but across the whole of government, to actually deal with that, and hence why we’ve been putting messages out there about how to identify those websites, how to identify those images. And I’m sure there’ll be more messaging coming up with the election, really going to what’s the credible source of that information and where it is coming from. And then initiatives like Adobe, Microsoft and the BBC doing watermarking in relation to putting images out there, putting a cryptographic watermark inside imagery so that you can say this imagery was captured by the BBC and put out by the BBC, and you can do those types of checks. You know, that and many others will become critical over time as well, because as the deepfakes get better, you need to look at where the source of that information is coming from. So

Karissa Breen [00:33:00]:
And they are gonna get better. They’re getting better, like, every day, and, like, it’s even concerning; I’ve got a security background, you’ve both got your crazy cool backgrounds, and it’s gonna be worrying for, like, the average person to be able to combat that. And then the next point I wanna just quickly touch on is ethical considerations. Now, I ask that, I mean, having been speaking to people all over the globe about this, and it’s not an easy one to answer. So you don’t have to answer in terms of a binary response, but it’s just about hearing your thoughts with the work that you’re doing. As I mentioned, it’s very different to, like, a bank, get your money back.

Karissa Breen [00:33:35]:
But with your type of work, it’s people’s lives that we’re talking about, which you just can’t get back. So I’m really keen to hear from your perspective. What does that look like from your point of view?

Helen Schneider [00:33:44]:
I guess, you know, we’re a signatory to the ANZPAA AI principles, so the Australia New Zealand Policing Advisory Agency. Yep. And, you know, there’s some key things that we subscribe to there. I think, you know, really being transparent around how we’re doing things. But, really, what Ben touched on before is, you know, we have that human-in-the-loop approach

Karissa Breen [00:34:09]:
Sure.

Helen Schneider [00:34:11]:
to AI. And that’s really critical because, ultimately, we have to be accountable for decisions, and they have to be shown to be well considered, particularly when they’re not just business decisions but decisions that impact the safety of the community. And our community has quite an expectation that we are compliant. So I think, you know, those kinds of core values are critical to how we move

Helen Schneider [00:34:41]:
in the space. We often talk about, you know, that we police by consent and we need the social license of our community. But it is inherently true that, you know, if we don’t have the trust of our community, the challenges we’re facing from a human perspective now are not something that we can respond to on our own. So when I talk about prevention and education, and talking about how these tools might be used for that, it goes to that one point: I need people sitting in their lounge at home Sure. To be as accountable as I am to having a conversation about online safety. For our children, the online world, this is real for them. It’s like this room we’re sitting in right now and having this conversation.

Helen Schneider [00:35:28]:
Yeah. True. Like it was for me when I was a child, true, with my parents. So how do we make that experience positive and safe? That’s how I look at it from my point of view. Obviously, from the enterprise side, you know, we have that really important responsibility to be ethical, and because of the fact that, you know, we need partnerships.

Helen Schneider [00:35:50]:
We need partners to respect us as well. Not just our government and our community, but the corporations that we partner with to fight crime these days. So we need partners. We wanna be that partner of choice at the AFP, and to be that partner of choice, you have to be seen as principled and ethical in how you conduct your business.

Ben Lamont [00:36:08]:
Yeah. And we’ve had to change the way that we do this, because we used to do it around procurement, when we buy a tool, but now you don’t have to procure these tools. Some of them are within systems and processes, so we’ve strengthened our governance internally; we have a responsible technology committee now, and it is about responsible use of technology. We’ve got more conversations with our university partners. We’ve got the AI for Law Enforcement and Community Safety Lab at Monash University. These things are really leading to a more robust and a lot more nuanced conversation about the use of AI. So we know we can’t just walk away from AI; it wouldn’t keep the public safe. So we have to be able to find that balance, where the community expectation is that we use it, but at the same time without overreach or overstep.

Ben Lamont [00:36:57]:
So

Karissa Breen [00:36:58]:
So speaking of that point around overreach, what about concerns around biases, for example? I know that’s coming up a lot in my interviews Yeah. How to manage it. So do you have any sort of commentary around that, obviously, with the work that you’re doing?

Ben Lamont [00:37:10]:
Look, I think this is where understanding the AI itself is key. Whether it’s a black box capability, and even with, you know, generative AI, where you may not be able to look at the model itself, you can do testing on efficacy and bias. It is absolutely critical. We are using larger models built on more than just law enforcement holdings, because that could skew the data where we need to have a more generic model. So I think those things, and that’s where our partnership with Microsoft and others is really key, because we wanna make sure that, just like they’re doing now with other industries, they’re building these tools to remove a lot of that bias and doing a lot of work on that. We need to understand that and have that really open communication, but it is not just a law enforcement single dataset; we’re looking at a way broader societal cross-section dataset, which is critical. You know, some of the cases that have happened overseas especially, we wanna make sure we avoid that here in Australia, because it is a risk, and it’s something that we have to be really cognizant of and something we have to do serious testing around as well.

Ben Lamont [00:38:13]:
So the good thing is, with most of the other models, you know what your training dataset is, you know what process you’re looking at, so they become a lot easier when you’re talking about the more kind of standard AI models. But we also don’t use it in certain areas, because the community expectation is not that. So it is that balance.

Karissa Breen [00:38:36]:
And do you think by leveraging that as well, it would eradicate, like, hallucinations, for example, if you’re looking at all of the sources and trying to get sort of more of a general consensus? Because I know you’ve got LLMs, and you’ve got, you know, like, SLMs and things like that. And, obviously, it’s just how to get more well balanced and find that equilibrium.

Ben Lamont [00:38:53]:
Yeah. Definitely. I think there’s a few things we can do. There’s adversarial processes, where you’re actually training AI to check the AI as well, and that human oversight is obviously really critical. But then also, like you’re saying, it’s about what tool is gonna be suitable for that job, not just a kind of pure technical output, but also what is the output that we’re putting in front of somebody. We’ve done a lot of training internally to train our people, and especially our SES, our senior executive, to know the limitations and advantages of AI, because it’s no longer this back office kind of tool that sits there with the data scientists and data engineers using it. It’s actually now in the hands of frontline members to help make decisions. So that means we are doing more training across the organization, not just in our technology areas, to understand the limitations and risks with AI, and, for that specific use case with a specific outcome, what it actually means and what is the risk of it being wrong, and always going back to that original data and always making sure there is that human oversight in that process, especially when we start talking about, you know, warrant activity or other activity that will have an impact on a member of the public.

Ben Lamont [00:40:14]:
So

Helen Schneider [00:40:15]:
The presentation of evidence in a court of law as well. You know, sitting in front of a jury and talking about how to treat that evidence and being able to explain it. And if there was a tool used, again acknowledging that use, but also showing what was the human piece to verify, you know, the validity of the activities, for example.

Ben Lamont [00:40:34]:
And the good thing is that it’s not just us looking internally at ourselves. There’s a number of oversight committees through parliament that we have to present to, the PJCIS and the PJCLE, so for intelligence and security and for law enforcement. We also have Senate estimates, obviously, plus the Ombudsman and a number of other kinds of oversight committees and bodies that are making sure that we are in the right place and making sure that we’re doing the right thing as well. So it’s not just looking internally within the organization. There’s all the other mechanisms that government has put around us to make sure that we are being transparent and we are doing the right things.

Karissa Breen [00:41:14]:
So we are coming to the end of our interview, we’re running out of time, but I would just like to ask you both, do you have any closing comments or final thoughts you’d like to leave the audience with today?

Ben Lamont [00:41:22]:
Look, I think for me, AI is an exciting future, and there’s a threat to it as well with the criminal use of AI. I think it’s only gonna evolve. This is a long game, I think. This is not just about the next one or two years. This is about the next five and ten years, so we’re gonna look at that horizon, about what we need right now, but also start investing into those longer term horizons as well. But, you know, the AFP is leaning in to AI in an ethical and responsible way.

Helen Schneider [00:41:54]:
So I think my thoughts are that, you know, technology is a bit of an enemy for us, but it is also a huge opportunity, and I think what’s really important for us as an agency, and this is where we are leaning in, is, you know, the partnership space. So partnerships are critical for us, whether it’s industry, you know, government partnerships, or it’s tech. So we need to explore, you know, what are the partners that we need now, and what partners are gonna be there ten years from now that we’re not thinking about right now because of the tech challenges. So, yeah, it’ll be an exciting time, I think, the next one to ten years.

Karissa Breen [00:42:36]:
Joining me now in person is Leigh Williams, Chief Information Officer and Information and Technology Executive from Brisbane Catholic Education. And today, we’re discussing the journey with AI. So Leigh, thanks for joining and welcome.

Leigh Williams [00:42:47]:
Thank you.

Karissa Breen [00:42:48]:
Now Leigh, I’m aware that you’ve just recently jumped off a panel and you’ve come here to do our interview. So tell us, what did you discuss?

Leigh Williams [00:42:56]:
Sure. So there were a couple of elements to the discussion. Firstly, it was around our rollout of Microsoft Copilot for Microsoft 365 and the journey that we’ve been on, how we got there, and what the main beneficiaries were of actually rolling out something like this. The most recent one that I did being a data analysis session, they were asking about the data and the proof behind the data, which is great, to be able to question the evidence. And we really talked through what we were seeing, both from a teaching perspective and the workload reduction, but also from a student side in their social and emotional well-being as well. So then from there, it was really around what else can we do to keep furthering this work, and what is 2025 gonna look like for us and beyond.

Karissa Breen [00:43:45]:
So in the keynote today, I noticed that someone was saying, in terms of the education sector, that by leveraging Copilot, for example, how much productivity they got back, how many monotonous tasks were removed, etcetera. So what are your thoughts, with the role that you’re doing, on how AI can help transform the education sector? Is there any sort of insight you’d like to share?

Leigh Williams [00:44:07]:
Sure. So to me, when you talk to any educator, they don’t get into the profession to do admin work. They get into the profession to be with students. They love their students and care for their students. So by enabling technology that takes away the administrative load, the administrative burden, the planning burden that’s placed on them, it gives them more time with students and doing what they’re passionate about, which is teaching. So to me, that is just the greatest outcome that we can see: teachers spending more time with students, because that in turn is gonna help students grow and flourish both academically and socially.

Karissa Breen [00:44:46]:
And would you say if you were to zoom out, that would be the biggest opportunity in terms of leveraging AI in the education sector?

Leigh Williams [00:44:52]:
I think the biggest opportunity, which we’re still developing and working on and have been piloting, is directly to students. So what we’ve already been trialing, and what we’re going to scale out next year, is what we’re calling hyper-personalization, because it is impossible for a teacher, if they’ve got 25 students in front of them, to teach a concept and teach it in 25 unique ways in forty minutes, to tailor it to where every single student is at with their learning, the modality of learning that they’re used to, and, if they have any specific learning needs, to cater for that across 25 individuals. A teacher can’t be expected to do that, and they don’t. So AI can leverage that power and say, well, give me the lesson plan and I will personalize it for exactly where every single child is at, and give them their lesson in a way that’s gonna best suit their learning mode. So to me, that is just a game changer for how we can actually educate students.

Karissa Breen [00:45:52]:
That’s an interesting observation. When I look back, I mean, I finished school like fifteen years ago. I was obviously more of an English student than a maths one, but I don’t think I did well at maths because I had the same teacher still in year ten, eleven, twelve. My parents said, like, the teacher doesn’t quite explain it in a way that Karissa understands. And I’m like, well, we can’t change her for whatever reason. And I think, like, leveraging something like this perhaps could have changed my experience. Maybe I could have become a mathematician. Who knows? Yeah.

Karissa Breen [00:46:19]:
But I didn’t get that opportunity. Yeah. So in terms of my own personal experience, I was just thinking it through as you were speaking, and I was like, I really wish I had that opportunity. So what do you see moving forward in terms of, you know, the impact that’ll have on students now?

Leigh Williams [00:46:35]:
Yep. So I think, first and foremost, engagement, because students are not gonna learn unless they’re engaged. So we start with engagement and getting them just into a learning environment, whether that’s within a school context or when they’re at home or anywhere else. So that engagement’s number one. And then secondly, engaging in the concepts, the knowledge, and the skills that they are actually learning, and being able to have agency and voice over the learning that they’re actually doing. So what we’re seeing with students is that they are just taking AI and running with it. And so, like, I used an example before about, you know, a junior secondary geography class learning about the rock cycle. Now for a lot of students, that’s not that fun, or doesn’t sound that fun.

Leigh Williams [00:47:24]:
Sorry, geologists. But we can make it engaging and fun, and suit the learning of the student. What we were seeing is that students were then going and learning even more, about rock formations and plate tectonics and all these other things that weren't even part of what the teacher had put forward, because they wanted to learn more and were interested in it. When you see that, and see the depth of the learning they're engaging in with the agency to even run it themselves, it's very powerful.

Karissa Breen [00:47:54]:
That's interesting, because now that the schools have presented something in a way that people are more interested in, it's actually furthered their engagement, and they're going down that rabbit hole to learn even more. On the other side of students being really engaged is perhaps people's concern around cheating, or leveraging AI to answer things. I know I'm a millennial, but with Gen Z there's a lot of content online of people now getting their first job, and in the interview they've got AI or ChatGPT there, and it's responding to the questions.

Leigh Williams [00:48:26]:
Yep.

Karissa Breen [00:48:26]:
So are there concerns around that from your perspective?

Leigh Williams [00:48:29]:
Yeah. We had initial concerns around that. The way we tackled that was on a number of fronts. Firstly, open and direct collaboration and dialogue: bring to us your concerns, the things we can go and test and run pilots on, and determine whether these are real fears or actually just myths that we can bust. In the case of cheating, we actually busted that myth pretty quickly. When we rolled out Copilot with senior secondary students, we found straight away they were using it for tutoring, because they weren't interested in not knowing the concept. In fact, the feedback we got from students is they said, I don't want AI to hold all the knowledge, because if I walk away from AI, I don't have the knowledge.

Leigh Williams [00:49:19]:
AI does. So how do I use AI to help me, so that no matter where I am, I will always have that knowledge or that skill I've learned, and be able to take it with me into my career or my personal life? That was really powerful to hear coming from students. And when we looked at what they were using AI for, they were using it more like a tutor, asking it more in-depth questions. They were asking it to give them feedback on drafts or to look at different concepts. Or if they were writing, say, a persuasive essay, they would say to ChatGPT or to any generic copilot, here's my version of the argument; can you generate a counterargument for me? And just that they would even think to do that. It would generate a counterargument, which in turn would help make their argument better, because they would go, oh, hang on.

Leigh Williams [00:50:15]:
If that's a counter, I need to make sure I've addressed that in my own essay when I'm writing it. So there was a whole heap of examples like that in the way they were using Copilot and AI to help their own learning and their own essay writing. The other main area we worked on was with teachers themselves, saying, well, if a student can cheat by using AI, maybe it's the assessment we need to look at in the first place. So we need to reconceptualize what assessment looks like, so it isn't about who can write the best essay, but about what is the concept, the knowledge, or the skill that you actually want the student to walk away with and take with them for potentially the rest of their life. Once you get to the core of what it is you're actually trying to assess, then you can ask, well, what are all the different ways we could assess that, beyond just writing an essay? And it really helped change the mindset of a lot of teachers to go, oh, actually, I don't need to set it as an essay. An essay was more a way to scale out an assessment and make it easier to mark. But now that AI can help me with marking and with assessment, that's not a concern for me anymore.

Leigh Williams [00:51:31]:
So, yeah, I can be more creative with the type of assessment that I’m actually giving to my students.

Karissa Breen [00:51:37]:
So one of the things that came to mind while you were speaking, going back to the maths example: when you would do a maths test, you'd have a calculator. So it's like two plus two is four. Sure. But you still had to show your working out, which I would liken to critical thinking.

Leigh Williams [00:51:50]:
Yes.

Karissa Breen [00:51:50]:
So people could say, well, people are cheating in maths exams because they've got a calculator.

Leigh Williams [00:51:55]:
Yes.

Karissa Breen [00:51:56]:
Now Copilot or AI is a new, high-tech version of a calculator, but the thinking is still there in terms of how you got to that answer. People still need to understand how they got there, how, to your point, the data makes sense, and how they can discern whether that is the right answer or not. So people get caught up in the idea that it removes critical thinking, when it's actually still there. Even to ask a specific prompt, the critical thinking still needs

Helen Schneider [00:52:23]:
to be there.

Leigh Williams [00:52:23]:
Yes. Correct. And prompt crafting is such a great example, because it's now so prevalent that it takes a very sophisticated prompt to develop a good, sophisticated answer. Just the development of that prompt shows critical thinking and higher order thinking skills from that student: being able to put the prompt in a meaningful, palatable way that a large language model can actually understand, and not generate some nonsense answer or hallucinations.

Helen Schneider [00:52:55]:
Yeah.

Leigh Williams [00:52:55]:
So just the generation of the prompt itself can actually show critical thinking and analysis by the student.

Karissa Breen [00:53:01]:
So zooming out, we’ve obviously seen teachers and then students. What about, like, the curriculum?

Leigh Williams [00:53:06]:
Yes. Yep. So this is, to me, what's exciting, because there are so many different ways you can implement a curriculum. Yes, we have an Australian curriculum that we all utilize, but it's always up to the teacher how that curriculum is actually taught. And that's where the power of AI really comes in, because it gives so many more avenues of how, and it doesn't all have to be the same way. One student handed in their assessment, showing their ability to do a particular skill, by generating a PowerPoint presentation; the next student did it with an oral presentation; and the next student built a 3D model. They could all be demonstrating the same skill and concept, just delivering it in completely different ways. And AI can still ingest all of those assessments, still assess them and say, yep, that child actually does have that understanding or that level of expertise or mastery over a concept, even if they have handed in what looks like three different assessments.

Karissa Breen [00:54:11]:
So would you say there's still a little bit of learning to do on how to map out the curriculum, what this looks like, to your point? Like, do we need an essay? Would you say that's still evolving in terms of the education sector and how to leverage AI

Leigh Williams [00:54:24]:
Yeah.

Karissa Breen [00:54:24]:
To make sure that critical thinking is still there? That people aren't necessarily cheating when they're leveraging AI.

Leigh Williams [00:54:29]:
Yep. Yep. So there are probably a couple of things. Firstly, perception: getting past that barrier or hurdle of what are my own biases that I'm bringing to this, what are my thoughts around AI, and what's actually real, what's fact and what's fiction. So how do we help change perception around what's possible? Then from a teaching perspective, how do we motivate, entice, and engage educators into the space to even start thinking about it and running pilots themselves in classes, to say, well, let's try using Gen AI to do this. Or if we're gonna do critical thinking in the classroom, like the example I used before on persuasive arguments: okay, I want you to get up and give a debate, but guess what? You're gonna debate against AI. And here's the context of the argument. You're on the for side.

Leigh Williams [00:55:24]:
AI is gonna be on the against side. Let's go. And it still has all the critical thinking elements there. That student could take that away and practice it at home. And that would be tutoring. That's not cheating, because they're actually just getting AI to help better the argument they're trying to deliver. So to me, there are lots of ways we can still bring educators into this journey and into this fold. Are we a hundred percent there? No.

Leigh Williams [00:55:51]:
Of course not. And it's still gonna take some time. Like any implementation of new technology or new inventions, you've got the innovators leading the way. You've got the pragmatists who wanna see the data, and so, yes, we've been releasing as much data as we can on our outcomes. And then you're always gonna have the laggards. What we're trying to do is have the smallest group of laggards possible, and get them over the fence as quickly as possible, so that we've created a movement. And once we create that movement, it's almost like you know you're gonna be left behind if you don't get on that bus with us.

Leigh Williams [00:56:27]:
So that's what we're working towards: what's the movement we can create around this?

Karissa Breen [00:56:32]:
So do you think the laggard group of people is diminishing?

Leigh Williams [00:56:36]:
Absolutely. Absolutely. Even from six months ago to now, the questions I was getting from principals or teachers directly compared to the questions I'm getting now are chalk and cheese. Six months ago there was some fear around it, lots of questions about data, show me the evidence. Now the questions are, can I use it for this? Can I use it for this? Can I use it for this? They're now pushing the boundaries. And as technology experts, we're going, these are great questions. How do I set up a safe and secure sandpit for you, or put up some safe guardrails, so that I can just let you run and innovate with this? So it really has flipped from where it was even just six months ago, just going by the types of questions I'm getting asked.

Karissa Breen [00:57:24]:
Sadly, we are running out of time. However, I do wanna flip to one last question

Leigh Williams [00:57:30]:
Yeah.

Karissa Breen [00:57:30]:
For you, which would be around the security side of it. So this is obviously top of mind. I know it's Microsoft's number one priority, and this is a cybersecurity podcast. So I'm really curious what comes to mind on the security side of things, especially for the education sector?

Leigh Williams [00:57:47]:
Yes. And look, I will preface this by saying that we didn't get here by happenstance. We're at the end now of a three-year cybersecurity uplift. Three years ago, like a lot of organizations, we faced a lot more cyber threats coming in and becoming more prevalent across our markets, and on top of our cyber insurance renewals, we needed to do something more with cyber. So we're just at the end now of a huge cyber uplift program across the board, regardless of AI, just everything across the board for us. So we've got a much better cyber posture. That's coupled with us building our data governance and our data frameworks over the last two years as well, and how our data interacts with other pieces of data; we've been doing that for a couple of years.

Leigh Williams [00:58:35]:
So we are in a very good position now that we were not in, I would say, three years ago, and that has helped us leverage AI into the future. In terms of security specifically for us now, there are two elements, because there is still the security we need to put around our data, because we are talking about children. When you're talking about medical data, court orders, and things like that, those have to be held absolutely secure. And they are still held absolutely secure and encrypted across all of our platforms, so that a general AI prompt will not surface them. We've run significant tests and a lot of penetration tests around that. Then for everything else, we're saying, what are the guardrails? Where do we want AI to search, and where do we not want it to search? So let's just start putting up guardrails. And I talk about it as guardrails, not barriers, because we say we still want you to go and innovate, but here are the lanes you need to play in.
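
As an illustrative aside, the "guardrails, not barriers" idea can be pictured as a simple allow-list that decides which data an AI assistant is permitted to index and search, while sensitive categories such as medical records and court orders stay out of its reach. This is a minimal sketch under assumed labels and source names, not the actual controls Leigh describes.

```python
# Illustrative sketch only: decide up front which data an AI assistant may search,
# and keep highly sensitive categories out of its reach entirely.

SENSITIVE_LABELS = {"medical", "court-order", "child-protection"}       # hypothetical labels
SEARCHABLE_SOURCES = {"curriculum", "lesson-plans", "public-policies"}  # hypothetical sources

def may_index(document: dict) -> bool:
    """Return True only if a document is safe to expose to general AI search."""
    labels = set(document.get("labels", []))
    if labels & SENSITIVE_LABELS:  # never index sensitive records
        return False
    return document.get("source") in SEARCHABLE_SOURCES

docs = [
    {"id": 1, "source": "lesson-plans", "labels": []},
    {"id": 2, "source": "student-records", "labels": ["medical"]},
]
indexable = [d for d in docs if may_index(d)]
print([d["id"] for d in indexable])  # -> [1]
```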

Leigh Williams [00:59:40]:
The last side, to me, is the human element. At the end of the day, we still very much say to any of our staff: if AI has generated, say, an email for you to send, or whatever it is, you still hit send on that email. It is still your email. You still have ownership over what you do with it. If AI generates a lesson for you, you're still the one who goes and teaches that lesson. So it is still your lesson, your accountability. And I've even heard Microsoft themselves talk about how AI will get you about 80% of the way there.

Leigh Williams [01:00:14]:
True. The other 20% is the human element, and even the relational element, because teachers know their kids best. And they still wanna put their flavor on things, which is great. They still know that, yep, I was gonna teach that lesson, and AI, that's a great lesson. But I know I've got a child walking into my classroom whose parents got divorced last night, and this kid is emotionally unsettled and emotionally unstable. I can't teach this lesson that's been given to me, no matter how good it is. I am going to work with this child where they are at socially and emotionally first, so that they are safe and secure, and then I can get on with my teaching.

Leigh Williams [01:00:55]:
That would be very hard for AI to ever replicate.

Helen Schneider [01:00:58]:
True.

Leigh Williams [01:00:59]:
So, to me, security comes at the foundational level, putting in those guardrails. But over the top, we've still got humans there making sure that everything's okay and that what is delivered is still in the best interest of the child.

Karissa Breen [01:01:13]:
And just to close out, do you have any final thoughts or closing comments you'd like to leave our audience with today?

Leigh Williams [01:01:19]:
I would like to, I guess, challenge people. I get asked a lot, well, why go with this? Why are you doing this? Why are you doing that? And my challenge back is always, well, why not? So that's probably what I'd leave people with: flip your thinking and ask, why do it? Well, why not? And what's the risk of actually not doing this? That can be just as good a use case as the reasons why you would do it.

Karissa Breen [01:01:46]:
And there you have it. This is KB On The Go. Stay tuned for more.
