The Voice of Cyber®

KBKAST
Episode 291 Deep Dive: Kobi Leins and Kate Carruthers | How AI Brings A New Set Of Risks To The Threat Landscape
First Aired: January 31, 2025

In this episode, we sit down with Kobi Leins and Kate Carruthers, directors from Info Sphere Education, as they delve into the intertwining issues of artificial intelligence and cybersecurity. Kobi discusses how AI can expedite security breaches and the need for cybersecurity professionals to understand and mitigate AI-induced vulnerabilities. Kate expands on this by highlighting the utilization of generative AI by attackers and the importance of data and AI governance within organizations. They both explore the challenges companies face in managing these technologies, emphasizing the necessity of upskilling and proper communication between AI and cybersecurity professionals.

Kate Carruthers is an experienced data and technology leader who has expertise in analytics, AI, data management, data governance and AI governance. She is a passionate educator who loves sharing her knowledge and helping people to develop their own AI and data expertise.

Kobi Leins is a reformed lawyer, an academic in tech and law, and a technical expert for Standards Australia. She loves to teach, learn and be challenged at the edges of tech and governance.

The Essential Eight

The mitigation strategies that constitute the Essential Eight are:

  • patch applications

  • patch operating systems

  • multi-factor authentication

  • restrict administrative privileges

  • application control

  • restrict Microsoft Office macros

  • user application hardening

  • regular backups.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Kate Carruthers [00:00:00]:
Is what we’re doing with AI gonna make sense in the context of our business, our product, our customers? And if it does, we go forward with it. If it doesn’t, we don’t do it. And, you know, I think everybody’s still in the hype stage of this and hasn’t sort of started to think about this rationally from a proper business perspective because the fundamentals of business remain the same, and you need to add new technology to your products and services in relevant, meaningful, contextually sensible ways.

Karissa Breen [00:00:51]:
Joining me now is director Kobi Leins and Kate Carruthers, director as well from Infosphere Education. And today, we’re discussing how AI brings a new set of risks to the threat landscape. So ladies, thank you for joining and welcome.

Kobi Leins [00:01:05]:
Thank you so much for having us.

Kate Carruthers [00:01:07]:
Nice to be here.

Karissa Breen [00:01:08]:
Okay. So I wanna get a bit of a lay of the land in terms of AI. Now, as you both know, it depends on who you speak to, depends what media outlet you're reading. Everyone's got a bit of a view. But given both of your backgrounds, I think it's quite interesting to hear your side and what you're seeing. So perhaps, Kobi, I'm gonna go to you first. Tell us what comes up in your mind when I ask you that question.

Kobi Leins [00:01:33]:
So it’s a funny reaction I have when people say, do you do cyber? And I say, no. I don’t do cyber. And then I talk to my cyber friends and say, can you help me out on this AI question? And the reality is that they’re desperately intertwined, and the skill sets are both required. But what I think of, to answer your question more directly, is how AI can be used to expedite security breaches, not so much, you know, the recent disclosure of ChatGPT being used to blow up a Tesla truck that’s could have been done with Google, but with some kind of search, but really looking at how these tools can be used in ways to find weaknesses that at speed and scale that you couldn’t do previously. But, Kate, I’m really curious to hear your view because I know it’s slightly different.

Kate Carruthers [00:02:12]:
Well, the thing is that every good thing that we can do with AI, the bad folks can do with AI too. So it's like a continual arms race. So every time we ratchet up our capability using any kind of other technology, they ratchet theirs up too. And so they've improved their attack skills using AI. So a lot of attackers are using generative AI now to launch attacks at scale. One of the big things is a lot of the people who are building AI aren't thinking of it from a security lens and aren't really talking to their cyber folks, and their cyber folks haven't thought about the new threats that are emerging with AI.

Karissa Breen [00:02:55]:
So can I ask a bit of a rudimentary question? You said before, Kate, that, you know, people or security folks aren't looking at it from a security lens. What lens would you say they're looking through? Would you say, oh, we can do things now faster, cheaper, etcetera? What lens are people sort of gravitating towards, would you say?

Kate Carruthers [00:03:12]:
Well, if you take your average cybersecurity professional, they've already got a pretty full dance card. They are so busy defending against external threats that they are not looking back inside their organisations a lot of the time to look at things like data and AI governance and look at hardening the protections, the traditional, you know, confidentiality, integrity, availability sort of protections that are part of information security, and that is the thing that people really need to start thinking about. So AI systems are vulnerable to some unique kinds of poisoning, from the underlying data to the models, and we need to work out how to defend against that kind of attack. And if the cyber folks aren't really thinking about it, then you don't have all your resources starting to turn their minds to how you defend against these kinds of attacks.

Karissa Breen [00:04:07]:
So what would you say? I mean, there are so many things going through both of your minds, so I wanna make sure I'm not going too off track. But when you're speaking even, Kobi, to your point at the start around, you know, you're asking cyber questions and then they're asking you AI questions, what do you think is the main reservation or concern cybersecurity practitioners have, like, generally speaking?

Kobi Leins [00:04:28]:
I think Kate’s sort of tapped into the same things that I’ve seen. So in the corporate world, the initial grasp, given the overwhelm and the speed and of which things are changing in cyber is how can AI be used to find weaknesses in our own systems? And it’s great for that. So looking at anomalies, if you’re a financial institution looking at outlying patterns, some of which has been done by a lot of these industries for years, health insurance as well, if you’re looking at claims or you’re looking at health outcomes, there are some really interesting things that you can do with AI. But what cyber doesn’t tend to do, or at least I think they’re starting to now is probably a better way to put it, is to think about what does it actually look like. So even in Copilot trainings, for example, one of the conversations we had with everyone who had the trial was just be really wary that if you leave your laptop up and you’ve got, Copilot on it, it’s much easier to rip material from a security perspective. So really basic things. But to Kate’s point, you can use all of these tools. There is a fragility to some of these tools, and there are weaknesses that I think cyber is gonna need to upskill to understand.

Karissa Breen [00:05:33]:
So when you say upskill, do you mean, like, training? Do you mean, I hate to say awareness, or, like, what do you mean by that?

Kobi Leins [00:05:41]:
Not the awareness word. No. There’s gonna need to be upskilling. There’s gonna need to be communication between people who know the systems well, who know security well. I don’t think of myself as a security expert, so it’s only in conversation with security experts that I have become more aware of these vulnerabilities or from very widespread reading. I don’t know. Kate, do you have a similar view? Or

Kate Carruthers [00:06:01]:
So the average cybersecurity person is using ChatGPT, is using something, some kind of AI, in their day to day life as a user, but they're not often taking a systematic approach to thinking about the threats that these kinds of technologies give rise to. For example, if you're the typical cyber person, you're busy defending against all these threats that you've already got on your plate. Are you thinking about something like defensive distillation? Are you thinking about a technique that involves training a model with softened output probabilities that helps to reduce its sensitivity to small perturbations in input data, for example? These are the kind of really subtle things that can break models, and we need to be thinking about how we defend against them.
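To make the technique Kate mentions a little more concrete: defensive distillation trains a "student" model against temperature-softened output probabilities from a "teacher", which reduces the model's sensitivity to small perturbations in input data. Below is a minimal sketch of that idea, assuming PyTorch-style teacher and student models; the names and training step are illustrative, not anything prescribed in the episode.

import torch
import torch.nn.functional as F

TEMPERATURE = 20.0  # a high temperature flattens ("softens") the output probabilities

def distillation_step(teacher, student, optimizer, inputs):
    # Soft labels: the teacher's logits divided by T before the softmax
    with torch.no_grad():
        soft_targets = F.softmax(teacher(inputs) / TEMPERATURE, dim=1)
    # Train the student to match those softened probabilities
    student_log_probs = F.log_softmax(student(inputs) / TEMPERATURE, dim=1)
    loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The softened targets produce less sharply peaked outputs, which is why a distilled model tends to react less to tiny, adversarially crafted changes in its input.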

Karissa Breen [00:06:52]:
Okay. So I’ll switch gears perhaps and just stay with you for a moment, Kate, on this sort of same talk track around we’ve sort of discussed at a high level, like, you know, new risks. Even at the start of the conversation, I’ve introduced it as new sort of risk within the landscape. What are they? And there’s definitely has to be risks that we all, as a as a collective industry, probably don’t know about yet because it’s still relatively in the grand scheme of the world early days. So how does that start to unfold? And then to Kobi’s point earlier, is it that we’re just gonna read about these sort of risks that are emerging, or how does that look? How do you see it?

Kate Carruthers [00:07:29]:
Well, we don’t have to let it unfold in front of us. We can just think about it a bit and think about the nature of the AI models that we’re working with. So for example, you know, one of the first things that that attackers will try to do is poison the underlying data set that drives the AI models. So if we harden our defenses to protect those data sets, we can, be comfortable that there’s minimal risk of people actually getting into them and poisoning that data source. So what it means is we need to start to think about observability of data, at every point in its flow. So we need to stop thinking about data as a static thing that just lives in a database and understand that it’s now flowing through our systems. And we need observability of that data and any transformations that are happening to that data throughout its life cycle. And so that we can make sure that only only correct and permitted transformations have occurred so that we can be comfortable that the right data is providing input to the AI model.

Kate Carruthers [00:08:30]:
So that’s just the underlying data. And then there is the models because people can do things like insert bad prompts into the models to put data into the models that will close in the models. So there’s different ways that we can start to think about it before these bad things happen. Yes, we are going to find out about things, interesting things that people will do to us, but if we think about it a bit, we can start to do some preventative actions right away.

Karissa Breen [00:08:57]:
And, Kobi, do you have any sort of comments on what Kate was saying?

Kobi Leins [00:09:00]:
Yeah. I’m I’m also thinking about it from a slightly different angle, which is just the yes, and is probably the best way in addition to all those data points. And you touched on this right at the end, Kate, the systems themselves. So with a legal background, I see a lot of lawyers and there have been movements to to make it sort of, you know, if datasets use data that shouldn’t have been used, you remove the data and continue on your merry way with an AI system. When in fact that’s not how the systems work. And figuring out how to corrupt and prompt inject and do harm through the systems themselves, in addition to all of the things that Kate’s listed, is another thing that I think is poorly understood. And just the brittleness of these systems is perhaps the the challenge to get across and to make sure, as Kate said, that they’re hardened and that you think about redundancies if they do go down. And, I mean, there’s a whole sway of other issues that come along with the the more security focus issues that I don’t deal with at the heart of what I do, but definitely circle around the edges.

Karissa Breen [00:09:53]:
Okay. So as you were speaking, Kate, around, you know, there are things that we can do to prevent this, etcetera. But don't you think that's for more advanced companies? Now I caveat that with saying, because, you know, I've done so many interviews with executives, VPs, whoever, you name it, people saying companies out there can't even do the basic stuff. Basic stuff meaning, like, you know, even MFA. Like, that's the advanced level for some organizations. That's not just me having a go. That's actually me observing what's out there. And then, you know, I've just recently been at Cisco Live.

Karissa Breen [00:10:24]:
They told me about observability. I recently came back from NetApp Insight in Vegas, where they spoke about data poisoning, for example. So the average company probably isn't at the point, this is my assumption, that they can even maybe take that initiative, because they're not doing, like, what we call in security the basic stuff. So how does that conundrum work then?

Kate Carruthers [00:10:47]:
Yes. The problem of the Essential Eight, I always call it in my head. You know, we've got so many organizations that need help in that space. One of the things that Kobi and I often talk about is getting the boards to understand the risk of doing AI. So there are benefits, but there are risks, and weighing up the risks and the benefits to make sure that the organisation is taking the risk in a sensible and structured way and not letting the organisation run away and do stuff that could get out of their control really easily.

Karissa Breen [00:11:18]:
So focusing then on the boards. Now from my understanding of speaking to people like yourselves in the industry and on the show, there is a bit of an emergence towards board members having a tech background, as historically there wasn't, which is probably why people didn't value it, etcetera. There wasn't enough money for security funding. So what is sort of the general consensus that you're hearing from boards? Like, what are they sort of thinking when it comes to AI?

Kobi Leins [00:11:43]:
I think they’re thinking we should be doing it, but we don’t know how. So in addition to the low levels that you’ve described in relation to cyber, I don’t think we’re in a hugely different state in relation to AI. In fact, I think in some ways, we’re treading the path where cyber has already been. So, really, what you’re finding is that boards that a year ago enthusiastically were flying out to Silicon Valley and doing all manner of courses and trainings and going on trips and coming back and implementing things have now had a series of proof concepts where they found that these systems are a little harder to wield than they thought. They’re more expensive than they thought. And they’re really looking for value, but I think they’re also probably the best, most receptive to thinking about risks, as Kate’s already highlighted, because they have joined several risks as board members. So there’s a real interest to turn to, not just the skill sets, but to also use advisory boards, to use, you know, people like us to train, uplift, help with policies. But it’s more than just I mean, to your point, I love that nod to, like, what is even awareness? You know, you can have a policy.

Kobi Leins [00:12:45]:
You can have some kind of conversation, but what you really need are iterative processes that are uplifting in the business all the time and connecting those dots you've already highlighted. So cyber and AI people need to talk to each other, but so do many others within an organization, if it's a larger organization. And if it's a smaller organization, they're probably gonna need to outsource to others who have those skill sets, because they're just not gonna be able to have them on the board or in the company.

Kate Carruthers [00:13:05]:
And one of the big risks for these smaller organizations, so the ones that can't roll their own, is they're gonna outsource it to agencies, and these agencies will quite often not be skilled in data and privacy practice and cybersecurity practice when they're building AI. So they'll be building really nice looking AI that will have no controls around it. And these organizations will need help in understanding the risk that they're undertaking in contracting with those organizations with those fancy looking apps that may not be as secure as they ought to be.

Karissa Breen [00:13:42]:
So then just on that comment around outsourcing. So companies, because it's sort of a new thing, let's go with that, companies don't really know what good looks like. Now if I wanna renovate a house, I have no idea who I should be going to, what a good painter should do, what accolades they should have. So what would you both say for companies that are looking to outsource some of their questions, implementation, whatever it is, to these businesses that, you know, say they do AI advisory or whatever it is? Because, you know, as you all know, as soon as something new comes in, everyone claims they do it. So how do people out there discern who's good? Is it the pedigree? Is it number of years? Is it accolades, some award someone won 10 years ago? What does that look like?

Kobi Leins [00:14:28]:
I really wish that LinkedIn had a function that showed how long you've had a particular title, because I reckon in the last 3 months I've seen more people change their titles to AI expert than I've seen over the previous decade I've been in this space. So I think, to your point, it's a really challenging space for experts to play in, or, the reverse of that, for boards to find the right people. And I think the short answer is those of us who've been doing it for a long time know each other. A lot of us have worked in or around standards or been in roles where we've done this work. And I think just verifying who you're working with before you hand over your cold hard cash is a really good idea. The space will change a little bit once the AI standards land; there's another standard that's coming out under the main parent standard, which will require auditors to have certifications. And that will, I think, hopefully, quell this a little bit. But in the meantime, it's a free-for-all at the moment, and, yeah, everyone's putting their hand up and saying they're an AI expert right now.

Kate Carruthers [00:15:25]:
The thing that I would say is, you know, if you're working with one of the big cloud vendors, then reach out to them and ask them who they'd recommend amongst their gold partners, because they won't recommend somebody who doesn't have expertise. So that's a good way to mitigate your risk in that area, and try to get them to show you examples and actually talk to the clients that they've delivered them for. So, you know, don't just take their word for it that they did a great job. Actually talk to the people they delivered to and see how they feel about it now. It's often very enlightening.

Karissa Breen [00:16:02]:
So I wanna talk about AI standards. Now this is another thing that I've often heard of, or responsible AI. I mean, there's a lot of this stuff going around. I mean, I think the UN is trying to

Kobi Leins [00:16:13]:
You make it sound like a virus, KB.

Karissa Breen [00:16:15]:
Oh, I just feel like I hear the same thing no matter where I go in the world now. So it does. It's a virus. It follows me. So, I mean, look, like, what's your view? Like, how do you define a standard? And then I wanna get into the responsible AI thing, like, to the point earlier about the whole Cybertruck thing, I mean, that's a whole other thing again. But how would you define a standard? And then I wanna talk about responsible AI.

Kobi Leins [00:16:38]:
So if I’m talking about standards, I’m talking about international standards. There are 2 there are many bodies that generate standards, and they govern everything from our PowerPoints to our cars and, you know, toasters and in between. So we’re moving away from which is sort of linked to your other concept, this idea of good AI AI for good or responsible AI, although those terms are still being being used to a specific set of parameters and instructions that are gonna be required. And I always say it’s really good to think of standards as best practice. That’s sort of the what you wanna be aspiring to do that’s the best way of managing your AI systems. Regulation is the bottom of the barrel. It’s the lowest spot that you can’t cross under that you you know, if you go below that, you’re gonna see litigation and and risk. But in between that, there are a whole range of risks.

Kobi Leins [00:17:21]:
So when I’m referring to a standard, I’m referring specifically to the ISO stand, the International Standard Organization standard, 42,001, which is the parent standard on AI management. And it’s quite an esoteric standard that really refers to processes you require in an organization, which is much of the work that we’re doing. And the boards are after, which is, you know, what does it actually look like? Other than having a policy, what else do you need to operationalize managing all of these systems in a way that makes sense? And is it, as I said, gonna be auditable in a documented way by experts who will have certain certifications? So that’s a very specific narrow definition of standards when I’m talking about it. And I think the reason it’s being talked about more is because that standard landed towards the end of 2023, and people are starting to realize that, you know, general practice is gonna have to change, that the the way that people are treating their systems, managing their systems is changing, and they’re starting to understand more and more of those risks that Kate was talking about earlier.

Kate Carruthers [00:18:12]:
Yeah. And the other thing is, I teach a course at the Australian Graduate School of Management, AI for Innovation, and we touch on responsible AI, and all the major cloud vendors have their own version of responsible AI. It was typically developed before the ISO standard existed, so they were ahead of the game and it sort of caught up. But they broadly align with the standard. So if you're on one of their websites and you wanna look at, you know, your cloud vendor's version of it, it's a pretty good place to start if you don't know where to start and you don't wanna go and actually fork out the cash for the standard.

Kobi Leins [00:18:51]:
Yeah. It’s worth mentioning too that when you say the standard, we’ve just had the Australians got an AI voluntary safety standard, and there’s also there are mandatory guardrails. So those standards that are coming out Australia are largely ripped off the ISO standard as well. So they’re all fairly similar in terms of the direction they’re pulling in. There is a bit of devil in the detail, but it’s not like they’re vast vastly differing, which is good.

Kate Carruthers [00:19:12]:
So my message to people out there that are trying to think what they should do is look at the Australian standards that have come out. Look at the ISO standard if you can afford it. And if you can't, then look at the other Australian stuff, because Australia is planning to regulate this space in some manner and form in the not too distant future anyway.

Karissa Breen [00:19:32]:
Okay. So there’s a couple of interesting things that I wanna get into. So, Kobi, you said before the operative word, auditable. So usually when people hear the word audit, risk, compliance, or governance, they start to panic. So how how do you audit AI? What does that look like now? As we’re sort of like you said before, I wanna get into the regulation piece. But before we do that, what is the an AI auditor gonna be doing here?

Kobi Leins [00:19:54]:
Well, they start to panic, but they also start to pay money, which is kind of where it gets interesting, because you need to have independent people coming in and reviewing how you're making decisions around those systems. So as we've already seen through an enormous and increasing number of cases and various failures, some more or less public, companies take on risk when they use these systems. The auditable part means that when you make a decision about these systems, you should be documenting it, including the various stakeholders who've reviewed it. That means that someone needs to be able to come in, much like you have financial audits. You're going to have external experts coming in and reviewing those decisions that have been made, which is going to provide an interesting source of tension from a regulatory perspective, in the sense that lawyers can be very reluctant to put things in writing because it can raise liability, because it shows their decision making process. But in the case of AI, I would argue that it's gonna be defensive. If you say that you've reviewed it to the best of your ability and you've got a documented trail of that, that's actually gonna serve a protective purpose for your company.
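One way to picture the documented, auditable trail Kobi describes is a simple, append-only record of each decision about an AI system, with the stakeholders who reviewed it and the risks considered, so an external reviewer can later reconstruct how it was made. The sketch below is hypothetical; the fields and example values are assumptions, not a format prescribed by ISO/IEC 42001 or anyone on the episode.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    system: str              # which AI system the decision concerns
    decision: str            # what was decided, e.g. approve a pilot with conditions
    rationale: str           # why, reviewed to the best of the reviewers' ability
    reviewers: list          # the stakeholders who reviewed it
    risks_considered: list   # e.g. data poisoning, prompt injection, bias
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail = []  # in practice, persisted somewhere append-only and tamper-evident

def record_decision(record):
    audit_trail.append(record)

record_decision(AIDecisionRecord(
    system="customer service chatbot",
    decision="approve pilot with human review of escalations",
    rationale="benefit to response times outweighs residual risk under pilot controls",
    reviewers=["CISO", "Head of Data Governance", "Legal"],
    risks_considered=["prompt injection", "training data poisoning", "privacy"],
))
print(json.dumps([asdict(r) for r in audit_trail], indent=2))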

Karissa Breen [00:20:51]:
Okay. But the thing that’s interesting is, like, the best of your ability. Isn’t that how people obfuscate getting around some of these things?

Kobi Leins [00:20:58]:
Can do. And I think the culture will change. I mean, standards are soft law, so there is already an uplift, and I saw that in my time in corporate; there was a slow shift towards requiring more of third parties, because in the end, whoever signs a contract takes on that risk. Right? So if you've got a contract with another party and they haven't got their house in order, their data practices, in addition to their AI, then you're taking on that risk within your company. So making sure that you've thought about that in ways that protect your company is really important. But even beyond the regulatory requirements, there's still the reputational risk and harm. And I think there's enormous ROI for companies who invest in these processes early, and that's why I'm doing this work. Because you can comply with all the letters of the law or obfuscate, to your point, but what really matters is, you know, what are your customers gonna think when they hear about what you've been doing? It's not about trying to escape.

Kobi Leins [00:21:50]:
It’s about trying to do the best that you can in the environment that there is right now with the knowledge that we have.

Kate Carruthers [00:21:55]:
And I wanted to just pick up on something Kobi just mentioned, you know, this third party risk where you've outsourced it and you think you've put it to bed because you've outsourced it. If you haven't asked that outsourcer the right questions, so if you haven't evaluated their cybersecurity, information security, AI governance, data governance controls as part of that contract, then no proper due diligence has been done on that contract. So a lot of organizations think just by outsourcing, they get the risk off their own backs, but you don't, because if that vendor fails to do its duty, they fail and your business fails with it too. So you need to make sure that they are doing the right thing. And a lot of the times when you're having these conversations with vendors, they're going, nobody else is asking me these questions. And I've had these conversations and I just go, well, I am, because I'm a responsible employee and I have a fiduciary duty to ask you these questions. And they don't like it. They don't want to be called on these things.

Karissa Breen [00:23:00]:
So going back to the reputational damage side of things, which is interesting. Would you say as well, and, I mean, I'm generalizing, but, obviously, with, like, Gen Z, we're starting to see a shift in how consumers are interacting with companies and brands. Like, people aren't as loyal as they used to be, compared to the baby boomer generation. So now, like, you know, maybe in earlier times, people would fob it off if a company made a mistake. But, like, now, to your point, if it comes out that someone's not doing responsible AI or whatever it is, people will start to back away from that organization. Is that sort of more a, you know, consumer shift with how people are, where they're spending their money, people aren't loyal anymore? Is that factoring in as well as a risk or concern for these boards?

Kate Carruthers [00:23:45]:
Well, it should be.

Kobi Leins [00:23:47]:
Yeah. Yeah. It 100% should be. And if it's not, one of the issues is that a lot of boards don't have younger representatives, to your point, KB. So it's not just about diversity in terms of gender and other perspectives, but boards really do need to have those kinds of voices to understand that appetites shift, and there's a CHOICE study on this that Kate Bower worked on, which showed that younger people do actually care quite a lot about what happens with their data, that, you know, if companies are doing creepy stuff, then they really don't like it. They don't like, you know, we all do the cull. A lot of us do the cull at Christmas, where everyone you've ever bought from all year wishes you a merry Christmas. No one wants that, and particularly the younger generation are really intolerant of that happening, I think.

Kobi Leins [00:24:27]:
But I think that there’s definitely it’s it’s something to lead with as a company rather than think you have to invest in it for no purpose or just to comply with regulations. It’s not that’s not the sole purpose you do it.

Kate Carruthers [00:24:37]:
The other thing is to think about, you know, there's research that shows people not wanting AI when it's shoehorned into products for no reason. So, you know, every product now comes with extra added fantastic AI, and it's just not relevant to the product. And consumers don't want it. So I think that really we should all sort of take a deep breath and go, is what we're doing with AI gonna make sense in the context of our business, our product, our customers? And if it does, we go forward with it. If it doesn't, we don't do it. And that seems to me to be, you know, I think everybody's still in the hype stage of this and hasn't sort of started to think about this rationally from a proper business perspective, because the fundamentals of business remain the same, and you need to add new technology to your products and services in relevant, meaningful, contextually sensible ways.

Karissa Breen [00:25:35]:
Yeah. That’s a really interesting perspective, actually. I’ve noticed that even creeping into, like, anything that’s so basic and rudimentary. It’s like,

Kobi Leins [00:25:41]:
yeah, I

Karissa Breen [00:25:41]:
don’t know if I really need this. So would you say or would you guess that there’s gonna be a reduction then on people adding these features and functions and then removing it because no one uses it or they’ve got bad backlash or we start to see that come through as the next wave of this AI train?

Kate Carruthers [00:25:56]:
I’ve been joking for a while that we’ve just gone over the, you know, the the falls of the Gartner hype cycles. You know? So we’re at the peak last year, and we’re gonna go down. And and I reckon in the next 2 to 3 years is where we will start to really learn how to use AI for sensible business functions instead of just jamming AI into everything because it’s trendy. You know, it’s to say it’s been the same with many other technologies in the past where everyone got hysterical, and I think everybody needs to just take a deep breath and stop being hysterical a bit.

Karissa Breen [00:26:31]:
Kobi, did you have anything else to add to that?

Kobi Leins [00:26:33]:
Or I’m just quietly chuckling. I’m just reflecting on having run a train I have case studies for for the trainings we do. And one of the case studies is, you know, your customers are deeply dissatisfied, and there’s this chatbot option. And no one picked up on the fact that customers were dissatisfied. And if you wanna annoy a customer, one of the best things you can do is put a chatbot in front of a dissatisfied customer. And getting people to understand Kate’s point that you’ve actually gotta look at your business problem. If if you’ve got, you know, general problems in the business, layering tech over the top may mask it momentarily. It may also exacerbate it.

Kobi Leins [00:27:03]:
So it’s always always going back to what’s your business problem and which tool is the best tool to use. Because what we haven’t mentioned, and I will not give any talk without mentioning it, is that these systems, in addition to being perhaps the costlier versions of some of the tech, are also environmentally very, very impactful. And so thinking about where to use them, how to use them, and where to use them, as Kate says, in ways that are gonna give you a strategic edge rather than just everyone using them in the same way is is really what I think we’re we’re gonna see happen over time.

Karissa Breen [00:27:31]:
Yeah. And a chatbot that doesn’t understand natural language processing as well. So it’s like, I can’t understand you. So we’re gonna revert back to the beginning. That’s even more frustrating.

Kobi Leins [00:27:40]:
Yeah. Exactly.

Kate Carruthers [00:27:41]:
Look. Chatbots are really hard to build, and that's why generative AI and agentic AI were both really interesting innovations. They promise being able to make chatbots that are really quite intuitive, and I've seen a couple that are pretty good, but they need to be backed up by sound business processes and an understanding. So there's a whole lot of thinking that a business needs to put behind a chatbot. So they need to have a really good understanding of their business processes, of bottlenecks, and understand what problem that chatbot and the process automation underlying it is going to solve. And if you don't have a clear vision for how you're gonna solve a problem with that, then it's probably gonna be wasted money.

Karissa Breen [00:28:25]:
So getting back to the regulation side of things. That's often what, and I don't know, like, when I'm speaking to people, and not just about AI, just about problems in general, what I often hear is typically they'll just say, oh, government needs to handle the problem. I often see that it's like, it's just the government's gotta manage all of the problems, but it's like we've got really capable people in industry that could solve these problems. So what do the eyes of the regulator look like now within this new AI world and this new paradigm?

Kobi Leins [00:28:58]:
I think there’s a real disconnect in Australia between industry and government. So in other places in the world, I’m most familiar with Germany because I play there a little bit, but there’s a lot more movement between academia, industry, and government, so that expertise moves in and out. And I think you’re a 100% right. It’s very dangerous to think government’s gonna solve all the problems because people at the coalface doing the work are, in fact, often, you know, the real expertise lies in industry. So improving those conversations and having the right people advising government is also key. Unfortunately, we’re we’re being led at the moment by a lot of other countries who are doing this work, and I’m looking forward to us being a little more forward thinking. I think the other issue that I that I really struggle with is this dichotomy that’s presented of regulation hindering innovation, which is a complete furphy. You can actually move much, much faster in an organization when you know what your strategy and vision and your tech strategy aligns with that vision and strategy is.

Kobi Leins [00:29:50]:
You can pull ahead much more quickly. You're not poking around in the dark doing these AI assessments with no criteria. You're really clear on what you're trying to achieve and not trying to achieve. And so even without regulation, companies can definitely ready themselves, including using those AI standards, the Australian ones to which we've referred, to really set themselves up for success for when regulation does come, basically. Government's not gonna save us.

Kate Carruthers [00:30:12]:
Also, Australia is typically a fairly light touch jurisdiction with technology and privacy regulations, so they don't have a very heavy hand like some other jurisdictions.

Karissa Breen [00:30:23]:
Yeah. These are very interesting observations and viewpoints. It's, to your point, Kobi, when I did this event last year, we were talking about, you know, collaboration between the private and public sectors, etcetera. So do you have any sort of advice on maybe what people can do? And I'll caveat that with saying that it's all well and good, but everyone says they're gonna do something, and then, you know, when the new year starts, everyone's forgotten about all these great, you know, grandiose promises that we made. So what would you say, or what is Germany doing that Australia can start to do, to have more movement between different industries?

Kobi Leins [00:30:59]:
Oh, there’s so much. Well, culturally, I think that movement has existed for a very long time. So it’s something that, again, how do you shift culture? That that’s a long term goal. What, what a lot of European countries do and Germany does as well is have free education. So there there’s a really solid focus on educating people, including in these areas. In some countries, Stoney and Finland have also got really good education in tech specifically, so people are more capable of asking the right questions. I think what Australia needs is a and this is universities have been trying to set this up, interdisciplinary research centers where you’re creating experts who can do this work. But short of that, going out into industry and getting experience and and feeding that back into policy is really, really important.

Kobi Leins [00:31:41]:
So having that cycle happen is important. How to make that happen here culturally, I don't know. The silos here are a lot stronger, but I think making sure that we listen to the actual experts, people like Kate, who've done this for years, and others who've been at the coalface, either doing AI impact assessments or working on standards, or have that expertise, which sort of goes to your question about how do you find the experts. It's people who've been doing it for a long time, not people who have a 3 week course from Harvard, which I heard about the other day. It's really looking at people who have experience and hands-on skills, like cyber was before the degrees existed, I think.

Karissa Breen [00:32:14]:
So do you think education’s ever gonna be in Australia? Now I asked that question because as you know, there’s all these people online saying, oh, we don’t have cyber people. We’ve got a deficit of this. And now there is there is stress coming from the government saying, well, you know, there are there are risks associated to national security, let alone cybersecurity, and, you know, they’re worried about critical infrastructure, etcetera. So are they just gonna say, okay. It’s free because we need people because now of the AI conversation of people being worried they’re not gonna have a job and all of that sort of conversation and chatter is emerging as you would see online. So what do you think about that?

Kate Carruthers [00:32:49]:
Well, every university in Australia is churning out cybersecurity graduates as fast as they can, and, you know, that is a very big business for them. They will be churning out AI graduates pretty quickly. They're churning out people with masters of data science and data analytics. So there are people coming through, but the big gap that there seems to be for me is actually how do we educate the people who are going to need to manage them? And looking at the cyber professionals, like, I've got a lot of colleagues from other areas who've done cyber training and are trying to break into cyber, and they can't get their first job. So there's a real unwillingness in Australia to give people a chance with their first job. Even with the credential, it's really hard to break in, so I think it's going to be interesting to see how that plays out for AI. Will AI just take anybody off the street just because they need warm bodies, or will it be like cyber, where somebody with a credential and, you know, 10 years experience in an IT role won't be able to get a gig because they don't have cyber experience?

Kobi Leins [00:34:00]:
there’s also confusion around what skills are needed. So I’m seeing companies now, some companies are starting to appoint chief AI officers, which is interesting. I was sort of off the back of Biden’s order that each government department needs to have a chief AI officer with a certain level of qualification. But I think companies, a lot of those ads are still requiring the same old skills that were needed for any kind of tech role. And even if you’ve got AI knowledge, what’s needed are soft skills, like diplomatic skills, working across execs, having, you know, being able to break through silos in corporate, being able to convince or help others to understand how, to your point, Katie, why would you invest this money if it’s, you know, why wouldn’t you just sort of muddle along and, you know, if it’s not really gonna cause you a regulatory problem, who cares? But that’s a really these roles are gonna be quite challenging, and I think will be people will increasingly realize they need a broader set of skills. And I’ve been talking to a couple of people over the summer, actually, who are recruiting and quietly looking that they are aware and are starting to look for those people. But for for those not in the know, I think it’s very easy to fall back in the into the usual patterns of getting the usual suspects, and that’s not they’re not gonna be the skill sets they need.

Karissa Breen [00:35:06]:
The other thing is, I was at Cisco Live last year. I interviewed their VP of, like, learning and education, and he was just sort of saying that there's gonna be a reduction in these traditional university and college courses of, like, 4 years or whatever it is, the standard 4 to 5 years, because things move so quickly. So we're gonna see more of these micro courses, etcetera. Would you agree with that sort of approach, or how do you sort of see education unfolding now?

Kate Carruthers [00:35:31]:
Look, education is a very big business, and their business is predicated on a 3 year degree for a bachelor and a 1 to 2 year degree for a master. So they're not going to change that anytime soon. But one of the things every university in Australia is exploring is how they do micro credentials that badge up to, you know, a degree. So for example, in one of the courses that I teach at AGSM, you know, you get a certain amount of points that you can put towards a graduate certificate, for example. So that's the kind of thing that most of the unis are looking at at the moment, but realistically every university is looking at how is AI going to change how we teach or what we teach. So there's this whole fundamental question of what nugget of knowledge do I need to pass on to the next generation, and how do they demonstrate to me that they know it. So every university is trying to work this out, and pretty much every university is using AI, trying to work out how to use it. They're trying to fly the plane while they're building the plane.

Kate Carruthers [00:36:37]:
So I really think that it's gonna be interesting over the next few years to see how that happens. Like, I've got a friend, Lynne Gribble, who's doing some amazing stuff using AI avatars in her business teaching, but they're still using very traditional methods of assessment. But, realistically, we can't do assessment as higher education practitioners the way we used to, because locking everyone in a room with pen and paper to do a closed book exam is actually really hard when you've got 60,000 or 100,000 students, so we can't go backwards. We're gonna have to go forwards. And how do you assess how people use things like generative AI? Like, what's a valid answer? What quantum of generative AI is a valid answer? So universities are trying to unpack this stuff right now, and I suspect some answers will start to emerge in the next 12 to 18 months.

Karissa Breen [00:37:35]:
So you said going forward. That probably leads me to sort of my last main question, which would be, where do we go from here? Like, there's a lot of different conversations, etcetera. I know it's a big sort of question to answer, but I'm just sort of curious, like, where does your mind sort of go when I ask you that?

Kate Carruthers [00:37:52]:
Well, for me, you know, I would just like to counsel all organizations, go do the Essential Eight. Like, they are the fundamentals. If you can do those, your data will be more protected than if you don't. There are some fundamental practices there that are really important. You know, things like getting applications whitelisted, and that is actually really hard for organizations to do. So getting that right will protect the data. The other thing is having robust, written down cybersecurity and data protection protocols for your organization. Like, how do I do stuff, and have it written down? Ethical oversight.

Kate Carruthers [00:38:33]:
How do you control for fairness and bias issues in your AI? How can you make sure that your AI is explainable? Or if it's a black box, how do you make sure that it has valid inputs and outputs? And then how do you educate your people to a baseline level where they understand what AI is and what it should and shouldn't be used for?

Karissa Breen [00:38:58]:
It’s gonna be interesting. I’m really excited to see what’s gonna happen. I think that in the consumer front, on the business side, how people are evolving in terms of their education, how businesses are moving and adopting AI. So I’m excited to see what’s gonna happen, especially in the next 12 months. So, Kobi, do you have any closing comments or final thoughts? And Kate, much the same?

Kobi Leins [00:39:18]:
It’s an interesting moment now when organizations from the AI side, they need to think about the what. So what is being reviewed and how they define that is really important, and then what the touch points are. When does that happen to Kate’s point? So what are the what are the triggers? Really concretely, what does that look like? And that feeds from ideally from a risk appetite around AI, which should be linked to cyber and the other the other areas as well. But, yeah, there’s a real need that we’re seeing for deeper conversations and connections across a lot of the existing pieces in the larger organizations and uplift in the smaller organizations. So, yeah, really looking forward to seeing some of that happening happen and helping to do it.

Kate Carruthers [00:39:53]:
Yep. The last thing that I would say is every organization needs to think about a data literacy program to get every single one of their people in their organization, from the board down to their frontline associates, up to a baseline understanding of AI. And the other side of that is how to protect data in your day to day work. What actions should everyone take to protect their data?
