The Voice of Cyber®

KBKAST
Episode 319 Deep Dive: Paul O’Rourke | The New Function Of A CRO And What This Actually Means
First Aired: June 27, 2025

In this episode, we sit down with Paul O’Rourke, Chief Risk Officer at Tabcorp, as he explores the evolving function of the Chief Risk Officer (CRO) and what it means for organizations today. Paul highlights the growing necessity for CROs to possess deep technology and cyber risk skills, emphasizing that these competencies are rapidly becoming non-negotiable in tech-reliant industries. He reflects on the historical divide between business and tech risk functions, the importance of alignment and integrated approaches such as fusion centers, and how risk professionals must now balance traditional domains with new challenges like cybercrime, AI, and rapidly emerging threats.

Paul O’Rourke commenced as Chief Risk Officer in June 2024.

Paul brings a great depth of experience in risk management, including with respect to cybersecurity and technology risk management.

Prior to joining Tabcorp, Paul was Managing Director and Partner of Boston Consulting Group where he led their Global Cyber and Digital Risk practice, and was also the Australian Risk Leader.

He was previously the Global and Asia Pacific Cybersecurity Leader at PwC, and was Chief Information Security Officer of ANZ Bank Limited.

Paul holds a Bachelor of Commerce (Economics) and is a Graduate Member of AICD.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Paul O’Rourke [00:00:00]:
Next generation CROs need to be much more adept at domains that are emerging. I think that the days of not having comprehensive technology skills will make it very hard for CROs to be as effective in their roles in the future as they need to be.

Karissa Breen [00:00:21]:
Joining me now is Paul O’Rourke, Chief Risk Officer from Tabcorp. And today we’re discussing the new function of a CRO and what this actually means. So, Paul, thanks for joining me and welcome.

Paul O’Rourke [00:00:44]:
Thanks Karissa. Good to join.

Karissa Breen [00:00:46]:
Okay, so Paul, when you and I spoke originally about doing the interview, we discussed, hey, like, what can we talk about? And one of the things that you brought up to me was the new function of the CRO and what that sort of looks like. So walk us through. How do you see that in your mind?

Paul O’Rourke [00:01:05]:
It’s a good place to start. So I think the traditional view of the chief risk officer is obviously managing the traditional domains of credit risk, market risk, operational risk, regulatory risk, et cetera. That doesn’t change. So all of those elements are still there. But there are a number of new and emerging elements that I think need to be part of the repertoire of what I’d loosely call a modern CRO. In particular I’d call out ESG. But the two that I think stand out the most are technology and cyber. I think the next generation of CROs need to be much more technology literate.

Paul O’Rourke [00:01:40]:
Traditionally, CROs have come much more from the credit risk domain. And yes, that’s absolutely still a core element of risk. But nearly every company today is both digital native and technology reliant. And to not have deep technology skills to guide, to challenge and to provide oversight across all areas of the business, I think, makes it very difficult for the next generation of CROs to be highly effective in their roles.

Karissa Breen [00:02:09]:
So would you say as well, and I’m hearing this a lot, even from the board level, that more people need to have a technology or technical background? Are you starting to see now that companies hiring people like yourself treat it as sort of a non-negotiable, that you need to have that tech background to be able to talk around, like you said, the tech side of it, but also the cyber component, et cetera? What are your thoughts then on that, with boards and executives looking to hire folks like yourself in these roles?

Paul O’Rourke [00:02:34]:
Well, I think it’s the expectations from boards. Because to the very point I made before, you know, in tech reliant organizations, to not have the person in charge of your risk domain highly literate, and ideally more than literate, well experienced in technology, makes it very difficult for some of the CROs to be as effective as they need to be in their roles. And yes, they need to bring those skills. But a CRO obviously needs to challenge and form opinions themselves around the risks they’re carrying, and that goes to the very nature of the risks that organisations carry around their technology estate. Nearly every organisation, be it a bank, where a lot of organizations are processing real time payments, be it airlines, whatever it is, they’re technology reliant organizations, even online merchants. Now without tech, you could argue no technology, no organization. And that probably translates across most organizations now: no tech, no operations. So it’s a core component of capability. If we look at the other one, which is cyber, for nearly every major organization cyber is in their top three enterprise risks.

Paul O’Rourke [00:03:42]:
If you look at the Australian marketplace, recent stats out show Australia is in the top five in the world for ransomware payments. Cyber’s not going away. It’s a major risk, it’s a financial risk, it’s an operational risk, and in many ways it’s an existential risk for some organisations when they have a significant cyber attack. Again, without commensurate skills in cyber and technology, I just think it makes it harder for CROs to be as effective as they need to be in their roles.

Karissa Breen [00:04:10]:
So I’m going to sort of take a step back for a moment and talk historically about the CRO function. What were the key sort of skills or characteristics of these individuals, and how are we now going to see, to your point, tech and cyber really harmonizing as the way forward? I’m keen to really understand the history, see how the CRO function is evolving now, and then what we can start to see on the horizon.

Paul O’Rourke [00:04:34]:
It’s a good point. Before we look forward, we need to look back at where we’ve come from. The traditional view of a risk function, particularly for those that are familiar with the three lines of defence, is that it’s a second line of defence. So it primarily does the oversight, it does the challenge to the first line function, which could be technology or it could be other functions. And it really focused on frameworks, policy, oversight, challenge, and looked at, as I referred to earlier, the traditional risk domains of credit risk, market risk, operational risk and in some cases technology risk as well. So that was the traditional view, and it tended to be a much smaller function. Again, in the three lines of defence analogy, the first line was usually anywhere between 5 and 10 times the size of the second line function. And the second line function was primarily there for challenge, oversight, guidance and reporting, and owned the relationship around risk with the board as well as with external stakeholders, particularly shareholders.

Karissa Breen [00:05:33]:
What’s coming to my mind as you’ve been speaking goes back to when I originally met you, when I was working in the bank. We used to have tech risk and then business risk. So in the function that I was doing in the security arena, I would go to these meetings and, you know, tech risk would have their view from a tech background, as we know, and then business risk would have theirs. One thing which is interesting now if I look back, and this is probably more than 10 plus years ago, is there was not a lot of alignment: tech risk would say X, business risk would say Y. And maybe you can answer this, Paul, but one of the things that was really interesting, to your point earlier, is that these business risk folks don’t have a tech background. So I don’t kind of blame them for maybe not quite understanding at the more nuanced level. So often we’d get into these arguments around, well, tech risk thinks this and business risk thinks that, and there wasn’t a lot of agreement on how we move forward as a unit together. Do you sort of envision business risk and tech risk being amalgamated into the same role? I mean, a lot of them were operating in different silos, they reported to very different people. But ultimately, as you know, the majority of businesses, or really every business, is underpinned by technology now. So how do you see that function evolving?

Paul O’Rourke [00:06:48]:
If we take the example you just covered, for those business risk people to be effective going forward, and it’s not just the CRO I’m talking about, risk domain professionals need to be technology literate. I don’t think it’s excusable in 2025 for business risk people to use arguments such as, you know, I’m not adept at technology, I don’t understand the technology risk side, because it’s very hard to form an opinion on business risk without really understanding technology risk. Again, as I said before, no technology, no operations. So it’s difficult for whatever function it is, could be operations, it could be markets, it could be credit, whatever it is in an organization. How are those business risk people going to perform at the level they need to without the right level of knowledge? And you raised an interesting issue as well, and that is alignment. I think that’s part of the whole thing of where the market needs to evolve.

Paul O’Rourke [00:07:43]:
And you know, part of the discussion we’re having today is what does a modern function look like? Because we can’t have these disparate, siloed functions. It just doesn’t work going forward. I’m not saying all of them will be integrated; they will still be separate to a degree. But there needs to be much greater alignment, much greater information sharing and much greater consensus on what the risk position is, rather than just relying on separate functions forming opinions and reporting up separately.

Karissa Breen [00:08:12]:
Okay, this is really interesting, so I want to get into it a little bit more. You said before, how are these people going to perform? So to your point, you’ve got, again, business risk and tech risk. Do you see that both of those functions will work in tandem more than perhaps historically, but still be underpinned by understanding the technology? Perhaps a business risk professional will have that lens towards the risk side of the business, but the tech risk folk will have more of a lens towards the technology stack, for example. So would you say that the backbone for both of those will still be technology moving forward, regardless of whether you’re business risk or tech risk?

Paul O’Rourke [00:08:52]:
I think you hit the nail on the head. The tech risk people will always be much deeper tech professionals, with a skill set that goes much deeper around the tech stack, so they’ll be focused there. The business risk people need to understand that a core element of their risk is technology. Again, it’s not just about taking the opinion from the tech risk people, which helps inform it; they need to have the skill set to challenge themselves around what risk they’re carrying in their business with the underlying technology infrastructure. Because every business function relies on technology, they just can’t rely on someone telling them, yeah, this is the technology answer for your business. They need to have the skills. And if I was recommending to someone in university today who wanted to work in risk what areas to focus on, I’d absolutely say a combination of risk and technology will set you up in the future for a much more expansive career in risk.

Paul O’Rourke [00:09:48]:
If you have both domains.

Karissa Breen [00:09:50]:
Yeah, okay. This is interesting because, again, going back to those examples, and I’m just giving examples so people can understand a little bit more, here’s my thinking. I was going into these meetings, and the business folk would sort of get on there and it just seemed completely left field to what everyone else in the meeting was saying. And then, like I said, it caused a lot of consternation. There’s a conundrum around, well, who’s right? Because for the business risk folk, you know, that sat under their function.

Karissa Breen [00:10:17]:
We started to lose sight of the actual goal of reducing the risk and protecting the business. Instead, we were looking at semantics and who said what and all of those sorts of things. So how do we start to get away from those entanglements in these businesses and actually focus on the vision to protect the business and reduce the risk? Because like you’ve been saying, a lot of these things are underpinned by perhaps not knowing about the technology, not necessarily to the nth degree, but having that fundamental understanding. So how do we move forward now, and how can companies start to modernize this function?

Paul O’Rourke [00:10:58]:
So there are a few things in that. I think the first element, what you were referring to, is a disconnect between business and technology. And so if we’re to evolve and mature these functions, we need much greater alignment. A core element here, and you talked about reducing the risk, is that I think the key problem most organizations face in the sort of conundrum you’re referring to is understanding and quantifying the risk before they even reduce it. You need to get alignment between, in this example, the technology and business functions on what the underlying risk is before we talk about treatments or reducing the risk. And I think that’s been the problem in the past, and you alluded to it: we end up almost with two opinions and then it goes up to various stakeholders above to get consensus and agree an opinion. What we need to do much more is have alignment across all areas of the organisation.

Paul O’Rourke [00:11:51]:
So as we’re looking at risks in business units, or looking higher at a division level, whatever it is, we have alignment between the technology and risk functions. And the way we do that, which is core to this, is I think the business areas have to have greater breadth of skills across the technology estate. But it’s not just on the business people. The technology people in the past, I think, have been divorced from business strategy; they’re looking purely at technology. So there needs to be a much greater appreciation from the technology people around what risks the business is facing. What’s the nature of the business? What is the growth strategy? What is the trajectory of that business unit or division, whatever it is? Because that helps inform the risks within the technology estate as well. It could be end of life risk, could be concentration risk, whatever it is. And I’d sum this up to say we need greater alignment, but we need greater training as well.

Paul O’Rourke [00:12:43]:
We need to have both sides of the equation working much better together. Now, we can either do that through silos, or, I think, a modern risk organization starts to integrate some of these functions, not all of them, but some like-minded functions, and actually extracts some savings and, more importantly, effectiveness through doing that as well.

Karissa Breen [00:13:01]:
Okay, you said something before, Paul: divorced from business strategy. One thing that’s come up a lot in my interviews, and as you would know, with cybersecurity people, sometimes, especially if you’re working in a large enterprise, you start to lose sight of how the business actually makes money. And to protect it, to understand the risk, like you mentioned before, you really need to know how it works. Would you say in your experience people are maybe divorced from reality around how their core enterprise makes money? Because if you don’t really know that, it’s kind of hard to protect it, right? So are you seeing that a lot, especially as you’ve got thousands and thousands of people, and maybe people don’t actually see the input that they’re making every day? It is easy to become divorced from the business side of how it works.

Paul O’Rourke [00:13:45]:
I think technology functions in particular have been divorced from the broader business strategy, both the strategy today and also the trajectory of where the organization is going. It could be new markets, new operations, it could be a merger and acquisition bringing in a new function or new division, whatever it is. A lot of the technology people see their role as obviously deep technology, which is core to their role. But you can’t be divorced from where the organization’s going, because the criticality of the technology going forward, as we’ve referred to before, is fundamental to the success of the organization. It’s absolutely critical. And this is on the management within the technology function: that they share that information, instill that information and reinforce that information with the technology people, with their teams, be it in house or third parties who work for them as well. So again, we’re coming back to the word alignment. We can’t just have this siloed view of the world and expect to have an aligned risk function going forward.

Karissa Breen [00:14:46]:
Okay, I want to touch on the divisional level. This is really interesting, so I want to walk you through my thinking and then I’m keen to hear your thoughts. In my experience, you go to these different divisions and you’ve obviously got, you know, people that roll up to an executive, whatever. One thing that I’ve experienced, especially with people that have been working in these roles for so long, when you get super high up: we’d present all the risks and then someone would just overturn them, like, well, I’m at this level, I need this thing to go live because I’m gonna hit my KPIs, or whatever the reasoning is, and they wouldn’t understand it and they’d just accept the risk. So to your point, how do we get greater alignment on that? Look, I get it, because someone’s like, well, I need this thing to go live. And so they’re just like, well, I don’t really understand it, but you know, I’m going to be questioned by my boss so I need to just get this thing going.

Karissa Breen [00:15:37]:
Now, from your perspective, how can we get people, and I understand that people can overturn these risks, to get that alignment across these different divisions? Because again, it’s a domino effect. So I’m really keen to hear: how do you get that cross-pollination between these divisions to understand that inherently, when you are accepting a risk, it’s a problem for the entire organization, not just your division?

Paul O’Rourke [00:16:01]:
So you raise a really good question and a really good issue there. And that is, are those people accepting the risks in the right position to accept them? I’d almost argue no in a lot of cases. In the absence of having the right skill set at the higher levels, a lot of times people are accepting risks without really having full knowledge and understanding of what risks they’re accepting. If we go back and look at the position we spoke about earlier around technology underpinning the organisation, if you don’t have a thorough understanding of the technology risks and how they translate, in some cases it’s hard to accept that risk. But it’s also incumbent upon the risk areas to actually do three things, and that is to provide those accepting the risk the right level of information, at the right level of detail, at the right time. And that doesn’t always happen. I think part of the problem is people write voluminous reports and put those up, which makes it very difficult for those accepting the risk to really understand it. One of the challenges for risk people is to get much more concise around the risks they’re reviewing and the risks they’re recommending. That makes it much more informative for those accepting the risks to actually make a decision.

Karissa Breen [00:17:13]:
Okay, this is interesting. So would you say that people are writing these, to your point, voluminous reports that make it harder for people to understand, and therefore the people that are signing off on these risks don’t understand them and just go, well, I’m going to accept it because it’s too hard to make sense of it? Is that more the problem, or would you think that people are just like, I want to accept it because I need this thing to go live, because, you know, I’m going to get in massive trouble if my project doesn’t go live?

Paul O’Rourke [00:17:44]:
I think the answer is probably somewhere in between; it’s probably a bit of both. But if we want to inform those that are making the decision so they really understand what risks they’re accepting, we need to get much more concise, we need to get much more informative, and we need to provide that information to them in a really timely manner. And if they want to accept the risk while the area below them is recommending the risk not be accepted, there needs to be an escalation path within the organisation, the right level of governance, and almost a culture of raising issues like this that allows people to actually raise risks. It’s happened many times in the past where functions have accepted the risk, whereas others below them have actually recommended the risk not be accepted, and it’s actually impacted that organisation. I think that reflects on the risk culture of an organisation that doesn’t allow an escalation path above those making the risk acceptance decision.

Karissa Breen [00:18:36]:
So one thing I’m curious to know is, when all these problems are rolling up to you in your function, how do you sort of mediate all of these issues? To make sure, okay, perhaps so-and-so wrote a convoluted report, therefore they don’t understand it, or to your point, it’s somewhere in the middle, perhaps it’s, okay, I do understand the frustration, we are 12 months behind on this project. How do you manage all these people’s expectations, the people that are relying on you to keep the business safe, but also make sure that they’re moving forward in terms of features and functions and all these sorts of things that are going on within a business? How do you handle that day to day?

Paul O’Rourke [00:19:14]:
Well, I think the role of a chief risk officer is ultimately to protect the organization, but it’s also to support the organization as well. So if it’s black or white, the decision is easy. If it’s a risk that can be accepted or approved and you can see the logic behind it, that’s easy. If it’s a risk that obviously can’t be accepted, it could be a regulatory violation, it could be breaking the law, anything like that, that’s an easy one. The complexity obviously comes in where we’ve got gray, and it’s in many ways the tightrope of supporting the organisation, growing the organisation, but also ensuring that you protect the organisation. I don’t think there’s ever a hard and fast answer for anything like that, but in many ways it comes back to two elements. One is experience, and the other is skill sets, and we’ve spoken about this a few times: ensuring that you have the skill sets across all the current risk domains and the emerging risk domains, could be cyber, could be crypto, whatever. So if you’re making the decision, you understand the implications and you’re making an informed decision at the right time.

Karissa Breen [00:20:20]:
So now, Paul, I sort of want to switch gears for a moment and talk about something you discussed with me: a fusion center. I want to understand, what’s a fusion center?

Paul O’Rourke [00:20:31]:
So a fusion center is one of these elements of what I’ve called a modern risk function. If we step back for a moment and look at the traditional functions, I’m talking cyber, fraud investigations, physical security, money laundering, that sort of domain; that’s what a fusion center covers. If you look at the traditional view, they’re very siloed. They usually reported into different functions, and there was usually no information sharing or joint reporting between them. And again, if we bring it back to our previous discussions, those reports then went up to a higher level, it could be up within the risk organisation, and someone had to synthesise those reports and actually make a call on it.

Paul O’Rourke [00:21:11]:
But in the meantime the organization’s been exposed unnecessarily. So what a fusion center is, it’s looking at the aggregation of these like-minded control functions and extracting value. The primary purpose of doing a fusion center is effectiveness. It’s not about taking cost out; there may be some cost savings, but that’s not the focus here. The focus is around how we make our control functions of anti-money laundering, of fraud and of cyber investigations much more effective. And you do this through a few elements. One, and primarily, is data sharing: if you think of a fusion center, each of these functions informs both themselves and the others as well.

Paul O’Rourke [00:21:54]:
So if we think of an incident of scanning of a network by a bot, something like that, or a compromise of customer credentials in the market, that could be an early indication of fraud for the fraud team, passed on and shared with the cyber team in real time through a data layer that’s all within the same function, and it could also be an early indication of a cyber compromise. And so this is where we get a much more effective function. We’re not running siloed functions. There’s no use having fraud looking at this and addressing it but not informing the cyber function, and the cyber function then facing a major cyber attack. So what we’re trying to do here is collectively understand the risk, manage the risk and reduce the risk. There are a few elements here. It could be co-locating the people together in the same function, which is often the key element. It could be putting them on the same technology stack, which usually takes a bit of time to get there, but it’s a key element.

Paul O’Rourke [00:22:47]:
But fundamentally this is around two elements: data sharing, making sure that we share the data in real time across all these functions, and removing the traditional silos of reporting into different functions and not sharing. If we look at what a modern risk function is doing around these areas, it’s facing very difficult risks that no one’s ever seen before in a lot of cases, a lot of the scams, a lot of the cyber attacks. We need to be at the forefront of understanding and mitigating these risks, and I think fusion centres are one of the best ways to do that.
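To make the data-sharing idea concrete in code terms, below is a minimal, hypothetical sketch of a shared signal layer between control functions. It is purely illustrative, not a description of Paul’s or Tabcorp’s systems, and every class, function and signal name in it is an assumption: a signal raised by the fraud function is published once and the subscribed cyber function sees it immediately, rather than waiting on siloed reporting.

# Illustrative sketch only (not from the episode): a minimal in-memory "fusion" bus
# showing how a fraud signal could be shared with a cyber team in near real time.
# All names here are hypothetical.

from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List


@dataclass
class RiskSignal:
    source: str   # which control function raised it, e.g. "fraud"
    kind: str     # e.g. "credential_compromise", "network_scan"
    detail: str
    raised_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class FusionBus:
    """Shared data layer: every control function publishes and subscribes."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[RiskSignal], None]]] = defaultdict(list)

    def subscribe(self, kind: str, handler: Callable[[RiskSignal], None]) -> None:
        self._subscribers[kind].append(handler)

    def publish(self, signal: RiskSignal) -> None:
        # Fan the signal out to every interested function immediately,
        # rather than waiting for periodic siloed reporting.
        for handler in self._subscribers[signal.kind]:
            handler(signal)


def cyber_team_handler(signal: RiskSignal) -> None:
    print(f"[cyber] reviewing {signal.kind} from {signal.source}: {signal.detail}")


if __name__ == "__main__":
    bus = FusionBus()
    bus.subscribe("credential_compromise", cyber_team_handler)

    # The fraud team spots compromised customer credentials in the market;
    # the cyber team sees the same signal at once and can triage jointly.
    bus.publish(RiskSignal(
        source="fraud",
        kind="credential_compromise",
        detail="customer credentials circulating on a criminal marketplace",
    ))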

Karissa Breen [00:23:19]:
Okay, so there are a couple of things there I want to get into, going back to the fusion centres. Would you say that companies out there are doing this, or would you say the majority of these enterprise organizations are still doing the independent silo approach? If you had to sort of weight it, where would you say the majority of these businesses are in terms of their maturation?

Paul O’Rourke [00:23:42]:
On maturation, I’d say 80% plus are traditional. The modern ones are going integrated. Integrated is difficult; fusion’s difficult. It requires a rethink at an organizational level. It’s not just redesigning within your own function, it’s about aggregating functions within one area. So some leaders have to give up some functions.

Paul O’Rourke [00:24:03]:
It could be fraud or investigations, whatever, to aggregate them under a separate leader, and often that’s the hard part. So there’s a political element to this. There’s a maturity angle, there’s an understanding of what to do, and then there’s a factor over time of how to mature it. But I think most organisations are just not there yet. I think they will get there in roughly the next 24 to 36 months. And why do I think that? Because they have to. If they’re to stay at the forefront, they need to address something like this.

Karissa Breen [00:24:32]:
Okay, so going back to your comment before on this fusion center, you said perhaps putting some of these people in the same function. So would that become a new function? What I mean by that question is, you’ve got human resources as a function. Would this fusion center become the function which sort of encompasses the cyber risk folks as well as the people that inform anti-money laundering, all of those sorts of people that you’re talking about? Is that what you’re envisioning?

Paul O’Rourke [00:24:56]:
No, I don’t think it goes that far. I think it’s an aggregation of functions, and it’s usually fraud, cyber and investigations as the first element. AML, anti-money laundering, is a much bigger problem, so that’s usually a second phase. So I think what we talk about is phase one and phase two, but I don’t think it’s a separate function. It will either sit in the traditional view of line one or it will sit in line two, increasingly line two in organizations. But it won’t be a separate function.

Karissa Breen [00:25:23]:
Got it. Okay, that makes sense. And so then, in terms of the practicality of sharing the data as you mentioned, yes, okay, we’ve got technology in terms of dashboards and platforms, all these things to share data. But what about just people getting in a room, actually triaging and talking to each other? How does that then sort of work day to day, or how do you see that?

Paul O’Rourke [00:25:42]:
Well, that’s one of the elements around a fusion center. It traditionally doesn’t happen in a lot of organizations because these are, to your point before, run in separate areas of the organization, you know, structurally different, structurally separate. They could be in different buildings, whatever. If we get to a position of co-location where we put like-minded control functions together, in some ways that reinforces the alignment position, but it also just makes ways of working much easier for groups, working together, sharing information at the data layer but also at the personal level, just working together, triaging an incident. And I think that’s a fusion center in its core element, around that cyclical element of data, but it’s also about fundamentally changing ways of working for these different functions.

Karissa Breen [00:26:28]:
The other thing that you raised before, which is interesting, is how to face a risk no one’s ever seen, especially now with cybersecurity and everything that’s going on with artificial intelligence. How do you face that?

Paul O’Rourke [00:26:40]:
Before I answer that, let’s think of traditional areas of risk. Let’s take credit risk: for organizations that carry credit risk, there’s a traditional view and there’s a historical view of credit risk within the organisation, within the Australian sector and within the global sector as well. It could be financial services, you know, bad loans and other elements within the credit risk portfolio, and how to price it and how to manage the risk. And what I’m saying there is, there’s a proven capability and a proven experience layer for how to manage those risks. Then go forward and look at cyber risks, look at the new scams, look at the new cyber attacks. There’s no historical basis to look at for how to do things like this. Credit risks you can often model going forward in terms of what the impact could be if we had X level of bad loans and write-offs and things like that; that’s not easily transferable to some of these new and emerging risks.

Paul O’Rourke [00:27:35]:
It could be AI, it could be crypto, and particularly cyber as well. So this is where the skill set we spoke about at the start of the call comes in: how do I make those decisions, and how do I make informed risk and governance decisions, if I don’t have a thorough understanding of technology? I just think it’s a core element going forward of what, loosely, a modern CRO needs to be to effectively manage and govern their function.

Karissa Breen [00:28:01]:
Yeah, okay, no, I totally hear what you’re saying. And would you say as well, from my experience, if we look at AI for example, there’s still a lot of stuff that people just don’t know. And to your point, we don’t have the historical data or the blueprint of, we’ve done it for 50 years, this is the way we’ve done it. How are businesses or people like yourself addressing things where, hey, even if we’ve got the tech people here, we’re still going to be entering uncharted waters?

Paul O’Rourke [00:28:26]:
Well, I think if we take AI, it’s uncharted waters now and uncharted waters into the future, because AI, similar to cyber, evolves at such an incredible pace. Even over the last 24 to 36 months the evolution of AI and the maturity of AI has moved exponentially. That represents an enormous risk. And the other one is, a lot of organizations have a lot of experimentation across all areas of the business in AI, for productivity reasons, for competitive reasons, for just driving growth within the organization. Fully supported, but at what cost? I think there need to be swim lanes within an organization, there need to be policies, there need to be guardrails to actually both manage and govern the use of AI. And I don’t want this to come across as though I’m not supportive of AI. I think it’s a core element that every organisation should and must adopt, but they need to understand the risk of what they’re adopting and continually refresh and challenge themselves around the emerging risks: what’s happening globally, what have other organisations faced, and how do we stay on top of it.

Paul O’Rourke [00:29:30]:
It’s going to be a very challenging area for organisations in the next three to five years.

Karissa Breen [00:29:36]:
That’s a great point you raise around continuously understanding the risks that we’re accepting. So would you say that companies out there are just like, okay, we’ve looked at this AI risk, for example, yeah, okay, risk tolerance there, we’ve accepted it, but then they don’t go back to reevaluate that and it blindsides them? Do you see that, or do you potentially see that happening? As you mentioned, things are evolving so quickly, we don’t know what’s going to happen, we haven’t seen a lot of these things, and there’s no, you know, history repeats itself to fall back on. How do you see that?

Paul O’Rourke [00:30:07]:
I think there are almost definitely going to be incidents of compromise around AI. And so I think it’s a learning, not just at an organizational level; it’ll be at a country level, it’ll be at a global level: what are the risks, how do we stay on top of them, how do we manage them, how do we govern them, how do we report them and how do we challenge ourselves? I think what AI demands is a much greater refresh cycle of the risk framework around it. Traditionally, you look at it every 12 or 24 months. Going forward, it needs to be on the agenda much more frequently, just challenging ourselves as a risk function: what are the risks in AI? What are the emerging risks? What have we seen? Are we on top of them? And to the point I made before, I think the complexity with AI, probably different to most other areas, is that it’s pervasive across nearly all elements of organizations. And so just having a manifest and a clear view of where it’s being used and what elements of risk that’s introducing, I think, is going to be a challenge for organizations.

Karissa Breen [00:31:12]:
So Paul, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?

Paul O’Rourke [00:31:17]:
I’ve covered it a few times, but I think next generation CROs need to be much more adept at domains that are emerging. We’ve just spoken around AI, we’ve spoken about cyber. I’d underpin it to say it’s about being technology literate. You don’t need to be a technology expert here; that’s not required. But I think that the days of not having comprehensive technology skills will make it very hard for CROs to be as effective in their roles in the future as they need to be. I don’t want this to come across as being at the expense of traditional areas of risk, you know, credit, market risk, governance, reporting, et cetera. In many ways, the addition of these skill sets around emerging areas, cyber, AI, et cetera, and technology will just help CROs be much more effective in their roles and face what I think will be different risks than what we’ve faced in the last 10 years.
