The Voice of Cyber®

KBKAST
Episode 248 Deep Dive: Corien Vermaak | The Future of Cybersecurity: Expert Insights on Challenges and Opportunities Presented by A.I.
First Aired: March 13, 2024

In this episode, we are joined by Corien Vermaak (Head of Cybersecurity – Cisco ANZ), who shares insights on the evolving role of AI in cybersecurity. From early threat detection to ethical considerations, we explore the potential and challenges of AI in safeguarding digital environments. Corien discusses how AI can proactively identify anomalies and augment human skills, enhancing security measures and response times. Join us as we delve into the integration of AI in cybersecurity, its impact on job roles, and a thought-provoking discussion on the ethical implications of AI decision-making.

Corien Vermaak started her career as a technology law specialist within the telecommunications space and soon fell in love with data privacy and the legal structures governing cybercrime. In her career, Corien specialised in cybercrime legislation and data privacy while representing large multinationals on these matters.

She holds a master’s degree in law specialising in cybercrime and data privacy. She has been involved in the writing of legislation in this regard, as well as consulting to the African Union and Interpol on issues relating to cybercrime and privacy.

Corien is a qualified Digital Forensic Auditor, Lead ISO 27001 Auditor and C|CISO. She has been a CISO Advisor for the Cisco Security Centre of Excellence in the Asia Pacific, Japan and China region, where she has led the market.

KB has previously interviewed Corien, but asked for a more in-depth interview. This is likely to be followed by an in-depth interview with Jeetu Patel, Global Executive for Cybersecurity at Cisco.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Corien Vermaak [00:00:00]:
I absolutely think there’s more than one application in how we see this play out in the future, and that we absolutely have an opportunity to speed up the way we respond, to zone in on those early detection phases, but bring the intelligence to that machine. Start looking for other concern indexes or concerning behavior based on an existing pattern, or get ahead of something really breaking out, ransomware spreading laterally or scans spreading across the network.

Karissa Breen [00:00:53]:
Joining me today is Corien Vermaak, Head of Cybersecurity from Cisco ANZ. And today, we’re discussing what AI will do for the future of cybersecurity. So Corien, thanks for joining, and welcome.

Corien Vermaak [00:01:03]:
Thank you very much for having me, KB.

Karissa Breen [00:01:05]:
So on that note, what will AI do for the future of cybersecurity from your point of view? Now I know it’s a big topic. A lot of people have varying opinions, but I’m keen to hear yours.

Corien Vermaak [00:01:14]:
So I think this is such an opportune time to really debunk a little bit of the myth factor. You know, I feel that this AI conundrum has really taken the technology world by storm, and it’s two-part. One, there’s a bit of a hype cycle around it, but there’s also a bit of ignorance around it. And what does it really mean? Now, as a security practitioner, I am tremendously excited about the new iteration of artificial intelligence that we’re seeing. And what I mean with a new iteration is the fact that we’ve now gone from AI just predicting and recommending to AI really creating. Now, what that means to us in cybersecurity is our systems are no longer just spotting vulnerable code and knowing that that code is vulnerable based on previous learnings; they can now create. And that creation means it could actually block. It could learn how operational teams respond to these specific vulnerabilities, and it can go and create a solution.

Corien Vermaak [00:02:33]:
It can recommend and mend and remediate that, which has in the past been tremendous heavy lifting for our teams. So when we look at this, what we call generative AI, and the ability to interact with AI learning and systems in a natural language format, I think it should excite everybody in our industry, because it’s giving us that opportunity, not only for the systems to start being part of the solution, but secondly, it also opens up our resource pool, where we now have people that are young, inexperienced, possibly just left or joined the industry, you know, left university or any kind of formal education, new starters coming into our industry. And we have the ability through this natural language interface to really ask smart questions, use that thing that, KB, you and I have previously talked about, that critical thinking that is so important in this industry, to truly quiz the system, and that gives us the opportunity to augment our resourcing.

Karissa Breen [00:03:48]:
Okay. There’s a couple of things in there, Corien, you said, which were quite interesting. I wanna press them a little bit more around the myths that are in AI. Now I’d probably have to blame mainstream media for some of these myths that are out there, but from your perspective, you’re obviously at the coalface. You’re on the front line speaking to customers. What are some of the myths that people have? And then how can you go about defying some of these myths, or debunking them, to use your phraseology from before?

Corien Vermaak [00:04:14]:
Yes. And I say this with a tremendous smile on my face, because I think the biggest myth in AI is the fact that there’s a sense that it’s new, that it’s a new technology, and that is absolutely not the case. If you look at the development of predictive algorithms, it dates back to the 1950s. 1951 saw the first academic application of a chessboard software that, you know, Dietrich Prinz built, and it was just an algorithm to predict the next move in a chess game. Now that in itself debunks the fact that AI is new. What we need to understand is the way we train machines: firstly through supervised machine learning, then you get the predictive AI models that can predict and recommend the outcome of certain things. What we’re seeing now is the leap to the creative AI, so the generative AI.

Corien Vermaak [00:05:17]:
So one of the examples that I wanna give with this is two years ago, our minds were blown when a fridge got launched that can order milk when the milk carton space within the fridge is empty. Now all that is is a concrete measurement: the milk spot is empty, and then the fridge goes into an action to order new milk. It seemed groundbreaking, but what I mean with this creative phase is that with the new iteration of AI, the fridge can now sense that your usage is up over weekends because you may entertain and use more milk, and therefore it can anticipate that you’ll run out of milk quicker. Now that is that creative part, where the systems and the machines are learning behavior analytics and those natural behavior patterns that humans are so well known for, and then predicting what comes next. And also, they have the ability to create. And I think that was made famous with ChatGPT and all of these generative AI platforms that we saw take media by storm in the early part of 2023. That’s the one massive myth that I would wanna debunk.

Corien Vermaak [00:06:37]:
The second of the only two myths that I think we really need to focus on for conversations around technology is the fact that there’s going to be a massive loss of employment because of AI, and I truly believe that that is not going to be the case. I think we are in an opportune space to realize that, and there are a lot of sayings circulating in the industry around that. People say you won’t lose your job to AI, but you may lose your job to somebody using AI. And I think that’s the important thing to focus on. We, as security practitioners more specifically, will augment our existing skills. We will streamline our processes. We will also have the opportunity to do things. You, as an operational practitioner yourself, would know, even looking as short a way back into history as four years ago, how cumbersome a forensic investigation could be, or how cumbersome a real deep breach investigation could be, and how long some of these remediation actions would take.

Corien Vermaak [00:07:50]:
Now with generative AI in hand, augmented with the ability of natural language, we have such a tremendous opportunity to ask the system what its recommendation is, immediately pivot back to ask the system what a code remediation would be, how do you secure this specific piece of code, and then sense-check that without needing to go to our traditional libraries. And that in itself has a great opportunity to reduce our time to respond and our time to remediate, for instance, in cybersecurity. And then I’m obviously not even talking about the massive value that it brings in vulnerability studies, and getting that creative lens on what this vulnerability actually means to my estate. And again, the fact that we now have the ability to interact with these systems through natural language is the true big leap that we saw in the last 18 months. And that really excites me, because while we debunk the myth, it absolutely sets us up for a future using AI to fight our adversaries.

Karissa Breen [00:09:02]:
Yeah. Great explanation on that. And I think you are right. And around these myths that people have, especially around the loss of employment, would you say as well, Corien, it’s just gonna take time? They say time heals all wounds, for people to maybe feel a bit more comfortable with the sense of, hey, AI is not gonna fully take all of our jobs, and it’s gonna help augment, to your point. Will that just take a matter of years or months, or whatever that may look like, for this new norm to be a little bit more, I guess, apparent in our everyday life? Because if you look at when the Internet started, people were afraid of that. Right? But now, imagine if we didn’t have any Internet; people would go nuts pretty quickly.

Karissa Breen [00:09:42]:
So do you think that as we traverse into the next five or so years, it’s just gonna become normal that we are operating in our job with AI?

Corien Vermaak [00:09:50]:
I think even if you look at the AI readiness study that Cisco has done, one of the key indicators in that is the fact that organizations are very aware that they do not have the correct skills. And those skills are spread across different, how can I say it, different roles. On the development of AI, there’s a big need for developers to bring these AI platforms into what people already do. For instance, how I deal with my bank will now be augmented through generative AI, and a lot of the questions could be answered through a bot. So that development part is critical for organizations. Then the adoption part has really taken flight. All organizations are looking at how they help their employees be more productive. And again, we’ve seen a lot of that in security: how do we help our teams do more with less, quicker? AI is one of those answers, but to adopt that into our current usage and toolset is a skill unto itself.

Corien Vermaak [00:11:00]:
There’s absolutely focus on that. And then, obviously, also talking about all the peripheral skills needed, like project management and change management, to get people on board with all of these AI programs is very, very critical. And this then, lastly, as a thought, spills over into: how do we do this responsibly? How do we do this ethically? How do we ensure that the code that we integrate, that the bots that we build, are inclusive, that it’s easy to deal with them, and that we don’t exclude people from this process or apply this technology in any discriminating way? My forward thinking is telling me, or my future eye is saying, that I’m expecting a lot of legal and compliance roles to also pop up to oversee ethical development and adoption of AI. And this ultimately creates a bit of, I wanna say, a war in the job market. And I expect people would need to cross-skill a little bit. And again, that’s an exciting opportunity to really dive into this AI world and see how it can augment your role, whether that role is in technology or not.

Karissa Breen [00:12:23]:
So you said before, getting people on board. Would you say generally, and I’m saying this generally because, obviously, we’re having a general conversation, that people are on board with AI, or would you say people are relatively apprehensive?

Corien Vermaak [00:12:36]:
I always, tongue in cheek, tell a little tale of, actually quite recently, somebody saying to me at a barbecue, what is ChatGPT? And my face as a technologist obviously immediately went into, you and I can’t be friends. But what dawned on me after that conversation is that for us as technologists, it’s run of the mill. All of us were playing around with generative AI tools within near immediacy of them being available to consumers, and we saw that. A lot of these platforms toppled over post launch because of just a contention issue, too many people trying to get access to the same platform. However, there’s still a large cohort of our general population that’s not working with AI and on AI and playing around with it and seeing how it could help them in their day-to-day work. So I would say, in the technology spheres, my sense in dealing with customers on a day-to-day basis is there’s a very high and mature rate of adoption. Most people can mention or name more than three different platforms that they’re interacting with, whether those are language curation platforms like Grammarly, or ChatGPT for creating content, drafting things, redrafting, or doing copy edits for you, or platforms like Beautiful.ai that can actually create presentations. Most of us are using these tools.

Corien Vermaak [00:14:10]:
However, when you look at slow technology adopters, my sense is that there’s still a big group of people that have not adopted it. And as we are aware, in organizations there’s a good mixture of these people. So organizations will absolutely have to bring those non-technology adopters, or those non-technology-native people, along on this journey and help them to understand how these AI tools can help them with productivity. And I think that is a great opportunity. But in the technology world, I think there are two things that we have to acknowledge. One, we are early adopters, so we’re seeing a lot of it and a lot of exploration happening. That, when we put our cybersecurity hat on, absolutely needs to go with protection, because it is a critical sphere for data leakage, for breaches to occur, when you sit with sort of this shadow IT leak kind of environment where employees upload corporate and confidential information into these platforms with AI embedded that we may not even be aware of.

Corien Vermaak [00:15:35]:
You would be very aware as a practitioner yourself that just the algorithms around behavior analytics have become quite rife in what we do in cybersecurity. I most recently read about a start-up that’s got the capability to biometrically identify your keystrokes: effectively, they’ve built an algorithm to log how hard you strike the keys and how quickly you strike the keys in between each other when you enter your password, and that becomes a unique fingerprint. And that algorithm can predict whether it’s you entering your password or not. Now that is AI at its best. But, again, most people aren’t aware of these technologies that are out there that can absolutely be used to improve the way we protect our organizations.
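
To make the keystroke-dynamics idea concrete, here is a minimal Python sketch of how such a fingerprint could work in principle. It is illustrative only, not the start-up’s or Cisco’s implementation, and all names and thresholds (build_profile, is_probably_owner, the z-score cut-off) are assumptions for the example.

```python
# Illustrative sketch only: a toy keystroke-dynamics check, not any vendor's product.
# It builds a per-user baseline from key-timing vectors (e.g. hold times and gaps
# between keys while typing a password) and flags attempts that deviate too far.
from statistics import mean, stdev

def build_profile(samples: list[list[float]]) -> tuple[list[float], list[float]]:
    """samples: one timing vector per enrolment attempt (same password each time)."""
    columns = list(zip(*samples))
    return [mean(c) for c in columns], [stdev(c) for c in columns]

def is_probably_owner(attempt: list[float], means: list[float], stdevs: list[float],
                      z_threshold: float = 2.5) -> bool:
    """Accept the attempt only if every timing feature stays within the z-score threshold."""
    for value, mu, sigma in zip(attempt, means, stdevs):
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            return False
    return True

# Example: three enrolment samples of the same password, then a suspicious attempt.
enrolment = [[0.11, 0.09, 0.14, 0.08], [0.12, 0.10, 0.13, 0.09], [0.10, 0.09, 0.15, 0.08]]
means, stdevs = build_profile(enrolment)
print(is_probably_owner([0.31, 0.25, 0.40, 0.22], means, stdevs))  # False: far too slow, likely not the owner
```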

Karissa Breen [00:16:26]:
So would you say at this stage, Corien, that there are still a lot of unknowns, as much as there are people building frameworks and a lot of reporting going on to get data and understand a little bit more? Would you say there are still blind spots out there that perhaps we as an industry are missing?

Corien Vermaak [00:16:45]:
Absolutely. I think there are still blind spots, and because we don’t fully unpack the application of these technologies, we will, for the foreseeable future, be in a constant fact-finding mode, I would say. Our compliance teams will need to look at each application that employees request to use in a unique and specific way, with a data governance as well as a security lens. And because these frameworks aren’t matured yet, I would say that they would have to do a bit of critical thinking and problem solving whilst they look at these applications to ensure that it is underpinned by good governance. And when you look at organisations like us, Cisco prides ourselves on, for instance, the responsible AI usage policy that we developed, and our security and trust organization constantly refines that. One of the things that is very forward thinking in the way we see our policies is that we’ve underpinned them with the principle of transparency, which is a really good principle to have when it comes to AI. That effectively requires anybody utilizing AI to disclose that. So when I drafted an email to you, or when one of my engineers concluded a very technical draft document and they had an AI tool sense-check that for them, or redraft that, the use of that AI tool in that creative process is disclosed.

Corien Vermaak [00:18:33]:
And I think that’s really good, because it comes back to ethics. It comes back to owning the work that you did yourself, but also owning where you’ve augmented your own skills with the AI tool. And that shows maturity, for organizations to start looking at the principles that underpin the usage of these tools.

Karissa Breen [00:18:54]:
I wanna get into the ethics side of things. But before we do that, I wanna first explore how smart AI models work.

Corien Vermaak [00:19:02]:
Oh, that’s a great question. And it really goes back to what I mentioned earlier about how we developed these, and I’ll explain it with a bit of a use case. We crossed the Rubicon from supervised machine learning to predictive AI models, and just to pin a date on that a little bit, predictive AI reached quite a high level of maturity in 2017 already. So, again, debunking that myth that it’s a new technology, the ability for artificial intelligence models and algorithms to predict an outcome across a vast majority of datasets was really quite mature in 2017 already. And, therefore, we already, at that stage, saw very mature adoption of things like the user behavior authentication that I briefly mentioned previously: understanding whether a user is accessing something that they regularly access. For instance, users have got very distinct patterns in how they interact with systems. You know, they log on at a certain time of the morning.

Corien Vermaak [00:20:21]:
They generally have a routine around checking mail and then possibly logging on to systems that they have to work on. And those patterns can absolutely be learned by predictive models based on traffic flows. And that effectively means that we can pinpoint when a user acts out of their behavior pattern. And that, again, as a security practitioner, should raise all kinds of alarm bells. You know, when a user logs on to a system, for instance, off territory in an unsanctioned country, at a time frame that’s not normal, like four o’clock in the morning or two o’clock in the morning, that user behavior should immediately prompt a high alert. And that’s a great example of where we see these predictive AI models really helping our defensive teams, and these technologies have been around for quite some time. When we look at that, we also have the ability to step beyond some of that user behavior.
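
As a rough illustration of the login-pattern idea described here, the sketch below learns a user’s usual login hours and countries and flags anything outside them. It is a toy example with hypothetical names (LoginBaseline, is_anomalous), not a description of any specific product.

```python
# Minimal sketch: learn a user's normal login hours and countries, then flag
# logins that fall outside that baseline, e.g. 02:00 from an unusual country.
from collections import Counter

class LoginBaseline:
    def __init__(self):
        self.hours = Counter()
        self.countries = Counter()

    def learn(self, hour: int, country: str) -> None:
        self.hours[hour] += 1
        self.countries[country] += 1

    def is_anomalous(self, hour: int, country: str, min_seen: int = 3) -> bool:
        # Flag if we have rarely or never seen this hour or this country for the user.
        return self.hours[hour] < min_seen or self.countries[country] < min_seen

baseline = LoginBaseline()
for _ in range(20):                 # a month of typical 8-9am logins from Australia
    baseline.learn(8, "AU")
    baseline.learn(9, "AU")

print(baseline.is_anomalous(9, "AU"))   # False: matches the learned pattern
print(baseline.is_anomalous(2, "XX"))   # True: 2am from an unusual country -> high alert
```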

Corien Vermaak [00:21:30]:
One of the things that we do quite well, and that a lot of people aren’t even aware of, is that with the increase of encrypted traffic on networks, we actually have the ability to fingerprint a data flow and identify it based on the way the packets are broken up, sent, and prioritized. This creates a very specific fingerprint, and it’s called encrypted traffic analytics or encrypted threat analytics, where we know how a command and control establishment behaves from a rudimentary switching point of view, and therefore that becomes a fingerprint that we can identify without decrypting the information, and we can raise alarms around that. Now, again, going back to how these models work: what we have, for instance, done within Cisco is, with the vast amount of threat knowledge that we have through our Talos organization and the amount of threats that we see traverse our networks globally, we were able to firstly identify that fingerprint within encrypted traffic, then we were able to build those models and train algorithms to identify those models on traffic flows, and effectively we were able to roll that out across all of our switching equipment, which is a tremendous feat if you think about it. Now it means wherever traffic flows touch a Cisco network or our equipment, we have the ability to identify and fingerprint them against those known anomalies, and we are also able to clear clean traffic flows, like a Google search. And that really puts us in a position to train these models in a very succinct way. The more threats we see, the more of these models we can build, see them as fingerprints, and the more of that we know, the more succinctly we can defend against them in the early stages of that traffic flow behavior. And that’s a great example of how these models work. I wanna, again, just maybe go back to the fridge analogy and say that we’ve gotta change our mindset around how we look at AI and the training of these models, because the smarter and more predictive they become, the greater our risk.
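
A heavily simplified sketch of the fingerprinting concept follows. Real encrypted traffic analytics uses trained models over rich flow telemetry; this toy version only compares packet-length sequences against a made-up "known bad" pattern, purely to illustrate the idea of classifying flows without decrypting them. All names and numbers are assumptions for the example.

```python
# Toy illustration: match a flow's packet-length sequence against known-bad
# fingerprints without looking at payloads. Not Cisco's implementation.
KNOWN_BAD_FINGERPRINTS = {
    # hypothetical signature: small beacon, small reply, repeated - a C2-like heartbeat
    "c2_beacon": [64, 128, 64, 128, 64, 128],
}

def sequence_distance(a: list[int], b: list[int]) -> float:
    """Mean absolute difference between two packet-length sequences of equal length."""
    pairs = list(zip(a, b))
    return sum(abs(x - y) for x, y in pairs) / len(pairs)

def classify_flow(packet_lengths: list[int], tolerance: float = 16.0):
    for name, fingerprint in KNOWN_BAD_FINGERPRINTS.items():
        window = packet_lengths[: len(fingerprint)]
        if len(window) == len(fingerprint) and sequence_distance(window, fingerprint) <= tolerance:
            return name          # matched a known-bad fingerprint; raise an alarm
    return None                  # looks like ordinary traffic, e.g. a web search

print(classify_flow([66, 130, 62, 126, 64, 132]))          # 'c2_beacon'
print(classify_flow([1500, 1500, 900, 1400, 1500, 800]))   # None
```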

Corien Vermaak [00:24:04]:
Because if I go back to the fridge example, where your fridge used to only be able to order milk, or alert you that the milk is running out and you could do the ordering part, it now has the ability to order the milk on your behalf, which common sense would tell you means it’s got credit card information and personal home delivery details. And then it takes that step forward in the generative form, where it takes your patterns into consideration and now sees that maybe you don’t use milk on a Thursday, which means you’re not home on a Thursday, and effectively becomes that critical vulnerability to knowing your movements in and around your house. So when we look at an example like that, you could absolutely arrive at the fact that we have to do some heavy lifting in securing these systems all around.

Karissa Breen [00:24:56]:
Yeah. Great analogy. And when you were talking, I was actually thinking about the milk example, but I just wanna press you a little bit more, just to set the scene. So you said before, intercepting incidents or breaches or whatever it may be at the earliest stage. How much earlier, though, would that be? So for example, when you spoke about employees logging on in an unsanctioned country, obviously they would log on and you’d get the alert. But with the predictive side of it, do you have any sort of indicator of, like, oh, Karissa could start to go down that path because, I don’t know, some of the stuff she’s doing on her laptop is appearing like a rogue employee, which therefore might mean that she could do something, or it could be anything really. As a reporting analyst myself, historically, I really value the predictive side of things, especially the analytics. So I’m just curious to know, like, how good do you envision this getting?

Corien Vermaak [00:25:57]:
I think that’s such a brilliant question. And this goes back to our discussion point earlier, where the human skill and knowledge is still such a critical part of how we apply this. Now, in the case that you used, where a machine is, as we say in medical terms, query rogue, there’s a suspicion that this machine is running a reconnaissance or there are scans being conducted from this machine. That behavior in itself then goes back to the human decision making as well as the policies, and we, as the practitioners, will be responsible for building these policies and playbooks. Now, again, if that is just a rogue machine, and I wanna stay with the example that you gave, the policy can then determine that that machine should be put in a heightened monitoring state. So we think that Karissa is somewhere, and she may be traveling, and therefore her login time frames are out of whack. Therefore, let’s just put it in monitoring to see whether there’s then the next level of what we know in terms of the attack frameworks.

Corien Vermaak [00:27:16]:
Is there data hoarding happening? Are there scans happening? Is there some lateral movement? Is there an attempt to escalate privileges? Those are the things that we can then zone in on and focus our efforts on that one machine, which gives us a tremendous opportunity to really catch the perpetrators early in the attack framework. However, if we go to another example, and let’s say you and I were both around when WannaCry broke out: let’s say your AI, or you, identify one machine with a worm-like virus, like WannaCry, that has this tremendous spreading capability in a short amount of time. When you identify, or when the system identifies that, you can immediately give an instruction that wherever the AI systems or wherever the algorithms detect that, to quarantine those devices and possibly shut them down. Now, through MDM capabilities and how we architect our security platforms, we have the ability to do this, and that’s where our speed to response will be tremendously faster through AI. Because I can now immediately identify that this is something that runs the potential of breaking out on my network and taking down a lot of devices. I can now, through natural language, within the foreseeable future, give an instruction to say, quarantine everything where you see this concern index. Now that we have not been able to do in the past, and that’s a great example, as you could think, of where we could absolutely use these artificial intelligence engines and software to speed up our efficiency in a sort of emergency environment.
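
The automated-containment idea could look something like the sketch below: a playbook that quarantines devices as soon as a worm-like concern index is detected and only alerts humans afterwards. The function names and the MDM call are placeholders for the example, not a real Cisco or MDM API.

```python
# Minimal sketch of a "contain first, alert after" playbook. The quarantine call
# is a placeholder; in practice it would be a vendor-specific MDM / NAC API.
import logging

logging.basicConfig(level=logging.INFO)

def quarantine_device(device_id: str) -> None:
    # Placeholder for an MDM / network access control call.
    logging.info("Quarantined %s pending investigation", device_id)

def handle_detections(detections: list, concern_index: str, threshold: int = 1) -> None:
    """detections: e.g. [{'device': 'laptop-042', 'indicator': 'worm_lateral_scan'}, ...]"""
    affected = [d["device"] for d in detections if d["indicator"] == concern_index]
    if len(affected) >= threshold:
        for device in affected:
            quarantine_device(device)          # contain first...
        logging.warning("Auto-contained %d devices for %s; SOC notified for review",
                        len(affected), concern_index)  # ...notify after the fact

handle_detections(
    [{"device": "laptop-042", "indicator": "worm_lateral_scan"},
     {"device": "kiosk-07", "indicator": "worm_lateral_scan"}],
    concern_index="worm_lateral_scan",
)
```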

Corien Vermaak [00:29:11]:
So I absolutely think there’s more than one application in how we see this play out in the future, and that we absolutely have an opportunity to speed up the way we respond, to zone in on those early detection phases, but bring the intelligence to that machine. Start looking for other concern indexes or concerning behavior based on an existing pattern, or get ahead of something really breaking out, ransomware spreading laterally or scans spreading across the network. And those are the things that we could build into an automated response with these platforms, and the platforms can respond and then send an alert after the fact. It’s much easier to go and unlock machines that were locked by the system versus having something run rampant on your network.

Karissa Breen [00:30:09]:
So just to summarize that a little bit more with that example, Corien. So what you’re saying is that everything you just talked about at length could perhaps be the difference between a major incident, ransomware, etcetera, because there’s that window of time, whether it’s through the heightened monitoring, the alert that comes a little bit earlier, etcetera, to be able to respond faster than perhaps before.

Corien Vermaak [00:30:29]:
Absolutely. And with these machine models, we can spot things that, I wanna say, the human eye can’t. And I wanna just give you a quick example of that. When you build a model to understand the general download behavior patterns of a user, you can use that as a fingerprint to immediately identify when a device is data hoarding. Now, you and I would both know, and our audience would know, that data hoarding precedes data exfiltration, and that’s what we’re trying to fight against. So if we can, and we can, we have the ability through network visibility tools to identify when a device, or a user, or an identity of a user is downloading more than their regular pattern, and we can then immediately go and investigate that further. And that precedes that data exfiltration step, and the system could do that. You would have to have a tremendously large SOC to even kick off an investigation around the behavior flows and the amount of data people download, where a system can just learn the behaviors over time and realize, a great example, that this person works in finance, and at the end of the month they download heaps of information.

Corien Vermaak [00:31:54]:
So once the system has observed that end-of-month cycle, it will build that into the model and continue that as a normal baseline, which is heavy lifting for any human analyst.
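
As an illustration of that baseline idea, the following sketch learns a user’s normal and month-end download volumes separately and only flags downloads well above the user’s own pattern. The class, fields, and multiplier are assumptions for the example, not any vendor’s model.

```python
# Illustrative only: a per-user download baseline that tolerates a known month-end
# spike, flagging volumes well above the user's own learned pattern as possible
# data hoarding (which often precedes exfiltration).
from statistics import mean

class DownloadBaseline:
    def __init__(self, multiplier: float = 3.0):
        self.normal_days = []
        self.month_end_days = []
        self.multiplier = multiplier        # how far above baseline counts as hoarding

    def learn(self, megabytes: float, is_month_end: bool) -> None:
        (self.month_end_days if is_month_end else self.normal_days).append(megabytes)

    def is_hoarding(self, megabytes: float, is_month_end: bool) -> bool:
        history = self.month_end_days if is_month_end else self.normal_days
        if not history:
            return False                    # no baseline yet; nothing to compare against
        return megabytes > self.multiplier * mean(history)

finance_user = DownloadBaseline()
for day in range(1, 29):
    finance_user.learn(5000.0 if day >= 27 else 200.0, is_month_end=day >= 27)

print(finance_user.is_hoarding(5500.0, is_month_end=True))    # False: expected month-end reporting
print(finance_user.is_hoarding(5500.0, is_month_end=False))   # True: way above the daily norm
```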

Karissa Breen [00:32:08]:
So with that in mind, do you envision that breaches and incidents will start to reduce then? If, just in a perfect world, every single organization on the planet is using smart AI and all this predictive analytics, will the breaches and incidents start to decline then, would you say, or is there gonna be another problem that arises?

Corien Vermaak [00:32:28]:
So I think, sticking with your language around the perfect world, we should be seeing that a lot of our network incidents could be minimized, or the early diagnostics or the early detection on these events should reduce our high fidelity cases. Now, saying that, I have to caveat that, unfortunately, our offenders are weaponizing AI against us as quickly as we are using it in defense. However, I also have to say that we have observed over the last 18 months a big pivot towards organized low-technology scamming. And that really goes back to that email threat vector and blocking that early entry through multifactor authentication, doing those basics really well, and then, obviously, bolstering those basic hygiene efforts with AI, like user behavior analytics and keystroke monitoring when you enter your password, to use it as a third and fourth level of authentication. Because we’re seeing that these, I wanna say, low-level scams, impersonations, and, with the use of AI, deepfakes are becoming a true concern. Specifically when you look at the financial industry and the insurance industry, they are seeing a lot of these scams increase through the use of AI. Therefore, even though I’m an ever optimist and I’m a true believer that AI is going to reduce a lot of our network-based attacks, I think it is a grand opportunity to also ensure that we use AI to protect against the vectors where we know there’s an increase in attacks. And those are the more, I wanna say, low-tech attacks, scams, and email compromises that come through to just a tired user that clicks on a link.

Corien Vermaak [00:34:46]:
And it’s one of those that we’ve been talking about for maybe more than a decade.

Karissa Breen [00:34:51]:
So I wanna flip back now to discuss the ethical side of AI. You say that AI must be ethical from the start. So then I’m curious to know, and maybe if you could define ethical, and then, where’s the start?

Corien Vermaak [00:35:05]:
Yes. So great question. Ethics is one of the oldest disciplines, and I actually most recently presented a seminar on that. I found, when I did my research for that, that what we’re expecting in this AI ethics world is not vastly different from what we expect from ethics as a whole. I think the base assumption in it is that it is fair, that it is equitable, and that it is in no way malicious against humanity. And that is, obviously, wholly oversimplifying the ethical values that we need to underpin these AI structures with. Now then, that brings us to the second part of your question: where does it start? And that’s a really good question, because I think, like we’ve seen model legislation force us to be secure by design, taking that security lens earlier in the development process, my sense is that when it comes to artificial intelligence, that ethical lens needs to be really early in the development, and that goes back to my example of adoption and building it into what we do.

Corien Vermaak [00:36:22]:
When we use AI in any way, shape, or form, whether it’s a voice responding on an IVR or a bot that somebody can speak to, we need to ensure that the application of that is inclusive, that it is not to the detriment of anybody, that it is unbiased. We’ve seen heaps of legal research being done around, for instance, the ethical platform that some of the US police forces use in their chest cams to identify and alert a responding police officer to whether somebody has got a pre-existing criminal record. Now some people may say, but what does ethics have to do with that? What the research found is that the amount of violence that the responding officers use in some of these arrests escalates when they are made aware of somebody’s previous criminal convictions. Now that in a sense is not fair, because if that is a rehabilitated criminal that is in the wrong spot at the wrong time, to be met by excessive violence could be frowned upon by society. Now that is one of the examples. When we look at self-driving cars, ensuring that a self-driving car doesn’t see a small child as an animal is really critical when it comes to the development of the software. And these are but a few examples. When you look at the development of filters and skin tones and emojis and these kinds of things, we wanna ensure that how we develop these platforms, and all of these, is inclusive, and that they aren’t harmful in any way to anybody that may use them.

Corien Vermaak [00:38:14]:
And that really comes back to the very beginning days of developing or integrating these technologies when we use them. And that goes back to, I think, companies needing to start with developing their AI frameworks. And I’m happy to share some of what we’re doing within Cisco, because we really pride ourselves on how we see the ethical development of AI throughout all of our tools. And all of our technology has got AI embedded in it somewhere. Even if you look at some of our speech tools and our collaboration tools, we have the ability to drown out collateral noises that are not voices, which means a barking dog in the background in a home office is no longer an issue, and that’s all done via AI. Now, as we develop these things, we ensure that they are done ethically, that they are inclusive, and that they do not harm anybody. And, again, the use of them is disclosed at any point in time. So these are some of the base guidelines that I would suggest, and they can’t come early enough in the process.

Karissa Breen [00:39:26]:
As you were talking through examples, I was thinking, oh, self-driving cars, and then you literally said it. So I’m curious about that, just as an example. And I remember doing an interview about this a few years ago, actually. But just say you’re driving along and there’s an accident, and, you know, it’s gonna be a bit of a basic accident, and just say there’s a cat or a dog. You have to swerve. How do the ethics then sort of come into it around who may be hit in this, who’s it going to be? Is that going to be really hard to determine? Because again, it’s not a black and white answer. There are variables. There are factors in it.

Karissa Breen [00:40:04]:
Is that where you think there are going to be arguments, to say, well, actually, I like cats more, so forget the dog, and then equally vice versa? So how’s that gonna play out through ethics? Is it gonna create more consternation than we already have around the whole AI hype cycle?

Corien Vermaak [00:40:22]:
Now, we have spent hundreds and hundreds of years refining our legal models around the reasonability test. And when you look at that, that is a great parallel or analogy to use when it comes to AI decision making. If that car is driven by a human, and for some reason they land up in court because they unfortunately harmed somebody’s cat or dog, whichever side they decided, they are going to have to account for what their decision-making criteria were in the circumstances. It could have been that there was a small child on the dog’s side, and to avoid the child, they had to swerve towards the cat’s side. Or there was a solid pole, and they had a minor in the car, on the cat’s side, so they had to swerve towards the dog’s side. And these are examples where, you know, you have the ability to recount the decision-making points that you had under consideration. And then the court applies something that they call the reasonable person test. And the reasonable person test effectively says, would somebody with your skills and knowledge in the same circumstances behave in a similar manner? And if a reasonable person would have behaved in a similar manner, it is considered to be reasonably fair, or fair in those circumstances.

Corien Vermaak [00:41:52]:
Now that is what we look at when we look at the ethical behavior of AI. But because this is a trained algorithm, and we can’t always necessarily recount how it arrived at a certain decision, that now becomes an ethical conundrum, because we need to have the ability to either keep the developer accountable, or keep the driver accountable that was sitting behind the self-driving car for not taking over control. And that’s where it becomes really tricky. So when we look at AI development, we, as society and as practitioners in these fields, need to absolutely drive our focus so that these trained algorithms behave in a manner that is acceptable and fair, and that we can then substantiate that manner and those decision-making protocols. And that’s why the testing of these algorithms is so critical. And, again, back to your self-driving car example, all of the developers across the world that work on self-driving cars have got subjects and copious amounts of test dummies and animals and all kinds of road scenarios to put that algorithm under pressure, because that’s effectively when the decision making becomes critical. And once under pressure, that decision making gets tested in different scenarios and also in a repetitive manner. Will the car behave the same, consistently, under the same set of circumstances? And how much change in the circumstances then moves the car’s algorithm to make a different decision? Those things need to be tested in an ongoing way. Self-driving cars are a great example, and self-driving buses are now an absolute reality if you look at where manufacturers, for instance, are going with their buses throughout Europe.

Corien Vermaak [00:43:55]:
Now you not only have one, two, or maybe four passengers in the hands of an algorithm; you may have young schoolchildren heading to class. That’s a massive responsibility to put on an algorithm, and the developers need to ensure that there’s no bias in the system, that the system can continuously and consistently behave in the right manner, and do it under pressure.

Karissa Breen [00:44:24]:
Wow. That’s wild. Do you envision, though, that things could go rogue? Because, I mean, even today we had a little bit of technical difficulties on this platform, and that’s not a whole bus driving around 60 children. Like, it is crazy that we’re all, to some degree, putting our faith in an algorithm, and things do go wrong. So are you ever worried about that? Because, again, I’ve worked in a bank myself, and I always used to say, when someone loses money, okay, that’s annoying and that’s bad, but we can give it back. When you’re talking about life or death, you can’t get your life back.

Corien Vermaak [00:44:59]:
So I wanna start this answer by really caveating it, and I always say it to everybody that knows me, that I am the ever optimist. And as an early technology adopter, I can’t wait for self-driving cars that are 100 percent autonomous, because I think that once the algorithms are tested and pressure tested correctly, the margin for error is less. And I wanna give maybe a silly example. A car’s algorithm cannot drop a mobile phone on the floor, reach to pick it up, and swerve into another lane. A self-driving car can technically be programmed not to exceed the safe speed limits. A self-driving car can immediately come to a halt or slow down the speed when tires deflate, which an astute driver may not even pick up, or a young driver, a P-plater, may not even be aware that they’re driving with a flat tire. Those are the kinds of things that excite me about AI. But going back to your question about, can it go rogue? Absolutely.

Corien Vermaak [00:46:06]:
I think we are always, as security practitioners, aware of, one, the vulnerabilities of all systems, and anybody that develops AI systems will have to build a very, very robust security architecture regime and framework around how they secure these platforms from external attacks. What you don’t want is to have your car driving at 110 kilometers per hour on a freeway, and then somebody hacks the system and switches the car off altogether, and you’ve got a free-moving two-ton vehicle at 110 kilometers per hour. That we don’t want, obviously. However, I do sense, and I see, a high level of inspection as well as a high level of recall cost for all things AI. And I’m not only talking about self-driving cars; I’m talking about smart fridges and all of the smart devices that we have in and around us. For any of these manufacturers to recall a faulty device is really quite a costly exercise.

Corien Vermaak [00:47:17]:
Now that effectively means that the base embedded code needs to be critically secured and needs to be quality tested at a very, very high level. So my sense is that the manufacturers of these devices and/or software and systems are making a tremendous amount of effort to ensure that they get it right the first time, and that if there are any tweaks required or bug fixes, they are small enough to be done via a software update. And that is where the manufacturers then put it in the hands of the operator, and it becomes our responsibility to ensure that these devices are updated regularly, that they are patched well, and that we operate them within the usage frameworks. Because then, as they say, when all else fails, refer to the manual. It is then that we, jointly with the manufacturer, become responsible users of these AI systems, tools, and devices.

Karissa Breen [00:48:27]:
So, Corien, I’m conscious of time, but really quickly, do you have any closing comments or final thoughts to leave our audience with today?

Corien Vermaak [00:48:34]:
Oh, yes. I am tremendously excited about what AI is gonna do for our industry. I wanna really invite your audience to play around with some of the tools, and to see and understand which of the tools that they’re working with already have got some of these capabilities, and to sharpen the blade. Every day is a school day, and we have a tremendous opportunity to learn this new world of artificial intelligence and how it can really help us do our job on a day-to-day basis. So I would really want to invite your audience to go and dabble with all of these tools. You can’t really break most of them, and see how you can adopt some of these to improve your own

Karissa Breen [00:49:30]:
Thanks for tuning in. For more industry leading news and thought provoking articles, visit kbi.media to get access today.
