The Voice of Cyber®

KBKAST
Episode 353 Deep Dive: River Nygryn | Trust, Test, Transform: Executive Playbook for AI Leadership
First Aired: February 04, 2026

In this episode, we sit down with River Nygryn, CISO and AI thought leader, as she explores the critical concepts outlined in the executive playbook for AI leadership: Trust, Test, and Transform. River provides a comprehensive overview of AI’s evolution—from its historical roots in early automated machines and neural networks to the development of large language models (LLMs) and generative assistants. She emphasizes the importance of “trust but verify” in deploying AI, warning against overreliance and the risk of diminishing critical thinking skills. River introduces the 4Ds—dull, dangerous, difficult, and dirty work—where AI delivers the greatest value, and cautions about the loss of creativity and authenticity with widespread use of AI-generated content. She encourages organizations to leverage their unique data sets, underscoring that human judgment and oversight are essential for harnessing AI’s transformative opportunities.

River is a visionary cybersecurity and technology leader with a dynamic career spanning traditional banking, cutting-edge blockchain innovation, and Web3 transformation. As a Chief Information Security Officer (CISO) and fractional C-suite executive, River has driven security and operational excellence across highly regulated industries, including healthcare, financial services, and emerging tech.

Renowned for bridging the gap between strategic leadership and hands-on execution, River has played a pivotal role in modernising risk and security frameworks, scaling secure systems, and advising on crypto, digital asset infrastructure, and decentralized technologies. Her influence extends beyond the boardroom – she is a powerful voice in the tech community, advocating for digital trust, innovation, and ethical leadership in the AI era.

In 2025, River was named one of The CEO Magazine’s Top 50 Women of Influence, recognised not only for her technical expertise but for her commitment to shaping a more secure and inclusive digital future. She is a sought-after speaker, frequently appearing on stage at leading conferences, panels, and keynotes to share insights on cybersecurity resilience, leadership, and the evolving Web3 landscape.

With a storytelling style that blends bold insights with deep reflection, River continues to inspire the next generation of cyber leaders and disruptors.

Vanta’s Trust Management Platform takes the manual work out of your security and compliance process and replaces it with continuous automation—whether you’re pursuing your first framework or managing a complex program.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

River Nygryn [00:00:00]:
I’d encourage anyone using AI technologies to, you know, yeah, trust it. Trust that it’s very advanced technology. It’s got a lot of smarts in it compared to what we’ve been exposed to in the past. It’s come a long way as an industry in its own right, but it still makes mistakes and you really need to verify all the outputs.

Karissa Breen [00:00:39]:
Joining me now is River Nygryn, CISO and AI thought leader. And today we’re discussing Trust, Test and Transform: The Executive Playbook for AI Leadership. So, River, thanks for joining me and welcome.

River Nygryn [00:00:51]:
Thank you. Great to be here.

Karissa Breen [00:00:52]:
Okay, so I really want to start perhaps with your view in understanding AI now. And this is a big topic, and it’s one that people are obviously talking about a lot, and it’s one that I have spoken about a lot on the podcast. But because there are so many different viewpoints, different opinions, different angles, I still think there’s a lot of ground to cover. And I’m really keen to maybe understand for myself what it can and can’t do. And I ask this because this is important, as most people, like the general sort of person, have likely played around with ChatGPT, et cetera. But I’m keen to maybe explore beyond that, perhaps areas that people don’t know about AI, for example.

River Nygryn [00:01:26]:
I think it’s important to set the foundation of AI and its definition. So AI is a broad term for machines performing tasks that require human intelligence. It’s had a history of over 100 years, and just recently it’s evolved in the outstanding ways that we know and experience today. Many people just use AI as an all-encompassing word to capture everything, but that’s generalizing AI. It’s far more nuanced than that. The concept of artificial intelligence, it’s not new. It’s been around since the early 1900s.

River Nygryn [00:02:03]:
There was a Spanish engineer, Leonardo Torres Quevedo, who created an automated chess-playing machine. Once it was set up, it didn’t need any human intervention. It made legal chess moves and even alerted to illegal ones as well. Then in 1943, there was a really interesting paper by Warren McCulloch and Walter Pitts, where they modeled artificial neurons. So they were simulating brain-like functions. And the paper actually explored the idea that the brain could be understood as a computational system, and that introduced the concept of artificial neural networks, now a key technology in modern AI. And then throughout the 50s, the field expanded into artificial neural networks and experimentation with mimicking human problem-solving abilities. And in 1956, a group of researchers and engineers coined the term artificial intelligence in a workshop they were having.

River Nygryn [00:02:53]:
And that’s kind of the official birth date of the term AI. So while the concepts and experimentation with machines thinking and automating tasks have been around since the early 1900s, the field really gained traction in the 50s and 60s. Then it had a bit of a winter in the 70s, where not much progress happened, and it picked up again in the 80s, right up until what we know it as today. So AI is probably the broadest term. It covers everything from chess-playing programs to autonomous robots. Then you’ve got machine learning. So that’s a subset of AI.

River Nygryn [00:03:26]:
It’s a field focused on algorithms that learn patterns from data and make predictions or decisions without being explicitly programmed. So, as examples: spam filters, recommendation systems. What Netflix tells you it thinks you’ll like, Uber Eats suggesting what you should order next, that’s all machine learning. Then there’s deep learning, a subset of machine learning. It’s got many deep layers, and that’s probably powered the most recent advancements in AI technology. So they’re things like speech recognition, image recognition, all of that type of stuff. You know the technology on your phone where you type in that you want to find all photos of your cat and it pops up with all of them in there?

River Nygryn [00:04:08]:
Or you type in the word passport and that file or photo of your passport comes up on your phone. That’s thanks to deep learning. And LLMs, they’re a specific type of machine learning model. They’re based on deep learning and what’s called transformer architectures. So they’re designed to process human input and then generate human-like outputs. They’re trained on huge data sets. They’re probably what we know of and what the public knows of and is calling AI, but really they’re generative AI assistants, or LLM-based assistants if you want to be more precise. So the field is really huge, is what I’m trying to get at.

River Nygryn [00:04:45]:
So there’s even things like small language models, which are cut-down versions of LLMs that are more efficient and designed for faster processing or specific tasks. But in simple terms, AIs or LLMs, they’re mathematical models that emulate cognitive function. You may have heard naysayers call LLMs “Google on steroids” or advanced predictive text technology. You know, whilst that’s accurate, it’s a pretty overly simplistic explanation of what they are. They’re the shiny new tool in conversation. But you always need a human in the loop. So they do really well at augmenting the human user to be more productive, explore new ideas, harness new skills that would ordinarily take decades or a university degree to obtain, predicting our tastes and preferences, and ultimately they can make the little things a little easier for us. But what they can’t do: they can’t feel, they can’t make ethical judgments in complex scenarios, they can’t perform expert-level tasks, they can’t guarantee truth.

River Nygryn [00:05:46]:
They’re always making things up. There was a recent example in the Australian Financial Review of a Deloitte misstep for a government organization in Australia. The piece of work cost $400,000 or more, and the report actually quoted incorrect facts and quotes from educational studies, which some of the lecturers and experts came out saying were completely false. So it just really strengthens the point that when it comes to interacting with artificial intelligence, we always need a human in the loop.
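
To make River’s “learning patterns from data” point concrete, here is a minimal, illustrative Python sketch of the kind of spam filter she mentions. The toy emails and labels are invented purely for this example; a real filter would be trained on far more data:

    # A toy spam filter: the model is never given the rule "free == spam";
    # it infers the pattern from labeled examples, which is the point
    # about machine learning made above.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "win a free prize now",         # spam
        "claim your free money today",  # spam
        "meeting agenda for tomorrow",  # not spam
        "lunch with the project team",  # not spam
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()               # turn raw text into word counts
    features = vectorizer.fit_transform(emails)
    model = MultinomialNB().fit(features, labels)

    test = vectorizer.transform(["free prize inside"])
    print(model.predict(test))                   # [1] -> flagged as spam

The same mechanics, scaled up enormously, sit behind the Netflix and Uber Eats recommendation examples she gives.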

Karissa Breen [00:06:19]:
Okay, so there are a lot of questions coming to my mind, and this may be slightly off topic, but when people think about AI, right, some people just think it’s going to solve all their problems. I think the part that’s probably getting me at the moment, and I was actually a guest on NetApp’s podcast recently, the whole title of the interview was Is AI Turning Our Brains to Mush, right? So do you think we’re getting to this point of overreliance? So I’ll use an example. When I was growing up, and I wouldn’t say that I’m old, but we used to have to navigate in Australia with, like, a Refidex. So to get somewhere you had to literally pull a map out and you had to remember it, or you had to advise the person who was driving how to get somewhere. Now people would think that’s crazy.

Karissa Breen [00:07:03]:
You just use your phone, you know, you’ve got a GPS, it’s easy, right? You’d have to remember the streets to where you’d go, right? But now you don’t even need to think about that, because your phone can tell you where to go, right? So do you think that we’re getting to a point with AI where we have this overreliance, where people don’t even need to think at all because we have the tech to do the thinking for us? Especially because we’ve come from a generation, I mean, I don’t know how old you are, but I’m assuming you’re not a Gen Z-er, but from a generation where we’re used to having that cognitive ability, whereas now it’s like it’s dissipating, it’s evaporating, right? I know that sounds a little left field, but I think this is really important. I think it’s something that not a lot of people are talking about either.

River Nygryn [00:07:45]:
Yeah, definitely. I mean, I remember those maps, those big books, and then, you know, having to turn the page and map out the number from one page, and then the next map was 20 pages into the book. So it was really confusing. I mean, the same thing happened with calculators as well. There was an uproar from teachers about calculators, saying that students would lose their mental arithmetic abilities. It happened with the phone too. So the phone that used to have that kind of round dial where you had to really remember the numbers, that all changed as well. But I think the difference within history, though, is that this is not unusual.

River Nygryn [00:08:24]:
This has happened throughout history, where we’ve adopted technology, it’s augmented human capability, and therefore we’ve lost our skills in something. Now, to date, those skills that we’ve been losing, you could probably argue, weren’t really advanced. They weren’t complex cognitive abilities, things like remembering where the streets are and all that kind of stuff. But in today’s age, AI is a different beast. AI can allow you to, you know, program an app in an afternoon. It can do advanced research for you, where, you know, the human brain is inherently lazy and you may not go and check all of your sources and explore different avenues to find information, whether that be in physical books, journals, talking to people, all that kind of stuff, because AI can just do it at our fingertips. I think there’s a lot to be learned from the wave of social media and what happened there.

River Nygryn [00:09:23]:
When social media expanded throughout the world and gained mass adoption, I don’t think anybody was, well, it might be the same problem, people aren’t talking about it enough, but nobody was really talking in public about what it could do to your cognitive abilities. So the pattern and the habit of scrolling and constantly getting dopamine hits has an effect on the brain. There’s a lot of evidence for that now, and there are a few papers actually out there that talk about the real risk of losing our skill set, and more importantly, losing our ability to critically think. So that’s probably the big one. You don’t want to offload your cognitive abilities to AI too much. It should be there to help you.

River Nygryn [00:10:08]:
But then, on the same token, in a positive light: interacting with AI, using it more frequently, and then comparing how you use it to how others use it, how your own AI interacts with you. So as you become more educated with prompting and understanding how to get the best outcomes out of AI, and knowing what follow-up questions to ask, your results can be far superior to the person next to you. It’s all about being aware, and being self-aware of when you are getting into those negative patterns of just letting AI do everything for you. So again, it strengthens the argument: human in the loop. Make sure that you’re always validating its output and checking the sources.

Karissa Breen [00:10:51]:
So going back to your calculator example, I do agree, but then I remember in school you had to do long-form algebra or mathematics, so you had to show your working out. So it wasn’t necessarily about the answer being 36, it’s how you got there. So one of the things I’ve been reading a lot about AI is that, okay, if you’re a doctor, for example, and you go in and you’re like, hey, I’m sick, and they’re like, oh, you’ve got this disease, you want to actually get the doctor to walk you through how they got there, rather than them just telling you, hey, you’ve got it and that’s the end of it. And I’m seeing a lot of people maybe short-circuiting their way to the answer, but not actually walking through, hey, this is the mechanics, this is how it made sense. There are a lot of missing parts in between. So to your point earlier on the critical thinking, if you’ve grown up in a day and age like these younger generation of folks, they don’t really have that, right?

Karissa Breen [00:11:43]:
They didn’t have what we had, when we had to go to a library and look up an encyclopedia and quote sources in a lot of our assignments and essays and all these sorts of things, right? So how do you think that sort of sits there? It’s like maybe these people don’t even have that ability to be aware, because they’ve never had to do it before.

River Nygryn [00:12:00]:
Yeah, very true. I think it’s definitely a social responsibility of, you know, the educational institutions to ensure that they are providing that information to students and training them to be aware.

Karissa Breen [00:12:16]:
So, okay, so then would you say, given your role and your experience, and I mean, there are probably a number of things, but would you say this is probably the biggest thing that people overlook or maybe don’t get about AI? Would you say it’s this whole having that awareness around, like, hey, I’m just super reliant on AI and I’m not using my brain at all? Or are there other factors that come to mind when I ask you that question?

River Nygryn [00:12:38]:
Yeah, I think I’ve definitely been reading a lot of comments within corporations saying, you know, the AI said so, so therefore it’s right. And just having that overreliance and trust in software. You know, at the end of the day, it’s just software. It’s not there to solve all your problems. It’s there to solve some unique set of use cases. And it can do some things really well. But we need to be mindful of what it is and understand it. And I think because it comes across as being so advanced, and, you know, when you’re talking to it, it can sound really convincing, like you are talking to a human.

River Nygryn [00:13:15]:
That can play tricks on your mind. You can start to build rapport with an AI system, which is a dangerous territory to be in, because you may not be aware in the moment that, subconsciously, you’re giving a lot of trust to the answers it gives you. They sound really good, they sound really convincing, they sound really well thought out. But at the end of the day, it’s a mathematical algorithm that’s attributing weightings to words. So when people say, you know, it’s advanced predictive text technology, in a sense they’re right.

Karissa Breen [00:13:49]:
I think even in recent times I’m starting to see, like, a nota bene to say, hey, this is AI, you might want to check your sources. A lot of that’s appearing now. So it’s like, don’t believe everything that we’re telling you.

River Nygryn [00:14:00]:
Yeah, there’s a little fine print in some of the AI models as well, at the bottom, where they say, you know, don’t trust this output, validate your sources, all that type of stuff. So yeah, I don’t think it’s going to replace a lot of that real human insight and research. And I think that conversations like this, and being aware of the cognitive impact on us and the younger generation, just need to be highlighted more. The more we talk about it, the more chance that the younger generation finds it online, reads it, it resonates, and they learn something.

Karissa Breen [00:14:34]:
Do you think, perhaps, if I go back to the critical thinking, this could potentially breed laziness in people? So, for example, I mean, look, we’re all lazy. If I can get a quicker answer, I’ll take it. So if I’m like, hey, what are the top restaurants in Sydney, because I’ve got to take someone out and do something, I’m probably going to ask AI, right? Or else I’m going to have to fish around and ask friends or think about restaurants I’ve been to. It takes more time. Now, that’s a very rudimentary example. But if we were to sort of zoom out and look at maybe businesses and consultants that are using this, do you think potentially we are going to get to a stage where it’s like, hey, I am now overall a lazier human being because I’ve got this technology?

River Nygryn [00:15:16]:
Yeah, definitely. I think that’s a real risk. It’s permeating throughout corporations already, you know, if not in Australia, definitely in America. I’ve been reading a lot of content coming from the States around workforces being reduced, and this overreliance on AI to do the job for people, and then realizing that it doesn’t work, or trusting in decisions that were made but were informed by an AI output that was not validated or checked. The younger generation have grown up with social media as well, and seen the onslaught of influencers making money online and having it, you know, really easy. There are a dime a dozen people out there saying you can make millions from your phone without ever having to do too much work. So I think that’s already started to be bred into culture and the younger generation.

River Nygryn [00:16:10]:
And AI definitely risks amplifying that.

Karissa Breen [00:16:13]:
So I want to sort of switch gears slightly for a moment. I’m keen to get your sort of guidance on how you use AI effectively, or use it for the 4Ds. So I’m keen to get you to walk through the 4Ds, what they are and perhaps what they mean.

River Nygryn [00:16:27]:
There’s a thought leader in AI, she’s from America, her name’s Sol Rashidi. I read this on her blog, actually. And then I did some reflecting on technology throughout history and realized humans have been doing this across the four Ds for many years. So what they are: dull or mundane work; dangerous work, where you’d rather risk a machine than a human life; difficult work, so really high cognitive load to get an outcome; or dirty and unsanitary work. So ultimately it’s a way to determine what you would like AI to solve for and what it should be used for.

River Nygryn [00:17:05]:
So it’s a bit of a general guide, but given the fact that we’ve always bucketed technology within these four areas to solve specific problems, it’s probably a good guide to start with. So I’ll share some examples, a historical one and then a modern one, to shape the story and show that humans have been using technology in this way, so it’s not just limited to AI. Dull or mundane work: assembly line robots, that’s an example from the past, they revolutionized manufacturing, through to today where we’ve got customer service chatbots on websites handling repetitive, frequently asked questions. Then you’ve got dangerous work, where you’d rather risk a machine’s life than a human’s life. So there are nuclear inspection robots, they used them at Chernobyl, and then in current-day times there are bomb disposal robots. Difficult work: weather prediction models, cybersecurity anomaly detection, or algorithmic trading in finance.

River Nygryn [00:18:06]:
So they’re all examples of where technology and, you know, AI models have helped humans complete difficult work. Then dirty and unsanitary work: sewer inspection robots in the 90s, right through to, you know, the Roomba that you might have at your home now, and even crop-spraying drones in agriculture. And if you really wanted to, you could probably expand the D theme to distance as well. So we’ve used technology to solve the distance problem too, from telephones to the Internet to drones doing things for us now.

Karissa Breen [00:18:37]:
Yeah, okay, that’s super interesting. So would you say that companies need to be specific in each of those use cases? So dull, dangerous, difficult, dirty work, right? Rather than just, well, people always say we want to use AI to do the dull, monotonous, repetitive tasks and get humans to do the critical thinking. Although we’re sort of slumping back into being a little bit lazy. But that’s what I’m hearing from people.

River Nygryn [00:19:00]:
Yeah, I think it’s a good guide for having a successful AI deployment. There’s a lot of data and evidence where a lot of the big four have predicted and, you know, done some analysis around AI initiatives and systems being deployed within companies and then failing for some reason, and it’s usually because the executives or whoever is leading those don’t understand the problem they’re trying to solve, and they may not understand what specific AI system is required to solve it. So if you can start small within your organization and look at whether the problems you’re trying to solve do fit into the 4Ds, you’ve probably got a good chance of seeing some success and some positive outcomes from whatever you’re trying to do with AI within your organization, or on a personal level as well. But on the dull aspect of it, I think anyone interacting with AI, they’re not specifically using it to just do the monotonous, boring tasks, they’re actually using it for creativity as well. So where someone may have really wished to be an artist, they can now be one with Google’s Gemini. It’s probably my favorite tool for image creation. That type of stuff where you may have had something in your mind and wanted to be really creative, now you can do so with just natural language.

Karissa Breen [00:20:25]:
Enterprise tech leaders know that compliance isn’t just about ticking boxes. It’s about risk, reputation and revenue. That’s why companies trust Vanta to streamline their security and compliance workflows at scale with deep integrations and automated evidence collection. Vanta takes the manual audit grunt work out of frameworks like ISO 27001, SOC 2 and GDPR. Visit vanta.com/kbcast, that’s V-A-N-T-A dot com slash kbcast, to learn more. So, okay, so this is interesting. From a media perspective, obviously we look at a lot of content, and I read a lot of content every day, whether it’s on social, online, et cetera. What I’m starting to see come into the fold is people now who aren’t natural media or content writers.

Karissa Breen [00:21:16]:
They’re obviously using AI to say, hey, write this LinkedIn post. Because what I’m finding as a media and journalist person is that people’s voices are starting to sound the same. There’s not a lot of authenticity. So I always say to people, look, especially on social, right, don’t leverage AI, because as somebody who looks at this all the time, I can tell instantaneously whether it’s written by AI, right? So it’s like, how’s this going to go moving forward? To your point around creativity, right, art was all about, oh, you know, I would sit out in a field and do my meditation and I had this great artistic idea and then I’d paint it for 20 hours straight because I had the idea, right? We’re not seeing that anymore. So then how do we get to the point where AI is not stifling our creativity and we’re not all just starting to sound the same online either?

River Nygryn [00:22:04]:
Look, at the end of the day, it’s going to come down to how the market responds. And I think that holds true in so many aspects. If you’re constantly posting things that are sounding more and more like the next person, you’re not going to get the same response as you would if you were being authentic. And I think that message has held true no matter what. It’s kind of come through on social media as well, where you see influencers just speaking to a script, or they’ve all got the same playbook, they’ve all downloaded and watched the same course, so they’ve got the same blueprint to try to elevate their profile and gain market attention. But if everyone starts to become the same, you’re going to have to differentiate yourself and stand out from the crowd. I don’t think that fact goes away. I just think that people are becoming more aware of it, which is really good.

River Nygryn [00:22:57]:
So it’s going to force a change in behavior. If you are trying to elevate your profile online, gain traction with followers, gain market attention, sell your products, you are going to have to test the market and change behavior accordingly.

Karissa Breen [00:23:12]:
Do you think as well, so even when social media really came into effect, it was like, oh well, now everyone can tell their story. This is when traditional media outlets started to lose their prominence, right? Because it’s like, well, everyone can just start a social media account and start talking about whatever they want. But are we going to start to see now, especially if we’re scrolling online, it’s like, oh, the last 10 people I just looked at online sound the exact same, they sound robotic. But then you will start to see those outliers where it’s like, hey, that’s a 100% authentic voice, et cetera. So is it just like, well, yeah, the bar for entry is lower, but that doesn’t mean it’s better?

River Nygryn [00:23:46]:
Yeah, correct. And I think the people that are going to excel in this day and age are going to be the ones that stay true to themselves. They build their own personal brand, understand what they’re trying to do, what they want to impart on their communities in terms of knowledge, and just stay true to that vision. You’ll have people, your tribe, I guess, gravitate to you if you stay authentic and you don’t start to sound like everyone else and you don’t give in to being lazy and offloading your cognitive abilities to a machine.

Karissa Breen [00:24:20]:
So do you think it’s just going to take a bit of time until people start to find, I guess pecking order is not the right word, but some sort of balance with AI, right? So it’s going to be like, well, sometimes I use it for this, sometimes I don’t; sometimes I’m lazy and I’m hungover today, so I don’t really feel like thinking about where I’m going to take someone out for lunch. So do you think that people are just going to find their groove with these sorts of things? Right. Even when, yeah, mobile phones came out, the use case changed. It was super expensive to call someone back in the day, and then it became more ubiquitous, and now it’s sort of people doing less calling and more messaging, and all these sorts of things have sort of changed. Is it just going to take a little bit of time to see where that balance is and what people are doing more or less of? Or what do you sort of think on that front?

River Nygryn [00:25:05]:
Yeah, I definitely think it’s going to even out. As more and more people start to use it, they’re going to understand where it works for them and where it doesn’t. People that are aware of it will find their groove, and for those that are kind of sounding really robotic and things like that, I think the market response and attention and feedback that they’re going to get naturally will hopefully shift behavior for them.

Karissa Breen [00:25:29]:
So I want to extend this even more and maybe talk through security-by-design principles. So your whole theory is trust but verify when it comes to AI. And I know you’ve touched on this a little bit today around the awareness and your sources and all that, but walk me through this, especially now, as I mentioned, people of this sort of generation and beyond are a bit more used to that, but perhaps the new sort of kids coming through now are not really exposed to that as much. How’s that going to work sort of long term? Any sort of thoughts you have around that, River?

River Nygryn [00:25:59]:
Yeah, I guess so. It’s an old-school security principle, trust but verify, and it can be applied in so many different facets. So even when it comes down to assessing a vendor or a suitable solution, whether that’s AI or not, you know, it’s always: you go down into your exploratory phase, you start to talk to people to understand what it does, what it can and can’t do, and then you go into a proof of concept phase to determine whether it’s fit for purpose for you. So that’s where I’d kind of encourage anyone using AI technologies to, you know, yeah, trust it. Trust that it’s very advanced technology. It’s got a lot of smarts in it compared to what we’ve been exposed to in the past. It’s come a long way as an industry in its own right, but it still makes mistakes. And I think it was Google that put a note into one of their web browsers where they were still saying it’s going to make mistakes.

River Nygryn [00:26:53]:
They’d only released the new edition of this out to a thousand people, but they acknowledged that it makes mistakes. We’re still in this phase of exploring LLMs. Even though they’re all new and exciting to the average user, those that are the experts in the field, and those that have been working in this field for decades, do understand that they are still in their infancy. We do still need to test them. We especially need to security test them. Like any new software or system, there are going to be adversaries trying to break them, there are going to be curious researchers trying to break them, and they have, and they report those findings. So that’s where I kind of really narrow down on to: yeah, trust them, they’re advanced technology, but you really need to verify all the outputs.

River Nygryn [00:27:40]:
And even something as simple as, if you’re getting AI to guide you or teach you something, verify it by actually doing it. So if it’s giving you steps to learn something and it’s a practical skill, put it into practice somewhere and actually test that what it told you is accurate and worked.
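
As a minimal sketch of that “verify by doing it” habit in code form: treat anything an assistant hands you as untrusted until it passes checks you control. The median function below is a hypothetical stand-in for AI-generated output, and the asserts are the verification:

    # Hypothetical AI-suggested function; do not trust it yet.
    def ai_suggested_median(values):
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    # Verify against cases where you already know the right answer.
    assert ai_suggested_median([3, 1, 2]) == 2
    assert ai_suggested_median([1, 2, 3, 4]) == 2.5
    assert ai_suggested_median([7]) == 7
    print("Checks passed; only now extend the output some trust.")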

Karissa Breen [00:27:59]:
So the question I have around that is, for example, it’s easy when you sort of know, but it’s hard when you don’t know what you don’t know. So, for example, bad example, I don’t know, you’ve got a bee’s nest in your house, you want to get rid of it and you use AI, but you don’t know anything about bees. So it could actually give you a fabricated answer to get rid of it, and then you’re allergic to bees and it stings you and you end up in hospital, right? It’s scraped the internet, and someone’s put all the stuff on Quora or Reddit or wherever it scraped it from, to say, like, oh, this is what you do. But because I’m not an expert in that area, I wouldn’t really know. So if I tried to test it.

Karissa Breen [00:28:36]:
I could fail in that particular area. So how does that sort of work, where it’s like, okay, you’ve tested enough to the point where it’s not going to do damage?

River Nygryn [00:28:43]:
It’s the same principle as if you were to Google something. Like, Google’s not policed. Anyone can create a website. I remember when, you know, the Internet first became really popular and my mum’s generation were getting involved in it. Mum was fascinated with the concept of a liger, which is a blend between a tiger and a lion, and she went to a website, and it was a complete, really botched Photoshop of a lion’s head on a tiger’s body. And mum had this, you know, misconception that, oh, it’s on the Internet, so therefore it must be real. And she came to me and showed me this website and said, how could somebody do this? They just put that on there. It’s a lie. I said, well, yeah, there’s no one policing websites, no one is stopping you from creating a website and just putting absolute lies on there.

River Nygryn [00:29:38]:
So I think the same principles, guardrails and best-practice kind of steps that we put into place around the Internet apply here. The Internet was fascinating. It connected all of us. It was democratizing access to information, all of that. But again, it’s checking that website: who’s the owner of that website, who created that knowledge, can I verify that somewhere else to double-check? That’s the right thing to do. And I mean, I’ve been guilty of it. Sometimes I’ll ask ChatGPT something and then think to myself, oh, let’s just check another AI. I’ll go check Grok, because I know it’s trained on different data. So things like that, but then also just Googling it and finding the websites yourself. You can do a lot of things online to verify that the output is going to guide you in generally a good direction.

River Nygryn [00:30:27]:
But at the end of the day, nothing beats real life experience.
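
River’s habit of asking a second model could be sketched roughly like this. Note that ask_chatgpt and ask_grok are hypothetical placeholder functions, not real client calls; you would wire in whichever API clients you actually use:

    # Cross-checking one model against another. Agreement between models
    # trained on different data is weak evidence, not proof; disagreement
    # is a strong signal to go back to primary sources.
    def ask_chatgpt(question: str) -> str:   # hypothetical placeholder
        return "stub answer from model A"    # wire up a real client here

    def ask_grok(question: str) -> str:      # hypothetical placeholder
        return "stub answer from model B"

    def cross_check(question: str) -> None:
        answers = {"chatgpt": ask_chatgpt(question), "grok": ask_grok(question)}
        if len(set(answers.values())) > 1:
            print("Models disagree; verify with primary sources:", answers)
        else:
            print("Models agree, but still spot-check a source:", answers)

    cross_check("How do I safely remove a bee's nest?")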

Karissa Breen [00:30:31]:
So, okay, the question I have, what’s coming to my mind, is: we’re leveraging AI to effectively save us time, right? So it’s like, oh, I’ve asked this question, it’s given us a synthesized answer, but then we have to go and verify the sources anyway. So are we sort of in the same position as before then, if not in a worse position, because now we’re questioning everything that we’re reading?

River Nygryn [00:30:51]:
No, not really. I don’t think so. I mean, personally, I use it a lot. You know, I was a little reluctant at first, just because I work in technology. I understand the risks, I understood the limitations. I’ve got a few friends that have been in the AI industry for 40-plus years, so speaking to them, I was always hearing, you know, what AI is not. So I was a bit reluctant to try it, and then I saw another wave of people I know really embracing it and embracing the technology and highlighting different use cases of how great it is and how it can help humanity. And so I gave it a go and had a look.

River Nygryn [00:31:29]:
And what I would say is, it just makes you faster. So for me, if I was trying to find something on the Internet just via a Google search, Google’s not going to scour it for me or suggest something else for me to search, or suggest maybe I want to explore this topic as well in conjunction with what I’m currently exploring. Whereas AI does that. It’s kind of got that predictive technology in it to guess: what am I looking for, what do I want to find, what am I researching? So all of that’s a massive time saver. Even though you do still have to verify the outputs, it’s probably cut down the work by days or weeks, depending on what task you’re trying to do. The other thing for me was, I’ve had aspirations to create my own app for many years, and it’s always been murky for me.

River Nygryn [00:32:20]:
I’ve overseen app development projects but never been close enough to the developers to understand what they’re doing. And the programming languages in the app space change so frequently that you become obsolete quite quickly as a developer in that world. So I did an experiment with AI. I have a friend whose job that is, he’s got an app development company. I spent four hours on an afternoon just fleshing out my requirements. And this was before some of the more recent models; now the models are a lot better at vibe coding, so, you know, you speak to it in natural language and it will come up with your requirements. What I did, I actually blended, you know, Gemini, Grok, ChatGPT, I went to all of them, and I knew how to build out technical requirements just because I’ve done it before in my corporate history and work experience.

River Nygryn [00:33:10]:
So I kind of had that model to work with, building out your functional requirements, your business requirements. Did that, plugged it in. But I had no idea how to then inform an app developer what to do to make something easier. I proposed a project and wrote it all up, sent it to my friend, and he was astounded. He actually said, did you write this? I said, well, no, AI helped. And I was curious, because I wanted to know, is it bad? Like, can you work off that or not? And it was about two months of me talking to him on a regular basis, him asking me to provide requirements that I couldn’t. It was just taking me too long to even get my head around it and learn what an app developer needs. And, yeah, AI in four hours allowed me to do it. He said it was better guidance and better instructions than the majority of his institutional clients provide.

Karissa Breen [00:34:02]:
Wow, okay. Yeah, I see what you mean. So then, if you were to look forward, given your example, where do you sort of see AI in the future? Whether the future being 6 months, 6 years, 60 years, whatever, however you want to interpret that question. I’m just curious, and we’ve gone through a lot of examples, we’ve gone through a lot of rudimentary examples as well, we talked about the security elements. So is there anything you can sort of share with us today, River?

River Nygryn [00:34:26]:
AI models themselves are one thing. There are a lot of mathematical algorithms that are using transformer architecture. And transformer architecture, in really simple terms, is just the architecture that LLMs are built on in order to look at a sentence as a whole and then assign weights to the words. In transformer architecture the words are called tokens, but there’s a weighting attributed to them. So if you’ve got a sentence and you’re talking about a cat, and you’ve got the word “it” in it, the technology can infer that when “it” is used, it’s referring to the cat. So that type of intelligence is there in the models. However, the more valuable part, and I think a lot of businesses and a lot of individuals are starting to understand this more and more, the valuable part for the future of AI is actually your data. So when I talk about this at security conferences, I really emphasize the fact that a data governance initiative is really crucial.

River Nygryn [00:35:29]:
If you want to take advantage of your unique business proposition or value as an organization, you want to leverage AI to do so. Every organization will have unique data. That data might be valuable, it might not be. But understanding that, and how you can leverage AI technology to benefit from it or create revenue streams, that’s something that’s really exciting and where I think the AI industry will eventually go in future. The other thing, I think, is it’s going to open up the ability for people who may not have been able to access traditional education. I think it’s going to be able to give those individuals real skills that they can test and validate, and prove their value as an employee. It’s going to democratize the education space to an extent. It’s going to make research faster as well, and people more productive.

River Nygryn [00:36:28]:
At the end of the day, I think it’s got a lot of opportunity and potential to really augment humanity, to become more productive, more efficient, and even more creative in some sense. I mean, I’m excited for the future of it, but I think it’s just really important to be aware of the risks versus the opportunity and be pragmatic about it.
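
River’s earlier description of transformer attention, tokens scoring one another so that “it” can point back at “the cat”, can be illustrated with a toy calculation. The two-dimensional vectors below are hand-picked for the example rather than learned, so this is only a sketch of the mechanism:

    import numpy as np

    tokens = ["the", "cat", "sat", "and", "it", "purred"]
    vecs = np.array([
        [0.1, 0.0],  # the
        [1.0, 0.2],  # cat
        [0.0, 0.9],  # sat
        [0.1, 0.1],  # and
        [0.9, 0.3],  # it (deliberately close to "cat")
        [0.2, 0.8],  # purred
    ])

    # Each token scores every other token, then the scores are softmaxed
    # into weights that sum to 1: the "weighting attributed" to tokens.
    scores = vecs @ vecs.T
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

    it_row = weights[tokens.index("it")]
    for tok, w in zip(tokens, it_row):
        print(f"{tok:>7}: {w:.2f}")  # "cat" gets the largest share for "it"

Real models learn those vectors from data and stack many such layers, but the weighting idea is the same.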

Karissa Breen [00:36:49]:
So, River, do you have any sort of final thoughts or any closing comments you’d like to leave our audience with today?

River Nygryn [00:36:55]:
Yeah, I guess, you know, from what we’ve talked about today: really focus on your data sets. They’re going to be the next big thing, and they’re going to be really crucial to whether you’re going to be successful in the AI movement or whether you’re going to fall behind. And again, trust but verify. Really do verify those outputs, and be self-aware around, you know, what you are getting AI to do. You are the teacher. You are the master of AI. Don’t let it become your master.
