The Voice of Cyber®

KBKAST
Episode 254 Deep Dive: Bob Huber | Deep Fakes and Election Interference: Tackling the Threat of Manipulated Content
First Aired: May 03, 2024

In this episode, we’re joined by Bob Huber (Chief Security Officer and Head of Research – Tenable) as he delves into the pressing issue of misinformation on social media. From the impact on critical situations like elections and natural disasters to the proliferation of deepfake technology, we explored the difficulty of discerning authentic content. Bob shared insights on the challenges of identifying and combating misinformation, emphasizing the need for international norms and proactive measures.

Robert Huber, Tenable’s chief security officer, head of research and president of Tenable Public Sector, LLC, oversees the company’s global security and research teams, working cross-functionally to reduce risk to the organization, its customers and the broader industry. He has more than 25 years of cyber security experience across the financial, defense, critical infrastructure and technology sectors. Prior to joining Tenable, Robert was a chief security and strategy officer at Eastwind Networks. He was previously co-founder and president of Critical Intelligence, an OT threat intelligence and solutions provider, which cyber threat intelligence leader iSIGHT Partners acquired in 2015. He also served as a member of the Lockheed Martin CIRT, an OT security researcher at Idaho National Laboratory and was a chief security architect for JP Morgan Chase. Robert is a board member and advisor to several security startups and served in the U.S. Air Force and Air National Guard for more than 22 years. Before retiring in 2021, he provided offensive and defensive cyber capabilities supporting the National Security Agency (NSA), United States Cyber Command and state missions.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

KB: Joining me back on the show is Bob Huber, CSO and head of research from Tenable. [00:01:00] And today we’re discussing the threat of AI-generated deep fakes in general elections. So Bob, thanks for joining and welcome again.

Bob Huber: Thank you so much for having me.

KB: Now, this is a really big topic and I was super excited to do this interview.

And I’m not just saying that because, you know, AI-generated deep fakes are emerging pretty rapidly. So let’s sort of maybe start with that first. Like, what is the sort of threat? And I ask that, I know it sounds basic, but maybe people aren’t as acutely aware of the threat as perhaps someone like you

Bob Huber: Yeah, at first glance, it’s a simple question of like, what’s the, the threat of AI and deep fakes? You know, when I thought about that, I was really thinking about the, the ability to spread misinformation or interfere or influence, right? You’re moving someone possibly to action. And I really thought that was it.

And then the more I spent thinking about the question, I was like, you know, it’s deeper than that. It actually affects my ability to trust the information I’m receiving. That’s what it really comes down to: the misinformation. That’s just the [00:02:00] effect of what these things could have in general. But now I find myself questioning almost everything I see now because I’m trying to figure out myself, like, is this real or is this generated?

Is it synthetic, manipulated? And I think that’s, that’s a deeper issue for us.

KB: Yeah, you’re right. Now I’m going to start with something super basic. So Instagram, there’s conversations floating around on, you know, they’re like, don’t believe everything you see on the internet, which is true. There’s certain applications out there where you could make yourself look like a Victoria’s Secret model, et cetera.

Even if you didn’t look like that in reality. So that’s the start of, to some degree, the deepfake starting to sort of come into social media, for example. But then yesterday someone sent me an AI-generated sort of video of Joe Biden saying something. And again, it was obviously not real. It definitely looked fake.

It didn’t look legit or real, but that’s just the start of where it’s going to get to. So obviously there’s, there’s a spectrum to [00:03:00] this and we’ve seen it creep in over the years, but now it’s getting to the stage where, to your point, we don’t know what’s real, what’s fake. How does that sort of look? Are we going to be questioning everything?

Are people going to be questioning this interview? Is Bob Huber really Bob Huber? Is that really Chris Breen? Who knows?

Bob Huber: KB, I was just wondering if it was you myself. So I get it. I think, you know, when it, when it comes to deep fakes in general, if it’s something that, like, you can see, there’s a little bit higher chance that you can actually discern something’s not right, like a glitch here, a glitch there, but I think you nailed it.

The technology for generating this, unfortunately, has improved over the past few years such that even that’s gotten harder to discern. That’s my concern. Even for somebody who’s a security and risk management professional, it’s difficult for me to discern that, right? So unless it’s just blatantly obvious. But if you’re taking real clips and slightly manipulating them, that becomes, I think, a challenge for, for most folks in the audience.

And, you know, I would hope, you know, technology companies would actually start building [00:04:00] something in that makes it easy to identify this stuff. But that only goes so far. So for instance, if it’s a robocall related to an election, you know, and that’s not going through a social media platform or any type of filtering software, that’s coming straight to you.

And now you don’t even have anything to see. So I think it would be even harder, you know, if you had a robocall coming in, that was some type of deep fake, it’d be really hard to discern that. In my opinion.

KB: I want to get into the vexations in just a moment, but before we do that, now I actually had your senior researcher from Tenable, Satnam Narang, on the show.

I think we were talking about scammers back in the day and all of these, like, romance scammers that exist. Um, one of the things we were saying was around, like, identifying the glitch. So for example, I think the exact example Satnam gave was, you know, if you’re trying to scam someone out of their whole life savings, there was some romance scam and they’re saying, like, I’m having dinner tonight.

But in the background, it was clear it was, you know, obviously daylight. It’s a very big glitch in sort of the story, but those things are more obvious and more apparent. But as you sort of were saying, it’s going to get harder. [00:05:00] So then what does that sort of mean? Are we going to be questioning everything?

Doesn’t it become exhausting? Like, how does that look though long term? What does that mean for sort of social media giants then as well? Facebook and friends, like, are they going to be accountable? But then they say, no, we’re not accountable. Like, I know there’s a lot of questions in there. I just think it’s a big topic and I don’t really know, um, if people really have the right answers for this.

Hoping you do.

Bob Huber: Yeah, I, I would agree with that. I think, well, first of all, you know, anything you come into, as far as information, you bring your own bias. And unfortunately now there’s enough sources of information out there that I can pretty much filter down to only the things I’m interested in. So, you know, if something that’s manipulated, synthetic or a deepfake aligns to my bias or my interests, I’m probably not going to pay attention as closely, right?

Because I already consider that probably a trusted source of information for myself. It aligns to the messaging that I would hope to see or expect to see. So I think I’m not going to question it as much. Now, when it comes to the larger tech companies, and there’s certainly some coalitions out there for, you know, [00:06:00] trying to determine provenance and authenticity with Microsoft and Adobe and Google and others.

But I will tell you, even for somebody who feels pretty well informed about this, I don’t know what that means. Like, what am I looking for that’s going to tip me off to tell me that, Hey, this might not be real? Right. And, and, you know, I consider myself to be pretty tech savvy. But how would I explain that to my, my mom and dad and my grandparents?

Like, I have no idea how I would go about doing that, you know, and they’re all on a computer just like me, but me trying to explain of like, look for the watermark in the background or some other type of content protection or notice that thing might not be real. I don’t know how I would even try to explain that to somebody.

KB: What are we looking for at the moment? So if you’re scrolling through social media, how would you delineate between, oh, that’s clearly fabricated, and no, that’s real? Or are you saying you really can’t at this stage?

Bob Huber: Yeah, I think, I think that’s the issue. You know, like, like I know Meta, they had some technology out there, CrowdTangle at one point.

That was, you know, attempting to identify this type of technology and there’d be some notification. They’ve since switched [00:07:00] the technology they’re using. And, and to be honest with you, I don’t know what that new technology looks like or how it’s going to appear in front of me so I can discern whether this is legitimate content or not.

And, and now really that’s like the, what I call the coalition of the willing, like people who are actually attempting to identify this and relate that to their, to their viewership and their users, that’s not the majority of organizations, right? Because there’s a cost to do that. That’s a cost to business to try and identify this content, tip off users, or provide some notification or watermarks.

So I think that becomes difficult. And then certainly on the social media side, I think where it becomes difficult is if you have, and I’m air quoting here, trusted sources of information, whatever that means to you, you know, there’s some, uh, assumption, at least on my part that, Hey, they’ve corroborated the information, right.

It’s, you know, verifiable. I can follow sources. I think by and large, you know, a vast number of people don’t do that anymore. As the information comes, there’s a certain implicit assumption of trust in the information, and they don’t go and look for the links to click [00:08:00] on to verify the reporting of the information or where it came from.

So I think, I think it gets harder, certainly in the social media realm. And certainly there, you know, it’s all snackable or bite size, so they’re not, like, really detailed, long articles that you would expect to see significant research into. So it’s like, I’m scrolling the screen and boom, one story, two stories, three stories in a matter of minutes.

I think the general population isn’t going to try and verify whether that’s legitimate content or not, unfortunately.

KB: Yeah. And that’s where information warfare, for example, can get really dangerous. Cause if you’re saying, Oh, the sky is red. And then all of a sudden you keep seeing, you know, a million people on X or wherever saying the sky is red, you then start to believe it because other people are then saying it and you already sort of believe it and then it reinforces it.

So it’s a bit of a slippery slope, wouldn’t you agree?

Bob Huber: Oh, absolutely. So what you’re doing for corroboration now is like, is this true? You’re waiting for like the likes on the social media platform, like the thumbs up of like, yes, I agree. Yes, I agree for a bunch of other people who actually don’t know whether the content’s real or [00:09:00] not.

But you know, it’s a snowball effect, right? If you have enough people doing that. And then you’re more likely to believe, like, Hey, you know, there’s, you know, a million comments, they all seem favorable to whatever the content is, or they’ve given the thumbs up. That in itself becomes the story.

KB: Absolutely. I think I’ve spoken to this on the show before.

So there was some study done in Times Square in New York. And then just one person stood up, started looking at nothing. And then two people and then three and then 10 and 50 people were just looking at nothing. Cause, like, they just followed what the other person was doing. We are herd sort of people. We like to do what other people do.

It makes sense. Right? So the part that gets me the most is you are the CSO and head of research for Tenable, which is a large security company. And even you’re saying that it’s even hard for someone like you of your caliber to detect. So imagine the average person.

Bob Huber: Absolutely. And I think that’s what’s critical is when we talk about this technology in general, and whether it’s incorporated into the technology platforms or social media platforms, we’re talking about training the entire population of the world that has access to this type of content.

Right. [00:10:00] And, and that’s an intractable problem in my mind. It’s like having a license to be on the internet, which is never going to happen, and I don’t promote that. But like having the ability to understand like what’s real and what’s not, what would the alerts look like if they actually came in, that would be difficult to do.

So if you extrapolate that to things that are actually critically important. So if you’re looking at like responses to like natural disasters and reaching out to the population in certain areas to incite some type of action of like, you know, evacuations or take shelter or things like that, that has a massive impact and some of those impacts can lead to harm, unfortunately.

So it’s one thing to have some, what I’ll call some influence, and I know it gets defined differently, some type of interference. But when those things incite some type of action, and certainly action which could be harmful, I think that’s where it’s dangerous.

KB: Yeah, most definitely. I totally agree from that front.

And then going to what’s important to people is, of course, elections, especially in your part of the world. Now, as we know, there is an upcoming election. So what does this sort [00:11:00] of mean for everything we’ve just sort of spoken about? Deep fakes, people, you know, not understanding whether it’s real or fake.

Like, what does that then mean for this upcoming election?

Bob Huber: Yeah, I think this is great for the adversary or whoever’s using the technology to create the deep fakes or misinformation, because now there’s a plethora of information. You know, having just recently been down there, and here in the States as we come up on election cycles.

There is no shortage of information about candidates and elections and, and all kinds of information on a, on just a nonstop basis. Right. So I’m faced with it constantly, and that material is fodder for adversaries or people who do want to influence or interfere. Right. So now they have information that can be manipulated. And, you know, given everything, you know, I’ve seen in Australia and here in the States.

Like I’ve seen nothing in all the interviews and soundbites and everything else that would indicate to me, like, this is legitimate. So by the same token, you know, the adversary is not going to say it’s not legitimate. So the question is, if you have all this information being streamed at [00:12:00] you nonstop, and it just ramps up.

As we get closer to the elections, how do you discern? And I think that’s a, that’s a slippery slope. Like I said, most people get information from a source they consider their trusted source, or at least representative of their interests. So you’re already coming with a bias, right? So I, you know, I think for anybody who creates this information, you can take something that was in the news today, tomorrow spin out a story to whatever audience you’re, you’re targeting, using the information that, you know, supposedly is, you know, legitimate information, slightly manipulated the next day.

And achieve a different result. And, and the one thing I think is fantastic in this space overall is people actually discrediting stuff that’s actually real now, right? Like that, that just blows my mind. It’s almost like an out: something can be out there, it’s actually true, it might not put that person, organization or interest in a, in a good light, and now they can discredit it.

They’re saying it’s, you know, synthetic [00:13:00] or it’s a deep fake of some type.

KB: And then do people believe that, when they say it’s fabricated? Obviously they do.

Bob Huber: Yeah, you know, it’s a, it’s a concept called Liar’s Dividend and there’s enough of it out there, uh, already that, uh, I think it holds some value there.

It, it at least certainly brings the content into question, whether it’s legitimate content or not. So if it puts you in a poor light, I think it certainly gives you an option to discredit the information.

KB: So from your experience, what would sort of indicate something’s real?

Bob Huber: Yeah, there’s, you know, there’s lots of different things you consider and you kind of touched on a little bit, but you know, based on background noise, uh, certainly if it’s video, does the background noise match what you’re actually seeing in the video?

Does it seem to flow smoothly? Can you take any cues in the, in the information being presented that would relate to time of day or even date? So if there’s something in the information that allows you to discern, like, Hey, is this, you know, a day old, two days old, 10 days old, uh, what have you, nighttime, daytime, [00:14:00] and even looking into the crowd, you know, if there’s, if there’s crowds of people behind, does that appear to match what you’re actually hearing?

In the, in the video or the audio. So, you know, when you’re watching this stuff, I don’t think, short of, and I’ll use some old school terms here, you know, pausing it and rewinding it, so I’m bringing up my VHS days, you know, pausing and rewinding it, I don’t know anybody who does that. You know, every now and then I read some stories of people who, you know, that’s how they identify these things.

But I think writ large, most people aren’t doing that. You’re taking the information as it comes at the time it comes. And you’re not pausing, looking at the screen, questioning, looking for dates, looking for time of day or anything like that. So I, I think it gets very difficult to try to discern that. Now, if you come in questioning immediately yourself, you may do that.

Right. But you know, for me, I will tell you in general, you know, there’s definitely some news sources I go to regularly. My news sources, whether people consider them trusted or not, it doesn’t matter, they’re my news sources. I have a level of trust and comfort with the information presented there. Now, if the information I [00:15:00] receive from that source starts varying to some extent, then I would probably question it, because it just doesn’t match what I would perceive as what to expect from that organization.

But that’s, those are all really fine lines. I think that’s just really difficult. And I know certainly in the States, we see this: even the people who question some of the information that does come out, then they get questioned. So that’s why I say, if you go back to the original question you asked me, it’s like, what’s the threat of this type of technology and capabilities?

It’s trust. That’s exactly what it is. It’s trust. Because now I have to, I have to think really hard on what I’m trusting, who I’m trusting, why I’m trusting the information and ask a lot of questions. And I think, you know, that’s a luxury for most folks.

KB: Seems exhausting. Like no one would do anything all day now because we’re trying to trust, and then questioning, and saying it’s fake when it’s real. But just go back a step.

You said about rewinding it. What do you ascertain from doing that again? Sorry, Bob.

Bob Huber: Yeah. So, you know, if you, if you’re like, you’re getting information in some form, the ability to go back and listen and look, you know, replay [00:16:00] the message, look at the screen, pause it, make sure things make sense, looking for any tips inside.

Certainly, if it’s a video, that would, um, relay additional information of, like I said, you know, time of day, time of year, you know, nighttime, daytime, crowds, does the crowd match, you know, where somebody is pretending to be, as far as, you know, the region that they claim they’re in. So, like I said, I, I will do that on occasion, but admittedly, even for myself, I will usually only do that if somebody else has tipped me off, right?

So if somebody else questions it, like, Hey, did you see that? Then I might go back and look at it, but I’m just like everybody else, you know, I’m taking my information as it comes and I’m not questioning a ton of my information. And, and I guess what I’m getting to is now I think I have to. So I’m, you know, I’m a paranoid guy anyway, I’m in security, so that’s, that’s kind of part and parcel with the job.

It’s probably gonna lead me to question more information coming my way.

KB: I mean, that’s a lot of effort to go backwards and forwards and question things. Like that’s like almost digital forensics level for one video.

Bob Huber: Yeah, that’s exactly it. And that’s, and that’s why I wonder, [00:17:00] obviously the masses not only don’t have the expertise to actually do technical analysis of it, you know, by and large, people aren’t going to have the time to go back and try to figure this type of stuff out, and that’s where I think.

Those larger organizations and some of those coalitions, uh, like I said, the coalitions of the willing, that’s where I hope they create technologies in their platforms that do make it easier to identify, right? We all need that tip off of, Hey, this is questionable content, or whatever it might be that they come out with as far as messaging, uh, regarding synthetic content.

KB: In terms of tip off, would you say the way in which people are phrasing things as well, like the, the words perhaps, what about even the voice? Cause I’ve used an AI generated voice and it didn’t really sound like me unless I’m tone deaf, but just, it didn’t sound like me, but again, like this is just now, what about in five years?

Bob Huber: Yeah. So I’ve heard some, in all honesty, and this was a person I know who did it. You know, they, they used their own voice and then they manipulated it. And it was, it was pretty close. [00:18:00] The longer the message got, the more you were able to think, Hey, this doesn’t quite sound right. Like things like pauses in their conversation, right.

Or transitions in conversation just didn’t sound quite smooth enough. But I think the technology, like you said, is advancing rapidly enough that it’s going to become harder and harder to detect that type of activity.

KB: So all these sort of deep fakes, hard launching really into the market, especially around now.

And I know, you know, Americans take their elections quite seriously. A lot more seriously, probably, than other parts of the world, from my understanding. So I just need to ask the big question: with all these deep fakes sort of, you know, waffling around in the space, will this have a major influence from a campaign perspective, to sabotage, to influence?

What are your thoughts on that?

Bob Huber: Yeah, I mean, if you’re familiar with U.S. politics, I mean, we already have our parties pretty well defined, and, you know, we have, you know, extremes on both ends of the political spectrum. So I don’t think it’s actually going to change the [00:19:00] results of elections so much as it might solidify people’s positions even more so, right?

If you have a certain belief, uh, founded or unfounded, uh, you’re more likely to find information that’s going to support your belief. And I just think that makes it hard to dislodge folks from whatever their current belief is. So, I guess what I’m saying is, you know, those extremes on both ends are probably more empowered and more emboldened.

Uh, but really do I think it changes overall from an election perspective, not really. There, there may be some people that are, would be considered more moderate where it may have some influence. And that’ll be interesting to see, right, is, is whether the moderate actually does make a difference in the U.

S. elections that are coming up. And that’s usually, you know, what candidates in the states play to as, you know, whether we call them, you know, purple states or swing states or, you know, targeting moderates and you think you can move them just a little bit one way or the other. It’ll be interesting to see if that actually works.

KB: Yes. That’s a great point. Now I’m aware of the extremity of both sides of the coin, but what about, to your point, the people in the middle? So do you think it would help [00:20:00] influence though? For example, if someone were to sabotage and say, oh, well, the other party sucks or whatever, and has all these deep fakes floating around, would that influence maybe the people in the middle or people a bit on the fence?

Bob Huber: Yeah, I don’t think we’re going to know until after the elections come up. I think it’s going to be really too hard to tell, just given all the information that’s available out there, whether that actually has an effect in significant enough numbers to make a difference. So unfortunately I think we’re going to have some hindsight come after November of this year.

I think predicting that prior is hard, and of course there are always polls and surveys that have been going on for quite some time already. I haven’t heard, as of yet, any significant movements of those voters at this point.

KB: And do you think there will be more sabotaging going on in terms of, okay, let’s create these deep fakes to make it look like the other party is a lot worse, or we’re trying to bolster what we’re doing, to influence these, these middle of the road people in terms of how, how they sit?

So do you envision that will be part of a plan, maybe a [00:21:00] more strategic plan, that’s maybe underlying?

Bob Huber: Yeah, it’s possible. You know, something like that, were it unearthed, the connotations would be received pretty poorly. So now you’re talking about, like, serious, you know, influence or interference in elections.

I think that the problem, though, is anybody can create this content. So it doesn’t have to be done by a particular party per se. It can be done by anybody. So even small ideologists or others who have a certain belief, that aren’t even mainstream organizations, you know, from, from a numbers perspective, I think they’ll have the ability to create some of that misinformation that may affect some outcomes.

And certainly, and I do believe probably even more so, the more local it becomes, the more likely that is, right? Cause you’re, I think those people have more skin in the game, uh, as far as the information they want to present. Cause I’ll tell you right now, like personally, a lot of the information here in my local community actually does come through Facebook.

Right. There’s just a lot of the groups organized there and then you have to discern like, you know, whether you believe that content or not and trying to prove, you know, if it’s, is it accurate [00:22:00] information. I think at the local level, that becomes very difficult.

KB: So how do we sort of tell people, Hey, what you’re looking at online may be fake, may be fabricated, may not be real?

Like, how do we, How do we even get to that stage where we’re telling people that? And now just even going back before, you’re saying like, you know, rewinding stuff and looking at if it’s true and looking at the setting, like all that, that takes a lot of computational power to even do that part. Even getting to that, we’ve got to get people there first, so you can get to that part.

Bob Huber: Yeah. Yeah. So I think, uh, you know, as I mentioned before, you know, while big tech and some of the coalitions out there are trying to figure out how to address this, even for me, I couldn’t tell you across all the different platforms what their notifications and warnings look like. So I couldn’t even tell friends, like,

look for the following things. Like, I don’t, I don’t have that yet. So, you know, I, I always come back to, you know, a pretty popular saying here: if it’s too good to be true, it probably is, right. So if it, it aligns way too much to your beliefs, or seems like, just, there’s no way, that’s great,

I love that, it’s probably not true, right? So it’s just that, that common sense approach, because I think, like, like you said, even if you had something that’s, you know, fairly, you know, poorly done from a deep fake capability, you know, like glitchy or weird pauses or the atmosphere doesn’t match or the weather doesn’t match or something like that.

I think most people aren’t going to pay attention to that. Right. So I, so I think it really just comes down to common sense. But like I said, everybody who receives information comes with their own bias. And if my bias is towards a certain belief, I’m more likely to believe that whether it’s true or not.

And that’s, you know, that, that’s the problem we have now. Forget deep fakes in general. That’s just a general problem we have. And that’s what leads to such polarization.

KB: Cognitive bias for sure. Now you are right. The only thing is that people want to believe what they want to believe and they’ll maybe overlook things.

So what I mean by that is, on a popular radio show here in Sydney, Australia, they had this lady; she had fallen in love with some dude on the other side of the planet. It was clearly a deep fake video. Cause she’s like, no, he sent me a video. It was [00:24:00] obviously fake. And I’m thinking she may have overlooked maybe certain glitches or characteristics of the video.

Anyone could see that it wasn’t a great deepfake, but she had the belief that, no, the guy loves me and all that. So do you think, even to your earlier point, even if people believe something and maybe it does look a bit suspect, people are going to overlook those things perhaps because they want to believe what they want to believe?

Bob Huber: Absolutely. You referenced the conversation with Satnam previously regarding all the scams he’s covered, and that’s the primary motivator for most of those. Like, you know, I want to believe this is true, so I’m going to send money or whatever I do as a part of the scam. And I think that will always continue, deep fakes aside.

KB: So what do you envision sort of happening now, post this election? What do you sort of think’s going to happen in terms of outcomes, or hypotheses that you hope don’t come true, but in fact do?

Bob Huber: Yeah, you know, we’re in reactive mode and that’s never where we want to be. Right. So when it comes to, you know, influence or [00:25:00] interference or deep fakes or what have you, we’re in reactive mode.

You know, I’ve used this before: the genie’s out of the bottle for this election cycle, for all those having election cycles this year. The hope is that, you know, between some of the coalitions at the G7 and the big tech coalitions, they start implementing regulations in different regions of the world or different countries that would hopefully either introduce penalties or minimize the ability for these to exist, uh, without some type of notification.

Uh, you hope tech builds that into their platforms. But, you know, my guess too, unfortunately, is I don’t, I don’t think that’s happening in 2025; it might even be a couple of years out. So I’m hopeful for the next election cycle that, you know, we would see some of those things actually be implemented. You know, I’m a security guy.

So I would say from a controls perspective of, like, policy, process, regulation, technology, right? Those are my controls. So I would hope to see some of those introduced. I have a feeling it might take a little longer than we anticipate. In the aftermath of all these [00:26:00] elections, there is no doubt going to be a lot of studies and analytics regarding whether this type of technology and misinformation have changed results around the globe.

And I think, you know, the outcome of that is critical to understand, like, how do we tackle the problem? Cause I’m sure they’re going to, you know, go to great lengths to try to figure out like, where was the content being provided? How could you tell? Cause right now, like I would love to say, Hey, here’s the checklist of all the things you need to look for.

Um, that would be great and easy, but it still requires people to do something, which I don’t think is going to happen. So that’s why I say, like, somebody needs to figure those things out, and then, as much as possible, build it in. You know, so hopefully we have some, some good intentions out there building this in, in lots of different areas.

And then hopefully there’s a stick on the other side where there’s some type of penalty for, you know, those who are knowingly producing this information.

KB: Well, who’s the somebody who, who should be figuring this out in your opinion?

Bob Huber: I do believe the, the approach that we’re seeing now, you know, whether that’s the G7 or some other [00:27:00] consortium of organizations around the world, is that you have to establish, like, some type of international norms.

So whether it’s that group of countries or, or someone else, I, I don’t know, but it’s almost like, you know, we have, I’m air quoting here, you know, international cyber norms, if you will. And I guess that’s arguable, but I think we’re going to have to have norms developed. So that’s going to come in the forms of regulations and policies around the globe.

So I think government does have a role to play in this. I think technology providers do as well, just like they do now for privacy and security, right? It’s not wholly different from privacy and security around the world, right? Governments and regions have stepped in and created policies and regulations and acts to address it.

And, and they, you know, big tech has also stepped up and done some of that as well. So, you know, as much as I had said, that’s the best hope I think we have.

KB: So it’s going back to social media platforms for a moment. Now, I remember Zuckerberg coming out and saying, well, we can’t control everybody’s piece of content on the platform.

Cause I think someone said, you know, there’s a lot of violence and things like that, that people were seeing, that their kids were being exposed to or [00:28:00] something, and obviously they’ve got these content curators, but look how many people are on the platform, like billions, and there’s so many pieces of content a day, it’s very hard, even for AI. Things slip through, or it’s hard to be able to manually review it.

They can’t get to everything. Right. So how do you sort of police the deep fakes? Cause how, how do you know whether it’s real or not? Like how, how do you get to that level of understanding where it should sit?

Bob Huber: So, I mean, the only way that can scale is via technology, right? So, you know, lots of, uh, technology companies have organizations that do attempt to go and police this type of information, but just given the volume of it, it has to be implemented in technology

to address the bulk of the issues. And I think that’s the only way we’re going to be successful, but you know, that’s a tit for tat capability, right? It’s just like security. You know, we find a new threat, we build defenses up for it and they figure a way to go around it. And I have a feeling we’ll be in that cat and mouse game for the foreseeable future at this point.

Like, I don’t know that there is; there is no end stage where you win, right? It’s just going [00:29:00] to become commonly understood that it’s out there. Uh, and hopefully we have better indicators to identify where this information is, uh, and certainly for the lay person, understanding how we would identify, you know, information that may not be

uh, fully accurate. Some education will have to take place for folks to understand, like, Hey, this may or may not be, you know, factual information, or may be synthetic or manipulated.

KB: That does make sense. The only part of it that I’m looking at now is, you know, people are going to come out and say, oh, we need user awareness again; we’re going to go back to that headspace of, you know, it’s on the user.

But if we look at, uh, you know, security awareness, like people can’t even do the basics, like not clicking on a link. And now we’re asking people to become digital forensics experts and analyze a video to the nth degree to decide whether it’s true or not, like, I just don’t think that’s going to happen.

Bob Huber: No, it’s not. You know, it’s just like any other awareness messaging campaign that comes out regarding, regarding anything, and certainly stuff that comes from the government as well.

You’re going to have that percentage that heed whatever the awareness is, and those that don’t. And you just made a great [00:30:00] point. So, you know, when it comes to security awareness and training, if I had to conduct a cyber phishing exercise, you know, I know 10 to 12 percent is the average click rate, so 10 to 12 percent of people will click it.

That means they’re probably going to get compromised. And that’s in an organization that actually specifically trained for it, probably on a regular basis. So now you’re talking about something where, you know, training’s probably going to be a lot less and there’s no testing of it. I would expect the rate of folks who are able to do stuff like that to be a very small number.

And that’s why I say, like, we’ve got to look, just for scaling, for governments and technology companies to step into the fray.

KB: Absolutely. And very soon we’ll see that campaign coming out around all the user awareness. I just don’t see that happening. You’re talking about everyday people. They’re not working in organizations.

So it’s going to be very hard for people to, to understand perhaps the, the techniques and to have the, you know, the knowledge to be able to do that.

Bob Huber: Basically what I was saying is like, you know, government and big tech. I don’t know about you, but I know quite a few people that don’t trust either of [00:31:00] those organizations.

So I’m not sure how successful that will be.

KB: When you’re talking about deploying a potential deep fake video on a potential social media platform that people don’t trust anyway, so what does this sort of look like? Where do we go from here? So obviously I know there’s not, there’s not a silver bullet to this, but what can people sort of do in the, in the interim?

Like, how can we start to understand that this is here? You know, we’ve already been informed about other people that have had these deepfake scams. They’ve been scammed out of a, a lot of money because of it. So obviously this is not the end. This is the start. It’s going to get worse. What would you sort of recommend with your experience and, and really looking at this from a research perspective?

Bob Huber: Yeah, well, first of all, I think, you know, just having it in the media is a good thing. So it’s no different than all the scam stories that come out and make mainstream media all the time, of somebody who was scammed out of their life savings and what have you. I think just being in the media works because I know

folks who aren’t cyber experts, like, they hear that, right? Cause they’ll ask me questions [00:32:00] like, Oh, you’re a cyber expert. Did you hear about this? What do you think about that? So, so there is some value in that being in mainstream media, right? I think that’s, that’s extremely useful.

KB: But then what about the regulation side?

Like how do we get to a stage where you come back in two years, the election’s done, and it’s, okay, we’ve got regulation? How do you enforce that though? Because the part that is interesting is, with the internet, it’s not like, okay, I’m in Australia, you are in the United States of America, and if I do a crime, like, with a treaty in place, then I get extradited back to Australia and I get prosecuted here.

It doesn’t really, there’s no real sort of rules on the internet. So how do we police this whole thing that’s going on? And I know there’s no real serious answer to this, but what do we, what do we do? It’s like regulation on the internet is very hard to enforce.

Bob Huber: So I think the approach is going to be very specific.

So, you know, certainly in Australia, you have the electoral commission working on regulations very specific to elections, right? So, like, I think that can make sense and that’s [00:33:00] very targeted. But to your point, the internet is quite large and there’s every topic in the world out there. Yeah, I just don’t think you can police them all.

So, you know, as I said, hopefully establishing, you know, international norms for synthetic information or deep fakes or what have you. I think that’s extremely useful. And sometime down the road, it may be possible that there, you know, as part of those international norms, there are some agreements between regions of the world or countries that allow action to be taken.

You know, a lot of things develop that way when it’s, when it’s new to people, you know, we don’t have all the regulations and laws in place to start with and they get developed over time. And I see this falling in line with that, which is also to say, I don’t think there’s any answers anytime soon.

KB: What about Zuckerberg and friends?

Do you think they feel a level of responsibility? Because, you know, if I’m being honest, the majority of this stuff is going to be deployed on social media like X and Facebook and Instagram.

Bob Huber: Yeah. Yeah. That’s a great question. And, you know, he’s in front of, uh, US Congress frequently and, you know, this [00:34:00] is, this is my opinion and, and, and only my opinion, I don’t think that’s incited a whole lot of change for how they operate.

Not to say they’re doing a good or a bad job, but he certainly gets called to the carpet enough regarding misinformation, influence, and all kinds of other topics. And, you know, they, they, they have efforts to try and address some of these things. At the end of the day, it’s a business, and, you know, unless there’s somehow an incentive for the business to do so, and, and there’s no stick, to use a metaphor, on the other side, you know, they will only do as much as they have to. And like I said, that, that’s my, that’s my personal opinion.

KB: Yes. I have seen that over time in terms of the stick side of it. What do you think is reasonable? Because even here in Australia, there’s, like, you know, fines and penalties, but as I’ve said before, like, you know, people just pull that out of the back of their Maserati, a fine, schmine, and it doesn’t bother them, right?

Like, who cares? It’s, you know, they’ve got more money than sense, some of these people. So, I guess for us, it’s like, well, I don’t think dealing a penalty in monetary terms is even [00:35:00] going to do anything.

Bob Huber: Yeah, that’s a great question, because there, even from a privacy perspective, there’s been some pretty significant penalties out there.

I’m not sure that it’s actually had the impact that they, they had hoped for as far as, you know, monetarily being enough to, to make a real difference. It certainly, uh, got attention and it certainly made the media; whether the sticks are enough to really move the needle or not, I, I don’t know. One thing I found in preparing for this: I went out and, and read some policies regarding synthetic media and, and manipulated media and, and deep fakes on X, you know, formerly known as Twitter.

Some of the clauses they have in there really are more specific to misinformation or disinformation that leads to, leads to actual harm. And then they list out cases of what harm could be considered. And most of those, at that point, now you’re talking about criminal activity. Right. And for criminal activity, there’s already, you know, social norms in place and laws in place.

So there’s a line there of like, what can happen that doesn’t cross the line that’s, you know, influence or interference versus what crosses the line and leads to harm or safety issues. [00:36:00] So there’s, you know, so there are some things in place that capture that. But the problem is, I think a lot of these things are playing out in court too, right?

So if you have misinformation that leads to some type of loss of life, safety or harm issue, many of those are still playing out.

KB: So can you give an example of what that looks like in terms of the terms and conditions? Like, what does that look like?

Bob Huber: Yeah. So they, I mean, essentially, you know, they, they claim to be monitoring the activity on the platform and, you know, they’re specifically calling out like incitement of abusive behavior to a person or a group, risk of mass violence or, you know, civil unrest.

Like those are the things they call out that they’re looking for, which led me to question, like, how well can they identify that? And, and I don’t know the answer to that question, but, you know, I think these are great clauses they call out, of like, here’s the type of things we look for and consider to be harmful, but how do you actually identify it?

Like who’s doing that? Is that, is that a machine doing it, is it technology doing it? Is it people doing that? Is it people reporting it? Like, I don’t know the answer to those questions, but I’d be curious to know, because as we discussed earlier, whatever that system is to do that, it has to be able [00:37:00] to scale, right?

It’s just the sheer volume of information. It has to be able to scale.

KB: So do you think they are genuinely doing that? Or do you think they’re saying they’re doing it or their intention is there, but they’re not really doing it?

Bob Huber: You know, again, my opinion, they say they’re doing it. I think they do have some intention of doing it.

I think it’s proved much more difficult than, than, than what you would read in a policy. I rarely run across stuff, in any form that I’m a part of, that says, you know, synthetic content, manipulated content or anything like that. And on the flip side, I’ve almost never seen anything that had some type of watermark or anything else that I could say was authentic.

KB: But I feel like it’s always defensible though, for these guys. They say, Oh, well, we can’t control it. We’ve got billions of people on there. It’s hard. And I get it’s hard. I’m not saying it’s easy. I get that. But then I always feel like, well, there’s never a lot of accountability perhaps. And these terms and conditions are written in a way which is a bit convoluted; there’s a bit of gray area.

Bob Huber: Yeah, that’s it. And I don’t think most people would even, you know, be aware of the terms and conditions that actually exist out there. I went in search of them very specifically, and I, you know, understood that [00:38:00] they had something similar to this, but the average person would not understand that, probably wouldn’t understand how to report it.

Or what to do if they receive something, you know, that might violate one of these terms or conditions. My guess is, if they did, it would probably come through a law enforcement avenue, right? So somehow it would lead down that path versus going back to a, a social media platform of some type and trying to report that way.

KB: So in light of all of this, what do you think sort of happens now? Obviously we’re coming up to the election. There’s going to be a lot of things happening. What do you sort of envision over the next sort of 6 to 12 months?

Bob Huber: You know what, I think we’re going to see a lot more of it, certainly in, in forums where I think their ability to curate information is less, you know, however you want to consider that. But if they don’t have capabilities to go out and curate information to determine whether it’s factual or not, I think we’re going to see a lot more of it.

I think it’s going to ramp up leading into the elections around the globe, no matter where you’re at. Uh, and I think it’s going to make it, uh, very hard for, for voters to be truly informed. So, like I said, the genie’s out of the bottle in this go round of elections. And [00:39:00] then the hope is, you know, in hindsight, sometime in the future in 2025, people can look back and start to identify trends related to identifying misinformation, synthetic information, deep fakes, and interference and influence, and then we can formulate a plan of how to address that stuff.

Because I think even now, with some of the, the governments considering, you know, how they regulate this, I’m not sure that all the tools and capabilities exist for them to say specifically, you know, we expect organizations to do the following things. Like, I don’t, I don’t know that there are, there are norms yet that they could identify very easily, other than to say, you shouldn’t do this.

KB: Do you think as well that, because of all the stuff you just listed out, we’re going to see more polarization? Because it’s like, Hey, I saw this video about you guys complaining about our party, and vice versa. And will we see more of that then, that sort of then translates into more physical crime and people, you know, getting really outraged, like I’ve seen in, you know, even the last election? Like, are we going to see more of that?

Bob Huber: I, you know, I hate to [00:40:00] say yeah, but I, I think so. Right. Cause you’re trying to instill beliefs, and there’s this, there’s this concept of, if your views are one way or the other, you may not bring everybody with you, but you’ll bring them closer to you. Your, your views may be so outlandish, right,

that, uh, they may, they may not believe everything you have in there, but it may move them in your direction just enough to make a difference. And, and I do think it’s likely that it gets more polarized and it’s easier to create those environments.

KB: So Bob, is there anything specific you’d like to leave our audience with today?

Any closing comments or final thoughts?

Bob Huber: Yeah, I think, you know, for the general population, you just have to, you know, keep, keep your eye on this topic, right? It’s an emerging topic for all of us. Even for folks who follow security or are in security, like, like myself, it’s an emerging topic, uh, and the technologies to detect and identify this type of information are not fully mature yet.

So we have to go into everything with a little bit of caution, and it’s, it’s like anything else, right? It goes back to what I said earlier. If it’s, if it’s too good to be true, it probably is. So, so keep that in [00:41:00] mind and, and just be aware, when you’re receiving information, of where it’s coming from.

You know, those, those small things like looking for glitches and all those other things. I don’t expect most people to do that. And honestly, I rarely do it

unless I think, you know, whatever I’m watching just seems hard to believe, but it’s just a space that I hope gets a lot of coverage in media just for that general awareness.
