The Voice of Cyber®

KBKAST
Episode 301 Deep Dive: Ginny Badanes | Threats, AI and Influence Operations Around Elections
First Aired: April 02, 2025

In this episode, we sit down with Ginny Badanes, General Manager of Democracy Forward at Microsoft, as she discusses the multifaceted threats posed by nation-state actors around elections, particularly the use of AI in influence operations. Ginny highlights the critical need for society to adopt a healthy skepticism toward information, scrutinizing the trustworthiness of sources and the potential for AI manipulation. We delve into the activities of significant nation-state actors like China, Russia, and Iran in recent elections, and the emergence of AI-driven fake news sites used for propaganda. Additionally, Ginny provides insights into the deceptive use of AI beyond political contexts, including its impact on women and financial fraud schemes.

Ginny Badanes is the General Manager of Microsoft’s Democracy Forward program, an initiative within Microsoft’s Technology for Fundamental Rights organisation. At Microsoft, protecting fundamental rights means promoting responsible business practices, expanding accessibility and connectivity, and advancing fair and inclusive societies. Ginny’s team is focused on addressing challenges to global democratic stability, with efforts aimed at safeguarding open and secure elections, promoting a healthy information ecosystem, and advocating for corporate civic responsibility. In 2024, a key focus of her team’s work was raising awareness about the deceptive uses of AI in elections and combating these cyber- and AI-enabled threats.

Ginny has spent her career at the intersection of politics and technology, advising presidential and senate campaigns on leveraging data and technology. She was named among Washingtonian’s 2021 & 2022 “Most Influential People” list for national security and defense.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Ginny Badanes [00:00:00]:
What we really need is to all develop that sort of critical skill where we stop for a moment and ask ourselves a few questions. Do we trust the source that this came from? Do we know where this originally came from? How possible is it that this has been manipulated in some way, whether with AI or just with deceptive editing? If we get to a point where, as a society, we are trusting but also skeptical, I think a lot of what sort of goes viral and a lot of the chaos that comes out of these kinds of campaigns, it’ll lose its sting. It won’t be as effective.

Karissa Breen [00:00:51]:
Joining me today is Ginny Badanes, general manager, Democracy Forward from Microsoft. And today, we’re discussing threats, AI, and influence operations around elections. So Ginny, thanks for joining and welcome.

Ginny Badanes [00:01:03]:
Thanks so much for having me.

Karissa Breen [00:01:04]:
So Ginny, as you know, there was an election in your part of the world last year, and this week in particular, as we record this podcast interview. So I’m really curious to maybe zoom out, and talk me through how nation state threat actors typically behave around elections.

Ginny Badanes [00:01:25]:
Sure. Well, as you know, the US election was sort of the last of a massive global election year. And so that means that there were billions of people around the world voting in these consequential elections for prime ministers, presidents, for congresses, parliaments, those kinds of elections. And so we had an opportunity to both work with election authorities and political campaigns and party committees around the world over the course of a year and also track the behavior of these nation state actors. So here’s kind of what we saw, and it really culminated in what we saw in the US elections. There are really four main nation state actors that we tend to track in this space: North Korea, China, Russia, and, of course, Iran. Now I’ll start by saying we didn’t actually see much behavior from North Korea in this space this whole past year, from an interference perspective. Now to be clear, that doesn’t mean there wasn’t any; we don’t have perfect information. But we didn’t see a lot of disruption from them.

Ginny Badanes [00:02:21]:
It might be that they were distracted with some cryptocurrency efforts they have underway. That seems to be their priority right now. So it’s really more about those other three, and they each were active in different ways. And when we talk about active and disruption within the context of elections, what we really mean is everything from influence operations to cyber campaigns, you know, attacks, and then, of course, hybrid, where they do a little bit of both at the same time. And all three were active. So for example, and we could get into more detail and questions if you have them, but at a high level, what we saw Iran doing was a lot of cyber activity, particularly in the US, and they were pretty focused on going after President Trump and his campaign. There’s some publicly available information about the fact that they were successful in breaking into some email accounts associated with that campaign. They tried some influence activities as well.

Ginny Badanes [00:03:09]:
They weren’t quite as effective. They have an interesting new thing they’ve been doing where they stand up websites that are partly AI-generated fake news sites, used to sort of send out some of their propaganda and messaging. We also saw Russia being quite active. You know, to be clear, Russia continues to be very focused on Ukraine, and that’s where a lot of their efforts are going. But they can do two things at once. So in addition to the efforts around Ukraine, they also were looking to interfere in several elections over the last year. And, of course, the US is one where we saw that as well, both from the influence perspective as well as some cyber activity. They did use some AI videos.

Ginny Badanes [00:03:47]:
Most of the videos they made did not contain AI, because those old tricks that they’ve been doing remain effective, and so there wasn’t really, I don’t think, a need for them to use some of the new tools. And then finally, China, also active this cycle. They tend to be more focused on espionage in these spaces, but they have started to get more involved on the influence operation side. One thing I would note that would be particularly of interest, I think, to those who are looking ahead to future elections is they did set up some accounts attempting to attack Republican House members. So these are not, you know, presidential level. These are folks running for smaller offices, lower ballot races, and they weren’t very effective. But it was interesting to see them sort of dip a toe in the water on trying to influence around local races. And we do believe that those were targeted around races where candidates had very specific policies that were not in line with what the Chinese government would want.

Karissa Breen [00:04:42]:
Okay. So all of what you’re saying was so interesting. So one of the things I wanna know first, with everything that you’re saying specifically around, you know, China, Russia, Iran: would you say, from the 2020 election to the most recent election in 2024, in terms of the campaigning, do you think this last election had a lot more impact in terms of, you know, nation state influence, etcetera? You spoke about influence ops. Would you say that’s increased significantly from the previous election?

Ginny Badanes [00:05:11]:
You know, you used the word impact, so I do wanna make a comment that’s maybe not totally obvious: when we talk about what we saw from an activity perspective, we’re not sure what kind of impact this activity had. And so I always wanna be cautious not to say that just because actors were involved in influence operations meant they had any influence. That often is not the case. Sometimes chaos and disruption is the point, more than actually trying to influence outcomes or how people choose to vote. But from an activity perspective, we did continue to see, in some cases, a sort of steady state of activity, and in some cases, a few spikes. And so, again, from the China perspective, being involved in influence operations is somewhat new from what we have observed of them, and so that is new activity and more of it. It’s not necessarily saying that that’s what they will do in future elections. It was a very small operation that we observed, but it was worthy of noting just because that was not a typical activity.

Ginny Badanes [00:06:07]:
So there was some steady state, status quo kind of work that these actors were doing. Iran has had similar campaigns in the past, a combination of hacking into accounts and then sending emails from those accounts, one of those hybrid campaigns. That was pretty consistent with what we saw them do this cycle, but it was a different kind of activity, I would say, from China.

Karissa Breen [00:06:27]:
And so you said before, Ginny, hacking into emails, accounts, etcetera. So is that what you mean by chaos and disruption: those sorts of things, you know, hacking an email, sending emails, all those types of things?

Ginny Badanes [00:06:38]:
Yeah. A lot of times, we can’t always know what these actors are trying to accomplish. We can tell when they target certain campaigns. So I mentioned Iran was targeting Trump, and, you know, it did appear to us that Russia was targeting the Harris campaign, first the Biden campaign and then subsequently the Harris campaign. And it’s hard to know if their intention is to persuade people or, to the point I made before, if it’s just to create some kind of chaos and disruption in the system. I do think that is often the objective. And through hacking into accounts and then, you know, doing some kind of influence operation, a lot of times they were trying to distract people and to change the conversation.

Karissa Breen [00:07:16]:
So I’m gonna go back for a moment: you said fake news sites. So I’ve been in the space a bit over a decade, and I’ve always thought, imagine if, as you see on Facebook, like, you can do sponsored ads, for example, sponsored content. Imagine just setting up a fake news site, I don’t know, saying that the sky is rainbow and then having all these fake people saying, like, yes, it’s rainbow. It sort of, like, gets into that information warfare that we saw in terms of, you know, the whole debacle with Twitter a fair few years ago when Elon Musk took it over, etcetera. But is this something we’re starting to see more of? And this is really interesting in terms of people discerning between, quote, unquote, real information or misinformation. Going back to your example before around the fake news sites, what does that then look like? Do you think that actually drives people’s influence, would you say, in your experience?

Ginny Badanes [00:08:05]:
What I think is so compelling about the fake news sites is that it’s a clever approach that, to your point, has been around for a while. This is not a new thing. In fact, it’s not always just nation states that do this. It’s actually been a political activity, particularly one we’ve seen in the US, where people create sites and pick a local town and then add the name Herald at the end. Right? So it sounds like it’s an authentic news source to people who aren’t from that area maybe, and they don’t realize that the Savannah Times is not an actual news site. So to begin with, it’s a really clever way of making someone think that they are reading trustworthy news, and that creates a mental space where they think that they’re consuming journalistic content. And then from there, whether it’s a political operative or a nation state operative, they will pull out some real articles often, and then they’ll mix in some fake articles, or they’ll insert different information into existing and real articles. So, again, this is a practice we’ve seen for years from a variety of actors.

Ginny Badanes [00:09:02]:
Some people refer to these as pink slime sites, and sometimes the purpose of them is actually just to get advertising and clicks, essentially, to make money. So as much as we look at the things that people are doing to change minds and to change narratives, a lot of times there’s also a financial motivation. So you always have to sort of keep in mind that you’re not always sure what the motive is of the groups who spin them up. What’s different about what we saw this cycle in particular is, we believe, the use of AI, which allows these sites to spin up faster, allows foreign actors who probably don’t speak the native language very well to write articles that come across as more believable, or allows them to take real articles and edit them through AI to make them not easily caught by scanning systems, and therefore they can create more content quickly. In the end, I think for a lot of the actors, yes, this is one of the laundering mechanisms for propaganda and narratives that they want people to believe, and that’s usually why I believe these actors in particular have been doing this recently. But there’s always a money motivation to consider, because what we found is that as these sites spin up, automated ads get placed on them. The more volume that they attract, the more clicks on the ads, and then the people who are running those sites actually do make money.

Karissa Breen [00:10:15]:
Yeah. Okay. So that’s interesting. Because, obviously, people wanna read stuff around election time, etcetera. So it may not be to influence. It may just be, hey. It’s a quick way to get really high web traffic to be able to run ads to make money. Wow.

Karissa Breen [00:10:28]:
And would you say, and, I mean, this is just based on what you’re speaking about, would you say that any of these sites would have influence, or not necessarily? It may have done so as a byproduct, but it wasn’t the main driver.

Ginny Badanes [00:10:39]:
I mean, I always wanna be careful about assuming intent, because that’s not the kind of intelligence we get. But I imagine that a lot of times it is to launder stories and narratives, and I’m sure it has had the effect of helping build a narrative around something that an organization or a country is trying to get folks to believe. Think about how a bad actor can get a story that is completely false or mostly false into the mainstream, where people who are not spending their time on the dark web will actually come across that information in a way that is believable. There’s this sort of cycle that they go through to create the laundering effect of the fake news, of the propaganda. And it usually starts, and I’ll give an example of how we’ve seen Russia do this in particular, it usually starts with a fake video. Again, sometimes AI-edited, sometimes just a regular video they’ve filmed, often posted to Telegram, sometimes on X. Obviously, there are other platforms as well.

Ginny Badanes [00:11:35]:
They post that, and then they intentionally go and find Russian state media to run a story about the video. Now, not a lot of people trust Russian state media, so that’s probably not sufficient for getting it into the bloodstream. So the next thing that they will do is go and find social media influencers who are connected in some way to the Russian government or to this media organization and have them start to post on their social media about it. Through that, they start to get unwitting, unconnected people who will see one of those posts or will read the article and who will also start to amplify that content themselves. At that point, there will be some slightly more normal, not nation state aligned media sites who will pick up on some of that social media chatter, and they will write a real article about this story. And that’s what sort of gives permission then for the broader media ecosystem to start commenting on, writing about, and responding to that narrative. So it’s this laundering effect of a fake video that goes through this very specific process. Not all of them do, by the way, but the ones that really resonate with people will often end up on a fairly mainstream news site that is really reporting on what they saw on social media, which was reporting on what they saw on Russian state media, which was intentionally reporting on a fake video.

Karissa Breen [00:12:53]:
Before we jump into the Australian side of things, because we have an election coming up this year as you know, I wanna just hear your thoughts, Ginny. A lot of people I’ve spoken to on the show recently have, you know, discussed the rise of AI and how real some of these videos currently are, but also how real they will become in the not too distant future, and how that has an impact on discerning whether something is fabricated or real, etcetera. So what are your thoughts then on elections generally and how we move forward, you know, over the next couple of years for the US, for example, with the rise of AI and deepfakes, etcetera?

Ginny Badanes [00:13:32]:
So at the beginning of last year, there was a lot of concern about this issue. Because of all of the big elections that were happening, there was a lot of chatter and conversation about, are we going to see this as the big AI apocalypse? You know, are there going to be candidate deepfakes that are going to persuade people on how they vote? Is it gonna be a catastrophic election cycle? In the end, that’s not what we saw. We did see the use of AI in ways that we hadn’t anticipated, and in some ways that we had. It wasn’t used at a scale, or wasn’t effective to the point, that it influenced or impacted the outcome of any elections, as far as we know and can tell. That’s not necessarily how it will always stay, though, because the technology is moving quickly, and adversaries are finding ways to weaponize it. So some of the things that we did collectively across industry and with governments when we were focusing on this: first, we were part of a group of 20 companies in February of last year at the Munich Security Conference who signed an agreement that did a few things. First, we came together and said, we see this as a risk that is emerging and could affect democracies around the world. And second, we are the technology companies who are building the products and innovating and distributing content, depending on the company.

Ginny Badanes [00:14:41]:
And therefore, we have a responsibility around this to make sure that we are being thoughtful. And then we put together a series of commitments that each of the companies followed through on in their own way, based on what their products were. And that was a start, to sort of acknowledge that just because we hadn’t seen it yet does not mean that this might not have a massive disruptive effect on the population. What we actually saw from an AI intervention perspective was mostly deepfake audio files. So I do think most of us were expecting that we would see video, but what we really saw was that the most effective kind of deepfake out there is using audio. And that’s in part because there’s just a lot more content to build models on, and it’s a lot less complicated to replicate a voice. The other is it’s really hard to detect, and detection of deepfakes is already quite hard.

Ginny Badanes [00:15:30]:
I’m happy to talk about that more if you have questions there, but audio deepfakes are really hard to detect. So that’s the one trend that we did see start to emerge that we’re continuing to track. We’re concerned about a couple examples of how we saw adversaries actually use audio deepfakes. There was the sort of prominent example of an audio call to voters in New Hampshire that sounded like President Biden telling them not to vote. That one was a domestic actor in the US who has since been caught and, I believe, has gone to jail for that. But the actual full text of that audio was created with AI. As for how we’ve actually seen nation states deploy it: one thing that we observed was Russia had a video of Kamala Harris. It was a real video of a rally.

Ginny Badanes [00:16:14]:
She was really talking. But what they did was they spliced in a very small piece of an audio deepfake, just one line that they put into the video in a way that was so subtle that you couldn’t tell without running some analysis that she hadn’t actually said it. And it was a derogatory comment about President Trump and the assassination attempt against him. Now, that video didn’t end up taking off; people caught it fairly quickly and debunked it, so I don’t believe people believed it. But it was quite well done, and it was, we thought, a sign of things to come.

Karissa Breen [00:16:46]:
That’s really interesting. Okay. So going back to your comment around it being hard to, you know, detect deepfakes, talk to me a little bit more about that.

Ginny Badanes [00:16:55]:
Well, I mean, going back even five, six years at Microsoft with our Microsoft Research team, we saw this trend coming. Right? We saw what AI was going to be able to do, and we had some really smart people in the company who were saying, hey, this might be a problem for democracies and elections. That was an area that people pretty quickly thought would be an issue. So we started working on a detection system, because that seemed like the most obvious way to sort of solve this. Can’t we just use AI to identify when AI has been used? And it sounds fairly simple. We found a few things, as did others who were doing this work. To be clear, there were a lot of folks across industry who were trying to work on this challenge.

Ginny Badanes [00:17:33]:
We found that we built some pretty good detectors, and we still have them today, as do other companies, where you could run a video or an audio file or an image through, and it would give you some percentage of certainty as to whether that had been generated or edited by AI. And that is an important component as we think about the challenges and what we can do about them. However, a couple things we identified kinda early on in this process. One is that if you get an 85% accuracy rating, that’s still not 100%. Most of these classifiers and detection systems will not be able to give you that level of certainty. And so in and of itself, that is already a bit of a challenge. The other is, as you put detection systems out there, people who want to get around them will figure out how to get around them. Right? There are really smart people on both sides of this, and there are ways that engineers will figure out, okay,

Ginny Badanes [00:18:23]:
this is how this classifier works, so I’m gonna change my deepfake so it’s not gonna get caught by that classifier. So there could be sort of an arms race of creating a better detector and then creating a way to get around it, and it’s hard to know where you are in that cycle. Are we at the part where we’re better, or are we at the part where they’re better? And so that makes this all a challenge as well: you just can’t have full confidence, I don’t think, that these detection systems are 100% there. And then finally, there’s just nuance to how people are using AI. It’s not actually quite so black and white. What these detectors are very good at is if you have a wholly AI-generated image and you run it through, they’re gonna give you a pretty strong indicator in most cases that that was AI. So there are some applications where it can be pretty accurate. Where the nuance comes in is, say, you edit a picture with AI in the slightest way.

Ginny Badanes [00:19:14]:
For example, there was a picture of President Trump, sort of an iconic picture after the assassination attempt, with his fist in the air and blood on his face, and he’s surrounded by these Secret Service agents. A version of that photo began making the rounds really quickly where the only difference was that the Secret Service agents were smiling. And that was trying to serve a narrative that the government had tried to kill Trump, and so the Secret Service agents, who work for the government, were smiling. And that, of course, isn’t real, and we all knew what the other picture looked like, so you could pretty quickly debunk it just with your own eyes. But if you had been dependent on a detector to tell you whether or not that was a real picture or AI-generated, you’re going to get kind of a mixed response from those detectors, because most of the picture was in fact real. It was only a small portion of it that was AI-edited. So these are the kinds of complications that we’re all grappling with as we deal with the detection side of it.

Ginny Badanes [00:20:07]:
So, again, it’s an important component. There’s a lot of good work being done, a lot of good companies creating classifiers and detection systems, but it has to be part of a broader strategy, because in and of themselves, these detection systems are just not sufficient.
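To make that point concrete, here is a quick, hypothetical back-of-the-envelope sketch; the prevalence and error rates below are our own illustrative assumptions, not figures from the episode. It shows why a detector that is right 85% of the time still flags mostly authentic content when genuine deepfakes are rare.

```python
# Illustrative only: why an "85% accurate" deepfake detector is not sufficient.
# Assumed numbers (not from the episode): 1 in 1,000 items is actually a deepfake;
# the detector catches 85% of fakes but also wrongly flags 15% of authentic items.

prevalence = 0.001          # fraction of content that is actually a deepfake
true_positive_rate = 0.85   # detector correctly flags 85% of real deepfakes
false_positive_rate = 0.15  # detector wrongly flags 15% of authentic content

# Bayes' rule: P(actually fake | flagged by the detector)
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
precision = (true_positive_rate * prevalence) / p_flagged

print(f"Share of flagged items that are actually fakes: {precision:.1%}")
# Prints roughly 0.6%: almost every flag is a false alarm, which is why
# detection has to sit inside a broader strategy (provenance, media literacy)
# rather than carry the trust burden on its own.
```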

Karissa Breen [00:20:20]:
Wow. That was really interesting, everything that you were saying, and really thorough as well. I appreciate that. Okay. So now I wanna slightly change gears for a moment. And as I touched on before with you, Ginny, as you know, there’s an election coming up here in Australia. So is there anything that you can share, from the intelligence that you’ve just spoken about today and also based on the US election, that Australians may need to know?

Ginny Badanes [00:20:46]:
Yeah. I think the main point I would make for sort of everyday Australians when it comes to this is we really think that there’s an important role for society in combating both nation state propaganda as well as deepfakes. And a lot of that is this, like, healthy skepticism. If you see something online that either doesn’t seem right or seems too right, you know, like, man, this really feeds right into a core belief I have about this politician or about this organization. What we really need is to all develop that sort of critical skill where we stop for a moment and ask ourselves a few questions. Do we trust the source that this came from? Do we know where this originally came from? How possible is it that this has been manipulated in some way, whether with AI or just with deceptive editing? If we get to a point where, as a society, we are trusting but also skeptical, I think a lot of what sort of goes viral and a lot of the chaos that comes out of these kinds of campaigns, it’ll lose its sting. It won’t be as effective. And here’s an example where we’ve seen this go pretty well.

Ginny Badanes [00:21:54]:
In Taiwan, they had big elections last year as well, and we do know that China targeted them with some influence operations. We heard that there were some pretty compelling deepfakes, in fact. There was a candidate who dropped out and allegedly made a video endorsing another candidate. That seemed odd to people; it didn’t seem like someone he would have endorsed, and in fact, he had not endorsed him. It was a deepfake. But what we heard from the government in Taiwan, from NGOs there, and from people as well is that people, for the most part, just didn’t believe the influence operations that were being thrown their way. They didn’t believe those deepfakes that they came across online, even if they were technically quite good. And a large part of that is because they kinda knew it was coming.

Ginny Badanes [00:22:35]:
Their government had spent a lot of time talking to them, doing PSAs. Again, the NGO and civil society community was quite active there, making sure that people knew, hey, people are gonna try and get you to think things that aren’t necessarily true. You know, it’s up to you how you wanna vote, but give thought to what you’re seeing. Be a little bit skeptical. And from what we can tell, it does seem to have worked.

Karissa Breen [00:22:57]:
So what was going on in my mind as you were speaking is, and based on just that example, would you say that people generally are getting a bit more skeptical, like, oh, that’s definitely fake, that’s AI-generated? I’m seeing that a lot more in, like, Instagram reels where something is obviously fake, and then I go to the comments, just curious to know, and people are already calling that out. Would you say that people’s discernment is getting a little bit better than perhaps in previous years? And I say that probably in contrast to, you know, people clicking on really terrible phishing emails, for example.

Ginny Badanes [00:23:30]:
Sure. Yeah. Well, that’s a great analogy, in fact, because, you know, there was a time when we would all click on those links, and we didn’t know that the Nigerian prince wasn’t real and that we were being scammed. Right? We got to that place of skepticism and awareness through, frankly, trainings and a whole lot of effort across society, both governments as well as employers and companies. And so we as a society have gotten more skeptical about phishing emails, and we’ve learned the things that we should do. Hover your mouse over the link and look at the URL; never put your information into a site that you didn’t go to directly. Right? That kind of thing. Similarly, I do think we’re starting to see people get more skeptical of what they’re seeing online, questioning things, because they’re starting to become more aware of what the technology can do, and they realize that it’s not something that they can necessarily spot with their own eyes anymore.

Ginny Badanes [00:24:19]:
You know, four years ago, if you were looking at AI-generated content, you could usually tell, because the skin was really glassy or they had six fingers or, you know, the lettering wasn’t quite right. These were all cues we were given several years ago that are just not accurate anymore, because the technology has advanced so much. And the extent to which it’s advanced in the last year, I mean, I expect we should see that compound in the next six months. And so it isn’t as easy anymore to spot it with your own eyes, but people are more skeptical. What I’m frankly concerned about, actually, is the other side of that skepticism, which is why I was trying to use terms like healthy skepticism. I worry a little bit that we get to a place where people just don’t believe anything anymore. They don’t trust anything they see online. Everything is AI, and that’s not a healthy place for us to be either.

Ginny Badanes [00:25:08]:
So we really need to work through trust signals and understand how we get to a place where we have skepticism, but we still have sources that we trust and places we go to where we can get real content and information, without just assuming everything we see online is fake.

Karissa Breen [00:25:23]:
Yeah. That’s an interesting point. Okay. So just on that note, and your comment there, you said trust signals. What I’m starting to see on, I think, Facebook and Instagram is content actually getting tagged as, like, AI-generated content. Do you think people are just exhausted, though, to be like, is this real? Is this fake? Like, for the, you know, the everyday sort of person. And, to your point, as a result of being exhausted, is that why they’re just like, well, I don’t believe anything on the Internet anymore?

Ginny Badanes [00:25:46]:
Yeah. I mean, there’s good news and bad news here. The good news is the companies realize that we need to figure out this labeling question: when do you label something as AI, and how do you know that it is? Where we are right now is not the right place with that, because in some cases, you know, if I use a filter on my Instagram, that’s AI. Does it need to be labeled that I used that? Right. So I think we still have some nuance to work out around what it means for something to be AI-generated or AI-edited, and I don’t think we have that quite figured out yet. However, industry and governments are really working through these challenges and trying to find what are indicators of trust, what are consistent labels, what are the standards for how we do this. One really promising technology is this concept called content provenance, and it’s an open standard that’s run by this nonprofit group called C2PA.

Ginny Badanes [00:26:36]:
And what it is about is actually tagging an image, a video, or an audio file with cryptographic metadata about its origin and authenticity. What that means is, if it’s built into your camera at the point of click, it attaches information about where it was taken and who took it, and it signs it in a way that, as you then load that picture onto LinkedIn, for example, which does read this standard and will give you a label, it’ll then tell you what you wanna know about that image. It gives you the context. And if it was AI-generated, so if you were to use, just for example, Bing Image Creator to create an image, we apply this standard to that file. If you load that to LinkedIn, then when you scroll over, you’ll see “generated by AI” as one of the indicators. So what we’re all kind of moving towards is this idea of not, is it AI or not, but just more context about where it came from and who it’s from, who stands behind this image. From a brand perspective, it also keeps people from stealing each other’s content. So there are some real applications in the real world for why this could be used, but I think it’ll be really helpful from a trust perspective.

Ginny Badanes [00:27:39]:
If we can get to a place where, when we create something, we are tagging it as belonging to us in a way that we stand behind it, people will know that it authentically came from this company or this individual or this political candidate.

Karissa Breen [00:27:54]:
So going back to C2PA, is that what you meant before around generating, like, trust signals? Is that an example of one?

Ginny Badanes [00:27:59]:
One example, yes. The concept of trust is a really tricky one, and I think there are other ways we can talk about trust. But, yes, I view C2PA and content provenance as sort of a trust signal that we should all be able to get behind as a society and start asking for, frankly, and then hopefully get to a point where it’s just like the lock icon next to the URL at the top of your browser. Hopefully, we get to a place where we’re all looking for that seal, so we know more about the image and where it came from.
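To make the provenance idea concrete, here is a minimal, hypothetical sketch of the sign-and-verify flow described above. It is not the actual C2PA manifest format or SDK; it just illustrates the core mechanism, binding a claim about a file’s origin to the file’s exact bytes with a digital signature (here a generic Ed25519 signature from the Python cryptography library), so that any edit to the content breaks the seal.

```python
# Minimal illustration of the content-provenance idea behind C2PA: bind metadata
# (who created a file, and how) to the file's bytes with a digital signature.
# This is NOT the real C2PA manifest format, just the core cryptographic concept.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(image_bytes: bytes, creator: str, tool: str,
                 key: Ed25519PrivateKey) -> dict:
    """Produce a provenance record: a signed claim about the file's origin."""
    claim = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # fingerprint of the exact bytes
        "creator": creator,                                  # e.g. a newsroom or candidate
        "generator": tool,                                   # e.g. "camera" or "AI image tool"
    }
    payload = json.dumps(claim, sort_keys=True).encode()     # deterministic serialization
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_content(image_bytes: bytes, record: dict,
                   public_key: Ed25519PublicKey) -> bool:
    """True only if the claim is intact AND the bytes match the signed fingerprint."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
    except InvalidSignature:
        return False  # the claim itself was tampered with
    return record["claim"]["sha256"] == hashlib.sha256(image_bytes).hexdigest()

key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
record = sign_content(image, creator="Example Newsroom", tool="camera", key=key)

print(verify_content(image, record, key.public_key()))            # True: authentic
print(verify_content(image + b"edit", record, key.public_key()))  # False: any edit breaks it
```

Real C2PA manifests embed this kind of signed claim inside the media file itself and chain claims across edits, but the trust property is the same: a verifier checks the signature against the content, rather than trying to guess after the fact whether AI was involved.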

Karissa Breen [00:28:26]:
I guess, like, to your point before, it does protect creators from a copyright perspective, etcetera, because you often hear someone in the comments saying, hey, that was my video that you just used, and then you generated all these views, etcetera. So okay, that’s really interesting. So now I wanna sort of flip over and maybe speak a little bit more broadly around deceptive use of AI. I know we sort of touched on it before, but I wanted to, you know, get your thoughts a little bit deeper here.

Ginny Badanes [00:28:51]:
Yeah. I mean, there’s a couple examples that we’ve seen it being used in, and it’s not always in the political context, which I think is also helpful for people to consider. So, a couple examples where it’s not political necessarily, but we’ve seen it be quite effective, or at least we’ve seen adversaries really trying to use it effectively. One is with the Paris Olympics. Not surprisingly, Russia has an issue with the Olympic Committee, and they have been working to undermine trust in that organization for quite a while. One of the things they did leading up to the Paris Olympics just last year was a series of videos, one video in particular that was supposed to be like a Netflix-style documentary, and it was called Olympics Has Fallen. And this documentary opens with the image of Tom Cruise, and he appears to be the narrator.

Ginny Badanes [00:29:38]:
He did not participate in this documentary, to be clear, but it appears that he is your host and narrator. And then throughout the documentary, his voice is the narrator, and that is, of course, not really his voice. It was an AI-generated voice of Tom Cruise. Part of it is the mixing of mediums, you know, the visual of his face and then his voice being so close and clear, combined with, obviously, the way that they built out the documentary; it was quite compelling. And it gave us a bit of insight into, if they want to do something bigger, if they want to do something more elaborate, what their skill set is right now and what they could accomplish. So that’s one deceptive use of AI that, again, is maybe political in nature, but not really about politics or elections. The area where we’re really seeing this harm show up today, though, as much as we say we haven’t really seen it in elections yet, at least not in a major way, is that we are seeing this affecting women in particular. So just to be really clear, when we talk about deepfakes, more than 90% of the deepfakes that are on the Internet right now are of women, and they’re almost entirely pornographic in nature.

Ginny Badanes [00:30:43]:
And that is a whole other side of deceptive use of AI, where you’re trying to take, whether it’s a celebrity’s face or a female politician or someone else, and depict them in a place that they weren’t. What is particularly problematic and bothersome to me, anyway, when I think about it for women in public life, politicians, is the chilling effect that it has on women entering this space. It’s already quite difficult to get women to run for office; I think that’s a trend that we see pretty globally. Then you add this additional factor, knowing that once they step into the public eye, this is a thing that is almost guaranteed, if not incredibly likely, to happen to them. I could just see that becoming a real chilling effect, which is a real problem for our democracies. And then a third use of deceptive AI that we’re seeing emerge in the real world and actually happening is around financial fraud. Especially, these voice deepfakes are being used to target vulnerable populations such as the elderly.

Ginny Badanes [00:31:38]:
They get a phone call. They think it’s their grandchild. They need money quickly. They’re stranded, they’ve been kidnapped, whatever the crisis is. And a lot of these folks don’t know that this technology is out there, so it doesn’t occur to them that that’s not actually their grandchild. And just like phishing campaigns of the past, where they insert a lot of urgency and crisis, a lot of people are losing money due to these scams. We’re also seeing it at the corporate level too.

Ginny Badanes [00:32:02]:
There was a company in Hong Kong where an individual at the company was on a video chat with their CEO, who was telling them to wire millions of dollars urgently. They wired the money, and it turned out that that was not their CEO. It was a live deepfake. Obviously a very sophisticated operation, but a live deepfake of their CEO, and they lost quite a bit of money that way. So these are ways that we’re currently seeing this technology being used that are quite detrimental and certainly deceptive in how it’s being deployed.

Karissa Breen [00:32:32]:
So just going back to the second point around the pornographic deepfakes specific to women, would you say that we’re now going to see a reduction in women wanting to take on, like you said, you know, political roles, or wanting to become prominent figures, because of that chilling effect?

Ginny Badanes [00:32:45]:
I mean, I don’t have any data to back that up. I can say anecdotally, in conversations that I’ve had, I can hear some of the anxiety from women who are in the political space around this topic. I do think that’s an area that’s sort of ripe for better understanding, and we’re having conversations with women political leaders and other organizations to understand how their members are feeling about this issue. So I should be cautious about saying definitively that it’s having a chilling effect. I think it seems quite likely that it could, at least at the individual level for some women. Whether or not it’s happening as any sort of trend, I don’t know for sure. But I do think, at the individual level, there are a lot of people who are worried about this and see this as just yet another reason why it’s not worth it to put your hat in the ring.

Karissa Breen [00:33:30]:
So I now wanna talk a little bit more about you and your role. So, as I announced at the top of the interview, you know, Democracy Forward. So tell us a little bit more about this, and what does it mean?

Ginny Badanes [00:33:42]:
Sure. I mean, our team has been doing a version of this work for, really, the last ten years. And what we’re really set up to do is work with election authorities, political campaigns, party committees, and then all of these other groups that sort of surround and empower a democracy. And so that includes the news and the media and journalists, but it also includes people who work in academic institutions at times, NGOs, think tanks, etcetera. We have a few objectives when we engage with them. One is around protection for their infrastructure. This is a little less interesting; it tends to be, you know, not the topic of a lot of podcasts. But we wanna make sure that, especially when our technology is being used in a critical way around an election, we are working closely with our customers around things like reviewing their infrastructure when they want us to, when we’re invited in, providing them with recommendations, and creating real connections and networks into our team, which is then a connection point back into the broader company.

Ginny Badanes [00:34:40]:
What we found is that a lot of these organizations, while they’re well known and, you know, they may seem like they’re big just because they have politicians involved in them or because they run the elections for a country, the reality is a lot of them are low resource and actually quite small, but they’re really highly targeted. And so that’s kind of how we define the groups that we work with: these highly targeted, low resource groups who are fundamental to democracy. And then we spend time with them trying to figure out how we can be helpful. Sometimes that’s in cybersecurity protections. Sometimes it’s just giving them a phone number, so if something goes wrong or they have questions, they know how to find us. We do have programs and things that we run with them as often as we can. We make that free or at low cost, obviously. We have legal restrictions sometimes working with governments, but that’s our objective where we can.

Ginny Badanes [00:35:27]:
So one of our key priorities is global elections: protecting, and connecting with, the folks who are in that space. And then on the other side of the work, we spend a lot of time considering what a healthy information ecosystem looks like and what Microsoft’s role is in contributing to that. And so that includes, again, a lot of the same players, working with news organizations around the world, and also working with our product teams, you know, making sure that our colleagues within Microsoft News and within Bing and within Copilot have access to both our expertise and, in some cases, our networks when those can help them with things. Sometimes we help them access data sets that are relevant to the work they’re doing. So we often serve as sort of the subject matter experts on these topics out to our product teams and try and support them in that way. All of that work has been going on for many years, but frankly, in the last two years, a lot of it has been done through the lens of AI. Thinking about both where we can help extend opportunities using this new technology to the folks who are in this space, but probably where we spend more of our time, unfortunately, is looking at, okay, how will people weaponize this tool? What are our obligations? Let’s look around corners and try and anticipate how this might be misused, and then work both with those communities, but then also with our internal teams, to create the appropriate gating mechanisms, frameworks, protections, that kind of thing.

Karissa Breen [00:36:46]:
So, Ginny, just to build on that a little bit more, what about your team here in Australia? What are they doing? Is it the same sort of thing in terms of, you know, looking at how people are weaponizing this, or is there anything you can share?

Ginny Badanes [00:36:56]:
Well, I’ll start by saying, having spent a few days here already in Sydney with our team, I’m just so incredibly impressed with how they take on so many big issues that are important to both the company and to the country. How we’ve been working over the past few months in the lead-up to this election is looking at a lot of the challenges that we’ve just laid out and how they specifically apply here in Australia. So we’re working with our threat analysts and our threat intelligence teams to try and understand, again with this idea of looking around corners, not only what we are seeing adversaries do right now in any ways that might impact the country, but also what we anticipate we could see. It includes having meetings and conversations with the election authorities and political parties and news media organizations around Sydney and Canberra and elsewhere to really identify what I had just talked about: where can we be helpful from an infrastructure protection perspective? What kind of cyber protections and programs and services can we offer? And our teams here on the ground are really the ones who have those networks and relationships built out and will help us back in the States with the execution on whatever that support program looks like.

Karissa Breen [00:38:05]:
So, Ginny, do you have any final thoughts or closing comments you’d like to leave our audience with today?

Ginny Badanes [00:38:10]:
Oh, gosh. Well, I guess I’d say you never know what’s coming. You can plan for a million different scenarios, and then it’s the one you hadn’t thought of that really surprises you. So when we think about how to build out a process of support, and how we work with our colleagues in this space, and how we work with voters and consumers, people who are out there heading into this election cycle and probably not thinking as deeply on these issues as we are, one of the things that I think is really helpful is to identify the areas where you can do something regardless of what the incident might look like. Right? So preparation is just so important when it comes to these kinds of things. It’ll almost be repetitive, really, but I look at what people who are running elections can do to create an environment that is quick to respond to crisis. I think election officials tend to be quite good at that. That’s literally what they do.

Ginny Badanes [00:39:05]:
But then for the voters, and for people out there who are a bit confused about AI and a little concerned about the information environment, it’s really just putting into practice this idea of, you know, pause for a moment, think about where things are coming from and why they’re targeting you with them, and then just be cautious with how you proceed in sharing information with others. I think that’s about the best we can ask of each other. And, again, we’re not gonna know what’s gonna happen over the next six months, or the several months leading up to this election. But as long as people are thoughtful and prepared, I’m sure that the Australian election will go very smoothly and that you’ll all be ready for whatever comes at you.
