Deepfakes are Spiralling Out of Control, But How Far Will It Go?
Posted: Thursday, Aug 14
Karissa Breen, crowned a LinkedIn ‘Top Voice in Technology’, is more commonly known as KB and is widely known across the cybersecurity industry. A serial entrepreneur, she is co-founder of the TMFE Group, a holding company and consortium of several businesses all relating to cybersecurity, including an industry-leading media platform, a marketing agency, a content production studio, and the executive headhunting firm MercSec. She is also the former producer and host of the streaming show 2Fa.tv. The flagship arm, KBI.Media, is an independent and agnostic global cybersecurity media company with KB at the helm of the journalism division. As a cybersecurity investigative journalist, KB hosts her renowned podcast, KBKast, interviewing cybersecurity practitioners around the globe about security and the problems business executives face. It has been downloaded in 65 countries, with more than 300K downloads globally, influencing billions of dollars in cyber budgets. KB is known for asking the hard questions and getting real answers from her guests, providing a unique, uncoloured perspective on the always-evolving landscape of cybersecurity. She sits down with top experts to demystify the world of cybersecurity and provide genuine insight to executives on the downstream impacts that cybersecurity advancements and events have on our wider world.



David Wiseman, Vice President of Secure Communications at BlackBerry, spoke with me about the rise of deepfakes, in which our own voices and likenesses can be fabricated to present as though it’s us. But it’s not us.

“Anyone’s voice can be deep faked at this point,” warned Wiseman. “Anyone with some basic set of tooling can generate that deep fake. Then with identity spoofing interjecting into networks, that fake information can be easily redistributed in a way that not only does it sound like you, but it looks like it came from you.”

The dilemma is that this capability is now accessible to anyone with a rudimentary cut-and-paste tool. Mr Wiseman cited a recent case in a Southeast Asian country, “One of the top leaders, they had an identity attack on their WhatsApp system and their account was taken over and fake messages started to go out.”

“Someone’s reaching out with information that’s not correct, and it can reflect poorly or cause angst in society as a whole.”

The risks are increased because governments and corporations all too frequently use consumer-grade apps like WhatsApp or Signal for official communications, sacrificing control and security in the process for the sake of familiarity.

“We’ve seen a bigger push now to get government communications off of these consumer systems, which are great for consumers, but not appropriate for government for the reasons we talked about before [on the podcast] around data security, knowing who you’re talking to, and record keeping being the primary ones.”

Convenience is now blurring the lines between office chat and casual messaging, which in turn can allow for data leaks, blackmail, and mistaken identities. Wiseman stressed the “behaviour that people need to be more cognisant of now”: not mixing personal and business channels. Business and friendship relationships are blurring too, which can make it hard to switch back to a business channel.

“It’s not always just about security. It’s also just about limiting your potential to make a mistake, typing, pasting something in the wrong window, for example.” Mistakes, or oversights can quickly spiral out of control. “There’s a lot of reports of younger people basically being blackmailed by messages and data on their phone. And it can even lead to youth suicides…so it’s something that’s not just for celebrities. It can happen to everyday people and it does happen to everyday people.”

Thanks to AI, sophisticated deepfakes are now easy to generate, and it is just as easy to lure people into believing them.

“We took a six or seven second clip, put it into an AI model that processed that voice into a pattern that then they applied to a new set of text in a different language…When you play it back, the voice, it sounds just like that executive’s voice, but he’s saying exactly the opposite of what he said.”

While some fakes are clearly staged, Wiseman pointed to the growing subtlety and the reality of the risk. “Maybe it’s a message from your spouse, ‘Hey, can you open the garage door?’…There’s a spectrum of things there where you might not have any thought at all that something might be fake.”

The constant stream of uncertainty and second-guessing has created decision fatigue.

“You need to share information that is important with the broad public. How do you do that now when they have that super high level of skepticism? There are technologies that are out there that can help…cryptographic identity validation to do things like confirm that the message left the other phone is the same one that came to your phone.”
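The principle Wiseman describes, cryptographically confirming that the message you received is the one that left the sender’s phone, can be sketched in miniature. The example below is only an illustration: it uses Python’s standard-library `hmac` module with a hypothetical shared secret, whereas real secure-messaging platforms use public-key signatures and full identity infrastructure rather than a pre-shared key.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Sender attaches an authentication tag derived from the secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; a mismatch means the message was
    altered in transit or did not come from the expected sender."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

# Hypothetical key established out of band between the two phones.
key = b"shared-secret-established-out-of-band"
msg = b"Hey, can you open the garage door?"
tag = sign(key, msg)

assert verify(key, msg, tag)                        # genuine message passes
assert not verify(key, b"Send the funds now", tag)  # forged message fails
```

A forged or tampered message fails verification even if it sounds plausible, which is the property that lets a recipient trust content without relying on how convincing the voice or text seems.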

Still, Wiseman believes skepticism, a careful and thoughtful approach to online interactions, will remain foundational. The risks for enterprises and governments are higher.

“People just have to be more aware personally and be more skeptical…The process part needs to be built into the tools that people are using.”

Additionally, as spam, social engineering, and information mining are on the rise, technology companies and policymakers will need to collaborate to incorporate safeguards.

“At the end of the day you still have to communicate with people, right? You can’t give that up. Try to be aware of what you’re communicating, whom you’re communicating with over what channel and, and try to…segregate your business life and your personal life and those communications, that’s at least something you can do.”
