The Voice of Cyber®

KBKAST
Episode 327 Deep Dive: David Wiseman | Do You Really Know Who You’re Speaking To?
First Aired: July 30, 2025

In this episode, we sit down with David Wiseman, Vice President, Secure Communications at BlackBerry, as he explores the growing challenges of authenticating identity in digital communication channels. David discusses recent high-profile incidents—including compromised government messaging apps and political deep fakes—that expose vulnerabilities in platforms like Signal and WhatsApp. He examines the risks of AI-powered voice and message spoofing, and emphasizes the importance of maintaining clear boundaries between business and personal communications to prevent data leaks and blackmail. David also explains how evolving AI tools are making targeted spam, phishing attacks, and metadata mining more effective, and calls for stronger controls, technological safeguards, and user awareness to preserve trust in digital communications.

Experience

David has 25+ years of experience in software, security, information management, mobility and communications at BlackBerry, IBM, SAP, Sybase, and the US Navy. His expertise in Secure Communications leads BlackBerry in its vision of securing a connected future you can trust, helping governments augment and fortify digital defences to strengthen national security.

Notable Achievements

David helped design the world’s first large-scale environmental geo-spatial database for NASA. He also helped design the software for one of the first shipboard radar data fusion systems for the US Navy. At BlackBerry, David and his team have helped NATO and multiple global governments operating in challenging geo-political environments to establish trusted, secure communications channels from the battlefield to the boardroom – using military-grade software to ensure classified conversations and messages remain private.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

David Wiseman [00:00:00]:
Anyone’s voice can be deep faked at this point, and anyone with some basic set of tooling can generate that deep fake. Then with identity spoofing, injecting into networks, that fake information can be easily redistributed in a way that not only does it sound like you, but it looks like it came from you. Telecom networks, consumer messaging app networks. At the end of the day, the number one design goal is anyone can reach anyone in a very easy manner. You probably have your phone number for the next 30 years. And if someone can get a hold of that, then they can target you in lots of different ways. We need to protect that information, have control over it ourselves, develop more of a trust in that channel.

Karissa Breen [00:00:59]:
Joining me back on the show is David Wiseman, Vice President, Secure Communications at BlackBerry. And today we’re discussing: do you really know who you’re speaking to? So, David, welcome back.

David Wiseman [00:01:15]:
Thanks. Glad to have a chance to speak with you and your audience again.

Karissa Breen [00:01:18]:
Okay, so last time we had a chat, I mean, there were so many interesting things that you were talking to me about, and I think we sort of just ran out of time and there were so many other things that I wanted to explore. So maybe give us a little bit of an update on what’s sort of been happening since Trump’s advisors added that journo to the Signal group that we discussed at a high level. And I think the time in which we did the recording was around when that incident happened. So maybe, you know, what’s your update since that’s happened, from your perspective?

David Wiseman [00:01:48]:
There’s actually a lot that’s happened, both in the United States but also in other countries around the world, related to that whole situation. Maybe we’ll start with the United States. You know, it obviously became a large political consideration. Some senior officials actually lost their jobs or got reassigned to different roles that were less important. But then it actually got a bit worse. One of the challenges in the Signal situation was there was no ability to keep official government records. So they started using some tools that would allow them, in a somewhat awkward way, but a way, to collect that data and bring it into an archiving system. Shortly thereafter, that tool got hacked, and it turns out all that data was being stored on the Internet in the clear.

David Wiseman [00:02:37]:
So it just kind of compounded the situation in terms of what’s happened in the U.S. You know, we’re back to the situation where there are no records that can be kept when these tools are used. Organizations are trying to figure out what’s a good path forward. But related to that, we’ve seen similar types of things in other countries around the world, where there’s a bigger push now to get government communications off of these consumer systems, which are great for consumers but not appropriate for government for the reasons we talked about before: data security, knowing who you’re talking to, and record keeping being the primary ones. But we’ve also seen similar things happening in a Southeast Asian country. One of the top leaders there had an identity attack on their WhatsApp system, and their account was taken over and fake messages started to go out. So it’s not so much tied to a particular application as to the concept of the government using an application they don’t have appropriate controls over. So that’s just continued.

David Wiseman [00:03:39]:
That momentum’s continued to build, both in terms of bad things that happen, but also in terms of governments saying we need to take a different approach.

Karissa Breen [00:03:48]:
That’s really interesting. I didn’t hear about the Southeast Asian one. But okay, so with the fake messages, you’re aware of, you know, how companies are, like, doing these business WhatsApps now. So would you say, and I know it sort of depends, because, you know, a lot of them obviously aren’t government organizations. Would you sort of say, like, perhaps it’s probably not a wise idea to use a business WhatsApp? Because I know a lot of places like India and Brazil and all these sorts of countries use it a lot.

David Wiseman [00:04:17]:
Yeah, I think it depends, as you guessed, on how important the information is. If you’re using it to make a dinner reservation, if you’re confirming your arrival time at a hotel, things like this, I think it’s fine. But if you’re actually doing, you know, serious business transactions, financial transactions, sharing very sensitive information, then, you know, I would be hesitant to do that. You may discuss it, but to actually conduct that transaction, I think it’s important to do it on a separate channel where there’s a lot more control.

Karissa Breen [00:04:49]:
And then those fake messages that you mentioned before, was that them trying to solicit information from people to obviously use it against them and weaponize it, or do you think it was, like, still credit card information, or what was sort of the intent behind it?

David Wiseman [00:05:03]:
I think the intent behind that was to cause confusion and create political problems for people. So someone’s reaching out with information that’s not correct, and it can cause angst in society as a whole and reflect poorly on that politician. I think that was really the goal of that one. And who knows if it’s a malicious state thing, or maybe it’s just hackers having fun.

Karissa Breen [00:05:27]:
Well, before we jumped on our interview today, I was reading the headlines. There was a large gambling company here in Australia. They got fined for, like, spam messages. So then do you think, to your example here before, if they’re just sending out, like, random messages, and then, you know, in Australia you have the ACCC, like, companies are getting fined or pinged for doing this, sending out messages without people’s consent and stuff like that. So does that potentially open up another issue on that side of things?

David Wiseman [00:05:51]:
You know, I think it opens up a new risk vector. And that is, you know, since there are fines, both in Australia and the EU and other countries, around spamming types of activities and personal privacy, an attack vector against an organization could be generating lots and lots of fake messages that seem to come from that organization, which then leads into an investigation cycle, which could lead to fines and things, when at the end of the day it wasn’t behavior they were actually undertaking.

Karissa Breen [00:06:23]:
Yeah, this is where it gets interesting, because when you were just talking then, it’s like I hadn’t thought about it, but it’s like, okay, well, you’ve got one issue, but then it’s sort of opening up a whole other realm of issues. So then on that front, obviously with what we’ve been talking about already today, there are definitely data security concerns off the back of the incident. In terms of the Trump incident, did you have any sort of additional thoughts then that perhaps people aren’t thinking about when it comes to, like, how we’re communicating, on what platform, et cetera? Of course, take into consideration the sensitivity of what you’re communicating. But is there anything else you can sort of talk through here, David?

David Wiseman [00:06:56]:
People need to be cautious, and we might have talked about this a little bit before, about segregating the types of communications they have on different channels and not mixing a personal channel with a business channel. And it’s not always just about security. It’s also just about limiting your potential to make a mistake, typing or pasting something in the wrong window, for example. I think that comes into play. And then I think the next evolution of that is, when you are doing business and you’re working with organizations, do you always have two ways to communicate with them? So if something doesn’t seem correct, you have another channel that you can trust to verify it. You might think about the messages you get from your bank sometimes, and it says, we’re never going to ask you through a message for your password or something like that. So that’s kind of one example, where you’re talking to the agent on the phone and then they do a text confirmation. So it’s got that second channel, and a lot of financial organizations and businesses in general require that second channel.

David Wiseman [00:08:04]:
So for example, you might be on a call with the CEO, in a meeting, and the CEO says, transfer $50 million to this customer. But that was not really a true command; that was a deep fake or something like that. So banks and large businesses require a second written follow-up. So I think from a personal perspective, having that second channel for confirmation when things, you know, might not quite seem right is really important.
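
As a rough illustration of that second-channel idea, here is a minimal Python sketch of out-of-band confirmation: a request arriving on one channel is only acted on after a one-time code, delivered over a separate channel, is echoed back. The channel function and the transfer scenario are hypothetical placeholders, not any particular bank's process.

```python
import secrets

def send_via_second_channel(code: str) -> None:
    # Placeholder: in practice this would be an SMS, a phone call,
    # or a message on a separately authenticated app.
    print(f"[second channel] Your confirmation code is {code}")

def confirm_out_of_band(request_description: str) -> bool:
    # Generate a short one-time code tied to this specific request.
    code = secrets.token_hex(3)  # e.g. 'a91f0c'
    send_via_second_channel(code)
    # The requester must echo the code back on the original channel.
    echoed = input(f"Confirm '{request_description}' by entering the code: ")
    return secrets.compare_digest(echoed.strip(), code)

if __name__ == "__main__":
    if confirm_out_of_band("transfer $50M to customer X"):
        print("Request confirmed on both channels; proceed.")
    else:
        print("Codes did not match; treat the request as suspect.")
```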

Karissa Breen [00:08:33]:
So going back to your comment around, you don’t want to mix sort of, you know, your personal stuff with the work stuff, which I understand. Do you think that also potentially opens up an opportunity to sort of, like, blackmail certain people, if you’re in a very senior role, you’re quite a prominent figure, et cetera? I mean, we hear a lot of these things on these celebrity gossip, tabloid things, like, oh, you know, this person said this and then it got leaked or something like that happened. I mean, that’s obviously a very bad example. But I’m sort of just saying that when people are communicating on these platforms, perhaps there’s information in there, to your point, around personal stuff that maybe could be weaponized against them as well.

David Wiseman [00:09:07]:
Oh, I think there have been a lot of real examples of that. You know, in the UK there was a situation with certain royal family members where data was stolen from their phones and it went to court. You know, it was a big court case. But as a more realistic thing in everyday life, there are a lot of reports of younger people basically being blackmailed with messages and data on their phones. And it can even lead to things like youth suicides. So it’s something that’s not just for celebrities. It can happen to everyday people, and it does happen to everyday people.

Karissa Breen [00:09:41]:
So obviously it’s a behavior thing, in terms of, you know, especially if you’re focusing on, like, government organizations. It’s like, okay, well, if you’re communicating about work, you use these channels. If it’s outside of work, you can use WhatsApp or whatever your preferred platform is. But what if people aren’t doing that? And perhaps, I don’t know, nowadays with people working from home, working from anywhere, sometimes it’s a little bit work, a little bit personal. So how do you sort of navigate, well, actually, we started talking about work, but now we’re sort of drifting off into personal stuff? Do people have to sort of catch themselves and say, oh, we’ve got to change platforms? How do you navigate that situation?

David Wiseman [00:10:15]:
That is something, a behavior, that people need to be more cognizant of now, and kind of a learned behavior that at a certain point you probably do need to switch that channel. Right. Because let’s just say you’re on a government conversation. Then you start talking about kids, and, hey, here are some pictures of my kids and stuff. And you can see pretty soon how you could start pasting the wrong information, you know, hey, I forgot, I was pasting this photo to a colleague or something. It’s more a learned or trained behavior and awareness. It’s got to be a personal desire to do that.

Karissa Breen [00:10:50]:
Do you think that’s hard for people, though? Because sometimes it’s like, oh, I forgot, or, oh yeah, you’re right. You know, obviously we can say practice good security hygiene, go use this platform. But then maybe people are like, oh, but it was literally one sentence I posted, you know, I spoke about my child or I posted one photo and then went back to work. Like, are a lot of those sorts of comments being made?

David Wiseman [00:11:09]:
Yeah, I think we have to be reasonable about it. And, you know, there’s always the, hey, here’s a sentence, a little update. Yeah, I think it’s more about what’s your go-to point when you’re going to start a conversation. Do I think this is more personal, more business? There always could be a little bleed-over. But if you kind of make that conscious thought process, then it becomes easier over time to keep that separation. And if you’re on a call and that is totally changing, you know, you might say, hey, let’s pick this up on our personal channel.

Karissa Breen [00:11:37]:
So now I want to sort of get your thoughts on AI voice spoofing. So before we jumped on, you showed me a little video as an example to use as a key talking point. So maybe you can talk through that. But I mean, I have lots of questions around voice spoofing, so perhaps maybe share. I know people can’t see what you showed me, but paint the picture a little bit and we can sort of get into it after that.

David Wiseman [00:11:59]:
Sure. What I showed you was taking a short clip, a few seconds, off of a social media post. In this case, it was LinkedIn, one of the executives at BlackBerry speaking. They happened to be speaking in German about a business conference they were at, and that was posted on LinkedIn. Then one of our engineers went and took, I think it was about a six or seven second clip, put it into an AI model that processed that voice into a pattern, and then they applied that to a new set of text in a different language. So it went from German to English. When you play it back, it sounds just like that executive’s voice, but he’s saying exactly the opposite of what he said in the clip that was captured. So what does that mean? That means anyone’s voice can be deep faked at this point, because it’s probably a very rare person that doesn’t have some audio clip of themselves on the Internet.

David Wiseman [00:12:53]:
And that’s kind of the first aspect of it. So as you saw in the video, the other aspect is it’s not a very high technical barrier anymore. It’s just a cut-and-paste, click tool. Take the video clip, type some other words, and you get the output. And then when you combine that with the ability to do it very quickly in a very lightweight manner, it can be done at scale and it can be done almost in real time. And that just makes this whole deep fake thing really scary, in the sense that anyone could be a victim of it, because the sample’s already out there. It might have been out there five years ago, but you probably talk the same way now. And anyone with some basic set of tooling can generate that deep fake.

David Wiseman [00:13:36]:
And then with the things we’ve talked about before, around identity spoofing, injecting into networks, that fake information can be easily redistributed in a way that not only does it sound like you, but it looks like it came from you.
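
For readers who want to picture the pipeline David describes, here is a schematic, runnable Python sketch of its shape: a few seconds of reference audio, a distilled voice profile, and new text rendered in that voice. The functions are labeled stand-ins for what real voice-cloning tools do; they are not a real model, just the workflow.

```python
# Schematic sketch of the cloning pipeline described above.
# The two functions are stand-ins for a real cloning model.

def extract_speaker_embedding(reference_clip: bytes) -> list[float]:
    """Stand-in: a real model distills a 6-7 second clip into a voice profile."""
    return [0.0] * 256  # placeholder embedding

def synthesize(text: str, speaker: list[float], language: str) -> bytes:
    """Stand-in: a real model renders arbitrary text in the captured voice."""
    return f"[{language} audio in cloned voice] {text}".encode()

# Step 1: a few seconds of audio scraped from a public post (fake bytes here).
reference_clip = b"...six seconds of the executive speaking German..."

# Step 2: distill the voice once; the profile is reusable indefinitely.
voice = extract_speaker_embedding(reference_clip)

# Step 3: say anything, in any supported language, in that voice.
fake_audio = synthesize("A statement he never made.", voice, language="en")
print(fake_audio.decode())
```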

Karissa Breen [00:13:52]:
Yeah. Okay, so what you did show me was interesting. I have been seeing other deep fakes online. The issue that I see, and I mean, look, this is right now, because obviously these things are going to get better, is some of them do look super fake though. And I know there’s been all these, like, oh, this celebrity, I was dating him, I thought it was him talking to me. It’s like, it’s clearly fake though, or maybe it’s not as clearly fake to the average sort of person.

Karissa Breen [00:14:17]:
But I know these things are going to get better over time and more sophisticated. But wouldn’t you say though, David, like, with some of these things that we’re seeing online, it’s clearly evident that they’re fabricated?

David Wiseman [00:14:28]:
Oh, yeah. And a lot of them, it’s evident on purpose. Right. People just want to make it clear. In fact, what I showed you, it’s probably pretty evident it was fabricated when you compared what was said before and after; it shows people what can be done. What’s more dangerous in real life is something that doesn’t feel like it would be fabricated. Maybe it’s a message from your spouse: hey, can you open the garage door? You know, I lost my whatever. And now some intruder comes into your house.

David Wiseman [00:14:58]:
There’s a spectrum of things there where you might not have any thought at all that something might be fake.

Karissa Breen [00:15:04]:
And so obviously over time these things are going to get better. Right. So do you think it’s going to get to a point where it’s like, hey, I think I’m talking to David, but I don’t know whether I’m talking to David. So do you think we’re getting to this point where it becomes like decision fatigue? Because we’re constantly checking to see if it’s David or what does that sort of look like now as these things are going to get more sophisticated, better, more advanced?

David Wiseman [00:15:26]:
Yeah, I think there’s obviously a point where it’s not a sustainable environment. But the good news is there are technologies out there that can help with this. So I think it’s going to become much more common to do cryptographic identity validation, to do things like confirm that the message that left the other phone is the same one that came to your phone. So there are techniques there, and I think these techniques will evolve and they’ll be embedded in more and more places. But, you know, there’s unfortunately always going to be, in the back of people’s minds now, the potential that what I’m hearing is not the truth. And it’s not really new. I’m sure for thousands of years there have been fake written messages.

David Wiseman [00:16:12]:
So this is just the latest paradigm around that. But I think that applying the right set of tools is really the only solution to the challenge, and it’s not something you’d have to do as an individual; that technology gets embedded in the tools that you use over time. And then when you move to an enterprise or a government agency, the level of validation that you need to embed in there becomes much higher. And that’s where organizations like BlackBerry focus: how do we provide that trust vector at that level.
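
To make the idea of cryptographic identity validation concrete, here is a minimal sketch using the PyNaCl library: the sender signs each message with a private key, and the receiver verifies the signature against the sender's known public key, so a message that was tampered with in transit, or that came from someone else, is rejected. Key distribution and storage are deliberately simplified; this is not BlackBerry's implementation, just the general technique.

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Sender side: a long-term identity key pair (generated once, not per message).
signing_key = SigningKey.generate()
public_key = signing_key.verify_key  # shared with contacts ahead of time

# Every outgoing message is signed with the sender's private key.
signed_message = signing_key.sign(b"Meeting moved to 3pm, same room.")

# Receiver side: verify against the sender's known public key.
try:
    original = public_key.verify(signed_message)
    print("Verified from expected sender:", original.decode())
except BadSignatureError:
    print("Signature check failed: message altered or not from this sender.")
```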

Karissa Breen [00:16:43]:
I’ve spoken a bit on the show about, like, content provenance. I don’t know how much you know about that, but obviously that’s the process of tracing and verifying the origin and the history of digital content. So where does that sort of sit with you then? Apparently that’s going to be a thing, and it’s going to start becoming more sort of ubiquitous now, especially as we’re online and looking at content all day.

David Wiseman [00:17:04]:
Content provenance is the source of the content. Now, if you think about, let’s say, a messaging type of application like WhatsApp as an example, one thing is you’re typing a message or you’re leaving a voice message or something like that. So there’s the content provenance of that information that you created right then and there. But then you attach a photo, you attach a document, you attach a link to a website, some content that’s not being generated in the moment. I think that’s probably where a lot of your research and conversations have gone: how do I know this is a real document? How do I know this is a real photo? So that’s an area where there’s a lot more research and a lot more work that needs to be done. AI tools can help with that analysis. So probably that analysis starts getting done more often, and maybe over time content is, we’ll call it, digitally watermarked in a way to do that. But, you know, the things we’re working on at BlackBerry are more about that in-the-moment data, the message, the voice, and being confident that when you get it delivered, it came from who you thought it came from, and it was what they said or wrote.

David Wiseman [00:18:13]:
If they’re including attachments and things like that, currently that’s a whole other area of risk that needs to be evaluated and understood, and there’s a lot of research in that area.

Karissa Breen [00:18:24]:
So, David, you’ve probably heard people in industry say, well, that’s not a tech problem, that’s a process problem. So where does your mind go there when we’re talking about combating all of these, you know, deep fakes that are online in the moment, just so people understand what we’re talking about, not just content that’s online in terms of the origin? And I mean, there are people out there saying, oh, you know, we just keep putting a band-aid solution on, we’re just throwing tech at it, but it’s a process problem. So what do you sort of think about when I ask you that question?

David Wiseman [00:18:54]:
I think about two things. One, I go back to where I started originally, which is people just have to be more aware personally and be more skeptical. That’s just a behavioral thing that people are going to have to learn and adopt over time. The other part about that, though, is the process part needs to be built into the tools that people are using. And that means that people need to have a better understanding of what are the actual tools that we’re using, and what are they doing in this area. Not just, hey, this is the coolest new app. Just like there are evaluations of whether there’s malware and things like that in an application or in a system, or whether there are other security problems, probably a standard evaluation over time has to be, you know, how do they deal with evaluating the content that’s moving through that system? So the process problem, you can’t solve that just at the level of every person’s personal process or every company’s business process. You do have to solve that part, I think, at a technology level.

Karissa Breen [00:19:51]:
So going back to your comment around being skeptical, that people should be more skeptical. Do you think people are skeptical? And, I mean, it depends on who you ask, depends on the person, I understand that, but what do you generally sort of think nowadays? Because people are hearing more about these scams, and people are a little bit more fearful, perhaps, than back in the day. Where does that sort of sit with you?

David Wiseman [00:20:09]:
I definitely think people are more skeptical. Everyone talks about things like fake news and all this stuff. So I think, you know, a lot of people are in general skeptical. They’re not sure they trust anything. But what that means to me is that it introduces kind of a new problem. And the new problem is: something is important and you need to share that important information with the broad public. How do you do that now, when they have that super high level of skepticism? So that’s a challenge.

David Wiseman [00:20:38]:
Whether it’s a government organization, whether it’s a business, whether it’s a community organization, we’ll call it the cost, and not just in money but in a number of other factors, to your organization of people not trusting information becomes very high. And that can have a lot of negative impacts, both for the organization but also for the recipients. Maybe someone doesn’t trust the message that says, hey, there’s a flood coming, so you need to evacuate. They don’t trust it anymore. So I think from a society perspective, that’s a big challenge.

Karissa Breen [00:21:13]:
So what do you think’s worse: too much trust or not enough trust?

David Wiseman [00:21:17]:
I think right now, the state we’re in right now, too much trust. But we could easily, in certain areas, be getting to a tipping point of going the other way.

Karissa Breen [00:21:24]:
Of going the other way?

David Wiseman [00:21:27]:
Which then gets me back, unfortunately, a little bit to where I started. Part of the way that people develop trust is they trust the channels that they’re using, which means they need to feel they have more control over them. Which is, you know, why I think people should segment different types of things to different channels. Things that are super casual, you know, who really cares? Things that are more serious, you know, use a channel that’s going to remind you of the seriousness of what you’re discussing or communicating about, and then develop more of a trust in that channel.

Karissa Breen [00:22:01]:
Do you think as well, people sort of just fob off the whole situation by saying, oh, well, it won’t happen to me, David, because who am I? I’m just, you know, an analyst in a government agency. Like, I’m no one important. I hear this a lot from companies and businesses, and maybe they were accidentally, you know, randomly targeted. Maybe it wasn’t some sophisticated attack, but it happens. So do you think that people sort of try to obfuscate the problem by just saying that?

David Wiseman [00:22:27]:
That’s just core human nature. Right? The easiest way to not be too stressed about something is saying, it’s not going to happen to me. And I think the way people need to think about it is, it could be happening to you directly, or it could be happening through you to impact someone else directly. So if you’re just an analyst in a government agency, do they really care that much about you personally? Maybe not. But maybe they care about the fact that if they can hijack your identity, they can then use your identity to communicate with other parties who are the real targets. In the end they may be targeting, you know, a very senior official, but they’ve now got a way to get to them where the information that person receives is going to seem trusted. So, you know, it’s not only about you, it’s about the people you interact with.

Karissa Breen [00:23:15]:
So do you think, from your experience, given your role, that people are starting to think about this now? They’re thinking about it more, not necessarily since the last time I spoke to you, but just more so than ever, probably since the Signal situation and things that are happening now more online with the whole, you know, AI voice spoofing, et cetera. Where does that sort of sit with you?

David Wiseman [00:23:37]:
I think they are thinking about it more, you know, just in the past six months. And it really started more with the Typhoon attacks last summer and fall, and then it bridged into the Signal situation. I have a lot more organizations reaching out to me in countries around the world to say, hey, can you talk to us about this? We’d like to understand the risk and what we could do. Before, it was more us trying to get their attention, wave the flag a little bit, hey, you should look at this. And now it’s, hey, explain it to us. So I think that’s a big difference. And this is just a sample point of myself and BlackBerry, but I would assume the same type of thing is, you know, happening with other organizations.

David Wiseman [00:24:19]:
And it’s not even always the organizations that traditionally would have an interest in this. Call it everyday types of businesses, businesses with a few hundred employees, but, you know, their executives have started to really focus on risk in this area a lot more.

Karissa Breen [00:24:36]:
So I want to do a slight gear change and talk about the use of AI to mine metadata. So how does this work? And would you also say this is something, again, that people overlook or don’t really actively think about?

David Wiseman [00:24:51]:
Yeah, I think, you know, metadata is probably one of those words where, when people first hear it, they kind of roll their eyes a little bit. But at the same time, when you explain it to people a little bit, the concept that your phone number is basically your identity, that kind of makes sense. And hey, you probably have your phone number for the next 30 years. And if someone can get a hold of that, then they can target you in lots of different ways. So I think people understand that concept. There was just another situation with the pizza metadata. You know about the pizza metadata in Washington, D.C.?

David Wiseman [00:25:24]:
No? It turns out that whenever there’s some situation going on in the middle of the night and all of a sudden large numbers of pizzas are being delivered to the White House complex, something’s going on. And that just happened a couple of days ago with some of the things that are going on in the Middle East right now. So that’s another form of metadata, right? By knowing who’s communicating with whom, the pizza guy’s bringing something to the White House and there’s a change in that behavior, you can make some assumptions. And so I think people understand that. And when you talk to them about the fact that this is now becoming much, much easier to do, that the AI tools are able to do this in a very effective way, and you start to tie it back to things like the spam texts they get, I think people understand it. But I’m not sure they personally know what they can do about it at this point in time. But that’s really one of the real drivers why the government agencies that we’re talking to are more and more focused on not just protecting the communications with encryption, but protecting the metadata that’s encompassed inside of those. And then I think we start to see the same thing in businesses.

David Wiseman [00:26:35]:
More interest in the metadata topic. Whereas before it was more, I need to keep the records for legal compliance reasons, now the conversation is beyond that, and it’s also about, we need to protect that information and have control over it ourselves. But I think from a consumer’s viewpoint, some of the things that governments can do is work with the technology companies to have tools and settings that are easy for consumers to understand and allow them to self-protect more. And from a government perspective, it’s about making sure the tools that government employees are using have those protections built in from the start.
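
To illustrate why metadata alone is revealing, here is a small self-contained Python sketch in the spirit of the pizza example: it compares who-contacts-whom frequencies against a baseline and flags unusual spikes, with no access to message content. The records and threshold are invented for illustration.

```python
from collections import Counter

# Metadata only: (sender, receiver) contact counts, no content at all.
baseline = Counter({("pizza_shop", "white_house"): 2,
                    ("analyst_a", "analyst_b"): 40})
last_night = Counter({("pizza_shop", "white_house"): 15,
                      ("analyst_a", "analyst_b"): 38})

SPIKE_FACTOR = 3  # invented threshold: flag 3x the baseline rate

for pair, count in last_night.items():
    usual = baseline.get(pair, 0)
    if usual and count >= SPIKE_FACTOR * usual:
        print(f"Unusual activity {pair}: {count} contacts vs {usual} baseline")
# Even this toy comparison reveals that *something* is happening,
# which is exactly the inference that AI-driven metadata mining automates at scale.
```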

Karissa Breen [00:27:15]:
So going on to spam texts, there was a point in time here in Australia where it got really bad. They’ve definitely made some improvements. Like, I was getting them all the time; I maybe get like one a day now, spam calls or texts as well. Do you think that people again don’t think about why this is happening? Like, the reason why I ask that is, where does that responsibility sort of sit? I mean, I’ve asked people this question on the show and they’ll give you varying answers. But obviously these spam texts, you know, lend themselves into the metadata. So then who should be managing this, really, David? Because I’ve had a lot of people complain about it, and even when you read, the consumers are complaining about it as well. Like, I feel like there are people out there where it’s sort of, well, it’s everyone’s problem, and it’s the telco, and it’s these providers, but I don’t know how we make sense of that.

David Wiseman [00:28:07]:
Telecom networks, consumer messaging app networks. At the end of the day, the number one design goal is anyone can reach anyone in a very easy manner. That’s the fundamental design goal. And if that wasn’t the case, then they wouldn’t be as attractive to people as a platform. However, the side effect is, you know, the things we’re talking about right now, like the spam texts. One thing I would say, and we talked a little bit earlier about AI and its impact: a lot of the spam texts that you historically have gotten, you could tell they’re mass-market types of things. And so, you know, hey, okay, well, they’re just going to send them out, and some percentage of people are going to respond. Right.

David Wiseman [00:28:47]:
But with AI, those can become much more focused, because the AI is able to process the metadata. So I’ll give you an example. You probably get the text, hey, it’s been a while, how you doing? That’s a common one that comes in. And you say, that’s spam. Or, I’ve got a new phone, I lost my contacts, who is this? But now it’s not really hard, with all the metadata, to say, hey, we were talking at the pizza restaurant last week, I wanted to follow up with you. And that has a much higher chance of working.

David Wiseman [00:29:17]:
And the metadata knew you were in that location. Right. So the potential effectiveness can be a lot higher, and people don’t automatically say, oh, this is fake. Particularly if you’re distracted in the moment, you might respond. So I think when you talk to people about the metadata and the spamming and the AI, that is where it all comes together and becomes much more effective. Now, how do you deal with it, which is your question. It gets a little bit back to what you talked about earlier, about content provenance.

David Wiseman [00:29:51]:
The same thing can apply to things like messages. Maybe there has to be some validation service that they go through. I almost think of it like the postal system for emails or messages: if there’s some nominal validation, if there’s some nominal fee that people have to pay, the volume just goes down, because the economics change. You know, if you think about junk mail, whenever they raise the price on junk mail, all those people complain, because it’s going to change the economics of how many, you know, catalogs they mail out. People might have to start thinking about that. And I talked about this idea of channels, and how do you trust channels? You know, maybe there are channels that are free to publish on, and then maybe there are channels where someone has to pay a fee. It’s not like a news site, where you paid a fee to read it.

David Wiseman [00:30:43]:
Maybe it’s the other way around, where someone had to pay a fee to deliver that content. And maybe a model like that is something that needs to start getting some consideration.
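
The economics argument is easy to make concrete. Here is a toy Python calculation, with all numbers invented for illustration, showing how even a tiny per-message fee can flip a mass spam campaign from profitable to unprofitable:

```python
# Toy spam economics: every number here is invented for illustration.
messages_sent = 10_000_000
response_rate = 0.0001          # 1 in 10,000 recipients respond
revenue_per_victim = 50.0       # dollars extracted per successful scam

revenue = messages_sent * response_rate * revenue_per_victim  # $50,000

for fee in (0.0, 0.001, 0.01):  # per-message sending fee in dollars
    cost = messages_sent * fee
    print(f"fee ${fee:.3f}/msg -> profit ${revenue - cost:,.0f}")
# fee $0.000/msg -> profit $50,000
# fee $0.001/msg -> profit $40,000
# fee $0.010/msg -> profit $-50,000
```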

Karissa Breen [00:30:53]:
So going back to the spam stuff just for a moment. And yes, I know there’s those, you know, you get added to some random group chat about some crypto, and straight away so many people are leaving. So, you know, it’s obviously spam, and I get out of it pretty quickly. So then, a question on that. I report a lot of these, you know, as spam. Do you think anything actually happens with that? Like, okay, well, we’ve got, you know, 50,000 requests because it was reported as spam. Because then they can just sort of spin up new accounts, get new numbers and keep doing it.

Karissa Breen [00:31:23]:
So it’s like you kind of just kick the can down the road. I know it’s not an easy thing to combat, but are these platforms really doing anything about it, or is it more just, you know, placating to go, yes, Karissa, we’ve heard you, we’ve reported the spam, and then that’s the end of it?

David Wiseman [00:31:37]:
It’s more to make people feel like something’s being done, because, you know, even if you shut it down, as you mentioned, instantly there’s another one created. I think it’s more of a feel-good mechanism, or a, hey, if someone comes after us and tries to bring legal cases and everything, we can say we tried to do something. That’s my personal opinion. That’s not a BlackBerry opinion. That’s my opinion.

Karissa Breen [00:31:57]:
Okay, so then to extend that a little bit more. It’s not uncommon for me, considering my role, to message people on Signal, Telegram, WhatsApp, because they’re in other parts of the world, like yourself, David. So people are messaging me, you know, all the time. And sometimes it’s quicker to get to me that way than reading my emails, for example. So it’s not uncommon for people to get a message from me at random times of the day, because I’m either working US hours or UK hours or whatever. But then I’m thinking, well, hang on, I’ve got my photo on WhatsApp, which you could probably get off the Internet. It’s not uncommon for, you know, people to sort of understand my verbiage and vernacular, considering all these podcasts I’ve done that are out there. And I’m just using myself as an example. How easy then would it be for people to sort of impersonate me? Because I don’t want to pick on anyone, because, you know, I do use those platforms, because it’s convenient for me and people in different parts of the world, and it’s faster.

Karissa Breen [00:32:51]:
How easy then would it be? Because I’m also aware that, you know, there are some wiretapping services out there that we maybe sort of touched on last time. But I really want to get into this, because I mean, there’s not a person out there that I know, David, who doesn’t use any of those platforms day to day, especially for personal stuff.

David Wiseman [00:33:08]:
Unfortunately, it’d be pretty easy. I showed you the video earlier where we did the voice thing. You combine that with the ability to intercept a communication channel. I can’t remember if we talked about this last time or not, but Google’s threat research group had put out a report fairly recently where Russian intelligence services were using the ability on Signal to have your desktop connected to your phone, so you had messages in both places. They were able to use that to reach that desktop and assume that identity. If someone can assume that identity, then it’s easy enough to generate content that seems like you. And you can probably even do the disconnect, so you’re not getting a copy on your phone right away. I think that’s scary in a sense. Now, that’s a state actor.

David Wiseman [00:33:57]:
It’s probably going to be done in a very targeted manner. But the other part about this is the hacking-as-a-service that we had talked about. And the idea here, we’ve talked several times about how your phone number is basically your identity in most of these systems. Since the phone system’s designed for global connectivity, any phone carrier can, in a sense, say, hey, this number’s on my network right now, route everything through my network, and you don’t get a message about it. Whenever your phone jumps to another network, that’s all invisible to you, and then they just hairpin it back. So even your phone company doesn’t really realize anything’s going on. It’s just using some low-level signaling in the phone networks. So what does this mean? It means, you know, consider a phone company that’s a little unscrupulous.

David Wiseman [00:34:49]:
And there are, you know, thousands of these registered companies around the world. So there are plenty of them that are unscrupulous in some sense. They can reroute any number that’s requested. And that’s how these systems work. You go onto a website, you put in your credit card, you know, you pay your $50, and they’re going to reroute, you know, this number for you for a month. So as soon as that number is rerouted, now you have the ability to actually re-register as that person on a platform and start communicating as that person through their real account, which is even scarier. Or you have the ability to mirror that data, so you can read everything they’re talking about. Which is probably the motivation for the people that want to spend 50 bucks a month, because they want to listen in on their, I don’t know, former friend’s conversations or something.

David Wiseman [00:35:37]:
Right. It’s pretty disturbing when you think about it. I’m not sure what you can do as an individual, but, you know, from a government, from a business perspective, that gets back to, you’ve got to use something where it’s an identity that you control, where there’s cryptography around it, where it’s not just your phone number as your identity. But yeah, these services, you know, I think they even have a customer support desk; if it’s not working right, you can open a ticket. It’s a business built for scale, and it’s priced for scale.
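
One way messaging tools decouple identity from the phone number, in the spirit of what David describes, is a comparable fingerprint computed over both parties' public keys, similar in idea to Signal's safety numbers. Here is a minimal Python sketch, with the key bytes invented for illustration; real systems derive and display this far more carefully.

```python
import hashlib

def session_fingerprint(my_public_key: bytes, their_public_key: bytes) -> str:
    # Order the keys so both sides compute the identical value.
    material = b"".join(sorted([my_public_key, their_public_key]))
    digest = hashlib.sha256(material).hexdigest()
    # Render a short, human-comparable code (like a safety number).
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# Invented key bytes for illustration; real keys come from the protocol.
alice_pub = bytes.fromhex("11" * 32)
bob_pub = bytes.fromhex("22" * 32)

# Both parties compute the same code and compare it over a second channel
# (in person, on a call). A SIM swap or number reroute does not change the
# keys, so a mismatched code exposes an imposter.
print("Compare out loud:", session_fingerprint(alice_pub, bob_pub))
```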

Karissa Breen [00:36:06]:
So then, in terms of, again going back to my commentary on responsibility, will this gap start to close? Because I mean, I’ve spoken to people about this before, and they’re like, well, there should be pressure on the telcos, because ultimately they’re the ones providing these numbers and these services, et cetera. But then people have said, well, it’s not regulated, and when it is, they’re going to pay more money and all of these sorts of things. Right. So where do you think that’ll get to in the future? Because this keeps happening, and again, going back to trust, people are going to lose trust, and they are losing trust because of certain breaches that have happened here in Australia. And then don’t you think the government’s going to step in and say, well, we’ve got to get some regulation around this? We just can’t have this happening. Like, this is not good enough.

David Wiseman [00:36:45]:
Yeah. You know, and the government regulation angle can go in a couple of directions. So one direction it can go in, and we see this in Australia, the UK, other countries, is, you know, there’s always got to be a back door. Right. Another way it can go is, we actually do want your number to be your identity. There have been consumer messaging services in the past where there was no real tie at all to a person, and you can imagine the bad things that have happened as a result of that. So that’s kind of one aspect of it.

David Wiseman [00:37:18]:
The other aspect of it, I think, from a regulation perspective, is that it’s going to require a lot of joint international work to really be in a position to fundamentally stop these types of things, because they’re cross-border and people are everywhere. You’ve got to get that aspect. I don’t think regulation’s the answer. It might help, and there could be controls, you know, rules on these types of systems, things they have to do. But sometimes, from a consumer viewpoint, from a privacy viewpoint, they could be positive, or they might not be that positive from the end user’s perspective.

Karissa Breen [00:37:57]:
So David, given everything that we’ve spoken about today, and we’ve covered a lot of ground yet again, do you have any sort of closing comments or final thoughts you’d like to leave our audience with?

David Wiseman [00:38:05]:
Today we talked about things where everyone can be really worried, but at the end of the day, you still have to communicate with people, right? You can’t give that up. And so my advice on that is, try to be aware of what you’re communicating, whom you’re communicating with, and over what channel, and try, as I talked about before, to kind of segregate, we’ll call it, your business life and your personal life and those communications. That’s at least something you can do: if one’s affected, the other isn’t. But I also think, think about what are some ways, when something seems off, what’s the second way you have to communicate with that person to really validate it? Maybe you always have at least two or three ways to communicate with someone on the important topics. And that’s probably my best advice.
