There’s been a lot of publicity in the market about Zoom. A lot of companies are jumping on it as their platform of choice, a fundamental tool for connecting with their staff and customers; totally understandable considering where things are at in the world at the moment.
I wanted to take a closer look at Zoom from a security and privacy standpoint, so I sought out informed perspectives to shed some light on the topic. A lot of the time we trade privacy for convenience, so I was curious to see if this is what’s actually going on with Zoom…
I managed to speak with Laura Norén. Laura is VP of Privacy and Trust at Obsidian Security, a cybersecurity company based in Newport Beach, where she runs privacy operations. Norén is a frequent speaker on data ethics, privacy legislation, and the social impact of data-driven technologies, and she continues to publish in academic journals as well as in the popular press. She holds undergraduate degrees from MIT and a PhD from NYU, where she completed a Moore-Sloan postdoc at the Center for Data Science. She knows this space. The below is a transcription of our conversation.
KB: Can you provide a view on the Zoom landscape? Why do you believe so many people have adopted Zoom so readily over other vendors?
LN: With everyone working from home and prohibited from traveling, people need a way to stay connected to one another. Zoom has been designed to be as frictionless as possible, which has made it easy to adopt. It’s also able to handle meetings with more than 50 people, something that most companies, universities, and hospitals need to keep their organizations in sync as situations change rapidly. Zoom can handle very large calls – 300 people – with an optional webinar feature that can accommodate even larger audiences. Some competing video conferencing applications can’t handle large meetings and webinars, which is one reason Zoom has been so popular.
KB: Zoom has fixed a lot of vulnerabilities in recent times. Can you articulate what they were, and perhaps opine on why they went unaddressed for so long?
LN: Zoom has committed to spending 90 days of engineering cycles addressing security and privacy vulnerabilities. Of course, I was not involved in any decisions made at Zoom, but it appears that they made decisions to prioritize frictionless use over security and privacy considerations. Those ease-of-use features probably have contributed to their success during the coronavirus at-home posture even as they have raised some red flags and caused certain customers like the New York City public schools to move to competitors like Microsoft Teams.
KB: There were allegations that Zoom was selling data to Facebook, though it has since revoked the access that made that possible. Why do you believe they were doing this in the first place?
LN: The Zoom phone app – not the overall Zoom platform – was sharing data with Facebook, likely because Facebook (and Google) represent the dominant ad networks of the current moment. I cannot speculate on Zoom’s business model around its phone app, but obviously phone apps are able to generate richer location data than desktop or laptop usage. Perhaps Zoom knew or suspected that the phone app would be used by consumers more than business clients and sought to support free users through better placement of targeted ads. I really don’t know, but Zoom has curtailed that relationship (though one notes that Facebook is a competitor to Zoom for those users and it may have been in Zoom’s best interest to shut that down, as well).
KB: What are your concerns, from a security perspective, about people and companies utilising Zoom?
LN: We are worried about a couple of things. For universities, we do worry about “zoombombing”, which is often likely to be an insider threat. Some students may get lolz out of disrupting their classes or meetings of diversity, equity, and inclusion groups with racist, sexual, or other offensive material. What is funny to certain students is deeply offensive and even traumatizing to others. We consider these to be security threats as well as threats to the social fabric of organizations. Getting a very clear view of how Zoom meetings are configured, who has attended, whether the name field matches what you’d expect given the email field, and whether there are outlying locations – all of these can be helpful in tracking down zoombombers. That use case has so far been fairly specific to universities and to certain types of open webinars. Users are getting better at avoiding these things by refraining from posting Zoom meeting links openly on websites and in social media, but if insiders post to Reddit, there are still avenues for zoombombers.
We’re also concerned about a quieter exploit of the same vulnerability. Someone may gain access to sensitive meetings – including board meetings – that they should not be in. Rather than make a big splash like a zoombomber might, this type of malicious participant may lurk as quietly as possible, maybe calling in rather than joining via laptop, changing their name to match the name of an employee at the company, arriving a little late, leaving a little early, and listening to what’s being said. Zoom does allow recordings to be made without a notification feature or an assertive ‘recording in progress’ change to the UI, so we’re watching those recording behaviors, too.
Zoom can also be used to transfer files. Since very few companies are able to monitor file transfers occurring through this route, it is a known vulnerability, though I am not able to point to any specific exploits.
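As an aside, the participant-vetting heuristics Laura describes above – checking that the name field plausibly matches the email field, and watching for outlying join locations – are simple to automate against a meeting roster export. Below is a minimal sketch in Python; the Attendee fields, the EXPECTED_COUNTRIES allowlist, and the example roster are illustrative assumptions on my part, not Zoom’s actual reporting schema or API.

```python
from dataclasses import dataclass

@dataclass
class Attendee:
    display_name: str  # name shown in the meeting roster
    email: str         # email captured at join time, if any
    country: str       # coarse join location, e.g. from IP geolocation

# Assumption: the countries your participants normally join from.
EXPECTED_COUNTRIES = {"US", "AU"}

def name_matches_email(a: Attendee) -> bool:
    """Loose check that the display name plausibly appears in the email local part."""
    local_part = a.email.split("@")[0].lower()
    tokens = [t for t in a.display_name.lower().split() if len(t) > 2]
    return any(t in local_part for t in tokens)

def flag_suspicious(attendees):
    """Return (attendee, reason) pairs worth a manual review."""
    flags = []
    for a in attendees:
        if not name_matches_email(a):
            flags.append((a, "display name does not match email"))
        if a.country not in EXPECTED_COUNTRIES:
            flags.append((a, f"unexpected join location: {a.country}"))
    return flags

if __name__ == "__main__":
    # Hypothetical roster: the second entry trips both heuristics.
    roster = [
        Attendee("Jane Smith", "jane.smith@example.com", "US"),
        Attendee("Jane Smith", "throwaway123@example.net", "RU"),
    ]
    for attendee, reason in flag_suspicious(roster):
        print(f"REVIEW {attendee.display_name} <{attendee.email}>: {reason}")
```

A real deployment would pull the roster from your meeting reports and tune the heuristics to your organisation – nicknames, shared accounts, and travelling staff will all produce false positives – so treat the output as a review queue rather than a verdict.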
KB: Can you talk to me about your concerns when it comes to privacy? Do you still believe people aren’t aware of the risks?
LN: Zoom is being used in highly regulated fields like health care. Of course, we want everyone to be able to access their doctors and therapists even if they can’t meet in person. But because of the feature-rich environment and the difficulty of educating every single user on best configuration practice, we are worried that there may be violations of everything from HIPAA and FERPA to wiretapping laws that mandate that every single participant on a call must consent prior to recording. Every organization is responsible for using Zoom in compliance with the regulations that govern their states and their industries, but Zoom has often been adopted so quickly, and has so many features that could enable privacy violations, that we are worried about the consequences of near-instantaneous adoption.

Our educational customers are not legally allowed to tweet screen caps of all the students – names or faces – on their Zoom calls without the explicit consent of each student. But it has happened, usually out of an abundance of good cheer. However, with the increasing use of facial recognition, facial images are more valuable and more sensitive than they have been in the past. Clearview AI, for instance, makes a tool that allows for facial search – upload a photo and find all photos of that person ever posted to the internet. This can potentially out some employees as sex workers, as having committed past crimes, and so forth, which can raise a host of concerns.
Again, I cannot point to any instances in which facial images from Zoom have been misused in this way, but we are aware of the heightened privacy risks associated with facial imagery that is part of the core Zoom functionality.
KB: How would you approach implementing specific security controls – for companies and individuals alike – to ensure better security when communicating online?
LN: The security and privacy controls required to ensure compliance with local laws and ethical imperatives are best achieved within frameworks that are flexible and robust enough to encompass a huge range of typical practices. Then, within each application, organization, and industry, specific best practices and technological affordances can be developed to address those legal and ethical frameworks in meaningful ways.
KB: What questions should organisations start asking before adopting a new technology?
LN: As a privacy professional working in a tech startup, I’m a huge advocate for having a technically knowledgeable privacy specialist on board from the early days. Ditto when it comes to security. It’s very difficult to “fix” privacy and security after the fact. True respect for and adherence to privacy and security expectations cannot just be attestations or bolt-ons. Meaningful respect for privacy and security generally has to be baked into the engineering architecture, the collection of trusted vendors, the data that is collected (which is always more than the data displayed to users), and the decision tree model of organizational leaders. There is no post hoc procedure that can begin to compare to the strength of an always-present privacy-by-design approach and a zero-trust orientation towards security.
Because privacy and security are aligning more closely with what some (sophisticated) customers want, we may see an increase in hiring for privacy and security specialties going forward. For startups, the trick is to find employees who are able to wear multiple hats. This has always been true. That collection of hats needs to include privacy and security hats…and the privacy hat cannot be worn by outside counsel who is either too expensive or not technical enough (or both) to provide meaningful day-to-day oversight with respect to marketing, engineering, and product privacy needs.
Laura can be reached on her LinkedIn Profile for further comment: https://www.linkedin.com/in/laura-noren-93a846145/