Bob Huber, Chief Security Officer and Head of Research, Tenable
These transcriptions are automatically generated. Please excuse any errors in the text.
You are listening to KBKast, the cybersecurity podcast for all executives, cutting through the jargon and hype to understand the landscape where risk and technology meet. Now here's your host, Karissa Breen.
Tenable detected two critical vulnerabilities in Microsoft's Azure platform in March and alerted Microsoft. Tenable assessed the severity of the vulnerabilities, both of which could be exploited by anyone utilising the Azure Synapse service. Microsoft discreetly patched one problem while downplaying the risk to users. It wasn't until they learned that Tenable was going public that their story changed: they privately admitted the severity of the security issue 89 days after the initial vulnerability notification. So today I'm interviewing Bob Huber, Chief Security Officer and Head of Research at Tenable, on the show to hear their side of the story. Hey Bob, welcome to the show. Thanks for making time.
Yeah, thanks so much for having me, I appreciate it.
So Bob, at a high level, can you talk me through what happened?
Absolutely. So let me start with a little bit of background on how research operates within Tenable. We have a team of folks we call the Zero Day team as a part of research, and they're tasked with identifying zero day vulnerabilities across a myriad of solutions. They are funded and resourced to do that research, and one of the areas that we targeted this year was the cloud service providers, and specifically in this case Microsoft and their Azure capabilities. The researchers are free to target whatever components of the platform they're interested in based on their own capabilities and desires, and in this case they targeted Microsoft's Azure Synapse service. It's an analytics capability that Microsoft has, which is a managed service. Right. You don't run the services yourself, they provide the services for you. And the researchers identified a few vulnerabilities, as you mentioned, in the platform. Being good industry citizens, we follow a responsible disclosure process. So of course we identify the vulnerabilities, we send those to Microsoft and provide them all the details so they can in essence simulate the vulnerabilities we've identified and validate that they are indeed vulnerabilities. It also allows them time to understand what issues this may cause for their larger customer base and gives them an opportunity to address them as well.
Right, so you found these vulnerabilities and then you alerted Microsoft straight away, I'm guessing, then what happened after that?
Yeah, unfortunately there's a lot of back and forth in the way the disclosure process works, certainly for industry in general, and security and Microsoft as well. Most organisations have a 90 day disclosure policy, the reason being we want folks to be able to review the data we're providing them. Like I said, validate it's an issue, relay the concerns, make fixes or patches if they can, and then also give folks an opportunity to inform their customers or their user base as well. And like I said, that's very standard practice across our industry. In honesty, what seemed to happen here is the communication process failed. Our researchers, this is what they do every day, so they're used to back and forth dialogue with lots of different organisations. Admittedly some are hostile; that wasn't the case here. They weren't hostile, they just weren't responsive. So once we entered what I would consider to be the communication process and the disclosure period, it took us quite some time to receive attention and actually get some of our questions answered. And unfortunately for us, it also took some time for them to ask us questions to get additional detail.
What you want to see happen, when somebody reaches out to you, so if I'm on the receiving end, is you want to quickly acknowledge, you want to interact with the researchers very quickly to understand their posture, how they're looking at the problem. And the idea is the research is going to help you understand what the possible impact is to your organisation or your enterprise or your service. What we came to find out is we had a lot of back and forth, but it was a lot of meaningless dialogue. We didn't really get answers to the questions that we were putting to Microsoft, and we didn't get a lot in the way of two way dialogue back, of them asking us questions or looking for additional information. They initially classified the vulnerabilities not as vulnerabilities but more as, hey, these are things you should do for best practice. So almost more like, you should do this for a better configuration. It would be a more secure configuration, but not necessarily a vulnerability. And even for us to get to that point took quite some time, to even get down to the fact that they didn't view it so much as a critical vulnerability, but maybe more as a configuration
best practice. And a lot of frustration, I'm sure, on both sides between our researchers and, I would imagine, Microsoft as well, that the dialogue wasn't more productive and it wasn't more clear and concise. But like I said, as we lay out that 90 day disclosure period, we're marching towards a deadline, so the timer starts immediately from 90 days and counts backwards. And like I said, it's generally accepted practice across industry that after 90 days, whether they fix it, don't fix it, or don't even acknowledge the issue whatsoever, the convention is that the company that has the researchers can publicly disclose, so we can make other folks aware of the issue if the actual service provider is not going to do that.
Yeah, fair. So just to confirm, the 89 days I mentioned in my introduction, was that the first conversation you had? Or was it before then and you sort of didn't get anywhere, and then the story changed at the 89 day mark?
Yeah. So of course we did get some acknowledgement, and admittedly even that took days. It might have taken even longer than a week to get initial acknowledgement. So they acknowledged, hey, you reported the issue. That's good, that's what we like to see. And then it took several weeks again before they acknowledged that, hey, this is more of a best practice recommendation or configuration, it wasn't so much a security issue. In fact, they actually did tell us that it wasn't a critical security issue. And when you're looking at the vulnerabilities coming in, or the issues being reported to you inbound, you look at the criticality and you have to rack and stack and decide where you dedicate resources and how fast you respond. So I'm sure, and I'm not going to make excuses for Microsoft by any means, but I'm sure they look at it, somebody makes an initial classification that it's more of a best practice recommendation or configuration, and then they move on to the more pressing issues. Like I said, I can't speak for them, but that is feasible. I'm sure they get many vulnerabilities reported per day and per week.
So it took some time before we got the feedback of, hey, this isn't a big deal, in essence. Now, as we got closer to the 90 day period expiring, somebody obviously reviewed the case. If I were guessing, they probably track all these vulnerabilities and where they're at in the 90 day time window, and I would imagine as it gets closer to 90 days, people take another look. Somebody took another look and said, hey, you know what, this isn't just a best practice configuration recommendation, it actually is a security vulnerability. So unfortunately that came to us at what I would call the last minute. And we're pretty amenable to working with partners that approach the 90 day window, as long as they're working with us. So if you continue to have a dialogue, we can extend a day, or even a week or two weeks, as long as we have a dialogue and they're making meaningful progress and there actually is a risk that we're trying to help them address as well.
Have you reported issues in the past with Microsoft where they've come back quite quickly? Is this just the first time they've been like this, or are they sort of repeat offenders? What's your experience?
Yeah, so we certainly have. But I will say this is the first time we've done it for some of their cloud services. So this was our first interaction on the cloud services; we've done it before with their traditional enterprise products. And generally the experience has been pretty positive for us. They immediately acknowledge, they reach out to us, they request detail, we have recurring dialogue, we work together to address the issue, we test patches for them where we can. So in general it's worked very well. But I will note that a couple of our peers, other companies, also more recently found issues in some of Microsoft's cloud services, and their story very much matches ours, where the issues were downplayed and the dialogue wasn't there. And even more recently with Follina, which is another Microsoft vulnerability, when that was discovered it was a very similar story. So I can't say it's a repetitive pattern for Tenable. We've not had this experience more than once that I'm aware of in my time here. But for sure we can point to other peers that more recently did have the experience. And Microsoft has their Security Response Centre. You would think this would be a pretty polished process across the enterprise.
This just didn't start happening yesterday. This has been going on quite some time. So I think what's unfortunate is, I would say historically they've been pretty good at this. All the recent stuff has been cloud based, so that's a little bit different. So maybe it's a maturity issue with the Security Response Centre and how they work with their cloud services. That's some speculation on my side, I don't know, but it has been a recurring pattern between us and other peers.
And specific to the cloud services. Yes, exactly right. Okay. So would you be able to perhaps reiterate what the risk to users is specifically?
Yeah, of course. The way I like to describe any cloud service provider, Microsoft or any other, is that when they provide managed services for you, you're just getting a service. If you think of a restaurant, everything happens behind the counter. You don't worry about it; somebody's doing that for you, and they deliver a service to you. In essence that's what this service is. I'm paying them for the service, and Microsoft runs and maintains and creates everything that is required, the infrastructure, to run that service. We have an outside in view. And the vulnerability we identified actually allowed our researchers to escalate privilege to an administrative user, which is not something a user of a managed service would normally have. The issue with that is now we can actually peek under the covers, if you will, and we can kind of see how things work behind the counter, using my example of a restaurant, and understand what other components make up that system. That in itself is a severity issue, where we can escalate privileges and kind of see what I call the fabric of a cloud service.
What becomes interesting is whether you can use that to do other nefarious things. If I could use the escalated privilege to gain additional access within their environment, obviously that's an issue, but probably more pressing is if I can use that to gain access to additional customers or, as they would say, cross tenant capability. Now, I'll note, having said that, I'm not saying we did that, because we have to follow their rules of engagement for their managed service. When you sign up to use the service, they have a detailed list they call the rules of engagement. It's fairly friendly to penetration testers and researchers, but it does prohibit you from testing things that would in essence be considered exploits, and certainly cross tenant exploits where you're looking at other customers' data. So that's prohibited by the rules of engagement. One of the issues we run into is, even if we think or feel that we might be able to access other customers' data or move laterally across the environment, the rules prohibit that. So we stay within that realm and do not specifically try to exercise it to see if that's a possibility.
So we're very reliant upon Microsoft to say it is or isn't an issue for their broader customer base.
Wow, okay. Yeah, that makes sense. Since going public, have you had any sort of response back from Microsoft customers about how they're approaching the issue?
We've seen some good dialogue across our social media, and it's really been around transparency of the providers. We're a provider at Tenable; we have a software application as a service. Right. And the idea is to make sure you're transparent, because we're placing a lot of trust in the providers, such as ourselves and such as the cloud service providers. Arguably they provide all of your infrastructure and now they provide all of your services. And I think the issue is, if there is a period of vulnerability, and like I said, the normal window is 90 days and most of the folks in the industry understand what that means, you do want to be notified at some point. Right. So the hope is they'll fix it, and that they also tell you they fixed it. Because you want to be aware that at some point in time I did have a risk. I know you fixed the risk now, but did I have it for one day or did I have it for 90 days? Far be it from me to say we're the only organisation that found this vulnerability. We could have been one of many.
Maybe they didn't report it. Maybe they're already using it for their own purposes. Who knows? That's how the world works, right? It's a race. So that's my concern: I have a vulnerability period of anywhere from one day to 90 days. Did they look for active signs that it was actually being exploited, or that somebody else made use of the vulnerability? That's the type of stuff you want to hear and see. We have a couple of more recent examples in industry where that was one of the issues. If you recall the Okta third party breach, it wasn't so much an issue of the fact that they were breached, that is a big concern of course, but it was the period of time that had passed before they notified anybody. People right away wondered, okay, in the intervening time period, what in the world happened? Was data going out the door? Was my data at risk? Was I as a customer at risk? You don't know. And I think that's what industry raised as a point of concern. I have no idea.
That's about the point. So what I'm curious about now, just speaking about large companies like Microsoft, is I'm assuming they would have a policy: 90 days responsible disclosure, this is our policy. For example, if you go and complain to an insurance company about something, they usually will say, okay, here's a copy of our complaints policy, and you read it and it's X amount of pages. Does Microsoft have something like that in place, or is there just a level of maturation that needs to occur here?
Yeah, they absolutely do. This is something they've been doing for years. If you roll back, and I've been at this a long time, if you roll back 20 years when nobody did it, the first thing a company did was threaten you with a lawsuit. Right. There was no process or procedure to respond to these types of issues. Now it's common practice, and Microsoft in many senses has led the way. It's fair enough to say they have, and generally they have been pretty good at it. But whether it's specific to the cloud services, or they just need to take another look at the programme again, it just seems like they didn't follow their own process and procedure. So we do know they have one. We've worked with them before on issues not related to cloud, and that has served both of us well. I think in this particular case it didn't fare so well, and I'm not sure if it's related to the cloud, maybe rolling the cloud services under the umbrella of security response, or if it's some other issue.
Yeah, so that's sort of where I was trying to get at with that question. To understand, yes, I've got a policy in place, but it's more so the adherence to it that's been ignored, or it's gone by the wayside, or it's somebody else's problem. That's sort of what I'm hearing from you, is that correct?
Yeah, absolutely. Admittedly, there's just the proliferation of managed services by the cloud providers; they come out with new services, it seems like every day in my world. The next time there's an Azure DevOps conference, or one of the other cloud service providers has their next conference, they release new capabilities and features and new services. It could be as simple as, hey, this is a new service, they didn't have the process and procedures set up to deal with an inbound security issue or security vulnerability, and they weren't integrated into that process. I'm speculating on their behalf, I don't know, but I will say they've released new products on a pretty frequent basis. So it's hard to say if maybe this wasn't practised or tabletopped by that team, a tabletop exercise where they say, hey, we're going to get a zero day vulnerability, what do we do with this? That's something you want to test; you want to make sure that product team also understands how that process works.
Yeah, most definitely. Absolutely. So, just to confirm, Microsoft still hasn't acknowledged this vulnerability to its customers, is that correct?
That is correct, yes.
Why do you think that's the case, though? I've obviously seen several posts and things circulating on social media, even this interview, for example. I get that before, you guys hadn't gone public with it, but because you have, why is there still no response, if you had to guess?
You know, my belief is, and this is where the cloud service providers, or any service provider, add a layer of obscurity: for this vulnerability in particular, I as a user, and the other users, don't have to do anything to fix it. We're not the ones who actually take action to patch it. So I don't know if they felt, hey, you know what, we acknowledged it internally, we've addressed the issue, we've rolled out the patch, we don't need to notify customers specifically because they don't have to take action. I don't know if that's the reason, but I could see that as part of the thinking, that there's no action required. And when they fix it, it's not like it takes them months to roll out the fix; the fix rolls out in a matter of days across the world to all customers impacted. But I think it goes back again to at least being transparent enough. I appreciate that they created a fix for at least one of the issues and rolled it out, but be transparent enough to notify customers regardless, so we understand we might have had some exposure. It may lead to other questions we ask that actually help them with their security.
We may ask them questions like, is there any indication that this was utilised in an attack? What kind of work did you do? What things did you look for? And it would get them to rethink how they look at this event as well. They may come back and say, you know what, we should have done X, Y or Z. We should have given our customers reasonable assurances that we actually did do some work to look for utilisation or exploitation of this vulnerability. I'm not going to say it would be exhaustive, but at least provide some assurances that they were thorough in their investigation and in their response.
So far there's been nothing of that nature being communicated in terms of an investigation? No response?
If there has been, I haven't seen it. I wish I could say otherwise, and unless it's come out in the last few days, I do not believe so. And that's where Okta, for their incident, kind of got some blowback. It took them a long time, but they eventually did respond and they were very forthcoming. I think their Chief Information Security Officer held a few briefings as well, and they posted additional information out to the public regarding the issue they had. Now, obviously that was a breach, so it's different from a vulnerability, but I think there's an opportunity for lessons learned.
So Tenable was saying that Microsoft refused to acknowledge this as a security issue, which you alluded to before, or you sort of said by the 89th day it changed, because maybe it escalated to someone who then reviewed it and changed the status of it. Is there anything in writing that stipulates that Microsoft refused to acknowledge this?
Yeah, we actually do have this in writing. If you were to look at Tenable's technical blog, which we host on Medium, and this is our researchers' raw blog, they do capture the exact quote where they did say they didn't believe it was a security issue per se. And suffice it to say, interestingly enough, our researchers record all dialogue that comes inbound from any organisation they report or disclose to, the reason being there is some risk or liability associated with that. So we do, we save all the dialogue, all the emails, everything we have, just in case somebody were to raise the threat of legal action or liability. We make sure we have our tracks covered and we protect our researchers. So we do have that in writing.
So would you say it's fair to say that perhaps Microsoft are trying to get their ducks in a row before releasing an official statement, considering they are a big organisation and there are many moving parts? I mean, I'm just playing devil's advocate and asking the question, knowing from working in large corporations myself. Yes, in theory it would be great if people could move a lot faster, and I understand the severity of the issue, but perhaps it's just a big moving beast with a lot of employees. Sometimes things get lost or mismanaged or there's a disconnect. What are your thoughts on that?
Yeah, I would totally agree with you. My belief, my personal belief, is this was more likely just a miss. Do I think this is business as usual for them? I don't think so, but they certainly missed in this instance. So I think, as they look back, and maybe with some of this being highlighted in the media once we hit the 90 day mark and went public, it gives them an opportunity to step back and say, how could we have done this better? Maybe it was a newer service, and that service team wasn't familiar with how the security response process worked and didn't have the right contacts. It's just like anything else. Right. We all have misses. You look back and you figure out what's the best way we can resolve this and make this better moving forward.
So if someone from Microsoft were listening to this, do you have any sort of words of advice for them perhaps that they can take away and work on this problem to get better in the future?
Yes. I would say one of the first things, and I'm sure they have this, is to always have some type of hot wash or after action review. Right. What could we have done better, for any incident, regardless of whether it's specific to this or anything else? And then I think as well, for all the services they provide through the cloud, it's making sure that all those components are integrated. Like I said, they launch a lot of different products on a regular basis. As a part of that, what I would call onboarding, you would want to make sure they understand how this process works and who the key contacts are. So just like anything else, there's an opportunity for improvement here. I don't think this is indicative, although we do have a couple of more recent events that echo what we said. But you know what, even if that's three or four, it could be three or four out of, to your point, a thousand or a hundred. It's Microsoft. They are a juicy target; they deal with lots of these every single day.
Patch Tuesday is indicative of the number of vulnerabilities they address on a recurring basis. Right. And that's just the ones that are lumped in with Patch Tuesday and not their cloud products. I'm sure the cloud products could be double, triple or even more, who knows?
So what I'm fundamentally hearing from you, Bob, is more so, yes, of course, with patching there are thousands of issues always going on. It's more so the acknowledgement: hey, thanks for letting us know, we are aware or we're not aware of this issue, and this is what we're going to do about it. Would you say that perhaps you may have backed off a little bit more if they had that acknowledgement?
Yeah, I think if we had had better dialogue throughout the 90 day period, it would have been a different story. Like I said, it was a miss in the communication process. I'm sure there are opportunities for improvement on both sides, but that's what we're looking for. Right. Researchers aren't out to, quote, slam people; we're just trying to make things more secure, and the way we do that is by working together. The researchers themselves don't just make things more secure inherently. We have to work with a partner to help them understand the issue and help them address the issue. And oftentimes we test and validate to make sure their fix actually works. So, like I said, several misses, a lot of them on the communication front, which is often the problem in the world we live in. Right? Most of the issues stem from communications. So I think with those opportunities for improvement, hopefully the next time it's better. I think us going public and having some folks weigh in just raised the attention enough to make sure that we all improve, and hopefully there's a little bit more transparency that comes out the other side.
Yeah, I think that's fair, what you're saying. I think it's having accountability on both sides. I'm curious to know, if you zoom out a little bit from this incident to other ones that you've seen in the past, how can we as an industry implement better practices for managing these affairs? What I mean by this is, no one really wants to be called out, and like you said, your researchers aren't there to slam companies but to keep companies honest and accountable. What are your thoughts, for other people who are listening, about better steps they can take within their organisation to ensure the communication is better, or to ensure that there is transparency there for their customers?
Yeah, for sure. Step one, ensure you have a process to respond. Not every organisation does. Microsoft obviously does, and they have for a long time, but not every organisation does. And with the proliferation of technology, every day there are going to be new companies standing up technology with no process to deal with something like this, unfortunately, until it goes public or you reach out to the CEO directly. So I think everybody in the industry needs to ensure they have a process, and they should have their policies stated internally as well. If you do security research, you should have your own policies about how you go about doing responsible disclosure and understanding how to work with these entities. And that's everybody, from the researchers to our partners on the legal team, who we work very closely with to let these dialogues go on. So I think that's a large part; it really comes down to communication. And then when it comes to transparency, of course you don't want to go out the door and notify everybody you have a zero day, because that just leads to risk of exploitation, but sometimes you have to weigh that against actual active exploitation.
So Atlassian had a vulnerability recently that they did not have a patch for, and they still went public, which happens very seldom, and I think kudos to them, because they wanted to make people aware of the risk. They didn't have a patch, there was no fix, there's a risk, there's active exploitation going on, we're making you aware. And you have to balance that; it is a fine line to disclose something like that without a fix. Of course, a fix came quickly thereafter, but that was pretty exemplary of them. And I think from a Microsoft perspective, minimally, not notifying their customers at all, not making folks aware that they had this and how long it persisted, and, like I said, whether they did their investigation, all those things, it just kind of breaks that chain of trust.
Well, Bob, thanks very much for making time to share Tenable's views on the Azure Synapse vulnerabilities.
Yeah, thank you so much for having me.
Thanks for tuning in. We hope that you found today's episode useful and you took away a few key points. Don't forget to subscribe to our podcast to get our latest episodes. If you'd like to find out how KBI can help grow your side of the business, then please head over to KBI.digital.
This podcast was brought to you by KBI Media, the voice of Cyber.