The Voice of Cyber®

KBKAST
Episode 313 Deep Dive: Rose Alvarado & Jenna Eagle | Security Data Pipeline and the Future of SOC
First Aired: June 06, 2025

In this episode, we sit down with Rose Alvarado, Regional Sales Manager, and Jenna Eagle, Solutions Engineer Lead, from Cribl as they unpack the evolving role of the security data pipeline and the future of the SOC. Rose and Jenna discuss the increasing demand for flexibility and control over data management, highlighting how organizations are moving away from vendor lock-in to gain better cost efficiencies and visibility. They examine the challenges of managing explosive data growth, the shift from traditional SIEM solutions to data pipelines for pre-processing, and the importance of routing high-value data to appropriate platforms.

Rose Alvarado is an accomplished security specialist with more than eleven years of experience in the Australian Enterprise IT industry who partners with CISOs, CIOs and IT Managers to provide them with the best IT security and data solutions for their needs. Currently, she is the Regional Sales Manager for NSW at Cribl, helping organisations adopt a better data management strategy and improve their security and observability operations while reducing their cost of ownership. Passionate about cybersecurity and data management, she constantly seeks to expand her professional skills and learn from industry experts.

Jenna’s cyber journey began at Accenture Federal Services, where she collaborated with U.S. federal agencies on mission-critical projects. She then transitioned to Splunk as a Public Sector Solutions Engineer, specializing in compliance, automation, and IT and security modernisation. Now, as a Solutions Engineering Manager for ANZ at Cribl, she helps organisations take control of data growth through optimisation, data tiering, and breaking vendor lock-in. When she’s not “Cribbling,” you’ll find her exploring her new home in Australia or spending time with her American Bulldog.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Rose Alvarado [00:00:00]:
What we’re seeing is that a lot of companies and organizations are just saying, I don’t want to be locked in. I want to be able to have the control, the flexibility and the choice to know where I’m going to put my data, just because we don’t want to be dependent on vendor A or B across our whole organization. So that’s why it’s so important to have this flexibility over where you are putting your data and how you are controlling it.

Karissa Breen [00:00:47]:
Joining me today is Rose Alvarado, Regional Sales Manager, and Jenna Eagle, Solutions Engineer Lead, from Cribl. And today we’re discussing the security data pipeline and the future of a SOC. So Rose, Jenna, welcome and thanks for joining.

Jenna Eagle [00:01:00]:
Thank you so much for having us.

Rose Alvarado [00:01:03]:
Yes, thank you, Karissa.

Karissa Breen [00:01:04]:
So maybe Jenna, let’s start with you first, before I swap over to Rose. I’m keen to understand, for people who aren’t familiar, what’s a data pipeline and what’s your definition?

Jenna Eagle [00:01:14]:
Great question. So a little history lesson. You know, a decade plus ago, two decades ago, people had a single platform to send all of their data to, right? So it was: all of my sources, I’m going to index everything, send it to this single destination. I’m also going to store my data there and I’m going to use it for real-time analytics. That single platform, however, became very unsustainable. Right.

Jenna Eagle [00:01:43]:
Because of the massive amount of data growth. We look at, you know, data growing at a 28% CAGR year over year, while IT budgets are standing at about 7%. When we talk to customers, it’s a lot lower than that. Right. So what got you to today isn’t going to get you even a year or two down the road. So organizations have started to take a look at separating the collection tier, creating this data pipeline that sits before these destinations so that you can break apart and send the right data to the right place. So it’s no longer index everything and send it to one place; it’s send the data where it should go.

Rose Alvarado [00:02:28]:
I would like to add that the security data pipeline is now a market of its own, and it sits between XDRs, SIEMs and data lakes. SIEMs nowadays are just becoming economically unsustainable.

Jenna Eagle [00:02:40]:
Right.

Rose Alvarado [00:02:40]:
Because data from cloud, endpoints, identity, network tools and even now AI is just growing exponentially. So SOCs have to either reduce the logging and risk visibility gaps, or shorten data retention to less than 90 days and hinder investigations, or just absorb these massive fees. So this is where a security data pipeline is quite important, because it serves as a pre-processing layer. It helps you clean, enrich, filter and route all the data before it reaches your cyber tools. And it helps you remove the noise, which means lower SIEM and storage costs. You can accelerate your threat investigations and you just let your SOC team focus on the really important things.
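
As a rough, illustrative sketch of that pre-processing layer, here is what clean, enrich, filter and route might look like in miniature. Everything here (the function names, the GeoIP table, the severity rule) is a hypothetical example, not Cribl’s actual API:

```python
# Minimal sketch of a pre-processing layer: clean -> enrich -> filter -> route.
# All names and rules are hypothetical examples, not Cribl's API.
from typing import Optional

GEO = {"203.0.113.7": "AU"}  # stand-in for a real GeoIP lookup

def clean(event: dict) -> dict:
    # Remove null fields so downstream tools never store (or bill for) them.
    return {k: v for k, v in event.items() if v is not None}

def enrich(event: dict) -> dict:
    # Add context (e.g. geo info) before the event reaches the SIEM.
    event["src_country"] = GEO.get(event.get("src_ip", ""), "unknown")
    return event

def preprocess(event: dict) -> Optional[dict]:
    event = enrich(clean(event))
    if event.get("severity") == "debug":  # filter: debug noise skips the SIEM
        return None                       # (it could go to cheap storage instead)
    return event                          # surviving events get routed downstream

print(preprocess({"src_ip": "203.0.113.7", "severity": "high", "user": None}))
```

The point of the sketch is the ordering: noise is removed and context is added before any destination ever sees, stores or bills for the event.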

Jenna Eagle [00:03:22]:
Great point about the growth of data. So where is that growth coming from? We either have the organic growth, right, just year-over-year data increase, and then we also have new workloads. Right? So customers who are deploying in cloud, maybe they get a cloud vulnerability management tool and they need to add that workload into their SIEM, or, you know, AI, the massive amounts of data that AI is going to start creating. A lot of times the SIEM license capacity, or that analytics logging platform capacity, dictates what data can go into that centralized logging platform.

Karissa Breen [00:04:02]:
Okay, so Jenna, I’m just going to stick with you for a moment. In your experience, given your role, what do you think people sort of overlook? What do you think people don’t really get about, you know, a data pipeline?

Jenna Eagle [00:04:13]:
Sure, great question. So when we think about the data pipeline, it opens up the possibility of data tiering, which a lot of customers haven’t thought about today. What in your SIEM, what in your logging analytics platform, is the high-value data? What’s the data that you are searching on over the past seven days over and over again? What’s the data that you are alerting and detecting on? And then what’s the data that’s the just-in-case data, the data that is the noise, or that’s there for when you need it or if something happens? So if we can start breaking apart that data into high value and low value, the data pipeline helps to route that data to either the highest-cost premium platform for the high-value data, down to the lowest-cost platform for the just-in-case data or the low-value data.

Karissa Breen [00:05:07]:
And so, okay, going back to the high value versus low value, how do you think people classify that? So for example, some people think, oh, this is high value because I use it every day, but perhaps another department or another executive might be like, well, who cares about that? So how do you get to that sort of common ground where everyone agrees that this makes sense, it’s high value?

Jenna Eagle [00:05:27]:
Yeah, I think one of the best examples of this are those debug logs. So debug logs, nobody cares about those in the SIEM. Developers are going to care about them when they need them for the actual debugging. But they’re going to produce a ton of data, right? So does that data need to go into your expensive log analytics platform, or do we just need it for the just-in-case moment? The other piece is: what is the SIEM alerting and detecting on? That is going to be our high-value data. The low-value data could be filtering out any of that additional noise, even within events: if we have null fields, if we have duplicate timestamps, if we have super long UUIDs that are taking up hundreds of characters and creating bloat in each of our events.

Karissa Breen [00:06:20]:
Okay, so Rose, do you have any sort of commentary to add to that? What’s coming up in your mind as Jenna is answering that question?

Rose Alvarado [00:06:27]:
What I have seen so far from the organizations I have worked with is that on the high-value side we’re usually seeing XDR, EDR and networking tool signals. Those are definitely the things that you want to be on top of, sending as high-value data to your SIEM. And then there are things like application logs that are important for developers but are not things you have to have your eyes on 24/7, and those will be the ones that you just send to that storage place, because you still need to have that data for compliance and for hunting exercises, but not to get the SIEM analytics straight away.

Karissa Breen [00:07:10]:
So would you say then, we’ve gone through stages as an industry: it’s like, okay, we’re going to ingest all of this data, and now we’re trying to remove it, depending on what it is, because of PII breaches, and even storage is costing a lot more money now than ever before. Walk me through what’s happening out there and where people are sort of sitting with having all of this data at the moment.

Jenna Eagle [00:07:32]:
So what we see, I would say, is probably upwards of 90% of organizations that have a data pipeline are also sending data to something like low-cost object storage. When we talk about low-cost object storage, that’s going to take your full-fidelity copy of your logs, sitting compressed in, you know, AWS S3, Azure Blob or Google Cloud Storage at like $0.02 per gig, compressed too, right? So then when you compare that to what you’re spending per gig in your log analytics platform, it’s going to make sense to store that data there and start separating that system of retention, or that system of record, from your system of analysis, that premium platform like your SIEM.
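
To make the economics concrete, here is a back-of-the-envelope sketch. The ~$0.02/GB object storage figure comes from the conversation; the SIEM ingest price and the 30% high-value share are assumed placeholders, not quotes for any product:

```python
# Hypothetical tiering economics: all prices except object storage are assumptions.
daily_gb = 1000                  # 1 TB/day of logs
siem_per_gb = 1.00               # assumed SIEM ingest cost, $/GB (placeholder)
object_store_per_gb = 0.02       # compressed object storage, $/GB (from the episode)
high_value_share = 0.30          # assume 30% of data is truly high value

siem_everything = daily_gb * 30 * siem_per_gb
tiered = (daily_gb * high_value_share * 30 * siem_per_gb      # system of analysis
          + daily_gb * 30 * object_store_per_gb)              # system of retention

print(f"Everything to SIEM: ${siem_everything:,.0f}/month")   # $30,000/month
print(f"Tiered approach:    ${tiered:,.0f}/month")            # $9,600/month
```

Under these assumed numbers, tiering keeps a full-fidelity copy in cheap storage while cutting the monthly bill by more than two thirds; the real ratio depends entirely on the actual prices and the high-value share.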

Karissa Breen [00:08:21]:
The other thing that I’m really curious to know: when I conduct a lot of these interviews, I go to my network and I crowdsource conversations that people are having, or answers that people in the industry want to know, and just sort of press on a little bit more. What I’m hearing is there are certain SIEMs out there that are clunky, would be sort of people’s definition. So what do you think when people say, oh, it’s clunky, what do you think that sort of means? Maybe, Rose, do you have any sort of comments on that?

Rose Alvarado [00:08:45]:
Right now there are a few SIEMs out in the market, and each SIEM is attached to a specific technology, right? Like you have CrowdStrike working on the Falcon platform with their next-gen SIEM, then you have Microsoft working on Sentinel with their native tools, Splunk now with Cisco, and XSIAM now on the Palo Alto platform. So there’s a platform consolidation thing happening now. But the thing is, how do you make this SIEM a universal receiver of all your data that’s out there? The problem is that each technology generates a different data format that you need to convert to send it to, you know, the SIEM of your choice. So that’s where the clunky piece starts to happen. Because if you want to get all your information across your environment, it’s not going to be one single vendor. You’re going to have one vendor protecting your email, and then your XDR, and then what do you do for analytics, and then your legacy CRM tools, you know. So it’s all starting to stack up, all different formats. How do you send that to the SIEM in a seamless way? That’s where Cribl comes along.

Jenna Eagle [00:09:58]:
Yeah, great point, Rose. With the platforms like the XSIAMs, the next-gen SIEMs, the Sentinels, a lot of those started as: let’s get our first-party data in, we’ll create the detections and alerting and everything around that, and it works great. And then we need to start onboarding all of those third-party data sources, because we need all of those other data sources to complete our full SIEM capability. Right. And so something like a telemetry pipeline, a security data pipeline, can help move that data, all of those third-party sources, into those platforms as well and make it less clunky.

Karissa Breen [00:10:36]:
So now I sort of just want to move maybe two millimeters and talk more around push and pull models for ingestion. Because I think this is something that people are a little bit confused by, so I want to make it a little bit clearer how this works. So Jenna, would you be able to talk through the mechanics of how this looks?

Jenna Eagle [00:10:56]:
Yeah, of course. So, push and pull models for ingestion. If we think about the different protocols, there are some protocols, or certain applications and devices, that are going to be able to push data out to a destination, right? So you have your data pipeline, and your data pipeline server is going to be listening for certain data to come to it, right? Those are going to be the pushes: your syslog, your HTTP, all of those. Then you have your pulls, where that same data pipeline server is now going to go to your other servers or go to your other applications and pull that data when data is readily available for it. So now the data pipeline is reaching out to, say, your Azure Event Hubs or your SQS-based S3 and pulling that data in, and doing a coalescing job as well.
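
A minimal sketch of the two ingestion models Jenna contrasts, using only Python’s standard library. The port, the URL and the route() stub are hypothetical; this is not how Cribl itself is implemented:

```python
import json, socketserver, time, urllib.request

def route(event):
    print("routing:", event)   # stand-in for the filter/transform/route stage

# PUSH: the pipeline listens; sources (e.g. syslog over UDP) send events to it.
class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        raw = self.request[0].decode("utf-8", errors="replace")
        route(raw)

def run_push_listener(port=5514):
    with socketserver.UDPServer(("0.0.0.0", port), SyslogHandler) as srv:
        srv.serve_forever()

# PULL: the pipeline reaches out on a schedule and fetches whatever is ready
# (think Azure Event Hubs or SQS-notified S3 objects from the episode).
def run_puller(url="https://example.com/api/logs", interval=30):
    while True:
        with urllib.request.urlopen(url) as resp:   # hypothetical REST endpoint
            for event in json.load(resp):
                route(event)
        time.sleep(interval)
```

The structural difference is who initiates the connection: push sources fire events at a listener, while the pull worker polls on its own schedule and can coalesce whatever has accumulated since the last fetch.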

Karissa Breen [00:11:56]:
And then, just to build on that a little bit more: this is probably a common question that people are starting to ask, and I really want to focus now on the cost of a lot of these things. So I know in the past people said the logging for X SIEM seems very expensive, or you said before around the two cents per gig, so maybe that, you know, can help reduce the cost. So talk through that, because now we’re starting to see in the market that people are very focused on, how do I get more bang for my buck?

Jenna Eagle [00:12:27]:
Exactly, exactly. And I’ll come back to this push and pull model too, and kind of the architecture of the data pipeline, when talking about this. But ultimately, when we’re filtering out the noise, when we’re filtering out that low-value data before it gets sent off to your SIEM, we are helping to ease that tension between the data growth and the budget. Right? So we’re helping to ease some of that tension and create headroom in either your SIEM capacity or your log analytics platform capacity. As I said before, it’s that license capacity that usually dictates what visibility you have in that centralized logging platform. So how do we get more sources in, filter out some of the noise from each of those sources, but get more sources in and create wider visibility, so we can actually decrease costs while ingesting a backlog of sources as well? The other piece about the architecture of a data pipeline: I have an organization as a customer where the use case started as, we are moving to this other cloud platform, right? But we have a ton of data in cloud platform A, so we need to move all of this data out of cloud platform A and into cloud platform B, and our egress costs are substantial just by doing that. So we have our ingest cost now on this cloud platform B, but we also have our egress cost.

Jenna Eagle [00:13:55]:
So if we put the data pipeline close to where the data is originating, in that first cloud platform, and say we get 50% optimization and reduction in the data there, then we’re only egressing out half of that data now, helping to save on egress cost as well.

Rose Alvarado [00:14:14]:
I’m just seeing a reduction of half of your bill on SIEM and storage by using Cribl, which is exactly what everyone wants, right? You want to be able to analyze all your data, get on top of any threats coming to your organization, and pay half of it.

Karissa Breen [00:14:29]:
Like sign me up, right?

Jenna Eagle [00:14:31]:
When we talk to organizations, it’s: okay, we need help with SIEM migration, we need help with onboarding all of this backlog of sources, we need help with creating headroom in our SIEM. When it comes down to it, as Rose was saying, it’s that business value: okay, what’s the cost here? How am I saving money? Can you show me how much money I’m saving?

Karissa Breen [00:14:50]:
So sticking with that example you mentioned before, Jenna, would you say, in your experience, given your role, that in the past people, meaning organizations, haven’t filtered out a lot of this sort of data, so they were paying for ingestion of all of these extra data sources that they probably didn’t need, if you were to look back retrospectively, right?

Jenna Eagle [00:15:12]:
So it goes back to that. The first solution to sending your data somewhere was to send it all to one place and index it all, because you don’t know what data you need or what you’ll ask of your data, right? So you just keep sending all of this data, all of this data, and your log analytics platform just becomes this unsustainable cost. And so putting that data pipeline there helps to manage what data goes where: okay, send the high-value stuff there, send the rest here. When we talk to customers, they’re already thinking about this. They’re already thinking about decoupling that collection tier, having a data pipeline that says where to send the data, because prior to something like that, it was just index it all. And then I had a customer just the other day say, you know, they turned on their cloud vulnerability management logs and sent them off to their SIEM. As soon as they sent that, it was upwards of three-plus terabytes per day being sent additionally to their log analytics platform.

Jenna Eagle [00:16:23]:
They spiked in that SIEM, had to go back and turn off the logging that was being sent, and now it’s not going to the centralized place for their security analysts, because that license capacity dictated it. So instead, bring in a data pipeline where you can get reduction on the current sources that you have in your SIEM and then make headroom for all of those additional sources as well.

Karissa Breen [00:16:48]:
So when you say get reduction, how hard is this process? Because, as you both would know, people start to feel overwhelmed, like, oh my gosh, this is going to take so long to do, or they don’t have the resources. So how do you get to the point where it’s like, well, I get it, it might be hard in the beginning, but it’s going to make things easier, it’s going to be more cost effective? Everyone wants to save more money, but how does that conversation sort of start?

Jenna Eagle [00:17:14]:
Yeah, I think Rose, you mentioned this earlier, so you can probably give some more information here. But when we speak to organizations, it’s always the very typical sources, right? It’s your firewall logs, it’s your EDR logs, it’s your Windows security events; it’s very well-known logs that are the top volume sources. We usually advise organizations to start with those: start with your highest-volume sources, your problem-child sources. And hopefully your data pipeline also has some out-of-the-box magic that can get you to a starting point. Right. We call them packs, content packs. And those will get you, you know, the 80/20 rule: it gets you about 80% of the way there, and then you can tweak that additional 20%.

Jenna Eagle [00:17:59]:
But it really helps kick off customers quickly in this. And Rose has a few organizations locally that she’s worked with on this to get them up and running quickly as well.

Rose Alvarado [00:18:09]:
Thank you, Jenna. Yeah, a good example would definitely be firewall logs.

Karissa Breen [00:18:12]:
Right.

Rose Alvarado [00:18:12]:
It’s kind of like, they’re so noisy, they bring so much volume, but you do need them. Cribl can optimize them by 50%, because we just get rid of all the noise that is inside those raw logs and then we enrich them with geographical and IP information. So not only are you getting a lot of the noise taken out, you’re having more valuable and relevant information added to those logs and getting to your SIEM.

Jenna Eagle [00:18:43]:
So when you do move the enrichment out of your log analytics platform, you’re also helping with performance, because you’re offloading that log analytics platform that today is doing maybe your enrichment plus your searching and your alerting. If you start offloading that enrichment to your data pipeline, you’re going to be saving on performance in your log analytics platform as well. And that can be done in your data pipeline.

Karissa Breen [00:19:10]:
You said something before, Rose, and there’s a couple of questions I want to ask that you both just sort of touched on here. The first question would be: you said reducing a lot of the noise. So what does noise look like? Define noise from your perspective.

Rose Alvarado [00:19:24]:
If you look at a raw log, there will be some information that comes in a string format. I don’t really know how to define noise. Jenna, help me here.

Jenna Eagle [00:19:34]:
Yeah, yeah. So when we talk about the noise: if we look at, say, what your correlation rules are, what your searches are, what you are alerting and detecting on in your SIEM, what do you actually need in your SIEM, and what is just the extra fields within each event? So we can take a look at reducing from within each event and then also full events. An example of within-event reduction would be removing duplicate timestamps, removing null fields, removing the long UUID field that you aren’t using in any of your alerting and detection. When we look at optimizing full events, maybe we drop or just sample east-west traffic or duplicate events; maybe we define what a duplicate event looks like and then drop those. So it can be both full events and also within each event.
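
Here is a minimal sketch of those two reduction levels in Python. The field names, the sample rate and the east-west rule are hypothetical examples, not Cribl functions:

```python
import random

# Within-event reduction: strip null fields and known bloat fields.
def slim_event(event: dict) -> dict:
    drop_keys = {"uuid_long", "timestamp_dup"}       # hypothetical unused fields
    return {k: v for k, v in event.items()
            if v is not None and k not in drop_keys}

# Full-event reduction: sample noisy east-west traffic, keep everything else.
def keep_event(event: dict, sample_rate: float = 0.1) -> bool:
    if event.get("direction") == "east-west":
        return random.random() < sample_rate
    return True

events = [
    {"direction": "east-west", "bytes": 120, "uuid_long": "a" * 128, "user": None},
    {"direction": "north-south", "bytes": 4096, "timestamp_dup": "2025-06-06"},
]
print([slim_event(e) for e in events if keep_event(e)])
```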

Karissa Breen [00:20:34]:
Okay, so that sort of leads me to my next question. You talk a lot about reducing the noise. Is that sort of predicated on Cribl’s conditional logic?

Jenna Eagle [00:20:45]:
Yeah, great, great question. So when your source events, when your logs, enter into your data pipeline, or Cribl, Cribl uses filtering logic: a routing table based on filters that then says, hey, if this event is a Palo Alto traffic log, send it to the Palo Alto traffic pipeline and then off to its destination. And then we’re going to go down to the next route that says, okay, now if it’s an EDR log, run this pipeline and then send it out to the destination of choice. So that filtering logic within the routing table is going to be based on metadata within the data. Then, when we think about filtering as in filtering out the noise and reduction, we use what we call functions inside of Cribl to either parse the events, remove the fields that we don’t need and serialize it all back together. So we can take an event, unstitch it, remove the stuff we don’t need, stitch it back together in its original format and send it downstream.

Jenna Eagle [00:21:54]:
We can also do things like transform Windows XML logs to JSON before sending them downstream, if that log analytics platform accepts JSON. Just doing that and changing the format of the log is going to get you like 30% reduction as well.
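
A small sketch of both ideas, the filter-based routing table and the XML-to-JSON transform, using only Python’s standard library. The filters, destinations and flattening rule are illustrative assumptions, not Cribl’s routing syntax:

```python
# Sketch of a filter-based routing table plus a Windows XML-to-JSON transform.
import json
import xml.etree.ElementTree as ET

ROUTES = [  # evaluated top-down; first matching filter wins
    (lambda e: e.get("sourcetype") == "pan:traffic", "palo_pipeline -> siem"),
    (lambda e: e.get("sourcetype") == "edr",         "edr_pipeline -> siem"),
    (lambda e: True,                                 "default -> object_storage"),
]

def route(event: dict) -> str:
    for matches, destination in ROUTES:
        if matches(event):
            return destination

def winlog_xml_to_json(xml_str: str) -> str:
    # Flattening verbose XML into JSON typically shrinks each event.
    root = ET.fromstring(xml_str)
    flat = {child.tag.split("}")[-1]: (child.text or child.attrib)
            for child in root.iter() if child is not root}
    return json.dumps(flat)

print(route({"sourcetype": "pan:traffic"}))
print(winlog_xml_to_json("<Event><EventID>4624</EventID><Level>0</Level></Event>"))
```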

Karissa Breen [00:22:10]:
And so just to confirm on that, Jenna, all of this is done automatically, is what you’re saying?

Jenna Eagle [00:22:15]:
We aren’t like a black box, right? We want organizations to know what data is being removed, and we want to empower the users to make those decisions as well. So we do have out-of-the-box functions and what we call packs to help you along. We also have a copilot within the product where you can say, hey, I’m onboarding my firewall logs and I’m sending them to this destination, do you have any recommendations for me? And it can create a pipeline for you. Everything within Cribl is also version controlled, so you commit and deploy and you have ownership over what data is being taken out. Still, it does come with some out-of-the-box material to get you started as well.

Karissa Breen [00:23:03]:
And so then the other sort of question that’s been coming up in my mind as you’ve both been speaking would be around performance. So what is it nowadays that customers are interested in knowing around performance?

Jenna Eagle [00:23:15]:
Yeah, I would say the question that we get the most is around latency. So if you add a data pipeline between your sources and your destinations, what does that latency look like? Right. And how do we scale our data pipeline, either vertically by adding more CPU and memory, or horizontally by adding more of what we call worker nodes to it? How does that help with any performance-related things? With data pipelines, your data pipeline should be sub-millisecond latency when you’re thinking about this, right? That should be a standard across the board. Now, within our destinations, performance within our SIEMs, performance within our log analytics platforms: when you do filter out that noise, you are going to get better search performance downstream as well.

Karissa Breen [00:24:10]:
Okay, so then just staying with that for a moment: what does the latency side of things look like? I have heard about that. But then also, when you said about adding more CPU, does that mean that the cost would then go up by default?

Jenna Eagle [00:24:23]:
Yeah, great, great questions. So sub-millisecond latency is what we see; so, you know, very little latency, sub-millisecond. Right. For CPU, sure, if you are adding more, it depends on how much data is coming in. Right. That’s how we size a data pipeline: how much data is coming in, what amount of processing is taking place, and how much data is going out.

Jenna Eagle [00:24:49]:
And then we can right-size the CPU and memory and any disk space, if we’re doing persistent queuing and all of that, whatever is needed there. It can be deployed self-hosted, so you can host it yourself, or it can be Cribl-managed as well, so then we handle the CPU and all of that and right-size it for you.
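
As a toy illustration of that sizing approach (data in, processing, data out), here is a back-of-the-envelope calculation; every number, including the per-vCPU throughput, is purely an assumption for the sketch, not a Cribl benchmark:

```python
# Hypothetical worker sizing: all figures here are assumed placeholders.
import math

daily_in_tb = 3.0            # data coming in per day
reduction = 0.5              # expected optimization ratio in the pipeline
gb_per_vcpu_per_day = 400    # assumed sustained throughput per vCPU

daily_out_tb = daily_in_tb * (1 - reduction)                 # data going out
vcpus = math.ceil(daily_in_tb * 1024 / gb_per_vcpu_per_day)  # size on inbound volume
print(f"In {daily_in_tb} TB/day, out {daily_out_tb} TB/day, ~{vcpus} vCPUs")
```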

Karissa Breen [00:25:12]:
So in terms of use case, companies that are self-hosting, are they sort of larger enterprises, because that might help reduce the cost? Or what are you seeing on that front?

Jenna Eagle [00:25:20]:
Yeah. So for self-hosting, I would say those are going to be the very highly regulated customers. Right. Those are going to be the ones where we need to have it in air-gapped networks, or there are certain security standards that aren’t in place yet in my data pipeline in the SaaS version, so I need to host it myself. All other customers go to the SaaS model. Rose, what would you say? What do you see?

Rose Alvarado [00:25:47]:
We’re seeing more of the SaaS model, to be really honest, because everyone is doing a cloud transformation, or just future-proofing their environment and getting things in the cloud. Obviously you will have federal government and specific organizations that have requirements to keep things in air-gapped environments, and they can use Cribl on their own premises. So we offer both options, but definitely the dominant one is SaaS.

Karissa Breen [00:26:17]:
The other thing maybe I’m curious about, maybe Rose, you can answer this one, is the whole vendor lock-in thing. I’m hearing this a lot; everyone now is so worried about it. So I know that you guys obviously don’t have that problem, in terms of the data sources and those vendors that you’re working with, et cetera, but what does that then look like? Because I’m hearing this a lot from people, especially when you look at cloud providers and stuff like that. Correct? Yes. People are really hating that. They want to feel like if we can move, we can move quickly in terms of porting across, et cetera.

Rose Alvarado [00:26:47]:
I can see why.

Jenna Eagle [00:26:48]:
Right.

Rose Alvarado [00:26:49]:
Like, if you get a data breach tomorrow and you have all your information in, let’s say, AWS, and the AWS instance that you’re in just shuts down, how do you get access to all your data? It’s all there. So that’s where it’s coming from, the need to diversify and get all our information in different places. Kind of like a data loss and recovery plan.

Jenna Eagle [00:27:10]:
Right.

Rose Alvarado [00:27:11]:
So if a cloud fails, you have the second cloud there for you. And this is where Cribl can help you, because you can have different copies of your data in different places and just have access to them whenever you need them, without paying an arm and a leg for this.

Jenna Eagle [00:27:30]:
Yeah, and I would say think too about how every downstream platform, you know, comes with their own collection tier, their own way to get data in, their own agents. So it’s really hard when an organization wants to go test out a new platform, right? So SIEM A has their SIEM agents, SIEM B has their SIEM agents; they have their own collection tier, they have all of that. What happens when you want to test out SIEM B? Then these organizations need to go and have multiple agents on all of their servers. They need to find a way to now also deploy a separate collection tier for that one. So that also aids in this vendor lock-in. Right. Because it becomes such a hassle to break away from that vendor if you want to go and test something.

Jenna Eagle [00:28:29]:
So migrations are taking years to complete, right? Maybe even longer than that, because of all of that additional infrastructure and moving the data around. When you put a data pipeline in the middle, you don’t need to go back to those source systems and change firewalls and start routing the data somewhere else. Inside of your data pipeline, inside of Cribl, you’re going to make a copy of that data and just start sending it off to that new SIEM B to start testing. You don’t need to go back and change all of your production sources in order to do that.

Karissa Breen [00:29:03]:
That’s an interesting point. So when I worked in an enterprise and we were doing a whole upload of all of that data to the SIEM, apparently someone did a calculation. They were like, at this rate, it’s going to take 50 years. And I’m not joking, that’s what they said.

Jenna Eagle [00:29:15]:
That’s exactly what we hear from organizations where they’re just like, we don’t have another option. It’s too much, it’s too big of a lift. And we can’t interrupt what we’re currently doing. It’s going to take forever to do. So they feel locked in.

Karissa Breen [00:29:28]:
But what gets me is how can something take 50 years? Like, that’s crazy.

Jenna Eagle [00:29:33]:
Yes, 50 years. That is a bit crazy. Without Cribl, without your data pipeline, without something there to help aid in that migration, we’re seeing migrations take multiple years to get done, and you still end up having some workloads going to that old platform for a really long time. On some of our migrations that we’ve done recently: Rose has an organization that she’s worked with recently where they came to us and said, hey, our license is ending with SIEM A, we need to be completely off of that in 60 days’ time, can Cribl help us? And we can help, and work towards getting you from those multiple years of migration down to, like, you know, 60, 90, 180 days.

Karissa Breen [00:30:32]:
So then just to add to this a little bit more, because, admittedly, that example was a rough calculation when people were saying it, and it was like 12 years ago, so obviously things have changed substantially since then. But would you say that because people hate the whole vendor lock-in, they’re getting disgruntled if something’s going to take years of migration? People are getting to the point now where they need stuff right this second. So where do we see the industry moving? People can’t wait, people aren’t going to do a migration for years. Because equally, people move on.

Karissa Breen [00:31:03]:
You know, people retire, people get fired, people resign. Who knows how to pick up a project when you’re mid-migration? This seems messy.

Rose Alvarado [00:31:12]:
It is messy, you’re absolutely right. But what we’re seeing is that a lot of companies and organizations are just saying, I don’t want to be locked in. I want to be able to have the control, the flexibility and the choice to know where I’m going to put my data, just because we don’t want to be dependent on vendor A or B across our whole organization. So that’s why it’s so important to have this flexibility over where you are putting your data and how you are controlling it.

Jenna Eagle [00:31:43]:
Yeah, and folks come in with different agendas too. Like you were saying, you know, who knows, the next person that comes in, maybe they’re a fan of SIEM B and they want to bring SIEM B in. We need to be able to easily pivot that data and start sending to a new destination quickly. Customers are also saying, you know, yes, Cribl and this data pipeline are helping me with this current migration, but we know that this is going to come up again in a couple of years. Right. At the rate that more platforms are being developed, and with so many more options available now, people want to be able to test the next best thing and see if that’s a good option for them. That gets into future-proofing themselves too.

Karissa Breen [00:32:28]:
So this is interesting, because based on what you’re both saying and what I’m hearing in the market as well: if you look back in the day, even 15 years ago or less, people had a big affinity to certain vendors; they had loyalty. Now I’m not seeing that as much anymore. If they can get cheaper, faster, better elsewhere, they will. I’m seeing that a lot more. You said before: flexibility, options. So obviously you both work for a vendor.

Karissa Breen [00:32:54]:
So I’m curious to know: do you think that the vendor landscape, even just in the whole SIEM arena, is going to get more and more competitive? Because, okay, well, we have to be able to do this faster, cheaper, better, all of that, moving forward, because people aren’t just buying from the same old vendor that they played golf with 30 years ago. People now are being scrutinized for how much things are costing. CFOs are wanting to know what money is being spent where, with what vendor. So what does that then look like, given your perspective of working in a vendor?

Rose Alvarado [00:33:26]:
You’re absolutely right. Things are just changing. As for the future of the SIEM, now we have niche SIEMs; the on-prem SIEMs are just old news, I think, from the past. The modern SIEMs are evolving into this model of platforms where they’re separating storage from analytics. There is so much happening in the market right now. Any customer that you talk to right now is going to say that they have something between 40 and 50 cyber tools in their toolkit, and it’s difficult to manage this whole thing. So that’s why it’s so important to have something like Cribl, which is a security data pipeline platform. Right.

Rose Alvarado [00:34:01]:
Because you need to be able to do pre-filtering and have a central place to say, I am vendor agnostic right now. Where I’m going to send my information, whether that’s data lakes and cloud storage and SIEMs, and what I’m going to test in XDR and email protection and exposure management: you know, there are so many things out there that are complicating how we manage our data, and it’s going to get even worse, especially now that AI is emerging and agentic AI is coming as well.

Jenna Eagle [00:34:35]:
Yeah. And when we talk to organizations, it really comes down to budget when they say they need to get off of this thing, even if they’ve been loyal to this brand, this vendor, for decades, right? They’ve built their career off of this vendor. And when we ask them, why do you need to get off of this, they say, because I can’t afford it. Right. It comes down to the budget. So if, you know, another SIEM vendor comes in and says, hey, here’s the total cost of this new platform and this is all the stuff you’re going to get, and it’s X percent cheaper than what you’re spending today, they’re going to want to take a look at that. Right.

Jenna Eagle [00:35:17]:
What we also do is, yeah, we help organizations migrate and test new SIEMs. But if you have built your career off of this one platform, right, and you have such an affinity to it, and the only reason you need to get rid of it is the cost, can a data pipeline help you get control of that current platform as well, so that you don’t need to move away from it? Because the competition, as Rose was saying, the competition is growing like crazy, and it does come down to budget and what people can afford.

Karissa Breen [00:35:49]:
So would you both say, collectively, about the vendor space in general: nowadays, if we look at a new generation coming up through the ranks, millennials and friends are now becoming your next sort of executives. Do you think that these people don’t have a strong affinity to a lot of these perhaps older-school brands, like the generation before did, for example? Because I’m seeing that a bit more: if they, meaning executives and leaders, can get something cheaper or better or whatever, they’re quite happy to go with perhaps a company that hasn’t been around for 50 years, for example. Are you seeing a bit of that?

Jenna Eagle [00:36:25]:
Yeah, I would say so. I think Rose works closer with those C-level execs and I work more closely with the techies. And I do see the techies continuing to have this brand loyalty and wanting to stick with what they know and what they have. But Rose, I wonder what it’s like at the C level.

Rose Alvarado [00:36:47]:
Yeah, there’s no such thing as vendor loyalty as much as before. It’s more about outcomes. The C level is under quite a lot of pressure right now from the board to get an outcome aligned to what the business wants overall and what the stakeholders are asking of them. So if vendor A cannot get me to the outcome that I need, so that the board gives me the money that I need to keep doing my work on a daily basis, I’m going to move to vendor B. So yeah, there’s not that super loyalty that there was 50 years ago, because it was also a simpler environment as well. Now, given that data is exploding, things are a lot more complex, AI and machine learning are coming, and companies overall have to reduce costs because of the geopolitical situation. Yeah, no one can afford to be as loyal as before.

Karissa Breen [00:37:40]:
So in terms of any sort of closing comments and final thoughts. Rose, what would you like to leave our audience with today?

Rose Alvarado [00:37:46]:
I guess from my point of view, having worked for different security vendors, it’s refreshing to be at Cribl, because here, pretty much, we’re giving you the ability to have control, flexibility and choice over what to do with your data, so you can future-proof your security operations center. You control the data with full visibility, you access your data. You don’t depend on vendor A, B or C, or a third party doing the exercises for you. You can do your purple team exercises, you can do your hunting, you can store the data for compliance, you can decide which technology you want to try and how you overall manage security across your whole environment. I love that, because you are the one in control. So I guess that will be my message from here.

Karissa Breen [00:38:38]:
And Jenna, any sort of closing comments for our audience today?

Jenna Eagle [00:38:42]:
I would echo Rose there and just say: when you have that control, flexibility and choice over what you do with your data, you are able to start data tiering, you’re able to start decoupling your collection tier, you can send to multiple destinations. You’re not locked in by a single vendor, and that’s how you know you have that full control, flexibility and choice.
