The Voice of Cyber®

KBKAST
Episode 333 Deep Dive: Prashant Vadlamudi | Building Secure Foundations for Agentic AI
First Aired: September 10, 2025

In this episode, we sit down with Prashant Vadlamudi, Senior Vice President of Product Security at Salesforce, as he explores how organisations can build secure foundations for agentic AI. Prashant offers a holistic view of agentic AI, highlighting its shift from simple generative models to autonomous agents capable of reasoning, sequencing complex tasks, and performing actions—while emphasising the productivity benefits and the imperative for strong trust and security principles. The conversation covers the balance between fostering innovation and maintaining robust governance and security, the evolving nature of guardrails as AI models mature, and the importance of ongoing policy updates to keep pace with rapid technological changes. Prashant also discusses Salesforce’s approach to deploying AI responsibly, the role of trust metrics such as bias and hallucination scores, the necessity for data governance as the backbone of AI strategies, and the shared responsibility between providers and customers to ensure that agentic AI operates securely and transparently.

Prashant Vadlamudi is a strategic leader with two decades of experience driving transformative information security and compliance initiatives. As Senior Vice President of Product Security at Salesforce, he is responsible for safeguarding the company’s products and ensuring adherence to global standards. His career is marked by pivotal leadership roles, including Vice President of Information Security and Cloud Compliance at Cisco, where he established robust security baselines for SaaS offerings, and Director of Technology GRC at Adobe, where he architected the Adobe Common Controls Framework (CCF), a cornerstone of their global trust strategy. Prashant’s expertise spans cloud security, global certifications, and data-driven risk management, allowing him to navigate and mitigate complex security challenges in today’s dynamic landscape. He utilises a risk-based approach to decision-making and focuses on operational efficiency. He has experience in team development and mentoring, with individuals he has mentored holding security leadership positions across the industry. His professional experience includes strategic planning, technical implementation, and team management within the field of information security.

Help Us Improve

Please take two minutes to write a quick and honest review on your perception of KBKast, and what value it brings to you professionally. The button below will open a new tab, and allow you to add your thoughts to either (or both!) of the two podcast review aggregators, Apple Podcasts or Podchaser.

Episode Transcription

These transcriptions are automatically generated. Please excuse any errors in the text.

Prashant Vadlamudi [00:00:00]:
So the promise of AI for enterprise is clear. AI is here. It is going to be in our lives, both in our personal and professional lives. But it requires a deliberate focus on trust, security and governance as these use cases and their adoption increase.

Karissa Breen [00:00:33]:
Joining me today is Prashant Vadlamudi, Senior Vice President of Product Security at Salesforce. And today we’re discussing how to build secure foundations for agentic AI. So, Prashant, thanks for joining me and welcome.

Prashant Vadlamudi [00:00:49]:
Thank you, Karissa. Thanks for having me here.

Karissa Breen [00:00:51]:
Okay, so this is a really interesting topic, one which many people out there have a lot of questions about. So I’m keen to get into this with you. Let me start with this: what do you think people just don’t get about AI agents? Because again, there’s so much online, people have different opinions, different views. Some people are more well versed than others. But what sort of comes to mind when I ask you that?

Prashant Vadlamudi [00:01:15]:
Yeah. So let me take a step back and give you a holistic view of what agentic AI is. Agentic AI represents a shift from traditional rule-following to reasoning. What this means is that with traditional generative AI, the way people interacted with it was they would send prompts into the generative AI, which was powered by a certain LLM, and reading the prompt, the generative AI would generate certain content. However, with the advent of agentic AI, the AI is now more capable of performing reasoning-level activities. It not only reads the prompt from the user who is interacting with the AI, but it also understands the particular action that needs to be taken. So it analyzes the request, understands the actions, breaks the complex problems into multiple simple subtasks, plans a sequence of actions, executes on them, and where needed, it can bring humans back into the loop to take on further, more complex actions which need human intervention. That’s what agentic AI is.
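
To make that analyze, plan, execute, hand-off loop concrete, here is a minimal Python sketch of the pattern described above. All of the function names (plan_subtasks, execute_subtask, run_agent) and the stubbed logic are illustrative assumptions, not any vendor’s actual API:

def plan_subtasks(request: str) -> list[str]:
    # A real agent would use an LLM to decompose the request; stubbed here.
    return [f"look up context for: {request}", f"carry out: {request}"]

def execute_subtask(subtask: str) -> dict:
    # Stubbed tool call; a real agent would invoke an API or workflow.
    return {"subtask": subtask, "done": True, "needs_human": False}

def run_agent(request: str) -> None:
    # Analyze the request, break it into subtasks, execute in sequence,
    # and bring a human into the loop when a step exceeds the agent scope.
    for subtask in plan_subtasks(request):
        result = execute_subtask(subtask)
        if result["needs_human"]:
            print(f"escalating to a human: {subtask}")
            return
        print(f"completed: {subtask}")

run_agent("update my order")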

Prashant Vadlamudi [00:02:32]:
With the agentic AI use case, we are also noticing that it is essentially creating an unlimited pool of digital labor. This is extremely beneficial for organizations and the entire global industry in general. It is not only creating an unlimited pool of digital labor, but as a result of that, it is also introducing a significant amount of productivity benefits. So as this is going on, it is absolutely imperative that the use of agentic AI is also stacked along with the right trust and security principles. We at Salesforce believe in a trust-first culture. That is what we anchor our day-to-day operations in. So trust is our core principle and the first value that we use and operate on day to day. And what it essentially means is that when we interact with customers, we want to make sure that the customers understand, and we are also able to demonstrate back to them, that their data with us is safe and their data with us is actually their data.

Prashant Vadlamudi [00:03:43]:
This is a big shift in the AI space. Taking the security lens here, as more and more adoption of agentic AI is happening, the key thing to understand in the world of security is that we not only need to secure the AI agents and the models supporting these agents, but we need to secure the entire ecosystem. Deploying trusted AI models is definitely the core foundational principle that all security professionals should work towards. And that is what we do at Salesforce. And beyond working towards securing the AI models, we at Salesforce are also using agentic AI to serve us within our own day-to-day security practices. Our security team at Salesforce is well into the adoption of agentic AI in its day-to-day use cases. What we have done within our own security org at Salesforce is we have assessed and evaluated the manual, repetitive tasks that need to be performed in any traditional security organization. And wherever these manual, repetitive tasks are consuming human effort, we have built AI models using the power of agentic AI.

Prashant Vadlamudi [00:05:08]:
We are using the agents to perform these tasks, as a result of which it is saving a lot of bandwidth for our skilled resources within the security org, and they are able to use their saved time to perform true security activities where human effort, skill, knowledge and the context itself are needed. That’s how we are using the power of agentic AI at Salesforce, and that’s how we are serving the agents back to the community and our customers.

Karissa Breen [00:05:38]:
Okay, so I know you took a step back, but maybe taking a step forward again: when you’re talking to people about their misconceptions around agentic AI, what do you think they still don’t really understand? Would you say it’s still relatively new for a lot of people? Right. And I feel like there’s a lot of people that have varied definitions of what this means.

Prashant Vadlamudi [00:05:59]:
So agentic AI brings into context the power of reasoning. With the traditional AI, or the generative AI that used to exist, the value add was that when a human interacted with a generative AI model and entered some prompt or some action to be performed, the AI and the model powering it would take that particular prompt and request from the user, understand the request, and generate content very particular to the action and prompt that the user had asked for. Generative AI was primarily responding to a user’s query, depending on what the query was, and it would just generate content based on exactly what the user had asked, and only serve that user’s specific requested content. However, with agentic AI, the way things are evolving is that instead of just generating content, the user can ask the agent to perform tasks on behalf of them, or for them. In many cases where we are talking about a service agent or a customer support agent, when a user or a customer engages with an agentic AI, the interaction is no longer just a question asked and the agent responding back to the question; there’s an action tied to the question. So it could be that the customer comes in and asks the agent, I want to update my order, or, I want to spend some time with my family on a vacation, could you find the best possible scenario and build the best possible package for me? In this case, the agentic AI is able to understand the context of the question, understand the user, and use the data that powers the agentic AI behind the scenes, through the LLMs and the data that exists supporting the customer’s background and whatever information exists for them.

Prashant Vadlamudi [00:08:04]:
The agentic AI is able to take that request in, understand the context, and create a sequence of tasks and sub-steps. By analyzing the customer, their preferences, what they want to do, and taking the data in, it is also able to take it to the next level and actually create a package for them (in this case maybe a vacation package), depending on what sort of requests come in, what the customer’s preferences are, what their budget is, and where they want to spend the vacation. And then, in response to that, it can actually go back and perform the actual booking for them. So the power that agentic AI brings in, as compared to the previous generative AI, is that it is actually understanding the customer’s question, understanding the context, and, using the data that exists behind the scenes related to this customer, it is able to perform those subtasks and actions for them. And at some point it is very likely that through this interaction and engagement with the customer, the agentic AI may come to a point where it says, hey, you’re asking a question that is beyond my capabilities, I can’t perform this. At that point, the agent is able to hand off the transaction to an actual human.

Prashant Vadlamudi [00:09:22]:
So in essence, in this case, the agentic AI is replacing a repetitive task of a human interaction. Think of this as the tier one task. And now it is converting that into an agentic experience through actions, through closing these actions and giving an experience of actually completing some actions for the customers. And where it gets stuck, it can at that point hand it over to the humans. That’s the power of not just responding back to a certain question with content, but actually performing actions through reasoning. That is what agentic AI is bringing in. Now, as these actions are being performed by the agentic AI, it’s important to understand that there is a power of autonomy that exists with this agentic AI. And with that power of autonomy, it is also critical to build in the right trust, security and privacy principles.

Prashant Vadlamudi [00:10:23]:
That’s why security, trust and privacy play a key role here. As the adoption of agentic AI increases, the agents have access to data models, they have access to data, which in a sense allows them to serve the customers better. So in this case, from a security standpoint, we need to make sure that the data governance processes and controls are appropriate, because the agentic AI strategy is only as good as the data strategy behind the scenes. The data governance models make sure that the agent is able to read the particular customer’s data and the context that exists, and then respond to that only. Right? So these sorts of principles around security, privacy and trust come into play as the power of autonomy is associated with these agentic AIs in this world of artificial intelligence.
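
As a rough illustration of that data-governance point, the sketch below scopes every retrieval to the records of the one customer the agent is serving. The record layout and in-memory store are hypothetical stand-ins for a governed database:

# Hypothetical record store; in practice this would be a governed database.
RECORDS = [
    {"customer_id": "c1", "data": "order history for customer c1"},
    {"customer_id": "c2", "data": "order history for customer c2"},
]

def fetch_context(customer_id: str) -> list[str]:
    # Row-level scoping: filter to the requesting customer's own records
    # before anything is handed to the model.
    return [r["data"] for r in RECORDS if r["customer_id"] == customer_id]

print(fetch_context("c1"))  # the agent never sees c2's data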

Karissa Breen [00:11:19]:
Okay, so I want to slightly switch gears and talk about the chatter around security of AI agents. And this is really important because we obviously want to grow innovation, but we don’t want to stifle it either because of the security element. So given your experience and your background, your pedigree, how do you sort of find the balance between innovation and securing these AI agents?

Prashant Vadlamudi [00:11:41]:
So the balance between trust and innovation tied to agents is the biggest challenge that all leaders are facing these days. Okay, balancing trust, security and privacy in the agentic AI world is imperative. There has to be a perfect balance, and we are all struggling to make sure that this balance is properly established. As the autonomy increases, the expectation for robust governance and secure guardrails, as well as the guidance that needs to be provided and applied to the use of these agentic AIs, only increases. Innovation, especially tied to agentic AI use cases, without guardrails could lead to serious and unexpected risks. This means that there has to be a perfect balance between trust and innovation. One of the key factors, as I was saying earlier, is making sure that the data governance process behind the scenes that powers the agentic AI actions is appropriately established. Classifying the data and implementing the right specific policies around AI use cases is very critical.

Prashant Vadlamudi [00:12:56]:
In addition to that, companies often face challenges while deploying new technologies, especially with agentic rollouts, and security considerations need to be put in place as these rollouts happen. Some of the considerations that should always be thought through and applied as the agentic rollout happens: making sure that as the agent gets deployed, the principle of secure by default is tied and associated with all deployments. The data being used to train and operate agents should always be properly governed and stored appropriately. Only need-to-know, least-privilege access to this data should be provided. In addition, as the data is being consumed by these agents to generate results, make sure that they are complying with various regulations such as GDPR and CCPA. Also, in addition to that, access control principles are key as the agentic AI systems are developed, applied and made available to consumers. Behind the scenes, the user permissions that are powering the agents should be appropriately applied, right? Authentication models should be there.

Prashant Vadlamudi [00:14:15]:
Access is limited to specific user roles. These are all basic considerations, tried and trusted security philosophies, and they should be applied as the balance between trust and innovation, especially in the field of agentic AI, is being established. I would also push back on the concept that trust and innovation are at opposite ends. I don’t think that’s true. I think trust is a catalyst and a business enabler, especially in the field of agentic AI, pretty much as it was in the technology field from the get-go. Agents, as they’re evolving, can take on more complex tasks and they can perform activities that free up additional bandwidth for the humans, right? If agents can perform repetitive tasks that humans were performing, that means the humans now have additional bandwidth to perform tasks that need their skills, as the agents have provided them that bandwidth by taking on these repetitive and time-consuming activities.
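
One hedged way to picture those least-privilege, role-based controls in code, with invented role and action names rather than any real permission model:

# Deny-by-default permission check for agent actions. Roles, actions and
# the mapping between them are illustrative only.
ROLE_PERMISSIONS = {
    "customer": {"view_order", "update_order"},
    "support_tier1": {"view_order", "update_order", "issue_refund"},
}

def authorize(role: str, action: str) -> bool:
    # Unknown roles or actions get no access at all.
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("customer", "update_order")
assert not authorize("customer", "issue_refund")  # blocked by the guardrail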

Prashant Vadlamudi [00:15:17]:
As that is happening, focus on AI safety and quality control also becomes critical. One other concept, as agentic AI capabilities are served by organizations to their customers (in our case, as Salesforce is serving them), is the philosophy of shared discipline and shared responsibility. It is making sure that the customers who are using these AI models and the AI capabilities offered by organizations such as Salesforce are well aware of what the shared responsibility model is between them and Salesforce in this case, and making sure that their part of the responsibility to secure the agents is applied and implemented properly. So what’s the balance here? What’s the balance between trust and innovation? We spoke about a lot of things. Agents have to be built in a trusted and secure manner from the inception. Secure by default should be the foundation as agents are being developed and made available to customers. And as agents are consuming data, make sure that the agents are grounded in trusted and governed data. Especially when agents are providing a fact or content in response to a certain prompt, it is key to understand that they’re not just guessing it, but that it can also be demonstrated back where this fact was cited from and where it came from. Is it fresh? Is it the most updated data which was collected to generate this fact in response to a certain prompt or inquiry, and does it meet the compliance requirements? And lastly, I will mention a philosophy being used in agentic AI use cases: the RAG model.

Prashant Vadlamudi [00:17:09]:
It stands for Retrieval Augmented Generation. As the RAG models are getting enabled, make sure that compliance is kept in mind. If I were to take an analogy, think of a student going to sit an exam. If they were in the exam answering all the questions using whatever they have memorized, that was the traditional standard LLM. However, a RAG-enabled LLM is where the student is able to answer all these questions, but they also have access to a textbook and the Internet, where they can not only derive the results out of their memory, but can also access this textbook and the Internet sites to provide the appropriate answer. If we take that analogy and put it in the context of RAG, and put the philosophy of compliance around that, making sure that the right textbook, the allowed textbook, and the right Internet sites are used to generate the results, that’s the key in making sure that compliance and security are baked in as these agents are made available to consumers and provide these capabilities.
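
To translate the “allowed textbook” analogy into a sketch: a compliance-aware RAG pipeline retrieves only from an approved corpus, and every answer carries its citation. The corpus, the keyword retrieval, and the wiring here are simplified stand-ins, not a real RAG library:

# Sketch of retrieval-augmented generation with compliance baked in: the
# retriever only searches an approved ("allowed textbook") corpus, and each
# answer cites its source so the fact can be demonstrated back.

APPROVED_CORPUS = {
    "refund-policy.md": "Refunds are processed within 5 business days.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    # Only the governed, allow-listed documents are searchable.
    return [(doc, text) for doc, text in APPROVED_CORPUS.items()
            if any(word in text.lower() for word in query.lower().split())]

def answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        return "I cannot answer that from approved sources."  # no guessing
    doc, text = hits[0]
    # A real system would pass `text` to an LLM; here we cite it directly.
    return f"{text} [source: {doc}]"

print(answer("How fast are refunds processed"))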

Karissa Breen [00:18:15]:
So I want to go back to guardrails. Now this is important because, given everything that’s happening, it’s relatively new, right? So with guardrails, would you say that people are probably going to, you know, undercook it or overcook it? Because there are still a lot of variables, things we don’t know. It’s still relatively new. There’s no real blueprint. So how do people start to establish those guardrails that are specific to their company, for example? Because it’s easy for us to sit here and talk about it, but again, there’s so much complexity to these things with different companies as well. So I’m keen to understand how someone would start to establish that and evolve it over time.

Prashant Vadlamudi [00:18:54]:
I would say that at the beginning, we’ll see an overuse of these guardrails tied to the agentic AI models. The key is, as the confidence in the AI models and the agentic AI use cases begins to grow, the guardrails will still exist, but how much blocking these guardrails do will start to taper down, because the agents will be trained better to give better results to the customers they’re interacting with. As an example, user authentication is a crucial component of agentic AI adoption, and it’s one of the foundational steps to build a secure agent. If you were to take authentication as a guardrail in production, as an agentic AI is being built, it will at the very beginning be built in a layered approach. The way it will be done is it will start off by defining the right roles and the scopes, then restricting access to the right data, governing the public and private actions, then enforcing guardrails right at the beginning. As the confidence in these agentic AIs starts to develop, we’ll see that access to data is highly restricted at first, and then, as confidence builds and the agents are trained on better data and better data models, they will be given more and more access to data.

Prashant Vadlamudi [00:20:27]:
Also, the control that will be put in place over the public and private actions that the agents can perform and enact will be tighter at the very beginning, and it will start to loosen up as the confidence and the data training around these models develop properly. That’s what we will see. I would say that security-conscious organizations, as they are developing agentic AI models to be served to their respective customers, will have tighter guardrails in place to begin with. And as the agents get properly trained on the data and the confidence in the results of these agentic AI use cases grows, we will see that the access these agents have to data will be increased slowly.
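
A minimal sketch of that tighten-first, loosen-later idea, assuming a numeric confidence measure; the tiers, scopes and thresholds are invented for illustration, not a recommendation:

# Layered, confidence-gated guardrails: roles and scopes are defined first,
# data access starts narrow, and it widens only as measured confidence in
# the agent grows.
GUARDRAIL_TIERS = [
    # (minimum confidence, data scope, allowed action types)
    (0.0, "single_record", {"read"}),
    (0.7, "customer_account", {"read", "draft_reply"}),
    (0.9, "customer_account", {"read", "draft_reply", "execute_action"}),
]

def active_guardrails(confidence: float) -> tuple[str, set[str]]:
    scope, actions = "single_record", {"read"}
    for threshold, tier_scope, tier_actions in GUARDRAIL_TIERS:
        if confidence >= threshold:
            scope, actions = tier_scope, tier_actions
    return scope, actions

print(active_guardrails(0.5))   # early deployment: tight guardrails
print(active_guardrails(0.95))  # mature, well-trained agent: looser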

Karissa Breen [00:21:11]:
So because these models are evolving, they’re being updated, et cetera, quite quickly, does that mean that with these guardrails and policies, et cetera, people just have to continuously keep updating them? Because it’s not something that’s going to stay static or stay the same for too long. So do you think that’s what people should be focused on as well, in terms of their policies, et cetera? Because things are moving significantly faster than before. How does that question sort of sit with you?

Prashant Vadlamudi [00:21:38]:
Yeah, I would say so. Yes, the guardrails will get updated. I think the philosophy of trusted agents will come into play: what’s the trust level on these agents? That’s the key. If we were to look at the traditional security world, when a new technology was introduced in the past, if a technology organization brought in a service provider from outside, at the very beginning of the use case the guardrails put in place for this service provider would be way tighter. And as time evolves and the confidence in the service provider increases, the guardrails will be tuned, tweaked and updated depending on that confidence. Right. The same philosophy applies with these agents. At the beginning, a security-conscious organization will put tighter guardrails on these agents.

Prashant Vadlamudi [00:22:30]:
And as the agents start delivering better results, and they are getting trained on better data, and the confidence in the output of these agents grows, especially tied to lower bias, lower hallucination, and better results in terms of serving a better outcome by reading the right data models and providing better output, as that confidence increases, the guardrails will also get updated. I would say that’s the direction it will take. The use cases of the agents will also come into play here. If an agent is being used to serve a very sensitive purpose, then of course we’ll see at the beginning that the guardrails will be way tighter. The risk impact of the agent delivering an incorrect result would be way higher. So as the confidence in the results increases, the guardrails will also get updated accordingly.

Karissa Breen [00:23:21]:
So you said before, trust level on these agents. When you say trust level, what do you sort of mean? Like, how do you sort of gauge, yes, we’ve got some trust levels on this agent?

Prashant Vadlamudi [00:23:32]:
When I talk about the trust level for a certain agent, I would say that as the output of these agents is evaluated, the bias score, the hallucination score, and the accuracy of the results, using the data they have been trained on and the data they have access to, as those improve, that’s when I would say the trust on these agents will keep increasing. I think that’s what I would rate as the trust score. We have noticed that as agents get trained, by which I mean they’re performing actions on an ongoing basis and the output, the efficacy and the accuracy of the actions are getting better, they push the limits past the point where, in the past, the agent would have handed off a certain action to a human. If we were to score one to ten, and one is the very beginning of the agent interaction, and the agent was handing off the interaction to a human at, let’s say, level five, whereas as they’re getting trained and delivering better, more accurate results, at some point they are now ready to hand off to the human at level seven, that’s when I would say the trust score for the agent has increased. And as this trust score is evaluated against an agent, of course we need to make sure that it is also delivering with low bias, low hallucination, and high accuracy of results. All of these factor in when the trust around the agent needs to be evaluated.
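
As a hedged illustration of that scoring idea, the sketch below folds bias, hallucination and accuracy into one trust score and raises the hand-off level as the score improves. The weights, thresholds and levels are invented for illustration only:

def trust_score(bias: float, hallucination: float, accuracy: float) -> float:
    # Lower bias and hallucination are better; higher accuracy is better.
    # Weights are illustrative, not a recommendation.
    return round((1 - bias) * 0.3 + (1 - hallucination) * 0.3 + accuracy * 0.4, 2)

def handoff_level(score: float) -> int:
    # On a 1-10 task-complexity scale: a low-trust agent escalates to a
    # human at level 5; a high-trust agent carries on until level 7.
    return 7 if score >= 0.9 else 5

score = trust_score(bias=0.05, hallucination=0.02, accuracy=0.96)
print(score, handoff_level(score))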

Karissa Breen [00:25:13]:
Okay, I want to get into the handing off to the human side of things. So historically in security (I’ve got a security background as well), security wanted to look at every single thing that we’re doing. And now we’re at this point in time where these agents are operating and performing tasks in the background and we’re sort of relinquishing control. Right. And in some respects, security teams have historically been viewed as helicopter parents, right? Always governing, looking at what people are doing, et cetera, panicking at certain things. So how do you think it sits with people now that we are relinquishing a lot of this control? Like, yes, like you said, if it can’t work it out because of the, you know, conditional logic and all of the stuff it’s been trained on, yes, it hands off to a human. But given you are a security guy yourself, how do you think people are responding to this? That, you know, the little birds have left the nest sort of thing, and the birds are just going to fly on their own a little bit more now.

Karissa Breen [00:26:06]:
Right. And that sort of worries people a little bit. Does that sort of worry you?

Prashant Vadlamudi [00:26:10]:
It does. I mean, it does. At Salesforce, trust is the bedrock of our organization. And when I say that’s the bedrock, it’s the core philosophy that we use as we perform all of our day-to-day operations. So we want to make sure that while the autonomous agents can boost efficiency and productivity, they’re doing so securely and keeping compliance in mind. That’s the key factor. We don’t want a boost of efficiency and productivity at the cost of security and compliance. That’s why building the agents, the trusted agents, on trusted data sources is key.

Prashant Vadlamudi [00:26:47]:
We want to make sure that as the agents are given more autonomy, the concepts of proper security are applied behind the scenes, so that the agents, with this autonomous power, can also deliver responsible and secure results. Right. That’s the key. A strong foundation is set by defining the right boundaries and guardrails, and also keeping in mind that at some point in that whole interaction with the agent, humans should be in the loop or ready to be brought into the loop, and maybe escalation to humans should happen; defining the perimeter of where that escalation should happen is key. Within our own security organization, we are not only working towards securing the Agentforce product that Salesforce has, but we also have many successes in applying Agentforce capabilities through agentic AI in our own day-to-day security practices. We today use agentic AI through the power of Agentforce in our own cybersecurity operations, incident response, responding to customer inquiries, and also performing third-party security. We do this by having identified the repetitive tasks that are performed by a security organization on an ongoing basis across these domains, and where we know that these are repetitive and there is enough data to train the agents to provide these results, we have incorporated agentic use cases. We use them extensively within our detection and response team, within our customer inquiry teams, as well as third-party screening.

Prashant Vadlamudi [00:28:27]:
And we’re expanding that beyond these three to compliance as well as the secure development lifecycle. Going back to your question about at what confidence we relinquish control to these agents as they’re performing autonomous actions: the key is to set up strong and clear policy guidelines. These are super-high-level rules for all the staff, but more importantly, very specific rules around data usage that are applicable to these agents. In addition to that, making sure that the traditional, battle-tested philosophies of managing vulnerabilities, maybe using practices like the OWASP Top 10, enabling preventative guardrails, as well as putting detective monitoring and alerts in place for these agents, is key. All of these together enable a proper, secure and trusted use of the agents. And this is what gives us the confidence to give autonomous power to these agents to perform actions which were performed by humans. And then of course, as I mentioned earlier, we will have humans in the loop, because the agents will hit a block and a threshold where the humans need to be brought in, or the results that are generated by the agents need a second level of review to build additional confidence.

Prashant Vadlamudi [00:29:52]:
That’s the point where we bring in humans to perform those very specific skill-level tasks and activities. That’s the position we take: having agents and humans running functions such as security at its core.
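
For the detective-monitoring point a few lines up, here is a rough sketch in which every agent action is written to an audit log and an alert fires once a simple action budget is exceeded (a stand-in for real anomaly detection); the identifiers and threshold are illustrative:

from collections import defaultdict

action_counts: dict[str, int] = defaultdict(int)
MAX_ACTIONS_PER_WINDOW = 100  # illustrative budget per review window

def alert(agent_id: str) -> None:
    # In production this would page the detection and response team.
    print(f"ALERT: {agent_id} exceeded its action budget; human review needed")

def record_action(agent_id: str, action: str) -> None:
    # Detective control: log every action, alert on anomalous volume.
    action_counts[agent_id] += 1
    print(f"audit: {agent_id} performed {action}")
    if action_counts[agent_id] > MAX_ACTIONS_PER_WINDOW:
        alert(agent_id)

record_action("support-agent-1", "update_order")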

Karissa Breen [00:30:06]:
So there’s another quick question on that. Do you think, generally speaking, because security teams are generally understaffed, fatigued, alert-fatigued, stressed out, and churn in security is obviously a massive one (people are there for a few years and they move on), do you think it’ll get to the point where people are like, okay, stuff’s going on in the background, and even if there was an alert like, okay now John, you need to intervene, something’s gone wrong, do you think there will still be a level of, laziness is not the word I want to use, but perhaps overlooking certain things? Because they’ve got so many things that they’ve got to do day to day that it’s like, oh, now I’ve been alerted to intervene and do something, make a decision, I just don’t have the time. So I’m just going to have to, you know, maybe not think about it as much or as thoroughly, or use my critical thinking skills as before.

Prashant Vadlamudi [00:30:53]:
In the security space, I would say that there are some core foundations and fundamentals of security which need to be addressed and looked at by humans, especially in this agent-first world that we are living in and that is going to be a part of our lives moving forward. Security is paramount, and the key objective in a security organization should be that as agents are being built and as AI is used to perform these activities, the fundamental questions that we should always be aware of, and be able to answer by resourcing ourselves properly, are: what is powering our AI? And second, where did this data come from, and can we prove it? So I think these philosophies are super key. Making sure that the security organizations are staffed properly to perform and support these actions is key. Yes, you’re right that there are multiple repetitive tasks that happen that are time-consuming and take up a lot of effort. Identifying those and classifying them separately from actions that need the human skills and the capabilities of true security professionals, that’s critical. And using agents to perform those repetitive tasks will be a really good use case. That’s what we are doing at Salesforce to mature our own security organization. We see a mix of agents performing these activities on an ongoing basis.

Prashant Vadlamudi [00:32:23]:
We already do that in our incident detection and response space, customer inquiry space, and third-party security engineering space as well. And we complement that with the right skills and the humans that we have in the security org to provide a good, mature security posture of trusted, secure and compliant service in our agentic offerings.

Karissa Breen [00:32:50]:
So in terms of the future with AI agents, I know we’ve covered a lot of ground, and the future isn’t written; there are still a lot of things that we’re trialing and exploring in this space. I understand that. But what are your thoughts, or what are some of the things that you’re thinking about, as we traverse into the end of 2025 and beyond?

Prashant Vadlamudi [00:33:10]:
So the promise of AI for enterprise is clear. AI is here. It is going to be in our lives, both in our personal and professional lives. But it requires a deliberate focus on trust, security and governance as these use cases and their adoption increase. Today it provides, and it is going to continue providing, real business value. That’s what AI will be used for. It will also be used for providing enhanced customer experiences and to increase operational efficiencies. The key is to make sure that as this is happening, and the use of AI agents is increasing to provide these values and services, it is delivered securely and responsibly.

Prashant Vadlamudi [00:33:56]:
My recommendation to any organization looking to balance AI benefits with security is to make sure that the leaders and the experts within that organization have a unified understanding as AI rapidly develops and integrates into their organization; to make sure that all the individuals in the organization have the access and the avenues to ask for help and advice as the implementation of AI expands within their organization, and also that it complies with the internal policies, regulations and the business direction. And then one other thing that should be done is to create and support the concept of champions that will promote cross-functional alignment and innovative use of AI aligned to the standardized stance of that organization. The goal is to understand that AI is here to stay. It is already in our lives. The use cases will expand, and organizations will look towards AI to provide better business value and increased efficiencies to their customers overall, and also internally. Doing so securely, in a properly governed manner and responsibly, is the key, and that’s the direction that needs to be taken and focused on.

Karissa Breen [00:35:13]:
So Prashant, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?

Prashant Vadlamudi [00:35:19]:
Agentic AI has already taken its place in this global economy. It’s here to stay, it’s here to expand, and it’s here to help us, provide better operational efficiency, and deliver better business results and values. As a security professional, making sure this happens securely is what matters. As the use of AI is increasing, the adoption of AI is increasing, and the serving of AI to provide our services to customers is increasing, doing so securely, in a trustworthy manner, is key. Making sure that the AI strategy is developed and supported by a very mature data governance strategy is the key. The data fed into the AI, or the data that is used to train the AI, is what will make the AI better. Making sure that is done securely and in a proper, trustworthy manner is the key.
