Federico Torreti [00:00:00]:
There may be benefits from one type of model, while business process automation or a research application in healthcare may require a different capability. And this is where our approach recognizes there is no one-size-fits-all solution. But then there is a line that is important not to cross, which is where model choice becomes a hindrance.
Karissa Breen [00:00:39]:
Joining me now is Federico Torreti, Senior Director of Product for AI & ML at Oracle. And today we’re discussing organizations getting choice and flexibility for AI experimentation. So Federico, thanks for joining and welcome.
Federico Torreti [00:00:52]:
Thank you for having me.
Karissa Breen [00:00:53]:
Okay, so Federico, recently the xAI Grok models were announced as being available on OCI. So perhaps, if people missed that update, give us a little bit of a lay of the land. What does that include? What does this mean? So, yeah, people are aware of what that means for them.
Federico Torreti [00:01:10]:
Well, thanks for the question, KB. We're very excited that xAI has decided to partner with Oracle and offer the Grok family of models through our Oracle Cloud Infrastructure offering. And this just emphasizes our key messaging around Oracle offering an enterprise AI strategy that is centered around choice, security and enterprise readiness. This partnership emphasizes the fact that, as part of our strategy, we are giving OCI customers access to cutting-edge AI models while maintaining the enterprise-grade security and governance that they need and have come to expect from Oracle.
Karissa Breen [00:01:50]:
Okay, I've got a couple of questions in terms of what's happening in the market now. There are more of these AI players appearing, et cetera. Obviously we've seen DeepSeek and all the others really emerge. But given your role and what you're seeing, what are the main questions that customers are asking around artificial intelligence, or maybe even machine learning? Is there anything you can share?
Federico Torreti [00:02:12]:
Yes, absolutely. There are a number of questions that arise in conversations with executives across a broad range of industries, and they ultimately boil down to three key aspects. One aspect is actually counterintuitive, and it's around model choice. You see, 2023 and 2024 were very much years of experimentation, and many enterprises, now that they're venturing well into 2025, have really started to ask the question: what is the true return on my AI investment? And when we unpack that question, most of the discussions ultimately anchor on the fact that there is inherent complexity in choosing a model for their applications. One of the key challenges enterprises have is that, as they're trying to derive value from large language models and in general from generative AI technology, they face the challenge of having too many unpredictable models to choose from, and the fact that it is also difficult to ground models in the specialized knowledge of their enterprise. And the approach that we have taken at Oracle is important because we are offering what we call curated choice. In other words, while it is relatively easy for a company to take an idea into a shiny demo, the last mile of bringing an agentic solution or generative AI solution into production, the final 20% of development, is extremely difficult and complex.
Federico Torreti [00:03:44]:
And part of what we're doing through this specific partnership with xAI, and the work we've also done with Meta and Cohere, is to provide OCI customers with curated models that are suitable for enterprise-specific applications. And so, while we are expanding customer choice, we're doing that in a very curated manner. The second part is around infrastructure. Customers are looking to get the most value out of their data, and it is also important for them to understand the implications of data movement, whether that is to respond to specific regional or sovereignty requirements, or when transferring their data outside of their data centers to a third-party model provider of choice. And so again, this is where having Oracle's guarantee around the enterprise-grade security and governance that they need matters. And what's particularly important, as it relates to the xAI announcement that we just made, is that all the data that is sent to Grok models is processed on zero data retention endpoints, which provides an extra layer of protection that enterprises require.
Karissa Breen [00:04:55]:
Okay, so there's a couple of questions I'm keen to ask now, going back to this. With AI in general, AI interfaces in general, people are saying things like, which one's better? And it's hard to say, well, it depends. But then what's your view on how you determine that this is a better option than perhaps the others? Because now, as I said, there are more appearing than before. Is that a question that people are asking? And I know that's hard to answer, but I'm just curious, because people maybe are still unsure; they don't know what they don't know. So the question of whether one is better than another option, is that something that you're hearing?
Federico Torreti [00:05:38]:
Oh, absolutely, and you're spot on. Ultimately it boils down to the fact that there is no single model that is going to rule them all. And this is where choice is important. So model choice is critical because different business scenarios require different capabilities. As an example, through our OCI Generative AI service, we offer customers the flexibility to choose between open source and proprietary models from various providers, whether that's Meta, Cohere or now xAI's Grok models. And again, as an example, for specific content creation there may be benefits from one type of model, while business process automation or a research application in healthcare may require a different capability. And this is where our approach recognizes there is no one-size-fits-all solution.
Federico Torreti [00:06:24]:
But then there is a line that is important not to cross, which is where model choice becomes a hindrance. And this is where it is important, and where we've been very deliberate, in our choice of offering a curated selection of models, because it allows us to drive the conversation with customers by working backwards from their specific business problem, working backwards from the specific process automation that they're trying to solve for, to determine and identify the specific model, or combination of models, that is best suited for those applications. As opposed to the reality that many companies are facing right now, which they've shared with me and the team: it is very daunting when you are exposed to 20, 30, 40 different models to select from.
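To make the idea of working backwards from the use case concrete, here is a minimal, hypothetical Python sketch of mapping business scenarios to a curated shortlist of models. The use-case labels, model names and rationale strings are illustrative assumptions, not Oracle's actual catalogue or SDK.

```python
# Hypothetical sketch: route a business use case to a curated model shortlist.
# Use-case labels and model names are illustrative, not an actual OCI catalogue.
from dataclasses import dataclass


@dataclass
class ModelChoice:
    name: str        # illustrative model identifier
    rationale: str   # why it is on the shortlist for this use case


# A curated map: start from the business problem, not from the model list.
CURATED_CHOICES = {
    "content_creation": ModelChoice("creative-llm-a", "strong long-form generation"),
    "process_automation": ModelChoice("structured-llm-b", "reliable structured output"),
    "healthcare_research": ModelChoice("reasoning-llm-c", "stronger multi-step reasoning"),
}


def pick_model(use_case: str) -> ModelChoice:
    """Return the curated choice for a use case, or fail loudly instead of
    exposing callers to an open-ended model garden."""
    try:
        return CURATED_CHOICES[use_case]
    except KeyError:
        raise ValueError(
            f"No curated model for use case '{use_case}'; "
            f"known use cases: {sorted(CURATED_CHOICES)}"
        )


if __name__ == "__main__":
    choice = pick_model("process_automation")
    print(f"Use {choice.name}: {choice.rationale}")
```

The point of the sketch is that the mapping stays small and is owned by whoever curates it, so application teams pick from a handful of vetted options per scenario rather than 20, 30 or 40 models.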
Karissa Breen [00:07:20]:
And when you said before that it's a hindrance, is it just more decision fatigue because there are so many now? So it's like, which one do I go with? Which one is, quote unquote, better? Which one is more suitable? How do you get to the point where you're like, okay, this makes sense, this is the option I'm going to go with? Because, I mean, look, if we were to zoom out for a moment, everything with AI and all this sort of stuff happening in the space, it is still new-ish for people, it's still very uncharted waters. So would you say that people are, to use the operative word we've been saying, experimenting on what may work and what may not work in terms of the models? Are we still going to see that, and maybe people move away from certain things, or is it going to be a little bit more of a change? People aren't going to be locked into these sorts of things, obviously. But yeah, I mean, it's still early days for companies.
Federico Torreti [00:08:05]:
It very much is early days for companies. And as I mentioned, 2023 and 2024 were very much the years of experimentation and prototyping. 2025 is the year where many enterprises are really anchoring on what's the R in ROI as it relates to their AI investments. And you ask an interesting question: is it just decision fatigue, or is there more to it? It really goes back to why companies are trying to do whatever it is they're trying to do with generative AI technology. It ultimately boils down to the implications of a model choice for the ability to deliver business value. That is a key part of why this decision is complex. Because what you find is that there is a cycle of POCs, as I call it, where you have a company that engages in building a specific solution that's very much anchored around one model, recognizing two months in that a certain accuracy or certain business outcomes are difficult to obtain.
Federico Torreti [00:09:12]:
And as they recognize that, it boils down to there being better value to be extracted out of a different model provider. This ultimately cycles back to what you called decision fatigue, because you have companies that are getting trapped in this decision loop and are essentially deadlocked. And ultimately this spirals out of control into frustration over AI investments that are not paying off or meeting expectations. And it is not uncommon to see companies grow frustrated with their investments because they've been exposed to essentially a broad model garden without much anchoring around the specific use case that they're trying to solve for.
Karissa Breen [00:09:53]:
Okay, so this is what was coming up in my mind as you were speaking, and I'm curious to know. Let's just look at OpenAI, generally speaking. Someone asks it a question, right? Or whatever interface you want to use. Do you think there's more of a chance, generally speaking, of hallucinations? Because, and I had this conversation probably about five days ago with a guy, if there's content being written by AI, it's sort of like the old-school photocopier: when you take a photocopy of something and then you take a photocopy of the photocopy, the fidelity keeps getting lost. Right. So would you say, generally speaking, given your experience, that there is a higher chance of more hallucination because of the content that's being generated? If we just keep developing AI content, and people keep training AI models on AI-generated content, does it keep getting weaker, or are there more hallucinations out there, versus perhaps a small language model, or, to your point before, companies using it internally on their own stuff to train it? What are your thoughts on that? Because I find this super interesting in terms of what's really going to be the source of truth, generally speaking. And for someone who works in media and content, what I've also observed is the voice starts to sound super the same.
Karissa Breen [00:11:08]:
The same sorts of words we start to see coming through. It feels like maybe that uniqueness has been lost, that authenticity, if I were to just focus on content.
Federico Torreti [00:11:16]:
You're highlighting, as an example, what I call a defining characteristic of what is essentially a probabilistic model. We refer to these solutions as large language models, but essentially they're just statistical models that tend to infer the next likely output based on a series of inputs. And you called out OpenAI, but it's not an OpenAI problem. Generative AI bots and generative AI solutions based on foundation models have produced at least some form of hallucination. And this occurs based on the statistical modeling and the information that a specific model has access to. Now, if you acknowledge this as a company, then the right question to ask is not whether or not you can eliminate hallucinations, but rather how you can manage hallucinations, what type of guardrails you can put in place, and what the implications of specific hallucinations are for a specific use case. There are use cases where you want to over-index on a specific type of model performance, and so it is important to combine the autonomy of a large language model with a human in the loop. There are other use cases where it may not be as critical if an answer is incorrect. And so it is extremely important to acknowledge this, and to acknowledge that not only generative models but also reasoning models can be affected or can expose users to hallucinations. Now the next immediate question is: can I actually use this technology in any production workloads? Can I expose my customers to any of this? The approach here is to ground the applications in the specific business process you're trying to solve for.
Federico Torreti [00:13:08]:
And this is, again, where I go back to the fact that Oracle's enterprise AI strategy is fundamentally different from many other hyperscalers', because what we focus on is making AI relevant for enterprise and government contexts at every layer of our stack. And so we're not offering AI as a standalone service; we are embedding it across our full portfolio, from development tools to applications, infrastructure and databases, as we bring AI directly, as an example, into our SaaS offerings, which means that customers can leverage AI within their existing workflows rather than having to build entirely new systems. We're able to combine the best data with the best technology to give customers hyper-relevant results that are grounded in their unique business data. So grounding answers in enterprise data sets is a mechanism, a way through which you, as a company, can manage the phenomenon of hallucinations. And I think it also helps to eliminate the common challenge of AI being impressive in demos but difficult to implement in a real business context. It is not the only approach, but it is an approach that is very effective in the context of enterprise applications. Another one many talk about is automation, KB. And look, there may be specific use cases where it may be okay for a specific business automation, say you and I work in the same company and an automatic message is sent from me to you as fellow colleagues.
Federico Torreti [00:14:34]:
But the risk exposure of sending a message externally is higher. Think about the implications for brand positioning. Think about the implications for, as you were saying, content creation: making sure that your editor is in the review loop before a specific piece of content or article is published to your audience becomes important, because it carries a different risk exposure. So having a very thoughtful risk management approach to AI applications is particularly critical.
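As a rough illustration of the risk tiering described here, where internal automation flows straight through but externally published content keeps an editor in the loop, this is a small hypothetical Python sketch. The risk tiers and the `generate_draft` and `review_queue` stubs are assumptions standing in for a real model call and a real review workflow.

```python
# Hypothetical sketch of a human-in-the-loop gate keyed on risk exposure.
# Risk tiers and stub functions are illustrative assumptions only.
from enum import Enum


class Risk(Enum):
    INTERNAL = "internal"   # e.g. a colleague-to-colleague status message
    EXTERNAL = "external"   # e.g. customer-facing or published content


def generate_draft(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM; it returns a draft, not a final answer."""
    return f"[draft generated for: {prompt}]"


def review_queue(draft: str) -> str:
    """Stand-in for routing a draft to a human editor before release."""
    print("Queued for editor review:", draft)
    return draft  # in a real workflow this would block until approved


def handle(prompt: str, risk: Risk) -> str:
    draft = generate_draft(prompt)
    if risk is Risk.EXTERNAL:
        # Higher risk exposure (brand, published content): keep a human in the loop.
        return review_queue(draft)
    # Lower-risk internal automation can flow through without manual review.
    return draft


if __name__ == "__main__":
    print(handle("weekly status update to a colleague", Risk.INTERNAL))
    print(handle("article to be published to our audience", Risk.EXTERNAL))
```

The design point is simply that the guardrail is decided per use case up front, not bolted on after an incorrect answer reaches an audience.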
Karissa Breen [00:15:05]:
So I've been doing a lot more listening to YouTube podcasts around AI and just hearing what people are saying. Obviously there are very conflicting views out there. One of the things I heard someone say, and I don't know whether you have any insight on this going forward: let's use a CRM, for example. Back in the day, you had to put in all the data, and if you wanted to know something, you had to manually navigate through it. They're saying you'll just be able to ask the chat function and get an answer, moving more towards that style.
Karissa Breen [00:15:31]:
Would you agree with that? And what does that then look like moving forward in terms of critical thinking? Are we potentially decreasing our critical thinking because we don't really have to work things out as much as back in the day, whereas now we can just ask something and get an answer?
Federico Torreti [00:15:47]:
That is such a relevant question. I've grown a strong belief that there are very few moments in this continuous change in history that present the opportunity to fundamentally redesign technology patterns. And this is an exciting time to be in technology, because what we are seeing is that we're entering an era where the boundaries between SaaS applications, like the CRM you're referring to, and others, like your email systems, are becoming more fluid. How businesses operate and how people work is changing, and we are entering a world that is far more dynamic and intelligent than the traditional world of API integrations. And this shift is particularly exciting for me and for Oracle because it builds on everything that I've personally learned about driving innovation and embracing change. And what is intriguing is the fact that the path forward isn't clear. So I do believe that we are at the beginning of a new chapter in enterprise software, where the ability to seamlessly connect and enhance business workflows across systems is essentially redefining what's possible. Now think about this.
Federico Torreti [00:16:56]:
Ultimately, siloed data leads to fragmented insights. You have information in your CRM system, but tell me, when was the last time your CRM system contained everything you needed to know about a specific sales lead? What you often find is that it has probably, I don't know, 70 or 80% of the information, and then you have the remaining information scattered across other sources. What we are seeing is this opportunity to fundamentally rewire the enterprise, where the pattern of siloed apps and systems that today require employees to extract data from each one and synthesize insights is fundamentally changing. What we can do with this technology is fundamentally shift this, making it a lot easier for people to process large amounts of data. If there is one thing this technology can do extremely well, in the context of augmenting how we go about our day and our work, it is exactly the ability to process significant amounts of information. But we also need to be aware that there is almost an experience paradox.
Federico Torreti [00:18:02]:
It is true that there is this architectural shift that's coming. Basically, it's allowing us to reimagine entire user experiences across a company's stack. And so what that means is that we're fundamentally changing how you and I interface with knowledge, or can even create app experiences, in the case of our Code Assist tool. But what we're also seeing is that companies can produce hyper-personalized experiences and high-quality content that's very much user-aware. And all of this is particularly fascinating because it's essentially delivering on the promise of a custom experience that provides the information you need to know, when you need to know it, where you need it. And so when we think about what AI is doing there, there are multiple shifts happening, right? There is workflow-level innovation being driven by SaaS applications. There is an experience-level breakthrough right in front of our eyes: not only are chat and conversational experiences very interesting and intuitive, but there is a fundamental shift, we're talking about voice, that is changing how humans interact with computers. And then the role of computers in general is going to change and evolve from machines that were just computation and automation machines. Yes, they will do calculations, they will drive productivity, they will help us augment how we complete and better execute business processes. And so I think it's going to be a very, very profound change, because these machines are changing and evolving from being just computational, number-crunching devices into true personal assistants in our personal lives.
Federico Torreti [00:19:52]:
And as a result of that, I think the changing role of computers in our lives is going to be very important. And then, as a result of that, the nature of work is also going to evolve, and we're going to see more and more companies having opportunities to find what I call top-line opportunities for growth, where their workforce, by nature of being more productive, is going to have more time to do value-add work and get to the number of things they couldn't get to before. Right. Imagine if your workday were 48 hours instead of 24 hours. Imagine how many more things you could do.
Karissa Breen [00:20:32]:
So true. And I mean, I'm asking a lot of these questions because I'm going to a lot of events and there's still a variance of opinions, and given your role and experience, like I said, it's still early days. So I think addressing these questions is really, really important, so people have a bit of insight into what this means moving forward, how they start to leverage AI in their organizations, and what that looks like. And so that moves me to the next thing; I'm going to take a 2 millimeter sideways step. I'm curious to understand AI development on OCI. What does that look like? I know we've talked more generally about AI, how companies are leveraging it, et cetera, and some of the concerns people may have. But what does this look like now, given the announcement, and how do we move forward?
Federico Torreti [00:21:20]:
Well, one of the most common pieces of feedback that we hear, and I mentioned this at the beginning, is around two challenges: data security and practical implementation. Customers, on one side, are excited about AI's potential, but they need assurance that their sensitive business data is protected and governed properly. And that's why Oracle's zero data retention approach and our enterprise-grade security capabilities are so important, especially in the context of the xAI announcement. The second bit of feedback is about moving from proof of concept to production. Again, many organizations are struggling with making AI work with their existing systems and data, and they're looking for AI that's grounded in their unique business context. And this is why our enterprise AI strategy within Oracle Cloud is very much grounded around being available at all layers of the stack, whether it's our infrastructure, our database or our SaaS applications. AI is very much something that is built throughout the stack and not just a bolt-on. The availability of model providers like xAI, through the Grok 3 family of models, is enabling our customers to really deliver against their specific enterprise use cases. And there are two aspects to this.
Federico Torreti [00:22:41]:
When Oracle got into the cloud business, we had the opportunity to really look at the cloud industry with a second-mover advantage. And part of it is really around the fact that we want to make sure everybody understands that with OCI we're really changing the dynamics of the cloud industry. We were focused on scale, and when we think about scale, it's not just large regions, it's a ubiquitous presence globally. The other piece, KB, is that Oracle Cloud and our AI strategy focus on modularity. We are in a position, as I was mentioning, where we came into the cloud game with our second-mover advantage and really had an opportunity to think about building the cloud so that it can be distributed differently than other hyperscalers. We have created dedicated regions that can now shrink down to three racks and are deployable into anyone's data center anywhere in the globe. And our AI strategy is differentiated: focused on enterprises and governments and supporting sovereign regions. And all our AI offerings, including our generative AI service and the provision of xAI models, are available irrespective of the deployment strategy.
Federico Torreti [00:23:47]:
So again, the key term here, to your question, is really enterprise AI, and what makes us different from anyone else in the industry is really bringing the latest technology and making it relevant for enterprise and government contexts.
Karissa Breen [00:24:01]:
And so do you think now, and I know we've been talking about experimenting with which is best, et cetera, do you think this will continue in terms of experimentation, with companies working out what they're looking for and what the best solution is for them? Are we going to see this now for the next couple of years? That feels like a long time in the AI world. But is that the reality until people find what works for them and what makes sense as well?
Federico Torreti [00:24:26]:
I think we'll find two vectors. Vector number one is choice: it's very hard for me to believe in a future where customers won't want to have choice, not only in how to deploy cloud and AI, but in the type of large language models, whether open source or proprietary, that they use. And our partnership strategy with xAI, Cohere and Meta really builds on this, and I think that's a very durable tenet, if you will. The second thing that I think is going to be very durable is the fact that data is important, and Oracle's AI advantage is that we combine the best data with the best technology so that customers can actually have the best possible AI application for their enterprise use case. With one source of integrated business data, combined with the highest performance, scalability and security available, we are able to embed AI into existing workflows and applications for enterprises and give our customers hyper-relevant results. So again, this goes back to the fact that it's very difficult for me to believe in a future where customers don't want something that's relevant for their business. And so I think these two dimensions are extremely durable.
Federico Torreti [00:25:34]:
To your specific question around whether we are going to see experimentation: I think experimentation is going to be there, and it's very hard for me to believe that companies won't continue to experiment. I think we will see a lot more, and faster, iterations, and this is where the value of having some of these capabilities, like the xAI Grok 3 family of models, available as part of a fully managed service like the OCI Generative AI service enables customers to run through these iterations very quickly. But indeed, I do think it is going to be extremely important, and we will continue to see companies wanting to move and iterate quickly to understand how to best answer the question, whether for their board or their customers, of how AI is ultimately enabling them to provide better and differentiated customer experiences.
Karissa Breen [00:26:25]:
You also mentioned something before around what the return is, and I want to go back to that because that's a really great question in terms of ROI and all the questions these businesses ask. So what is the return, and what does that look like, would you say? Or what is it that people are trying to ascertain?
Federico Torreti [00:26:42]:
Well, as I was mentioning, many organizations struggle with making AI work with their existing systems and data. And what it ultimately boils down to is that customers are looking for an AI solution that's grounded in their unique business context and not generic responses. And this is where being able to tie a specific AI technology into their existing workflows, and enable them to leverage their integrated business data to address this directly, is particularly important. That is what they're trying to ascertain: how is this technology actually helping me either redesign a specific customer experience, add additional value to my customers, and ultimately drive top-line or bottom-line efficiencies? We're also seeing an evolution: typical engagements start with companies looking at AI as a productivity enhancer, and then over time they also look at what other avenues they have, as a company, to extract value out of these technologies. And very quickly, to the earlier comment, you see this realization, which more mature organizations have: actually, it can not only help me do my existing job faster, I can now do much more, because I have this extra time available and can deploy my resources into priorities that I couldn't get to before.
Federico Torreti [00:28:11]:
And so we are seeing this acceleration of business outcomes beyond just productivity enhancement, into being able to tackle new investment areas that companies couldn't get to before.
Karissa Breen [00:28:24]:
Yeah, no, that is really interesting. And wouldn't you say, though, generally speaking, people won't find it too generic and they will extract the value? Because I was even thinking of use cases in what I do day to day, to be like, oh, if I knew that, that would make it a lot easier; or even some insights derived from leveraging AI to quickly compute and, like you said, do things faster, would be significantly helpful. So wouldn't you say overall people will start to see significant value, that it's worth the investment?
Federico Torreti [00:28:53]:
In the context of an enterprise, you may be thinking about enterprise search, and there is value in redesigning, or enriching and improving, the traditional search experience with generative AI. We're seeing that deployed in consumer applications too. Today we have a number of customers that are using OCI services to reimagine customer service experiences, and, if you will, that's also in many cases a search use case, where you need to troubleshoot a specific issue by navigating documentation, et cetera. And yes, you can make all these experiences much better. But let me ask you a question, KB: would you find that experience to be 10x better if it were more relevant for you, as opposed to providing you with a generic answer?
Karissa Breen [00:29:41]:
100%, it has to be relevant. Because, like you were saying before, and like I was saying, when generic content comes through to us, everyone sort of sounds the same, because it sounds generic, right? But if it was more focused and it was like, hey, it sounds like KB asking me that question, 100%, that makes more sense.
Federico Torreti [00:29:56]:
And the same is relevant for enterprises. So in the context of the enterprise, this is where, as part of our AI strategy, we don't really bring data to AI; rather, we're building AI into our database offering so we can bring AI capability to the data, so that customers don't have to move their data. So there are better controls, better governance, better grounding, and better customization and personalization.
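To sketch what bringing AI capability to the data could look like in practice, here is a hypothetical Python outline that retrieves only the few relevant rows where the data already lives and grounds the prompt on them, rather than exporting the data set to a model provider. The table, columns, connection details and the `ask_model` stub are assumptions; python-oracledb is used only as one possible client.

```python
# Hypothetical sketch: ground a model prompt in data retrieved in place,
# instead of copying the data out to the model provider.
# Table name, credentials and ask_model() are illustrative assumptions.
import oracledb  # python-oracledb client; any SQL client would do


def fetch_context(conn, customer_id: int) -> list[str]:
    """Pull only the handful of recent rows needed to answer the question."""
    sql = """SELECT note_text
               FROM support_notes
              WHERE customer_id = :cid
              ORDER BY created_at DESC
              FETCH FIRST 5 ROWS ONLY"""
    with conn.cursor() as cur:
        cur.execute(sql, cid=customer_id)
        return [row[0] for row in cur]


def ask_model(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM endpoint."""
    return f"[answer grounded in a prompt of {len(prompt)} characters]"


def grounded_answer(conn, customer_id: int, question: str) -> str:
    context = "\n".join(fetch_context(conn, customer_id))
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)


if __name__ == "__main__":
    conn = oracledb.connect(user="app", password="***", dsn="dbhost/service")
    print(grounded_answer(conn, customer_id=42, question="What issues are open?"))
```

Because only the few rows needed for the answer ever leave the database, access controls, governance and grounding stay where the data is, which is the control point being described here.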
Karissa Breen [00:30:19]:
So the other thing I'm curious to understand, perhaps, is the whole sovereign piece. I'm really curious to understand what it actually means to run AI onshore. The reason why I ask that question is that maybe even 12, 18, 24 months ago, everyone was talking about sovereign capability, in the media, in what was coming up in interviews. It sort of died down, then made a bit of a resurgence. Even last night I was at an event and someone was talking about it. And obviously these things come in waves, but I'm really curious to understand what it actually means.
Federico Torreti [00:30:49]:
Data sovereignty is absolutely critical, especially for government and enterprise customers in regions like Australia. Our approach to sovereign AI is built on our unique cloud architecture. We can deploy dedicated regions that can scale down to just three racks and can be deployed into anyone's data center anywhere in the world. When it comes to data sovereignty, it really means that customers have complete control over where their data resides and how it's processed. And for sovereign AI requirements, we can provide the full AI stack, from infrastructure to models, within a customer's own geographic boundaries or even within their own facilities. This helps us address both regulatory requirements and security concerns. We have extensive experience with sovereign AI applications, and we have really learned that there are five key pillars to enabling sovereign AI.
Federico Torreti [00:31:35]:
Well, first and foremost, you have to have a comprehensive AI portfolio. You need strong data residency controls, so that data can be managed within specified boundaries. Data privacy controls enable customers to determine who can access what, and where. And then, of course, legal controls, and security and resiliency. Legal controls, so think about compliance, certifications, contracts and experience that are relevant for the specific geography. And as for security and resiliency, it's really about full-stack security and whatever regional resiliency you can provide. And when you look at our enterprise-grade capabilities today, they're really there to ensure that companies have a clear path to strong data governance, management and security while still accessing the latest AI technologies.
Federico Torreti [00:32:21]:
Like I mentioned, with the most comprehensive AI portfolio that's available. And when it comes to sovereign AI, if you will, it's really just about giving customers the choice of how to deploy both cloud and AI. And we do that whether it's in our public cloud, in dedicated regions, or in their own sovereign environment. So it's particularly important to recognize that with Oracle Cloud, customers can build digital sovereignty guardrails for their business, so they can maximize AI innovation while managing the associated risk.
Karissa Breen [00:32:53]:
So you mentioned privacy before. Where do you think that sits now? Obviously this is a security podcast, so I'm curious to understand it from that perspective. And I mean, it's interesting, because as the years have progressed I hear a lot less about privacy on the show, in the media, et cetera. Because nowadays, look, it's hard. If you want to live in today's day and age, you need to operate on the Internet, and with that you've got to forgo X, Y or Z. Even banks now, you can't even go into a bank anymore; it just doesn't exist, it's all automated. So with a lot of these privacy concerns, yes, whilst we understand that we can't have data breaches and all those sorts of things, are people still really concerned about the privacy side of things? And I ask that only because it's just not something that I'm actively hearing about from a media perspective nowadays, versus even five years ago, for example.
Federico Torreti [00:33:43]:
I think maintaining privacy is absolutely critical. Protecting data and intellectual property is one of the most common pieces of feedback that we hear from customers, alongside data security and the practical implications of implementing AI. Customers are excited about AI's potential, but they need the assurance that their sensitive business data is protected and governed properly. This is where going from 0 to 80% is relatively easy, but when you try to bridge the final mile, the last mile before putting a solution into production, there are stumbling blocks, there are complexities, and it is non-trivial, not just because of which model you're choosing, but also because of considerations around how you're thinking about the protection of your data and intellectual property and the implications for data security. And this is why considerations like a zero data retention approach and enterprise-grade security capabilities are so important when it comes to AI deployments.
Karissa Breen [00:34:43]:
So Federico, do you have any sort of closing comments or final thoughts you’d like to leave our audience with today?
Federico Torreti [00:34:48]:
I'm particularly excited about all the work that we're doing at Oracle and Oracle Cloud. I think our key takeaways really center around choice, curated choice is important, along with security and enterprise readiness. And I do think that this partnership with xAI gives OCI customers access to one of the most cutting-edge models, or family of models, while allowing customers to maintain the enterprise-grade security and governance they need. And we're really excited to see what customers are going to be building on top of OCI.