Elastic Announces General Availability Of LLM Observability For Google Cloud’s Vertex AI
SREs can now monitor, analyse and optimise the performance of AI deployments using models from Vertex AI
Posted: Tuesday, Apr 15

Elastic, the Search AI Company, announced the general availability of the Elastic Google Cloud Vertex AI platform integration in Elastic Observability. This integration offers large language model (LLM) observability support for models hosted in Google Cloud’s Vertex AI platform, providing insights into costs, token usage, errors, prompts, responses and performance. Site Reliability Engineers (SREs) can now optimise resource usage, identify and resolve performance bottlenecks, and enhance model efficiency and accuracy.

“Comprehensive visibility into LLM performance is crucial for SREs and DevOps teams to ensure that their AI-powered applications are optimised,” said Santosh Krishnan, general manager of Observability and Security at Elastic. “Google Cloud’s Vertex AI platform integration provides users robust LLM observability and detection of performance anomalies in real-time, giving them critical insights into model performance that help with bottleneck identification and reliability improvements.”  

Availability 

The Elastic Google Cloud Vertex AI platform integration is available today.
