OpenLIT
OpenLIT is an open-source observability platform designed specifically for large language model (LLM) applications and AI agents. Built natively on OpenTelemetry, it offers monitoring, tracing, evaluation, and optimization tools that cover the AI application lifecycle from development through production. OpenLIT supports automatic instrumentation for a wide range of components including LLM providers like OpenAI and Anthropic, AI frameworks such as LangChain, vector databases like ChromaDB, and GPUs from NVIDIA and AMD, all without requiring code modifications. The platform enables zero-code integration via command-line tools or a single import statement, providing real-time distributed tracing, token cost tracking, latency monitoring, and hallucination detection. It also supports Kubernetes observability through an Operator that automatically injects instrumentation into deployments. OpenLIT is released under the Apache-2.0 license and is supported by a community on Slack and GitHub.
OpenLIT is an open-source, OpenTelemetry-native platform for observability of LLM applications and AI agents with zero-code instrumentation.
Monitoring LLM Applications
AI engineering teams can monitor request flows, latency, and token usage of LLM-powered applications in production without modifying code.
Kubernetes Deployment Observability
Teams deploying AI agents on Kubernetes can use the OpenLIT Operator to automatically instrument workloads for observability.
AI Model Evaluation
Developers can evaluate AI model outputs and prompts through the OpenLIT UI and SDKs to improve application performance.
1. Install the OpenLIT SDK: pip install openlit
2. Add import openlit; openlit.init() to your Python code, or run openlit-instrument python your_app.py for zero-code setup.
3. On Kubernetes, install the Operator with helm install openlit-operator openlit/openlit-operator and apply the AutoInstrumentation YAML.
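Collected as a shell session, the steps above might look like the following. The Helm repository name and the manifest filename are assumptions; consult the OpenLIT documentation for the exact values.

```shell
# Install the OpenLIT SDK for Python applications
pip install openlit

# Zero-code setup: run an existing app under the instrument launcher
openlit-instrument python your_app.py

# Kubernetes: install the Operator via Helm (assumes the `openlit` Helm repo
# has already been added) and apply the AutoInstrumentation manifest
helm install openlit-operator openlit/openlit-operator
kubectl apply -f autoinstrumentation.yaml  # filename is illustrative
```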