
LiteLLM
SDK and proxy to call 100+ LLM APIs in OpenAI format
The Lens
LiteLLM is a proxy and Python library that puts a unified OpenAI-compatible API in front of 100+ LLM providers: OpenAI, Anthropic, Gemini, Cohere, Azure, Bedrock, Ollama, and more. Write your code once using the OpenAI format and switch providers by changing one line.
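The one-line switch looks roughly like this. A minimal sketch: model names are illustrative, and the live calls (which need provider API keys in the environment) are left commented out.

```python
# Sketch of LiteLLM's unified call format. The completion() calls below are
# commented out because they require real provider API keys.
# from litellm import completion  # pip install litellm

messages = [{"role": "user", "content": "Say hello in one word."}]

# Same code, different provider -- only the model string changes:
# completion(model="gpt-4o-mini", messages=messages)
# completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
# completion(model="ollama/llama3", messages=messages)

def provider_of(model: str) -> str:
    """LiteLLM routes on a 'provider/' prefix; bare names default to OpenAI."""
    return model.split("/", 1)[0] if "/" in model else "openai"
```

The `provider/model` naming convention is what makes the switch a one-line change: the rest of the call, and the response shape, stay in OpenAI format.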
Run it as a proxy server and you get rate limiting, cost tracking, fallback routing, and load balancing across providers. Teams use it to control which models engineers can call, track spend per team, and add retry logic without touching application code. MIT-licensed, free to self-host.
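The "without touching application code" part works because the proxy speaks the OpenAI wire format, so existing clients only swap the base URL. A stdlib-only sketch of the request shape, assuming a proxy on its default port 4000 with a placeholder virtual key:

```python
# Hypothetical client-side sketch: pointing existing OpenAI-format code at a
# self-hosted LiteLLM proxy means changing only the base URL and the key.
import json

base_url = "http://localhost:4000"  # your proxy instead of api.openai.com
headers = {"Authorization": "Bearer sk-litellm-placeholder"}  # proxy-issued virtual key
payload = {
    "model": "gpt-4o",  # an alias the proxy maps to a real upstream model
    "messages": [{"role": "user", "content": "hi"}],
}
endpoint = f"{base_url}/chat/completions"  # OpenAI-format route served by the proxy
body = json.dumps(payload)
```

Because the route and payload are unchanged, rate limits, spend tracking, and fallbacks happen server-side in the proxy, invisible to the calling code.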
Engineering teams building on multiple LLMs or managing costs across a company get the most value from the proxy. Individual developers using it as a Python library just want to avoid rewriting LLM calls when switching providers. Both use cases are free.
The catch: the proxy adds latency. Not much, usually under 10ms, but it is a network hop. And the feature set moves fast enough that staying current requires attention.
Free vs Self-Hosted vs Paid
### Free (SDK)
The Python SDK is fully open source. Call 100+ LLM providers using the OpenAI format. Streaming, function calling, vision, embeddings: all supported. `pip install litellm` and you're running.
### Free (Self-Hosted Proxy)
The proxy server adds centralized API key management, load balancing across providers, caching, rate limiting, spend tracking per user/team, and a management UI. Self-host with Docker. MIT-licensed for core features.
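Self-hosting centers on a `config.yaml` of model aliases. A minimal sketch following the shape in the LiteLLM docs; the model names and environment-variable references are illustrative:

```yaml
# Hypothetical proxy config: two aliases the proxy will route and meter.
model_list:
  - model_name: gpt-4o              # name clients request
    litellm_params:
      model: openai/gpt-4o          # actual upstream model
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Starting the proxy with `litellm --config config.yaml` serves the OpenAI-format API (port 4000 by default); clients request the alias, and the proxy handles the upstream routing and keys.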
### Paid (Enterprise)
SSO/SAML, advanced audit logs, priority support, custom SLAs. Pricing is not public; contact sales. Likely $1,000+/mo based on comparable tools.
### Self-Hosted Costs
The proxy itself is lightweight; a $10-20/mo VPS handles most workloads. Your real costs are the LLM API bills: OpenAI, Anthropic, etc. LiteLLM helps you track and optimize those costs but doesn't reduce them directly.
### What LiteLLM Saves You
Without it, you maintain separate API integrations for each provider, custom retry logic, and manual spend tracking. A team calling 3+ providers saves roughly 20-40 hours of upfront integration work, plus the ongoing maintenance.
### Verdict
The SDK is free and worth using even for a single provider. The proxy is free to self-host and pays for itself in spend visibility.
SDK is free. Proxy is free to self-host. Enterprise pricing is custom. Your real costs are the LLM API bills themselves.
About
- Stars: 46,357
- Forks: 7,897
Explore Further
More tools in the directory
- openclaw (370.3k ★): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
- claw-code (190.9k ★): The repo is finally unlocked. enjoy the party! The fastest repo in history to surpass 100K stars ⭐. Join Discord: https://discord.gg/5TUQKqFWd Built in Rust using oh-my-codex.
- n8n (187.3k ★): Fair-code workflow automation with native AI capabilities