
LiteLLM
SDK and proxy to call 100+ LLM APIs in OpenAI format
Coldcast Lens
LiteLLM gives you one API to call 100+ LLM providers. Switch between OpenAI, Anthropic, Google, Mistral, and local models by changing a string — no code rewrites. The proxy server adds load balancing, automatic retries, fallbacks, and spend tracking. For indie hackers building on multiple models, it's infrastructure you'd otherwise build yourself.
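The "change a string" claim looks like this in practice. A minimal sketch assuming `pip install litellm` and provider API keys in your environment; the model names below are illustrative examples, and the hypothetical `ask` helper is not part of LiteLLM itself:

```python
# LiteLLM's unified call pattern: the provider prefix in the model string
# is the only thing that changes between backends.
MODELS = {
    "openai": "gpt-4o-mini",                          # OpenAI (no prefix needed)
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",  # Anthropic via prefix
    "ollama": "ollama/llama3",                        # local model via Ollama
}

def ask(provider: str, prompt: str) -> str:
    # Same function and same OpenAI-style message format for every provider.
    from litellm import completion
    resp = completion(
        model=MODELS[provider],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Swapping `ask("openai", ...)` for `ask("anthropic", ...)` changes the backend without touching the call site.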
OpenRouter is the hosted equivalent: 400+ models and unified billing with nothing to run yourself, but there is no self-hosted option and governance controls are limited. Direct API calls work fine for single-provider apps but become a maintenance burden at three or more providers.
Use LiteLLM if you're calling multiple LLM providers and want centralized cost tracking, fallback routing, and the ability to switch models without code changes.
The catch: the "Other" license has commercial restrictions on the proxy server. Python's GIL limits single-process throughput — P95 latency spikes at high concurrency. Running the proxy in production requires PostgreSQL and Redis. And every abstraction layer adds latency — if you're only using one provider, just call their API directly. LiteLLM solves the multi-provider problem; don't adopt it for single-provider simplicity.
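For the proxy, retries and fallback routing live in a config file rather than application code. A minimal sketch; the model names are illustrative and the field names reflect LiteLLM's proxy config format at the time of writing, so verify against the current schema:

```yaml
model_list:
  - model_name: gpt-4o          # alias your app calls
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet   # fallback target
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  num_retries: 3
  # If gpt-4o fails after retries, reroute the request to claude-sonnet.
  fallbacks: [{"gpt-4o": ["claude-sonnet"]}]
```

Your app keeps calling one endpoint with one model alias; the proxy handles failover behind it.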
About
- Stars: 40,525
- Forks: 6,688