
LiteLLM (YC W23)
@litellm
Call every LLM API like it's OpenAI 👉 github.com/BerriAI/litellm
ID: 1607849281280671744
https://github.com/BerriAI/litellm 27-12-2022 21:22:10
840 Tweets
3.3K Followers
165 Following

LiteLLM (YC W23) v1.68.2-nightly brings support for sending email invites to users you add to LiteLLM. This release brings the following improvements:
- Support for sending emails when a user is invited to the platform
- Support for sending emails when a key is created for a user


LiteLLM (YC W23) v1.68.2-nightly brings support for using AWS Bedrock Guardrails PII Masking with LiteLLM. This allows you to run your Bedrock PII-masking guardrails with 100+ LLMs on LiteLLM. Start here: docs.litellm.ai/docs/proxy/gua…
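For illustration, a minimal sketch of what a request through such a guardrail could look like, assuming a LiteLLM proxy running on localhost:4000 with a Bedrock guardrail already configured under the hypothetical name "bedrock-pii-guard" (the guardrail name, key, and model are assumptions, not from the release notes):

```python
# Sketch: calling a LiteLLM proxy with a pre-configured Bedrock
# PII-masking guardrail applied per request. Proxy URL, API key,
# guardrail name, and model are all illustrative.
from openai import OpenAI

client = OpenAI(
    api_key="sk-1234",                # your LiteLLM proxy key
    base_url="http://localhost:4000", # your LiteLLM proxy URL
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any of the 100+ models routed by the proxy
    messages=[{"role": "user", "content": "My card is 4111-1111-1111-1111"}],
    extra_body={"guardrails": ["bedrock-pii-guard"]},  # apply the guardrail
)
print(response.choices[0].message.content)  # PII should come back masked
```

Because the guardrail runs inside the proxy, the same request shape works unchanged against any model LiteLLM routes to.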


Roo Code v3.16 introduces LiteLLM (YC W23) integration, enabling seamless access to over 100 language models via automatic discovery. This enhancement simplifies model management and expands your AI toolkit. Explore all the new features and improvements in the full notes: 🔗


LiteLLM (YC W23) v1.69.2-nightly brings support for using @google ADK (Agent Development Kit) with the LiteLLM Python SDK & LiteLLM Proxy. Start here: docs.litellm.ai/docs/tutorials…
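A minimal sketch of the SDK side, assuming ADK's LiteLlm model wrapper; the agent name, instruction, and model string are illustrative, so check the linked tutorial for the supported setup:

```python
# Sketch: routing a Google ADK agent's LLM calls through LiteLLM.
# Agent name, instruction, and model string are illustrative.
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

agent = Agent(
    name="support_agent",
    model=LiteLlm(model="openai/gpt-4o"),  # any LiteLLM model string
    instruction="You are a helpful support agent.",
)
```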



LiteLLM (YC W23) v1.69.3-nightly brings support for configuring PII entities and their actions on the LiteLLM UI. This means you can use the LiteLLM UI to control which PII entities to mask vs. which to block.


Thrilled to launch support for adding Guardrails on the LiteLLM (YC W23) UI. This release brings support for adding Microsoft Presidio, AWS Bedrock Guardrails, Protect AI LLM Guard endpoints, AIM Guardrails, and Lakera Guardrails on LiteLLM.


LiteLLM (YC W23) v1.70.0-nightly brings major improvements for PII/PHI masking use cases. With this release you can:
- Configure PII masking entities and their actions on the LiteLLM UI - e.g., you can set a guardrail to block all CREDIT_CARD entities
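As an illustration, a hedged sketch of what a blocked request could look like from the client side, assuming a PII guardrail configured on the UI under the hypothetical name "pii-guard" with CREDIT_CARD set to block:

```python
# Sketch: client-side view of a BLOCK action. Assumes a LiteLLM proxy on
# localhost:4000 with a PII guardrail (hypothetical name "pii-guard")
# configured on the UI to block CREDIT_CARD entities.
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

try:
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Charge 4111-1111-1111-1111"}],
        extra_body={"guardrails": ["pii-guard"]},
    )
except openai.APIError as e:
    # A blocked entity is surfaced as an error from the proxy rather than
    # a completion (the exact status code may vary by guardrail).
    print("Request blocked by guardrail:", e)
```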




LiteLLM (YC W23) v1.70.5-nightly will have a 94% faster median response time and 350% higher RPS. You can read more about the change here: github.com/BerriAI/litell…

⚡️ LiteLLM (YC W23) v1.72.0-nightly brings major performance improvements to LiteLLM. This release brings aiohttp support for all LLM API providers. This means that LiteLLM can now scale to 200 RPS per instance with a 40ms median latency overhead. Improvements in this release 👇:
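The win shows up in high-concurrency async workloads. A small sketch of that pattern with the Python SDK (model name illustrative):

```python
# Sketch: the concurrent async pattern that benefits from the shared
# aiohttp transport - many in-flight requests from a single process.
import asyncio

import litellm

async def ask(prompt: str) -> str:
    resp = await litellm.acompletion(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main():
    # 20 concurrent requests; connection reuse keeps per-request overhead low
    answers = await asyncio.gather(*(ask(f"Say {i}") for i in range(20)))
    print(answers)

asyncio.run(main())
```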


Thrilled to launch the ability to add MCP Servers on the LiteLLM UI in LiteLLM (YC W23) v1.71.3-nightly. This means you can add your MCP SSE server URLs to LiteLLM and list + test the available tools on the LiteLLM UI.


🚀 Learn how to build modular RAG pipelines that boost answer quality with smart re-ranking in the latest tutorial by ManthaPavanKumar.
➡️ Re-rankers from Cohere, ColBERT, Jina AI, and Voyage AI by MongoDB
➡️ Easy LLM switching with @litellm
➡️ Full observability and trace tracking using
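The "easy LLM switching" piece is LiteLLM's core completion() interface: one call shape across providers, with the model string as the only thing that changes. A minimal sketch (model names illustrative):

```python
# Sketch: swapping the LLM behind a RAG pipeline by changing only the
# model string passed to litellm.completion(). Model names illustrative.
import litellm

messages = [{"role": "user", "content": "Summarize re-ranking in one line."}]

for model in ["gpt-4o-mini", "anthropic/claude-3-5-sonnet-20240620"]:
    resp = litellm.completion(model=model, messages=messages)
    print(f"{model}: {resp.choices[0].message.content}")
```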


Thrilled to launch support for Amazon Web Services Bedrock Agents on LiteLLM (YC W23). This means you can now call all your Bedrock Agents in the OpenAI request/response format. Start here: docs.litellm.ai/docs/providers…
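A hedged sketch of what that call could look like with the Python SDK; the "bedrock/agent/&lt;AGENT_ID&gt;/&lt;ALIAS_ID&gt;" model-string format and both IDs below are assumptions, so confirm the exact convention against the linked docs:

```python
# Sketch: calling a Bedrock Agent through LiteLLM's OpenAI-style
# interface. The model-string format and both IDs are hypothetical -
# check docs.litellm.ai for the exact convention.
import litellm

response = litellm.completion(
    model="bedrock/agent/AGENT123/ALIAS456",  # hypothetical agent/alias IDs
    messages=[{"role": "user", "content": "What's the status of order 42?"}],
)
print(response.choices[0].message.content)
```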


LiteLLM (YC W23) v1.72.1-nightly brings support for profiling LiteLLM with Datadog Profiling. This allows you to profile LiteLLM in production and debug reported performance issues.


