vLLM
@vllm_project
A high-throughput and memory-efficient inference and serving engine for LLMs. Join slack.vllm.ai to discuss with the community!
ID: 1774187564276289536
https://github.com/vllm-project/vllm 30-03-2024 21:31:01
327 Tweets
12.12K Followers
15 Following
Two exciting updates! * vLLM is already widely adopted, and we want to ensure it has open governance and longevity. We have started the process of joining the LF AI & Data Foundation! * We are doubling down on performance. Please check out our roadmap. blog.vllm.ai/2024/07/25/lfa…