vLLM (@vllm_project)'s Twitter Profile
vLLM

@vllm_project

A high-throughput and memory-efficient inference and serving engine for LLMs. Join slack.vllm.ai to discuss together with the community!

ID: 1774187564276289536

Link: https://github.com/vllm-project/vllm
Joined: 30-03-2024 21:31:01

327 Tweets

12.12K Followers

15 Following

vLLM (@vllm_project):

πŸš€ Amazing community project!

vLLM CLI β€” a command-line tool for serving LLMs with vLLM:
βœ… Interactive menu-driven UI & scripting-friendly CLI
βœ… Local + HuggingFace Hub model management
βœ… Config profiles for perf/memory tuning
βœ… Real-time server & GPU monitoring
βœ… Error
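For context, here is a minimal sketch of the kind of workflow a serving tool like this wraps, using vLLM's documented offline Python API. The model name, prompt, and sampling settings are illustrative choices, not details from the tweet:

```python
# Minimal sketch of generating text with vLLM's offline API.
# Assumes `pip install vllm` and a CUDA-capable GPU; the model name
# and sampling parameters are illustrative, not taken from the tweet.
from vllm import LLM, SamplingParams

# Load a small model; vLLM handles continuous batching and
# memory-efficient paged KV-cache management internally.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```

For online serving, vLLM also ships a `vllm serve <model>` command that exposes an OpenAI-compatible HTTP endpoint; a tool like the vLLM CLI described above layers interactive model selection, config profiles, and monitoring on top of workflows like these.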