vLLM (@vllm_project)'s Twitter Profile
vLLM

@vllm_project

A high-throughput and memory-efficient inference and serving engine for LLMs. Join slack.vllm.ai to discuss together with the community!

ID: 1774187564276289536

Link: https://github.com/vllm-project/vllm · Joined: 30-03-2024 21:31:01

327 Tweets

12.12K Followers

15 Following


Announcing the completely reimagined vLLM TPU! In collaboration with Google, we've launched a new high-performance TPU backend unifying PyTorch and JAX under a single lowering path for amazing performance and flexibility.

🚀 What's New?
- JAX + PyTorch: Run PyTorch models on