Travis Addair (@travisaddair)'s Twitter Profile
Travis Addair

@travisaddair

Co-Founder & CTO @Predibase

OSS: LoRAX (loraexchange.ai) | horovod.ai | @ludwig_ai

ID: 2702302872

Link: https://predibase.com/ | Joined: 03-08-2014 01:40:09

354 Tweets

582 Followers

223 Following

Travis Addair (@travisaddair)'s Twitter Profile Photo

# H100s needed to serve:

DeepSeek-R1 --> 16
QwQ-32B --> 1

QwQ-32B is now available in Predibase to fine-tune (w/ GRPO) and serve.
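The GPU counts above follow from a back-of-the-envelope weight-memory estimate. As a sketch (the parameter counts, serving precisions, 20% KV-cache headroom, and power-of-two tensor-parallel rounding are all assumptions, not figures from the tweet): DeepSeek-R1 has roughly 671B parameters served in FP8, QwQ-32B roughly 32.5B served in BF16, and an H100 has 80 GB of HBM.

```python
import math

H100_GB = 80  # HBM capacity of one H100

def next_pow2(n: int) -> int:
    """Tensor-parallel degrees are typically powers of two."""
    return 1 << (n - 1).bit_length()

def min_h100s(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> int:
    """Minimum H100s to hold the weights plus ~20% headroom for KV cache/activations."""
    weight_gb = params_billions * bytes_per_param  # 1B params * 1 byte/param ~ 1 GB
    raw = math.ceil(weight_gb * overhead / H100_GB)
    return next_pow2(raw)

print(min_h100s(671, 1))   # DeepSeek-R1, ~671B params in FP8 -> 16
print(min_h100s(32.5, 2))  # QwQ-32B, ~32.5B params in BF16 -> 1
```

Under these assumptions the math lands exactly on the tweet's numbers: R1's weights alone need ~805 GB with headroom (11 GPUs, rounded up to 16 for tensor parallelism), while QwQ-32B's ~78 GB fits on a single 80 GB card.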
Predibase (@predibase)'s Twitter Profile Photo

Ever wonder what makes #AI go from “pretty good” to “incredible”? 🚀

The secret sauce is #finetuning, or in other words, customizing a pretrained model for your use case.

Historically, this required massive amounts of #labeled data. To teach your old dog (model) a new trick, you
Andrew Ng (@andrewyng)'s Twitter Profile Photo

Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the

Saam Motamedi (@saammotamedi)'s Twitter Profile Photo

Huge release from @Predibase today -- the first end-to-end platform for Reinforcement Fine-Tuning.

Bringing the techniques that power DeepSeek-R1 to any open source model and data
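The DeepSeek-R1 technique referenced here is GRPO, whose core trick can be sketched in a few lines: sample a group of completions per prompt, then score each one by its reward relative to the group's mean and standard deviation, so no learned value model is needed. The reward values below are illustrative.

```python
# Group-relative advantages, the heart of GRPO: normalize each sampled
# completion's reward against its group's statistics.
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """Return (r - group_mean) / (group_std + eps) for each reward."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled completions for one prompt, scored by a reward function:
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advs])  # -> [1.414, -1.414, 0.0, 0.0]
```

Because the advantages are centered within each group, they sum to zero: above-average completions are reinforced and below-average ones suppressed, regardless of the reward function's absolute scale.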

Predibase (@predibase)'s Twitter Profile Photo

🔐 Want to run Llama 4 at blazing fast speeds without sending a single token over the public internet?

Now you can—with Predibase, you can deploy Meta's most advanced #opensource LLM directly in your Virtual Private Cloud (#VPC) with just a few lines of code. And of course you
Predibase (@predibase)'s Twitter Profile Photo

🚀 Introducing #LoRAX: Efficient Multi-LoRA Serving on Amazon Web Services (AWS)!

Discover how LoRAX, our #OpenSource inference software, enables concurrent serving of multiple #LoRA adapters on a single LLM instance with this new blog from our partners at AWS.

Why does
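The multi-adapter pattern described above means two callers can hit the same deployment and get different fine-tuned behavior by naming an adapter per request. A minimal sketch, assuming a LoRAX-style `/generate` payload where the prompt goes in `inputs` and a per-request `adapter_id` goes in `parameters` (the base URL and adapter IDs below are placeholders, not real deployments):

```python
import json
from typing import Optional

LORAX_URL = "http://localhost:8080/generate"  # placeholder deployment URL

def make_request(prompt: str, adapter_id: Optional[str] = None) -> dict:
    """Build a generate payload; adapter_id selects which LoRA to apply."""
    params = {"max_new_tokens": 64}
    if adapter_id is not None:
        params["adapter_id"] = adapter_id  # hypothetical adapter repo IDs below
    return {"inputs": prompt, "parameters": params}

# Two tenants share one base model but request different adapters:
req_a = make_request("Summarize this ticket:", adapter_id="acme/support-summarizer")
req_b = make_request("Classify sentiment:", adapter_id="acme/sentiment-lora")
print(json.dumps(req_a))
```

Omitting `adapter_id` falls through to the base model, which is what makes packing many adapters onto a single GPU-resident LLM instance economical.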
Predibase (@predibase)'s Twitter Profile Photo

🚀 Serve and fine-tune #Qwen3 — in your cloud or ours with blazing fast #inference speeds! No need to share your data. 🚀

Qwen 3 is the latest #opensource LLM dominating the leaderboards. Don't get left behind! 

Now you can serve and customize the latest Qwen models instantly
Predibase (@predibase)'s Twitter Profile Photo

🚨 Qwen 3 is here—and it’s a beast.

From a featherweight 0.6B model to a 235B Mixture-of-Experts (#MoE) powerhouse, Alibaba’s latest #opensource LLM is turning heads.

But here’s what most people don’t know: you can deploy and serve any #Qwen3 model securely inside your own
Predibase (@predibase)'s Twitter Profile Photo

Introducing #Qwen3 Endpoints on Amazon Web Services (AWS): Private, On-Demand, and Production-Ready

Predibase is proud to be the first—and only—provider offering private, on-demand Qwen 3 #endpoints on AWS. Experience the power of the most popular open-source LLM, now available
Zihao Ye (@ye_combinator)'s Twitter Profile Photo

We’re thrilled that FlashInfer won a Best Paper Award at MLSys 2025! 🎉 This wouldn’t have been possible without the community — huge thanks to LMSYS Org’s sglang for deep co-design (which is critical for inference kernel evolution) and stress-testing over the years, and to

Predibase (@predibase)'s Twitter Profile Photo

🥋 Build-vs-Buy Showdown: #AI Infrastructure 🥋

This is the GenAI Stack #Playbook you need to read before investing another $$ in potential infra headaches.

Inside our new 50+ page comprehensive guide is everything you need to know about building or buying an #LLM serving and
Predibase (@predibase)'s Twitter Profile Photo

🚀 Ship first, perfect later. That’s the new #AI mantra.

The most successful GenAI apps aren’t flawless, they're practical. The real innovation happens in production. Enter #IntelligentInference, a new paradigm for production AI defined by systems that learn and optimize once
Travis Addair (@travisaddair)'s Twitter Profile Photo

It was an honor getting to work together with the DeepLearning.ai team and my colleague Arnav Garg on this course covering all things Reinforcement Fine-Tuning and GRPO. Similar to our last course on efficient LLM inference, we wanted to really drill into the intuition