OpenPipe (@openpipeai)'s Twitter Profile
OpenPipe

@openpipeai

OpenPipe: Fine-tuning for production apps. Train higher quality, faster models. (YC S23)

ID: 1686408031268200448

https://openpipe.ai/ · Joined 01-08-2023 16:06:36

94 Tweets

2.2K Followers

2 Following

OpenPipe (@openpipeai)

PSA because apparently we're getting confused with closed-down competitors: all OpenPipe users can export their fine-tuned models at any time. No lock-in nonsense here!

OpenPipe (@openpipeai)

Our CEO Kyle Corbitt is giving a talk next Wed in SF w/ Emmanuel Turlay and Charles 🎉 Frye. Kyle's topic is fine-tuning best practices for getting far higher-quality LLM results vs. prompting alone, and when to explore this optimization technique. Check it out! lu.ma/jfsyz0xc

Kyle Corbitt (@corbtt)

@WisprAI gets very low latency, low costs and extremely high quality by using fine-tuned models through OpenPipe. Proud to have them as a customer, and congrats on the launch!

Groq Inc (@groqinc)

We saw lots of fun games at our AIxGames Meetup with Clementine & OpenPipe. The Groq ASR API provides ultra-low latency audio transcription & translation for Whisper models. If you're a game developer, reach out to sales@groq.com to discuss the best model for your needs.

OpenPipe (@openpipeai)

OpenPipe linked up w/ Wyatt Marshall, CTO & Co-Founder of Halluminate, for an in-depth conversation with Reid Mayo (Founding AI Engineer) on how to build a robust evals system for your production GenAI technology. Check it out: youtube.com/watch?v=1ygD4o…

Yacine Mahdid (@yacinelearning)

I'm in the unfortunate position to let you know that I've fallen for the RL-LLMs propaganda 100% with these results from openpipe

I am now fully RL pilled and there is no turning back

very sorry folks

Plateau de Saclay hate account (@pdshateaccount)

Yacine Mahdid One thing that surprised me quite a bit is how well feedback from an LLM works for automatic improvement. OpenPipe is one example with RL, but it also works well with prompt optimization (sometimes outperforming RL); see the GEPA paper. arxiv.org/abs/2507.19457

Kyle Corbitt (@corbtt)

🚀 Big launch from OpenPipe: We just launched Serverless RL — train agents faster and cheaper with zero infra headaches.

Compared to running your own GPUs, Serverless RL is:
 - 40% cheaper
 - 28% faster wall-clock
 - instantly deployed to prod via Weights & Biases Inference

Lukas Biewald (@l2k)

We've started a great tradition at CoreWeave of shipping an integrated new product weeks after acquisition - congrats OpenPipe on the serverless RL launch!

Santiago Pombo (@santiagopombo)

The Custom SLMs era is upon us 🙌
 - Nanochat by Andrej Karpathy
 - Thinker (PEFTaaS) by Thinking Machines
 - Tunix (Post-train in Jax) by Google AI
 - Art (Agent RL) by OpenPipe
 - Environments Hub by Prime Intellect
 - NeMo Microservices by NVIDIA

Weights & Biases (@weights_biases)

LIVE: Kyle Corbitt, Head of the OpenPipe team at CoreWeave, joins ThursdAI to talk about launching the first Serverless Reinforcement Learning capability. x.com/i/broadcasts/1…