Hynek Kydlíček (@hkydlicek)'s Twitter Profile
Hynek Kydlíček

@hkydlicek

MLE @huggingface 🤗
Prague, CZ
🇪🇺 eu/acc

ID: 1470207594727940099

Joined: 13-12-2021 01:43:13

458 Tweets

607 Followers

381 Following

Hynek Kydlíček (@hkydlicek)

They won't give the poor guy a break. This one is especially annoying, because it actually passes a couple of tests that previous models had issues with.
Lewis Tunstall (@_lewtun)

Excited to share OpenEnv: frontier-grade RL environments for the open-source community 🔥!

huggingface.co/blog/openenv

🧩 Modular interfaces: a clean Gymnasium-style API (reset(), step(), state()) that plugs into any RL framework

🐳 Built for scale: run environments in containers
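
To make the Gymnasium-style interface concrete, here is a minimal, illustrative sketch of an environment exposing reset(), step(), and state(). The class name, return types, and toy reward below are assumptions for illustration only, not the actual OpenEnv API.

```python
# Minimal sketch of a Gymnasium-style environment loop (illustrative only;
# the class name, StepResult type, and toy reward are assumptions, not OpenEnv code).
from dataclasses import dataclass
import random


@dataclass
class StepResult:
    observation: str
    reward: float
    done: bool


class EchoEnv:
    """Toy environment: the agent is rewarded for echoing the prompt."""

    def __init__(self) -> None:
        self._prompt = ""
        self._turns = 0

    def reset(self) -> str:
        """Start a new episode and return the initial observation."""
        self._prompt = random.choice(["hello", "world"])
        self._turns = 0
        return self._prompt

    def step(self, action: str) -> StepResult:
        """Apply the agent's action and return observation, reward, and done flag."""
        self._turns += 1
        reward = 1.0 if action == self._prompt else 0.0
        return StepResult(self._prompt, reward, done=self._turns >= 1)

    def state(self) -> dict:
        """Expose internal state for logging and debugging."""
        return {"prompt": self._prompt, "turns": self._turns}


if __name__ == "__main__":
    env = EchoEnv()
    obs = env.reset()
    result = env.step(obs)  # a perfect "policy" simply echoes the observation
    print(result, env.state())
```

Because any framework only needs to call these three methods, the same loop works whether the environment runs in-process or inside a container behind a thin client.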
Georgia Channing (@cgeorgiaw)

🔥 It's here: OpenFold3 is now live.

THE open-source foundation model for predicting 3D structures of proteins, nucleic acids & small molecules. This is where the future of drug discovery and biomolecular AI lives.

Built by openfold. Hosted on Hugging Face.
👇 more
Carlos Miguel Patiño @ ICLR (@cmpatino_)

On-policy distillation is a promising way to train small models, but it’s usually limited to teacher–student pairs sharing the same tokenizer.

With our GOLD method, you can now distill across different model families and even outperform GRPO!

huggingface.co/spaces/Hugging…
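
For context, on-policy distillation has the student generate its own completions and then learn to match the teacher's token distribution on those samples. The sketch below illustrates that basic loop under the assumption of a shared tokenizer; the cross-tokenizer alignment that GOLD contributes is deliberately elided, and the function, hyperparameters, and reverse-KL objective are assumptions for illustration, not the Hugging Face implementation.

```python
# Minimal sketch of plain on-policy distillation (shared tokenizer assumed).
# GOLD's cross-tokenizer alignment step is elided; everything here is illustrative.
import torch
import torch.nn.functional as F


def on_policy_distill_step(student, teacher, tokenizer, prompts, optimizer):
    # 1) Sample completions from the *student* (on-policy data).
    inputs = tokenizer(prompts, return_tensors="pt", padding=True)
    with torch.no_grad():
        samples = student.generate(**inputs, max_new_tokens=64, do_sample=True)

    # 2) Score the sampled sequences with both models. With a shared tokenizer
    #    this is a direct forward pass; distilling across tokenizers is exactly
    #    the part GOLD addresses and is not shown here.
    student_logits = student(samples).logits[:, :-1]
    with torch.no_grad():
        teacher_logits = teacher(samples).logits[:, :-1]

    # 3) Minimize reverse KL(student || teacher) on the student's own samples.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the student's own samples is what makes this "on-policy": the loss is measured exactly where the student actually puts probability mass, rather than on teacher-generated text it may never produce.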
Loubna Ben Allal (@loubnabenallal1)

After ~4 years building SOTA models & datasets, we're sharing everything we learned in ⚡The Smol Training Playbook

We cover the full LLM cycle: designing ablations, choosing an architecture, curating data, post-training, and building solid infrastructure.

We'll help you