Fangchen Liu (@fangchenliu_)'s Twitter Profile
Fangchen Liu

@fangchenliu_

Ph.D. @Berkeley_AI, prev @PKU1898 @HaoSuLabUCSD

ID: 1595520033743990784

Link: http://fangchenliu.github.io · Joined: 23-11-2022 20:50:07

91 Tweets

1.1K Followers

355 Following

Max Fu (@letian_fu)'s Twitter Profile Photo

We had all the ingredients years ago: CLIP has been around since 2021! OTTER shows that combining these existing tools in the right way unlocks powerful robotic control capabilities. Lightweight (~30M-param policy!), real-time, and fully open-sourced @ ottervla.github.io
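The tweet describes OTTER's recipe only at a high level: keep a pretrained encoder such as CLIP frozen and train a small policy head on top of its features. A minimal numpy sketch of that general pattern (the dimensions, the stand-in encoder, and all names here are hypothetical, not OTTER's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 512-d CLIP-style embedding and a 7-DoF action space.
EMB_DIM, ACT_DIM, HIDDEN = 512, 7, 256

def frozen_encoder(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen pretrained image tower (e.g. CLIP).
    A fixed projection keeps the sketch self-contained; in practice this
    would be a real forward pass through frozen pretrained weights."""
    W_frozen = np.full((image.size, EMB_DIM), 0.01)  # never trained
    return image.reshape(-1) @ W_frozen

# Only this small MLP head would be trained.
W1 = rng.normal(0.0, 0.02, (EMB_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.02, (HIDDEN, ACT_DIM))

def policy(image: np.ndarray) -> np.ndarray:
    z = frozen_encoder(image)     # frozen features, no gradients needed here
    h = np.maximum(z @ W1, 0.0)   # ReLU hidden layer
    return np.tanh(h @ W2)        # bounded continuous action

action = policy(np.zeros((32, 32, 3)))
print(action.shape)  # (7,)
```

Because the encoder stays frozen, only the tiny head's parameters are in the training loop, which is what makes the ~30M-param, real-time regime plausible.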

Philipp Wu (@philippswu)'s Twitter Profile Photo

New VLA work from Fangchen Liu, Raven Huang, and Max Fu, and it's all open source! Cool insights on how to better leverage pretrained vision and language models for robotics. Code in both JAX and PyTorch!

Wentao Zhu (@walterzhu8)'s Twitter Profile Photo

Join us at the 1st Workshop on Humanoid Agents at #CVPR2025! Speakers in CV, CG, Robotics & CogSci will share insights on building virtual & physical human-like AI agents. 💃🤖🦾
📢 Submit your work & spark interdisciplinary discussions!
🔗 Details: humanoid-agents.github.io

Agentica Project (@agentica_)'s Twitter Profile Photo

Introducing DeepCoder-14B-Preview - our fully open-sourced reasoning model reaching o1 and o3-mini level on coding and math. The best part is, we’re releasing everything: not just the model, but the dataset, code, and training recipe—so you can train it yourself!🔥 Links below:

Laura Smith (@smithlaura1028)'s Twitter Profile Photo

My goal throughout my PhD has been to take robots out of the lab and into the real world. It was so special to be a part of this effort and see this dream become reality! Excited to keep pushing model capabilities—and, of course, keep playing with robots 🤖

Fangchen Liu (@fangchenliu_)'s Twitter Profile Photo

People are collecting large-scale teleoperation datasets, which are often just kinematics-level trajectories. Real2Render2Real is a new framework that can generate such data without teleoperation or tricky sim + RL. High data quality for BC plus a nice scaling effect; please dive in for more!

Younggyo Seo (@younggyoseo)'s Twitter Profile Photo

Excited to present FastTD3: a simple, fast, and capable off-policy RL algorithm for humanoid control -- with an open-source code to run your own humanoid RL experiments in no time! Thread below 🧵
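FastTD3 builds on TD3, whose core off-policy ingredients include clipped double-Q targets and target-policy smoothing. A toy numpy sketch of those two standard TD3 pieces (constants and shapes are illustrative, not FastTD3's actual code):

```python
import numpy as np

GAMMA = 0.99                      # discount factor (illustrative)
NOISE_STD, NOISE_CLIP = 0.2, 0.5  # smoothing-noise scale and clip
rng = np.random.default_rng(0)

def td3_target(reward: float, next_q1: float, next_q2: float) -> float:
    """Clipped double-Q: bootstrap from the minimum of two target critics
    to curb the overestimation bias of a single Q-network."""
    return reward + GAMMA * min(next_q1, next_q2)

def smoothed_action(target_action: np.ndarray) -> np.ndarray:
    """Target-policy smoothing: add clipped noise to the target action
    before evaluating the target critics."""
    noise = np.clip(rng.normal(0.0, NOISE_STD, target_action.shape),
                    -NOISE_CLIP, NOISE_CLIP)
    return np.clip(target_action + noise, -1.0, 1.0)

y = td3_target(reward=1.0, next_q1=5.0, next_q2=4.0)
a = smoothed_action(np.array([0.9, -0.9]))
print(y)   # reward + GAMMA * min(Q1', Q2')
```

Taking the minimum over two critics and smoothing the target action are what keep the value targets conservative enough for stable off-policy updates.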

Guanya Shi (@guanyashi)'s Twitter Profile Photo

✈️ To #CVPR2025 to give three workshop/tutorial talks about learning humanoid whole-body control and loco-manipulation:
- Wed 8:30am @ 3D Scene Understanding, 106C
- Wed 10am @ Humanoid Agents, 101D
- Thu 11am @ Robotics 101 tutorial, 202B
Excited to meet old & new friends!

Qiyang Li (@qiyang_li)'s Twitter Profile Photo

Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! colinqiyangli.github.io/qc/ The recipe to achieve this is incredibly simple. 🧵 1/N
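Action chunking here means the policy predicts a short sequence of actions per call and executes it open-loop, so fewer policy queries cover the same number of environment steps. A minimal sketch of that idea (the chunk size, dimensions, and toy environment are hypothetical, not the paper's actual recipe):

```python
import numpy as np

CHUNK, ACT_DIM = 8, 4   # hypothetical chunk length and action dimension

def chunked_policy(obs: np.ndarray) -> np.ndarray:
    """Toy policy head that predicts a whole chunk of actions per call."""
    W = np.full((obs.size, CHUNK * ACT_DIM), 0.01)
    return (obs.reshape(-1) @ W).reshape(CHUNK, ACT_DIM)

def rollout(env_step, obs: np.ndarray, steps: int) -> int:
    """Execute each chunk open-loop; return how often the policy was queried."""
    actions_taken, policy_calls = 0, 0
    while actions_taken < steps:
        chunk = chunked_policy(obs)
        policy_calls += 1
        for a in chunk[: steps - actions_taken]:
            obs = env_step(obs, a)
            actions_taken += 1
    return policy_calls

# Dummy environment: the observation drifts by the mean action.
env = lambda obs, a: obs + a.mean()
calls = rollout(env, np.zeros(16), steps=32)
print(calls)  # one policy query per CHUNK environment steps
```

Committing to a chunk at a time is what makes the exploration more temporally coherent, which is the property the thread argues carries over from imitation learning to RL.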

Max Fu (@letian_fu)'s Twitter Profile Photo

Qiyang Li @ ICML will present OTTER tomorrow at #ICML2025! A lightweight, instruction-following VLA! See the OG post below!
👉 Code already released at ottervla.github.io
Poster will be presented at West Exhibition Hall B2-B3, #W-409, Tue 15 Jul, 11 a.m. – 1:30 p.m. PDT