Zhen Wu (@zhenkirito123) 's Twitter Profile
Zhen Wu

@zhenkirito123

MSCS @Stanford. Character Animation & Robotics 🤖

ID: 1512964859432157184

Link: https://www.linkedin.com/in/zhen-wu-326a70230 · Joined: 10-04-2022 01:25:20

3 Tweets

8 Followers

256 Following

Qiayuan Liao (@qiayuanliao) 's Twitter Profile Photo

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!

Sirui Chen (@eric_srchen) 's Twitter Profile Photo

Introducing HEAD🤖, an autonomous navigation and reaching system for humanoid robots, which allows the robot to navigate around obstacles and touch an object in the environment. More details on our website and CoRL paper: stanford-tml.github.io/HEAD

Carlo Sferrazza (@carlo_sferrazza) 's Twitter Profile Photo

Excited to share that I'll be joining UT Austin in Fall 2026 as an Assistant Professor with UT Mech Engineering Texas Robotics! I'm looking for PhD students interested in humanoids, dexterous manipulation, tactile sensing, and robot learning in general -- consider applying this cycle!

Siheng Zhao (@sihengzhao) 's Twitter Profile Photo

ResMimic: a two-stage residual framework that unleashes the power of a pre-trained general motion tracking policy. It enables expressive whole-body loco-manipulation with payloads up to 5.5 kg without task-specific design, generalizes across poses, and exhibits reactive behavior.

Alejandro Escontrela (@alescontrela) 's Twitter Profile Photo

Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵

Alejandro Escontrela (@alescontrela) 's Twitter Profile Photo

How can we standardize conditioning signals in image/video models to achieve the iterative editing & portability that Universal Scene Descriptors provide in graphics? Introducing Neural USD: An object-centric framework for iterative editing & control 🧵

Yanjie Ze (@zeyanjie) 's Twitter Profile Photo

Excited to introduce TWIST2, our next-generation humanoid data collection system. TWIST2 is portable (use anywhere, no MoCap), scalable (100+ demos in 15 mins), and holistic (unlock major whole-body human skills). Fully open-sourced: yanjieze.com/TWIST2

Yitang Li (@li_yitang) 's Twitter Profile Photo

Meet BFM-Zero: A Promptable Humanoid Behavioral Foundation Model w/ Unsupervised RL 👉 lecar-lab.github.io/BFM-Zero/
🧩 ONE latent space for ALL tasks
⚡ Zero-shot goal reaching, tracking, and reward optimization (any reward at test time), from ONE policy
🤖 Natural recovery & transition