Yuanhang Zhang (@yuanhang__zhang)'s Twitter Profile
Yuanhang Zhang

@yuanhang__zhang

MS @CMU_Robotics | @Amazon FAR Team

ID: 1732418818947878912

Website: http://yuanhangz.com · Joined: 06-12-2023 15:16:58

112 Tweets

379 Followers

202 Following

CMU Robotics Institute (@cmu_robotics):

The team from the RI LeCAR Lab at CMU and the NVIDIA GEAR robotics research lab recently presented ASAP's capabilities at #RSS2025 🤖🚀🦾 The article on this incredible work is out now: ri.cmu.edu/robots-with-mo…

Yuanhang Zhang (@yuanhang__zhang):

We have now open-sourced a general sim2sim/sim2real deployment codebase for FALCON: github.com/LeCAR-Lab/FALC…, supporting both the Unitree SDK and the Booster SDK!

Haoyang Weng (@elijahgalahad):

It's the best infra I've ever used for sim2real. The nicely decoupled design enables seamless sim2sim and sim2real transfer. Happy to see it open-sourced! (I also developed one based on FALCON; will release it sometime in the future!)
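The decoupling Haoyang praises is the key deployment idea: if the policy only ever talks to an abstract robot interface, the same control loop runs against a simulator (sim2sim) or a vendor SDK (sim2real) with no code changes. Below is a minimal sketch of that pattern; all class and method names are hypothetical illustrations, not FALCON's actual API.

```python
from abc import ABC, abstractmethod

import numpy as np


class RobotBackend(ABC):
    """Hypothetical backend abstraction (not FALCON's real API): the
    policy never talks to a simulator or a vendor SDK directly."""

    @abstractmethod
    def read_state(self) -> np.ndarray:
        """Return proprioception (joint positions/velocities, IMU, ...)."""

    @abstractmethod
    def apply_action(self, action: np.ndarray) -> None:
        """Send joint targets to the simulator or to the robot SDK."""


class SimBackend(RobotBackend):
    """Steps a physics simulator; used for sim2sim validation."""

    def __init__(self, n_joints: int = 23):
        self.state = np.zeros(2 * n_joints)  # stand-in for simulator state

    def read_state(self) -> np.ndarray:
        return self.state  # in practice: query the simulator

    def apply_action(self, action: np.ndarray) -> None:
        pass  # in practice: set PD targets and step the simulator


def control_loop(policy, backend: RobotBackend, steps: int = 1000) -> None:
    # The same loop runs in sim and on hardware; only the backend differs.
    for _ in range(steps):
        obs = backend.read_state()
        backend.apply_action(policy(obs))
```

Swapping `SimBackend` for a class that wraps the Unitree or Booster SDK would then be the only change needed for hardware deployment.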

Jin Cheng (@catachiii):

🐕 I'm happy to share that my paper, RAMBO: RL-augmented Model-based Whole-body Control for Loco-manipulation, has been accepted to IEEE Robotics and Automation Letters (RA-L) 🧶 Project website: jin-cheng.me/rambo.github.i… Paper: arxiv.org/abs/2504.06662

Chris Paxton (@chris_j_paxton):

FALCON is a really cool paper, and this was a fun discussion. How do we get humanoids to lift heavy weights, pull carts, etc.? This is one of the huge advantages of the humanoid form factor, so it's great that Yuanhang Zhang got such cool results!

Michael Cho - Rbt/Acc (@micoolcho):

Having this force awareness is critical; I think this paper really shows that with the right "brain", these relatively small humanoids (Booster T1s & Unitree G1s) can actually do a fair bit! Thanks for sharing, Yuanhang Zhang!

Tairan He (@tairanhe99):

🚀 ASAP is now FULLY open-source! 🚀
✅ Humanoid RL motion tracking & delta actions
✅ Motion retargeting to any humanoid
✅ ASAP Benchmark motions + pretrained policies
✅ Sim2Sim & Sim2Real ready — run ASAP in sim or on your G1 robot!
🔗 github.com/LeCAR-Lab/ASAP
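"Delta actions" refers to ASAP's core trick: a residual action model is learned from real-world rollouts so that the simulator, driven by the policy action plus the learned delta, better reproduces real trajectories, and the tracking policy is then fine-tuned in that corrected simulator. A rough sketch of the corrected dynamics step, with hypothetical names and dimensions (the linked repo is the authoritative implementation):

```python
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 48, 23  # hypothetical observation/action sizes

# Hypothetical residual model: (state, policy action) -> action correction.
delta_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACT_DIM, 256), nn.ELU(),
    nn.Linear(256, ACT_DIM),
)


def corrected_sim_step(sim_step, state: torch.Tensor, action: torch.Tensor):
    """Step the simulator with a delta-corrected action so that simulated
    dynamics better match real-world rollouts (the ASAP idea, paraphrased)."""
    with torch.no_grad():
        delta = delta_model(torch.cat([state, action], dim=-1))
    return sim_step(action + delta)
```

In this framing, the delta model would be trained by replaying real trajectories in simulation and regressing the residual that makes simulated next states match the recorded ones.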

Qiayuan Liao (@qiayuanliao):

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!

Sirui Chen (@eric_srchen):

Introducing HEAD🤖, an autonomous navigation and reaching system for humanoid robots, which allows the robot to navigate around obstacles and touch an object in the environment. More details on our website and CoRL paper: stanford-tml.github.io/HEAD

Zhi Su (@zhisu22):

🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.

Zhecheng Yuan (@fancy_yzc):

👐 How can we leverage multi-source human motion data, transform it into robot-feasible behaviors, and deploy it across diverse scenarios? 👤🤖 Introducing 𝐇𝐄𝐑𝐌𝐄𝐒: a versatile human-to-robot embodied learning framework tailored for mobile bimanual dexterous manipulation.

Tairan He (@tairanhe99):

Two weeks ago I passed my PhD thesis proposal 🎉 Huge thanks to my advisors Guanya Shi & Changliu Liu, my committee, and everyone who has helped me along the way. Last week I also gave a talk at UPenn GRASP on our 2-year journey in humanoid sim2real—reflections, lessons, and

Dvij Kalaria (@dvijkalaria):

❓ How can humanoids learn to squat and open a drawer? Reward-tuning for every such whole-body task is infeasible. 🚀 Meet DreamControl: robots "dream" how people move and manipulate objects in varied scenarios, practice those behaviors in simulation, and then act naturally in the

Haoyang Weng (@elijahgalahad):

We present HDMI, a simple and general framework for learning whole-body interaction skills directly from human videos — no manual reward engineering, no task-specific pipelines. 🤖 67 door traversals, 6 real-world tasks, 14 in simulation. 🔗 hdmi-humanoid.github.io

Kevin Zakka (@kevin_zakka):

I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.

Zhen Wu (@zhenkirito123):

Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁 🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL:
- 5 rewards,
- 4 DR

Harsh Gupta (@hgupt3):

✈️🤖 What if an embodiment-agnostic visuomotor policy could adapt to diverse robot embodiments at inference with no fine-tuning? Introducing UMI-on-Air, a framework that brings embodiment-aware guidance to diffusion policies for precise, contact-rich aerial manipulation.
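As mechanical intuition for "embodiment-aware guidance": one common way to steer a diffusion policy at inference time is gradient guidance, where each denoising step is nudged by the gradient of a cost encoding the current embodiment's limits. The sketch below is a generic classifier-guidance-style illustration with hypothetical names, not necessarily UMI-on-Air's actual formulation:

```python
import torch


def embodiment_cost(traj: torch.Tensor, limits: torch.Tensor) -> torch.Tensor:
    # Hypothetical cost: penalize commanded actions that exceed the
    # current embodiment's actuation limits.
    return torch.relu(traj.abs() - limits).square().sum()


def guided_denoise_step(denoiser, traj, t, limits, scale: float = 0.1):
    """One reverse-diffusion step with gradient guidance: the base denoiser
    stays embodiment-agnostic, and embodiment awareness is injected at
    sampling time through the cost gradient."""
    traj = traj.detach().requires_grad_(True)
    grad = torch.autograd.grad(embodiment_cost(traj, limits), traj)[0]
    with torch.no_grad():
        return denoiser(traj, t) - scale * grad
```

Because the guidance term is applied only during sampling, the base policy needs no retraining per robot, which matches the tweet's no-fine-tuning claim.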