Tairan He (@tairanhe99)'s Twitter Profile
Tairan He

@tairanhe99

Robotics & AI PhD Student @CMU_Robotics · Research Intern @NVIDIA · Prev: @MSFTResearch @sjtu1896 · Embodied AI; Humanoids; Robot Learning

ID: 1600156293876142082

Link: https://tairanhe.com · Joined: 06-12-2022 15:53:04

560 Tweets

4.4K Followers

618 Following

Tairan He (@tairanhe99)'s Twitter Profile Photo

Lessons I learned in this project:
- Upper-lower body decoupling enables efficient RL training
- Decoupled WBC (lower-body mobility & height, upper-body precision) is robust & reliable
- Humanoids make everything harder, but far from impossible. We can still do better in terms of
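The decoupling idea above can be sketched as two independent policies composed over a split action space. This is a minimal illustration only; the DoF split, the policy interfaces, and the zero-valued stand-in policies are assumptions, not the project's actual implementation:

```python
import numpy as np

# Hypothetical DoF split for an illustrative humanoid:
# 12 leg joints (lower body) + 11 arm/torso joints (upper body).
LOWER_DOF, UPPER_DOF = 12, 11

def lower_policy(obs):
    # Stand-in for an RL policy tracking base velocity and height.
    return np.zeros(LOWER_DOF)

def upper_policy(obs):
    # Stand-in for a policy tracking end-effector targets precisely.
    return np.zeros(UPPER_DOF)

def decoupled_wbc_action(obs):
    """Compose a whole-body command from two decoupled policies.

    Each policy trains on a smaller action space, which is what makes
    decoupled RL training more sample-efficient than learning one
    monolithic whole-body policy."""
    return np.concatenate([lower_policy(obs), upper_policy(obs)])

action = decoupled_wbc_action(obs=np.zeros(48))  # 23-DoF whole-body command
```

In this framing, the two policies only interact through the shared robot dynamics at execution time, which keeps their training objectives separate.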

Tong Zhang (@tongzha22057330)'s Twitter Profile Photo

🤖 Can a humanoid robot hold extreme single-leg poses like Bruce Lee's Kick or the Swallow Balance? 🤸 💥 YES. Meet HuB: Learning Extreme Humanoid Balance 🔗 Project website: hub-robot.github.io

Tairan He (@tairanhe99)'s Twitter Profile Photo

Impressive progress on collecting in-the-wild human demos with portable devices. There's a clear trend: more diverse human data fuels better robot learning. Today’s pipelines still rely heavily on wearable pose trackers—excited to see future work push toward device-free

Max Fu (@letian_fu)'s Twitter Profile Photo

Tired of teleoperating your robots? We built a way to scale robot datasets without teleop, dynamic simulation, or even robot hardware. Just one smartphone scan + one human hand demo video → thousands of diverse robot trajectories. Trainable by diffusion policy and VLA models

Wenlong Huang (@wenlong_huang)'s Twitter Profile Photo

How to scale visual affordance learning that is fine-grained, task-conditioned, works in-the-wild, in dynamic envs? Introducing Unsupervised Affordance Distillation (UAD): distills affordances from off-the-shelf foundation models, *all without manual labels*. Very excited this

Tairan He (@tairanhe99)'s Twitter Profile Photo

Excited to be at #ICRA this week! Working on humanoids, RL, or sim-to-real? Let’s grab coffee—DMs are open. See you there! Presentation for: HOVER: Versatile Neural Whole-Body Controller for Humanoid Robots 📍 Room 307 (Regular Session WeET6: Learning for Legged Locomotion 1) ⏰

Jim Fan (@drjimfan)'s Twitter Profile Photo

What if robots could dream inside a video generative model? Introducing DreamGen, a new engine that scales up robot learning not with fleets of human operators, but with digital dreams in pixels. DreamGen produces massive volumes of neural trajectories - photorealistic robot

Guanqi He (@guanqi_he)'s Twitter Profile Photo

How can we align simulation with real-world dynamics for legged robots? Check out our new work: SPI-Active — Sampling-based system identification with active exploration for sim-to-real transfer in legged systems. We leverage sampling-based optimization to estimate robot

Guanya Shi (@guanyashi)'s Twitter Profile Photo

System ID for legged robots is hard: (1) discontinuous dynamics and (2) many parameters to identify that are hard to "excite." SPI-Active is a general tool for legged-robot system ID. Key ideas: (1) massively parallel sampling-based optimization, (2) structured parameter space,
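The sampling-based optimization idea can be illustrated with a cross-entropy-style search over simulation parameters. This is a toy sketch: `rollout_error` is a quadratic stand-in for the real sim-vs-real trajectory error, and the "true" parameters, population sizes, and iteration count are all assumptions, not SPI-Active's actual objective or settings:

```python
import numpy as np

def rollout_error(theta, real_traj):
    """Hypothetical discrepancy between trajectories simulated under
    candidate physical parameters theta and logged real-robot data.
    Here a quadratic toy with 'true' parameters [1.2, 0.4, 0.8]."""
    true_theta = np.array([1.2, 0.4, 0.8])
    return np.sum((theta - true_theta) ** 2)

def sample_based_sysid(real_traj, dim=3, pop=256, elites=32, iters=20, seed=0):
    """Cross-entropy-style search: sample candidates in parallel, keep the
    elites with lowest rollout error, refit the sampling distribution.
    No gradients needed, so discontinuous contact dynamics are fine."""
    rng = np.random.default_rng(seed)
    mean, std = np.ones(dim), np.ones(dim)
    for _ in range(iters):
        # Candidate evaluation is embarrassingly parallel; in practice this
        # is where a massively parallel simulator does the heavy lifting.
        candidates = rng.normal(mean, std, size=(pop, dim))
        errors = np.array([rollout_error(c, real_traj) for c in candidates])
        elite = candidates[np.argsort(errors)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

params = sample_based_sysid(real_traj=np.zeros((10, 3)))
```

A structured parameter space (e.g. per-link masses and friction coefficients rather than arbitrary dynamics) keeps `dim` small enough for this kind of search to converge quickly.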

Tairan He (@tairanhe99)'s Twitter Profile Photo

How often do physics "bugs" turn out to be asset mis-specs? Want to get them right? Check out SPI-Active: simple, smart, efficient sysid for legged robots 👉 lecar-lab.github.io/spi-active_/

Younggyo Seo (@younggyoseo)'s Twitter Profile Photo

Excited to present FastTD3: a simple, fast, and capable off-policy RL algorithm for humanoid control -- with an open-source code to run your own humanoid RL experiments in no time! Thread below 🧵
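FastTD3 builds on TD3-style off-policy learning. Two core TD3 ingredients, clipped double-Q targets and target policy smoothing, can be sketched as below; this is a generic TD3 illustration, not FastTD3's actual code, and the noise/clip constants are just TD3's common defaults:

```python
import numpy as np

def td3_target(q1_next, q2_next, reward, done, gamma=0.99):
    """Clipped double-Q target: take the minimum of two target critics'
    next-state values to curb Q-value overestimation."""
    q_min = np.minimum(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * q_min

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5,
                           act_limit=1.0, rng=None):
    """Target policy smoothing: perturb the target actor's action with
    clipped Gaussian noise so the critic can't exploit sharp Q peaks."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = np.clip(rng.normal(0.0, noise_std, size=mu.shape),
                    -noise_clip, noise_clip)
    return np.clip(mu + noise, -act_limit, act_limit)

# Example: reward 1.0, not done, twin critics disagree -> use the min (1.5).
target = td3_target(np.array([2.0]), np.array([1.5]),
                    np.array([1.0]), np.array([0.0]))
```

Because the critics train from a replay buffer, data from thousands of parallel simulated humanoids can be reused across many gradient steps, which is what makes off-policy training fast here.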

Changyi Lin (@changyi_lin1)'s Twitter Profile Photo

Introducing LocoTouch: Quadrupedal robots equipped with tactile sensing can now transport unsecured objects — no mounts, no straps. The tactile policy transfers zero-shot from sim to real. Core Task-Agnostic Features: 1. High-fidelity contact simulation for distributed tactile

Tairan He (@tairanhe99)'s Twitter Profile Photo

Ever seen a humanoid robot serve beer without spilling a drop? Now you have. 🍻 Introducing Hold My Beer: learning gentle locomotion + stable end-effector control. lecar-lab.github.io/SoFTA/

Mandi Zhao (@zhaomandi)'s Twitter Profile Photo

How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.

Jingyun Yang (@yjy0625)'s Twitter Profile Photo

Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method: ✈️ enables flexible mobile skill chaining 🪶 without requiring additional policy training data 🏠 while scaling to unseen scenes 🧵↓

Tairan He (@tairanhe99)'s Twitter Profile Photo

Cool and solid work. The Vision Pro humanoid teleop setup is similar to what we did with OmniH2O (omni.human2humanoid.com), but this work uses MoE distillation and better lidar odometry on the G1 robot. Excited to see people pushing the limits of humanoid whole-body teleop!