Xuanbin Peng (@xuanbin_peng) 's Twitter Profile
Xuanbin Peng

@xuanbin_peng

Research Assistant @UCSD; Robotics, Embodied AI

ID: 1762682386507325440

https://xuanbinpeng.github.io/ · Joined 28-02-2024 03:33:39

25 Tweets

137 Followers

283 Following

Arthur Allshire (@arthurallshire) 's Twitter Profile Photo

our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ Hongsuk Benjamin Choi, Junyi Zhang, David McAllister)

Yuanhang Zhang (@yuanhang__zhang) 's Twitter Profile Photo

🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:

Wenlong Huang (@wenlong_huang) 's Twitter Profile Photo

How to scale visual affordance learning that is fine-grained, task-conditioned, works in-the-wild, in dynamic envs? Introducing Unsupervised Affordance Distillation (UAD): distills affordances from off-the-shelf foundation models, *all without manual labels*. Very excited this
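As a rough illustration of the label-free distillation idea the tweet describes (cluster frozen foundation-model features into pseudo-affordance labels, then train a lightweight task-conditioned student on them), here is a minimal sketch. The feature extractor, dimensions, clustering step, and task conditioning are all assumptions for illustration; this is not the actual UAD pipeline.

```python
# Label-free affordance distillation sketch: cluster frozen foundation-model
# features into pseudo-labels, then fit a small task-conditioned student head.
# All models, dims, and the clustering step are hypothetical placeholders.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def foundation_features(images):
    """Placeholder for per-pixel features from a frozen vision model
    (e.g. a DINO-style backbone); random here so the sketch runs standalone."""
    b, _, h, w = images.shape
    return torch.randn(b, 384, h // 8, w // 8)

images = torch.randn(4, 3, 64, 64)
feats = foundation_features(images)                       # (B, C, H', W')
flat = feats.permute(0, 2, 3, 1).reshape(-1, 384)          # per-pixel features

# Unsupervised pseudo-labels: cluster features into candidate affordance regions.
pseudo = torch.tensor(
    KMeans(n_clusters=5, n_init=4).fit_predict(flat.numpy()), dtype=torch.long
)

# Lightweight student: per-pixel feature + task embedding -> affordance class.
task_emb = torch.randn(flat.shape[0], 32)                  # assumed task conditioning
student = nn.Sequential(nn.Linear(384 + 32, 128), nn.ReLU(), nn.Linear(128, 5))
loss = nn.functional.cross_entropy(student(torch.cat([flat, task_emb], -1)), pseudo)
loss.backward()
```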

Xiaolong Wang (@xiaolonw) 's Twitter Profile Photo

On my way to ICRA! Our group will be presenting Mobile-TeleVision (below) and WildMA (wildlma.github.io). Looking forward to chatting!

Guanya Shi (@guanyashi) 's Twitter Profile Photo

System ID for legged robots is hard: (1) Discontinuous dynamics and (2) many parameters to identify and hard to "excite" them. SPI-Active is a general tool for legged robot system ID. Key ideas: (1) massively parallel sampling-based optimization, (2) structured parameter space,
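To make the "massively parallel sampling-based optimization over a structured parameter space" idea concrete, here is a minimal cross-entropy-method sketch. The parameter names, bounds, and `rollout_error` function are made-up placeholders; this is not the SPI-Active implementation, just the generic sampling loop it builds on.

```python
# Cross-entropy-method sketch of sampling-based system identification.
# Parameter names, bounds, and rollout_error are hypothetical placeholders.
import numpy as np

# Structured parameter space: e.g. a mass scale and a joint-friction term.
PARAM_NAMES = ["mass_scale", "joint_friction"]            # assumed
LOWER = np.array([0.5, 0.0])
UPPER = np.array([1.5, 0.2])

def rollout_error(params: np.ndarray) -> float:
    """Placeholder: simulate the legged robot with `params` and return the
    tracking error against logged real-world trajectories."""
    target = np.array([1.1, 0.05])                        # fake ground truth
    return float(np.sum((params - target) ** 2))

def identify(iters: int = 20, pop: int = 1024, elite_frac: float = 0.1):
    mean = (LOWER + UPPER) / 2
    std = (UPPER - LOWER) / 4
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        # Massively parallel in a real system; a plain batch here.
        samples = np.clip(np.random.normal(mean, std, size=(pop, len(mean))),
                          LOWER, UPPER)
        errors = np.array([rollout_error(s) for s in samples])
        elites = samples[np.argsort(errors)[:n_elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

if __name__ == "__main__":
    print(dict(zip(PARAM_NAMES, identify())))
```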

Lerrel Pinto (@lerrelpinto) 's Twitter Profile Photo

Imagine robots learning new skills—without any robot data. Today, we're excited to release EgoZero: our first steps in training robot policies that operate in unseen environments, solely from data collected through humans wearing Aria smart glasses. 🧵👇

yisha (@yswhynot) 's Twitter Profile Photo

For years, I’ve been tuning parameters for robot designs and controllers on specific tasks. Now we can automate this on dataset-scale. Introducing Co-Design of Soft Gripper with Neural Physics - a soft gripper trained in simulation to deform while handling load.

Mengda Xu (@mengdaxu__) 's Twitter Profile Photo

Can we collect robot dexterous hand data directly with a human hand? Introducing DexUMI: a dexterous hand data-collection system with 0 teleoperation and 0 re-targeting → autonomously completes precise, long-horizon, and contact-rich tasks. Project Page: dex-umi.github.io

Lerrel Pinto (@lerrelpinto) 's Twitter Profile Photo

Teaching robots to learn only from RGB human videos is hard! In Feel The Force (FTF), we teach robots to mimic the tactile feedback humans experience when handling objects. This allows for delicate, touch-sensitive tasks—like picking up a raw egg without breaking it. 🧵👇

Xuxin Cheng (@xuxin_cheng) 's Twitter Profile Photo

Coordinating diverse, high-speed motions with a single control policy has been a long-standing challenge. Meet GMT—our universal tracker that keeps up with a whole spectrum of agile movements, all with one single policy.

Xiaolong Wang (@xiaolonw) 's Twitter Profile Photo

This work is not about a new technique. GMT (General Motion Tracking) shows, through good engineering practice, that you can actually train a single unified whole-body control policy for all agile motions, and that it works in the real world, directly sim2real without adaptation. This is

Mazeyu Ji (@jimazeyu) 's Twitter Profile Photo

Humanoids have shown incredible capabilities in simulation. What’s missing in the real world is a unified policy that can generalize across all these motions. Now it’s here!!! Use it to power your own tasks and build the next generation of humanoid applications.

Xuanbin Peng (@xuanbin_peng) 's Twitter Profile Photo

Single generalist policy for tracking diverse, agile humanoid motions! Check out our new paper, GMT—a universal motion tracking framework leveraging Adaptive Sampling and a Motion Mixture-of-Experts architecture to achieve seamless, high-fidelity motion tracking. Thrilled to be
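As a rough illustration of what a motion mixture-of-experts policy head can look like, here is a minimal PyTorch sketch. The observation/action dimensions, expert count, and gating design are assumptions for illustration only, not the actual GMT architecture.

```python
# Minimal mixture-of-experts policy head in PyTorch.
# Dimensions, expert count, and gating design are assumed placeholders.
import torch
import torch.nn as nn

class MotionMoE(nn.Module):
    def __init__(self, obs_dim=64, act_dim=23, n_experts=4, hidden=256):
        super().__init__()
        # Each expert is a small MLP mapping observations to joint targets.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(),
                          nn.Linear(hidden, act_dim))
            for _ in range(n_experts)
        )
        # Gating network produces soft weights over experts per sample.
        self.gate = nn.Sequential(nn.Linear(obs_dim, n_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, obs):
        weights = self.gate(obs)                                     # (B, E)
        outs = torch.stack([e(obs) for e in self.experts], dim=1)    # (B, E, A)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)             # (B, A)

policy = MotionMoE()
action = policy(torch.randn(8, 64))   # batch of 8 observations
print(action.shape)                   # torch.Size([8, 23])
```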

yisha (@yswhynot) 's Twitter Profile Photo

🚀Heading to #RSS2025? Swing by EEB 248 on Wednesday, June 25 at 3:30 PM for a live demo of our data-driven, co-design soft gripper 🥢 at the workshop Robot Hardware-Aware Intelligence!

yisha (@yswhynot) 's Twitter Profile Photo

Enjoying the first day of #RSS2025? Consider coming to our workshop 🤖Robot Hardware-Aware Intelligence on Wed! Robotics: Science and Systems Thank you to everyone who contributed 🙌 We'll have 16 lightning talks and 11 live demos! More info: rss-hardware-intelligence.github.io

Jianglong Ye (@jianglong_ye) 's Twitter Profile Photo

How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️and articulation 💻 tasks using a simple C-VAE model.
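Since the tweet calls out a simple C-VAE as the generator, here is a minimal conditional-VAE sketch in PyTorch. The pose and conditioning dimensions are assumed placeholders, not the Dex1B model; it only shows the generic encode/reparameterize/decode pattern.

```python
# Minimal conditional VAE sketch for generating hand-pose demonstrations.
# Pose and conditioning dimensions are hypothetical; not the Dex1B model.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, pose_dim=24, cond_dim=128, z_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim + cond_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim + cond_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, pose_dim))
        self.z_dim = z_dim

    def forward(self, pose, cond):
        mu, logvar = self.enc(torch.cat([pose, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        recon = self.dec(torch.cat([z, cond], -1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl

    @torch.no_grad()
    def sample(self, cond):
        # Draw new demonstrations conditioned on e.g. object features.
        z = torch.randn(cond.shape[0], self.z_dim)
        return self.dec(torch.cat([z, cond], -1))

model = CVAE()
pose, cond = torch.randn(16, 24), torch.randn(16, 128)
recon, kl = model(pose, cond)
loss = ((recon - pose) ** 2).mean() + 1e-3 * kl   # reconstruction + KL terms
```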

Ruihan Yang (@rchalyang) 's Twitter Profile Photo

How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling