Xiongyi Cai (@xiongyicai) 's Twitter Profile
Xiongyi Cai

@xiongyicai

ID: 1902408176177430528

Joined: 19-03-2025 17:14:02

0 Tweets

0 Followers

23 Following

Xuanbin Peng (@xuanbin_peng) 's Twitter Profile Photo

What if a humanoid robot could choose how to interact with the environment 🤖 — soft when it needs compliance, stiff when it needs precision, and force-aware when it must push/pull? That’s exactly what our Heterogeneous Meta-Control (HMC) framework enables. Our new framework
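The idea of switching between compliant and precise interaction can be illustrated with a minimal variable-impedance sketch. The mode names, gain values, and function below are hypothetical placeholders for illustration, not the HMC implementation:

```python
# Illustrative variable-impedance control: the robot renders different
# spring-damper behavior depending on the chosen interaction mode.
# Gains are made-up example values, not from the HMC framework.
MODES = {
    "soft":  {"stiffness": 50.0,   "damping": 10.0},   # compliant contact
    "stiff": {"stiffness": 2000.0, "damping": 80.0},   # precise positioning
}

def impedance_force(mode, x_desired, x_actual, v_actual):
    """Spring-damper force F = k*(x_d - x) - d*v for the chosen mode."""
    gains = MODES[mode]
    return gains["stiffness"] * (x_desired - x_actual) - gains["damping"] * v_actual

# Same 10 cm tracking error, very different corrective forces:
f_soft = impedance_force("soft", 0.5, 0.4, 0.0)    # gentle push
f_stiff = impedance_force("stiff", 0.5, 0.4, 0.0)  # firm correction
```

The same error produces a much larger restoring force in the stiff mode, which is the basic mechanism behind choosing "soft when it needs compliance, stiff when it needs precision."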

Elie Aljalbout (@elijalbout) 's Twitter Profile Photo

Cool results🤖🚀 I'm curious if we could scale such methods beyond quasi-static tasks... kinematic retargeting is great, but for dynamic retargeting it seems like we'd need some human hand/arm sys-id, but maybe average values might be enough too!

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

Egocentric manipulation data at scale allows for following of previously-unseen language instructions -- this is a very important work for indicating the value of egocentric data as a key part of the training pipeline for humanoid robots.

Geng Chen (@gengchen358) 's Twitter Profile Photo

Check out our latest work on scaling egocentric human data🫳 for dexterous manipulation🦾! Compared to teleoperation, human data is much easier to scale in quantity, objects, scenes, and tasks, which unlocks key capabilities such as novel language following and enhanced

CyberRobo (@cyberrobooo) 's Twitter Profile Photo

If we want humanoid robots to truly learn to "work like humans," In-N-On may offer a very realistic path. It uses first-person human video (the perspective of a worker wearing glasses) as a large-scale demonstration, supplemented with a small amount of the robot's own task data,

Xiongyi Cai (@xiongyicai) 's Twitter Profile Photo

How do you teach a robot to do something it has never seen before? 🤖 With human data. Our new Human0 model is co-trained on human and humanoid data. It allows the robot to understand a novel language command and execute it perfectly in the wild without prior practice.

Rui Yan (@hi_im_ruiyan) 's Twitter Profile Photo

Meet ACE-F — a novel, foldable teleoperation platform for collecting high-quality robot demonstration data across robot embodiments. Using a specialized soft-controller pipeline, we interpret end-effector positional deviations as virtual force signals to provide the user with
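The described mapping of end-effector positional deviations to virtual force signals can be sketched as a simple proportional law. The gain and function name here are assumptions for illustration, not the ACE-F soft-controller pipeline:

```python
# Sketch: treat the gap between the commanded and measured end-effector
# position as a virtual force cue rendered back to the operator.
# The gain value is a made-up example, not an ACE-F parameter.
def virtual_force(commanded_pos, measured_pos, gain=100.0):
    """Per-axis virtual force proportional to tracking deviation."""
    return [gain * (c - m) for c, m in zip(commanded_pos, measured_pos)]

# If the robot lags 2 cm behind the commanded x-position, the operator
# feels a proportional resisting force along x and none on the other axes:
f = virtual_force([0.30, 0.0, 0.1], [0.28, 0.0, 0.1])
```

This kind of deviation-to-force mapping gives the user haptic awareness of contact without requiring a force/torque sensor on the robot side.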

Rui Yan (@hi_im_ruiyan) 's Twitter Profile Photo

ACE-F is finally open sourced, with a hardware assembly tutorial and teleoperation code for Franka and Xarm7 robots. Check out our website and more below! Hardware: github.com/ACEFoldable/ac… Software: github.com/ACEFoldable/ac… Webpage: acefoldable.github.io Arxiv:

cw j (@cwj99770123) 's Twitter Profile Photo

Can we bridge the Sim-to-Real gap in complex manipulation without explicit system ID? 🤖 Presenting Contact-Aware Neural Dynamics — a diffusion-based framework that grounds simulation with real-world touch. Implicit Alignment: No tedious parameter tuning. Tactile-Driven: