We’ve long faced the gap between strong power grasps and fine precision grasps: high-DoF hands rarely do both well. So we turned to co-design, optimizing control and fingertip geometry together. The result is a single hand that achieves both, almost like having a gripper and a dexterous hand in one.
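The post doesn’t say how the co-optimization is structured; a minimal sketch of one common pattern is an outer loop over geometry wrapped around a controller inner loop. The objective, parameter shapes, and search method below are toy assumptions, not the authors’ algorithm.

```python
# Hedged sketch of control/geometry co-design via alternating optimization.
# grasp_score, the parameter shapes, and the toy objective are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def grasp_score(geometry, control):
    """Toy stand-in for a simulated rollout: reward a power-grasp term
    and a precision-grasp term so neither dominates."""
    power = -np.sum((geometry - control) ** 2)           # fictitious coupling
    precision = -np.sum((geometry + 0.5 * control) ** 2)
    return power + precision

def optimize_control(geometry, control, steps=50, sigma=0.05):
    """Inner loop: hill-climb the controller for a fixed fingertip geometry."""
    for _ in range(steps):
        candidate = control + sigma * rng.normal(size=control.shape)
        if grasp_score(geometry, candidate) > grasp_score(geometry, control):
            control = candidate
    return control

geometry = rng.normal(size=4)   # e.g., fingertip radius / curvature params
control = rng.normal(size=4)    # e.g., grasp-controller gains

# Outer loop: perturb geometry, re-optimize control, keep joint improvements.
for _ in range(30):
    new_geometry = geometry + 0.1 * rng.normal(size=geometry.shape)
    new_control = optimize_control(new_geometry, control.copy())
    if grasp_score(new_geometry, new_control) > grasp_score(geometry, control):
        geometry, control = new_geometry, new_control

print(f"final score: {grasp_score(geometry, control):.3f}")
```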
A large human behavior model.
Introducing In-N-On, our latest findings on scaling egocentric data for humanoids:
1. Pre-training and post-training with human data
2. 1,000+ hours of in-the-wild data and 20+ hours of on-task data with accurate action labels
Website:
Meet ACE-F — a novel, foldable teleoperation platform for collecting high-quality robot demonstration data across robot embodiments.
Using a specialized soft-controller pipeline, we interpret end-effector positional deviations as virtual force signals to provide the user with force feedback.
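The post names the mechanism but not the math; a minimal sketch of one way positional deviation can be rendered as a virtual force is a spring-damper model, with all gains hypothetical:

```python
# Minimal sketch of rendering end-effector positional deviation as a virtual
# force for operator feedback. The spring-damper model and gains are
# assumptions; ACE-F's actual soft-controller pipeline is not described here.
import numpy as np

K_P = 300.0   # virtual stiffness (N/m), hypothetical
K_D = 5.0     # virtual damping (N·s/m), hypothetical
F_MAX = 20.0  # saturation so feedback stays safe, hypothetical

def virtual_force(x_cmd, x_meas, v_cmd, v_meas):
    """Treat the gap between commanded and measured end-effector pose as a
    spring-damper: a large deviation (e.g., the robot pressing into an
    object) produces a proportionally large feedback force."""
    f = K_P * (x_cmd - x_meas) + K_D * (v_cmd - v_meas)
    norm = np.linalg.norm(f)
    if norm > F_MAX:
        f *= F_MAX / norm  # clip to the device's safe output range
    return f

# Example: the commanded pose leads the measured pose by 1 cm along x.
f = virtual_force(np.array([0.51, 0.0, 0.3]), np.array([0.50, 0.0, 0.3]),
                  np.zeros(3), np.zeros(3))
print(f)  # ~[3., 0., 0.] N of feedback
```

The appeal of this trick is that no force/torque sensor is needed: the tracking deviation the controller already measures doubles as the haptic signal.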
For years, researchers have believed that simulation could help scale up data, but physics remains the main bottleneck for sim-to-real deployment. Aligning simulation with real-world physics using visual 👁️ and contact 🤝 observations through a network could be a promising step.
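As a rough illustration of the idea (not the authors’ architecture), such a network could map paired sim/real visual and contact features to a residual over physics parameters. Everything below (shapes, features, the training target) is assumed:

```python
# Hedged sketch: a network consumes simulated vs. real observation features
# (visual embeddings + contact signals) and predicts a residual over physics
# parameters (e.g., friction, restitution) that pulls the sim toward reality.
import torch
import torch.nn as nn

class PhysicsAligner(nn.Module):
    def __init__(self, obs_dim=64, contact_dim=8, n_params=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * (obs_dim + contact_dim), 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, sim_obs, sim_contact, real_obs, real_contact):
        x = torch.cat([sim_obs, sim_contact, real_obs, real_contact], dim=-1)
        return self.net(x)  # residual added to current physics parameters

aligner = PhysicsAligner()
opt = torch.optim.Adam(aligner.parameters(), lr=1e-3)

# One training step with fake paired data; in practice the target residual
# would be whatever makes re-simulated rollouts match the real trajectory.
sim_obs, real_obs = torch.randn(32, 64), torch.randn(32, 64)
sim_c, real_c = torch.randn(32, 8), torch.randn(32, 8)
target_residual = torch.randn(32, 4)

pred = aligner(sim_obs, sim_c, real_obs, real_c)
loss = nn.functional.mse_loss(pred, target_residual)
opt.zero_grad(); loss.backward(); opt.step()
```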
A lot of people ask me about the sim2real physics gap. It is both highly important and non-trivial to bridge. Here’s one solution, though obviously there’s still a long way to go.
Interesting. We also observed that contact prediction helps locomotion control. I wonder whether predicting contacts first would benefit learning-based control more broadly, e.g., for VLAs/world models.
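One way to read “contact predictions help” is as an auxiliary head on the policy trunk; a minimal sketch, with network sizes and loss weight assumed:

```python
# Hedged sketch of contact prediction as an auxiliary task for a locomotion
# policy: one trunk, two heads, and a BCE contact loss added to the control
# loss. Architecture sizes and the loss weighting are assumptions.
import torch
import torch.nn as nn

class PolicyWithContactHead(nn.Module):
    def __init__(self, obs_dim=48, act_dim=12, n_feet=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU(),
                                   nn.Linear(256, 128), nn.ELU())
        self.action_head = nn.Linear(128, act_dim)
        self.contact_head = nn.Linear(128, n_feet)  # per-foot contact logits

    def forward(self, obs):
        h = self.trunk(obs)
        return self.action_head(h), self.contact_head(h)

policy = PolicyWithContactHead()
obs = torch.randn(64, 48)
contact_labels = torch.randint(0, 2, (64, 2)).float()  # e.g., from the sim

actions, contact_logits = policy(obs)
aux_loss = nn.functional.binary_cross_entropy_with_logits(
    contact_logits, contact_labels)
# total_loss = rl_loss + 0.5 * aux_loss   # weight 0.5 is a guess
```

Sharing the trunk forces the representation to encode contact state, which is one plausible reason the auxiliary task transfers to control.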
"Cross-embodiment" is a sign of generalization. We’ve seen huge progress in manipulation and navigation — but what about humanoid whole-body control? Can ONE policy control multiple different humanoids?
Meet our #ICRA2026 work 🦅EAGLE: Embodiment-Aware Generalist Specialist
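The post doesn’t detail EAGLE’s method; a generic sketch of the cross-embodiment pattern the question points at is to condition one policy on an embodiment descriptor, with all dimensions hypothetical:

```python
# Hedged sketch of a single whole-body policy conditioned on an embodiment
# descriptor (link lengths, masses, joint limits, ...). This is a generic
# cross-embodiment pattern, not necessarily what EAGLE does.
import torch
import torch.nn as nn

class EmbodimentConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=60, embodiment_dim=16, act_dim=23):
        super().__init__()
        self.embodiment_enc = nn.Sequential(nn.Linear(embodiment_dim, 32),
                                            nn.ELU())
        self.policy = nn.Sequential(nn.Linear(obs_dim + 32, 256), nn.ELU(),
                                    nn.Linear(256, act_dim))

    def forward(self, obs, embodiment):
        z = self.embodiment_enc(embodiment)  # shared latent embodiment code
        return self.policy(torch.cat([obs, z], dim=-1))

policy = EmbodimentConditionedPolicy()
# Two different humanoids, one network: only the descriptor changes.
# (Real robots differ in action dimension too, which would need
# padding or masking; that detail is omitted here.)
obs = torch.randn(2, 60)
descriptors = torch.randn(2, 16)  # e.g., normalized kinematic/mass params
actions = policy(obs, descriptors)
```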