Haotian (David) Zhan (@haotianz_david)'s Twitter Profile
Haotian (David) Zhan

@haotianz_david

robotics @nyuniversity | incoming M.S. @CMU_Robotics

ID: 1816311189795663872

Joined: 25-07-2024 03:15:27

2 Tweets

27 Followers

85 Following

Vincent Liu (@vincentjliu)'s Twitter Profile Photo

The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a single robot demonstration? Introducing EgoZero. Train real-world robot policies from human-first egocentric data. No robots. No teleop. Just Aria glasses and…
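
If you want a picture of what training "without a single robot demonstration" reduces to: behavior cloning on (egocentric frame, action) pairs, where the action labels come from tracked human hand poses rather than teleoperation. A minimal PyTorch sketch under that assumption; the network, shapes, and random tensors below are stand-ins, not the EgoZero release:

```python
import torch
import torch.nn as nn

class EgoPolicy(nn.Module):
    """Hypothetical BC policy: egocentric frame -> 7-DoF end-effector action."""
    def __init__(self, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256),
            nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, frame):
        return self.net(frame)

# Stand-ins for Aria frames and actions extracted from human hand trajectories.
frames = torch.randn(32, 3, 64, 64)
actions = torch.randn(32, 7)

policy = EgoPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(10):
    loss = nn.functional.mse_loss(policy(frames), actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The interesting part of such a pipeline is the labeling step (hand pose to robot action), which the random tensors above deliberately gloss over.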

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

Teaching robots to learn only from RGB human videos is hard! In Feel The Force (FTF), we teach robots to mimic the tactile feedback humans experience when handling objects. This allows for delicate, touch-sensitive tasks—like picking up a raw egg without breaking it. 🧵👇
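
"Mimic the tactile feedback humans experience" implies a force-tracking inner loop: record the contact force a human applied, then servo the gripper until the robot feels the same force. A toy proportional controller against a simulated spring contact; the gain, spring model, and function names are illustrative assumptions, not FTF's controller:

```python
# Toy force tracking: close the gripper until measured force matches the
# force from the human demonstration. Spring model and gain are invented.

def contact_force(width, stiffness=500.0, object_size=0.05):
    """Toy spring contact: force rises as the gripper squeezes below object size."""
    return max(0.0, (object_size - width) * stiffness)

def track_force(target, width=0.08, kp=1e-4, steps=500):
    for _ in range(steps):
        error = target - contact_force(width)
        width -= kp * error          # close when force is too low, open when too high
        width = min(max(width, 0.0), 0.08)
    return width, contact_force(width)

w, f = track_force(target=2.0)       # aim for a ~2 N grasp, gentle enough for an egg
print(f"gripper width {w * 1e3:.1f} mm, contact force {f:.2f} N")
```

The proportional update converges to the width where the spring force equals the demonstrated target, which is the whole trick behind touch-sensitive grasping.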

Venkatesh (@venkyp2000)'s Twitter Profile Photo

Making touch sensors has never been easier! Excited to present eFlesh, a 3D printable tactile sensor that aims to democratize robotic touch. All you need to make your own eFlesh is a 3D printer, some magnets and a magnetometer. See thread 👇and visit e-flesh.com
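
The operating principle of a magnet-plus-magnetometer skin: pressing the elastomer moves an embedded magnet, the magnetometer sees a flux change, and a calibrated model maps that change back to indentation. A point-dipole toy model of the readout; all constants are invented, and eFlesh's real calibration is more involved:

```python
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability
MOMENT = 0.01            # invented magnetic moment of the embedded magnet (A*m^2)
REST_GAP = 5e-3          # magnet sits 5 mm from the magnetometer at rest

def axial_field(r):
    """On-axis field magnitude of a point dipole at distance r."""
    return MU0 * MOMENT / (2 * np.pi * r**3)

def indentation(b_measured):
    """Invert the dipole model: field reading -> how far the magnet moved closer."""
    r = (MU0 * MOMENT / (2 * np.pi * b_measured)) ** (1 / 3)
    return REST_GAP - r

# Simulated press: the magnet is pushed 1 mm closer to the sensor.
reading = axial_field(REST_GAP - 1e-3)
print(f"estimated indentation: {indentation(reading) * 1e3:.2f} mm")
```

The 1/r³ falloff is why small deformations produce large, easily measured signal changes, which is what makes a bare magnetometer a viable touch sensor.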

Kallol Saha (@_ksaha)'s Twitter Profile Photo

🚨Introducing SPOT: Search over Point Cloud Object Transformations. SPOT is a combined learning-and-planning approach that searches in the space of object transformations. Website: planning-from-point-clouds.github.io Paper: arxiv.org/abs/2509.04645 Code: github.com/kallol-saha/SP…
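
"Searches in the space of object transformations" can be read as: apply candidate rigid transforms to the object's point cloud, score each result against a goal cloud, repeat. A greedy toy version restricted to axis-aligned translations with a crude Chamfer cost; SPOT's actual search and objectives live in the paper and repo, not here:

```python
import numpy as np

def apply_tf(T, cloud):
    """Apply a 4x4 rigid transform to an (N, 3) point cloud."""
    return cloud @ T[:3, :3].T + T[:3, 3]

def translation(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def chamfer(a, b):
    """Symmetric mean nearest-neighbor distance between two clouds."""
    d = np.linalg.norm(a[:, None] - b[None], axis=-1)
    return d.min(1).mean() + d.min(0).mean()

def greedy_search(cloud, goal, step=0.01, iters=100):
    moves = [translation(s * np.eye(3)[i]) for s in (step, -step) for i in range(3)]
    plan = []
    for _ in range(iters):
        best = min(moves, key=lambda T: chamfer(apply_tf(T, cloud), goal))
        if chamfer(apply_tf(best, cloud), goal) >= chamfer(cloud, goal):
            break                     # no move improves the cost: stop
        cloud = apply_tf(best, cloud)
        plan.append(best)
    return plan, cloud

obj = np.random.default_rng(0).uniform(0, 0.1, (64, 3))
goal = obj + np.array([0.05, -0.03, 0.0])
plan, final = greedy_search(obj, goal)
print(f"{len(plan)} moves, final cost {chamfer(final, goal):.4f}")
```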

Yishu Li (@lisayishu)'s Twitter Profile Photo

A closed door looks the same whether it pushes or pulls. Two identical-looking boxes might have different center of mass. How should robots act when a single visual observation isn't enough? Introducing HAVE 🤖, our method that reasons about past interactions online! #CORL2025

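The push-vs-pull door example maps naturally onto belief updating from interaction history: each attempt is evidence about a hidden property that no single image reveals. A toy Bayesian filter over that one binary property, purely illustrative (HAVE itself is a learned model, not this hand-written update):

```python
def update(belief_push, action, succeeded, reliability=0.9):
    """Posterior P(door opens by pushing) after observing one attempt.
    reliability = P(the 'right' action succeeds) under either hypothesis."""
    p_if_push = reliability if (action == "push") == succeeded else 1 - reliability
    p_if_pull = reliability if (action == "pull") == succeeded else 1 - reliability
    num = p_if_push * belief_push
    return num / (num + p_if_pull * (1 - belief_push))

belief = 0.5                                           # visually, push and pull look identical
for action, ok in [("push", False), ("push", False)]:  # two failed push attempts
    belief = update(belief, action, ok)
print(f"P(push-door) = {belief:.2f}")                  # low, so try pulling next
```
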
Divyam Goel (@divyamgo10)'s Twitter Profile Photo

How do we discover a robot's failure modes before deploying it in the real world? Standard benchmarks often don't capture the full picture, leaving policies vulnerable to plausible variations in object shape. Thrilled that our work, "Geometric Red-Teaming for Robotic…
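
The loop being described (perturb object geometry, roll the policy out, keep the shapes that break it) is simple to sketch as random search. Here `rollout_success` is a placeholder for an actual simulated policy evaluation, and both the perturbation and the toy failure rule are assumptions:

```python
import numpy as np

def perturb(verts, scale, rng):
    """Jitter vertices to produce a plausible shape variant (toy perturbation)."""
    return verts + rng.normal(0.0, scale, verts.shape)

def rollout_success(verts):
    """Placeholder for a simulated policy rollout; toy rule: the perturbed
    object must still fit inside an 8 cm gripper opening along x."""
    return np.ptp(verts[:, 0]) < 0.08

def red_team(verts, trials=200, scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    candidates = (perturb(verts, scale, rng) for _ in range(trials))
    return [c for c in candidates if not rollout_success(c)]

base = np.random.default_rng(1).uniform(0.0, 0.05, (100, 3))   # toy object vertices
failures = red_team(base)
print(f"{len(failures)} failure-inducing shapes out of 200 trials")
```

Smarter variants would search the perturbation space adversarially rather than at random, but the evaluate-and-collect skeleton stays the same.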

Alexis Hao (@hao_alexis)'s Twitter Profile Photo

Introducing FMVP: a method that adapts to natural arm motions during robot-assisted dressing. Pre-trained on vision in sim, fine-tuned with limited real-world vision+force data, and tested in a 12-user, 264-trial study, FMVP is robust across garments and motions. #CoRL2025
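
The stated recipe, vision pre-training in sim followed by fine-tuning on limited real vision+force data, is easy to sketch as two stages with a force branch and a reduced learning rate. The `DressingPolicy` module, shapes, and random tensors below are hypothetical, not FMVP's architecture:

```python
import torch
import torch.nn as nn

class DressingPolicy(nn.Module):
    """Hypothetical two-branch policy: vision always, wrist force when available."""
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
        self.force = nn.Linear(6, 32)        # 6-axis force/torque reading
        self.head = nn.Linear(128 + 32, 7)   # end-effector action

    def forward(self, img, wrench=None):
        f = self.force(wrench) if wrench is not None else torch.zeros(img.shape[0], 32)
        return self.head(torch.cat([self.vision(img), f], dim=-1))

policy = DressingPolicy()

# Stage 1: vision-only pre-training on plentiful simulated frames.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
sim_img, sim_act = torch.randn(256, 3, 32, 32), torch.randn(256, 7)
for _ in range(50):
    loss = nn.functional.mse_loss(policy(sim_img), sim_act)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tune on a small real vision+force set at a lower learning rate.
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
img, wrench, act = torch.randn(32, 3, 32, 32), torch.randn(32, 6), torch.randn(32, 7)
for _ in range(20):
    loss = nn.functional.mse_loss(policy(img, wrench), act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```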

Raunaq Bhirangi (@raunaqmb)'s Twitter Profile Photo

When Anya Zorin and Irmak Guzey open-sourced the RUKA Hand (a low-cost robotic hand) earlier this year, people kept asking us how to get one. Open hardware isn’t as easy to share as code. So we’re releasing an off-the-shelf RUKA, in collaboration with WowRobo-Leo Xiao and zhazhali.