Zi-ang Cao (@ziang_cao)'s Twitter Profile
Zi-ang Cao

@ziang_cao

MS student @Stanford | Working on vision and learning for robotics

ID: 1431351894304120833

Joined: 27-08-2021 20:24:44

13 Tweets

166 Followers

232 Following

Jingyun Yang (@yjy0625)

Want a robot that learns household tasks by watching you? EquiBot is a ✨ generalizable and 🚰 data-efficient method for visuomotor policy learning, robust to changes in object shapes, lighting, and scene makeup, even from just 5 mins of human videos. 🧵↓

Jingyun Yang (@yjy0625)

The key insight of our method is that embedding equivariance in the policy architecture allows it to generalize across unseen object appearances, poses, and scales. We integrate equivariance into a diffusion policy to ensure robust learning performance. [1/5]
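
For intuition, here is a minimal, self-contained sketch of the equivariance property being described; it is not the EquiBot architecture (a SIM(3)-equivariant diffusion policy), just a toy function showing what rotation-equivariance means for a point-cloud policy: rotating the observed point cloud rotates the predicted action in the same way. The function name and data below are purely illustrative.

```python
import numpy as np

def toy_equivariant_policy(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud to a 3D action direction.

    Centering on the centroid and returning the offset of the farthest
    point is rotation-equivariant: rotating the input rotates the output.
    """
    offsets = points - points.mean(axis=0)
    return offsets[np.argmax(np.linalg.norm(offsets, axis=1))]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))

# Build a random proper rotation matrix via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))  # flip sign if needed so det(R) = +1

a = toy_equivariant_policy(cloud)            # action for the original cloud
a_rot = toy_equivariant_policy(cloud @ R.T)  # action for the rotated cloud

# Equivariance check: predicting on the rotated cloud equals rotating the prediction.
print("equivariant:", np.allclose(a_rot, R @ a, atol=1e-6))
```

An equivariant policy gets this consistency for free from its architecture, which is why it can generalize to unseen object poses without seeing them in training data.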

Jingyun Yang (@yjy0625)

In our real robot experiments, we use hand tracking and open-vocabulary image segmentation models to parse 5 mins of human videos into point clouds and actions. Then, we train the policy using this data. [2/5]
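
Below is a minimal sketch of the dataflow described above, with hypothetical helper names (detect_hand_pose, segment_object) standing in for the hand-tracking and open-vocabulary segmentation models; it is not the released EquiBot code, only an illustration of how frames could be turned into (point cloud, action) training pairs.

```python
import numpy as np

def detect_hand_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return a 3D hand keypoint.
    A real system would run a hand-tracking model here."""
    return frame.mean(axis=(0, 1))[:3]

def segment_object(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return an (N, 3) object point cloud.
    A real system would use open-vocabulary segmentation plus depth back-projection."""
    return frame.reshape(-1, frame.shape[-1])[:256, :3]

def video_to_demo(frames: list[np.ndarray]):
    """Turn consecutive frames into (point cloud, action) training pairs,
    using the change in hand pose between frames as the action label."""
    pairs = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        cloud = segment_object(prev)
        action = detect_hand_pose(nxt) - detect_hand_pose(prev)
        pairs.append((cloud, action))
    return pairs

# Dummy 5-frame "video" just to show the dataflow end to end.
video = [np.random.rand(64, 64, 4) for _ in range(5)]
dataset = video_to_demo(video)
print(len(dataset), dataset[0][0].shape, dataset[0][1].shape)
```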

Jingyun Yang (@yjy0625)

Our method learns to make the bed from watching how to fold a handkerchief; learns how to close a suitcase from watching how to close a smaller carry-on; and learns how to pack different objects after watching how to pack one object. [3/5]

Jingyun Yang (@yjy0625)

EquiBot expands the capabilities of our prior work EquivAct, offering more robust policy learning, a wider selection of observation and action spaces, and no separate representation learning phase. [4/5] x.com/yjy0625/status…

Jingyun Yang (@yjy0625)

This project is co-led with Zi-ang Cao and in collaboration with Congyue Deng, Rika Antonova, Shuran Song, and Jeannette Bohg. Website: equi-bot.github.io Paper: arxiv.org/abs/2407.01479 Code: github.com/yjy0625/equibot [5/5]

Zeyi Liu (@liu_zeyi_)

🔊 Audio signals contain rich information about daily interactions. Can our robots learn from videos with sound? Introducing ManiWAV, a robotic system that learns contact-rich manipulation skills from in-the-wild audio-visual data. See thread for more details (1/4) 👇

Zi-ang Cao (@ziang_cao)

The Stanford Robotics Center just launched last weekend, and we demonstrated dual-robot bed making with EquiBot along with other demos in the domestic suite. We will present EquiBot at #CoRL2024 in Poster Session 1 (11/6 11AM). Please come by to learn more. equi-bot.github.io

Haochen Shi (@haochenshi74)

Time to democratize humanoid robots! Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research. Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session. toddlerbot.github.io

Yanjie Ze (@zeyanjie)

🤖Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐humanoid-teleop.github.io