Lucy Shi (@lucy_x_shi)'s Twitter Profile

Student researcher at Stanford. Working on robot learning and multimodal learning. Interested in robots, rockets, and humans.

ID: 1446952547504177154

Link: https://lucys0.github.io/ | Joined: 09-10-2021 21:35:59

179 Tweets

1.2K Followers

530 Following

Ayzaan Wahid (@ayzwah):

For the past year we've been working on ALOHA Unleashed 🌋 @GoogleDeepmind - pushing the scale and dexterity of tasks on our ALOHA 2 fleet. Here is a thread with some of the coolest videos!

The first task is hanging a shirt on a hanger (autonomous 1x)

Lucy Shi (@lucy_x_shi):

As impressive as always, great work Tony Z. Zhao!!

Seeing Tony’s dexterous manipulation policies over the past year has changed my mind about what data can solve.
Now a question that keeps me up at night is what data cannot or should not solve.

Sergey Levine (@svlevine):

Yelling at robots is not very nice, because usually they can't understand what you're saying. But now thanks to Lucy Shi et al., they can! In YAY Robot, we study how language corrections (which can be spoken too, not yelled) allow robots to get better.

A thread 👇

Tian Gao (@TianGao_19):

Imitation learning often involves significant human effort to collect a large dataset for robust policy learning. How can we train robust policies in low-data regimes?

Our imitation learning framework PRIME scaffolds manipulation tasks with behavior primitives, breaking down

Chelsea Finn (@chelseabfinn):

Robots often make mistakes in long-horizon tasks, and fixing those mistakes typically requires a lot of data.

Our robot can:
- incorporate verbal corrections on-the-fly, AND
- use those to improve the policy over time

No further teleop needed!

Paper & Code: yay-robot.github.io

Karl Pertsch (@KarlPertsch):

In all seriousness though, being able to 'program' *and* 'debug' your robot in natural language will be tremendously useful when the job of teaching robots new skills is no longer done by machine learning experts in labs but end users in homes!
Great job Lucy!! :)

Alexander Khazatsky (@SashaKhazatsky):

After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset”

DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices

Youngwoon Lee (@YoungwoonLee):

We're diving into the world of humanoid robotics 🤖! We were curious how to train humanoid robots and ended up making HumanoidBench, a simulated benchmark for humanoid robots!

Give it a try and let us know what you think!

Karl Pertsch (@KarlPertsch):

Access to *diverse* training data is a major bottleneck in robot learning. We're releasing DROID, a large-scale in-the-wild manipulation dataset: 76k trajectories, 500+ scenes, multi-view stereo, language annotations, and more.
Check it out & download today!

💻: droid-dataset.github.io

Chen Wang (@chenwang_j):

Can we use wearable devices to collect robot data without actual robots?

Yes! With a pair of gloves🧤!

Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands

Everything open-sourced

Karol Hausman (@hausman_k):

🚨 Big news 🚨
Together with a group of amazing folks, we decided to start a company that tackles one of the hardest and most impactful problems: Physical Intelligence

In fact, we even named our company after that: physicalintelligence.company or Pi (π) for short
🧵
