Zoey Chen (@zoeyc17)'s Twitter Profile
Zoey Chen

@zoeyc17

PhD student at the University of Washington. I blog about computer vision, robotics and artificial intelligence at: qiuyuchen14.github.io

ID: 908162147351273472

Website: https://qiuyuchen14.github.io/ · Joined: 14-09-2017 02:55:03

110 Tweets

958 Followers

542 Following

Carolina Higuera (@carohiguerarias)'s Twitter Profile Photo

Can we narrow the reality gap for vision-based tactile sensors? We present Tactile Diffusion for generating synthetic tactile images. On a real Braille reading task with a DIGIT sensor, a classifier trained with our model outperforms other data adaptation approaches. (1/5)
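
A minimal sketch of the downstream step this tweet describes: training a Braille classifier on diffusion-generated synthetic tactile images, then running it on real DIGIT frames. The tiny CNN, image size, and 26-class setup are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

num_classes = 26  # assumed: one class per Braille letter
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; in practice these would come from the tactile diffusion model.
synthetic_images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, num_classes, (8,))

optimizer.zero_grad()
loss = loss_fn(model(synthetic_images), labels)  # one training step on synthetic data
loss.backward()
optimizer.step()
print(float(loss))
```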

Wenxuan Zhou (@wenxuan_zhou)'s Twitter Profile Photo

How can robots learn generalizable manipulation skills for diverse objects? Going beyond pick-and-place, our recent work “HACMan” enables complex interactions for unseen objects, such as flipping, pushing, or tilting, using spatial action maps + RL with point clouds. (w/ @MetaAI)
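
To make "spatial action maps" concrete, here is a hedged sketch (not HACMan's actual code): a network scores every point in a point cloud as a candidate contact location and regresses per-point motion parameters, and the agent acts at the highest-scoring point. The shapes and the tiny MLP are assumptions.

```python
import torch
import torch.nn as nn

point_cloud = torch.randn(1024, 3)      # 1024 scene/object points (x, y, z)

per_point_net = nn.Sequential(          # per-point head: 1 score + 3 motion params
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 1 + 3),
)
out = per_point_net(point_cloud)        # (1024, 4)
scores, motions = out[:, 0], out[:, 1:]

best = scores.argmax()                  # act where the map scores highest
contact_point = point_cloud[best]
motion_params = motions[best]           # e.g., a poke/push direction
print(contact_point, motion_params)
```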

Zoey Chen (@zoeyc17)'s Twitter Profile Photo

I put together some slides on "How to train your robot with limited data" for a class at UW, sharing them in case they're useful to anyone who is interested. They cover some aspects of data augmentation, domain adaptation, and sim2real for robotics. tinyurl.com/aytnwp
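
As a taste of the data-augmentation portion, a minimal sketch (my example, not taken from the slides): standard image augmentations stretch a small set of robot camera frames further before training a policy or classifier.

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomHorizontalFlip(),
])

image = torch.rand(3, 256, 256)   # stand-in for one robot camera frame
augmented = augment(image)        # a new "view" of the same scene
print(augmented.shape)            # torch.Size([3, 224, 224])
```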

Vikash Kumar (@vikashplus)'s Twitter Profile Photo

#RoboAgent -- A universal multi-task agent on a data budget 💪 with 12 non-trivial skills 💪 that it can generalize across 38 tasks 💪 & 100s of novel scenarios! 🌐 robopen.github.io w/ Homanga Bharadhwaj, Jay Vakil, Mohit Sharma, Abhinav Gupta, Shubham Tulsiani

Jiafei Duan (@djiafei)'s Twitter Profile Photo

🚨Is it possible to devise an intuitive approach for crowdsourcing training data for robots without requiring a physical robot🤖? Can we democratize robot learning for all?🧑‍🤝‍🧑 Check out our latest #CoRL2023 paper -> AR2-D2: Training a Robot Without a Robot

Chen Wang (@chenwang_j)'s Twitter Profile Photo

How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇
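
A toy sketch of the chaining idea (not the paper's method): each skill executes, a stand-in feasibility check decides whether the next skill can start from the resulting state, and failed transitions are retried.

```python
import random

def run_skill(name: str, state: dict) -> dict:
    state = dict(state, last_skill=name)        # placeholder skill execution
    state["feasible"] = random.random() > 0.2   # stand-in transition-feasibility check
    return state

chain = ["retrieve_block", "rotate_in_hand", "insert_block"]
state = {"feasible": True}
for skill in chain:
    for _ in range(3):                          # retry a failed transition
        state = run_skill(skill, state)
        if state["feasible"]:
            break
    else:
        raise RuntimeError(f"could not chain into {skill}")
print("task complete:", state)
```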

Yi Ru (Helen) Wang (@yiruhelenwang)'s Twitter Profile Photo

🤖Ever wondered how well Large Language Models (LLMs) like GPT-4 can understand and reason about the physics of everyday objects? Our paper 🍎"NEWTON: Are Large Language Models Capable of Physical Reasoning?" dives deep into this! Project site: newtonreasoning.github.io 🧵1/n
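
A hedged sketch of what evaluating an LLM on such questions could look like; `ask_llm` is a placeholder for any chat-completion call, and the two questions below are made up rather than actual NEWTON items.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API such as GPT-4.
    return "A"

questions = [
    # (question, choices, correct answer): illustrative physical-reasoning items
    ("Which is easier to bend by hand?",
     ["A. a steel rod", "B. a rubber band"], "B"),
    ("Which survives a 1 m drop onto concrete?",
     ["A. a ceramic mug", "B. a tennis ball"], "B"),
]

correct = 0
for question, choices, answer in questions:
    prompt = question + "\n" + "\n".join(choices) + "\nAnswer with A or B."
    if ask_llm(prompt).strip().startswith(answer):
        correct += 1
print(f"accuracy: {correct / len(questions):.2f}")
```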

Jiafei Duan (@djiafei)'s Twitter Profile Photo

For large-scale robotic deployment🤖 in the real world 🌏, robots must adapt to changes in environments and objects. Ever questioned the generalizability of your robot's manipulation policy? Put it to the test with The Colosseum 🏛️. Check out our project: robot-colosseum.github.io
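
To make the idea concrete, a small sketch of a perturbation sweep (the factor names are my assumptions, not the benchmark's): roll the policy out under systematically varied scene factors and log success for each combination.

```python
import itertools
import random

perturbations = {
    "lighting": ["default", "dim", "bright"],
    "object_color": ["red", "green", "blue"],
    "table_texture": ["wood", "marble"],
}

def rollout(policy, config) -> bool:
    # Placeholder: a real evaluation runs the policy in the perturbed scene.
    random.seed(str(sorted(config.items())))
    return random.random() > 0.5

results = {}
for combo in itertools.product(*perturbations.values()):
    config = dict(zip(perturbations, combo))
    results[combo] = rollout(policy=None, config=config)

success_rate = sum(results.values()) / len(results)
print(f"success over {len(results)} perturbation combos: {success_rate:.2f}")
```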

Marcel Torné (@marceltornev)'s Twitter Profile Photo

How can we train robust policies with minimal human effort?🤖 We propose RialTo, a system that robustifies imitation learning policies from 15 real-world demonstrations using on-the-fly reconstructed simulations of the real world. (1/9)🧵 Project website: real-to-sim-to-real.github.io/RialTo/
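
A skeletal sketch of the recipe as the tweet describes it (placeholders throughout, not the authors' code): behavior-clone a policy from a handful of real demos, fine-tune it with RL inside the reconstructed simulation, then deploy the robustified policy.

```python
def behavior_clone(demos):
    """Fit a policy to (observation, action) pairs; a trivial lookup stands in."""
    lookup = {obs: act for obs, act in demos}
    return lambda obs: lookup.get(obs, 0)

def finetune_in_sim(policy, sim_steps=1000):
    """Placeholder for RL fine-tuning in the on-the-fly reconstructed sim."""
    return policy  # a real system would run an RL algorithm such as PPO here

real_demos = [("obs_a", 1), ("obs_b", 0)]  # stand-in for the 15 real demos
policy = behavior_clone(real_demos)
robust_policy = finetune_in_sim(policy)
print(robust_policy("obs_a"))
```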

Chen Wang (@chenwang_j)'s Twitter Profile Photo

Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands. Everything is open-sourced!
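
One plausible shape for a single capture frame (the field names and 21-DoF hand model are my assumptions; the open-sourced project defines its own format): a scene point cloud plus glove-measured finger joints, timestamped for retargeting to a robot hand.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class CaptureFrame:
    timestamp: float           # seconds since capture start
    points: np.ndarray         # (N, 3) scene point cloud
    finger_joints: np.ndarray  # (num_joints,) glove-measured joint angles

frame = CaptureFrame(
    timestamp=0.033,
    points=np.random.rand(2048, 3).astype(np.float32),
    finger_joints=np.zeros(21, dtype=np.float32),  # assumed 21-DoF hand model
)
print(frame.points.shape, frame.finger_joints.shape)
```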

Zoey Chen (@zoeyc17)'s Twitter Profile Photo

A really diverse robot manipulation dataset collected in the wild, with great effort across many institutions! It was fun to participate, and I'm really excited to see all the tasks this enables!

Abhishek Gupta (@abhishekunique7)'s Twitter Profile Photo

So you want to do robotics tasks requiring dynamics information in the real world, but you don’t want the pain of real-world RL? In our work to be presented as an oral at ICLR 2024, Marius Memmel showed how we can do this via a real-to-sim-to-real policy learning approach. A 🧵 (1/7)

Homanga Bharadhwaj (@mangahomanga)'s Twitter Profile Photo

Track2Act: Our latest on training goal-conditioned policies for diverse manipulation in the real world. We train a model for embodiment-agnostic point track prediction from web videos, combined with embodiment-specific residual policy learning. homangab.github.io/track2act/ 1/n
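
A hedged sketch of the two-stage idea (not Track2Act's code): derive a coarse action from where the predicted point tracks say the scene should move, then add a learned embodiment-specific residual. Every quantity below is a placeholder.

```python
import numpy as np

tracks_t0 = np.random.rand(16, 2)              # 16 tracked image points, current frame
tracks_t1 = tracks_t0 + np.array([0.01, 0.0])  # predicted next positions of those points

base_action = (tracks_t1 - tracks_t0).mean(axis=0)  # crude motion implied by the tracks

def residual_policy(obs, base):
    # Placeholder for the embodiment-specific residual network.
    return 0.1 * base                          # small learned correction

action = base_action + residual_policy(obs=None, base=base_action)
print(action)
```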

Zoey Chen (@zoeyc17)'s Twitter Profile Photo

Come check out our new work, URDFormer, for cheaply generating interactive simulation content from real-world images! Paper, code, website: urdformer.github.io. 👇 Detailed thread from Abhishek Gupta
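
Illustrative only: the kind of minimal URDF a pipeline like this could emit for, say, a cabinet with one revolute door predicted from an image (all element values are made up; see urdformer.github.io for the actual system).

```python
urdf = """<robot name="predicted_cabinet">
  <link name="body"/>
  <link name="door"/>
  <joint name="door_hinge" type="revolute">
    <parent link="body"/>
    <child link="door"/>
    <axis xyz="0 0 1"/>
    <limit lower="0" upper="1.57" effort="10" velocity="1"/>
  </joint>
</robot>"""

with open("predicted_cabinet.urdf", "w") as f:
    f.write(urdf)  # loadable by simulators such as PyBullet
```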

Chuning Zhu (@chuning_zhu)'s Twitter Profile Photo

How can we train RL agents that transfer to any reward? In our NeurIPS paper DiSPO, we propose to learn the distribution of successor features of a stationary dataset, which enables zero-shot transfer to arbitrary rewards without additional training! A thread 🧵(1/9)
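
For background, the standard successor-feature identity that makes this kind of zero-shot transfer possible (general textbook math, not DiSPO's specific model): if rewards are linear in features, r = φ(s)·w, then Q(s, a) = ψ(s, a)·w, so a new reward vector w can be evaluated without retraining. The numbers below are illustrative.

```python
import numpy as np

psi = np.array([           # successor features for 3 candidate actions
    [1.0, 0.2, 0.0],
    [0.3, 0.9, 0.1],
    [0.0, 0.1, 1.2],
])
w_new = np.array([0.0, 1.0, 0.5])  # a reward weighting never seen in training

q_values = psi @ w_new             # zero-shot Q-values under the new reward
best_action = int(q_values.argmax())
print(q_values, best_action)
```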

YI LI (@yi_li_uw)'s Twitter Profile Photo

🚀 Meet 🐹HAMSTER, our new hierarchical Vision-Language-Action (VLA) framework for robot manipulation!
🔹 High-level VLM for perception & reasoning
🔹 Low-level 3D policy for precise control
🔹 Bridged by 2D paths for trajectory planning
HAMSTER learns from cost-effective
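
A sketch of the hierarchical interface those bullets outline (stand-in functions, not HAMSTER's API): the high-level VLM proposes a coarse 2D path in image space, and the low-level policy converts successive waypoints into control actions.

```python
def vlm_propose_path(image, instruction):
    # Placeholder for the high-level vision-language model.
    return [(0.2, 0.3), (0.4, 0.35), (0.6, 0.5)]  # 2D waypoints in image coords

def low_level_policy(observation, waypoint):
    # Placeholder for the low-level 3D policy tracking one 2D waypoint.
    return {"delta_xyz": (waypoint[0], waypoint[1], 0.0), "gripper": 1.0}

path = vlm_propose_path(image=None, instruction="put the mug on the shelf")
for waypoint in path:
    action = low_level_policy(observation=None, waypoint=waypoint)
    print(action)
```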