Yunzhu Li (@yunzhuliyz)'s Twitter Profile
Yunzhu Li

@yunzhuliyz

Assistant Professor of Computer Science @Columbia @ColumbiaCompSci, Postdoc from @Stanford @StanfordSVL, PhD from @MIT_CSAIL. #Robotics #Vision #Learning

ID: 947911979099881472

Link: https://yunzhuli.github.io/ · Joined: 01-01-2018 19:26:41

377 Tweets

5.5K Followers

492 Following

Apparate Labs (@apparatelabs)'s Twitter Profile Photo

Introducing Proteus 0.1, REAL-TIME video generation that brings life to your AI. Proteus can laugh, rap, sing, blink, smile, talk, and more. From a single image! Come meet Proteus on Twitch in real-time. ↓ Sign up for API waitlist: apparate.ai/early-access.h… 1/11

Fei-Fei Li (@drfeifei)'s Twitter Profile Photo

Come and work with robots and the smartest students at the Stanford Vision and Learning Lab! We have a postdoc opening focusing on robotics & robot learning. You'll be working directly with me and co-PI Jiajun Wu and our amazing students and researchers. We will both be at #CVPR2024 and can chat more.

Jim Fan (@drjimfan)'s Twitter Profile Photo

I'm going to CVPR next week! My main goal is to build the world's best team for embodied AGI. We prioritize candidates in one of the following domains:
1) Large-scale multimodal LLM and generative model training.
2) Deep robotics expertise: physics simulation, sim2real, and/or robot

Ruoshi Liu (@ruoshi_liu)'s Twitter Profile Photo

How can a visuomotor policy learn from internet videos? We introduce Dreamitate, where a robot uses a fine-tuned video diffusion model to dream the future (top) and imitate the dream to accomplish a task (bottom). website: dreamitate.cs.columbia.edu paper: arxiv.org/abs/2406.16862
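To make the two-stage idea concrete, here is a toy sketch of the dream-then-imitate loop. The diffusion model and tracker are faked with stand-ins; none of these function names come from the Dreamitate codebase.

```python
import numpy as np

def dream(observation, horizon=16):
    """Stand-in for the fine-tuned video diffusion model: returns `horizon`
    imagined future frames conditioned on the current observation."""
    rng = np.random.default_rng(0)
    return observation + rng.normal(scale=0.01, size=(horizon, *observation.shape))

def track_tool_pose(frames):
    """Stand-in for a tool tracker: recovers one (x, y, z) tool position per
    dreamed frame. A real system would estimate 6-DoF poses from the video."""
    return frames.reshape(len(frames), -1)[:, :3]

obs = np.zeros((64, 64))               # current camera frame (toy)
dreamed = dream(obs)                   # 1. dream the future
trajectory = track_tool_pose(dreamed)  # 2. recover the tool's motion
for pose in trajectory:                # 3. imitate the dream on the robot
    print("move tool to", pose)
```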

Haochen Shi (@haochenshi74)'s Twitter Profile Photo

As a follow-up to RoboCraft and RoboCook, we explore how to embed tactile information into GNN-based dynamics learning in RoboPack! We find that a particle-based representation with tactile features helps in challenging tasks like dense packing 📦! Big congrats to Bo Ai!
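As a rough illustration of what "tactile features in GNN-based dynamics" can mean, here is a self-contained toy model (my own sketch, not the RoboPack architecture): per-particle tactile readings are concatenated into the node features before message passing, so predicted motion can depend on sensed contact.

```python
import torch
import torch.nn as nn

class ParticleDynamicsGNN(nn.Module):
    def __init__(self, state_dim=3, tactile_dim=4, hidden=64):
        super().__init__()
        node_dim = state_dim + tactile_dim   # position + tactile features per particle
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, state_dim))  # predicted motion delta

    def forward(self, pos, tactile, edges):
        # pos: (N, 3) particle positions; tactile: (N, 4) per-particle tactile
        # readings; edges: (E, 2) sender/receiver index pairs.
        x = torch.cat([pos, tactile], dim=-1)
        send, recv = edges[:, 0], edges[:, 1]
        # One round of message passing: messages along edges, summed per receiver.
        msg = self.edge_mlp(torch.cat([x[send], x[recv]], dim=-1))
        agg = torch.zeros(x.size(0), msg.size(-1)).index_add_(0, recv, msg)
        return pos + self.node_mlp(torch.cat([x, agg], dim=-1))  # next positions

# Toy rollout: 10 particles connected in a chain.
pos, tac = torch.randn(10, 3), torch.randn(10, 4)
edges = torch.tensor([[i, i + 1] for i in range(9)])
next_pos = ParticleDynamicsGNN()(pos, tac, edges)   # (10, 3)
```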

Yunzhu Li (@yunzhuliyz)'s Twitter Profile Photo

Robotic packing requires a fine-grained understanding of whether a squeezing action can create space. Our latest paper at RSS 2024 (Robotics: Science and Systems) demonstrates the critical role of tactile sensing in modeling and planning physical interactions for packing tasks. 🤖 This work,

Boyuan Chen (@boyuanchen0)'s Twitter Profile Photo

Introducing Diffusion Forcing, which unifies next-token prediction (e.g., LLMs) and full-sequence diffusion (e.g., SORA)! It offers improved performance & new sampling strategies in vision and robotics, such as stable, infinite video generation, better diffusion planning, and more! (1/8)
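A toy training step showing the core trick as the thread describes it: each token in the sequence gets its own independent noise level, so the same model spans next-token prediction (future tokens fully noised) and full-sequence diffusion (one shared level). This is a hedged sketch with a GRU stand-in backbone and a made-up linear noise schedule, not the official implementation.

```python
import torch
import torch.nn as nn

T, D, K = 8, 16, 100                         # sequence length, token dim, noise levels
model = nn.GRU(D + 1, 64, batch_first=True)  # causal backbone (unidirectional RNN)
head = nn.Linear(64, D)                      # predicts the clean token

x0 = torch.randn(1, T, D)                    # clean token sequence (toy data)
k = torch.randint(0, K, (1, T, 1))           # independent noise level PER TOKEN
alpha = 1.0 - k.float() / K                  # toy linear noise schedule
xk = alpha.sqrt() * x0 + (1 - alpha).sqrt() * torch.randn_like(x0)

# Condition each token on its own noise level and denoise causally.
h, _ = model(torch.cat([xk, k.float() / K], dim=-1))
loss = ((head(h) - x0) ** 2).mean()          # regress the clean tokens
loss.backward()
```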

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

This #RSS2024 on July 19, we are organizing a tutorial on supervised policy learning for real-world robots! Talks by Mahi Shafiullah 🏠🤖 & Russ Tedrake will cover the fundamentals of imitation, recent algorithms, walk-through code, and practical considerations. supervised-robot-learning.github.io

Yunzhu Li (@yunzhuliyz)'s Twitter Profile Photo

Check out our #RSS2024 paper (also the Best Paper Award at the #ICRA2024 deformable object manipulation workshop) on dynamics modeling of diverse materials for robotic manipulation. 🤖 We considered a diverse set of objects, including ropes, clothes, granular media, and rigid

Yunzhu Li (@yunzhuliyz)'s Twitter Profile Photo

Turns out I'm the very first to register for the main conference at #RSS2024! 😉

I'll be giving two invited talks at the workshops tomorrow, 7/15 (Mon):
[1] 9:30 - 10:00 am at Koopman Operators in Robotics: sites.google.com/yale.edu/rss-2…
[2] 10:30 - 11:05 am at Structural Priors for

Huy Ha (@haqhuy)'s Twitter Profile Photo

I’ve been training dogs since middle school. It’s about time I train robot dogs too 😛 Introducing UMI on Legs, an approach for scaling manipulation skills on robot dogs 🐶 It can toss, push heavy weights, and make your ~existing~ visuo-motor policies mobile!

Binghao Huang (@binghao_huang)'s Twitter Profile Photo

So proud of my tactile sensor! Touch is essential for enhancing robot capabilities. Feel free to schedule a demo with me to see how tactile sensors can enhance your robot's performance! 🤖✨ #Robotics #TactileSensors

Haozhi Qi (@haozhiq)'s Twitter Profile Photo

When I started my first project on in-hand manipulation, I thought it would be super cool but also quite challenging to make my robot hands spin pens. After almost 2.5 years of effort in this line of research, we have finally succeeded in making our robot hand "spin pens."

Jiawei (Joe) Zhou (@jzhou_jz)'s Twitter Profile Photo

🚀As July winds down, we're just 1 week away from the TTIC Multimodal AI Workshop! This rare gathering features an incredible lineup of keynote speakers from diverse fields: Mohit Bansal, Saining Xie, Ranjay Krishna, Manling Li, Pulkit Agrawal, and Xiaolong Wang. Excited! buff.ly/3LaXVhF

Yunzhu Li (@yunzhuliyz)'s Twitter Profile Photo

Super excited to finally release our work, ReKep, a unified task representation using relational keypoint constraints. 🤖 rekep-robot.github.io

A few key takeaways:
1. Building on the success of VoxPoser, VLM-generated code has proven to be extremely versatile in task

Chen Wang (@chenwang_j)'s Twitter Profile Photo

We found that the relations between keypoints are a powerful way to represent tasks. What’s more exciting is that these keypoint relations can be formulated as constraint satisfaction problems, allowing us to use off-the-shelf optimization solvers to generate complex robot
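To illustrate the formulation (my own minimal example, not the ReKep code), here is one keypoint constraint problem handed to an off-the-shelf solver: move a grasped keypoint directly above a target keypoint while minimizing motion.

```python
import numpy as np
from scipy.optimize import minimize

target_kp = np.array([0.5, 0.2, 0.1])    # keypoint on the target object
start = np.array([0.0, 0.0, 0.3])        # current grasped-keypoint position

def cost(p):                              # prefer small motions
    return np.sum((p - start) ** 2)

constraints = [
    # equality: grasped keypoint aligned with the target in x and y
    {"type": "eq", "fun": lambda p: p[:2] - target_kp[:2]},
    # inequality (>= 0): stay at least 5 cm above the target keypoint
    {"type": "ineq", "fun": lambda p: p[2] - (target_kp[2] + 0.05)},
]

res = minimize(cost, start, constraints=constraints, method="SLSQP")
print(res.x)   # e.g. [0.5, 0.2, 0.3] -- above the target, minimal motion
```

The appeal of this framing is that the task specification (the constraint functions) is decoupled from the solver, so a VLM can write the constraints while a generic optimizer produces the motion.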

Ruohan Zhang (@ruohanzhang76)'s Twitter Profile Photo

I believe in big data for robotics. But in this work with Wenlong Huang, he taught me this important lesson again: the right representation + foundation models + constrained optimization can enable robots to perform very challenging tasks without task-specific training data.