Weiyu Liu (@weiyu_liu_)'s Twitter Profile
Weiyu Liu

@weiyu_liu_

Postdoc @Stanford. I work on semantic representations for robots. Previously PhD @GTrobotics

ID: 1451625932171665409

Link: http://weiyuliu.com/
Joined: 22-10-2021 19:06:20

62 Tweets

973 Followers

460 Following

Weiyu Liu (@weiyu_liu_):

Can robots choose actions (position objects by pushing) and precisely execute them (decide the push trajectories) to fulfill abstract goals? We use a relational model that considers actions and the resulting dynamics between objects and environments. Check out Yixuan's 🧵!

Karmesh Yadav (@karmeshyadav):

ICRA workshop dates are out. The first workshop on Vision-Language Models for Navigation and Manipulation will be held on 17th May in Yokohama, Japan. We are currently accepting submissions on OpenReview. openreview.net/group?id=IEEE.…

Chen Wang (@chenwang_j):

Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands. Everything is open-sourced.

IEEE Transactions on Robotics (T-RO) (@ieeetro):

A T-RO paper by researchers from the University of Utah Robotics Center, NVIDIA, HRL Laboratories, and Georgia Tech describes a #robot that can rearrange novel objects in diverse environments with logical goals by flexibly combining primitive actions, including pick, place, and push. ieeexplore.ieee.org/document/10418…

Yunfan Jiang (@yunfanjiang):

Does your sim2real robot falter at critical moments 🤯? Want to help but unsure how, since all you can do is tune rewards in sim 😮‍💨? Introducing 𝐓𝐑𝐀𝐍𝐒𝐈𝐂 for manipulation sim2real. Robots trained in sim can accomplish complex tasks in the real world, such as furniture assembly. 🤿🧵

Wenlong Huang (@wenlong_huang):

What structural task representation enables multi-stage, in-the-wild, bimanual, reactive manipulation? Introducing ReKep: LVM to label keypoints & VLM to write keypoint-based constraints, solve w/ optimization for diverse tasks, w/o task-specific training or env models. 🧵👇
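
A minimal, hypothetical sketch of the idea in this tweet (not the authors' ReKep implementation): a vision model labels keypoints, a VLM writes a cost over those keypoints, and an off-the-shelf optimizer solves it. The keypoint names and values below are made up for illustration.

```python
# Toy illustration of keypoint-based constraints solved with optimization.
# Not the ReKep codebase; the keypoints and the constraint are hypothetical.
import numpy as np
from scipy.optimize import minimize

# Keypoints a vision model might label: gripper tip and a mug handle (x, y, z in meters).
keypoints = {
    "gripper": np.array([0.30, 0.10, 0.25]),
    "mug_handle": np.array([0.55, -0.05, 0.12]),
}

# A constraint a VLM might write for "move the gripper to the mug handle":
# the cost is the distance between the gripper keypoint and the handle keypoint.
def grasp_cost(gripper_pos):
    return np.linalg.norm(gripper_pos - keypoints["mug_handle"])

# Solve for a gripper position that minimizes the constraint cost.
result = minimize(grasp_cost, x0=keypoints["gripper"])
print("target gripper position:", result.x)  # converges to the handle keypoint
```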

Stephen Tian (@stephentian_):

Learned visuomotor robot policies are sensitive to observation viewpoint shifts, which happen all the time. Can visual priors from large-scale data help? Introducing VISTA: using zero-shot novel view synthesis models for view-robust policy learning! #CoRL2024 🧵👇

Tianyuan Dai (@rogerdai1217):

Why hand-engineer digital twins when digital cousins are free? Check out ACDC: Automated Creation of Digital Cousins 👭 for Robust Policy Learning, accepted at @corl2024! 🎉 📸 Single image -> 🏡 Interactive scene ⏩ Fully automatic (no annotations needed!) 🦾 Robot policies

Yunzhi Zhang (@zhang_yunzhi):

Accurate and controllable scene generation has been difficult with natural language alone. You instead need a language for scenes. Introducing the Scene Language — a visual representation for high-quality 3D/4D generation by integrating programs, words, and embeddings — 🧵(1/6)

Weiyu Liu (@weiyu_liu_):

What can we learn from demonstrations of long-horizon tasks? I am presenting our #CoRL2024 paper "Learning Compositional Behaviors from Demonstration and Language" today, showing we can learn a library of behaviors that can be composed to solve new tasks. blade-bot.github.io

Weiyu Liu (@weiyu_liu_):

To build complex 3D structures with many parts, like IKEA furniture, we usually follow instructions and how-to videos. We introduce a new dataset with dense annotations on internet videos to study the grounding of video instructions in 3D. Check out the thread for details!

Keshigeyan Chandrasegaran (@keshigeyan):

1/ [NeurIPS D&B] Introducing HourVideo: A benchmark for hour-long video-language understanding!🚀 500 egocentric videos, 18 total tasks & ~13k questions! Performance: GPT-4➡️25.7% Gemini 1.5 Pro➡️37.3% Humans➡️85.0% We highlight a significant gap in multimodal capabilities🧵👇

Yunfan Jiang (@yunfanjiang):

🤖 Ever wondered what robots need to truly help humans around the house? 🏡 Introducing 𝗕𝗘𝗛𝗔𝗩𝗜𝗢𝗥 𝗥𝗼𝗯𝗼𝘁 𝗦𝘂𝗶𝘁𝗲 (𝗕𝗥𝗦)—a comprehensive framework for mastering mobile whole-body manipulation across diverse household tasks! 🧹🫧 From taking out the trash to

Weiyu Liu (@weiyu_liu_):

How to extract spatial knowledge from VLMs to generate 3D layouts? We combine spatial relations, visual markers, and code into a unified representation that is both interpretable by VLMs and flexible for generating diverse scenes. Check out the detailed post by Fan-Yun Sun!

Manling Li (@manlingli_):

Today is the day! Welcome to join the #CVPR2025 workshop on Foundation Models meet Embodied Agents!

🗓️ Jun 11
📍 Room 214
🌐 …models-meet-embodied-agents.github.io/cvpr2025/

Looking forward to learning insights from wonderful speakers Jitendra Malik, Ranjay Krishna, Katerina Fragkiadaki, Shuang Li, and Yilun Du.

Neil Nie (@neil_nie_):

Thank you for sharing our work, Y Combinator! Please check out vernerobotics.com to schedule a free pilot and see how our robots can transform your business! - Thank you to (co-founder) Aditya, my research advisors, colleagues, friends, and family for your support!

Weiyu Liu (@weiyu_liu_):

Congratulations to Neil Nie and the team on the launch! Neil has worked with us at Stanford over the past two years on learning generalizable long-horizon manipulation from as few as five demonstrations. I’m excited to see where his next adventure takes him and how our

Weiyu Liu (@weiyu_liu_):

I’m at #CoRL2025 in Seoul this week! I’m looking for students to join my lab next year, and also for folks excited to build robotic foundation models at a startup. If you’re into generalization, planning and reasoning, or robots that use language, let's chat!

Weiyu Liu (@weiyu_liu_):

To complete a wide range of household tasks, such as cleaning after a meal, robots must actively look 👀 at task-relevant objects that change across stages, manipulate 🙌 with two arms for both coordinated and independent actions, and move 👣 intelligently to facilitate