Chaoyi Pan (@chaoyipan) 's Twitter Profile
Chaoyi Pan

@chaoyipan

Ph.D. @CarnegieMellon @CMU_ECE @CMU_Robotics @LeCARLab
| @Tsinghua_Uni alum | Exploring the intersections of Learning & Control

ID: 1551031947350003712

Website: http://panchaoyi.com | Joined: 24-07-2022 02:30:24

61 Tweets

350 Followers

240 Following

Guanya Shi (@guanyashi) 's Twitter Profile Photo

Sim2Real RL works by scaling up offline computation in training time. How about scaling up online computation using sampling-based MPC in test time? DIAL-MPC is a training-free sampling-based MPC method that adopts the key idea from diffusion to gradually refine solutions via
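The idea described above — a training-free, sampling-based MPC that anneals noise diffusion-style to gradually refine a control sequence at test time — can be sketched in a few lines. This is a minimal illustrative toy on a 1-D double integrator, not the DIAL-MPC implementation; the function names, the MPPI-style weighted update, and the halving noise schedule are all assumptions for the sketch:

```python
import numpy as np

def rollout_cost(x0, controls, dt=0.1):
    """Cost of a control sequence on a 1-D double integrator
    (state = [position, velocity]); the task is to drive position to 0."""
    pos, vel = x0
    cost = 0.0
    for u in controls:
        vel += u * dt
        pos += vel * dt
        cost += pos**2 + 0.01 * u**2
    return cost

def sample_mpc_annealed(x0, horizon=20, n_samples=256, n_iters=8,
                        sigma0=2.0, seed=0):
    """Sampling-based MPC with a diffusion-like annealed noise schedule:
    start with large exploration noise, halve it each iteration, and
    re-center the nominal plan on a cost-weighted average of samples."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(horizon)            # current nominal control sequence
    for i in range(n_iters):
        sigma = sigma0 * (0.5 ** i)     # annealed noise level
        noise = rng.normal(0.0, sigma, size=(n_samples, horizon))
        cands = mean + noise            # perturbed candidate plans
        costs = np.array([rollout_cost(x0, c) for c in cands])
        w = np.exp(-(costs - costs.min()))          # softmax-style weights
        mean = (w[:, None] * cands).sum(0) / w.sum()
    return mean

plan = sample_mpc_annealed(x0=(1.0, 0.0))
```

Because all the computation happens at deployment time, this kind of solver needs no training — the annealing schedule plays the role that the noise schedule plays in diffusion models, coarse exploration first, fine refinement last.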

Chaoyi Pan (@chaoyipan) 's Twitter Profile Photo

Fantastic work! Using partial reference tracking makes total sense since not all tasks require full-body tracking control. The visualizations look amazing too!

Chaoyi Pan (@chaoyipan) 's Twitter Profile Photo

🚀 Our DIAL-MPC sim2sim + sim2real pipeline is live! We kept dependencies minimal and the stack fully in Python. Check it out, and don’t miss our presentation at the #CoRL2024 WCBM workshop! 😉 lecar-lab.github.io/dial-mpc/

zeji yi (@zejiyi) 's Twitter Profile Photo

I'll be in Vancouver for #NeurIPS2024 presenting Model-Based Diffusion (MBD) lecar-lab.github.io/mbd/ a diffusion-based training-free trajectory optimization method. I'm eager to chat about the latest in control and learning theory - please DM me if interested!

Max Simchowitz (@max_simchowitz) 's Twitter Profile Photo

Hey Everyone!! I will be giving a lecture at the RL Theory Virtual Seminar tomorrow, on my new paper about the “Pitfalls of Imitation Learning" in continuous action spaces. 🧵 below; please read because the time is somewhat TBD..... 🧐

Yuanhang Zhang (@yuanhang__zhang) 's Twitter Profile Photo

🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:

Tony Tao @ RSS 🤖 (@_tonytao_) 's Twitter Profile Photo

Training robots for the open world needs diverse data. But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + robot co-training pipeline that unlocks generalization 🧵👇

Guanya Shi (@guanyashi) 's Twitter Profile Photo

On my way ✈️ to ATL for IEEE ICRA! LeCAR Lab at CMU will present 8 conference papers (including DIAL-MPC as the Best Paper Finalist) and one RA-L paper. Details: lecar-lab.github.io Hope to meet old & new friends and chat about building generalist 🤖 with agility 🚀

Chaoyi Pan (@chaoyipan) 's Twitter Profile Photo

Check out our #ICRA2025 Best Paper Finalist paper DIAL-MPC! We’re presenting at the Wednesday award session — 8:30am, Room 302! I’ll be around all week at ICRA — feel free to DM me to connect! See you there! ✨🤖📍 lecar-lab.github.io/dial-mpc/

Nikhil Sobanbabu (@nikhilsoban353) 's Twitter Profile Photo

🦿How to identify physical parameters of legged robots while collecting informative data to reduce the Sim2Real gap? 🤖 Meet SPI-Active: Sampling-Based System Identification with Active Exploration for Legged Robot Sim2Real Learning Website: lecar-lab.github.io/spi-active_/ Details 👇:
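The core of sampling-based system identification can be shown on a toy problem: sample candidate physical parameters, simulate each one, and keep the parameter whose simulated rollout best matches the real trajectory. This is a minimal sketch, not the SPI-Active method — the 1-D point-mass model, grid sampling, and mean-squared-error score are assumptions, and the active-exploration component (choosing controls that make parameters identifiable) is omitted:

```python
import numpy as np

def simulate(mass, x0, controls, dt=0.1):
    """Toy 1-D point-mass simulator parameterized by mass."""
    pos, vel = x0
    traj = []
    for u in controls:
        vel += (u / mass) * dt
        pos += vel * dt
        traj.append(pos)
    return np.array(traj)

def identify_mass(real_traj, x0, controls, candidates):
    """Sampling-based system identification: score each sampled
    parameter by how closely its simulated rollout matches the real
    trajectory, and return the best-scoring candidate."""
    errors = [np.mean((simulate(m, x0, controls) - real_traj) ** 2)
              for m in candidates]
    return candidates[int(np.argmin(errors))]

# Pretend the "real" robot has unknown mass 2.0 and we logged its
# response to random excitation controls (hypothetical data).
rng = np.random.default_rng(0)
controls = rng.uniform(-1.0, 1.0, 50)
real = simulate(2.0, (0.0, 0.0), controls)
est = identify_mass(real, (0.0, 0.0), controls,
                    candidates=np.linspace(0.5, 5.0, 200))
```

In the real setting the gain comes from the exploration half: controls are chosen to excite the dynamics so that different parameter values produce visibly different trajectories, which makes the matching step above well-conditioned.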

Akash Sharma (@akashshrm02) 's Twitter Profile Photo

Robots need touch in human-like hands to reach the goal of general manipulation. However, approaches today either don’t use tactile sensing or rely on task-specific architectures per tactile task. Can 1 model improve many tactile tasks? 🌟Introducing Sparsh-skin: tinyurl.com/y935wz5c 1/6

Yitang Li (@li_yitang) 's Twitter Profile Photo

🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control Project: lecar-lab.github.io/SoFTA/ See more details below👇

Haoyu Xiong (@haoyu_xiong_) 's Twitter Profile Photo

Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust

Haoru Xue (@haoruxue) 's Twitter Profile Photo

🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/