Sirui Chen (@eric_srchen)'s Twitter Profile
Sirui Chen

@eric_srchen

PhD student in CS at Stanford; previously undergrad at HKU. Interested in robotics.

ID: 1702202822576766976

Link: http://ericcsr.github.io · Joined: 14-09-2023 06:09:25

70 Tweets

384 Followers

536 Following

Tairan He (@tairanhe99)'s Twitter Profile Photo

🚀 ASAP is now FULLY open-source! 🚀 ✅ Humanoid RL motion tracking & delta actions ✅ Motion retargeting to any humanoid ✅ ASAP Benchmark motions + pretrained policies ✅ Sim2Sim & Sim2Real ready — run ASAP in sim or on your G1 robot! 🔗 github.com/LeCAR-Lab/ASAP

Qiayuan Liao (@qiayuanliao)'s Twitter Profile Photo

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!

Kaizhe Hu (@hkz222)'s Twitter Profile Photo

How do we learn motor skills directly in the real world? Think about learning to ride a bike—parents might be there to give you hands-on guidance.🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.

Homanga Bharadhwaj (@mangahomanga)'s Twitter Profile Photo

Excellent work on whole-body reaching, with an intuitive modular approach! Also, great to see my former labmate Yufei Ye collecting data in the wild with Aria glasses 🙂

Robert Scoble (@scobleizer)'s Twitter Profile Photo

Every second, eight new posts hit my screens — so many that we can't think about what it all means. What does it mean? Robot learning is speeding up. Stay alive; we are about to see wondrous things.

Yufei Ye (@yufei_ye)'s Twitter Profile Photo

Delivering the robot close enough to a target is an important yet often overlooked prerequisite for any meaningful robot interaction. It requires robust locomotion, navigation, and reaching all at once. HeAD is an automatic vision-based system that handles all of them.

Haochen Shi (@haochenshi74)'s Twitter Profile Photo

ToddlerBot has been accepted to CoRL, and we will bring Toddy (version 2.0) to Seoul for a trip. Come say hi to Toddy if you're around 😁! Our arXiv paper has also been updated with more technical details in the appendix: arxiv.org/abs/2502.00893

Zhi Su (@zhisu22)'s Twitter Profile Photo

🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.

Zhengyi “Zen” Luo (@zhengyiluo)'s Twitter Profile Photo

If you missed Yuke Zhu’s talk at #CoRL2025, here is the link: youtube.com/watch?v=rh2oxU… 👇 A demo we at GEAR have been cranking on: fully autonomous, human-like locomanipulation via language + vision input. Uncut. The sleepless nights getting the humanoid to move naturally pay off 🥹

Xie Zhaoming (@zhaomingxie)'s Twitter Profile Photo

Spot is playing Ping Pong! Spin is a crucial part of the game, but few robots can handle it. We show receiving and generating significant spin using MPC. Collaboration with David Nguyen and Zulfiqar Zaidi! Video: youtu.be/3GrnkxOeC14?si…. Paper: arxiv.org/pdf/2510.08754.

Wenli Xiao (@_wenlixiao)'s Twitter Profile Photo

What if robots could improve themselves by learning from their own failures in the real world? Introducing 𝗣𝗟𝗗 (𝗣𝗿𝗼𝗯𝗲, 𝗟𝗲𝗮𝗿𝗻, 𝗗𝗶𝘀𝘁𝗶𝗹𝗹) — a recipe that enables Vision-Language-Action (VLA) models to self-improve on high-precision manipulation tasks.

Guanya Shi (@guanyashi)'s Twitter Profile Photo

It was a great pleasure to host Yuke Zhu for a CMU Robotics Institute seminar talk! Link (including a very insightful 25-min Q&A session): youtu.be/49LnlfM9DBU?si… Definitely check it out if you're interested in how to build generalist humanoids, robot learning, and the data pyramid!