Haoru Xue (@haoruxue)'s Twitter Profile
Haoru Xue

@haoruxue

PhD @berkeley_ai | prev. @CMU_Robotics @LeCARLab | Robot Learning, Humanoids

ID: 1741360754152927232

Website: https://haoruxue.github.io/ · Joined: 31-12-2023 07:29:00

172 Tweets

1.1K Followers

287 Following

Haoru Xue (@haoruxue):

Congrats on the amazing work, Tairan He @ICRA! The long-standing myth of Sim2Real is being demystified a little more every day by works like ASAP. 2025 might be the year we finally crack it.

Haoru Xue (@haoruxue):


How do I see the Figure and 1X demos from a data perspective?

My first blog: scaling data collection for robotics foundation models.

Teleop is not the end game.

I bet on pre-training via 𝙬𝙤𝙧𝙡𝙙 𝙢𝙤𝙙𝙚𝙡 𝙖𝙣𝙙 𝙜𝙚𝙣𝙚𝙧𝙖𝙩𝙞𝙫𝙚 𝙨𝙞𝙢.

Here is why: haoruxue.github.io/data-scaling-l…
Toru (@toruo_o):

Sim2Real RL for Vision-Based Dexterous Manipulation on Humanoids toruowo.github.io/recipe/

TL;DR: we train a humanoid robot with two multifingered hands to perform a range of dexterous manipulation tasks, achieving robust generalization and high performance without human demonstrations :D

Benjamin Bolte (@benjamin_bolte):

This robot will rapidly become more capable over the next six months and it will seem to observers like we are accelerating, but in fact, most of the hard, risky work is done now. We've got software problems instead of hardware problems.

Baifeng (@baifeng_shi):


Next-gen vision pre-trained models shouldn’t be short-sighted.

Humans can easily perceive 10K x 10K resolution. But today’s top vision models—like SigLIP and DINOv2—are still pre-trained at merely hundreds by hundreds of pixels, bottlenecking their real-world usage.

Today, we
Xindi Wu (@cindy_x_wu):


Introducing COMPACT: COMPositional Atomic-to-complex Visual Capability Tuning, a data-efficient approach to improve multimodal models on complex visual tasks without scaling data volume. 📦

arxiv.org/abs/2504.21850

1/10
Tong Zhang (@tongzha22057330):

🤖 Can a humanoid robot hold extreme single-leg poses like Bruce Lee's Kick or the Swallow Balance? 🤸

💥 YES. Meet HuB: Learning Extreme Humanoid Balance

🔗 Project website: hub-robot.github.io

Haoru Xue (@haoruxue):

🏆 𝗕𝗲𝘀𝘁 𝗣𝗮𝗽𝗲𝗿 𝗙𝗶𝗻𝗮𝗹𝗶𝘀𝘁 for DIAL-MPC at #ICRA2025! Catch us at the award session: Wednesday 8:30am, Room 302.

We’ve also released a new demo: ball-spinning on a fingertip. Enjoy 👉 github.com/LeCAR-Lab/dial…

Haoru Xue (@haoruxue):

Career Update: I’m interning at NVIDIA GEAR Lab supervised by Jim Fan and Yuke Zhu. Looking forward to frontier robot learning research with the magnificent team!

Yitang Li (@li_yitang):

🤖 Can a humanoid robot carry a full cup of beer 🍺 without spilling it while walking? Hold my beer!

Introducing Hold My Beer 🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control

Project: lecar-lab.github.io/SoFTA/

See more details below 👇