Xiaomeng Xu (@xiaomengxu11) 's Twitter Profile
Xiaomeng Xu

@xiaomengxu11

PhD student in robotics @Stanford | Prev @Tsinghua_Uni

ID: 1547468355249979392

Link: https://xxm19.github.io/ · Joined: 14-07-2022 06:30:04

36 Tweets

562 Followers

215 Following

Calvin Luo (@calvinyluo) 's Twitter Profile Photo

Internet-scale datasets of videos and natural language are a rich training source! But can they be used to facilitate novel downstream robotic behaviors across embodiments and environments? Our new #ICLR2025 paper, Adapt2Act, shows how.

Shuran Song (@songshuran) 's Twitter Profile Photo

This is so cool 🤯! Imagine pairing this robot hardware platform with generative hardware design (like the one from Xiaomeng Xu Huy Ha 👉 dgdm-robot.github.io), we can really get customized hardware for any object or task almost instantly.

Xiaomeng Xu (@xiaomengxu11) 's Twitter Profile Photo

DexUMI exoskeleton makes YOUR hand move like the robot hand, so demonstrations you collect transfer directly to the robot. Zero retargeting! 🔥

Mandi Zhao (@zhaomandi) 's Twitter Profile Photo

How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.

Lukas Ziegler (@lukas_m_ziegler) 's Twitter Profile Photo

It's a 3D printer and a 3D assembly station! 🖨️ The Functgraph, developed at Meiji University, starts as a regular 3D printer but upgrades itself into a mini factory. It can print parts for its own tools, pick them up, clean them, and put them together, all by itself. Think of it

Priya Sundaresan (@priyasun_) 's Twitter Profile Photo

How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8

Yuxin Chen (@thomasyuxinchen) 's Twitter Profile Photo

💡Can an arm-mounted quadrupedal robot perform tasks with both arms and legs? Introducing ReLIC: Reinforcement Learning for Interlimb Coordination, for versatile loco-manipulation in unstructured environments. [1/6] relic-locoman.rai-inst.com

Mandi Zhao (@zhaomandi) 's Twitter Profile Photo

Our lab at Stanford usually does research in AI & robotics, but very occasionally we indulge in being functional alcoholics. Recently we hosted a lab cocktail night and created drinks with research-related puns like 'reviewer#2' and 'make 6 figures', sharing the full recipes

Xiaomeng Xu (@xiaomengxu11) 's Twitter Profile Photo

Perception is inherently active. 🧠👀 With a flexible neck, our robot learns how humans adjust their viewpoint to search, track, and focus—unlocking more capable manipulation. Check out Vision in Action 👇

yisha (@yswhynot) 's Twitter Profile Photo

Enjoying the first day of #RSS2025? Consider coming to our workshop 🤖Robot Hardware-Aware Intelligence on Wed! Robotics: Science and Systems Thank you to everyone who contributed 🙌 We'll have 16 lightning talks and 11 live demos! More info: rss-hardware-intelligence.github.io

Xiaomeng Xu (@xiaomengxu11) 's Twitter Profile Photo

I'll present RoboPanoptes at #RSS2025 tomorrow 6/22 🐍 Spotlight talk: 9:00-10:30am (Bovard Auditorium) Poster: 12:30-2:00pm, poster #31 (Associates Park)

Stanford AI Lab (@stanfordailab) 's Twitter Profile Photo

In Los Angeles for RSS 2025? 🤖 🌴Be sure to check out the great work by students from the Stanford AI Lab! ai.stanford.edu/blog/rss-2025/

Stanford AI Lab (@stanfordailab) 's Twitter Profile Photo

Robot learning has largely focused on standard platforms—but can it embrace robots of all shapes and sizes? In Xiaomeng Xu's latest blog post, we show how data-driven methods bring unconventional robots to life, enabling capabilities that traditional designs and control can't

yisha (@yswhynot) 's Twitter Profile Photo

Missed our RSS workshop? Our recordings are online: youtube.com/@hardware-awar…. All talks were awesome, and we had a very fun panel discussion session 🧐 Huge thanks to our organizers for all the hard work Huy Ha Xiaomeng Xu Zhanyi Sun Yuxiang Ma Xiaolong Wang Mike Tolley

Russ Tedrake (@russtedrake) 's Twitter Profile Photo

TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the

Zhanyi S (@s_zhanyi) 's Twitter Profile Photo

How to prevent behavior cloning policies from drifting OOD on long horizon manipulation tasks? Check out Latent Policy Barrier (LPB), a plug-and-play test-time optimization method that keeps BC policies in-distribution with no extra demo or fine-tuning: project-latentpolicybarrier.github.io
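The core idea the tweet describes, keeping a behavior-cloning policy in-distribution via test-time optimization, can be illustrated with a toy sketch. Note this is only one plausible reading of the idea, not the actual LPB algorithm: the function name, the nearest-neighbor projection in latent space, and the step size are all assumptions for illustration.

```python
import numpy as np

def project_in_distribution(z, demo_latents, step=0.5):
    """Hypothetical sketch: nudge a drifted latent state z part-way
    toward its nearest neighbor among the demonstration latents,
    pulling the policy back toward the training distribution."""
    dists = np.linalg.norm(demo_latents - z, axis=1)  # distance to each demo latent
    nearest = demo_latents[np.argmin(dists)]          # closest in-distribution point
    return z + step * (nearest - z)                   # partial projection toward it

# Toy usage: two demo latents, one drifted (OOD) state
demo_latents = np.array([[0.0, 0.0], [1.0, 1.0]])
z_ood = np.array([3.0, 3.0])
z_corrected = project_in_distribution(z_ood, demo_latents)
# z_corrected lies halfway between z_ood and the nearest demo latent [1, 1]
```

A real system would apply such a correction in the policy's learned latent space at every control step; this sketch just shows the plug-and-play, no-extra-demos flavor of the idea.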

Qiayuan Liao (@qiayuanliao) 's Twitter Profile Photo

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!

Kaizhe Hu (@hkz222) 's Twitter Profile Photo

How do we learn motor skills directly in the real world? Think about learning to ride a bike—parents might be there to give you hands-on guidance.🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.