Ajay Mandlekar (@ajaymandlekar)'s Twitter Profile
Ajay Mandlekar

@ajaymandlekar

NVIDIA AI Research Scientist | EE PhD @Stanford | Teaching 🤖 to imitate humans.

ID: 1192500851492831237

Link: https://ai.stanford.edu/~amandlek/ | Joined: 07-11-2019 17:55:46

207 Tweets

2.2K Followers

369 Following

Ajay Mandlekar (@ajaymandlekar):

Data collection for humanoids is painful. Can we use simulation to automate it? Introducing DexMimicGen, the newest iteration of the MimicGen data generation system! DexMimicGen trains near-perfect agents for a wide range of challenging bimanual dexterous tasks.
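
A minimal numpy sketch of the object-centric retargeting at the heart of the MimicGen family, as described in the papers; the function name and 4x4 pose convention here are mine, not the released API:

```python
import numpy as np

def retarget_segment(src_eef_poses, src_obj_pose, new_obj_pose):
    """Retarget one source-demo segment to a new object pose.

    src_eef_poses: (T, 4, 4) end-effector poses in the world frame.
    src_obj_pose / new_obj_pose: (4, 4) pose of the reference object
    in the source demo and in the new scene, respectively.
    """
    # Transform that carries the old object frame onto the new one.
    delta = new_obj_pose @ np.linalg.inv(src_obj_pose)
    # Applying it to every end-effector pose keeps the motion object-relative.
    return np.einsum("ij,tjk->tik", delta, src_eef_poses)
```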

Ajay Mandlekar (@ajaymandlekar):

A very cool imitation learning project that combines efficient robot-free data collection, smart robot hardware design, and jointly co-training on human and robot data. Congrats to the team!

Kyoung Whan Choe (@kywch500):

I wanted to test LeRobot on complex tasks without a physical robot arm. MimicGen has 26K+ demonstrations across 12 tasks. I created mg2hfbot to convert datasets, train LeRobot & RoboMimic policies, and evaluate them. Repo: github.com/kywch/mg2hfbot Thanks to Ajay Mandlekar,
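
For anyone doing a similar conversion: robomimic/MimicGen demos ship as HDF5 files with a data/demo_i/{obs, actions} layout (per the robomimic docs). A hedged reading sketch; the file path below is a placeholder:

```python
import h5py
import numpy as np

def iter_demos(path):
    """Yield (name, obs dict, actions) per demo from a robomimic-style HDF5.

    Assumes the documented data/demo_i/{obs, actions} layout; check your
    file's keys before relying on this.
    """
    with h5py.File(path, "r") as f:
        for demo_name in f["data"]:
            demo = f["data"][demo_name]
            actions = np.array(demo["actions"])                 # (T, action_dim)
            obs = {k: np.array(v) for k, v in demo["obs"].items()}
            yield demo_name, obs, actions

# Placeholder path, not an actual MimicGen release file name.
for name, obs, actions in iter_demos("datasets/square/demo.hdf5"):
    print(name, actions.shape, list(obs)[:3])
    break
```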

Joel Jang (@jang_yoel):

Excited to share that 𝐋𝐀𝐏𝐀 has won the Best Paper Award at the CoRL 2024 Language and Robot Learning workshop, selected among 75 accepted papers! Both Seonghyeon Ye and I come from NLP backgrounds, where everything is built around tokenization. Drawing inspiration from
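
LAPA itself learns latent action tokens via a quantization objective rather than binning raw actions, but the simplest way to see the "tokenize actions like language" analogy is per-dimension discretization. An illustrative sketch, not LAPA's method:

```python
import numpy as np

def tokenize_actions(actions, low, high, n_bins=256):
    """Map continuous actions (T, D) to integer tokens in [0, n_bins)."""
    # Normalize each dimension to [0, 1], then quantize into discrete bins.
    norm = (np.clip(actions, low, high) - low) / (high - low)
    return np.round(norm * (n_bins - 1)).astype(np.int64)

# e.g. tokens = tokenize_actions(a, low=-1.0, high=1.0)
```
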
Danfei Xu (@danfei_xu):

I gave an Early Career Keynote at CoRL 2024 on Robot Learning from Embodied Human Data. Recording: youtube.com/watch?v=H-a748… Slides: faculty.cc.gatech.edu/~danfei/corl24… Extended summary thread 1/N

Ryan Hoque (@ryan_hoque):

🚨 New research from my team at Apple - real-time augmented reality robot feedback with just your hands + Vision Pro! Paper: arxiv.org/abs/2412.10631 Short thread below -

Siddhant Haldar (@haldar_siddhant):

The most frustrating part of imitation learning is collecting huge amounts of teleop data. But why teleop robots when robots can learn by watching us? Introducing Point Policy, a novel framework that enables robots to learn from human videos without any teleop, sim2real, or RL.

Ajay Mandlekar (@ajaymandlekar):

Excited to announce that the DexMimicGen simulation environments, datasets, and code to reproduce policy learning results have been released! github.com/NVlabs/dexmimi…

Bowen Wen (@bowenwen_me):

📢 Time to upgrade your depth camera! Introducing **FoundationStereo**, a foundation model for zero-shot stereo depth estimation (accepted to CVPR 2025 with full scores) [1/n] Code: github.com/NVlabs/Foundat… Website: nvlabs.github.io/FoundationSter… Paper: arxiv.org/abs/2501.09898
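
The downstream conversion such a model feeds into is the standard pinhole-stereo relation depth = f * B / disparity. A small reference sketch of that relation, unrelated to the FoundationStereo codebase itself:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Standard pinhole stereo: depth = focal * baseline / disparity.

    disparity: (H, W) disparity map in pixels (e.g. from a stereo model).
    focal_px:  focal length in pixels; baseline_m: camera baseline in meters.
    """
    # Clamp disparity to avoid dividing by zero on untextured regions.
    return focal_px * baseline_m / np.maximum(disparity, eps)
```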

Ajay Mandlekar (@ajaymandlekar):

Synthetic data generation tools like MimicGen create large sim datasets with ease, but using them in the real world is difficult due to the large sim-to-real gap. Our new work uses simple co-training to unlock the potential of synthetic sim data for real-world manipulation!
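
A hedged sketch of what "simple co-training" typically means in this setting: every optimizer step mixes a sim batch and a real batch with a fixed weight. The policy.compute_loss interface and the 50/50 default below are my assumptions, not the paper's settings:

```python
import torch

def cotrain_step(policy, optimizer, sim_batch, real_batch, sim_weight=0.5):
    """One gradient step over a weighted mix of sim and real data."""
    # Assumed interface: policy.compute_loss(batch) -> scalar torch tensor.
    loss = (sim_weight * policy.compute_loss(sim_batch)
            + (1.0 - sim_weight) * policy.compute_loss(real_batch))
    optimizer.zero_grad()
    loss.backward()   # gradients flow from both data sources
    optimizer.step()
    return float(loss.detach())
```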

Ajay Mandlekar (@ajaymandlekar):

Code for AHA has been released - we hope this makes using VLMs for failure reasoning in robotics more accessible! Code: github.com/NVlabs/AHA Website: aha-vlm.github.io Paper: arxiv.org/abs/2410.00371

Ajay Mandlekar (@ajaymandlekar):

Excited to share DexMachina, our new algorithm that can learn dexterous manipulation across different robot hands all from just a single human demonstration. Great work led by Mandi Zhao during her internship in our group!

Siddhant Haldar (@haldar_siddhant):

Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵

Russ Tedrake (@russtedrake):

TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the

Jiafei Duan (@djiafei):

1/ 🚀 Announcing #GenPriors — the CoRL 2025 workshop on Generalizable Priors for Robot Manipulation! 📍 Seoul, Korea 📅 Sat 27 Sep 2025. Mark your calendars & join us for a full day of discussion on building generalist robot policies, those capable of performing