Arjun Gupta @ RSS 2025 (@arjun__gupta)'s Twitter Profile
Arjun Gupta @ RSS 2025

@arjun__gupta

PhD Student at UIUC

ID: 1279636966548504577

Link: https://arjung128.github.io
Joined: 05-07-2020 04:43:30

71 Tweets

202 Followers

683 Following

Haonan Chen (@haonanchen_):

How can we train robot policies without any robot data—just using two-view videos of humans manipulating tools? Check out our new paper: "Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning" Honored to be a Best Paper Finalist at the
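
A minimal sketch of the two-view recipe the tweet hints at: triangulate tracked tool keypoints from two calibrated cameras so the tool's 3D trajectory, rather than any robot state, becomes the action label for imitation learning. All helper names, shapes, and the DLT formulation below are illustrative assumptions, not the paper's actual code.

import numpy as np

def triangulate(p1, p2, P1, P2):
    """Linear (DLT) triangulation of one point seen in two views.
    p1, p2: pixel coordinates (2,); P1, P2: 3x4 projection matrices."""
    A = np.stack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # 3D point in world coordinates

def tool_keypoints_3d(kps_view1, kps_view2, P1, P2):
    """Lift per-frame 2D tool keypoints from both views to 3D.
    kps_view*: (T, K, 2) arrays of tracked keypoints per frame."""
    return np.array([[triangulate(a, b, P1, P2) for a, b in zip(f1, f2)]
                     for f1, f2 in zip(kps_view1, kps_view2)])

# The resulting (T, K, 3) tool trajectory is embodiment-agnostic: a robot
# holding the same tool can reproduce it through its own inverse
# kinematics, which is what lets human videos supervise a robot policy.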

Mustafa Mukadam (@mukadammh):

Sparsh-skin, our next iteration of general pretrained touch representations. Skin-like tactile sensing is catching up to the prominent vision-based sensors with the explosion of new dexterous hands. A crucial step in leveraging full-hand sensing; work led by Akash Sharma 🧵👇

Yifan Zhu (@yifanzhu_):

Our paper, "One-Shot Real-to-Sim via End-to-End Differentiable Simulation and Rendering", was recently published at IEEE RA-L. Our method turns a single RGB-D video of a robot interacting with the environment, along with the tactile measurements, into a generalizable world model.

Mustafa Mukadam (@mukadammh):

This was a key feature in enabling DexterityGen, our teleop that can support tasks like using a screwdriver. Led by Zhao-Heng Yin; now open source.

Hello Robot (@hellorobotinc):

Soaking up the sun at the Robotics: Science and Systems conference in Los Angeles this weekend? Stop by the Hello Robot booth to say hi and get a hands-on look at Stretch! Hope to see you there 😎 roboticsconference.org

Nima Fazeli (@nimafazeli7):

🚀 #RSS2025 sneak peek! We teach robots to precisely shimmy objects with fingertip micro-vibrations: no regrasp, no fixtures. 🎶⚙️ Watch Vib2Move in action 👇 vib2move.github.io #robotics #dexterousManipulation
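
One plausible reading of "shimmy with micro-vibrations", sketched under assumptions: superimpose a small high-frequency oscillation on the fingertip command so the effective friction drops and the object slides within the grasp; switching the vibration off re-engages static friction and pins it. The amplitude and frequency values below are made up for illustration, not taken from Vib2Move.

import numpy as np

def fingertip_command(t, base_pos, amp=0.2e-3, freq=200.0, vibrate=True):
    """Fingertip position target: base trajectory plus an optional
    micro-vibration (0.2 mm at 200 Hz here, illustrative values only)."""
    wiggle = amp * np.sin(2 * np.pi * freq * t) if vibrate else 0.0
    return base_pos + wiggle

t = np.linspace(0.0, 0.5, 5001)            # 0.5 s sampled at 10 kHz
slide = np.linspace(0.0, 0.01, t.size)     # slow 1 cm in-grasp slide
cmd = fingertip_command(t, slide)          # stream to the finger servo
# With vibrate=False the same base trajectory would drag the object
# rigidly instead of letting it slip to a new in-hand pose.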

Mahi Shafiullah 🏠🤖 (@notmahi):

Workshop on Mobile Manipulation in #RSS2025 kicking off with a talk from Jeannette Bohg! Come by EEB 132 if you’re here in person, or join us on Zoom (link on the website)

Arjun Gupta @ RSS 2025 (@arjun__gupta):

How can we build mobile manipulation systems that generalize to novel objects and environments? Come check out MOSART at #RSS2025! Paper: arxiv.org/abs/2402.17767 Project webpage: arjung128.github.io/opening-articu… Code: github.com/arjung128/stre…

Shivansh Patel (@shivanshpatel35):

🚀 Introducing RIGVid: Robots Imitating Generated Videos! Robots can now perform complex tasks—pouring, wiping, mixing—just by imitating generated videos, purely zero-shot! No teleop. No OpenX/DROID/Ego4D. No videos of human demonstrations. Only AI generated video demos 🧵👇
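
At its core, the zero-shot loop the tweet describes comes down to retargeting an object trajectory tracked in the generated video onto the robot's gripper. A hedged numpy sketch of just that retargeting step, with a synthetic trajectory standing in for a real video model and pose tracker:

import numpy as np

def retarget_to_gripper(obj_poses, grasp_in_obj):
    """Given tracked object poses T_world_obj (T, 4, 4) and a fixed
    grasp transform T_obj_grasp, return end-effector targets."""
    return np.einsum('tij,jk->tik', obj_poses, grasp_in_obj)

# Toy trajectory: the object translating 10 cm along +x over 50 frames,
# as a pose tracker might report from the generated video.
T = np.tile(np.eye(4), (50, 1, 1))
T[:, 0, 3] = np.linspace(0.0, 0.1, 50)
grasp = np.eye(4)
grasp[2, 3] = 0.05                       # grasp 5 cm above object origin
ee_targets = retarget_to_gripper(T, grasp)  # feed to an IK controller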

Hello Robot (@hellorobotinc):

How do you build a robot that can open unfamiliar objects in new places? This study put mobile manipulation systems through 100+ real-world tests and found that perception, not precision, is the real challenge. 🤖 ▶️ youtube.com/watch?v=QcbMnE… 📑 arjung128.github.io/opening-articu…

Haonan Chen (@haonanchen_):

Don’t miss out on our #CoRL2025 paper 👉 tool-as-interface.github.io. "Tool-as-Interface: Learning Robot Policies from Observing Human Tool Use." Robots learn robust and generalizable manipulation skills directly from human tool-use videos, bridging the embodiment gap without

Mustafa Mukadam (@mukadammh):

Bored of working on toy tasks in the lab? We are solving robot manipulation at massive real-world scale with Vulcan at Amazon. I am at #CoRL2025 and my team is looking for PhD research interns, postdocs, and scientists: aboutamazon.com/news/operation…

Harsh Gupta (@hgupt3):

💾 Data from across the country. 🚁 No access to the drone. 🤖 Still works, zero-shot. UMI-on-Air makes large-scale data collection → real-world deployment possible.

Haonan Chen (@haonanchen_):

What if robots could decide when to see and when to feel, like humans do? We built a system that lets them. Multi-Modal Policy Consensus learns to balance vision 👁️ and touch ✋. 🌐 Project: policyconsensus.github.io 1/N
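
A hedged sketch of one common form such a system could take (not necessarily the paper's): each modality proposes an action and a learned gate blends the proposals, so the policy can lean on touch when vision is occluded and vice versa. All shapes and the gating rule below are assumptions for illustration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def consensus_action(z_vision, z_touch, heads, gate_W):
    """heads: dict of per-modality linear policy heads; gate_W maps the
    concatenated features to one logit per modality."""
    a_v = heads['vision'] @ z_vision          # vision's proposed action
    a_t = heads['touch'] @ z_touch            # touch's proposed action
    w = softmax(gate_W @ np.concatenate([z_vision, z_touch]))
    return w[0] * a_v + w[1] * a_t            # learned soft consensus

rng = np.random.default_rng(0)
heads = {'vision': rng.standard_normal((7, 64)),   # 7-DoF action
         'touch': rng.standard_normal((7, 16))}
gate_W = rng.standard_normal((2, 80))
action = consensus_action(rng.standard_normal(64),
                          rng.standard_normal(16), heads, gate_W)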

Pranay Thangeda (@pthangeda_):

I rarely get to share the specific embodiments I work with every day, but I'm thrilled to finally see this one go public. Meet Amazon's newest robotic system: Blue Jay!

Chengshu Li (@chengshuericli):

We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoped robot trajectory into 1000s of generated trajectories automatically. 🚀 Website: momagen.github.io arXiv: arxiv.org/abs/2510.18316
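
A hedged sketch of the one-demo-to-many idea: express the demonstrated end-effector segment in the object's frame, then re-anchor it to randomized object placements. A full system would also plan collision-free motions between segments and check reachability for the mobile base; none of that is shown, and nothing here is MoMaGen's actual code.

import numpy as np

def to_object_frame(ee_poses, T_world_obj):
    """Re-express world-frame EE poses (T, 4, 4) relative to the object."""
    return np.einsum('ij,tjk->tik', np.linalg.inv(T_world_obj), ee_poses)

def anchor_to(rel_poses, T_world_obj_new):
    """Re-anchor an object-relative segment to a new object placement."""
    return np.einsum('ij,tjk->tik', T_world_obj_new, rel_poses)

def randomize_pose(rng, xy_range=0.3):
    """Hypothetical scene randomizer: shift the object in the plane."""
    T = np.eye(4)
    T[:2, 3] = rng.uniform(-xy_range, xy_range, size=2)
    return T

rng = np.random.default_rng(0)
demo = np.tile(np.eye(4), (100, 1, 1))   # stand-in for one teleoped demo
rel = to_object_frame(demo, np.eye(4))   # demo's object-centric form
generated = [anchor_to(rel, randomize_pose(rng)) for _ in range(1000)]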

Yunfan Jiang (@yunfanjiang):

Synthesizing robot data in simulation is a promising way to scale up. While most prior work focuses on static manipulation, check out our new work MoMaGen, led by Chengshu Li, Mengdi Xu, Arpit Bahety, and Hang Yin, where we extend data synthesis to mobile manipulation. MoMaGen

Runpei Dong (@runpeidong):

Thrilled to share our work AlphaOne 🔥 at EMNLP 2025. Junyu Zhang and I will be presenting this work online; please feel free to join and talk to us!!! 📆 Date: 8:00-9:00, Friday, Nov 7 (Beijing Standard Time, UTC+8) 📺 Session: Gather Session 4