Sarvesh Patil (@servo97)'s Twitter Profile
Sarvesh Patil

@servo97

Your friendly neighborhood PhD in Robotics @CMU. Soft robots for Manipulation | Causal Inference | MARL.

ID: 2891434130

Website: http://servo97.github.io · Joined: 06-11-2014 01:16:06

855 Tweets

276 Followers

550 Following

Uksang Yoo (@uksangyoo)'s Twitter Profile Photo

Can robots make pottery🍵? Throwing a pot is a complex manipulation task of continuously deforming clay. We will present RoPotter, a robot system that uses structural priors to learn from demonstrations and make pottery, at the IEEE-RAS Int. Conf. on Humanoid Robots (HUMANOIDS). CMU Robotics Institute 👇 robot-pottery.github.io 1/8🧵

Gokul Swamy (@g_k_swamy)'s Twitter Profile Photo

A dream I've had for five years is finally coming true: I'll be co-teaching a course next sem. on the algorithmic foundations of imitation learning / RLHF with my advisors, Drew Bagnell and Steven Wu! Sign up if you're at CMU (17-740) or follow along at interactive-learning-algos.github.io!

Rohan Choudhury (@rchoudhury997)'s Twitter Profile Photo

Excited to finally release our NeurIPS 2024 (spotlight) paper! We introduce Run-Length Tokenization (RLT), a simple way to significantly speed up your vision transformer on video with no loss in performance!
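The tweet doesn't spell out the mechanism, but the name suggests classic run-length encoding applied to video patch tokens: consecutive patches that repeat across frames collapse into a single token plus a count, so the transformer processes fewer tokens. A minimal sketch of plain run-length encoding as an illustration of that idea (not the paper's implementation; the patch labels are hypothetical):

```python
def run_length_tokenize(tokens):
    """Collapse consecutive repeated tokens into (token, run_length) pairs,
    as in classic run-length encoding. Toy illustration only."""
    if not tokens:
        return []
    encoded = [[tokens[0], 1]]
    for t in tokens[1:]:
        if t == encoded[-1][0]:
            encoded[-1][1] += 1   # extend the current run
        else:
            encoded.append([t, 1])  # start a new run
    return [tuple(pair) for pair in encoded]

# A static background patch repeated across frames collapses to one token:
patches = ["sky", "sky", "sky", "ball", "ball", "sky"]
print(run_length_tokenize(patches))
# [('sky', 3), ('ball', 2), ('sky', 1)]
```

The intuition carries over to video: static regions produce long runs, so mostly-static clips shrink dramatically while fully dynamic clips stay near their original length.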

So Yeon (Tiffany) Min on Industry Job Market (@soyeontiffmin)'s Twitter Profile Photo

I am on the industry job market and am planning to interview around next March. I am attending the NeurIPS conference, and I hope to meet you there if you are hiring! My website: soyeonm.github.io Short bio about me: I am a 5th-year PhD student at CMU MLD, working with Russ Salakhutdinov.

Saumya Saxena (@saxena_saumya)'s Twitter Profile Photo

Can 3D scene graphs act as effective online memory for solving EQA tasks in⚡️real-time? Presenting GraphEQA🤖, a framework for grounding Vision Language Models using multimodal memory for real-time embodied question answering.

Kensuke Nakamura (@kensukenk)'s Twitter Profile Photo

Do you think that robot safety is “just collision avoidance”? In the open world, safety is more than collisions and must represent failures like spilling or breaking. Our new latent safety filters detect, and prevent, any policy from violating hard-to-specify constraints! (1/10)

Shivam Vats @ ICLR (@shivaamvats)'s Twitter Profile Photo

Excited to share that our paper, "Multi-Robot Motion Planning with Diffusion Models," has been selected for a Spotlight presentation at #ICLR2025!🔦🤖 We scale diffusion planning to dozens of robots *without* multi-robot data by using search. Project: multi-robot-diffusion.github.io 🧵

CMU Intelligent Autonomous Manipulation Lab (@iamlab_cmu)'s Twitter Profile Photo

🤖We are introducing Grounded Task Axes (GTA) — a zero-shot skill transfer framework that enables robots to perform multi-step manipulation tasks on novel objects by generalizing modular controllers grounded through visual foundation models. ✅ No training, no demo, no …

Akash Sharma (@akashshrm02)'s Twitter Profile Photo

Robots need touch in human-like hands to reach the goal of general manipulation. However, approaches today either don't use tactile sensing or use a task-specific architecture per tactile task. Can one model improve many tactile tasks? 🌟Introducing Sparsh-skin: tinyurl.com/y935wz5c 1/6

Gokul Swamy (@g_k_swamy)'s Twitter Profile Photo

Say ahoy to 𝚂𝙰𝙸𝙻𝙾𝚁⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes w/o any additional human feedback! 𝚂𝙰𝙸𝙻𝙾𝚁 ⛵ outperforms Diffusion Policies trained via behavioral cloning on 5-10x the data!

Gokul Swamy (@g_k_swamy)'s Twitter Profile Photo

It was a dream come true to teach the course I wish existed at the start of my PhD. We built up the algorithmic foundations of modern-day RL, imitation learning, and RLHF, going deeper than the usual "grab bag of tricks". All 25 lectures + 150 pages of notes are now public! 🧵

Simon Stepputtis (@simonstepputtis)'s Twitter Profile Photo

Thrilled to join Virginia Tech as an assistant professor in Virginia Tech Mechanical Engineering this fall! At the TEA lab (tealab.ai), we’ll explore hybrid AI systems for efficient and adaptive agents and robots 🤖 Thank you to everyone who has supported me along the way!

Tabitha Edith Lee (@tabularobot)'s Twitter Profile Photo

Announcing our EXAIT@ICML workshop paper: CURATE! Have a difficult target task distribution with sparse rewards that you want to train an RL agent to complete? 🤔 We tackle this problem using our curriculum learning algorithm, CURATE. 🎓 Link: openreview.net/forum?id=mAeQu… 1/6

Seth Karten (@sethkarten)'s Twitter Profile Photo

🔴 Final speaker lineup confirmed - PokéAgent Challenge Hackathon starts in 48 hours! NeurIPS 2025 competition featuring two tracks advancing AI decision-making through Pokémon: 🥊 Competitive battling, 🏃 RPG speedrunning. Research talks Saturday 12-1:30 PM EDT. $2k in GCP

Prasanna Sriganesh (@realprassi007)'s Twitter Profile Photo

Spot dressed up for Halloween!🎃 It's on a mission for its favorite 'candy'! 🔋 But two 'ghosts' were blocking the path… A fun demo of our new paper on how robots can intelligently 'make way' on cluttered stairs! (1/4) CMU Robotics Institute

Chaoyi Pan (@chaoyipan)'s Twitter Profile Photo

🕸️ Introducing SPIDER — Scalable Physics-Informed Dexterous Retargeting! A dynamically feasible, cross-embodiment retargeting framework for BOTH humanoids 🤖 and dexterous hands ✋. From human motion → sim → real robots, at scale. 🔗 Website: jc-bao.github.io/spider-project/ 🧵 1/n

Chaoyi Pan (@chaoyipan)'s Twitter Profile Photo

Generative models (diffusion/flow) are taking over robotics 🤖. But do we really need to model the full action distribution to control a robot? We suspected the success of Generative Control Policies (GCPs) might be "Much Ado About Noising." We rigorously tested the myths. 🧵👇

Yutong (Kelly) He (@electronickale)'s Twitter Profile Photo

Diffusion/Flow-based models can sample in 1-2 steps now 👍 But likelihood? Still requires 100-1000 NFEs (even for these fast models) 😭 We fix this! Introducing F2D2: simultaneous fast sampling AND fast likelihood via joint flow map distillation. arxiv.org/abs/2512.02636 1/🧵
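For background on why likelihood is the bottleneck (a toy sketch of the standard continuous-flow likelihood computation, not the F2D2 method): exact log-likelihood in a flow model comes from integrating the instantaneous change-of-variables formula, d log p/dt = -div v, along the probability-flow ODE, so every integration step costs at least one network evaluation (NFE). The linear velocity field below, v(x, t) = -x, is a hypothetical stand-in for a learned network:

```python
import math

def flow_log_likelihood(x1, velocity, divergence, steps=100):
    """Integrate the 1D probability-flow ODE backward from data x1 to the
    standard-normal base, accumulating log-density change via
    d(log p)/dt = -div v. Each Euler step spends one velocity call (NFE)."""
    x, logdet, nfe = x1, 0.0, 0
    dt = 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        v = velocity(x, t); nfe += 1
        x -= v * dt                     # Euler step back toward the base
        logdet -= divergence(x, t) * dt # log p(x1) = log p(x0) - ∫ div v dt
    base_logp = -0.5 * (x * x + math.log(2 * math.pi))  # log N(x0; 0, 1)
    return base_logp + logdet, nfe

# Stand-in field v(x, t) = -x, whose divergence is -1 everywhere:
logp, nfe = flow_log_likelihood(0.5, lambda x, t: -x, lambda x, t: -1.0)
print(nfe)  # 100 velocity evaluations for a single likelihood
```

With 100-1000 such steps per sample, likelihood evaluation stays expensive even when sampling itself has been distilled down to 1-2 steps, which is the gap the tweet describes.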

Max Simchowitz (@max_simchowitz)'s Twitter Profile Photo

👋👋New generative modeling paper from Yutong (Kelly) He and Xinyue Ai: Evaluating sample likelihoods is a fundamental primitive in flow-based generative modeling. Now we can compute them faster. Much faster. Like 10-100x faster. ✈️✈️ Check out our new work on fast likelihood …