Roberto (@robobertomm) 's Twitter Profile
Roberto

@robobertomm

Assistant CS Professor at UT Austin. Formerly at Stanford and TU Berlin. Researching at the intersection of vision, learning, and robotics 🏳️‍🌈

ID: 1164660849841016832

Link: https://robertomartinmartin.com/
Joined: 22-08-2019 22:09:28

156 Tweets

2.2K Followers

262 Following

Zizhao Wang (@duke_zzwang) 's Twitter Profile Photo

In multi-object environments, why do most unsupervised skill discovery methods fail to learn complex skills like tool use? Because they simply maximize state coverage. Introducing our solution, SkiLD: Skill Discovery Guided by Factor Interactions (NeurIPS 2024) wangzizhao.github.io/SkiLD/

Roberto (@robobertomm) 's Twitter Profile Photo

Tired of guessing what tasks people want robots to do for them? Check out our study! We correlate the time spent and the emotions people felt while performing tasks with the desire to automate them, comparing across different groups. And it comes with an online tool for you to play with the data!

Roberto (@robobertomm) 's Twitter Profile Photo

Giving a talk as a New Faculty Highlight at AAAI tomorrow morning (9:30am)! aaai.org/conference/aaa… Come if you want an overview of some of the work from the lab.

Fei Xia (@xf1280) 's Twitter Profile Photo

✨Super excited to share what the team has been working on! ♊️🤖 Gemini Robotics is a family of frontier models that are dexterous, interactive, and general. It builds on top of Gemini's world understanding, enhancing its spatial/embodied reasoning, and producing robot

Roberto (@robobertomm) 's Twitter Profile Photo

So happy for Jiaheng Hu! He has been rocking it, with outstanding work that pushes the limits of what robot learning can achieve in mobile manipulation and other domains. And he is one of my first Ph.D. students! Congratulations! 🦾🦾🦾🦾

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin) 's Twitter Profile Photo

Extremely excited to announce that I will be joining UT Austin Computer Science in August 2025 as an Assistant Professor! 🎉

I'm looking forward to continuing to develop AI agents that interact/communicate with people, each other, and the multimodal world. I'll be recruiting PhD

Roberto (@robobertomm) 's Twitter Profile Photo

Loved working on this with our MIT/Stanford/OpenAI collaborators. It brings "The Bitter Lesson" to data curation: skip the hand-tuned heuristics (visual similarity, motion...) and let the data speak for itself! Datamodels is a fascinating framework 🤯

Jiaheng Hu (@jiahenghu1) 's Twitter Profile Photo

Excited to be in ATL for #ICRA2025 to present 🔥FLaRe: fine-tuning large transformer policies with #RL, 15:25 Tuesday @ room 410! I will also be attending the 📷Doctoral Consortium on Monday to talk about my research on self-improving robots. Happy to meet old and new friends!

Roberto (@robobertomm) 's Twitter Profile Photo

🚨RL training for contact-rich tasks with a mobile manipulator IN THE REAL WORLD?!🤯 We're not crazy—just equipped with the right action space! SLAC learns a safe, effective action space via unsupervised RL in sim, enabling real-world RL training in minutes. Check it out!🚀

Huihan Liu (@huihan_liu) 's Twitter Profile Photo

Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively sensed what you need in the background and stepped in when confident? (1/n)

Arpit Bahety (@arpitbahety) 's Twitter Profile Photo

Imagine a future where robots are part of our daily lives — How can end users teach robots new tasks by directly showing them, just like teaching another person? 🧵👇

Roberto (@robobertomm) 's Twitter Profile Photo

It was time to improve our evaluations in robot learning! We introduce a methodology based on anonymous A/B testing: fairer, stronger, community-driven. Awesome work by Karl Pertsch, Pranav Atreya, Tony Lee, and an incredible crowdsourcing team. Upload and test your model! 🚀

Rutav (@rutavms) 's Twitter Profile Photo

Intelligent humanoids should have the ability to quickly adapt to new tasks by observing humans.

Why is such adaptability important?
🌍 Real-world diversity is hard to fully capture in advance
🧠 Adaptability is central to natural intelligence

We present MimicDroid 👇 🌐

Gautam Kamath (@thegautamkamath) 's Twitter Profile Photo

📢 Call for Community Activities #AAAI2026

We invite submissions of proposals for inclusive and open activities that help broaden community participation in the AI field.

October 4: Submission Deadline
October 18: Acceptance Notifications

Roberto Maru Cabrera AAAI

Roberto (@robobertomm) 's Twitter Profile Photo

Simple but *so effective* idea! And it can be used with any feature data selector. Great work led by Sateesh Kumar. Do not miss it at #CoRL2025 (Spotlight 4 & Poster 2 on Sept 29)!

Jiaheng Hu (@jiahenghu1) 's Twitter Profile Photo

Excited that SPARTA (vision.cs.utexas.edu/projects/spart…) won the best poster award at the CoRL RINO workshop! Big congrats to the project lead Priyanka, who worked so hard on this project, as well as to the rest of the co-authors Shivin Dass, Sagnik Majumder, Roberto, and Kristen!

naveen manwani (@naveenmanwani17) 's Twitter Profile Photo

🚨CoRL 2025 Best Poster Award 🏆 Paper Alert 🚨
➡️Paper Title: Mash, Spread, Slice! Learning to Manipulate Object States via Visual Spatial Progress
🌟A few pointers from the paper
🎯Most robot manipulation focuses on changing the kinematic state of objects: picking, placing,

Shivin Dass (@shivindass) 's Twitter Profile Photo

A little late to this but excited to share that DataMIL won the best paper at the Data workshop at #CoRL! If you haven't already, check it out! 👇

Chengshu Li (@chengshuericli) 's Twitter Profile Photo

We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoperated robot trajectory into 1000s of generated trajectories automatically. 🚀
Website: momagen.github.io
arXiv: arxiv.org/abs/2510.18316