Vishal Mandadi (@vishalmandadi)'s Twitter Profile
Vishal Mandadi

@vishalmandadi

Uplifting Robots in Every Way! |
RA @ CVIT and RRC, IIIT-H |
Previously CS @ IIIT-Hyderabad 23'

ID: 1275101725309853701

Link: http://vishal-2000.github.io · Joined: 22-06-2020 16:22:28

574 Tweets

183 Followers

1.1K Following

kepano (@kepano)'s Twitter Profile Photo

This unexpectedly became the most popular (controversial?) thing I've written. Because I wrote it so quickly, I should clarify a few things...

Cheng Chi (@chichengcc)'s Twitter Profile Photo

mm-level precision beyond actuator limits, so much torque that you need to manage thermals. Owning the whole stack from HW to AI is the only way 🦾

Tangible (@tangiblerobots)'s Twitter Profile Photo

Hello, Eggie. The world was built around humans. Eggie doesn't just look human, Eggie interacts like us. Dexterous. Mobile. Compliant. We’re building Eggie to be the smartest robot to ever walk on Earth. Join us. Built from scratch and with love in California. 🫶

Sunday (@sundayrobotics)'s Twitter Profile Photo

After 18 months in stealth, dozens of prototypes, millions of real-home demonstrations, and one final all-nighter, we’re thrilled for you to say hello to Memo

Michal Nauman (@mic_nau)'s Twitter Profile Photo

Multi-task RL can be highly sample-efficient and, when done right, it unlocks LLM-style transfer and fine-tuning. We’re excited to introduce BRC, a simple recipe for multi-task RL that outperforms SOTA single-task agents while using less compute (!)

Vishal Mandadi (@vishalmandadi)'s Twitter Profile Photo

Maybe, just like LLM Arena and Elo ratings, we should rank robots by putting them in a boxing or fencing arena - the ultimate test of control and safety :)
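For what it's worth, the Elo machinery that arena-style rankings lean on is tiny; a minimal, illustrative sketch of one rating update after a match (not any benchmark's actual code):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo update; score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses."""
    # Expected score of A under the logistic Elo model (400-point scale).
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta
```

Swap chess games for robot bouts and the ranking math carries over unchanged.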

Max Simchowitz (@max_simchowitz)'s Twitter Profile Photo

🧐🧐 Why do we pretrain LLMs with log likelihood? Why does action chunking work so well in robotics? Why is EMA so ubiquitous? And could there be a mathematical basis for Moravec’s paradox? 🤖🤖 Come check out our NeurIPS 2025 Tutorial “Foundations of Imitation Learning” with

Max Simchowitz (@max_simchowitz)'s Twitter Profile Photo

⏰⏰ New Science of Robot Learning Paper: "Much Ado About Noising." TL;DR: we answer why generative models, like flow and diffusion models, actually work for robotic control tasks 🤖🤖 (hint: it's not multimodality). This leads to a new minimal iterative policy (MIP) that matches

Chen Tessler (@chentessler)'s Twitter Profile Photo

At NVIDIA, we built ProtoMotions to help us, and researchers worldwide, innovate quickly without compromising on applicability. We're proud to announce ProtoMotions3 -- our biggest release yet! 🧵👇

Max Simchowitz (@max_simchowitz)'s Twitter Profile Photo

👋👋 New Generative Modeling Paper from Yutong (Kelly) He and Xinyue Ai: Evaluating sample likelihoods is a fundamental primitive in flow-based generative modeling. Now we can compute them faster. Much faster. Like 10-100x faster. ✈️✈️ Check out our new work on fast likelihood
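For context on why that primitive is expensive in the first place: in a continuous flow, the exact log-likelihood of a sample comes from the instantaneous change-of-variables formula (a standard result, not this paper's contribution), which requires integrating a divergence along the whole ODE path:

```latex
% Flow ODE: dx_t/dt = v_\theta(x_t, t), carrying noise x_0 ~ p_0 to data x_1.
% Exact sample likelihood (instantaneous change of variables):
\log p_1(x_1) = \log p_0(x_0) - \int_0^1 \nabla \cdot v_\theta(x_t, t)\,\mathrm{d}t
```

Every ODE step pays for a divergence of the network on top of the velocity itself, which is why a 10-100x speedup on likelihood evaluation matters.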

Max Simchowitz (@max_simchowitz)'s Twitter Profile Photo

⏰⏰ (another) science of robot learning paper. Why does action chunking work so well in robotic manipulation? Probably lots of reasons, but here's one you may not have thought of: control stability. After months of polishing and 5 revisions, check out “Action Chunking and

Andrej Karpathy (@karpathy)'s Twitter Profile Photo

I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between. I have a sense that I could be 10X more powerful if I just properly string together what has become

Kevin Zakka (@kevin_zakka)'s Twitter Profile Photo

Just shipped a major domain randomization overhaul in mjlab and I'm super excited about it! The biggest highlight is physically consistent inertia randomization. Mass, center of mass, and the inertia tensor now vary together through a pseudo-inertia parameterization, so every
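For intuition on "physically consistent": mass, center of mass, and rotational inertia can be packed into a single 4x4 pseudo-inertia matrix that is positive definite exactly when the body is physically realizable, so perturbing a Cholesky factor keeps every sample valid. A minimal numpy sketch of that idea (not mjlab's actual API; all names here are mine):

```python
import numpy as np

def pseudo_inertia(m, com, I_com):
    """Pack mass, CoM, and inertia about the CoM into a 4x4 pseudo-inertia matrix."""
    c = np.asarray(com, dtype=float)
    # Parallel-axis theorem: inertia about the body frame origin.
    I_o = I_com + m * (c @ c * np.eye(3) - np.outer(c, c))
    J = np.zeros((4, 4))
    J[:3, :3] = 0.5 * np.trace(I_o) * np.eye(3) - I_o  # second-moment block
    J[:3, 3] = J[3, :3] = m * c
    J[3, 3] = m
    return J

def randomize_inertia(J, scale=0.05, rng=None):
    """Jitter a Cholesky factor; L' @ L'.T is positive definite by construction,
    so the sampled body remains physically realizable."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(J)
    L = np.tril(L * (1.0 + scale * rng.standard_normal((4, 4))))
    return L @ L.T

def unpack(J):
    """Recover (mass, CoM, inertia about the CoM) from a pseudo-inertia matrix."""
    m = J[3, 3]
    c = J[:3, 3] / m
    I_o = np.trace(J[:3, :3]) * np.eye(3) - J[:3, :3]  # invert the trace map
    return m, c, I_o - m * (c @ c * np.eye(3) - np.outer(c, c))
```

e.g. unpack(randomize_inertia(pseudo_inertia(1.0, [0.0, 0.0, 0.05], 1e-3 * np.eye(3)))) returns a nearby mass/CoM/inertia triple that still describes a real body.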

Aviral Kumar (@aviral_kumar2)'s Twitter Profile Photo

🚨🚨 New paper on flow-matching value functions. Last year, we showed that training RL value functions with a flow-matching loss achieved SOTA results. But why does it work? And what could it possibly tell us about other things that have nothing to do with VFs or even RL? Short

Deepak Pathak (@pathak2206)'s Twitter Profile Photo

We hosted Prof. Alyosha Efros (UC Berkeley) at Skild AI! He didn't believe that robots could actually cook eggs reliably. :) Tested back-to-back 5 times without fail! One batch of scrambled eggs every ~2.5 mins nonstop. The same model assembles a GPU on a server rack too.