Mustafa Mukadam (@mukadammh) 's Twitter Profile
Mustafa Mukadam

@mukadammh

Robotics and AI researcher @amazon | Prev: @AIatMeta, @GTrobotics

ID: 1467594987424649216

Website: https://www.mustafamukadam.com | Joined: 05-12-2021 20:41:37

242 Tweets

1.1K Followers

325 Following

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

Genuinely really impressive stuff from (some parts of) my old team at Meta. Tactile sensing for manipulation still has a ways to go, but it has come so far.

Fangchen Liu (@fangchenliu_) 's Twitter Profile Photo

1/N Most Vision-Language-Action models need tons of data for finetuning, and still fail for new objects and instructions. Introducing OTTER, a lightweight, easy-to-train model that uses text-aware visual features to nail unseen tasks out of the box! Here's how it works 👇
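
To make the "text-aware visual features" idea concrete, here is a minimal sketch of one way such conditioning can work: patch features from a frozen vision-language encoder are re-weighted by their similarity to the instruction's text tokens before being handed to the policy. The function name, tensor shapes, temperature, and the random stand-in features below are illustrative assumptions, not OTTER's actual implementation.

```python
# Sketch: text-aware pooling of visual patch features (assumed mechanism).
# A real system would take patch_feats and text_feats from a frozen
# vision-language encoder such as CLIP; random tensors stand in here.
import torch
import torch.nn.functional as F

def text_aware_pooling(patch_feats, text_feats, temperature=0.07):
    """Weight image patch features by their similarity to instruction tokens.

    patch_feats: (num_patches, dim) visual patch embeddings
    text_feats:  (num_text_tokens, dim) instruction token embeddings
    returns:     (dim,) a single text-conditioned visual feature
    """
    patches = F.normalize(patch_feats, dim=-1)
    text = F.normalize(text_feats, dim=-1)
    sim = patches @ text.t()                # similarity of every patch to every token
    relevance = sim.max(dim=-1).values      # best-matching token per patch
    weights = F.softmax(relevance / temperature, dim=0)
    return (weights.unsqueeze(-1) * patch_feats).sum(dim=0)

# Toy usage with stand-in features (dimensions chosen arbitrarily).
patch_feats = torch.randn(196, 512)   # e.g. 14x14 ViT patches
text_feats = torch.randn(8, 512)      # e.g. tokens of "pick up the red mug"
obs_feature = text_aware_pooling(patch_feats, text_feats)
print(obs_feature.shape)              # torch.Size([512])
```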

Mustafa Mukadam (@mukadammh) 's Twitter Profile Photo

📢🔥Excited to share that I have joined Amazon Robotics to work on some of the hardest real-world manipulation problems at unprecedented scale. Looking forward to this new chapter after spending 5+ wonderful years at Meta FAIR leading advances in touch perception and dexterity.

Andy Jassy (@ajassy) 's Twitter Profile Photo

Very cool breakthrough by our physical AI and robotics teams-- Vulcan is the first robot that combines sight and touch, and can feel its way through cluttered spaces the way humans do. Vulcan is helping make work safer by handling ergonomically challenging tasks, while creating

Mustafa Mukadam (@mukadammh) 's Twitter Profile Photo

Thrilled to be working on the Vulcan team. We have one of the first use cases at Amazon where contact-rich manipulation in clutter is necessary. There are not many ways to go about this other than to really embrace contact and leverage the sense of touch!

Julen Urain (@robotgradient) 's Twitter Profile Photo

Anyone interested in tactile sensing for robotics should be following Akash's solid releases. How should we integrate the rich tactile sensing modality into policy learning?

Carolina Higuera (@carohiguerarias) 's Twitter Profile Photo

Magnet-based tactile sensors offer a versatile solution for full-hand sensing. However, magnetic signals are not trivial to process when it comes to capturing relevant contact information. Check out Sparsh-skin, general representations for magnetic skins!

Mustafa Mukadam (@mukadammh) 's Twitter Profile Photo

Sparsh-skin, our next iteration of general pretrained touch representations. Skin-like tactile sensing is catching up to the prominent vision-based sensors with the explosion of new dexterous hands. A crucial step in leveraging full-hand sensing; work led by Akash Sharma 🧵👇
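
For readers curious what "general pretrained touch representations" for magnetic skins might look like in practice, here is a hedged sketch of a masked-reconstruction pretraining loop over windows of skin signals. The architecture, taxel counts, masking ratio, and class name are assumptions for illustration, not the released Sparsh-skin model.

```python
# Sketch: self-supervised pretraining on magnetic-skin signals by masking
# part of a signal window and reconstructing it, so the encoder learns a
# reusable touch representation without task labels (shapes are placeholders).
import torch
import torch.nn as nn

class TactileMaskedAutoencoder(nn.Module):
    def __init__(self, num_taxels=96, channels=3, window=32, dim=128):
        super().__init__()
        self.in_dim = num_taxels * channels       # flatten taxels per timestep
        self.embed = nn.Linear(self.in_dim, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decode = nn.Linear(dim, self.in_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, signals, mask_ratio=0.5):
        # signals: (batch, window, num_taxels * channels) magnetometer readings
        tokens = self.embed(signals)
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        latent = self.encoder(tokens)                 # reusable touch representation
        recon = self.decode(latent)
        loss = ((recon - signals) ** 2)[mask].mean()  # loss only on masked steps
        return loss, latent

model = TactileMaskedAutoencoder()
batch = torch.randn(4, 32, 96 * 3)                    # fake skin data for the sketch
loss, latent = model(batch)
print(loss.item(), latent.shape)
```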

Irmak Guzey (@irmakkguzey) 's Twitter Profile Photo

Learning task-agnostic tactile representations is very valuable for dexterity! Check out this cool work by Akash Sharma that explores this while integrating the history of tactile information. This enables highly dexterous tasks—like plug insertion with a giant hand! 😁

Lerrel Pinto (@lerrelpinto) 's Twitter Profile Photo

Very interesting work! And great to see self-supervised learning being used for tactile data. This is critical to scaling tactile to the level that vision has scaled.

Sudharshan Suresh (@suddhus) 's Twitter Profile Photo

I'm featured in an interview in our latest behind-the-scenes release! We break down the ML and perception that drive the whole-body manipulation behaviors from last year. It starts with a neat demo of Atlas's range of motion and our vision foundation models. youtu.be/oe1dke3Cf7I?si…

Ademi Adeniji (@ademiadeniji) 's Twitter Profile Photo

Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖 👉 feel-the-force-ftf.github.io 1/n
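A rough sketch of what a "force-sensitive manipulation policy" could look like at inference time, as described above: the policy predicts a motion command plus a target contact force, and a simple proportional loop adjusts the gripper until the measured force matches. The network sizes, gain, and sensor interface below are placeholder assumptions, not the FTF codebase.

```python
# Sketch: a policy head that predicts a desired contact force alongside the
# action, plus a crude proportional force controller for the gripper.
import torch
import torch.nn as nn

class ForceAwarePolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=7):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.action_head = nn.Linear(128, act_dim)   # end-effector motion command
        self.force_head = nn.Linear(128, 1)          # desired contact force (N)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.action_head(h), self.force_head(h)

def gripper_force_step(target_force, measured_force, gripper_width, gain=0.002):
    """Close the gripper slightly when measured force is below the target,
    open it slightly when above (a simple proportional force loop)."""
    error = target_force - measured_force
    return gripper_width - gain * error

policy = ForceAwarePolicy()
obs = torch.randn(1, 64)                              # stand-in observation
action, target_force = policy(obs)
width = gripper_force_step(target_force.item(), measured_force=0.5,
                           gripper_width=0.04)
print(action.shape, target_force.item(), width)
```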

Vikash Kumar (@vikashplus) 's Twitter Profile Photo

📢Life is a sequence of bets – and I’ve picked my next: myolab.ai. It’s incredibly ambitious, comes with high risk, & carries unbounded potential. But it’s a version of the #future I deeply believe in. I believe: ➡️AI will align strongly with humanity - because it maximizes its own

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

So you’ve trained your favorite diffusion/flow-based policy, but it’s just not good enough 0-shot. Worry not: in our new work, DSRL, we show how to *steer* pre-trained diffusion policies with off-policy RL, improving behavior efficiently enough for direct training in the real
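
To illustrate the steering idea in a minimal way: the pre-trained diffusion policy stays frozen, its initial noise becomes the action space of an off-policy RL actor, and better behavior comes from choosing better noise rather than touching the policy's weights. The tiny denoiser, step count, and actor below are stand-ins for illustration, not the DSRL implementation.

```python
# Sketch: steering a frozen diffusion policy by learning which initial noise
# to feed it; the RL actor's "action" is that noise, the base policy is untouched.
import torch
import torch.nn as nn

class FrozenDiffusionPolicy(nn.Module):
    """Stand-in for a pre-trained diffusion policy: maps (obs, noise) -> action."""
    def __init__(self, obs_dim=16, act_dim=4, steps=8):
        super().__init__()
        self.denoiser = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                                      nn.Linear(64, act_dim))
        self.steps = steps
        for p in self.parameters():
            p.requires_grad_(False)               # frozen: RL never updates it

    @torch.no_grad()
    def forward(self, obs, noise):
        x = noise
        for _ in range(self.steps):               # crude fixed-step denoising loop
            x = x - 0.1 * self.denoiser(torch.cat([obs, x], dim=-1))
        return x                                  # the executed robot action

class NoiseActor(nn.Module):
    """Off-policy RL actor whose output is the diffusion policy's initial noise."""
    def __init__(self, obs_dim=16, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs):
        return self.net(obs)

base_policy = FrozenDiffusionPolicy()
actor = NoiseActor()
obs = torch.randn(1, 16)
steered_noise = actor(obs)                        # trained with e.g. a Q-critic
action = base_policy(obs, steered_noise)          # behavior changes, weights don't
print(action.shape)
```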

Sumedh Sontakke (@sota_kke) 's Twitter Profile Photo

We demo’d the Amazon grasp model at RSS this year. We performed over 600 grasps over one day at roughly 80-90% SR: 1. On an open item set (people gave random, often adversarial items), 2. In a random scene, fully outdoors, throughout the day, 3. On a new embodiment (different from