Marius Memmel (@memmelma)'s Twitter Profile
Marius Memmel

@memmelma

Robotics PhD student @UW and Intern @Bosch_AI Pittsburgh 🐺 previously @DHBW @EPFL, @TUDarmstadt 🇩🇪

ID: 1377627705873739779

Link: http://memmelma.github.io · Joined: 01-04-2021 14:23:53

68 Tweets

368 Followers

417 Following

Rohan Baijal (@rohanbaijal)'s Twitter Profile Photo

Long Range Navigator (LRN) 🧭— an approach to extend planning horizons for off-road navigation with no prior maps. Using vision, LRN makes longer-range decisions by spotting navigation frontiers far beyond the range of metric maps. personalrobotics.github.io/lrn/

Hongchi Xia (@xhongchi97338)'s Twitter Profile Photo

Glad to introduce our #CVPR2025 paper "DRAWER", allowing one to create a realistic and interactable digital twin from a video of a static scene without any interactions with the environment. It unlocks many opportunities in gaming and robotics! Webpage: drawer-art.github.io

Wei-Chiu Ma (@weichiuma)'s Twitter Profile Photo

I've been wanting to make 3D reconstructions not just realistic, but also **interactable** and **actionable** for years. Thanks to Hongchi Xia, we're now a step closer! Introducing DRAWER — a framework for the automatic construction of realistic, interactive digital twins.

Abhishek Gupta (@abhishekunique7)'s Twitter Profile Photo

Very excited to be at #ICLR2025 in Singapore helping present some of the work done by our group! We'll be presenting 4 papers: 1. Rapidly Adapting Policies to the Real-World via Simulation-Guided Fine-Tuning weirdlabuw.github.io/sgft/ 2. Robot Sub-Trajectory Retrieval for…

Marius Memmel (@memmelma)'s Twitter Profile Photo

Small poster, big ideas! Stop by at our poster session starting now to find out about trajectory retrieval in robotics! Hall 3 - 33 🤖

Marius Memmel (@memmelma)'s Twitter Profile Photo

DRAWER can generate an interactive simulation of a real-world scene from just a single video! The best part? We can use it to train policies in simulation and transfer them back to the real world!

Avinandan Bose (@avibose22)'s Twitter Profile Photo

Excited to be at #AISTATS2025! Catch me at: 📍 Poster: Hall A–E 17–18 🕒 Sat, May 3rd at 3PM Presenting our accepted works on Offline Multi-task RL and Certified Robustness to Dynamic Data Poisoning. Also happy to chat about LoRe and DoomArena (github.com/ServiceNow/Doo…)!

Allen School (@uwcse)'s Twitter Profile Photo

If you visited the UW Cherry Blossoms, did you “spot” an unusual visitor among the blooms? Researchers in the University of Washington #UWAllen’s #Robotics group recently took advantage of some nice weather to take our Boston Dynamics robot dog for a stroll around campus. #AI 1/4

Marcel Torné (@marceltornev)'s Twitter Profile Photo

Giving history to our robot policies is crucial for solving a variety of daily tasks. However, diffusion policies get worse when adding history. 🤖 In our recent work, we show how adding an auxiliary loss that we name Past-Token Prediction (PTP), together with cached embeddings…
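As a rough sketch of the idea (not the paper's actual architecture or losses), the objective can be pictured as a behavior-cloning term plus a weighted past-token reconstruction term; the function name, `aux_weight`, and the MSE forms are all assumptions for illustration:

```python
import numpy as np

def ptp_objective(pred_action, target_action,
                  pred_past_tokens, cached_past_tokens, aux_weight=0.1):
    """Toy combined objective: a behavior-cloning term plus an auxiliary
    Past-Token Prediction (PTP) term that asks the policy to reconstruct
    tokens from earlier timesteps (served here from a cache). Plain MSE
    on numpy arrays; the paper's encoder and losses may differ."""
    bc_loss = np.mean((np.asarray(pred_action) - np.asarray(target_action)) ** 2)
    ptp_loss = np.mean((np.asarray(pred_past_tokens) - np.asarray(cached_past_tokens)) ** 2)
    return float(bc_loss + aux_weight * ptp_loss)
```

Caching the past-token embeddings means the history encoder need not be re-run at every gradient step, which is presumably where the training-efficiency benefit comes from.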

Jesse Zhang (@jesse_y_zhang)'s Twitter Profile Photo

Reward models that help real robots learn new tasks—no new demos needed! ReWiND uses language-guided rewards to train bimanual arms on OOD tasks in 1 hour! Offline-to-online, lang-conditioned, visual RL on action-chunked transformers. 🧵
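A minimal sketch of the language-guided reward interface, assuming visual and language embeddings are already computed: score each frame by its similarity to the instruction embedding. ReWiND's actual reward model is learned from demonstrations; the cosine similarity here is an illustrative stand-in, and the function name is hypothetical:

```python
import numpy as np

def language_reward(frame_emb, instruction_emb):
    """Toy language-conditioned reward: cosine similarity between a
    video-frame embedding and a language-instruction embedding. A
    learned reward model (as in ReWiND) would replace this score."""
    f = np.asarray(frame_emb, dtype=float)
    g = np.asarray(instruction_emb, dtype=float)
    return float(f @ g / (np.linalg.norm(f) * np.linalg.norm(g)))
```

A dense per-frame score like this is what lets an offline-to-online RL loop improve a policy on out-of-distribution tasks without collecting new demonstrations.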

Jesse Zhang (@jesse_y_zhang)'s Twitter Profile Photo

How can non-experts quickly teach robots a variety of tasks? Introducing HAND ✋, a simple, time-efficient method of training robots! Using just a **single hand demo**, HAND learns manipulation tasks in under **4 minutes**! 🧵

Yunchu (@yunchuzh)'s Twitter Profile Photo

How should a robot perceive the world? What kind of visual representation leads to robust visuomotor policy learning for robotics? Policies trained on raw images are often fragile—easily broken by lighting, clutter, or object variations—making it challenging to deploy policies…

Abhishek Gupta (@abhishekunique7)'s Twitter Profile Photo

Learned visuomotor policies are notoriously fragile; they break with changes in conditions like lighting, clutter, or object variations, amongst other things. In Yunchu's latest work, we asked whether we could get these policies to be robust and generalizable with a clever…

Ilir Aliu - eu/acc (@iliraliu_)'s Twitter Profile Photo

You don’t need more robot data. You need to look inside the data you already have. [📍 bookmark for later] Instead of just copying demos, STRAP pulls semantically meaningful pieces from large offline datasets to improve robustness and performance… no fine-tuning needed. Why…
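The retrieval step can be sketched as sliding a window over each offline trajectory's per-step features and scoring windows against the target task. The fixed window size and cosine scoring below are simplifications (the actual method retrieves variable-length sub-trajectories matched on learned visual features), and the function name is hypothetical:

```python
import numpy as np

def retrieve_subtrajectories(query_emb, dataset, window=3, top_k=2):
    """Toy sub-trajectory retrieval: score every fixed-length window in
    every offline trajectory by cosine similarity of its mean per-step
    embedding to the query, and return the top_k (score, traj, start)."""
    q = np.asarray(query_emb, dtype=float)
    q = q / np.linalg.norm(q)
    scored = []
    for traj_id, emb in enumerate(dataset):
        emb = np.asarray(emb, dtype=float)            # shape (T, D)
        for start in range(len(emb) - window + 1):
            seg = emb[start:start + window].mean(axis=0)
            score = float(seg @ q / np.linalg.norm(seg))
            scored.append((score, traj_id, start))
    scored.sort(reverse=True)
    return scored[:top_k]
```

The retrieved segments would then be added to the training set alongside the target demos, rather than fine-tuning any pretrained model.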

Arhan Jain (@prodarhan)'s Twitter Profile Photo

A way to do diverse and distributed evaluations for robotics! Check out the sim eval tool I’ve made to help cheaply evaluate and debug policies trained for DROID :) Then submit your policies trained on the DROID platform to the arena and get real-world feedback and comparisons!

Avinandan Bose (@avibose22)'s Twitter Profile Photo

🚨 Code is live! Check out LoRe – a modular, lightweight codebase for personalized reward modeling from user preferences. 📦 Few-shot personalization 📊 Benchmarks: TLDR, PRISM, PersonalLLM 👉 github.com/facebookresear… Huge thanks to AI at Meta for open-sourcing this research 🙌

Andrew Wagenmaker (@ajwagenmaker)'s Twitter Profile Photo

Diffusion policies have demonstrated impressive performance in robot control, yet are difficult to improve online when 0-shot performance isn’t enough. To address this challenge, we introduce DSRL: Diffusion Steering via Reinforcement Learning. (1/n) diffusion-steering.github.io

Abhishek Gupta (@abhishekunique7)'s Twitter Profile Photo

So you’ve trained your favorite diffusion/flow based policy, but it’s just not good enough 0-shot. Worry not, in our new work DSRL - we show how to *steer* pre-trained diffusion policies with off-policy RL, improving behavior efficiently enough for direct training in the real world.
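One way to picture the steering idea, with every name and detail below an assumption for illustration: keep the pre-trained policy frozen, treat its input latent noise as the RL action, and let a critic choose among candidate latents. DSRL proper trains a latent-space actor-critic with off-policy RL rather than this greedy sampling loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_diffusion_policy(obs, z):
    """Stand-in for a frozen pre-trained diffusion policy: maps an
    observation and an initial latent noise z to an action. Only the
    interface matters here, not the mapping itself."""
    return np.tanh(obs + z)

def steer(obs, q_fn, n_candidates=64):
    """DSRL-style steering sketch: instead of fine-tuning the policy's
    weights, sample candidate latents z, decode each through the frozen
    policy, and return the action a learned critic q_fn scores highest."""
    zs = rng.normal(size=(n_candidates,) + np.shape(obs))
    actions = [frozen_diffusion_policy(obs, z) for z in zs]
    scores = [q_fn(obs, a) for a in actions]
    return actions[int(np.argmax(scores))]
```

Because only the latent is optimized, the pre-trained weights never change, which is what makes this kind of improvement cheap enough to run online.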

Yi Ru (Helen) Wang (@yiruhelenwang)'s Twitter Profile Photo

🚨Tired of binary pass/fail metrics that miss the bigger picture? 🤖Introducing #RoboEval — an open benchmark that shows *how* robot manipulation policies behave and *why* they fail, not just *if* they succeed. 🧵1/n 🔗 robo-eval.github.io 📄 robo-eval.github.io/media/RoboEval…