David McAllister (@davidrmcall)'s Twitter Profile

PhD Student @berkeley_ai | prev @LumaLabsAI

ID: 1801743598784036864

Joined: 14-06-2024 22:28:46

73 Tweets

230 Followers

183 Following

Takara Truong (@takaratruong) 's Twitter Profile Photo

They say the best time to tweet about your research was a year ago; the second best time is now. With RAI (formerly the Boston Dynamics AI Institute), we present DiffuseCloC, the first guidable physics-based diffusion model. diffusecloc.github.io/website/

Catherine Glossop (@catglossop) 's Twitter Profile Photo

Inherent biases and imbalances in robot data can make training steerable VLA policies challenging. We introduce CAST, a method that augments datasets with counterfactuals to induce better language following. Paper, code, data, and more available at cast-vla.github.io 🧵

University of California (@uofcalifornia) 's Twitter Profile Photo

Republican and Democratic voters share common ground when it comes to the University of California: Both sides express widespread support for UC, its research, medical centers and ability to elevate the lives of students, a statewide poll shows. (via Los Angeles Times)

Oleg Rybkin (@_oleh) 's Twitter Profile Photo

Want more scaling laws for value-based RL? Preston and I analyzed scaling model size! Larger models predictably improve data efficiency and performance, reduce overfitting, and allow larger batch sizes. After this, I am more optimistic than ever about TD-learning.

Jason Liu (@jasonjzliu) 's Twitter Profile Photo

Ever wish a robot could just move to any goal in any environment—avoiding all collisions and reacting in real time? 🚀Excited to share our #CoRL2025 paper, Deep Reactive Policy (DRP), a learning-based motion planner that navigates complex scenes with moving obstacles—directly

Ritvik Singh (@ritvik_singh9) 's Twitter Profile Photo

Happy to announce that we have finally open sourced the code for DextrAH-RGB along with Geometric Fabrics: github.com/NVlabs/DEXTRAH github.com/NVlabs/FABRICS

Ethan Weber (@ethanjohnweber) 's Twitter Profile Photo

It’s live! 🎉 🗺️ It was very fun working with Nikhil Keetha and our team at Meta on this release. I’m excited to see how the community uses it. 😃

Justin Kerr (@justkerrding) 's Twitter Profile Photo

Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (eyerobot.net) we train a robot eyeball entirely with RL: eye movements emerge from experience, driven by task rewards.

Kevin Zakka (@kevin_zakka) 's Twitter Profile Photo

I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.

Kevin Frans (@kvfrans) 's Twitter Profile Photo

Clean and well-executed new work from Danijar Hafner and Wilson Yan, and it's cool to see shortcut models working at scale! The exciting finding is that you can train the world model largely on *unlabelled* videos, and only need a small action-anchoring dataset.

Zhen Wu (@zhenkirito123) 's Twitter Profile Photo

Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality, interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: 5 rewards, 4 DR

Ethan Weber (@ethanjohnweber) 's Twitter Profile Photo

📢 SceneComp @ ICCV 2025 🏝️ 🌎 Generative Scene Completion for Immersive Worlds 🛠️ Reconstruct what you know AND 🪄 generate what you don’t! 🙌 Meet our speakers: Angela Dai, Aleksander Holynski, Varun Jampani, Zan Gojcic, Andrea Tagliasacchi 🇨🇦, and Peter Kontschieder. scenecomp.github.io #ICCV2025

SemiAnalysis (@semianalysis_) 's Twitter Profile Photo

Teaching a humanoid just from a random iPhone recording? The Conference on Robot Learning's best student paper, VideoMimic, does just that: it takes a video of a human acting and teaches a robot to do the same. How? (1/5) 🧵

Ruoshi Liu (@ruoshi_liu) 's Twitter Profile Photo

Everyone says they want general-purpose robots. We actually mean it, and we’ll make it weird, creative, and fun along the way 😎 Recruiting PhD students to work on Computer Vision and Robotics at the UMD Department of Computer Science for Fall 2026, in the beautiful city of Washington DC!

Jiaxin Ge (@aomaru_21490) 's Twitter Profile Photo

✨Introducing ECHO, the newest in-the-wild image generation benchmark! You’ve seen new image models and new use cases discussed on social media, but old benchmarks don’t test them. We distilled this qualitative discussion into a structured benchmark. 🔗 echo-bench.github.io

Alejandro Escontrela (@alescontrela) 's Twitter Profile Photo

Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵