David McAllister (@davidrmcall)'s Twitter Profile
David McAllister

@davidrmcall

PhD Student @berkeley_ai | prev @LumaLabsAI

ID: 1801743598784036864

Joined: 14-06-2024 22:28:46

73 Tweets

230 Followers

183 Following

Takara Truong (@takaratruong)'s Twitter Profile Photo

They say the best time to tweet about your research was a year ago; the second best time is now. With RAI (formerly the Boston Dynamics AI Institute), we present DiffuseCloC, the first guidable physics-based diffusion model. diffusecloc.github.io/website/
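The tweet only names the technique, so here is a generic sketch of what "guidable" diffusion sampling usually means: each denoising step is nudged by the gradient of a goal cost. This is textbook guidance with placeholder names and a stand-in denoiser, not DiffuseCloC's actual algorithm (see the project site for that).

```python
import torch

def guided_denoise_step(x, denoiser, goal_cost, guidance_scale=1.0):
    """One guided denoising step: steer the model's denoised estimate
    toward lower goal cost. All names here are illustrative."""
    x = x.detach().requires_grad_(True)
    x_pred = denoiser(x)                      # model's denoised estimate
    cost = goal_cost(x_pred)                  # e.g. distance to a target pose
    grad = torch.autograd.grad(cost, x)[0]
    return (x_pred - guidance_scale * grad).detach()

denoiser = lambda x: 0.9 * x                  # stand-in denoiser, not a real model
goal = torch.zeros(1, 8)
x = torch.randn(1, 8)
for _ in range(10):                           # iterate guided denoising
    x = guided_denoise_step(x, denoiser, lambda xp: ((xp - goal) ** 2).sum())
```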

Catherine Glossop (@catglossop)'s Twitter Profile Photo

Inherent biases and imbalances in robot data can make training steerable VLA policies challenging. We introduce CAST, a method to augment datasets with counterfactuals to induce better language following: cast-vla.github.io ← paper, code, data, and more available here! 🧵
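As a rough illustration of the counterfactual-augmentation idea (not CAST's actual code; the window size, thresholds, and data layout below are all invented), one could relabel trajectory windows with the language command their actions actually realize, rebalancing rare behaviors:

```python
import random

def label_for(yaw_rate):
    """Map a mean yaw rate to a language command (thresholds are made up)."""
    if yaw_rate > 0.2:
        return "turn left"
    if yaw_rate < -0.2:
        return "turn right"
    return "go straight"

def augment_with_counterfactuals(trajectories, window=20):
    """Slice each trajectory into windows and relabel each window with the
    command its actions realize, so 'turn' examples are no longer drowned
    out by the dominant 'go straight' data."""
    out = []
    for traj in trajectories:  # traj: {"obs": [...], "yaw_rates": [...]}
        for start in range(0, len(traj["yaw_rates"]) - window, window):
            seg = traj["yaw_rates"][start:start + window]
            out.append({
                "obs": traj["obs"][start:start + window],
                "actions": seg,
                "instruction": label_for(sum(seg) / len(seg)),
            })
    random.shuffle(out)
    return out
```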

University of California (@uofcalifornia)'s Twitter Profile Photo

Republican and Democratic voters share common ground when it comes to the University of California: Both sides express widespread support for UC, its research, medical centers and ability to elevate the lives of students, a statewide poll shows. (via Los Angeles Times)

Oleg Rybkin (@_oleh)'s Twitter Profile Photo

Want more scaling laws for value-based RL? Preston and I analyzed scaling model size! Larger models predictably improve data efficiency and performance, reduce overfitting, and allow larger batch sizes. After this, I am now more optimistic than ever about TD-learning.
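To make the "predictably" concrete, here is a minimal sketch of how such a scaling fit is typically done: regress log(data-to-threshold) on log(model size). The numbers below are made up for illustration; the real fits are in the paper.

```python
import numpy as np

params = np.array([1e6, 4e6, 16e6, 64e6])        # model sizes (hypothetical)
env_steps = np.array([8e6, 5e6, 3.2e6, 2.1e6])   # steps to target return (made up)

# Fit env_steps ~ a * params**b via linear regression in log-log space.
b, log_a = np.polyfit(np.log(params), np.log(env_steps), 1)
print(f"data-to-threshold scales as N^{b:.2f}")
# A negative exponent means larger models need predictably fewer samples,
# i.e. the data-efficiency improvement the tweet describes.
```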

Jason Liu (@jasonjzliu)'s Twitter Profile Photo

Ever wish a robot could just move to any goal in any environment, avoiding all collisions and reacting in real time? 🚀 Excited to share our #CoRL2025 paper, Deep Reactive Policy (DRP), a learning-based motion planner that navigates complex scenes with moving obstacles, directly...

Ritvik Singh (@ritvik_singh9)'s Twitter Profile Photo

Happy to announce that we have finally open-sourced the code for DextrAH-RGB along with Geometric Fabrics: github.com/NVlabs/DEXTRAH github.com/NVlabs/FABRICS

Ethan Weber (@ethanjohnweber)'s Twitter Profile Photo

It's live! 🎉 🗺️ It was very fun working with Nikhil Keetha and our team at Meta for this release. I'm excited to see how the community uses it. 😃

Justin Kerr (@justkerrding)'s Twitter Profile Photo

Should robots have eyeballs? Human eyes move constantly and use variable resolution to actively gather visual details. In EyeRobot (eyerobot.net) we train a robot eyeball entirely with RL: eye movements emerge from experience, driven by task-driven rewards.
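A toy sketch of the setup the tweet describes, with an entirely invented environment: the action pans/tilts the gaze, and reward comes only from the downstream task (here, keeping a drifting target near the fovea). Illustrative only, not the actual EyeRobot code.

```python
import numpy as np

class ToyEyeEnv:
    """Stand-in 'eyeball' environment: bounded eye movements, task reward."""
    def reset(self):
        self.gaze = np.zeros(2)
        self.target = np.random.uniform(-1, 1, size=2)
        return self.target - self.gaze  # observation: target offset from fovea

    def step(self, action):
        self.gaze += np.clip(action, -0.1, 0.1)            # bounded saccade
        self.target += np.random.normal(0, 0.02, size=2)   # target drifts
        err = np.linalg.norm(self.target - self.gaze)
        return self.target - self.gaze, -err, False, {}    # reward = -tracking error

env = ToyEyeEnv()
obs = env.reset()
for _ in range(100):
    obs, reward, done, _ = env.step(0.5 * obs)  # proportional gaze policy;
    # RL would instead learn the mapping from observation to eye movement.
```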

Kevin Zakka (@kevin_zakka)'s Twitter Profile Photo

I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.
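The tweet doesn't show mjlab's API, so rather than guess it, here is the plain MuJoCo stepping loop that sits underneath; mjlab layers Isaac Lab-style abstractions and GPU parallelism (via MuJoCo Warp) on top of physics like this.

```python
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
for _ in range(1000):           # step the free box falling under gravity
    mujoco.mj_step(model, data)
print(data.qpos[:3])            # final box position
```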

Kevin Frans (@kvfrans)'s Twitter Profile Photo

Clean and well-executed new work from Danijar Hafner and Wilson Yan, and it's cool to see shortcut models working at scale! The exciting finding is that you can train the world model largely on *unlabelled* videos and only need a small action-anchoring dataset.
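A hedged sketch of that two-stage recipe, with placeholder architecture and dummy tensors (not the paper's code): pretrain a latent dynamics model on unlabelled video via next-frame prediction, then fit a small action head on the little labelled data you have.

```python
import torch
import torch.nn as nn

world_model = nn.GRU(input_size=64, hidden_size=256, batch_first=True)
decoder = nn.Linear(256, 64)
action_head = nn.Linear(256, 8)   # trained later, on the small labelled set

# Stage 1: next-frame prediction on unlabelled video embeddings.
frames = torch.randn(32, 16, 64)  # (batch, time, embed) dummy data
opt = torch.optim.Adam([*world_model.parameters(), *decoder.parameters()], lr=3e-4)
hidden, _ = world_model(frames[:, :-1])
loss = nn.functional.mse_loss(decoder(hidden), frames[:, 1:])
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: a small action-anchoring set maps the learned latents to actions.
actions = torch.randn(32, 15, 8)  # dummy action labels
opt2 = torch.optim.Adam(action_head.parameters(), lr=3e-4)
loss2 = nn.functional.mse_loss(action_head(hidden.detach()), actions)
opt2.zero_grad(); loss2.backward(); opt2.step()
```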

Zhen Wu (@zhenkirito123)'s Twitter Profile Photo

Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality, interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: 5 rewards, 4 DR...
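As a cartoon of what "interaction-preserving" retargeting involves (invented names, indices, and shapes, not OmniRetarget's pipeline): scale the human motion to the robot's proportions, then re-pin the frames where contact with an object must hold, since naive scaling breaks those contacts.

```python
import numpy as np

def retarget(human_joints, limb_scale, contact_frames, object_pos):
    """human_joints: (T, J, 3) human keypoints; limb_scale: humanoid/human
    limb-length ratio; contact_frames: frames where the hand must stay on
    the object so the interaction survives retargeting."""
    robot_joints = human_joints * limb_scale   # naive kinematic scaling
    HAND = 0                                   # hand joint index (illustrative)
    for t in contact_frames:                   # re-pin contacts scaling broke
        robot_joints[t, HAND] = object_pos[t]
    return robot_joints

T, J = 120, 17
traj = retarget(np.random.randn(T, J, 3), limb_scale=0.9,
                contact_frames=range(30, 60),
                object_pos=np.zeros((T, 3)))
```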

Ethan Weber (@ethanjohnweber)'s Twitter Profile Photo

📢 SceneComp @ ICCV 2025 🏝️ 🌎 Generative Scene Completion for Immersive Worlds 🛠️ Reconstruct what you know AND 🪄 generate what you don't! 🙌 Meet our speakers: Angela Dai, Aleksander Holynski, Varun Jampani, Zan Gojcic, Andrea Tagliasacchi 🇨🇦, Peter Kontschieder. scenecomp.github.io #ICCV2025

SemiAnalysis (@semianalysis_)'s Twitter Profile Photo

Teaching a humanoid just from a random iPhone recording? The Conference on Robot Learning's best student paper, VideoMimic, does just that: it takes a video of a human acting and teaches a robot to do the same. How? (1/5) 🧵
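A pipeline-shaped sketch of the idea, with trivial placeholder stages (the real system is far more involved; see the VideoMimic paper): reconstruct the human's motion from the video, retarget it to the robot, then train a policy that tracks the reference.

```python
import numpy as np

def estimate_human_motion(video_frames):
    """Placeholder for monocular human and scene reconstruction."""
    return np.random.randn(len(video_frames), 17, 3)   # (T, joints, xyz)

def retarget_to_robot(human_motion, scale=0.9):
    """Placeholder kinematic retargeting onto the humanoid skeleton."""
    return human_motion * scale

def train_tracking_policy(reference_motion):
    """Placeholder for RL that rewards tracking the reference motion."""
    return lambda obs, t: reference_motion[t % len(reference_motion)]

frames = [None] * 300   # stands in for an iPhone video
policy = train_tracking_policy(retarget_to_robot(estimate_human_motion(frames)))
```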

Ruoshi Liu (@ruoshi_liu)'s Twitter Profile Photo

Everyone says they want general-purpose robots. We actually mean it, and we'll make it weird, creative, and fun along the way 😎 Recruiting PhD students to work on Computer Vision and Robotics at the UMD Department of Computer Science for Fall 2026, in the beautiful city of Washington DC!

Jiaxin Ge (@aomaru_21490)'s Twitter Profile Photo

✨ Introducing ECHO, the newest in-the-wild image generation benchmark! You've seen new image models and new use cases discussed on social media, but old benchmarks don't test them! We distilled this qualitative discussion into a structured benchmark. 🔗 echo-bench.github.io

Alejandro Escontrela (@alescontrela)'s Twitter Profile Photo

Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels, with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵
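As a sketch of what "locomotion from pixels" implies computationally (all shapes and names invented, not GaussGym's interface): each step, a policy network consumes one rendered frame per parallel environment and emits a batch of joint targets.

```python
import torch
import torch.nn as nn

NUM_ENVS, H, W = 1024, 64, 64   # massively parallel rendered views (illustrative)

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(12),           # 12 joint targets (hypothetical robot)
)

frames = torch.rand(NUM_ENVS, 3, H, W)  # one photoreal render per env
actions = policy(frames)                # (1024, 12) batch of actions
print(actions.shape)
```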