Entong Su
@entongsu
Ph.D. Student @uwcse @uw_robotics
ID: 1563788156864012288
28-08-2022 07:18:51
23 Tweets
344 Followers
1.1K Following
How can we train RL agents that transfer to any reward? In our NeurIPS paper DiSPO, we propose learning the distribution of successor features from a stationary dataset, which enables zero-shot transfer to arbitrary rewards without additional training! A thread 🧵 (1/9)
World modeling and imitation learning have largely been considered two disparate worlds. In our recent work, Unified World Models, just accepted to #RSS2025, Chuning Zhu provides a dead-simple unifying solution: just train a joint diffusion model over actions and future states,
DRAWER is a joint work by me (Hongchi), Entong Su, Marius Memmel, Arhan Jain, Raymond, Numfor, Prof. Ali Farhadi, Prof. Abhishek Gupta, Prof. Shenlong Wang, and Prof. Wei-Chiu Ma. Thanks for all your contributions!