Entong Su (@entongsu) 's Twitter Profile
Entong Su

@entongsu

Ph.D. Student @uwcse @uw_robotics

ID: 1563788156864012288

Joined: 28-08-2022 07:18:51

23 Tweets

344 Followers

1.1K Following

Xiaolong Wang (@xiaolonw) 's Twitter Profile Photo

Two PhD students graduated two weeks ago: Yuzhe Qin (co-advised with Hao Su) and Yueh-Hua Wu (Kris Wu). They are my first batch of robotics students. When I was a student, Alyosha told me: "Good students are your friends, you can learn from them." Yuzhe and Yueh-Hua

Entong Su (@entongsu) 's Twitter Profile Photo

Congratulations Yuzhe Qin and Kris Wu! Being part of Prof. Xiaolong Wang's lab during my master's has been incredible. Yuzhe has been exceptionally reliable and helpful, with full-stack robotics knowledge, and taught me many essential skills. Best wishes for your future!

Jiafei Duan (@djiafei) 's Twitter Profile Photo

Humans learn and improve from failures. Similarly, foundation models adapt based on human feedback. Can we leverage this failure understanding to enhance robotics systems that use foundation models? Introducing AHA—a vision-language model for detecting and reasoning over

Chuning Zhu (@chuning_zhu) 's Twitter Profile Photo

How can we train RL agents that transfer to any reward? In our NeurIPS paper DiSPO, we propose learning the distribution of successor features of a stationary dataset, which enables zero-shot transfer to arbitrary rewards without additional training! A thread 🧵(1/9)
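The zero-shot transfer idea above builds on the classic successor-feature identity: if reward is linear in state features, r(s) = w·φ(s), then values for any new reward weights w come from a single dot product with precomputed successor features, with no retraining. A toy sketch (all names, shapes, and the random "learned" quantities here are illustrative assumptions, not the DiSPO implementation):

```python
import numpy as np

# Toy sketch of zero-shot reward transfer with successor features (SFs).
# Assume reward is linear in state features: r(s) = w @ phi(s).
rng = np.random.default_rng(0)
n_states, d = 5, 3

phi = rng.normal(size=(n_states, d))  # state features phi(s)
# Pretend these SFs were learned offline: psi(s) ~ E[sum_t gamma^t phi(s_t)]
psi = rng.normal(size=(n_states, d))

def value_for_reward(w: np.ndarray) -> np.ndarray:
    """Zero-shot value estimate V(s) = w @ psi(s) for any linear reward w."""
    return psi @ w

w_task_a = np.array([1.0, 0.0, 0.0])   # e.g. a "reach goal" reward
w_task_b = np.array([0.0, 1.0, -1.0])  # a different reward, no retraining

v_a = value_for_reward(w_task_a)
v_b = value_for_reward(w_task_b)
print(v_a.shape, v_b.shape)  # (5,) (5,)
```

The point of the sketch is only the transfer step: once ψ is learned, evaluating a new reward is a matrix-vector product rather than a new training run.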

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

How can we enable transferable decision-making for *any* reward zero-shot? MBRL is task-agnostic but suffers from compounding error, while MFRL is task-specific. We propose a new class of world models that transfers across tasks zero-shot and avoids compounding error! A 🧵 (1/9)

Marcel Torné (@marceltornev) 's Twitter Profile Photo

Robot learning is fundamentally limited by data – human teleoperation on real robots is expensive! 🤖👨‍💻We propose an alternative – scalable data collection in simulation by crowdsourcing video scans of homes. In our latest work, we study how we can scale up policy training over

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

So I heard we need more data for robot learning :) Purely real world teleop is expensive and slow, making large scale data collection challenging. I’ve been excited about getting more data into robot learning, going beyond just real-world teleop data. To this end, we’ve been

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

Haven't been to a conference in a while, really excited to be at #NeurIPS2024! I'll be helping present 4 of our group's recent papers:
1. Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL arxiv.org/abs/2410.20254
2. Distributional

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

In my experience, robot 'generalists' are often jacks of all trades but masters of none. In training across multiple tasks and environments, robot policies fail to generalize robustly and effectively to each particular test setting. What if at test time, we non-parametrically

Marius Memmel (@memmelma) 's Twitter Profile Photo

Have some offline data lying around? Use it to robustify few-shot imitation learning! 🤖 STRAP 🎒 is a retrieval-based method that leverages semantic sub-trajectories in offline datasets to augment the training data. 🧵 1/6
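The retrieval step described above can be sketched as a sliding-window nearest-neighbor search: embed each sub-trajectory in the offline dataset, score it against an embedding of the demo, and keep the top matches as extra training data. The embedding function, window size, and similarity metric below are illustrative stand-ins, not STRAP's actual recipe:

```python
import numpy as np

# Minimal sketch of sub-trajectory retrieval for data augmentation.
rng = np.random.default_rng(1)
window = 4                           # sub-trajectory length
offline = rng.normal(size=(100, 8))  # offline dataset: 100 steps, 8-dim features
demo = rng.normal(size=(window, 8))  # one few-shot demo snippet

def embed(traj: np.ndarray) -> np.ndarray:
    """Embed a sub-trajectory as its normalized mean feature
    (a stand-in for a learned encoder)."""
    v = traj.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

query = embed(demo)
# Slide a window over the offline data and score each sub-trajectory.
starts = range(len(offline) - window + 1)
scores = np.array([embed(offline[s:s + window]) @ query for s in starts])

top_k = np.argsort(scores)[-5:][::-1]          # indices of the 5 best matches
retrieved = [offline[s:s + window] for s in top_k]
print(len(retrieved), retrieved[0].shape)      # 5 (4, 8)
```

The retrieved sub-trajectories would then be mixed into the imitation-learning batch alongside the few-shot demos.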

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

So we did a bunch of projects with real world reinforcement learning - but it was often too inefficient to be practical to train tabula rasa. This suggests we need better priors, but acquiring these from on-robot data can often be expensive as well. In our recent work, we show

Patrick Yin (@patrickhyin) 's Twitter Profile Photo

Current RL finetuning methods are too inefficient to make autonomous real world robot learning tractable. We propose Simulation-Guided Fine-Tuning (SGFT) - a simple, general sim2real framework that extracts structured exploration priors from sim to accelerate real world RL. 🧵1/6

YI LI (@yi_li_uw) 's Twitter Profile Photo

🚀 Meet 🐹HAMSTER, our new hierarchical Vision-Language-Action (VLA) framework for robot manipulation!
🔹 High-level VLM for perception & reasoning
🔹 Low-level 3D policy for precise control
🔹 Bridged by 2D paths for trajectory planning
HAMSTER learns from cost-effective

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

World modeling and imitation learning have largely been considered two disparate worlds. In our recent work, Unified World Models, just accepted to #RSS2025, Chuning Zhu provides a dead-simple unifying solution: just train a joint diffusion model over actions and future states,

Hongchi Xia (@xhongchi97338) 's Twitter Profile Photo

Glad to introduce our #CVPR2025 paper "DRAWER", allowing one to create a realistic and interactable digital twin from a video of a static scene without any interactions with the environment. It unlocks many opportunities in gaming and robotics! Webpage: drawer-art.github.io

Hongchi Xia (@xhongchi97338) 's Twitter Profile Photo

Check out more results of DRAWER on our webpage:
Webpage: drawer-art.github.io

Paper and code can be found below:
Paper: arxiv.org/pdf/2504.15278
Code: github.com/xiahongchi/DRA…

Hongchi Xia (@xhongchi97338) 's Twitter Profile Photo

DRAWER is a joint work by me (Hongchi), Entong Su, Marius Memmel, Arhan Jain, Raymond, Numfor, Prof. Ali Farhadi, Prof. Abhishek Gupta, Prof. Shenlong Wang, and Prof. Wei-Chiu Ma. Thanks for all your contributions!

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

Very excited to be at #ICLR2025 in Singapore helping present some of the work done by our group! We'll be presenting 4 papers:
1. Rapidly Adapting Policies to the Real-World via Simulation-Guided Fine-Tuning weirdlabuw.github.io/sgft/
2. Robot Sub-Trajectory Retrieval for

Abhishek Gupta (@abhishekunique7) 's Twitter Profile Photo

Constructing interactive simulated worlds has been a challenging problem, requiring considerable manual effort for asset creation and articulation, and for composing assets into full scenes. In our new work, DRAWER, we made the process of creating scenes in simulation as simple