Jiafei Duan (@djiafei)'s Twitter Profile
Jiafei Duan

@djiafei

Robotics and AI PhD Student @uwcse @uw_robotics | @NVIDIA Intern | AI Research @ASTARsg | BEng from @ntueee. Research in robot learning and embodied AI.

ID: 1350139361824673792

Link: http://www.duanjiafei.com | Joined: 15-01-2021 17:54:52

537 Tweets

1.1K Followers

813 Following

Jiafei Duan (@djiafei)'s Twitter Profile Photo

Excited to announce our upgraded AR2-D2 user interface, EVE 🤖! Led by my undergrad Jun Wang, this project dives deep into user studies across diverse backgrounds to find the most intuitive ways to collect robot data—no robot needed. Thrilled to share that this work is accepted at

Sriyash Poddar (@sriyash__)'s Twitter Profile Photo

How can we align foundation models with populations of diverse users with different preferences? We are excited to share our work on Personalizing RLHF using Variational Preference Learning! 🧵 📜: arxiv.org/abs/2408.10075 🌎: weirdlabuw.github.io/vpl/
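
For anyone curious what a latent-conditioned preference loss looks like in code, here is a minimal PyTorch sketch of the general idea I take from the announcement, not the paper's actual implementation: an encoder infers a per-user latent from that user's labeled comparisons, and a reward model conditioned on that latent is trained with a Bradley-Terry preference loss plus a VAE-style KL term. All module names, dimensions, and the KL weight are my own placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentConditionedReward(nn.Module):
    """Illustrative sketch: reward model r(x | z) conditioned on a latent user embedding z."""
    def __init__(self, obs_dim: int, latent_dim: int = 8, hidden: int = 256):
        super().__init__()
        # Encoder: maps a user's labeled comparisons to q(z | user data).
        self.encoder = nn.Sequential(
            nn.Linear(2 * obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),   # outputs mean and log-variance
        )
        # Reward head: scores an item given the sampled user latent.
        self.reward = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, ctx_a, ctx_b, ctx_label, x_a, x_b, label):
        # ctx_*: (B, K, obs_dim) context comparisons and (B, K) labels for one user;
        # x_a, x_b: (B, obs_dim) held-out pair; label: (B,) float in {0, 1}.
        stats = self.encoder(
            torch.cat([ctx_a, ctx_b, ctx_label.unsqueeze(-1)], dim=-1)
        ).mean(dim=1)
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick

        # Bradley-Terry likelihood of the held-out preference under r(x | z).
        r_a = self.reward(torch.cat([x_a, z], dim=-1)).squeeze(-1)
        r_b = self.reward(torch.cat([x_b, z], dim=-1)).squeeze(-1)
        pref_loss = F.binary_cross_entropy_with_logits(r_a - r_b, label)

        # KL(q(z | user) || N(0, I)) regularizes the user latent space.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return pref_loss + 1e-3 * kl
```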

Jiafei Duan (@djiafei)'s Twitter Profile Photo

I think there are a few things that make a robotics simulator good for the research community: 1. Good implementations of baselines from both RL and BC (and maybe zero-shot methods). 2. Easy to set up, customize, and debug. 3. Great and responsive tech support. I would say this is

Jiafei Duan (@djiafei)'s Twitter Profile Photo

Great work from Wenlong Huang on leveraging visual prompting to generate task-specific keypoints and constraints for zero-shot tasks! It further suggests that when VLMs are used the right way, we can obtain ‘free’ robot data or zero-shot deployment from the process.
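
For readers new to the visual-prompting pattern, here is a minimal sketch of the general idea, not Wenlong's actual pipeline: overlay numbered candidate keypoints on the image so the VLM can refer to them by index, then ask it which ones matter for the task. The `query_vlm` callable is a hypothetical stand-in for whatever VLM interface you use.

```python
from PIL import Image, ImageDraw

def annotate_candidates(image: Image.Image, keypoints: list[tuple[int, int]]) -> Image.Image:
    """Draw numbered markers on candidate keypoints so a VLM can reference them by index."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for i, (x, y) in enumerate(keypoints):
        draw.ellipse([x - 6, y - 6, x + 6, y + 6], outline="red", width=2)
        draw.text((x + 8, y - 8), str(i), fill="red")
    return annotated

def select_task_keypoints(image, keypoints, task, query_vlm):
    """Ask a VLM (via the caller-supplied, hypothetical `query_vlm(image, prompt)`)
    which numbered keypoints are relevant for the task, e.g. grasp point and placement target."""
    prompt = (
        f"The image shows numbered candidate keypoints. For the task '{task}', "
        "list the indices of the keypoints to grasp and to move toward, as JSON "
        '{"grasp": [...], "target": [...]}.'
    )
    return query_vlm(annotate_candidates(image, keypoints), prompt)
```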

Jiafei Duan (@djiafei)'s Twitter Profile Photo

Really impressive performance from Qwen2-VL; time to swap out Qwen-VL for object part-level detection in Manipulate-Anything.
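
In case it helps anyone trying the same swap, here is a minimal sketch of querying Qwen2-VL through Hugging Face transformers for a part-level bounding box, roughly following the usage pattern from the public model card. The prompt wording, image path, and output parsing are my own assumptions, not how Manipulate-Anything actually wires it in.

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Ask for a part-level region of an object in the scene (prompt is illustrative).
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "scene.png"},
        {"type": "text", "text": "Locate the handle of the mug. "
                                 "Answer with one bounding box as [x1, y1, x2, y2]."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)  # e.g. a bounding-box string to parse downstream
```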

Jiafei Duan (@djiafei)'s Twitter Profile Photo

Happy to share that both of our works, Manipulate-Anything and RoboPoint, have been accepted to #CoRL2024! Looking forward to seeing everyone at CoRL in Germany 🇩🇪. Manipulate-Anything: robot-ma.github.io RoboPoint: robo-point.github.io Code for both coming soon!

Jiafei Duan (@djiafei)'s Twitter Profile Photo

26K demos is way too many for 5 tasks, and that's without questioning how generalizable the policy is to different environmental perturbations. We want robots that can go beyond just 5 tasks, but if 26K demos are needed then we need to find ways to scale data collection in a scalable

Jiafei Duan (@djiafei)'s Twitter Profile Photo

I saw the list of accepted papers for CoRL this year, and there are tons of papers focused on VLMs/VLAs, tele-op systems, and humanoids. It seems like a strong reflection of current trends in robotics, aligning with what's happening in the real world.