Bowen Wen (@bowenwen_me)'s Twitter Profile
Bowen Wen

@bowenwen_me

Senior Research Scientist @NVIDIA, Computer Vision, Robotics | previously @GoogleX, @Meta, @Amazon.
Opinions are my own.

ID: 961094113696387073

Link: https://wenbowen123.github.io/ | Joined: 07-02-2018 04:27:47

116 Tweets

596 Followers

360 Following

Yu Xiang (@yuxiang_irvl)'s Twitter Profile Photo

I was preparing a video to introduce our lab, the Intelligent Robotics and Vision Lab @ UTDallas, for a meeting. Happy to share the video here! We are looking forward to collaborating with both academia and industry. Please feel free to reach out.

amaman (@amarectv)'s Twitter Profile Photo

I wonder whether it can maintain frame-to-frame consistency in video. Also curious about processing speed and about rounded objects such as people.

Ajay Mandlekar (@ajaymandlekar)'s Twitter Profile Photo

Synthetic data generation tools like MimicGen create large sim datasets with ease, but using them in the real world is difficult due to the large sim-to-real gap. Our new work uses simple co-training to unlock the potential of synthetic sim data for real-world manipulation!
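
A minimal sketch of what sim/real co-training can look like in practice, assuming a generic behavior-cloning setup in PyTorch: every gradient step mixes a batch of synthetic (MimicGen-style) demonstrations with a smaller batch of real-robot demonstrations. The datasets, network, batch sizes, and mixing ratio below are illustrative placeholders, not the recipe from the paper.

```python
# Illustrative sim/real co-training loop (placeholder data and model,
# not the exact setup from the MimicGen co-training work).
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in datasets of (observation, action) pairs.
sim_data = TensorDataset(torch.randn(1000, 32), torch.randn(1000, 7))   # synthetic demos
real_data = TensorDataset(torch.randn(100, 32), torch.randn(100, 7))    # real-robot demos

sim_loader = DataLoader(sim_data, batch_size=48, shuffle=True)
real_loader = DataLoader(real_data, batch_size=16, shuffle=True)

policy = torch.nn.Sequential(
    torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def endless(loader):
    """Cycle a DataLoader forever so sim and real batches can be drawn in lockstep."""
    while True:
        yield from loader

sim_iter, real_iter = endless(sim_loader), endless(real_loader)
for step in range(500):
    sim_obs, sim_act = next(sim_iter)
    real_obs, real_act = next(real_iter)
    # Co-training: each update sees both synthetic and real samples.
    obs = torch.cat([sim_obs, real_obs])
    act = torch.cat([sim_act, real_act])
    loss = torch.nn.functional.mse_loss(policy(obs), act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```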

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Super cool project! Glad to see FoundationPose (github.com/NVlabs/Foundat…) enables learning from low-cost hand-object demonstrations.

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Come and join us! Also, make sure you have signed up for our social event (events.nvidia.com/nvcvprresearch…) and earn a free GPU 😍 #CVPR #CVPR2025

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Want a better representation for collision avoidance and grasping from dense clutter? Try out RaySt3R: our new 3D shape completion pipeline from single-view RGBD (led by Bardienus Duisterhof)!

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Kudos to the Aria team for their exciting support of FoundationStereo (nvlabs.github.io/FoundationSter…)! High-quality 3D human demonstration data collection for robot learning will be a breeze 😌 NVIDIA AI Developer AI at Meta #xr #technology

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Incredible learned behavior (assuming no human intervention) at 48:05: it failed a couple of times but then suddenly figured out how to make it right. Amazing progress!

Yu Xiang (@yuxiang_irvl)'s Twitter Profile Photo

I use two factors to analyze robot autonomy: environment diversity and task diversity. If a robot just replays data from a single task and environment, of course it’ll succeed. Real autonomy lies in pushing toward the top-right corner of this figure—generalizing both.

NVIDIA Robotics (@nvidiarobotics)'s Twitter Profile Photo

Explore a variety of perception models and systems from #NVIDIAResearch that support a unified 3D perception stack for #robotics. These tools enable robots to understand and interact with unfamiliar environments in real time. 🤖 Learn more 👉 nvda.ws/4jXKSPE

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

Stereo depth sensing is set to revolutionize 3D perception. Can't wait to see the new innovations and applications that emerge! #3Dperception #computervision #robotics realsenseai.com/news-insights/…

Xin Eric Wang @ ICLR 2025 (@xwang_lk)'s Twitter Profile Photo

Why don't you just say "this message is for Chinese researchers"? Besides, I am also amazed by your superpower to recognize the ethnicity of anonymous reviewers. Otherwise, how could one just assume a negative review is from a WeChat user?

Haochen Shi (@haochenshi74)'s Twitter Profile Photo

On-board (Jetson Orin NX 16GB) real-time (10Hz) depth estimation from stereo fisheye cameras with github.com/NVlabs/Foundat… by Bowen Wen (5/n)
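
A hedged sketch of the kind of pipeline such a demo implies: rectify the fisheye pair with OpenCV's fisheye module, run a learned stereo matcher on the rectified images, and convert disparity to metric depth. The calibration numbers and the `stereo_model` callable (a stand-in for FoundationStereo inference, whose actual API is not shown in the tweet) are assumptions.

```python
# Sketch: fisheye stereo rectification + learned stereo matching to metric depth.
# Calibration values are placeholders; `stereo_model` stands in for a learned
# stereo network (e.g., FoundationStereo) whose real inference API is assumed.
import cv2
import numpy as np

# Placeholder intrinsics/extrinsics; real values come from fisheye stereo calibration.
K_l = K_r = np.array([[285.0, 0.0, 320.0],
                      [0.0, 285.0, 240.0],
                      [0.0, 0.0, 1.0]])
D_l = D_r = np.zeros((4, 1))            # equidistant (fisheye) distortion coefficients
R = np.eye(3)                            # rotation from left to right camera
T = np.array([[0.064], [0.0], [0.0]])    # example 6.4 cm baseline
size = (640, 480)

# Rectification transforms and projection matrices for the pair.
R1, R2, P1, P2, Q = cv2.fisheye.stereoRectify(
    K_l, D_l, K_r, D_r, size, R, T, flags=cv2.CALIB_ZERO_DISPARITY)
map_lx, map_ly = cv2.fisheye.initUndistortRectifyMap(K_l, D_l, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.fisheye.initUndistortRectifyMap(K_r, D_r, R2, P2, size, cv2.CV_32FC1)

def stereo_depth(left, right, stereo_model):
    """Rectify a raw fisheye pair and convert the model's disparity to depth (meters)."""
    left_rect = cv2.remap(left, map_lx, map_ly, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right, map_rx, map_ry, cv2.INTER_LINEAR)
    disparity = stereo_model(left_rect, right_rect)      # learned stereo matcher (assumed API)
    fx, baseline = P1[0, 0], float(abs(T[0, 0]))
    return fx * baseline / np.maximum(disparity, 1e-6)
```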