Jianing “Jed” Yang (@jed_yang) 's Twitter Profile
Jianing “Jed” Yang

@jed_yang

Research Intern @Adobe @Meta | PhD 🎓 @UMich working on 3D Vision and Embodied AI. Prev. @CarnegieMellon @GeorgiaTech. Graduating in 2025, DM for jobs!

ID: 745826425

Website: http://jedyang.com | Joined: 08-08-2012 19:49:01

165 Tweets

920 Followers

1.1K Following

Mikael Henaff (@henaffmikael) 's Twitter Profile Photo

Excited to share our Fast3R paper, to be presented at CVPR 2025. This recasts 3D reconstruction and camera pose estimation from video as an end-to-end learning problem, leading to ~4x-300x improvements in speed while maintaining performance. Code, model & demo in thread!

Mikael Henaff (@henaffmikael) 's Twitter Profile Photo

Btw, the lead author Jianing “Jed” Yang is graduating this year and will be on the job market. Jed is highly motivated and creative, a great engineer and researcher who gets stuff to work, and has been a pleasure to work with...if you're hiring I suggest reaching out to him!

MrNeRF (@janusch_patas) 's Twitter Profile Photo

DUSt3R has become much much Fast3R! Make sure to run the code or try the web demo! It's great! Don't forget to share your results below in the comments!

Sasha (Alexander) Sax (@iamsashasax) 's Twitter Profile Photo

Introducing ⚡️Fast3R: the bitter lesson comes for SfM. By using a big dumb ViT, we can reconstruct pointmaps for 1000 images in a single forward pass @ 250 FPS. How do we do this? Using techniques from LLMs. Website: fast3r-3d.github.io Demo: fast3r.ngrok.app 🧵
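
Read literally, the recipe here is: patchify every view, tag each token with a per-view embedding, run one large transformer over all tokens jointly, and decode a pointmap per view. Below is a minimal PyTorch sketch of that reading; all module names, sizes, and the decoding head are placeholders of mine, not the released Fast3R code.

# Hypothetical sketch of the "many views, one forward pass" idea: patchify N frames,
# add a per-view embedding, run one big transformer over all tokens jointly, and
# decode a per-pixel XYZ pointmap for every view. Illustrative only.
import torch
import torch.nn as nn

class MultiViewPointmapSketch(nn.Module):
    def __init__(self, dim=768, depth=12, heads=12, patch=16, max_views=1000):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patchify each view
        self.view_emb = nn.Embedding(max_views, dim)                      # which view a token came from
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)                # the "big dumb ViT"
        self.head = nn.Linear(dim, 3 * patch * patch)                     # XYZ for every pixel in a patch
        # (per-patch positional embeddings are omitted for brevity)

    def forward(self, views):                                             # views: (N, 3, H, W), all frames at once
        n, _, h, w = views.shape
        tok = self.embed(views).flatten(2).transpose(1, 2)                # (N, P, dim)
        tok = tok + self.view_emb(torch.arange(n, device=views.device))[:, None, :]
        tok = self.encoder(tok.reshape(1, -1, tok.shape[-1]))             # joint attention across all views
        xyz = self.head(tok).reshape(n, h // self.patch, w // self.patch,
                                     3, self.patch, self.patch)
        return xyz.permute(0, 3, 1, 4, 2, 5).reshape(n, 3, h, w)          # one pointmap per view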

Yash Bhalgat (@ysbhalgat) 's Twitter Profile Photo

Excited to announce the 1st Workshop on 3D-LLM/VLA at #CVPR2025! 🚀 Topics: 3D-VLA models, LLM agents for 3D scene understanding, Robotic control with language. 📢 Call for papers: Deadline – April 20, 2025 🌐 Details: 3d-llm-vla.github.io #llm #3d #Robotics #ai

Yining Hong (@yining_hong) 's Twitter Profile Photo

Excited to host the 1st Workshop on 3D-LLM/VLA at #CVPR2025! This workshop explores integrating LLMs and VLA models with 3D perception to enhance foundation models for embodied agents and robot control. Paper Deadline: April 20, 2025 Website: 3d-llm-vla.github.io

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

🚀 Excited to announce the 1st Workshop on 3D-LLM/VLA at #CVPR2025! 🌟 This workshop explores bridging language, vision, and action through foundational models—such as LLMs and VLA models—and applying them to embodied agents and robotic control. 🎤 Featuring an incredible

Jiaxin Lu (@jacinth_lu) 's Twitter Profile Photo

🚨 Introducing HUMOTO! 🚨 Our new 4D dataset of human-object interactions with stunning details ✨, capturing daily activities from cooking 🍳 to organizing 📚. Perfect for robotics 🤖, computer vision 👁️ & animation 🎬!

Tianyuan Zhang (@tianyuanzhang99) 's Twitter Profile Photo

Bored of linear recurrent memories (e.g., linear attention) and want a scalable, nonlinear alternative? Our new paper “Test-Time Training Done Right” proposes LaCT (Large Chunk Test-Time Training) — a highly efficient, massively scalable nonlinear memory with: 💡 Pure PyTorch
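
For intuition, here is a rough sketch of test-time training used as a memory, with one update per large chunk instead of per token. The tiny MLP memory, the plain SGD step, and every name below are my assumptions for illustration, not LaCT's actual formulation.

# Toy sketch: a small fast-weight MLP is trained online to map keys -> values,
# with one gradient step per large chunk; queries read the memory out.
# Assumed illustration only, not the paper's implementation.
import torch

def chunked_ttt_memory(keys, values, queries, chunk=2048, lr=1e-2):
    """keys, values, queries: (T, d) tensors over a long sequence."""
    d = keys.shape[-1]
    w1 = (0.02 * torch.randn(d, 4 * d)).requires_grad_()    # fast weights (the nonlinear memory)
    w2 = (0.02 * torch.randn(4 * d, d)).requires_grad_()
    outs = []
    for s in range(0, keys.shape[0], chunk):
        k, v, q = keys[s:s + chunk], values[s:s + chunk], queries[s:s + chunk]
        outs.append((torch.relu(q @ w1) @ w2).detach())      # read with the current memory state
        loss = ((torch.relu(k @ w1) @ w2 - v) ** 2).mean()   # reconstruction objective on this chunk
        g1, g2 = torch.autograd.grad(loss, (w1, w2))         # one update per big chunk, not per token
        with torch.no_grad():
            w1 -= lr * g1
            w2 -= lr * g2
    return torch.cat(outs)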

Zhenjun Zhao (@zhenjun_zhao) 's Twitter Profile Photo

SAB3R: Semantic-Augmented Backbone in 3D Reconstruction Xuweiyi Chen, Tian XIA, Si.X, Jianing “Jed” Yang @ CVPR, Joyce Chai, Zezhou Cheng tl;dr: MASt3R+distillation->open-vocabulary segmentation+3D reconstruction arxiv.org/abs/2506.02112
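
A hedged sketch of that tl;dr: keep the reconstruction backbone, add a semantic head, and distill per-pixel features from a frozen 2D open-vocabulary teacher so one backbone serves both segmentation and 3D reconstruction. The module names and the cosine loss below are placeholders, not the paper's actual design.

# Assumed training-step sketch: 'backbone' stands in for a MASt3R-style encoder,
# 'semantic_head' and 'teacher' are hypothetical modules producing per-pixel features.
import torch
import torch.nn.functional as F

def distill_step(backbone, semantic_head, teacher, images):
    feats = backbone(images)                 # shared features also used for pointmap prediction
    student = semantic_head(feats)           # (B, C, H, W) predicted semantic features
    with torch.no_grad():
        target = teacher(images)             # (B, C, H, W) frozen 2D open-vocabulary features
    # pixel-wise cosine distillation: align the student with the teacher
    return 1 - F.cosine_similarity(student, target, dim=1).mean()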

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

🚀 Off to CVPR tomorrow! I’ll be in Nashville for the week — excited to catch up with old friends and meet new ones. Let’s grab a meal or chat if you're around! Also — I’m actively looking for full-time opportunities starting this September, especially in Robotics / 3D /

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Can we scale 4D pretraining to learn general space-time representations that reconstruct an object from a few views at any time to any view at any other time? Introducing 4D-LRM: a Large Space-Time Reconstruction Model that ... 🔹 Predicts 4D Gaussian primitives directly from
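
For intuition, a "4D Gaussian primitive" is typically a 3D Gaussian extended with a time dimension so the scene can be rendered at any queried view and timestamp. The field layout below follows common 4D Gaussian splatting conventions and is only an assumption, not a description of 4D-LRM's exact parameterization.

# Illustrative container for one 4D Gaussian primitive (assumed field layout).
from dataclasses import dataclass
import torch

@dataclass
class Gaussian4D:
    mean_xyzt: torch.Tensor   # (4,) center in space (x, y, z) and time t
    scale_xyzt: torch.Tensor  # (4,) extent along each space-time axis
    rotation: torch.Tensor    # (4,) quaternion for spatial orientation
    opacity: torch.Tensor     # (1,)
    color_sh: torch.Tensor    # (3 * n_sh,) spherical-harmonic color coefficients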

Sukjun (June) Hwang (@sukjun_hwang) 's Twitter Profile Photo

Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical network that replaces tokenization with a dynamic chunking process directly inside the model, automatically discovering and operating over meaningful units of data
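
A toy sketch of the dynamic-chunking idea: a small learned module scores chunk boundaries over raw byte embeddings and pools each variable-length chunk into one higher-level unit for the next stage of the hierarchy. This illustrates the concept only; it is not the H-Net architecture.

# Conceptual sketch only: learned boundary scores replace a fixed external tokenizer.
import torch
import torch.nn as nn

class DynamicChunker(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.byte_emb = nn.Embedding(256, dim)
        self.boundary = nn.Linear(dim, 1)        # score: does a chunk end after this byte?

    def forward(self, byte_ids):                 # byte_ids: (T,) raw bytes of the input
        x = self.byte_emb(byte_ids)              # (T, dim)
        is_boundary = torch.sigmoid(self.boundary(x)).squeeze(-1) > 0.5
        chunks, start = [], 0
        for i in range(len(byte_ids)):
            if bool(is_boundary[i]) or i == len(byte_ids) - 1:
                chunks.append(x[start:i + 1].mean(dim=0))   # pool one variable-length chunk
                start = i + 1
        return torch.stack(chunks)               # (num_chunks, dim) units for the next level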

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

Trick: add an “X” and a line as an “aim” marker (as if you are playing CS:GO) to the image you input to the VLA, and get an immediate improvement on manipulation tasks! A very simple and interesting idea from labmate Yinpei Dai! This finding indicates current VLAs still need better spatial reasoning.
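
A minimal sketch of the trick as I read it: draw a crosshair-style "aim" marker at the intended target point on the frame before feeding it to the VLA. The marker style and the assumption that a target pixel is available are mine, not part of the original method.

# Hypothetical helper: overlay an "X" plus a short cross at target_xy, then pass the
# annotated frame to the VLA in place of the raw image.
from PIL import Image, ImageDraw

def add_aim_marker(image: Image.Image, target_xy, size=12, width=3, color=(0, 255, 0)):
    img = image.copy()
    draw = ImageDraw.Draw(img)
    x, y = target_xy
    draw.line([(x - size, y - size), (x + size, y + size)], fill=color, width=width)  # the "X"
    draw.line([(x - size, y + size), (x + size, y - size)], fill=color, width=width)
    draw.line([(x - size, y), (x + size, y)], fill=color, width=width)                # the "aim" cross
    draw.line([(x, y - size), (x, y + size)], fill=color, width=width)
    return img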

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

I joined Figure 4 days ago. Every day I walk into the office, it feels like walking into a sci-fi movie. Robots work, humans build, machines hum. 3D printers sculpt, CNCs carve, actuators roar—it’s Iron Man’s lab, but real.