Yue Chen (@faneggchen)'s Twitter Profile
Yue Chen

@faneggchen

PhD Student @Westlake_Uni. 3D/4D Reconstruction, Virtual Humans.
Muggle dreaming of the Wizarding World

ID: 1601830788252995586

Link: http://fanegg.github.io · Joined: 11-12-2022 06:46:49

43 Tweets

207 Followers

201 Following

Yuliang Xiu (@yuliangxiu)'s Twitter Profile Photo

Check out the thread on #Feat2GS: the 3D awareness of visual foundation models (VFMs) "should" and "could" be evaluated on large-scale casual video, rather than on data with 3D labels.

Gerard Pons-Moll (@gerardponsmoll1)'s Twitter Profile Photo

If you want to know which foundation models are best for 3D, check out Yue Chen's Feat2GS. The key idea is to predict Gaussian Splats from foundation features and use novel view synthesis as the metric for 3D accuracy.

Kwang Moo Yi (@kwangmoo_yi)'s Twitter Profile Photo

Preprint of today: Chen et al., "Easi3R: Estimating Disentangled Motion from DUSt3R Without Training" -- easi3r.github.io It turns out DUSt3R's cross-attention layers can be used to identify dynamic objects in a scene; you can then segment them out for better 3D estimation.

Yuliang Xiu (@yuliangxiu)'s Twitter Profile Photo

If you're into digital humans, you know how tough it is to fit a SMPL body onto a 3D clothed scan: rendering, OpenPose, triangulation, and a bunch of dataset-specific hyperparameters. Time to switch to a native 3D solution with the Equivariant Tightness Vector. Hence the name: ETCH.

Anpei Chen (@anpeic)'s Twitter Profile Photo

Too many artifacts in your GS reconstruction? Please check out GenFusion: Closing the Loop between Reconstruction and Generation via Videos 🌐 Project page: genfusion.sibowu.com 💻 Code: github.com/Inception3D/Ge… #3D #DiffusionModels #ViewSynthesis #GenFusion #CVPR2025

Anpei Chen (@anpeic)'s Twitter Profile Photo

Feature up up up 🖼️✨ We tackle the resolution bottleneck of Vision Foundation Models (like DINOv2 & CLIP) with a coordinate-based cross-attention upsampler. Plug and play: stronger and faster than ever! 🚀 andrehuang.github.io/loftup-site/ #VisionModels #DeepLearning #ComputerVision

Tingting Liao (@tingtin36139994)'s Twitter Profile Photo

🚀 Introducing SOAP: Style-Omniscient Animatable Portraits, a style-agnostic method for one-view, animation-ready 3D portrait reconstruction. 📄 Project: tingtingliao.github.io/soap/ 💻 Code: github.com/TingtingLiao/s… 🎥 Video: youtu.be/mLlMfODZnTw #AI3D #CG #Avatar

Anpei Chen (@anpeic)'s Twitter Profile Photo

📢 We’re presenting two posters at #CVPR2025 today! 🗓️ June 13 | 🕓 16:00–18:00 | 📍 Exhibit Hall D 🔹 GenFusion — Booth 61 🔹 Feat2GS — Booth 93 Come by to chat about generative 3D, geometry, and beyond. See you there! #CVPR25 #3Dvision #AI

Gerard Pons-Moll (@gerardponsmoll1)'s Twitter Profile Photo

Had a blast at CVPR! Unfortunately, I have to return early. Really nice to see so many researchers excited to share their cool work and ideas. The best part: catching up with old friends and colleagues!

Anpei Chen (@anpeic)'s Twitter Profile Photo

📢 Our new paper GaVS – 3D-Grounded Video Stabilization is out! Key idea: feed-forward Dynamic Gaussian Splatting + test-time optimization. Robust, consistent, and cropping-free 📹 🎥 Project: sinoyou.github.io/gavs Zinuo You, Stamatios Georgoulis, Siyu Tang @VLG-ETHZ, Dengxin Dai #SIGGRAPH25 #3DGS

Xianghui Xie (@xianghuixie)'s Twitter Profile Photo

📢 Is your multi-view generation (MVG) model 3D consistent? Does it produce high-quality and semantically correct novel views? How can we fairly compare MVG models and make them even better? Introducing MVGBench: a comprehensive benchmark for MVGs, accepted to #ICCV25 #ICCV2025