Matteo Poggi (@mattpoggi)'s Twitter Profile
Matteo Poggi

@mattpoggi

Tenure-Track Assistant Professor, University of Bologna

ID: 287332799

Link: http://mattpoggi.github.io · Joined: 24-04-2011 19:48:39

94 Tweets

315 Followers

567 Following

Anton Obukhov (@antonobukhov1):

Huawei Research Center Zürich is looking for a Research Scientist intern to work with me on advancing foundation models for computer vision, focusing on enhancing computational photography features in mobile phones. Details below ˙✧˖°📸⋆。˚

Anton Obukhov (@antonobukhov1):

Thanks to everyone who joined the 4th MDEC! We hit a new record with 41 submissions — nearly double last year’s. If you participated and received an email from us, please submit your report by March 27, 23:59 PST. See you at CVPR in Nashville!

Zhenjun Zhao (@zhenjun_zhao):

HS-SLAM: Hybrid Representation with Structural Supervision for Improved Dense SLAM
Ziren Gong, Fabio Tosi, Youmin Zhang, Stefano Mattoccia, Matteo Poggi
tl;dr: hash grid + tri-planes + one-blob -> hybrid representation; sampled patches of non-local pixels -> supervision; bundle adjustment
arxiv.org/abs/2503.21778

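The "one-blob" term in the tl;dr refers to the encoding of Müller et al. (Neural Importance Sampling), which HS-SLAM combines with hash grids and tri-planes. A minimal numpy sketch of that encoding alone, with a hypothetical function name; not the paper's implementation:

```python
import numpy as np

def one_blob(x, k=16):
    """One-blob encoding: represent scalars in [0, 1] as a Gaussian bump
    over k bins -- a smooth generalization of one-hot encoding."""
    x = np.asarray(x, dtype=float)
    centers = (np.arange(k) + 0.5) / k   # bin centers in [0, 1]
    sigma = 1.0 / k                      # kernel width of one bin
    # Evaluate the Gaussian kernel at every bin center, per input scalar.
    return np.exp(-0.5 * ((x[..., None] - centers) / sigma) ** 2)
```

Each input coordinate becomes a k-dimensional vector whose mass concentrates around the bin containing it, giving the MLP a smoother, more localized input than a raw scalar.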
Matteo Poggi (@mattpoggi):

🍸🍸 The TRICKY25 challenge "Monocular Depth from Images of Specular and Transparent Surfaces" is live! 🍸🍸 Hosted at the 3rd TRICKY workshop at #ICCV2025, with exciting speakers: Anton Obukhov, Andrea Tagliasacchi 🇨🇦, He Wang. Site: sites.google.com/view/iccv25tri… CodaLab: codalab.lisn.upsaclay.fr/competitions/2…

Benjamin Busam (@busambenjamin):

🚨 Call for Participation — TRICKY @ ICCV 2025 🌺 Join our challenges on monocular depth & category-level pose for transparent/specular objects! 🧊💎 📅 Challenges: June 📄 Paper due: July 4 🔗 sites.google.com/view/iccv25tri… #ICCV2025 #3DV #AI #CV #TRICKYchallenge

Anton Obukhov (@antonobukhov1):

Don’t miss the (4th) Monocular Depth Estimation Workshop at #CVPR2025! Keynotes by Peter Wonka, Yiyi Liao, Konrad Schindler, discussion of the challenge results, and more! Thu 12 Jun, noon PDT, Location: 109 cvpr.thecvf.com/virtual/2025/w…

Martin Oswald (@martin_r_oswald):

Your #ICCV2025 paper got rejected? Give it another try and submit to our proceedings track! Your #ICCV2025 paper got accepted? Congrats! Give it even more visibility by joining our nectar track. More info: sites.google.com/view/neuslam/c…

Matteo Poggi (@mattpoggi):

#ICCV2025 The call for proceedings papers at the TRICKY 2025 workshop in Honolulu is still open! Submit your work by July 4 :) cmt3.research.microsoft.com/TRICKY2025

Zhenjun Zhao (@zhenjun_zhao):

WarpRF: Multi-View Consistency for Training-Free Uncertainty Quantification and Applications in Radiance Fields
Sadra Safadoust, Fabio Tosi, Fatma Güney, Matteo Poggi
tl;dr: rendered depth -> reprojection -> uncertainty -> next best view -> later GS (Gaussian Splatting)
arxiv.org/abs/2506.22433

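The WarpRF pipeline in the tl;dr rests on a classic geometric check: warp the depth rendered in one view into another view and score the disagreement as uncertainty. A minimal numpy sketch of that core step, with hypothetical function and argument names; a simplification under a pinhole model with nearest-neighbor sampling, not the paper's implementation:

```python
import numpy as np

def warp_uncertainty(depth_a, depth_b, K, R_ab, t_ab):
    """Backproject view A's rendered depth, reproject it into view B,
    and use the depth disagreement as a per-pixel uncertainty map.
    R_ab, t_ab map camera-A coordinates into camera-B coordinates."""
    H, W = depth_a.shape
    Kinv = np.linalg.inv(K)
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
    pts_a = (Kinv @ pix) * depth_a.reshape(1, -1)   # backproject to 3D in A
    pts_b = R_ab @ pts_a + t_ab[:, None]            # move into B's frame
    proj = K @ pts_b                                # reproject into B
    z = proj[2]
    u = np.round(proj[0] / z).astype(int)           # nearest-pixel sampling
    v = np.round(proj[1] / z).astype(int)
    unc = np.full(H * W, np.inf)                    # inf = not observable in B
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    unc[valid] = np.abs(z[valid] - depth_b[v[valid], u[valid]])
    return unc.reshape(H, W)
```

Pixels whose warped depth matches the depth rendered in the other view score near zero (consistent geometry); large residuals flag regions the radiance field renders inconsistently, which is what drives the next-best-view selection.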
Martin Oswald (@martin_r_oswald):

We have extended the submission deadline for the proceedings track to July 5! #ICCV2025 NeuSLAM Workshop sites.google.com/view/neuslam/c…

Zhenjun Zhao (@zhenjun_zhao):

DINO-SLAM: DINO-informed RGB-D SLAM for Neural Implicit and Explicit Representations
Ziren Gong, Xiaohan Li, Fabio Tosi, Youmin Zhang, Stefano Mattoccia, Jun Wu, Matteo Poggi
tl;dr: DINO enhances NeRF/3DGS SLAM
arxiv.org/abs/2507.19474

Zhenjun Zhao (@zhenjun_zhao):

Ov3R: Open-Vocabulary Semantic 3D Reconstruction from RGB Videos
Ziren Gong, Xiaohan Li, Fabio Tosi, Jiawei Han, Stefano Mattoccia, Jianfei Cai, Matteo Poggi
tl;dr: CLIP -> SLAM3R; CLIP + DINO + CG3D -> 2D-3D fused descriptor
arxiv.org/abs/2507.22052

Kwang Moo Yi (@kwangmoo_yi):

Poggi and Tosi, "FlowSeek: Optical Flow Made Easier with Depth Foundation Models and Motion Bases"
Optical flow is highly relevant to depth discontinuities; it makes a lot of sense to integrate modern monocular depth estimators into the RAFT pipeline.

Matteo Poggi (@mattpoggi):

Aloha! If you are interested in some of the latest advances in stereo (or are still jetlagged like me), I'm giving a talk at the Embedded Vision Workshop at 9am! #ICCV2025
Federica Arrigoni (@arrigonifede):

Tomorrow morning (Thursday October 23) Rakshith and I will present our paper "On the recovery of cameras from fundamental matrices" - poster #107 #ICCV2025