Wei Lin @ ECCV 2024 (@weilincv)'s Twitter Profile
Wei Lin @ ECCV 2024

@weilincv

Research associate @ ELLIS Unit, LIT AI Lab, Institute for Machine Learning, JKU Linz. Collab with MIT-IBM Watson AI Lab. PhD@TU Graz

ID: 1488103455134818318

Link: https://wlin-at.github.io/ · Joined: 31-01-2022 10:55:08

50 Tweets

117 Followers

206 Following

Wei Lin @ ECCV 2024 (@weilincv):

Excited to share our work PerLA accepted at CVPR 2025!🎉 PerLA enhances 3D scene understanding by integrating fine details with global context, improving accuracy in 3D QA and dense captioning while reducing hallucinations. Check out the details: 🔗 Project: gfmei.github.io/PerLA/

Wei Lin @ ECCV 2024 (@weilincv):

Happy to share that I will be attending ICLR 2025 in person at the Singapore Expo! Looking forward to discussions and connections 😄 #ICLR2025 #ICLR25 #ICLR

Wei Lin @ ECCV 2024 (@weilincv):

Our work LiveXiv will be presented at ICLR 2025 TODAY, April 25th, from 10:00 to 12:30 (Poster #356)!! 🚀🚀 LiveXiv is a challenging, maintainable, and contamination-free scientific multi-modal live dataset, designed to set a new benchmark for Large Multimodal Models (LMMs).

Wei Lin @ ECCV 2024 (@weilincv):

Reporting live from CVPR: survived three delayed flights and a marathon of airport sprints 🏃‍♂️✈️ #CVPR2025 #CVPR25 #CVPR

Roei Herzig (@roeiherzig):

🚨 Our panel kicks off at 11:30 AM in Room 207 A–D (Level 2)! Don't miss an amazing discussion with: Ludwig Schmidt, Andrew Owens, Arsha Nagrani, and Ani Kembhavi 🔥

Wei Lin @ ECCV 2024 (@weilincv):

Our MMFM Panel Discussion "What is Next in Multimodal Foundation Models?" will happen at 11:30am in Room 207 A-D.
Moderator: Roei Herzig (UC Berkeley)
Panelists: Ludwig Schmidt, Andrew Owens, Arsha Nagrani, Ani Kembhavi
#3 MMFM Workshop #CVPR2025

Wei Lin @ ECCV 2024 (@weilincv):

Check out our new work, pLSTM, which brings the power of linear RNNs to arbitrary DAGs and multi-dimensional data, enabling parallel computation and long-range modeling. It outperforms Transformers on extrapolation tasks and handles images, graphs, and grids with remarkable efficiency.
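
For readers unfamiliar with the linear-RNN machinery that pLSTM builds on, here is a minimal, hedged sketch (this is not the pLSTM code, and all names are my own) of why a linear recurrence h_t = a_t * h_{t-1} + b_t admits parallel evaluation: composing the per-step linear maps is associative, which is exactly what associative-scan-style parallelization exploits. pLSTM extends this idea from 1D sequences to DAGs and grids; the snippet only illustrates the underlying associativity.

import numpy as np

# Minimal sketch of a linear recurrence h_t = a_t * h_{t-1} + b_t.
# Each step is a linear map h -> a*h + b; composing two steps gives
# another map of the same form, and that composition is associative.
# Associativity is what lets linear RNNs be evaluated with a parallel
# (associative) scan instead of a strictly sequential loop.

def sequential_scan(a, b, h0=0.0):
    # Reference: step through the recurrence one timestep at a time.
    h, out = h0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return np.array(out)

def compose(first, second):
    # Apply `first` (a1, b1) then `second` (a2, b2): h -> a2*(a1*h + b1) + b2.
    a1, b1 = first
    a2, b2 = second
    return a1 * a2, a2 * b1 + b2

def scan_via_composition(a, b, h0=0.0):
    # Same outputs, obtained by composing per-step maps -- the associative
    # operation a parallel scan would use (composed left-to-right here).
    acc, out = (1.0, 0.0), []          # (1, 0) is the identity map
    for step in zip(a, b):
        acc = compose(acc, step)
        A, B = acc
        out.append(A * h0 + B)
    return np.array(out)

rng = np.random.default_rng(0)
a, b = rng.uniform(0.5, 1.0, 16), rng.normal(size=16)
assert np.allclose(sequential_scan(a, b), scan_via_composition(a, b))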

Wei Lin @ ECCV 2024 (@weilincv):

🚨 New #ICCV2025 paper! Can GPT-4o actually localize an object from just a few examples? Turns out not really. In our #ICCV2025 paper, we propose a simple fix: teach it from video tracking data. Results? Better few-shot localization, stronger context grounding.

Wei Lin @ ECCV 2024 (@weilincv):

🚀 🚀 We are introducing VisualOverload🎨🖼️, a VQA benchmark designed to test fundamental vision skills in visually dense scenes. 2,720 Q&A pairs across 6 tasks, 150 high-res artworks, and private ground truth. Even top VLMs hit only ~20% on the hardest tasks. Try it yourself🤖👉