Michael Dorkenwald (@mdorkenw) 's Twitter Profile
Michael Dorkenwald

@mdorkenw

PhD student @UvA_Amsterdam @ELLISforEurope working on SSL, Vision&Language, Learning from Videos | Prev. intern @awscloud

ID: 1251254493599141888

Website: http://mdorkenwald.com · Joined: 17-04-2020 21:01:39

107 Tweets

271 Followers

343 Following

Shashank (@shawshank_v) 's Twitter Profile Photo

If you are at #ECCV2024 and excited about building vision foundation models using videos, join our tutorial tomorrow morning, 30th Sept., at 09:00 in Amber 7+8. European Conference on Computer Vision #ECCV2026

mrz.salehi (@mrzsalehi) 's Twitter Profile Photo

🚀 Excited to present SIGMA at European Conference on Computer Vision #ECCV2026 ! 🎉 We upgrade VideoMAE with Sinkhorn-Knopp on patch-level embeddings, pushing reconstruction to more semantic features. With Michael Dorkenwald. Let’s connect at today's poster session at 4:30 PM, poster number 256, or send us a DM.
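For readers unfamiliar with the trick the tweet alludes to: below is a minimal sketch of SwAV-style Sinkhorn-Knopp normalization applied to patch-to-prototype scores, which turns raw similarities into balanced soft assignments that can serve as semantic reconstruction targets. The prototype count, embedding dimension, and iteration settings are illustrative assumptions, not SIGMA's actual configuration.

```python
import torch

@torch.no_grad()
def sinkhorn_knopp(scores: torch.Tensor, eps: float = 0.05, n_iters: int = 3) -> torch.Tensor:
    """Turn patch-to-prototype similarity logits into balanced soft assignments.

    scores: (num_patches, num_prototypes). Returns a matrix of the same shape
    whose rows sum to 1 and whose columns are approximately balanced.
    """
    Q = torch.exp(scores / eps).T          # (prototypes, patches)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)    # normalize prototype marginals
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)    # normalize patch marginals
        Q /= B
    return (Q * B).T                        # rows (patches) sum to 1

# Hypothetical usage: patch embeddings from a video backbone vs. learned prototypes.
patches = torch.nn.functional.normalize(torch.randn(196, 256), dim=-1)
prototypes = torch.nn.functional.normalize(torch.randn(1024, 256), dim=-1)
targets = sinkhorn_knopp(patches @ prototypes.T)  # soft targets instead of raw pixels
```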

Yann LeCun (@ylecun) 's Twitter Profile Photo

I've made this point before: video generation systems are not good world models (at least, not necessarily). They could be mode-collapsed, and you wouldn't know.

Yuki (@y_m_asano) 's Twitter Profile Photo

Excited to announce that today I'm starting my new position at Technische Universität Nürnberg as a full Professor 🎉. I thank everyone who has helped me to get to this point, you're all the best! Our lab is called FunAI Lab, where we strive to put the fun into fundamental research. 😎 Let's go!

Yuki (@y_m_asano) 's Twitter Profile Photo

Ever wondered if better LLMs actually have a better understanding of the visual world? 🤔 As it turns out, they do! We find: An LLM's MMLU performance correlates positively with zero-shot performance in a CLIP-like case when using that LLM to encode the text. 🧵👇

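As a rough illustration of the evaluation protocol described above (not the paper's exact code): encode each class name with the LLM under test, then score images against those text embeddings CLIP-style. The encoders, dimensions, and class count below are placeholders.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb: torch.Tensor, class_text_embs: torch.Tensor) -> torch.Tensor:
    """CLIP-style zero-shot classification: cosine similarity between an image
    embedding and one text embedding per class name, softmaxed over classes."""
    image_emb = F.normalize(image_emb, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    return (image_emb @ class_text_embs.T).softmax(dim=-1)

# Hypothetical shapes: one image, 1000 class prompts ("a photo of a {class}")
# encoded by the LLM under test. The tweet's finding is that LLMs with higher
# MMLU scores yield better zero-shot accuracy when used as this text encoder.
image_emb = torch.randn(1, 768)
class_text_embs = torch.randn(1000, 768)
probs = zero_shot_classify(image_emb, class_text_embs)
print(probs.argmax(dim=-1))
```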
Michael Dorkenwald (@mdorkenw) 's Twitter Profile Photo

📢 Announcing TVBench: Temporal Video-Language Benchmark 📺 We reveal that widely used Video-Language benchmarks, such as MVBench, fall short in testing temporal understanding and propose an alternative TVBench: huggingface.co/datasets/FunAI…

JB (@iamjbdel) 's Twitter Profile Photo

TVBench: Redesigning Video-Language Evaluation Datasets: huggingface.co/datasets/FunAI… (Likes: 5, Downloads: 1) Tags: Visual Question Answering, Video modality

Nikhil Parthasarathy (@nikparth1) 's Twitter Profile Photo

Great to have more benchmarks like this! Ridiculous that we still have been benchmarking "video understanding" on so many evals that can be solved with single frames or language/text only!

Cees Snoek (@cgmsnoek) 's Twitter Profile Photo

📢📢 Beyond Model Adaptation at Test Time: A Survey by Zehao Xiao. TL;DR: we provide a comprehensive and systematic review on test-time adaptation, covering more than 400 recent papers 💯💯💯💯 🤩 #CVPR2025 #ICLR2025 arxiv.org/abs/2411.03687

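As a flavor of what the survey covers, here is a minimal TENT-style entropy-minimization step, one classic test-time adaptation recipe; the toy model, parameter selection, and hyperparameters are assumptions for illustration, not taken from the survey.

```python
import torch
import torch.nn as nn

def tent_step(model: nn.Module, x: torch.Tensor, optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One entropy-minimization step on an incoming test batch: predictions are
    made more confident by updating only the parameters held by the optimizer."""
    logits = model(x)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Hypothetical setup: adapt only the affine parameters of normalization layers.
model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10))
bn_params = [p for m in model.modules() if isinstance(m, nn.BatchNorm1d) for p in m.parameters()]
optimizer = torch.optim.SGD(bn_params, lr=1e-3)
test_batch = torch.randn(16, 32)
_ = tent_step(model, test_batch, optimizer)
```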
Marcus 🇬🇧✊🏾 (@elgobsucram) 's Twitter Profile Photo

The thing about Bluesky is... When you open it... There are no bots, no outrage engagement farms, no ads, no propaganda, no Musk forced into your timeline... It's a breath of fresh air. It's like old twitter. When you logged on and instantly you were having fun and socialising.

Carsten T. Lüth (@cartlueth) 's Twitter Profile Photo

We’re thrilled to welcome Charlotte Bunne, assistant professor at EPFL, to our heidelberg.ai / NCT Data Science Seminar series on January 23rd at 4 pm in Heidelberg for a hybrid event. Join us for an engaging and inspiring session!

Papers of the day (@arxivtoday) 's Twitter Profile Photo

New paper: KV Cache Steering - a lightweight method to make small language models reason better by modifying their key-value cache. One-shot intervention, no fine-tuning needed, works surprisingly well. 🧵

Max Belitsky (@mbelitsky1) 's Twitter Profile Photo

Introducing cache steering – a new method for implicit behavior steering in LLMs Cache steering is a lightweight method for guiding the behavior of language models by applying a single intervention to their KV-cache. We show how it can be used to induce reasoning in small LLMs.

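The threads above don't spell out the implementation, so the following is only a hypothetical sketch of what a one-shot KV-cache intervention could look like, assuming the legacy Hugging Face layout of per-layer (key, value) tensors. Steering the value states at the last prompt position and the alpha scale are illustrative assumptions, not the paper's actual method.

```python
import torch

def steer_kv_cache(past_key_values, steering_vectors, alpha: float = 4.0):
    """One-shot cache steering sketch: add a per-layer steering vector to the
    cached value states at the last prompt position, then continue decoding
    from the modified cache (no fine-tuning, no change to the prompt tokens).

    past_key_values: tuple of (key, value) pairs, one per layer, each of shape
                     (batch, num_heads, seq_len, head_dim) -- an assumed layout.
    steering_vectors: dict {layer_idx: tensor of shape (num_heads, head_dim)},
                      e.g. extracted from contrastive prompt pairs (assumption).
    """
    steered = []
    for layer_idx, (key, value) in enumerate(past_key_values):
        if layer_idx in steering_vectors:
            value = value.clone()
            # Shift only the most recent position; keys could be steered too.
            value[:, :, -1, :] += alpha * steering_vectors[layer_idx]
        steered.append((key, value))
    return tuple(steered)

# Hypothetical usage with random tensors standing in for a real model's cache:
layers, batch, heads, seq, dim = 2, 1, 8, 10, 64
fake_cache = tuple(
    (torch.randn(batch, heads, seq, dim), torch.randn(batch, heads, seq, dim))
    for _ in range(layers)
)
steered_cache = steer_kv_cache(fake_cache, {0: torch.randn(heads, dim)})
```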