David Chan (@_dmchan)'s Twitter Profile
David Chan

@_dmchan

Postdoc at @berkeley_ai modeling vision and language. These are the voyages of the... Crap. I don't have a name for my own ship...

ID: 89695768

🔗 https://dchan.cc · 🗓️ Joined: 13-11-2009 13:13:31

72 Tweets

130 Followers

138 Following

Tsung-Han (Patrick) Wu @ ICLR’25 (@tsunghan_wu)'s Twitter Profile Photo

We all learned DFS in undergrad — but did you know it can fix hallucinations in VLMs?
💡 Meet REVERSE-VLM: a self-correcting model using DFS-style backtracking + resampling
📉 12% fewer hallucinations (CHAIR-MSCOCO)
📈 28% more accurate (HaloQuest)
🔗 reverse-vlm.github.io
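The core idea — decode, detect a suspect token, then backtrack and resample like a depth-first search — can be sketched in a few lines. This is a rough, paper-agnostic illustration, not REVERSE-VLM's actual implementation: the toy sampler, the trusted-vocabulary detector, and the retry budget are all hypothetical stand-ins.

```python
import random

def detect_hallucination(token):
    # Hypothetical detector: flags tokens outside a small trusted vocabulary.
    # In the real system this would be a learned confidence signal.
    trusted = {"a", "cat", "sits", "on", "the", "mat"}
    return token not in trusted

def sample_token(rng):
    # Toy stand-in for a VLM's token sampler; "unicorn" plays the
    # role of a hallucinated token.
    vocab = ["a", "cat", "sits", "on", "the", "mat", "unicorn"]
    return rng.choice(vocab)

def reverse_decode(max_len=6, max_retries=10, seed=0):
    """DFS-style decoding: on a flagged token, resample; if the
    resample budget at a position runs out, backtrack one step."""
    rng = random.Random(seed)
    seq = []
    retries = [0] * max_len  # resample budget per sequence position
    while len(seq) < max_len:
        pos = len(seq)
        tok = sample_token(rng)
        if detect_hallucination(tok):
            retries[pos] += 1
            if retries[pos] > max_retries and seq:
                seq.pop()  # budget exhausted: backtrack, like DFS
            continue       # resample at the current position
        seq.append(tok)
    return seq
```

The sequence that comes back contains only tokens the detector accepts, which is the whole point of the backtracking loop.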

David Chan (@_dmchan)'s Twitter Profile Photo

This is pretty insane: intology.ai/blog/zochi-acl. But also, I wonder what they put for Section E.1 on the ACL checklist 👀

Ritwik Gupta 🇺🇦 (@ritwik_g)'s Twitter Profile Photo

Ever wondered if the way we feed image patches to vision models is the best way? The standard row-by-row scan isn't always optimal! Modern long-sequence transformers can be surprisingly sensitive to patch order. We developed REOrder to find better, task-specific patch sequences.
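The question the tweet raises — is the standard row-by-row patch scan the best sequence order? — is easy to make concrete. The sketch below is a generic illustration of alternative patch scan orders, not REOrder's actual method (which searches for task-specific orderings); the function names and the "snake" ordering are illustrative choices.

```python
def patchify(image, patch):
    """Split an H×W grid (list of lists) into patch×patch blocks,
    keyed by (row, col) patch coordinates."""
    H, W = len(image), len(image[0])
    patches = {}
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches[(i // patch, j // patch)] = [
                row[j:j + patch] for row in image[i:i + patch]
            ]
    return patches

def order_patches(patches, order="row_major"):
    """Flatten patches into a 1-D sequence under a chosen scan order."""
    keys = list(patches)
    if order == "row_major":        # the standard ViT scan
        keys.sort(key=lambda rc: (rc[0], rc[1]))
    elif order == "column_major":   # one simple alternative
        keys.sort(key=lambda rc: (rc[1], rc[0]))
    elif order == "snake":          # boustrophedon: reverse odd rows
        keys.sort(key=lambda rc: (rc[0], -rc[1] if rc[0] % 2 else rc[1]))
    return [patches[k] for k in keys]
```

Feeding the same patches in a different order gives the transformer a different sequence, which is exactly the sensitivity the tweet is pointing at.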

David Chan (@_dmchan)'s Twitter Profile Photo

🚀 Call for Papers! 🚀
Excited to help organize the 4th Workshop on What is Next in Multimodal Foundation Models? at ICCV in Honolulu, Hawai'i 🌺
Submit work on vision, language, audio & more!
🗓️ Deadline: July 1, 2025
🔗 sites.google.com/view/mmfm4thwo…
#MMFM4 #ICCV2025 #AI #multimodal

Tsung-Han (Patrick) Wu @ ICLR’25 (@tsunghan_wu)'s Twitter Profile Photo

📢 Call for Papers! Last chance to hang with the CV crowd in Hawaii 🌴
We're hosting the 4th MMFM Workshop at #ICCV2025 — submit your work on vision, language, audio & more by July 1 🗓️
Also check out the CVPR edition 👉 #3 MMFM Workshop
🔗 sites.google.com/view/mmfm4thwo…

Roei Herzig (@roeiherzig)'s Twitter Profile Photo

🚨 Rough luck with your #ICCV2025 submission?
We're organizing the 4th Workshop on What's Next in Multimodal Foundation Models at #ICCV2025 in Honolulu 🌺🌴
Send us your work on vision, language, audio & more!
🗓️ Deadline: July 1, 2025
🔗 sites.google.com/view/mmfm4thwo…

David Chan (@_dmchan)'s Twitter Profile Photo

Me (to Cursor): Refactor this code.
Cursor: Sure! I've refactored your code! It's shorter and cleaner now!
Me: Are you sure there are no feature regressions?
Cursor: The code is missing essential functionality.
Me: ....

David Chan (@_dmchan)'s Twitter Profile Photo

I'll be in Vienna for ACL starting today. I’m presenting work on how LMMs perform in-context updates in a Bayesian way, but I’m excited to talk anything multimodal! Feel free to reach out if you’re around! #ACL2025

Wen-Han Hsieh (@henseoba)'s Twitter Profile Photo

🚀 Excited to share that our paper, “Do What? Teaching Vision-Language-Action Models to Reject the Impossible,” has been accepted to #EMNLP2025 Findings!
📄 Paper: arxiv.org/pdf/2508.16292
🌎 Project page: wen-hanhsieh.github.io/iva.github.io/

Roei Herzig (@roeiherzig)'s Twitter Profile Photo

🌺 Join us in Hawaii at ICCV 2025 for the workshop “What is Next in Multimodal Foundation Models?”
🗓️ Monday, October 20 | 8:00 – 12:00 | 📍 Room 326 B
We’ve got a stellar lineup of speakers & panelists — details here: 🔗 sites.google.com/view/mmfm4thwo…
#ICCV2025

David Chan (@_dmchan)'s Twitter Profile Photo

I'm at #ICCV2025 this week - send me a DM or email if you'd like to find a time to talk anything multimodal! Speaking of multimodal, don't forget to check out our workshop: "What's Next in Multimodal Foundation Models?" on Monday in 326 B! sites.google.com/view/mmfm4thwo…

David Chan (@_dmchan)'s Twitter Profile Photo

This is some awesome new work from our lab — Echo is a way to build benchmarks automatically from social media! Check it out!

David Chan (@_dmchan)'s Twitter Profile Photo

Two days at ICCV = two new papers! Interrupting LLMs’ reasoning should produce seamless, predictable behavior. Turns out, that’s not the case.