
Julia Chae
@juliachae_
phd @MIT_CSAIL; prev @VectorInst, @UofTRobotics
ID: 1558107485600374784
12-08-2022 15:05:55
44 Tweets
282 Followers
287 Following


Could multimodal vision language models (VLMs) help biodiversity researchers retrieve images for their studies? 🤔 MIT CSAIL, UCL, iNaturalist, The University of Edinburgh, & UMass Amherst researchers designed a performance test to find out. Each VLM’s task: Locate & reorganize the most …





We're excited to welcome Julia Chae and Shobhita Sundaram next week on Wednesday, March 5th for a presentation on "Personalized Representation from Personalized Generation" - be sure to check out this session! Thanks to Ahmad Mustafa Anis for organizing this community event ✨


Very excited to host Julia Chae and Shobhita Sundaram next week at Cohere For AI to present their research on "Personalized Representation from Personalized Generation". Register for the session for free (Link in the original tweet).



Adapt object detectors to new data *without labels* with Align and Distill (ALDI), our domain adaptation framework published last week in Transactions on Machine Learning Research, with a Featured Certification [Spotlight]!



Drop by our poster at Hall 3 + Hall 2B, #99 at 10 AM SGT! Unfortunately none of us could travel, but our amazing friends Jyo Pari, Julia Chae, Shobhita Sundaram & Mark Hamilton will be presenting it 🙌 Feel free to reach out with any questions; I’ll be online & cheering them on 💖




[1/7] Paired multimodal learning shows that training with text can help vision models learn better image representations. But can unpaired data do the same? Our new work shows that the answer is yes! w/ Shobhita Sundaram, Chenyu (Monica) Wang, Stefanie Jegelka and Phillip Isola


Excited to be co-organizing the CV4E Workshop @ ICCV for the second year at #ICCV2025 in Honolulu! We have an amazing lineup of panelists and speakers this year; don't miss it 👀