Anand Bhattad (@anand_bhattad)'s Twitter Profile
Anand Bhattad

@anand_bhattad

Research Assistant Professor @TTIC_Connect | Visiting Researcher @berkeley_ai | PhD from @illinoisCS | UG @surathkal_nitk | Knowledge in Generative Models

ID: 309050073

Link: https://anandbhattad.github.io/ · Joined: 01-06-2011 13:04:21

948 Tweets

1.1K Followers

329 Following

Jon Barron (@jon_barron)'s Twitter Profile Photo

The legendary Ross Girshick just posted his CVPR workshop slides about the 1.5 decades he spent ~solving object detection as it relates to the ongoing LLM singularity. Excellent read, highly recommended. drive.google.com/file/d/1VodGlj…

Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

In case you missed it, our recent work shows that when you train a model for relighting, you get albedos for free from its latent features without needing to see albedo-like images. Not a single labeled image is needed. Completely zero-shot! It emerges as a byproduct of this

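The tweet above describes an emergent property: albedo-like information living in the latent features of a model trained only for relighting. As a purely illustrative, hypothetical sketch (not the paper's architecture or procedure), the snippet below shows the kind of forward-hook feature extraction one would use to pull such intermediate activations out of a frozen model for inspection; the stand-in network, layer index, and tensor sizes are placeholders.

```python
# Hypothetical sketch: extract intermediate activations from a frozen model
# with a PyTorch forward hook. The tiny stand-in network, the layer index,
# and the tensor sizes are placeholders, not the actual relighting model.
import torch
import torch.nn as nn

# Stand-in for a pretrained relighting network (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 3, 3, padding=1),
)
model.eval()

captured = {}

def save_latent(_module, _inputs, output):
    # Keep the intermediate activation for later inspection.
    captured["latent"] = output.detach()

# Attach the hook to an intermediate layer (index chosen arbitrarily here).
model[3].register_forward_hook(save_latent)

image = torch.rand(1, 3, 128, 128)  # placeholder input image
with torch.no_grad():
    model(image)

print(captured["latent"].shape)  # e.g. torch.Size([1, 128, 128, 128])
```
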
Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

All slides from our CV 20/20: A Retrospective Vision workshop at #CVPR2024 are now available on our website: sites.google.com/view/retrocv/s… A side note: We started thinking about and planning this workshop on the last day of #CVPR2023 after a very eventful Scholars and Big

Jiawei (Joe) Zhou (@jzhou_jz)'s Twitter Profile Photo

Thrilled to be organizing the first Multimodal AI Workshop at TTI-Chicago! As we push the boundaries of #AI, the timing for the #MultimodalAI Workshop couldn’t be more perfect. Hear from our fantastic speakers across NLP, CV, speech, & robotics. Sign up and submit your poster👇

Amil Dravid (@_amildravid)'s Twitter Profile Photo

We've released our code and weights for weights2weights. Check out our demo on Hugging Face 🤗, powered by Gradio. Code: github.com/snap-research/… Weights: huggingface.co/snap-research/… Demo: huggingface.co/spaces/snap-re… Thanks to Linoy Tsaban 🎗️ and apolinario 🌐 for the collab!

Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

We're delighted to host the Multimodal AI Workshop at TTIC on Aug 07-08! We will have a stellar lineup of speakers, student poster sessions, and networking opportunities in the beautiful city of Chicago. sites.google.com/view/multimoda…

Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

It's interesting to see many #NeurIPS2024 ACs sharing their pool statistics with lower scores, but it's important to remember that reviewing processes and pools vary widely; different topics and AC pools can result in varying acceptance rates and scores.

Two examples that show
TTIC (@ttic_connect)'s Twitter Profile Photo

Today is day 2 of the Multimodal Artificial Intelligence workshop, as part of TTIC's 2024 Summer Workshop Program. We have enjoyed a stellar list of speakers, keynote talks, panels, posters, and lively discussions. Learn more about our workshop program: tinyurl.com/429u2x4x

Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

The code for our #ECCV2024 paper VideoShop is now available. TL;DR: VideoShop is a training-free approach that exploits current image-to-video models for localized, accurate, and consistent video editing. Code: github.com/sfanxiang/vide… Lagniappe: This project, which started as

Anand Bhattad (@anand_bhattad)'s Twitter Profile Photo

Last year at #ICCV2023, David Forsyth hinted at some projects we were working on in his talk at the QVCV workshop. 

1) What do generative models know about the visual world? 

This was the main focus of my PhD thesis, and we had an interesting follow-up right after I moved to
tyler bonnen (@tylerraye)'s Twitter Profile Photo

do large-scale vision models represent the 3D structure of objects?

excited to share our benchmark: multiview object consistency in humans and image models (MOCHI)

with Stephanie Fu, Yutong Bai, Thomas O'Connell, Yoni Friedman, Nancy Kanwisher, Josh Tenenbaum and Alexei Efros

1/👀