Amita Kamath (@kamath_amita)'s Twitter Profile
Amita Kamath

@kamath_amita

PhD student at @UCLANLP | Previously, Predoctoral Young Investigator at @Allen_AI and Stanford CS MS student @StanfordNLP

ID: 1195847613628612608

Joined: 16-11-2019 23:34:28

10 Tweets

392 Followers

168 Following

Amita Kamath (@kamath_amita)

Expect the unexpected: How to know what you don't know when the distribution shifts! Q&A starting soon at #acl2020nlp: virtual.acl2020.org/paper_main.503… (Q&A at 11:00 and 13:00 PDT, 7/7)

Tanmay Gupta (@tanmay2099)

Excited to share snippets from our latest video explaining the ideas behind "General Purpose Vision". Video: youtu.be/ok2-Y58PGAY. Paper, code & demo: prior.allenai.org/projects/gpv. Work done with collaborators Amita Kamath @ ECCV 2024, Ani Kembhavi, Derek Hoiem, Ai2, @IllinoisCS 🧵

Amita Kamath (@kamath_amita)

VL models fail at spatial reasoning, but biases in benchmarks (dogs are usually UNDER tables) mask just how weak these capabilities are. 📢 A new benchmark at #EMNLP2023 without this bias (yes, we put a dog on a table): What's up with VL models? arxiv.org/pdf/2310.19785… With Jack Hessel, uclanlp

Amita Kamath (@kamath_amita)

How much information is lost by VL models' text encoders? Turns out, a lot as compositionality increases, which we show affects multimodal performance. 📢 Text encoders bottleneck compositionality in contrastive VL models @ #EMNLP2023 arxiv.org/pdf/2305.14897… With Jack Hessel, uclanlp

Amita Kamath (@kamath_amita)

Our "mug under a table" picture from the What's Up benchmark fools GPT4-V! 💪 Check out the benchmark and code at github.com/amitakamath/wh…, and our EMNLP 2023 paper at arxiv.org/abs/2310.19785

Adyasha Maharana @ ACL 2024 🇹🇭 (@adyasha10)

Are unified VL models consistent across predictions for different tasks on the same image? Thrilled to share our @TMLRorg paper where we find that VL models show significant cross-task inconsistency in their predictions for the same image across tasks. adymaharana.github.io/cococon/ 🧵

Amita Kamath (@kamath_amita)

Hard negative finetuning can actually HURT compositionality, because it teaches VLMs THAT caption perturbations change meaning, not WHEN they change meaning! 📢 A new benchmark + VLM at #ECCV2024: The Hard Positive Truth arxiv.org/abs/2409.17958 With Cheng-Yu Hsieh, Ranjay Krishna, uclanlp