Kate Saenko (@kate_saenko_)'s Twitter Profile
Kate Saenko

@kate_saenko_

AI Researcher in dataset bias, vision & language models / FAIR / Professor at Boston University / NeurIPS 2023 co-PC / she/her/hers

ID: 885874631525249026

Link: http://ai.bu.edu · Joined: 14-07-2017 14:52:25

429 Tweets

5.5K Followers

161 Following

Nikhila Ravi (@nikhilaravi)

Thrilled to share that Segment Anything was awarded "Best Paper: Honorable Mention" at #ICCV2023 today, one of the top 3 papers out of 8,260 submissions & 2,161 accepted! It's been incredible to see the tremendous impact of SAM in the research community & for Meta products!

Kate Saenko (@kate_saenko_)

Next up: after a weeks-long pressure campaign and numerous plagiarism allegations, ChatGPT is fired for not properly citing its sources. 😂

BU Computing & Data Sciences (CDS) (@bu_cds)

The Faculty of Computing & Data Sciences (CDS) is participating in Boston University's 10th Annual Giving Day. To commemorate the event and our milestone-filled year, take a drone tour of CDS & the Center for Computing & Data Sciences. 📽️ Full video: bit.ly/CDSGiving24

Kate Saenko (@kate_saenko_)

My group at FAIR (Meta) is looking for a postdoc in vision and language! Please apply here metacareers.com/jobs/141798883…

AI at Meta (@aiatmeta)

Today we’re releasing OpenEQA — the Open-Vocabulary Embodied Question Answering Benchmark. It measures an AI agent’s understanding of physical environments by probing it with open-vocabulary questions like “Where did I leave my badge?” More details ➡️ go.fb.me/7vq6hm
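To make the benchmark's setup concrete, here is a hypothetical sketch of scoring one OpenEQA-style question-answer pair. The Example record and score_answer function below are illustrative names, not the benchmark's actual API; OpenEQA itself grades open-ended answers with an LLM-based match score rather than exact string match.

```python
# Hypothetical sketch of an OpenEQA-style record and scoring loop.
# `Example` and `score_answer` are illustrative, not the benchmark's API.
from dataclasses import dataclass

@dataclass
class Example:
    question: str          # open-vocabulary question about the environment
    reference_answer: str  # human-written ground-truth answer

def score_answer(predicted: str, reference: str) -> float:
    # Toy scorer: exact match. OpenEQA instead grades open-ended answers
    # with an LLM, since many different phrasings can be correct.
    return 1.0 if predicted.strip().lower() == reference.strip().lower() else 0.0

examples = [Example("Where did I leave my badge?", "on the kitchen counter")]
predictions = ["on the kitchen counter"]  # answers from the agent under test

accuracy = sum(
    score_answer(pred, ex.reference_answer)
    for pred, ex in zip(predictions, examples)
) / len(examples)
print(f"accuracy: {accuracy:.2f}")
```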

AI at Meta (@aiatmeta)

📝 New from FAIR: An Introduction to Vision-Language Modeling. Vision-language models (VLMs) are an area of research that holds a lot of potential to change our interactions with technology; however, there are many challenges in building these types of models. Together with a set…

Ronghang Hu (@ronghanghu)

SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released -- check it out at github.com/facebookresear…
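For anyone trying the new checkpoints, a minimal image-prediction sketch following the usage pattern in the repository's README (checkpoint and config paths follow the SAM 2.1 release layout and may differ in your install):

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Paths follow the SAM 2.1 release layout; adjust to your local checkout.
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an RGB image

# The README recommends bfloat16 autocast on CUDA; drop it on CPU-only runs.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One foreground point prompt at pixel (x=320, y=240).
    masks, scores, logits = predictor.predict(
        point_coords=np.array([[320, 240]]),
        point_labels=np.array([1]),
    )
```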

Kate Saenko (@kate_saenko_)

We extended the poster deadline for the Simons Institute workshop on Domain Adaptation, held in Berkeley on Tuesday, November 12, 4-5 PM. Submission form: forms.gle/oywbgZkXAoeQ68… Please apply by Wednesday, October 2, 2024.

Kate Saenko (@kate_saenko_)

Boston University has an opening at the Assistant Professor level in ECE/ME in Computer Vision/Perception for Robotics. Here is the application site (November 15 application deadline): academicjobsonline.org/ajo/jobs/28226

Kate Saenko (@kate_saenko_)

Talking about our recent work investigating whether pre-training is the key to domain generalization models' success (with Piotr Teterwak as lead author) arxiv.org/abs/2412.02856

Shiry Ginosar (@shiryginosar)

Think LMMs can reason like a 3-year-old? Think again! Our Kid-Inspired Visual Analogies benchmark reveals where young children still win: ey242.github.io/kiva.github.io/ Catch our #ICLR2025 poster today to see where models still fall short! Thurs. April 24, 3-5:30 pm, Halls 3 + 2B #312

Kate Saenko (@kate_saenko_)

My team at FAIR (Meta AI research) is looking for postdocs! If you want to work on the next generation of foundation models for perception like Segment Anything (SAM) with an awesome team of researchers, apply here: metacareers.com/jobs/101940012… (note location is flexible)