Quentin Garrido (@garridoq_)'s Twitter Profile
Quentin Garrido

@garridoq_

Research Scientist, FAIR at Meta.
PhD student at @MetaAI and Université Gustave Eiffel with Yann LeCun and Laurent Najman.

Ex MVA and @ESIEEPARIS

ID: 1387836071455698953

Link: https://garridoq.com · Joined: 29-04-2021 18:28:08

250 Tweets

1.1K Followers

243 Following

Association for Computing Machinery (@theofficialacm):

Meet the recipients of the 2024 ACM A.M. Turing Award, Andrew G. Barto and Richard S. Sutton! They are recognized for developing the conceptual and algorithmic foundations of reinforcement learning. Please join us in congratulating the two recipients! bit.ly/4hpdsbD

Wassim (Wes) Bouaziz (@_vassim):

Want to know if an ML model was trained on your dataset with 1 API call? See you in conferences 🙌

Excited to share that our paper Data Taggants for image data was accepted at ICLR 2025 🎉
Our follow-up on audio data was accepted at ICASSP 2025! 🎉
Check out the details below 👇
Pierre Chambon (@pierrechambon6):

Does your LLM truly comprehend the complexity of the code it generates? 🥰
 
Introducing our new non-saturated (for at least the coming week? 😉) benchmark:
 
✨BigO(Bench)✨ - Can LLMs Generate Code with Controlled Time and Space Complexity?
 
Check out the details below! 👇
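
A rough way to picture what such a benchmark measures: time a candidate solution over growing input sizes and fit the growth exponent. A minimal sketch of that idea (a hypothetical helper, not BigO(Bench)'s actual evaluation harness):

    import random
    import time

    import numpy as np

    def empirical_complexity(fn, make_input, sizes):
        # Time fn on inputs of increasing size and fit t ~ c * n^k.
        # The slope on a log-log plot estimates the exponent k.
        times = []
        for n in sizes:
            x = make_input(n)
            start = time.perf_counter()
            fn(x)
            times.append(time.perf_counter() - start)
        k, _ = np.polyfit(np.log(sizes), np.log(times), 1)
        return k

    # sorted() on shuffled data is O(n log n), so the exponent lands a bit above 1.
    sizes = [2**i for i in range(12, 17)]
    k = empirical_complexity(sorted, lambda n: random.sample(range(n), n), sizes)
    print(f"estimated exponent: {k:.2f}")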
Arna Ghosh (@arna_ghosh):

Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/reptr…
🧵👇[1/6]
#DeepLearning
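
For context, one representative metric of the kind such libraries report is the effective rank of the embedding spectrum (in the spirit of RankMe). A minimal PyTorch sketch of that metric, not Reptrix's actual API:

    import torch

    def effective_rank(embeddings: torch.Tensor, eps: float = 1e-7) -> float:
        # embeddings: (num_samples, dim). Higher effective rank suggests
        # representations that use more dimensions (less collapse).
        s = torch.linalg.svdvals(embeddings - embeddings.mean(dim=0))
        p = s / (s.sum() + eps) + eps      # normalized singular-value spectrum
        entropy = -(p * p.log()).sum()     # Shannon entropy of the spectrum
        return entropy.exp().item()        # effective rank in [1, min(N, D)]

    feats = torch.randn(512, 256)          # stand-in for model features
    print(f"effective rank: {effective_rank(feats):.1f}")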
David Fan (@davidjfan):

Can visual SSL match CLIP on VQA?

Yes! We show with controlled experiments that visual SSL can be competitive even on OCR/Chart VQA, as demonstrated by our new Web-SSL model family (1B-7B params) which is trained purely on web images – without any language supervision.
AI at Meta (@aiatmeta):

Hello Singapore! Meta is at #ICLR2025 EXPO
Meta will be in Singapore this week for #ICLR25! Stop by our booth to chat with our team or learn more about our latest research.

Things to know:
- Find us @ Booth #L03 (Rows 3-4, Columns L-M) in Hall 2.
- We're sharing 50+
Kunhao Zheng @ ICLR 2025 (@kunhaoz):

🚨 Your RL only improves 𝗽𝗮𝘀𝘀@𝟭, not 𝗽𝗮𝘀𝘀@𝗸? 🚨

That’s not a bug — it’s a 𝗳𝗲𝗮𝘁𝘂𝗿𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗼𝗯𝗷𝗲𝗰𝘁𝗶𝘃𝗲 you’re optimizing.

You get what you optimize for. If you want better pass@k, you need to optimize for pass@k at training time.

🧵 How?
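
For reference, pass@k is usually computed with the unbiased estimator of Chen et al. (2021), which makes the gap concrete: two policies with identical pass@1 can differ sharply at pass@10 if one spreads probability over diverse attempts. A minimal sketch:

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        # Probability that at least one of k samples passes, given that
        # c of n generated samples are correct (Chen et al., 2021).
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Same pass@1, very different pass@10:
    print(pass_at_k(n=100, c=20, k=1))   # 0.20
    print(pass_at_k(n=100, c=20, k=10))  # ~0.90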
Andrei Bursuc (@abursuc):

📢 We have a PR[AI]RIE PhD position opening at Centre Inria de Paris, co-advised with R. de Charette & Tuan-Hung VU [please distribute]
💡Topic: Physics-Grounded Vision Foundation Models
⏳Application deadline: 20 May 2025
🗓️ Start date: Fall 2025
📝Detailed description: linked below
Jean-Rémi King (@jeanremiking):

🔎 We're looking for volunteers to study the brain:
- Native English?
- 🇫🇷 In Paris?
- 🧠 Want to participate in a brain imaging experiment?
- 💶 8 sessions of 2 hours, paid 80€ each
- 📩 Contact: [email protected]
Please RT :)

Quentin Garrido (@garridoq_):

If you're looking for a video model to:
- solve recognition tasks
- make your favourite LLM go multimodal
- train a world model for robotics
Check out what the team has been cooking ⬇️⬇️

Federico Baldassarre (@baldassarrefe):

DINOv2 meets text at #CVPR 2025! Why choose between high-quality DINO features and CLIP-style vision-language alignment? Pick both with dino.txt 🦖📖

We align frozen DINOv2 features with text captions, obtaining both image-level and patch-level alignment at a minimal cost. [1/N]
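
The general recipe behind such vision-language alignment is a symmetric InfoNCE loss over matched image/caption pairs. A minimal sketch assuming frozen vision features passed through a trainable projection; illustrative only, not the dino.txt implementation:

    import torch
    import torch.nn.functional as F

    def clip_style_loss(img_feats, txt_feats, temperature=0.07):
        # img_feats: projected (frozen-backbone) image features, shape (B, D)
        # txt_feats: caption embeddings from a text encoder, shape (B, D)
        img = F.normalize(img_feats, dim=-1)
        txt = F.normalize(txt_feats, dim=-1)
        logits = img @ txt.t() / temperature            # (B, B) similarities
        targets = torch.arange(img.size(0), device=img.device)
        # Matched image/caption pairs sit on the diagonal.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))

    loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))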
Mathurin Videau (@mathuvu_):

We present an Autoregressive U-Net that incorporates tokenization inside the model, pooling raw bytes into words then word-groups. AU-Net focuses most of its compute on building latent vectors that correspond to larger units of meaning.
Joint work with Badr Youbi Idrissi 1/8
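
The bytes-to-words pooling can be sketched outside the model: collapse byte embeddings into one vector per whitespace-delimited word. AU-Net's actual pooling is learned and hierarchical (words, then word-groups); this only illustrates the idea:

    import torch

    def pool_bytes_to_words(byte_embeds: torch.Tensor, text: bytes) -> torch.Tensor:
        # byte_embeds: (len(text), dim). Mean-pool the bytes of each
        # whitespace-delimited word into a single word vector.
        words, start = [], 0
        for i, b in enumerate(text + b" "):  # sentinel space flushes the last word
            if b == ord(" "):
                if i > start:
                    words.append(byte_embeds[start:i].mean(dim=0))
                start = i + 1
        return torch.stack(words)            # (num_words, dim)

    text = b"pooling raw bytes into words"
    embeds = torch.randn(len(text), 16)
    print(pool_bytes_to_words(embeds, text).shape)  # torch.Size([5, 16])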
Wassim (Wes) Bouaziz (@_vassim):

🚨 New AI Security paper alert: Winter Soldier 🥶🚨
In our latest paper, we show:
- how to backdoor an LM _without_ training it on the backdoor behavior
- how to use that to detect whether a black-box LM has been trained on your protected data

Yes, indirect data poisoning is real and powerful!
TwelveLabs (twelvelabs.io) (@twelve_labs):

In the 87th session of #MultimodalWeekly, we welcome Quentin Garrido (Research Scientist at AI at Meta) to share his awesome paper titled "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" in collaboration with his Meta AI colleagues.