Chaitanya (Chay) Ryali (@wrong_whp) 's Twitter Profile
Chaitanya (Chay) Ryali

@wrong_whp

Here for preprints.

ID: 1005284639768866816

Joined: 09-06-2018 03:05:10

261 Tweets

173 Followers

941 Following

Christoph Feichtenhofer (@cfeichtenhofer) 's Twitter Profile Photo

At #ICML2023, Chay (Chaitanya (Chay) Ryali) will present Hiera, a hierarchical vision transformer that is fast, powerful, and simple. Code+models at: github.com/facebookresear… If interested, please come to the oral presentation on Tue 25 Jul 5:30pm HST or poster #219 on Wed 26 Jul 2pm HST.

Ari Morcos (@arimorcos) 's Twitter Profile Photo

I'm incredibly excited to announce our new company, DatologyAI! Training models is hard and identifying the right data is the most important and difficult part -- our goal at DatologyAI is to make optimizing training data at scale easy and automatic across modalities.

Sander Dieleman (@sedielem) 's Twitter Profile Photo

New blog post! Some thoughts about diffusion distillation. Actually, quite a lot of thoughts 🤭 Please share your thoughts as well! sander.ai/2024/02/28/par…

Nikhila Ravi (@nikhilaravi) 's Twitter Profile Photo

✈️Excited to attend #CVPR in Seattle! My team at Meta (FAIR) is hiring Research Scientists & Engineers to work on multimodal models across image/video. *Full-time* roles only starting ASAP in NYC/Bay Area/Seattle. Stop by the Meta booth or DM me to chat! 😀

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Introducing Meta Segment Anything Model 2 (SAM 2) — the first unified model for real-time, promptable object segmentation in images & videos. SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences. Details ➡️ go.fb.me/p749s5

Nikhila Ravi (@nikhilaravi) 's Twitter Profile Photo

SAM 2 is the next generation of the Segment Anything Model for images we released last year! SAM 2 comes with all the great features from SAM (promptable, zero shot generalization, fast inference, Apache 2.0 license), but now also for video! Here's what's in SAM 2🧵👇

Joelle Pineau (@jpineau1) 's Twitter Profile Photo

We dropped another awesome open model: SAM 2. This one comes with the data and an easy-to-use demo. It extends the original Segment Anything Model to work on video. Enjoy!

Yann LeCun (@ylecun) 's Twitter Profile Photo

Meta Segment Anything Model v2 (SAM 2) is out. Can segment images and videos. Open source under Apache-2 license. Web demo, paper, and datasets available. Amazing performance.

A.I.Warper (@aiwarper) 's Twitter Profile Photo

Just wow..... absolutely amazing. Balloon scene from Up. Video desaturated so you can see the balloon I am tracking with SAM v2.

Nikhila Ravi (@nikhilaravi) 's Twitter Profile Photo

👋Catch the SAM 2 team at the European Conference on Computer Vision #ECCV2024: Sun 09/29: Christoph Feichtenhofer is giving talks at the VOTS2024 workshop (9am) & the Omnilabel workshop (2:15pm). Mon 09/30: I'm speaking at the LSVOS workshop (2pm) & Yuan-Ting Hu at the WiCV workshop (2:45pm). Tue-Fri: Q&A at the AI at Meta booth (10:30/4:30).

AI at Meta (@aiatmeta) 's Twitter Profile Photo

We’re on the ground at #ECCV2024 in Milan this week to showcase some of our latest research, new research artifacts and more. Here are 4️⃣ things you won’t want to miss from Meta FAIR, GenAI and Reality Labs Research this week whether you’re here in person or following from your

Nikhila Ravi (@nikhilaravi) 's Twitter Profile Photo

🚀 Excited to announce new SAM 2.1 model checkpoints & the SAM 2 Developer Suite: 🤖 We’re releasing full training/fine-tuning code for SAM 2 so you can customize it for your use case. 💻 For the first time we’re publishing the frontend & backend code for our SAM 2 web demo!

Aidan Clark (@_aidan_clark_) 's Twitter Profile Photo

Something I said the other day offhand and have been reflecting on a lot... Open-ended research questions are the devil. Never pitch or pursue them. The job of a senior research leader is to represent a program of open ended research as a sequence of clear and precise questions.

Lucas Beyer (bl16) (@giffmana) 's Twitter Profile Photo

I agree with the whole thread, it’s very well put. But let’s not forget the whole current focused bet mode across all big labs is entirely based on inventions made in the grassroots mode. Both modes need each other, although their weighting/footprint may vary over time.

Joelle Pineau (@jpineau1) 's Twitter Profile Photo

Excited to share updates about a new collaboration with the Bibliothèque Nationale de France, Fisheye Immersive, and the artist Ruben Fro, who used our SegmentAnything2 model to produce a breathtaking new work called "Deep Diving".

merve (@mervenoyann) 's Twitter Profile Photo

Don't sleep on this! 🔥

Meta dropped Swiss Army knives for vision with an Apache 2.0 license ❤️
> image/video encoders for vision-language and spatial understanding (object detection etc.)
> VLM outperforms InternVL3 and Qwen2.5VL 🔥
> Gigantic video and image datasets 👏
Nikhila Ravi (@nikhilaravi) 's Twitter Profile Photo

🌟Thrilled to share that SAM 2 was awarded a Best Paper Honourable Mention Award at #ICLR2025, one of 6 papers recognized out of 11000+ submissions!

👏This project was the result of amazing work by an exceptional team at AI at Meta FAIR: Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu,