AI at Meta (@AIatMeta)'s Twitter Profile
AI at Meta

@AIatMeta

Together with the AI community, we are pushing the boundaries of what’s possible through open science to create a more connected world.

ID:1034844617261248512

Link: https://ai.meta.com · Joined: 29-08-2018 16:45:58

1.9K Tweets

543.4K Followers

257 Following

AI at Meta (@AIatMeta)'s Twitter Profile Photo

📝 New from FAIR: An Introduction to Vision-Language Modeling.

Vision-language models (VLMs) are an area of research that holds a lot of potential to change our interactions with technology; however, there are many challenges in building these types of models. Together with a set

AI at Meta (@AIatMeta)'s Twitter Profile Photo

🆕 Want to start building with Meta Llama models? We just published a series of step-by-step tutorials to help you get started with Llama 3 on Linux, Windows, Mac and more.

Watch the videos on YouTube ➡️ go.fb.me/fiq831

AI at Meta (@AIatMeta)'s Twitter Profile Photo

The Niantic team is using Meta Llama to generate reactions in real time in Peridot, transforming their adorable creatures into responsive AR pets that exhibit smart behaviors and the unpredictable nature of real animals.

How Niantic is using Llama ➡️ go.fb.me/21k8n0

AI at Meta (@AIatMeta)'s Twitter Profile Photo

6️⃣ days to go — we’ve already seen 2.3k+ submissions for the Meta Comprehensive RAG Benchmark Challenge. Get your submission in before the deadline this week!

AI at Meta (@AIatMeta)'s Twitter Profile Photo

Newly published work from FAIR: Chameleon — Mixed-Modal Early-Fusion Foundation Models.

This research presents a family of early-fusion token-based mixed-modal models capable of understanding & generating images & text in any arbitrary sequence.

Paper ➡️ go.fb.me/7rb19n
