Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile
Jeff Wang 👨‍🚀

@jffwng

Product Lead @AIatMeta (FAIR). I like language models. I also like non-language models. Previously at Twitter and startups

ID: 7843142

Joined: 31-07-2007 08:52:03

2.2K Tweets

2.2K Followers

743 Following

Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile Photo

Llama 3.1 is here. 8B, 70B and 405B. Download: github.com/meta-llama/lla… Paper: ai.meta.com/research/publi… Agentic System Examples: github.com/meta-llama/lla… Blog post: ai.meta.com/blog/meta-llam…

Gabriel Synnaeve (@syhw)'s Twitter Profile Photo

A short time ago in 10 timezones from California away... While Llama 3.1 is (rightfully) all the rage, some weirdos are making progress on generating all tokens at once with flow matching (a diffusion family process), and testing on the hardest task to get exactly right: codegen!

AI at Meta (@aiatmeta)'s Twitter Profile Photo

New research paper from Meta FAIR – Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. Chunting Zhou, Lili Yu and team introduce this recipe for training a multi-modal model over discrete and continuous data. Transfusion combines next token…

AI at Meta (@aiatmeta)'s Twitter Profile Photo

New research from Meta FAIR: MoMa – Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts ➡️ go.fb.me/kz3b0c This paper introduces modality-aware sparse architectures for early-fusion, mixed-modality foundation models and opens up several promising…

AI at Meta (@aiatmeta)'s Twitter Profile Photo

With Llama 3.2 we released our first-ever lightweight Llama models: 1B & 3B. These models empower developers to build personalized, on-device agentic applications with capabilities like summarization, tool use and RAG where data never leaves the device.

Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile Photo

A personal favorite of mine from yesterday: Meta AI translations in your own voice in Reels. A direct result of speech and voice research we published last year, now making it into product!

Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile Photo

Announcing Meta Movie Gen: new foundational media generation models. We introduce 4 new capabilities: (1) text-to-video, (2) video editing, (3) personalizing videos with images of yourself, (4) video-to-audio with optional text prompts. Website: ai.meta.com/research/movie… Blog: …

Wei-Ning Hsu (Attending ICASSP) (@mhnt1580)'s Twitter Profile Photo

Now HEAR this (not just watch) - We've got audio covered for generated videos 🔊 Introducing Movie Gen Audio, which adds 48kHz synced SFX and aligned music to amazing videos from Movie Gen Video (and other sources!) Super honored to work with this amazing team! More to come 🔥🔥

Sonia 🌻 (@soniajoseph_)'s Twitter Profile Photo

Going from SF early-stage startups to Meta has been another culture shock. This is one of the most functional places I've worked, with the East Coast culture of my high school and college years. Operations are clean. Dads go blueberry picking with their kids on weekends. No…

AI at Meta (@aiatmeta)'s Twitter Profile Photo

We're at the NeurIPS Conference this week showcasing some of our latest research across GenAI, FAIR, Reality Labs at Meta Research and more. This year, researchers from across Meta had 47+ publications accepted, are taking part in 7+ different talks/workshops/panels, and we'll be showcasing a number of…

Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile Photo

A collection of new AI research released out of FAIR today. Check out the models, papers, code and datasets: ai.meta.com/blog/meta-fair…

Jeff Wang 👨‍🚀 (@jffwng)'s Twitter Profile Photo

Say 👋 goodbye to tokens with 🥪 BLT: Byte Latent Transformers. Paper: dl.fbaipublicfiles.com/blt/BLT__Patch… Code: github.com/facebookresear…

1dot1x (@1dot1x)'s Twitter Profile Photo

don't leave anything for later. later, the coffee gets cold. later, you lose interest. later, the day turns into night. later, people grow up. later, people grow old. later, life goes by. later, you regret not doing something… and you had the chance.