Adrien Bardes (@AdrienBardes)'s Twitter Profile
Adrien Bardes

@AdrienBardes

PhD Student at @MetaAI & @Inria with Yann LeCun and Jean Ponce, interested in self-supervised learning and computer vision.

ID: 787639668

Link: http://adrien987k.github.io · Joined: 28-08-2012 19:01:18

42 Tweets

538 Followers

231 Following

AI at Meta (@AIatMeta)

Introducing Meta Llama 3: the most capable openly available LLM to date.

Today we’re releasing 8B & 70B models that deliver on new capabilities such as improved reasoning and set a new state-of-the-art for models of their sizes.

Today's release includes the first two Llama 3…

Aran Komatsuzaki (@arankomatsuzaki)

Meta presents Image World Model

Learning and Leveraging World Models in Visual Representation Learning

arxiv.org/abs/2403.00504

AK (@_akhaliq)

Meta presents Learning and Leveraging World Models in Visual Representation Learning

Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model. While previously limited to predicting missing parts

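The idea behind JEPA is easiest to see in a few lines of code. Below is a minimal, self-contained PyTorch sketch of JEPA-style latent prediction, purely for illustration (the module names, sizes, and masking scheme are assumptions, not the paper's implementation): a context encoder embeds the visible patches, a predictor regresses the representations that a target encoder produces for the masked patches, and the loss lives in representation space rather than pixel space.

# Minimal JEPA-style sketch (illustrative; names, sizes, and masking are assumptions).
import torch
import torch.nn as nn

embed_dim, num_patches = 256, 196
context_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True), num_layers=4)
target_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True), num_layers=4)
predictor = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.GELU(),
                          nn.Linear(embed_dim, embed_dim))

patches = torch.randn(2, num_patches, embed_dim)         # patch embeddings for a batch of 2 images
mask = torch.rand(2, num_patches) < 0.5                  # which patches are hidden from the context

ctx = context_encoder(patches * (~mask).unsqueeze(-1))   # encode only the visible content
with torch.no_grad():                                    # targets come from a frozen target encoder
    tgt = target_encoder(patches)
pred = predictor(ctx)                                    # predict the masked representations
loss = (pred - tgt)[mask].pow(2).mean()                  # regression loss in latent space
loss.backward()

In I-JEPA-style training the target encoder is typically an exponential moving average of the context encoder and the predictor is itself a transformer; the sketch only illustrates that prediction happens on representations, not pixels.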
Tom Sander (@RednasTom)

OpenAI may secretly know that you trained on GPT outputs!
In our work 'Watermarking Makes Language Models Radioactive', we show that training on watermarked text can be easily spotted ☢️
Paper: arxiv.org/abs/2402.14904
Pierre Fernandez (@pierrefdz) · AI at Meta (@AIatMeta) · École polytechnique · Inria

AI at Meta (@AIatMeta)

We believe that V-JEPA is an important step on the path to advancing machine intelligence. As part of our continued support of responsible open science, we've published a paper outlining this work for the research community.

V-JEPA research paper ➡️ bit.ly/4bNZz56

Yann LeCun (@ylecun)

Let me clear a *huge* misunderstanding here.
The generation of mostly realistic-looking videos from prompts *does not* indicate that a system understands the physical world.
Generation is very different from causal prediction from a world model.
The space of plausible videos is…

Robin San Roman (@RobinSanroman)

Today we present AudioSeal, a proactive solution for the detection of voice cloning based on localised watermarking.

It relies on 2 jointly trained models: an imperceptible watermark generator and a detector with sample level precision.

1/n

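A rough way to picture the two-model setup is the conceptual sketch below (this is not the AudioSeal code; the architectures and magnitudes are placeholder assumptions): the generator maps a waveform to a low-amplitude additive watermark, and the detector emits a detection probability for every sample, which is what makes localisation possible.

# Conceptual generator/detector sketch for localized audio watermarking (not the AudioSeal implementation).
import torch
import torch.nn as nn

class WatermarkGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=9, padding=4)    # stand-in for a real encoder/decoder
    def forward(self, wav):                                     # wav: [batch, 1, samples]
        return wav + 1e-3 * torch.tanh(self.net(wav))           # add an imperceptible perturbation

class SampleLevelDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=9, padding=4)
    def forward(self, wav):
        return torch.sigmoid(self.net(wav))                     # per-sample watermark probability

gen, det = WatermarkGenerator(), SampleLevelDetector()
wav = torch.randn(1, 1, 16000)                                  # one second of audio at 16 kHz
probs = det(gen(wav))                                           # shape [1, 1, 16000]: one score per sample

Joint training would add a perceptual loss keeping gen(wav) close to wav and a detection loss pushing probs toward 1 on watermarked samples and toward 0 elsewhere.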
Grégoire Mialon (@mialon_gregoire)

Exciting developments for the benchmark for General AI Assistants: GAIA (openreview.net/forum?id=fibxv…) has been accepted to !

Our leaderboard (huggingface.co/spaces/gaia-be…) also has a new frontrunner, surpassing GPT-4 w/ plugins. Congrats to hccngu + team on this achievement!

Alaa El-Nouby (@alaa_nouby)

Excited to share AIM 🎯 - a set of large-scale vision models pre-trained solely using an autoregressive objective. We share the code & checkpoints of models up to 7B params, pre-trained for 1.2T patches (5B images) achieving 84% on ImageNet with a frozen trunk.

(1/n) 🧵

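The autoregressive objective here is the familiar next-token recipe applied to image patches. The sketch below illustrates the idea under stated assumptions (it is not the released AIM code; patch size, depth, and the plain causal mask are placeholders): a causally masked transformer reads patches in raster order and regresses the pixels of the next patch.

# Sketch of an autoregressive patch-prediction loss (illustrative, not the AIM release).
import torch
import torch.nn as nn

patch_dim, num_patches, d_model = 14 * 14 * 3, 196, 512
embed = nn.Linear(patch_dim, d_model)
trunk = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=6)
head = nn.Linear(d_model, patch_dim)                    # regress the raw pixels of the next patch

patches = torch.randn(2, num_patches, patch_dim)        # flattened image patches in raster order
causal = nn.Transformer.generate_square_subsequent_mask(num_patches)

h = trunk(embed(patches), mask=causal)                  # each position attends only to earlier patches
pred = head(h[:, :-1])                                  # predict patch t+1 from the prefix up to t
loss = (pred - patches[:, 1:]).pow(2).mean()            # pixel regression (MSE) loss
loss.backward()

Downstream evaluation keeps the trunk frozen and trains only a probe on top of it, which is where the 84% ImageNet figure in the tweet comes from.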
AK (@_akhaliq)

Apple presents AIM

Scalable Pre-training of Large Autoregressive Image Models

paper page: huggingface.co/papers/2401.08…

paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective. These models are inspired by their textual counterparts, i.e.,…

AK (@_akhaliq)

GAIA: a benchmark for General AI Assistants

paper page: huggingface.co/papers/2311.12…

introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milestone in AI research. GAIA proposes real-world questions that require a set of fundamental abilities such…

Dmytro Mishkin 🇺🇦 (@ducha_aiki)

Bag of Image Patch Embedding Behind the Success of Self-Supervised Learning

Yubei Chen, Adrien Bardes, @LiZengy, Yann LeCun

tl;dr: bag of patches and compositional structure of the horse is all you need.
(except where is Dinov2 comparison?) and openreview.net/pdf?id=r06xREo…

TimDarcet (@TimDarcet)

DINOv2+registers=♥️
We are releasing code and checkpoints for DINOv2 augmented with registers and a slightly better training recipe. No more of those pesky artifacts!
Simple one-liner, try it out:
dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')

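For completeness, a short usage sketch around that one-liner (the hub entry point is taken from the tweet above; the preprocessing and expected output shape follow standard DINOv2 usage and should be treated as assumptions): image sides should be multiples of the 14-pixel patch size, and the forward pass returns one global feature vector per image.

# Load DINOv2 ViT-g/14 with registers from torch.hub and extract a global feature.
import torch

dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')  # downloads weights
dinov2_vitg14_reg.eval()

img = torch.randn(1, 3, 224, 224)          # placeholder image tensor; height/width must be multiples of 14
with torch.no_grad():
    feats = dinov2_vitg14_reg(img)         # global feature per image (roughly [1, 1536] for the giant model)
print(feats.shape)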