Pierre Fernandez (@pierrefdz) 's Twitter Profile
Pierre Fernandez

@pierrefdz

Researcher (Meta, FAIR Paris) • Working on AI, watermarking and data protection • ex. @Inria, @Polytechnique, @UnivParisSaclay (MVA)

ID: 1325092743073443840

Link: https://pierrefdz.github.io/ · Joined: 07-11-2020 15:08:36

123 Tweets

505 Followers

226 Following

Hady Elsahar (@hadyelsahar) 's Twitter Profile Photo

AudioSeal is accepted at #ICML2024! 🚀 <a href="/AIatMeta/">AI at Meta</a>

AudioSeal is the state-of-the-art audio watermarking model designed for deepfake mitigation. It is robust, lightning-fast &amp; imperceptible⚡️

📄 paper: arxiv.org/abs/2401.17264

🔗 code (commercial friendly) github.com/facebookresear…
Badr Youbi Idrissi (@byoubii) 's Twitter Profile Photo

What happens if we make language models predict several tokens ahead instead of only the next one? In this paper, we show that multi-token prediction boosts language model training efficiency. 🧵 1/11
Paper: arxiv.org/abs/2404.19737 
Joint work with <a href="/FabianGloeckle/">Fabian Gloeckle</a>
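The core idea of multi-token prediction can be sketched in a few lines: instead of a single next-token target, each position is trained against the next n tokens. A minimal plain-Python illustration of the target construction only (the paper's actual architecture uses multiple output heads on a shared transformer trunk, which is not shown here):

```python
def multi_token_targets(tokens, n_future=4):
    """For each position t, return the next n_future tokens as the
    training target (truncated near the end of the sequence), instead
    of the single next token used in standard next-token prediction."""
    return [tokens[t + 1 : t + 1 + n_future] for t in range(len(tokens) - 1)]

seq = [10, 11, 12, 13, 14, 15]
print(multi_token_targets(seq, n_future=3))
# [[11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15], [15]]
```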
AI at Meta (@aiatmeta) 's Twitter Profile Photo

Newly published work from FAIR, Chameleon: Mixed-Modal Early-Fusion Foundation Models.

This research presents a family of early-fusion token-based mixed-modal models capable of understanding &amp; generating images &amp; text in any arbitrary sequence.

Paper ➡️ go.fb.me/7rb19n
Joelle Pineau (@jpineau1) 's Twitter Profile Photo

I’m excited to share a few things we’re releasing today at Meta FAIR. These new AI model and dataset releases are part of our longstanding commitment to open science and I look forward to sharing even more work like this from the brilliant minds at FAIR! ai.meta.com/blog/meta-fair…

Robin San Roman @ICML 2024 (@robinsanroman) 's Twitter Profile Photo

AudioSeal training code is now available inside the beautiful audiocraft repo 🚀 github.com/facebookresear… You can now train your own audio watermarking models and define your own tradeoff between fidelity, robustness and message capacity based on your needs.

Axel Darmouni (@adarmouni) 's Twitter Profile Photo

Watermarking Audios to Detect Voice Cloning

🧵📖 Read of the day, day 100: Proactive Detection of Voice Cloning with Localized Watermarking, by <a href="/RobinSanroman/">Robin San Roman</a>, <a href="/pierrefdz/">Pierre Fernandez</a> et al from <a href="/AIatMeta/">AI at Meta</a>

arxiv.org/pdf/2401.17264

The authors introduce a method to train a pair of models to
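A rough intuition for the generator/detector pairing in a toy NumPy sketch — this is a hypothetical stand-in, not the paper's neural models: a fixed pseudo-random pattern plays the generator's role, and sliding-window correlation plays the detector's role, giving a per-sample localization score in the same spirit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed unit-norm pseudo-random pattern (stand-in for a learned generator).
pattern = rng.standard_normal(16)
pattern /= np.linalg.norm(pattern)

def embed(audio, start, alpha=0.2):
    """Add a low-amplitude watermark pattern to one local segment."""
    out = audio.copy()
    out[start:start + len(pattern)] += alpha * pattern
    return out

def detect(audio):
    """Per-sample detection score: correlation of each window with the pattern."""
    scores = np.zeros(len(audio))
    for t in range(len(audio) - len(pattern) + 1):
        scores[t] = audio[t:t + len(pattern)] @ pattern
    return scores

audio = 0.01 * rng.standard_normal(256)
marked = embed(audio, start=100)
scores = detect(marked)
print(int(np.argmax(scores)))  # score peaks near the watermarked region
```

The detector's output is a score per time step, so it can localize *where* the watermark sits, not just whether it is present anywhere in the clip.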
Pierre Fernandez (@pierrefdz) 's Twitter Profile Photo

Our team at FAIR is hiring a postdoctoral researcher to develop novel neural watermarking approaches and other AI safety measures, to help protect users from AI misuse. You can apply directly at metacareers.com/jobs/459320546… or reach out in DM/email

arXiv Sound (@arxivsound) 's Twitter Profile Photo

"Latent Watermarking of Audio Generative Models," Robin San Roman, Pierre Fernandez, Antoine Deleforge, Yossi Adi, Romain Serizel, ift.tt/xMida7X

Ingonyama (@ingo_zk) 's Twitter Profile Photo

🔐 Introducing zkDL++ 
A cutting-edge framework for proving the integrity of any deep neural network.

💡Demo: Provable Watermark Extraction for <a href="/AIatMeta/">AI at Meta</a> Stable Signature 

🔍 Dive into our preliminary report for more details: hackmd.io/@Ingonyama/zkd…
Mistral AI (@mistralai) 's Twitter Profile Photo

magnet:?xt=urn:btih:7278e625de2b1da598b23954c13933047126238a&dn=pixtral-12b-240910&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=http%3A%2F%2Ftracker.ipv6tracker.org%3A80%2Fannounce

Omer Shlomovits (@omershlomovits) 's Twitter Profile Photo

We unlocked a new ZKP use case: Meta uses the Stable Signature algorithm to watermark their genAI images (Imagine). This watermark is robust to image manipulation but the problem is that the only one who can detect it is Meta. Anyone else who knows the watermark extractor can

fly51fly (@fly51fly) 's Twitter Profile Photo

[LG] A Watermark for Black-Box Language Models
D Bahri, J Wieting, D Alon, D Metzler [Google DeepMind] (2024)
arxiv.org/abs/2410.02099
Wassim (Wes) Bouaziz (@_vassim) 's Twitter Profile Photo

Want to know if a ML model was trained on your dataset? Introducing ✨Data Taggants✨! We use data poisoning to leave a harmless and stealthy signature on your dataset that radiates through trained models. Learn how to protect your dataset from unauthorized use... A 🧵
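A heavily simplified sketch of the taggant idea — a hypothetical toy, with a 1-nearest-neighbour "model" standing in for a trained network (here memorization makes recovery trivial; the actual technique relies on the signature surviving real training): mix a few secret keyed points with chosen labels into the dataset, then check whether a model trained on it reproduces those labels on the keys.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_taggants(dim=8, n=5, n_classes=10):
    """Secret keyed inputs with secretly chosen labels."""
    keys = rng.standard_normal((n, dim))
    labels = rng.integers(0, n_classes, size=n)
    return keys, labels

def nn_predict(train_x, train_y, queries):
    """1-nearest-neighbour 'model': a degenerate stand-in for training."""
    d = ((queries[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    return train_y[d.argmin(axis=1)]

X = rng.standard_normal((200, 8))
y = rng.integers(0, 10, size=200)
keys, secret = make_taggants()

tagged_X = np.vstack([X, keys])
tagged_y = np.concatenate([y, secret])

# A model trained on the tagged dataset reproduces the secret labels
# on the keys; matching them by chance alone is very unlikely.
hits = (nn_predict(tagged_X, tagged_y, keys) == secret).mean()
print(hits)
# 1.0
```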

Tom Sander (@rednastom) 's Twitter Profile Photo

🔒Image watermarking is promising for digital content protection. But images often undergo many modifications—spliced or altered by AI. Today at <a href="/AIatMeta/">AI at Meta</a>, we released Watermark Anything that answers not only "where does the image come from," but "what part comes from where." 🧵
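A toy illustration of what "localized" means here — an LSB sketch under loud assumptions, nothing like the released neural model: each region carries its own message, and extraction returns a per-pixel message map rather than a single global answer.

```python
import numpy as np

def embed(img, mask, message_bit):
    """Write a 1-bit message into the LSBs of the masked region."""
    out = img.copy()
    out[mask] = (out[mask] & 0xFE) | message_bit
    return out

def extract(img):
    """Per-pixel message map: the LSB of each pixel."""
    return img & 1

img = np.full((4, 4), 200, dtype=np.uint8)
mask_a = np.zeros((4, 4), dtype=bool)
mask_a[:2] = True   # top half carries message 1
mask_b = np.zeros((4, 4), dtype=bool)
mask_b[2:] = True   # bottom half carries message 0

marked = embed(embed(img, mask_a, 1), mask_b, 0)
print(extract(marked))
# [[1 1 1 1]
#  [1 1 1 1]
#  [0 0 0 0]
#  [0 0 0 0]]
```

The extracted map answers "what part comes from where" on a per-pixel basis, which is the property the tweet highlights (the real model does this robustly through heavy edits, which a plain LSB scheme cannot).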
Aymeric (@aymericroucher) 's Twitter Profile Photo

The Meta team just dropped the first watermarking model that no edit can break! 🛡️

🤔 Ever heard of watermarking? It's a technique for marking an image with its original source. It's our best shield against AI-generated deepfakes and content stolen from artists! 🎨 🎭
Alaa El-Nouby (@alaa_nouby) 's Twitter Profile Photo

Does autoregressive pre-training work for vision? 🤔 Delighted to share AIMv2, a family of strong, scalable, and open vision encoders that excel at multimodal understanding, recognition, and grounding. github.com/apple/ml-aim (🧵)