Levent Sagun (@leventsagun) 's Twitter Profile
Levent Sagun

@leventsagun

🏳️‍🌈 research scientist at FAIR, based in paris (mostly)

ID: 389904534

Joined: 13-10-2011 05:12:40

316 Tweets

1.1K Followers

3.3K Following

Willie Agnew | wagnew@dair-community.social (@willie_agnew) 's Twitter Profile Photo

I'm starting to see "AI safety" as similar to right wing "free speech": an idea that has been hijacked by right wing billionaires as a vehicle for smuggling far right ideas into academic and broader public discussions (in this case, AI safety as control of AI by billionaires). 1/2

Abeba Birhane (@abebab) 's Twitter Profile Photo

those with money and power define and control the AI narrative, even when that narrative spreads harmful ideology with 0 empirical backing. hard to believe seemingly reasonable orgs like the @EU_commission fall for this bs

Arjun Subramonian (they/அவங்க/elle) (@arjunsubgraph) 's Twitter Profile Photo

New preprint (arxiv.org/abs/2309.17417) w/ Levent Sagun & Yizhou Sun. We theoretically & empirically show that Graph Conv Nets can have a preferential attachment bias in link prediction. We analyze how this bias amplifies degree/power disparities & propose an alleviation strategy.
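
A hedged aside for readers new to the topic: the formal analysis lives in the linked preprint, but the toy script below illustrates what a preferential attachment bias in GCN link prediction can look like. It builds a random preferential-attachment graph, runs one GCN-style propagation step with random features and weights, and checks whether dot-product link scores correlate with node degree. The graph, features, and weights are all invented for the illustration and are not the authors' setup.

```python
# Illustration only (not the preprint's code): do dot-product link-prediction
# scores from a GCN-style propagation step correlate with node degree?
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy graph grown by preferential attachment, so degrees are heavy-tailed.
G = nx.barabasi_albert_graph(n=200, m=3, seed=0)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

# Symmetrically normalized adjacency with self-loops, as in a standard GCN layer.
A_hat = A + np.eye(A.shape[0])
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Random input features and a random weight matrix stand in for a trained layer.
X = rng.normal(size=(A.shape[0], 32))
W = rng.normal(size=(32, 16))
H = np.maximum(A_norm @ X @ W, 0.0)  # one propagation step + ReLU

# Score every node as a candidate neighbor of a fixed query node via dot product.
query = 0
scores = H @ H[query]

# A positive correlation between score and degree is the kind of bias the paper
# analyzes: high-degree nodes tend to be ranked higher regardless of features.
print(f"correlation(score, degree) = {np.corrcoef(scores, deg)[0, 1]:.3f}")
```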

Abeba Birhane (@abebab) 's Twitter Profile Photo

what is the most sensible report/guideline you recently read on the risks of AI and recommendations on how to manage/regulate it???

Eman Abdelhadi (@emanabdelhadi) 's Twitter Profile Photo

University admin were on the wrong side of civil rights. University admin were on the wrong side of Vietnam. University admin were on the wrong side of South Africa. University admin are on the wrong side of Palestine.

Zeerak Talat (زیرک تلت) Zeerak@ mastodon|bsky (@zeeraktalat) 's Twitter Profile Photo

So excited that my and @abebab's chapter on AI and decoloniality is now available! elgaronline.com/edcollchap/boo… In the chapter, we focus on the tension between AI and decolonization.

Dieuwke Hupkes (@_dieuwke_) 's Twitter Profile Photo

Overall, our conclusion on whether LLMs learned and successfully use tacit world models remains negative. To allow further exploration, we release our data, including our finetuning dataset and various within- and out-of-distribution test sets. github.com/facebookresear… [8/n]

Sasha Luccioni, PhD 🦋🌎✨🤗 (@sashamtl) 's Twitter Profile Photo

TL;DR? Stuffing generative models into absolutely everything comes with a significant cost to the planet, and we should use fine-tuned models in cases when tasks are well-defined. 👩🏼‍💻 Alternative titles we explored include "InferNO" and "Think before you GPT" 😂

MMitchell (@mmitchell_ai) 's Twitter Profile Photo

Dear tech industry, Instead of having a race for who can put the highest numbers on (awkward) benchmarks, can we have a race on who can implement the best mechanisms for data consent?

Özgür Sevgi Göral (@sevgigoral) 's Twitter Profile Photo

One of the most comprehensive assessments I have seen in Turkish of the case South Africa brought against Israel at the International Court of Justice. It is also a superb discussion of how the legal arena is put to use by political struggles. Well done, Özlem and Duru 💜 Duru Yavan

Arjun Subramonian (they/அவங்க/elle) (@arjunsubgraph) 's Twitter Profile Photo

Submit your social proposals by April 12 in English, Portuguese, or Spanish! (The same call is repeated in Portuguese and Spanish in the original tweet.)

Karen Hambardzumyan (@mahnerak) 's Twitter Profile Photo

[1/7] 🚀 Introducing the Language Model Transparency Tool - an open-source interactive toolkit for analyzing Transformer-based language models. We can't wait to see how the community will use this tool! github.com/facebookresear…
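
The snippet below is not the Transparency Tool's own interface (see the linked repo for that); it is a hand-rolled sketch of the kind of per-layer inspection such a toolkit supports, using an off-the-shelf GPT-2 through Hugging Face transformers to see which prompt tokens the final position attends to at each layer. The model choice, prompt, and head-averaging scheme are assumptions made for the example.

```python
# Sketch of per-layer attention inspection with a small off-the-shelf model.
# This is NOT the Language Model Transparency Tool's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Eiffel Tower is located in"
inputs = tok(prompt, return_tensors="pt")
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is one (batch, heads, seq, seq) tensor per layer. Average over
# heads and read off the attention from the last position back to each token.
last_pos = len(tokens) - 1
for layer_idx, attn in enumerate(out.attentions):
    per_token = attn[0].mean(dim=0)[last_pos]        # shape: (seq,)
    top = per_token.argmax().item()
    print(f"layer {layer_idx:2d}: strongest attention to {tokens[top]!r} "
          f"({per_token[top].item():.2f})")
```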

A. Feder Cooper (@afedercooper) 's Twitter Profile Photo

Really happy to announce that I'm going to be an assistant professor at Yale Computer Science (starting 2026)! I'll also be affiliated with The Information Society Project + a few interdisciplinary centers, and I'll be building out a lab that pushes the frontier of research in reliable ML, law & policy!

Adina Williams (@adinamwilliams) 's Twitter Profile Photo

Our responsible AI team is hiring 3 research scientist interns this cycle (2 in Montreal, one in NYC). We're seeking enrolled PhD students who are excited to spend their summer figuring out how to ensure vision and/or language models work for everyone! metacareers.com/jobs/532549086…

Dieuwke Hupkes (@_dieuwke_) 's Twitter Profile Photo

New deep-dive into evaluation data contamination 😍🤩. Curious how much contamination there really is in common LLM training corpora, how much that actually impacts benchmark scores and what is the best metric to evaluate that? Read our new preprint! arxiv.org/abs/2411.03923
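
For readers unfamiliar with the setup, contamination analyses of this kind are usually framed in terms of overlap between training corpora and benchmark items. The script below is a deliberately simplified, hypothetical check (an 8-gram membership test), not the metric studied in the preprint; real pipelines add normalization, longest-match statistics, and corpus-scale indexing.

```python
# Hypothetical, simplified contamination check: an eval example counts as
# "contaminated" if any of its word 8-grams also occurs in the training corpus.
from typing import Iterable, Set, Tuple

def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Lower-cased word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_rate(train_docs: Iterable[str],
                       eval_examples: Iterable[str],
                       n: int = 8) -> float:
    """Fraction of eval examples sharing at least one n-gram with the training data."""
    train_index: Set[Tuple[str, ...]] = set()
    for doc in train_docs:
        train_index |= ngrams(doc, n)
    examples = list(eval_examples)
    hits = sum(1 for ex in examples if ngrams(ex, n) & train_index)
    return hits / max(len(examples), 1)

if __name__ == "__main__":
    train = ["the quick brown fox jumps over the lazy dog near the river bank"]
    evals = [
        "quick brown fox jumps over the lazy dog near the river",   # overlapping
        "an entirely different sentence about evaluation hygiene",  # clean
    ]
    print(f"contaminated fraction: {contamination_rate(train, evals):.2f}")
```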

Skyler Wang (@skylrwang) 's Twitter Profile Photo

In this 📢 NEW PUB 📢, Samuel Bell and I trace how machine learning and AI researchers frame the problem of spurious correlations in ways that deviate from the statistical definition of the problem. Follow this thread to find out more... 1/n arxiv.org/abs/2411.04696
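
As a hedged, self-contained illustration of the underlying statistical point (not an example from the paper), the toy script below trains a linear classifier on data where a spurious feature tracks the label during training but not at test time, and shows the resulting accuracy gap. The data-generating process and feature names are invented for the example.

```python
# Toy spurious-correlation demo: a shortcut feature that works in training
# but breaks under distribution shift at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N = 2000

def make_split(spurious_agreement: float):
    """Labels driven by a weakly predictive 'core' feature; the 'spurious'
    feature agrees with the label with the given probability."""
    y = rng.integers(0, 2, size=N)
    core = y + rng.normal(scale=1.5, size=N)              # weak real signal
    agree = rng.random(N) < spurious_agreement
    spurious = np.where(agree, y, 1 - y) + rng.normal(scale=0.1, size=N)
    return np.column_stack([core, spurious]), y

X_train, y_train = make_split(spurious_agreement=0.95)    # shortcut available
X_test, y_test = make_split(spurious_agreement=0.50)      # shortcut broken

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))     # inflated by shortcut
print("test  accuracy:", clf.score(X_test, y_test))       # drops when it fails
print("coefficients (core, spurious):", clf.coef_[0])
```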