Borhane Blili-Hamelin, PhD (@borhane_b_h) 's Twitter Profile
Borhane Blili-Hamelin, PhD

@borhane_b_h

he/him | Improving AI Governance

ID: 524379994

Website: http://borhane.xyz/ · Joined: 14-03-2012 14:00:18

2.2K Tweets

634 Followers

1.1K Following

Luiza Jarovsky (@luizajarovsky) 's Twitter Profile Photo

🚨 Studies are starting to show what many of us feared: AI use might lead to overreliance and human disempowerment. Below is the SHOCKING conclusion of this particular study [download for future reference]:

"1. We surveyed 319 knowledge workers who use GenAI tools (e.g.,
Lujain Ibrahim لجين إبراهيم (@lujainmibrahim) 's Twitter Profile Photo

📣New preprint!📣We’ve long known humans tend to anthropomorphize computers. But with the rise of social AI applications, like AI companions, studying this is now more crucial than ever.

We introduce a new method for *empirically evaluating* anthropomorphic behaviors in LLMs🧵
Laura Weidinger (@weidingerlaura) 's Twitter Profile Photo

Very proud of our 📣 new paper on Measuring Anthropomorphism in LLMs! This new multi-turn evaluation & large-scale human study led by Lujain Ibrahim لجين إبراهيم is a key step in better understanding what factors make people perceive LLMs as more "human-like". 🧵

Mostly here now: @davidthewid.bsky.social (@davidthewid) 's Twitter Profile Photo

I recently gave one of the best talks of my whole career, and thankfully it was recorded, so I figured I'd share it here! Thoughts/ideas/questions welcome! scs.hosted.panopto.com/Panopto/Pages/…

Shayne Longpre (@shayneredford) 's Twitter Profile Photo

What are 3 concrete steps that can improve AI safety in 2025? 🤖⚠️

Our new paper, “In House Evaluation is Not Enough” has 3 calls-to-action to empower independent evaluators:

1️⃣ Standardized AI flaw reports
2️⃣ AI flaw disclosure programs + safe harbors
3️⃣ A coordination
Shayne Longpre (@shayneredford) 's Twitter Profile Photo

Great coverage of our new position paper by Will Knight! Together with 30+ experts from AI, cybersecurity, law & policy we propose 3 ways we think the AI safety & security ecosystem should improve in 2025. wired.com/story/ai-resea…

Alison Gopnik (@alisongopnik) 's Twitter Profile Photo

In the latest Science, Henry Farrell, Cosma Shalizi, James Evans and I make the case for LLMs as powerful, transformative cultural and social technologies, ways for people to learn from other people, rather than intelligent agents. science.org/doi/10.1126/sc…

Atoosa Kasirzadeh (@dr_atoosa) 's Twitter Profile Photo

📢 New paper with Iason Gabriel is out! 2025 is being called the year of AI agents, with overwhelming headlines about them every day. But we lack a shared vocabulary to distinguish their fundamental properties. Our paper aims to bridge this gap. A 🧵

Avijit Ghosh (@evijitghosh) 's Twitter Profile Photo

Very excited to announce our CRAFT workshop: "Invisible by Design? Generative AI and Mirrors of Misrepresentation", at ACM FAccT 2025! With the stellar team of Kimi, Stephanie Milani, Sachin Pendse, Ajeet Singh, Laura Dabbish, and Geoff Kaufman. Details below 👇

Very excited to announce our CRAFT workshop: "Invisible by Design? Generative AI and Mirrors of Misrepresentation", at <a href="/FAccTConference/">ACM FAccT</a> 2025! With stellar team <a href="/kimiwenzel/">Kimi</a> <a href="/steph_milani/">Stephanie Milani</a> <a href="/SachinPendse/">Sachin Pendse</a> <a href="/OneAjeetSingh/">Ajeet Singh</a> <a href="/dabbish/">Laura Dabbish</a> and Geoff Kaufman. Details below 👇
Hanna Wallach (@hannawallach.bsky.social) (@hannawallach) 's Twitter Profile Photo

Exciting news: the Fairness, Accountability, Transparency and Ethics (FATE) group at Microsoft Research NYC is hiring a predoctoral fellow!!! 🎉 microsoft.com/en-us/research…

Hanna Wallach (@hannawallach.bsky.social) (@hannawallach) 's Twitter Profile Photo

Check out the camera-ready version of our ICML Conference position paper ("Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge") to learn more!!! arxiv.org/abs/2502.00561

Shayne Longpre (@shayneredford) 's Twitter Profile Photo

Excited to present our AI Flaw Disclosure paper at #ICML2025 in Vancouver!🌲🌊🏔️ Swing by our poster session in East Exhibition Halls A-B E-606!

Benjamin Laufer (@bendlaufer) 's Twitter Profile Photo

Hamidah Oderinwale is a huge talent and it was a blast to work on this research project with her and Jon Kleinberg. And I'll be posting more about this research soon :)