Messi H.J. Lee (@l33_messi)'s Twitter Profile
Messi H.J. Lee

@l33_messi

Doctoral Candidate at WashU. I study bias in Large Language Models.

ID: 1355011195724509185

Link: http://lee-messi.github.io/ · Joined: 29-01-2021 04:33:44

68 Tweets

89 Followers

140 Following

Jacob Montgomery (@jacobmontgomery.bsky.social) (@jacob_montg)'s Twitter Profile Photo

Are you hiring in political science and/or the computational social sciences? Then I have good news for you. Because WashU Political Science has 🎉 ‼️ 🔥FOUR AMAZING STUDENTS 🎉 ‼️ 🔥 on the market this year. Read on and I'll tell you more about them (in alphabetical order).

Calvin Lai (@calvinklai)'s Twitter Profile Photo

Come join us! I will be recruiting a PhD student and have flex to hire a postdoc for Fall 2025 at Rutgers University. If you or someone you know is potentially interested in joining the Diversity Science Lab, please check out calvinklai.com/join-the-lab!

Calvin Lai (@calvinklai)'s Twitter Profile Photo

Check out this NPR show, where Ivy Onyeador, Neil Lewis, Jr., PhD, & I talk about the limits of diversity training in addressing discrimination and what we can do instead. Our part begins at 28:45. whyy.org/episodes/the-h…

Calvin Lai (@calvinklai)'s Twitter Profile Photo

🌟Come check out our stellar line-up at #SPSP2025! 🌟 We'll be presenting topics such as confronting sexism, police bias training, and instability in U.S. immigration policy 💫

Messi H.J. Lee (@l33_messi)'s Twitter Profile Photo

In a new paper w/ Calvin Lai, I find that OpenAI’s latest reasoning model (o3-mini) exhibits implicit bias-like patterns. What’s exciting about reasoning models is the ability to unpack bias in how models *process* information, rather than just seeing bias in *outputs*. (1/10):

Calvin Lai (@calvinklai)'s Twitter Profile Photo

"Implicit" bias + AI research often studies biased outputs. As us psychologists know though, behavior's not the same as process. A model trained on a racist site would show bias, but wouldn't be "implicit"! To study & find biased processing, Messi H.J. Lee & I used reasoning models.

Ai2 (@allen_ai)'s Twitter Profile Photo

Meet Ai2 Paper Finder, an LLM-powered literature search system. Searching for relevant work is a multi-step process that requires iteration. Paper Finder mimics this workflow — and helps researchers find more papers than ever 🔍

Brian Guay (@brianmguay)'s Twitter Profile Photo

🚨Out today in PNAS (@PNASNews)🚨 pnas.org/doi/10.1073/pn… Why do people overestimate the size of politically relevant groups (immigrant, LGBTQ, Jewish) and quantities (% of budget spent on foreign aid, % of refugees that are criminals)? We analyze 100k estimates to find out🧵👇

Kiran Garimella (@gvrkiran)'s Twitter Profile Photo

Political neutrality in AI is fundamentally unattainable. Bias is embedded in data, algorithms & interactions. The key is to aim for practical approximations, moving beyond absolutes toward actionable balance. A really nice paper for responsible AI design arxiv.org/abs/2503.05728

Political neutrality in AI is fundamentally unattainable. Bias is embedded in data, algorithms &amp; interactions. The key is to aim for practical approximations, moving beyond absolutes toward actionable balance. A really nice paper for responsible AI design

arxiv.org/abs/2503.05728
Calvin Lai (@calvinklai)'s Twitter Profile Photo

🚨New Paper w/ Joel Le Forestier at JEP: General!🚨 We conducted 2 mega-experiments totaling over 28,000 participants and 50 conditions. We wanted to find the most effective interventions to reduce implicit weight prejudice across 5 implicit measures. 🧵
