Winson Peng (@winsonpeng2011)'s Twitter Profile
Winson Peng

@winsonpeng2011

Professor of Communication @CommDeptMSU | Editor-in-Chief @HCR_Journal | Communication + Social/Mobile Media + Computational Social Science. Tweets are my own.

ID: 440645901

Link: http://winsonpeng.github.io | Joined: 19-12-2011 07:34:47

2.2K Tweets

657 Followers

281 Following

Stefanie Stantcheva s-stantcheva.bsky.social (@s_stantcheva)'s Twitter Profile Photo

Zero-sum thinking is a key mindset that shapes how we view the world. Excited to share a new paper on the roots and consequences of zero-sum thinking with Sahil Chinoy, Nathan Nunn, and Sandra Sequeira. A summary thread🧵1/23 scholar.harvard.edu/files/stantche…

William J. Brady (@william__brady)'s Twitter Profile Photo

New preprint ✨ led by Hongkai Mao & team: In observational and experimental studies we find that *differentiation* helps explain how social media discourse descends into negativity over time.

Julian Schrittwieser (@mononofu)'s Twitter Profile Photo

As a researcher at a frontier lab I’m often surprised by how unaware of current AI progress public discussions are. I wrote a post to summarize studies of recent progress, and what we should expect in the next 1-2 years: julian.ac/blog/2025/09/2…

Human Communication Research (@hcr_journal)'s Twitter Profile Photo

📢 New in HCR: “What do people watch under adversity?” Netflix viewing histories with diary data show no evidence that daily adversity predicts content choice. Instead, coping strategies shape genre preferences. 🔗 academic.oup.com/hcr/advance-ar… #Coping #MoodManagement #DiaryData

Chenhao Tan (@chenhaotan)'s Twitter Profile Photo

🚀 We’re thrilled to announce the upcoming AI & Scientific Discovery online seminar! We have an amazing lineup of speakers. This series will dive into how AI is accelerating research, enabling breakthroughs, and shaping the future of research across disciplines. 📅 Fridays,

Human Communication Research (@hcr_journal)'s Twitter Profile Photo

📢 New in HCR! Vol. 51, Issue 4 (Oct 2025)!

Studies on intellectual humility, AI imaginaries, media parenting, gender and sexist behavior, political communication, identity, and parent-child conversations about mental health.

Read the full issue here: academic.oup.com/hcr/issue/51/4
Jingjing Han (@jingjinghan1)'s Twitter Profile Photo

Hi, I am looking for a postdoc to work with me, beginning next fall, in our lab equipped with eye trackers and physiological data collection tools. The application deadline is Oct. 15. Please chat with me if you are interested. Location: Fudan University in Shanghai, China

Liu L. Sijia (@letti_liu)'s Twitter Profile Photo

Is online alignment the only path to go despite being slow and computationally expensive?

Inspired by prospect theory, we provide a human-centric explanation for why online alignment (e.g. GRPO) outperforms offline alignment (e.g. DPO, KTO) and empirically show how to close the
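
For readers less familiar with the distinction the tweet draws, below is a minimal sketch of what "offline alignment" means in the DPO family it mentions: the policy is fit to a fixed preference dataset, with no fresh sampling during training. This is a generic illustration, not the paper's code; the log-probabilities are dummy values.

```python
# Minimal sketch of an offline (DPO-style) alignment objective, contrasted in
# the tweet with online methods such as GRPO. Illustrative only.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Widen the margin between chosen and rejected responses under the policy,
    # while beta keeps the policy close to the reference model.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Dummy per-response log-probabilities for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.3, -8.7]), torch.tensor([-14.1, -9.9]),
                torch.tensor([-12.0, -8.9]), torch.tensor([-13.8, -9.5]))
print(round(loss.item(), 4))
```
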
Anthony Peng (@realanthonypeng)'s Twitter Profile Photo

🚨 New paper alert! 🚨

Can you believe it? Flawed thinking helps reasoning models learn better!

Injecting just a bit of flawed reasoning can collapse safety by 36% 😱 — but we teach large reasoning models to fight back 💪🛡️.

Introducing RECAP 🔄: an RL post-training method
Aniket Vashishtha (@aniketvashisht8)'s Twitter Profile Photo

A lot is said about LLMs’ counterfactual reasoning, but do they truly possess the cognitive skills it needs?

Introducing Executable Counterfactuals, a code framework that (1) shows frontier models lack these skills (2) offers a testbed for improvement via Reinforcement Learning
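
The thread is truncated here. As a rough, hypothetical illustration of what a counterfactual expressed as executable code can look like (this is not the released framework), consider:

```python
# Hypothetical example, not the actual Executable Counterfactuals framework:
# the model sees a program plus a factual run, and must predict the outcome
# of a counterfactual run (one changed input) without executing the code.
def ship_order(stock: int, ordered: int) -> str:
    # The outcome depends on whether current stock covers the order.
    return "shipped" if stock >= ordered else "backordered"

factual = ship_order(stock=3, ordered=5)         # observed world: "backordered"
counterfactual = ship_order(stock=3, ordered=2)  # "what if only 2 were ordered?"
print(factual, counterfactual)                   # backordered shipped
```
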
James Zou (@james_y_zou)'s Twitter Profile Photo

We found a troubling emergent behavior in LLMs.

💬When LLMs compete for social media likes, they start making things up
🗳️When they compete for votes, they turn inflammatory/populist

When optimized for audiences, LLMs inadvertently become misaligned—we call this Moloch’s Bargain
Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

The paper says diffusion LLMs hide several small experts, and using them together at test time boosts reasoning. 

A simple voting trick raises math accuracy to 88.10% without extra training.

Diffusion LLMs generate by masking and filling chunks of text in steps.

Because
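
The tweet is cut off, but the voting trick it describes can be sketched roughly as below. `decode_fn` is a stand-in for a diffusion LM's mask-and-fill generation call, which this archive does not specify.

```python
# Rough sketch of the test-time voting idea: treat different decoding
# configurations of the same diffusion LM as separate "experts", collect one
# answer from each, and keep the majority answer. No extra training involved.
from collections import Counter

def majority_vote(question, decode_fn, configs):
    answers = [decode_fn(question, steps, seed) for steps, seed in configs]
    return Counter(answers).most_common(1)[0][0]

# Dummy decode_fn pretending two of three configurations agree on the answer.
dummy = lambda q, steps, seed: "408" if seed != 1 else "406"
print(majority_vote("What is 17 * 24?", dummy, [(32, 0), (64, 1), (128, 2)]))  # 408
```
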
Robert Youssef (@rryssf_)'s Twitter Profile Photo

Market research firms are cooked 😳

PyMC Labs + Colgate just published something wild. They got GPT-4o and Gemini to predict purchase intent at 90% reliability compared to actual human surveys.

Zero focus groups. No survey panels. Just prompting.

The method is called Semantic
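
The method's name is truncated in this archive. A generic sketch of the prompting-only approach the tweet describes might look like the following; it assumes the OpenAI Python SDK, a made-up persona and product concept, and is not PyMC Labs' actual pipeline.

```python
# Generic sketch: prompt an LLM for a 1-5 purchase-intent rating and aggregate
# over synthetic respondents; not the published method.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY

def purchase_intent(concept: str, persona: str, model: str = "gpt-4o") -> int:
    prompt = (
        f"You are {persona}.\n"
        f"Product concept: {concept}\n"
        "On a scale of 1 (definitely would not buy) to 5 (definitely would buy), "
        "how likely are you to buy this product? Answer with a single digit."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return int(resp.choices[0].message.content.strip()[0])

# ratings = [purchase_intent("charcoal whitening toothpaste", p) for p in personas]
# ...then compare the rating distribution against a real survey panel.
```
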
Jay Van Bavel, PhD (@jayvanbavel)'s Twitter Profile Photo

Does AI improve or undercut academic scholarship? 

Using AI increases both the quantity & quality of academic scholarship and reduces inequality:

-Researchers using AI published 36% more papers
-There is also a rise in the journal impact factor of adopters’ publications
-GenAI
Sharon Y. Li (@sharonyixuanli)'s Twitter Profile Photo

We hear increasing discussion about aligning LLMs with “diverse human values.”
But what’s the actual price of pluralism? 🧮

In our #NeurIPS2025 paper (with Shawn Im), we move this debate from the philosophical to the measurable — presenting the first theoretical scaling law
Katherine Ognyanova (@ognyanova.bsky.social) (@ognyanova)'s Twitter Profile Photo

We launched a new project tracking public attitudes to higher ed. First report is out: we find people trust universities and oppose funding cuts, but worry about tuition costs and free speech on campus. See edbarometer.org. With 🇺🇦 [email protected], Matthew Baum, and Mauricio Santillana

Eddie Yang (@ey_985)'s Twitter Profile Photo

New paper: LLMs are increasingly used to label data in political science. But how reliable are these annotations, and what are the consequences for scientific findings? What are best practices? Some new findings from a large empirical evaluation.
Paper: eddieyang.net/research/llm_a…
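
One standard reliability check of the kind such evaluations rest on (not necessarily this paper's exact procedure) is chance-corrected agreement between LLM labels and human gold labels on the same documents:

```python
# Compare LLM annotations against human gold labels via Cohen's kappa.
# Labels and values here are made up for illustration.
from sklearn.metrics import cohen_kappa_score

human = ["pro", "anti", "neutral", "pro", "anti", "neutral", "pro", "anti"]
llm   = ["pro", "anti", "neutral", "pro", "neutral", "neutral", "pro", "pro"]

kappa = cohen_kappa_score(human, llm)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```
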
Jessy Lin (@realjessylin)'s Twitter Profile Photo

🧠 How can we equip LLMs with memory that allows them to continually learn new things?

In our new paper with AI at Meta, we show how sparsely finetuning memory layers enables targeted updates for continual learning, w/ minimal interference with existing knowledge.

While full
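
The tweet is truncated. As a rough PyTorch sketch of the setup it describes (freeze the backbone, update only memory-layer parameters), under the assumption that memory layers can be identified by name:

```python
# Rough sketch, not Meta's code: finetune only the parameters belonging to
# memory layers so new knowledge is written into a sparse subset of weights.
import torch

def freeze_except_memory(model: torch.nn.Module, memory_tag: str = "memory"):
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = memory_tag in name  # assumes memory layers carry this tag
        if param.requires_grad:
            trainable.append(param)
    return trainable

# optimizer = torch.optim.AdamW(freeze_except_memory(model), lr=1e-5)
# ...then run a standard finetuning loop on the new documents only.
```
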