Richard Futrell (@rljfutrell)'s Twitter Profile
Richard Futrell

@rljfutrell

Language Science at University of California, Irvine

Information theory and language

ID: 846190375589089280

Link: http://socsci.uci.edu/~rfutrell
Joined: 27-03-2017 02:41:21

787 Tweets

2.2K Followers

767 Following

Qing Yao (@qyao23):

LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), being quite consistent with humans. Is this from just memorizing the preferences in their training data? New paper w/ Kanishka Misra 🌊, Leonie Weissweiler, Kyle Mahowald

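A minimal sketch of how such a preference can be read off a causal LM, by comparing summed token log-probabilities of the two dative variants; the model choice (gpt2) and the example sentences are illustrative assumptions, not the paper's materials.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    """Sum of log P(token_i | tokens_<i) over the whole sentence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    return logprobs.gather(2, targets.unsqueeze(-1)).sum().item()

# Double-object (recipient first) vs. prepositional dative (theme first),
# with a long theme so a short-recipient-first preference should favour the DO variant.
do_variant = "The teacher gave the student a very long and detailed book."
pp_variant = "The teacher gave a very long and detailed book to the student."
print("double object:", sentence_logprob(do_variant))
print("prepositional:", sentence_logprob(pp_variant))
```
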
Francesco Cagnetta (@fraccagnetta):

Neural scaling laws are powerful and predictive, but what sets the exponent? Previous work links it to power-law data statistics, echoing classical results of kernel theory. arxiv.org/abs/2505.07067 shows that hierarchical structure matters more. Accepted at ICML 2025 🎉
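
For context, a rough sketch of how a scaling-law exponent is typically estimated: fit a line in log-log space and read off the slope. The loss values and the assumed irreducible loss below are invented for illustration.

```python
import numpy as np

n = np.array([1e5, 1e6, 1e7, 1e8])        # training set sizes (hypothetical)
loss = np.array([4.1, 3.2, 2.55, 2.08])   # test losses (hypothetical)
l_inf = 1.7                                # assumed irreducible loss

# Fit log(loss - L_inf) = -alpha * log(n) + const
slope, intercept = np.polyfit(np.log(n), np.log(loss - l_inf), 1)
print(f"estimated exponent alpha ≈ {-slope:.2f}")
```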

Sasha Boguraev (@sashaboguraev):

A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models. New work with Kyle Mahowald and Christopher Potts! 🧵👇

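One common mechanistic-interpretability move for probing shared structure is activation patching between minimally different sentences. The sketch below is my generic illustration, not the paper's procedure; the model, layer, token position, and sentences are all assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER, POS = 6, -1  # which block's output to patch, and at which token position (assumptions)

def run(text, patch_vector=None):
    """Return (activation cached at POS, next-token log-probs), optionally patching."""
    ids = tok(text, return_tensors="pt").input_ids
    cache = {}

    def hook(module, inputs, output):
        hidden = output[0]                     # residual stream after this block
        cache["h"] = hidden[:, POS].clone()
        if patch_vector is not None:
            hidden = hidden.clone()
            hidden[:, POS] = patch_vector      # overwrite with the source activation
            return (hidden,) + output[1:]

    handle = model.transformer.h[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    handle.remove()
    return cache["h"], torch.log_softmax(logits, dim=-1)

# Two constructions describing the same event (double object vs. prepositional dative).
source = "The chef handed the waiter the plate"
base = "The chef handed the plate to the waiter"

h_src, _ = run(source)                  # cache the source activation
_, base_lp = run(base)                  # unpatched next-token distribution
_, patched_lp = run(base, h_src)        # distribution after the interchange
kl = torch.sum(torch.exp(base_lp) * (base_lp - patched_lp)).item()
print(f"KL(base || patched) = {kl:.4f}")  # a large shift means the patched direction matters here
```
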
Aryaman Arora (@aryaman2020):

new paper! 🫡 why are state space models (SSMs) worse than Transformers at recall over their context? this is a question about the mechanisms underlying model behaviour: therefore, we propose using mechanistic evaluations to answer it!

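For reference, a toy behavioural associative-recall probe of the kind this question is usually grounded in; the task format is an assumption, and `predict_next` stands in for any model's greedy next-token function (the paper's mechanistic evaluations go beyond behaviour like this).

```python
import random

def make_recall_example(n_pairs=8, seed=0):
    """Build a context of key-value pairs followed by a single query key."""
    rng = random.Random(seed)
    keys = rng.sample("ABCDEFGHIJKLMNOP", n_pairs)
    vals = [str(rng.randint(0, 9)) for _ in keys]
    query = rng.choice(keys)
    prompt = " ".join(f"{k} {v}" for k, v in zip(keys, vals)) + f" {query}"
    return prompt, vals[keys.index(query)]

def recall_accuracy(predict_next, n_examples=100):
    """Fraction of queries for which the model returns the stored value."""
    correct = 0
    for i in range(n_examples):
        prompt, answer = make_recall_example(seed=i)
        correct += (predict_next(prompt).strip() == answer)
    return correct / n_examples

# Trivial baseline "model" that always answers "0": accuracy should sit near chance (~0.1).
print(recall_accuracy(lambda prompt: "0"))
```
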
Kanishka Misra 🌊 (@kanishkamisra):

News🗞️ I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!🤘 Excited to develop ideas about linguistic and conceptual generalization! Recruitment details soon

Taiga Someya (@agiats_football):

📝 Our #ACL2025 paper is now on arXiv! "Information Locality as an Inductive Bias for Neural Language Models" We quantify how the local predictability of a language affects its learnability by neural LMs, using our metric, m-local entropy. paper: arxiv.org/abs/2506.05136
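
A rough plug-in estimate of an m-local entropy, read as the average uncertainty about the next symbol given the previous m symbols; this simplified character-level version is my reading of the idea, not necessarily the paper's exact definition of the metric.

```python
import math
from collections import Counter

def m_local_entropy(text: str, m: int) -> float:
    """Plug-in estimate of H(next symbol | previous m symbols), in bits."""
    context_counts, joint_counts = Counter(), Counter()
    for i in range(m, len(text)):
        ctx, nxt = text[i - m:i], text[i]
        context_counts[ctx] += 1
        joint_counts[(ctx, nxt)] += 1
    total = sum(joint_counts.values())
    h = 0.0
    for (ctx, nxt), c in joint_counts.items():
        h -= (c / total) * math.log2(c / context_counts[ctx])
    return h

print(m_local_entropy("abababababababab", m=1))  # ~0: next symbol fully predictable locally
print(m_local_entropy("abcadbcdabdcacbd", m=1))  # higher: weaker local predictability
```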

UCI Social Sciences (@ucisocsci):

Congrats to Weijie Xu, fourth-year UC Irvine language science grad student & recipient of the UCI Social Sciences Outstanding Scholarship award! The faculty-nominated award recognizes an outstanding grad student for high intellectual scholarship & achievement. socsci.uci.edu/newsevents/new…

Mario Giulianelli (@glnmario):

Some personal news ✨ In September, I’m joining UCL as Associate Professor of Computational Linguistics. I’ll be building a lab, directing the MSc programme, and continuing research at the intersection of language, cognition, and AI. 🧵

Tatsuki Kuribayashi (@ttk_kuribayashi):

Starting in August, I’ll take up an Assistant Professor (NLP) position at MBZUAI. I’ll continue to work on interdisciplinary topics bridging NLP to fundamental linguistic/cogsci questions. I'll have a small team and am looking for one postdoc and many visitors! 👉 kuribayashi4.github.io

Neil Rathi (@neil_rathi):

new paper 🌟 interpretation of uncertainty expressions like "i think" differs cross-linguistically. we show that (1) llms are sensitive to these differences but (2) humans overrely on their outputs across languages

Shivam Duggal (@shivamduggal4):

Compression is the heart of intelligence. From Occam to Kolmogorov: shorter programs = smarter representations.

Meet KARL: Kolmogorov-Approximating Representation Learning.

Given an image, token budget T & target quality 𝜖, KARL finds the smallest t≤T to reconstruct it within 𝜖 🧵

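A minimal sketch of the adaptive-budget idea as stated in the thread: find the smallest t ≤ T whose reconstruction error is within 𝜖. The `reconstruction_error` callable and the monotonicity assumption that licenses the binary search are mine, not details of KARL.

```python
def smallest_budget(reconstruction_error, T: int, eps: float):
    """Binary-search the smallest token count t <= T with error(t) <= eps (None if impossible)."""
    lo, hi, best = 1, T, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if reconstruction_error(mid) <= eps:
            best, hi = mid, mid - 1   # feasible: try an even smaller budget
        else:
            lo = mid + 1              # infeasible: need more tokens
    return best

# Toy stand-in where error decays as 1/t: the smallest budget meeting eps=0.02 is 50.
print(smallest_budget(lambda t: 1.0 / t, T=256, eps=0.02))
```
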
Gregory Hickok (@gregoryhickok):

The UCI Phonotactic Calculator: An online tool for computing phonotactic metrics. New work by my colleague Connor Mayer. link.springer.com/article/10.375…
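
As a toy illustration of one common phonotactic metric (the average smoothed log bigram probability over a word); this is not necessarily what the UCI Phonotactic Calculator computes, and the mini lexicon below is invented.

```python
import math
from collections import Counter

# Invented mini "lexicon" used to estimate bigram statistics.
lexicon = ["kat", "bat", "tak", "bak", "tab"]

bigrams, unigrams = Counter(), Counter()
for word in lexicon:
    padded = "#" + word + "#"                 # '#' marks word boundaries
    for a, b in zip(padded, padded[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

def phonotactic_score(word: str) -> float:
    """Average add-one-smoothed log bigram probability over the word."""
    padded = "#" + word + "#"
    logps = [
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(unigrams)))
        for a, b in zip(padded, padded[1:])
    ]
    return sum(logps) / len(logps)

print(phonotactic_score("bat"))   # attested word: higher (less negative) score
print(phonotactic_score("ktb"))   # phonotactically odd string: lower score
```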

Noga Zaslavsky (@nogazaslavsky):

📢 I'm looking for a postdoc to join my lab at NYU! Come work with me on a principled, theory-driven approach to studying language, learning, and reasoning, in humans and AI agents. Apply here: apply.interfolio.com/170656 And come chat with me at #CogSci2025 if interested!

Kanishka Misra 🌊 (@kanishkamisra):

Looking forward to attending #cogsci2025! I’m especially excited to meet students who will be applying to PhD programs in Computational Ling/CogSci in the coming cycle. Please reach out if you want to meet up and chat! Email is best, but DM also works if you must. Quick 🧵:

Neil Rathi (@neil_rathi):

new paper! robust and general "soft" preferences are a hallmark of human language production. we show that these emerge from *any* policy minimizing an autoregressive memory-based cost function w/ Richard Futrell & Dan Jurafsky

Xinting Huang (@huangxt233):

Do LLMs store information in interpretable subspaces -- similar to variables in a program? In our new paper, we decompose representation space into smaller, interpretable, non-basis-aligned subspaces with unsupervised learning.

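A very loose illustration of the general idea of non-basis-aligned subspaces, here obtained simply by grouping principal directions of (random stand-in) hidden states; the paper's unsupervised decomposition is presumably quite different, and the subspace size below is a placeholder assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 64))                # stand-in for hidden states (tokens x dims)

# Unsupervised, non-basis-aligned directions: principal directions via SVD.
Hc = H - H.mean(axis=0)
_, _, Vt = np.linalg.svd(Hc, full_matrices=False)

subspace_dim = 8
subspaces = [Vt[i:i + subspace_dim] for i in range(0, 32, subspace_dim)]  # four 8-d subspaces

# Project one activation vector onto each candidate subspace.
h = Hc[0]
for k, basis in enumerate(subspaces):
    coords = basis @ h                          # coordinates of h within this subspace
    print(f"subspace {k}: projection norm = {np.linalg.norm(coords):.2f}")
```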