Emily Cheng (@sparse_emcheng)'s Twitter Profile
Emily Cheng

@sparse_emcheng

PhD @colt_upf in computational linguistics

What is the happiest state? Maryland 💅🏼

Before: MIT CSAIL, ENS

Link: http://chengemily1.github.io · Joined: 13-04-2022 10:15:59

38 Tweets

145 Followers

149 Following

Gabriel Kreiman (@gkreiman)'s Twitter Profile Photo

Brains, Minds and Machines Summer Course 2025. Application deadline: Mar 24, 2025 (mbl.edu/education/adva…). See more information here: cbmm.mit.edu/summer-school/…

Christopher Wang (@czlwang)'s Twitter Profile Photo

Want to scale models on brain datasets recorded with variable sensor layouts? Population Transformer at #ICLR2025 may be your answer! 🗺️ Fri, Apr 25 | 10am - 12:30pm (poster @ Hall 3 + Hall 2B #58) 🗣️ Fri, Apr 25 | 4:06 pm - 4:18 pm (oral @ Garnet 216-218) More ⬇️

Francesco Bertolotti (@f14bertolotti)'s Twitter Profile Photo

This paper is a bit heavy; however, the insights are good. The authors characterize the requirements for two representations to be similar.

Also, there may be a connection with openreview.net/forum?id=yyYMA… (I should look into this).

🔗arxiv.org/abs/2506.03784

Badr AlKhamissi (@bkhmsi)'s Twitter Profile Photo

🚨New Preprint!!

Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge.

1/ 🧵👇
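
For intuition, here is a minimal sketch of what a brain-inspired mixture-of-experts layer along these lines might look like. Only the four expert names come from the tweet; the routing scheme, sizes, and everything else are assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch, NOT the Mixture of Cognitive Reasoners implementation:
# a router softly dispatches each token across four named expert modules.
import torch
import torch.nn as nn

class CognitiveReasonerLayer(nn.Module):
    def __init__(self, d_model=256, d_ff=1024):
        super().__init__()
        # Expert names follow the tweet; the MLP experts are placeholders.
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for name in ("language", "logic", "social", "world_knowledge")
        })
        self.router = nn.Linear(d_model, len(self.experts))

    def forward(self, x):                        # x: (batch, seq, d_model)
        w = torch.softmax(self.router(x), -1)    # per-token expert weights
        outs = torch.stack([e(x) for e in self.experts.values()], dim=-1)
        return (outs * w.unsqueeze(-2)).sum(-1)  # weighted mix of experts

x = torch.randn(2, 10, 256)
print(CognitiveReasonerLayer()(x).shape)         # torch.Size([2, 10, 256])
```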

Michael Franke (@meanwhileina)'s Twitter Profile Photo

PostDoc position (3.5y) available in interdisciplinary Collaborative Research Center "Common Ground". 🎯 Topic: Pragmatic reasoning about Common Ground 🎓 linguistics | philosophy of language | cognitive science 📅 Deadline: July 14 tinyurl.com/3vd9wa6p Please share!

Andrea Santilli (@teelinsan)'s Twitter Profile Photo

Uncertainty quantification (UQ) is key for safe, reliable LLMs... but are we evaluating it correctly?

🚨 Our ACL2025 paper finds a hidden flaw: if both UQ methods and correctness metrics are biased by the same factor (e.g., response length), evaluations get systematically skewed
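
The flaw is easy to reproduce in a toy simulation (an assumed setup for illustration, not the paper's experiments): give both the uncertainty score and the correctness label a shared dependence on response length, and the UQ method looks predictive even though it carries no genuine signal about correctness.

```python
# Toy simulation of the shared-bias flaw (assumed setup, not the paper's data).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
length = rng.normal(size=n)                  # standardized response length
uncertainty = -length + rng.normal(size=n)   # UQ method: more "confident" on long answers
correct = (length + rng.normal(size=n)) > 0  # correctness metric also favors long answers

def auroc(score, label):
    """P(random positive outranks random negative)."""
    pos, neg = score[label], score[~label]
    return (pos[:, None] > neg[None, :]).mean()

# Lower uncertainty "predicts" correctness well above chance (0.5),
# purely because both quantities share the length bias.
print(auroc(-uncertainty, correct))
```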

Mario Giulianelli (@glnmario)'s Twitter Profile Photo

I will be a SPAR mentor this Fall 🤖 Check out the programme and apply by 20 August to work with me on formalising, measuring, and/or intervening on goal-directed behaviour in AI agents. More info on potential projects here 🧵

Zhijing Jin✈️ ICLR Singapore (@zhijingjin)'s Twitter Profile Photo

Our "Competitions of Mechanisms" paper proposes an interesting way to interpret LLM behaviors thru how it handles multiple conflicting mechanisms. E.G., in-context knowledge vs. in-weights knowledge🧐This is an elegant philophical way of thinking --

Our "Competitions of Mechanisms" paper proposes an interesting way to interpret LLM behaviors thru how it handles multiple conflicting mechanisms. E.G., in-context knowledge vs. in-weights knowledge🧐This is an elegant philophical way of thinking --

Richard Antonello (@neurorj)'s Twitter Profile Photo

In our new paper, we explore how we can build encoding models that are both powerful and understandable. Our model uses an LLM to answer 35 questions about a sentence's content. The answers linearly contribute to our prediction of how the brain will respond to that sentence. 1/6
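
A minimal sketch of that recipe, with everything assumed (placeholder questions, simulated answers and brain data, not the paper's 35 questions or its recordings): binary LLM answers become features, and ridge regression maps them linearly to voxel responses, so each fitted weight is directly interpretable.

```python
# Sketch of a question-answer encoding model; all data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_questions, n_voxels = 200, 35, 50

# Stand-in for an LLM answering yes/no questions about each sentence,
# e.g. "Does the sentence mention a place?"
X = rng.integers(0, 2, size=(n_sentences, n_questions)).astype(float)
true_w = rng.normal(size=(n_questions, n_voxels))
Y = X @ true_w + 0.1 * rng.normal(size=(n_sentences, n_voxels))  # simulated voxels

# Ridge regression: each answer contributes linearly to the prediction,
# so the weights say how much each question matters per voxel.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_questions), X.T @ Y)
pred = X @ W
r = [np.corrcoef(Y[:, v], pred[:, v])[0, 1] for v in range(n_voxels)]
print("mean voxel correlation:", round(float(np.mean(r)), 3))
```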

Tankred Saanum (@tankredsaanum)'s Twitter Profile Photo

Induction heads are surprisingly powerful. In a new preprint, we find that they can learn what to attend to in-context! We study this in a hierarchical prediction task and uncover a possible mechanism giving rise to in-context learning in induction heads. See thread for details!

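As background, the classic induction-head behavior the preprint builds on can be written in a few lines (a schematic of the copying mechanism, not the paper's hierarchical task): to predict the next token, find the previous occurrence of the current token and copy whatever followed it.

```python
# Schematic induction-head behavior: complete the pattern [A][B] ... [A] -> [B].
def induction_predict(tokens):
    cur = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan backwards for a match
        if tokens[i] == cur:
            return tokens[i + 1]              # copy the token that followed it
    return None

seq = list("the cat sat. the ca")
print(induction_predict(seq))  # 't': copies what followed an earlier 'a'
```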

Julian Coda-Forno (@juliancodaforno)'s Twitter Profile Photo

New paper from my Meta internship! 🚀

We explored dual-architecture communication for latent reasoning in LLMs ☯️, accepted at the #NeurIPS2025 Foundations of Reasoning in LLMs workshop.

Paper: arxiv.org/pdf/2510.00494

1/9 🧵
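
One purely illustrative reading of "dual-architecture communication" (every name and shape below is hypothetical; see the paper at arxiv.org/pdf/2510.00494 for the actual design): a reasoning module passes hidden vectors, rather than sampled text tokens, to a decoding module through a learned bridge.

```python
# Hypothetical sketch of latent (non-token) communication between two nets.
import torch
import torch.nn as nn

d_reason, d_decode = 128, 256
reasoner = nn.GRU(d_reason, d_reason, batch_first=True)  # stand-in "reasoning" net
bridge = nn.Linear(d_reason, d_decode)                   # learned latent channel
decoder = nn.GRU(d_decode, d_decode, batch_first=True)   # stand-in "decoding" net

x = torch.randn(2, 16, d_reason)  # embedded input sequence
latent, _ = reasoner(x)           # reasoning stays in latent space
msg = bridge(latent)              # communicated without sampling tokens
out, _ = decoder(msg)
print(out.shape)                  # torch.Size([2, 16, 256])
```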

Mario Giulianelli (@glnmario)'s Twitter Profile Photo

I am hiring a PhD student to start my lab at UCL! Get in touch if you have any questions, the deadline to apply through ELLIS is 31 October. More details🧵

Pau Rodríguez (@prlz77)'s Twitter Profile Photo

🚀 Excited to share LinEAS, our new activation steering method accepted at NeurIPS 2025! It approximates optimal transport maps end-to-end to precisely guide 🧭 activations, achieving finer control 🎚️ with ✨ fewer than 32 ✨ prompts! 💻 github.com/apple/ml-lineas 📄 arxiv.org/abs/2503.10679
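
For intuition about transport-based steering, here is a heavily simplified sketch (NOT the LinEAS method; see the repo above for the real end-to-end implementation). Under a per-neuron Gaussian assumption, the optimal transport map from source to target activations is affine, which conveys the flavor of steering activations via a transport map.

```python
# Simplified intuition for OT-based activation steering (not LinEAS itself):
# for 1D Gaussians the optimal transport map is affine,
#   T(x) = mu_t + (sd_t / sd_s) * (x - mu_s),
# applied independently per neuron.
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(5000, 8))  # toy activations on neutral prompts
tgt = rng.normal(2.0, 0.5, size=(5000, 8))  # toy activations on target-style prompts

mu_s, sd_s = src.mean(0), src.std(0)
mu_t, sd_t = tgt.mean(0), tgt.std(0)

def steer(x):
    """Per-neuron affine OT map pushing source statistics onto the target's."""
    return mu_t + (sd_t / sd_s) * (x - mu_s)

steered = steer(src)
print(steered.mean(0).round(2))  # ~2.0 per neuron
print(steered.std(0).round(2))   # ~0.5 per neuron
```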