Brown NLP (@brown_nlp)'s Twitter Profile
Brown NLP

@brown_nlp

Language Understanding and Representation Lab at Brown University. PI: Ellie Pavlick.

ID: 1321062394722922497

Website: https://lunar.cs.brown.edu/ · Joined: 27-10-2020 12:13:23

152 Tweets

3.3K Followers

143 Following

Apoorv Khandelwal (@apoorvkh)'s Twitter Profile Photo

Calling all academic AI researchers! 🚨 We are conducting a survey on compute resources. We want to help the community better understand our capabilities+needs. We hope that this will help us all advocate for the resources we need! Please contribute at: forms.gle/3hEie4hj999fiS…

Tian Yun (@tianyunnn)'s Twitter Profile Photo

Excited to share our work "mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?", accepted at #NAACL2024 with Etha Tianze Hua and Brown NLP. Paper: arxiv.org/abs/2404.12444 Code: github.com/ethahtz/multil… Webpage:

Suresh Venkatasubramanian (mostly in the sky now) (@geomblog)'s Twitter Profile Photo

Job opportunity! At the Brown University Data Science Institute, I direct the (new) Center for Tech Responsibility (CNTR). I'm looking to hire a Program Manager to work with me on operationalizing the Center's vision. If you're interested, apply! brown.wd5.myworkdayjobs.com/staff-careers-…

Suraj Anand ICLR 2025 (@surajk610)'s Twitter Profile Photo

How robust are in-context algorithms? In new work with @michael_lepori, @jack_merullo, and @brown_nlp, we explore why in-context learning disappears over training and fails on rare and unseen tokens. We also introduce a training intervention that fixes these failures.

Neuroexplicit Models of Language, Vision, Action (@neuroexplicit)'s Twitter Profile Photo

Now hiring: Twelve (!) PhD students to start in fall 2025, for research on combining neural and symbolic/interpretable models of language, vision, and action. Work with world-class advisors at Universität Saarland, MPI Informatics, MPI-SWS, @CISPA, DFKI. Details: neuroexplicit.org/jobs

Brown NLP (@brown_nlp)'s Twitter Profile Photo

LUNAR Lab is looking for a postdoc to work on mechanistic interpretability + AI safety/trustworthiness! The position is for two years, with the possibility of extension. If interested, submit a CV here. Applicants will be considered on a rolling basis. forms.gle/2HFM1BMFxS5ZA3…

Apoorv Khandelwal (@apoorvkh)'s Twitter Profile Photo

Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current state of academic compute and how to efficiently train models! Use our code to test your own models/GPUs! arxiv.org/abs/2410.23261 github.com/apoorvkh/acade…

Ruochen Zhang not @ ICLR (@ruochenz_)'s Twitter Profile Photo

🤔How do multilingual LLMs encode structural similarities across languages? 🌟We find that LLMs use identical circuits when languages share the same morphosyntactic processes, but recruit specialized components to handle tasks involving language-specific linguistic features⤵️

Alexa R. Tartaglini (@artartaglini)'s Twitter Profile Photo

🚨 New paper at NeurIPS Conference w/ Michael Lepori! Most work on interpreting vision models focuses on concrete visual features (edges, objects). But how do models represent abstract visual relations between objects? We adapt NLP interpretability techniques for ViTs to find out! 🔍

Ruochen Zhang not @ ICLR (@ruochenz_)'s Twitter Profile Photo

🤔Ever wonder why LLMs give inconsistent answers in different languages? In our paper, we identify two failure points in the multilingual factual recall process and propose fixes that guide LLMs to the "right path." This can boost performance by 35% in the weakest language! 📈

Etha Tianze Hua (@ethahua)'s Twitter Profile Photo

Check out our new paper: "How Do Vision-Language Models Process Conflicting Information Across Modalities?"! Vision-language models often struggle with conflicting inputs - we show how their internal representations and key attention heads reveal when and how this happens.

Ruochen Zhang not @ ICLR (@ruochenz_)'s Twitter Profile Photo

🥳 Our recent work is accepted to the #EMNLP2025 main conference! In this paper, we leverage actionable interpretability insights to fix factual errors in multilingual LLMs 🔍 Huge shoutout to Meng Lu (Jennifer) for her incredible work on this! She's applying to PhD programs this cycle and you should…

Thomas Serre (@tserre)'s Twitter Profile Photo

Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 apply.interfolio.com/173939 #AI #CognitiveScience #AcademicJobs #BrownUniversity

Michael Lepori (@michael_lepori)'s Twitter Profile Photo

What does your favorite language model know about the real world? 🌎 Can it distinguish between possible and impossible events? We find that LM representations not only encode these distinctions, but that they predict human judgments of event plausibility!