Christopher Potts (@chrisgpotts)'s Twitter Profile
Christopher Potts

@chrisgpotts

Stanford Professor of Linguistics and, by courtesy, of Computer Science, and member of @stanfordnlp and @StanfordAILab. He/Him/His.

ID: 408714449

Website: http://web.stanford.edu/~cgpotts/ · Joined: 09-11-2011 19:59:28

2.2K Tweets

12.12K Followers

612 Following

Aryaman Arora (@aryaman2020)'s Twitter Profile Photo

this paper will be presented at COLM later this year! looking back, i'm glad i tried something slightly out of my normal range in interp. ultimately, i feel that real-world models are much messier than can be satisfactorily explained via behaviour -- we must open the black box

Aryaman Arora (@aryaman2020)'s Twitter Profile Photo

i forgot the whole point of saying you're at a conference is to advertise your poster. please come check out AxBench by Zhengxuan Wu*, me*, et al. on Tuesday, 15 July at 11 AM - 1:30 PM

Peter Hase (@peterbhase)'s Twitter Profile Photo

Overdue job update -- I am now:
- A Visiting Scientist at Schmidt Sciences, supporting AI safety and interpretability
- A Visiting Researcher at the Stanford NLP Group, working with Christopher Potts
I am so grateful I get to keep working in this fascinating and essential area, and

Zhengxuan Wu (@zhengxuanzenwu)'s Twitter Profile Photo

ICML ✈️ this week. open to chat and learn mech interp from you. Aryaman Arora and i have cool ideas about steering, just come to our AxBench poster. new steering blog: zen-wu.social/steer/index.ht… Chinese version: zen-wu.social/steer/cn_index…

Omar Khattab (@lateinteraction)'s Twitter Profile Photo

The #SIGIR2025 Best Paper just awarded to the WARP engine for fast late interaction!

Congrats to Luca Scheerer 🎉 WARP was his ETH Zurich MS thesis, completed while visiting us at @StanfordNLP.

Incidentally, it's the fifth Paper Award for a ColBERT paper since 2020!*

Luca did an
Manuel Faysse (@manuelfaysse)'s Twitter Profile Photo

Introducing ColQwen-Omni, a 3B omnimodal retriever that extends the ColPali concept of multimodal retrieval with late interaction to audio chunks and short videos, with no performance degradation on visual document retrieval wrt our best models! (1/N)

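For readers not steeped in retrieval: "late interaction" (ColBERT, ColPali, ColQwen-Omni) means the query and each document are encoded into per-token (or per-patch / per-chunk) embeddings, and relevance is the sum, over query tokens, of each token's maximum similarity to any document unit (MaxSim). Below is a minimal numpy sketch of that scoring rule only, with illustrative names and shapes, not any library's actual API:

```python
import numpy as np

def late_interaction_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """ColBERT-style MaxSim scoring.

    query_embs: (num_query_tokens, dim) L2-normalized embeddings
    doc_embs:   (num_doc_units, dim)    L2-normalized embeddings
                (text tokens, image patches, or audio chunks)
    """
    sims = query_embs @ doc_embs.T          # pairwise cosine similarities
    return float(sims.max(axis=1).sum())    # best doc match per query token, summed

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 128));   q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(120, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(late_interaction_score(q, d))
```

Engines like WARP exist to make exactly this kind of scoring fast at collection scale; the sketch above captures only the semantics, not the optimized implementation.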
Lucy Li (@lucy3_li)'s Twitter Profile Photo

imagine the lucy benchmark -- can u get lucy to do IMO problems? can u get lucy to draw a well-proportioned person? can u get lucy to stop scrolling labubus on tiktok? truly difficult, next-level stuff

Lakshya A Agrawal (@lakshyaaagrawal)'s Twitter Profile Photo

How does prompt optimization compare to RL algos like GRPO?

GRPO needs 1000s of rollouts, but humans can learn from a few trials—by reflecting on what worked & what didn't.

Meet GEPA: a reflective prompt optimizer that can outperform GRPO by up to 20% with 35x fewer rollouts!🧵
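The contrast drawn above (a handful of reflective trials vs. thousands of RL rollouts) is easier to see as a loop. The sketch below is only the generic reflective-prompt-optimization idea, not GEPA's actual algorithm; `run_task`, `score`, and `llm_reflect` are hypothetical callables the caller would supply.

```python
from typing import Callable

def reflective_prompt_search(
    seed_prompt: str,
    tasks: list[dict],
    run_task: Callable[[str, dict], str],         # hypothetical: run the model with a prompt on one task
    score: Callable[[str, dict], float],          # hypothetical: grade an output against the task (0..1)
    llm_reflect: Callable[[str, list[str]], str], # hypothetical: LLM rewrites the prompt given failure traces
    iterations: int = 10,
) -> str:
    """Greedy reflective prompt optimization: each iteration costs one small
    batch of rollouts plus one reflection call, not thousands of rollouts."""
    def batch_score(prompt: str) -> float:
        return sum(score(run_task(prompt, t), t) for t in tasks) / len(tasks)

    best_prompt, best_score = seed_prompt, batch_score(seed_prompt)
    for _ in range(iterations):
        # Collect textual feedback about what the current prompt got wrong.
        failures = []
        for t in tasks:
            out = run_task(best_prompt, t)
            if score(out, t) < 1.0:
                failures.append(f"input: {t}\noutput: {out}")
        if not failures:
            break
        # Reflection step: the LLM reads the failures and proposes a revised prompt.
        candidate = llm_reflect(best_prompt, failures)
        cand_score = batch_score(candidate)
        # Keep the revision only if it improves the batch score.
        if cand_score > best_score:
            best_prompt, best_score = candidate, cand_score
    return best_prompt
```

The method announced in the thread is considerably more sophisticated than this greedy loop; the sketch is only meant to show the cost structure — a few scored rollouts plus a reflection call per step.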
Mike Taylor (@hammer_mt)'s Twitter Profile Photo

It's so funny that prompt optimization is turning out to be more important than fine-tuning, given that every ML engineer told me four years ago prompting was irrelevant.

ACL 2025 (@aclmeeting)'s Twitter Profile Photo

📅 10-Year ToT Award (2015) Thang Luong, Hieu Pham & Christopher D. Manning: “Effective Approaches to Attention-based Neural Machine Translation” EMNLP 2015 🔗 aclanthology.org/D15-1166/ A milestone in neural MT and attention mechanisms. 🔁🧠
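As background for readers who don't know the paper: it popularized simple "global" and "local" attention for sequence-to-sequence MT. Here is a minimal numpy sketch of global attention with the simplest (dot-product) score, using illustrative shapes only, not the paper's full model:

```python
import numpy as np

def global_dot_attention(decoder_state: np.ndarray, encoder_states: np.ndarray):
    """Global attention with a dot-product score (Luong et al., 2015 style).

    decoder_state:  (dim,)          current target hidden state h_t
    encoder_states: (src_len, dim)  source hidden states h_s
    Returns (context_vector, alignment_weights).
    """
    scores = encoder_states @ decoder_state   # one alignment score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax -> alignment distribution a_t
    context = weights @ encoder_states        # context vector c_t: weighted average of source states
    return context, weights

# Toy usage with random states standing in for encoder/decoder RNN outputs.
rng = np.random.default_rng(0)
c_t, a_t = global_dot_attention(rng.normal(size=64), rng.normal(size=(10, 64)))
```

In the full model, the context vector is combined with the decoder state through a learned layer to form the attentional hidden state used for prediction, and the paper also studies alternative scoring functions and a local attention window.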

Stanford NLP Group (@stanfordnlp)'s Twitter Profile Photo

The Stanford NLP Group founders won both ACL 2025 Test of Time Awards:

▪ 25 yrs: Gildea & Dan Jurafsky, Automatic Labeling of Semantic Roles
aclanthology.org/P00-1065/

▪ 10 yrs: Thang Luong, Hieu Pham & Christopher Manning, Effective Approaches to Attention-based NMT
aclanthology.org/D15-1166/