Chen Shani (@chenshani2)'s Twitter Profile
Chen Shani

@chenshani2

NLP Postdoc @ Stanford

ID: 1200741300645052416

Joined: 30-11-2019 11:40:26

261 Tweets

226 Followers

302 Following

Ravid Shwartz Ziv (@ziv_ravid)

Yann LeCun and I have been pondering the concept of optimal representation in self-supervised learning, and we're excited to share our findings in a recently published paper! 📝🔍 arxiv.org/abs/2304.09355

Natalie Shapira (@natalieshapira)

New preprint! "Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models". Preprint: u.cs.biu.ac.il/~yogo/llm-tom.…

Chen Shani (@chenshani2)

Humor is a complicated phenomenon, tied to culture, context, personal taste, and more. Contemporary LLMs are impressive, but they are not a panacea, especially for such a subjective human trait.

Chen Shani (@chenshani2)

2/2 papers submitted to EMNLP'23 have been accepted; this should be the highlight of my PhD! But I can't be happy when there's a war... EMNLP 2025 #IsraelUnderAttack

lovodkin93 (@lovodkin93)

🎉Excited to announce our paper's acceptance at #EMNLP2023! We explore a fascinating question: when faced with (un)answerable queries, do LLMs actually grasp the concept of (un)answerability? 🧐 This work is a collaborative effort with Avi Caciularu, Shauli Ravfogel, Omer Goldman, and Ido Dagan. 1/n

Omri Avrahami (@omriavr)

[1/9] 🚨 We present our recent Google AI project: The Chosen One --- a fully automated solution for the task of consistent character generation in text-to-image diffusion models 🧑‍🎨. Project Page: omriavrahami.com/the-chosen-one

Assaf Zaritsky (@assafzaritsky)

Excited to share our new bioRxiv preprint presenting DISCOVER, a generalized method toward systematic visual interpretability of image-based classification models! Project led by Oded Rotem in collaboration with AIVF! biorxiv.org/cgi/content/sh… 🧵 1/n

Shachar Don-Yehiya (@shachar_don)

The language people use when they interact with each other changes over the course of the conversation. 🔍 Will we see a similar systematic language change as human users interact with a text-to-image model? #EMNLP23 arxiv.org/abs/2311.12131 W/ Leshem (Legend) Choshen 🤖🤗 @NeurIPS and Omri Abend 🧵👇

Moran Mizrahi (@moranmiz)

🚀 Excited to share our latest paper about the sensitivity of LLMs to prompts! arxiv.org/abs/2401.00595 Our work may partly explain why some models seem less accurate than their formal evaluation may suggest. 🧐 Guy Kaplan, Dan H.M 🎗, Rotem Dror, Hyadata Lab (Dafna Shahaf), Gabriel Stanovsky

Chen Shani (@chenshani2)

Stanford NLP Retreat 2024! Ryan Louie and I organized a PowerPoint Karaoke 🎤 My favorite part was Chris' answer. Q: What is the first principal component for both babies and undergrads? Chris Manning: HUNGER! Chris Manning Stanford NLP Group

Chen Shani (@chenshani2)

Stanford NLP Retreat! It was a packed weekend, full of great people and activities (And the car broke down halfway back, another great adventure!)

ELSC Brain (@elscbrain)

#ELSCspecialseminar with Dr. Chen Shani on the topic of “Designing Language Models to Think Like Humans” will take place on Tuesday, June 25, at 14:00 IST. Come hear the lecture at ELSC: Room 2004, Goodman bldg. Chen Shani

Isabel O. Gallegos (@isabelogallegos)

🚨🚨New Working Paper🚨🚨 AI-generated content is getting more politically persuasive. But does labeling it as AI-generated change its impact?🤔 Our research says the disclosure of AI authorship has little to no effect on the persuasiveness of AI-generated content. 🧵1/6

Ravid Shwartz Ziv (@ziv_ravid)

You know all those arguments that LLMs think like humans? Turns out it's not true. 🧠 In our paper "From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning" we test this by checking whether LLMs form concepts the same way humans do. W/ Yann LeCun, Chen Shani, Dan Jurafsky

NYU Center for Data Science (@nyudatascience)

New research shows LLMs favor compression over nuance — a key reason they lack human-like understanding. By Stanford postdoc Chen Shani, CDS Research Scientist Ravid Shwartz Ziv, CDS Founding Director Yann LeCun, & Stanford professor Dan Jurafsky. nyudatascience.medium.com/the-efficiency…

Moran Mizrahi (@moranmiz)

How can we help LLMs move beyond the obvious toward generating more creative and diverse ideas? In our new TACL paper, we propose a novel approach to enhance LLM creative generation! arxiv.org/abs/2504.20643 W/ Chen Shani, Gabriel Stanovsky, Dan Jurafsky, Hyadata Lab (Dafna Shahaf), Stanford NLP Group, HUJI NLP