Gili Lior (@gililior)'s Twitter Profile

PhD student at @CSEhuji

ID: 1339555629187383298

Joined: 17-12-2020 12:58:55

58 Tweets

176 Followers

126 Following

Uri Berger (@uriberger88)'s Twitter Profile Photo

1/ Into Image Captioning? Don’t miss this!
Struggling to keep up with the influx of new metrics but still see the same 5 (BLEU, METEOR, ROUGE, CIDEr, SPICE) leading?
Read our recent captioning evaluation survey!

arxiv.org/abs/2408.04909
w/ Gabriel Stanovsky, Omri Abend, Lea Frermann >
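As context for the five metrics named in the tweet: BLEU, the oldest of them, is built on clipped n-gram precision. The sketch below is an illustrative toy (not from the survey), showing only the unigram building block; real BLEU combines n-grams up to length 4 with a brevity penalty. All names here are hypothetical.

```python
from collections import Counter

def unigram_precision(candidate: list[str], reference: list[str]) -> float:
    """Clipped unigram precision, the BLEU-1 building block.

    Each candidate word is credited at most as many times as it
    appears in the reference ("clipping"), then divided by the
    candidate length.
    """
    cand_counts = Counter(candidate)
    ref_counts = Counter(reference)
    clipped = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    return clipped / max(len(candidate), 1)

cand = "a cat sits on the mat".split()
ref = "a cat is on the mat".split()
print(unigram_precision(cand, ref))  # 5 of 6 candidate words match -> 0.8333...
```

The clipping step is what stops a degenerate caption like "the the the the" from scoring highly against any reference containing "the".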
Guy Kaplan ✈️🇸🇬 ICLR2025 (@gkaplan38844)'s Twitter Profile Photo

📢 Paper release 📢:

🔍 Ever wondered how LLMs understand words when all they see are tokens? 🧠

Our latest study uncovers how LLMs reconstruct full words from sub-word tokens, even when misspelled or previously unseen.

arxiv.org/pdf/2410.05864 (preprint)
👀 👇

[1/7]
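To make the setup concrete: subword tokenizers split rare or misspelled words into several pieces, so the model must reassemble word meaning from fragments. The toy greedy longest-match tokenizer below (a hypothetical illustration with a tiny made-up vocabulary, not the paper's method or a real LLM vocabulary) shows how one typo can shatter a single-token word into many tokens.

```python
# Toy vocabulary; real BPE/WordPiece vocabularies hold tens of thousands of entries.
VOCAB = {"language", "lang", "uage", "a", "e", "g", "l", "n", "u"}

def tokenize(word: str) -> list[str]:
    """Split a word into subword tokens via greedy longest-prefix match."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize("language"))  # -> ['language']  (known word: one token)
print(tokenize("lanugage"))  # misspelling fragments into many short tokens
```

An LLM that still "understands" the misspelled word must recover it from that fragmented token sequence, which is the phenomenon the study examines.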
Eliya Habba (@eliyahabba)'s Twitter Profile Photo

Care about LLM evaluation? 🤖🤔 We bring you 🕊️ DOVE, a massive (250M!) collection of LLM outputs on different prompts, domains, tokens, models... Join our community effort to expand it with YOUR model predictions & become a co-author!

Michael Hassid (@michaelhassid)'s Twitter Profile Photo

The longer a reasoning LLM thinks, the more likely it is to be correct, right?

Apparently not.

Presenting our paper: “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning”.

Link: arxiv.org/abs/2505.17813

1/n
Itay Itzhak (@itay_itzhak_)'s Twitter Profile Photo

🚨New paper alert🚨

🧠 Instruction-tuned LLMs show amplified cognitive biases, but are these new behaviors, or pretraining ghosts resurfacing?

Excited to share our new paper, accepted to CoLM 2025🎉!
See thread below 👇
#BiasInAI #LLMs #MachineLearning #NLProc