
Eran Hirsch
@hirscheran
PhD candidate @biunlp; tweets about NLP, ML, and research
ID: 2796071287
https://eranhirs.github.io/
Joined: 07-09-2014 14:32:29
830 Tweets
288 Followers
636 Following


Slides for my lecture "LLM Reasoning" at Stanford CS 25: dennyzhou.github.io/LLM-Reasoning-… Key points: 1. Reasoning in LLMs simply means generating a sequence of intermediate tokens before producing the final answer. Whether this resembles human reasoning is irrelevant. The crucial
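The point above, that "reasoning" is just intermediate tokens emitted before the final answer, can be sketched with a toy parser. This is a minimal illustration, not from the slides; the "Answer:" marker and the `split_reasoning` helper are hypothetical conventions assumed here for clarity.

```python
def split_reasoning(output: str):
    """Separate intermediate reasoning tokens from the final answer.

    Assumes the (hypothetical) convention that the model emits free-form
    reasoning lines first, then a final line starting with "Answer:".
    """
    lines = [ln.strip() for ln in output.strip().splitlines() if ln.strip()]
    reasoning = [ln for ln in lines if not ln.startswith("Answer:")]
    answer = next(
        (ln[len("Answer:"):].strip() for ln in lines if ln.startswith("Answer:")),
        None,
    )
    return reasoning, answer


# Example model output: two intermediate steps, then the final answer.
sample = """There are 3 apples and 2 more arrive.
3 + 2 = 5.
Answer: 5"""

reasoning, answer = split_reasoning(sample)
print(answer)  # -> 5
```

Under this view, the intermediate lines are ordinary generated tokens; nothing in the decoding loop distinguishes them from the answer except the convention used to read them off.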

Excited to share our #ACL2025NLP paper, "CiteEval: Principle-Driven Citation Evaluation for Source Attribution"! If you're working on RAG, Deep Research, and Trustworthy AI, this is for you. Why? Citation quality is

BIU NLP, Itai Mondshine, Reut Tsarfaty: Our paper "Beyond N-Grams: Rethinking Evaluation Metrics and Strategies for Multilingual Abstractive Summarization": arxiv.org/pdf/2507.08342

Are you still around Vienna? Come hear about a new morphological task at CoNLL at ~11:20 (hall M.1) Reut Tsarfaty

Hearing from a lot of folks that they still fine-tune Qwen2.5 instead of Qwen3, simply because "it's easier to tune." Qwen2.5 models seem more steerable: easier to adapt for new behaviors or boost specific capabilities, which means more downstream work builds on them. People
