Eunsol Choi (@eunsolc) 's Twitter Profile
Eunsol Choi

@eunsolc

on natural language processing / machine learning. assistant prof at @NYUDataScience @NYU_Courant prev @UTCompSci @googleai, @uwcse, @Cornell.

ID: 774769139269283842

Link: https://eunsol.github.io | Joined: 11-09-2016 00:38:52

118 Tweets

5.5K Followers

858 Following

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

How does retrieval augmentation impact generation in LFQA? We present a controlled study with many new findings. Most interesting finding for me: the last generated sentence was the *least* attributable to prepended docs. More analysis in the paper!

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

#NeurIPS2023 paper on knowledge editing. We present a distillation-based method that improves *propagation* of injected facts. This remains a very challenging task; none of the methods (including ours) reliably solves it yet. Lots of analysis! Led by Shankar Padmanabhan, an undergrad!

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

My first prompting paper 👋 We link an LM's parametric knowledge to the construction of in-context examples. If an LM lacks the knowledge behind its in-context examples, could that result in hallucinations? If an LM can easily answer them, would it make educated guesses on challenging queries?

Fangyuan Xu (@brunchavecmoi) 's Twitter Profile Photo

Instruction-following capabilities of LLMs are a prerequisite to AI ✒️ writing assistance. How good are current LLMs at this task? We present 🥝 𝗞𝗜𝗪𝗜, a dataset with instructions for knowledge-intensive, document-grounded writing of long-form answers to research questions.

Manling Li (@manlingli_) 's Twitter Profile Photo

Excited to share the KnowledgeLM workshop at ACL 2024 with Zoey Sha Li Heng Ji Eunsol Choi Michael Zhang Mor Geva Peter Hase. We will also have six amazing speakers and panelists!! Submissions by: May 24, 2024. Website: knowledgeable-lm.github.io. Let’s make LLMs knowledgeable!

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

Can LLMs comprehensively capture information spread across multiple documents? Can LLMs distinguish confusing entity mentions? Please check out our preprint on multi-document reasoning for LLMs, focusing on entity disambiguation!

Fangyuan Xu (@brunchavecmoi) 's Twitter Profile Photo

✨RECOMP at #ICLR2024! Our poster is ⏰ Thursday 10:45am (Halle B #138). Come check out our work & talk to my advisor Eunsol Choi and collaborator Weijia Shi!

Atula Tejaswi (@atu_tej) 's Twitter Profile Photo

🚀 Exciting new paper alert! Achieve up to 96% of full performance with just 0.006-0.25% of trainable parameters! ✨ How? It’s all in the singular vectors! Introducing 🎯SVFT: Singular Vectors guided Fine-Tuning for PEFT. Here’s a quick breakdown! 🧵 #AI #MachineLearning #NLP #CV
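As a rough illustration of the "singular vectors" idea (freeze a pretrained weight's SVD factors and train only a tiny set of coefficients in that basis), a hedged PyTorch sketch might look like the following. The class name and the specific choice to train only per-singular-value offsets are illustrative assumptions, not necessarily the paper's exact SVFT formulation.

```python
import torch
import torch.nn as nn

class SVDAdaptedLinear(nn.Module):
    """Hypothetical module: freeze U, S, V^T of a pretrained weight and train
    only a small per-singular-value offset (illustrative, not the exact SVFT method)."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        # Decompose the pretrained weight once; the factors stay frozen (buffers).
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        # Trainable coefficients: O(min(out, in)) parameters instead of out*in.
        self.delta = nn.Parameter(torch.zeros_like(S))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the adapted weight in the frozen singular-vector basis.
        W = self.U @ torch.diag(self.S + self.delta) @ self.Vh
        return x @ W.T

# Usage sketch: adapt a pretrained linear layer's weight.
pretrained = nn.Linear(768, 768, bias=False)
adapted = SVDAdaptedLinear(pretrained.weight.detach())
out = adapted(torch.randn(4, 768))
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(out.shape, trainable)  # only 768 trainable parameters vs. 768*768 in the full weight
```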

Jifan Chen (@jifan_chen) 's Twitter Profile Photo

This is my last work during my Ph.D. However, I’m not able to go to NAACL due to immigration issues 😔😔 Come check out our work on Monday!

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

New preprint on building language-specific LLMs! Out of the box, most LLMs are not very effective at handling low-resource languages, but after token augmentation and a moderate amount of fine-tuning, their performance improves significantly. We look into various design choices.
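One common form of token augmentation is to extend the tokenizer's vocabulary with language-specific tokens and resize the model's embedding matrix before fine-tuning on target-language data. A minimal sketch using the Hugging Face transformers API is shown below; the base model and token list are placeholders, and the paper's actual design choices may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base model only; the paper's models and languages may differ.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Add tokens that the base vocabulary would otherwise split into many byte pieces
# (placeholder examples, not taken from the paper).
new_tokens = ["안녕하세요", "감사합니다"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding (and tied output) matrix so the new token IDs get rows;
# these newly initialized embeddings are then learned during fine-tuning.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```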

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

Excited to introduce CaLMQA, 1.5K questions in 23 languages that require long-form answers on topics that are more likely to be asked in that language!

Eunsol Choi (@eunsolc) 's Twitter Profile Photo

Can code LLMs keep up with changes in APIs? We've previously studied updating facts in LLMs, and this project advances that research into more complex domains!

Hung-Ting Chen (@hungting_chen) 's Twitter Profile Photo

Our paper has been accepted to the Conference on Language Modeling 🎉! Our analysis reveals how LMs behave when generating long-form answers with retrieval augmentation, and provides directions for future work in this line of research!

Yoonsang Lee (@yoonsang_) 's Twitter Profile Photo

Accepted at the Conference on Language Modeling with scores of 9/8/7/6 🎉 We show current LMs struggle to handle multiple documents featuring confusing entities. See you in Philadelphia!

Mina Huh (@mina1004h) 's Twitter Profile Photo

VLMs can generate long-form answers to visual questions (LFVQA). What information do these long-form answers contain? How can we evaluate them? In our #COLM2024 paper, we introduce VizWiz-LF, a dataset of long-form answers to visual questions from blind and low vision people.

Manling Li (@manlingli_) 's Twitter Profile Photo

Tomorrow is the day! We cannot wait to see you at the #ACL2024 Knowledgeable LMs workshop! Super excited for keynotes by Peter Clark, Luke Zettlemoyer, Tatsunori Hashimoto, Isabelle Augenstein, Eduard Hovy, and Hannah Rashkin! Will announce a Best Paper Award ($500) and an Outstanding Paper Award.

Zayne Sprague (@zaynesprague) 's Twitter Profile Photo

To CoT or not to CoT? 🤔 300+ experiments with 14 LLMs & a systematic meta-analysis of 100+ recent papers. 🤯 Direct answering is as good as CoT except for math and symbolic reasoning. 🤯 You don’t need CoT for 95% of MMLU! CoT mainly helps LLMs track and execute symbolic computation.
