UNC NLP (@uncnlp)'s Twitter Profile
UNC NLP

@uncnlp

NLP (+ML/AI/CV) research at @UNCCS @UNC
Faculty: @mohitban47+@gberta227+@snigdhac25+@shsriva+@tianlongchen4+@huaxiuyaoml+@dingmyu+@zhun_deng+@SenguptRoni et al

ID: 875914488020701188

Link: http://nlp.cs.unc.edu · Joined: 17-06-2017 03:14:22

2.2K Tweets

3.3K Followers

405 Following

Archiki Prasad (@archikiprasad)'s Twitter Profile Photo

I’ll be at #ICML2025 this week to present ScPO: 📌 Wednesday, July 16th, 11:00 AM-1:30 PM 📍East Exhibition Hall A-B, E-2404 Stop by or reach out to chat about improving reasoning in LLMs, self-training, or just tips about being on the job market next cycle! 😃

Mohaiminul (Emon) Islam (@mmiemon)'s Twitter Profile Photo

Check out our new paper: Video-RTS 🎥 A data-efficient RL method for complex video reasoning tasks. 🔹 Pure RL w/ output-based rewards. 🔹 Novel sparse-to-dense Test-Time Scaling (TTS) to expand input frames via self-consistency. 💥 96.4% less training data! More in the thread👇
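Loosely, the sparse-to-dense idea could look like the sketch below: start with few frames and only add more when sampled answers disagree. The `sample_answer` stub, the `uniform_sample` helper, the frame budgets, and the agreement threshold are all illustrative assumptions, not Video-RTS's actual interface.

```python
from collections import Counter

def sample_answer(frames, question):
    """Stub: one sampled response from the video LLM on these frames."""
    raise NotImplementedError  # plug in the model under evaluation

def sparse_to_dense_tts(video, question, frame_budgets=(8, 16, 32),
                        n_samples=8, agreement=0.75):
    """Start from sparse frames; densify only while samples disagree."""
    best = None
    for budget in frame_budgets:
        frames = video.uniform_sample(budget)  # assumed helper
        answers = [sample_answer(frames, question) for _ in range(n_samples)]
        best, votes = Counter(answers).most_common(1)[0]
        if votes / n_samples >= agreement:  # self-consistent: stop early
            break
    return best
```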

Vaidehi Patil (@vaidehi_patil_)'s Twitter Profile Photo

The MUGen workshop at #ICML2025 is happening now! Stop by for talks on adversarial ML, unlearning as rational belief revision, failure modes in unlearning, robust LLM unlearning, and the bright vs. dark side of forgetting in generative AI!

Yiyang Zhou (@aiyiyangz)'s Twitter Profile Photo

GLIMPSE 👁️ | What Do LVLMs Really See in Videos? A new benchmark for video understanding: 3,269 videos and 4,342 vision-centric questions across 11 spatiotemporal reasoning tasks. Test your model to see if it truly thinks with video—or is merely performing frame scanning.
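For a sense of how a benchmark like this is typically consumed, here is a minimal scoring loop; the item fields and the `model.predict` call are hypothetical stand-ins, not GLIMPSE's released interface.

```python
def evaluate(model, items):
    """Accuracy of a video LLM on (video, question, answer) items,
    with a per-reasoning-task breakdown."""
    correct, per_task = 0, {}
    for item in items:
        pred = model.predict(item["video"], item["question"])  # assumed API
        hit = pred.strip().lower() == item["answer"].strip().lower()
        correct += hit
        task = item.get("task", "all")
        n_hit, n_all = per_task.get(task, (0, 0))
        per_task[task] = (n_hit + hit, n_all + 1)
    return correct / len(items), per_task
```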

David Wan (@meetdavidwan)'s Twitter Profile Photo

🎉 Our paper, GenerationPrograms, which proposes a modular framework for attributable text generation, has been accepted to the Conference on Language Modeling (COLM)! GenerationPrograms produces a program that executes to text, providing an auditable trace of how the text was generated and major gains on
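As a toy illustration of "a program that executes to text", the sketch below logs every module call so each output span can be audited back to its inputs; the `paraphrase` and `fuse` modules are hypothetical placeholders, not the paper's actual operators.

```python
def paraphrase(text, trace):
    trace.append(("paraphrase", text))
    return text  # identity stand-in for a neural paraphrase module

def fuse(a, b, trace):
    trace.append(("fuse", a, b))
    return a.rstrip(".") + ", and " + b[0].lower() + b[1:]

def run_program(sources):
    """The generated 'program': its output text comes with an auditable
    trace of which sources each step consumed."""
    trace = []
    s1 = paraphrase(sources[0], trace)
    text = fuse(s1, sources[1], trace)
    return text, trace

text, trace = run_program([
    "The method is modular.",
    "Each step is attributable to its inputs.",
])
print(text)       # "The method is modular, and each step is attributable ..."
for step in trace:
    print(step)   # every output span maps back to a program step
```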

Jaemin Cho (on faculty job market) (@jmin__cho)'s Twitter Profile Photo

🥳 Gap year update: I'll be joining Ai2/University of Washington for 1 year (Sep 2025-Jul 2026 -> JHU Computer Science) & looking forward to working with amazing folks there, incl. Ranjay Krishna, Hanna Hajishirzi, Ali Farhadi. 🚨 I'll also be recruiting PhD students for my group at JHU Computer Science for Fall

Kerem Zaman (@keremzaman3)'s Twitter Profile Photo

I'll be at #ACL2025 in Vienna🇦🇹! DM me if you'd like to chat about interpretability, safety, and reasoning, or catch me during our oral presentation on July 29th (Hall N.1, Session 9) 👇 x.com/akendapadi/sta…

Duy Nguyen (@duynguyen772)'s Twitter Profile Photo

🚀 We introduce GrAInS, a gradient-based attribution method for inference-time steering (of both LLMs & VLMs). ✅ Works for both LLMs (+13.2% on TruthfulQA) & VLMs (+8.1% win rate on SPA-VL). ✅ Preserves core abilities (<1% drop on MMLU/MMMU). LLMs & VLMs often fail because

Archiki Prasad (@archikiprasad)'s Twitter Profile Photo

📢 Excited to share our new paper, where we introduce ✨GrAInS✨, an inference-time steering approach for LLMs and VLMs via token attribution. Some highlights: ➡️GrAInS leverages contrastive, gradient-based attribution to identify the most influential textual or visual tokens

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin)'s Twitter Profile Photo

🚨 Excited to announce GrAInS, our new LLM/VLM steering method that uses gradient-based attribution to build more targeted interventions. Some highlights: 1️⃣ Compatible with both LLMs and VLMs, can intervene on text and vision tokens 2️⃣ Gains across a variety of tasks +
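Reading the GrAInS threads above together, the recipe is roughly: score input tokens by gradient-based attribution, then intervene on the most influential ones. Below is a minimal sketch under that reading, assuming a HuggingFace-style model that accepts `inputs_embeds`; the grad-x-input scoring and the damping intervention are simplified stand-ins for the paper's contrastive attribution and steering steps.

```python
import torch

def token_attributions(model, input_ids, loss_fn):
    """Score each input token by gradient x input (batch of 1 assumed)."""
    emb = model.get_input_embeddings()(input_ids)
    emb = emb.detach().requires_grad_(True)
    loss = loss_fn(model(inputs_embeds=emb).logits)
    loss.backward()
    return (emb.grad * emb).sum(-1).abs().squeeze(0)  # (seq_len,)

def damp_top_tokens(hidden, scores, k=5, alpha=4.0):
    """Toy intervention: shrink the hidden states of the k most influential
    tokens. GrAInS's actual steering construction differs."""
    idx = scores.topk(min(k, scores.numel())).indices
    hidden = hidden.clone()
    hidden[:, idx, :] /= alpha
    return hidden
```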

Han Lin (@hanlin_hl)'s Twitter Profile Photo

My talented collaborator & mentor @jaemincho will be recruiting PhD students at JHU Computer Science for Fall 2026! If you're interested in vision, language, or generative models, definitely reach out!🎓🙌

Snigdha Chaturvedi (@snigdhac25)'s Twitter Profile Photo

Will be attending #ACL2025. Happy to talk about the two papers being presented from our lab on (1) Identifying unreliable narrators w/ Anneliese Brei (@AnnelieseB_) and Shashank Srivastava (@shsriva), and (2) Improving fairness in multi-document summarization w/ Haoyuan Li (@HaoyuanLi9) and Rui Zhang (@ruizhang_nlp) @uncnlp

Anneliese Brei (@annelieseb_)'s Twitter Profile Photo

(1/7) I am delighted to share our paper, Classifying Unreliable Narrators with Large Language Models. If you are at #ACL2025, please come to our in-person oral presentation on Tuesday during Session 9 from 14:00-15:30 CEST.

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin)'s Twitter Profile Photo

🇦🇹 I’m on my way to #ACL2025 to help present two papers (🧵s below) ➡️ MAT-Steer (07/30 at 11am), our method for steering LLMs w/ multiple attributes (e.g. truthfulness, bias reduction, and toxicity mitigation) simultaneously. ➡️ LAQuer (07/28 at 11am), a new task/framework for

Jaehong Yoon (on the faculty job market) (@jaeh0ng_yoon)'s Twitter Profile Photo

🚀 I'm recruiting PhD students to join my lab (jaehong31.github.io) at NTU Singapore, starting Spring 2026. If you're passionate about doing cutting-edge and high-impact research in multimodal AI, trustworthy AI, continual learning, or video generation/reasoning,