Nan Xu (@xunannancy) 's Twitter Profile
Nan Xu

@xunannancy

USC CS Ph.D. Student

ID: 987235677543661569

Link: https://sites.google.com/site/xunannancy/ · Joined: 20-04-2018 07:45:02

47 Tweets

121 Followers

132 Following

Jiao Sun (@sunjiao123sun_) 's Twitter Profile Photo

Mitigating racial bias from LLMs is a lot easier than removing it from humans! 

Can’t believe this happened at the best AI conference NeurIPS Conference

We have ethical reviews for authors, but missed it for invited speakers? 😡
Qin Liu (@qinliu_nlp) 's Twitter Profile Photo

🌟 Check out our latest comprehensive survey on: 🌟
⚠️Emergent backdoor threats to LLMs
👻Safety challenges to LLMs
💡Future research directions in this area

Invited paper at 60th Annual Allerton Conference: ieeexplore.ieee.org/abstract/docum…
Wenjie Jacky Mo (@wenjie_jacky_mo) 's Twitter Profile Photo

Worried about backdoors in LLMs?

🌟 Check out our #NAACL2025 work on test-time backdoor mitigation!

✅ Black-box 📦
✅ Plug-and-play 🛡️

We explore:
→ Defensive Demonstrations 🧪
→ Self-generated Prefixes 🧩
→ Self-refinement ✍️

📄 arxiv.org/abs/2311.09763

🧵[1/n]
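To make the "defensive demonstrations" idea concrete, here is a minimal sketch assuming a generic black-box completion call; `query_llm` and the demo pool are hypothetical, not the paper's implementation:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical black-box LLM call, e.g. a hosted completion API."""
    raise NotImplementedError

# Clean demonstrations drawn from a trusted pool (illustrative only).
CLEAN_DEMOS = [
    ("The movie was wonderful.", "positive"),
    ("The plot made no sense at all.", "negative"),
]

def classify_with_defense(user_input: str) -> str:
    # Prepend trusted in-context examples so a poisoned trigger in the
    # user input is less likely to flip the model's prediction.
    demos = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in CLEAN_DEMOS)
    # Black-box and plug-and-play: only the prompt changes, never the model.
    return query_llm(f"{demos}\nReview: {user_input}\nSentiment:")
```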
🌴Muhao Chen🌴 (@muhao_chen) 's Twitter Profile Photo

🚨 Call for Papers! ACL 2025 🚨
LLM Security Workshop @ ACL 2025 (the first workshop of ACL SIGSEC)
🔐 Topics: Adversarial attacks, defenses, vulnerabilities, ethical & legal aspects, safe deployment of LLMs and more
📅 Submission Deadline: April 15, 2025
📍 August 1, 2025 in

Fei Wang (@fwang_nlp) 's Twitter Profile Photo

🎉 Excited to share that our paper, "MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding", will be presented at #ICLR2025!
📅 Date: April 24
🕒 Time: 3:00 PM
📍 Location: Hall 3 + Hall 2B #11
MuirBench challenges multimodal LLMs with diverse multi-image
Nan Xu (@xunannancy) 's Twitter Profile Photo

How many r's in the word strawberry?
Human: 3 ✅
GPT-4o: 2 ❌
Is such a mistake rooted in subword tokenization, or in a lack of character-level training?
Find out in my NAACL talk on May 2: "LLM The Genius Paradox: A Linguistic and Math Expert's Struggle with Simple Word-based Counting Problems"
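For concreteness, the counting task and the tokenization issue it probes look like this in a short Python sketch (the subword split shown is hypothetical; real tokenizers vary):

```python
# The counting task itself is one line of Python; the point above is
# that an LLM sees subword tokens, not characters.
word = "strawberry"
print(word.count("r"))  # -> 3, the answer a human gives

# Hypothetical subword split: the model consumes opaque token IDs for
# pieces like these, so the letter count is never directly visible in
# its input.
tokens = ["str", "aw", "berry"]
print(tokens)
```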

Hadi Askari (@hadiaskari67) 's Twitter Profile Photo

🧵1/ Excited to share our #NAACL2025 work! 🎉
"Assessing LLMs for Zero-Shot Abstractive Summarization Through the Lens of Relevance Paraphrasing"
We study how robust LLM summarization is to our relevance-paraphrasing method. 🧠📝
More details below: 👇
arxiv.org/abs/2406.03993
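A rough sketch of what a relevance-paraphrasing robustness probe could look like, assuming a hypothetical `query_llm` helper (the paper's exact prompts and models may differ):

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def paraphrase_probe(article: str) -> tuple[str, str]:
    # 1. Identify the sentences most relevant to the summary.
    salient = query_llm(f"List the sentences most relevant for summarizing:\n{article}")
    # 2. Paraphrase only those sentences, preserving their meaning.
    perturbed = query_llm(
        "Paraphrase these sentences and return the article with them "
        f"substituted in place:\n{salient}\n---\n{article}"
    )
    # 3. A robust summarizer should produce consistent summaries for both.
    return query_llm(f"Summarize:\n{article}"), query_llm(f"Summarize:\n{perturbed}")
```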

Xiaofei Wen (@xiaofei_wen_mk) 's Twitter Profile Photo

Can LLM guardrails think twice before deciding? ✨ Check out our #ACL2025 paper: THINKGUARD — a critique-augmented safety guardrail! ✅ Structured critiques ✅ Interpretable decisions ✅ Robust against adversarial prompts 📑 arxiv.org/abs/2502.13458 🧵[1/n]

Can LLM guardrails think twice before deciding?

✨ Check out our #ACL2025 paper: THINKGUARD — a critique-augmented safety guardrail!
✅ Structured critiques
✅ Interpretable decisions
✅ Robust against adversarial prompts

📑 arxiv.org/abs/2502.13458
🧵[1/n]
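A minimal sketch of the critique-then-decide flow, assuming a hypothetical `query_llm` helper (this is illustrative prompting, not the released THINKGUARD model or its prompts):

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def guard(user_request: str) -> tuple[str, str]:
    # Step 1: write a structured critique before committing to a label.
    critique = query_llm(
        "Analyze this request for safety risks, step by step:\n" + user_request
    )
    # Step 2: decide conditioned on the critique; the written rationale
    # is what makes the final label interpretable.
    label = query_llm(f"Critique:\n{critique}\nAnswer 'safe' or 'unsafe':")
    return label.strip(), critique
```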
Tinghui Zhu (@darthzhu_) 's Twitter Profile Photo

😴 Extending a base LLM to new modalities is common practice for building multimodal LLMs.
❓ But can it generalize to omni-modality?
We study the effects of modality extension and ask three questions: arxiv.org/abs/2506.01872
#LLM #MLLM #OmniModality

Qin Liu (@qinliu_nlp) 's Twitter Profile Photo

🚨 New paper accepted to #ACL2025! We propose SudoLM, a framework that lets LLMs learn access control over parametric knowledge. Rather than blocking everyone from sensitive knowledge, SudoLM grants access to authorized users only. Paper: arxiv.org/abs/2410.14676… 🧵[1/6]👇

🚨 New paper accepted to #ACL2025!
We propose SudoLM, a framework that lets LLMs learn access control over parametric knowledge.
Rather than blocking everyone from sensitive knowledge, SudoLM grants access to authorized users only.
Paper: arxiv.org/abs/2410.14676…
🧵[1/6]👇
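A toy sketch of the intended access-control behavior, assuming a shared credential. Note the hedge: SudoLM trains this gating into the model's parameters, whereas the wrapper below (all names hypothetical) only mimics the input/output contract:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

SUDO_KEY = "placeholder-secret"  # credential issued to authorized users

def is_sensitive(prompt: str) -> bool:
    """Hypothetical check for queries that require authorization."""
    return query_llm(f"Is this request sensitive? yes/no\n{prompt}").startswith("yes")

def answer(prompt: str, credential: str | None = None) -> str:
    # Sensitive knowledge is withheld only from unauthorized users,
    # rather than blocked for everyone.
    if is_sensitive(prompt) and credential != SUDO_KEY:
        return "This knowledge requires authorization."
    return query_llm(prompt)
```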
Wenjie Jacky Mo (@wenjie_jacky_mo) 's Twitter Profile Photo

ACLRollingReview EMNLP 2025: Urgent help needed.

acFZ: initial score 3

🧊 Complete silence during discussion.
⏰ 4am PST, 9 min before deadline: quietly drops to 2.
with “Thanks for the rebuttal. I have updated the score.”
⚠️ No explanation. No notice. No chance to respond. 
(0/n)
Dongwon Jung (@dong_w0n) 's Twitter Profile Photo

Excited to share that two of my first-author papers were accepted to #EMNLP2025! ✨📚
1️⃣ Code Execution as Grounded Supervision for LLM Reasoning (Main)
2️⃣ Familiarity-Aware Evidence Compression for Retrieval-Augmented Generation (Findings)
Huge thanks to my collaborators 🙌

Bangzheng Li (@bangzhengl) 's Twitter Profile Photo

🤔 If MLLMs encode vision & text in a joint space, why not reason over both?

We introduce Latent Visual Reasoning (LVR) — a new paradigm for multimodal LLMs.

- Keeps everything autoregressive

- Reconstructs query-relevant visual semantics in hidden states (like human visual
Tenghao Huang (@tenghaohuang45) 's Twitter Profile Photo

🚀 Thrilled to share our paper “Teaching Language Models to Gather Information Proactively” is accepted to #EMNLP2025-Findings 🎉!
We move beyond clarification: teaching LLMs to ask new, insight-seeking questions that help them build better answers.
🔗 arxiv.org/pdf/2507.21389
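A minimal inference-time sketch of the idea, assuming a hypothetical `query_llm` helper and a caller-supplied `ask_user` callback (the paper trains this behavior rather than scripting it):

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

def proactive_answer(task: str, ask_user) -> str:
    # Go beyond clarification: ask a new, insight-seeking question whose
    # answer should improve the final response.
    question = query_llm(
        f"Task: {task}\nAsk one question whose answer would most improve your answer."
    )
    reply = ask_user(question)  # caller-supplied callback to the human
    return query_llm(f"Task: {task}\nQ: {question}\nA: {reply}\nNow answer the task.")
```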