Xiusi Chen (@xiusi_chen) 's Twitter Profile
Xiusi Chen

@xiusi_chen

Postdoc @UofIllinois @uiuc_nlp, Ph.D. in Computer Science @UCLA, BS @PKU1898. Ex-Intern @AmazonScience (x2), @NECLabsAmerica. LLM, Neuro-Symbolic AI.

ID: 600784965

Link: http://xiusic.github.io · Joined: 06-06-2012 05:26:34

51 Tweets

463 Followers

418 Following

Peixuan Han (韩沛煊) (@peixuanhakhan) 's Twitter Profile Photo

💡(1/4) Excited to release our work on LLM Safety:
Internal Activation as the Polar Star for Steering Unsafe LLM Behavior
Introducing SafeSwitch, a dynamic regulation method combating the safety-utility trade-off!
For details:
📰arxiv.org/pdf/2502.01042
💻github.com/Hanpx20/SafeSw…
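The tweet describes SafeSwitch only at a high level; a minimal illustrative sketch of the general idea, with the probe weights, function names, and threshold all invented here rather than taken from the paper: a lightweight probe reads an internal activation vector and routes to a refusal path only when the input looks unsafe, leaving benign prompts (and thus utility) untouched.

```python
import math
import random

random.seed(0)
HIDDEN = 16
# Hypothetical probe weights: random stand-ins for a probe that would,
# in practice, be trained on activations from labeled safe/unsafe prompts.
probe_w = [random.gauss(0, 1) for _ in range(HIDDEN)]
probe_b = 0.0

def unsafe_score(activation):
    """Sigmoid of a linear probe applied to one hidden-state vector."""
    z = sum(a * w for a, w in zip(activation, probe_w)) + probe_b
    return 1.0 / (1.0 + math.exp(-z))

def safe_switch(activation, generate, refuse, threshold=0.5):
    """Invoke the refusal path only when the probe flags the input;
    otherwise generate normally, preserving utility on benign prompts."""
    if unsafe_score(activation) >= threshold:
        return refuse()
    return generate()

# Toy usage: a zero activation gives sigmoid(0) = 0.5, below the threshold.
print(safe_switch([0.0] * HIDDEN,
                  lambda: "normal answer",
                  lambda: "refusal",
                  threshold=0.6))  # prints "normal answer"
```

The switch is "dynamic" in that regulation only activates when the internal signal predicts unsafe behavior, rather than applying a blanket safety filter to every request.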
dilek hakkani-tur (@dilekhakkanitur) 's Twitter Profile Photo

Instruction data can also be synthesized using feedback based on reference examples. Please check our recent work for more information. Thanks to Shuhaib Mehri, @XtremSup, and Heng Ji!

Cheng Qian (@qiancheng1231) 's Twitter Profile Photo

🚀Can your language model think strategically?
🧠 SMART: Boosting LM self-awareness to reduce Tool Overuse & optimize reasoning!
🌐 arxiv.org/pdf/2502.11435
📊 github.com/qiancheng0/Ope…
Smaller models, bigger brains. Smarter tool use, better results! 🔥 #AI #LLM
Xiusi Chen (@xiusi_chen) 's Twitter Profile Photo

🚀 Workshop CFP – Structured Knowledge for Large Language Models (SKnow-LLM) @ KDD 2025 SIGKDD 2025

What happens when the precision of structured data meets the power of large language models? We are launching a new workshop at KDD 2025 to explore exactly that!

📣 Introducing
Cheng Qian (@qiancheng1231) 's Twitter Profile Photo

🚀 ToolRL unlocks LLMs' true tool mastery! The secret? Smart rewards > more data.

📖 Introducing our newest paper:
ToolRL: Reward is all Tool Learning Needs

Paper Link: arxiv.org/pdf/2504.13958
Github Link: github.com/qiancheng0/Too…
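The tweet's "smart rewards > more data" claim can be made concrete with a hypothetical shaped reward for tool calls; the weights, field names, and partial-credit scheme below are invented for illustration, not taken from the ToolRL paper: instead of an all-or-nothing exact match, the reward grants partial credit for well-formed output, the right tool, and each matching argument.

```python
import json

def tool_call_reward(raw_output, expected_name, expected_args):
    """Hypothetical shaped reward: partial credit for well-formed JSON,
    the correct tool name, and matching arguments."""
    try:
        call = json.loads(raw_output)
    except json.JSONDecodeError:
        return -1.0                     # malformed output is penalized
    reward = 0.2                        # credit for well-formed JSON
    if call.get("name") == expected_name:
        reward += 0.4                   # credit for selecting the right tool
    args = call.get("arguments", {})
    if args and expected_args:
        hits = sum(1 for k, v in expected_args.items() if args.get(k) == v)
        reward += 0.4 * hits / len(expected_args)  # per-argument credit
    return reward

good = '{"name": "search", "arguments": {"query": "KDD 2025"}}'
print(round(tool_call_reward(good, "search", {"query": "KDD 2025"}), 6))  # 1.0
print(tool_call_reward("not json", "search", {}))                         # -1.0
```

A dense, graded signal like this gives RL training useful gradients even when no single rollout gets the full call exactly right, which is one way a reward design can substitute for more data.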
Hongru Wang (@wangcarrey) 's Twitter Profile Photo

💥 We are so excited to introduce OTC-PO, the first RL framework for optimizing LLMs’ tool-use behavior in Tool-Integrated Reasoning. Arxiv: arxiv.org/pdf/2504.14870 Huggingface: huggingface.co/papers/2504.14… ⚙️ Simple, generalizable, plug-and-play (just a few lines of code) 🧠
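The tweet gives no formula for OTC-PO's objective; a minimal sketch of the general idea its name suggests (optimal tool calling), with the penalty shape and coefficient invented here: reward a correct answer, but discount any tool calls beyond the minimum needed.

```python
def otc_style_reward(correct, n_tool_calls, min_calls_needed, alpha=0.1):
    """Hypothetical shaping: full credit requires a correct answer, and
    each tool call beyond the minimum needed costs `alpha` reward."""
    if not correct:
        return 0.0
    excess = max(0, n_tool_calls - min_calls_needed)
    return max(0.0, 1.0 - alpha * excess)

print(round(otc_style_reward(True, 1, 1), 2))   # 1.0: correct with no waste
print(round(otc_style_reward(True, 4, 1), 2))   # 0.7: three excess calls
print(round(otc_style_reward(False, 0, 1), 2))  # 0.0: wrong answer earns nothing
```

Such a penalty term is indeed "a few lines of code" layered on an existing RL reward, which matches the plug-and-play framing of the announcement.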

Emre Can Acikgoz (@emrecanacikgoz) 's Twitter Profile Photo

What are the capabilities of current Conversational Agents?

What challenges persist, and what should we actually expect from these agents as a next step?

🚀We are excited to share our recent survey: ✨ A Desideratum for Conversational Agents: Capabilities, Challenges, and Future
Heng Ji (@hengjinlp) 's Twitter Profile Photo

We are extremely excited to announce mCLM, a Modular Chemical Language Model that is friendly to automatable block-based chemistry and mimics bilingual speakers by “code-switching” between functional molecular modules and natural language descriptions of the functions. 1/2