Yanzheng Xiang (@yanzhengxiang98) 's Twitter Profile
Yanzheng Xiang

@yanzhengxiang98

A PhD student at King's College London.

ID: 1716833914038505473

Joined: 24-10-2023 15:08:04

21 Tweets

54 Followers

221 Following

KCL NLP (@kclnlp) 's Twitter Profile Photo

Hello X! Welcome to the official X feed of KCL NLP. We are a young NLP group based at King's College London. We will use this account to share new research and information from KclNLP. Excited to join the X community!

KCL NLP (@kclnlp) 's Twitter Profile Photo

Members of our lab will be presenting their work at ACL 2024 next week in Bangkok. Come and check it out if you are around. We would love to hear your thoughts and advice. #ACL2024

<a href="/kclinformatics/">KCL Informatics</a>, <a href="/yulanhe/">Yulan He</a>, <a href="/zhengyuan_nlp/">Zheng Yuan</a>, <a href="/HYannakoudakis/">Helen Yannakoudakis</a>, Lin Gui, Oana Cocarascu
Yifei Wang (@yifeiwang77) 's Twitter Profile Photo

This paper led by Hanqi Yan has been accepted to the EMNLP main conference! What I’m most excited about in this work is that it tries to show that interpretable methods can bring not only better interpretability but also better learning outcomes! Stay tuned for more intriguing results to

xiong-hui (barry) chen (@xiong_hui_chen) 's Twitter Profile Photo

🚀 Our latest research on learning RL policies from tutorial books is being presented as an oral today at #NeurIPS2024! We take a bold step towards more generalized offline RL by teaching AI to learn directly from textbooks—just like humans do! 📚🤖
#LLM #Reinforcementlearning
Jialong Wu (@jlwu55) 's Twitter Profile Photo

🌐📷 Introducing WebWalker, a multi-agent framework developed during my internship at Tongyi Lab, Alibaba Group. We introduce the WebWalkerQA benchmark and propose WebWalker.
Homepage: alibaba-nlp.github.io/WebWalker/
Code: github.com/Alibaba-nlp/We…
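
For intuition only, here is a minimal sketch of the kind of multi-agent web-traversal loop the announcement describes; the explorer/critic split and every function name below are illustrative assumptions, not the actual API of the linked repo.

```python
# Hypothetical sketch of a multi-agent web-traversal loop; the explorer
# and critic interfaces are illustrative, not WebWalker's real API.
def web_walk(question, start_page, explorer, critic, max_steps=10):
    page, memory = start_page, []
    for _ in range(max_steps):
        # Explorer chooses which link on the current page to follow
        # and records what it observed there.
        action, observation = explorer.step(question, page)
        memory.append(observation)
        # Critic accumulates evidence and answers once it has enough.
        answer = critic.try_answer(question, memory)
        if answer is not None:
            return answer
        page = action.target_page
    return critic.best_guess(question, memory)
```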

Mahmoud Rabie (@mahrabie) 's Twitter Profile Photo

🤖🧠 𝙎𝙘𝙞𝙍𝙚𝙥𝙡𝙞𝙘𝙖𝙩𝙚-𝘽𝙚𝙣𝙘𝙝: 𝘽𝙚𝙣𝙘𝙝𝙢𝙖𝙧𝙠𝙞𝙣𝙜 𝙇𝙇𝙈𝙨 𝙞𝙣 𝘼𝙜𝙚𝙣𝙩-𝙙𝙧𝙞𝙫𝙚𝙣 𝘼𝙡𝙜𝙤𝙧𝙞𝙩𝙝𝙢𝙞𝙘 𝙍𝙚𝙥𝙧𝙤𝙙𝙪𝙘𝙩𝙞𝙤𝙣 𝙛𝙧𝙤𝙢 𝙍𝙚𝙨𝙚𝙖𝙧𝙘𝙝 𝙋𝙖𝙥𝙚𝙧𝙨 🧠🤖

#for_ai_scientists
#for_ai_researchers
#for_ai_architects

#did_you_know_that even
Roberta Raileanu (@robertarail) 's Twitter Profile Photo

✨ Sparks of Science: Hypothesis Generation Using Structured Paper Data ✨
Despite the hype, LLMs still struggle to generate useful scientific hypotheses 👩‍🔬
We introduce HypoGen ✨, a new dataset of problem 🔬 - solution 🧪 - insight 💡 - reasoning 🧠 tuples, automatically
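
For readers wondering what one such tuple might look like in code, here is an illustrative record type; the field names are assumptions based on the tweet, not the dataset's actual schema.

```python
# Illustrative record type for the problem/solution/insight/reasoning
# tuples described above; field names are assumptions, not HypoGen's schema.
from dataclasses import dataclass

@dataclass
class HypoGenExample:
    problem: str    # the research problem being addressed
    solution: str   # the hypothesized solution
    insight: str    # the key insight motivating the solution
    reasoning: str  # the reasoning chain connecting problem to solution
```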

KCL NLP (@kclnlp) 's Twitter Profile Photo

🎉 Excited to announce that the KclNLP group has 3 papers accepted at #ICML2025 and 15 papers accepted to #ACL2025!
👏 Huge congratulations to all the authors—stay tuned for more details!

<a href="/kclinformatics/">KCL Informatics</a> <a href="/KingsNMES/">KCL Natural, Mathematical & Engineering Sciences</a> <a href="/kingshcc/">Human-Centred Computing @ King's College London</a> <a href="/yulanhe/">Yulan He</a>
Zhenyi Shen (@zhenyishen22) 's Twitter Profile Photo

🔥 Chain-of-thought (CoT) reasoning in natural language is powerful—but inefficient.

What if LLMs could reason in a compact continuous space instead?

🚀 Introducing CODI, our new paper on post-training LLMs for continuous reasoning via self-distillation.

💡TL;DR: CODI
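
The tweet is truncated, so to make the general recipe it names concrete (reasoning in a continuous space, trained by self-distillation), here is a hedged PyTorch sketch; the learnable latent "thought" vectors, the MSE alignment loss, and the single alignment position are illustrative assumptions, not CODI's actual architecture.

```python
# Minimal sketch of continuous reasoning with self-distillation.
# All design choices here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentThoughts(nn.Module):
    """Learnable continuous 'thought' vectors inserted in place of a
    natural-language CoT (illustrative; not CODI's actual design)."""
    def __init__(self, d_model: int = 768, n_thoughts: int = 6):
        super().__init__()
        self.thoughts = nn.Parameter(0.02 * torch.randn(n_thoughts, d_model))

    def forward(self, question_emb: torch.Tensor) -> torch.Tensor:
        # question_emb: (batch, seq_len, d_model) token embeddings.
        batch = question_emb.size(0)
        t = self.thoughts.unsqueeze(0).expand(batch, -1, -1)
        # Append the continuous thoughts after the question tokens.
        return torch.cat([question_emb, t], dim=1)

def self_distill_loss(student_hidden: torch.Tensor,
                      teacher_hidden: torch.Tensor) -> torch.Tensor:
    # Align the student (latent reasoning) with a teacher run that saw
    # the full explicit CoT, at the position where the answer is produced.
    return F.mse_loss(student_hidden, teacher_hidden.detach())
```
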
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

🎉 Excited to share that our team has three papers accepted to ICML 2025, each exploring a different angle of how machines reason, from meta-reasoning to sparse and latent representations!
🧠 Spotlight (July 15): "Soft Reasoning: Navigating Solution Spaces in Large Language

Lin Gui (@lingui_kcl) 's Twitter Profile Photo

Steer LLM reasoning using a single continuous token, without relying on SFT or RL. Check out our #ICML2025 poster E-2024 from the KCL NLP group: Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration.

Paper:
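
As a rough illustration of steering with a single continuous token, here is a hedged best-of-N sketch, assuming a Hugging Face-style `generate(inputs_embeds=...)` interface, a Gaussian perturbation around a seed embedding, and an external scoring function; the paper's actual controlled-exploration procedure may differ.

```python
# Illustrative best-of-N search over one continuous steering token;
# not the paper's actual optimization procedure.
import torch

def soft_token_search(model, embed_fn, prompt_ids, score_fn,
                      n_candidates: int = 8, sigma: float = 0.1):
    prompt_emb = embed_fn(prompt_ids)              # (1, seq_len, d_model)
    seed = prompt_emb.mean(dim=1, keepdim=True)    # initial soft token (1, 1, d)
    best_out, best_score = None, float("-inf")
    for _ in range(n_candidates):
        # Explore the embedding space around the seed token.
        soft = seed + sigma * torch.randn_like(seed)
        inputs = torch.cat([prompt_emb, soft], dim=1)
        out = model.generate(inputs_embeds=inputs, max_new_tokens=128)
        score = score_fn(out)                      # e.g. a verifier or confidence
        if score > best_score:
            best_out, best_score = out, score
    return best_out
```
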
KCL NLP (@kclnlp) 's Twitter Profile Photo

🚀 The KCL NLP Group is heading to #ACL2025
We’re excited to present our latest work on LLM reasoning, interpretability, personalization, and many other topics.

📍 Catch us in Vienna — stop by, say hi, and let’s talk!
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

🧠 New work on safety vulnerabilities in reasoning-intensive setups—like think-mode or fine-tuning on narrow math tasks.
📊 Check out our preliminary results, with more to come soon:

lnkd.in/g2W9bu9F
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

⚠️ Think hard — going misaligned ⚠️

Details released: huggingface.co/papers/2509.00…

WHEN THINKING BACKFIRES: certain CoTs used in inference or training can trigger misaligned behaviors.

First mechanistic explanation provided. 🚀

Key insights:
🧠 (1/4) Effort-minimizing CoTs pose
Lin Gui (@lingui_kcl) 's Twitter Profile Photo

New work from our KCL NLP group in collaboration with the Centre for AI at AstraZeneca UK!

We introduce Latent Refinement Decoding (LRD), a two-stage parallel generation framework that achieves up to 10.6× faster decoding while improving accuracy across coding and
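
As a rough picture of a two-stage parallel scheme like the one described, here is an illustrative sketch; `init_latents`, `refine`, and `commit_tokens` are hypothetical names standing in for whatever LRD's actual stages are, so treat this as intuition rather than the paper's algorithm.

```python
# Two-stage parallel decoding sketch; structure and names are
# illustrative assumptions, not the paper's actual algorithm.
def latent_refinement_decode(model, prompt, n_positions, n_steps):
    # Stage 1: draft every output position as a latent (soft) prediction
    # and iteratively refine the whole draft in parallel, instead of
    # committing tokens one at a time.
    latents = model.init_latents(prompt, n_positions)
    for _ in range(n_steps):
        latents = model.refine(prompt, latents)
    # Stage 2: commit the refined latents to discrete tokens in one pass.
    return model.commit_tokens(latents)
```
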
KCL NLP (@kclnlp) 's Twitter Profile Photo

🧠 Join us online for talks & a panel discussion on Latent Reasoning in Large Language Models
🎙️ Speakers:
Zeyuan Yang (UMass Amherst) — Machine Mental Imagery 🏆 Best Paper, ICCV KnowledgeMR
Heming Xia (PolyU) — TokenSkip: Controllable CoT Compression in LLMs
Lin Gui, Yulan He