Yanzheng Xiang (@yanzhengxiang98) 's Twitter Profile
Yanzheng Xiang

@yanzhengxiang98

A PhD student at King's College London.

ID: 1716833914038505473

Joined: 24-10-2023 15:08:04

21 Tweets

54 Followers

221 Following

KCL NLP (@kclnlp) 's Twitter Profile Photo

Hello X! Welcome to the official X feed of KCL NLP. We are a young NLP group based at King's College London. We will use this account to share new research and news from KCL NLP. Excited to join the X community!

KCL NLP (@kclnlp) 's Twitter Profile Photo

Members of our lab will be presenting their work at <a href="/aclmeeting/">ACL 2024</a> next week in Bangkok. Come and check it out if you are around. We would love to hear your thoughts and advice. #ACL2024

<a href="/kclinformatics/">KCL Informatics</a>, <a href="/yulanhe/">Yulan He</a>, <a href="/zhengyuan_nlp/">Zheng Yuan</a>, <a href="/HYannakoudakis/">Helen Yannakoudakis</a>, Lin Gui, Oana Cocarascu
Yifei Wang (@yifeiwang77) 's Twitter Profile Photo

This paper led by Hanqi Yan has been accepted to the EMNLP main conference! What I'm most excited about in this work is that it shows interpretable methods can bring not only better interpretability but also better learning outcomes! Stay tuned for more intriguing results to

xiong-hui (barry) chen (@xiong_hui_chen) 's Twitter Profile Photo

🚀 Our latest research on learning RL policies from tutorial books is being presented as an oral today at #NeurIPS2024! We take a bold step towards more generalized offline RL by teaching AI to learn directly from textbooks—just like humans do! 📚🤖
#LLM #Reinforcementlearning
Jialong Wu (@jlwu55) 's Twitter Profile Photo

πŸŒπŸ“· Introducing WebWalker, a multi-agent framework developed during my internship at Tongyi Lab, Alibaba Group. We introduce WebWalkerQA and propose WebWalker. Homepage: alibaba-nlp.github.io/WebWalker/ Code: github.com/Alibaba-nlp/We…

Mahmoud Rabie (@mahrabie) 's Twitter Profile Photo

πŸ€–πŸ§  π™Žπ™˜π™žπ™π™šπ™₯π™‘π™žπ™˜π™–π™©π™š-π˜½π™šπ™£π™˜π™: π˜½π™šπ™£π™˜π™π™’π™–π™§π™ π™žπ™£π™œ π™‡π™‡π™ˆπ™¨ π™žπ™£ π˜Όπ™œπ™šπ™£π™©-π™™π™§π™žπ™«π™šπ™£ π˜Όπ™‘π™œπ™€π™§π™žπ™©π™π™’π™žπ™˜ π™π™šπ™₯𝙧𝙀𝙙π™ͺπ™˜π™©π™žπ™€π™£ 𝙛𝙧𝙀𝙒 π™π™šπ™¨π™šπ™–π™§π™˜π™ 𝙋𝙖π™₯π™šπ™§π™¨ πŸ§ πŸ€– #for_ai_scientists #for_ai_researchers #for_ai_architects #did_you_know_that even

πŸ€–πŸ§  π™Žπ™˜π™žπ™π™šπ™₯π™‘π™žπ™˜π™–π™©π™š-π˜½π™šπ™£π™˜π™: π˜½π™šπ™£π™˜π™π™’π™–π™§π™ π™žπ™£π™œ π™‡π™‡π™ˆπ™¨ π™žπ™£ π˜Όπ™œπ™šπ™£π™©-π™™π™§π™žπ™«π™šπ™£ π˜Όπ™‘π™œπ™€π™§π™žπ™©π™π™’π™žπ™˜ π™π™šπ™₯𝙧𝙀𝙙π™ͺπ™˜π™©π™žπ™€π™£ 𝙛𝙧𝙀𝙒 π™π™šπ™¨π™šπ™–π™§π™˜π™ 𝙋𝙖π™₯π™šπ™§π™¨ πŸ§ πŸ€–

#for_ai_scientists
#for_ai_researchers
#for_ai_architects

#did_you_know_that even
Roberta Raileanu (@robertarail) 's Twitter Profile Photo

✨ Sparks of Science: Hypothesis Generation Using Structured Paper Data ✨
Despite the hype, LLMs still struggle to generate useful scientific hypotheses 👩‍🔬
We introduce HypoGen ✨, a new dataset of problem 🔬 - solution 🧪 - insight 💡 - reasoning 🧠 tuples, automatically

KCL NLP (@kclnlp) 's Twitter Profile Photo

🎉 Excited to announce that the KCL NLP group has 3 papers accepted at #ICML2025 and 15 papers accepted to #ACL2025!
👏 Huge congratulations to all the authors—stay tuned for more details!

<a href="/kclinformatics/">KCL Informatics</a> <a href="/KingsNMES/">KCL Natural, Mathematical & Engineering Sciences</a> <a href="/kingshcc/">Human-Centred Computing @ King's College London</a> <a href="/yulanhe/">Yulan He</a>
Zhenyi Shen (@zhenyishen22) 's Twitter Profile Photo

🔥 Chain-of-thought (CoT) reasoning in natural language is powerful—but inefficient.

What if LLMs could reason in a compact continuous space instead?

🚀 Introducing CODI, our new paper on post-training LLMs for continuous reasoning via self-distillation.

💡 TL;DR: CODI
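The core idea of reasoning in a continuous space can be illustrated with a toy sketch: instead of decoding intermediate chain-of-thought tokens, the model's hidden state is fed straight back as the next step's input, and only the final state is decoded. This is a minimal illustration of latent-space reasoning in general, not CODI's actual architecture or training recipe; the dimensions, step count, and weight matrices below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # hidden size (placeholder)

# Stand-ins for a transformer's step function and output head.
W_step = rng.normal(scale=0.1, size=(D, D))  # one "reasoning" transition
W_out = rng.normal(scale=0.1, size=(D, 4))   # projects to 4 answer logits

def latent_reason(h0: np.ndarray, n_steps: int = 6) -> np.ndarray:
    """Iterate in hidden space: each step consumes the previous hidden
    state directly, never decoding it into discrete tokens."""
    h = h0
    for _ in range(n_steps):
        h = np.tanh(h @ W_step + h)  # residual update, stays continuous
    return h @ W_out                 # only the final state is decoded

h0 = rng.normal(size=D)  # toy encoding of the question
logits = latent_reason(h0)
answer = int(np.argmax(logits))
```

The efficiency argument is visible even in this sketch: the loop performs `n_steps` of "reasoning" with no sampling, vocabulary projection, or re-embedding in between.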
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

🎉 Excited to share that our team has three papers accepted to ICML 2025, each exploring a different angle of how machines reason, from meta-reasoning to sparse and latent representations!
🧠 Spotlight (July 15) "Soft Reasoning: Navigating Solution Spaces in Large Language

Lin Gui (@lingui_kcl) 's Twitter Profile Photo

Steer LLM reasoning using a single continuous token, without relying on SFT or RL. Check out our #ICML2025 poster E-2024 from the KCL NLP group: Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration.

Paper:
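The "single continuous token" idea can be sketched as a search problem: a steering embedding is prepended to the input, and its value is optimized against a downstream score rather than trained with SFT or RL. The sketch below uses simple random hill-climbing as a stand-in for the paper's controlled embedding exploration, and `model_score` is a hypothetical placeholder for "run the LLM with this embedding prepended and score the output".

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # embedding size (placeholder)

def model_score(steer: np.ndarray) -> float:
    """Placeholder for scoring the LLM's generation given a prepended
    steering embedding. Here: a fixed quadratic bowl with a known optimum."""
    target = np.full(D, 0.5)
    return -float(np.sum((steer - target) ** 2))

def explore_embedding(n_iters: int = 200, sigma: float = 0.1) -> np.ndarray:
    """Hill-climb over a single continuous steering token: propose a
    perturbation, keep it only if the downstream score improves."""
    steer = np.zeros(D)
    best = model_score(steer)
    for _ in range(n_iters):
        cand = steer + rng.normal(scale=sigma, size=D)
        s = model_score(cand)
        if s > best:
            steer, best = cand, s
    return steer

steer = explore_embedding()
```

Because only one embedding vector is searched, no model weights change: the base LLM is frozen and the entire adaptation lives in the prepended continuous token.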
KCL NLP (@kclnlp) 's Twitter Profile Photo

🚀 The KCL NLP Group is heading to #ACL2025
We're excited to present our latest work on LLM reasoning, interpretability, personalization, and many other topics.

📍 Catch us in Vienna — stop by, say hi, and let's talk!
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

🧠 New work on safety vulnerabilities in reasoning-intensive setups—like think-mode or fine-tuning on narrow math tasks.
📊 Check out our preliminary results, with more to come soon:

lnkd.in/g2W9bu9F
Hanqi Yan (@yan_hanqi) 's Twitter Profile Photo

⚠️ Think hard — going misaligned ⚠️

Details released: huggingface.co/papers/2509.00…

WHEN THINKING BACKFIRES: certain CoTs used in inference or training can trigger misaligned behaviors.

First mechanistic explanation provided. 🚀

Key insights:
🧠 (1/4) Effort-minimizing CoTs pose
Lin Gui (@lingui_kcl) 's Twitter Profile Photo

New work from our KCL NLP group <a href="/kclnlp/">KCL NLP</a> in collaboration with the Centre for AI, <a href="/ASTRAZENECAUK/">AstraZenecaUK</a>!

We introduce Latent Refinement Decoding (LRD), a two-stage parallel generation framework that achieves up to 10.6× faster decoding while improving accuracy across coding and
KCL NLP (@kclnlp) 's Twitter Profile Photo

🧠 Join us online for talks & a panel discussion on Latent Reasoning in Large Language Models
🎙️ Speakers:
Zeyuan Yang (UMass Amherst) — Machine Mental Imagery 🏆 Best Paper, ICCV KnowledgeMR
Heming Xia (PolyU) — TokenSkip: Controllable CoT Compression in LLMs
<a href="/LinGui_KCL/">Lin Gui</a> <a href="/yulanhe/">Yulan He</a>