Imperial NLP (@imperial_nlp)'s Twitter Profile
Imperial NLP

@imperial_nlp

We are the Natural Language Processing community here at Imperial College London.

BFF with @EdinburghNLP
#NLProc

ID: 1767989744464486400

Joined: 13-03-2024 19:03:16

72 Tweets

562 Followers

1.1K Following

Imperial NLP (@imperial_nlp)'s Twitter Profile Photo

Can't wait to see everyone at ICLR and NAACL! Check out some of our awesome papers. Come and say hi, we'd love to have a chat :)

Giulia Sanguedolce (@giusanguedolce)'s Twitter Profile Photo

You can find me at ICLR 2025 in Singapore, where I’ll be presenting my work “Latent Representation Encoding and Multimodal Biomarkers for Post-Stroke Speech Assessment” on Sunday 27th, Hall 4 #6 and on Monday 28th, Peridot 201&206 🥳 ICLR 2026 Imperial NLP Imperial EEE

Lisa Alazraki (@lisaalazraki)'s Twitter Profile Photo

I’ll be presenting Meta-Reasoning Improves Tool Use in Large Language Models at #NAACL25 tomorrow Thursday May 1st from 2 until 3.30pm in Hall 3! Come check it out and have a friendly chat if you’re interested in LLM reasoning and tools 🙂 #NAACL

Lisa Alazraki (@lisaalazraki)'s Twitter Profile Photo

Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀

We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
Zhenhao Li (@zhenhaoli1)'s Twitter Profile Photo

🙌Happy to share our paper, “DiffuseDef: Improved Robustness to Adversarial Attacks via Iterative Denoising” is accepted to #ACL2025!
Great thanks to my co-authors Huichi Zhou, Marek Rei, Lucia Specia!

arXiv: arxiv.org/abs/2407.00248
GitHub: github.com/Nickeilf/Diffu…
Joe Stacey (@_joestacey_)'s Twitter Profile Photo

We have a new paper up on arXiv! 🥳🪇

The paper tries to improve the robustness of closed-source LLMs fine-tuned on NLI, assuming a realistic training budget of 10k training examples. 

Here's a 60 second rundown of what we found!
Joe Stacey (@_joestacey_)'s Twitter Profile Photo

If anyone has done any work improving the robustness of NLI models and we didn't cite you in our appendix, please share a link to your work - I would love to include it. Our appendix's related work on NLI robustness is a bit of a monster 😅🧌 x.com/_joestacey_/st…

Matthieu Meeus (@matthieu_meeus)'s Twitter Profile Photo

How good can privacy attacks against LLM pretraining get if you assume a very strong attacker? Check it out in our preprint ⬇️

Matthieu Meeus (@matthieu_meeus)'s Twitter Profile Photo

(1/9) LLMs can regurgitate memorized training data when prompted adversarially. But what if you *only* have access to synthetic data generated by an LLM?

In our ICML Conference paper, we audit how much information synthetic data leaks about its private training data 🐦🌬️
Matthieu Meeus (@matthieu_meeus)'s Twitter Profile Photo

Check out our recent work on prompt injection attacks! TL;DR: aligned LLMs appear to defend against prompt injection; yet with a strong attacker (GCG on steroids), we find that successful attacks (almost) always exist, but are just harder to find. arxiv.org/pdf/2505.15738