Nikolai Rozanov (@ai_nikolai)'s Twitter Profile
Nikolai Rozanov

@ai_nikolai

CS PhD in LLM Agents @ImperialCollege || ex tech-founder || LLMs, Agent AI, NLP, RL. #NLProc

ID: 86134795

Link: https://github.com/ai-nikolai
Joined: 29-10-2009 18:48:55

119 Tweets

99 Followers

306 Following

Stefano Ermon (@stefanoermon)

Excited to share that I’ve been working on scaling up diffusion language models at Inception. A new generation of LLMs with unprecedented capabilities is coming!

Imperial NLP (@imperial_nlp)

Can't wait to see everyone at ICLR and NAACL! Check out some of our awesome papers. Come and say hi, we'd love to have a chat :)

Preslav Nakov (@preslav_nakov)

MBZUAI with 4 awards at NAACL'2025: Best Theme paper award, Outstanding paper award, SAC award for the Special theme, and SAC award for Resources and Evaluation. #NAACL2025

Yves-A. de Montjoye (@yvesalexandre)

🚨One (more!) fully-funded PhD position in our group at Imperial College London – Privacy & Machine Learning 🤖 starting Oct 2025. Plz RT 🔄

Imperial NLP (@imperial_nlp)

Thanks Choi Dong Hee (far left in picture) for your great presentation on your last day here at Imperial! Been brilliant having you here, and looking forward to following your work in the future :)

Lisa Alazraki (@lisaalazraki)

Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀 We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
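
The tweet only outlines the pipeline, so here is a rough, hypothetical Python sketch of the idea as described: an upstream policy LLM writes a preamble, a frozen downstream LLM answers conditioned on it, and a preference model scores the answer to provide the reinforcement-learning reward. All names and methods below are placeholders, not the paper's actual code.

# Hypothetical sketch of the RLRE pipeline as described in the tweet.
# `policy_llm`, `downstream_llm`, and `preference_model` are placeholder
# objects, not APIs from the paper.
def rlre_step(policy_llm, downstream_llm, preference_model, question):
    preamble = policy_llm.generate(question)                       # upstream preamble
    answer = downstream_llm.generate(preamble + "\n" + question)   # frozen downstream LLM
    reward = preference_model.score(question, answer)              # proxy for human preference
    policy_llm.reinforce(preamble, reward)                         # e.g. a policy-gradient update
    return reward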

Joe Stacey (@_joestacey_)

We have a new paper up on arXiv! 🥳🪇 The paper tries to improve the robustness of closed-source LLMs fine-tuned on NLI, assuming a realistic training budget of 10k training examples. Here's a 60-second rundown of what we found!

Percy Liang (@percyliang)

For trying to understand LMs deeply, EleutherAI’s Pythia has been an invaluable resource: 16 LMs (70M to 12B parameters) trained on the same data (The Pile) in the same order, with intermediate checkpoints. It’s been two years and it’s time for a refresh.
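
For anyone who wants to poke at those checkpoints, here is a minimal sketch of loading an intermediate Pythia checkpoint with the Hugging Face transformers library; the model id and the "stepNNNN" revision branch follow EleutherAI's published naming, but treat the exact revision string as an assumption to verify on the model hub.

# Minimal sketch: load a Pythia model at a given training step.
# Assumes the Hugging Face `transformers` library; EleutherAI publishes
# checkpoint branches named like "step1000" ... "step143000" (verify on the hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-70m"   # smallest model in the Pythia suite
revision = "step143000"              # a checkpoint branch (assumed name)

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))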

Mehrdad Farajtabar (@mfarajtabar)

🧡 1/8 The Illusion of Thinking: Are reasoning models like o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet really "thinking"? 🤔 Or are they just throwing more compute towards pattern matching? The new Large Reasoning Models (LRMs) show promising gains on math and coding benchmarks,

Imperial NLP (@imperial_nlp)

ACL is almost here 🎉 Our Imperial NLP community will be presenting several papers at the conference next week. We look forward to seeing everyone in Vienna!

Preslav Nakov (@preslav_nakov)

MBZUAI is represented at ACL 2025 in Vienna with 66 papers: 39 Main, 25 Findings, 1 TACL, 1 Demo. Talk to us about faculty and postdoc opportunities (15 of our 18 NLP faculty and several students and postdocs are here). #ACL #ACL2025 #NLProc #MBZUAI

Nikolai Rozanov (@ai_nikolai)

This week we will be presenting our work at #ACL2025. StateAct is like ReAct but better ;) Shunyu Yao We discovered that if you enhance the Agent with “self-prompting” and “state-tracking” you get greater long-range reasoning performance for free. arxiv.org/abs/2410.02810
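
A rough sketch of what a ReAct-style agent loop with the tweet's "self-prompting" and "state-tracking" additions could look like; this is an assumption-laden illustration, not the paper's actual prompt format (see the arXiv link for that), and `llm` and `env` are hypothetical stand-ins.

# Hedged sketch of a ReAct-style loop with explicit goal restatement
# ("self-prompting") and a carried state summary ("state-tracking").
# `llm` and `env` are hypothetical stand-ins, not the paper's code.
def summarise(history):
    # Placeholder state summary: the last few actions taken so far.
    return "; ".join(action for _, action, _ in history[-5:]) or "start"

def state_act_episode(llm, env, goal, max_steps=30):
    observation = env.reset()
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"                         # self-prompting: restate the goal
            f"State so far: {summarise(history)}\n"   # state-tracking: explicit state
            f"Observation: {observation}\n"
            "Thought, then Action:"
        )
        thought, action = llm(prompt)                 # model returns reasoning and an action
        observation, done = env.step(action)
        history.append((thought, action, observation))
        if done:
            break
    return history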

yingzhen (@liyzhen2)

I read the first 4 books extensively during my PhD, highly recommended 👍 I'd also highlight the 5th book as my first read re deep learning. Mind-blowing for a young math undergrad (me) at the time, made me decide to go for ML

Lisa Alazraki (@lisaalazraki)

We have released #AgentCoMa, an agentic reasoning benchmark where each task requires a mix of commonsense and math to be solved 🧐 LLM agents performing real-world tasks should be able to combine these different types of reasoning, but are they fit for the job? 🤔 🧵⬇️

Lisa Alazraki (@lisaalazraki)

✨ Accepted as a Spotlight at #NeurIPS2025! Huge thanks to my coauthors and everyone who supported us. Check out the details below 👇