Xinyi Wang @ ICLR (@xinyiwang98)'s Twitter Profile
Xinyi Wang @ ICLR

@xinyiwang98

Final-year UC Santa Barbara CS PhD student trying to understand LLMs. She/her.

ID: 1329401770213183489

Website: https://wangxinyilinda.github.io/ · Joined: 19-11-2020 12:31:59

120 Tweets

1.1K Followers

417 Following

Alessandro Sordoni (@murefil)

We have a few intern positions open in our ML team @ MSR Montreal. Come work with Marc-Alexandre Côté, Minseon Kim, Lucas Caccia, Matheus Pereira, and Eric Xingdi Yuan on reasoning, interactive envs/coding, and LLM modularization. 🤯 Matheus Pereira and I will also be at #NeurIPS2024, so we can chat about this.

Qian Liu (@sivil_taram)

🎉 Announcing the first Open Science for Foundation Models (SCI-FM) Workshop at #ICLR2025! Join us in advancing transparency and reproducibility in AI through open foundation models.

🤝 Looking to contribute? Join our Program Committee: bit.ly/4acBBjF

🔍 Learn more at:
Xinyi Wang @ ICLR (@xinyiwang98)

🙌 We are calling for submissions and recruiting reviewers for the Open Science for Foundation Models (SCI-FM) workshop at ICLR 2025!
Submit your paper: openreview.net/group?id=ICLR.… (deadline: Feb 13)
Register as a reviewer: forms.office.com/e/SdYw5U75U3 (review submission deadline: Feb 28)

Rameswar Panda (@rpanda89)

🚨Hiring🚨 We are looking for research scientists and engineers to join IBM Research (Cambridge, Bangalore). We train large language models and do fundamental research on directions related to LLMs. Please DM me your CV and a brief introduction of yourself if you are interested!

Cong Wei (@congwei1230)

🚀 Thrilled to introduce ☕️MoCha: Towards Movie-Grade Talking Character Synthesis. Please unmute to hear the demo audio.
✨ We defined a novel task: Talking Characters, which aims to generate character animations directly from Natural Language and Speech input.
✨ We propose

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)

Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning

On a synthetic multihop reasoning environment designed to closely replicate the structure and distribution of real-world large-scale knowledge graphs, the authors observe that
𝚐𝔪𝟾𝚡𝚡𝟾 (@gm8xx8)

Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning

LLMs trained on synthetic multihop graphs show a U-shaped curve in reasoning: too small underfit, too large overfit. Overparameterization hurts edge completion due to memorization. A linear
Alexander Doria (@dorialexander)

A contrarian result I like a lot: smaller language models perform better on knowledge graphs than larger ones, as "overparameterization can impair reasoning due to excessive memorization".
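
For context, the "multihop reasoning" task these three posts summarize amounts to chaining relational lookups over a knowledge graph. Here is a toy sketch, with invented entities and relations rather than the paper's actual synthetic environment:

```python
# Toy illustration of a multihop query over a knowledge graph.
# The entities and relations are made up for this sketch; the paper trains
# LLMs on large synthetic graphs that mimic real-world graph structure.
graph = {
    ("alice", "mother"): "dana",
    ("dana", "employer"): "acme corp",
}

def multihop(entity: str, relations: list[str]) -> str:
    # Follow one edge per relation; each lookup is one "hop".
    for rel in relations:
        entity = graph[(entity, rel)]
    return entity

# 2-hop question: "Who employs Alice's mother?"
print(multihop("alice", ["mother", "employer"]))  # -> "acme corp"
```

Per the summaries above, the reported finding is that overly large models tend to memorize individual edges rather than learn to compose such hops.
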
Xinyi Wang @ ICLR (@xinyiwang98)

I’m attending #ICLR in Singapore! Also excited to share that I’m joining the Princeton Language and Intelligence Lab as a postdoc in July. In Fall 2026, I’ll be starting as an Assistant Professor at the University at Buffalo. I’ll be recruiting—feel free to reach out and chat!

Mingyu_Jin19 (@fnruji316625)

Disentangling Memory and Reasoning in LLMs (ACL 2025 Main)
We propose a new inference paradigm that separates memory from reasoning in LLMs using two simple tokens: ⟨memory⟩ and ⟨reason⟩.
✅ Improves accuracy
✅ Enhances interpretability
📄 Read: arxiv.org/abs/2411.13504
#LLM
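
The mechanics are easy to mock up. Below is a minimal, hypothetical sketch of producing and parsing such a decomposed trace, assuming a generic text-completion call; the prompt format and parsing are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of the <memory>/<reason> decomposition described above.
# `call_llm` is a stand-in stub so the example runs; a real system would
# query an actual model with these special tokens in its vocabulary.

def call_llm(prompt: str) -> str:
    # Canned response for illustration only.
    return ("<memory> Paris is the capital of France. </memory> "
            "<reason> The question asks for the capital; the recalled fact "
            "answers it directly: Paris. </reason>")

def decomposed_answer(question: str) -> dict:
    prompt = (
        "Answer step by step. Wrap recalled facts in <memory>...</memory> "
        "and inference steps in <reason>...</reason>.\n"
        f"Question: {question}"
    )
    output = call_llm(prompt)
    # Keeping the two step types separate is what makes the trace auditable.
    memories = [s.split("</memory>")[0].strip()
                for s in output.split("<memory>")[1:]]
    reasons = [s.split("</reason>")[0].strip()
               for s in output.split("<reason>")[1:]]
    return {"memory": memories, "reason": reasons}

print(decomposed_answer("What is the capital of France?"))
```
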
Xunjian Yin (@ard25974550)

Thrilled that Gödel Agent got noticed by Sakana AI & excited for their expansion! Accepted at ACL 2025: our 1st fully self-referential agent can read & modify its entire logic (even the modification logic itself). Done via recursion. Paper: arxiv.org/abs/2410.04444
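
As a rough illustration of "read & modify its entire logic," here is a heavily simplified, hypothetical sketch: an agent that stores its policy as source text, inspects it, and recompiles after rewriting it. The real Gödel Agent has an LLM propose edits to its own running code (including the edit logic itself), which this stub does not attempt:

```python
# Illustrative-only sketch loosely inspired by the Gödel Agent idea
# (arxiv.org/abs/2410.04444); every name here is hypothetical.

class SelfRefAgent:
    """Agent whose policy lives as source text it can read and rewrite."""

    def __init__(self):
        self.policy_src = "def policy(observation):\n    return observation * 2\n"
        self._compile()

    def _compile(self):
        # Recompile the stored source so modifications take effect.
        namespace = {}
        exec(self.policy_src, namespace)
        self.policy = namespace["policy"]

    def act(self, observation):
        return self.policy(observation)

    def self_modify(self, new_src: str):
        # In the real framework an LLM proposes this edit; here we install it.
        self.policy_src = new_src
        self._compile()

agent = SelfRefAgent()
print(agent.act(3))   # 6 under the initial policy
agent.self_modify("def policy(observation):\n    return observation + 100\n")
print(agent.act(3))   # 103 after the agent rewrote its own logic
```
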

Xunjian Yin (@ard25974550)

✈️ Currently attending ACL #ACL2025 in Vienna, Austria.
Will present in person at Hall 4/5 (July 30, 10:30 - 12:00):
🚩 Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement
Come and say hi!
Princeton Laboratory for Artificial Intelligence (@princetonainews)

This fall, we're welcoming 8 new postdocs!

From reinforcement learning to human-AI collaboration, their work will power forward our initiatives.

Meet them and learn more about their research: ai.princeton.edu/news/2025/ai-l…
Shawn Tan (@tanshawn)

We're looking for 2 interns for Summer 2026 at the MIT-IBM Watson AI Lab Foundation Models Team. Work on RL environments, enterprise benchmarks, model architecture, efficient training and finetuning, and more! Apply here: forms.gle/H6dNSywXCjDDyB…