Tejas kasetty (@tejaskasetty)'s Twitter Profile
Tejas kasetty

@tejaskasetty

Graduate student at @mila_quebec and @UMontreal |
AI and Neuroscience

ID: 105476856

Link: https://tejaskasetty.github.io | Joined: 16-01-2010 13:17:47

9 Tweets

156 Followers

274 Following

UNIQUE Center Neuro-AI (@ai_unique)

🌟UNIQUE Student Symposium: EXTENSION DEADLINE🌟

USS is a 2-day conference for students on Neuro-AI🧠🤖. Join us on June 5-6 at campus MIL! 

⌛️Deadline to register has been extended to May 30! Register now: eventbrite.ca/e/unique-stude…

👉More info: unique-students.github.io
CoLLAs 2025 (@collas_conf)

Bing Liu wrote the book on Lifelong Machine Learning. That's not a metaphor, folks. It's the closest thing we have to a sacred text! 📖 Join us at #CoLLAs2023 and get schooled by the master himself! 🧠 Register here: lifelong-ml.cc
Nishanth Anand (@itsnva7)

I am grateful to have a helping hand in Pierre-Luc Bacon, Riashat Islam and Pierre Thodoroff when I first started my research career as an MSc student in 2018. The advice they gave me on reading research papers, organizing my thoughts, and thinking like a researcher is still with me. (1/N)
UNIQUE Center Neuro-AI (@ai_unique)

📢UNIQUE Student Symposium 2024: Limited places!

If you are interested in Neuro-AI, join us on May 8-10 @ CERVO Brain Research Center (Quebec City)! Bus and hotel available - see conditions 🧠

👉Register now: eventbrite.ca/e/unique-stude…
⏳Poster submission: forms.gle/ePZuZNaLn5Tyq6…
Eric Elmoznino (@ericelmoznino)

Introducing our new paper explaining in-context learning through the lens of Occam’s razor, giving a normative account of next-token prediction objectives. This was with Tom Marty, Tejas kasetty, Léo Gagnon, Sarthak Mittal, Mahan Fathi, Dhanya Sridhar and Guillaume Lajoie. arxiv.org/abs/2410.14086

Nanda H Krishna (@nandahkrishna)

New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7
Majdi Hassan (@majdi_has)

(1/n)🚨You can train a model that solves DFT for any geometry almost without training data!🚨 Introducing Self-Refining Training for Amortized Density Functional Theory — a variational framework for learning a DFT solver that predicts the ground-state solutions for different

Emiliano Penaloza (@emilianopp_)

Excited that our paper "Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization" was accepted to ICML 2025! We show how Preference Optimization can reduce the impact of noisy concept labels in CBMs. 🧵/9

Eric Elmoznino (@ericelmoznino)

Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08/1…

Aniket Didolkar (@aniket_d98)

🚨Reasoning LLMs are e̵f̵f̵e̵c̵t̵i̵v̵e̵ ̵y̵e̵t̵ inefficient! Large language models (LLMs) now solve multi-step problems by emitting extended chains of thought. During the process, they often re-derive the same intermediate steps across problems, inflating token usage and

Helen Zhang (@helennnnnnzhang)

🚨 New paper!
“Understanding Adam Requires Better Rotation-Dependent Assumptions.”
Come check out our poster at NeurIPS Conference, or DM me if you would like to chat!
📅 Wednesday, December 3
🕐 4:30 PM PST
📍Exhibit Hall C,D,E #908
Vedant Shah (@veds_12)

LOTs of discourse lately about the correctness of the KL-regularization term used in RLVR fine-tuning of LLMs.

Which estimator to use? Whether to add it to the reward or loss? What’s even the difference? 🤔

In our new preprint, we evaluate these choices empirically. 🧵

1/n
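As background for the estimator question this thread raises, here is a minimal sketch of the three single-sample KL estimators commonly compared in RL fine-tuning (often called k1, k2, k3). The function name and the direction convention KL(q‖p) with x ~ q are illustrative choices, not taken from the preprint:

```python
import math

def kl_estimators(logp, logq):
    """Per-sample estimators of KL(q || p), where x is sampled from q.

    logp, logq: log-probabilities of the sample under p and q.
    Returns the three common single-sample estimators:
      k1 = -log r          (unbiased, high variance, can be negative)
      k2 = (log r)^2 / 2   (biased, lower variance, always >= 0)
      k3 = (r - 1) - log r (unbiased, lower variance, always >= 0)
    with r = p(x) / q(x).
    """
    log_r = logp - logq
    r = math.exp(log_r)
    k1 = -log_r
    k2 = 0.5 * log_r ** 2
    k3 = (r - 1.0) - log_r
    return k1, k2, k3
```

Averaged over samples from q, k1 and k3 both have the true KL as their expectation; the practical trade-off the thread alludes to is variance per sample and whether the penalty enters the reward signal or the loss directly.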
Emiliano Penaloza (@emilianopp_)

Remember all the self-distillation papers that came out last week? Well, we also propose it 😅, but… But alongside something better 😎: π-Distill. We show that with this method, you can distill closed-source frontier models even though their traces are hidden 🔒. Both our methods