Abhishek Panigrahi (@abhishek_034)'s Twitter Profile
Abhishek Panigrahi

@abhishek_034

Ph.D. @PrincetonCS
Previously Research Fellow @IndiaMSR and undergrad @iitkgp

ID: 1213867000771907585

Link: https://abhishekpanigrahi1996.github.io/ · Joined: 05-01-2020 16:57:10

101 Tweets

667 Followers

1.1K Following

Abhishek Panigrahi (@abhishek_034):

Check out our new paper on modality imbalance in VLMs! We propose a framework to quantify text vs. image learning differences & strategies to bridge the gap, backed by interesting gradient alignment studies. Simon is applying for PhD positions—an exceptional candidate to hire!
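
For readers curious what a gradient-alignment study might look like in practice, here is a minimal sketch: it measures the cosine similarity between the gradients that a text-only loss and an image-grounded loss induce on shared backbone parameters. The function name and setup are illustrative assumptions, not the paper's actual protocol.

```python
import torch
import torch.nn.functional as F

def modality_grad_alignment(model, loss_text, loss_image):
    """Cosine similarity between the gradients induced on shared parameters
    by a text loss and an image loss (illustrative measurement only)."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_text = torch.autograd.grad(loss_text, params,
                                 retain_graph=True, allow_unused=True)
    g_image = torch.autograd.grad(loss_image, params, allow_unused=True)
    # Keep only parameters that both losses actually touch, then flatten.
    pairs = [(gt.flatten(), gi.flatten())
             for gt, gi in zip(g_text, g_image)
             if gt is not None and gi is not None]
    gt = torch.cat([a for a, _ in pairs])
    gi = torch.cat([b for _, b in pairs])
    # Near 1.0: the two modalities pull the weights the same way;
    # near 0 or negative: they conflict.
    return F.cosine_similarity(gt, gi, dim=0).item()
```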

Abhishek Panigrahi (@abhishek_034):

Excited to share our work on Context-Enhanced Learning (CEL) for LLMs! Inspired by how humans learn with books, CEL accelerates training by placing helpful information in context while avoiding verbatim memorization of it. Backed by extensive theory & mechanistic experiments on Llama models!
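
A minimal sketch of the general mechanism, assuming a Hugging Face-style causal LM: helpful reference text is visible in the context, but the loss is masked to the target continuation (label -100 is ignored by the cross-entropy loss), so training benefits from the context without ever being pushed to reproduce it verbatim. Names and details below are illustrative; the paper's exact recipe may differ.

```python
import torch

def cel_style_loss(model, tokenizer, context, target, device="cpu"):
    """Next-token loss on `target` only, with `context` visible in-context.
    Context positions get label -100, so no gradient rewards memorizing them."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1).to(device)
    labels = input_ids.clone()
    labels[:, : ctx_ids.size(1)] = -100  # mask out the in-context material
    out = model(input_ids=input_ids, labels=labels)
    return out.loss
```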

Abhishek Panigrahi (@abhishek_034):

Thrilled and honored to be a recipient of the 2025 Apple Scholars in AI/ML PhD fellowship! I'm extremely grateful to my advisor, mentors, and collaborators for their invaluable support throughout my PhD journey. machinelearning.apple.com/updates/apple-…

Abhishek Panigrahi (@abhishek_034):

Join us at the MOSS Workshop at ICML 2025 to explore how small-scale experimentation can unlock deep insights into deep learning phenomena. We welcome submissions across a wide range of topics. See you all in Vancouver! #ICML2025 #MOSSWorkshop

Vaishnavh Nagarajan (@_vaishnavh):

📢 New paper on creativity & multi-token prediction! We design minimal open-ended tasks to argue:

→ LLMs are limited in creativity since they learn to predict the next token

→ creativity can be improved via multi-token learning & injecting noise ("seed-conditioning" 🌱) 1/ 🧵
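
A toy illustration of the seed-conditioning idea as described above: prepend a random seed string to the prompt at training and sampling time, so output diversity can be tied to the seed rather than to decoding temperature. The helper below is purely an illustrative assumption, not the authors' implementation.

```python
import random
import string

def seed_condition(prompt, seed_len=8, rng=None):
    """Prepend a random seed string to the prompt (illustrative only).
    Training on (seed + prompt -> target) lets the model tie output
    diversity to the seed, so even greedy decoding can vary across seeds."""
    rng = rng or random.Random()
    seed = "".join(rng.choice(string.ascii_lowercase) for _ in range(seed_len))
    return f"<seed:{seed}> {prompt}"

# Different random seeds yield different conditioned prompts for the same task:
print(seed_condition("Name an undirected graph on 5 vertices:"))
print(seed_condition("Name an undirected graph on 5 vertices:"))
```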

Abhishek Panigrahi (@abhishek_034):

🎉 Excited to present 2 papers at #ICML2025 in Vancouver!

Happy to chat about curricula, efficient and robust training of LLMs!

🧠 On the power of Context-Enhanced learning in LLMs
🖼️ Spotlight Poster: Tuesday, 11:00am–1:30pm (#E-2107)

⚙️ Generalizing from SIMPLE to HARD

Abhishek Panigrahi (@abhishek_034):

Come learn more about how helpful in-context information can improve optimization in LLMs at our spotlight poster tomorrow (11 am–1:30 pm)! #icml25

Abhishek Panigrahi (@abhishek_034):

Do VLMs perform as well as the LLMs they build upon? We say no! But how can we reduce the gap? Come learn more at our poster tomorrow (7/15, 4:30–7 pm). #icml25

Abhishek Panigrahi (@abhishek_034):

Come to our workshop tomorrow at West Ballroom Hall B, Vancouver Convention Centre. We have an amazing series of talks by Aditi Raghunathan, Tri Dao, Eric Wong, and Yejin Choi, spanning benchmark evaluation, efficient inference, jailbreaking, and reasoning in small LMs. Also, we have an…