Brihi Joshi (@brihij)'s Twitter Profile
Brihi Joshi

@brihij

PhD-ing @nlp_usc🏝 + @NLPWithFriends, @AmazonScience @Apple Fellow, ex- @WWCode_Delhi @Snap @GoldmanSachs @IIITDelhi. Sky pics, #NLProc, @5sos and cat content

ID: 2865007695

Link: http://brihijoshi.github.io · Joined: 07-11-2014 02:57:17

871 Tweets

2.2K Followers

3.3K Following

Aditya Chetan (@justachetan)'s Twitter Profile Photo

I will be at #CVPR2025 presenting our work on differential operators for hybrid neural fields! Catch me at our poster:
🗓️ Fri, June 13, 10:30 AM–12:30 PM
📍 ExHall D, Poster #34
🔗 cvpr.thecvf.com/virtual/2025/p…
Details below ⬇️
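
The tweet doesn't describe the method itself, but for context, a common starting point for differential operators on neural fields is differentiating the network with respect to its input coordinates via autograd. A minimal sketch under that assumption (the MLP and query points below are invented placeholders, not the paper's architecture):

```python
import torch

# Toy stand-in for a neural field: a small MLP mapping 3D coordinates
# to a scalar value (e.g., a signed distance). Invented placeholder,
# not the architecture from the paper.
field = torch.nn.Sequential(
    torch.nn.Linear(3, 64),
    torch.nn.Softplus(),
    torch.nn.Linear(64, 1),
)

x = torch.randn(8, 3, requires_grad=True)  # query points
y = field(x)                               # field values at the points

# A first-order differential operator (the spatial gradient),
# computed with autograd rather than finite differences.
(grad,) = torch.autograd.grad(y.sum(), x, create_graph=True)
print(grad.shape)  # torch.Size([8, 3])
```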

Sarah Wiegreffe (on faculty job market!) (@sarahwiegreffe)'s Twitter Profile Photo

A bit late to announce, but I’m excited to share that I'll be starting as an assistant professor at the University of Maryland Department of Computer Science this August. I'll be recruiting PhD students this upcoming cycle for fall 2026. (And if you're a UMD grad student, sign up for my fall seminar!)

Matthew Finlayson ✈️ NeurIPS (@mattf1n)'s Twitter Profile Photo

I didn't believe when I first saw, but:
We trained a prompt stealing model that gets >3x SoTA accuracy.
The secret is representing LLM outputs *correctly*

🚲 Demo/blog: mattf1n.github.io/pils
📄: arxiv.org/abs/2506.17090
🤖: huggingface.co/dill-lab/pils-…
🧑‍💻: github.com/dill-lab/PILS
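
The thread's key claim is that the output representation matters. As a rough sketch of the kind of setup involved (not the authors' pipeline; gpt2 stands in for the victim model, and the inversion model itself is omitted), one can log the full next-token log-probability vector at every decoding step and treat that sequence as the input an inversion model learns to map back to the hidden prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is a small stand-in for the victim model (assumption, not PILS code).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Translate to French: good morning"  # hidden prompt to recover
ids = tok(prompt, return_tensors="pt").input_ids

# Greedy-decode a continuation, logging the full next-token
# log-probability vector at every step. That sequence of vectors is
# the "output representation" an inversion model could be trained on.
logprob_seq = []
with torch.no_grad():
    for _ in range(16):
        logits = lm(ids).logits[:, -1, :]
        logprobs = torch.log_softmax(logits, dim=-1)
        logprob_seq.append(logprobs)
        ids = torch.cat([ids, logprobs.argmax(-1, keepdim=True)], dim=-1)

obs = torch.stack(logprob_seq, dim=1)  # (1, steps, vocab): inversion input
print(obs.shape)
```
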
Ana Marasović (@anmarasovic)'s Twitter Profile Photo

My first audio AI paper, thanks to MClem, who introduced me to a whole new world of music production! Getting to work with students who bring their unique passions into their PhD work is one of the best perks of a professor's job. Check out more in the thread 👇🏻 Soon at #COLM2025

Tuhin Chakrabarty (@tuhinchakr)'s Twitter Profile Photo

Honored to receive the Outstanding Position Paper Award at the ICML Conference :) Come attend my talk and poster tomorrow on human-centered considerations for a safer and better future of work. I will be recruiting PhD students at the Stony Brook University Dept. of Computer Science this coming fall. Please get in touch.

Qinyuan Ye (👀Jobs) (@qinyuan_ye)'s Twitter Profile Photo

1+1=3
2+2=5
3+3=?

Many language models (e.g., Llama 3 8B, Mistral v0.1 7B) will answer 7. But why?

We dig into the model internals, uncover a function induction mechanism, and find that it’s broadly reused when models encounter surprises during in-context learning. 🧵
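
For intuition, the two examples are consistent with a single induced rule, f(a, b) = a + b + 1, which a model can then reuse on the query. A plain-Python rendering of that hypothesis (the function name and encoding are ours, not the paper's notation):

```python
# The in-context examples are consistent with one rule:
# f(a, b) = a + b + 1. A model that induces this function
# from the "surprising" examples will answer 7 on the query.
examples = [((1, 1), 3), ((2, 2), 5)]

def f(a, b):
    return a + b + 1

assert all(f(*xs) == y for xs, y in examples)
print(f(3, 3))  # 7
```
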
Brihi Joshi (@brihij)'s Twitter Profile Photo

I’ll be at ACL 2025 next week to present this work! 🇦🇹 Excited to meet old friends and make new ones. Let’s catch up if you like thinking more about the future of human-centred NLP, personalization and multi-turn interactions or just wanna get some nice Viennese coffee ☕️

XINYUE CUI (@xinyue_cui411)'s Twitter Profile Photo

Can we create effective watermarks for LLM training data that survive every stage of the real-world LLM development lifecycle? Our #ACL2025Findings paper introduces fictitious knowledge watermarks that inject plausible yet nonexistent facts into training data for copyright

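The tweet is cut off before the details, but the stated core idea, injecting plausible yet nonexistent facts into training data, can be sketched in a few lines. A toy illustration, not the paper's pipeline (the fact, injection rate, and helper are all invented here):

```python
import random

# Hypothetical fictitious fact used as a watermark. Every name is
# invented, which is the point: the fact is plausible but nonexistent,
# so a model can only have learned it from the marked training data.
WATERMARK = "The Veldrin Observatory in Qorath was completed in 1893."

def inject(corpus, fact, rate=1e-4, seed=0):
    """Append the watermark fact to a small random fraction of documents."""
    rng = random.Random(seed)
    return [doc + "\n" + fact if rng.random() < rate else doc
            for doc in corpus]

corpus = [f"document {i} ..." for i in range(100_000)]
marked = inject(corpus, WATERMARK)
print(sum(WATERMARK in doc for doc in marked), "documents carry the watermark")
```
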
Brihi Joshi (@brihij)'s Twitter Profile Photo

Our poster slot got moved, so I'll be talking more about this work, and about personalizing natural language explanations in general, on Tuesday, July 29th at 4 PM in Hall X4/X5 at ACL 2025! In case you miss it, our poster is here 👇🏽

Aflah 🍉🕊️ @ ICLR (@aflah02101)'s Twitter Profile Photo

We're thrilled to unveil TokenSmith (github.com/aflah02/tokens…), an open-source library designed to change how researchers and practitioners interact with large language model training data. Say goodbye to cumbersome data workflows!