
Jaydeep Borkar
@jaydeepborkar
PhD-ing @KhouryCollege; Organizer @trustworthy_ml Prev: @MITIBMLab. Huge fan of biking and good listening. Privacy+memorization in language models.
ID: 915103499641266177
http://jaydeepborkar.github.io 03-10-2017 06:37:30
1,1K Tweet
722 Followers
236 Following

Very excited to be joining AI at Meta GenAI as a Visiting Researcher starting this June in New York City!🗽 I’ll be continuing my work on studying memorization and safety in language models. If you’re in NYC and would like to hang out, please message me :)




Following on Andrej Karpathy's vision of software 2.0, we've been thinking about *malware 2.0*: malicious programs augmented with LLMs. In a new paper, we study malware 2.0 from one particular angle: how could LLMs change the way in which hackers monetize exploits?


We (w Zachary Novack Jaechul Roh et al.) are working on #memorization in #audio models & are conducting a human study on generated #music similarity. Please help us out by taking our short listening test (available in English, Mandarin & Cantonese). You can do more than one! Link ⬇️


Signal boosting this awesome opportunity to join the SAM Team at AI at Meta FAIR! Please apply using the link in the post! Several folks from the SAM team will be at CVPR in a couple of weeks, reach out if you want to chat! 👋🏾

The privacy-utility tradeoff for RAG is much worse than you think. Our work detailing one such (stealthy) MIA, now accepted to CCS'25 ACM CCS 2024 !

For trying to understanding LMs deeply, EleutherAI’s Pythia has been an invaluable resource: 16 LMs (70M to 12B parameters) trained on the same data (The Pile) in the same order, with intermediate checkpoints. It’s been two years and it’s time for a refresh.

Our new Google DeepMind paper, "Lessons from Defending Gemini Against Indirect Prompt Injections," details our framework for evaluating and improving robustness to prompt injection attacks.






some really cool ICML Conference trustworthy ml papers here!


