
Avi Schwarzschild
@a_v_i__s
Postdoc at CMU. Trying to learn about deep learning faster than deep learning can learn about me.
ID: 1308460181999714304
http://avischwarzschild.com 22-09-2020 17:37:55
141 Tweets
513 Followers
229 Following

Excited about this work with Asher Trockman, Yash Savani (and others) on antidistillation sampling. It uses a nifty trick to efficiently generate samples that make student models _worse_ when you train on them. I spoke about it at Simons this past week. Links below.
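For intuition only (a minimal sketch, not the paper's actual procedure): the idea as described above is to bias the teacher's next-token distribution away from tokens that would be most useful to a student distilling from the resulting trace. The per-token signal `distillability`, the helper `antidistill_sample`, and the weight `lam` below are illustrative placeholders I am assuming for the sketch.

```python
# Hedged sketch: penalize "teachable" tokens when sampling from the teacher.
import torch

def antidistill_sample(teacher_logits: torch.Tensor,
                       distillability: torch.Tensor,
                       lam: float = 1.0) -> torch.Tensor:
    """Sample next tokens from teacher logits penalized by a per-token
    distillability score (higher score = more useful to a proxy student)."""
    adjusted = teacher_logits - lam * distillability   # push probability mass off distillable tokens
    probs = torch.softmax(adjusted, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# Toy usage with random stand-ins for the teacher's logits and the
# (hypothetical) proxy-student-based distillability estimate.
vocab = 32
teacher_logits = torch.randn(4, vocab)            # batch of 4 next-token distributions
distillability = torch.randn(4, vocab).abs()      # placeholder proxy-student signal
next_tokens = antidistill_sample(teacher_logits, distillability, lam=0.5)
print(next_tokens.squeeze(-1))
```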


A few days ago, we dropped antidistillation sampling . . . and we've gotten a little bit of pushback. But whether you're at a frontier lab or developing smaller, open-source models, this research should be on your radar. Here's why 🧵




✨ Love 4o-style image generation but prefer to use Midjourney? Tired of manual prompt crafting from inspo images? PRISM to the rescue! We automate black-box prompt engineering: no training, no embeddings, just accurate, readable prompts from your inspo images! 1/🧵
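As a rough illustration only (not PRISM's actual algorithm), a black-box prompt-engineering loop of this flavor can be sketched as: propose a candidate prompt from the inspiration image, render it with the text-to-image model, score the result against the inspiration, and keep the best candidate. Every callable below is a hypothetical stand-in I am assuming for the sketch.

```python
# Hedged sketch of a generic black-box prompt-refinement loop.
import random
from typing import Any, Callable

def refine_prompt(inspo_image: Any,
                  propose_prompt: Callable[[Any, str, float], str],
                  generate_image: Callable[[str], Any],
                  similarity: Callable[[Any, Any], float],
                  iters: int = 10) -> str:
    best_prompt, best_score = "", float("-inf")
    for _ in range(iters):
        # Ask a vision-language model for a candidate prompt, conditioned on
        # the inspiration image and the previous best attempt.
        candidate = propose_prompt(inspo_image, best_prompt, best_score)
        rendered = generate_image(candidate)          # black-box text-to-image call
        score = similarity(inspo_image, rendered)     # e.g. an image-similarity metric
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt

# Toy usage with trivial stand-ins so the loop runs end to end.
prompt = refine_prompt(
    inspo_image="inspo.png",
    propose_prompt=lambda img, prev, s: (prev + " detail").strip(),
    generate_image=lambda p: p,                       # stand-in renderer
    similarity=lambda a, b: random.random(),
    iters=5,
)
print(prompt)
```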



📣 Thrilled to announce I'll join Carnegie Mellon University (CMU Engineering & Public Policy & Language Technologies Institute | @CarnegieMellon) as an Assistant Professor starting Fall 2026! Until then, I'll be a Research Scientist at AI at Meta FAIR in SF, working with Kamalika Chaudhuri's amazing team on privacy, security, and reasoning in LLMs!




Excited to share our work with my amazing collaborators, Goodeat, Xingjian Bai, Zico Kolter, and Kaiming. In a word, we show an "identity learning" approach for generative modeling, by relating the instantaneous/average velocity in an identity. The resulting model,
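My reading of "relating the instantaneous/average velocity in an identity" (an assumption on my part, not a quote from the paper): with v(z_t, t) the instantaneous velocity of the flow and u(z_t, r, t) its average over [r, t], differentiating the definition of the average gives an identity connecting the two.

```latex
% Assumed reconstruction of the instantaneous/average-velocity identity.
% Definition of the average velocity over [r, t]:
\[
  u(z_t, r, t) \;\triangleq\; \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, d\tau .
\]
% Differentiating (t - r)\, u(z_t, r, t) = \int_r^t v(z_\tau, \tau)\, d\tau
% with respect to t (total derivative along the trajectory, r held fixed):
\[
  v(z_t, t) \;=\; u(z_t, r, t) + (t - r)\, \frac{d}{dt}\, u(z_t, r, t).
\]
```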


