Flavio Calmon (@flaviocalmon)'s Twitter Profile
Flavio Calmon

@flaviocalmon

Associate Professor @hseas. Information theorist, but only asymptotically. Brasileiro/American.

ID: 52728555

Link: http://people.seas.harvard.edu/~flavio/ · Joined: 01-07-2009 13:44:53

54 Tweets

369 Followers

97 Following

Alexandra Olteanu (@o_saja)'s Twitter Profile Photo

Back home from FAccT - I'm thankful for the work our community is doing & the values it stands for. Serving it has been a labor of love for me & I am beyond grateful to have done so this year alongside my truly wonderful program co-chairs & human beings michael veale, Reuben Binns, @[email protected], and Flavio Calmon

Flavio Calmon (@flaviocalmon)'s Twitter Profile Photo

This week, I spoke on the panel "AI, Rights, and Democracy" at the Brazilian Supreme Court. Thank you STF for the invitation. It was an incredible experience! See my talk (in pt-br) here: youtu.be/mNkZ_Aw2tFs&t=…

Alex Oesterling @ NeurIPS 2024 (@alex_oesterling)'s Twitter Profile Photo

First up, how do various aspects of trustworthy machine learning interact? Can we expect a production ML system to satisfy all regulatory requirements of fairness, privacy, and interpretability simultaneously when past research generally focuses on one component at a time? (1/n)

Alex Oesterling @ NeurIPS 2024 (@alex_oesterling)'s Twitter Profile Photo

Part 2 of my 2024 publication tweets! Please welcome Multi-group Proportional Representation, a novel metric for measuring representation in image generation and retrieval. This work was recently accepted at NeurIPS 2024. (1/n)
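
The paper defines the MPR metric rigorously; as a heavily simplified stand-in for the general idea, the sketch below measures the worst-case gap between each group's share of a retrieved set and a target share, over possibly overlapping groups. The membership matrix and target shares are made-up placeholders, not the paper's formulation.

```python
# Toy illustration of multi-group representation measurement (NOT the
# paper's exact MPR definition): worst-case deviation between each group's
# share of retrieved items and its target share, allowing overlapping groups.
import numpy as np

def max_representation_gap(member: np.ndarray, target: np.ndarray) -> float:
    """member: (n_items, n_groups) 0/1 membership matrix; target: (n_groups,) shares."""
    shares = member.mean(axis=0)               # fraction of retrieved items per group
    return float(np.abs(shares - target).max())  # worst-case gap across groups

# 4 retrieved items, 2 (overlapping) groups; item 1 belongs to both.
member = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])
print(max_representation_gap(member, np.array([0.5, 0.5])))  # -> 0.25
```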

Alex Oesterling @ NeurIPS 2024 (@alex_oesterling)'s Twitter Profile Photo

Finally, I am pleased to announce 🪢Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)🪢 Joint work with Usha Bhalla, as well as Suraj Srinivas, Flavio Calmon, and Hima Lakkaraju, which was just accepted to NeurIPS 2024! Check out the paper here: arxiv.org/abs/2402.10376

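As flavor for the title's "sparse linear concept embeddings," here is a minimal sketch of the general decomposition idea, not the authors' implementation: approximate a dense CLIP-style embedding as a sparse, nonnegative combination of concept vectors. The random dictionary and the Lasso penalty are placeholder choices.

```python
# Sketch: decompose an embedding over a "concept" dictionary via sparse,
# nonnegative regression. Dictionary and hyperparameters are stand-ins.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_concepts = 512, 2000                    # embedding dim, concept vocabulary size
concepts = rng.normal(size=(n_concepts, d))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)  # unit-norm concepts

embedding = rng.normal(size=d)               # stand-in for a real CLIP embedding
embedding /= np.linalg.norm(embedding)

# Solve min_w ||C^T w - e||^2 + lambda * ||w||_1 subject to w >= 0:
# a sparse, nonnegative set of concept weights explaining the embedding.
lasso = Lasso(alpha=1e-3, positive=True, max_iter=10_000)
lasso.fit(concepts.T, embedding)             # design-matrix columns are concepts
weights = lasso.coef_
print(f"{np.flatnonzero(weights).size} of {n_concepts} concepts active")
```
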
Maarten Buyl (@maartenbuyl)'s Twitter Profile Photo

Imagine an all-powerful AI with any ideology you don't agree with! Super proud of this work, where we show that every LLM reflects a different ideological worldview, which should worry everyone.

Bogdan Kulynych @ NeurIPS (@hiddenmarkov)'s Twitter Profile Photo

The standard practice in differential privacy of targeting ε at small δ is extremely lossy for interpreting the level of privacy protection. In practice (e.g., for DP-SGD), we can do much better! We show how in the #NeurIPS2024 paper: arxiv.org/abs/2407.02191 Short summary👇
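
To see concretely why a single (ε, δ) point is a lossy summary, one can compare two standard facts (this is an illustration, not the paper's method; μ and ε below are arbitrary choices): (ε, δ)-DP implies the membership-inference bound TPR ≤ e^ε·FPR + δ, while the Gaussian mechanism's exact trade-off curve is TPR = Φ(Φ⁻¹(FPR) + μ), with the matching δ given by the Balle & Wang (2018) conversion.

```python
# Compare the generic (eps, delta) membership-inference bound against the
# exact trade-off curve of the Gaussian mechanism (sensitivity 1, sigma = 1/mu).
import numpy as np
from scipy.stats import norm

mu = 1.0                                     # Gaussian mechanism parameter
eps = 3.0                                    # the (eps, delta) point we "report"
# Exact delta for this eps (Balle & Wang, 2018 conversion).
delta = norm.cdf(mu / 2 - eps / mu) - np.exp(eps) * norm.cdf(-mu / 2 - eps / mu)
print(f"delta = {delta:.2e}")

for f in (1e-5, 1e-3, 1e-1):
    exact = norm.cdf(norm.ppf(f) + mu)               # optimal attack's TPR
    generic = min(1.0, np.exp(eps) * f + delta)      # bound from (eps, delta) alone
    print(f"FPR={f:.0e}: exact TPR {exact:.4f} vs (eps, delta) bound {generic:.4f}")
```

At small false-positive rates the generic bound is far above the exact curve, i.e., reading privacy off one (ε, δ) pair substantially overstates what an attacker can achieve.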

Lucas Monteiro Paes (@lucas_mpaes)'s Twitter Profile Photo

AI is built to "be helpful" or "avoid harm", but which principles should it prioritize and when? We call this alignment discretion. As Asimov's stories show: balancing principles for AI behavior is tricky. In fact, we find that AI has its own set of priorities (comic: Randall Munroe)👇

Hao Wang (@hw_haowang)'s Twitter Profile Photo

[1/x] 🚀 We're excited to share our latest work on improving inference-time efficiency for LLMs through KV cache quantization, a key step toward making long-context reasoning more scalable and memory-efficient.

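As a rough illustration of what KV cache quantization involves, here is a generic per-channel int8 scheme (not necessarily the scheme in this work): store keys and values in int8 with one scale per channel and dequantize at attention time.

```python
# Generic per-channel symmetric int8 quantization of a KV cache tensor.
import numpy as np

def quantize_per_channel(x: np.ndarray, bits: int = 8):
    """Symmetric quantization with one scale per channel (last axis)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max(axis=tuple(range(x.ndim - 1)), keepdims=True) / qmax
    scale = np.maximum(scale, 1e-12)                 # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

# (batch, heads, seq_len, head_dim): the cache that grows with context length.
keys = np.random.randn(1, 4, 1024, 128).astype(np.float32)
q_keys, k_scale = quantize_per_channel(keys)
err = np.abs(dequantize(q_keys, k_scale) - keys).mean()
print(f"int8 cache is 4x smaller than fp32; mean abs error {err:.5f}")
```
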
Hadi Khalaf (@hskhalaf)'s Twitter Profile Photo

Happy to share we received best paper at the NENLP workshop at Yale 🥳🥳! tl;dr: Current alignment methods give excessive discretion to annotators in defining what good behavior means. This means we don't know what we are aligning to ‼️ We formalize discretion in alignment and …

Dor Tsur 🇮🇱🏳️‍🌈 (@dortsurr)'s Twitter Profile Photo

Can we use coding theory, heavy-tailed distributions, and optimal transport to create zero-distortion, easy-to-use watermarks for LLMs? We show they can, and the result is pretty exciting! 🎉 🧵 (1/n)

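The thread's construction is not reproduced here; for context on what "zero-distortion" means, the sketch below shows a well-known baseline scheme for distortion-free LLM watermarking (the Gumbel-max trick, commonly attributed to Aaronson), with a made-up key and context hash. It is a different technique from the one announced above.

```python
# Baseline distortion-free watermark via the Gumbel-max trick: seeding Gumbel
# noise from a keyed hash of the context leaves each token's sampling
# distribution exactly unchanged, while the key holder can later test for
# alignment between tokens and the key-seeded noise (detection not shown).
import hashlib
import numpy as np

KEY = b"secret-watermark-key"                # hypothetical detector-held key

def gumbel_noise(context: tuple[int, ...], vocab_size: int) -> np.ndarray:
    digest = hashlib.sha256(KEY + str(context).encode("utf8")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    u = rng.uniform(1e-12, 1.0, size=vocab_size)
    return -np.log(-np.log(u))               # standard Gumbel(0, 1) samples

def watermarked_sample(logits: np.ndarray, context: tuple[int, ...]) -> int:
    # argmax(logits + Gumbel) is an exact sample from softmax(logits),
    # hence "zero distortion" of the model's output distribution.
    return int(np.argmax(logits + gumbel_noise(context, logits.size)))

print(watermarked_sample(np.random.randn(32000), context=(17, 42, 9)))
```
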
Hadi Khalaf (@hskhalaf)'s Twitter Profile Photo

How can we improve LLMs without any additional training? 🤔 The standard playbook is using Best-of-N: generate N responses ➡️ use a reward model to score them ➡️ pick the best 🏆 More responses = better results... right? Well, not exactly. You might be reward hacking!
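
The Best-of-N recipe the tweet describes fits in a few lines; here is a minimal sketch with a stub generator and a stub reward model (both hypothetical placeholders, not real models).

```python
# Best-of-N sampling: generate N candidates, score with a reward model,
# return the argmax. Generator and reward model are stubs for illustration.
import random

def generate(prompt: str) -> str:            # stub LLM sampler (hypothetical)
    return f"{prompt} -> response #{random.randint(0, 10**6)}"

def reward(response: str) -> float:          # stub reward model (hypothetical)
    return random.gauss(0.0, 1.0)            # an imperfect proxy for true quality

def best_of_n(prompt: str, n: int) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)       # pick the highest-scoring candidate

print(best_of_n("Explain KV caches", n=16))
```

Because the reward model is only a proxy, growing N keeps pushing the proxy score up while true quality can stall or drop; that gap is the reward hacking the thread warns about.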

Alex Oesterling @ NeurIPS 2024 (@alex_oesterling)'s Twitter Profile Photo

โ€ผ๏ธ๐Ÿ•šNew paper alert with Usha Bhalla: Leveraging the Sequential Nature of Language for Interpretability (openreview.net/pdf?id=hgPf1kiโ€ฆ)! 1/n