Adam Tauman Kalai (@adamfungi)'s Twitter Profile
Adam Tauman Kalai

@adamfungi

ID: 22057658

Link: https://en.wikipedia.org/wiki/Adam_Tauman_Kalai | Joined: 26-02-2009 20:33:59

113 Tweets

1.1K Followers

83 Following

Adam Tauman Kalai (@adamfungi)'s Twitter Profile Photo

Here's a great opportunity to help Ghanaian students learn CS: volunteer to mentor HS students Aug 5-20. The MISE Foundation misemaths.org/research/ is an awesome org looking for mentors. Ask me or send a resume to [email protected] (they cover costs)

Boaz Barak (@boazbaraktcs)'s Twitter Profile Photo

Great news to hear that Yael Kalai received the ACM prize in computing! Yael has done foundational work in cryptography and particularly in verifiable delegation of computation, finding surprising and fruitful connections to quantum information theory. awards.acm.org/about/2022-acm…

Adam Tauman Kalai (@adamfungi)'s Twitter Profile Photo

The classic Alignment fear is that an AI turns the entire earth into paperclips when asked to "make as many paperclips as possible." A superintelligent AI would know better of course. Good thing LLMs came along before robots! LLMs already understand intent pretty well.

Adam Tauman Kalai (@adamfungi)'s Twitter Profile Photo

This 2018 algorithm (predating GPT) probably avoids hallucination in generative models, though it does require some human feedback for training.

David Bau (@davidbau)'s Twitter Profile Photo

Interesting work by Rohit Gandikota, Hadas Orgad, and Joanna, w/ Yonatan Belinkov: adds a spectrum of diversity while removing problematic behaviors from an image generator. Feeling a bit different from ML 101 - no gradients, no new training images. Instead: direct edits of model params

Project CETI (@projectceti)'s Twitter Profile Photo

Thank you to Pulitzer Prize-winning journalist @elizkolbert and The New Yorker for featuring us! Read the article & learn how we’re translating whale communication & get a first look at an unexpected event we witnessed this summer: a sperm whale birth! bit.ly/3P1die5

Eric Zelikman (@ericzelikman)'s Twitter Profile Photo

“Recursive self-improvement” (RSI) is one of the oldest ideas in AI. Can language models write code that recursively improves itself? Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation, w/ eliana, Lester Mackey, Adam Tauman Kalai (1/n)

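The thread only describes STOP at a high level. As a rough illustration (not the paper's actual code), here is a toy Python sketch of an improver loop in that spirit: `propose_candidates` is a hypothetical stand-in for the language-model call the real system makes, and `improve` keeps whichever candidate scores highest under a supplied utility. The recursive twist in STOP is that the source of the improver itself can be the program being improved.

```python
# Toy sketch of a STOP-style improver loop (assumptions: the real system
# queries GPT-4; `propose_candidates` here is a hypothetical stand-in).
from typing import Callable, List


def propose_candidates(program: str) -> List[str]:
    """Hypothetical stand-in for a language model proposing rewrites.

    A real implementation would prompt an LM with `program` and parse
    its suggested revisions; here we just make a trivial string tweak.
    """
    return [program + "  # tuned"]


def improve(program: str, utility: Callable[[str], float],
            rounds: int = 3) -> str:
    """Repeatedly ask for candidate rewrites and keep the best one.

    The current best program is always included among the candidates,
    so utility never decreases across rounds.
    """
    best = program
    for _ in range(rounds):
        candidates = [best] + propose_candidates(best)
        best = max(candidates, key=utility)
    return best
```

Under this framing, calling `improve` on its own source code (with a utility that benchmarks how well the result improves downstream programs) gives the recursive self-improvement the thread describes.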
Eric Zelikman (@ericzelikman)'s Twitter Profile Photo

But, many express concern about RSI. We instead focus on RSI code generation, not optimizing the model, which is 1) more interpretable and 2) a test bed for mitigating undesirable RSI strategies. We also find that, given the choice, GPT-4 may disable a sandbox flag “for efficiency.”

Eric Zelikman (@ericzelikman)'s Twitter Profile Photo

This research was a result of my summer internship at Microsoft Research New England, and I am incredibly thankful for my amazing mentors there, Adam Tauman Kalai and Lester Mackey, as well as all of the other researchers I had a chance to talk to. We’re planning to release our code at github.com/microsoft/STOP

Boaz Barak (@boazbaraktcs)'s Twitter Profile Photo

More than 100 Harvard faculty denounce "false equivalency between attacks on noncombatants and self-defense against those atrocities." The conflict is complex but "the events of this week are not complicated. Sometimes there is such a thing as evil" bit.ly/harvard-agains…

Adam Tauman Kalai (@adamfungi)'s Twitter Profile Photo

The Conference on Language Modeling looks super interesting with topics like: Alignment, Data, Eval, Societal implications, Safety, ... I hope it becomes a home for AI Safety+RAI research. Please submit your papers, I'm excited to review some. colmweb.org/cfp.html

Adam Tauman Kalai (@adamfungi)'s Twitter Profile Photo

I joined OpenAI a few months ago, and I find it extremely rewarding to try to help with safety/privacy/fairness/etc in AI systems used by many people. If you are interested in these important problems, please apply at openai.com/careers/search…