Ibrahim Dagher (@ibrahimdagher20)'s Twitter Profile
Ibrahim Dagher

@ibrahimdagher20

JD/PhD @ Yale, PD Soros Fellow ’25

Interested in Law (Crim, ConLaw, FedCourts) and Analytic Philosophy (Metaphysics, Ethics, Phil. Rel., Phil. Logic).

ID: 1487928782048489473

Link: https://philpeople.org/profiles/ibrahim-dagher · Joined: 30-01-2022 23:20:55

789 Tweets

254 Followers

220 Following

Toby Ord (@tobyordoxford)'s Twitter Profile Photo

Evaluating the Infinite 🧵 My latest paper tries to solve a longstanding problem afflicting fields such as decision theory, economics, and ethics — the problem of infinities. Let me explain a bit about what causes the problem and how my solution avoids it. 1/20

Sam Altman (@sama)'s Twitter Profile Photo

Yesterday we did a livestream. TL;DR: We have set internal goals of having an automated AI research intern by September of 2026 running on hundreds of thousands of GPUs, and a true automated AI researcher by March of 2028. We may totally fail at this goal, but given the…

Harvey Lederman (@ledermanharvey)'s Twitter Profile Photo

This is such an exciting project! Thanks to Jack Lindsey for discussing our work, and giving me a chance to read a draft and comment on a draft. My view (which I think isn't far from the one in the paper) is that the yes/no on "are you experiencing something unusual?"...

Mariya I. Vasileva (@mariyaivasileva)'s Twitter Profile Photo

The signal-to-noise ratio of knowledge dissemination in ML/AI would be so much better if every second post online didn’t start with “holy shit!!!”, “🚨 RIP”, “a new paper dropped and the discoveries are shocking”, or include absolutisms like “xyz will kill LLMs”, “abc will make…

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

Recently, a tutorial/textbook on diffusion models was released on arXiv. Looks like a great resource, starts from the beginning, explaining diffusion models from variational, score, and normalizing flow perspectives, discusses sampling strategies, distillation, fast generation…

OpenAI (@openai)'s Twitter Profile Photo

We’ve developed a new way to train small AI models with internal mechanisms that are easier for humans to understand. Language models like the ones behind ChatGPT have complex, sometimes surprising structures, and we don’t yet fully understand how they work. This approach…

Ibrahim Dagher (@ibrahimdagher20)'s Twitter Profile Photo

The bridge approach used here is really exciting, especially if sparse models can absorb and simplify circuits in denser models on more complex tasks. This could also be recursive: intermediate-sparsity models absorb circuits from a dense one, then bridge to a sparser model, etc.

Ibrahim Dagher (@ibrahimdagher20)'s Twitter Profile Photo

Very exciting work here — in my opinion this builds very nicely on the Lindsey introspection results from last month. Very likely that a new frontier for interp is opening up, namely, leveraging introspective mechanisms for scaling understanding of internals. Huge.