Nathan Kallus (@nathankallus) 's Twitter Profile
Nathan Kallus

@nathankallus

🏳️‍🌈👨‍👨‍👧‍👦 Assoc Prof @Cornell @Cornell_Tech @Netflix @NetflixResearch
causal inference, experimentation, optimization, RL, statML, econML, fairness

ID: 223440240

Website: http://www.nathankallus.com · Joined: 06-12-2010 11:56:45

325 Tweets

2.2K Followers

238 Following

Clément Canonne (on Blue🦋Sky) (@ccanonne_) 's Twitter Profile Photo

Hey, if Twitter goes down I want you to know that I'll still be writing all my tweets on the whiteboard of room 426, building J12, NSW 2006

James McInerney (@mcinerneyj) 's Twitter Profile Photo

beyond excited to present our #NeurIPS2022 poster Thur on the implicit delta method, a surprisingly simple recipe for estimating epistemic uncertainty for evaluations based on deep models. Paper: arxiv.org/abs/2211.06457 Poster: #327 Hall J, Thur 11am session w/ Nathan Kallus 1/6
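
For background, the classical delta method that the "implicit" variant builds on propagates parameter uncertainty through a downstream evaluation via its gradient: Var[g(θ̂)] ≈ g′(θ̂)² Var[θ̂]. Below is a minimal NumPy sketch of that classical version; the evaluation g and the toy data are hypothetical stand-ins, not the paper's deep-model setting.

```python
# Minimal sketch of the classical delta method on toy data. The
# evaluation g and the data are hypothetical; the paper's "implicit"
# variant targets evaluations of deep models.
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    # Hypothetical downstream evaluation of the fitted parameter.
    return theta ** 2

n = 500
x = rng.normal(loc=1.0, scale=2.0, size=n)

theta_hat = x.mean()                   # plug-in estimate
se_theta = x.std(ddof=1) / np.sqrt(n)  # its standard error

# Delta method: Var[g(theta_hat)] ~= g'(theta_hat)^2 * Var[theta_hat].
se_g = abs(2 * theta_hat) * se_theta   # g'(theta) = 2 * theta
print(f"g(theta_hat) = {g(theta_hat):.3f} +/- {se_g:.3f}")

# Sanity check against a simple bootstrap of the same evaluation.
boot = [g(rng.choice(x, size=n).mean()) for _ in range(2000)]
print(f"bootstrap SE = {np.std(boot):.3f}")
```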

James McInerney (@mcinerneyj) 's Twitter Profile Photo

We are still looking for interns for ML research next summer. If you have a strong stats + ML background, I have a variety of exciting uncertainty quantification and time series modeling projects. (Mark me as a referral if you want me to take notice of your application.)

Streetsblog New York (@streetsblognyc) 's Twitter Profile Photo

Is the city really going to build a temporary highway so it can repair the Brooklyn Queens Expressway? Will no one stop the madness on behalf of entitled car owners who are ruining our city? nyc.streetsblog.org/2023/03/01/cit…

Nathan Kallus (@nathankallus) 's Twitter Profile Photo

The Machine Learning & Inference Research team I co-lead at @Netflix Netflix Research is hiring PhD interns for Summer 2024. Looking for a research internship (tackling industry problems while also producing publishable research)? Apply through this listing: jobs.netflix.com/jobs/300628646

Kaiwen Wang (@kaiwenw_ai) 's Twitter Profile Photo

Distributional RL has been very effective in practice, so a natural question is: can we statistically prove when and why DistRL learns faster than vanilla RL? 🤔 Excited to share our #NeurIPS2023 paper that provides an affirmative answer! arxiv.org/abs/2305.15703 🧵⬇️

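To make the contrast concrete, here is a toy NumPy sketch (my illustration, not the paper's analysis): a scalar estimate tracks only the mean return, while quantile-regression updates in the spirit of distributional RL recover the shape of the return distribution, including a rare large payoff that the mean blurs.

```python
# Toy contrast between a scalar value estimate and a distributional one
# (quantile-regression style); an illustration of the idea, not the
# estimator analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sample_return():
    # Hypothetical stochastic return: usually ~1, rarely a payoff of 10.
    return 10.0 if rng.random() < 0.1 else rng.normal(1.0, 0.1)

taus = np.array([0.1, 0.25, 0.5, 0.75, 0.95])  # quantile levels to track
q = np.zeros_like(taus)                        # quantile estimates
mean, lr = 0.0, 0.05

for _ in range(20000):
    z = sample_return()
    mean += lr * (z - mean)       # scalar estimate: running mean only
    # Quantile-regression step: each estimate moves up w.p. tau and
    # down w.p. (1 - tau), converging to the tau-th return quantile.
    q += lr * (taus - (z < q))

print(f"scalar estimate (mean): {mean:.2f}")
print(f"quantiles             : {np.round(q, 2)}")  # top quantile finds the rare payoff
```
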
Vasilis Syrgkanis (@syrgkanis) 's Twitter Profile Photo

Excited to share that our new applied causal machine learning book is available online at causalml-book.org. Any feedback/corrections greatly appreciated!

Martin Huber (@causalhuber) 's Twitter Profile Photo

🔥 Check out the new book on combining #ImpactEvaluation and #ArtificialIntelligence: 'Applied #CausalInference Powered by ML and #AI' by Victor Chernozhukov #peace 🇺🇦, C. Hansen, N. Kallus, Martin Spindler, and Vasilis Syrgkanis. Available online: causalml-book.org #EconTwitter #EpiTwitter

scott cunningham (@causalinf) 's Twitter Profile Photo

No doubt someone else has posted this, but I can't see it. A new causal inference / machine learning / artificial intelligence book is now available, written by Victor Chernozhukov, Chris Hansen, Nathan Kallus, Martin Spindler, and Vasilis Syrgkanis. causalml-book.org

Judea Pearl (@yudapearl) 's Twitter Profile Photo

Delighted to see this new book 'Applied #CausalInference Powered by ML and #AI' by Victor Chernozhukov #peace 🇺🇦, C. Hansen, N. Kallus, Martin Spindler, and Vasilis Syrgkanis, which nicely introduces the principles of Causal Inference to applied economists and health scientists.

Bindu Reddy (@bindureddy) 's Twitter Profile Photo

Good paper by Netflix on cosine similarity. It goes back to building good RAG systems, which is hard. Before deploying these systems, you have to make intelligent decisions about chunking, hierarchical chunking, embedding, and even the algorithm for similarity look-up.

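For context, here is a minimal cosine-similarity lookup over made-up embedding vectors; in a real RAG system the embeddings would come from a learned model, and the paper's caution is precisely that cosine similarity on learned embeddings can be less meaningful than it looks.

```python
# Minimal cosine-similarity lookup over made-up embeddings; a real
# system would embed queries/documents with a learned model.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 8))   # 5 hypothetical document embeddings
query = rng.normal(size=8)       # 1 hypothetical query embedding

def cosine_sim(a, B):
    # cos(a, b) = <a, b> / (||a|| * ||b||), computed against each row of B.
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))

scores = cosine_sim(query, docs)
print("similarities:", np.round(scores, 3))
print("best match  :", int(scores.argmax()))
```
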
Kaiwen Wang (@kaiwenw_ai) 's Twitter Profile Photo

In prediction, we often say a good model is one with low mean-squared error. However, low MSE may translate only loosely into good decision making. Is squared loss the right choice for #RL? We show the answer is a resounding no! Excited to share two #ICML2024 papers 🧵👇
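
A tiny numeric illustration of the point (mine, not from the papers): two value estimates with identical mean-squared error can induce different greedy decisions, so low MSE alone does not certify good decision making.

```python
# Toy illustration (not from the papers): identical MSE, different
# greedy decisions.
import numpy as np

q_true = np.array([1.0, 1.2])   # hypothetical true action values
q_a = np.array([1.3, 1.2])      # overestimates action 0 by 0.3
q_b = np.array([0.7, 1.2])      # underestimates action 0 by 0.3

for name, q in [("estimate A", q_a), ("estimate B", q_b)]:
    mse = np.mean((q - q_true) ** 2)
    print(f"{name}: MSE = {mse:.3f}, greedy action = {int(q.argmax())}")
# Both have MSE 0.045, yet A acts suboptimally (action 0) while B acts
# optimally (action 1).
```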

Kaiwen Wang (@kaiwenw_ai) 's Twitter Profile Photo

1️⃣ First up: Second-Order Bounds w/ Distributional RL openreview.net/forum?id=kZBCF… We prove that maximum likelihood (MLE) loss gives second-order bounds in *both* online and offline RL, yielding rapid O(1/n)-type convergence in near-deterministic systems. 2/
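
As a concrete reference for the loss in question, here is a minimal sketch of maximum-likelihood (cross-entropy) fitting of a categorical return distribution; the toy data and binning are my own, not the paper's setup.

```python
# Minimal MLE (cross-entropy) fit of a categorical return distribution;
# toy data and binning are stand-ins, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
atoms = np.linspace(0.0, 1.0, 11)      # categorical support for returns
returns = rng.beta(2, 5, size=1000)    # hypothetical observed returns in [0, 1]
bins = np.clip(np.digitize(returns, atoms) - 1, 0, len(atoms) - 1)
counts = np.bincount(bins, minlength=len(atoms)) / len(bins)

softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
logits = np.zeros(len(atoms))
for _ in range(500):
    # Gradient of the average log-likelihood w.r.t. logits is counts - p.
    logits += 0.5 * (counts - softmax(logits))

p = softmax(logits)
print("learned pmf :", np.round(p, 3))
print("implied mean:", np.round(p @ atoms, 3))
```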

Kaiwen Wang (@kaiwenw_ai) 's Twitter Profile Photo

2️⃣ Next: "Switching the Loss Reduces the Cost in Batch RL" openreview.net/forum?id=7PXSc… We prove that FQI w/ log-loss enjoys a first-order bound which converges at a rapid O(1/n) rate when the optimal policy’s cost V* is small. When does this happen? 4/
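
To make "switching the loss" concrete, here is a toy tabular sketch of FQI whose regression step minimizes log-loss rather than squared loss, for costs scaled into [0, 1]; the MDP and training loop are made-up illustrations, not the paper's algorithm.

```python
# Toy tabular FQI with log-loss in place of squared loss, for costs in
# [0, 1]; the MDP and training loop are made up, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, n = 3, 2, 0.9, 2000

# Hypothetical batch of transitions (s, a, cost, s').
S = rng.integers(nS, size=n)
A = rng.integers(nA, size=n)
C = 0.1 * rng.random(n) * (A == 1)  # action 0 is free, action 1 is costly
S2 = rng.integers(nS, size=n)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
logits = np.zeros((nS, nA))         # Q(s, a) = sigmoid(logits[s, a]) in [0, 1]

for _ in range(30):                 # FQI outer loop: refresh Bellman targets
    y = np.clip(C + gamma * sigmoid(logits)[S2].min(axis=1), 0.0, 1.0)
    for _ in range(100):            # inner fit: minimize log-loss on targets
        p = sigmoid(logits[S, A])
        # d/dlogit of -y*log(p) - (1-y)*log(1-p) is simply (p - y).
        grad = np.zeros_like(logits)
        np.add.at(grad, (S, A), p - y)
        logits -= 0.1 * grad / n

print("learned Q:\n", np.round(sigmoid(logits), 3))
print("greedy (min-cost) policy:", sigmoid(logits).argmin(axis=1))
```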

Netflix Research (@netflixresearch) 's Twitter Profile Photo

Are you a PhD student with a passion for machine learning and an eye for innovation? Join Netflix as an ML Intern in 2025 and help us redefine entertainment. Apply now or share with someone who’d love an opportunity #OnlyatNetflix explore.jobs.netflix.net/careers/job/79…