Suchi Saria (@suchisaria)'s Twitter Profile
Suchi Saria

@suchisaria

AI prof/Endowed Chair @JohnsHopkins, Foundr @BayesianHealth, investor 15+ AI startups, ex-Stanford AI, MIT TR35, Sloan, @wef YGL, @darpa AI safety & Health AI

ID: 140980031

Link: https://suchisaria.jhu.edu/ · Joined: 06-05-2010 21:45:16

2.2K Tweets

14.14K Followers

483 Following

Suchi Saria (@suchisaria):

This is what rich people struggling looks like: wine parties w/o wine & billionaires sleeping on couches ;) wsj.com/lifestyle/trav… And apparently everybody is leading the global conversation on #AI! 😂 😂 😂

Suchi Saria (@suchisaria):

Will your LLM generalize over time or across sites? How do we improve robustness? This is a key issue LLMs struggle with, and it is especially critical as we apply these models to healthcare applications.

👇 New clever technique to improve LLM robustness and generalization.

Suchi Saria (@suchisaria):

#1stworldproblems #1stworldjokes After years, I recently got downgraded from United Global Services to 1K. The air hostesses think they’re being nice when they greet me — “You’re a 1K member! Thank you for your loyalty.” It mostly hurts, feels like they’re rubbing it in! 😂😂

Johns Hopkins University (@johnshopkins):

Johns Hopkins and Columbia University computer scientists have proposed a method to enhance the robustness of AI models used in medical text analysis. The project is part of an ongoing effort to develop an AI safety framework for healthcare applications. bit.ly/48yyPCx

Drew Prinster (same @ bsky) (@drewprinster):

Had a (positive) “deer-in-the-headlights” moment today when I scrolled to these kind & generous words from Valeriy M., PhD, MBA, CQF on our new paper w/ Samuel Stanton, Anqi (Angie) Liu, Suchi Saria! 😆 arxiv.org/abs/2405.06627 Humbled & grateful to share this work. Full paper summary thread soon!

Drew Prinster (same @ bsky) (@drewprinster):

Paper 🧵! w/ Samuel Stanton, Anqi (Angie) Liu, Suchi Saria #ICML2024 arxiv.org/abs/2405.06627

We study 2 Qs:
1) Can AI uncertainty quantification via #ConformalPrediction extend to any data distribution?
2) Are there practical CP algorithms for AI/ML agents 🤖 w/ feedback-loop data shifts?
1/

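The thread above centers on conformal prediction (CP) for uncertainty quantification. As a rough illustration of the baseline idea only (not the paper's algorithm; the function name, toy data, and parameters below are all illustrative), a minimal split conformal sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_interval(preds_cal, y_cal, preds_test, alpha=0.1):
    """Split conformal prediction for regression (illustrative sketch).

    preds_cal / y_cal: model predictions and true labels on a held-out
    calibration set; preds_test: predictions on new points.
    Returns (lower, upper) intervals with ~(1 - alpha) marginal coverage,
    assuming calibration and test data are exchangeable.
    """
    n = len(y_cal)
    scores = np.abs(y_cal - preds_cal)              # nonconformity scores
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return preds_test - qhat, preds_test + qhat

# Toy check: noisy linear data and a deliberately imperfect "model".
x = rng.uniform(-1, 1, 2000)
y = 2 * x + rng.normal(0, 0.3, 2000)
pred = 1.9 * x                                      # slightly biased model
lo, hi = split_conformal_interval(pred[:1000], y[:1000], pred[1000:], alpha=0.1)
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))
print(round(coverage, 2))  # close to 0.90 on exchangeable data
```

Standard split CP leans on exchangeability between calibration and test data; the paper's question is precisely what happens to such guarantees when feedback loops and data shifts break that assumption.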
Drew Prinster (same @ bsky) (@drewprinster):

When do AI explanations actually help, & promote appropriate trust?

Spoiler, via a prospective, multisite Radiology study of 220 doctors: *How* AI explains its advice has big impacts on doctors’ diagnostic performance and trust in AI--even if they *don’t realize it*!

🧵1/

Suchi Saria (@suchisaria):

🚨 With an awesome group of collaborators from health systems & federal agencies, Jean Feng and I are embarking on an ambitious multi-year project to develop tools for monitoring & updating clinical AI algorithms, with the aim of informing how we accelerate & scale responsible

Drew Prinster (same @ bsky) (@drewprinster):

AI monitoring is key to responsible deployment. Our #ICML2025 paper develops approaches for 3 main goals:

1) *Adapting* to mild data shifts
2) *Quickly Detecting* harmful shifts
3) *Diagnosing* the cause of degradation

🧵 w/ Xing Han 韩星, Anqi (Angie) Liu, Suchi Saria

arxiv.org/abs/2505.04608

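As context for goal 2 above, a classic way to quickly flag a harmful shift in a stream of model errors is a CUSUM-style statistic. This is a generic illustration, not the paper's method; the function name, thresholds, and toy data below are all made up for the example:

```python
import numpy as np

def cusum_detect(errors, target=0.0, drift_tol=0.05, threshold=2.0):
    """One-sided CUSUM over a stream of model errors (illustrative sketch).

    Accumulates evidence that the mean error has risen above
    target + drift_tol; fires once the statistic crosses `threshold`.
    Returns the first index at which a shift is flagged, or None.
    """
    s = 0.0
    for t, e in enumerate(errors):
        s = max(0.0, s + (e - target - drift_tol))  # reset at zero floor
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(1)
# 300 in-distribution errors, then a harmful shift that inflates them.
stream = np.concatenate([rng.normal(0.0, 0.1, 300),
                         rng.normal(0.5, 0.1, 100)])
print(cusum_detect(stream))  # flags shortly after index 300
```

Because only upward deviations accumulate and the statistic resets at zero, mild in-distribution fluctuations decay away while a sustained increase in error triggers an alarm within a handful of samples.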
Doctor Radio (@nyudocs):

Could #AI help predict #sepsis? At 2:30pm join Marc Siegel MD as he speaks with Suchi Saria (AI prof/Endowed Chair Johns Hopkins University & Founder @BayesianHealth) and Albert Wu (Professor, director of CHSOR Johns Hopkins Bloomberg School of Public Health) about their research. Stream here: sxm.app.link/DoctorRadio