Michael Wornow (@michaelwornow)'s Twitter Profile
Michael Wornow

@michaelwornow

Computer Science PhD Student @ Stanford

ID: 1633929360662216704

Link: https://michaelwornow.net/ · Joined: 09-03-2023 20:34:59

122 Tweets

379 Followers

129 Following

Neel Guha (@neelguha)

What's (1) a "drink of fresh fruit pureed with milk, yogurt, or ice cream" and (2) an unsupervised algorithm for test-time LLM routing? Our #NeurIPS2024 paper, Smoothie! 🥤 arxiv.org/abs/2412.04692 1/9
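The routing idea teased above can be pictured with a small sketch: given candidate generations from several LLMs for the same input, score each candidate by how well it agrees with the others in embedding space and keep the highest-scoring one. This is only a simplified, hedged illustration of unsupervised test-time routing, not the Smoothie paper's actual estimator; the agreement-based scoring rule and the toy embeddings are assumptions.

```python
# Minimal sketch: pick the candidate generation that best agrees with the others.
# Illustration only; the embedding model and scoring rule here are assumptions.
import numpy as np

def route_by_agreement(candidate_embeddings: np.ndarray) -> int:
    """candidate_embeddings: (n_models, dim), one embedding per model's output.
    Returns the index of the candidate most similar, on average, to the others."""
    normed = candidate_embeddings / np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                      # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)                   # ignore self-similarity
    scores = sims.sum(axis=1) / (len(sims) - 1)   # mean agreement with the other models
    return int(np.argmax(scores))

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 384))
print("selected model:", route_by_agreement(emb))
```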

Khaled Saab (@khaledsaab11)

Our approach to evaluating health AI models continues to evolve! (Phase 1) Medical Benchmarks ➡️ (Phase 2) Patient Actor Consultations ➡️ (Phase 3 – coming soon!) Real-World Deployment > (Phase 1) Medical Benchmarks: We first need to make sure our models have extensive medical

Isaac Kohane (@zakkohane)

Medical record to finding a clinical trial through AI? Using "out-of-the-box" "zero-shot" AI model NEJM AI ai.nejm.org/doi/10.1056/AI… Interesting study Stanford University Should all clinicians and patients be using this when no one else is offering a state-of-the-art trial? In cancer,

Sehj Kashyap (@sehjkashyap)

Great to see MAMBA architecture evaluated on EHR-related tasks and robust analysis of EHR context complexities in this new paper with a fun title Michael Wornow arxiv.org/pdf/2412.16178

NEJM AI (@nejm_ai)

In a Case Study, Michael Wornow et al. investigate the accuracy, efficiency, and interpretability of using LLMs for clinical trial patient matching, with a focus on the zero-shot performance of these models to scale to arbitrary trials. Learn more: nejm.ai/4fM0Gmv
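A hedged sketch of what zero-shot criterion-level matching could look like in practice: the model sees one eligibility criterion and a patient note, and returns a verdict with a short justification. The prompt wording, the MET/NOT MET label set, and the model name below are illustrative assumptions, not the case study's actual pipeline.

```python
# Minimal zero-shot patient-trial matching sketch (assumed prompt and labels, not the study's pipeline).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are screening a patient for a clinical trial.
Inclusion criterion: {criterion}

Patient note:
{note}

Does the patient meet this criterion? Answer MET, NOT MET, or INSUFFICIENT INFORMATION,
then give a one-sentence justification citing the note."""

def assess_criterion(note: str, criterion: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(criterion=criterion, note=note)}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    note = "72-year-old woman with stage II breast cancer, eGFR 85, no prior chemotherapy."
    criterion = "Patients must be chemotherapy-naive."
    print(assess_criterion(note, criterion))
```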

Frazier Huo (@zepeng_huo)

🎉 Excited to share that our latest research, 𝘛𝘪𝘮𝘦-𝘵𝘰-𝘌𝘷𝘦𝘯𝘵 𝘗𝘳𝘦𝘵𝘳𝘢𝘪𝘯𝘪𝘯𝘨 𝘧𝘰𝘳 3𝘋 𝘔𝘦𝘥𝘪𝘤𝘢𝘭 𝘐𝘮𝘢𝘨𝘪𝘯𝘨, has been accepted at 𝗜𝗖𝗟𝗥 2025! 🚀 🔍 𝗜𝗺𝗽𝗿𝗼𝘃𝗶𝗻𝗴 𝗠𝗲𝗱𝗶𝗰𝗮𝗹 𝗜𝗺𝗮𝗴𝗲 𝗣𝗿𝗲𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗧𝗶𝗺𝗲-𝘁𝗼-𝗘𝘃𝗲𝗻𝘁

Jason Alan Fries (@jasonafries)

🎉 We're thrilled to announce the general release of three de-identified, longitudinal EHR datasets from Stanford Medicine—now freely available for non-commercial research-use worldwide! 🚀 Read our HAI blog post for more details: hai.stanford.edu/news/advancing… 𝗗𝗮𝘁𝗮𝘀𝗲𝘁

Dan Biderman (@dan_biderman)

How can we use small LLMs to shift more AI workloads onto our laptops and phones? In our paper and open-source code, we pair on-device LLMs (ollama) with frontier LLMs in the cloud (@openai, @together), to solve token-intensive workloads on your 💻 at 17.5% of the cloud cost
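A minimal sketch of that local/cloud split, under stated assumptions: a small model served locally by ollama reads the long document in chunks, and the cloud model only sees the short extracted notes, so most of the token volume never reaches the cloud. The chunking scheme, prompts, and model names are illustrative, not the paper's actual protocol.

```python
# Sketch of pairing an on-device model (ollama) with a cloud model; details are assumptions.
import requests
from openai import OpenAI

def local_extract(chunk: str, question: str, model: str = "llama3.2") -> str:
    """Ask the on-device model (via ollama's HTTP API) for facts relevant to the question."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model,
              "prompt": f"Question: {question}\n\nText:\n{chunk}\n\nList any relevant facts, briefly.",
              "stream": False},
    )
    return resp.json()["response"]

def cloud_answer(notes: list[str], question: str) -> str:
    """Send only the compact notes, not the raw document, to the cloud model."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Using these extracted notes, answer: {question}\n\nNotes:\n" + "\n".join(notes)}],
    )
    return out.choices[0].message.content

def answer(document: str, question: str, chunk_size: int = 2000) -> str:
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    notes = [local_extract(c, question) for c in chunks]
    return cloud_answer(notes, question)
```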

Percy Liang (@percyliang)

1/🧵How do we know if AI is actually ready for healthcare? We built a benchmark, MedHELM, that tests LMs on real clinical tasks instead of just medical exams. #AIinHealthcare Blog, GitHub, and link to leaderboard in thread!

Hejie Cui (@hennyjiecc)

Introducing TIMER⌛️: a temporal instruction modeling and evaluation framework for longitudinal clinical records! 🏥📈 TIMER tackles challenges in processing longitudinal medical records—including temporal reasoning, multi-visit synthesis, and patient trajectory analysis. It
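One way to picture the input side of such a framework is a time-stamped serialization of multi-visit records into a single instruction prompt, sketched below. The record schema, prompt wording, and example data are assumptions for illustration, not TIMER's actual format.

```python
# Sketch: serialize a longitudinal record into a time-stamped instruction prompt (assumed format).
from datetime import date

visits = [
    {"date": date(2021, 3, 2), "note": "Diagnosed with type 2 diabetes; started metformin."},
    {"date": date(2022, 7, 15), "note": "HbA1c 8.4%; metformin dose increased."},
    {"date": date(2024, 1, 9), "note": "HbA1c 6.9%; reports good adherence."},
]

def build_prompt(visits: list[dict], instruction: str) -> str:
    """Order visits chronologically so the model can reason over the full timeline."""
    timeline = "\n".join(
        f"[{v['date'].isoformat()}] {v['note']}" for v in sorted(visits, key=lambda v: v["date"])
    )
    return f"Patient timeline:\n{timeline}\n\nInstruction: {instruction}"

print(build_prompt(visits, "Summarize how the patient's diabetes control changed over time."))
```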

Avanika Narayan (@avanika15)

can you chat privately with a cloud llm—*without* sacrificing speed? excited to release minions secure chat: an open-source protocol for end-to-end encrypted llm chat with <1% latency overhead (even @ 30B+ params!). cloud providers can’t peek—messages decrypt only inside a
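The end-to-end property described above can be sketched with a sealed-box construction: the client encrypts each message to a public key whose private half lives only on the serving side, so a relay in the middle sees ciphertext only. This uses PyNaCl purely as an illustration; the released protocol's key exchange, attestation, and transport details may differ.

```python
# Illustrative end-to-end encryption sketch with PyNaCl sealed boxes; not the released protocol.
from nacl.public import PrivateKey, SealedBox

# In practice the serving side would generate this key and prove it (e.g., via attestation);
# here we generate it locally just to show the encrypt/decrypt flow.
server_key = PrivateKey.generate()

def client_encrypt(prompt: str) -> bytes:
    """Client side: seal the prompt to the server's public key."""
    return SealedBox(server_key.public_key).encrypt(prompt.encode())

def server_decrypt(ciphertext: bytes) -> str:
    """Server side: only the holder of the private key can recover the prompt."""
    return SealedBox(server_key).decrypt(ciphertext).decode()

ct = client_encrypt("what does this lab result mean?")
print(len(ct), "bytes of ciphertext; a relay in the middle sees only this")
print(server_decrypt(ct))
```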