Michael Wornow (@michaelwornow)'s Twitter Profile
Michael Wornow

@michaelwornow

Computer Science PhD Student @ Stanford

ID: 1633929360662216704

Link: https://michaelwornow.net/ · Joined: 09-03-2023 20:34:59

122 Tweets

379 Followers

129 Following

Neel Guha (@neelguha)'s Twitter Profile Photo

What's (1) a "drink of fresh fruit pureed with milk, yogurt, or ice cream" and (2) an unsupervised algorithm for test-time LLM routing? Our #NeurIPS2024 paper, Smoothie! 🥤 arxiv.org/abs/2412.04692 1/9

Khaled Saab (@khaledsaab11)'s Twitter Profile Photo

Our approach to evaluating health AI models continues to evolve! (Phase 1) Medical Benchmarks ➡️ (Phase 2) Patient Actor Consultations ➡️ (Phase 3 – coming soon!) Real-World Deployment > (Phase 1) Medical Benchmarks: We first need to make sure our models have extensive medical

Isaac Kohane (@zakkohane)'s Twitter Profile Photo

Medical record to finding a clinical trial through AI? Using "out-of-the-box" "zero-shot" AI model NEJM AI ai.nejm.org/doi/10.1056/AI… Interesting study Stanford University Should all clinicians and patients be using this when no one else is offering a state-of-the-art trial? In cancer,

Sehj Kashyap (@sehjkashyap)'s Twitter Profile Photo

Great to see MAMBA architecture evaluated on EHR-related tasks and robust analysis of EHR context complexities in this new paper with a fun title Michael Wornow arxiv.org/pdf/2412.16178

NEJM AI (@nejm_ai)'s Twitter Profile Photo

In a Case Study, Michael Wornow et al. investigate the accuracy, efficiency, and interpretability of using LLMs for clinical trial patient matching, with a focus on the zero-shot performance of these models to scale to arbitrary trials. Learn more: nejm.ai/4fM0Gmv

Frazier Huo (@zepeng_huo)'s Twitter Profile Photo

🎉 Excited to share that our latest research, Time-to-Event Pretraining for 3D Medical Imaging, has been accepted at ICLR 2025! 🚀 🔍 Improving Medical Image Pretraining with Time-to-Event

Jason Alan Fries (@jasonafries)'s Twitter Profile Photo

🎉 We're thrilled to announce the general release of three de-identified, longitudinal EHR datasets from Stanford Medicine, now freely available for non-commercial research use worldwide! 🚀 Read our HAI blog post for more details: hai.stanford.edu/news/advancing… Dataset

Dan Biderman (@dan_biderman)'s Twitter Profile Photo

How can we use small LLMs to shift more AI workloads onto our laptops and phones? In our paper and open-source code, we pair on-device LLMs (ollama) with frontier LLMs in the cloud (@openai, @together) to solve token-intensive workloads on your 💻 at 17.5% of the cloud cost
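The pairing idea above can be sketched in miniature. This is a hypothetical illustration, not the paper's actual protocol or API: `local_llm` and `cloud_llm` are stand-in stubs (in practice the local side might be served via ollama and the cloud side via a frontier-model API). The point is only that the token-heavy pass over the document stays on-device, and the cloud model sees a much shorter prompt.

```python
# Hypothetical sketch of local/cloud LLM pairing: the on-device model handles
# token-heavy chunks, and only a short aggregate prompt goes to the cloud.

def local_llm(prompt: str) -> str:
    """Stub for an on-device model (e.g., one served locally via ollama)."""
    return f"summary({len(prompt.split())} tokens)"

def cloud_llm(prompt: str) -> str:
    """Stub for a frontier cloud model."""
    return f"answer from: {prompt}"

def route(document: str, chunk_size: int = 50) -> tuple[str, int, int]:
    """Split a long document into chunks, summarize each chunk locally,
    then send one short prompt over the summaries to the cloud model.
    Returns (answer, tokens_seen_locally, tokens_sent_to_cloud)."""
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    summaries = [local_llm(c) for c in chunks]   # token-heavy work stays local
    cloud_prompt = " | ".join(summaries)         # short prompt to the cloud
    return cloud_llm(cloud_prompt), len(words), len(cloud_prompt.split())

doc = ("word " * 200).strip()
answer, local_toks, cloud_toks = route(doc)
print(cloud_toks < local_toks)  # the cloud sees far fewer tokens than the document
```

The cost saving in such a scheme comes from the ratio `cloud_toks / local_toks`; how close that gets to a figure like the tweet's 17.5% depends entirely on the workload and the real protocol.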

Percy Liang (@percyliang)'s Twitter Profile Photo

1/🧵 How do we know if AI is actually ready for healthcare? We built a benchmark, MedHELM, that tests LMs on real clinical tasks instead of just medical exams. #AIinHealthcare Blog, GitHub, and link to leaderboard in thread!

Hejie Cui (@hennyjiecc)'s Twitter Profile Photo

Introducing TIMER ⌛️: a temporal instruction modeling and evaluation framework for longitudinal clinical records! 🏥📈 TIMER tackles challenges in processing longitudinal medical records, including temporal reasoning, multi-visit synthesis, and patient trajectory analysis. It

Avanika Narayan (@avanika15)'s Twitter Profile Photo

can you chat privately with a cloud llm *without* sacrificing speed? excited to release minions secure chat: an open-source protocol for end-to-end encrypted llm chat with <1% latency overhead (even @ 30B+ params!). cloud providers can't peek; messages decrypt only inside a