Daniel P Jeong (@danielpjeong) 's Twitter Profile
Daniel P Jeong

@danielpjeong

PhD student @mldcmu working on statistical ML for healthcare | prev: @columbia, @nasajpl 🦋 bsky.app/profile/daniel…

ID: 930471580030906368

Link: http://djeong.com · Joined: 14-11-2017 16:24:46

94 Tweets

125 Followers

392 Following

Zachary Lipton (@zacharylipton) 's Twitter Profile Photo

Unpacking what's going on in our paper on Med-* foundation models & their failure to improve over their generic counterparts (think Med-{LLaMa, LLaVa} vs {LLaMa, LLaVa}). A familiar tale of motivated reasoning, sloppy eval, & hidden hyper-optimization. arxiv.org/abs/2411.04118

Daniel P Jeong (@danielpjeong) 's Twitter Profile Photo

Michael is an amazing mentor! I’ve really enjoyed working with him on multiple projects, and I highly recommend him as an advisor 😄. If you’re applying to CS PhD programs and interested in causality/reliable ML/healthcare, consider applying to work with him!

Dylan Sam (@dylanjsam) 's Twitter Profile Photo

Contrastive VLMs (CLIP) lack the structure of text embeddings, like satisfying analogies via arithmetic (king - man + woman ≈ queen). We enhance CLIP’s *reasoning abilities* on such tasks by finetuning w/ text descriptions of image differences! w/ D. Willmott, J. Semedo, Zico Kolter 1/🧵

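The "analogies via arithmetic" structure the tweet refers to can be illustrated with a toy sketch. The embedding table below is hypothetical and hand-built so the analogy holds by construction; CLIP's learned space is exactly where this structure is missing, which is what the finetuning aims to fix.

```python
import math

# Toy embedding table (hypothetical 4-d vectors, NOT real CLIP embeddings):
# axis 0 is a rough "gender" direction, axis 1 a rough "royalty" direction.
emb = {
    "man":      [ 1.0, 0.0, 0.2, 0.1],
    "woman":    [-1.0, 0.0, 0.2, 0.1],
    "king":     [ 1.0, 1.0, 0.2, 0.1],
    "queen":    [-1.0, 1.0, 0.2, 0.1],
    "prince":   [ 1.0, 0.8, 0.2, 0.1],
    "princess": [-1.0, 0.8, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Solve the analogy by vector arithmetic, excluding the query words.
query = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
candidates = {w: v for w, v in emb.items() if w not in {"king", "man", "woman"}}
best = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(best)  # queen
```

An embedding space "satisfies analogies" when this nearest-neighbor lookup on the arithmetic result returns the expected word, as it does here by construction.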
Leqi Liu (@leqi_liu) 's Twitter Profile Photo

How to **efficiently** build personalized language models without textual info on user preferences? Our P-RLHF work:
- light-weight user model
- personalize all *PO alignment algorithms
- strong performance on the largest personalized preference dataset
arxiv.org/abs/2402.05133
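A minimal sketch of the "lightweight user model" idea, assuming a linear implicit reward and a DPO-style logistic loss. All names, shapes, and the additive user offset here are illustrative stand-ins, not the paper's architecture.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def personalized_po_loss(w, user_vec, feats_chosen, feats_rejected, beta=1.0):
    # Implicit reward = (shared weights + per-user offset) . response features;
    # the loss is the standard DPO-style logistic form on the reward margin.
    combined = [wi + ui for wi, ui in zip(w, user_vec)]
    margin = dot(combined, feats_chosen) - dot(combined, feats_rejected)
    return -math.log(sigmoid(beta * margin))

w = [0.5, -0.2, 0.1, 0.3]          # shared (base-model) parameters
chosen, rejected = [1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]

# Two users with opposite tastes: only the lightweight user vector differs.
u_a = [0.4, -0.4, 0.4, -0.4]       # agrees with this labeled preference
u_b = [-2.0, 2.0, -2.0, 2.0]       # prefers the "rejected" style
loss_a = personalized_po_loss(w, u_a, chosen, rejected)
loss_b = personalized_po_loss(w, u_b, chosen, rejected)
print(loss_a < loss_b)             # True: user A's vector fits this pair better
```

The point of the sketch: the shared weights are trained once, while each user contributes only a small vector, which is what makes the personalization lightweight and compatible with any preference-optimization loss of this form.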

Seungwook Han (@seungwookh) 's Twitter Profile Photo

🧩 Why do task vectors exist in pretrained LLMs? Our new research uncovers how transformers form internal abstractions and the mechanisms behind in-context learning (ICL).

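A cartoon of the task-vector phenomenon the thread studies: if a model's hidden state decomposes as h = h(input) + v(task), the task component extracted from one in-context prompt can be patched into a zero-shot run on another. This linear setup only illustrates the extract-and-patch protocol; it is not a transformer and not the paper's analysis.

```python
import math

V_TASK = [0.5, -0.25, 1.0, 0.0]    # latent direction the demos would induce

def hidden(x_emb, with_demos):
    # Toy "model": in-context demos add the task direction to the state.
    if with_demos:
        return [x + v for x, v in zip(x_emb, V_TASK)]
    return list(x_emb)

x1 = [0.25, 0.5, 0.75, 1.0]        # prompt used to extract the vector
x2 = [-1.0, 0.5, 0.0, 2.0]         # unseen prompt, run zero-shot

# Extract: hidden state with demos minus hidden state without.
v_hat = [a - b for a, b in zip(hidden(x1, True), hidden(x1, False))]

# Patch the extracted vector into the zero-shot run on x2.
patched = [h + v for h, v in zip(hidden(x2, False), v_hat)]
print(all(math.isclose(p, h) for p, h in zip(patched, hidden(x2, True))))  # True
```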
Euxhen Hasanaj (@euxhenh) 's Twitter Profile Photo

We have just released SenSet, a novel list of 106 senescence marker genes. We hope this resource accelerates discoveries in aging research, cancer biology, and regenerative medicine. #senescence #aging #pulearning #gene-set #SenNet biorxiv.org/content/10.110…

Pratyush Maini (@pratyushmaini) 's Twitter Profile Photo

1/Being in academia is such a privilege: You get to collaborate with insanely talented & passionate students on their journey to upskill themselves. Very excited to share *OpenUnlearning*: a unified, easily extensible framework for unlearning led by Anmol Mekala & Vineeth 🧵

Sachin Goyal (@goyalsachin007) 's Twitter Profile Photo

Excited to be at #ICLR2025 🇸🇬 to talk about my recent works👇that uncover key pitfalls & inefficiencies in pretraining & inference🚨. Final PhD lap —thinking a lot about how pretraining interventions can shape downstream behaviors (like reasoning & safety). DM to chat or vibe!

Pratyush Maini (@pratyushmaini) 's Twitter Profile Photo

Looking forward to giving a talk this Friday at OpenAI with Zhili Feng on some of our privacy & memorization research + how it applies to production LLMs! We've been gaining momentum on detecting, quantifying & erasing memorization; excited to explore its real-world impact!

Rattana Pukdee (@rpukdeee) 's Twitter Profile Photo

In our #AISTATS2025 paper, we ask: when is it possible to recover a consistent joint distribution from conditionals? We propose path consistency and autoregressive path consistency—necessary and easily verifiable conditions. See you at Poster session 3, Monday 5th May.

Danny To Eun Kim (@teknology.bsky.social) (@teknologyy) 's Twitter Profile Photo

🧵Working with #MCP or building a modular #RAG system, but not sure which rankers to use from your pool?
📊 Rank the Rankers ⚡ Route smart. This paper shows how.
👨‍🔬 w/ Fernando Diaz
💻 Code: github.com/kimdanny/Starl…
Paper: arxiv.org/abs/2506.13743
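To make the "route smart" idea concrete, here is a toy stand-in, not the paper's method: each ranker in the pool is a scoring function, and a router picks one per query before ranking. Both rankers and the routing rule below are hypothetical.

```python
def lexical_overlap(query, doc):
    # Hypothetical lexical ranker: count query terms appearing in the doc.
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def length_prior(query, doc):
    # Hypothetical structural ranker: prefer docs near a target length.
    return -abs(len(doc.split()) - 8)

RANKERS = {"lexical": lexical_overlap, "structural": length_prior}

def route(query):
    # Toy routing rule: short keyword-style queries go to the lexical
    # ranker; longer natural-language queries fall back to the other one.
    return "lexical" if len(query.split()) <= 3 else "structural"

def rank(query, docs):
    ranker = RANKERS[route(query)]
    return sorted(docs, key=lambda d: ranker(query, d), reverse=True)

docs = ["cats sit on mats",
        "dogs chase cats in the park every day",
        "quantum computing"]
print(rank("cats mats", docs)[0])   # cats sit on mats
```

The paper's point is doing this selection in a principled, evaluated way over a real ranker pool; the hard part it addresses is exactly what the hand-written `route` rule above fakes.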

Danny To Eun Kim (@teknology.bsky.social) (@teknologyy) 's Twitter Profile Photo

🇮🇹 We're presenting a tutorial on Retrieval-Enhanced Machine Learning (#REML) at #SIGIR2025!
🗓️ Sunday July 13th | 9AM - 12:30PM
📌 Donatello Floor 0
Details: retrieval-enhanced-ml.github.io/sigir-2025.html
w/ Fernando Diaz, Andrew Drozdov, Alireza Salemi, Hamed Zamani · SIGIR 2025
