Ming Yin (@mingyin_0312)'s Twitter Profile
Ming Yin

@mingyin_0312

ML, RL, AI. @Princeton Postdoc. PhDs in CS & STATs. Ex @awscloud AI. undergrad @USTC Math. Area Chair @NeurIPS @ICML. On the academic job market.

ID: 1036902684106797056

Link: http://mingyin0312.github.io · Joined: 04-09-2018 09:03:59

176 Tweets

1.1K Followers

981 Following

Yuanhao Qu (@yuanhaoq)'s Twitter Profile Photo

🤔 How do you train an AI model to think and reason like a biology expert?

We found the answer: let it learn from real expert discussions!

Check out our recent work on a breakthrough approach to improve LLM scientific reasoning - by learning directly from 10+ years of genomics
Le Cong (@lecong)'s Twitter Profile Photo

🤔 How do you get an LLM to reason like a CRISPR pro—or any top scientist?
By training it on real expert conversations.
🛠️ What we built
• An automated pipeline that distills learning signals from 10+ years of genomics discussions
• Genome-Bench: 3,000+ curated Q&As on
Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

Reinforcement Learning with Verifiable Rewards Implicitly Incentivizes Correct Reasoning in Base LLMs

"The Pass@K metric itself is a flawed measure of reasoning, as it credits correct final answers that probably arise from inaccurate or incomplete chains of thought (CoTs). To
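The Pass@K metric the tweet critiques is conventionally computed with the standard unbiased estimator from HumanEval-style evaluation: given n sampled generations of which c pass, the probability that at least one of k draws (without replacement) is correct. A minimal sketch (the function name is mine); note it scores only final-answer correctness, which is exactly the blind spot the paper points at:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations pass. Equals 1 - C(n-c, k)/C(n, k)."""
    if n - c < k:
        # fewer than k incorrect samples exist, so any k-subset
        #必 contains a correct one
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 are correct, pass@1 is 0.5, regardless of whether the two "correct" samples reached their answers through sound chains of thought.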
Banghua Zhu (@banghuaz)'s Twitter Profile Photo

Really excited to work with Andrew Ng and DeepLearning.AI on this new course on post-training of LLMs—one of the most creative and fast-moving areas in LLM development. We cover the key techniques that turn pre-trained models into helpful assistants: SFT, DPO, and online RL.

Ming Yin (@mingyin_0312)'s Twitter Profile Photo

🚀 We built CRISPR-GPT, an AI agent that turns anyone into a gene-editing expert in 1 day. No lab experience needed. 90%+ editing efficiency on first try. Now published in Nature Biomedical Engineering. 👇 nature.com/articles/s4155… #CRISPR #AI #BioTech

Le Cong (@lecong)'s Twitter Profile Photo

Great working with Nebius team building powerful domain-specific AI expert models and agents for the future of scientific discovery and life-saving medicine!❤️

Yu Bai (@yubai01)'s Twitter Profile Photo

We released our first open-source language model since GPT-2! It was amazing how the entire team came together in every stage of this work -- squeezing the absolute best performance, stress-testing and mitigating safety risks to a new standard, and overcoming many unforeseen

Chenlu Ye (@ye_chenlu)'s Twitter Profile Photo

PROF🌀Right answer, flawed reason?🤔🌀
📄arxiv.org/pdf/2509.03403
Excited to share our work: PROF-PRocess cOnsistency Filter! 🚀
Challenge: ORM is blind to flawed logic, and PRM suffers from reward hacking. Our method harmonizes strengths of PRM & ORM. #LLM #ReinforcementLearning
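As a rough illustration only (the selection rule, thresholds, and names below are my assumptions, not the paper's algorithm), one way to "harmonize" an outcome reward model (ORM) and a process reward model (PRM) is to keep only the rollouts where the two agree: among correct-answer rollouts, keep those with the highest process scores (discarding right-answer/flawed-reasoning samples), and among incorrect ones, keep those the PRM also scores low:

```python
def filter_by_process_consistency(rollouts, keep_frac=0.5):
    """Hypothetical sketch of a process-consistency filter.
    Each rollout is (text, outcome_correct: bool, process_score: float).
    Keeps the keep_frac of correct rollouts with the HIGHEST process
    scores and the keep_frac of incorrect rollouts with the LOWEST,
    so the outcome label and process score agree on every kept sample."""
    correct = sorted((r for r in rollouts if r[1]), key=lambda r: -r[2])
    wrong = sorted((r for r in rollouts if not r[1]), key=lambda r: r[2])
    k_c = max(1, int(len(correct) * keep_frac)) if correct else 0
    k_w = max(1, int(len(wrong) * keep_frac)) if wrong else 0
    return correct[:k_c] + wrong[:k_w]
```

Under this sketch, a rollout with the right final answer but a low process score is filtered out of the RL training batch rather than reinforced, which is the failure mode the tweet describes ORM alone cannot catch.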
Thinking Machines (@thinkymachines)'s Twitter Profile Photo

Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference”

We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to
Mengdi Wang (@mengdiwang10)'s Twitter Profile Photo

AI scientist has come to the real world: Proud to contribute to the world's first biotech-lab-validated AI scientist — CRISPR-GPT — empowering biotech discovery and gene therapies. From Stanford Medicine & Princeton AI. the-scientist.com/crispr-gpt-tur…

Mengdi Wang (@mengdiwang10)'s Twitter Profile Photo

🚀 Introducing LabOS: The AI-XR Co-Scientist
A system that sees, understands, and works with humans in real-world labs.
👁️ Egocentric vision & extended reality
🧠 LLM reasoning & hypothesis generation
🤖 Real-time guidance & multi-modal human-AI collaboration

From observation →