Yong Lin (@yong18850571)'s Twitter Profile
Yong Lin

@yong18850571

Postdoc Fellow @PrincetonPLI @Princeton. Focusing on the trustworthiness of LLMs. Apple AI/ML PhD Fellow 2023. Obtained PhD degree from @HKUST

ID: 1380847405902352388

Link: https://linyongver.github.io/Website/ · Joined: 10-04-2021 11:37:44

50 Tweets

415 Followers

187 Following

Yong Lin (@yong18850571)'s Twitter Profile Photo

We are glad that arxiv.org/abs/2411.18872 evaluates our Goedel-Prover (goedel-lm.github.io) on IMO lemmas and compares it with O3-mini as well as Deepseek-Prover-RL. Our model solved 37.9% of the IMO lemmas, while O3-mini solved only 23.8%. It is interesting that…
Anirudha Majumdar (@majumdar_ani)'s Twitter Profile Photo

I sent a message to my PhD students and postdocs at Princeton University a couple of weeks ago regarding freezes/cuts to federal research funding (this was before the freeze on federal funding to Princeton). I am sharing it here in case others find it helpful in having similar…

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

Our VP of Reinforcement Learning David Silver believes we must go “beyond what humans know” - moving towards systems that can learn for themselves, and even discover new scientific knowledge. 🧠 Listen in on his conversation with our podcast host @FryRSquared →

Sanjeev Arora (@prfsanjeevarora)'s Twitter Profile Photo

[1] Kids improve when a good teacher offers adaptive, targeted feedback. Can a small LLM benefit if a large LLM provides helpful feedback in-context? Naive ideas fail here. We propose AdaptMI: adaptive, skill-based in-context supervision that boosts 1B models by 6% on…