Kazem Meidani (@kazemmeidani)'s Twitter Profile
Kazem Meidani

@kazemmeidani

AI Research @CapitalOne, prev. PhD @CarnegieMellon. AI research intern @NetflixResearch, @EA. AI4Science

ID: 1265472111344340994

Link: https://mmeidani.github.io · Joined: 27-05-2020 02:37:35

33 Tweets

479 Followers

1.1K Following

Sean Welleck (@wellecks)'s Twitter Profile Photo

Teaching a new course on Neural Code Generation with Daniel Fried!

cmu-codegen.github.io/s2024/

Here is the lecture on pretraining and scaling laws:
cmu-codegen.github.io/s2024/static_f…
fly51fly (@fly51fly)'s Twitter Profile Photo

[LG] Masked Autoencoders are PDE Learners
A Zhou, A B Farimani [CMU] (2024)
arxiv.org/abs/2403.17728

- Masked autoencoders can learn useful latent representations for PDEs through self-supervised pretraining on unlabeled spatiotemporal data. This allows them to improve
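
As a quick illustration of the idea in this abstract, here is a minimal MAE-style pretraining sketch: patchify an unlabeled field snapshot, mask most patches, encode only the visible ones, and reconstruct the masked ones. The patch size, model width, and masking ratio are illustrative assumptions, not the paper's actual configuration, and positional embeddings are omitted for brevity.

```python
# Minimal sketch of MAE-style self-supervised pretraining on PDE
# snapshots (illustrative assumptions, not the paper's architecture).
import torch
import torch.nn as nn

PATCH, D_MODEL, MASK_RATIO = 8, 128, 0.75

class TinyMAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(PATCH * PATCH, D_MODEL)    # flattened patch -> token
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True),
            num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, D_MODEL))
        self.decoder = nn.Linear(D_MODEL, PATCH * PATCH)  # token -> patch

    def forward(self, patches):  # patches: (B, N, PATCH*PATCH)
        B, N, P = patches.shape
        n_keep = int(N * (1 - MASK_RATIO))
        keep = torch.rand(B, N).argsort(dim=1)[:, :n_keep]          # random subset
        visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, P))
        z = self.encoder(self.embed(visible))                       # encode visible only
        full = self.mask_token.expand(B, N, -1).clone()             # mask tokens everywhere
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, D_MODEL), z)
        recon = self.decoder(full)
        masked = torch.ones(B, N).scatter_(1, keep, 0.0).bool()
        return ((recon - patches) ** 2)[masked].mean()              # loss on masked patches

# usage: a 64x64 unlabeled field chopped into 64 flattened 8x8 patches
loss = TinyMAE()(torch.randn(4, 64, PATCH * PATCH))
loss.backward()
```
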
Kazem Meidani (@kazemmeidani)'s Twitter Profile Photo

🚨 New Preprint on LLMs for Scientific Discovery

Check out our work “LLM-SR”, which uses LLMs’ (1) scientific prior knowledge, (2) code generation, and (3) in-context reasoning to find mathematical equations behind scientific data. More details ⬇️
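
As a rough illustration of the kind of loop this describes (equation skeletons proposed as code, parameters fit to data, best candidates fed back into the prompt), here is a toy sketch. `ask_llm` is a hypothetical stand-in for the actual LLM call; nothing here is the paper's implementation.

```python
# Toy sketch of an LLM-SR-style discovery loop (illustrative only):
# the LLM writes an equation skeleton as a Python function with free
# parameters p, the parameters are fit numerically, and the best
# skeletons seed the next prompt.
import numpy as np
from scipy.optimize import minimize

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; imagine it returns code like:
    return ("def equation(x, p):\n"
            "    return p[0] * x[:, 0] + p[1] * np.sin(p[2] * x[:, 1])\n")

def fit_and_score(src: str, x, y, n_params: int = 3):
    env = {"np": np}
    exec(src, env)                      # sandbox this in any real system
    f = env["equation"]
    mse = lambda p: float(np.mean((f(x, p) - y) ** 2))
    best = min((minimize(mse, np.random.randn(n_params)) for _ in range(5)),
               key=lambda r: r.fun)     # multi-start parameter fitting
    return best.fun, best.x

x, y = np.random.rand(100, 2), np.random.rand(100)
buffer = []                             # experience buffer of (score, skeleton)
for _ in range(3):
    prompt = "Propose an equation skeleton.\n" + "\n".join(s for _, s in buffer[:2])
    src = ask_llm(prompt)
    score, params = fit_and_score(src, x, y)
    buffer = sorted(buffer + [(score, src)])[:10]   # keep the best skeletons
print(buffer[0][0])                     # best MSE found so far
```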

Sean Welleck (@wellecks)'s Twitter Profile Photo

How can informal reasoning improve formal theorem proving?

New paper: "Lean-STaR: Learning to Interleave Thinking and Proving"

arxiv.org/abs/2407.10040

We introduce a framework for learning to interleave informal thoughts with steps of formal proving. 46.3% on miniF2F 🔥
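
A rough sketch of the STaR-style data collection this suggests: sample an informal thought, then a tactic conditioned on it, and keep only trajectories the proof checker accepts. `generate` and `lean_check` are hypothetical stubs for the model call and the Lean interaction, not the paper's API.

```python
# Sketch of thought-interleaved proof search with rejection sampling
# (illustrative; `generate` and `lean_check` are hypothetical stubs).
def generate(prompt: str) -> str:
    raise NotImplementedError   # language-model call goes here

def lean_check(goal: str, tactic: str) -> tuple[bool, str]:
    raise NotImplementedError   # run the tactic in Lean; return (ok, next goal)

def collect_trace(goal: str, max_steps: int = 10):
    trace = []
    for _ in range(max_steps):
        thought = generate(f"Goal: {goal}\nThink informally before acting:")
        tactic = generate(f"Goal: {goal}\nThought: {thought}\nNext tactic:")
        ok, new_goal = lean_check(goal, tactic)
        if not ok:
            return None          # discard failed trajectories (rejection sampling)
        trace.append((goal, thought, tactic))
        if new_goal == "":       # no goals left: proof complete
            return trace         # fine-tune on these (thought, tactic) pairs
        goal = new_goal
    return None
```
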
Sean Welleck (@wellecks)'s Twitter Profile Photo

Interested in LLMs and Lean? Check out LLMLean, a tool for using LLMs to suggest proof steps and complete proofs in Lean: github.com/cmu-l3/llmlean

Here's an example of using LLMLean with GPT-4o to solve problems from Mathematics in Lean:
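
Since the example image doesn't survive the scrape, here is a sketch of what an invocation might look like, assuming the `llmstep` tactic and `LLMlean` import named in the repo README (double-check the repo for the actual setup and configuration):

```lean
import Mathlib.Tactic
import LLMlean

-- `llmstep ""` asks the configured model for next-tactic suggestions
-- at the current goal (tactic name per the repo README; assumed here).
example (a b : ℕ) : a + b = b + a := by
  llmstep ""   -- might suggest e.g. `exact Nat.add_comm a b`
```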

Baran Hashemi (@rythian47)'s Twitter Profile Photo

🚨How can we teach Transformers to learn and model Enumerative geometry? 
How deep can AI go in the rabbit hole of understanding complex mathematical concepts? 🤔
We’ve developed a new approach using Transformers to compute psi-class intersection numbers in algebraic geometry.
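
For context, the objects being predicted here are standard: psi-class intersection numbers are integrals of powers of cotangent-line classes over the moduli space of stable curves. This is textbook background, not a detail of the paper's method:

```latex
\[
  \langle \tau_{a_1} \cdots \tau_{a_n} \rangle_g
    = \int_{\overline{\mathcal{M}}_{g,n}} \psi_1^{a_1} \cdots \psi_n^{a_n},
\]
% nonzero only when the total degree matches the dimension:
\[
  \sum_{i=1}^{n} a_i = \dim_{\mathbb{C}} \overline{\mathcal{M}}_{g,n} = 3g - 3 + n,
\]
% with base cases such as
\[
  \langle \tau_0^3 \rangle_0 = 1, \qquad \langle \tau_1 \rangle_1 = \tfrac{1}{24}.
\]
```
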
Miles Cranmer (@milescranmer)'s Twitter Profile Photo

Congratulations Kazem Meidani on your PhD! It was an honor to serve on your thesis committee :)

Also – love this symbolic regression-themed graduation present from Amir Barati
Kazem Meidani (@kazemmeidani)'s Twitter Profile Photo

Successfully defended my PhD thesis on AI for scientific discovery at Carnegie Mellon University!

I'm grateful for the invaluable support and mentorship from my PhD advisor, Amir Barati. And many thanks to my committee members: Chris McComb, Sean Welleck, Chandan Reddy, and Miles Cranmer.
Hamed Shirzad (@hamedshirzad13)'s Twitter Profile Photo

Graph Transformers (GTs) can handle long-range dependencies and resolve information bottlenecks, but they’re computationally expensive. Our new model, Spexphormer, helps scale them to much larger graphs – check it out at the NeurIPS Conference next week, or the preview here!
#NeurIPS2024
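
For intuition, here is a generic sketch of the edge-restricted attention that makes graph transformers cheaper: attention scores are computed only along a sparse edge list, giving O(E) cost instead of O(N²). This is not Spexphormer's actual architecture; names and shapes are illustrative.

```python
# Generic sparse (edge-restricted) graph attention: O(E), not O(N^2).
# Illustrative only; not Spexphormer's actual architecture.
import torch

def sparse_graph_attention(x, edge_index, Wq, Wk, Wv):
    src, dst = edge_index                 # x: (N, d); edge_index: (2, E)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q[dst] * k[src]).sum(-1) / q.size(-1) ** 0.5   # one score per edge
    w = (scores - scores.max()).exp()     # stable softmax numerator
    denom = torch.zeros(x.size(0)).index_add_(0, dst, w)
    alpha = w / denom[dst]                # normalize over each node's in-edges
    return torch.zeros_like(v).index_add_(0, dst, alpha.unsqueeze(-1) * v[src])

# toy usage on a 5-node graph with 6 directed edges
N, d = 5, 8
x = torch.randn(N, d)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0],    # source nodes
                           [1, 2, 3, 4, 0, 2]])   # destination nodes
Wq, Wk, Wv = (torch.randn(d, d) / d ** 0.5 for _ in range(3))
out = sparse_graph_attention(x, edge_index, Wq, Wk, Wv)  # (N, d)
```
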
Kazem Meidani (@kazemmeidani)'s Twitter Profile Photo

Happy to see that our work, LLM-SR, has been accepted to #ICLR 2025! 🇸🇬 If you’re interested in learning how LLMs can be used for scientific equation discovery, check this out 👇🏻

Parshin Shojaee (@parshinshojaee)'s Twitter Profile Photo

Super happy that "LLM-SR" was selected for an Oral presentation at #ICLR2025! In this paper, we show how LLMs, with their vast scientific knowledge & coding capability, enhance equation discovery in science! arxiv.org/abs/2404.18400

Parshin Shojaee (@parshinshojaee)'s Twitter Profile Photo

Scientific discovery with LLMs has so much potential yet is underexplored. Our new benchmark **LLM-SRBench** enables rigorous evaluation of equation discovery with LLMs!

🧠 Key takeaway: Even SOTA discovery models with strong LLM backbones still fail to discover mathematical
Kazem Meidani (@kazemmeidani)'s Twitter Profile Photo

Can’t attend ICLR 🇸🇬 due to visa issues, but Chandan Reddy will give the oral presentation of *LLM-SR* on Friday 👇🏻 + see our new preprint on benchmarking capabilities of LLMs for scientific equation discovery, *LLM-SRBench*: arxiv.org/abs/2504.10415
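
For readers new to the task, here is a toy example of the numeric side of scoring a discovered equation on held-out data. This is illustrative only; the benchmark's actual evaluation protocol is defined in the paper.

```python
# Toy numeric scoring of a discovered equation on held-out data via
# normalized mean squared error (illustrative; see the paper for the
# benchmark's actual metrics).
import numpy as np

def nmse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

truth = lambda x: 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2    # ground-truth law
found = lambda x: 1.98 * x[:, 0] + 0.51 * x[:, 1] ** 2  # discovered candidate

x_test = np.random.rand(1000, 2)                        # held-out inputs
print(nmse(truth(x_test), found(x_test)))               # near 0 -> good fit
```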

Anja Šurina (@anjasurina)'s Twitter Profile Photo

Excited to share our latest work on EvoTune, a novel method integrating LLM-guided evolutionary search and reinforcement learning to accelerate the discovery of algorithms! 1/12🧵
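
A rough sketch of the general search-plus-learning pattern described here: an LLM mutates a population of candidate programs while the model itself is periodically updated toward high scorers. `llm_mutate`, `score`, and `finetune_on` are hypothetical stubs, and this is not EvoTune's actual algorithm.

```python
# LLM-guided evolutionary search interleaved with model updates
# (illustrative stubs; not EvoTune's actual algorithm).
def llm_mutate(parents: list[str]) -> str:
    raise NotImplementedError    # prompt the LLM with parent programs

def score(program: str) -> float:
    raise NotImplementedError    # execute the candidate; measure quality

def finetune_on(programs: list[str]) -> None:
    raise NotImplementedError    # RL / fine-tuning toward high scorers

def evolve(seeds: list[str], rounds: int = 10, pop: int = 20):
    population = sorted(((score(p), p) for p in seeds), reverse=True)
    for _ in range(rounds):
        parents = [p for _, p in population[:2]]          # select the best
        child = llm_mutate(parents)                       # search step
        population = sorted(population + [(score(child), child)],
                            reverse=True)[:pop]
        finetune_on([p for _, p in population[:5]])       # learning step
    return population[0]
```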

Parshin Shojaee (@parshinshojaee)'s Twitter Profile Photo

Excited that our benchmark paper for scientific discovery **LLM-SRBench** was accepted to #ICML2025 as a *spotlight*!! 🎉🇨🇦 Special thanks to Hieu Nguyen and all other collaborators: Kazem Meidani, Amir Barati, Khoa D. Doan, Chandan Reddy.