Moon (@mnagai_)'s Twitter Profile
Moon

@mnagai_

PhD student at Koo lab @CSHL | Computational Biology | AI Alignment

ID: 1261246801878716416

Joined: 15-05-2020 10:47:47

408 Tweets

69 Followers

230 Following

Bernardo Almeida (@dealmeida_bp):

🚀 Introducing Nucleotide Transformer v3 (NTv3)

Today, we are very excited to share our latest foundation model for biology - Nucleotide Transformer v3 (NTv3).

NTv3 is InstaDeep's new multi-species genomics foundation model, designed for 1 Mb, single-nucleotide-resolution
Peter Koo (@pkoo562):

Mark your calendars: The AI x Bio meeting of 2026 will be held at CSHL on May 26-31! The program brings together 50+ invited leaders in genomics, transcriptomics, protein design, drug discovery, neuroAI, pathology, agentic AI, and more! Abstract deadline: 3/26 meetings.cshl.edu/meetings.aspx?…

Peter Koo (@pkoo562):

Interested in pursuing a PhD at the intersection of AI & genomics? The Koo Lab is recruiting through the new BioAI PhD Program at CSHL! Send me an email w/ CV. Applications are rolling until position is filled. *A Master’s degree or equiv is required cshl.edu/phd-program/bi…

Moon (@mnagai_):

With all the buzz around Claude Code, I thought about switching from ChatGPT, but then I hit cancel and got offered a free month. One month to see if it can keep up!

Alex Cui (@alexcdot):

Okay so, we just found that over 50 papers published at @Neurips 2025 have AI hallucinations

I don't think people realize how bad the slop is right now

It's not just that researchers from Google DeepMind, Meta, Massachusetts Institute of Technology (MIT), Cambridge University are using AI - they allowed LLMs to generate
Žiga Avsec (@avsecz):

AlphaGenome is out in @nature today along with model weights! 🧬

📄 Paper: nature.com/articles/s4158…
💻 Weights: github.com/google-deepmin…

Getting here wasn’t a straight path. We sat down with @googledeepmind to discuss the story behind the model, paper & API: youtu.be/V8lhUqKqzUc
Biology+AI Daily (@biologyaidaily):

Toward Interpretable and Generalizable AI in Regulatory Genomics

1. The review reframes seq2func models as living systems that improve through continual AI–experiment feedback loops instead of one-off training, arguing that static benchmarks hide systematic failure modes.

2. It
Moon (@mnagai_):

Excited to share our new Review/Perspective on interpretable and generalizable AI for regulatory genomics. Grateful to be a co-first author with an amazing team. Hope it’s useful to the community!

Anshul Kundaje (anshulkundaje@bluesky) (@anshulkundaje):

Excellent forward-looking perspective from Peter Koo. While this is not stated in the perspective & I don't want to put words in Peter's mouth, I think this is an alternative vision to building "virtual cells" that aims to unify cis & trans regulation. 1/

Xinming Tu (@tuxinming):

1/13 Excited to share our (anna spiro, Maria Chikina, Sara Mostafavi) latest preprint! 🧬💻 Personal Genome Prediction isn't just a downstream task - it’s the ultimate end-to-end benchmark for Variant Effect Prediction. We put the new SOTA AlphaGenome to the test and uncovered a

Chris Painter (@chrispainteryup):

My bio says I work on AGI preparedness, so I want to clarify: We are not prepared. Over the last year, dangerous capability evaluations have moved into a state where it's difficult to find any Q&A benchmark that models don't saturate. Work has had to shift toward measures that

Moon (@mnagai_):

Nice seq2func interpretability approach that assigns attribution to each output bin and visualizes the flow of influence from sequence to outputs, avoiding the loss of info from collapsing the profiles into a single value!

Peter Koo (@pkoo562):

[Postdoc Opportunity] Interested in research at the intersection of AI, regulatory genomics, and plant biology? The Ware Lab and Koo Lab at CSHL are seeking a highly motivated postdoctoral fellow to pioneer this space! - Fully funded: must be a US citizen! - DM or email me your CV. Please RT!

Anshul Kundaje (anshulkundaje@bluesky) (@anshulkundaje):

Check out this latest proof-of-concept regulatory DNA LM (called ARSENAL #GGMU) by Aman Patel that is pretty much the opposite of the current trends in DNALM literature. (Beware: Long thread) 1/

Abdul Muntakim Rafi (@muntakim_rafi):

Excited by this result. Tang et al. (Koo lab) did amazing work evaluating DNA-LM embeddings, where they found little to no benefit in using DNA-LM embeddings across multiple tasks. This one seems to have worked.

Maria Brbic (@mariabrbic):

Are neural nets across modalities really converging to the same representation as they scale, as the Platonic Representation Hypothesis suggests? We show that common representational similarity metrics are confounded by network width & depth. We propose a permutation-based