Daniel Bear (@recursus) 's Twitter Profile
Daniel Bear

@recursus

Director of AI Research @noetik_ai.
Biology, Neuroscience, Evolution, AI, & dad things.

ID: 254337109

Joined: 19-02-2011 02:57:57

3.3K Tweets

1.1K Followers

1.1K Following

Ron Alfa (@ron_alfa) 's Twitter Profile Photo

Announcing OCTO-VirtualCell (vc), a multi-scale, multimodal transformer trained to predict gene expression for a virtual cell in cellular contexts within patient tissue samples. Complete with the Celleporter demo app to explore the data! 1/

Stephanie Chan (@scychan_brains) 's Twitter Profile Photo

Devastatingly, we have lost a bright light in our field. Felix Hill was not only a deeply insightful thinker -- he was also a generous, thoughtful mentor to many researchers. He majorly changed my life, and I can't express how much I owe to him. Even now, Felix still has so much

Daniel Bear (@recursus) 's Twitter Profile Photo

Looking forward to reading this. Fitting to data doesn't produce understanding (physical or otherwise). It may be necessary, but it's not sufficient.

Sasha Rush (@srush_nlp) 's Twitter Profile Photo

Post-mortem after Deepseek-r1's killer open o1 replication. We had speculated 4 different possibilities of increasing difficulty (G&C, PRM, MCTS, LtS). The answer is the best one! It's just Guess and Check.

Michael Eisen (@mbeisen) 's Twitter Profile Photo

Biology needs its Moneyball. Instead of trying to figure out how to use big data and fancy models to recreate the old way of thinking, we should break completely free and try to build something fundamentally new.

Charlie Marsh (@charliermarsh) 's Twitter Profile Photo

Okay, this is really cool... Ray now includes first-class uv support, as of the latest release. So you can `uv run --with emoji main.py` and all the nodes in your cluster will get the dependencies they need, powered by uv.

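For context, the command in the tweet above can be sketched as a shell session. The script name `main.py` and the `emoji` package are taken from the tweet itself; the second invocation with pinned versions is a hypothetical variant, and how the dependencies propagate to cluster nodes depends on Ray's release, not on anything shown here:

```shell
# Declare an extra dependency at invocation time; uv resolves it into an
# ephemeral environment before running the script.
uv run --with emoji main.py

# Illustrative variant: multiple extras, with a version constraint.
uv run --with 'emoji>=2.0' --with requests main.py
```

The appeal is that the dependency declaration lives on the command line (or in inline script metadata) rather than in a pre-built environment, which is what makes per-node propagation tractable.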
Aran Nayebi (@aran_nayebi) 's Twitter Profile Photo

Check out our new work exploring how to make robots sense touch more like our brains! Surprisingly, ConvRNNs aligned best with mouse somatosensory cortex and even passed the NeuroAI Turing Test on current neural data. We also developed new tactile-specific augmentations for

Honglin Chen (@honglin_c) 's Twitter Profile Photo

When technology speaks with warmth and flow, it goes beyond feeling like a tool and starts feeling like a human friend. When Advanced Voice was launched, I remember being impressed by how good it sounded. I never imagined that nine months later, as my first project since

Ron Alfa (@ron_alfa) 's Twitter Profile Photo

Today we announce an exciting collab with Agenus ($AGEN) to deploy NOETIK virtual cell foundation models to support predictive biomarker development for BOT/BAL. AI / foundation models have redefined preclinical. Today, we begin a new era where AI will impact clinical

Pramod RT/ಪ್ರಮೋದ್ ರಾ ತಾ (@pramodrt9) 's Twitter Profile Photo

Thrilled to announce our new publication titled 'Decoding predicted future states from the brain's physics engine' with Elizabeth Mieczkowski, Cyn X. Fang, Nancy Kanwisher @[email protected], and Josh Tenenbaum. science.org/doi/full/10.11… (1/n)

Daniel Bear (@recursus) 's Twitter Profile Photo

Great post on the neural logic of NOETIK — a bold bet on how to find the right drug for the right patient. And now lucky to have owl on board!

Daniel Bear (@recursus) 's Twitter Profile Photo

Super optimistic about this line of work making pure vision foundation models (no language, no labels) more GPT-like. No need to fine-tune if you can prompt the model to do any task zero-shot, and I strongly believe most of the information required to do so is in the raw data.

Tyler Zhu (@tyleryzhu) 's Twitter Profile Photo

All this talk about world models but how strong are their perception abilities really? Can they track w/ occlusions, reason over 1hr+ videos, or predict physical scenarios? Test your models in the 3rd Perception Test Challenge at #ICCV2025 w/ prizes up to 50k EUR! DDL: 6 Oct 25

Daniel Bear (@recursus) 's Twitter Profile Photo

Amazing to see all the things Stanford NeuroAI Lab is doing with counterfactuals + a single "pure" vision foundation model, LRAS. Self-supervised segmentation is my favorite. It gets at a deep philosophical question: what is an object, anyway?

Aran Nayebi (@aran_nayebi) 's Twitter Profile Photo

🚀 New Open-Source Release! PyTorchTNN 🚀 A PyTorch package for building biologically-plausible temporal neural networks (TNNs)—unrolling neural network computation layer-by-layer through time, inspired by cortical processing. PyTorchTNN naturally integrates into the