exXross (@ex_xross)'s Twitter Profile
exXross

@ex_xross

ex researcher post human era \r&t

reposts and thoughts that are helpful to me

web3.0, dao, rm, meta-narrative

ID: 945374736246296576

Joined: 25-12-2017 19:24:36

3.3K Tweets

92 Followers

1.1K Following

David 🇺🇸 (@david_eng_mba)'s Twitter Profile Photo

IGV leads BTC by ~2 days with the strongest significance in the set. IGV moves first because it’s where institutional risk is repriced first; BTC reacts next as the higher-beta liquidity asset.

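
A claim like "IGV leads BTC by ~2 days" is usually checked with a lagged cross-correlation of the two return series. A minimal sketch on synthetic data (the `lead_lag_corr` helper and the series are illustrative, not the author's actual analysis):

```python
import numpy as np

def lead_lag_corr(x, y, max_lag=5):
    """Correlation of x(t-k) with y(t) for each lag k.

    A positive peak at k > 0 suggests x leads y by k periods.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k > 0:        # shift x back: does x lead y by k?
            a, b = x[:-k], y[k:]
        elif k < 0:      # shift y back: does y lead x by -k?
            a, b = x[-k:], y[:k]
        else:
            a, b = x, y
        out[k] = float(np.corrcoef(a, b)[0, 1])
    return out

# Toy example: y is a noisy copy of x delayed by 2 steps.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 2) + 0.1 * rng.standard_normal(500)
corrs = lead_lag_corr(x, y, max_lag=5)
best = max(corrs, key=corrs.get)
print(best)  # -> 2: the peak correlation is at a 2-period lead
```

In a real study one would use log returns, not prices, and test the peak's significance (e.g. via Granger causality) rather than eyeballing the maximum.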
Chayenne Zhao (@genai_is_real)'s Twitter Profile Photo

We’re obsessing over trillions of parameters while nature solved self-replication with a 45-nucleotide bootloader. This QT45 ribozyme is basically the ultimate Quine in biological assembly. It’s a 45-token sequence that serves as both the compiler and the source code. While the
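
The "Quine" the tweet invokes is a program whose output is its own source code: the text is both the data and the template that reproduces it. A minimal Python example of the idea (illustrative only, no connection to the ribozyme chemistry):

```python
# A minimal quine: the string s is simultaneously data and, through a
# single format operation, the template that rebuilds the program text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints `s = 's = %r\nprint(s %% s)'` followed by `print(s % s)`, i.e. the program's own (comment-free) source.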

AnirbanBandyopadhyay (@anirbanbandyo)'s Twitter Profile Photo

Self-replicating organic molecules will be the new-age hardware: organic processors running on almost no energy, removing the energy deadlock we have for LLMs. 7 GW by 2027 and 30 GW by 2030 is not easy.

Yacine Mahdid (@yacinelearning)'s Twitter Profile Photo

it’s stuff like this that gets me hyped up for the future of deep learning, because we barely have a clue of what we are doing

Jen Zhu (@jenzhuscott)'s Twitter Profile Photo

This is why Andrej Karpathy will go into the history books as one of the most consequential minds in AI of our time. 243 lines of ruthless compression, yet a FULL training + inference loop for an autoregressive transformer. I feel this is also such a genius, quiet defiance of the “AI is
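
For scale, the shape of a "full training + inference loop" for an autoregressive model can be shown with something far cruder than a transformer: a character bigram table. This toy sketch is only an illustration of the loop's structure, not Karpathy's code:

```python
import random

def train(text):
    """'Training': count which character follows which."""
    model = {}
    for a, b in zip(text, text[1:]):
        model.setdefault(a, []).append(b)
    return model

def sample(model, start, n, seed=0):
    """'Inference': autoregressively extend from the last character."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:          # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return "".join(out)

model = train("hello world")
print(sample(model, "h", 5))
```

A transformer replaces the lookup table with a learned conditional distribution over the whole context, but the train-then-autoregressively-sample loop is the same.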

TuringPost (@theturingpost)'s Twitter Profile Photo

The dark side of reinforcement learning. Olive Song, senior researcher at MiniMax (official), on RL models that try to hack rewards and why alignment fails in practice. This conversation is an inside look at how Chinese AI labs move fast: testing new models overnight, debugging

LiteFold (@try_litefold)'s Twitter Profile Photo

Announcing Rosalind, the most versatile AI Co-Scientist for computational biology and therapeutics research. Giving every biologist their own frontier research lab. Make every experiment count. It's live. Links in the comments.

Yuri Milner (@yurimilner)'s Twitter Profile Photo

In biology, fundamental systems are still coming to light. Three years ago the Breakthrough Prize recognized the discovery of liquid condensates, a new system of cellular organization. Now it turns out they enable a previously unknown mode of electrical transmission inside cells.

Dr. Hugh Bitt (@cat_states)'s Twitter Profile Photo

Scott Aaronson, a leading mind in quantum computing, has weighed in: "serious work" and "entirely plausible." His main concern? The error-correcting codes in the paper (LDPC codes) require "wildly nonlocal measurements". This makes them harder to engineer than surface codes,

Bo Wang (@bowang87)'s Twitter Profile Photo

A Chinese hardware team just mass-democratized AI agents. They took a 430,000-line AI assistant that needs a $599 Mac Mini and 1GB of RAM — and rewrote it in Go so it runs on a $9.9 dev board with less than 10MB of memory. Boot time: from 500 seconds to 1 second. Cost: from

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

Terence Tao: AI isn’t hype anymore in math discovery. Terence Tao, one of the greatest living mathematicians, explains in his new lecture how AI and professional human mathematicians are now complementary. "There has been a really visible increase in capability. It is not

Davide Paglieri (@paglieridavide)'s Twitter Profile Photo

🧬 New paper from my internship at Google DeepMind We introduce Persona Generators: functions that generate diverse synthetic populations for arbitrary contexts. We use AlphaEvolve to optimize the generator code, hill-climbing on diversity metrics — not just likelihood —

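
Hill-climbing on a diversity metric can be sketched in miniature. Here a toy "generator" has one integer parameter, the metric is the fraction of unique outputs, and the loop keeps only mutations that raise it; the `generate` function and mutation rule are illustrative stand-ins, not the paper's method:

```python
def diversity(pop):
    """Fraction of unique items in a generated population."""
    return len(set(pop)) / len(pop)

def generate(v, k):
    """Toy generator: k outputs drawn by cycling a vocabulary of size v."""
    return [i % v for i in range(k)]

def hill_climb(v=1, k=50, steps=100):
    """Greedy hill climb: mutate the generator, keep changes that raise diversity."""
    score = diversity(generate(v, k))
    for _ in range(steps):
        cand = v + 1                       # mutation: enlarge the vocabulary
        cand_score = diversity(generate(cand, k))
        if cand_score > score:             # select on the diversity metric, not likelihood
            v, score = cand, cand_score
    return v, score

print(hill_climb())  # -> (50, 1.0): climbs until every output is unique
```

In the paper the mutations are edits to generator code proposed by AlphaEvolve rather than a single numeric parameter, but the select-on-diversity loop is the part this sketch shows.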
Sumner L Norman (@sumnerln)'s Twitter Profile Photo

“Brains are not just learners; they are architectures of internal teachers. We should try to find those teachers in the brain — and learn from them.” AI may have more to learn from biological intelligence. Great read from Adam Marblestone highlighting work from Steven Byrnes