Cory Slater (@corslater)'s Twitter Profile
Cory Slater

@corslater

Just out here looking for the looker.

ID: 66848987

Link: http://fivestarspicy.com

Joined: 19-08-2009 00:10:22

1.1K Tweets

404 Followers

2.2K Following

Charles Wang (@charleswangb)'s Twitter Profile Photo

If you think the world model is nothing but action and state pairs, or that modeling physics is merely 'scene generation,' you are clueless as to how this creature operates in the wild👇

Josh Wolfe (@wolfejosh)'s Twitter Profile Photo

Extraordinary paper by Dr. 𝗝𝗼𝗮𝗻𝗮 𝗖. 𝗫𝗮𝘃𝗶𝗲𝗿 + longtime Santa Fe Institute researcher Stuart Kauffman on

ORIGIN OF LIFE 

via auto-catalytic networks (from increasing complexity of combinatorial possibilities of elements > molecules > chemical (auto)catalysis…

full paper via
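
The defining feature of autocatalysis is easy to see in a toy model. Below is a minimal sketch (my own construction for illustration, not code or results from the Xavier and Kauffman paper; all names are mine): a reaction A + X → 2X whose product catalyzes its own formation, giving sigmoidal rather than linear growth.

```python
# Illustrative toy model of autocatalysis (not from the paper): in A + X -> 2X,
# the product X catalyzes its own formation, so dx/dt = k * a * x and growth
# accelerates until the substrate A runs out.
import numpy as np

def simulate_autocatalysis(a0=1.0, x0=1e-3, k=5.0, dt=1e-3, t_max=5.0):
    a, x, traj = a0, x0, []
    for _ in range(int(t_max / dt)):
        rate = k * a * x          # rate proportional to the product itself
        a -= rate * dt            # substrate consumed...
        x += rate * dt            # ...product (and catalyst) produced
        traj.append(x)
    return np.array(traj)

traj = simulate_autocatalysis()   # sigmoidal: slow start, explosive middle, plateau
```
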
Josh Wolfe (@wolfejosh)'s Twitter Profile Photo

Take a look at Lux family co Variant Bio… Partnering with a growing number of tribes, indigenous groups, and local populations for some of the most interesting, as-yet-undiscovered druggable targets from OUTLIER humans with OUTLIER traits in OUTLIER parts of the world…

Gill Verdon (@gillverd)'s Twitter Profile Photo

Fun little paper to appear tonight on the arXiv. 

How to do Hamiltonian Monte Carlo on digital Quantum Computers. 

As physics-based probabilistic ML accelerators are on the horizon, it's important to test how QCs could try to compete.

Best way to predict future is to invent it.🙂
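
For readers unfamiliar with the classical algorithm the paper moves onto quantum hardware, here is a minimal sketch of standard HMC in plain NumPy (my own illustration, not the paper's quantum construction; function and parameter names are mine): simulate Hamiltonian dynamics with a leapfrog integrator, then apply a Metropolis accept/reject correction.

```python
# Minimal classical Hamiltonian Monte Carlo, for intuition only -- the paper
# concerns running HMC on digital quantum computers, which this does not show.
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=np.random):
    """One HMC step: resample momentum, integrate Hamiltonian dynamics, accept/reject."""
    p = rng.standard_normal(x.shape)              # momentum ~ N(0, I)
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of dx/dt = p, dp/dt = grad log pi(x)
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    # Metropolis correction using H(x, p) = -log pi(x) + |p|^2 / 2
    h_old = -log_prob(x) + 0.5 * p @ p
    h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(h_old - h_new) else x

# Example: sample a standard 2-D Gaussian.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
x, samples = np.zeros(2), []
for _ in range(1000):
    x = hmc_step(x, log_prob, grad_log_prob)
    samples.append(x)
```
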
Kyle Baranko (@kyle__cb)'s Twitter Profile Photo

We in energy should look to aerospace and defense as inspiration on how to defeat a regulatory paradigm that inflates costs.

Rate basing = cost-plus government contracts.

Exiting incumbent incentive structures is the only way to drastically lower costs and make things like

Ben Nowack (@bennbuilds)'s Twitter Profile Photo

Sharing a bit more about Reflect Orbital today. Tristan Semmelhack and I are developing a constellation of revolutionary satellites to sell sunlight to thousands of solar farms after dark. We think sunlight is the new oil and space is ready to support energy infrastructure. This

Cory Slater (@corslater)'s Twitter Profile Photo

This type of dark pattern should be illegal. Shame on you TaxAct for making it impossible to unsubscribe from marketing emails. I tried in multiple browsers with multiple sessions. The only button that worked was "No".

Josh Wolfe (@wolfejosh)'s Twitter Profile Photo

1/ Quick thread 🧵 important new paper 📜 from Santa Fe Institute on the THERMODYNAMICS of COMPUTATION...

Ever wondered why your gadgets get warm after using them for a while? As with cells🧫, brains🧠 and laptops💻––it's all about energy use and heat🔥....
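
The standard quantitative anchor here is Landauer's principle (textbook physics, not a claim drawn from the paper itself): erasing one bit of information must dissipate at least k_B·T·ln 2 of heat. A quick arithmetic sketch:

```python
# Landauer bound: minimum heat dissipated per erased bit, E = k_B * T * ln(2).
# (Standard physics, included for context; not a figure from the paper.)
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
landauer_bound = k_B * T * math.log(2)
print(f"{landauer_bound:.2e} J per bit erased")   # ~2.87e-21 J
```
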
Alex Rives (@alexrives)'s Twitter Profile Photo

We have trained ESM3 and we're excited to introduce EvolutionaryScale. ESM3 is a generative language model for programming biology. In experiments, we found ESM3 can simulate 500M years of evolution to generate new fluorescent proteins. Read more: evolutionaryscale.ai/blog/esm3-rele…

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

The TTT layer, as a new mechanism to compress information and model memory, can be a simple replacement for the self-attention layer in Transformer.

Recall Transformer explicitly stores all input tokens. If you believe that training neural networks is a good way to compress
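
A minimal sketch of the idea as I read it (illustrative only, not the authors' code; names and the inner loss are my assumptions): the layer's hidden state is the weight matrix of a small inner model, and "processing" a token means taking a gradient step on a self-supervised loss, so the sequence is compressed into weights rather than stored like attention's KV cache.

```python
# Sketch of a test-time-training (TTT) style layer: the hidden state is the
# weight matrix W of an inner linear model, updated by one SGD step per token.
import numpy as np

def ttt_linear_layer(tokens, d, lr=0.1):
    """tokens: (seq_len, d). Returns outputs of the inner model as it learns online."""
    W = np.zeros((d, d))                 # inner model weights = compressed memory
    outputs = []
    for x in tokens:
        # Self-supervised inner loss: reconstruct the token, L = 0.5 * ||W x - x||^2
        err = W @ x - x
        W -= lr * np.outer(err, x)       # one SGD step: dL/dW = err x^T
        outputs.append(W @ x)            # "read" from the updated memory
    return np.stack(outputs)

out = ttt_linear_layer(np.random.randn(16, 8), d=8)
```
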
Ali Behrouz (@behrouz_ali)'s Twitter Profile Photo

Attention has been the key component for most advances in LLMs, but it can’t scale to long context. Does this mean we need to find an alternative? 

Presenting Titans: a new architecture with attention and a meta in-context memory that learns how to memorize at test time. Titans
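
A hedged paraphrase of the announced mechanism (a sketch from the abstract-level description, not the paper's code; names, the recall loss, and hyperparameters are my assumptions): a memory whose test-time update is driven by a momentum-smoothed "surprise" signal, with a decay term so it can also forget.

```python
# Sketch of a Titans-style neural memory: weights M are updated at test time by
# a momentum-smoothed "surprise" (gradient of an associative-recall loss), plus decay.
import numpy as np

def titans_memory(keys, values, lr=0.1, momentum=0.9, decay=0.01):
    d = keys.shape[1]
    M = np.zeros((d, d))                 # long-term memory as a linear map key -> value
    S = np.zeros_like(M)                 # momentum buffer over past surprise
    reads = []
    for k, v in zip(keys, values):
        surprise = np.outer(M @ k - v, k)    # gradient of 0.5*||M k - v||^2 wrt M
        S = momentum * S + surprise          # smooth surprise over time
        M = (1 - decay) * M - lr * S         # forget a little, then memorize
        reads.append(M @ k)
    return np.stack(reads)

reads = titans_memory(np.random.randn(16, 8), np.random.randn(16, 8))
```
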
Active Inference Institute (@inferenceactive)'s Twitter Profile Photo

Awesome work from ReactiveBayes. We took this new notebook example and made it into a Julia script with more visualizations and animations. github.com/docxology/RxIn…

Emmett Shear (@eshear)'s Twitter Profile Photo

The way that OpenAI uses user feedback to train the model is misguided and will inevitably lead to further issues like this one.
Supervised fine-tuning (SFT) on "ideal" responses is simply teaching the model via imitation, which is fine as far as it goes. But it's not enough...
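
Concretely, "teaching via imitation" is just maximum-likelihood training on demonstrated tokens. A minimal sketch of that loss (illustrative of SFT in general, not OpenAI's training code; names are mine):

```python
# SFT-as-imitation: maximize the likelihood of the "ideal" response's tokens,
# i.e. plain cross-entropy. The model copies what was demonstrated, nothing more.
import numpy as np

def sft_loss(logits, target_ids):
    """logits: (seq_len, vocab); target_ids: (seq_len,) tokens of the ideal response."""
    # Log-softmax over the vocabulary, computed stably
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of the demonstrated tokens
    return -log_probs[np.arange(len(target_ids)), target_ids].mean()

loss = sft_loss(np.random.randn(5, 100), np.array([3, 17, 42, 8, 99]))
```
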
Alexi Gladstone (@alexiglad)'s Twitter Profile Photo

How can we unlock generalized reasoning?

⚡️Introducing Energy-Based Transformers (EBTs), an approach that out-scales (feed-forward) transformers and unlocks generalized reasoning/thinking on any modality/problem without rewards.
TLDR:
- EBTs are the first model to outscale the
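
A hedged sketch of the inference-time loop this framing implies (my reading of energy-based prediction in general, not the authors' code; names and the toy energy are my assumptions): the model assigns an energy to a candidate output and refines it by gradient descent on that energy, so extra "thinking" is simply extra refinement steps.

```python
# Energy-based prediction: instead of one feed-forward pass, refine a candidate
# output by descending a learned energy; more steps = more "reasoning".
import numpy as np

def predict_by_energy_descent(energy_grad, y_init, n_steps=10, step_size=0.1):
    """energy_grad(y) returns dE/dy for a fixed context; lower energy = better y."""
    y = y_init.copy()
    for _ in range(n_steps):
        y -= step_size * energy_grad(y)   # each step is one unit of "thinking"
    return y

# Toy quadratic energy E(y) = 0.5 * ||y - y*||^2 with minimum at y*.
y_star = np.array([1.0, -2.0])
y_hat = predict_by_energy_descent(lambda y: y - y_star, np.zeros(2))
```
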