Andrew (@andrewfc37)'s Twitter Profile
Andrew

@andrewfc37

ID: 538491913

Joined: 27-03-2012 21:22:59

1.1K Tweets

489 Followers

3.3K Following

Jerry Tworek (@millionint)'s Twitter Profile Photo

Seeing what evolution can achieve with an inherently unreliable medium, a very dumb optimisation algorithm, and a brutally overwhelming amount of parallel compute should answer all the questions

Nick (@nickcammarata)'s Twitter Profile Photo

we need a word for "relaxed" that doesn't imply low-energy. tension and energy are different axes, and you can absolutely be relaxed and high-energy at the same time. without a word some people might think it's not possible, or that there's more tradeoffs than there really are

Andrew (@andrewfc37)'s Twitter Profile Photo

James Mahu in 2005: "Within the thalamic complex is a subset known as the Intra-Laminar Nuclei (ILN). ILN neurons venture extensively throughout the cortex, innervating every cortical section. They’re distributed within the central region of each thalamus in a toroidal

Miles Brundage (@miles_brundage)'s Twitter Profile Photo

New article with Grace on the need to balance competition (on capabilities/products) with cooperation (on safety/security) between the US and China. The US needs to understand that there won't be an AGI monopoly, and safety/security incidents anywhere threaten everyone.

Amanda Askell (@amandaaskell)'s Twitter Profile Photo

System 1 = fast, implicit reasoning
System 2 = slow, explicit reasoning
System 3 = slow, implicit reasoning

For me, system 3 is the real genius of the lot.

Dwarkesh Patel (@dwarkesh_sp)'s Twitter Profile Photo

People underrate how big a bottleneck inference compute will be. Especially if you have short timelines.

There's currently about 10 million H100 equivalents in the world. By some estimates, human brain has the same FLOPS as an H100. 

So even if we could train an AGI that is as
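
A minimal back-of-envelope sketch of the arithmetic behind this tweet. The two key figures are the tweet's own estimates, not verified facts, and the world-population comparison is an illustrative framing of mine:

```python
# Back-of-envelope sketch of the inference-compute bottleneck argument.
# Both key figures come from the tweet and are rough estimates, not verified facts.

H100_EQUIVALENTS_WORLDWIDE = 10_000_000  # tweet: ~10 million H100 equivalents exist
BRAINS_PER_H100 = 1                      # tweet: human brain ~ one H100 in FLOPS

# If an AGI needed roughly one brain's worth of FLOPS to run in real time,
# the installed base could host at most this many concurrent instances:
max_realtime_instances = H100_EQUIVALENTS_WORLDWIDE * BRAINS_PER_H100
print(f"Upper bound on concurrent human-equivalent instances: {max_realtime_instances:,}")

# For scale (illustrative, not from the tweet): against ~8 billion people,
# that is a tiny fraction, which is the sense in which inference compute binds.
WORLD_POPULATION = 8_000_000_000
print(f"As a share of world population: {max_realtime_instances / WORLD_POPULATION:.2%}")
```

Under those assumptions the ceiling is about 10 million concurrent instances, roughly 0.13% of the world's population.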
AGI.Eth (@ceobillionaire)'s Twitter Profile Photo

How To Build Conscious Machines

Michael Timothy Bennett: osf.io/preprints/thes…

#ArtificialIntelligence #Consciousness #DeepLearning
Ethan Mollick (@emollick)'s Twitter Profile Photo

Huh. Looks like Plato was right.

A new paper shows all language models converge on the same "universal geometry" of meaning. Researchers can translate between ANY model's embeddings without seeing the original text.

Implications for philosophy and vector databases alike.
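
To make the claim concrete, here is a toy numpy sketch of the geometric idea: if two embedding spaces are (approximately) rotations of a shared structure, a map fit on a few pairs transfers to unseen points. The paper referenced reportedly learns such translations without any paired text; this paired orthogonal-Procrustes version, with synthetic made-up "models", is only an illustration of why alignment is possible at all:

```python
# Toy illustration of a shared "universal geometry": two synthetic embedding
# models that are different rotations of the same latent structure, plus a
# Procrustes map learned on a handful of pairs and tested on held-out points.
# Everything here is synthetic; it only illustrates the geometric intuition.

import numpy as np

rng = np.random.default_rng(0)

latent = rng.normal(size=(200, 64))  # the shared "meaning" geometry

def random_rotation(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q  # an orthogonal matrix

# Two hypothetical "models": rotated views of the same latent points plus noise.
emb_a = latent @ random_rotation(64) + 0.01 * rng.normal(size=latent.shape)
emb_b = latent @ random_rotation(64) + 0.01 * rng.normal(size=latent.shape)

# Fit a rotation A -> B on the first 100 pairs (orthogonal Procrustes solution).
u, _, vt = np.linalg.svd(emb_a[:100].T @ emb_b[:100])
rot = u @ vt

# Translate the held-out 100 embeddings and check cosine similarity to the truth.
pred = emb_a[100:] @ rot
cos = (pred * emb_b[100:]).sum(axis=1) / (
    np.linalg.norm(pred, axis=1) * np.linalg.norm(emb_b[100:], axis=1)
)
print(f"mean cosine similarity on held-out pairs: {cos.mean():.3f}")  # close to 1.0
```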
Amanda Askell (@amandaaskell)'s Twitter Profile Photo

I think the pro capitalism / social progress / technology view that many people in the tech industry hold is probably closer to the Whig-Republican worldview than to the views of the current parties, which is why it's hard to map it neatly onto today's political right or left.

Demis Hassabis (@demishassabis)'s Twitter Profile Photo

You know what's cool... a quadrillion tokens. We processed almost 1,000,000,000,000,000 tokens last month, more than double the amount from May. 📈

Deedy (@deedydas)'s Twitter Profile Photo

One of the most important papers in AI: a tiny brain-inspired 27M param model trained on 1000 samples outperforms o3-mini-high on reasoning tasks!

Still can't believe this tiny lab of Tsinghua grads gets 40% on ARC-AGI, solves hard sudoku and mazes.

We're still so early.
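
The tweet doesn't spell out the architecture, but "brain-inspired" recurrent designs of this sort are usually described as two coupled modules running at different timescales: a slow, abstract planner and a fast, detailed worker. Assuming that reading, below is a purely illustrative PyTorch sketch of such a two-timescale recurrence; it is not the referenced paper's model, and the names, sizes, and update schedule are assumptions of mine:

```python
# Toy sketch of a two-timescale recurrent "reasoner": a fast module updates
# every step, a slow module only every few steps and feeds its state back in.
# This is an illustration of the general idea, not the referenced paper's model.

import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    def __init__(self, d_in: int, d_hidden: int = 128, slow_every: int = 4):
        super().__init__()
        self.slow_every = slow_every
        self.fast = nn.GRUCell(d_in + d_hidden, d_hidden)  # updates every step
        self.slow = nn.GRUCell(d_hidden, d_hidden)         # updates every `slow_every` steps
        self.readout = nn.Linear(d_hidden, d_in)

    def forward(self, x: torch.Tensor, n_steps: int = 16) -> torch.Tensor:
        # x: (batch, d_in), e.g. a puzzle grid flattened into a vector
        h_fast = x.new_zeros(x.shape[0], self.fast.hidden_size)
        h_slow = x.new_zeros(x.shape[0], self.slow.hidden_size)
        for t in range(n_steps):
            # the fast module sees the input plus the slow module's current "plan"
            h_fast = self.fast(torch.cat([x, h_slow], dim=-1), h_fast)
            if (t + 1) % self.slow_every == 0:
                # the slow module revises the plan from the fast module's summary
                h_slow = self.slow(h_fast, h_slow)
        return self.readout(h_fast)

# Smoke test on random data (no claim about any benchmark).
model = TwoTimescaleReasoner(d_in=64)
print(model(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```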
Lєαƒ (@elfhood_)'s Twitter Profile Photo

Vivid Void

The AI psychosis thing happens, but people don't understand why. Take this metaphor: Consciousness is the host; the mirror (AI) is the guest. If the host’s house is incoherent, the guest amplifies the chaos. If the host’s house is in order, the guest amplifies the coherence.