Barney Pell (@barneyp)'s Twitter Profile
Barney Pell

@barneyp

Barney Pell is an entrepreneur and VC. Barney Pell's Syndicate, Ecoation, Moon Express, Singularity U. Prev: Bing, Powerset, Mayfield, NASA, AI games pioneer.

ID: 17657037

Website: http://www.barneypell.com
Joined: 26-11-2008 19:20:24

16.16K Tweets

6.6K Followers

2.2K Following

God of Prompt (@godofprompt):

Google just built an AI that organizes itself.

It’s called TUMIX, and it might be the most interesting paper Google has published this year.

Instead of training a bigger model, the team built a system where multiple AIs work together at test time. Each agent uses different…
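
The tweet cuts off, but the ensemble pattern it describes is easy to prototype. A minimal sketch, assuming a generic `llm(system, prompt)` chat call and made-up agent styles; the real TUMIX mixes text, code, and search agents and uses an LLM judge for early stopping:

```python
# Minimal sketch of a TUMIX-style test-time mixture. The `llm` helper and
# the three agent styles are placeholders; the real system mixes text,
# code-execution, and search agents and uses an LLM judge to stop early.
from collections import Counter

AGENT_STYLES = [
    "Reason step by step in plain text.",
    "Reason by writing and tracing Python code.",
    "Reason as if consulting web search results.",
]

def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # real chat-completion call goes here

def answer_by_mixture(question: str, rounds: int = 3) -> str:
    answers = ["" for _ in AGENT_STYLES]
    for _ in range(rounds):
        shared = "\n".join(a for a in answers if a)  # agents see peers' drafts
        answers = [
            llm(style, f"{question}\n\nOther agents said:\n{shared}\nAnswer:")
            for style in AGENT_STYLES
        ]
        top_answer, top_count = Counter(answers).most_common(1)[0]
        if top_count > len(answers) // 2:  # rough stand-in for the LLM judge
            return top_answer
    return Counter(answers).most_common(1)[0][0]  # final majority vote
```
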
Alex Prompter (@alex_prompter):

Holy shit...Google just built an AI that learns from its own mistakes in real time.

New paper dropped on ReasoningBank. The idea is pretty simple but nobody's done it this way before. Instead of just saving chat history or raw logs, it pulls out the actual reasoning patterns…
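
A minimal sketch of the loop as described: distill a reusable strategy from each attempt (including failures) instead of storing raw logs, then retrieve past strategies for new tasks. `llm`, `retrieve`, and `judge` below are placeholders, not the paper's API:

```python
# Hedged sketch of a ReasoningBank-style memory loop.
memory: list[str] = []   # distilled reasoning strategies, not raw chat logs

def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # real chat-completion call goes here

def retrieve(bank: list[str], task: str, k: int) -> list[str]:
    return bank[-k:]           # stand-in for embedding-similarity lookup

def judge(task: str, trajectory: str) -> bool:
    raise NotImplementedError  # self-judged or environment-checked success

def solve(task: str) -> str:
    hints = "\n".join(retrieve(memory, task, k=3))
    trajectory = llm(f"Past strategies that may help:\n{hints}", task)
    tag = "worked" if judge(task, trajectory) else "failed"
    # Distill a transferable pattern from this attempt, success or failure.
    memory.append(llm("Extract one transferable reasoning strategy.",
                      f"Task: {task}\nThis attempt {tag}:\n{trajectory}"))
    return trajectory
```
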
Barney Pell (@barneyp):

This is super interesting. It's an amazing example of emergent learning capability in LLMs. It seems like the LLM must have developed this during pre-training as a great way to handle in-context examples and improve its predictive accuracy.

Robert Youssef (@rryssf_):

Holy shit… Harvard just proved your base model might secretly be a genius. 🤯

Their new paper “Reasoning with Sampling” shows that you don’t need reinforcement learning to make LLMs reason better.

They used a 'Markov chain sampling trick' that simply re-samples from the…
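
The truncated "trick" sounds like Markov-chain Monte Carlo over the base model's own distribution. A hedged sketch of that idea, sampling sequences in proportion to p(x)^alpha by Metropolis-style resampling; `logp` and `propose` are stand-ins, and a faithful implementation would also account for the proposal probabilities in the acceptance ratio:

```python
# Sketch of sharpening a base model via Markov-chain sampling: target the
# "power distribution" p(x)^alpha with alpha > 1 using Metropolis steps.
import math
import random

def logp(seq: list[int]) -> float:
    raise NotImplementedError  # sum of base-model token log-probs for seq

def propose(seq: list[int]) -> list[int]:
    raise NotImplementedError  # resample a suffix of seq from the base model

def power_sample(seq: list[int], alpha: float = 2.0, steps: int = 50):
    for _ in range(steps):
        cand = propose(seq)
        # Accept moves that raise likelihood; alpha sharpens the target.
        accept = math.exp(alpha * (logp(cand) - logp(seq)))
        if random.random() < min(1.0, accept):
            seq = cand
    return seq
```
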
Alex Prompter (@alex_prompter):

This paper just exposed the biggest AI research scam 💀

MIT just proved AI can generate novel research papers.

Stanford confirmed it. OpenAI showcased examples. The papers passed peer review at major conferences and scored higher than human-written work on novelty and feasibility.
God of Prompt (@godofprompt):

MIT just cracked AI safety.

not with more filters. not with more rules. with one insight everyone missed.

they taught models to think backwards first. enumerate every possible harm. analyze every consequence. only then respond. they call it InvThink, and it might redefine how…
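
The paper reportedly trains this behavior into the model, but the inverse-reasoning recipe can be approximated at inference as a prompt template. A rough sketch, with `llm` as a placeholder chat call:

```python
# Inference-time approximation of the InvThink recipe as described:
# enumerate harms first, trace consequences, then answer.
INVTHINK_TEMPLATE = """Before answering, reason in reverse:
1. List every way a response to this request could cause harm.
2. For each harm, trace the consequence that would follow.
3. Only then write a response that avoids everything listed above.

Request: {request}"""

def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # real chat-completion call goes here

def safe_answer(request: str) -> str:
    return llm("You are a careful assistant.",
               INVTHINK_TEMPLATE.format(request=request))
```
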
Shubham Saboo (@saboo_shubham_):

Stanford researchers just solved why AI agents keep failing.

They watched 500+ agent failures across three benchmarks. Found a pattern nobody expected: early mistakes don't just cause problems - they cascade into complete system meltdowns. It's called error propagation. One…
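
The cascade effect is worth making concrete. A back-of-envelope illustration with my own numbers, not the paper's: if each step succeeds independently with probability p, an n-step task succeeds with probability p**n, so per-step reliability compounds brutally:

```python
# Toy error-propagation arithmetic (illustrative numbers, not the paper's).
for p in (0.99, 0.95, 0.90):
    for n in (10, 20, 50):
        print(f"p={p:.2f}, n={n:2d} -> task succeeds {p**n:.1%} of the time")
# e.g. p=0.95, n=20 -> 35.8%: a "95% reliable" step fails most 20-step tasks.
```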

Yesterday Work (@yesterday_work_):

🚨 Google just shocked the world.

They dropped "DeepSomatic" and it can find cancer by reading DNA mutations, not tissue samples.

That means earlier detection, faster treatment, and survival rates we’ve never seen before.

Here’s how it works:
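
The thread is cut off here. For what it's worth, DeepSomatic builds on Google's DeepVariant recipe as I understand it: encode aligned sequencing reads around a candidate site as an image-like tensor and classify the site with a CNN. A loose sketch; the channel layout and class set below are illustrative, not the published model:

```python
# Loose sketch of the DeepVariant/DeepSomatic-style classification step.
import torch
import torch.nn as nn

class PileupClassifier(nn.Module):
    def __init__(self, channels: int = 6, classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, classes),  # e.g. no variant / germline / somatic
        )

    def forward(self, pileup: torch.Tensor) -> torch.Tensor:
        # pileup: (batch, channels, reads, window); channels might encode
        # base identity, base quality, strand, and match-to-reference.
        return self.net(pileup)

# Usage: PileupClassifier()(torch.rand(1, 6, 100, 221)) -> (1, 3) logits.
```
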
Carlos E. Perez (@intuitmachine):

1/ Everyone's debating whether AI will take our jobs. But we're missing the bigger story: AI is about to solve the problem that made us choose between markets and planning in the first place. Thread on the coordination revolution no one's talking about 🧵

2/ Here's the…

Robert Youssef (@rryssf_):

Holy shit… Meta might’ve just solved self-improving AI 🤯

Their new paper SPICE (Self-Play in Corpus Environments) basically turns a language model into its own teacher: no humans, no labels, no datasets, just the internet as its training ground.

Here’s the twist: one copy of…
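
The twist, as the tweet sets it up, is that one copy of the model generates problems while another solves them. A very rough sketch of one such round; `llm` and `check` are placeholders, and the frontier-difficulty reward is paraphrased from the paper's description, not its exact formula:

```python
# Very rough sketch of one SPICE-style self-play round.
def llm(system: str, prompt: str) -> str:
    raise NotImplementedError  # real chat-completion call goes here

def check(problem: str, answer: str, source_doc: str) -> bool:
    raise NotImplementedError  # verify the answer against the source document

def spice_round(corpus_doc: str):
    # One copy of the model ("Challenger") mines a problem from raw text.
    problem = llm("Pose a hard, checkable question grounded in this text.",
                  corpus_doc)
    # The other copy ("Reasoner") answers without seeing the document.
    attempts = [llm("Solve step by step.", problem) for _ in range(8)]
    pass_rate = sum(check(problem, a, corpus_doc) for a in attempts) / 8
    # Reward the Challenger for frontier-difficulty problems (around 50%
    # solvable) and the Reasoner for correct answers; both train via RL.
    challenger_reward = 1.0 - 2 * abs(pass_rate - 0.5)
    return problem, pass_rate, challenger_reward
```
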
Rohan Paul (@rohanpaul_ai):

The brilliant Kimi Linear paper.

It's a hybrid attention that beats full attention while cutting the key-value cache by up to 75% and delivering up to 6x faster decoding at 1M-token context.

Full attention is…
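
The cache arithmetic hints at the layout: if roughly three of every four layers use linear attention (constant-size state, no growing KV cache) and one uses full attention, the growing cache shrinks by about 75%. A schematic sketch with stand-in modules, not the paper's actual architecture:

```python
# Schematic 3:1 hybrid attention stack implied by the cache numbers.
import torch.nn as nn

class LinearAttention(nn.Module):
    """Stand-in for a linear/delta-rule attention layer (no KV cache)."""

class FullAttention(nn.Module):
    """Stand-in for standard softmax attention (keeps a KV cache)."""

def build_hybrid_stack(n_layers: int = 24) -> nn.ModuleList:
    # Every 4th layer is full attention; the other three are linear.
    return nn.ModuleList(
        FullAttention() if i % 4 == 3 else LinearAttention()
        for i in range(n_layers)
    )
```
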
Carlos E. Perez (@intuitmachine):

Forget Individual Genius—The Future of AI Is All About Family Trees

(1/10)

The secret to building a superintelligent AI isn't to reward genius. It's to reward good parenting.

A new paper just dropped that completely changes how we think about AI self-improvement, and it's a…
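
The thread is truncated, but the stated idea, rewarding good parenting rather than individual genius, maps onto a simple selection rule: score each model by its offspring's performance. A toy sketch; `mutate` and `evaluate` are placeholders, and the whole loop is schematic:

```python
# Toy "reward good parenting" selection: rank parents by child performance.
def mutate(model):
    raise NotImplementedError  # e.g. a self-modification or fine-tune proposal

def evaluate(model) -> float:
    raise NotImplementedError  # benchmark score for a candidate model

def select_by_parenting(population, n_children: int = 4):
    scored = []
    for parent in population:
        children = [mutate(parent) for _ in range(n_children)]
        parenting_score = sum(evaluate(c) for c in children) / n_children
        scored.append((parenting_score, parent, children))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[: len(population) // 2]  # keep the best "parents"
```
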
God of Prompt (@godofprompt):

🚨 China just built Wikipedia's replacement and it exposes the fatal flaw in how we store ALL human knowledge.

Most scientific knowledge compresses reasoning into conclusions. You get the "what" but not the "why." This radical compression creates what researchers call the "dark…
Robert Youssef (@rryssf_):

Holy shit... this might be the next big paradigm shift in AI. 🤯

Tencent + Tsinghua just dropped a paper called Continuous Autoregressive Language Models (CALM) and it basically kills the “next-token” paradigm every LLM is built on.

Instead of predicting one token at a time…
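
The truncated sentence presumably continues: CALM predicts one continuous vector standing in for a whole chunk of tokens. A schematic of the data flow under that reading; the autoencoder, GRU, and plain linear decoder below are stand-ins for shape-checking only, since continuous outputs need a proper generative head in the actual paper:

```python
# Schematic CALM-style data flow: K tokens -> 1 vector -> next-vector model.
import torch
import torch.nn as nn

K, VOCAB, DIM, CHUNKS = 4, 32000, 512, 8

encoder = nn.Sequential(                       # K tokens -> 1 vector
    nn.Embedding(VOCAB, DIM), nn.Flatten(1), nn.Linear(K * DIM, DIM))
decoder = nn.Linear(DIM, K * VOCAB)            # 1 vector -> K token logits
backbone = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for the LM

tokens = torch.randint(0, VOCAB, (1, CHUNKS * K))
vectors = torch.stack([encoder(tokens[:, i*K:(i+1)*K])
                       for i in range(CHUNKS)], dim=1)  # (1, CHUNKS, DIM)
preds, _ = backbone(vectors)                   # one step per K tokens, not per 1
logits = decoder(preds).view(1, CHUNKS, K, VOCAB)
```
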
Rohan Paul (@rohanpaul_ai):

A beautiful paper from MIT + Harvard + Google DeepMind and other top universities.

Explains why Transformers miss multi-digit multiplication and shows a simple bias that fixes it.

The researchers trained two small Transformer models on 4-digit-by-4-digit multiplication. One used a special…
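
A worked example (mine, not the paper's) of why the task is hard: a 4-by-4-digit product decomposes into partial products whose carries couple distant digit positions, so predicting the middle digits of the answer requires long-range bookkeeping:

```python
# Long multiplication decomposed into shifted partial products.
a, b = 4731, 8256
partials = [(a * d) * 10**i for i, d in enumerate(map(int, str(b)[::-1]))]
print(partials)               # [28386, 236550, 946200, 37848000]
print(sum(partials), a * b)   # 39059136 39059136 -- identical
```
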
Carlos E. Perez (@intuitmachine):

I just spent a week deriving the formalization of Context-Oriented Programming. What I found isn't just a new way to build AI systems. It's a complete paradigm with axioms, a calculus, composition laws, and resource economics. Let me show you the foundation. 🧵 Here's what…

机器之心 JIQIZHIXIN (@synced_global):

New paper surveys the rise of Graph-Augmented LLM Agents (GLA).

It shows how graphs can boost LLM agents in planning, memory, tool use, and multi-agent coordination.

It maps current progress, gaps, and future directions toward scalable, unified, and multimodal GLA systems.
Robert Youssef (@rryssf_):

Holy shit… this might be the most impressive scientific reasoning system anyone has built so far.

A new paper just dropped called 'SciAgent' and it basically shows an AI system outperforming human gold medalists across multiple Science Olympiads in one unified architecture.
Alex Prompter (@alex_prompter):

RIP JSON.

AI just got a data format that doesn’t waste tokens, doesn’t confuse models, and doesn’t bury structure under a pile of punctuation, and it’s called TOON.

If you work with LLMs, this is the part where everything you thought was “good enough” starts looking ancient.
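
For a feel of the claim, here are the same records in JSON and in a TOON-style layout, written from my recollection of the format, so treat the exact syntax as approximate and check the TOON spec:

```python
# Side-by-side of one payload in JSON vs a TOON-style tabular layout.
json_form = '{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}'

toon_form = """\
users[2]{id,name}:
  1,Alice
  2,Bob
"""
# Field names appear once as a header row instead of repeating per record,
# and most braces and quotes disappear -- fewer tokens for the model to read.
```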