Eric Hartford (@cognitivecompai)'s Twitter Profile
Eric Hartford

@cognitivecompai

We make AI models Dolphin and Samantha
BTC 3ENBV6zdwyqieAXzZP2i3EjeZtVwEmAuo4
ko-fi.com/erichartford

ID: 2854214132

Link: https://erichartford.com · Joined: 13-10-2014 10:22:51

8.8K Tweets

16.16K Followers

518 Following

Mistral AI (@mistralai)'s Twitter Profile Photo

Introducing Mistral Small 3.2, a small update to Mistral Small 3.1 to improve:  

- Instruction following: Small 3.2 is better at following precise instructions
- Repetition errors: Small 3.2 produces fewer infinite generations or repetitive answers
- Function calling: Small
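The third item above is cut off; it refers to tool/function calling. Below is a minimal sketch of function calling with a Mistral Small checkpoint through the Hugging Face transformers chat template; the model ID, the get_weather tool, and the prompt are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of tool/function calling with a Mistral Small checkpoint via
# Hugging Face transformers. The model ID and the get_weather tool are assumed
# for illustration; substitute the actual Small 3.2 checkpoint you use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"  # assumed ID, verify on the hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical tool, described as a JSON schema the chat template can render.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]

# The chat template injects the tool definitions so the model can emit a structured tool call.
input_ids = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```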
Bindu Reddy (@bindureddy)'s Twitter Profile Photo

AI, not humans, is going to choose the programming languages of the future.

It will be
- english
- python 🐍
- typescript
- rust

Primarily because AI is pretty good at them

Eric Hartford (@cognitivecompai)'s Twitter Profile Photo

The Hessian of multipartite entanglement defines spacetime geometry, while large-N gauge fluctuations of the underlying error-correcting code reproduce General Relativity with emergent dark energy.

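Read literally, the first clause says the metric is built from second derivatives of an entanglement measure over the parameters of the state. A hedged sketch of that clause in formula form; the entanglement measure S_E, the moduli lambda^mu, and the constant kappa are illustrative choices, not anything specified in the post.

```latex
% Illustrative sketch only: spacetime metric as the Hessian of an
% entanglement measure S_E over state parameters \lambda^\mu.
% The choice of S_E and the normalization \kappa are assumptions.
g_{\mu\nu}(\lambda) \;\propto\; \kappa\,
  \frac{\partial^{2} S_{E}(\lambda)}{\partial \lambda^{\mu}\,\partial \lambda^{\nu}}
```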
𝚐π”ͺ𝟾𝚑𝚑𝟾 (@gm8xx8) 's Twitter Profile Photo

π‘²π’Šπ’Žπ’Š-𝑹𝒆𝒔𝒆𝒂𝒓𝒄𝒉𝒆𝒓 is a fully autonomous agent trained via end-to-end RL. It executes ~23 reasoning steps and explores 200+ URLs per task. Results: - 26.9% Pass@1 on Humanity’s Last Exam (↑ from 8.6% zero-shot) - 69% Pass@1 on xbench-DeepSearch (avg of 4 runs),

π‘²π’Šπ’Žπ’Š-𝑹𝒆𝒔𝒆𝒂𝒓𝒄𝒉𝒆𝒓 is a fully autonomous agent trained via end-to-end RL. It executes ~23 reasoning steps and explores 200+ URLs per task.

Results:
- 26.9% Pass@1 on Humanity's Last Exam (↑ from 8.6% zero-shot)
- 69% Pass@1 on xbench-DeepSearch (avg of 4 runs),
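As a rough illustration of what an agent that interleaves reasoning steps with web exploration can look like, here is a minimal loop in Python. The llm, search_web, and fetch_url callables, the prompt format, and the budgets are hypothetical; this is not Kimi-Researcher's actual implementation.

```python
# Hypothetical sketch of an autonomous research-agent loop: the model alternates
# reasoning steps with web exploration until it commits to a final answer.
# `llm`, `search_web`, and `fetch_url` are assumed helpers, not Kimi's real API.
from dataclasses import dataclass, field

MAX_STEPS = 23          # order of magnitude reported for Kimi-Researcher
MAX_URLS = 200          # rough URL budget per task

@dataclass
class AgentState:
    task: str
    notes: list[str] = field(default_factory=list)
    visited_urls: set[str] = field(default_factory=set)

def research(task: str, llm, search_web, fetch_url) -> str:
    state = AgentState(task)
    for step in range(MAX_STEPS):
        # The model decides the next action from the task and accumulated notes.
        action = llm(f"Task: {task}\nNotes so far: {state.notes}\n"
                     "Reply with SEARCH:<query>, READ:<url>, or ANSWER:<text>.")
        if action.startswith("ANSWER:"):
            return action.removeprefix("ANSWER:").strip()
        if action.startswith("SEARCH:"):
            for url in search_web(action.removeprefix("SEARCH:").strip()):
                if len(state.visited_urls) < MAX_URLS:
                    state.visited_urls.add(url)
        elif action.startswith("READ:"):
            url = action.removeprefix("READ:").strip()
            state.notes.append(fetch_url(url))
    # Fall back to a best-effort answer if the step budget is exhausted.
    return llm(f"Task: {task}\nNotes: {state.notes}\nGive your best final answer.")
```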
Omar Sanseviero (@osanseviero)'s Twitter Profile Photo

Welcome Magenta RealTime: open model for music generation 🎉
- 800 million parameter model
- Run in free-tier Colaboratory
- Fine-tuning and tech report coming soon
hf.co/google/magenta…
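A minimal sketch of fetching the released checkpoint from the Hugging Face Hub; the repo ID below is a guessed completion of the truncated link above, so confirm it on the model page before relying on it.

```python
# Hypothetical sketch: download the Magenta RealTime checkpoint locally.
# The repo_id is assumed (the link in the post is truncated); verify it on hf.co.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="google/magenta-realtime")  # assumed repo ID
print(f"Checkpoint files downloaded to: {local_dir}")
```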

Sakana AI (@sakanaailabs)'s Twitter Profile Photo

Introducing Reinforcement-Learned Teachers (RLTs): Transforming how we teach LLMs to reason with reinforcement learning (RL).

Blog: sakana.ai/rlt
Paper: arxiv.org/abs/2506.08388

Traditional RL focuses on "learning to solve" challenging problems with expensive LLMs and

Wes Roth (@wesrothmoney)'s Twitter Profile Photo

AI just flipped the script on how we teach models to think.

Instead of training massive LLMs to "solve" problems, Sakana AI just dropped a method where tiny models teach by explaining solutions, and outperform giants like DeepSeek R1.

A 7B model trained a 32B student better
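A rough sketch of the teacher-student setup described above: a small teacher is given both the question and its solution and is rewarded only for explanations that measurably improve a student's answers. The teacher/student wrappers, reward function, and update hook are illustrative assumptions, not Sakana AI's published RLT code.

```python
# Illustrative sketch of a Reinforcement-Learned-Teacher-style loop, assuming
# generic `teacher` and `student` model wrappers. Not Sakana AI's actual code.

def teacher_reward(question, solution, explanation, student, eval_fn):
    """Reward the teacher by how much its explanation improves the student."""
    # Student answers cold, then again after reading the teacher's explanation.
    baseline = eval_fn(student.answer(question), solution)
    guided = eval_fn(
        student.answer(f"{question}\n\nHint from teacher:\n{explanation}"), solution)
    return guided - baseline  # positive only if the explanation actually helped

def rlt_training_step(batch, teacher, student, eval_fn, rl_update):
    """One RL step: the teacher sees question AND solution and learns to explain."""
    rollouts = []
    for question, solution in batch:
        # Unlike "learning to solve", the teacher is handed the solution up front
        # and only has to produce a useful step-by-step explanation of it.
        explanation = teacher.generate(
            f"Question: {question}\nSolution: {solution}\nExplain step by step:")
        reward = teacher_reward(question, solution, explanation, student, eval_fn)
        rollouts.append((explanation, reward))
    # Any policy-gradient style update over the collected rollouts.
    rl_update(teacher, rollouts)
```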