Eric Hartford (@cognitivecompai) 's Twitter Profile
Eric Hartford

@cognitivecompai

We make AI models Dolphin and Samantha
BTC 3ENBV6zdwyqieAXzZP2i3EjeZtVwEmAuo4
ko-fi.com/erichartford

ID: 2854214132

Link: https://erichartford.com
Joined: 13-10-2014 10:22:51

8.8K Tweets

16.16K Followers

518 Following

Mistral AI (@mistralai) 's Twitter Profile Photo

Introducing Mistral Small 3.2, a small update to Mistral Small 3.1 to improve:  

- Instruction following: Small 3.2 is better at following precise instructions
- Repetition errors: Small 3.2 produces fewer infinite generations or repetitive answers
- Function calling: Small
Bindu Reddy (@bindureddy) 's Twitter Profile Photo

AI, not humans, is going to choose the programming languages of the future. It will be:
- English
- Python 🐍
- TypeScript
- Rust
Primarily because AI is pretty good at them.

Eric Hartford (@cognitivecompai) 's Twitter Profile Photo

The Hessian of multipartite entanglement defines spacetime geometry, while large-N gauge fluctuations of the underlying error-correcting code reproduce General Relativity with emergent dark energy.
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ (@gm8xx8) 's Twitter Profile Photo

๐‘ฒ๐’Š๐’Ž๐’Š-๐‘น๐’†๐’”๐’†๐’‚๐’“๐’„๐’‰๐’†๐’“ is a fully autonomous agent trained via end-to-end RL. It executes ~23 reasoning steps and explores 200+ URLs per task. Results: - 26.9% Pass@1 on Humanityโ€™s Last Exam (โ†‘ from 8.6% zero-shot) - 69% Pass@1 on xbench-DeepSearch (avg of 4 runs),

๐‘ฒ๐’Š๐’Ž๐’Š-๐‘น๐’†๐’”๐’†๐’‚๐’“๐’„๐’‰๐’†๐’“ is a fully autonomous agent trained via end-to-end RL. It executes ~23 reasoning steps and explores 200+ URLs per task.

Results:
- 26.9% Pass@1 on Humanityโ€™s Last Exam (โ†‘ from 8.6% zero-shot)
- 69% Pass@1 on xbench-DeepSearch (avg of 4 runs),
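For context on the Pass@1 figures above: these are per-task success rates, typically averaged over several sampled runs. A minimal sketch of the standard unbiased pass@k estimator used in this style of evaluation (the function name is illustrative; it is not claimed to be Kimi-Researcher's exact harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n attempts of which c are
    correct, solves the task."""
    if n - c < k:
        # Fewer than k incorrect attempts exist, so any k-sample
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the plain fraction of correct attempts,
# which is why Pass@1 is often reported as an average over runs.
```

So a headline "26.9% Pass@1" corresponds to roughly 27 successes per 100 single-attempt runs, and averaging 4 runs (as the xbench-DeepSearch number does) simply smooths the per-run variance.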
Omar Sanseviero (@osanseviero) 's Twitter Profile Photo

Welcome Magenta RealTime: open model for music generation 🎉
- 800 million parameter model
- Run in free-tier Colaboratory
- Fine-tuning and tech report coming soon
hf.co/google/magenta…

Sakana AI (@sakanaailabs) 's Twitter Profile Photo

Introducing Reinforcement-Learned Teachers (RLTs): Transforming how we teach LLMs to reason with reinforcement learning (RL).
Blog: sakana.ai/rlt
Paper: arxiv.org/abs/2506.08388
Traditional RL focuses on "learning to solve" challenging problems with expensive LLMs and

Wes Roth (@wesrothmoney) 's Twitter Profile Photo

AI just flipped the script on how we teach models to think.

Instead of training massive LLMs to "solve" problems, Sakana AI just dropped a method where tiny models teach by explaining solutions, and outperform giants like DeepSeek R1.

A 7B model trained a 32B student better