AI Explained (@aiexplainedyt)'s Twitter Profile
AI Explained

@aiexplainedyt

300k+ YouTube subs, growing a friendly, professional AI networking hub, w/ exclusive videos, podcast and interviews on Patreon. ($7.51/month)

ID: 1617841388745363459

Link: https://www.patreon.com/AIExplained | Joined: 24-01-2023 11:07:20

107 Tweets

11.11K Followers

239 Following

François Chollet (@fchollet)'s Twitter Profile Photo

This is a great video if you need a primer on what's going on with LLMs, the ARC-AGI competition, and the quest to increase generality in AI systems. youtube.com/watch?v=PeSNEX…

Etched (@etched)'s Twitter Profile Photo

Meet Sohu, the fastest AI chip of all time.

With over 500,000 tokens per second running Llama 70B, Sohu lets you build products that are impossible on GPUs. One 8xSohu server replaces 160 H100s.

Sohu is the first specialized chip (ASIC) for transformer models. By specializing,
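
As a sanity check on those figures, here is a quick back-of-envelope sketch. The only inputs are the numbers claimed in the tweet; reading the 500,000 tok/s as a per-server figure is an assumption, since the tweet doesn't say per chip or per server.

```python
# Back-of-envelope check of the claimed Sohu vs. H100 ratio.
# Assumption: the 500,000 tok/s figure is for the whole 8xSohu server.

sohu_server_tps = 500_000     # claimed tokens/sec on Llama 70B
sohu_chips_per_server = 8
h100s_replaced = 160          # claimed H100 equivalence of one 8xSohu server

per_sohu_tps = sohu_server_tps / sohu_chips_per_server   # 62,500 tok/s per chip
implied_h100_tps = sohu_server_tps / h100s_replaced      # ~3,125 tok/s per H100

print(f"Per Sohu chip:    {per_sohu_tps:,.0f} tok/s")
print(f"Implied per H100: {implied_h100_tps:,.0f} tok/s")
print(f"Ratio: {h100s_replaced / sohu_chips_per_server:.0f} H100s per Sohu chip")
```

In other words, the claim amounts to one Sohu chip matching roughly 20 H100s on this workload.
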
AI Explained (@aiexplainedyt)'s Twitter Profile Photo

The Llama 3.1 paper is genuinely incredible, incl. near-perfect predictions of benchmark performance from a given compute budget. Most revealing paper of 2024. Here, though, are the initial results from my SIMPLE bench, as debuted on the channel. 100+ fully private, PhD-vetted,
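
The "predict benchmark performance from a compute budget" result is, at heart, a scaling-law extrapolation: fit a power law to a ladder of smaller runs, then evaluate it at the target budget. A minimal sketch of that general technique follows; the functional form is the standard one, but every data point below is made up for illustration and this is not the Llama 3.1 paper's actual fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic scaling-law fit: error(C) = a * C^(-b) + irreducible.
# All data points are illustrative, not from the Llama 3.1 paper.

def power_law(c, a, b, irreducible):
    return a * c ** (-b) + irreducible

compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])  # hypothetical budgets, units of 1e21 FLOPs
error   = np.array([0.52, 0.44, 0.37, 0.32, 0.28])  # hypothetical benchmark error rates

(a, b, irreducible), _ = curve_fit(power_law, compute, error, p0=(0.5, 0.3, 0.1))

target = 4000.0  # extrapolate to a much larger hypothetical budget (4e24 FLOPs)
print(f"Fitted: a={a:.3f}, b={b:.3f}, irreducible={irreducible:.3f}")
print(f"Predicted error at 4e24 FLOPs: {power_law(target, a, b, irreducible):.3f}")
```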

François Chollet (@fchollet)'s Twitter Profile Photo

There have been "AGI achieved internally" rumors spread by OAI every few weeks/months since late 2022, and you guys are still eating it up -- for the Nth time. If you were actually close to AGI, you wouldn't spend your time shitposting on Twitter.

The Information (@theinformation)'s Twitter Profile Photo

Exclusive: As OpenAI looks to raise more capital, it's trying to launch AI that can reason through tough problems and help it develop a new AI model, 'Orion.' theinformation.com/articles/opena… From Erin Woo, Stephanie Palazzolo and Amir Efrati

AI Explained (@aiexplainedyt)'s Twitter Profile Photo

Whether we will scale LLMs to 10,000x GPT-4 by 2030 (i.e. GPT-6 levels) comes down to 4 big unanswered questions, and the 4th is the most crucial. 1. Will we master the art of training models across geographically distributed data centers? This would relieve local power sources
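
For context, some quick arithmetic on what 10,000x implies. GPT-4's training compute below is a widely cited outside estimate, not an official number.

```python
# Arithmetic behind the "10,000x GPT-4 by 2030" framing.
# Assumption: GPT-4 took roughly 2e25 FLOPs to train (a common outside
# estimate; OpenAI has not published the figure).

gpt4_flops = 2e25
scale_factor = 10_000
target_flops = gpt4_flops * scale_factor   # 2e29 FLOPs
years = 2030 - 2024

annual_growth = scale_factor ** (1 / years)
print(f"Target run: {target_flops:.0e} FLOPs")
print(f"Requires ~{annual_growth:.1f}x more training compute per year for {years} years")
```

That works out to roughly 4.6x per year, which is why the tweet's first question is about power and geographically distributed training rather than algorithms.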

Weights & Biases (@weights_biases)'s Twitter Profile Photo

🪄 Think you’re an AI wizard? Prove it.

We’ve partnered w/ AI Explained to launch the Simple Bench Evals Competition—a challenge so tough, he said:

“If anyone gets 20/20 with a general-purpose prompt, I would be truly shocked.” 😳

Details below 👇

Nathan Labenz (@labenz)'s Twitter Profile Photo

The shift from a cautious, collaborative attitude wrt China not long ago to a zero-sum competitive outlook today - by both Dario and Sam - has been very disappointing - and they’ve offered no explanation for the change!

AI Explained (@aiexplainedyt)'s Twitter Profile Photo

2 quick updates, and a look-ahead, exactly a year on from first testing models on Simple-Bench: 1) Claude 4 busted our rate limits, and my entreaties to Anthropic (to allow us to spend more money!) have yet to bear fruit. A shame, as I'm fairly confident Opus 4 would be SOTA.
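
On the rate-limit point: the standard harness-side mitigation is retry with exponential backoff. Below is a generic sketch, not Simple-Bench's actual code; `call_model` and `RateLimitError` are hypothetical stand-ins for a provider SDK.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider's rate-limit exception (HTTP 429)."""

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a provider API call (e.g. Anthropic)."""
    raise NotImplementedError

def call_with_backoff(prompt: str, max_retries: int = 6) -> str:
    # Exponential backoff with jitter: wait ~1s, 2s, 4s, ... plus noise,
    # so a queue of benchmark questions doesn't retry in lockstep.
    for attempt in range(max_retries):
        try:
            return call_model(prompt)
        except RateLimitError:
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("rate limit persisted after retries")
```

Backoff only smooths over transient 429s, though; it can't work around a hard spend cap like the one described above.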