Rob (@rob_gcc)'s Twitter Profile
Rob

@rob_gcc

Entrepreneur, coder, and scientist.

ID: 1067027174530711553

Joined: 26-11-2018 12:07:57

2.2K Tweets

4.4K Followers

520 Following

Unitree (@unitreerobotics)'s Twitter Profile Photo

Unitree Iron Fist King: Awakening!💪 Let's step into a new era of Sci-Fi, join the fun together! Unitree will be livestreaming robot combat in about a month, stay tuned! #Unitree #Fighting #Boxing #HumanoidRobot #Robot #AI #IronFist #Game

Paul Graham (@paulg)'s Twitter Profile Photo

I just realized something most people are going to lose when (as they inevitably will) they start using AIs to write everything for them. They'll lose the knowledge of how writing is constructed.

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

We’re releasing an updated Gemini 2.5 Pro (I/O edition) to make it even better at coding. 🚀 You can build richer web apps, games, simulations and more - all with one prompt. In Google Gemini App, here's how it transformed images of nature into code to represent unique patterns 🌱

Google DeepMind (@googledeepmind)'s Twitter Profile Photo

Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery. It's able to:

🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across Google. 🧵
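(AlphaEvolve's internals aren't public beyond this announcement, but the propose-score-select loop behind evolutionary program search is easy to sketch. Below is a minimal Python toy in which mutate stands in for Gemini proposing edits and evaluate stands in for the automated scorer; both are hypothetical stand-ins, not DeepMind's code, and the candidates here are number vectors rather than programs.)

    import random

    def evaluate(candidate):
        # Stand-in automated scorer: fitness is closeness to a hidden target.
        # (The real system would instead benchmark a generated program,
        # e.g. matrix-multiplication speed or data-center efficiency.)
        target = [3.0, -1.0, 2.0]
        return -sum((c - t) ** 2 for c, t in zip(candidate, target))

    def mutate(candidate):
        # Stand-in for LLM-proposed edits: perturb one "gene" of the candidate.
        child = list(candidate)
        i = random.randrange(len(child))
        child[i] += random.gauss(0.0, 0.5)
        return child

    # Evolutionary loop: keep the fittest candidates, refill with mutated copies.
    population = [[0.0, 0.0, 0.0] for _ in range(8)]
    for _ in range(300):
        population.sort(key=evaluate, reverse=True)
        parents = population[:4]
        population = parents + [mutate(random.choice(parents)) for _ in range(4)]

    best = max(population, key=evaluate)
    print("best:", [round(x, 2) for x in best], "score:", round(evaluate(best), 4))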

AI at Meta (@aiatmeta)'s Twitter Profile Photo

Announcing the newest releases from Meta FAIR. We're releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.

1️⃣ Open Molecules 2025 (OMol25): A dataset

Three.js (@threejs)'s Twitter Profile Photo

Aurelia by holtsetio: completely procedural jellyfish, with Verlet physics and fake volumetric lighting. Rendered in WebGPU and TSL. holtsetio.com/lab/aurelia/
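(The demo itself is WebGPU/TSL and its source isn't reproduced here, but "Verlet physics" names a well-known scheme: velocity is stored implicitly as the difference between current and previous positions, and constraints are enforced by relaxation passes. A minimal Python sketch of the idea, a three-point rope rather than a jellyfish:)

    import math

    def integrate(points, prev_points, dt, gravity=(0.0, -9.8)):
        # Position Verlet: velocity is implicit in (current - previous).
        for p, q in zip(points, prev_points):
            vx, vy = p[0] - q[0], p[1] - q[1]   # implicit velocity
            q[0], q[1] = p[0], p[1]             # current becomes previous
            p[0] += vx + gravity[0] * dt * dt
            p[1] += vy + gravity[1] * dt * dt

    def satisfy_distance(p, q, rest_length):
        # Pull two points toward their rest distance (one relaxation pass).
        dx, dy = q[0] - p[0], q[1] - p[1]
        dist = math.hypot(dx, dy) or 1e-9
        correction = (dist - rest_length) / dist / 2
        p[0] += dx * correction; p[1] += dy * correction
        q[0] -= dx * correction; q[1] -= dy * correction

    # A three-point "rope": pin the first point, let the rest swing.
    points = [[0.0, 0.0], [0.0, -1.0], [0.0, -2.0]]
    prev = [[p[0], p[1]] for p in points]
    for _ in range(100):
        integrate(points, prev, dt=0.016)
        for _ in range(5):                      # constraint relaxation passes
            satisfy_distance(points[0], points[1], 1.0)
            satisfy_distance(points[1], points[2], 1.0)
        points[0][:] = [0.0, 0.0]               # re-pin the anchor
    print(points)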

jack morris (@jxmnop)'s Twitter Profile Photo

excited to finally share on arxiv what we've known for a while now: All Embedding Models Learn The Same Thing. embeddings from different models are SO similar that we can map between them based on structure alone, without *any* paired data. feels like magic, but it's real: 🧵
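(The paper's actual method trains learned translators between spaces; this toy only illustrates the core intuition, that similarity structure is a model-independent fingerprint. Here "model B" is a random rotation of "model A" with shuffled rows: rotations preserve inner products, so matching rows of the two Gram matrices recovers the hidden correspondence with no paired data. A synthetic sketch, not the paper's experiment:)

    import numpy as np

    rng = np.random.default_rng(0)

    # Two "models" embed the same 6 items. Model B is a random orthogonal
    # transform of model A, with rows shuffled so no pairing is known.
    A = rng.normal(size=(6, 4))
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random rotation/reflection
    perm = rng.permutation(6)
    B = (A @ Q)[perm]

    # Rotations preserve inner products, so each item's sorted row of the
    # Gram (pairwise-similarity) matrix is a model-independent fingerprint.
    FA = np.sort(A @ A.T, axis=1)
    FB = np.sort(B @ B.T, axis=1)

    # Match items by nearest fingerprint; this recovers the hidden shuffle.
    cost = ((FA[:, None, :] - FB[None, :, :]) ** 2).sum(axis=-1)
    recovered = cost.argmin(axis=1)
    print(np.array_equal(recovered, np.argsort(perm)))  # True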

Blake Robbins (@blakeir)'s Twitter Profile Photo

In 2009 (!!!), Paul Graham (@paulg) wrote a post about the 5 most interesting founders of the last 30 years.

The list included Steve Jobs, Larry & Sergey, TJ Rodgers, Paul Buchheit, and...

Sam Altman (@sama)

Ruben Hassid (@rubenhssd)'s Twitter Profile Photo

BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.

They just memorize patterns really well.

Here's what Apple discovered:

(hint: we're not as close to AGI as the hype suggests)

rohit (@krishnanrohit)'s Twitter Profile Photo

I asked o3 to analyse and critique Apple's new "LLMs can't reason" paper. Despite its inability to reason I think it did a pretty decent job, don't you?

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

A follow-up study on Apple's "Illusion of Thinking" paper has now been published.

It shows the same models succeed once the format lets them give compressed answers, proving the earlier collapse was a measurement artifact.

Token limits, not logic, froze the models.

Collapse vanished