Tianfu Fu (@tianfuf)'s Twitter Profile
Tianfu Fu

@tianfuf

Member of Technical Staff @OpenAI
MIT McGovern Institute for Brain Research @mcgovernmit
Ex-Research Scientist @Meta

ID: 1188866077020708864

Joined: 28-10-2019 17:12:23

28 Tweets

1.1K Followers

220 Following

Sam Altman (@sama)

GPT-4.5 is ready! good news: it is the first model that feels like talking to a thoughtful person to me. i have had several moments where i've sat back in my chair and been astonished at getting actually good advice from an AI. bad news: it is a giant, expensive model. we …

OpenAI (@openai)

Starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.

Sam Altman (@sama)

GPT-4.1 (and -mini and -nano) are now available in the API! these models are great at coding, instruction following, and long context (1 million tokens). benchmarks are strong, but we focused on real-world utility, and developers seem very happy. GPT-4.1 family is API-only.

Srinivas Narayanan (@snsf)

We have a new model, GPT-4.1, in the API that is much better at coding, instruction following, and long context. Hope you like it. openai.com/index/gpt-4-1/
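
Both announcements point at the same API release. A minimal sketch of calling the GPT-4.1 family, assuming the official openai Python SDK with an OPENAI_API_KEY set in the environment (the prompt here is purely illustrative):

from openai import OpenAI

# The SDK reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

# "gpt-4.1" per the announcement; "gpt-4.1-mini" and "gpt-4.1-nano" are the
# smaller variants. The family supports long-context inputs (up to 1M tokens).
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Review this function for bugs."}],
)
print(response.choices[0].message.content)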

Tianfu Fu (@tianfuf)

o3 casually doing perturbative quantum field theory from a napkin sketch? Wild 🥳

Just came across an incredible o3 use case online: it glanced at a rough sketch on a desk and immediately computed the quantum electrodynamics (QED) scattering amplitude.
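
For context, the kind of result being described (the tweet doesn't include the sketch itself, so this is an illustrative stand-in): the textbook tree-level QED amplitude for electron–muon scattering via single-photon exchange is

\mathcal{M} = \frac{e^{2}}{q^{2}} \left[ \bar{u}(p_{3}) \, \gamma^{\mu} \, u(p_{1}) \right] \left[ \bar{u}(p_{4}) \, \gamma_{\mu} \, u(p_{2}) \right], \qquad q = p_{1} - p_{3},

i.e. two spinor currents contracted through the photon propagator, which is what "computing the QED scattering amplitude" from a tree-level Feynman diagram amounts to.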
Wenhao Chai (@wenhaocha1)

GPT-5, think more.

In our latest LiveCodeBench Pro tests for competitive programming, GPT-5 Thinking hit a true 0→1 moment on the 2025 Q1 set: it was the only model to crack the hard split, and this wasn't even GPT-5 Thinking Pro. Average response length exceeded 100,000 tokens, which is …

Sebastien Bubeck (@sebastienbubeck)

I copy-pasted an unpublished manuscript of mine into ChatGPT and asked it to improve it. I expected that the method we're using had been pushed to its limit: gpt-5-pro actually proved it.

Even I did not expect the models to be capable of such things already ...

Tianfu Fu (@tianfuf)

Thanks to Wenhao Chai's benchmark: GPT-5 Thinking just pulled off something historic 🎯. In the latest LiveCodeBench Pro competitive programming benchmark, it is the only model right now that can solve the "hard split" set from the 2025 Q1 round. And this wasn't even the Pro …