Ron Green (@rgreenjr)'s Twitter Profile
Ron Green

@rgreenjr

AI Expert • Co-Founder & CTO KUNGFU.AI • 🎧 Host of “Hidden Layers: AI and the People Behind It”

ID: 13224822

Link: http://www.kungfu.ai • Joined: 07-02-2008 22:43:47

1.1K Tweets

251 Followers

580 Following

Jascha Sohl-Dickstein (@jaschasd)'s Twitter Profile Photo

Have you ever done a dense grid search over neural network hyperparameters? Like a *really dense* grid search? It looks like this (!!). Bluish colors correspond to hyperparameters for which training converges, reddish colors to hyperparameters for which training diverges.

Andrew Zhao (@andrewz45732491)'s Twitter Profile Photo

❄️Introducing Absolute Zero Reasoner: Our reasoner learns to both propose tasks that maximize learnability and improve reasoning by solving them, entirely through self-play—with no external data! It overall outperforms other "zero" models in math & coding domains. 🧵 1/

Alexander Novikov (@sashavnovikov)'s Twitter Profile Photo

After 1.5 years of work, I'm so excited to announce AlphaEvolve – our new LLM + evolution agent! Learn more in the blog post: deepmind.google/discover/blog/… White paper PDF: storage.googleapis.com/deepmind-media… (1/2)

jack morris (@jxmnop)'s Twitter Profile Photo

excited to finally share on arXiv what we've known for a while now: All Embedding Models Learn The Same Thing

embeddings from different models are SO similar that we can map between them based on structure alone, without *any* paired data. feels like magic, but it's real: 🧵

Ron Green (@rgreenjr)'s Twitter Profile Photo

Lately, I keep coming back to reinforcement learning. It’s been the focus of the last few Hidden Layer episodes for a reason. I’m convinced it’s not just another tool in the AI toolbox. It’s a turning point. Supervised learning has taken us far, but it’s starting to hit a

Epoch AI (@epochairesearch)'s Twitter Profile Photo

How fast has society been adopting AI? Back in 2022, ChatGPT arguably became the fastest-growing consumer app ever, hitting 100M users in just 2 months. But the field of AI has transformed since then, and it’s time to take a new look at the numbers. 🧵

Alexander Wei (@alexwei_)'s Twitter Profile Photo

1/N I’m excited to share that our latest OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).

Ron Green (@rgreenjr)'s Twitter Profile Photo

One of the most important questions in AI right now isn’t about compute or alignment. It’s about copyright. Should training AI models on copyrighted content be considered fair use? Courts are starting to weigh in. And while the rulings so far lean toward fair use, they’re also

Aleksander Holynski (@holynski_)'s Twitter Profile Photo

Something fun we discovered: you can use #Genie3 to step into and explore your favorite paintings. Here's a short visit to Edward Hopper's "Nighthawks".

Yulu Gan (@yule_gan)'s Twitter Profile Photo

Reinforcement Learning (RL) has long been the dominant method for fine-tuning, powering many state-of-the-art LLMs. Methods like PPO and GRPO explore in action space. But can we instead explore directly in parameter space? YES we can. We propose a scalable framework for

Adam Zsolt Wagner (@azwagner_)'s Twitter Profile Photo

Really happy to share our new paper on using AlphaEvolve for mathematical exploration at scale, written with Javier Gómez-Serrano, Terence Tao, and Google DeepMind's Bogdan Georgiev. We tested it on 67 problems and documented all our successes and failures. 🧵

clem 🤗 (@clementdelangue)'s Twitter Profile Photo

Personally feels like we've reached the peak of "Proprietary APIs" and that we're entering a much more balanced world for AI where open-source, training, Hugging Face (and other players) will start getting a much bigger share of the attention, usage and revenue. Let's go!
