Do-Hyeon Lee (@lead_o_hyeon) 's Twitter Profile
Do-Hyeon Lee

@lead_o_hyeon

Navigating Mind-Behavior-Brain Intricacy

ID: 1560219069433131010

Link: http://lee-dohyeon.github.io · Joined: 18-08-2022 10:56:40

182 Tweets

78 Followers

257 Following

Jürgen Schmidhuber (@schmidhuberai) 's Twitter Profile Photo

What if AI could write creative stories & insightful #DeepResearch reports like an expert? Our heterogeneous recursive planning [1] enables this via adaptive subgoals [2] & dynamic execution. Agents dynamically replan & weave retrieval, reasoning, & composition mid-flow. Explore

hardmaru (@hardmaru) 's Twitter Profile Photo

Excited to release our technical report: “The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search”‼️ pub.sakana.ai/ai-scientist-v… The AI Scientist-v2 incorporates an “Agentic Tree Search” approach into the workflow, enabling deeper and more

Homanga Bharadhwaj (@mangahomanga) 's Twitter Profile Photo

We're organizing a workshop at ICML 2025 on building physically plausible world models! Come join us and our awesome speakers in exploring this exciting research direction, with applications to video generation, robotics, 3D reconstruction and more... 1/2

Jürgen Schmidhuber (@schmidhuberai) 's Twitter Profile Photo

My first work on metalearning or learning to learn came out in 1987 [1][2]. Back then nobody was interested. Today, compute is 10 million times cheaper, and metalearning is a hot topic 🙂 It’s fitting that my 100th journal publication [100] is about metalearning, too. [100]

Maryam Miradi, PhD (@maryammiradi) 's Twitter Profile Photo

I dove into the 264-page AI Agent Blueprint by top researchers from Meta, Yale, Stanford, DeepMind & Microsoft. They map AI Agent components—perception, memory, world modeling, reasoning, planning—to the human brain. Remarkable work. Here are the key findings:

Alec Helbling (@alec_helbling) 's Twitter Profile Photo

Flow Matching aims to learn a "flow" that transforms a simple source distribution (e.g. Gaussian) to an arbitrarily complex target distribution. This video shows the evolution of the marginal probability path as a source distribution is transformed to a target distribution.
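The interpolation described in the tweet can be illustrated with a minimal NumPy sketch (this is not the code behind the video, just an assumed linear-interpolation path). For a Gaussian source and a Gaussian target, the marginal distribution at each time t is known in closed form, so we can check the empirical path against theory:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
mu, sigma = 3.0, 0.5           # target distribution N(mu, sigma^2)
x0 = rng.standard_normal(n)    # source distribution N(0, 1)
x1 = mu + sigma * rng.standard_normal(n)

def marginal_samples(t):
    """Samples from the marginal of the linear interpolation path
    x_t = (1 - t) * x0 + t * x1; the regression target for the learned
    vector field along this path would be u_t = x1 - x0."""
    return (1.0 - t) * x0 + t * x1

for t in (0.0, 0.5, 1.0):
    xt = marginal_samples(t)
    # For independent Gaussian endpoints the marginal at time t is
    # Gaussian with mean t*mu and std sqrt((1-t)^2 + (t*sigma)^2).
    mean, std = t * mu, np.hypot(1.0 - t, t * sigma)
    print(f"t={t:.1f}  empirical ({xt.mean():+.3f}, {xt.std():.3f})  "
          f"theory ({mean:+.3f}, {std:.3f})")
```

At t=0 the samples match the source, at t=1 the target, and in between they trace the evolving marginal probability path the video visualizes.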

Sakana AI (@sakanaailabs) 's Twitter Profile Photo

Introducing Continuous Thought Machines New Blog: sakana.ai/ctm/ Modern AI is powerful, but it’s still distinct from human-like flexible intelligence. We believe neural timing is key. Our Continuous Thought Machine is built from the ground up to use neural dynamics as

hardmaru (@hardmaru) 's Twitter Profile Photo

New Paper: Continuous Thought Machines 🧠 Neurons in brains use timing and synchronization in the way that they compute, but this is largely ignored in modern neural nets. We believe neural timing is key for the flexibility and adaptability of biological intelligence. We

hardmaru (@hardmaru) 's Twitter Profile Photo

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain.” — @GeoffreyHinton 🧠

Yi Xu (@_yixu) 's Twitter Profile Photo

🚀Let’s Think Only with Images. No language and No verbal thought.🤔 Let’s think through a sequence of images💭, like how humans picture steps in their minds🎨. We propose Visual Planning, a novel reasoning paradigm that enables models to reason purely through images.

NotebookLM (@notebooklm) 's Twitter Profile Photo

You weren't dreaming— the NotebookLM mobile app started rolling out this morning! We were eager to get the app into your hands, so this initial version has an MVP feature set with more functionality coming soon! Here are a few of the features we're most excited about: 🧵🧵🧵

Kevin Patrick Murphy (@sirbayes) 's Twitter Profile Photo

I am pleased to announce a new version of my RL tutorial. Major update to the LLM chapter (eg DPO, GRPO, thinking), minor updates to the MARL and MBRL chapters and various sections (eg offline RL, DPG, etc). Enjoy! arxiv.org/abs/2412.05265

Anil Seth (@anilkseth) 's Twitter Profile Photo

1/3 Geoffrey Hinton once said that the future depends on some graduate student being suspicious of everything he says (via Lex Fridman). He also said that it was impossible to find biologically plausible approaches to backprop that scale well: radical.vc/geoffrey-hinto….

Adeel Razi (@adeelrazi) 's Twitter Profile Photo

Why do brains rely on inference, uncertainty, and structure… while AI systems chase rewards in unstructured worlds? Are we missing something fundamental about how intelligence emerges? #NeuroAI #InferenceOverOptimization

hardmaru (@hardmaru) 's Twitter Profile Photo

New Paper! Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents A longstanding goal of AI research has been the creation of AI that can learn indefinitely. One path toward that goal is an AI that improves itself by rewriting its own code, including any code

hardmaru (@hardmaru) 's Twitter Profile Photo

AI that can improve itself: A deep dive into self-improving AI and the Darwin-Gödel Machine. richardcsuwandi.github.io/blog/2025/dgm/ Excellent blog post by Richard C. Suwandi reviewing the Darwin Gödel Machine (DGM) and future implications.
