Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile
Jürgen Schmidhuber

@schmidhuberai

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

ID: 1163786515144724485

Link: https://people.idsia.ch/~juergen/most-cited-neural-nets.html
Joined: 20-08-2019 12:15:46

98 Tweets

148,148K Followers

0 Following

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

1995-2025: The Decline of Germany & Japan vs US & China. Can All-Purpose Robots Fuel a Comeback?

In 1995, in terms of nominal GDP, a combined Germany and Japan were almost 1:1 economically with a combined USA and China. Only 3 decades later, this ratio is now down to 1:5!
Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

It has been said that AI is the new oil, the new electricity, and the new internet. And the once nimble and highly profitable software companies (MSFT, GOOG, ...) became like utilities, investing in nuclear energy, among other things, to run AI data centres. Open Source and the

hardmaru (@hardmaru)'s Twitter Profile Photo

I had the honor to meet Professor Shun-Ichi Amari, a pioneer in neural networks, at the University of Tokyo. He worked on early neural nets using pen and paper, before running computing experiments was even feasible. He talked not only about AI, but also about human evolution and consciousness.
Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

Do you like RL and math? Our collaboration, IDSIA-KAUST-NNAISENSE, has the most detailed exploration of the convergence and stability of modern RL frameworks like Upside-Down RL, Online Decision Transformers, and Goal-Conditioned Supervised Learning arxiv.org/abs/2502.05672

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

I am hiring postdocs at #KAUST to develop an Artificial Scientist for the discovery of novel chemical materials to save the climate by capturing carbon dioxide. Join this project at the intersection of RL and Material Science: apply.interfolio.com/162867

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

During the Oxford-style debate at the "Interpreting Europe Conference 2025," I persuaded many professional interpreters to reject the motion: "AI-powered interpretation will never replace human interpretation." Before the debate, the audience was 60-40 in favor of the motion;

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

What can we learn from history? The FACTS: a novel Structured State-Space Model with a factored, thinking memory [1]. Great for forecasting, video modeling, autonomous systems, at #ICLR2025. Fast, robust, parallelisable.

[1] Li Nanbo, Firas Laakom, Yucheng Xu, Wenyi Wang, J.
Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

What if AI could write creative stories & insightful #DeepResearch reports like an expert? Our heterogeneous recursive planning [1] enables this via adaptive subgoals [2] & dynamic execution. Agents dynamically replan & weave retrieval, reasoning, & composition mid-flow. Explore

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

My first work on metalearning or learning to learn came out in 1987 [1][2]. Back then nobody was interested. Today, compute is 10 million times cheaper, and metalearning is a hot topic 🙂 It’s fitting that my 100th journal publication [100] is about metalearning, too.

[100]
Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

AGI? One day, but not yet. The only AI that works well right now is the one behind the screen [12-17]. But passing the Turing Test [9] behind a screen is easy compared to Real AI for real robots in the real world. No current AI-driven robot could be certified as a plumber

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

Smart (but not necessarily supersmart) robots that can learn to operate the tools & machines operated by humans can also build (and repair) more of their own kind: the "ultimate form of scaling" [1-5]. Life-like, self-replicating, self-improving hardware will change everything.

hardmaru (@hardmaru)'s Twitter Profile Photo

New Paper! Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents

A longstanding goal of AI research has been the creation of AI that can learn indefinitely. One path toward that goal is an AI that improves itself by rewriting its own code, including any code

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

Everybody talks about recursive self-improvement & Gödel Machines now & how this will lead to AGI. What a change from 15 years ago! We had AGI'2010 in Lugano & chaired AGI'2011 at Google. The backbone of the AGI conferences was mathematically optimal Universal AI: the 2003 Gödel

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

10 years ago, in May 2015, we published the first working very deep gradient-based feedforward neural networks (FNNs) with hundreds of layers (previous FNNs had a maximum of a few dozen layers). To overcome the vanishing gradient problem, our Highway Networks used the residual
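The gating idea behind Highway Networks can be sketched in a few lines. This is a minimal NumPy illustration, not the published implementation: the layer mixes a candidate transformation H(x) with the unchanged input via a learned transform gate T(x), so with a strongly negative gate bias the layer starts close to the identity and gradients can flow through many layers. Dimensions, initialization scale, and the tanh/sigmoid choices here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: a gated mix of a transform H(x) and the identity.

    T(x) is the transform gate; (1 - T(x)) acts as a carry gate that lets
    the input pass through unchanged, easing gradient flow in deep stacks.
    """
    h = np.tanh(x @ W_h + b_h)      # candidate transformation H(x)
    t = sigmoid(x @ W_t + b_t)      # transform gate T(x) in (0, 1)
    return t * h + (1.0 - t) * x    # y = T*H + (1 - T)*x

# With a strongly negative gate bias the layer is near-identity at init,
# a common trick so very deep stacks are trainable from the start.
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal((2, d))
W_h = rng.standard_normal((d, d)) * 0.1
W_t = rng.standard_normal((d, d)) * 0.1
y = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -10.0))
print(np.allclose(y, x, atol=1e-3))  # near-identity at init
```

Stacking many such layers keeps a direct, nearly linear path from input to output whenever the gates stay closed, which is the property the tweet credits with overcoming the vanishing gradient problem.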