Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile
Jürgen Schmidhuber

@schmidhuberai

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

ID: 1163786515144724485

Link: https://people.idsia.ch/~juergen/most-cited-neural-nets.html
Joined: 20-08-2019 12:15:46

68 Tweets

120,120K Followers

0 Following

Jürgen Schmidhuber (@schmidhuberai):

2023: 20th anniversary of the Gödel Machine, a mathematically optimal, self-referential, meta-learning, universal problem solver making provably optimal self-improvements by rewriting its own computer code people.idsia.ch/~juergen/goede…

Jürgen Schmidhuber (@schmidhuberai):

Silly AI regulation hype

One cannot regulate AI research, just like one cannot regulate math.

One can regulate applications of AI in finance, cars, healthcare. Such fields already have continually adapting regulatory frameworks in place.

Don’t stifle the open-source movement!
Jürgen Schmidhuber (@schmidhuberai):

Thanks Elon Musk for your generous hyperbole!

Admittedly, however, I didn’t invent sliced bread, just #GenerativeAI and things like that: people.idsia.ch/~juergen/most-…

And of course my team is standing on the shoulders of giants: people.idsia.ch/~juergen/deep-…

Original tweet by Elon Musk:
Jürgen Schmidhuber (@schmidhuberai):

AI boom v AI doom: since the 1970s, I have told AI doomers that in the end all will be good. E.g., 2012 TEDx talk: youtu.be/KQ35zNlyG-o: “Don’t think of us versus them: us, the humans, v these future super robots. Think of yourself, and humanity in general, as a small stepping

hardmaru (@hardmaru):

Amazing that Jürgen Schmidhuber gave this talk back in 2012, months before the AlexNet paper was published. In 2012, people considered many of the things he discussed a joke, but the same talk would now be at the center of AI debate and controversy. Full talk:

Jürgen Schmidhuber (@schmidhuberai):

Q*? 2015: reinforcement learning prompt engineer in Sec. 5.3 of “Learning to Think...” arxiv.org/abs/1511.09249. A controller neural network C learns to send prompt sequences into a world model M (e.g., a foundation model) trained on, say, videos of actors. C also learns to
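The controller/world-model setup from Sec. 5.3 can be caricatured in a few lines. The following is a hedged toy sketch, not the paper's system: all names are hypothetical, a frozen random function stands in for the world model M, and simple hill climbing replaces the paper's reinforcement learning machinery. The point it illustrates is only the loop: a controller C adapts its weights so that the prompts it sends into a frozen M elicit useful answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen toy "world model" M, standing in for a pre-trained foundation model.
def world_model(prompt):
    W = np.array([[0.5, -0.2],
                  [0.1, 0.9]])
    return np.tanh(W @ prompt)

# Controller C: a linear layer turning an observation into a "prompt" for M.
def controller_prompt(theta, obs):
    return np.tanh(theta @ obs)

# Reward: how close M's answer to C's prompt lands to a desired target.
def episode_reward(theta, obs, target):
    answer = world_model(controller_prompt(theta, obs))
    return -float(np.sum((answer - target) ** 2))

obs = np.array([1.0, -1.0])
target = np.array([0.3, 0.2])
theta = rng.normal(size=(2, 2)) * 0.1

# Simple hill climbing stands in for RL: C learns which questions
# (prompts) to send to M so that M's answers become useful.
best = episode_reward(theta, obs, target)
for _ in range(500):
    candidate = theta + rng.normal(size=theta.shape) * 0.05
    r = episode_reward(candidate, obs, target)
    if r > best:
        theta, best = candidate, r

print(f"final reward: {best:.3f}")  # approaches 0 as the prompts improve
```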
Jürgen Schmidhuber (@schmidhuberai):

So Yann LeCun: "I've been advocating for deep learning architecture capable of planning since 2016" vs me: "I've been publishing deep learning architectures capable of planning since 1990." I guess in 2016 Yann LeCun also picked up the torch. (References attached)

Jürgen Schmidhuber (@schmidhuberai):

How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. More than a dozen concrete AI priority disputes under people.idsia.ch/~juergen/ai-pr…

Jürgen Schmidhuber (@schmidhuberai):

Best paper award for "Mindstorms in Natural Language-Based Societies of Mind" at #NeurIPS2023 WS Ro-FoMo. Up to 129 foundation models collectively solve practical problems by interviewing each other in monarchical or democratic societies arxiv.org/abs/2305.17066

Jürgen Schmidhuber (@schmidhuberai):

The GOAT of tennis Novak Djokovic said: “35 is the new 25.” I say: “60 is the new 35.” AI research has kept me strong and healthy. AI could work wonders for you, too!

Jürgen Schmidhuber (@schmidhuberai):

2010 foundations of recent $NVDA stock market frenzy: our simple but deep neural net on NVIDIA GPUs broke MNIST arxiv.org/abs/1003.0358. Things are changing fast. Just 7 months ago, I tweeted: compute is 100x cheaper, $NVDA 100x more valuable. Today, replace "100" by "250."

Jürgen Schmidhuber (@schmidhuberai):

In 2016, at an AI conference in NYC, I explained artificial consciousness, world models, predictive coding, and science as data compression in less than 10 minutes. I happened to be in town, walked in without being announced, and ended up on their panel. It was great fun.

Jürgen Schmidhuber (@schmidhuberai):

Our #GPTSwarm models Large Language Model Agents and swarms thereof as computational graphs reflecting the hierarchical nature of intelligence. Graph optimization automatically improves nodes and edges. arxiv.org/abs/2402.16823 github.com/metauto-ai/GPT… gptswarm.org

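The core idea, agents as nodes of a computational graph whose wiring is itself optimized, can be caricatured in a few lines. This is a hypothetical toy, not the actual GPTSwarm API: the "agent" nodes are plain string functions and the scoring task is made up; exhaustive search over wirings plays the role of graph optimization.

```python
import itertools

# Three toy "agent" nodes in a fixed chain; we optimize which of them
# stay wired into the graph.
def clean(text):   return text.strip()
def shout(text):   return text.upper()
def exclaim(text): return text + "!"

NODES = [clean, shout, exclaim]
TARGET = "HELLO!"

def run_graph(mask, text):
    # mask[i] == 1 keeps node i connected; 0 removes it from the chain.
    for keep, node in zip(mask, NODES):
        if keep:
            text = node(text)
    return text

def score(mask):
    out = run_graph(mask, "  hello ")
    return sum(a == b for a, b in zip(out, TARGET)) - abs(len(out) - len(TARGET))

# Exhaustive search over the 2^3 possible wirings; real swarms are far
# larger and need learned graph optimizers instead.
best_mask = max(itertools.product([0, 1], repeat=len(NODES)), key=score)
print(best_mask, run_graph(best_mask, "  hello "))  # (1, 1, 1) HELLO!
```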
Jürgen Schmidhuber (@schmidhuberai):

At ICANN 1993, I extended my 1991 unnormalised linear Transformer, introduced attention terminology for it, & published the "self-referential weight matrix." 3 decades later, they made me Chair of ICANN 2024 in Lugano. Call for papers (deadline March 25): e-nns.org/icann2024/call

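The 1991 system, today often described as a fast weight programmer, computes (up to notation) unnormalised linear attention: no softmax, just sums of value vectors weighted by key-query dot products. A small NumPy sketch with toy dimensions and random data, checking that the two views agree:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 4, 6
Q = rng.normal(size=(T, d))  # queries
K = rng.normal(size=(T, d))  # keys
V = rng.normal(size=(T, d))  # values

# Unnormalised (no softmax) causal linear attention:
#   out_t = sum_{i<=t} v_i * (k_i . q_t)
def linear_attention(Q, K, V):
    out = np.zeros_like(Q)
    for t in range(T):
        for i in range(t + 1):
            out[t] += V[i] * (K[i] @ Q[t])
    return out

# Fast-weight view: a slow net writes outer products v_i k_i^T into a
# fast weight matrix W; each query then reads the stored associations.
def fast_weight_net(Q, K, V):
    W = np.zeros((d, d))
    out = np.zeros_like(Q)
    for t in range(T):
        W += np.outer(V[t], K[t])  # additive fast-weight update
        out[t] = W @ Q[t]          # retrieval by matrix-vector product
    return out

assert np.allclose(linear_attention(Q, K, V), fast_weight_net(Q, K, V))
print("the two formulations agree")
```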
Jürgen Schmidhuber (@schmidhuberai):

Counter-intuitive aspects of text-to-image diffusion models: only a few steps require cross-attention; most don’t. Skipping the extras gives a great speed-up! Many stars on GitHub :-) github.com/HaozheLiu-ST/T… arxiv.org/abs/2404.02747

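The trick can be sketched in a toy denoising loop. Everything below is hypothetical stand-in code, not the linked repository's implementation: run cross-attention only for the first few steps, then reuse its cached output, which is where the speed-up comes from.

```python
# Count how often the (expensive) text-conditioning step actually runs.
calls = {"cross_attention": 0}

def cross_attention(latent, text_emb):
    calls["cross_attention"] += 1           # expensive text conditioning
    return [l + 0.1 * t for l, t in zip(latent, text_emb)]

def denoise(latent, text_emb, steps=50, cache_after=10):
    cached = None
    for step in range(steps):
        if step < cache_after:
            cached = cross_attention(latent, text_emb)  # full computation
        cond = cached                                   # later steps reuse it
        latent = [0.9 * l + 0.1 * c for l, c in zip(latent, cond)]
    return latent

out = denoise([1.0, -1.0], [0.5, 0.5])
print(calls["cross_attention"], "of 50 steps used cross-attention")  # 10
```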
Jürgen Schmidhuber (@schmidhuberai):

Today we got the ACM SIGEVO 10-Years Impact Award 2024 for our 2014 paper people.idsia.ch/~koutnik/paper… based on our 2013 work people.idsia.ch/~juergen/compr… - the 1st RL directly from high-dimensional input (no unsupervised pre-training). With the awesome Jan Koutník and Faustino Gomez.

Jürgen Schmidhuber (@schmidhuberai):

I gave a joint keynote on A.I. for 3 overlapping international conferences in France: the 19th ICSOFT (software technologies), the 13th DATA (data science), and the 5th DeLTA (deep learning theory & applications). I really enjoyed it.

Jürgen Schmidhuber (@schmidhuberai):

Greetings from #ICML2024 in Vienna, the world's most liveable city. Check out our 5 ICML papers (2 oral), on language agents as optimizable graphs, analyzing programs (= weight matrices) of neural networks, planning & chunking & auxiliary delays for faster reinforcement learning:

Jürgen Schmidhuber (@schmidhuberai):

I am hiring 3 postdocs at #KAUST to develop an Artificial Scientist for discovering novel chemical materials for carbon capture. Join this project with Francesco Faccio at the intersection of RL and Material Science. Learn more and apply: faccio.ai/postdoctoral-p…
