Manu Halvagal (@manu_halvagal) 's Twitter Profile
Manu Halvagal

@manu_halvagal

Computational neuroscience PhD student with @hisspikeness learning about learning @FMIscience. Climbs rocks and obsesses over spherical brains in a vacuum.

ID: 1069043151149645825

Link: https://mshalvagal.github.io
Joined: 02-12-2018 01:38:43

433 Tweets

308 Followers

765 Following

Mo Samsami (@m_r_samsami) 's Twitter Profile Photo

🚀 Thrilled to introduce Recall to Imagine (R2I), the 1st model-based RL approach integrating SSMs to excel in memory-intensive domains. Not just setting new SOTA, but achieving superhuman results in complex memory tasks, while efficiently operating across diverse domains. 1/
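
For readers unfamiliar with state-space models, here is a minimal sketch of the linear recurrence SSM layers build on; the matrices are random placeholders, not R2I's trained parameters.

```python
import numpy as np

# Minimal sketch of the discretized linear state-space recurrence behind
# SSM layers: x_t = A x_{t-1} + B u_t, y_t = C x_t. Random placeholder
# matrices, not R2I's parameters.
rng = np.random.default_rng(0)
d_state, d_in = 16, 4
A = 0.95 * np.eye(d_state)                  # decaying state carries memory
B = 0.1 * rng.standard_normal((d_state, d_in))
C = 0.1 * rng.standard_normal((d_in, d_state))

def ssm_scan(u):
    """Run the recurrence over a sequence u of shape (T, d_in)."""
    x = np.zeros(d_state)
    ys = []
    for u_t in u:              # production SSMs use a parallel scan here
        x = A @ x + B @ u_t
        ys.append(C @ x)
    return np.stack(ys)

y = ssm_scan(rng.standard_normal((100, d_in)))
print(y.shape)                 # (100, 4)
```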

Rui Ponte Costa (@somnirons) 's Twitter Profile Photo

🚨 neoSSL: Our story on how information flow in neocortical layers is perfectly placed for self-supervised learning (SSL) is now on bioRxiv doi.org/10.1101/2024.0… 🧵 (1/6)

Friedemann Zenke (@hisspikeness) 's Twitter Profile Photo

1/6 Surrogate gradients (SGs) are empirically successful at training spiking neural networks (SNNs). But why do they work so well, and what is their theoretical basis? In our new preprint led by Julia Gygax, we provide the answers: arxiv.org/abs/2404.14964

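As background, a minimal PyTorch sketch of the surrogate-gradient trick: a hard threshold in the forward pass and a smooth stand-in derivative in the backward pass. The fast-sigmoid surrogate and its steepness value below are illustrative choices in the spirit of SuperSpike, not necessarily the preprint's exact formulation.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth fast-sigmoid
    surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()              # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0                         # steepness: illustrative value
        return grad_output / (beta * v.abs() + 1.0) ** 2

v = torch.randn(5, requires_grad=True)
spikes = SpikeSurrogate.apply(v)
spikes.sum().backward()
print(v.grad)                               # finite, despite the hard step
```
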
Mitchell Ostrow (@neurostrow) 's Twitter Profile Photo

How can we understand the way sequence models choose to combine info in their context to make good predictions? In our new ICML Conference workshop paper, Adam J. Eisen , FieteGroup and I provide a new theoretical lens from dynamical systems: delay embeddings arxiv.org/abs/2406.11993

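For context, a minimal NumPy sketch of a delay embedding, the dynamical-systems construction the paper builds on; the toy signal and parameters are invented for illustration.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar series x into delay vectors
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Toy example: a noisy sine embedded with three delays.
t = np.linspace(0, 20, 500)
x = np.sin(t) + 0.01 * np.random.default_rng(0).standard_normal(500)
emb = delay_embed(x, dim=3, tau=10)
print(emb.shape)               # (480, 3)
```
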
Richard Sutton (@richardssutton) 's Twitter Profile Photo

The one-step trap (in AI research)

The one-step trap is the common mistake of thinking that all or most of an AI agent’s learned predictions can be one-step ones, with all longer-term predictions generated as needed by iterating the one-step predictions. The most important
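
A toy numerical illustration of the trap: a one-step model with a tiny error, iterated to produce a long-horizon prediction, drifts far from the correct long-horizon answer. The dynamics and numbers are invented for illustration.

```python
# The true dynamics and the learned model differ by a tiny one-step error,
# but iterating the model compounds it over a long horizon.
true_a = 0.99                  # true one-step dynamics: x_{t+1} = a * x_t
model_a = 0.97                 # learned one-step model, slightly off

x0, k = 1.0, 100
direct = true_a ** k * x0      # the correct k-step prediction
iterated = model_a ** k * x0   # one-step model iterated k times

print(direct, iterated)        # ~0.366 vs ~0.048: the error has compounded
```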

Stefano Fusi (@stefanofusi2) 's Twitter Profile Photo

Observing the birth of an abstract representation with disentangled variables in human HPC!! Now published in Nature. Great collaboration with Hristos Courellis, Juri Minxha, @amamelak, Ueli Rutishauser, and others

Yuta Senzai (@yutasenzai) 's Twitter Profile Photo

New preprint! We show that motor commands in the superior colliculus shift the internal representation of heading during REM sleep despite the immobility of sleeping mice. Thus, the brain simulates actions and their consequences during REM sleep.🧵1/7 doi.org/10.1101/2024.0…

Robert Yang (@guangyurobert) 's Twitter Profile Photo

Introducing Project Sid: the first simulations of 1000+ truly autonomous agents collaborating in a virtual world, w/ emergent economy, culture, religion, and government.

Humans are the only species to land on the moon, because we can cooperate at a vast scale. Can AI do the same?

Saeed Salehi (@ssn_io) 's Twitter Profile Photo

Attention is awesome! So we (Jordan Lei, Ari Benjamin, ML Group TU Berlin, Kording Lab 🦖, and #NeuroAi) built a biologically inspired model of visual attention and binding that can simultaneously learn and perform multiple attention tasks 🧠 Pre-print: doi.org/10.1101/2024.0… A 🧵...
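
As background, a sketch of standard scaled dot-product attention, the machine-learning mechanism the tweet nods to; the paper's biologically inspired model is of course richer than this.

```python
import numpy as np

# Scaled dot-product attention: outputs are value vectors weighted by
# query-key similarity. Shapes and data are illustrative.
rng = np.random.default_rng(0)
T, d = 6, 8                                       # sequence length, dim
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

scores = Q @ K.T / np.sqrt(d)                     # (T, T) similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
out = weights @ V                                 # (T, d) attended values
print(out.shape)
```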

Machine Learning in Science (@mackelab) 's Twitter Profile Photo

How can we train biophysical neuron models on data or tasks? We built Jaxley, a differentiable, GPU-based biophysics simulator, which makes this possible even when models have thousands of parameters! Led by Michael Deistler, collab with @CellTypist @ppjgoncalves biorxiv.org/content/10.110…
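
The core idea, sketched here in plain JAX rather than Jaxley's actual API: once a neuron simulation is written in a differentiable framework, gradients of a loss with respect to every biophysical parameter come from autodiff. The leaky-membrane model below is a deliberately tiny stand-in for a full biophysical model.

```python
import jax
import jax.numpy as jnp

def simulate(params, i_ext, dt=0.1, steps=200):
    """Euler-integrate a leaky membrane: c_m dV/dt = -g_l (V - e_l) + I."""
    def step(v, _):
        dv = (-params["g_l"] * (v - params["e_l"]) + i_ext) / params["c_m"]
        v = v + dt * dv
        return v, v
    _, vs = jax.lax.scan(step, params["e_l"], None, length=steps)
    return vs

def loss(params, v_target):
    return jnp.mean((simulate(params, i_ext=1.0) - v_target) ** 2)

params = {"g_l": 0.1, "e_l": -70.0, "c_m": 1.0}
v_target = simulate({"g_l": 0.2, "e_l": -70.0, "c_m": 1.0}, i_ext=1.0)
grads = jax.grad(loss)(params, v_target)
print(grads)        # a gradient for every biophysical parameter
```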

Keller Jordan (@kellerjordan0) 's Twitter Profile Photo

New CIFAR-10 speed record: 94% in 2.73 seconds on a single A100
Previous record: 3.09 seconds
Changelog: Implemented spectral gradient descent
github.com/KellerJordan/c…
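
One common reading of "spectral gradient descent" is to replace each weight matrix's gradient with its nearest semi-orthogonal matrix before stepping, as sketched below via an SVD; the record run's exact recipe may differ, so treat this as an assumption and see the linked repo.

```python
import torch

def spectral_update(weight, grad, lr):
    """Replace the gradient of a weight matrix by its nearest
    semi-orthogonal matrix U @ Vh (all singular values set to 1),
    then take a plain gradient step."""
    U, _, Vh = torch.linalg.svd(grad, full_matrices=False)
    weight -= lr * (U @ Vh)

W = torch.randn(64, 32)
G = torch.randn(64, 32)        # stand-in for a backprop gradient
spectral_update(W, G, lr=0.01)
```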

Andy Keller (@t_andy_keller) 's Twitter Profile Photo

In the physical world, almost all information is transmitted through traveling waves -- why should it be any different in your neural network? Super excited to share recent work with the brilliant Mozes Jacobs: "Traveling Waves Integrate Spatial Information Through Time" 1/14
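
A toy sketch of the intuition: arrange a recurrent state on a 1-D line of units and shift it at every step, and locally injected information is carried across space through time, i.e., a traveling wave. This is pure transport; the models in the paper are richer.

```python
import numpy as np

n_units, n_steps = 32, 8
h = np.zeros(n_units)
h[0] = 1.0                     # inject information at the left edge

for step in range(n_steps):
    h = np.roll(h, 1)          # shift right: pure transport, no coupling
    print(np.argmax(h), end=" ")  # the peak travels: 1 2 3 4 5 6 7 8
```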

Luma AI (@lumalabsai) 's Twitter Profile Photo

Today, we release Inductive Moment Matching (IMM): a new pre-training paradigm breaking the algorithmic ceiling of diffusion models. Higher sample quality. 10x more efficient. Single-stage, single network, stable training. Read more: lumalabs.ai/news/imm

Manu Halvagal (@manu_halvagal) 's Twitter Profile Photo

Happy to share that I successfully defended my PhD!! Thanks to everyone who’s helped me get through this journey :) Especially my advisor Friedemann Zenke and everyone in the lab. It’s been a great few years!

Julian Rossbroich (@j_rossbroich) 's Twitter Profile Photo

I've spent much of my PhD thinking about E/I balance, and our latest preprint represents the culmination of that journey. Huge thanks to Friedemann Zenke for guiding me. Looking forward to your thoughts & comments.

Reed Bender (@reedbndr) 's Twitter Profile Photo

What is "Life"...? In a new preprint with Michael Levin, Karina Kofman, and Blaise Agüera (@blaiseaguera.bsky.social), we used LLMs to map the semantic space emerging from 68 expert-provided definitions for "Life". Here's what we did, what we found, and what it means for what lives... 1/n

What is "Life"...?

In a new preprint with <a href="/drmichaellevin/">Michael Levin</a>, <a href="/karina__kofman/">Karina Kofman</a>, and <a href="/blaiseaguera/">Blaise Agüera (@blaiseaguera.bsky.social)</a>, we used LLMs to map the semantic space emerging from 68 expert-provided definitions for "Life".

Here's what we did, what we found, and what it means for what lives... 1/n
Stephane Deny (@stphtphsn1) 's Twitter Profile Photo

In a classic study of "mental rotation", Shepard and Metzler (1971) found that the time to compare two 3D objects built from cubes was proportional to their angular difference. But *what is going on in the brain* during this process? 🔗 Shepard & Metzler (1971): semanticscholar.org/paper/Mental-R…

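The classic finding, stated as a formula: RT ≈ a + b·angle, where the slope b can be read as an internal rotation rate. A sketch of the linear fit, on made-up numbers rather than the 1971 data:

```python
import numpy as np

angles = np.array([0, 40, 80, 120, 160])   # angular disparity (degrees)
rt = np.array([1.0, 1.8, 2.6, 3.5, 4.2])   # reaction times (s), synthetic

b, a = np.polyfit(angles, rt, deg=1)       # slope first, then intercept
print(f"RT ≈ {a:.2f} + {b:.4f} * angle")   # slope ~ mental rotation rate
```
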
Randall Balestriero (@randall_balestr) 's Twitter Profile Photo

JOIN US! Self Supervised Learning produces impressive foundation/world models... but we need a central/stable/reproducible codebase to quickly iterate and explore... incremental iteration on a 20k-line repo is ok... but exploring the unknown is what we need github.com/rbalestr-lab/s…

Frank Lanfranchi (@franklanfranchi) 's Twitter Profile Photo

The primate visual system is a marvel of nature, inspiring the convnet. But how did it evolve? Do all highly visual mammals possess a ‘ventral stream’—a hierarchy of brain areas with increasing selectivity for complex forms? In our new study, we tackled these questions using

Hyeyoung Shin (@shinehyeyoung) 's Twitter Profile Photo

We believe what we see, but we also see what we believe. This is why we see illusions: our prior knowledge of the sensory world tricks us into seeing things that aren't there. How does this happen in the brain? In this study... nature.com/articles/s4159…
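
A toy Bayesian-perception sketch of "seeing what we believe": the percept (the posterior mean) is pulled from the noisy sensory evidence toward the prior expectation. The numbers are illustrative, not from the study.

```python
# Gaussian prior (expectation) and Gaussian likelihood (noisy evidence);
# the posterior mean is their precision-weighted average.
prior_mean, prior_var = 0.0, 1.0    # what we expect to see
sense_mean, sense_var = 3.0, 4.0    # what the noisy senses report

w = (1 / sense_var) / (1 / sense_var + 1 / prior_var)
percept = w * sense_mean + (1 - w) * prior_mean
print(percept)                      # 0.6: pulled strongly toward the prior
```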