Pablo Marcos (@jazzmaniatico)'s Twitter Profile
Pablo Marcos

@jazzmaniatico

PhD student | DL & Cognitive neuroscience

ID: 2240577112

Link: https://github.com/pablomm · Joined: 23-12-2013 22:32:40

60.6K Tweets

22 Followers

132 Following

Tim Henke (tɪm 'ɦɛŋ.kə) @timhenke.bsky.social (@timhenke9)'s Twitter Profile Photo

Definition: E is evidence of A if P(A|E) > P(A).
Proposition: absence of evidence is evidence of absence.
Proof: assume E is evidence of A. Then P(A)·(P(E) + P(¬E)) = P(A) = P(A|E)·P(E) + P(A|¬E)·P(¬E). Since P(A|E) > P(A), it follows that P(A|¬E) < P(A) to compensate. Hence P(¬A|¬E) > P(¬A).
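Not part of the original tweet, but the argument is easy to sanity-check numerically; the joint probabilities below are arbitrary example values, chosen only so that E is evidence of A.

```python
# Numeric check of "absence of evidence is evidence of absence".
# The joint probabilities are arbitrary illustrative values, not from the tweet.
p_A_E = 0.30        # P(A and E)
p_A_notE = 0.10     # P(A and ¬E)
p_notA_E = 0.20     # P(¬A and E)
p_notA_notE = 0.40  # P(¬A and ¬E)

p_A = p_A_E + p_A_notE                  # 0.40
p_E = p_A_E + p_notA_E                  # 0.50
p_A_given_E = p_A_E / p_E               # 0.60 > P(A): E is evidence of A
p_A_given_notE = p_A_notE / (1 - p_E)   # 0.20 < P(A): the "compensation" step
p_notA_given_notE = 1 - p_A_given_notE  # 0.80 > P(¬A) = 0.60

assert p_A_given_E > p_A
assert p_A_given_notE < p_A
assert p_notA_given_notE > 1 - p_A      # absence of evidence is evidence of absence
```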

Almost Sure (@almost_sure)'s Twitter Profile Photo

You’re probably familiar with the Fourier transform. But did you know it can be viewed as a 90° rotation in the time-frequency domain, and that we can rotate by any angle via the fractional Fourier transform? This explains why applying the FT twice gives a reflection (rotation by 180°).

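Not from the thread, but the 180° claim for the ordinary (integer-order) transform is easy to check with the DFT: applying it twice returns the input reversed modulo N, up to a factor of N.

```python
import numpy as np

# Applying the DFT twice reverses the signal (up to a factor of N), matching the
# "FT = 90° rotation, so FT twice = 180° rotation = reflection" picture.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
n = len(x)

twice = np.fft.fft(np.fft.fft(x)) / n   # normalize away the factor of N
reflected = x[(-np.arange(n)) % n]      # x[0], x[7], x[6], ..., x[1]

assert np.allclose(twice, reflected)
print(reflected)  # [1. 8. 7. 6. 5. 4. 3. 2.]
```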
David Rowland (@davidcrowland)'s Twitter Profile Photo

Here it is! A huge milestone in neuroscience – the complete connectome of the fly brain from the FlyWire team (FlyWire). In this special issue, we have 9 research articles. I’ll try to summarize them in a couple threads starting here 🧵

Richard Gao (@_rdgao)'s Twitter Profile Photo

neuroscience paper in 2045: "we recorded 5 million neurons in V1 with Neuropixels 9.0 probes while head-fixed mice watched Gabor flickers..."

Jeremy Bernstein (@jxbz)'s Twitter Profile Photo

This is a beautiful illustration of an apparent paradox in deep learning: ~the weights don't move~. I think we resolved this paradox in prior work, so I just want to share our perspective. And before you ask: yes, it's a question of norms 😅 (1/6) x.com/norabelrose/st…
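As a toy illustration of the "question of norms" point (my own sketch, not the linked work's analysis): whether an update looks small depends on which norm you measure it in. The same rank-1 update can be negligible relative to a large matrix's Frobenius norm while being a sizeable fraction of its spectral norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
W = rng.standard_normal((n, n)) / np.sqrt(n)  # toy "layer": ||W||_F ≈ sqrt(n), ||W||_2 ≈ 2

# Rank-1 update whose Frobenius norm equals its spectral norm (both 1).
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
dW = np.outer(u, v)

rel_fro = np.linalg.norm(dW, "fro") / np.linalg.norm(W, "fro")
rel_spec = np.linalg.norm(dW, 2) / np.linalg.norm(W, 2)

print(f"relative change in Frobenius norm: {rel_fro:.3f}")  # ≈ 1/sqrt(n): "weights barely move"
print(f"relative change in spectral norm:  {rel_spec:.3f}")  # ≈ 0.5: clearly not negligible
```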

Hugues Van Assel (@hugues_va)'s Twitter Profile Photo

Lots of discussion around JEPA and why latent space prediction works better than input space (e.g., LLMs) for certain modalities. But no one has formalized WHY. The answer lies in whether statistically dominant features are semantically meaningful. NeurIPS Conference spotlight 🧵👇

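A toy numeric sketch of the "statistically dominant vs. semantically meaningful" distinction (my own illustration, not the paper's setup): when a high-variance nuisance component dominates the input, an input-space reconstruction loss is spent almost entirely on that nuisance, while a loss computed in a latent space that keeps only the semantic coordinate is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy data: a low-variance "semantic" feature and a high-variance nuisance feature.
semantic = rng.standard_normal(n) * 0.1
nuisance = rng.standard_normal(n) * 10.0
x = np.stack([semantic, nuisance], axis=1)

# A predictor that recovers the semantic part perfectly but ignores the nuisance.
x_hat = np.stack([semantic, np.zeros(n)], axis=1)

# Input-space MSE: dominated entirely by the nuisance dimension.
input_space_mse = np.mean((x - x_hat) ** 2)

# "Latent"-space MSE, where the latent keeps only the semantic coordinate.
latent_space_mse = np.mean((x[:, :1] - x_hat[:, :1]) ** 2)

print(f"input-space MSE:  {input_space_mse:.3f}")   # ~50: the nuisance dominates the loss
print(f"latent-space MSE: {latent_space_mse:.3f}")  # ~0: the semantic feature is captured
```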
Jonathan Gorard (@getjonwithit)'s Twitter Profile Photo

Here's an important question that we must contend with increasingly as AI-for-Mathematics pipelines become more commonplace: Why do we care about solving hard problems? Almost always, the answer is *not* because we particularly want the hard problem to be solved. (1/12)

Idan Beck (@idanbeck)'s Twitter Profile Photo

They hard-coded the variance, meaning the VAE encoder only predicts the mean of the latent distribution; they then use a scaled identity covariance for the reparameterization trick. Bingo bango: no more instability, and you can train everything end to end. Salimans strikes again!
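A minimal PyTorch sketch of the trick as I read the tweet (module name, architecture, and the sigma value are my own placeholders, not the authors' code): the encoder predicts only the mean, and the reparameterization uses a hard-coded scaled-identity covariance.

```python
import torch
import torch.nn as nn

class FixedVarianceEncoder(nn.Module):
    """Toy VAE encoder that predicts only the latent mean; the covariance is
    hard-coded as sigma^2 * I instead of being predicted (illustrative sketch)."""

    def __init__(self, in_dim: int, latent_dim: int, sigma: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
        self.sigma = sigma  # hard-coded std; an assumed value, not from the tweet

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.net(x)
        eps = torch.randn_like(mu)
        return mu + self.sigma * eps  # reparameterization with scaled identity covariance


# Usage: gradients flow through mu only; there is no log-variance head to destabilize training.
enc = FixedVarianceEncoder(in_dim=784, latent_dim=32)
z = enc(torch.randn(8, 784))
print(z.shape)  # torch.Size([8, 32])
```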