Uri Cohen, PhD (@uricohen42) 's Twitter Profile
Uri Cohen, PhD

@uricohen42

Theoretical computational neuroscience postdoc at Cambridge University (CBL lab); PhD from the Hebrew University; also @uricohen42.bsky.social

ID: 1130588521218101248

Link: https://uricohen.github.io/ | Joined: 20-05-2019 21:38:03

902 Tweets

1.1K Followers

2.2K Following

Alex Atanasov (@abatanasov) 's Twitter Profile Photo

1/n I'm very excited to present this Spotlight. It was one of the more creative projects of my PhD, and also the last one with Blake Bordelon ☕️🧪👨‍💻 & Cengiz Pehlevan, the best coauthors you can have :) Come by this afternoon to learn "How Feature Learning Can Improve Neural Scaling Laws."

Arabs in Neuroscience (AiN) (@arabsinneuro) 's Twitter Profile Photo

It's that time of the year again... Applications are open for students and teaching assistants for Introduction to Computational Neuroscience 2025 summer school. Applications are open till May 15th. Apply now and RT! #ArabCompNeuro25 #neuroscience #education

Dmitry Krotov (@dimakrotov) 's Twitter Profile Photo

Nice article! I appreciate that it mentions my work and the work of my students. I want to add to it. It is true that there is some inspiration from spin glasses, but Hopfield is much bigger than spin glasses. The key ideas that resurrected artificial neural networks in 1982

Hadi Vafaii (@hadivafaii) 's Twitter Profile Photo

Elegant theoretical derivations are exclusive to physics. Right?? Wrong!

In a new preprint, we:
✅ "Derive" a spiking recurrent network from variational principles
✅ Show it does amazing things like out-of-distribution generalization
👉 [1/n] 🧵

w/ co-lead Dekel Galor & Jake Yates

Physical Review X (@physrevx) 's Twitter Profile Photo

A minimax entropy model accurately predicts large-scale neural activity in the data from over 1,000 mouse #neurons, demonstrating that a large amount of information can be compressed into a small number of correlations. Check it out: go.aps.org/43jOIfI

Anil Seth (@anilkseth) 's Twitter Profile Photo

1/3 Geoffrey Hinton once said that the future depends on some graduate student being suspicious of everything he says (via Lex Fridman). He also said that it was impossible to find biologically plausible approaches to backprop that scale well: radical.vc/geoffrey-hinto….

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ (@gm8xx8) 's Twitter Profile Photo

Data Mixing Can Induce Phase Transitions in Knowledge Acquisition

A CLEAN, FORMAL BREAKDOWN OF WHY YOUR 7B LLM LEARNS NOTHING FROM HIGH-QUALITY DATA

This paper reveals phase transitions in factual memorization

Avi Amit (@aviamit26) 's Twitter Profile Photo

1. ืื”ื•ื“ ื‘ืจืง ืขืฉื” ืืช ื›ืœ ื—ื™ื™ื• ืœืžืขืŸ ื‘ื™ื˜ื—ื•ืŸ ืžื“ื™ื ืช ื™ืฉืจืืœ. ื”ืื™ืฉ ืฉื”ื™ื” ืจืืฉ ืืž''ืŸ, ืจืžื˜ื›''ืœ, ืฉืจ ื‘ื™ื˜ื—ื•ืŸ ื•ืจืืฉ ืžืžืฉืœื” (ื•ื’ื ื”ืงืฆื™ืŸ ื”ืžืขื•ื˜ืจ ื‘ื™ื•ืชืจ ืื™ ืคืขื), ื—ื™ ื•ื ื•ืฉื ื‘ื™ื˜ื—ื•ืŸ. ืื”ื•ื“ ืฉื ื™ืื•ืจืกื•ืŸ ืขืฉื” ืืช ื›ืœ ื—ื™ื™ื• ืœืžืขืŸ ืงื”ื™ืœื™ื™ืช ื”ืžื•ื“ื™ืขื™ืŸ. ื”ื•ื ืฆืžื— ื‘-8200, ื–ื›ื” ื‘ืคืจืก ื‘ื™ื˜ื—ื•ืŸ ื™ืฉืจืืœ ื•ืฆื™ื•ืŸ ืœืฉื‘ื— ืžื”ืจืžื˜ื›''ืœ ื•ื”ื’ื™ืข ืœืคืงื“ ืขืœ

1. ืื”ื•ื“ ื‘ืจืง ืขืฉื” ืืช ื›ืœ ื—ื™ื™ื• ืœืžืขืŸ ื‘ื™ื˜ื—ื•ืŸ ืžื“ื™ื ืช ื™ืฉืจืืœ. ื”ืื™ืฉ ืฉื”ื™ื” ืจืืฉ ืืž''ืŸ, ืจืžื˜ื›''ืœ, ืฉืจ ื‘ื™ื˜ื—ื•ืŸ ื•ืจืืฉ ืžืžืฉืœื” (ื•ื’ื ื”ืงืฆื™ืŸ ื”ืžืขื•ื˜ืจ ื‘ื™ื•ืชืจ ืื™ ืคืขื), ื—ื™ ื•ื ื•ืฉื ื‘ื™ื˜ื—ื•ืŸ. 
ืื”ื•ื“ ืฉื ื™ืื•ืจืกื•ืŸ ืขืฉื” ืืช ื›ืœ ื—ื™ื™ื• ืœืžืขืŸ ืงื”ื™ืœื™ื™ืช ื”ืžื•ื“ื™ืขื™ืŸ. ื”ื•ื ืฆืžื— ื‘-8200, ื–ื›ื” ื‘ืคืจืก ื‘ื™ื˜ื—ื•ืŸ ื™ืฉืจืืœ ื•ืฆื™ื•ืŸ ืœืฉื‘ื— ืžื”ืจืžื˜ื›''ืœ ื•ื”ื’ื™ืข ืœืคืงื“ ืขืœ
Ravid Shwartz Ziv (@ziv_ravid) 's Twitter Profile Photo

You know all those arguments that LLMs think like humans? Turns out it's not true. 🧠 In our paper "From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning" we test it by checking if LLMs form concepts the same way humans do. Yann LeCun Chen Shani Dan Jurafsky

Andrew Saxe (@saxelab) 's Twitter Profile Photo

How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper at the ICML Conference: "Training Dynamics of In-Context Learning in Linear Attention" arxiv.org/abs/2501.16265. Led by Yedi Zhang with Aaditya Singh and Peter Latham

Uri Cohen, PhD (@uricohen42) 's Twitter Profile Photo

He who goes to sleep with dogs wakes up with fleas

Danyal (@danakarca) 's Twitter Profile Photo

Incredibly excited for the UK Neural Computation Conference 2025 @ Imperial College London, 9th - 11th July (main event 10th - 11th) 🇬🇧. World leading scientists working in brain computation - from advanced experimental neuroscience to AI and mathematical modelling (and all

Sagol School of Neuroscience (@of_sagol) 's Twitter Profile Photo

You are invited to read about autodidacticism, about a tireless curiosity to understand the human brain, about the responses from the scientific community (including those of Prof. Ilana Gozes and Prof. Boaz Barak), and above all about the desire to put one man's own brain to work, a moment before he no longer could. Link to the article in 'Haaretz': haaretz.co.il/magazine/2025-…

Saul Sadka (@saul_sadka) 's Twitter Profile Photo

On set today with Inbal Rabin-Lieberman, the 26-year-old heroine whose quick thinking and IDF weapons training meant that her village, Nir Am, was the only one along the Gaza border where none of the residents was killed. She's watching today as they film a reenactment of her

Rui Ponte Costa (@somnirons) 's Twitter Profile Photo

Pleased to say that our story of how a theory of self-supervised learning in cortical layers accounts for several experimental observations is now out 🎉 nature.com/articles/s4146…