Florian Mahner (@florianmahner) 's Twitter Profile
Florian Mahner

@florianmahner

PhD candidate with @martin_hebart at @MPI_CBS. Interested in artificial and biological vision. Also spent time @DondersInst, @bccn_berlin, @UniOsnabrueck.

ID: 1018752192617041920

Link: https://florianmahner.github.io · Joined: 16-07-2018 07:00:24

167 Tweets

114 Followers

325 Following

Daniel Geng (@dangengdg) 's Twitter Profile Photo

What do you see in these images? These are called hybrid images, originally proposed by Aude Oliva et al. They change appearance depending on size or viewing distance, and are just one kind of perceptual illusion that our method, Factorized Diffusion, can make.
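The classic hybrid-image construction (Oliva, Torralba & Schyns) combines the low spatial frequencies of one image with the high spatial frequencies of another, so the percept flips with viewing distance. Factorized Diffusion is a different method; the following is only a minimal NumPy sketch of the underlying frequency-splitting idea, with the circular `cutoff` radius chosen arbitrarily:

```python
import numpy as np

def lowpass(img, cutoff):
    """Keep only spatial frequencies within `cutoff` of the spectrum center."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def hybrid_image(img_a, img_b, cutoff=8):
    """Low frequencies of img_a (dominant from afar) plus
    high frequencies of img_b (dominant up close)."""
    return lowpass(img_a, cutoff) + (img_b - lowpass(img_b, cutoff))
```

Lower `cutoff` values push more of `img_b` into the "near" percept; the right value depends on image size and viewing distance.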

Thirza Dado (@thirzadado) 's Twitter Profile Photo

Our new preprint "PAM: Predictive attention mechanism for neural decoding of visual perception" introduces a novel approach that learns output queries. Beneficial if queries are n/a, as in neural decoding! w/ Lynn Le Artificial Cognitive Systems Dr. Yağmur Güçlütürk Umut Güçlü biorxiv.org/content/10.110…

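The key idea of "learned output queries" can be sketched generically: in cross-attention, queries usually come from a decoder input, but when none exists (as in neural decoding) they can be trainable parameters instead. This is not PAM's actual implementation, just a minimal NumPy illustration; all names and sizes (`d`, `n_queries`) are made up:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all inputs."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (n_queries, n_inputs)
    return softmax(scores, axis=-1) @ values  # (n_queries, d)

rng = np.random.default_rng(0)
d, n_queries, n_inputs = 16, 4, 100

# No natural queries are available, so initialize them randomly and treat
# them as weights (an nn.Parameter in PyTorch), updated by backprop.
learned_queries = rng.normal(size=(n_queries, d))
features = rng.normal(size=(n_inputs, d))  # e.g. encoded brain responses

out = cross_attention(learned_queries, features, features)
```

Each learned query then specializes, through training, in extracting one kind of output from the neural features.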
Ori Press (@ori_press) 's Twitter Profile Photo

Can AI help you cite papers?
We built the CiteME benchmark to answer that.

Given the text:
"We evaluate our model on [CITATION], a dataset consisting of black and white handwritten digits"
The answer is: MNIST

CiteME has 130 questions; our best agent gets just 35.3% acc (1/5)🧵
Sander van Bree (@sandervanbree) 's Twitter Profile Photo

Tuesday 27th (11:30 Aussie time) I'll share some initial results on my project on neural representations in the macaque visual system. Work with Martin Hebart & others. Check out my talk if you're at biomag2024. And I'll be around in Sydney until 10th Sept if you want to chat!

MPI für Kognitions- & Neurowissenschaften (@mpi_cbs) 's Twitter Profile Photo

Humans think in many #dimensions at a time 🧠📷 When seeing objects, our #brain uses not just one, but a multitude of behaviorally relevant dimensions, as Oliver Contier & Martin Hebart (Universität Gießen) show in Nature Human Behaviour: tinyurl.com/426j696e
Oliver Contier (@olivercontier) 's Twitter Profile Photo

Hugely excited that this work with Martin Hebart and Chris Baker is now out in Nature Human Behaviour!!! By moving from a category-focused to a behaviour-focused model, we identified behaviourally relevant object information throughout visual cortex. nature.com/articles/s4156…

Elvis Dohmatob (@dohmatobelvis) 's Twitter Profile Photo

1/n Introducing our new preprint: Strong Model Collapse arxiv.org/abs/2410.04840, wherein we show that within the "scaling laws" paradigm, even 1% bad / synthetic data in the training corpus might lead to model collapse, an eventual critical flattening or even degradation of model performance.
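The collapse phenomenon has a well-known cartoon version: repeatedly fit a model to samples drawn from the previous generation's model, and the fitted distribution degenerates. The toy loop below (my illustration, not the paper's analysis, and a fully-synthetic recursion rather than the paper's 1%-contamination regime) fits a Gaussian across generations; the estimated variance drifts toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
n = 10                 # tiny samples per generation exaggerate the effect

variances = []
for generation in range(200):
    data = rng.normal(mu, sigma, size=n)  # sample from the previous "model"
    mu, sigma = data.mean(), data.std()   # fit the next-generation "model"
    variances.append(sigma ** 2)
```

Each generation's sampling noise compounds, so the fitted variance shrinks over time; the preprint's point is that even small synthetic fractions can trigger related degradation at scale.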
hardmaru (@hardmaru) 's Twitter Profile Photo

Intelligence at the Edge of Chaos arxiv.org/abs/2410.02536 They study the behavior of LLMs trained on 1D cellular automata, and examine their behavior when the CAs are near “edge of chaos” regions. The paper’s ideas still need to be further refined IMO, but a fun paper to read!
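Elementary (1D, two-state, nearest-neighbour) cellular automata of the kind used as training data are easy to generate. A minimal sketch with wrap-around boundaries; the rule number, width, and step count here are arbitrary choices, not taken from the paper:

```python
def ca_step(cells, rule=110):
    """One update of an elementary cellular automaton with periodic boundaries.

    `rule` is the Wolfram rule number: bit k of `rule` gives the next state
    for the 3-cell neighbourhood whose bits spell k as (left, center, right).
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_ca(width=64, steps=32, rule=110):
    """Evolve from a single live cell and return the full space-time grid."""
    row = [0] * width
    row[width // 2] = 1
    grid = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        grid.append(row)
    return grid
```

Rule 110 sits in Wolfram's complex class; sweeping `rule` from simple (e.g. 0) to chaotic regimes is how one varies the "edge of chaos" complexity of the training data.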
Rylan Schaeffer (@rylanschaeffer) 's Twitter Profile Photo

My 2nd to last #neuroscience paper will appear at UniReps!! 🧠🧠 Maximizing Neural Regression Scores May Not Identify Good Models of the Brain 🧠🧠 w/ @KhonaMikail Mitchell Ostrow Brando Miranda Sanmi Koyejo Answering a puzzle 2 years in the making openreview.net/forum?id=vbtj0… 1/12

Zyphra (@zyphraai) 's Twitter Profile Photo

Today, in collaboration with @NvidiaAI, we bring you Zamba2-7B – a hybrid-SSM model that outperforms Mistral, Gemma, Llama3 & other leading models in both quality and speed. Zamba2-7B is the leading model in the ≤8B weight class. 👇See more in the thread below👇
Nathan Cloos (@nacloos) 's Twitter Profile Photo

⁉️What do model-neural similarity scores tell us? To systematically explore this for different metrics, we develop new numerical tools & analytics to characterize what drives similarity scores and what constitutes a "good" score. (1/10)

Paolo Papale (@paolo_papale) 's Twitter Profile Photo

🎉 This is finally out today in PNAS! With a new title: V1 neurons are tuned to perceptual borders in natural scenes. And with additional replication in anesthetized monkeys. 📖 Open access paper here: pnas.org/doi/10.1073/pn…

Sander van Bree (@sandervanbree) 's Twitter Profile Photo

What does it mean when neural network dimensions converge? In this blog post I explore possible theoretical implications: sandervanbree.com/posts/3215-whe…

Martin Hebart (@martin_hebart) 's Twitter Profile Photo

How much does occipitotemporal cortex "care" about individual object categories? In a study led by Marco Badwal we addressed this question using multisession fMRI with a very homogeneous stimulus class: land mammals. Paper: doi.org/10.1523/JNEURO… Preprint: biorxiv.org/content/10.110…

Lynn Le (@lynnle_ai) 's Twitter Profile Photo

Our new work on reconstructing visual perception is online as a #NeurIPS paper! "MonkeySee: Space-time-resolved reconstructions of natural images from macaque multi-unit activity" 🧠📷: openreview.net/pdf?id=OWwdlxw… Looking forward to connecting in Vancouver at #NeurIPS2024 !

Eghbal Hosseini (@eghbal_hosseini) 's Twitter Profile Photo

Why do diverse ANNs resemble brain representations? Check out our new paper with Colton Casto, Noga Zaslavsky, Colin Conwell, Mark Richardson MD PhD, & Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦 on “Universality of representation in biological and artificial neural networks.” 🧠🤖 tinyurl.com/yckndmjt (1/n)

Sander van Bree (@sandervanbree) 's Twitter Profile Photo

Our review on the theoretical status of oscillations and field potentials is out! What are their causal effects, and what can electrophysiology signals reveal about how the brain works? w/ Dan Levenstein Matt Krause Bradley Voytek Richard Gao cell.com/trends/cogniti…

Luca Schulze Buschoff (@lucaschubu) 's Twitter Profile Photo

Our paper (with Elif Akata, Matthias Bethge, Helmholtz Institute for Human-Centered AI) on visual cognition in multimodal large language models is now out in Nature Machine Intelligence. We find that VLMs fall short of human capabilities in intuitive physics, causal reasoning, and intuitive psychology. nature.com/articles/s4225…

Mariya Toneva (@mtoneva1) 's Twitter Profile Photo

Meenakshi Khosla Indeed! We in fact recently showed this in language: optimizing a speech language model to predict brain recordings of people listening to stories improves the models' performance on downstream semantic tasks. Preprint: arxiv.org/abs/2410.09230 (to appear at ICLR 2025)