talia konkle (@talia_konkle)'s Twitter Profile
talia konkle

@talia_konkle

Visual cognitive computational neuroscientist. Professor, Harvard University.

ID: 25995304

Link: http://konklab.fas.harvard.edu | Joined: 23-03-2009 12:57:55

1.1K Tweets

2.2K Followers

588 Following

Ken Deng (@llurennn):

🎉 We present DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow, introducing a flow-based 3D generative model for geometry refinement. Project Page: detailgen3d.github.io/DetailGen3D GitHub Code: github.com/VAST-AI-Resear… Hugging Face Demo: huggingface.co/spaces/VAST-AI…
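The tweet doesn't spell out the mechanics, but sampling a flow-based generative model typically means numerically integrating a learned velocity field from the coarse input toward the refined output. A minimal Euler-integration sketch of generic flow sampling (all names hypothetical; not the DetailGen3D code):

```python
import torch

@torch.no_grad()
def refine_with_flow(v_theta, x_coarse, n_steps=50):
    """Euler-integrate a learned velocity field v_theta(x, t) over t in
    [0, 1], starting from the coarse geometry (generic flow-matching
    sampling; hypothetical stand-in for a refinement model)."""
    x = x_coarse.clone()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + v_theta(x, t) * dt  # one step along the flow
    return x
```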

ammar m (@amarvi_):

**ecstatic** to share our ICLR 2025 paper: sparse components distinguish visual pathways & their alignment to neural networks, with Nancy Kanwisher and Meenakshi Khosla (openreview.net/forum?id=IqHeD…) 1/n

Andrew Lampinen (@andrewlampinen):

How do language models generalize from information they learn in-context vs. via finetuning? We show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. Thread: 1/

Kenneth Li (@ke_li_2021):

🧵1/ Everyone says toxic data = bad models. But what if more toxic data could help us build less toxic models? Our new paper explores this paradox. Here’s what we found 👇

Hafez Ghaemi (@hafezghm):

🚨 Preprint Alert 🚀 📄 seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models arxiv.org/abs/2505.03176 Can we simultaneously learn both transformation-invariant and transformation-equivariant representations with self-supervised learning (SSL)?

Shahab Bakhtiari (@shahabbakht):

Check out our new paper! Vision models often struggle with learning both transformation-invariant and -equivariant representations at the same time. Hafez Ghaemi @ ICML 2025 shows that self-supervised prediction with proper inductive biases achieves both simultaneously.
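To make the tension concrete: a toy joint objective can pull the representations of two augmented views together (invariance) while a predictor conditioned on the transformation parameters must map one view's representation onto the other's (equivariance), so the transformation cannot be fully discarded. A minimal sketch of such a pair of losses; `encoder` and `predictor` are hypothetical placeholders, not the seq-JEPA architecture:

```python
import torch.nn.functional as F

def inv_equiv_losses(encoder, predictor, x, x_aug, t_params):
    """Toy joint objective: an invariance term pulls the two views'
    representations together, while an equivariance term asks a
    transformation-conditioned predictor to map z onto z_aug.
    The two terms compete, which is exactly the difficulty the
    tweets describe. (Hypothetical sketch, not seq-JEPA.)"""
    z, z_aug = encoder(x), encoder(x_aug)
    loss_inv = F.mse_loss(z, z_aug.detach())         # discard the transformation
    z_pred = predictor(z, t_params)                  # condition on the action params
    loss_equiv = F.mse_loss(z_pred, z_aug.detach())  # ...but keep it recoverable
    return loss_inv, loss_equiv
```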

Rishi Jha (@rishi_d_jha):

I’m stoked to share our new paper: “Harnessing the Universal Geometry of Embeddings” with jack morris, Collin Zhang, and Vitaly Shmatikov. We present the first method to translate text embeddings across different spaces without any paired data or encoders. Here's why we're excited: 🧵👇🏾

Phillip Isola (@phillip_isola):

Impressive results! This paper incorporates so many of my favorite things: representational convergence, GANs, cycle-consistency, unpaired translation, etc.
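For intuition on a couple of those ingredients, here is what a generic cycle-consistency objective for unpaired translation between two embedding spaces looks like (a simplified sketch in the spirit of unpaired translation, not the paper's actual method; `f_ab` and `f_ba` are hypothetical translator networks, and the adversarial term that aligns distributions is omitted):

```python
import torch.nn.functional as F

def cycle_losses(f_ab, f_ba, z_a, z_b):
    """Generic unpaired-translation objective: translate A-space
    embeddings to B-space and back (and vice versa), penalizing the
    round-trip error. No paired (z_a, z_b) examples are needed."""
    loss_cycle_a = F.mse_loss(f_ba(f_ab(z_a)), z_a)  # A -> B -> A
    loss_cycle_b = F.mse_loss(f_ab(f_ba(z_b)), z_b)  # B -> A -> B
    return loss_cycle_a + loss_cycle_b
```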

Hannes Mehrer (@hannesmehrer):

Announcement: Workshop at #CCN2025 🧠 Modeling the Physical Brain: Spatial Organization & Biophysical Constraints 🗓️ Monday, Aug 11 | 🕦 11:30–18:00 CET | 📍 Room A2.07 🔗 Register: tinyurl.com/CCN-physical-b… #NeuroAI CogCompNeuro

Fenil Doshi (@fenildoshi009):

🧵 What if two images have the same local parts but represent different global shapes purely through part arrangement? Humans can spot the difference instantly! The question is can vision models do the same? 1/15
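A toy version of such stimuli is easy to construct: paste the identical local part at different global anchor points, so two images share all their parts and differ only in arrangement. A NumPy sketch (hypothetical; not the paper's stimulus code):

```python
import numpy as np

def compose(part, positions, canvas=128):
    """Paste the same local part at each (row, col) anchor; two calls
    with different anchors share local parts but form different
    global shapes."""
    img = np.zeros((canvas, canvas))
    h, w = part.shape
    for r, c in positions:
        img[r:r + h, c:c + w] = np.maximum(img[r:r + h, c:c + w], part)
    return img

part = np.ones((10, 10))  # the identical local element
img_square = compose(part, [(30, 30), (30, 80), (80, 30), (80, 80)])  # square layout
img_line = compose(part, [(55, 10), (55, 45), (55, 80), (55, 110)])   # collinear layout
```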

Fenil Doshi (@fenildoshi009):

This work was a wonderful collaboration with Thomas Fel, talia konkle, and George Alvarez. 🔗 Check out the paper and project for more: Project Page: fenildoshi.com/configural-sha… Paper: arxiv.org/abs/2507.00493 15/15

Stephane Deny (@stphtphsn1):

Beautiful work (and thread)! They revisit the well-known shape-vs-texture bias, this time with objects made of the same subparts. With ablations they confirm the intuition that long-range attention mechanisms are essential for transformers to "see" the global shape in the picture.

Symmetry and Geometry in Neural Representations (@neur_reps):

📢 Call for Papers: NeurReps 2025 ‼️‼️‼️ 🧠 Submit your research on symmetry, geometry, and topology in artificial and biological neural networks. Two tracks: Proceedings (9 pages) and Extended Abstract (4 pages). Deadline: Aug 22, 2025. neurreps.org/call-for-papers

Thomas Fel (@napoolar):

🧠 Submit to CogInterp @ NeurIPS 2025! Bridging AI & cognitive science to understand how models think, reason & represent. CFP + details 👉 coginterp.github.io/neurips2025/

Tyler Zhu (@tyleryzhu):

All this talk about world models but how strong are their perception abilities really? Can they track w/ occlusions, reason over 1hr+ videos, or predict physical scenarios? Test your models in the 3rd Perception Test Challenge at #ICCV2025 w/ prizes up to 50k EUR! DDL: 6 Oct 25

Thomas Fel (@napoolar):

Great excuse to share something I really love: 1-Lipschitz nets. They give clean theory, certs for robustness, the right loss for W-GANs, even nicer grads for explainability!! Yet are still niche. Here’s a speed-run through some of my favorite papers in the field. 🧵👇
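For readers new to the area: if every layer has operator norm at most 1 and the activations are 1-Lipschitz, the composed network is 1-Lipschitz, and a logit margin of m certifies that no L2 perturbation smaller than m/√2 can change the top-1 prediction. A minimal sketch with PyTorch's spectral normalization (a standard construction, not taken from the thread):

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Spectral norm constrains each weight matrix's largest singular value
# to 1; with 1-Lipschitz activations (ReLU here), the whole network is
# 1-Lipschitz by composition.
net = nn.Sequential(
    spectral_norm(nn.Linear(784, 256)), nn.ReLU(),
    spectral_norm(nn.Linear(256, 10)),
)

def certified_radius(logits: torch.Tensor) -> torch.Tensor:
    """For a 1-Lipschitz classifier, no L2 input perturbation smaller
    than (top1 - top2) / sqrt(2) can flip the prediction."""
    top2 = logits.topk(2, dim=-1).values
    return (top2[..., 0] - top2[..., 1]) / (2 ** 0.5)
```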

Andy Keller (@t_andy_keller):

Why do video models handle motion so poorly? It might be lack of motion equivariance. Very excited to introduce: Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: arxiv.org/abs/2507.14793 Blog: kempnerinstitute.harvard.edu/research/deepe… 1/🧵
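One concrete reading of the property: transforming the input sequence with a constant-velocity flow should transform the hidden states the same way. A hedged diagnostic sketch for translation flows (`model` is a hypothetical sequence encoder mapping (T, C, H, W) frames to same-shaped feature maps, not the FERNN code):

```python
import torch

def flow_equivariance_error(model, frames, v=2):
    """Relative error between 'encode the moving sequence' and
    'encode, then move the features': ~0 for a flow-equivariant
    model under horizontal translation at velocity v."""
    T = frames.shape[0]
    moving = torch.stack([frames[t].roll(v * t, dims=-1) for t in range(T)])
    h_moving = model(moving)   # f(shift(x_t, v*t))
    h_static = model(frames)   # f(x_t), shifted after encoding below
    h_shifted = torch.stack([h_static[t].roll(v * t, dims=-1) for t in range(T)])
    return (h_moving - h_shifted).norm() / h_shifted.norm()
```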

Jack Lindsey (@jack_w_lindsey):

Attention is all you need - but how does it work? In our new paper, we take a big step towards understanding it. We developed a way to integrate attention into our previous circuit-tracing framework (attribution graphs), and it's already turning up fascinating stuff! 🧵