Will Decker (@jwilldecker)'s Twitter Profile
Will Decker

@jwilldecker

PhD student @GeorgiaTech 🐝 and LIT Lab. Interested in how brains and machines learn + know about the world and use language.

ID: 739132883764404225

Website: http://w-decker.github.io · Joined: 04-06-2016 16:33:06

89 Tweets

176 Followers

231 Following

Will Decker (@jwilldecker):

Just finished Tufte's "The Visual Display of Quantitative Information." Got me thinking of some figures/animations I find particularly informative (and pleasing to view). This one is especially cool, and one I often think about!

Catherine Chen (@cathychen23):

How does the human brain represent semantic information from different languages? Our new preprint suggests that bilingual language comprehension relies on shared semantic representations that are systematically modulated by each language! 1/n

MIT CSAIL (@mit_csail):

76 years ago this week Claude Shannon ushered in the field of information theory with his paper "A Mathematical Theory of Communication", which has been cited over 100,000 times: bit.ly/2H0ZxvR (v/SIAM)

Gašper Beguš (@begusgasper):

There are a lot of exciting new developments happening in the language sciences. Roger Levy, Kara Federmeier @CABlabUIUC, & Christopher Manning recently organized a wonderful U.S. National Science Foundation workshop on New Horizons in Language Science: Large Language Models, Language Structure, and the Cognitive and…

Sam Nastase (@samnastase):

Happy to see this work led by Zaid Zada now published in Neuron! We use LLM embeddings to capture word-by-word linguistic content transmitted from the speaker's brain to the listener's brain in real-time, face-to-face conversations: cell.com/neuron/fulltex…

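For readers outside the field, here is a minimal sketch of the generic encoding-model recipe this line of work uses: regress word-by-word LLM embeddings onto neural activity and score predictions on held-out words. All shapes, variable names, and the ridge estimator below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of a word-level encoding model: LLM embeddings -> brain
# activity. Synthetic stand-in data; not the published analysis code.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 500, 768, 64

# Per-word LLM embeddings and neural activity aligned to word onsets
# (e.g., high-gamma power per electrode) -- both random placeholders.
embeddings = rng.standard_normal((n_words, emb_dim))
activity = rng.standard_normal((n_words, n_electrodes))

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, activity, test_size=0.2, random_state=0)

# Cross-validated ridge regression: the usual choice when embedding
# dimensions are many and correlated.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Encoding score: correlation between predicted and observed activity,
# computed per electrode.
r = [np.corrcoef(pred[:, e], y_te[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean encoding r = {np.mean(r):.3f}")
```

The actual study presumably layers time-lagged features, speaker-to-listener alignment, and real conversational recordings on top of a regression core like this.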
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦 (@ev_fedorenko):

Thanks Nature Rev Neurosci for the chance to clarify that 1️⃣ the language network is not monolithic (we never said it was, but happy to emphasize); and 2️⃣ language network boundaries don't depend on a specific 'localizer' and can be recovered from task-free data: tinyurl.com/5y3rhfrh

Colton Casto (@_coltoncasto):

🚨 New paper! 🚨 My first (co-)first-authored paper is now out in Nature Human Behaviour! We show that neural populations in the language network differ in the size of their temporal receptive windows: rdcu.be/dR0sz. Co-led w/ Tamar Regev 1/ 🧵

Anna Ivanova (@neuranna):

Excited to have been named one of MIT Technology Review's 35 under 35! I am happy that, these days, language & human cognition are topics that the world cares deeply about (thanks to recent developments in AI). Not only are these topics impactful, they are also fun to study!

Ali Cohen (@aliocohen):

*TWO* job searches in Emory Psychology ‼️
Open rank, Neural Mechanisms of Behavior in Small Animal Systems: apply.interfolio.com/153594
Assoc/Asst Professor, Clinical Science: apply.interfolio.com/152692
And I'm recruiting a PhD student! Several opportunities to join our amazing dept ✨

Aran Nayebi (@aran_nayebi):

1/6 I usually don’t comment on these things, but Rylan Schaeffer et al.'s paper contains enough misconceptions that I thought it might be useful to address them. In short, effective dimensionality is not the whole story for model-brain linear regression, for several reasons:
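For context, "effective dimensionality" in this debate typically means the participation ratio of a representation's covariance eigenspectrum. A generic sketch of that quantity (not code from either paper; the activation matrix is a random placeholder):

```python
# Participation ratio: (sum of eigenvalues)^2 / (sum of squared
# eigenvalues) of the feature covariance. Illustrative only.
import numpy as np

def effective_dimensionality(X: np.ndarray) -> float:
    """X: (n_samples, n_features) matrix of model activations."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 256))  # toy activation matrix
print(f"ED ~ {effective_dimensionality(acts):.1f}")
```

The thread's broader claim is that this single number cannot, by itself, determine how well a model's features will linearly predict brain responses.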

Sasha Rush (@srush_nlp):

Ev Fedorenko's keynote at COLM: youtube.com/watch?v=8xS7tj… The talk is quite accessible for computer scientists interested in cognitive and neuro questions, and it touches on many of the themes shared by the two areas.

Thackery Brown @thackerybrown@sciencemastodon.com (@thackerybrown):

Graduate training opportunity! See thread... The Center for Research and Education in Navigation (CRaNE) is seeking a graduate student in our Cognition & Brain Sciences (CBS) Ph.D. Program in the School of Psychology at Georgia Institute of Technology.

Apurva Ratan Murty (@apurvaratan):

🚀 At the NeurIPS Conference tomorrow? Don't miss Nikolas McNeal and Mainak Deb's poster at the UniReps workshop on the adversarial sensitivity of vision encoding models of fMRI responses! A brief teaser about what they find.

Seungwook Han (@seungwookh):

🧩 Why do task vectors exist in pretrained LLMs? Our new research uncovers how transformers form internal abstractions and the mechanisms behind in-context learning (ICL).

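As background, a "task vector" in the prior literature this work builds on is typically the hidden state cached at the end of an in-context prompt, which can be patched into a zero-shot forward pass to induce the demonstrated task. A hedged sketch of that idea; the model, layer, and prompts are illustrative choices, not the paper's setup:

```python
# Sketch of task-vector extraction and patching (in the spirit of the
# task-vector literature). Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # which block's output to read and patch -- a free choice

icl_prompt = "hot -> cold\nbig -> small\nfast ->"
zero_shot = "tall ->"

# 1) Run the in-context prompt; cache the last-token hidden state at
#    the output of block LAYER (hidden_states[0] is the embeddings).
with torch.no_grad():
    hs = model(**tok(icl_prompt, return_tensors="pt"),
               output_hidden_states=True).hidden_states
task_vector = hs[LAYER + 1][0, -1]

# 2) Patch that vector into the last position of a zero-shot run.
def patch(module, inputs, output):
    output[0][0, -1] = task_vector
    return output

handle = model.transformer.h[LAYER].register_forward_hook(patch)
with torch.no_grad():
    logits = model(**tok(zero_shot, return_tensors="pt")).logits
handle.remove()
print(tok.decode(logits[0, -1].argmax().item()))  # ideally "short"-ish
```

The question the tweet's paper asks is why a single vector can carry the task at all, i.e., what internal abstraction the transformer has formed.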
Ida Momennejad (@criticalneuro):

Honored that a piece I wrote made it into the NYTimes. It's about how my mom's stroke changed my relationship to time, science, and nature. What a privilege to honor my mom in Modern Love. Below is a gift link. Let me know your thoughts 🙏🏼 nytimes.com/2024/12/20/sty…