Pablo Contreras Kallens (@pcontrerask)'s Twitter Profile
Pablo Contreras Kallens

@pcontrerask

Ph.D. candidate, Cornell Psychology, in the Cognitive Science of Language Lab. Serious account.

ID: 1310578687553789953

Website: http://www.contreraskallens.com · Joined: 28-09-2020 13:55:00

89 Tweets

128 Followers

56 Following

Raphaël Millière (@raphaelmilliere)'s Twitter Profile Photo

Another day, another opinion essay about ChatGPT in The New York Times. This time, Noam Chomsky and colleagues weigh in on the shortcomings of language models. Unfortunately, this is not the nuanced discussion one could have hoped for. 🧵 1/ nytimes.com/2023/03/08/opi…

Ross (@ross_dkm)'s Twitter Profile Photo

For the Friday evening crowd still mulling over Chomsky’s op-ed, here’s a key passage from our recent letter in CogSci Society. There are absolutely some things LLMs cannot do. But what they can do, they do very well - and that demands attention. onlinelibrary.wiley.com/share/6REPM3XF…

Ben Ambridge (@benambridge)'s Twitter Profile Photo

I've seen lots of threads about large language models (LLMs) and their implications for language acquisition BUT not many threads by language-acquisition specialists. So here's my two cents on how LLMs undermine SOME SPECIFIC PROPOSALS for acquisition of syntactic categories 1/n

Ben Ambridge (@benambridge)'s Twitter Profile Photo

In sum, the ideas that children (a) have innate syntactic categories and (b) NEED them because they can't construct them via distributional analyses alone are NOT straw men but real and influential proposals in the child language literature 7/n

Jeff Yoshimi (@jeffyoshimi)'s Twitter Profile Photo

I'm delighted to announce the publication of our free, open access book, "Horizons of Phenomenology", a collection of essays on the state of the field. A brief thread about the book, and the long and ultimately victorious struggle to publish it open access. 1/

Steven Elmlinger (@elmlingersteven)'s Twitter Profile Photo

How do infants learn to produce the consonant sounds of their ambient language? To find out, check out our CogSci proceedings paper “Statistical learning or phonological universals? Ambient language statistics guide consonant acquisition in four languages” A 🧵: /1

Trends in Cognitive Sciences (@trendscognsci)'s Twitter Profile Photo

Information density as a predictor of communication dynamics. Spotlight by Gary Lupyan, Pablo Contreras Kallens, & Rick Dale on recent @NatureHumanBehav work by Pete Aceves & James Evans authors.elsevier.com/a/1ixK34sIRvTA…

Morten H. Christiansen (@mh_christiansen)'s Twitter Profile Photo

Cognitive Science of Language Lab alum Pablo Contreras Kallens talks about how feedback is crucial for getting large language models to produce more human-like language output, such as making similar agreement errors and being sensitive to subtle semantic distinctions

Jeff Yoshimi (@jeffyoshimi)'s Twitter Profile Photo

I've been having a nice time talking to Deniz Cem Önduygu about maps and graphs of philosophical discourse, and thought I'd repost my bibliometric map of the phenomenology literature. Dots are authors, links are citations, and colors are clusters.
