Dezhi Luo (@carrot0817_) 's Twitter Profile
Dezhi Luo

@carrot0817_

foundations of cogsci & AI @UMich @UCL|prev. @SchoolofPPLS

ID: 1150916612460908544

Link: http://bsky.app/profile/carrot0817.bsky.social · Joined: 15-07-2019 23:54:38

89 Tweets

38 Followers

345 Following

megan peters 🧠 (@meganakpeters) 's Twitter Profile Photo

i have been mulling whether/how to say something about this since the press releases started coming out 4 days ago, but feel i need to say something now. i am deeply disappointed in how the media has covered recent findings in consciousness science. 🧵👇

FemmeAndroid (@femmeandroid) 's Twitter Profile Photo

Well, the website is down. I’ve confirmed that this isn’t a test. It didn’t get a non-spam post for 24 hours. Thanks for all the fun, but it was bound to happen eventually!

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Vision-Language Models (VLMs) can describe the environment, but can they refer within it? Our findings reveal a critical gap: VLMs fall short of pragmatic optimality. We identify 3 key failures of pragmatic competence in referring expression generation with VLMs: (1) cannot

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

P.S., We are building GrowAIlikeAChild, an open-source community uniting researchers from computer science, cognitive science, psychology, linguistics, philosophy, and beyond. Instead of putting growing up and scaling up into opposite camps, let's build and evaluate human-like AI

Hokin Deng (@denghokin) 's Twitter Profile Photo

#ICLR please check out our poster‼️We evaluated 209 models and all of them are stochastic parrots 🦜 🙀Models either believe "the bigger the ball, the quicker it falls" (illusions), or, "no matter how big (physics textbook, Pisa's tower), they fall at the same time" (shortcuts).

Chaz Firestone (@chazfirestone) 's Twitter Profile Photo

SO excited about this new paper by Tal Boger! How does your mind separate the *content* of an image from the *manner* in which it is depicted? And how can we study this using tools from perception research? Find out in Tal's new Nature Human Behaviour paper 👇

Zory Zhang (@zory_zhang) 's Twitter Profile Photo

👁️ Can Vision Language Models (VLMs) Infer Human Gaze Direction? Knowing where someone looks is key to a Theory of Mind. We test 111 VLMs and 65 humans to compare their inferences. Project page: grow-ai-like-a-child.github.io/gaze/ 🧵1/11

Hokin Deng (@denghokin) 's Twitter Profile Photo

#ICML #cognition #GrowAI We spent 2 years carefully curating every single experiment (e.g. object permanence, A-not-B task, visual cliff task) in this dataset (total: 1503 classic experiments spanning 12 core cognitive concepts). We spent another year getting 230 MLLMs evaluated

William Yijiang Li (@williamiumli) 's Twitter Profile Photo

🔥 Huge thanks to Yann LeCun and everyone for reposting our #ICML2025 work! 🚀 ✨12 core abilities, 📚1503 tasks, 🤖230 MLLMs, 🗨️11 prompts, 📊2503 data points. 🧠 We try to answer the question: 🔍 Do Multi-modal Large Language Models have grounded perception and reasoning?

myolab.ai (@myolabai) 's Twitter Profile Photo

💥We are excited to share that our #Video2Animation feature is now live on myolab.ai's Discord server. We're giving away massive free credits for early users. Head now to 👉 discord.gg/Gc9hapPgQg

Hokin Deng (@denghokin) 's Twitter Profile Photo

#embodied All forms of biological intelligence are grounded in movement🏃‍♂️ muscles & motor neurons 🧠 emerge before the visual cortex & the rods & cones in the eyes 👁️ Building monocular, better-than-mocap-studio #video2motion is our critical step towards human embodied intelligence.

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

I’ve always wanted to write an open-notebook research blog to (i) show the chain of thought behind how we formed hypotheses, designed experiments, and articulated findings, and (ii) lay out all the intermediate results that did not make it into the final paper, including negative

Iason Gabriel (@iasongabriel) 's Twitter Profile Photo

AI agents aim to augment and enhance human agency across a range of domains. How this "surplus agency" is distributed – including whether the benefit is broad or narrow – could have a major impact on the distribution of overall opportunity, and the shape of our social world.

Anthropic (@anthropicai) 's Twitter Profile Photo

New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts? Or do they just make up plausible answers when asked about them? We found evidence for genuine—though limited—introspective capabilities in Claude.

Hokin Deng (@denghokin) 's Twitter Profile Photo

Excited to share that my essay with Dezhi Luo and Qingying Gao 高清滢 on the representational substrate of world-reasoning in both humans and machines has been accepted to the SpaVLE Workshop at #NeurIPS2025✨ One recent article by Tomer Ullman and Halely raises the notion of "Physics" versus

Patrick Butlin (@patrickbutlin) 's Twitter Profile Photo

New paper on AI consciousness! Here we present the theory-derived indicator method for assessing AI systems for consciousness. Link below.

Séb Krier (@sebkrier) 's Twitter Profile Photo

There are broadly two ways people think about AGI and labour: Position A is where humans get fully substituted, which is usually advanced by parts of the AI commentariat. The argument is that if AGI is a scalable input that can do what workers do at lower cost, then the market

Boyuan Chen (@boyuanchen0) 's Twitter Profile Photo

Introducing Large Video Planner (LVP-14B) — a robot foundation model that actually generalizes. LVP is built on video gen, not VLA. As my final work at Massachusetts Institute of Technology (MIT), LVP has all its eval tasks proposed by third parties as a maximum stress test, but it excels!🤗 boyuan.space/large-video-pl…