Anne Harrington (@annekharrington)'s Twitter Profile
Anne Harrington

@annekharrington

ML Scientist/Engineer @ Liquid AI

ID: 1745082784777961472

Joined: 10-01-2024 13:58:56

2 Tweets

163 Followers

164 Following

Anne Harrington (@annekharrington)

So excited for COGGRAPH!! Please join our workshop events for amazing interdisciplinary discussions around cognitive science and computer graphics! coggraph.github.io

Mark Hamilton ✈️ICLR 2025 (@mhamilton723)

What does the man in the moon🌚have to do with AI identifying tigers 🐯? Our #ECCV2024 paper explores if AI sees faces in random objects like we do. With a new dataset we link animal face detection to pareidolia in algorithms. aka.ms/facesinthings

Shaden (@sa_9810)

Excited to share our ICLR 2025 paper, I-Con, a unifying framework that ties together 23 methods across representation learning, from self-supervised learning to dimensionality reduction and clustering. Website: aka.ms/i-con A thread 🧵 1/n

Massachusetts Institute of Technology (MIT) (@mit)

“Periodic table of machine learning” could fuel AI discovery: Researchers have created a unifying framework that can help scientists combine existing ideas to improve AI models or create new ones. news.mit.edu/2025/machine-l…

Yutong Bai (@yutongbai1002)

What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) A real, physically grounded and complex action space—not just abstract control signals. 2) Diverse, real-life scenarios and activities. Or in short: It has to

Jiaxin Ge (@aomaru_21490)

✨Introducing ECHO, the newest in-the-wild image generation benchmark! You’ve seen new image models and new use cases discussed on social media, but old benchmarks don’t test them! We distilled this qualitative discussion into a structured benchmark. 🔗 echo-bench.github.io

Tsung-Han (Patrick) Wu @ ICLR’25 (@tsunghan_wu)

Humans handle dynamic situations easily; what about models?

Turns out, they break in three distinct ways:

⛔ Force Stop → Reasoning leakage (won’t stop) 
⚡️ Speedup → Panic (rushed answers) 
❓ Info Updates → Self-doubt (reject updates)

👉Check out dynamic-lm.github.io
Ritwik Gupta 🇺🇦 (@ritwik_g)

I am recruiting Ph.D. students at UMD Department of Computer Science starting Fall 2026! I am looking for students in three broad areas:
(1) Physics-integrated computer vision
(2) VLMs with constraints
(3) Dual-use AI policy

We're ranked #3 in AI on CSrankings!
Specific details in 🧵
Lisa Dunlap (@lisabdunlap)

🧵Tired of scrolling through your horribly long model traces in VSCode to figure out why your model failed? We made StringSight to fix this: an automated pipeline for analyzing your model outputs at scale. ➡️Demo: stringsight.com ➡️Blog: blog.stringsight.com

Lisa Dunlap (@lisabdunlap)

🌟NEW PAPER🌟 Did you know that changing a visual marker from red to blue can completely reorder VLM leaderboards? In our most recent work, we explore the fragility of visually prompted benchmarks. lisadunlap.github.io/vpbench/

Haven (Haiwen) Feng (@havenfeng)

✨Thinking with Blender~ Meet VIGA: a multimodal agent that autonomously codes 3D/4D blender scenes from any image, with no human, no training! Berkeley AI Research #LLMs #Blender #Agent 🧵1/6

Jiaxin Ge (@aomaru_21490)

We found a surprisingly effective solution to 2D -> 3D: a coding agent. “Thinking with images” is cool for understanding. VIGA goes one step further: thinking in a renderer. It generates 2D/3D/4D from scratch by writing code, rendering, and self-correcting.

Brent Yi (@brenthyi)

New project! Flow Policy Gradients for Robot Control. tl;dr: a simple online RL recipe for training and fine-tuning flow policies for robots. Co-led w/ Hongsuk Benjamin Choi: hongsukchoi.github.io/fpo-control

Grace Luo (@graceluo_)

We trained diffusion models on a billion LLM activations, and we want you to use them! New preprint: Learning a Generative Meta-Model of LLM Activations. Joint work with Jiahai Feng, trevordarrell, Alec Radford, Jacob Steinhardt. More in thread 🧵

Amil Dravid (@_amildravid)

Consider submitting to our workshop How Do Vision Models Work at #CVPR2026! We have both a non-proceedings and a proceedings track. More info at sites.google.com/view/how-cvpr-….

Qianqian Wang (@qianqianwang5)

Very excited to share our exploration of a new robotics foundation model at Rhoda AI. We train a causal video model from scratch, unlocking new capabilities for robust, long-horizon closed-loop robot control. Learn more: rhoda.ai/research/direc…

Albert Gu (@_albertgu)

The newest model in the Mamba series is finally here 🐍

Hybrid models have become increasingly popular, raising the importance of designing the next generation of linear models.

We've introduced several SSM-centric ideas to significantly increase Mamba-2's modeling capabilities.
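The "linear models" referenced here are state-space models built around a linear recurrence over a hidden state. As a minimal sketch of that recurrence only (not Mamba-2's actual architecture, which uses learned, input-dependent parameters and a hardware-aware parallel scan; all names and values below are hypothetical):

```python
def ssm_scan(x, A, B, C):
    """Minimal diagonal state-space scan: h_t = A*h_{t-1} + B*x_t, y_t = <C, h_t>.

    x: sequence of scalar inputs; A, B, C: per-channel parameter lists,
    with A holding the diagonal of the (diagonal) state matrix.
    """
    h = [0.0] * len(A)  # hidden state, one value per channel
    ys = []
    for x_t in x:
        # elementwise recurrence (A is diagonal, so channels don't mix)
        h = [a * h_i + b * x_t for a, h_i, b in zip(A, h, B)]
        # project the state back to a scalar output
        ys.append(sum(c * h_i for c, h_i in zip(C, h)))
    return ys

# constant input, two state channels with decay rates 0.5 and 0.9
print(ssm_scan([1.0] * 4, A=[0.5, 0.9], B=[1.0, 1.0], C=[1.0, 1.0]))
```

The sequential loop here is O(seq_len), but the same computation admits a parallel associative scan, which is where much of the efficiency work in Mamba-style models lives.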
Baifeng (@baifeng_shi)

Humans can see in high-res, high-FPS in real-time. Why can't VLMs? Introducing AutoGaze: ViTs/VLMs "gaze" only at key video regions! Up to 4-100x token savings, 19x speedup, and enables scaling to 4K-res 1K-frame videos. 📄 arxiv.org/abs/2603.12254 🌐 autogaze.github.io 🤗
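The token savings claimed above come from attending only to a small subset of patch tokens. As a generic sketch of that idea (this is not AutoGaze's actual method; `prune_tokens` and its saliency scores are hypothetical stand-ins):

```python
def prune_tokens(tokens, scores, keep_ratio=0.25):
    """Keep the top-k patch tokens by saliency score, preserving order.

    tokens: list of patch embeddings (any objects); scores: per-token
    saliency values; keep_ratio: fraction of tokens to retain.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    # indices of the k highest-scoring tokens, restored to original order
    top = sorted(sorted(range(len(tokens)), key=lambda i: -scores[i])[:k])
    return [tokens[i] for i in top]

# 4 patch tokens, keep the most salient half
print(prune_tokens(["p0", "p1", "p2", "p3"], [0.1, 0.9, 0.5, 0.2],
                   keep_ratio=0.5))  # → ['p1', 'p2']
```

Because attention cost grows quadratically with sequence length, retaining only a fraction of the tokens can yield large compute savings, which is the general mechanism behind speedups of this kind.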