James Burgess (at ICLR 2025) (@jmhb0) 's Twitter Profile

@jmhb0

jmhb0.github.io PhD student in ML, computer vision & biology at Stanford 🇦🇺

ID: 1396961452288708610

Joined: 24-05-2021 22:51:04

62 Tweets

205 Followers

894 Following

Alejandro Lozano (@ale9806_) 's Twitter Profile Photo

Earlier this year, we released the BIOMEDICA dataset, featuring 24 million unique image-caption pairs and 30 million image references derived from open-source biomedical literature. It's been great to see the community engaging with it—we're currently seeing around 6K downloads
Orr Zohar @ ICLR’25 (@orr_zohar) 's Twitter Profile Photo

Excited to see SmolVLM powering BMC-SmolVLM in the latest BIOMEDICA update! At just 2.2B params, it matches 7-13B biomedical VLMs. Check out the full release: Hugging Face #smolvlm

Orr Zohar @ ICLR’25 (@orr_zohar) 's Twitter Profile Photo

🤗The SmolVLM report is out, with all the experiments, findings, and insights that led to high performance at tiny sizes🤏. 
📱These models can run on most mobile/edge devices. 
📖Give it a look!
Yuhui Zhang (@zhang_yu_hui) 's Twitter Profile Photo

Three papers being presented by my amazing collaborators at #ICLR2025! 🌟 (sadly I can't make it)

1. Mechanistic Interpretability Meets Vision Language Models: Insights and Limitations 🔍
   A deep dive into mechanistic interpretation techniques for VLMs & future
James Burgess (at ICLR 2025) (@jmhb0) 's Twitter Profile Photo

I'm at #ICLR2025 presenting "Video Action Differencing". Keen to chat with anyone interested in MLLMs - both for general data & for scientific reasoning

Jeff Nirschl (@jnirsch) 's Twitter Profile Photo

My lab is starting at UW-Madison! This is a unique opportunity to contribute to impactful computational neuropathology research in a collaborative environment. Join the Nirschl Lab and help drive discoveries that improve our understanding of neurodegenerative disorders!🧠

James Burgess (at ICLR 2025) (@jmhb0) 's Twitter Profile Photo

I'm at CVPR! Come see me at one of my posters, or reach out for a chit chat.
MicroVQA: reasoning LLM benchmark in biology. Sat 5-7pm, Hall D, poster #357. jmhb0.github.io/microvqa/
BIOMEDICA: a massive vision-language dataset. Sat 5-7pm, Hall D, poster #374. minwoosun.github.io/biomedica-webs…

James Burgess (at ICLR 2025) (@jmhb0) 's Twitter Profile Photo

Get around our very cool #ICML paper that predicts how biological cells respond to drug treatments or gene knockdowns. It was led by the legendary Yuhui Zhang and @hhhhh2033528, and I was happy to contribute a tiny bit :)

Casey Flint (@flintcasey) 's Twitter Profile Photo

I've been working with the Reflection AI team on Asimov, our best-in-class code research agent. I am super excited for you all to try it. Let me know here if you want to try it and I can move you off the waitlist. :)

Rishubh Parihar (@rishubhparihar) 's Twitter Profile Photo

“Make it red.” “No! More red!” “Ughh… slightly less red.” “Perfect!” ♥️ 🎚️Kontinuous Kontext adds slider-based control over edit strength to instruction-based image editing, enabling smooth, continuous transformations!

Mark Endo (@mark_endo1) 's Twitter Profile Photo

Thinking about using small multimodal models? Want a clearer understanding of what breaks when downscaling model size, and why?

✨Introducing our new work on Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models
🧵👇