Rohit Gandikota (@rohitgandikota) 's Twitter Profile
Rohit Gandikota

@rohitgandikota

Ph.D. AI @ Northeastern University. Understanding, mapping, and editing knowledge in large generative models. Ex-Scientist, Indian Space Research Organization

ID: 1531450458

Link: https://rohitgandikota.github.io
Joined: 19-06-2013 17:29:39

226 Tweets

834 Followers

109 Following

Mechanistic Interpretability for Vision @ CVPR2025 (@miv_cvpr2025) 's Twitter Profile Photo

Mechanistic Interpretability for Vision Workshop has officially begun #CVPR2025 ! 🚀 Join us at Grand C1 Hall for insightful perspectives on the state of interpretability in vision models by Tamar Rott Shaham.

Rohit Gandikota (@rohitgandikota) 's Twitter Profile Photo

Erasing Concepts from FLUX.1 models 🔥 ESD now supports FLUX models, and we've recently added SDXL support as well. Check out the code and share your experiments with us: github.com/rohitgandikota… (This is a long-awaited request from our users - thank you for your patience!)
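For context, the ESD recipe from the original "Erasing Concepts from Diffusion Models" paper fine-tunes the noise predictor so that, on the erased concept, it matches the frozen base model's prediction pushed away from that concept via negative guidance. Below is a rough PyTorch-style sketch of that objective; the function and variable names are illustrative placeholders, not the actual ESD repo API.

```python
# Rough sketch of the ESD training objective (placeholder names, not the ESD repo API).
# esd_unet is the model being fine-tuned; frozen_unet is an untouched copy of the base model.
import torch
import torch.nn.functional as F

def esd_loss(esd_unet, frozen_unet, x_t, t, concept_emb, null_emb, eta=1.0):
    with torch.no_grad():
        eps_uncond = frozen_unet(x_t, t, null_emb)       # base prediction, unconditioned
        eps_concept = frozen_unet(x_t, t, concept_emb)   # base prediction, conditioned on the concept
        # Target: steer the prediction away from the concept (negative guidance).
        target = eps_uncond - eta * (eps_concept - eps_uncond)
    eps_edited = esd_unet(x_t, t, concept_emb)           # edited model, conditioned on the concept
    return F.mse_loss(eps_edited, target)
```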

Mechanistic Interpretability for Vision @ CVPR2025 (@miv_cvpr2025) 's Twitter Profile Photo

We’re gearing up for the Mechanistic Interpretability in Vision Workshop 2026 👀✨ Who would you love to see as an invited speaker this year? Drop your suggestions below ⬇️

Ostris (@ostrisai) 's Twitter Profile Photo

Tutorial: How to Train a Concept Slider LoRA with AI Toolkit For this tutorial, I show you how to train a detail slider for Qwen Image that allows you to increase or decrease the amount of detail in an image by adjusting the LoRA strength. Links in 🧵

Rohit Gandikota (@rohitgandikota) 's Twitter Profile Photo

What a cool way to build and play with concept sliders! No code - just use the AIToolkit UI by Ostris! There is also a video tutorial on how to use the toolkit 👇
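For those who prefer scripting over the UI, here is a minimal sketch of applying a trained slider LoRA at different strengths with Hugging Face diffusers. The base model ID, LoRA path, and adapter name are placeholder assumptions, and this is not the AI Toolkit API; negative strengths dial the concept down, positive strengths dial it up.

```python
# Minimal sketch: sweeping a detail-slider LoRA's strength with diffusers.
# The model ID, LoRA path, and adapter name below are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/detail_slider_lora", adapter_name="detail_slider")

prompt = "a cozy cabin in a snowy forest"
for strength in (-1.0, 0.0, 1.0):  # negative = less detail, positive = more detail
    pipe.set_adapters(["detail_slider"], adapter_weights=[strength])
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"cabin_detail_{strength:+.1f}.png")
```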

Adobe Research (@adoberesearch) 's Twitter Profile Photo

A first look at Adobe Research's work at #ICCV2025! This year, our research spans breakthroughs in image generation, 3D reconstruction, content authenticity, and more — advancing the science behind the next generation of creative technologies. Check out the blog post to learn

Adobe Research (@adoberesearch) 's Twitter Profile Photo

Ever wonder why AI image generators produce the specific results they do? In this work, Adobe researchers explore ways to map the visual knowledge hidden inside diffusion models – and use it to give users more control over the output. adobe.ly/47evqvD #ICCV2025

Or Patashnik (@opatashnik) 's Twitter Profile Photo

📢 Today I begin my first semester as faculty in Computer Science at Tel Aviv University! Excited to start this new journey, and grateful to teach & research where my own journey began 🩵

Arnab Sen Sharma (@arnab_api) 's Twitter Profile Photo

How can a language model find the veggies in a menu? New pre-print where we investigate the internal mechanisms of LLMs when filtering on a list of options. Spoiler: turns out LLMs use strategies surprisingly similar to functional programming (think "filter" from python)! 🧵

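As a plain-Python illustration of the functional-programming analogy the thread draws (not the paper's actual probing code):

```python
# The analogy in miniature: selecting items from a list with a predicate,
# exactly like Python's built-in filter.
menu = ["steak frites", "veggie burger", "grilled salmon", "falafel wrap"]
veggie_items = {"veggie burger", "falafel wrap"}

veggie_options = list(filter(lambda item: item in veggie_items, menu))
print(veggie_options)  # ['veggie burger', 'falafel wrap']
```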
Amil Dravid (@_amildravid) 's Twitter Profile Photo

Our paper "Vision Transformers Don't Need Trained Registers" will appear as a Spotlight at NeurIPS 2025! We uncover the mechanism behind high-norm tokens and attention sinks in ViTs, propose a training-free fix, and recently added an analytical model -- more on that below. ⬇️

Tamar Rott Shaham (@tamarrottshaham) 's Twitter Profile Photo

A key challenge for interpretability agents is knowing when they’ve understood enough to stop experimenting. Our NeurIPS Conference paper introduces a self-reflective agent that measures the reliability of its own explanations and stops once its understanding of models has converged.

Rohit Gandikota (@rohitgandikota) 's Twitter Profile Photo

Self-reflection within interpretability agents can nudge them to stop over-experimenting! Check out this thread for more details 👇