
Understandable Machine Intelligence Lab
@umi_lab_ai
Understandable Machine Intelligence Lab: We take #explainable #AI to the next level. Part of @LeibnizATB, ex @TUBerlin, funded by @BMBF_Bund #XAI
ID: 1328702079821615104
17-11-2020 14:11:18
167 Tweets
654 Followers
122 Following

We (Dilyara Bareeva, Galip Ümit Yolcu, Niklas Schmolenski, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin + me) just launched QUANDA, a training data attribution (TDA) toolkit built for researchers curious to apply, develop, and evaluate TDA methods. GitHub repo: github.com/dilyabareeva/q…
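For context, training data attribution asks which training examples were most responsible for a model's behavior on a given test input. Below is a minimal conceptual sketch only, not quanda's actual API: the names `grad_similarity_attribution` and `flat_grad` are hypothetical, and the gradient-cosine-similarity scoring shown is just one simple TDA approach. See the GitHub repo above for the library's real interface.

```python
import torch
import torch.nn.functional as F

def flat_grad(model, loss):
    # Flatten the gradient of `loss` w.r.t. all trainable parameters.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def grad_similarity_attribution(model, loss_fn, train_pairs, test_x, test_y):
    # Hypothetical helper (NOT quanda's API): score each training example
    # by the cosine similarity between its loss gradient and the test
    # example's loss gradient.
    test_loss = loss_fn(model(test_x.unsqueeze(0)), test_y.unsqueeze(0))
    test_g = flat_grad(model, test_loss)
    scores = []
    for x, y in train_pairs:
        train_loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        scores.append(F.cosine_similarity(test_g, flat_grad(model, train_loss), dim=0))
    return torch.stack(scores)  # one attribution score per training example
```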


I am not attending #NeurIPS2024, but I encourage everyone interested in #XAI and #MechInterp to check out our paper on evaluating textual descriptions of neurons! Join Laura Kopf, Anna Hedström, and Marina M.-C. Höhne (née Vidovic) on Thu, Dec 12, 11 a.m. to 2 p.m. PST at East Exhibit Hall A-C, Poster #3107!


First day of #NeurIPS2024: jetlagged but happy to be reunited with the Understandable Machine Intelligence Lab, Marina M.-C. Höhne (née Vidovic), and Laura Kopf


I’ll be presenting our work at the NeurIPS Conference in Vancouver! 🎉 Join me this Thursday, December 12th, in East Exhibit Hall A-C, Poster #3107, from 11 a.m. to 2 p.m. PST. I'll be discussing our paper “CoSy: Evaluating Textual Explanations of Neurons.”


Interested in evaluation, vision, and mechanistic interpretability? Come chat at our NeurIPS Conference poster #3107! 👉 Thu, 12 Dec, 11 AM (East Exhibit Hall A-C): CoSy: Evaluating Textual Explanations of Neurons openreview.net/pdf?id=R0bnWrp…

If you’re still at the NeurIPS Conference and curious about how evaluation outcomes of interpretability methods can be adversarially attacked 👉 Sun, 15 Dec, 4 PM (East Ballroom A and B): The Price of Freedom: An Adversarial Attack on Interpretability Evaluation

If you're at #AAAI2025, don't miss our poster today (alignment track)! Paper 📘: arxiv.org/pdf/2502.15403 Code 👩‍💻: github.com/annahedstroem/… Joint work with Carlos Eiras and Marina M.-C. Höhne (née Vidovic)
