Explainable Machine Learning (@explainableml)'s Twitter Profile
Explainable Machine Learning

@explainableml

Institute for Explainable Machine Learning @HelmholtzMunich and Interpretable and Reliable Machine Learning group @TU_Muenchen

ID: 1425445117121417218

https://eml-munich.de · Joined 11-08-2021 13:15:26

155 Tweets

2.2K Followers

91 Following

Zeynep Akata (@zeynepakata)

It is a great honor to receive the ZukunftsWissen Prize 2025 from the German National Academy of Sciences Leopoldina (Nationale Akademie der Wissenschaften Leopoldina), with the generous support of the Commerzbank-Stiftung 🎉 This achievement wouldn’t have been possible without my wonderful group, Explainable Machine Learning, at TU München and Helmholtz Munich (@HelmholtzMunich).

Robin Hesse (@robinhesse_)

Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work! 🗓️ Submissions open until June 26 AoE. 📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track. #ICCV2025 @ICCVConference

Leonard Salewski (@l_salewski)

I am very happy to announce that I successfully defended my PhD thesis with the title "Advancing Multimodal Explainability: From Visual Reasoning to In-Context Impersonation".

Simon Roschmann (@simonroschmann)

How can we circumvent data scarcity in the time series domain? We propose to leverage pretrained ViTs (e.g., CLIP, DINOv2) for time series classification and outperform time series foundation models (TSFMs). 📄 Preprint: arxiv.org/abs/2506.08641 💻 Code: github.com/ExplainableML/…

Kirill Bykov (@kirill_bykov)

Personal news: I have defended my PhD thesis “Explaining Representations in Deep Neural Networks” at TU Berlin with summa cum laude (with distinction)! From August, I’ll start a Postdoc in the Explainable Machine Learning group at TU München, focusing on Mechanistic Interpretability ✨

Karsten Roth (@confusezius)

💫 After four PhD years on all things multimodal, pre- and post-training, I’m super excited for a new research chapter at Google DeepMind 🇨🇭! Biggest thanks to Zeynep Akata and Oriol Vinyals for all the guidance, support, and incredibly eventful and defining research years ♥️!

Luca Eyring @ICLR (@lucaeyring)

Reward hacking is a challenge when fine-tuning few-step diffusion models. Direct fine-tuning on rewards can create artifacts that game metrics while degrading visual quality. We propose Noise Hypernetworks as a theoretically grounded solution, inspired by test-time optimization.