Kirill Bykov (@kirill_bykov)'s Twitter Profile
Kirill Bykov

@kirill_bykov

PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin

ID: 143860543

Link: https://www.kirill-bykov.com · Joined: 14-05-2010 16:21:29

734 Tweets

390 Followers

1.1K Following

Konrad Rieck 🌈 (@mlsec)'s Twitter Profile Photo

Just one week left to submit your proposal to host a competition ahead of SaTML Conference! Last year’s competitions were a huge success with high participation and challenging tasks. We can’t wait to see your ideas! 👉 satml.org/participate-cf…

Understandable Machine Intelligence Lab (@umi_lab_ai)'s Twitter Profile Photo

🚨 We're hiring! 🚨 We have 2 PhD positions available:
1️⃣ Focus on AI for Soil Health & Peat Substitution (Horizon Europe, SPIN-FERT)
2️⃣ Focus on Mechanistic Interpretability in Foundation Models (BMBF, REFRAME)
📅 Apply by August 25, 2024
🔗 loai-comramo.pi-asp.de/bewerber-web/?…

Konrad Rieck 🌈 (@mlsec)'s Twitter Profile Photo

🚀 We have a new open position for a PhD student to research generative AI in security and privacy. Fully funded and based in vibrant Berlin! Apply by September 13: mlsec.org/jobs.html#jobs BIFOLD TU Berlin

Eugene Vinitsky 🍒🦋 (@eugenevinitsky)'s Twitter Profile Photo

Don’t let anyone convince you that you *should* expect to be depressed during your PhD. Intellectual struggle isn’t the same as depression. Depression is a symptom that something is wrong.

Konrad Rieck 🌈 (@mlsec)'s Twitter Profile Photo

Feeling like you're rushing a paper on security, privacy, or fairness in ML for an upcoming deadline? Why not submit to SaTML Conference in three weeks? SaTML is the top venue for research on secure and trustworthy machine learning. 👉 satml.org/participate-cf… ⏰ Deadline: Sep 18

Tom Burns (@tfburns)'s Twitter Profile Photo

I'm on the job market! Looking for positions where I can leverage my background in AI/ML and computational neuroscience. Below you'll find posts for a few of my publications/interests. I'm open to relocating & industry or academic positions. Feel free to reach out for a chat :)

Laura Kopf (@lkopf_ml)'s Twitter Profile Photo

🎉 Excited to announce that our paper has been accepted to #NeurIPS2024! This is my first first-author publication 🥳 I'm incredibly grateful to my amazing supervisor Kirill Bykov and co-authors Philine Bommer Anna Hedström Sebastian Lapuschkin Marina M.-C. Höhne (née Vidovic)! 📄arxiv.org/abs/2405.20331

Kirill Bykov (@kirill_bykov)'s Twitter Profile Photo

Thank you Stefan Lindow for the amazing opportunity to give a talk on Explainable AI at the Potsdam Graduate School AI for Academy program! Grateful for the insightful audience and great questions! ☺️

Kirill Bykov (@kirill_bykov)'s Twitter Profile Photo

I am not attending #NeurIPS2024, but I encourage everyone interested in #XAI and #MechInterp to check out our paper on evaluating textual descriptions of neurons! Join Laura Kopf, Anna Hedström, and Marina M.-C. Höhne (née Vidovic) on Thu 12.12, 1 p.m. to 4 p.m. CST at East Exhibit Hall A-C #3107!

Kirill Bykov (@kirill_bykov)'s Twitter Profile Photo

Nice to see our paper (CoSy, number 5) included on the list among other great works! 🎉 If you’re attending #NeurIPS2024, please check out our poster on Thursday, December 12, in East Exhibit Hall A-C, #3107. Thank you, Neel Nanda, for compiling the list!

Laura Kopf (@lkopf_ml)'s Twitter Profile Photo

I’ll be presenting our work at NeurIPS Conference in Vancouver! 🎉 Join me this Thursday, December 12th, in East Exhibit Hall A-C, Poster #3107, from 11 a.m. PST to 2 p.m. PST. I'll be discussing our paper “CoSy: Evaluating Textual Explanations of Neurons.”

David Chalmers (@davidchalmers42)'s Twitter Profile Photo

A draft paper (for an invited talk at AAAI next month) with a philosophical analysis of work on mechanistic interpretability, with special attention to methods for propositional interpretability. arxiv.org/abs/2501.15740

Understandable Machine Intelligence Lab (@umi_lab_ai)'s Twitter Profile Photo

🚨 New paper alert! 🚨 We’re excited to share our latest work on interpretability evaluation: "Evaluating Interpretable Methods via Geometric Alignment of Functional Distortions" 📜 Accepted at TMLR 🎉 🔥 Survey certification 🔥 📖 Read: openreview.net/pdf?id=ukLxqA8…
