Jonathan Kamp (@jb_kamp) 's Twitter Profile
Jonathan Kamp

@jb_kamp

Computational Linguist; PhD Candidate @CLTLVU; Interpretability in NLP; Argument Mining

ID: 1513502208414916609

Link: http://jbkamp.github.io | Joined: 11-04-2022 13:00:34

30 Tweets

71 Followers

92 Following

Selene Baez Santamaria (@sel_baez) 's Twitter Profile Photo

Getting ready to present my work (together with my supervisor @PiekVossen ) at COLING 2022. If you are interested in the evaluation of open-domain dialogue, come to room 104 at 14:45, join us online, or read the paper here: aclanthology.org/2022.ccgpk-1.3…

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Our @CLTLVU colleague Selene Baez Santamaria together with @PiekVossen and Thomas Baier just won the best paper award at the Workshop on Customized Chat Grounding Persona and Knowledge. Hurray! COLING 2022

Ombretta Strafforello (@ombrettast) 's Twitter Profile Photo

Arrived at #ICIP2022 in Bordeaux! Check out our poster on “Humans disagree with the #IoU for measuring #objectdetector localization error”, tomorrow from 10:00 to 12:30, zone 5 😉

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Thanks! Khalid Al-Khatib next time in Groningen? ;) Talking about granularity: any overlap with a paper in this same session by Mattes Ruckdeschel from Leibniz-Institut für Medienforschung? Check the proceedings here: aclanthology.org/volumes/2022.a…

Stefan F. Schouten (@stefanfs93) 's Twitter Profile Photo

What can we learn about how named entities are represented in language models by substituting them for different ones and observing how predictions change? Come see my poster this Thursday December 8th @ BlackboxNLP 2022 in Abu Dhabi (and online).

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Moonlit (moonlit.ai) just released the interview they had with me! Check it out if you want to get some idea of what my research on #interpretability and (legal) #argmining is about! medium.com/@moonlitai.leg…

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

So far today: I've been able to virtually attend the co-occurring NLLP Workshop and #BlackboxNLP and listen to some inspiring talks. 🧙 As for hard skills, I learned how to mute/unmute individual Zoom Chrome tabs

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Some smooth work by Gabriele Sarti and colleagues from the #InDeep project in releasing a tool for interpreting sequence generation models. Keep up the development! @InseqDev

Gabriele Sarti (@gsarti_) 's Twitter Profile Photo

Shout-out to Hosein Mohebbi et al. from our #InDeep consortium for their awesome work "Quantifying Context Mixing in Transformers", introducing Value Zeroing as a new promising post-hoc interpretability approach for NLP! 🎉 Paper: arxiv.org/abs/2301.12971

Lea Krause (@l__kra) 's Twitter Profile Photo

Just wrapped up our presentation at #MMNLG! Big thanks to everyone for the thought-provoking questions, and kudos to @bertugatt for the seamless organisation. Can't wait to dive deeper into this work! #SIGDIALxINLG2023

Jaap Jumelet (@jumeletj) 's Twitter Profile Photo

Looking forward to this! We will be giving a tutorial on the current state of Transformer interpretability methods at EACL! 🇲🇹

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Interested in post-hoc explanation methods? Swing by our #EMNLP2023 poster today at 10:30 to chat about their (dis)agreement and the implications of selecting the top-k most important tokens! (arxiv.org/abs/2310.05619)

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Did you know that token-level differences between feature attribution methods are smoothed out if we compare them on the syntactic span level? I'll be at LREC COLING 2024 (Turin) next month to present our paper on the linguistic preferences of such methods :) arxiv.org/abs/2403.19424

Jonathan Kamp (@jb_kamp) 's Twitter Profile Photo

Also at LREC COLING 2024 and interested in #interpretability in #nlp / #xai? I'm happy to tell you something about post-hoc attribution methods and their linguistic preferences 🔻 Poster session >> from 17:30 onwards Proceedings: aclanthology.org/2024.lrec-main…

Gabriele Sarti (@gsarti_) 's Twitter Profile Photo

⚠️ Citations from prompting or NLI seem plausible, but may not faithfully reflect LLM reasoning. 🏝️ MIRAGE detects context dependence in generations via model internals, producing granular and faithful RAG citations. 🚀 Demo: huggingface.co/spaces/gsarti/… Fun collab w/ Jirui Qi @EMNLP25 ✈️,