Martin Gubri (@framart1)'s Twitter Profile
Martin Gubri

@framart1

Research Lead @parameterlab working on Trustworthy AI | he/him

Active accounts: 🦋 mgubri | 🐘 @[email protected]

ID: 983565397

Website: https://gubri.eu | Joined: 01-12-2012 23:57:22

1.1K Tweets

352 Followers

891 Following

Martin Gubri (@framart1):

The mood of LLM membership inference datasets. (This new paper nicely complements the findings of iamgroot42.github.io/mimir.github.i… and x.com/pratyushmaini/…)

Parameter Lab (@parameterlab):

Attending #ICML2024 and looking for a research internship in trustworthy AI for LLMs? Reach out to Seong Joon Oh and Martin Gubri at the conference! x.com/parameterlab/s…

Martin Gubri (@framart1):

🇦🇹🎡 I am at #ICML2024 in Vienna! Reach out if you want to casually chat about LLMs, including membership inference for fingerprinting, uncertainty estimation, or privacy.

Parameter Lab (@parameterlab):

📢 Parameter Lab is at the #ACL2024 conference in Bangkok! Come see our two posters:
- TRAP 🪤 on black-box LLM fingerprinting: Tuesday 13 Aug, 12:15-13:15. x.com/framart1/statu…
- Apricot 🍑 on black-box calibrated confidence score estimation: Tuesday 13 Aug …

Martin Gubri (@framart1):

Interested in black-box fingerprinting of LLMs? Stop by our poster on TRAP 🪤 at #ACL2024 today during the 12:15 session! And feel free to contact me if you want to chat about fingerprinting, membership inference, adversarial examples, uncertainty estimation, or privacy.

Martin Gubri (@framart1):

I will also have the pleasure of presenting, together with Dennis Ulmer 🦋, the Apricot 🍑 paper on calibrated confidence estimation during today's poster session at 16:00 at #ACL2024. See you there!

Martin Gubri (@framart1):

I just learned the difference between replicability and reproducibility, thanks to this remarkable paper by Dennis Ulmer 🦋 and his co-authors: arxiv.org/abs/2204.06251 After speaking with several researchers this week, I realized that this distinction is likely not well known.

Erik Mohlin (@karlerikmohlin):

Pretty strong evidence of negative mental health effects of doing a PhD. Recent working paper by Eva Ranehill, Anna Sandberg, Sanna Bergvall, and Clara Fernström. Paper link: swopec.hhs.se/lunewp/abs/lun…

Martin Gubri (@framart1):

📄 Excited to share our latest paper on the scale required for successful membership inference in LLMs! We investigate a continuum from single sentences to large document collections. Huge thanks to an incredible team: Haritz Puerto, Seong Joon Oh, Sangdoo Yun!

Parameter Lab (@parameterlab):

🎉 We’re pleased to share the release of the models from our Apricot 🍑 paper, "Calibrating Large Language Models Using Their Generations Only", accepted at ACL 2024! At Parameter Lab, we believe openness and reproducibility are essential for advancing science, and we've put in …

Jia-Bin Huang (@jbhuang0604):

How to design your presentation? Presentation is an essential skill for academics. After attending so many meetings, prelims, thesis defenses, research talks, and lectures, I’ve realized that I may have been approaching it all wrong ... Some (bitter) lessons I learned. 👇

Martin Gubri (@framart1):

Delighted by this great thread from elvis presenting our new Leaky Thoughts paper! We show that reasoning models pose serious privacy risks when used as personal agents. Reasoning traces are a new attack vector. Work led by Tommaso Green during his internship at Parameter Lab!

The AI Timeline (@theaitimeline):

Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers

Author's Explanation: x.com/framart1/statu…

Overview: This paper demonstrates that the internal reasoning traces of LLMs, often presumed private, frequently leak sensitive user data through prompt injections or …

Parameter Lab (@parameterlab):

🎉 Very excited to see our new Leaky Thoughts 🫗 paper featured among last week's top AI papers by both DAIR.AI and The AI Timeline!
- x.com/dair_ai/status…
- x.com/TheAITimeline/…
➡️ Learn more about the paper in this great thread by elvis: x.com/omarsar0/statu…

Martin Gubri (@framart1):

📢 New paper out: Does SEO work for LLM-based conversational search? We introduce C-SEO Bench, a benchmark to test whether conversational SEO methods actually help. Our finding? They don't. But traditional SEO still works, because LLMs favour content already ranked higher in the prompt …
