Alina Leidinger (@alinaleidinger)'s Twitter Profile
Alina Leidinger

@alinaleidinger

PhD student in NLP+AI Ethics @UvA_Amsterdam || prev mathematics @imperialcollege @TU_Muenchen
she/her

ID: 1225719705694089222

Link: https://aleidinger.github.io | Joined: 07-02-2020 09:55:43

22 Tweets

384 Followers

527 Following

Jaap Jumelet (@jumeletj)'s Twitter Profile Photo

✨What do language models know about grammar? In our new TACL paper we approach this question using the Structural Priming paradigm, which has been used to uncover humans' comprehension of abstract syntax. 📜Blog resources.illc.uva.nl/illc-blog/prob… 📑Paper arxiv.org/pdf/2109.14989… 👇Thread
Giada Pistilli (@giadapistilli)'s Twitter Profile Photo

Very excited to give a seminar tomorrow on the ethics of Large Language Models at UvA Amsterdam! Be sure to stop by if you are around, or ping me if you'd like to watch the live stream on Zoom. Big thanks to Sandro Pezzelle for the kind invitation. I am looking forward to it!
Richard Rogers (@richardrogers)'s Twitter Profile Photo

Just published: New research on how language technologies that perpetuate stereotypes actively cement social hierarchies. 'Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?' together with Alina Leidinger, dl.acm.org/doi/10.1145/35… vera.ai

Alina Leidinger (@alinaleidinger)'s Twitter Profile Photo

First few days at #EMNLP2023 have been a blast! Tomorrow, I'll be presenting our poster on robustness in LLM evaluation (session 1): arxiv.org/abs/2311.01967 Come say hi if you wanna chat about robustness/safety of LLMs!
Leonie Weissweiler (@laweissweiler)'s Twitter Profile Photo

At the #EMNLP2023 poster session right now, all the way in the back: Check out Alina Leidinger's poster on the linguistic properties of successful prompts and start treating them as hyperparameters to be re-tuned for every task and model!
J. AI Research-JAIR (@jair_editor)'s Twitter Profile Photo

New Article: "Undesirable Biases in NLP: Addressing Challenges of Measurement" by van der Wal, Bachmann, Leidinger, van Maanen, Zuidema, and Schulz jair.org/index.php/jair…

clem 🤗 (@clementdelangue)'s Twitter Profile Photo

"The examples might be surprising, but the broad strokes of the research aren’t. It’s well established at this point that all models contain biases, albeit some more egregious than others." “We call on researchers to rigorously test their models for the cultural visions they

Florent Daudens (@fdaudens)'s Twitter Profile Photo

How do open text-analyzing models respond to questions relating to LGBTQ+ rights, social welfare, surrogacy and more? Giada Pistilli Alina Leidinger Atoosa Kasirzadeh Yacine Jernite Sasha Luccioni, PhD 🦋🌎✨🤗 MMitchell found that they tend to answer questions inconsistently, which reflects biases embedded

Giada Pistilli (@giadapistilli)'s Twitter Profile Photo

🧵 Very excited to introduce our latest research: "CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models." Access the pre-print here: huggingface.co/papers/2405.13…

Avijit Ghosh (@evijitghosh)'s Twitter Profile Photo

Announcing NeurIPS Workshop: EvalEval 2024! 🚀 As generative AI rapidly transforms our world, a critical question looms: How do we measure and evaluate its broader societal impacts? 📄 Our recent collaborative paper (arxiv.org/pdf/2306.05949) reveals a lack of standardized

Oskar van der Wal (@oskarvanderwal)'s Twitter Profile Photo

Working on #bias & #discrimination in #NLP? Passionate about integrating insights from other disciplines? Want to discuss current limitations of #LLM bias mitigation? 👋Join the workshop New Perspectives on Bias and Discrimination in Language Technology; 4&5 Nov in #Amsterdam!

Alina Leidinger (@alinaleidinger)'s Twitter Profile Photo

Poster presentation today at #ACL2024NLP at 10:30 🥳🥳 Come say hi if you want to chat about robust evaluation! Paper 👇 aclanthology.org/2024.acl-short…
Yacine Jernite (@yjernite)'s Twitter Profile Photo

We're nearly 2 weeks away from the deadline for Tiny Papers for our workshop on social impact evaluation of genAI. If you have thoughts, critiques, WIP, or resources on that topic, now's the time to make them a quick 2-pager! evaleval.github.io/call-for-paper…