Koel Dutta Chowdhury (@koeldc)'s Twitter Profile
Koel Dutta Chowdhury

@koeldc

NLP Researcher @LstSaar | Saarland University

ID: 841603747344207872

Joined: 14-03-2017 10:55:44

206 Tweets

263 Followers

423 Following

SFB 1102 (@1102sfb)'s Twitter Profile Photo

We are hiring: 3 Post-doctoral and 11 doctoral positions available! 🔎 For more details on vacancies, please visit our website: sfb1102.uni-saarland.de/job-openings/ We look forward to your application!

Saarland Informatics Campus (@sic_saar)'s Twitter Profile Photo

Computer scientists Michael A. Hedderich and Jonas Fischer developed software that can point out weaknesses in highly complex machine learning algorithms and thus help to correct them. Find out more: sic.link/pypremise
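
The announcement describes the tool only at a high level. As a rough sketch of the underlying idea (surfacing input patterns that co-occur with a model's mistakes), and explicitly not PyPremise's actual API, which isn't shown here:

```python
# Illustrative only: rank tokens over-represented among a model's
# misclassified inputs. This is NOT the PyPremise API.
from collections import Counter

def failure_patterns(texts, labels, predictions, min_count=5):
    """Rank tokens by the fraction of their occurrences the model gets wrong."""
    wrong, right = Counter(), Counter()
    for text, gold, pred in zip(texts, labels, predictions):
        bucket = wrong if pred != gold else right
        bucket.update(set(text.lower().split()))
    scores = {}
    for tok, w in wrong.items():
        total = w + right.get(tok, 0)
        if total >= min_count:
            scores[tok] = w / total  # failure rate for this token
    return sorted(scores.items(), key=lambda kv: -kv[1])
```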

Dimitris Papailiopoulos (@dimitrispapail)'s Twitter Profile Photo

For anyone starting out with transformers as language models, this is *by far* the most complete, to the point, and non-BS expository document. arxiv.org/pdf/2207.09238… major extra points for having zero figures of chaotically intertwined boxes

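The paper referenced above specifies transformers as precise pseudocode. For orientation, here is the core operation it formalizes, single-head scaled dot-product attention, as a minimal NumPy sketch (the standard definition, not code from the paper):

```python
# Single-head scaled dot-product attention (standard definition).
import numpy as np

def attention(Q, K, V):
    """Q: (n_q, d); K, V: (n_k, d). Returns (n_q, d)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])         # query-key logits
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted value mix
```
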
LST @ Saarland University (@lstsaar)'s Twitter Profile Photo

📢We are happy to announce that nine papers by members of our department have been accepted at ACL 2023. Congratulations to all authors!👏 Authors and their papers👇

Marius Mosbach (@mariusmosbach)'s Twitter Profile Photo

In our #ACL2023NLP paper, we provide a fair comparison of LM task adaptation via in-context learning and fine-tuning. We find that fine-tuned models generalize better than previously thought and that robust task adaptation remains a challenge! 🧵 1/N arxiv.org/abs/2305.16938

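To make the comparison concrete: the two adaptation modes differ in whether any weights change. A schematic sketch with a hypothetical `lm` interface (not the paper's code):

```python
# Schematic contrast of the two adaptation modes; `lm` is a
# hypothetical language-model interface, not the paper's code.

def in_context_predict(lm, demos, query):
    """In-context learning: condition on demonstrations, weights frozen."""
    prompt = "".join(f"Input: {x}\nLabel: {y}\n\n" for x, y in demos)
    prompt += f"Input: {query}\nLabel:"
    return lm.generate(prompt)

def fine_tune_predict(lm, demos, query, steps=100):
    """Fine-tuning: update weights on the same examples, then predict."""
    for step in range(steps):
        x, y = demos[step % len(demos)]
        lm.gradient_step(x, y)  # hypothetical single update step
    return lm.generate(f"Input: {query}\nLabel:")
```
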
Yanai Elazar (@yanaiela)'s Twitter Profile Photo

I also hope to normalize the discussion around failures: not being ashamed of them, embracing and learning from them, and finally figuring out how to deal with them.

Michael A. Hedderich (@michedderich)'s Twitter Profile Photo

Presenting an NLP-focused version of our last year's ICML paper on understanding why your model is failing, using global explanations (arxiv.org/abs/2311.10920), at BlackboxNLP. And very much looking forward to meeting a lot of cool people at #EMNLP2023

Longyue Wang (@wangly0229)'s Twitter Profile Photo

My talk and slides at EMNLP2023 - WMT2023 on the Discourse-Level Literary Translation Shared Task. #EMNLP2023 #WMT2023 @EMNLP2023 @WMT2023 researchgate.net/publication/37…

Tom Sherborne (@tomsherborne)'s Twitter Profile Photo

Do you love cross-lingual transfer? Are you interested in putting latent variables everywhere you can? Desperately searching for applied optimal transport research? Come to my poster at 16:00 in East Foyer! #EMNLP2023 virtual2023.emnlp.org/paper_TACL-514… w/ Tom Hosking + Mirella Lapata

Marius Mosbach (@mariusmosbach)'s Twitter Profile Photo

Excited to share our new preprint led by Miaoran Zhang - The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis. Paper: arxiv.org/abs/2402.12976 Here are our main findings 🧵 1/9

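One dimension such an analysis can vary is the language of the demonstrations relative to the query; a toy sketch of constructing those prompts (hypothetical data format, not the paper's setup):

```python
# Toy prompt construction varying demonstration language
# (hypothetical data format; not the paper's setup).
def build_prompt(demo_pool, query, demo_lang, k=4):
    demos = [d for d in demo_pool if d["lang"] == demo_lang][:k]
    lines = [f"{d['text']} -> {d['label']}" for d in demos]
    lines.append(f"{query} ->")
    return "\n".join(lines)

# e.g. build_prompt(pool, "Das Essen war großartig.", demo_lang="en")
# asks a German query with English demonstrations.
```
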
Jesujoba Alabi (@alabi_jesujoba)'s Twitter Profile Photo

What happens to the predictions of a language model (LM) when it is adapted to a new language? 🤔 We approach this question in our new work where we explore the hidden space of transformer language adapters. Paper: arxiv.org/abs/2402.13137 Read on for our findings. 🧵1/N

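For context, language adapters of the kind studied here are typically small bottleneck modules inserted into an otherwise frozen transformer. A minimal PyTorch sketch of the common design (not the paper's code):

```python
# Common bottleneck-adapter design: down-project, nonlinearity,
# up-project, residual add (not the paper's code).
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model=768, d_bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, hidden):
        # Residual keeps the frozen model's representation and adds
        # a small learned, language-specific correction.
        return hidden + self.up(self.act(self.down(hidden)))
```
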
Sanchaita Hazra (@hsanchaita)'s Twitter Profile Photo

🌻 Super excited about my first Computer Science publication at NAACL HLT 2024 (main)! Bodhisattwa Majumder and I study the language of deception and how language models fare at detecting it. And guess what we've found: arxiv.org/pdf/2311.07092… (1/n) 🧵 EconUofU Ai2

LST @ Saarland University (@lstsaar)'s Twitter Profile Photo

📢Our department LST, in cooperation with DFKI, is inviting applications for the W3 Professorship in Language Technology. uni-saarland.de/fileadmin/uplo… Please spread the word.

Bodhisattwa Majumder (@mbodhisattwa)'s Twitter Profile Photo

Incredibly proud of our teamwork, now at the ICML Conference! This position paper starts a series of work on data-driven scientific discovery with generative models. Follow-ups coming soon on benchmarks, systems, & accessibility in science! arxiv.org/abs/2402.13610 #ICML2024 Ai2 Aristo Team at AI2

Javier Ferrando (@javifer_96)'s Twitter Profile Photo

[1/4] Introducing “A Primer on the Inner Workings of Transformer-based Language Models”, a comprehensive survey of interpretability methods and the findings about the inner workings of language models that they have led to. ArXiv: arxiv.org/pdf/2405.00208

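One of the simplest techniques such surveys cover is the "logit lens": decoding intermediate hidden states with the unembedding matrix to see what the model would predict at each layer. A generic PyTorch sketch (assumes access to per-layer hidden states and the unembedding matrix):

```python
# "Logit lens": project each layer's hidden state through the
# unembedding matrix to read off per-layer predictions
# (generic sketch; assumed model interface).
import torch

def logit_lens(hidden_states, unembedding, tokenizer, position=-1):
    """hidden_states: list of (seq_len, d) tensors, one per layer."""
    for layer, h in enumerate(hidden_states):
        logits = h[position] @ unembedding.T   # (vocab_size,)
        top = torch.argmax(logits).item()
        print(f"layer {layer:2d}: {tokenizer.decode([top])!r}")
```
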
Niyati Bafna (@bafnaniyati)'s Twitter Profile Photo

Drop by my talk at LREC-COLING on Thursday on unsupervised cognate induction between closely related, data-imbalanced language pairs :) arxiv.org/pdf/2305.14012

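As background, a common baseline for cognate induction pairs words across two related languages by surface similarity; a minimal sketch using normalized string similarity (a generic baseline, not the paper's unsupervised method):

```python
# Generic cognate-candidate baseline: pair words across related
# languages by string similarity (not the paper's method).
from difflib import SequenceMatcher

def cognate_candidates(words_a, words_b, threshold=0.75):
    pairs = []
    for wa in words_a:
        for wb in words_b:
            sim = SequenceMatcher(None, wa, wb).ratio()  # in [0, 1]
            if sim >= threshold:
                pairs.append((wa, wb, round(sim, 2)))
    return sorted(pairs, key=lambda p: -p[2])
```
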
LST @ Saarland University (@lstsaar)'s Twitter Profile Photo

📜 In-Context Learning Workshop organized by Paloma García de Herreros García, Israel A. Azime and Miaoran Zhang on June 12th from 2 pm to 5 pm. More information 👇 hpc.uni-saarland.de/workshops/icl

Yihuai Hong@ACL 2024 (@yihuaih91773)'s Twitter Profile Photo

🚀The first-ever parametric LLM Unlearning Benchmark! We find that current unlearning methods only modify the model's behavior without truly erasing the knowledge encoded in its parameters. We present the ConceptVectors Benchmark, with each vector strongly tied to a specific concept. 🔗yihuaihong.github.io/ConceptVectors…

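As a rough illustration of the idea behind concept vectors (a parameter direction tied to a specific concept), one can project a candidate vector through the unembedding matrix and inspect its top tokens. A sketch of that general probe, not the benchmark's code:

```python
# Inspect what a parameter vector "writes" to the vocabulary by
# projecting it through the unembedding matrix (general probe;
# not the ConceptVectors benchmark code).
import torch

def top_tokens_for_vector(vector, unembedding, tokenizer, k=10):
    """vector: (d,); unembedding: (vocab_size, d)."""
    logits = unembedding @ vector
    top_ids = torch.topk(logits, k).indices.tolist()
    return [tokenizer.decode([i]) for i in top_ids]
```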