LTL-UvA (@ltl_uva)'s Twitter Profile
LTL-UvA

@ltl_uva

Language Technology Lab @UvA_Amsterdam

ID: 1598376098538131488

Link: https://ltl.science.uva.nl | Joined: 01-12-2022 17:59:35

40 Tweets

61 Followers

124 Following

Evgeniia Tokarchuk (@evgtokarchuk)'s Twitter Profile Photo

Inspiring day at the GRaM Workshop at ICML 2024!

My only complaint: too short! I want more! 😁 Thanks to the organizers for such a great experience ❤️

Amazing talks (personal favorites by Nina Miolane 🦋 @ninamiolane.bsky.social and Joey Bose), great posters, and a panel session I genuinely enjoyed.

#ICML2024
Baohao Liao (@baohao_liao)'s Twitter Profile Photo

🚨 New paper 🚨
Our multilingual system for the WMT24 general shared task obtains:

--- Constrained track: 6 🥇 3 🥈 1 🥉
--- Open & Constrained track: 1 🥇 2 🥈 2 🥉

A simple and effective pipeline to adapt LLMs to multilingual machine translation.

paper: arxiv.org/abs/2408.11512
Seth Aycock @ ICLR (@sethjsa)'s Twitter Profile Photo

Just returned from MT Marathon 2024 in Prague - thanks to the Institute of Formal and Applied Linguistics for organising a great week! Between the insightful talks and collaboration on a mini research project, I presented a poster of my recent work. And of course, we explored the sights of Prague too - in 30°C heat!
LTL-UvA (@ltl_uva)'s Twitter Profile Photo

Language Technology Lab got four papers accepted at #EMNLP2024! Congrats to authors Kata Naszadi, Shaomu Tan, Baohao Liao, and Di Wu 🥳🥳

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

1. Can you learn the meaning of words from someone who thinks you are smarter than you are? Check out Kata's paper: arxiv.org/pdf/2410.05851 #EMNLP2024 #NLProc

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

2. ApiQ: Finetuning of 2-Bit Quantized Large Language Model. Check out Baohao's paper: arxiv.org/abs/2402.05147 #EMNLP2024

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

3. How to identify intrinsic task modularity within multilingual translation networks? Check out Shaomu's paper: arxiv.org/abs/2404.11201

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

4. Representational Isomorphism and Alignment of Multilingual Large Language Models. We will release Di's paper later! #EMNLP2024 #NLProc

Seth Aycock @ ICLR (@sethjsa)'s Twitter Profile Photo

Our work “Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?” is now on arXiv! arxiv.org/abs/2409.19151 - in collaboration with David Stap, Di Wu, Christof Monz, and Khalil Sima'an from ILLC and LTL-UvA 🧵

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

We show empirically that LLMs fail to exploit grammatical explanations for translation; instead, we find that parallel examples mainly drive translation performance. While grammatical knowledge does not help translation, LLMs benefit from our typological prompt on linguistic tasks.

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

LTL News: Happy to announce that Maya's paper got accepted at NAACL 2025 (Findings) 🥳 #naacl #nlp Paper link: arxiv.org/abs/2410.18850

LTL-UvA (@ltl_uva)'s Twitter Profile Photo

The paper explores the effect of kNN on ASR with Whisper overall, as well as on speaker adaptation and bias. kNN is a non-parametric method that adapts a model's output to a domain by searching for neighbouring tokens in a datastore at each decoding step.
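For readers unfamiliar with the technique, here is a minimal sketch of kNN-LM-style adaptation, assuming decoder hidden states serve as keys into a datastore of (state, next-token) pairs whose neighbours are interpolated with the model's own distribution. The function names, k, temperature, and interpolation weight are illustrative, not the paper's implementation.

```python
import numpy as np

# Sketch of kNN-LM-style adaptation (hypothetical names, not the paper's code).
# The datastore holds decoder hidden states (keys) paired with the token that
# followed each state in the adaptation-domain data (values).

def knn_distribution(query, keys, values, vocab_size, k=8, temperature=10.0):
    """Turn the k nearest datastore entries into a next-token distribution."""
    dists = np.linalg.norm(keys - query, axis=1)      # L2 distance to every key
    nearest = np.argsort(dists)[:k]                   # indices of the k neighbours
    weights = np.exp(-dists[nearest] / temperature)   # closer neighbours weigh more
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for idx, w in zip(nearest, weights):
        p_knn[values[idx]] += w                       # put mass on neighbours' tokens
    return p_knn

def adapt_step(p_model, query, keys, values, lam=0.25):
    """Interpolate the base model's distribution with the kNN distribution."""
    p_knn = knn_distribution(query, keys, values, vocab_size=len(p_model))
    return (1 - lam) * p_model + lam * p_knn
```

Because the lookup happens at every decoding step and no weights are updated, adaptation is purely inference-time, which is what makes the method non-parametric.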

Yan Meng (@vivian_yanmy)'s Twitter Profile Photo

Key Finding 1: We simulate the primary source of noise in the parallel corpus, i.e., semantic misalignment, and show the limited effectiveness of widely-used sentence-level pre-filters for detecting it. This underscores the necessity of handling data noise in a fine-grained way.
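For context, a typical sentence-level pre-filter of the kind evaluated here scores each sentence pair with a single crosslingual similarity value and drops pairs below a threshold. The sketch below assumes generic sentence encoders mapping both sides into a shared space; the encoder callables and threshold are placeholders, not the paper's setup.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def prefilter(pairs, encode_src, encode_tgt, threshold=0.7):
    """Keep pairs whose crosslingual embedding similarity clears the threshold.

    encode_src / encode_tgt: callables mapping a sentence to a vector in a
    shared crosslingual space (placeholders for e.g. a LASER-style encoder).
    """
    return [(src, tgt) for src, tgt in pairs
            if cosine(encode_src(src), encode_tgt(tgt)) >= threshold]
```

The finding is that one score per sentence pair is too coarse: a pair can clear the threshold while individual tokens remain misaligned, which motivates token-level treatment.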
Yan Meng (@vivian_yanmy)'s Twitter Profile Photo

Key Finding 2: Observing that the model's self-knowledge becomes increasingly reliable at distinguishing misaligned from clean data at the token level, we propose self-correction, which leverages this self-knowledge to correct the training supervision.
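A minimal sketch of the self-correction idea, under the assumption that leveraging self-knowledge means trusting the model's own confident prediction over the reference token where the two disagree; the confidence threshold and function name are illustrative, not the paper's exact procedure.

```python
import torch

def self_correct_targets(logits, targets, confidence=0.9):
    """Replace reference tokens with the model's prediction where the model
    is highly confident, leaving the rest of the supervision untouched.

    logits:  (seq_len, vocab) model outputs for one training sentence
    targets: (seq_len,) reference token ids from the possibly noisy corpus
    """
    probs = torch.softmax(logits, dim=-1)                   # per-step distributions
    top_p, top_tok = probs.max(dim=-1)                      # best guess + its confidence
    trust_model = (top_p >= confidence) & (top_tok != targets)
    return torch.where(trust_model, top_tok, targets)       # swap only confident disagreements
```

The corrected targets would then stand in for the originals in the training loss, so supervision on clean tokens is preserved while misaligned tokens stop pushing the model in the wrong direction.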