Jan Niehues (@_janius_) 's Twitter Profile
Jan Niehues

@_janius_

Prof. "AI for Language Technologies" at @KITKarlsruhe / @KITinformatik

ID: 601938430

Link: https://ai4lt.anthropomatik.kit.edu/english/21_92.php
Joined: 07-06-2012 14:53:27

153 Tweets

130 Followers

148 Following

IWSLT (@iwslt) 's Twitter Profile Photo

What is new and different this year? ✨🆕 – The challenge gets tougher! Participants will engage with Arabic as a new target language (alongside German & Chinese) 🌍 and diverse testing scenarios featuring unique speech styles, accents & recording conditions. 🎤 Are you ready?

IWSLT (@iwslt) 's Twitter Profile Photo

Why participate? 🚀 - An #IWSLT tradition: the perfect arena to set the state-of-the-art in Spoken Language Translation, test new solutions, join an active research community & push AI’s limits to transform global communication! 🌍🎙️🤖

IWSLT (@iwslt) 's Twitter Profile Photo

Today's task: Low-resource ST! 🎯 Goal: Building speech translation models for currently underserved, mostly low-resource languages and varieties 🗓️ This is the 5th iteration, with new and continuing language pairs (10 total!) 🔗: iwslt.org/2025/low-resou…

IWSLT (@iwslt) 's Twitter Profile Photo

🆕 This year includes new and more diverse language pairs (e.g. Fongbe to French, Estonian to English) in addition to continuing pairs 🆕🔊 Also new this year is a *data track* which encourages the creation of new speech translation datasets for less-supported languages

IWSLT (@iwslt) 's Twitter Profile Photo

Why participate? ✨ Low-resource is truly the frontier, and there are many ways to make an impact for a language community: from providing a dataset for a language that is missing one, to exploring the limits of different models and techniques.

IWSLT (@iwslt) 's Twitter Profile Photo

Our final task highlight: Indic speech translation! 🎯 Goal: develop a speech-to-text translation model that bridges the gap for low-resource Indian languages, focusing on Hindi, Bengali, and Tamil from the Indo-Aryan and Dravidian families. 🔗: iwslt.org/2025/indic

IWSLT (@iwslt) 's Twitter Profile Photo

For the 2nd iteration... 🆕 Significantly more data for low-resource Indic languages + more language directions! ➡️ Both English to Indic and Indic to English translation directions, addressing the unique challenges posed by low-resource linguistic contexts.

IWSLT (@iwslt) 's Twitter Profile Photo

Why participate? ✨ Contribute to advancing real-world systems for multilingual speech translation, particularly for underrepresented languages ✨ Collaborate to bridge linguistic divides & make this technology more accessible to millions of speakers worldwide

IWSLT (@iwslt) 's Twitter Profile Photo

Our last task: Subtitling!! 🎯 Goal: The Subtitling Track challenges participants to generate accurate Arabic & German subtitles for English audiovisual recordings, bridging language gaps in media! 🌍📺 #AI #SpeechTech #Subtitling #IWSLT2025 🔗: iwslt.org/2025/subtitling

IWSLT (@iwslt) 's Twitter Profile Photo

🆕 For this year, Arabic joins the challenge! ✨ For the first time, we ask participants to generate subtitles for Arabic, covering the 5th most spoken language and one of the six UN 🌐 official languages. Let’s break language barriers! 🌍 #Subtitling #IWSLT2025

IWSLT (@iwslt) 's Twitter Profile Photo

Why participate? 🔓Help make the world’s vast audiovisual heritage accessible to all! 🔍 Join the scientific effort to push forward multilingual subtitling and improve global communication! 🌎 #SpeechTranslation #AIforGood

IWSLT (@iwslt) 's Twitter Profile Photo

Call for Demos 🗣️🤖💻 For 2025, we invite interactive system demonstrations highlighting innovative systems, tools, and component technologies that advance the field of speech translation. For more info, see our call below 🗓️: Deadline Apr 25 🔗: iwslt.org/2025/call-for-…

Sara Papi (@sarapapi) 's Twitter Profile Photo

📢 The evaluation period of the Instruction Following task at IWSLT 2025 has just started! 🖥️ Consider submitting your speech-to-text system! Outputs can be easily uploaded to the SPEECHM platform developed in the Meetween project! ➡️ iwslt2025.speechm.cloud.cyfronet.pl

IWSLT (@iwslt) 's Twitter Profile Photo

The evaluation period has begun for our shared tasks! The test data is now available on our website, and submissions are due Tuesday, April 15! ⏰ Please email the task organizers or the Google group with any questions 🥳

IWSLT (@iwslt) 's Twitter Profile Photo

Hi all, do you have a reviewed ARR paper on speech translation to commit to IWSLT? IWSLT has enabled paper commitment for fully reviewed papers from ARR for 2025. If you'd like to commit your paper, please fill out this form by May 17, 2025: forms.gle/1QtVrHXyCGoEq3…

Maike Züfle (@maikezufle) 's Twitter Profile Photo

Fabian Retkowski 🇪🇺🌎❤️ and I've landed in Albuquerque for #NAACL25 🌞 Can't wait to connect with everyone!

🎤 Talks & 🖼️ Posters from KIT on:
- Efficient MT eval
- Length-controlled summaries
- Cross-modal representations
- Low-resource ASR
- Annotation for qualitative research

🧵👇

Sébastien Bratières (@seb_bratieres) 's Twitter Profile Photo

It turns out that LLMs can be adapted to input/output types other than text, e.g. speech or images. We go deep on this idea with DVPS @DVPS_ai, a €29M, 4-year European research project on Multimodal Foundation Models, which I lead in my capacity as Director of AI at Translated.
