Matthias Hagen (@matthias_hagen)'s Twitter Profile
Matthias Hagen

@matthias_hagen

Professor of "Databases and Information Systems", Friedrich-Schiller-Universität Jena

ID: 217377296

Website: https://www.matthias-hagen.de · Joined: 19-11-2010 10:54:09

457 Tweets

908 Followers

927 Following

Webis Group (@webis_de)'s Twitter Profile Photo

Goodbye Washington! We had a fantastic week with interesting talks, discussions, and new ideas at #SIGIR24 #SIGIR2024. We hope to see you all again next year in Italy :)

alexander bondarenko (@albondarenko2)'s Twitter Profile Photo

Jan Heinrich Merker presented our short paper at #ArgMining2024 at #ACL2024 today. We proposed to add "semantics" to lexical retrieval models for argument retrieval. w/ Matthias Hagen, Maik Fröbe, Danik Hollatz. Paper: aclanthology.org/2024.argmining…

Jan Heinrich Merker (@h1ireimer)'s Twitter Profile Photo

Maik Fröbe: Great talk on Team OpenWebSearch.eu (@openwebsearcheu@suma-ev.social)'s submission to the QuantumCLEF shared task on how to exploit #QuantumComputing for feature selection in IR 👍 Exciting new direction of IR research at #CLEF2024 #QuantumCLEF2024

Jan Heinrich Merker (@h1ireimer)'s Twitter Profile Photo

Follow-up on our #BIOASQ2024 submission: We actually submitted the best approach for some of the tasks 👍 Looking forward to further improving Medical RAG! #CLEF2024

Sumit (@_reachsumit)'s Twitter Profile Photo

Lightning IR: Straightforward Fine-tuning and Inference of Transformer-based Language Models for Information Retrieval
Introduces a PyTorch Lightning-based framework for fine-tuning and inference of transformer models in IR.
📝 arxiv.org/abs/2411.04677
github.com/webis-de/light…

Ferdinand Schlatt (@fschlatt1)'s Twitter Profile Photo

Happy to share that our framework for fine-tuning and running neural ranking models, Lightning IR, was accepted as a demo at #WSDM25 🥳
Pre-print: arxiv.org/abs/2411.04677
Code: github.com/webis-de/light…
Docs: webis.de/lightning-ir
A quick rundown of Lightning IR's main features:

Webis Group (@webis_de)'s Twitter Profile Photo

📢 Our paper "The Viability of Crowdsourcing for RAG Evaluation" has been accepted to #SIGIR2025! We compared how good humans and LLMs are at writing and judging RAG responses, assembling 1800+ responses across 3 styles, and 47K+ pairwise judgments in 7 quality dimensions. 🧵➡️

Webis Group (@webis_de)'s Twitter Profile Photo

🧵 2/4 Key findings:
1️⃣ Humans write best? No! LLM responses are rated better than human ones.
2️⃣ Essay answers? No! Bullet lists are often preferred.
3️⃣ BLEU? No! Reference-based metrics don't align with human preferences.
4️⃣ LLMs as judges? No! Prompted models label inconsistently.

Webis Group (@webis_de)'s Twitter Profile Photo

🧵 3/4 This fundamentally challenges previous assumptions about RAG evaluation and system design. But we also show how crowdsourcing offers a viable and scalable alternative! Check out the paper for more.
📝 Preprint @ downloads.webis.de/publications/p…
⚙️ Code/Data is openly available.

Ferdinand Schlatt (@fschlatt1)'s Twitter Profile Photo

Maik Fröbe, Harry Scells, Shengyao Zhuang, Bevan Koopman, Guido Zuccon, Benno Stein, Martin Potthast, Matthias Hagen
Short: Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and LLMs for Passage Re-ranking webis.de/publications.h…
Full: Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders webis.de/publications.h…

Maik Fröbe (@maik_froebe)'s Twitter Profile Photo

Do not forget to participate in the #TREC2025 Tip-of-the-Tongue (ToT) Track :) The corpus and baselines (with run files) are now available and easily accessible via the ir_datasets API and the HuggingFace Datasets API. More details are available at: trec-tot.github.io/guidelines

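Accessing the corpus programmatically might look like the following sketch. Note this is a hedged illustration, not an official snippet: the dataset identifier `trec-tot/2025` is an assumption based on the naming of earlier ToT editions, `ir_datasets` must be installed separately (`pip install ir-datasets`), and the guidelines page above is the authoritative source for the real IDs.

```python
# Hedged sketch: loading the TREC ToT corpus via the ir_datasets API.
# DATASET_ID is hypothetical -- see trec-tot.github.io/guidelines for the real one.
DATASET_ID = "trec-tot/2025"

try:
    import ir_datasets  # pip install ir-datasets

    dataset = ir_datasets.load(DATASET_ID)
    # Peek at the first corpus document and the first query.
    doc = next(dataset.docs_iter())
    query = next(dataset.queries_iter())
    print(doc.doc_id, query.query_id)
except Exception:
    # ir_datasets is not installed, or the hypothetical ID is not registered.
    pass
```

The same corpus can reportedly also be loaded through the HuggingFace Datasets API; the exact repository name is listed in the track guidelines.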
Ferdinand Schlatt (@fschlatt1)'s Twitter Profile Photo

Want to know how to make bi-encoders more than 3x faster with a new backbone encoder model? Check out our talk on the Token-Independent Text Encoder (TITE) at #SIGIR2025 in the efficiency track. It pools vectors within the model to improve efficiency: dl.acm.org/doi/10.1145/37…

Webis Group (@webis_de)'s Twitter Profile Photo

Happy to share that our paper "The Viability of Crowdsourcing for RAG Evaluation" received the Best Paper Honourable Mention at #SIGIR2025! Very grateful to the community for recognizing our work on improving RAG evaluation. 📄 webis.de/publications.h…

Webis Group (@webis_de)'s Twitter Profile Photo

Honored to win the ICTIR Best Paper Honorable Mention Award for "Axioms for Retrieval-Augmented Generation"! Our new axioms are integrated with ir_axioms: github.com/webis-de/ir_ax… Nice to see axiomatic IR gaining momentum.
