Heidelberg University NLP Group (@hd_nlp)'s Twitter Profile
Heidelberg University NLP Group

@hd_nlp

Welcome to the Natural Language Processing Group at the Computational Linguistics Department @UniHeidelberg, led by @AnetteMFrank #NLProc #ML

ID: 1317119066495221761

https://www.cl.uni-heidelberg.de/nlpgroup/ · Joined 16-10-2020 15:04:43

150 Tweets

1.1K Followers

75 Following

Moritz Plenz (@moritzplenz)'s Twitter Profile Photo

Graph Language Models: the child of LMs🗣 and GNNs🕸

📜arxiv.org/abs/2401.07105

TL;DR: we modify self-attention so that an LM becomes a graph transformer. The new architecture enables graph reasoning, while the pretrained parameters retain text understanding.

To appear at #ACL2024
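The attention tweak announced above can be pictured with a toy sketch (my own simplified illustration, not the paper's actual implementation): replace sequence-based relative positions with graph hop distances, so attention between graph tokens is biased and masked by the graph's structure.

```python
from collections import deque

def graph_distances(edges, n):
    """All-pairs shortest-path hop counts via BFS; -1 means unreachable."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[s][w] == -1:
                    dist[s][w] = dist[s][u] + 1
                    q.append(w)
    return dist

def graph_attention_bias(dist, max_hops=2, penalty=-1e9):
    """Additive attention bias: pairs farther than max_hops (or unreachable)
    get a large negative value, i.e. are effectively masked out."""
    n = len(dist)
    bias = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = dist[i][j]
            if d < 0 or d > max_hops:
                bias[i][j] = penalty
    return bias

# Toy linearized knowledge graph: (dog)-(is a)-(animal)-(has)-(legs)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
dist = graph_distances(edges, 5)
bias = graph_attention_bias(dist, max_hops=2)
```

Here `max_hops` and the `-1e9` masking value are hypothetical choices for illustration; the actual Graph Language Model derives graph-relative positional encodings in a more principled way (see the arXiv paper linked above).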
Anette Frank (@anettemfrank)'s Twitter Profile Photo

Very relevant work presented by Letiția Pârcălăbescu today at #ACL2024NLP, measuring LLM self-consistency (but not faithfulness) with a new CC-SHAP metric. 🤩 She's still around and will be happy to talk to you. Heidelberg University NLP Group
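The general idea behind such a self-consistency check — comparing which inputs drive a model's answer against which inputs drive its explanation — can be sketched as a toy (my own simplified proxy, not the CC-SHAP definition from the paper):

```python
import math

def attribution_consistency(attr_answer, attr_explanation):
    """Cosine similarity between two per-input-token attribution vectors:
    a toy proxy for whether a model relies on the same inputs when
    answering and when explaining its answer."""
    dot = sum(a * b for a, b in zip(attr_answer, attr_explanation))
    na = math.sqrt(sum(a * a for a in attr_answer))
    nb = math.sqrt(sum(b * b for b in attr_explanation))
    return dot / (na * nb) if na and nb else 0.0
```

A score near 1 would suggest the answer and the explanation lean on the same input tokens; a low or negative score flags inconsistency. The function name and the cosine choice are assumptions for illustration only.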

Anette Frank (@anettemfrank)'s Twitter Profile Photo

Our Moritz Plenz from Heidelberg University NLP Group in front of numerous visitors of his poster, presenting his new work on Graph Language Models 🕸️🗣️ at #ACL2024NLP. GLMs unify the advantages of LLMs and GNNs on structure-based tasks. Hold on, Moritz, to the end of the session! 😅🤩

Anette Frank (@anettemfrank)'s Twitter Profile Photo

I'm very proud of Xiyan Fu's work on Continual Learning of Compositional Generalization in NLI, which is crucial for applications with dynamic knowledge updates. Check out her paper and benchmark here: tinyurl.com/m568zuwr

AI Coffee Break with Letitia (@aicoffeebreak)'s Twitter Profile Photo

How to make powerful LLMs understand graphs and their structure?🕸️ With Graph Language Models!
Take a pre-trained LLM and fit it with the ability to process graphs. Watch if you're curious how:👇
📺 youtu.be/JcHeaONGbmQ

(Hint: it's about position embeddings, as Moritz Plenz
Ben Hagag (@benhagag20)'s Twitter Profile Photo

4/ **On Measuring Faithfulness or Self-consistency of Natural Language Explanations** Measuring models' reasoning capabilities is a current challenge the community is dealing with. As this work shows, most existing research focuses on consistency rather than faithfulness or

Frederick Riemenschneider (@bowpis)'s Twitter Profile Photo

When I started my Bachelor in Classical Philology and Computational Linguistics in 2018, I had no idea where it would lead. I'm excited to now be giving my first invited talk at the Computational Approaches to Ancient Greek and Latin Workshop!

Anette Frank (@anettemfrank)'s Twitter Profile Photo

Frederick Riemenschneider from Heidelberg University NLP Group will give an invited talk at the Computational Approaches to Ancient Greek and Latin Workshop at ULeuven soon! Lovers of NLP and Ancient Languages should not miss the event! 🤩 #nlproc #DigitalHumanities #digiclass
Anette Frank (@anettemfrank)'s Twitter Profile Photo

Proud of my PhD student Xiyan Fu from Heidelberg University NLP Group who will present a *new Challenge based on CommonGen* to evaluate the *compositional generalization abilities of LLMs* in a combined reasoning & verbalization task, based on KG graph representations as input. 🥳 #EMNLP2024 #nlproc 🤩

Letiția Pârcălăbescu (@letiepi)'s Twitter Profile Photo

The last paper of my PhD at Heidelberg University NLP Group is accepted at ICLR 2025! 🙌 We investigate the reliance of modern Vision & Language Models (VLMs) on image 🖼️ vs. text 📄 inputs when generating answers vs. explanations, revealing fascinating insights into their modality use and self-consistency. 👇

Moritz Plenz (@moritzplenz)'s Twitter Profile Photo

Debates aren’t always black and white—opposing sides often share common ground. These partial agreements are key for meaningful compromises. 
Presenting “Perspectivized Stance Vectors” (PSVs) — an interpretable method to identify nuanced (dis)agreements
📜 arxiv.org/abs/2502.09644
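One way to picture what an interpretable stance comparison could look like (a hypothetical toy of my own, not the paper's PSV construction): encode each side's stance toward a set of perspectives as a vector over {-1, 0, +1} and compare elementwise to surface partial agreements.

```python
def stance_overlap(psv_a, psv_b):
    """Compare two toy 'perspectivized stance vectors', given as dicts
    mapping perspective -> stance in {-1, 0, +1} (0 = no stance taken).
    Returns perspectives where the sides agree, disagree, or where
    only one side takes a stance."""
    agree, disagree, one_sided = [], [], []
    for p in sorted(set(psv_a) | set(psv_b)):
        a, b = psv_a.get(p, 0), psv_b.get(p, 0)
        if a == 0 or b == 0:
            if a != b:
                one_sided.append(p)
        elif a == b:
            agree.append(p)
        else:
            disagree.append(p)
    return agree, disagree, one_sided

# Toy debate on "ban cars from city centers"
pro = {"air quality": +1, "local business": -1, "safety": +1}
con = {"air quality": +1, "local business": +1, "commute time": -1}
agree, disagree, one_sided = stance_overlap(pro, con)
```

The perspectives and stance values here are invented for illustration; the point is that even opposed sides can share entries ("air quality"), which is the kind of common ground the tweet describes.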
Frederick Riemenschneider (@bowpis)'s Twitter Profile Photo

What did Aristotle actually write? We think we know, but reality is messy. As Ancient Greek texts traveled through history, they were copied and recopied countless times, accumulating subtle errors with each generation. Our new #NAACL2025 findings paper tackles this challenge.

Frederick Riemenschneider (@bowpis)'s Twitter Profile Photo

How and when do multilingual LMs achieve cross-lingual generalization during pre-training? And why do later, supposedly more advanced checkpoints, lose some language identification abilities in the process? Our #ACL2025 paper investigates.

Frederick Riemenschneider (@bowpis)'s Twitter Profile Photo

Looking at Bruegel's Tower of Babel in Vienna makes you wonder: How can multilingual language models overcome the language barriers? Find out tomorrow! 
📍 Level 1 (ironic, right?), Room 1.15-1
🕐 2 PM
#ACL2025NLP