CambridgeLTL (@cambridgeltl)'s Twitter Profile
CambridgeLTL

@cambridgeltl

Language Technology Lab (LTL) at the University of Cambridge. Computational Linguistics / Machine Learning / Deep Learning. Focus: Multilingual NLP and Bio NLP.

ID: 964208941977792512

Link: http://ltl.mml.cam.ac.uk/ · Joined: 15-02-2018 18:45:00

239 Tweets

2.2K Followers

86 Following

CambridgeLTL (@cambridgeltl)'s Twitter Profile Photo

🎙Talks talks talks! 🎙

As the new term is just around the corner, we’re happy to invite you to the Easter term Seminar series. 

Find the schedule below and up-to-date information with abstracts and links at talks.cam.ac.uk/show/index/604….
Yinhong Liu (@yinhongliu2)'s Twitter Profile Photo

🔥New paper!📜
Struggle to align LLM evaluators with human judgements?🤔
Introducing PairS🌟: by exploiting transitivity, we push the potential of pairwise preference for efficient ranking evaluation with better alignment!🧑‍⚖️
📖arxiv.org/abs/2403.16950
💻github.com/cambridgeltl/p…
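The core idea can be sketched as an ordinary comparison sort whose comparator is a pairwise LLM judge: transitivity is what lets a sort's O(n log n) comparisons stand in for all O(n²) pairs. A minimal, runnable sketch — the `pairwise_judge` stub is hypothetical and fakes the LLM call with string length; the actual PairS algorithm aggregates preferences more carefully:

```python
import functools

def pairwise_judge(a: str, b: str) -> bool:
    """Stub for a pairwise LLM preference call (hypothetical).
    Returns True if candidate `a` is preferred over `b`.
    Faked with string length here so the sketch runs offline."""
    return len(a) > len(b)

def rank_by_pairwise_preference(candidates: list[str]) -> list[str]:
    """Rank candidates using O(n log n) pairwise comparisons.
    Sorting is only valid if the judge's preferences are transitive,
    which is the property PairS exploits to skip the full O(n^2) grid."""
    cmp = lambda a, b: -1 if pairwise_judge(a, b) else 1
    return sorted(candidates, key=functools.cmp_to_key(cmp))

ranked = rank_by_pairwise_preference(["ok", "a longer summary", "mid one"])
```

Swapping the stub for a real LLM call turns the comparator into an API request per comparison, so the O(n log n) vs O(n²) difference directly controls evaluation cost.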
Benjamin Minixhofer (@bminixhofer)'s Twitter Profile Photo

Introducing Zero-Shot Tokenizer Transfer (ZeTT) ⚡

ZeTT frees language models from their tokenizer, allowing you to use any model with any tokenizer, with little or no extra training.

Super excited to (finally!) share the first project of my PhD🧵
Tiancheng Hu (@tiancheng_hu)'s Twitter Profile Photo

"Role-playing" with LLMs is increasingly popular in chatbots, and in "simulation" for the social sciences. Can LLMs simulate individual perspectives in subjective NLP tasks? Our #ACL2024 paper w. Nigel Collier investigates this question. 🧵👇 arxiv.org/pdf/2402.10811 1/7

Fabian David Schmidt (@fdschmidt)'s Twitter Profile Photo

Introducing NLLB-LLM2Vec! 🚀

We fuse the NLLB encoder & Llama 3 8B trained w/ LLM2Vec to create NLLB-LLM2Vec which supports cross-lingual NLU in 200+ languages🔥

Joint work w/ Philipp Borchert, Ivan Vulić, and Goran Glavaš during my great research stay at @cambridgeltl
Tiancheng Hu (@tiancheng_hu)'s Twitter Profile Photo

Thrilled to share our new paper: "Can LLM be a Personalized Judge?" We investigate the reliability of LLMs in judging user preferences based on personas and propose improvements using verbal uncertainty estimation to enhance accuracy. 🎭👨‍⚖️ 📄 Paper: arxiv.org/abs/2406.11657

Markus Frohmann (@frohmannm)'s Twitter Profile Photo

Introducing 🪓Segment any Text! 🪓

A new state-of-the-art sentence segmentation tool!
Compared to existing tools (and strong LLMs!), our models are far more:
1. efficient ⚡
2. performant 🔝
3. robust 🚀
4. adaptable 🎯
5. multilingual 🗺
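The punctuation-free segmentation idea behind tools like this can be sketched as thresholding per-character sentence-boundary probabilities: a model scores each position, and the text is cut wherever the score clears a threshold. A toy sketch under that assumption — the model is stubbed with hand-set probabilities, and the real tool's API and models differ:

```python
def split_on_boundaries(text: str, boundary_probs: list[float],
                        threshold: float = 0.5) -> list[str]:
    """Split `text` wherever the (stubbed) model's probability that a
    sentence ends right after character i exceeds `threshold`."""
    sentences, start = [], 0
    for i, p in enumerate(boundary_probs):
        if p > threshold:
            sentences.append(text[start:i + 1].strip())
            start = i + 1
    if start < len(text):
        sentences.append(text[start:].strip())
    return [s for s in sentences if s]

text = "hello world this is a test"
# Pretend the model fires a boundary after "world" (character index 10).
probs = [0.9 if i == 10 else 0.1 for i in range(len(text))]
segments = split_on_boundaries(text, probs)
```

Because the split decision comes from learned scores rather than punctuation rules, the same mechanism works on lowercased, unpunctuated, or noisy text.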
Meiru Zhang (@zhang_meiru)'s Twitter Profile Photo

Attention Instruction: Amplifying Attention in the Middle via Prompting

Key findings:
1. LLMs lack relative position awareness
2. We can guide the LLM to a specific region with position-based indexing

Paper: arxiv.org/pdf/2406.17095

Thanks to: Zaiqiao Meng and Nigel Collier
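Position-based indexing can be sketched as prepending an explicit index to each context document and adding an instruction that names the index to attend to. A minimal sketch — the function name and prompt wording are illustrative, not the paper's exact prompts:

```python
def build_indexed_prompt(question: str, documents: list[str],
                         focus_index: int) -> str:
    """Build a prompt with position-indexed documents plus an
    attention instruction pointing at one index (wording illustrative)."""
    parts = [f"[Document {i + 1}] {doc}" for i, doc in enumerate(documents)]
    instruction = (f"Pay particular attention to Document {focus_index + 1} "
                   "when answering.")
    return "\n".join(parts) + f"\n{instruction}\nQuestion: {question}"

prompt = build_indexed_prompt(
    "Who wrote the memo?",
    ["Memo by Alice.", "Report by Bob."],
    focus_index=0,
)
```

The point of the finding is that absolute indices like "Document 2" steer attention reliably, whereas relative phrasing like "the middle document" does not, because the model lacks relative position awareness.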