Desmond Elliott (@delliott) 's Twitter Profile
Desmond Elliott

@delliott

Associate Professor at the University of Copenhagen working on multimodal machine learning.

ID: 8164582

Link: https://elliottd.github.io/ · Joined: 13-08-2007 18:37:04

3.3K Tweets

2.2K Followers

445 Following

Semih Yagcioglu (@semihyagcioglu) 's Twitter Profile Photo

I am at #naacl2024 in Mexico City this week and will be presenting our work on “Sequential Compositional Generalization in Multimodal Models” tomorrow. If you’re interested in #multimodality and compositional generalization, I'm looking forward to meeting you tomorrow to discuss our

Alberto Testoni (@alberto_testoni) 's Twitter Profile Photo

1/5 📣 Excited to share “LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks”! arxiv.org/abs/2406.18403 🚀 We introduce JUDGE-BENCH, a benchmark to investigate to what extent LLM-generated judgements align with human evaluations. #NLProc

Matt Groh (@mattgroh) 's Twitter Profile Photo

Looks like a real photograph, right? Look again. This is an AI-generated image with a glaring artifact. How many seconds does it take for you to spot it? Here's a clue: count the number of stairs to the man's right and left. See the artifact highlighted 👇

Kyunghyun Cho (@kchonyc) 's Twitter Profile Photo

we all want to, and need to, be prepared to train our own large-scale language models from scratch. why?
1. transparency, or lack thereof
2. maintainability, or lack thereof
3. compliance, or lack thereof
and because we can, thanks to amazing open-source and open-platform

Marek Rei (@marekrei) 's Twitter Profile Photo

We are recruiting a PhD student to work on foundation language models from electronic health records. The project is part of the Centre in AI for Healthcare (AI4Health) at Imperial College London, co-supervised with Aldo Faisal, to start Oct 2024. Ideally a Home/UK applicant. #NLProc ai4health.io/apply/

Wenyan Li (@wenyan62) 's Twitter Profile Photo

📣📣 Thrilled to share that I’ll present our paper “Understanding Retrieval Robustness for Retrieval-Augmented Image Captioning” at #ACL2024!! arxiv.org/abs/2406.02265 See you in Bangkok🌴🌴🌴 Kudos to our coauthors❤️ @JIAANGLI Rita Ramos Raphael Tang Desmond Elliott

Matthias Gallé (@mgalle) 's Twitter Profile Photo


We are doing it again!

The 5th edition of the NLP winter school in the Alps will again have as its main pillars:
- access to top-notch doers & thinkers
- an intimate setting for discussions

All of this 5 minutes from a ⛷️ lift, with great speakers and opportunities to brainstorm
Simon Dobnik (@simondobnik) 's Twitter Profile Photo

Looking for a post-doctoral researcher to join the Beyond Pixels and Words project in #nlp #ml #ai #linguistics #cogsci #robotics at the University of Gothenburg in Sweden. Deadline: end of August 15 (GMT+2). web103.reachmee.com/ext/I005/1035/… JohnDK

Desmond Elliott (@delliott) 's Twitter Profile Photo

I was shocked to see a reviewer synthesise the two already-posted reviews of a paper in the ARR June cycle, but I guess this is the stage we're at in 2024.

Tal Linzen (@tallinzen) 's Twitter Profile Photo

This stuff raises a question that we've grappled with before: when doing LLM research, should we really treat systems like this (15T training tokens, many post-training tricks) as a baseline? Or is there value in research on simpler systems even if they perform a little worse?

Alessandro Suglia (@ale_suglia) 's Twitter Profile Photo

LLMs are great but they are brittle to minimal prompt perturbations (e.g., typos, indentation, ...). Q: How do we create truly multimodal foundation models? A: Do as we humans do: text as visual perception! Enter PIXAR, our work at #ACL2024NLP! arxiv.org/abs/2401.03321

Desmond Elliott (@delliott) 's Twitter Profile Photo

#LazyWeb In the prompt sensitivity literature, what are the best papers that show how model performance varies with differently structured / phrased inputs?

Andre Martins (@andre_t_martins) 's Twitter Profile Photo

Very insightful remarks by (((ل()(ل() 'yoav))))👾. Looking back in history, what is going on is not new and probably not very different from the empirical revolution in the 90s or the shift to DL in NLP in ~2015 (of which LLMs are just a natural continuation).

Jenia Jitsev 🏳️‍🌈 🇺🇦 (@jjitsev) 's Twitter Profile Photo

LAION-5B is an important reference research dataset for reproducible studies of language-vision foundation models. We release Re-LAION-5B, a transparent safety iteration on LAION-5B that fixes issues and allows the broad research community to continue using open datasets as a reference🧵

Desmond Elliott (@delliott) 's Twitter Profile Photo

Fun new paper led by Ingo Ziegler and Abdullatif Köksal that shows how we can use retrieval augmentation to create high-quality supervised fine-tuning data. All you need to do is write a few examples that demonstrate the task.

Natalie Schluter (@natschluter) 's Twitter Profile Photo

I’m looking for great recent/almost MSc graduates in NLP/Speech/Theoretical Deep Learning who are interested in a 6-month to 1-year position with me as a research assistant. They would be hired by DTU Compute.

MMitchell (@mmitchell_ai) 's Twitter Profile Photo

Can you imagine working at a company that not only supports you, but celebrates you? 😍 Feeling all kinds of gratitude for being able to work at Hugging Face.