Wenyan Li (@wenyan62)'s Twitter Profile
Wenyan Li

@wenyan62

PhD student at the CoAStaL NLP Group, University of Copenhagen. Former researcher at Comcast AI and SenseTime.

ID: 1303507887315185665

Website: http://wenyanli.org | Joined: 09-09-2020 01:38:01

26 Tweets

204 Followers

189 Following

Andrew Ng (@andrewyng)'s Twitter Profile Photo

It is only rarely that, after reading a research paper, I feel like giving the authors a standing ovation. But I felt that way after finishing Direct Preference Optimization (DPO) by Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher Manning and Chelsea Finn. This

Wenyan Li (@wenyan62)'s Twitter Profile Photo

Happy to share that our paper "The Role of Data Curation in Image Captioning" is accepted to #EACL2024 main conference! Thanks to our co-authors Jonas Færch Lotz and Desmond Elliott! We will update the preprint and release the code soon! See you in Malta 🏖️🏖️

Desmond Elliott (@delliott)'s Twitter Profile Photo

Reminder about this opportunity to join my group to work on LLMs. Applications are due in one week and informal inquiries are welcome.

Jimmy Lin (@lintool)'s Twitter Profile Photo

They say a picture is worth a thousand words... but work led by ralphtang.eth finds words worth a thousand pictures! arxiv.org/abs/2406.08482

Ilias Chalkidis (@kiddothe2b)'s Twitter Profile Photo

New 📑 pre-print auditing LLMs concerning #EUElection: "Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024" arxiv.org/abs/2407.08495

Wenyan Li (@wenyan62)'s Twitter Profile Photo

📣📣 Thrilled to share that I’ll present our paper “Understanding Retrieval Robustness for Retrieval-Augmented Image Captioning” at #ACL2024!! arxiv.org/abs/2406.02265 See you in Bangkok🌴🌴🌴 Kudos to our coauthors ❤️ @JIAANGLI, Rita Ramos, ralphtang.eth, Desmond Elliott

Jianyuan Wang (@jianyuan_wang)'s Twitter Profile Photo

(1/6) We’ve just released a HF 🤗 demo for our VGGSfM, the first differentiable Structure from Motion (SfM) pipeline that outperforms traditional algorithms across various benchmarks! Try it yourself! ⬇️ (huggingface.co/spaces/faceboo…)

Wenyan Li (@wenyan62)'s Twitter Profile Photo

Will be presenting “Understanding Retrieval Robustness for Retrieval-Augmented Image Captioning” at: 

Poster In-Person session 2: Aug 12, 2pm
Oral: Aug 13, multimodal session, 4:45pm

Feel free to drop by👋 if you are interested!
ralphtang.eth (@ralph_tang)'s Twitter Profile Photo

Our paper on understanding variability in text-to-image models was accepted at #EMNLP2024 main track! Lots of thanks to my collaborators Xinyu Crystina Zhang, Yao Lu, Wenyan Li, Ulie Xu, and mentors Jimmy Lin, Pontus, Ferhan Ture. Check out w1kp.com

Yifei Yuan (@yfyuan775)'s Twitter Profile Photo

📢📣Happy to share our new benchmark paper: ‘Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering’ accepted to #EMNLP main! Thanks to my amazing collaborators Yang Deng, Anders Søgaard, Mohammad Aliannejadi ❤️ Looking forward to presenting in Miami🏖️🏝️

Xinyu Crystina Zhang | on job market (@crystina_z)'s Twitter Profile Photo

1/7 🚨non-LLM paper alert!🚨

Human perception of a sentence is quite robust to interchanging words with similar meanings, not to mention semantically equivalent words across different languages. What about language models?

In our recent work, we measure the
Chengzu Li (@li_chengzu)'s Twitter Profile Photo

Forget just thinking in words.

🚀 New Era of Multimodal Reasoning🚨
🔍 Imagine While Reasoning in Space with MVoT

Multimodal Visualization-of-Thought (MVoT) revolutionizes reasoning by generating visual "thoughts" that transform how AI thinks, reasons, and explains itself.
Afra Amini (@afra_amini)'s Twitter Profile Photo

Current KL estimation practices in RLHF can generate high variance and even negative values! We propose a provably better estimator that only takes a few lines of code to implement.🧵👇
w/ Tim Vieira and Ryan Cotterell
paper: arxiv.org/pdf/2504.10637
code: github.com/rycolab/kl-rb
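
For context on the estimator being criticized: a common RLHF practice is to estimate the sequence-level KL between the policy and the reference model from a single sampled continuation, summing the per-token log-ratios, and that single-sample estimate can indeed come out negative and has high variance. The repository name (kl-rb) suggests a Rao-Blackwellised estimator, so the sketch below contrasts the naive single-sample estimate with a Rao-Blackwellised per-step version purely as an illustration of that general idea; the function names and toy setup are hypothetical and are not taken from the paper or the rycolab/kl-rb code.

    # Minimal sketch (assumptions as stated above), in plain NumPy.
    import numpy as np

    def naive_mc_kl(logp_policy, logp_ref, sampled_tokens):
        """Single-sample estimate: sum of log-ratios at the *sampled* tokens.
        Unbiased, but individual estimates can be negative and vary a lot."""
        steps = np.arange(len(sampled_tokens))
        return np.sum(logp_policy[steps, sampled_tokens]
                      - logp_ref[steps, sampled_tokens])

    def rao_blackwell_kl(logp_policy, logp_ref):
        """Replace each step's single-token log-ratio by its exact expectation
        over the vocabulary. Each term is then a true per-step KL (>= 0), and
        by the Rao-Blackwell argument the variance can only go down."""
        p = np.exp(logp_policy)                 # [T, V] next-token probabilities
        per_step_kl = np.sum(p * (logp_policy - logp_ref), axis=-1)
        return np.sum(per_step_kl)

    # Toy example: T = 3 generation steps over a vocabulary of size V = 5.
    rng = np.random.default_rng(0)
    T, V = 3, 5
    logits_pi = rng.normal(size=(T, V))
    logits_ref = rng.normal(size=(T, V))
    logp_pi = logits_pi - np.log(np.exp(logits_pi).sum(-1, keepdims=True))
    logp_ref = logits_ref - np.log(np.exp(logits_ref).sum(-1, keepdims=True))
    tokens = np.array([rng.choice(V, p=np.exp(logp_pi[t])) for t in range(T)])

    print("naive single-sample estimate:", naive_mc_kl(logp_pi, logp_ref, tokens))
    print("Rao-Blackwellised estimate:  ", rao_blackwell_kl(logp_pi, logp_ref))

Averaged over many sampled sequences both converge to the same KL, but only the second version is guaranteed non-negative step by step, which is exactly the failure mode the tweet calls out.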
Wenyan Li (@wenyan62)'s Twitter Profile Photo

Check out our new benchmark RAVENEA for VLM culture understanding with retrieval augmentation! Code and data all released!🚀🚀🚀

Wenyan Li (@wenyan62)'s Twitter Profile Photo

Excited to share that our multimodal temporal culture benchmark is released 🚀🚀🚀 The dataset is public on 🤗 Hugging Face. Check it out!! arxiv.org/abs/2506.01565 huggingface.co/datasets/lizho…