Liunian Harold Li (@liliunian) 's Twitter Profile
Liunian Harold Li

@liliunian

ID: 1160794781884149760

Website: https://liunian-harold-li.github.io/ · Joined: 12-08-2019 06:06:57

121 Tweets

841 Followers

469 Following

Hao Tan (@haotan5) 's Twitter Profile Photo

Excited to share LRM (large reconstruction model), which views 2D->3D as a multimodal problem and learns a transformer from large-scale data. A cornerstone of our 3D foundation model efforts at Adobe Research. Nice work by intern (also upcoming full-time) Yicong Hong.

Da Yin (@wade_yin9712) 's Twitter Profile Photo

🔥Check out 🪄Lumos, our open general language agent!

Lumos has features:
🧩General modular framework
🌍Tuned with diverse agent training data
🚀Strong perf vs GPT/larger open agents

MOSAIC uclanlp Ai2

📝: arxiv.org/abs/2311.05657
💻: github.com/allenai/lumos (1/N)
Hritik Bansal (@hbxnov) 's Twitter Profile Photo

📢 📽✍️We introduce VideoCon, a video-text dataset for training a SOTA alignment model. It resolves a typical issue in video-text alignment models: they struggle with robustness. w/ leah bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover video-con.github.io 🧵 1/

Di Wu (@diwu0162) 's Twitter Profile Photo

How to best leverage your pre-trained language model for keyphrase generation?📇Still directly fine-tuning BART/T5 and using greedy decoding?⚠️Check out our #EMNLP2023 paper for why you may or may not want to do that (1/N)

Amita Kamath (@kamath_amita) 's Twitter Profile Photo

VL models fail at spatial reasoning, but biases in benchmarks (dogs are usually UNDER tables) mask capabilities that are even worse than they appear. 📢 A new benchmark at #EMNLP2023 without this bias (yes, we put a dog on a table) in “What’s up with VL models?” arxiv.org/pdf/2310.19785… Jack Hessel uclanlp

Liunian Harold Li (@liliunian) 's Twitter Profile Photo

Arrived at #NeurIPS2023! Looking forward to meeting old and new friends!

We will present our work on teaching models to ground by descriptions (DesCo) on Wed morning.

Also check out our demo huggingface.co/spaces/zdou083…!
Zi-Yi Dou (@ziyidou) 's Twitter Profile Photo

#NeurIPS2023 Stop by our poster at Great Hall & Hall B1+B2 (level 1) #1925 (today 10:45-12:45) to chat about object recognition with language descriptions!
Paper: openreview.net/pdf?id=WKJDGfU…
Code: github.com/liunian-harold…
Demo: huggingface.co/spaces/zdou083…

Kai-Wei Chang (@kaiwei_chang) 's Twitter Profile Photo

I am honored to be nominated by SIGDAT (the org that oversees EMNLP) to run for VP-elect with other awesome candidates who share the goal of improving our community. Please check your email to vote by 3/24.🗳️ See details: bit.ly/3ItRc0S

Kai-Wei Chang (@kaiwei_chang) 's Twitter Profile Photo

Congrats 🎉 to the newly minted Dr. Pan Lu on defending his thesis on mathematical reasoning with language models! 🧮 Pan has published a series of works on quantifying and improving math and scientific reasoning ability in LLMs. Some highlights:

Tanmay Parekh (@tparekh97) 's Twitter Profile Photo

🔍🚨How to improve multilingual performance on structured prediction tasks?
Excited to share our latest work CLaP - a label projection technique utilizing LLMs to do contextualized machine translation, improving two tasks in 47 languages including 10 extremely low-resource ones!
Chujie Zheng (@chujiezheng) 's Twitter Profile Photo

✨New Paper Alert✨
Excited to introduce ExPO, an extremely simple method to boost LLMs' alignment with human preference, via weak-to-strong model extrapolation
👇
#LLMs #MachineLearning #NLProc #ArtificialIntelligence #AI
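For readers unfamiliar with model extrapolation: per the ExPO paper's description, the aligned model's weights are pushed further along the direction from the weaker (SFT) checkpoint to the aligned one. A minimal sketch with numpy arrays standing in for checkpoints; `alpha`, the function name, and the dict layout are illustrative assumptions, not the paper's API:

```python
import numpy as np

def expo_extrapolate(theta_weak, theta_aligned, alpha=0.5):
    """Weak-to-strong extrapolation: step past the aligned weights
    along the weak -> aligned direction by a factor of alpha."""
    return {
        name: theta_aligned[name] + alpha * (theta_aligned[name] - theta_weak[name])
        for name in theta_aligned
    }

# Toy 1-D "checkpoints": extrapolation continues the SFT -> aligned movement.
weak = {"w": np.array([1.0, 2.0])}
aligned = {"w": np.array([2.0, 1.0])}
extrapolated = expo_extrapolate(weak, aligned, alpha=0.5)  # {"w": [2.5, 0.5]}
```

The appeal of the method is that it needs no extra training, only a cheap per-tensor arithmetic pass over two existing checkpoints.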
Kuan-Hao Huang (@kuanhaoh_) 's Twitter Profile Photo

I am thrilled to share that I will join the Department of Computer Science and Engineering at Texas A&M University as an Assistant Professor in Fall 2024. Many thanks to my advisors, colleagues, and friends for their support and help. I'm really excited about the new journey at College Station!
Wenbo Hu@ICLR🇸🇬 (@gordonhu608) 's Twitter Profile Photo

How to pick a good number of visual tokens? Too few, you have poor performance; too many, you need quadratically more compute. 
In this work, we introduce a model that works with an elastic number of tokens.

arXiv: arxiv.org/abs/2405.19315
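The "quadratically more compute" claim comes from self-attention, whose score and mixing steps cost on the order of n²·d for n tokens. A back-of-envelope check (the FLOP formula is a standard approximation, not a number from the paper):

```python
def attention_flops(n_tokens, d_model):
    # QK^T scores plus attention-weighted values: roughly 2 * n^2 * d multiply-adds
    return 2 * n_tokens * n_tokens * d_model

base = attention_flops(256, 1024)
quad = attention_flops(1024, 1024)
print(quad / base)  # 4x the tokens -> 16x the attention compute
```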
Wenbo Hu@ICLR🇸🇬 (@gordonhu608) 's Twitter Profile Photo

Very excited to see similar work appear almost simultaneously. Some comparisons: our model enables any number of visual tokens under a predefined maximum, accommodating various computational constraints. ... 1/3

Honghua Zhang (@honghuazhang2) 's Twitter Profile Photo

Proposing Ctrl-G, a neurosymbolic framework that enables arbitrary LLMs to follow logical constraints (length control, infilling …) with 100% guarantees. Ctrl-G beats GPT-4 on text editing with a >30% higher satisfaction rate in human evaluation. arxiv.org/abs/2406.13892
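Ctrl-G's actual mechanism composes an HMM with a constraint automaton (see the paper); as a much simpler illustration of what a "100% guarantee" means, here is hard logit masking for the length-control case. Everything here (`mask_for_length`, the toy vocab) is hypothetical illustration, not from the Ctrl-G codebase:

```python
import numpy as np

def mask_for_length(logits, step, eos_id, min_len, max_len):
    """Hard length window via logit masking:
    no EOS before min_len, only EOS once max_len is reached."""
    out = logits.copy()
    if step < min_len:
        out[eos_id] = -np.inf  # EOS unreachable: too-short outputs have probability 0
    if step >= max_len:
        out[:] = -np.inf
        out[eos_id] = 0.0      # only EOS remains: too-long outputs have probability 0
    return out

# Toy vocab of 5 tokens with EOS id 0.
logits = np.zeros(5)
early = mask_for_length(logits, step=1, eos_id=0, min_len=3, max_len=8)  # EOS masked out
late = mask_for_length(logits, step=8, eos_id=0, min_len=3, max_len=8)   # only EOS allowed
```

Because disallowed tokens get probability exactly zero, the constraint holds for every sample, unlike prompting, which only makes violations unlikely.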

Xiaojian Ma (@jeasinema) 's Twitter Profile Photo

Introducing ♾OmniJARVIS, our latest venture into #AgentGPT, or vision-language-action (VLA) models, for open-world instruction-following agents 🦾🕹️ Tune in 👉 omnijarvis.github.io by Team CraftJarvis 🤿⏬

Zhe Zeng (@zhezeng0908) 's Twitter Profile Photo

📢 I’m recruiting PhD students at UVA Computer Science for Fall 2025! 🎯 Neurosymbolic AI, probabilistic ML, trustworthiness, AI for science. See my website for more details: zzeng.me 📬 If you're interested, apply and mention my name in your application: engineering.virginia.edu/department/com…

Aditya Ramesh (@model_mechanic) 's Twitter Profile Photo

Sora is here for Plus and Pro users at no additional cost! Pushing the boundaries of visual generation will require breakthroughs both in ML and HCI. Really proud to have worked on this brand new product with Bill Peebles, Rohan Sahai, Connor Holmes, and the rest of the Sora team!