Jiateng Liu (@jiatengliu)'s Twitter Profile
Jiateng Liu

@jiatengliu

ID: 1580476081903058947

Joined: 13-10-2022 08:30:49

27 Tweets

94 Followers

118 Following

Ke Yang (@empathyang)

Excited to unveil our latest findings in the paper "Prejudice and Volatility of LLMs" (arxiv.org/abs/2402.15481)!
🦠Data Toxicity ⬆️ → 🔵Prejudice ⬆️ 🔴Volatility ⬇️
🐋Model Size ⬆️ → 🔵Prejudice ⬆️ 🔴Volatility ⬇️
🧑‍🏫RLHF ☑️ → 🔵Prejudice ⬇️ 🔴Volatility ⬆️
Yuji Zhang (@yuji_zhang_nlp)

🔍 New Preprint! Why do LLMs generate hallucinations even when trained on all truths? 🤔 Check out our paper [arxiv.org/abs/2407.08039]

💡 We find that universally, data imbalance causes LLMs to over-generalize popular knowledge and produce amalgamated hallucinations. 📊
Ke Yang (@empathyang)

👾 Introducing AgentOccam: Automating Web Tasks with LLMs! 🌐 AgentOccam showcases the impressive power of Large Language Models (LLMs) on web tasks, without any in-context examples, new agent roles, online feedback, or search strategies. 🏄🏄🏄
🧙 Link: arxiv.org/abs/2410.13825
Jiaxin-Qin (@jr_qjx)

I am at #EMNLP2024!

I will present our work "Why Does New Knowledge Create Messy Ripple Effects in LLMs?" on Wed at 10:30am.

Thanks to all the collaborators Heng Ji (@hengjinlp), Zixuan Zhang (@zhangzxUIUC), Chi Han (@Glaciohound), Manling Li (@ManlingLi_).

Looking forward to having a chat!

Paper Link: arxiv.org/pdf/2407.12828
Jiateng Liu (@jiatengliu)

Transferring a UIUC spring apartment at an extremely low price❗️❗️ Current price $500 per month vs. original price $890 for one room in Yugo 3rd Lofts, 2b1b 🎮🌞🎮 Reach out for more details if you are interested. 💨💨💨

Lin Ai (@_lin_ai_)

Excited to share 1/2 of my #coling2025 papers: PropaInsight: Toward Deeper Understanding of Propaganda! Huge thanks to my coauthors Jiateng Liu (@JiatengLiu), May Fung (@May_F1_) and team, and special thanks to Julia Hirschberg (@juliahberg), Heng Ji (@hengjinlp), Preslav Nakov (@preslav_nakov) for their support! Read here: arxiv.org/pdf/2409.18997
Lin Ai (@_lin_ai_)

Here’s 2/2 of my #coling2025 papers: NoVAScore🌟 We introduce an automated metric to assess document novelty and salience. Huge thanks to my coauthor Ziwei (Sara) Gong (@SaraZiweiGong) and the team for their amazing collaboration! Read here: arxiv.org/pdf/2409.09249
Qingyun Wang (@eagle_hz)

📢📈 I’m on the 2025 faculty job market!
I've been incredibly grateful to work with inspiring advisors, mentors & peers.
💡My research, AI4Scientists🔬, accelerates & democratizes the research lifecycle by:
1️⃣ Few-shot scientific knowledge acquisition
2️⃣ Domain-aware scientific
Eric Modesitt (@ericmodesittxs)

Proud to announce my recent work on domain-specific language modeling! We present a cost-effective methodology for filtering large, general corpora down to a specific domain, like astronomy, and training an LLM on the result for great improvements. arxiv.org/abs/2412.14436

Ke Yang (@empathyang)

🙌 Happy New Year everyone!
🤖 New preprint: TinyHelen's First Curriculum: Training and Evaluating Tiny Language Models in a Simpler Language Environment
🤖 We train and evaluate tiny language models (LMs) using a novel text dataset with systematically simplified vocabularies and
Zhenhailong Wang (@zhenhailongw)

Why allocate the same number of visual tokens to a blank image and a complex landscape? Introducing DyMU: a training-free algorithm that makes any ViT visual encoder dynamic-length and plug-and-play with downstream VLMs. 🚀
🔗 Project Page: mikewangwzhl.github.io/dymu/
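
The tweet does not spell out the algorithm, so the snippet below is only an illustrative sketch of the general idea of dynamic-length visual tokenization: merging near-duplicate patch tokens so simple images end up with fewer tokens than complex ones. The greedy cosine-similarity merge, the `merge_similar_tokens` helper, the threshold, and the toy data are all assumptions for illustration, not DyMU's actual method (see the project page for that).

```python
import numpy as np

def merge_similar_tokens(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedily merge adjacent patch tokens whose cosine similarity to the
    current group's mean exceeds `threshold`. Illustrative sketch only."""
    groups = [tokens[0].astype(float)]  # running sum of each merged group
    counts = [1]
    for tok in tokens[1:]:
        mean = groups[-1] / counts[-1]
        sim = float(mean @ tok) / (np.linalg.norm(mean) * np.linalg.norm(tok) + 1e-8)
        if sim > threshold:
            groups[-1] = groups[-1] + tok  # fold token into the current group
            counts[-1] += 1
        else:
            groups.append(tok.astype(float))
            counts.append(1)
    return np.stack([g / c for g, c in zip(groups, counts)])  # group means

# A near-blank image (almost identical patches) collapses to a handful of
# tokens, while a varied image keeps close to its original 196 tokens.
rng = np.random.default_rng(0)
blank = np.tile(rng.normal(size=(1, 64)), (196, 1)) + 0.01 * rng.normal(size=(196, 64))
busy = rng.normal(size=(196, 64))
print(merge_similar_tokens(blank).shape)  # few tokens, e.g. (1, 64)
print(merge_similar_tokens(busy).shape)   # roughly (196, 64)
```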
Heng Ji (@hengjinlp)

We are extremely excited to announce mCLM, a Modular Chemical Language Model that is friendly to automatable block-based chemistry and mimics bilingual speakers by “code-switching” between functional molecular modules and natural language descriptions of the functions. 1/2
Ke Yang (@empathyang)

Imagine AI assistants on your smart glasses or laptop proactively bridging your info gaps! 🗺️ Entering a building? Get instant floor plans for seamless navigation. 🧑‍💻 In lectures? Receive concise explanations to stay on track.  

Our new preprint introduces Just-In-Time
Yuji Zhang (@yuji_zhang_nlp)

🧠Let’s teach LLMs to learn smarter, not harder💥[arxiv.org/pdf/2506.06972]
🤖How can LLMs verify complex scientific information efficiently?
🚀We propose modular, reusable atomic reasoning skills that reduce LLMs’ cognitive load to verify scientific claims with little data.
Zhenhailong Wang (@zhenhailongw)

Multimodal conversational agents struggle to follow complex policies, which also impose a fixed computational cost.
We ask:
👉 How can we achieve stronger policy-following behavior without having to include policies in-context?
🌐: mikewangwzhl.github.io/TriMPI/ 🧵1/3