Zihao Li (@_violet24k_) 's Twitter Profile
Zihao Li

@_violet24k_

Ph.D. candidate @siebelschool @UofIllinois | (ex-)intern @Amazon @MSFTResearch

ID: 1618456517589757953

Link: https://www.zihao.website/ | Joined: 26-01-2023 03:51:37

78 Tweets

77 Followers

139 Following

Zihao Li (@_violet24k_) 's Twitter Profile Photo

📈 Your time-series-paired texts are secretly a time series!

🙌 Real-world time series (stock prices) and texts (financial reports) share similar periodicity and spectrum, unlocking seamless multimodal learning using existing TS models.

🔬 Read more: arxiv.org/abs/2502.08942
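The tweet's premise can be illustrated with a toy spectral check: turn a text stream into a numeric sequence (here a synthetic "report length" series) and compare its dominant FFT period against a synthetic price series sharing the same cycle. The data and function names below are illustrative, not from the paper.

```python
import numpy as np

def dominant_period(series: np.ndarray) -> float:
    """Return the dominant period (in samples) via the FFT magnitude spectrum."""
    series = series - series.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(len(series))
    k = spectrum[1:].argmax() + 1            # skip the zero-frequency bin
    return 1.0 / freqs[k]

# A synthetic "stock price" with a 12-step cycle plus noise...
t = np.arange(120)
prices = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(0).normal(size=120)

# ...and a toy "financial report length" series sharing the same cycle.
report_lengths = 50 + 10 * np.sin(2 * np.pi * t / 12)

print(dominant_period(prices))           # ~= 12
print(dominant_period(report_lengths))   # ~= 12
```

Both sequences expose the same dominant period, which is the kind of shared structure that would let one time-series model consume both modalities.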
Wei Xiong (@weixiong_1) 's Twitter Profile Photo

🚀 New Paper Alert! 🚀

LLMs struggle with self-correction like O1/R1 because models lack judgment on when to revise vs. when to stay confident. We introduce self-rewarding reasoning LLM, a reasoning framework that:

✅ integrates generator and generative RM into a single LLM.
✅
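The tweet is truncated, but the core idea, one model serving as both generator and generative reward model, can be sketched as a toy control loop. The prompt tags and the stub model below are hypothetical illustrations, not the paper's actual format.

```python
def self_rewarding_generate(model, question, max_rounds=3):
    """One model plays both generator and judge (generative reward model).
    `model` is any callable: prompt string -> completion string."""
    answer = model(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        verdict = model(
            f"Question: {question}\nProposed answer: {answer}\n"
            "Judge the answer. Reply 'VERIFY: correct' or 'VERIFY: wrong'."
        )
        if "correct" in verdict.lower():
            return answer            # model is confident: keep the answer
        answer = model(              # model judged itself wrong: revise
            f"Question: {question}\nPrevious answer (judged wrong): {answer}\n"
            "Revised answer:"
        )
    return answer

# Stub "LLM" for demonstration: first answers 5, judges it wrong, revises to 4.
def stub_model(prompt):
    if "Revised answer:" in prompt:
        return "4"
    if "Judge the answer" in prompt:
        return "VERIFY: correct" if "answer: 4" in prompt else "VERIFY: wrong"
    return "5"

print(self_rewarding_generate(stub_model, "What is 2 + 2?"))  # -> 4
```

The point of fusing the two roles is that the same forward pass that generates can also decide when to revise vs. when to stay confident.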
Yuji Zhang (@yuji_zhang_nlp) 's Twitter Profile Photo

๐Ÿ”New findings of knowledge overshadowing! Why do LLMs hallucinate over all true training data? ๐Ÿค”Can we predict hallucinations even before model training or inference? ๐Ÿš€Check out our new preprint: [arxiv.org/pdf/2502.16143] The Law of Knowledge Overshadowing: Towards

๐Ÿ”New findings of knowledge overshadowing! Why do LLMs hallucinate over all true training data? ๐Ÿค”Can we predict hallucinations even before model training or inference? 
๐Ÿš€Check out our new preprint: [arxiv.org/pdf/2502.16143] The Law of Knowledge Overshadowing: Towards
iDEA-iSAIL Group@UIUC (@ideaisailuiuc) 's Twitter Profile Photo

🔬Graph Self-Supervised Learning Toolkit

🔥We release PyG-SSL, offering a unified framework of 10+ self-supervised choices to pretrain your graph foundation models.

📜Paper: arxiv.org/abs/2412.21151
💻Code: github.com/iDEA-iSAIL-Lab…

Have fun!

Yu Zhang (@yuz9yuz) 's Twitter Profile Photo

🚨Call for Papers - MLoG-GenAI @ KDD 2025

Join us at the Workshop on Machine Learning on Graphs in the Era of Generative AI, co-located with #KDD2025!

🌐Website: mlgraphworkshop.github.io
🌐Submission Link: openreview.net/group?id=KDD.o…
iDEA-iSAIL Group@UIUC (@ideaisailuiuc) 's Twitter Profile Photo

We'll present 4 papers and 1 keynote talk at #ICLR2025.

Prof. Jingrui He and Prof. Hanghang Tong will be at the conference. Let's connect! ☕️
Gaotang Li (@gaotangli) 's Twitter Profile Photo

🚨 ICML '25 SPOTLIGHT 🚨
Taming Knowledge Conflict in Language Models

🤔 Why does your LLM sometimes echo the prompt but other times rely on its “built-in” facts?
🎭 Can we toggle between parametric memory and fresh context without fine-tuning?
🔬 Curious about LLM internals,
Ke Yang (@empathyang) 's Twitter Profile Photo

🤖 New preprint: We propose ten principles of AI agent economics, offering a framework to understand how AI agents make decisions, influence social interactions, and participate in the broader economy.

📜 Paper: arxiv.org/abs/2505.20273
ACM CIKM 2025 (@cikm2025) 's Twitter Profile Photo

Today, we introduce our #CIKM2025 Industry Day Chairs 👏
Jingren Zhou, Soonmin Bae, and Xianfeng Tang are leading this program connecting academia and industry.
Gaotang Li (@gaotangli) 's Twitter Profile Photo

😲 Not only reasoning?! Inference scaling can now boost LLM safety!

🚀 Introducing Saffron-1:
- Reduces attack success rate from 66% to 17.5%
- Uses only 59.7 TFLOP compute
- Counters latest jailbreak attacks
- No model finetuning
All measured on the AI2 Refusals benchmark.

📖 Paper:
Yuji Zhang (@yuji_zhang_nlp) 's Twitter Profile Photo

🧠Let’s teach LLMs to learn smarter, not harder💥[arxiv.org/pdf/2506.06972]
🤖How can LLMs verify complex scientific information efficiently?
🚀We propose modular, reusable atomic reasoning skills that reduce LLMs’ cognitive load to verify scientific claims with little data.
Zihao Li (@_violet24k_) 's Twitter Profile Photo

🌐 Flow Matching Meets Biology and Life Science: A Survey

Flow matching is emerging as a powerful generative paradigm. We comprehensively review its foundations and applications across biology & life science🧬

📚Paper: arxiv.org/abs/2507.17731
💻Resource: github.com/Violet24K/Awes…
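For readers unfamiliar with the paradigm the survey covers, here is a minimal sketch of the flow-matching recipe on a toy 1-D problem; the linear velocity model and Gaussian data are illustrative simplifications, not anything from the survey. The idea: interpolate x_t = (1-t)·x0 + t·x1 between noise and data, regress a velocity model onto the target x1 - x0, then integrate the learned field to generate samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: source (noise) is N(0, 1); target (data) is N(3, 0.5^2).
def sample_batch(n=256):
    x0 = rng.normal(0.0, 1.0, n)        # noise samples
    x1 = rng.normal(3.0, 0.5, n)        # data samples
    t = rng.uniform(0.0, 1.0, n)        # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1          # linear interpolant
    v_target = x1 - x0                  # constant velocity along each path
    return xt, t, v_target

# Tiny linear velocity model v(x, t) = a*x + b*t + c, fit by SGD on the
# flow-matching regression loss  E || v(x_t, t) - (x1 - x0) ||^2.
w = np.zeros(3)
for _ in range(2000):
    xt, t, v = sample_batch()
    feats = np.stack([xt, t, np.ones_like(xt)], axis=1)
    grad = 2 * feats.T @ (feats @ w - v) / len(v)
    w -= 0.05 * grad

# Generation: integrate dx/dt = v(x, t) from t=0 to t=1 with Euler steps.
x = rng.normal(0.0, 1.0, 10000)
for step in range(100):
    t = np.full_like(x, step / 100)
    x += 0.01 * (w[0] * x + w[1] * t + w[2])

print(x.mean())   # should land near the data mean, 3
```

Even this crude linear model transports the noise distribution onto the data distribution, which is the mechanism the survey's biological applications build on with far richer networks.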

Jiaxuan You (@youjiaxuan) 's Twitter Profile Photo

Benchmarks don't just measure AI; they define its trajectory.
Today, there's a shortage of truly challenging and useful benchmarks for LLMs, and we believe future forecasting is the next frontier.

Introducing TradeBench.
trade-bench.live

A live-market benchmark where
Hanze Dong @ ICLR 2025 (@hendrydong) 's Twitter Profile Photo

💥Thrilled to share our new work Reinforce-Ada, which fixes signal collapse in GRPO

🥳No more blind oversampling or dead updates. Just sharper gradients, faster convergence, and stronger models.

⚙️ One-line drop-in. Real gains.
arxiv.org/html/2510.0499…

github.com/RLHFlow/Reinfo…
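To picture the "dead update" problem the tweet alludes to: GRPO normalizes rewards within a group of sampled responses, so a group whose rewards are all identical yields zero advantage for every member and contributes no gradient. The sketch below contrasts a fixed-size group with an adaptive rule that keeps sampling until the rewards are mixed; this stopping rule is an illustration of the general idea, not Reinforce-Ada's exact algorithm.

```python
import random

def grpo_advantages(rewards):
    """Group-relative advantages: (r - mean) / std. All-equal rewards -> all zeros."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0:
        return [0.0] * n          # degenerate group: no learning signal at all
    return [(r - mean) / std for r in rewards]

def adaptive_group(sample_reward, min_size=4, max_size=32):
    """Illustrative adaptive sampling: stop once rewards are mixed, or at max_size."""
    rewards = [sample_reward() for _ in range(min_size)]
    while len(set(rewards)) == 1 and len(rewards) < max_size:
        rewards.append(sample_reward())
    return rewards

random.seed(0)
# A hard prompt: only 10% of sampled responses are correct (reward 1).
sample = lambda: 1 if random.random() < 0.1 else 0

fixed = [sample() for _ in range(4)]     # fixed-size group: often all zeros
adaptive = adaptive_group(sample)        # keeps sampling until signal appears
print(grpo_advantages(fixed))
print(grpo_advantages(adaptive))
```

On hard prompts the fixed-size group frequently collapses to all-zero advantages, while the adaptive group usually ends with at least one correct response and therefore a usable gradient.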
Chuxuan Hu (@chuxuanhu) 's Twitter Profile Photo

With DRAMA being a new paradigm that unifies a wide range of sub-problems, DRAMA-Bot is just the beginning; imagine complex data integration, cleaning, etc., all in one agent. Can't wait to see what DRAMA unfolds next: DATA SCIENCE IS FULL OF DRAMA🎭🤝