TsinghuaNLP (@tsinghuanlp)'s Twitter Profile
TsinghuaNLP

@tsinghuanlp

Natural Language Processing Lab at Tsinghua University

ID: 1354681717047476225

Link: https://nlp.csai.tsinghua.edu.cn/ · Joined: 28-01-2021 06:44:36

266 Tweets

3.3K Followers

82 Following

TsinghuaNLP (@tsinghuanlp)'s Twitter Profile Photo

Excited to see the #ChatDev team pushing the boundaries of LLM-powered multi-agent collaboration with their curated collection of seminal papers. Dive into the latest advancements and explore the interactive e-book here: thinkwee.top/multiagent_ebo… 📚🤖 #AI #Research #Innovation

Tsinghua CS (@thudcst)'s Twitter Profile Photo

🚀 Exciting day at #Tsinghua! The 2024 Summer School for Large Language Models kicks off today. 🌍 CS students from around the globe have gathered here, eager to dive into the latest IT trends and embark on this journey of knowledge and innovation. 📚

Weize Chen (@jeffreychen_thu)'s Twitter Profile Photo

Introducing Internet of Agents (IoA) - a novel framework for AI agent collaboration! 🌐🤖 Imagine a world where heterogeneous AI agents located on different devices work together seamlessly through the Internet, just as humans do. That's IoA!

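The idea behind IoA is easy to picture in code. Below is a minimal, hypothetical sketch (not IoA's actual API; all class names are invented) in which agents register with a shared directory and exchange structured messages. In a real deployment the routing would cross the network between devices; here it is simulated in-process.

```python
# Hypothetical sketch of the Internet-of-Agents idea: heterogeneous agents
# register with a shared directory and exchange structured messages.
# AgentMessage, Directory, and EchoAgent are invented names, not IoA's API.
from dataclasses import dataclass


@dataclass
class AgentMessage:
    sender: str
    recipient: str
    content: str


class Directory:
    """Stands in for the network layer: maps agent names to agent objects."""

    def __init__(self):
        self._agents = {}

    def register(self, agent):
        self._agents[agent.name] = agent

    def route(self, msg: AgentMessage) -> str:
        # In a real deployment this would be an HTTP/WebSocket call to a
        # remote device; here we dispatch in-process for simplicity.
        return self._agents[msg.recipient].handle(msg)


class EchoAgent:
    def __init__(self, name: str, skill: str):
        self.name = name
        self.skill = skill

    def handle(self, msg: AgentMessage) -> str:
        return f"{self.name} ({self.skill}) received: {msg.content}"


directory = Directory()
directory.register(EchoAgent("coder", "writes code"))
directory.register(EchoAgent("critic", "reviews code"))
print(directory.route(AgentMessage("coder", "critic", "please review my patch")))
```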
Tsinghua CS (@thudcst)'s Twitter Profile Photo

๐Ÿ† We're thrilled to announce that our paper "Scaling Laws For Dense Retrieval"won the SIGIR'24 Best Paper Award! Congratulations to our research team for information retrieval THUIR from #DCST, #Tsinghua! ๐Ÿ“š #SIGIR #SIGIR2024 dl.acm.org/doi/abs/10.114โ€ฆ

๐Ÿ† We're thrilled to announce that our paper "Scaling Laws For Dense Retrieval"won the SIGIR'24 Best Paper Award! Congratulations to our research team for information retrieval THUIR from #DCST, #Tsinghua! ๐Ÿ“š #SIGIR #SIGIR2024
dl.acm.org/doi/abs/10.114โ€ฆ
Chaojun Xiao (@xcjthu1)'s Twitter Profile Photo

1/5 🚀 Excited to share our latest paper on Configurable Foundation Models! 🧠

Inspired by the human brain's functional specialization, we propose a new concept: the Configurable Foundation Model, a modular approach to LLMs.
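To make the modular idea concrete, here is a toy sketch with invented module names and a keyword router standing in for a learned one: specialized modules handle the inputs they are suited for, and the base model handles everything else.

```python
# Toy sketch of a configurable, modular LLM: specialized modules are attached
# to a base model and activated only for relevant inputs. The routing rule
# and module names are invented for illustration; a real system would route
# with a learned gate over hidden states, not keyword matching.
def math_module(text: str) -> str:
    return "math module handles: " + text


def code_module(text: str) -> str:
    return "code module handles: " + text


def base_model(text: str) -> str:
    return "base model handles: " + text


MODULES = {"math": math_module, "code": code_module}


def route(text: str) -> str:
    # Keyword matching keeps the sketch self-contained and runnable.
    for keyword, module in MODULES.items():
        if keyword in text.lower():
            return module(text)
    return base_model(text)


print(route("Solve this math problem: 2 + 2"))
print(route("Write code to reverse a list"))
print(route("Tell me a story"))
```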
TsinghuaNLP (@tsinghuanlp)'s Twitter Profile Photo

Optima explores the evolution of agent communication and scaling laws. Intriguing findings from Weize Chen – join the conversation! 💬 #TechTalk #AI #THUNLP

OpenBMB (@openbmb)'s Twitter Profile Photo

🚀 Excited to share our latest work: "RAGEval"!
🎉 It's a versatile framework for generating scenario-specific RAG evaluation datasets, complete with comprehensive metrics. Perfect for rapid evaluations! 🔍
✨ Paper: arxiv.org/abs/2408.01262
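As a rough illustration of what scenario-specific evaluation can look like (this is not RAGEval's actual pipeline or metric definitions), the sketch below scores a generated answer against scenario-specific reference keypoints with a simple completeness-style metric; all data in it is invented.

```python
# Illustrative only: a keypoint-coverage score in the spirit of
# scenario-specific RAG evaluation. A framework like RAGEval would also
# generate the documents, questions, and references per scenario; here the
# reference keypoints are hand-written and matching is plain substring search.
def completeness(answer: str, keypoints: list[str]) -> float:
    """Fraction of ground-truth keypoints mentioned in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kp in keypoints if kp.lower() in answer_lower)
    return hits / len(keypoints) if keypoints else 0.0


# Hypothetical scenario-specific reference for a finance question.
keypoints = ["net profit rose 12%", "driven by cloud revenue", "guidance raised"]
answer = "The report says net profit rose 12%, driven by cloud revenue."
print(f"completeness = {completeness(answer, keypoints):.2f}")  # -> 0.67
```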
Chenyang Song (@nlp_rainy_sunny)'s Twitter Profile Photo

(Repost) We are thrilled to introduce our new work 🔥#SparsingLaw🔥, a comprehensive study of the quantitative scaling properties and influential factors of activation sparsity within LLMs. 💪

📎 arXiv: arxiv.org/pdf/2411.02335
📎 Code: github.com/thunlp/Sparsin…

🧵1
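For intuition, activation sparsity can be measured as the fraction of neuron activations that are (near-)zero. The sketch below applies a simple magnitude threshold to simulated ReLU activations; the paper studies more careful definitions and how sparsity evolves with scale, so treat this only as an illustration.

```python
# Minimal sketch of measuring activation sparsity: the fraction of MLP neuron
# activations whose magnitude falls below a small threshold. The data is
# simulated and the threshold-based definition is a simplification.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one layer's MLP hidden activations over a batch of tokens,
# shape (num_tokens, ffn_dim); ReLU of Gaussian noise is about half zeros.
acts = np.maximum(rng.normal(size=(1024, 4096)), 0.0)


def activation_sparsity(activations: np.ndarray, eps: float = 1e-3) -> float:
    """Fraction of entries with |a| < eps, treated as inactive neurons."""
    return float(np.mean(np.abs(activations) < eps))


print(f"sparsity = {activation_sparsity(acts):.3f}")  # ~0.5 here
```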
Chaojun Xiao (@xcjthu1)'s Twitter Profile Photo

1/4 🚀 Densing Law of LLMs 🚀

OpenAI's Scaling Law showed how model capabilities scale with size. But what about the trend toward efficient models? 🤔

We introduce "capacity density" and find an exciting empirical law: LLMs' capacity density grows EXPONENTIALLY over time!
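A back-of-the-envelope version of the computation: capacity density compares a model's effective parameter count (the size that a reference scaling-law fit says is needed to reach its measured performance) with its actual parameter count. All constants and model numbers below are invented for the sketch, not taken from the paper.

```python
# Hypothetical arithmetic for "capacity density". We invert a toy power-law
# fit loss = a * N^(-alpha) to get the effective parameter count N implied by
# a measured loss, then divide by the model's actual parameter count.
def effective_params(loss: float, a: float = 20.0, alpha: float = 0.3) -> float:
    """Solve loss = a * N**(-alpha) for N (all constants are invented)."""
    return (a / loss) ** (1.0 / alpha)


actual_params = 2.4e9   # hypothetical 2.4B-parameter model
measured_loss = 0.027   # hypothetical benchmark-derived loss

n_eff = effective_params(measured_loss)
density = n_eff / actual_params
print(f"effective size = {n_eff:.2e} params, capacity density = {density:.2f}")
# -> effective size ~3.7e9 params, capacity density ~1.53: the model performs
#    like a reference model roughly 1.5x its actual size.
```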