Tianlong Chen (@tianlongchen4)'s Twitter Profile
Tianlong Chen

@tianlongchen4

Assistant Professor at UNC Chapel Hill (@unccs, @unc).
Postdoc, CSAIL@MIT (@MIT_CSAIL) & BMI@Harvard (@Harvard).
Ph.D., ECE@UT Austin (@UTAustin). #AI #ML

ID: 1564005266508681222

Website: https://tianlong-chen.github.io/
Joined: 28-08-2022 21:41:41

87 Tweets

910 Followers

17 Following

UNC Computer Science (@unccs):

CS Professors Marc Niethammer, Mohit Bansal, Tianlong Chen, and Junier Oliva are leading a collaboration with the UNC School of Medicine to use multimodal, ethical AI for earlier diagnosis of autoimmune diseases. The project received $4 million in NIH funding. cs.unc.edu/news-article/c…

Mohit Bansal (@mohitban47):

Looking forward to welcoming everyone in Miami this week for #EMNLP2024! 🤗 We have an exciting program filled with papers + keynotes + panel + BoF + mentoring sessions etc. (as well as a great line up of workshops + tutorials) -- see details in the thread below (and keep on the …

Jaemin Cho (on faculty job market) (@jmin__cho):


🚨 I’m on the 2024-2025 academic job market!
j-min.io

I work on ✨ Multimodal AI ✨, with a special focus on enhancing reasoning in both understanding and generation tasks by:
1⃣ Making it more scalable
2⃣ Making it more faithful
3⃣ Evaluating and refining multimodal …
SOUVIK KUNDU (@thisissouvikk):

🌟🌟🌟 Announcing the #ICLR2025 workshop on "Scalable Optimization for Efficient and Adaptive Foundation Models" (#SCOPE). Co-organized by: Amir Yazdan, the Efficient and Intelligent Computing Lab, Beidi Chen, Tianlong Chen, Shiwei Liu, and Haizhong Zheng.

📄 Workshop link: lnkd.in/g8ZbgjbX

The topics …
Jaehong Yoon (on the faculty job market) (@jaeh0ng_yoon):


🚨 I am on the 2025 faculty job market! 🚨(jaehong31.github.io)

I develop reliable and lifelong embodied AI systems 🔥 that continually evolve capabilities through safe and robust interactions with an ever-changing multimodal world, focusing on: 👇

▶️ Scalable and …
Yang Luo (@yangl_7):

Training-free Video Enhancement: Achieved 🎉 Nice work with Xuanlei Zhao, Wenqi Shaw, Victor.Kai Wang, @VitaGroupUT, Yang You, et al. Non-trivial enhancement, training-free, and plug-and-play 🥳 Blog: oahzxl.github.io/Enhance_A_Vide… (🧵1/6)

Victor.Kai Wang (@victorkaiwang1):

Generating ~200 million parameters in just minutes! 🥳 Excited to share our work with Doven Tang, ZHAO WANGBO, and Yang You: 'Recurrent Diffusion for Large-Scale Parameter Generation' (RPG for short). Example: Obtain customized models using prompts (see below). (🧵1/8)

Adyasha Maharana (@adyasha10):

🎉 Adapt-♾ has been accepted to #ICLR2024! We propose a dynamic, multi-way data selection strategy for continual VLM learning with growing instruction-tuning datasets. Stay tuned for the camera-ready version with additional results on LLMs! 🙌

Archiki Prasad (@archikiprasad):


🚨 Excited to share: "Learning to Generate Unit Tests for Automated Debugging" 🚨
which introduces ✨UTGen and UTDebug✨ for teaching LLMs to generate unit tests (UTs) and debugging code from generated tests.

UTGen+UTDebug improve LLM-based code debugging by addressing 3 key …
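
As a rough sketch of the test-then-repair loop described above (this is not the UTGen/UTDebug code; the `call_llm` helper and the prompts are hypothetical placeholders):

```python
# Illustrative sketch only, not the UTGen/UTDebug implementation.
# `call_llm` stands in for any chat-completion client you already use.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this to your provider of choice."""
    raise NotImplementedError

def generate_unit_tests(task: str, code: str, n_tests: int = 3) -> list[str]:
    """Ask the model for self-contained, assert-based unit tests for `task`."""
    prompt = (
        f"Task: {task}\n"
        f"Candidate solution:\n{code}\n"
        f"Write {n_tests} self-contained Python unit tests using plain asserts, "
        f"separated by blank lines."
    )
    return [t for t in call_llm(prompt).split("\n\n") if t.strip()]

def test_passes(code: str, test: str) -> bool:
    """Run one generated test against the candidate code in a scratch namespace."""
    try:
        exec(code + "\n" + test, {})  # sandbox this properly in real use
        return True
    except Exception:
        return False

def debug_with_tests(task: str, code: str, max_rounds: int = 3) -> str:
    """Repair `code` by feeding failing generated tests back to the model."""
    tests = generate_unit_tests(task, code)
    for _ in range(max_rounds):
        failures = [t for t in tests if not test_passes(code, t)]
        if not failures:
            break
        feedback = "\n\n".join(failures)
        code = call_llm(f"These tests fail:\n{feedback}\nFix this code:\n{code}")
    return code
```
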
UNC NLP (@uncnlp):

We are excited to have our next UNC NLP/ML Colloquium by Dr. Hua Wei from the School of Computing and Augmented Intelligence at Arizona State University, talking about "Paradoxes in Transformer Language Models: Masking, Positional Encodings, and Routing"!
(Friday, March 07, 3:15-4:15 PM EST, FB 141)
Justin Chih-Yao Chen (@cyjustinchen):


🚨 We introduce ✨ Symbolic-MoE ✨ which uses skill-based instance-level recruiting to dynamically combine LLMs, allowing three 7-8B LLMs to beat GPT4o-mini and Llama3.3 70B across challenging + diverse reasoning tasks (MMLU-Pro, AIME, GPQA, MedMCQA) while running on 1 GPU!

Key …
Elias Stengel-Eskin (on the faculty job market) (@eliaseskin):

🚨 Excited to announce Symbolic-MoE, for efficiently recruiting+combining small (7-8B) LLMs based on their strengths/the skills needed for a query. With 3 LLMs running on 1 GPU, we beat GPT4o-mini + Llama3.3 70B, and beat multi-agent debate w/out expensive discussion. Key …
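
For readers skimming, here is a toy sketch of what per-instance, skill-based recruiting can look like in general. The model names, skill scores, and keyword tagger below are invented placeholders; this is not the Symbolic-MoE implementation.

```python
# Toy sketch of skill-based, per-instance expert recruiting (not the
# Symbolic-MoE code). Model names and skill scores are made up.
SKILL_PROFILES = {
    "qwen2.5-7b-instruct": {"math": 0.9, "medicine": 0.4, "science": 0.6},
    "llama3.1-8b-instruct": {"math": 0.5, "medicine": 0.6, "science": 0.7},
    "mistral-7b-instruct": {"math": 0.4, "medicine": 0.8, "science": 0.5},
}

def infer_skills(question: str) -> list[str]:
    """Placeholder skill tagger; a real system might use an LLM here."""
    keywords = {"math": ["integral", "prove"], "medicine": ["patient", "dose"]}
    tags = [s for s, kws in keywords.items()
            if any(k in question.lower() for k in kws)]
    return tags or ["science"]

def recruit(question: str, k: int = 2) -> list[str]:
    """Pick the k models whose skill profiles best match this instance."""
    skills = infer_skills(question)
    score = lambda m: sum(SKILL_PROFILES[m].get(s, 0.0) for s in skills)
    return sorted(SKILL_PROFILES, key=score, reverse=True)[:k]

print(recruit("Prove that the integral of x^2 from 0 to 1 is 1/3."))
# -> ['qwen2.5-7b-instruct', 'llama3.1-8b-instruct'] with these toy scores
```
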

Tianlong Chen (@tianlongchen4):

📢 Our new survey is out! "Trustworthy LLM Agents: Threats & Countermeasures" ➡️ [arxiv.org/abs/2503.09648]

Key Points:
🔍 Modular framework (Brain, Memory, Tools...)
🔭 Attack, Defense, Evaluation taxonomy
📚 Curated recent literature
🛠️ Practical techniques & future directions

VITA Group (@vitagrouput):


🚀 Thrilled to announce SPIN-Bench! 🚀

We all love seeing how smart LLMs can be—solving complex math, crafting beautiful text, and coding effortlessly. But how well do they handle real-world strategic complexity, cooperation, and social negotiation? Can they play well when
Zaid Khan (@codezakh):


What if we could transform advanced math problems into abstract programs that can generate endless, verifiable problem variants?

Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
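
As a loose illustration of the general idea (a problem written as a program that can sample fresh variants and verify answers), here is a hand-written toy example. It is mine, not output from EFAGen, and the problem template is invented.

```python
import random
from dataclasses import dataclass

# Toy illustration of an "executable functional abstraction" in spirit:
# a program that samples problem variants and can verify candidate answers.
@dataclass
class QuadraticRootProblem:
    """Variants of: 'Find the larger root of x^2 - (a+b)x + ab = 0.'"""
    a: int
    b: int

    @classmethod
    def sample(cls, rng: random.Random) -> "QuadraticRootProblem":
        return cls(a=rng.randint(1, 50), b=rng.randint(1, 50))

    def render(self) -> str:
        s, p = self.a + self.b, self.a * self.b
        return f"Find the larger root of x^2 - {s}x + {p} = 0."

    def verify(self, answer: int) -> bool:
        return answer == max(self.a, self.b)

rng = random.Random(0)
prob = QuadraticRootProblem.sample(rng)
print(prob.render())                      # a fresh, automatically generated variant
print(prob.verify(max(prob.a, prob.b)))   # True: answers are mechanically checkable
```
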
Mohit Bansal (@mohitban47):

🔥 BIG CONGRATS to Elias + UT Austin! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes 🤗 Make sure to apply for your PhD with him -- he is an amazing advisor and person! 💙

Vaidehi Patil (@vaidehi_patil_):

🚨 Introducing our Transactions on Machine Learning Research paper “Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation”

We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private …