
Tianlong Chen
@tianlongchen4
Assistant Professor at UNC Chapel Hill (@unccs, @unc).
Postdoc, CSAIL@MIT (@MIT_CSAIL) & BMI@Harvard (@Harvard).
Ph.D., ECE@UT Austin (@UTAustin). #AI #ML
ID: 1564005266508681222
https://tianlong-chen.github.io/ 28-08-2022 21:41:41
87 Tweets
910 Followers
17 Following

CS Professors Marc Niethammer, Mohit Bansal, Tianlong Chen, and Junier Oliva are leading a collaboration with the UNC School of Medicine to use multimodal, ethical AI for earlier diagnosis of autoimmune diseases. The project received $4 million in NIH funding. cs.unc.edu/news-article/c…

🌟🌟🌟 Announcing the #ICLR2025 workshop on "Scalable Optimization for Efficient and Adaptive Foundation Models (#SCOPE)". Co-organized by Amir Yazdan (Efficient and Intelligent Computing Lab), Beidi Chen, Tianlong Chen, Shiwei Liu, and Haizhong. 📄 Workshop link: lnkd.in/g8ZbgjbX

Training-free Video Enhancement: Achieved 🎉 Nice work with Xuanlei Zhao, Wenqi Shaw, Victor.Kai Wang, @VitaGroupUT, Yang You, et al. Non-trivial enhancement, training-free, and plug-and-play 🥳 Blog: oahzxl.github.io/Enhance_A_Vide… (🧵1/6)

We are happy to announce that the Workshop on Sparsity in LLMs will take place at ICLR 2025 in Singapore! For details: sparsellm.org Organizers: Tianlong Chen, utku, Yani Ioannou, Berivan Isik, Shiwei Liu, Mohammed Adnan, Aleksandra …

Generating ~200 million parameters in just minutes! 🥳 Excited to share our work with Doven Tang, ZHAO WANGBO, and Yang You: 'Recurrent Diffusion for Large-Scale Parameter Generation' (RPG for short). Example: Obtain customized models using prompts (see below). (🧵1/8)

We are excited to have our next UNC NLP/ML Colloquium by Dr. Hua Wei from the School of Computing and Augmented Intelligence at Arizona State University, talking about "Paradoxes in Transformer Language Models: Masking, Positional Encodings, and Routing"! (Friday, March 07, 3:15-4:15 PM EST, FB 141)

🚨 Introducing our Transactions on Machine Learning Research paper “Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation”. We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private information.
