Zhimeng Jiang (@zhimengj)'s Twitter Profile
Zhimeng Jiang

@zhimengj

Staff Research Scientist@Visa Research | CS Ph.D. @tamu| Formerly, @Amazon & @Visa & @Samsung | Trustworthy ML & Graph Neural Network | Opinions are my own

ID: 1163282318389317634

Link: http://www.zhimengjiang.com | Joined: 19-08-2019 02:51:46

92 Tweets

400 Followers

1.1K Following

Zhimeng Jiang (@zhimengj)'s Twitter Profile Photo

📢 Exciting news! We start from sufficiency and fidelity criteria to rethink fairness metrics in machine learning, and introduce the new distribution-level ABPC and ABCC metrics. Discover new insights and fairness behaviors of existing models.
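The tweet doesn't include code, but the idea of a distribution-level fairness metric can be sketched in a few lines. Below is a toy rendering of an area-between-CDF-curves style metric in the spirit of ABCC; the `abcc` name, grid, and details here are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def abcc(scores_a, scores_b, grid_size=1000):
    """Toy Area Between CDF Curves (ABCC): average gap between the empirical
    CDFs of predicted probabilities for two demographic groups over [0, 1]."""
    t = np.linspace(0.0, 1.0, grid_size)
    f_a = np.searchsorted(np.sort(scores_a), t, side="right") / len(scores_a)
    f_b = np.searchsorted(np.sort(scores_b), t, side="right") / len(scores_b)
    # Mean over a uniform grid on [0, 1] approximates the integral of |F_a - F_b|.
    return float(np.mean(np.abs(f_a - f_b)))

rng = np.random.default_rng(0)
# Identically distributed groups -> near 0; disjoint score ranges -> large gap.
fair = abcc(rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000))
unfair = abcc(rng.uniform(0.0, 0.4, 5000), rng.uniform(0.6, 1.0, 5000))
```

Unlike threshold-based gaps, comparing whole score distributions captures disparities that survive any choice of decision threshold.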

Zhimeng Jiang (@zhimengj)'s Twitter Profile Photo

Excited to be in Long Beach for #KDD2023, Aug 6-10! 🌊 Presenting a tutorial on #Fairness and a workshop paper on Editable Graph Neural Networks 📚 (PDF: arxiv.org/pdf/2305.15529…). Looking forward to meeting old friends and making new connections. See you there! 😃

NewInML @ NeurIPS 2024 (@newinml)'s Twitter Profile Photo

The paper submission deadline has been extended to the 5th of October. Submit now to get feedback from top mentors on your first ML paper and get started in the field! Note that *paper* is broadly defined and can mean an initial draft with first ideas and experiments.

NewInML @ NeurIPS 2024 (@newinml)'s Twitter Profile Photo

Our workshop for newcomers in ML is starting now! This morning we have inspiring keynotes (Hugo Larochelle now) and a panel discussion on *Slow Science*, with more talks in the afternoon.

Xiaotian (Max) Han (@xiaotianhan1)'s Twitter Profile Photo

With the recent release of #TinyLlama, SLMs have attracted a lot of attention. I re-released my previously trained SLM, LiteLlama, under the MIT license: 460M parameters trained on 1T tokens. I hope to contribute a bit to the community. huggingface.co/ahxt/LiteLlama…

HongyeJ (@serendip410)'s Twitter Profile Photo

The 2.7B Phi-2 model is even better than everyone thought! Utilizing our Self-Extend method, which requires no fine-tuning, we've successfully expanded Phi-2's window length from 2k to 8k. This enhancement significantly boosts its performance across a variety of long-context tasks.

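For context, the core trick behind this kind of fine-tuning-free extension can be sketched as a position remapping. The function below is an illustrative reconstruction, not the authors' code; parameter names and the defaults (`group_size=4`, `neighbor_window=512`) are assumptions:

```python
def self_extend_rel_pos(q_pos, k_pos, group_size=4, neighbor_window=512):
    """Toy sketch of Self-Extend-style position remapping.

    Tokens inside the neighbor window keep their exact relative position;
    more distant tokens use floor-divided (grouped) positions, shifted so
    the two schemes meet at the window boundary. All remapped distances
    then stay within the pretrained context range.
    """
    rel = q_pos - k_pos
    if rel <= neighbor_window:
        return rel  # normal attention for nearby tokens
    # Grouped attention: coarser positions for distant tokens.
    shift = neighbor_window - neighbor_window // group_size
    return q_pos // group_size - k_pos // group_size + shift
```

With these toy settings, a token 5000 positions back remaps to a distance of roughly 1600, well inside a 2k pretrained window; larger group sizes extend the reachable length further, at the cost of coarser long-range positions.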
Xiaotian (Max) Han (@xiaotianhan1)'s Twitter Profile Photo

Thrilled to share that this paper has been accepted by #ICLR2024! It offers a range of user-friendly fairness methods, metrics, and datasets. Please try them out! We hope this project can facilitate fairness research and welcome contributions of new fairness algorithms!

Zhimeng Jiang (@zhimengj)'s Twitter Profile Photo

Thrilled to see Google I/O spent 10 minutes introducing our Self-Extend work, starting at the 35-minute mark! Don't miss it! #GoogleIO #KerasNLP

Hunyuan (@tencenthunyuan)'s Twitter Profile Photo

🚀 Introducing Hunyuan-A13B, our latest open-source LLM.

As an MoE model, it leverages 80B total parameters with just 13B active, delivering powerful performance that scores on par with o1 and DeepSeek across multiple mainstream benchmarks.

Hunyuan-A13B features a hybrid
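The "80B total, 13B active" distinction in the announcement is a general property of top-k expert routing, sketched below in toy form. The shapes, gating, and routing details here are illustrative, not Hunyuan's actual architecture: only the selected experts' weights are touched per token, so compute scales with active rather than total parameters.

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Toy top-k MoE layer: each token is routed to its top_k experts only,
    so per-token compute touches top_k/num_experts of the expert weights."""
    logits = x @ gate_w                          # (tokens, num_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                             # softmax over chosen experts
        for wi, e in zip(w, top[t]):
            out[t] += wi * (x[t] @ experts[e])   # only top_k experts run
    return out

rng = np.random.default_rng(0)
d, num_experts = 8, 4
x = rng.normal(size=(3, d))
y = moe_layer(x, rng.normal(size=(d, num_experts)),
              rng.normal(size=(num_experts, d, d)))
```

In this toy layer, `top_k=2` of 4 experts means each token exercises half the expert parameters; scale the same idea up and an 80B-parameter model can run with a ~13B-parameter per-token footprint.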