Akhil Mathur (@akhilmathurs) 's Twitter Profile
Akhil Mathur

@akhilmathurs

Research Scientist in Generative AI @MetaAI | ex-@BellLabs

ID: 896724155843973120

Website: https://akhilmathurs.github.io
Joined: 13-08-2017 13:24:33

176 Tweets

488 Followers

144 Following

Fahim Kawsar (@raswak) 's Twitter Profile Photo

We have met some terrific talents lately and we want to meet more. A shout-out to lateral thinkers and bright engineers: join a talented, fearless, and relentless team at Bell Labs Cambridge to do career-defining work on devices that matter. Researchers & engineers, tell us why it should be you!

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Very happy to share that our work on self-supervised federated learning in resource-constrained settings has been accepted at #ICML 2022. A fantastic outcome for the internship work by Ekdeep Singh Lubana in collaboration with Ian Tang Fahim Kawsar. Arxiv link and more details are coming soon.

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

An excellent summary of our upcoming ICML paper on unsupervised FL. The paper is out on arXiv arxiv.org/abs/2205.11506. Also, a big thank you to the Flower framework flower.dev which helped us scale our FL experiments.
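The FedAvg-style aggregation that frameworks like Flower orchestrate at scale can be sketched in a few lines. This is a generic illustration, not the paper's algorithm; the `fedavg` helper, the toy clients, and the sample counts are all invented for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (vanilla FedAvg).

    client_weights: one list of per-layer np.ndarrays per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Toy round: two clients with a one-layer "model"; client 2 has 3x the data.
w1 = [np.array([1.0, 1.0])]
w2 = [np.array([3.0, 3.0])]
global_weights = fedavg([w1, w2], client_sizes=[1, 3])  # -> [array([2.5, 2.5])]
```

Each round, the server sends `global_weights` back to the clients for further local training.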

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Glad to share that our work FLAME has been accepted to IMWUT '22. We explore algorithmic & system challenges for federated learning in the presence of multiple data-generating devices on a user. Proud of our intern Hyunsung Cho who led this work. arxiv.org/abs/2202.08922 Fahim Kawsar

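One way to picture the multi-device setting FLAME targets: aggregate each user's devices into a per-user model first, then aggregate across users. The two-level sketch below only illustrates that problem structure (FLAME's actual algorithm is in the paper); all values are invented.

```python
import numpy as np

def hierarchical_fedavg(user_device_weights, user_device_sizes):
    """Two-level aggregation: average each user's devices first,
    then average across users, weighting by sample counts.

    user_device_weights: per user, a list of per-device weight lists
    user_device_sizes:   per user, a list of per-device sample counts
    """
    user_models, user_totals = [], []
    for weights, sizes in zip(user_device_weights, user_device_sizes):
        total = sum(sizes)
        user_models.append([
            sum(w[i] * (n / total) for w, n in zip(weights, sizes))
            for i in range(len(weights[0]))
        ])
        user_totals.append(total)
    grand = sum(user_totals)
    return [
        sum(m[i] * (t / grand) for m, t in zip(user_models, user_totals))
        for i in range(len(user_models[0]))
    ]

# User A: phone + watch; user B: phone only. Equal total data per user.
global_model = hierarchical_fedavg(
    [[[np.array([0.0])], [np.array([2.0])]], [[np.array([5.0])]]],
    [[1, 1], [2]],
)  # -> [array([3.])]
```

Grouping devices by user keeps one user's many devices from dominating the global average.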
Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

If you are attending #ICML2022, please join our spotlight talk on self-supervised FL on July 19 and poster presentation on July 21. Please feel free to DM if you are up for a chat on FL, self-supervision, and embedded ML. icml.cc/virtual/2022/s…

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Looking forward to presenting our work and learning more about ML efficiency @ Deep Learning Indaba. Kudos to the incredible organizing team for putting this event together.

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

"At the stroke of the midnight hour, when the world sleeps, India will awake to life and freedom." Happy 75th Independence Day! The journey has just begun. #IndiaAt75

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Hyunsung Cho is presenting her work FLAME 🔥, which shows how to make federated learning work in multi-device environments. #UbiComp2022

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Shohreh Deldari (@ShohrehDeldari) from RMIT is now presenting her super cool work on cross-modal self-supervised learning. #UbiComp2022

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

Charlie Dean was outside the crease for more than 85% of all balls she started at the non-striker's end. "Spirit of cricket!" 🙄

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

All train services between London and Cambridge are delayed because there is a swan 🦒 sitting on the track 🤷🏻‍♂️🤦‍♂️

Anupriya Tuli (@anupriyatuli) 's Twitter Profile Photo

πŸ₯πŸ₯ Join us for ACM SIGCHI Symposium for π‡π‚πˆ 𝐚𝐧𝐝 π…π«π’πžπ§ππ¬ at IIT Bombay, India, 9-11 December 2022!🧡 Speakers: namastehci.in/#speakers November 9th: Applications due namastehci.in/#registration πŸ™ Anupriya, Akhil Mathur, Anirudha Joshi, @Pushpendra_S__, Neha Kumar

πŸ₯πŸ₯ Join us for ACM SIGCHI Symposium for π‡π‚πˆ 𝐚𝐧𝐝 π…π«π’πžπ§ππ¬ at IIT Bombay, India, 9-11 December 2022!🧡

Speakers:  namastehci.in/#speakers

November 9th: Applications due
namastehci.in/#registration

πŸ™
Anupriya, <a href="/akhilmathurs/">Akhil Mathur</a>, Anirudha Joshi, @Pushpendra_S__, <a href="/nehakumar/">Neha Kumar</a>
Yann LeCun (@ylecun) 's Twitter Profile Photo

This is huge: Llama-v2 is open source, with a license that authorizes commercial use! This is going to change the landscape of the LLM market. Llama-v2 is available on Microsoft Azure and will be available on AWS, Hugging Face, and other providers. Pretrained and fine-tuned

Akhil Mathur (@akhilmathurs) 's Twitter Profile Photo

So happy to see this important work published. Model fairness should be a key consideration when we optimize machine learning pipelines for edge devices. Our paper offers a framework to approach on-device ML fairness, along with extensive experimental findings for speech models.

Dimitris Spathis (@spdimitris) 's Twitter Profile Photo

🤿 What is latent masking and why is it relevant to multimodal learning?

In our paper in ML4MHD at #ICML2023 we presented CroSSL, a new model that masks intermediate embeddings to improve multimodal learning.

📖 Paper: arxiv.org/abs/2307.16847
🎥 Video: drive.google.com/file/d/1lF5GsQ…

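Latent masking, in its simplest form, hides some per-modality embeddings before the cross-modal aggregator, forcing the model to reason from the surviving modalities. Below is a minimal sketch of that idea with mean pooling as the aggregator; CroSSL's actual masking strategy and architecture are described in the paper, and every name here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_mask_pool(embeddings, mask_ratio=0.5):
    """Zero out a random subset of per-modality latent embeddings,
    then aggregate across modalities (here: mean pooling).

    embeddings: (num_modalities, dim) array of encoder outputs
    """
    num_mod = embeddings.shape[0]
    num_masked = int(num_mod * mask_ratio)
    masked_idx = rng.choice(num_mod, size=num_masked, replace=False)
    masked = embeddings.copy()
    masked[masked_idx] = 0.0          # hide these modalities' latents
    return masked.mean(axis=0)        # cross-modal aggregation

z = rng.standard_normal((4, 8))       # 4 modality encodings, 8-dim each
pooled = latent_mask_pool(z)          # (8,) fused representation
```

A contrastive or reconstruction objective on `pooled` would then train the encoders to be robust to missing modalities.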
Ian Tang (@iantangc) 's Twitter Profile Photo

1/5 Tired of self-supervised models that can't adapt to new data?

In our #WACV2024 paper, we propose Kaizen, an end-to-end approach for continual learning that performs knowledge distillation across both the pre-training and fine-tuning steps.

👉 arxiv.org/abs/2303.17235

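For readers unfamiliar with knowledge distillation: the student model is trained to match a teacher's softened predictions as well as the true labels. Kaizen applies distillation at both the pre-training and fine-tuning stages; the sketch below is only the textbook loss, not the paper's exact formulation, and the toy logits are invented.

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    e = np.exp((x - x.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * (soft-target cross-entropy at temperature T, scaled by T^2)
    plus (1 - alpha) * cross-entropy on the true labels."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    kd = -(p_teacher * log_p_student).sum(axis=-1).mean() * (T ** 2)
    log_p = np.log(softmax(student_logits) + 1e-12)
    ce = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * kd + (1 - alpha) * ce

student = np.array([[2.0, 0.5, 0.1]])
teacher = np.array([[2.5, 0.3, 0.2]])
loss = distillation_loss(student, teacher, labels=np.array([0]))
```

Raising `T` softens the teacher's distribution so the student also learns the relative rankings of wrong classes.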