Yuwei Zhang (@eveyuwei)'s Twitter Profile
Yuwei Zhang

@eveyuwei

PhD student @Cambridge_Uni, @Cambridge_CL

ID: 1454098534026788865

Link: https://evelyn0414.github.io · Joined: 29-10-2021 14:52:04

6 Tweets

24 Followers

92 Following

Yuwei Zhang (@eveyuwei):

Tomorrow I will present my work on uncertainty quantification in FL (along with Tong Xia, Abhirup Ghosh, and Cecilia Mascolo🇪🇺🇬🇧) at FL4Data-Mining #KDD2023 during the poster session, 11:30 am - 12:00 pm local time. Feel free to stop by! You can find our paper here: openreview.net/pdf?id=QSQOTUV…

Cecilia Mascolo🇪🇺🇬🇧 (@cecim):

We announce our work on Open Respiratory Acoustic Foundation Models (arxiv.org/abs/2406.16148). In this work we build the first open and reproducible respiratory-sound-based model for respiratory health. Work led by Yuwei Zhang and Tong Xia.

Cecilia Mascolo🇪🇺🇬🇧 (@cecim):

Our work towards a respiratory acoustic foundation model will be presented in the NeurIPS 2024 D&B track. This was an amazing effort by Yuwei Zhang and Tong Xia (with collaborators). This work will be useful for downstream tasks with limited data. Blog: opera-benchmark.github.io/blog/overview

Yuwei Zhang (@eveyuwei):

I will be at #NeurIPS2024 Poster Session 2 West, presenting our paper "Towards open respiratory acoustic foundation models: Pretraining and benchmarking". Link: openreview.net/pdf?id=vXnGXRb… Happy to chat about anything, e.g. mobile health, foundation models, multimodal LLMs.

Yuzhe Yang (@yang_yuzhe):

🚨 Let your wearable data "speak" for themselves! ⌚️🗣️ Introducing *SensorLM*, a family of sensor-language foundation models, trained on ~60 million hours of data from >103K people, enabling robust wearable sensor data understanding with natural language. 🧵

Google Research (@googleresearch):

Let your wearable data "speak" for itself! Introducing SensorLM, a family of sensor-language foundation models trained on ~60 million hours of data, enabling robust wearable data understanding with natural language. → goo.gle/4lSLwQi