CDT Artificial Intelligence+Music (@cdt_ai_music) 's Twitter Profile
CDT Artificial Intelligence+Music

@cdt_ai_music

The UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM) is a leading PhD research programme focused on Music/Audio Technology.

ID: 1216727153733263361

Link: https://www.aim.qmul.ac.uk/ · Joined: 13-01-2020 14:23:23

271 Tweets

1.1K Followers

77 Following

Jordie Shier (@jordieshier) 's Twitter Profile Photo

Happy to share our new paper: "Real-time Timbre Remapping with Differentiable DSP". Timbral control of synthesizers (like an 808) in real-time using audio from acoustic percussion performances. 🥁🎛️ arXiv: arxiv.org/abs/2407.04547 audio/video/code: jordieshier.com/projects/nime2…

C4DM at QMUL (@c4dm) 's Twitter Profile Photo

🚨 Call for Papers: First AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), Queen Mary University of London, Sept. 8-10, 2025. c4dm.eecs.qmul.ac.uk/news/2024-07-2…

CDT Artificial Intelligence+Music (@cdt_ai_music) 's Twitter Profile Photo

We're happy to share that our students Soumya Sai Vanka, Franco Caspe, and Farida Yusuf are part of the organizing committee for the AIMLA conference next year. We're also calling for contributions; please check the blog for details. aim.qmul.ac.uk/cfp-first-aes-…

CDT Artificial Intelligence+Music (@cdt_ai_music) 's Twitter Profile Photo

Our student Jiawen will present "Towards Building an End-to-End Multilingual Automatic Lyrics Transcription Model," an extension of her UKIS poster, at EUSIPCO 2024 at the end of August. aim.qmul.ac.uk/aim-at-eusipco…

Marco Pasini (@marco_ppasini) 's Twitter Profile Photo

🔊 Encode and decode audio to/from latents with Music2Latent! 🔊

Music2Latent encodes only ~10 latents per second of audio 👀

This means lightning-fast training/inference of latent generative models ⚡️️

Try it: github.com/SonyCSLParis/m…
Paper: arxiv.org/abs/2408.06500

How? 👇🧵

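As a back-of-envelope illustration of what the ~10 latents/s figure implies (assuming 44.1 kHz audio; the function names below are my own, not part of the Music2Latent API), the compression along the time axis works out to roughly three and a half orders of magnitude:

```python
def latent_count(duration_s: float, latent_rate_hz: float = 10.0) -> int:
    """Approximate number of latent vectors for a clip at ~10 latents/s."""
    return round(duration_s * latent_rate_hz)

def time_compression(sample_rate_hz: int = 44100, latent_rate_hz: float = 10.0) -> float:
    """Ratio of audio samples per second to latent vectors per second."""
    return sample_rate_hz / latent_rate_hz

print(latent_count(30))     # a 30 s clip -> 300 latents
print(time_compression())   # ~4410x fewer timesteps than raw 44.1 kHz audio
```

A generative model trained on these latents therefore sees sequences thousands of times shorter than the raw waveform, which is where the "lightning-fast training/inference" comes from.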
Chin-Yun Yu (@yoyololicon) 's Twitter Profile Photo

Differentiable Time-Varying Linear Prediction in the Context of End-to-End Analysis-by-Synthesis

Paper: arxiv.org/abs/2406.05128
Web: yoyololicon.github.io/golf2-demo/

Improvements to the GOLF voice synthesiser and comprehensive comparison. Details in this thread. 1/7

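For readers new to linear prediction: an all-pole (LPC) synthesis filter reconstructs a signal recursively from its own past outputs plus an excitation. A minimal plain-numpy sketch of that core recursion is below; names and sign conventions are my own, and the paper's contribution (a time-varying, differentiable formulation) is not captured by this loop:

```python
import numpy as np

def lpc_synthesize(excitation: np.ndarray, a: np.ndarray) -> np.ndarray:
    """All-pole synthesis: y[n] = e[n] - sum_k a[k] * y[n-k].

    `a` holds prediction coefficients a[1..p], using the convention
    A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p.
    """
    p = len(a)
    y = np.zeros(len(excitation))
    for n in range(len(excitation)):
        acc = excitation[n]
        for k in range(1, p + 1):
            if n - k >= 0:
                acc -= a[k - 1] * y[n - k]
        y[n] = acc
    return y

# An impulse through a one-pole filter with a[1] = -0.5 decays as 0.5**n.
e = np.zeros(8)
e[0] = 1.0
y = lpc_synthesize(e, np.array([-0.5]))
```

In an analysis-by-synthesis setting, gradients must flow through this recursion back to the coefficients, which is what makes an efficient differentiable implementation non-trivial.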
Yinghao Ma (@nicolaus625) 's Twitter Profile Photo

📢📢Glad to share our latest survey on Foundation Models for Music! 🎶For both computer music researchers and ML researchers: this comprehensive review explores how LLMs and diffusion models are revolutionising music through music understanding, generation, therapy, and more.🎉🎉

Marco Comunità (@marcomunita) 's Twitter Profile Photo

AFX-Research: an Extensive and Flexible Repository of Research about Audio Effects

All the research on audio effects of the last few decades. A table with lots of metadata. Search. Filter. Order. Contribute.

repo: github.com/mcomunita/AFX-…
web: mcomunita.github.io/AFX-Research

CDT Artificial Intelligence+Music (@cdt_ai_music) 's Twitter Profile Photo

NIME is happening next week! This time, our members Jordie Shier and Shuoyang will present two papers on DDSP and interactive ML, respectively, and teresa pelinski will lead two workshops on embedded AI. Details: aim.qmul.ac.uk/aim-at-nime-20…

CDT Artificial Intelligence+Music (@cdt_ai_music) 's Twitter Profile Photo

**Correction** Teresa is leading workshops on "First- and second-person perspectives for ML in NIME" and "Building NIMEs with Embedded AI." In the previous tweet, we mistakenly said they were about the same topic.

Andrew McPherson (@instrumentslab) 's Twitter Profile Photo

Just out! New ACM TOCHI paper with Courtney Reed, Adán L. Benito, and Franco Caspe: "Shifting Ambiguity, Collapsing Indeterminacy: Designing with Data as Baradian Apparatus". Open access: dl.acm.org/doi/10.1145/36…

C4DM at QMUL (@c4dm) 's Twitter Profile Photo

Next week, we will host seminars on "Machine Learning-Based Artificial Reverberation" and "Non-stationary Noise Removal from Repeated Sweep Measurements", by Gloria Dal Santo and Sebastian J. Schlecht, respectively. More information at: c4dm.eecs.qmul.ac.uk/news/2024-09-1…

Soumya Sai Vanka (@sai_soum_) 's Twitter Profile Photo

In mid-July, I presented my work as part of a case study on the responsible and ethical design of AI systems, organised by the UAL Creative Computing Institute as part of their MusicRAI research project. Slides at music-rai.github.io

Soumya Sai Vanka (@sai_soum_) 's Twitter Profile Photo

The AES AIMLA 2025 website (aes2.org/contributions/…) is now up to date with submission deadlines. The submission portal for challenge and tutorial proposals is open until the end of October 2024. #AESAIMLA25

Christopher Mitcheltree (@frozenmango) 's Twitter Profile Photo

"Differentiable All-pole Filters for Time-varying Audio Systems" being presented by Chin-Yun Yu at DAFx!

Paper, audio samples, code, and plugins available at diffapf.github.io/web/
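The simplest time-varying all-pole filter is a one-pole recursion whose coefficient changes every sample. A plain-loop numpy sketch of that recursion follows (names are illustrative, not from the paper; making this recursion fast to differentiate in an autograd framework is precisely the problem the paper addresses, which this sketch does not):

```python
import numpy as np

def time_varying_one_pole(x: np.ndarray, a: np.ndarray) -> np.ndarray:
    """y[n] = x[n] + a[n] * y[n-1], with a per-sample feedback coefficient."""
    y = np.zeros(len(x))
    prev = 0.0
    for n in range(len(x)):
        prev = x[n] + a[n] * prev
        y[n] = prev
    return y

# An impulse with a constant coefficient of 0.9 decays as 0.9**n.
x = np.zeros(5)
x[0] = 1.0
a = np.full(5, 0.9)
y = time_varying_one_pole(x, a)
```

Because each output depends on the previous one, the loop cannot be trivially parallelised, and naive autograd through it is slow; efficient differentiable implementations are what make such filters practical inside trained audio systems.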