NYU MARL
@nyumarl
Music and Audio Research Laboratory at NYU Steinhardt
ID: 3825693142
http://steinhardt.nyu.edu/marl/ 30-09-2015 16:28:02
151 Tweets
961 Followers
435 Following
How do noise patterns vary between weekdays and weekends in #NYC? We developed Time Lattice + Noise Profiler to find out! Work led by Fabio Miranda + @mqolf, Harish D, Charlie Mydlarz, Yitzchak Lockerman, Juliana Freire & Claudio Silva #datascience #SmartCities justinsalamon.com/news/time-latt…
MaxPool? MeanPool? AutoPool! A trainable operator that interpolates between pooling functions, adapting to the data --> multiple instance learning from weakly labeled time series. New paper with brianmcfee & Juan Bello; keras layer: github.com/marl/autopool justinsalamon.com/news/autopool-…
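For reference, the interpolation itself boils down to a softmax-weighted average with a learnable sharpness parameter. A minimal NumPy sketch (not the released keras layer; the function and variable names here are illustrative):

    import numpy as np

    def auto_pool(x, alpha, axis=0):
        # Softmax-weighted average over the pooling axis.
        # alpha = 0 recovers mean pooling; large alpha approaches max pooling.
        z = alpha * x
        z = z - z.max(axis=axis, keepdims=True)   # numerical stability
        w = np.exp(z)
        w = w / w.sum(axis=axis, keepdims=True)
        return (w * x).sum(axis=axis)

    frames = np.array([0.1, 0.2, 0.9, 0.3])    # frame-level predictions for one clip
    print(auto_pool(frames, alpha=0.0))        # ~ mean pooling -> 0.375
    print(auto_pool(frames, alpha=50.0))       # ~ max pooling  -> close to 0.9

In the released keras layer, alpha is a trainable parameter learned jointly with the rest of the network.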
Excited to be giving a talk at #SANE2018 today, hope you can join us! 🚘🐦 --> 🎙️ --> 💻 (work by NYU MARL, Cornell Lab, CUSP at NYU Tandon & sonycproject #BirdVox #SONYC) justinsalamon.com/news/robust-so…
PCEN is an excellent audio frontend for sound recognition in far-field recordings. But... why? And how do you configure it for your application? Answers in our new paper led by Vincent Lostanlen, in collaboration with Cornell Lab, NYU MARL & CUSP at NYU Tandon #nocmig justinsalamon.com/news/per-chann…
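To experiment with PCEN before digging into the paper, librosa includes an implementation; a minimal sketch (the file path is a placeholder and the parameter values are common defaults, not the settings recommended in the paper):

    import librosa

    # Placeholder path to a far-field recording
    y, sr = librosa.load("recording.wav", sr=22050)

    # PCEN is typically applied to a magnitude (power=1) mel spectrogram
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, power=1.0)

    # gain, bias, power and time_constant are the knobs to tune per application;
    # the 2**31 scaling mimics the fixed-point input range PCEN was designed around
    P = librosa.pcen(S * (2 ** 31), sr=sr, gain=0.98, bias=2, power=0.5,
                     time_constant=0.4)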
Mark Cartwright presenting our work on how visualizations of audio clips affect workers' performance in audio annotation tasks. Collaboration: NYU MARL + Waterloo HCI, with Justin Salamon & Edith Law #CSCW2018
New datasets for pitch tracking research with monophonic, melody, bass and multi-f0 annotations 🥳 (NYU MARL + MTG-UPF collab with Rachel Bittner, Jordi Bonada, Juanjo Bosch, Emilia Gomez & Juan Pablo Bello) #MIR #MachineLearning #Data justinsalamon.com/news/new-datas…
We learned the hard way. So we wrote the article we wish we could have read at the beginning of our careers. May 2019 be full of great open-source research! With brianmcfee ᓚᘏᗢ🌈, Mark Cartwright, Rachel Bittner & Juan Pablo Bello #openscience #opensource justinsalamon.com/news/open-sour…
Announcing OpenL3: a self-supervised deep audio embedding based on an improved L3-Net that's state of the art for sound recognition! Just run "pip install openl3" to try it out (requires #TensorFlow). Work led by @jsondotload & Ho-Hsiang Wu #openscience justinsalamon.com/news/openl3-co…
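A minimal usage sketch once the package is installed (the wav path is a placeholder; the keyword arguments follow openl3's get_audio_embedding API):

    import openl3
    import soundfile as sf

    audio, sr = sf.read("example.wav")     # placeholder input file

    # One embedding vector per analysis window, plus the window timestamps
    emb, ts = openl3.get_audio_embedding(
        audio, sr,
        content_type="env",       # "env" for environmental audio, "music" for music
        input_repr="mel256",
        embedding_size=512,
    )
    print(emb.shape)               # (n_frames, 512)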
Check out our summer work #ProjectSoundSeek presented at #AdobeMax2019! Huge thanks to my awesome mentors Justin Salamon and Nicholas J. Bryan! Such an exciting way to summarize my fruitful internship 🤩
Vincent Lostanlen presents periodic modulation recognition in music signals
Which dataset do you want next in mirdata? GitHub.com/mir-dataset-lo… We take requests! With Magdalena Fuentes, Vincent Lostanlen, Keunwoo Choi & Thor Kell
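For context, pulling one of the already-supported datasets takes only a few lines; a sketch assuming a recent mirdata release (older versions exposed module-level loaders instead, and "orchset" is just an example dataset):

    import mirdata

    print(mirdata.list_datasets())      # everything that's already supported

    orchset = mirdata.initialize("orchset")
    orchset.download()                  # fetch audio + annotations
    orchset.validate()                  # check the local copy against checksums

    track = orchset.choice_track()      # one random track with its annotations
    print(track.track_id)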
Our 2017 paper on ConvNets + data augmentation for environmental sound recognition has won an IEEE Signal Processing Society best paper award 🥳 Much gratitude to my co-author & former mentor Juan P. Bello (NYU MARL) for the many years of fruitful collaboration. Cue awkward screenshot: justinsalamon.com/news/2020-ieee…
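Augmentations of the kind studied in that paper (time stretching, pitch shifting, background noise, dynamic range compression) are easy to prototype with librosa; an illustrative sketch with made-up parameter values and file path, not the paper's exact settings:

    import numpy as np
    import librosa

    y, sr = librosa.load("siren.wav", sr=22050)   # placeholder clip

    # Time stretching and pitch shifting, applied to the raw waveform
    y_stretched = librosa.effects.time_stretch(y, rate=1.2)
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)

    # Simple additive noise as a stand-in for background-noise mixing
    y_noisy = y + 0.005 * np.random.randn(len(y))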
Our #BirdVox project (NYU MARL + Cornell Lab) got a nice mention in this Scientific American article 🐦🎤🤖 #DeepLearning #bioacoustics With Vincent Lostanlen @auditorybean, Andrew Farnsworth & Juan Pablo Bello scientificamerican.com/article/artifi…