Vladimir Rybakov (@vladimirrybako9) 's Twitter Profile
Vladimir Rybakov

@vladimirrybako9

Head of data science at WaveAccess. wave-access.com

ID: 1013121828972318720

Joined: 30-06-2018 18:07:21

438 Tweets

32 Followers

29 Following

neptune.ai (@neptune_ai) 's Twitter Profile Photo

We had an opportunity to chat with Vladimir Rybakov (Head of #DataScience at WaveAccess) about: - solving problems under pressure, - taking responsibility, - why asking for help is an important skill and more! bit.ly/2yr9vTK

OpenAI (@openai) 's Twitter Profile Photo

Introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We're releasing a tool for everyone to explore the generated samples, as well as the model and code: openai.com/blog/jukebox/

Nando de Freitas (@nandodf) 's Twitter Profile Photo

This brilliant ⁦OpenAI⁩ work and the video of ⁦Andrej Karpathy⁩ I shared recently are very exciting AI frontiers. The story repeats itself: Big net, curated data, and common sense are the ingredients. Congrats ⁦Ilya Sutskever⁩ et al. arxiv.org/abs/2005.14165

Google AI (@googleai) 's Twitter Profile Photo

Today we present SpineNet, a novel alternative to standard scale-decreased backbone models for visual recognition tasks, which uses reordered network blocks with cross-scale connections to better preserve spatial information. Learn more below: goo.gle/3dNq9fa

Google AI (@googleai) 's Twitter Profile Photo

Introducing a new approach to RL that uses complex duality to convert problems with a large number of constraints to equivalent, more computationally friendly forms, enabling mathematically principled algorithms that are useful in practice. Learn more ↓ goo.gle/2AEKW7h

Christoph Henkelmann - @chenkelmann@sigmoid.social (@chenkelmann) 's Twitter Profile Photo

Too l̶a̶z̶y̶ busy to keep up with reading the latest #DeepLearning papers? I just discovered this great channel that explains important papers in real depth 🎓 but is still great fun and easy to follow 🤡 youtube.com/channel/UCZHmQ…

Josh Tobin (@josh_tobin_) 's Twitter Profile Photo

What makes production ML hard? - Cleaning, labeling, and augmenting data - Troubleshooting training and ensuring reproducibility - Deploying models and monitoring their real-world impact To help, we're excited to announce our online production ML course: course.fullstackdeeplearning.com

Google AI (@googleai) 's Twitter Profile Photo

Creating a system that can execute written/spoken instructions on a graphical interface requires a model that grounds the instructions to executable UI actions. Check out foundational work that does so through use of a multi-part language grounding model: goo.gle/3fo8qwQ

Michael Galkin (@michael_galkin) 's Twitter Profile Photo

ACL 2020 #acl2020nlp ends this week! If you didn't manage to attend all #KnowledgeGraph related talks - I compiled a review of KG-related papers focusing on question answering, KG embeddings, graph-to-text NLG, some ConvAI and OpenIE 😊 #NLProc medium.com/@mgalkin/knowl…

Jeff Dean (@jeffdean) 's Twitter Profile Photo

AutoML-Zero: new research from Google AI researchers Esteban Real, Chen Liang, David R. So, & Quoc Le that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. Arxiv: arxiv.org/abs/2003.03384

Greg Yang (@thegregyang) 's Twitter Profile Photo

1/ Crazy exp: take Resnet embedding of Imagenet as dataset A. Train linear predictor on A; get accuracy p. Now make fake dataset B = a mixture of Gaussians w/ same class mean & covariance as A. Train linear predictor on B => get *SAME ACCURACY* p. WTF proceedings.icml.cc/static/paper_f…

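The recipe in the tweet can be sketched on toy data. This is only an illustration, not a reproduction: the ResNet/ImageNet embeddings are replaced here by a synthetic, deliberately non-Gaussian dataset, and the dimensions, class separation, and least-squares linear predictor are all arbitrary choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5

# Dataset A: two non-Gaussian (uniform) classes, standing in for embeddings.
A0 = rng.uniform(-1, 1, (n, d)) - 2.0
A1 = rng.uniform(-1, 1, (n, d)) + 2.0
XA = np.vstack([A0, A1])
y = np.r_[-np.ones(n), np.ones(n)]

def fit_linear(X, y):
    # Least-squares linear predictor with a bias column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean(np.sign(Xb @ w) == y)

acc_A = accuracy(fit_linear(XA, y), XA, y)

# Dataset B: Gaussian surrogate with the same per-class mean & covariance.
B_parts = []
for cls in (A0, A1):
    mu, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
    B_parts.append(rng.multivariate_normal(mu, cov, n))
XB = np.vstack(B_parts)

acc_B = accuracy(fit_linear(XB, y), XB, y)
print(acc_A, acc_B)
```

On this easy toy data both accuracies are near-perfect, so it only shows the mechanics; the surprising part of the actual result is that the match holds for real ResNet embeddings.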
Jeremy Bernstein (@jxbz) 's Twitter Profile Photo

Madam (multiplicative adam) needs little to no learning rate tuning and brings the numerical representation of the synapse closer to neuroscience. w/ Jiawei Zhao, Markus Meister, Ming-Yu Liu, Prof. Anima Anandkumar & Yisong Yue paper: arxiv.org/abs/2006.14560 code: github.com/jxbz/madam

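As a rough illustration of the multiplicative-update idea (not the paper's exact algorithm — the hyperparameters, normalization, and clipping below are this sketch's assumptions; see the paper and code links above for the real Madam):

```python
import numpy as np

def madam_like_step(w, g, v, lr=0.01, beta=0.999, w_max=3.0, eps=1e-8):
    """One schematic multiplicative-Adam-style step.

    w: weights, g: gradient, v: running second-moment estimate.
    """
    v = beta * v + (1 - beta) * g ** 2
    g_hat = g / (np.sqrt(v) + eps)                 # normalized gradient
    w = w * np.exp(-lr * np.sign(w) * g_hat)       # multiplicative update
    return np.clip(w, -w_max, w_max), v            # bounded magnitudes

# Toy check: minimizing f(w) = ||w||^2 (gradient 2w) shrinks weight
# magnitudes multiplicatively, without ever flipping their signs.
w = np.array([1.0, -2.0, 0.5])
v = np.zeros_like(w)
for _ in range(100):
    w, v = madam_like_step(w, 2 * w, v)
```

Because the step multiplies each weight by a positive factor, a weight's sign is fixed at initialization and only its magnitude is learned — the property the tweet ties to neuroscience-style synapses.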
Stanford NLP Group (@stanfordnlp) 's Twitter Profile Photo

Suffering from post-#acl2020nlp withdrawal? There are also lots of Percy Liang, Tatsunori Hashimoto, and other Stanford people’s papers at ICML 2020 this coming week! ai.stanford.edu/blog/icml-2020/

Xander Steenbrugge (@xsteenbrugge) 's Twitter Profile Photo

I had a few samples from my latest GAN project printed on canvas, love how they turned out 😋👏 See more at vimeo.com/neuralsynesthe…

OpenAI (@openai) 's Twitter Profile Photo

We've used reinforcement learning from human feedback to train language models for summarization. The resulting models produce better summaries than 10x larger models trained only with supervised learning: openai.com/blog/learning-…

Vladimir Rybakov (@vladimirrybako9) 's Twitter Profile Photo

Java is still try-harding to catch up to Python in the DS/ML field. Though it is good to have a scikit-like lib for Java just in case. blogs.oracle.com/javamagazine/p… PS Christoph Henkelmann - @[email protected], your thoughts? :)