Adversarial Machine Learning (@adversarial_ml)'s Twitter Profile
Adversarial Machine Learning

@adversarial_ml

I tweet about #MachineLearning and #MachineLearningSecurity.

ID: 990456089874345985

Joined: 29-04-2018 05:01:48

14 Tweets

186 Followers

53 Following

gio (@giorgiopatrini)'s Twitter Profile Photo

Excited by this direction of formal investigation for adversarial defences: Adversarial examples from computational constraints, Bubeck et al arxiv.org/abs/1805.10204

Aleksander Madry (@aleks_madry)'s Twitter Profile Photo

Think BatchNorm helps training due to reducing internal covariate shift? Think again. (What BatchNorm *does* seem to do though, both empirically and in theory, is to smooth out the optimization landscape.) (with Shibani Santurkar Dimitris Tsipras Andrew Ilyas) arxiv.org/abs/1805.11604

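As a refresher on the operation the paper analyzes, here is a minimal NumPy sketch of the BatchNorm forward pass; the scalar `gamma`/`beta` defaults are illustrative, not taken from the paper:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                        # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)    # zero mean, ~unit variance
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 6.0]])
out = batch_norm(x)
print(out.mean(axis=0))  # ~[0, 0]: each feature is centered over the batch
```

The Santurkar et al. result argues that the benefit of this normalization comes less from stabilizing these batch statistics across layers and more from making the loss surface smoother.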
Joey Bose (@bose_joey)'s Twitter Profile Photo

Here's an article by University of Toronto about our new work on adversarial attacks on Face Detectors that help you preserve your privacy. news.engineering.utoronto.ca/privacy-filter…

Aleksander Madry (@aleks_madry)'s Twitter Profile Photo

Adversarial robustness is not free: decrease in natural accuracy may be inevitable. Silver lining: robustness makes gradients semantically meaningful (+ leads to adv. examples w/ GAN-like trajectories) arxiv.org/abs/1805.12152 (Dimitris Tsipras Shibani Santurkar Logan Engstrom Alex Turner)

Somesh Jha (@jhasomesh)'s Twitter Profile Photo

Just read this paper. Short summary: when thinking of defenses to adversarial examples in ML, think of the threat model carefully. Nice paper. Also won the best paper award at ICML 2018 (ICML Conference). Congrats to the authors!! arxiv.org/abs/1802.00420

Adversarial Machine Learning (@adversarial_ml)'s Twitter Profile Photo

IBM Ireland just released "The Adversarial Robustness Toolbox: Securing AI Against Adversarial Threats". This library will allow rapid crafting and analysis of attacks and defense methods for machine learning models. ibm.com/blogs/research… #MachineLearningSecurity #AdversarialML
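Toolkits like this implement attacks such as the Fast Gradient Sign Method (FGSM). As a rough illustration of the underlying idea only (this is not ART's actual API), here is a from-scratch FGSM sketch against a toy logistic-regression classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the cross-entropy loss of a logistic-regression model."""
    grad = (sigmoid(x @ w + b) - y) * w   # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad)

# toy classifier and a point it classifies correctly as class 1
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1.0        # score w@x + b = 1.5 > 0
x_adv = fgsm(x, y, w, b, eps=1.0)
print(sigmoid(x_adv @ w + b))            # pushed below 0.5: misclassified
```

The same signed-gradient step generalizes to deep networks, where libraries like the Adversarial Robustness Toolbox compute the input gradient through the full model.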

Somesh Jha (@jhasomesh)'s Twitter Profile Photo

Two papers accepted to ICML 2018. Congrats to all my amazing co-authors. Both on adversarial ML. The arXiv versions of the papers are up, but we will update them soon based on reviewer comments. arXiv versions: arxiv.org/abs/1711.08001 and arxiv.org/abs/1706.03922

Ian Goodfellow (@goodfellow_ian)'s Twitter Profile Photo

This paper shows how to make adversarial examples with GANs. No need for a norm ball constraint. They look unperturbed to a human observer but break a model trained to resist large perturbations. arxiv.org/pdf/1805.07894…

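For context on the "norm ball constraint" the tweet mentions: conventional attacks keep the adversarial example within a small L∞ ball around the original input, typically by clipping. A minimal sketch of that projection (the function name and values here are illustrative):

```python
import numpy as np

def project_linf(x_adv, x, eps):
    """Project an adversarial example back into the L-infinity ball of
    radius eps around the original input x, coordinate by coordinate."""
    return np.clip(x_adv, x - eps, x + eps)

x = np.array([0.2, 0.8])          # original input
x_adv = np.array([0.9, 0.75])     # candidate adversarial example
print(project_linf(x_adv, x, eps=0.1))  # -> [0.3, 0.75]
```

GAN-based attacks like the one above sidestep this constraint entirely: instead of bounding a pixel-wise perturbation, they search for inputs that lie on the generator's learned data manifold.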
Battista Biggio (@biggiobattista)'s Twitter Profile Photo

"No pixels are manipulated in this talk. No pandas are harmed..." Great ways to differentiate your talk from the rest of talks on adversarial examples... no more pandas please 😀

Ian Goodfellow (@goodfellow_ian)'s Twitter Profile Photo

I'm speaking at the 1st Deep Learning and Security workshop (co-located with IEEE S&P) at 1:30 today: ieee-security.org/TC/SPW2018/DLS/ I'll discuss research into defenses against adversarial examples, including future directions. Slides and lecture notes here: iangoodfellow.com/slides/2018-05…
