Leonardo Viana (@leonardovt18) 's Twitter Profile
Leonardo Viana

@leonardovt18

Mechatronics Engineer and Deep Learning Researcher.

ID: 979358016381292546

Joined: 29-03-2018 14:02:01

42 Tweets

19 Followers

78 Following

Aakash Kumar Nain (@a_k_nain) 's Twitter Profile Photo

How quickly can we build a captcha reader using deep learning? Check out for yourself the captcha cracker built with TensorFlow 2.0 and #Keras: colab.research.google.com/drive/16y14HuN…

hardmaru (@hardmaru) 's Twitter Profile Photo

We're living in a cyberpunk future:

“Fooling automated surveillance cameras: adversarial patches to attack person detection” arxiv.org/abs/1904.08653
François Chollet (@fchollet) 's Twitter Profile Photo

If you replace AI research with 3D printer R&D, this is the equivalent of saying: "we're building a matter photocopier. When we're done we can just mass produce absolutely anything for free & sell at 100% margins".

AI is math & engineering. It isn't magic. It isn't a free lunch.
Quoc Le (@quocleix) 's Twitter Profile Photo

EfficientNets: a family of more efficient & accurate image classification models. Found by architecture search and scaled up by one weird trick. 

Link: arxiv.org/abs/1905.11946 

Github: bit.ly/30UojnC 

Blog: bit.ly/2JKY3qt
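The "one weird trick" is the paper's compound scaling rule: depth, width, and input resolution are all grown together from a single budget coefficient φ, using base coefficients α = 1.2, β = 1.1, γ = 1.15 found by a small grid search. A minimal sketch of that rule (illustrative only, not the official implementation):

```python
# EfficientNet-style compound scaling (sketch).
# ALPHA/BETA/GAMMA are the coefficients reported in the paper;
# phi is the user-chosen compute budget.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for budget phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# FLOPs grow roughly by (ALPHA * BETA**2 * GAMMA**2) ** phi, and the
# coefficients were chosen so that base factor is close to 2 -- each
# increment of phi about doubles the compute budget.
d, w, r = compound_scale(2)
print(round(d, 3), round(w, 3), round(r, 3))
```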
Andrej Karpathy (@karpathy) 's Twitter Profile Photo

ConvNets on microcontrollers (e.g. Arduino Uno): chips in the range of ~1 cm², ~$1 cost, running at ~1 mW and 4 MOPs/sec, with 2KB of RAM (intermediate tensors) and 32KB of flash (weights). E.g. even LeNet needs a 420KB model and 177KB of RAM. Very interesting to see this line of work develop.
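Those budgets make the feasibility question a back-of-the-envelope calculation: do the weights fit in flash, and does the largest intermediate tensor fit in RAM? A sketch with the numbers from the tweet (the small-model figures below are illustrative assumptions, not measurements):

```python
# Budgets quoted in the tweet: 32 KB flash for weights, 2 KB RAM
# for intermediate tensors.
FLASH_BYTES = 32 * 1024
RAM_BYTES = 2 * 1024

def fits(n_params, peak_activation_elems, bytes_per_value=1):
    """True if (e.g. int8-quantized) weights and activations fit the budgets."""
    return (n_params * bytes_per_value <= FLASH_BYTES
            and peak_activation_elems * bytes_per_value <= RAM_BYTES)

# A LeNet-sized model (~420 KB of weights, ~177 KB of activations,
# as quoted) blows the budget:
print(fits(420 * 1024, 177 * 1024))  # False
# A hypothetical tiny conv net with ~30k int8 params and
# 1.5k-element activations would fit:
print(fits(30_000, 1_500))  # True
```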

François Chollet (@fchollet) 's Twitter Profile Photo

Here's a regression example on the Ames Housing Price dataset. This dataset turns out to be great for demonstrating how to vectorize structured data, and how to handle missing features. colab.research.google.com/drive/127UxEcv… Thanks to Micah for suggesting this dataset!
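The two steps the tweet highlights — handling missing features and vectorizing structured columns — can be sketched in a few lines. This is a generic mean-imputation + standardization illustration with made-up values, not the notebook's actual code:

```python
import math

def impute_and_standardize(column):
    """Mean-impute missing values (None), then standardize the column."""
    present = [x for x in column if x is not None]
    mean = sum(present) / len(present)
    std = math.sqrt(sum((x - mean) ** 2 for x in present) / len(present))
    filled = [mean if x is None else x for x in column]
    # An imputed value equals the mean, so it standardizes to exactly 0.
    return [(x - mean) / std for x in filled] if std else [0.0] * len(filled)

# Hypothetical lot-area column with one missing entry:
lot_area = [8450.0, 9600.0, None, 11250.0]
print(impute_and_standardize(lot_area))
```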

François Chollet (@fchollet) 's Twitter Profile Photo

You don't need to know everything. You don't really need a formal background in this or that -- though it helps. You don't even need a PhD. You do, however, need to be constantly learning. Be curious. Read books. Don't be "too busy" to learn, or otherwise proud of your ignorance.

Suzana Ilić (@suzatweet) 's Twitter Profile Photo

Super excited to be at DeepCon on June 8! We'll look at 

AlexNet
VGG
Inception
MobileNet
ShuffleNet
ResNet
DenseNet
Xception
U-Net
SqueezeNet
YOLO
RefineNet

The workshop will be recorded; you can find our code on GitHub (Part I: ConvNets.ipynb). Dimitris Katsios github.com/Machine-Learni…
TensorFlow (@tensorflow) 's Twitter Profile Photo

The release of the Beta for TensorFlow 2.0 is here! We've closed over 100 issues you reported against the alpha release. Your feedback has helped us get to where we are today, please keep it coming!
 
Get more info here → goo.gle/2IpE5OZ
hardmaru (@hardmaru) 's Twitter Profile Photo

Evolving Neural Turing Machines (GECCO 2016 🦎)

“We introduce an evolvable version of NTM and show that such an approach greatly simplifies the neural model, generalizes better, and does not require accessing the entire memory content at each time-step.”

sebastianrisi.com/wp-content/upl…
Shanqing Cai (@sqcai) 's Twitter Profile Photo

Chapter 9 of "Deep Learning with JavaScript" was recently released to #MEAP. It covers the basics of generative deep learning (VAE, GAN, & RNN-based sequence generation) and how to train and serve such models in TensorFlow.js. Stan Bileschi François Chollet

manning.com/books/deep-lea…
Siraj Raval (@sirajraval) 's Twitter Profile Photo

"Study hard what interests you the most in the most undisciplined, irreverent, and original manner possible" – Richard Feynman

Google AI (@googleai) 's Twitter Profile Photo

Temporal Cycle-Consistency Learning (TCC) is a novel self-supervised method for learning representations that are well-suited for fine-grained temporal labeling of video. Learn how it’s done and download the TCC codebase to try it out for yourself! goo.gle/2YSkA87
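The core idea in TCC is that a frame's embedding is cycle-consistent if its nearest neighbour in the other video maps back to it. A toy sketch of that check, using made-up 1-D "embeddings" (the real method uses learned high-dimensional embeddings and a differentiable soft version of this):

```python
def nearest(query, frames):
    """Index of the frame embedding closest to the query."""
    return min(range(len(frames)), key=lambda j: abs(frames[j] - query))

def cycle_consistent(i, video_u, video_v):
    """Does frame i of video U cycle back to itself through video V?"""
    j = nearest(video_u[i], video_v)  # U -> V
    k = nearest(video_v[j], video_u)  # V -> U (cycle back)
    return k == i

# Two hypothetical videos of the same action, roughly aligned in time:
u = [0.1, 0.5, 0.9]
v = [0.12, 0.48, 0.95]
print(all(cycle_consistent(i, u, v) for i in range(len(u))))  # True
```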

ML Review 💙💛 (@ml_review) 's Twitter Profile Photo

Sparse Networks from Scratch: Faster Training without Losing Performance
By Tim Dettmers

Finds "winning lottery tickets" – sparse configurations with 20% weights and similar performance.
SoTA on MNIST, CIFAR-10, and ImageNet-2012 among sparse methods

arxiv.org/abs/1907.04840
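To make the "20% of the weights" figure concrete, here is a generic magnitude-based sparsification sketch — note this is an illustration of weight sparsity, not Dettmers' actual sparse-momentum algorithm (which maintains sparsity throughout training and redistributes weights dynamically):

```python
def sparsify(weights, density=0.2):
    """Keep the `density` fraction of largest-magnitude weights, zero the rest."""
    k = max(1, int(len(weights) * density))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Hypothetical layer with 10 weights; at 20% density only the two
# largest-magnitude weights (0.9 and -0.8) survive.
w = [0.9, -0.05, 0.02, -0.8, 0.1, 0.03, -0.01, 0.04, 0.06, -0.07]
print(sparsify(w))
```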
Google DeepMind (@googledeepmind) 's Twitter Profile Photo

We built bsuite to do two things:

1. Offer clear, informative, and scalable experiments that capture key issues in RL
2. Study agent behaviour through performance on shared benchmarks

You can get started with bsuite in this colab: colab.research.google.com/drive/1rU20zJ2…