What happened in Machine Learning in the last 6 months?

matiskay
February 19, 2016

Everything that happened in Machine Learning in the second half of 2015 and the beginning of 2016.


Transcript

  1. Unsupervised Learning (Aprendizaje No Supervisado)
     • No knowledge of the output class or value.
     • Data is unlabelled or its value is unknown.
     • Goal: determine data patterns/groupings.
     • Algorithms: k-means, genetic algorithms, clustering approaches, etc.
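     As a concrete illustration of the unlabelled setting, here is a minimal k-means sketch; scikit-learn and the toy data are assumptions for illustration, not part of the deck.

        # Minimal k-means sketch: group unlabelled 2-D points into k clusters.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.RandomState(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)),    # unlabelled data: two loose
                       rng.normal(5, 1, (50, 2))])   # groups of points, no class given

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        print(kmeans.cluster_centers_)   # discovered group centres
        print(kmeans.labels_[:10])       # cluster assignment for the first 10 points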
  2. Supervised Learning (Aprendizaje Supervisado)
     • Knowledge of the output: learning in the presence of an “expert” / teacher.
     • Data is labelled with a class or value.
     • Goal: predict the class or value label.
     • Algorithms: Neural Networks, Support Vector Machines, Decision Trees, Bayesian classifiers, etc.
     [Slide figure: data points labelled with the classes C1 and C2.]
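     A minimal supervised sketch to contrast with the previous one; scikit-learn, the iris dataset and the decision tree are illustrative choices, the deck only lists algorithm families.

        # Supervised learning: fit a classifier on labelled data, predict labels for unseen data.
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)            # y holds the "expert" labels
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
        print(clf.score(X_test, y_test))             # accuracy on held-out labelled data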
  3. Reinforcement Learning (Aprendizaje por Refuerzos)
     Reinforcement Learning is the area of Machine Learning concerned with the actions that software agents ought to take in a particular environment in order to maximize rewards.
     Reference: Reinforcement Learning: An Introduction, Richard S. Sutton and Andrew G. Barto. https://webdocs.cs.ualberta.ca/~sutton/book/ebook/the-book.html
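     A tabular Q-learning sketch of the agent/environment/reward loop; the corridor environment and the hyper-parameters are assumptions made up for illustration.

        # Tabular Q-learning on a 5-state corridor: reach the right end to get reward 1.
        import random

        N_STATES, ACTIONS = 5, (-1, +1)          # move left or right
        alpha, gamma, eps = 0.5, 0.9, 0.1
        Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

        for episode in range(500):
            s = 0
            while s != N_STATES - 1:
                # epsilon-greedy action selection
                a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
                s_next = min(max(s + a, 0), N_STATES - 1)
                r = 1.0 if s_next == N_STATES - 1 else 0.0
                # update Q(s, a) toward r + gamma * max_a' Q(s', a')
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
                s = s_next

        print(max(ACTIONS, key=lambda b: Q[(0, b)]))   # learned greedy action from the start state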
  4. TensorFlow Features
     • Deep Flexibility: TensorFlow isn't a rigid neural networks library.
     • True Portability: runs on CPUs or GPUs, and on desktop, server, or mobile computing platforms.
     • Auto-Differentiation.
     • Language Options: Python and C++.
     • Maximize Performance.
     TensorFlow: https://www.tensorflow.org/
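     A small sketch of graph construction and auto-differentiation, written against the TF 1.x-style Python API that was current at the time of the talk.

        # Build a tiny graph and let TensorFlow differentiate it.
        import tensorflow as tf

        x = tf.constant(3.0)
        y = x * x + 2.0 * x            # y = x^2 + 2x
        dy_dx = tf.gradients(y, x)     # auto-differentiation: dy/dx = 2x + 2

        with tf.Session() as sess:
            print(sess.run([y, dy_dx]))   # -> [15.0, [8.0]]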
  5. MXNet
     • Lightweight, memory efficient and portable to smart devices.
     • MXNet is a deep learning framework designed for both efficiency and flexibility.
     • Support for Python, R, C++ and Julia.
     • Cloud-friendly and directly compatible with S3, HDFS, and Azure.
     • Training a deep net on 14 million images using a single machine.
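     A tiny NDArray sketch against the MXNet Python API of the 2015/2016 releases; treat the exact calls as an assumption.

        # Element-wise arithmetic on MXNet NDArrays; the context decides where it runs.
        import mxnet as mx

        a = mx.nd.ones((2, 3))        # 2x3 array of ones, on CPU by default
        b = a * 2 + 1                 # arithmetic scheduled by MXNet's execution engine
        print(b.asnumpy())            # copy back to NumPy for inspection

        # The same code targets a GPU just by changing the context:
        # a = mx.nd.ones((2, 3), ctx=mx.gpu(0))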
  6. cuDNN 4
     • The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.
     • Train neural networks up to 14x faster using Google’s Batch Normalization technique.
     • Increase training and inference performance for convolutional layers by up to 2x with the new 2D tiled FFT algorithm.
     • Accelerate inference performance for convolutional layers on small batch sizes by up to 2x on Maxwell-architecture GPUs.
     cuDNN 4: https://developer.nvidia.com/cudnn
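     What batch normalization computes, sketched in NumPy; cuDNN 4 ships fast GPU kernels for this, and the function below is only an illustration, not cuDNN's API.

        # Normalize each feature over the batch, then scale and shift.
        import numpy as np

        def batch_norm(x, gamma, beta, eps=1e-5):
            mean = x.mean(axis=0)
            var = x.var(axis=0)
            x_hat = (x - mean) / np.sqrt(var + eps)
            return gamma * x_hat + beta

        x = np.random.randn(32, 8)               # a batch of 32 samples, 8 features each
        out = batch_norm(x, np.ones(8), np.zeros(8))
        print(out.mean(axis=0).round(3))         # ~0 mean per feature after normalization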
  7. A Neural Algorithm of Artistic Style
     Paper: http://arxiv.org/abs/1508.06576
     Code: https://github.com/jcjohnson/neural-style
     “The key finding of this paper is that the representations of content and style in the Convolutional Neural Network are separable.”
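     The paper captures style with Gram matrices of CNN feature maps; this NumPy sketch shows only that computation, on a made-up feature map rather than real CNN activations.

        # Gram matrix of a (channels, height, width) feature map = style representation.
        import numpy as np

        def gram_matrix(features):
            c, h, w = features.shape
            f = features.reshape(c, h * w)
            return f @ f.T / (h * w)       # channel-to-channel correlations

        feats = np.random.rand(64, 32, 32)     # stand-in for one conv layer's activations
        print(gram_matrix(feats).shape)        # (64, 64)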
  8. Large Scale Visual Recognition Challenge 2015 (ILSVRC2015) and COCO 2015
     Paper: Deep Residual Learning for Image Recognition, http://arxiv.org/abs/1512.03385
     • ImageNet Classification: “ultra-deep” 152-layer net.
     • ImageNet Detection: 16% better than 2nd.
     • ImageNet Localization: 27% better than 2nd.
     • COCO Detection: 11% better than 2nd.
     • COCO Segmentation: 12% better than 2nd.
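     The core idea of the winning residual networks is y = F(x) + x; this NumPy sketch uses a toy two-layer F, whereas the paper's blocks are convolutional.

        # A residual block only has to learn the residual F(x) = y - x.
        import numpy as np

        def residual_block(x, W1, W2):
            h = np.maximum(0, x @ W1)        # F(x): linear, ReLU, linear
            fx = h @ W2
            return np.maximum(0, fx + x)     # shortcut connection adds the input back

        x = np.random.randn(4, 8)                        # batch of 4 vectors of width 8
        W1, W2 = np.random.randn(8, 8), np.random.randn(8, 8)
        print(residual_block(x, W1, W2).shape)           # (4, 8)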
  9. Mastering the game of Go with deep neural networks and tree search
     Link: http://googleresearch.blogspot.pe/2016/01/alphago-mastering-ancient-game-of-go.html
  10. Predict the destination of taxi trips based on initial partial trajectories
      Link: http://blog.kaggle.com/2015/07/27/taxi-trajectory-winners-interview-1st-place-team-%F0%9F%9A%95/
  11. How to stay up to date in Machine Learning?
      • Reddit: http://www.reddit.com/r/machinelearning
      • Data Tau: http://www.datatau.com/
      • Kaggle Blog: http://blog.kaggle.com/
      • R Bloggers: http://www.r-bloggers.com/
      • NLP People: https://nlppeople.com/category/blog/