Introduction to Deep Learning with TensorFlow at PyData Ann Arbor


Sebastian Raschka

August 24, 2017
Transcript

  1. 3 (Slide: title page of the TensorFlow whitepaper) TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (Preliminary White Paper, November 9, 2015), Martín Abadi et al., Google Research.

     https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45166.pdf

     Shown alongside the paper's Figure 1, an example TensorFlow code fragment and its computation graph (x, W → MatMul; + b → Add; → ReLU → … → C).
  2. 4 Tensors?  https://sebastianraschka.com/pdf/books/dlb/appendix_g_tensorflow.pdf

     TensorFlow excels at performing highly parallelized numerical computations. In addition, TensorFlow also supports distributed systems as well as mobile computing platforms, including Android and Apple's iOS.

     But what is a tensor? In simplifying terms, we can think of tensors as multidimensional arrays of numbers, as a generalization of scalars, vectors, and matrices:

     1. Scalar: R
     2. Vector: R^n
     3. Matrix: R^n × R^m
     4. 3-Tensor: R^n × R^m × R^p
     5. …

     When we describe tensors, we refer to their "dimensions" as the rank (or order) of a tensor, which is not to be confused with the dimensions of a matrix. For instance, an m × n matrix, where m is the number of rows and n is the number of columns, would be a special case of a rank-2 tensor. A visual explanation of tensors and their ranks is given in the figure below.

     (Figure: rank-0 tensor, dimensions [ ], a scalar; rank-1 tensor, dimensions [5], a vector, e.g., Index [2]; rank-2 tensor, dimensions [5, 3], a matrix, e.g., Index [0, 0]; rank-3 tensor, dimensions [4, 4, 2], e.g., Index [0, 2, 1].)
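As a quick illustration of tensor ranks, here is a minimal sketch using NumPy and TensorFlow (the variable names are just for this example):

    import numpy as np
    import tensorflow as tf

    scalar = np.array(5.)                    # rank 0, shape ()
    vector = np.array([1., 2., 3., 4., 5.])  # rank 1, shape (5,)
    matrix = np.ones((5, 3))                 # rank 2, shape (5, 3)
    tensor3 = np.ones((4, 4, 2))             # rank 3, shape (4, 4, 2)

    print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3

    t = tf.constant(tensor3)   # TensorFlow infers the same shape
    print(t.shape)             # (4, 4, 2)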
  3. 6

     pip install tensorflow
     pip install tensorflow-gpu

     Setup help:
     • https://www.tensorflow.org/install/
     • https://sebastianraschka.com/pdf/books/dlb/appendix_h_cloud-computing.pdf
  4. 8

     logits = np.zeros([num_train_examples, num_hidden])
     for i, row in enumerate(X):        # row = training_example
         for j, col in enumerate(W.T):  # col = features
             vector_dot_product = 0
             for a, b in zip(row, col):
                 vector_dot_product += a*b
             logits[i, j] = vector_dot_product

     np.allclose(logits, np.dot(X, W))
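To run the loop above end to end, here is a self-contained check (the array sizes below are made up for illustration):

    import numpy as np

    num_train_examples, num_features, num_hidden = 4, 3, 2
    rng = np.random.RandomState(123)
    X = rng.rand(num_train_examples, num_features)   # training examples
    W = rng.rand(num_features, num_hidden)           # weight matrix

    logits = np.zeros([num_train_examples, num_hidden])
    for i, row in enumerate(X):
        for j, col in enumerate(W.T):
            logits[i, j] = sum(a*b for a, b in zip(row, col))

    print(np.allclose(logits, np.dot(X, W)))  # True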
  5. 9 (Slide: the TensorFlow whitepaper title page again, returning to Figure 1 — an example TensorFlow code fragment and its computation graph: x, W → MatMul; + b → Add; → ReLU → … → C.)

     https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45166.pdf
  6. 10 Computation Graphs

     a(x, w, b) = relu(w*x + b), where relu is the activation function, w is the weight parameter, x is a training example with 1 input feature, and b is the bias term ("threshold").
  7. 11 REctified Linear Unit

     import matplotlib.pyplot as plt
     import numpy as np

     def relu(x):
         # max(0, x)
         return x * (x > 0)

     x = np.arange(-10, 10)
     plt.plot(x, relu(x))

     relu(x) = x if x > 0, 0 otherwise
     d relu(x)/dx = 1 if x > 0, 0 otherwise
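A quick numerical sanity check of that derivative, as a sketch (the finite-difference step h and the test points are arbitrary):

    import numpy as np

    def relu(x):
        return x * (x > 0)

    def relu_grad(x):
        return (np.asarray(x) > 0).astype(float)

    h = 1e-6
    for x0 in [-2.0, 3.0]:
        numeric = (relu(x0 + h) - relu(x0 - h)) / (2 * h)
        print(x0, numeric, relu_grad(x0))
    # -2.0 -> 0.0; 3.0 -> 1.0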
  8. 13 Computation Graphs

     a(x, w, b) = relu(w*x + b), decomposed into the graph: u = w*x, v = u + b, a = relu(v) (x, w → * → u; u, b → + → v; v → relu → a).
  9. 14 Computation Graphs   a(x, w, b) = relu(w*x + b)

     import tensorflow as tf

     g = tf.Graph()
     with g.as_default() as g:
         x = tf.placeholder(dtype=tf.float32, shape=None, name='x')
         w = tf.Variable(initial_value=2, dtype=tf.float32, name='w')
         b = tf.Variable(initial_value=1, dtype=tf.float32, name='b')
         u = x * w
         v = u + b
         a = tf.nn.relu(v)
         init_op = tf.global_variables_initializer()

     print(x, w, b, u, v, a)
  10. 15 Computation Graphs

      import tensorflow as tf

      g = tf.Graph()
      with g.as_default() as g:
          x = tf.placeholder(dtype=tf.float32, shape=None, name='x')
          w = tf.Variable(initial_value=2, dtype=tf.float32, name='w')
          b = tf.Variable(initial_value=1, dtype=tf.float32, name='b')
          u = x * w
          v = u + b
          a = tf.nn.relu(v)

      print(x, w, b, u, v, a)

      Output:
      Tensor("x:0", dtype=float32)
      <tf.Variable 'w:0' shape=() dtype=float32_ref>
      <tf.Variable 'b:0' shape=() dtype=float32_ref>
      Tensor("mul:0", dtype=float32)
      Tensor("add:0", dtype=float32)
      Tensor("Relu:0", dtype=float32)
  11. 16 Computation Graphs   u = w*x, v = u + b, a = relu(v), with w = 2, b = 1

      with tf.Session(graph=g) as sess:
          sess.run(init_op)
          b_res = sess.run('b:0')
          print(b_res)

      Output: 1.0
  12. 17 TensorBoard

      with tf.Session(graph=g) as sess:
          sess.run(init_op)
          file_writer = tf.summary.FileWriter(logdir='logs/graph-1', graph=g)

      In your terminal:
      $ pip install tensorboard
      $ tensorboard --logdir logs/graph-1
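TensorBoard can also track scalar values over time, not just the graph structure. A minimal sketch, reusing g, a, and init_op from the slides above (the summary tag 'a_value' is made up for illustration):

    with g.as_default() as g:
        summary_op = tf.summary.scalar('a_value', a)

    with tf.Session(graph=g) as sess:
        sess.run(init_op)
        file_writer = tf.summary.FileWriter(logdir='logs/graph-1', graph=g)
        for step in range(3):
            summary = sess.run(summary_op, feed_dict={'x:0': float(step)})
            file_writer.add_summary(summary, global_step=step)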
  13. 18 (image-only slide)

  14. 19 Computation Graphs   u = w*x, v = u + b, a = relu(v), with x = 3.0, w = 2, b = 1

      with tf.Session(graph=g) as sess:
          sess.run(tf.global_variables_initializer())
          u_res, v_res, a_res = sess.run([u, v, a], feed_dict={'x:0': 3.})
          print(u_res, v_res, a_res)

      Output: 6.0 7.0 7.0
  15. 20 Computation Graphs and Derivatives

      u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.
      Calculus refresher: https://sebastianraschka.com/pdf/books/dlb/appendix_d_calculus.pdf
  16. 21 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.
      Calculus refresher: https://sebastianraschka.com/pdf/books/dlb/appendix_d_calculus.pdf
  17. 22 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.
  18. 23 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.
  19. 24 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.   ∂a/∂b = ?
  20. 25 Chain Rule

      Using the Leibniz notation: d/dx f(g(x)) = df/dg · dg/dx.
      (The notation above is equivalent to writing F′(x) = f′(g(x)) g′(x).)
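To make the rule concrete, a small numerical check (a sketch; the functions f, g and the point x0 are chosen arbitrarily for illustration):

    import numpy as np

    f, df = np.sin, np.cos   # outer function and its derivative
    g = lambda x: x**2       # inner function
    dg = lambda x: 2*x       # its derivative

    x0, h = 1.5, 1e-6
    numeric = (f(g(x0 + h)) - f(g(x0 - h))) / (2 * h)   # finite difference of f(g(x))
    chain_rule = df(g(x0)) * dg(x0)                      # f'(g(x0)) * g'(x0)
    print(np.allclose(numeric, chain_rule))              # True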
  21. 26 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.   ∂a/∂b = ?
  22. 27 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.   ∂a/∂b = ∂v/∂b · ∂a/∂v
  23. 28 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.   ∂a/∂b = ∂v/∂b · ∂a/∂v;   ∂a/∂w = ?
  24. 29 u = w*x, v = u + b, a = relu(v), with w = 2, b = 1.   ∂a/∂b = ∂v/∂b · ∂a/∂v;   ∂a/∂w = ∂u/∂w · ∂a/∂u = ∂u/∂w · ∂v/∂u · ∂a/∂v
  25. 30 u = w*x, v = u + b, a = relu(v), with x = 3.0, w = 2, b = 1.
  26. 31 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂b = ∂v/∂b · ∂a/∂v;   ∂a/∂w = ∂u/∂w · ∂a/∂u = ∂u/∂w · ∂v/∂u · ∂a/∂v
  27. 32 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂b = ∂v/∂b · ∂a/∂v;   ∂a/∂w = ∂u/∂w · ∂v/∂u · ∂a/∂v;   ∂a/∂v = ?
      Recall: relu(x) = x if x > 0, 0 otherwise;   d relu(x)/dx = 1 if x > 0, 0 otherwise.
  28. 33 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = ?;   ∂v/∂b = ?
      (From Appendix D — Calculus and Differentiation Primer, common differentiation rules: Sum Rule: f(x) + g(x) → f′(x) + g′(x); Difference Rule: f(x) − g(x) → f′(x) − g′(x); Product Rule: f(x)g(x) → f(x)g′(x) + f′(x)g(x); Quotient Rule: f(x)/g(x) → [g(x)f′(x) − f(x)g′(x)] / [g(x)]².)
  29. 34 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1 + 0 = 1;   ∂v/∂b = 0 + 1 = 1
  30. 35 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = ?
  31. 36 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = x = 3
  32. 37 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = 3;   ∂a/∂b = ?
  33. 38 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = 3;   ∂a/∂b = ∂v/∂b · ∂a/∂v = 1*1 = 1
  34. 39 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = 3;   ∂a/∂b = 1;   ∂a/∂w = ?
  35. 40 u = w*x = 6, v = u + b = 7, a = relu(v) = 7, with x = 3, w = 2, b = 1.
      ∂a/∂v = 1;   ∂v/∂u = 1;   ∂v/∂b = 1;   ∂u/∂w = 3;   ∂a/∂b = 1;   ∂a/∂w = ∂u/∂w · ∂v/∂u · ∂a/∂v = 3*1*1 = 3
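These hand-derived values can be verified with a quick finite-difference check (a sketch; the step size h is arbitrary):

    def forward(x, w, b):
        u = w * x
        v = u + b
        return max(0., v)   # relu

    x, w, b, h = 3., 2., 1., 1e-6
    da_dw = (forward(x, w + h, b) - forward(x, w - h, b)) / (2 * h)
    da_db = (forward(x, w, b + h) - forward(x, w, b - h)) / (2 * h)
    print(round(da_dw, 4), round(da_db, 4))   # 3.0 1.0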
  36. 41 (image-only slide)

  37. 42

      with g.as_default() as g:
          d_a_w = tf.gradients(a, w)
          d_b_w = tf.gradients(a, b)

      with tf.Session(graph=g) as sess:
          sess.run(init_op)
          dw, db = sess.run([d_a_w, d_b_w], feed_dict={'x:0': 3})
          print(dw, db)

      Output: [3.0] [1.0]
  38. 43

      g = tf.Graph()
      with g.as_default() as g:
          x = tf.placeholder(dtype=tf.float32, shape=None, name='x')
          w = tf.Variable(initial_value=2, dtype=tf.float32, name='w')
          b = tf.Variable(initial_value=1, dtype=tf.float32, name='b')
          u = x * w
          v = u + b
          a = tf.nn.relu(v)
          d_a_w = tf.gradients(a, w)
          d_b_w = tf.gradients(a, b)

      with tf.Session(graph=g) as sess:
          sess.run(tf.global_variables_initializer())
          res = sess.run([d_a_w, d_b_w], feed_dict={'x:0': 3})
  39. 45 reshape

      z_1^(h) = a_0^(in) w_0,1^(h) + a_1^(in) w_1,1^(h) + ⋯ + a_m^(in) w_m,1^(h)
      a_1^(h) = σ(z_1^(h))
      a_1^(out) = σ(z_1^(out))
      (the net input of the first hidden unit and the activations of the hidden and output layers)
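In code, the hidden-layer computation above is a matrix multiplication plus a bias, followed by the activation. A minimal NumPy sketch (the layer sizes and the sigmoid activation are assumptions for illustration):

    import numpy as np

    def sigmoid(z):
        return 1. / (1. + np.exp(-z))

    n_samples, n_features, n_hidden = 5, 784, 128
    X = np.random.rand(n_samples, n_features)          # reshaped input examples
    W_h = np.random.rand(n_features, n_hidden) * 0.01  # hidden-layer weights
    b_h = np.zeros(n_hidden)                           # hidden-layer biases

    Z_h = np.dot(X, W_h) + b_h   # net inputs, shape (n_samples, n_hidden)
    A_h = sigmoid(Z_h)           # hidden-layer activations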
  40. 48 (very) low-level backprop

      # Loss
      loss = tf.nn.softmax_cross_entropy_with_logits(logits=out_z, labels=tf_y)
      cost = tf.reduce_mean(loss, name='cost')

      # input/output dim: [n_samples, n_classlabels]
      sigma_out = (out_act - tf_y) / batch_size

      # input/output dim: [n_samples, n_hidden_1]
      softmax_derivative_h1 = h1_act * (1. - h1_act)

      # input dim: [n_samples, n_classlabels] dot [n_classlabels, n_hidden]
      # output dim: [n_samples, n_hidden]
      sigma_h = (tf.matmul(sigma_out, tf.transpose(weights['out'])) *
                 softmax_derivative_h1)

      # input dim: [n_features, n_samples] dot [n_samples, n_hidden]
      # output dim: [n_features, n_hidden]
      grad_w_h1 = tf.matmul(tf.transpose(tf_x), sigma_h)
      grad_b_h1 = tf.reduce_sum(sigma_h, axis=0)

      # input dim: [n_hidden, n_samples] dot [n_samples, n_classlabels]
      # output dim: [n_hidden, n_classlabels]
      grad_w_out = tf.matmul(tf.transpose(h1_act), sigma_out)
      grad_b_out = tf.reduce_sum(sigma_out, axis=0)

      # Update weights
      upd_w_1 = tf.assign(weights['h1'], weights['h1'] - learning_rate * grad_w_h1)
      upd_b_1 = tf.assign(biases['b1'], biases['b1'] - learning_rate * grad_b_h1)
      upd_w_out = tf.assign(weights['out'], weights['out'] - learning_rate * grad_w_out)
      upd_b_out = tf.assign(biases['out'], biases['out'] - learning_rate * grad_b_out)

      train = tf.group(upd_w_1, upd_b_1, upd_w_out, upd_b_out, name='train')
  41. 49

      ##########################
      ### TRAINING & EVALUATION
      ##########################

      with tf.Session(graph=g) as sess:
          sess.run(tf.global_variables_initializer())

          for epoch in range(training_epochs):
              avg_cost = 0.
              total_batch = mnist.train.num_examples // batch_size

              for i in range(total_batch):
                  batch_x, batch_y = mnist.train.next_batch(batch_size)
                  _, c = sess.run(['train', 'cost:0'],
                                  feed_dict={'features:0': batch_x,
                                             'targets:0': batch_y})
  42. 50 low-level backprop

      # Loss
      loss = tf.nn.softmax_cross_entropy_with_logits(logits=out_z, labels=tf_y)
      cost = tf.reduce_mean(loss, name='cost')

      ##################
      # Backpropagation
      ##################

      # Get Gradients
      dc_dw_out, dc_db_out = tf.gradients(cost, [weights['out'], biases['out']])
      dc_dw_1, dc_db_1 = tf.gradients(cost, [weights['h1'], biases['b1']])

      # Update Weights
      upd_w_1 = tf.assign(weights['h1'], weights['h1'] - learning_rate * dc_dw_1)
      upd_b_1 = tf.assign(biases['b1'], biases['b1'] - learning_rate * dc_db_1)
      upd_w_out = tf.assign(weights['out'], weights['out'] - learning_rate * dc_dw_out)
      upd_b_out = tf.assign(biases['out'], biases['out'] - learning_rate * dc_db_out)

      train = tf.group(upd_w_1, upd_b_1, upd_w_out, upd_b_out, name='train')
  43. 51 "convenient" backprop

      # Loss
      loss = tf.nn.softmax_cross_entropy_with_logits(logits=out_z, labels=tf_y)
      cost = tf.reduce_mean(loss, name='cost')

      ##################
      # Backpropagation
      ##################

      optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
      train = optimizer.minimize(cost, name='train')
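Because the optimizer object encapsulates both the gradient computation and the update rule, swapping in a different update rule is a one-line change. For example, a sketch using Adam instead of plain gradient descent (the learning rate here is illustrative):

    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train = optimizer.minimize(cost, name='train')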
  44. 52

      ##########################
      ### TRAINING & EVALUATION
      ##########################

      with tf.Session(graph=g) as sess:
          sess.run(tf.global_variables_initializer())

          for epoch in range(training_epochs):
              avg_cost = 0.
              total_batch = mnist.train.num_examples // batch_size

              for i in range(total_batch):
                  batch_x, batch_y = mnist.train.next_batch(batch_size)
                  _, c = sess.run(['train', 'cost:0'],
                                  feed_dict={'features:0': batch_x,
                                             'targets:0': batch_y})
  45. 55 Defining your wrapper functions manually

      def fully_connected(input_tensor, output_nodes,
                          activation=None, seed=None,
                          name='fully_connected'):
          with tf.variable_scope(name):
              input_nodes = input_tensor.get_shape().as_list()[1]
              weights = tf.Variable(tf.truncated_normal(shape=(input_nodes, output_nodes),
                                                        mean=0.0, stddev=0.01,
                                                        dtype=tf.float32, seed=seed),
                                    name='weights')
              biases = tf.Variable(tf.zeros(shape=[output_nodes]), name='biases')

              act = tf.matmul(input_tensor, weights) + biases
              if activation is not None:
                  act = activation(act)
              return act
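The wrapper keeps the graph definition compact; a sketch of how it might be used for a small 2-layer network (the placeholder name, layer sizes, and scope names are assumptions for illustration):

    g = tf.Graph()
    with g.as_default():
        tf_x = tf.placeholder(tf.float32, [None, 784], name='features')
        h1 = fully_connected(tf_x, 128, activation=tf.nn.relu, name='hidden_1')
        out = fully_connected(h1, 10, activation=None, name='output')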
  46. 56 Using tensorflow.layers

      g = tf.Graph()
      with g.as_default():
          # Input data
          tf_x = tf.placeholder(tf.float32, [None, n_input], name='features')
          tf_y = tf.placeholder(tf.float32, [None, n_classes], name='targets')

          # Multilayer perceptron
          layer_1 = tf.layers.dense(tf_x, n_hidden_1, activation=tf.nn.relu,
                                    kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
          layer_2 = tf.layers.dense(layer_1, n_hidden_2, activation=tf.nn.relu,
                                    kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
          out_layer = tf.layers.dense(layer_2, n_classes, activation=None)
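To make this a trainable model, the graph still needs a loss and a training operation, just as on the earlier slides. A minimal sketch that would continue inside the same `with g.as_default():` block (learning_rate is an assumed hyperparameter):

    loss = tf.nn.softmax_cross_entropy_with_logits(logits=out_layer, labels=tf_y)
    cost = tf.reduce_mean(loss, name='cost')
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train = optimizer.minimize(cost, name='train')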
  47. 57 Feeding Data into the Graph

      From Python via placeholders:
      - Python pickle
      - NumPy .npz archives (https://github.com/rasbt/deep-learning-book/blob/master/code/model_zoo/image-data-chunking-npz.ipynb)
      - HDF5 (https://github.com/rasbt/deep-learning-book/blob/master/code/model_zoo/image-data-chunking-hdf5.ipynb)
      - CSV
      - …

      sess.run([…], feed_dict={'x:0': …, 'y:0': …, …})

      Using input pipelines and queues:
      - Reading data from TFRecords files (https://github.com/rasbt/deep-learning-book/blob/master/code/model_zoo/tfrecords.ipynb)
      - Queues for loading raw images (https://github.com/rasbt/deep-learning-book/blob/master/code/model_zoo/file-queues.ipynb)

      More info: https://www.tensorflow.org/programmers_guide/reading_data
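For the placeholder route, feeding minibatches from a NumPy .npz archive looks roughly like this (a sketch; the archive name, array keys, and batch size are made up for illustration):

    import numpy as np

    data = np.load('train_data.npz')          # hypothetical archive
    X, y = data['features'], data['targets']  # hypothetical array keys
    batch_size = 64

    with tf.Session(graph=g) as sess:
        sess.run(tf.global_variables_initializer())
        for start in range(0, X.shape[0], batch_size):
            batch_x = X[start:start + batch_size]
            batch_y = y[start:start + batch_size]
            _, c = sess.run(['train', 'cost:0'],
                            feed_dict={'features:0': batch_x,
                                       'targets:0': batch_y})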
  48. 58 (image-only slide)

  49. 62 Thanks for attending!

      Contact:
      o E-mail: [email protected]
      o Website: http://sebastianraschka.com
      o Twitter: @rasbt
      o GitHub: rasbt

      Slides (Speaker Deck): https://speakerdeck.com/rasbt/introduction-to-deep-learning-with-tensorflow-at-pydata-ann-arbor
      Code snippets (GitHub): https://github.com/rasbt/pydata-annarbor2017-dl-tutorial

  50. 63 Thanks for attending! Questions?

      Contact:
      o E-mail: [email protected]
      o Website: http://sebastianraschka.com
      o Twitter: @rasbt
      o GitHub: rasbt

      Slides (Speaker Deck): https://speakerdeck.com/rasbt/introduction-to-deep-learning-with-tensorflow-at-pydata-ann-arbor
      Code snippets (GitHub): https://github.com/rasbt/pydata-annarbor2017-dl-tutorial