

Andreas Mueller
September 29, 2015

Machine Learning With Scikit-Learn - Pydata Strata NYC 2015

Introduction to machine learning and scikit-learn, including basic API, grid-search, pipelines, model complexity and in-depth review of some supervised models.



Transcript

  1. 4 Classification, Regression, Clustering, Semi-Supervised Learning, Feature Selection, Feature Extraction,

     Manifold Learning, Dimensionality Reduction, Kernel Approximation, Hyperparameter Optimization, Evaluation Metrics, Out-of-core learning, …
  2. 7 Hi Andy, I just received an email from the

    first tutorial speaker, presenting right before you, saying he's ill and won't be able to make it. I know you have already committed yourself to two presentations, but is there anyway you could increase your tutorial time slot, maybe just offer time to try out what you've taught? Otherwise I have to do some kind of modern dance interpretation of Python in data :-) -Leah Hi Andreas, I am very interested in your Machine Learning background. I work for X Recruiting who have been engaged by Z, a worldwide leading supplier of Y. We are expanding the core engineering team and we are looking for really passionate engineers who want to create their own story and help millions of people. Can we find a time for a call to chat for a few minutes about this? Thanks
  3. 8 (repeats the two emails from the previous slide)
  4. 10 Representing Data: X is a data matrix, one row per sample (here 7 samples with 5 features each):

     X = [ 1.1 2.2 3.4 5.6 1.0
           6.7 0.5 0.4 2.6 1.6
           2.4 9.3 7.3 6.4 2.8
           1.5 0.0 4.3 8.3 3.4
           0.5 3.5 8.1 3.6 4.6
           5.1 9.7 3.5 7.9 5.1
           3.7 7.8 2.6 3.2 6.3 ]
  5. 11 Representing Data (same matrix X; one row of X is one sample)
  6. 12 Representing Data (same matrix X; one row is one sample, one column is one feature)
  7. 13 Representing Data (same matrix X, now with an output vector y = [1.6 2.7 4.4 0.5 0.2 5.6 6.7]: the outputs / labels, one entry per sample)
  8. 14 Training and Testing Data: the matrix X and the label vector y = [1.6 2.7 4.4 0.5 0.2 5.6 6.7] from the previous slides.
  9. 15 Training and Testing Data: X and y are split row-wise into a training set and a test set.
  10. 16 Training and Testing Data (splitting X and y into a training set and a test set):

      from sklearn.cross_validation import train_test_split
      X_train, X_test, y_train, y_test = train_test_split(X, y)

      (In current scikit-learn, train_test_split lives in sklearn.model_selection.)
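What train_test_split does can be sketched in plain Python: shuffle the sample indices, then hold out a fraction (25% by default) as the test set. The function simple_train_test_split below is a hypothetical name for illustration, not scikit-learn's implementation:

```python
import random

def simple_train_test_split(X, y, test_size=0.25, seed=0):
    """Shuffle the sample indices, then hold out the first test_size fraction."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(X) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ([X[i] for i in train_idx], [X[i] for i in test_idx],
            [y[i] for i in train_idx], [y[i] for i in test_idx])

X = [[i, i + 1] for i in range(8)]   # 8 samples, 2 features each
y = list(range(8))                   # one label per sample
X_train, X_test, y_train, y_test = simple_train_test_split(X, y)
```

Shuffling before splitting matters: if the data is ordered (say, by class), an unshuffled split would put entire classes into only one of the two sets.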
  11. 20 Supervised Machine Learning (diagram): training data and training labels go into the model during training; the model maps test data to predictions, which are compared with the test labels in the evaluation step; doing well on unseen test data is generalization.
  12. 22 Training a model and predicting on the test data (diagram with code):

      clf = RandomForestClassifier()
      clf.fit(X_train, y_train)
      y_pred = clf.predict(X_test)
  13. 23 Adding evaluation on the test set:

      clf = RandomForestClassifier()
      clf.fit(X_train, y_train)
      y_pred = clf.predict(X_test)
      clf.score(X_test, y_test)
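For classifiers, score computes the mean accuracy: the fraction of test samples whose prediction matches the true label. A small sketch of that computation, using hypothetical labels in plain Python lists:

```python
# clf.score(X_test, y_test) for a classifier is mean accuracy:
# predict, then compare against the true labels.
y_test = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]  # pretend these came from clf.predict(X_test)

accuracy = sum(t == p for t, p in zip(y_test, y_pred)) / len(y_test)
print(accuracy)  # 4 of 5 predictions match, so 0.8
```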
  14. 26 Unsupervised Transformations (diagram with code): fit on the training data, then transform other data:

      pca = PCA()
      pca.fit(X_train)
      X_new = pca.transform(X_test)
  15. 27 Basic API

      estimator.fit(X, [y])
      estimator.predict: Classification, Regression, Clustering
      estimator.transform: Preprocessing, Dimensionality reduction, Feature selection, Feature extraction
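The fit/transform contract can be illustrated with a toy transformer. MeanCenterer below is a hypothetical class (not part of scikit-learn) that, like StandardScaler, learns statistics in fit and applies them in transform:

```python
class MeanCenterer:
    """Toy transformer following the scikit-learn API:
    fit() learns per-feature means from the training data,
    transform() subtracts those means from any data."""

    def fit(self, X, y=None):
        self.means_ = [sum(col) / len(col) for col in zip(*X)]
        return self  # fit returns self, so fit(...).transform(...) chains

    def transform(self, X):
        return [[x - m for x, m in zip(row, self.means_)] for row in X]

X_train = [[1.0, 10.0], [3.0, 30.0]]
X_test = [[2.0, 20.0]]
centerer = MeanCenterer().fit(X_train)
print(centerer.transform(X_test))  # centered using the *training* means
```

The key point of the API: statistics are learned once from the training data, then applied unchanged to the test data, so no information leaks from the test set into the model.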
  16. 29 IMDB Movie Reviews Data

      Review: "One of the worst movies I've ever rented. Sorry it had one of my favorite actors on it (Travolta) in a nonsense role. In fact, anything made sense in this movie. Who can say there was true love between Eddy and Maureen? Don't you remember the beginning of the movie? Is she so lovely? Ask her daughters. I don't think so."
      Label: negative
      Training data: 12,500 positive, 12,500 negative reviews
  17. 31 Bag Of Word Representations (CountVectorizer / TfidfVectorizer): "This is how you get ants."
  18. 32 Bag Of Word Representations: the tokenizer turns "This is how you get ants." into ['this', 'is', 'how', 'you', 'get', 'ants'].
  19. 33 Bag Of Word Representations: after tokenizing, build a vocabulary over all documents: ['aardvak', 'amsterdam', 'ants', ..., 'you', 'your', 'zyxst'].
  20. 34 Bag Of Word Representations: sparse matrix encoding. Each document becomes a row of the vocabulary's length with a 1 at the positions of its words ('ants', 'get', 'you', ...) and 0 everywhere else: [0, ..., 0, 1, 0, ..., 0, 1, 0, ..., 0, 1, 0, ..., 0].
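The three steps on these slides (tokenize, build a vocabulary, encode counts) can be sketched in plain Python. This is an illustration of the idea, not CountVectorizer's actual implementation:

```python
import re

docs = ["This is how you get ants.", "Ants get in your house."]

# tokenizer: lowercase, then split into words
tokenized = [re.findall(r"[a-z]+", doc.lower()) for doc in docs]

# build a vocabulary over all documents (sorted for stable positions)
vocab = sorted({tok for doc in tokenized for tok in doc})
index = {word: i for i, word in enumerate(vocab)}

# encode each document as a vector of word counts
# (scikit-learn stores these rows in a sparse matrix, since most entries are 0)
counts = []
for doc in tokenized:
    row = [0] * len(vocab)
    for tok in doc:
        row[index[tok]] += 1
    counts.append(row)
```

With a realistic vocabulary of tens of thousands of words, almost every entry of every row is zero, which is why the sparse encoding matters.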
  21. 60 Cross-validation (diagram): all data is divided into five folds; in split 1, fold 1 is the test data and folds 2-5 are the training data.
  22. 61 Cross-validation (diagram): split 2 uses fold 2 as the test data and the remaining folds as the training data.
  23. 62 Cross-validation (diagram): over splits 1-5, each of the five folds serves as the test data exactly once, with the other four folds as the training data.
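The splitting scheme in these diagrams can be sketched with index arithmetic. This is a simplified, unshuffled version of what scikit-learn's KFold splitter produces:

```python
n_samples, n_folds = 10, 5
indices = list(range(n_samples))
fold_size = n_samples // n_folds  # assume n_samples divides evenly, for simplicity

splits = []
for i in range(n_folds):
    # fold i is the test data; all remaining folds form the training data
    test_idx = indices[i * fold_size:(i + 1) * fold_size]
    train_idx = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
    splits.append((train_idx, test_idx))
```

Each sample appears in exactly one test fold, so every sample gets a prediction from a model that never saw it during training.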
  24. 66 Grid of candidate models: SVC(C=c, gamma=g) for every c in {0.001, 0.01, 0.1, 1, 10} and g in {0.001, 0.01}.
  25. 67 The grid grows: c in {0.001, 0.01, 0.1, 1, 10} and g in {0.001, 0.01, 0.1}.
  26. 68 The full grid: c in {0.001, 0.01, 0.1, 1, 10} and g in {0.001, 0.01, 0.1, 1, 10}, i.e. 25 candidate models.
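The grid on these slides is the Cartesian product of the candidate values, which is why the number of models grows multiplicatively with each added parameter value. A sketch of the enumeration in plain Python:

```python
from itertools import product

Cs = [0.001, 0.01, 0.1, 1, 10]
gammas = [0.001, 0.01, 0.1, 1, 10]

# every (C, gamma) combination becomes one candidate model
candidates = [{"C": C, "gamma": gamma} for C, gamma in product(Cs, gammas)]
print(len(candidates))  # 5 x 5 = 25 models, each fit and scored via cross-validation
```

With 5-fold cross-validation this grid means 25 x 5 = 125 model fits, so grid size directly drives the cost of the search.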
  27. 70 (diagram): all data is first split into training data and a held-out test set; the training data is then divided into five folds for cross-validation, as in the previous slides.
  28. 71 (diagram): the cross-validation splits over the training data are used for finding parameters; the held-out test data is used only for the final evaluation.
  29. 72 Cross-Validated Grid Search

      from sklearn.grid_search import GridSearchCV
      from sklearn.cross_validation import train_test_split

      X_train, X_test, y_train, y_test = train_test_split(X, y)
      param_grid = {'C': 10. ** np.arange(-3, 3),
                    'gamma': 10. ** np.arange(-3, 3)}
      grid = GridSearchCV(SVC(), param_grid=param_grid)
      grid.fit(X_train, y_train)
      grid.score(X_test, y_test)

      (sklearn.grid_search and sklearn.cross_validation were later merged into sklearn.model_selection.)
  30. 77 Cross Validation and Parameter selection (diagram): training data and training labels pass through feature extraction, scaling, and feature selection before reaching the model; the diagram shows where cross-validation and parameter selection fit around these steps.
  31. 78 (repeats the previous slide)
  32. 81 Combining Pipelines and Grid Search: proper cross-validation

      param_grid = {'svc__C': 10. ** np.arange(-3, 3),
                    'svc__gamma': 10. ** np.arange(-3, 3)}
      scaler_pipe = make_pipeline(StandardScaler(), SVC())
      grid = GridSearchCV(scaler_pipe, param_grid=param_grid, cv=5)
      grid.fit(X_train, y_train)
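The 'svc__C' naming convention routes each grid value to the right pipeline step: the part before the double underscore names the step (make_pipeline uses lowercased class names), the part after names the parameter. A hypothetical sketch of that routing, which Pipeline and GridSearchCV perform internally via set_params:

```python
# illustration only: split 'step__param' keys into per-step parameter dicts
params = {"svc__C": 1.0, "svc__gamma": 0.1, "standardscaler__with_mean": True}

routed = {}
for key, value in params.items():
    step, _, param = key.partition("__")
    routed.setdefault(step, {})[param] = value

print(routed)
# {'svc': {'C': 1.0, 'gamma': 0.1}, 'standardscaler': {'with_mean': True}}
```

Because the scaler lives inside the pipeline, it is refit on each cross-validation training fold, so the scaling statistics never see the validation fold.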
  33. 82 Combining Pipelines and Grid Search II: searching over parameters of the preprocessing step

      param_grid = {'selectkbest__k': [1, 2, 3, 4],
                    'svc__C': 10. ** np.arange(-3, 3),
                    'svc__gamma': 10. ** np.arange(-3, 3)}
      scaler_pipe = make_pipeline(SelectKBest(), SVC())
      grid = GridSearchCV(scaler_pipe, param_grid=param_grid, cv=5)
      grid.fit(X_train, y_train)
  34. 87 Scoring with imbalanced data

      cross_val_score(SVC(), X_train, y_train)
      >>> array([ 0.9, 0.9, 0.9])
      cross_val_score(DummyClassifier("most_frequent"), X_train, y_train)
      >>> array([ 0.9, 0.9, 0.9])
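Why accuracy misleads here: with a 90/10 class balance, a model that always predicts the majority class already reaches 0.9 accuracy, which is exactly what the DummyClassifier comparison on this slide reveals. A small sketch with hypothetical labels:

```python
# 90 negative samples, 10 positive: a 90/10 class imbalance
y_true = [0] * 90 + [1] * 10
# a "classifier" that always predicts the majority class
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.9, without learning anything about the positive class
```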
  35. 88 Scoring with imbalanced data

      cross_val_score(SVC(), X_train, y_train)
      >>> array([ 0.9, 0.9, 0.9])
      cross_val_score(DummyClassifier("most_frequent"), X_train, y_train)
      >>> array([ 0.9, 0.9, 0.9])
      cross_val_score(SVC(), X_train, y_train, scoring="roc_auc")
      >>> array([ 1.0, 1.0, 1.0])
  36. 89 (repeats the previous slide)
  37. 92 CDS is hiring Research Engineers Work on your favorite

    data science open source project full time!