
The Hitchhiker's Guide to scikit-learn


Yuichiro Someya

August 10, 2017


Transcript

  1. ‣ It's an "Estimator", not a "Model" ‣ Estimators constitute a large part of sklearn ‣ Subclasses of sklearn.base.BaseEstimator (explained later)

     Model implementations: sklearn.cluster, sklearn.ensemble, sklearn.covariance, sklearn.svm, sklearn.kernel_ridge, sklearn.kernel_approximation, sklearn.isotonic, sklearn.gaussian_process, sklearn.feature_selection, and more
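     Because every estimator subclasses sklearn.base.BaseEstimator, the base class can derive get_params()/set_params() from the __init__ signature alone. A minimal sketch of that convention (the MyEstimator class and its parameters are hypothetical, not from the deck):

         from sklearn.base import BaseEstimator

         class MyEstimator(BaseEstimator):
             # Hypothetical estimator skeleton, for illustration only.

             def __init__(self, alpha=1.0, kernel="linear"):
                 # By convention, __init__ stores hyperparameters verbatim;
                 # BaseEstimator inspects the __init__ signature to provide
                 # get_params() and set_params() automatically.
                 self.alpha = alpha
                 self.kernel = kernel

         print(MyEstimator(alpha=0.5).get_params())
         # {'alpha': 0.5, 'kernel': 'linear'}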
  2. (image-only slide, no text)

  3. ‣ "MNPTU FWFSZUIJOHJTBO&TUJNBUPS  ,FSOFM3JEHF3FHSFTTPSJTB&TUJNBUPS ‣ 5IFZEJGGFSJOJUTBCJMJUJFT .JYJOT  

    ,FSOFM3JEHF3FHSFTTPSDBOEPSFHSFTTJPOT  .JYJOT5SBOTGPSNFS $MBTTJpFS 3FHSFTTPS 
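     A minimal sketch of what a mixin adds (the MeanRegressor class below is hypothetical, not from the deck): combining BaseEstimator with RegressorMixin gives the class a default R^2 score() method for free.

         import numpy as np
         from sklearn.base import BaseEstimator, RegressorMixin

         class MeanRegressor(BaseEstimator, RegressorMixin):
             # Toy regressor: always predicts the training mean of y.

             def fit(self, X, y):
                 self.mean_ = np.mean(y)  # learned attributes end with "_"
                 return self              # fit() returns self by convention

             def predict(self, X):
                 return np.full(len(X), self.mean_)

         X = np.arange(10).reshape(-1, 1)
         y = np.arange(10, dtype=float)
         est = MeanRegressor().fit(X, y)
         print(est.score(X, y))  # R^2 from RegressorMixin; 0.0 for a mean predictor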
  4. class KernelRidge(BaseEstimator, RegressorMixin):
         """Kernel ridge regression.

         Kernel ridge regression (KRR) combines ridge regression (linear least
         squares with l2-norm regularization) with the kernel trick. It thus
         learns a linear function in the space induced by the respective kernel
         and the data. For non-linear kernels, this corresponds to a non-linear
         function in the original space.

         The form of the model learned by KRR is identical to support vector
         regression (SVR). However, different loss functions are used: KRR uses
         squared error loss while support vector regression uses
         epsilon-insensitive loss, both combined with l2 regularization. In
         contrast to SVR, fitting a KRR model can be done in closed-form and is
         typically faster for medium-sized datasets. On the other hand, the
         learned model is non-sparse and thus slower than SVR, which learns a
         sparse model for epsilon > 0, at prediction-time.

         This estimator has built-in support for multi-variate regression
         (i.e., when y is a 2d-array of shape [n_samples, n_targets]).

         Read more in the :ref:`User Guide <kernel_ridge>`.
         """
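     A short usage sketch of KernelRidge under these conventions (the data and hyperparameter values below are illustrative assumptions, not from the deck):

         import numpy as np
         from sklearn.kernel_ridge import KernelRidge

         rng = np.random.RandomState(0)
         X = rng.uniform(0, 6, size=(100, 1))          # 100 samples, 1 feature
         y = np.sin(X).ravel() + 0.1 * rng.randn(100)  # noisy sine targets

         model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
         model.fit(X, y)              # closed-form fit, as the docstring notes
         print(model.predict(X[:3]))  # predictions for the first three samples
         print(model.score(X, y))     # R^2, supplied by RegressorMixin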