Tips and tricks for data science projects with Python

Python has become the most widely used language for machine learning and data science projects due to its simplicity and versatility.
Furthermore, developers get to put all their effort into solving a machine learning or data science problem instead of focusing on the technical aspects of the language.
For this purpose, Python provides access to great libraries and frameworks for AI and machine learning (ML), flexibility, and platform independence.
In this talk I will present a selection of libraries and frameworks that can help us get started in the machine learning world, and answer the question everyone is asking: what makes Python the best programming language for machine learning?

jmortegac

March 12, 2023

Transcript

  1. Agenda:
     1. Introducing Python for machine learning projects
     2. Stages of a machine learning project
     3. Selecting the best Python library for each stage of your project
     4. Python tools for deep learning in data science projects
  2. Introducing Python for machine learning projects
     • Simple and consistent
     • Understandable by humans
     • General-purpose programming language
     • Extensive selection of libraries and frameworks
  3. Introducing Python for machine learning projects
     • Spam filters
     • Recommendation systems
     • Search engines
     • Personal assistants
     • Fraud detection systems
  4. Introducing Python for machine learning projects
     • Machine learning: Keras, TensorFlow, and Scikit-learn
     • High-performance scientific computing: NumPy, SciPy
     • Computer vision: OpenCV
     • Data analysis: NumPy, Pandas
     • Natural language processing: NLTK, spaCy
  5. Introducing Python for machine learning projects (see the pandas sketch after this list)
     • Reading/writing many different data formats
     • Selecting subsets of data
     • Calculating across rows and down columns
     • Finding and filling missing data
     • Applying operations to independent groups within the data
     • Reshaping data into different forms
     • Combining multiple datasets together
     • Advanced time-series functionality
     • Visualization through Matplotlib and Seaborn
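     A minimal pandas sketch of a few of these operations; the column names and values are assumptions made up for illustration:

     import pandas as pd

     # Toy data standing in for a real dataset (assumed columns)
     df = pd.DataFrame({
         'city': ['Madrid', 'Madrid', 'Bilbao', 'Bilbao'],
         'sales': [10, None, 7, 12],
         'date': pd.to_datetime(['2023-01-01', '2023-01-02',
                                 '2023-01-01', '2023-01-02']),
     })

     # Selecting subsets of data
     madrid = df[df['city'] == 'Madrid']

     # Finding and filling missing data
     df['sales'] = df['sales'].fillna(0)

     # Applying operations to independent groups within the data
     totals = df.groupby('city')['sales'].sum()

     # Reshaping data into different forms
     wide = df.pivot(index='date', columns='city', values='sales')

     print(totals)
     print(wide)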
  6. Introducing Python for machine learning projects
     import pandas as pd
     import pandas_profiling

     # read the dataset
     data = pd.read_csv('your-data')

     # generate and save an HTML profiling report
     prof = pandas_profiling.ProfileReport(data)
     prof.to_file(output_file='output.html')
  7. Python libraries (a scikit-learn sketch follows this list)
     • Supervised and unsupervised machine learning
     • Classification, regression, Support Vector Machines
     • Clustering: KMeans, DBSCAN
     • Random Forest
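     A minimal scikit-learn sketch of the supervised and unsupervised pieces listed above; the iris dataset and the hyperparameters are arbitrary choices for illustration:

     from sklearn.datasets import load_iris
     from sklearn.model_selection import train_test_split
     from sklearn.ensemble import RandomForestClassifier
     from sklearn.cluster import KMeans

     X, y = load_iris(return_X_y=True)
     X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

     # Supervised learning: a Random Forest classifier
     clf = RandomForestClassifier(n_estimators=100, random_state=0)
     clf.fit(X_train, y_train)
     print('test accuracy:', clf.score(X_test, y_test))

     # Unsupervised learning: KMeans clustering on the same features
     kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
     labels = kmeans.fit_predict(X)
     print('cluster sizes:', [int((labels == k).sum()) for k in range(3)])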
  8. Python libraries
     • Pipelines
     • Grid-search
     • Validation curves
     • One-hot encoding of categorical data
     • Dataset generators (see the sketch after this list)
     • Principal Component Analysis (PCA)
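     A minimal sketch of the scikit-learn dataset generators mentioned above; the sizes and parameters are arbitrary:

     from sklearn.datasets import make_classification, make_blobs

     # Synthetic classification problem: 500 samples, 10 features, 2 classes
     X_clf, y_clf = make_classification(n_samples=500, n_features=10,
                                        n_informative=5, random_state=0)

     # Synthetic clustering problem: 3 Gaussian blobs in 2 dimensions
     X_blobs, y_blobs = make_blobs(n_samples=300, centers=3,
                                   n_features=2, random_state=0)

     print(X_clf.shape, X_blobs.shape)  # (500, 10) (300, 2)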
  9. Python libraries: Pipelines
     >>> from sklearn.pipeline import make_pipeline
     >>> from sklearn.naive_bayes import MultinomialNB
     >>> from sklearn.preprocessing import Binarizer
     >>> make_pipeline(Binarizer(), MultinomialNB())
     Pipeline(steps=[('binarizer', Binarizer()), ('multinomialnb', MultinomialNB())])
     http://scikit-learn.org/stable/modules/pipeline.html
  10. Python libraries: Grid-search
      estimator.get_params()
      A search consists of (see the sketch after this list):
      • an estimator (regressor or classifier such as sklearn.svm.SVC())
      • a parameter space
      • a method for searching or sampling candidates
      • a cross-validation scheme
      • a score function
      https://scikit-learn.org/stable/modules/grid_search.html#grid-search
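      A minimal GridSearchCV sketch that ties these pieces together; the SVC parameter grid, the iris data, and the scoring choice are assumptions:

      from sklearn.datasets import load_iris
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)

      # Parameter space to explore for the estimator
      param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}

      # Exhaustive search with 5-fold cross-validation, scored by accuracy
      search = GridSearchCV(SVC(), param_grid, cv=5, scoring='accuracy')
      search.fit(X, y)

      print(search.best_params_)
      print(search.best_score_)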
  11. Python libraries: Validation curves
      >>> train_scores, valid_scores = validation_curve(
      ...     Ridge(), X, y, param_name="alpha",
      ...     param_range=np.logspace(-7, 3, 3), cv=5)
      >>> train_scores
      array([[0.93..., 0.94..., 0.92..., 0.91..., 0.92...],
             [0.93..., 0.94..., 0.92..., 0.91..., 0.92...],
             [0.51..., 0.52..., 0.49..., 0.47..., 0.49...]])
      >>> valid_scores
      array([[0.90..., 0.84..., 0.94..., 0.96..., 0.93...],
             [0.90..., 0.84..., 0.94..., 0.96..., 0.93...],
             [0.46..., 0.25..., 0.50..., 0.49..., 0.52...]])
  12. Python libraries: One-hot encoding
      https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features
      # import the scikit-learn one-hot encoder
      from sklearn.preprocessing import OneHotEncoder

      # initialize the encoder
      encoding = OneHotEncoder()

      # apply one-hot encoding to the categorical 'Status' column
      transformed_data = encoding.fit_transform(data[['Status']])

      # show the encoded array
      print(transformed_data.toarray())
  13. Python libraries: Principal Component Analysis (PCA)
      from sklearn.preprocessing import StandardScaler
      sc = StandardScaler()
      X_train = sc.fit_transform(X_train)
      X_test = sc.transform(X_test)

      from sklearn.decomposition import PCA
      pca = PCA(n_components=2)
      X_train = pca.fit_transform(X_train)
      X_test = pca.transform(X_test)
  14. Python tools for deep learning (a Keras sketch follows the table)
      |                | TensorFlow             | Keras                     | PyTorch                |
      | API Level      | High and Low           | High                      | Low                    |
      | Architecture   | Not easy to use        | Simple, concise, readable | Complex, less readable |
      | Speed          | Fast, high-performance | Slow, low performance     | Fast, high-performance |
      | Trained Models | Yes                    | Yes                       | Yes                    |
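      A minimal Keras sketch to illustrate the "simple, concise, readable" style noted in the table; the layer sizes and the binary-classification setup are assumptions:

      from tensorflow import keras

      # A tiny fully connected network for binary classification
      model = keras.Sequential([
          keras.Input(shape=(20,)),
          keras.layers.Dense(64, activation='relu'),
          keras.layers.Dense(1, activation='sigmoid'),
      ])
      model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])
      model.summary()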
  15. Python tools for deep learning: Theano (see the sketch after this list)
      • Tight integration with NumPy: use numpy.ndarray in Theano-compiled functions.
      • Transparent use of a GPU: perform data-intensive computations much faster than on a CPU.
      • Efficient symbolic differentiation: Theano does your derivatives for functions with one or many inputs.
      • Speed and stability optimizations: get the right answer for log(1+x) even when x is really tiny.
      • Dynamic C code generation: evaluate expressions faster.
      • Extensive unit-testing and self-verification: detect and diagnose many types of errors.
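      A minimal Theano sketch of the symbolic style these features describe; the toy expression is an assumption:

      import numpy as np
      import theano
      import theano.tensor as T

      x = T.dvector('x')       # symbolic input vector
      y = T.sum(x ** 2)        # symbolic expression built from it
      gy = T.grad(y, x)        # symbolic differentiation: dy/dx = 2x

      # Compile to optimized C code (and to GPU code when one is configured)
      f = theano.function([x], [y, gy])

      value, gradient = f(np.array([1.0, 2.0, 3.0]))
      print(value)     # 14.0
      print(gradient)  # [2. 4. 6.]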
  16. Python tools for deep learning
      • Synkhronos: extension to Theano for multi-GPU data parallelism.
      • Theano-MPI: a distributed framework for training models built in Theano, based on data parallelism.
      • Platoon: multi-GPU mini-framework for Theano, single node.
      • Elephas: distributed deep learning with Keras & Spark (see the sketch after this list).
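      A hedged Elephas sketch of distributed training with Keras & Spark; the model, data, and Spark configuration are assumptions, and the arguments follow the style of the Elephas documentation, which may differ between versions:

      import numpy as np
      from pyspark import SparkConf, SparkContext
      from tensorflow import keras
      from elephas.utils.rdd_utils import to_simple_rdd
      from elephas.spark_model import SparkModel

      # Local Spark context for the sketch
      sc = SparkContext(conf=SparkConf().setAppName('elephas-sketch').setMaster('local[2]'))

      # A small compiled Keras model (sizes are assumptions)
      model = keras.Sequential([
          keras.Input(shape=(10,)),
          keras.layers.Dense(16, activation='relu'),
          keras.layers.Dense(1, activation='sigmoid'),
      ])
      model.compile(optimizer='adam', loss='binary_crossentropy')

      # Toy training data
      x_train = np.random.rand(1000, 10)
      y_train = (x_train.sum(axis=1) > 5).astype('float32')

      # Distribute the data as an RDD and train with asynchronous data parallelism
      rdd = to_simple_rdd(sc, x_train, y_train)
      spark_model = SparkModel(model, frequency='epoch', mode='asynchronous')
      spark_model.fit(rdd, epochs=5, batch_size=32, verbose=0, validation_split=0.1)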