
MLOps in Mercari Group’s Trust and Safety ML Team

In Mercari Group’s Trust and Safety ML Team, we provide solutions to ensure the safety of our users. Examples include anti-money-laundering countermeasures, credit card fraud detection, and many others. Some of these solutions are powered by machine learning models. To react as quickly as possible to emerging fraud, it is important to streamline the model improvement and deployment processes. In this talk, we explain our platform and automation, and how each element helps us rapidly deploy new countermeasures. We cover all MLOps steps: experimentation, training/deployment, evaluation, and metric monitoring. We hope this talk benefits those integrating DevOps into their ML solutions or building ML platforms, especially with GCP’s Vertex AI.

Calvin Janitra Halim

September 30, 2024


Transcript

  1. (Slide 1) MLOps in Mercari Group's Trust and Safety ML Team. PyCon JP 2024, Calvin Janitra Halim
  2. (Slide 2) Team: Trust and Safety ML. Position: ML Engineer. Work: MLOps, Data Analysis, etc. Hobbies: Music Production, Jamming. Career: 2021-04 ~ 2023-12 Rakuten, 2024-01 ~ Mercari. Email: [email protected], [email protected]. GitHub, Qiita, Spotify, Soundcloud: CJHJ. LinkedIn, Medium, X: my name, Calvin Janitra Halim
  3. (Slide 3) Agenda: 1. Introduction (a. What's Trust and Safety? b. ML Team's Responsibilities) 2. System Architecture (a. Marketplace Service Architecture b. Fintech Service Architecture) 3. Automation Steps (a. MLOps Overview b. Experimentation c. Feature Store d. Training e. Deployment f. Monitoring) 4. Struggles and Learnings 5. Closing
  4. (Slide 6) What's Trust and Safety? Trust and Safety exists to ensure a safe and secure marketplace: fraud detection, monitoring, and preventing policy violations.
  5. (Slide 7) ML Team's Responsibilities: Item Moderation, Account Moderation, and Transaction Moderation, across Marketplace and Fintech.
  6. (Slide 18) Experimentation: packaged as a template notebook rather than written from scratch each time. Model comparison: to facilitate collecting results from models with different configurations, we use Vertex AI's experiment feature. Data loading: we usually use Feast to make point-in-time joins on historical features easier. EDA: e.g. univariate analysis helps us determine which features will contribute to the model's performance. Evaluation: we usually check PR-AUC and feature importances, along with some business metrics. Error analysis: a deep dive into false positives and false negatives is very helpful for identifying the kinds of patterns the model could miss, as well as any bugs in the data we use.
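The slide above names two concrete pieces of the flow: Vertex AI Experiments for model comparison and PR-AUC for evaluation. A minimal sketch of how one run could be recorded, assuming the `google-cloud-aiplatform` SDK; the project, location, experiment, and run names are illustrative placeholders, and `pr_auc` is a simplified stand-in for `sklearn.metrics.average_precision_score`:

```python
def pr_auc(y_true, y_score):
    """Average precision: a simplified stand-in for
    sklearn.metrics.average_precision_score (assumes no score ties)."""
    pairs = sorted(zip(y_score, y_true), reverse=True)
    tp = fp = 0
    total_pos = sum(y_true)
    ap = 0.0
    for _, label in pairs:
        if label:
            tp += 1
            ap += (tp / (tp + fp)) / total_pos
        else:
            fp += 1
    return ap


def log_experiment_run(run_name, params, metrics,
                       project="my-gcp-project",
                       location="us-central1",
                       experiment="fraud-model-comparison"):
    """Record one model configuration as a run in Vertex AI Experiments,
    so configurations can be compared side by side in the console.
    Project and experiment names here are placeholders."""
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(project=project, location=location, experiment=experiment)
    aiplatform.start_run(run=run_name)
    aiplatform.log_params(params)
    aiplatform.log_metrics(metrics)
    aiplatform.end_run()


if __name__ == "__main__":
    metrics = {"pr_auc": pr_auc([0, 1, 1, 0], [0.1, 0.9, 0.7, 0.4])}
    # Requires GCP credentials; uncomment to record the run:
    # log_experiment_run("xgb-baseline", {"max_depth": 6}, metrics)
    print(metrics)
```

Logging each configuration as a named run makes the console's side-by-side comparison table the single place to pick a winner, instead of scraping numbers out of individual notebooks.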
  7. (Slide 20) Creating and Using Features: define feature tables as dbt models, with table dependencies, SQL parameterized values, and validation.
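Slide 20 has dbt materializing the feature tables, and slide 18 mentions Feast serving them for point-in-time joins. A hedged repo-definition sketch of how the two could connect, assuming a recent Feast release and a BigQuery offline store; the project, dataset, table, entity, and feature names are illustrative, not the team's actual schema:

```python
from datetime import timedelta

from feast import BigQuerySource, Entity, FeatureView, Field
from feast.types import Float32, Int64

# Entity keyed the same way as the dbt model's join column (placeholder name).
user = Entity(name="user", join_keys=["user_id"])

# The offline source is a BigQuery table materialized by a dbt model
# (project/dataset/table names are placeholders).
user_stats_source = BigQuerySource(
    table="my-project.feature_mart.user_transaction_stats",
    timestamp_field="event_timestamp",
)

# Feature view Feast uses for point-in-time-correct joins.
user_stats = FeatureView(
    name="user_transaction_stats",
    entities=[user],
    ttl=timedelta(days=7),
    schema=[
        Field(name="txn_count_7d", dtype=Int64),
        Field(name="avg_txn_amount_7d", dtype=Float32),
    ],
    source=user_stats_source,
)

# Training-time usage (point-in-time join against an entity dataframe):
# store = FeatureStore(repo_path=".")
# training_df = store.get_historical_features(
#     entity_df=entity_df,  # needs user_id and event_timestamp columns
#     features=["user_transaction_stats:txn_count_7d",
#               "user_transaction_stats:avg_txn_amount_7d"],
# ).to_df()
```

The point-in-time join is what prevents label leakage: each training row only sees feature values as they existed at that row's event timestamp.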
  8. (Slide 30) Struggle with Tools. Feature store: we ended up with Vertex AI Feature Store, using dbt, Dataflow, and BigQuery for offline features and BigTable for online features. Serving: Vertex AI Endpoint or a GKE Pod; we ended up with the Vertex AI Model Registry and a GKE Pod.
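Of the serving paths named on the slide, one is registering the model in the Vertex AI Model Registry and deploying it behind a Vertex AI Endpoint. A hedged sketch using the `google-cloud-aiplatform` SDK; the display name, artifact URI, serving container image, and machine type are illustrative assumptions, not the team's configuration:

```python
def register_and_deploy(artifact_uri,
                        project="my-gcp-project",
                        location="us-central1"):
    """Upload a trained model to the Vertex AI Model Registry and deploy it
    behind a Vertex AI Endpoint (one of the two serving paths on the slide;
    the other is a GKE Pod). All names and URIs here are placeholders."""
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model.upload(
        display_name="fraud-detector",
        artifact_uri=artifact_uri,  # e.g. "gs://my-bucket/models/fraud/v3"
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
        ),
    )
    endpoint = model.deploy(machine_type="n1-standard-2")
    return endpoint
```

Registering through the Model Registry keeps a versioned record of every deployed model, which is what makes rollbacks and audit trails cheap regardless of whether serving happens on a Vertex AI Endpoint or a GKE Pod.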