
Deploying models to production with TensorFlow Model Server

How to serve TensorFlow models over HTTP and HTTPS. We walk through the main steps of putting a model into production: packaging it and making it ready for deployment, uploading it to the cloud, exposing it through an API, and, most importantly, updating the model with no downtime by versioning it efficiently. We cover the steps required to deploy a model in the wild and how TensorFlow simplifies them for a developer, show how applications can access the model through web or cloud calls, and show how the deployment can auto-scale using GCP Cloud Functions and/or Kubernetes.

Rishit Dagli

May 30, 2020

Transcript

  1. Event link: https://www.meetup.com/GDG-Ahmedabad/events/270477738/
     Rishit Dagli, 10th-grade student, past TEDx and TED-Ed speaker
     Deploying models to production with TensorFlow Model Server
  2. Ideal Audience
     • Devs who have worked on Deep Learning models (Keras)
     • Devs looking for ways to put their models into production
  3. 01 Motivation behind a process for deployment
     02 What things to take care of?
     03 What is TF Model Server?
     04 What can it do? • Versioning • IaaS • CI/CD
     05 Auto Scaling
     06 QnA
  4. What things to take care of?
     • Package the model (see the sketch below)
     • Post the model on a cloud-hosted server
     • Maintain the server
       ◦ Auto-scale
       ◦ Global availability
       ◦ And many more ...
     • API
     • Model Versioning
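     The packaging and versioning bullets above map to exporting the model in the SavedModel format under a numbered version directory, which is the layout TensorFlow Model Server watches. A minimal sketch, assuming a toy Keras model and a temporary MODEL_DIR (both placeholders, not from the deck):

     import os
     import tempfile
     import tensorflow as tf

     # A toy Keras model standing in for whatever model you want to serve.
     model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
     model.compile(optimizer="sgd", loss="mse")

     # TensorFlow Model Server expects <model_base_path>/<version>/saved_model.pb
     MODEL_DIR = tempfile.mkdtemp()
     version = 1
     export_path = os.path.join(MODEL_DIR, str(version))

     # Export the model in the SavedModel format.
     tf.saved_model.save(model, export_path)

     # Exporting again with version = 2, 3, ... lets the server pick up the
     # new version without downtime.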
  5. Starting the model server

     os.environ["MODEL_DIR"] = MODEL_DIR

     %%bash --bg
     nohup tensorflow_model_server \
       --rest_api_port=8501 \
       --model_name=test \
       --model_base_path="${MODEL_DIR}" >server.log 2>&1
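     Once the server is running in the background, a quick sanity check (a sketch, assuming the model name test and port 8501 from the command above) is to query TensorFlow Serving's model status endpoint before sending any predictions:

     import requests

     # Ask the running TensorFlow Model Server for the status of the "test" model.
     status = requests.get("http://localhost:8501/v1/models/test")
     print(status.json())  # reports the loaded version(s) and their state, e.g. "AVAILABLE"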
  6. Making calls

     xs = np.array([[case_1], [case_2] ... [case_n]])
     data = json.dumps({"signature_name": " ", "instances": xs.tolist()})
  7. Doing Inference

     xs = np.array([[case_1], [case_2] ... [case_n]])
     data = json.dumps({"signature_name": " ", "instances": xs.tolist()})
     json_response = requests.post(
         'http://localhost:8501/v1/models/test:predict',
         data=data, headers=headers)
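     To close the loop on inference, the JSON that comes back from the :predict endpoint carries a "predictions" field that can be decoded into regular Python lists. A runnable sketch with made-up inputs in place of case_1 ... case_n (the headers dict and the serving_default signature name are assumptions, not from the deck):

     import json
     import numpy as np
     import requests

     headers = {"content-type": "application/json"}
     xs = np.array([[1.0], [2.0], [3.0]])  # placeholder inputs

     # "serving_default" is the default signature name for a Keras SavedModel.
     data = json.dumps({"signature_name": "serving_default", "instances": xs.tolist()})

     json_response = requests.post(
         "http://localhost:8501/v1/models/test:predict",
         data=data, headers=headers)

     # TensorFlow Serving's predict API returns {"predictions": [...]}.
     predictions = json.loads(json_response.text)["predictions"]
     print(predictions)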
  8. Key Takeaways
     • Why a process for deployment
     • What it takes to deploy models
     • Serving a model with TF Model Server
     • Why TF Model Server?
     • What can TF Model Server do?
     • Deploying on Cloud