Making Deployments Easy with TF Serving | TF Everywhere India
Rishit Dagli
May 11, 2021
My talk at TensorFlow Everywhere India
Transcript
Making Deployments Easy with TF Serving
Rishit Dagli, High School TEDx and TED-Ed Speaker
rishit_dagli | Rishit-dagli
“Most models don’t get deployed.”
90% of models don’t get deployed.
Source: Laurence Moroney
$whoami
• High school student
• TEDx and TED-Ed speaker
• ♡ Hackathons and competitions
• ♡ Research
• My coordinates: www.rishit.tech
rishit_dagli | Rishit-dagli
Ideal Audience
• Devs who have worked on deep learning models (Keras)
• Devs looking for ways to put their models into production
Why care about ML deployments? Source: memegenerator.net
What things to take care of?
• Package the model
• Post the model on a server
• Maintain the server: auto-scale, global availability, latency
• API
• Model versioning
Simple Deployments: why are they inefficient?
• No consistent API
• No model versioning
• No mini-batching
• Inefficient for large models
Source: Hannes Hapke
TensorFlow Serving
Part of TensorFlow Extended (TFX), alongside TensorFlow Data Validation, TensorFlow Transform, and TensorFlow Model Analysis.
• Part of TensorFlow Extended
• Used internally at Google
• Makes deployment a lot easier
The Process
Export Model
• The SavedModel format
• Graph definitions as protocol buffers
SavedModel Directory
• Graph definitions
• Variables
• Auxiliary files, e.g. vocabularies
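As a sketch, a SavedModel directory for version 1 of a model named `test` (the model name and paths here are illustrative) looks like:

```
models/test/
└── 1/                      # version number, picked up by TF Serving
    ├── saved_model.pb      # graph definitions as a protocol buffer
    ├── variables/          # trained weights
    │   ├── variables.data-00000-of-00001
    │   └── variables.index
    └── assets/             # auxiliary files, e.g. vocabularies
```

In TensorFlow 2, `tf.saved_model.save(model, "models/test/1")` (or `model.save("models/test/1")` for a Keras model) produces this layout.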
TensorFlow Serving
• Also supports gRPC
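One common way to stand up a server is the official Docker image; this is a hedged sketch, and the host path and model name `test` are assumptions:

```shell
# Run TF Serving, exposing the gRPC (8500) and REST (8501) ports
docker pull tensorflow/serving
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/models/test,target=/models/test \
  -e MODEL_NAME=test \
  tensorflow/serving
```

The container watches the mounted directory and automatically loads new version subdirectories as they appear.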
Inference
• Consistent APIs
• Supports gRPC (port 8500) and REST (port 8501) simultaneously
• No bare lists, but lists of lists
Inference with REST
• JSON response
• Can specify a particular version
Default URL: http://{HOST}:8501/v1/models/test
Model version: http://{HOST}:8501/v1/models/test/versions/{MODEL_VERSION}:predict
(8501 is the REST port; test is the model name)
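A minimal REST request sketch using only the standard library; the host, model name `test`, and input values are illustrative assumptions:

```python
import json
from urllib.request import Request, urlopen

HOST = "localhost"   # assumption: TF Serving runs locally
MODEL_NAME = "test"  # assumption: model served under this name
MODEL_VERSION = 1

# REST predict endpoint for a specific model version (8501 is the REST port)
url = (f"http://{HOST}:8501/v1/models/{MODEL_NAME}"
       f"/versions/{MODEL_VERSION}:predict")

# TF Serving expects "lists of lists": one inner list per input example
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]})

def predict(url: str, payload: str) -> dict:
    """POST the JSON payload and return the parsed JSON response."""
    req = Request(url, data=payload.encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

# predictions = predict(url, payload)  # requires a running TF Serving instance
```

The response is a JSON object whose `predictions` field holds one output per input instance.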
Inference with gRPC
• Better connections
• Data converted to protocol buffers
• Request types have a designated type
• Payload converted to base64
• Use gRPC stubs
Model Meta Information
• You have an API to get meta info
• Useful for model tracking in telemetry systems
• Provides model inputs/outputs, signatures
http://{HOST}:8501/v1/models/{MODEL_NAME}/versions/{MODEL_VERSION}/metadata
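A small sketch of building and querying the metadata endpoint; the host and model name `test` are assumptions:

```python
def metadata_url(host: str, model_name: str, version: int) -> str:
    """Build the TF Serving REST metadata endpoint for one model version."""
    return (f"http://{host}:8501/v1/models/{model_name}"
            f"/versions/{version}/metadata")

url = metadata_url("localhost", "test", 1)

# The JSON response describes the model's SignatureDefs (input/output
# tensor names, dtypes, and shapes) under response["metadata"]["signature_def"].
# import json, urllib.request
# meta = json.loads(urllib.request.urlopen(url).read())  # needs a live server
```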
Batch Inference
• Use hardware efficiently
• Save costs and compute resources
• Take multiple requests and process them together
• Super cool😎 for large models
Highly customizable:
• max_batch_size
• batch_timeout_micros
• num_batch_threads
• max_enqueued_batches
• file_system_poll_wait_seconds
• tensorflow_session_parallelism
• tensorflow_intra_op_parallelism
• Load a configuration file on startup
• Change parameters according to use cases
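A minimal batching configuration sketch, passed to the server at startup; the values shown are illustrative, not recommendations:

```
max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 4 }
max_enqueued_batches { value: 100 }
```

Saved as, say, `batching_parameters.txt`, it is loaded with `--enable_batching --batching_parameters_file=/path/to/batching_parameters.txt`. `max_batch_size` caps throughput per batch, while `batch_timeout_micros` bounds how long a request waits for the batch to fill.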
Also take a look at...
• Kubeflow deployments
• Data pre-processing on the server🚅
• AI Platform Predictions
• Deployment on edge devices
• Federated learning
Demos: bit.ly/tf-everywhere-ind
Slides: bit.ly/serving-deck
Thank You
rishit_dagli | Rishit-dagli