• Software Engineer @Merpay, Mercari ◦ developing payment services with Go, gRPC, and GKE • Active in the Serverless Community (JP) ◦ Serverless Meetup Tokyo, Serverless Days Tokyo • OSS Contributor ◦ Serverless Framework, aws-lambda-go/dotnet, Knative, KEDA • Author ◦ How to deal with Knative (ja), Learning AWS Lambda with Go (ja)
defined in a declarative manner • controllers on K8s watch and maintain the desired state →reduce operational cost through auto recovery on failure and auto scaling under high load ※https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
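The declarative model above can be sketched as a minimal Deployment manifest (name and image are placeholders); the controller keeps three replicas running and recreates Pods when they fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # desired state: the controller reconciles toward this
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1
```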
code • write Dockerfile • build Docker image • push image to registry • deploy service • expose service to internet • set up monitoring • set up autoscaling →We want to focus more on writing code!
and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.
longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams.
on writing their applications’ business logic. • Operations engineers ◦ can elevate their focus to more business-critical tasks. →We want to realize this situation on top of K8s
both of the following: • Function as a Service (FaaS) • Backend as a Service (BaaS) →K8s itself is not FaaS or BaaS. How do we build serverless applications on top of K8s?
workloads that are: • Stateless • Amenable to the process scale-out model • Primarily driven by application level (L7 -- HTTP, for example) request traffic
Deployment, and Service in support of the model →by standardizing on higher-level primitives which perform substantial amounts of automation of common infrastructure, it should be possible to build consistent toolkits that provide a richer experience than updating yaml files with kubectl.
resources simple for developers and operators • Building blocks to build your own PaaS/FaaS ◦ Serving, Eventing (and Build) • Solving mundane but difficult tasks such as: ◦ Deploying a container ◦ Routing and managing traffic with blue/green deployment ◦ Scaling automatically and sizing workloads based on demand ◦ Binding running services to eventing ecosystems →build platform to focus more on business value for developers and operators ※https://github.com/knative/docs
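As a sketch of such a higher-level primitive, a single Knative Service resource (serving.knative.dev/v1; the name and image below are placeholders) stands in for the Deployment, Service, and autoscaling configuration you would otherwise maintain by hand:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:v1
          env:
            - name: TARGET
              value: "world"
```

Applying this one resource gives you a routable, revisioned, scale-to-zero service; traffic splitting between revisions is configured on the same object.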
• Knative primitives: Serving, Build, Eventing • Platforms, gateways, and products built on them: GitLab Serverless, Pivotal Function Service, Cloud Run, SAP Kyma, Knative Lambda Runtimes, or your own!
server to pass requests to the function because the deployed artifact is a container • decouple the server and the function, and combine them in a single Dockerfile • provide a CLI like faas-cli or tm so that K8s manifests and kubectl stay out of the developer’s consciousness • use a cloudevents handler in your function
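A minimal sketch of the server/function split in Go, using only the standard library (the handle function, route, and port are hypothetical): the function holds the business logic, and a thin HTTP server wraps it so the container can receive requests.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// handle is the user's "function": pure business logic, no server code.
func handle(name string) string {
	if name == "" {
		name = "world"
	}
	return fmt.Sprintf("Hello, %s!", name)
}

// wrap adapts the function to HTTP; in the container image this handler
// would be served with http.ListenAndServe(":8080", wrap()).
func wrap() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, handle(r.URL.Query().Get("name")))
	})
}

func main() {
	// Exercise the wrapped function in-process with httptest.
	srv := httptest.NewServer(wrap())
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/?name=Knative")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints: Hello, Knative!
}
```

Because server and function live in one binary, one Dockerfile can package both, which is what lets a CLI hide the K8s details.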
itself is not FaaS/PaaS. It provides the layer between K8s and serverless frameworks • multiple vendors are developing it together as OSS • it uses cloudevents for event handling →We can avoid vendor lock-in and easily migrate to other Knative-based FaaS/PaaS
runtime ◦ (+) Can utilize any language, binary, or non-vendor SDK ◦ (−) Must prepare libraries by yourself • standardized packaging format ◦ (+) No different Zip for each FaaS, just a Dockerfile ◦ (−) Must learn how to write an effective Dockerfile →More responsibility, but these can be templated in a consistent manner
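As an illustrative sketch of the standardized packaging format (base images and paths are just examples), one multi-stage Dockerfile can be templated and reused for every function, in place of a runtime-specific Zip:

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Run stage: minimal image containing only the binary
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The multi-stage split keeps the final image small; only the second stage ships to the registry.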
on top of K8s • Cloud Run provides a managed serverless container platform • If you don’t operate K8s clusters, you can choose a serverless service ranging from CaaS to FaaS • Knative and Cloud Run are not fully mature yet, but they have great potential. I hope to keep making contributions