PyTorch under the hood
A guide to understanding PyTorch internals
Christian S. Perone ([email protected])
http://blog.christianperone.com
PyData Montreal, Feb 2019
AGENDA
TENSORS: Tensors, Python objects, Zero-copy, Tensor storage, Memory allocators (CPU/GPU), The big picture
JIT: Just-in-time compiler, Tracing, Scripting, Why TorchScript?, Building the IR and JIT phases, Optimizations, Serialization, Using models in other languages
PRODUCTION: Some tips
Q&A
WHO AM I
Christian S. Perone
14 years working with Machine Learning, Data Science and Software Engineering in industry R&D
Blog at blog.christianperone.com
Open-source projects at https://github.com/perone
Twitter @tarantulae
DISCLAIMER
PyTorch is a moving target: the Deep Learning ecosystem moves fast and big changes happen every week;
This is not a talk to teach you the basics of PyTorch or how to train your network, but to teach you how PyTorch components work under the hood, in an intuitive way;
This talk is updated to PyTorch v1.0.1.
TENSORS
Simply put, TENSORS are a generalization of vectors and matrices. In PyTorch, they are a multi-dimensional matrix containing elements of a single data type.

>>> import torch
>>> t = torch.tensor([[1., -1.], [1., -1.]])
>>> t
tensor([[ 1., -1.],
        [ 1., -1.]])
>>> t.dtype   # They have a type
torch.float32
>>> t.shape   # a shape
torch.Size([2, 2])
>>> t.device  # and live in some device
device(type='cpu')
TENSORS
Although PyTorch has an elegant Python-first design, all of PyTorch's heavy work is actually implemented in C++. In Python, the integration of C++ code is (usually) done using what is called an extension;
PyTorch uses ATen, which is the foundational tensor operation library on which everything else is built;
To do automatic differentiation, PyTorch uses Autograd, which is an augmentation on top of the ATen framework;
In the Python API, PyTorch previously had separate Variable and Tensor types; after v0.4.0 they were merged into Tensor.
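A minimal example (not from the original slides) of the post-v0.4.0 API, where a plain Tensor tracks gradients directly, with no Variable wrapper:

>>> x = torch.ones(2, 2, requires_grad=True)  # no Variable needed anymore
>>> y = (x * 3).sum()
>>> y.backward()        # Autograd records and differentiates the ATen ops
>>> x.grad
tensor([[3., 3.],
        [3., 3.]])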
QUICK RECAP: PYTHON OBJECTS

struct THPVariable {
    PyObject_HEAD
    torch::autograd::Variable cdata;
    PyObject* backward_hooks;
};

The TH prefix comes from TorcH, and P means Python.
IN PYTHON, EVERYTHING IS AN OBJECT

>>> a = 300
>>> b = 300
>>> a is b
False
>>> a = 200
>>> b = 200
>>> a is b
True

(Diagram: with a = b = 300, the names a and b point to two distinct integer objects, each with a reference count of 1; with a = b = 200, both names point to the same cached object with a reference count of 2.)

A typical Python program spends much of its time allocating and deallocating integers, so CPython caches the small ones.
ZERO-COPYING TENSORS
It is very common to load tensors in numpy and convert them to PyTorch, or vice-versa;

>>> import numpy as np
>>> np_array = np.ones((2, 2))
>>> np_array
array([[1., 1.],
       [1., 1.]])
>>> torch_array = torch.tensor(np_array)
>>> torch_array
tensor([[1., 1.],
        [1., 1.]], dtype=torch.float64)
>>> torch_array.add_(1.0)
>>> np_array
array([[1., 1.],   # array is intact, a copy was made
       [1., 1.]])

A trailing underscore on an operation (such as add_()) means it is an in-place operation.
ZERO-COPYING TENSORS
Now imagine that you have a batch of 128 images, 3 channels each (RGB) and with a size of 224x224;
(Diagram: a 3D tensor laid out along Channel, Row and Column axes.)
This will yield a size in memory of ~74 MB (128 x 3 x 224 x 224 elements x 4 bytes per float32). We don't want to duplicate memory (except when copying them to discrete GPUs, of course);
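A quick back-of-the-envelope check of that number (a sketch, not from the slides; the exact figure depends on the dtype):

>>> batch = torch.ones(128, 3, 224, 224)    # float32 by default
>>> batch.numel() * batch.element_size()    # elements x bytes per element
77070336
>>> _ / (1024 ** 2)                         # in MiB
73.5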
ZERO-COPYING TENSORS
Let's now see a slightly different example using the function torch.from_numpy() this time:

>>> np_array
array([[1., 1.],
       [1., 1.]])
>>> torch_array = torch.from_numpy(np_array)
>>> torch_array.add_(1.0)
>>> np_array
array([[2., 2.],
       [2., 2.]])

The original numpy array was changed, because torch.from_numpy() is a zero-copy operation: both objects share the same underlying memory.
ZERO-COPYING TENSORS
The difference between in-place and standard operations might not be so clear in some cases:

>>> np_array
array([[1., 1.],
       [1., 1.]])
>>> torch_array = torch.from_numpy(np_array)
>>> np_array = np_array + 1.0
>>> torch_array
tensor([[1., 1.],
        [1., 1.]], dtype=torch.float64)

The expression np_array + 1.0 created a new numpy array and rebound the np_array name to it, so the memory shared with torch_array was never touched. However, if you use np_array += 1.0, that is an in-place operation that will change torch_array's memory.
DATA POINTERS
(Diagram: a numpy PyArrayObject and a PyTorch FloatTensor, each with their own object fields, both holding the same data_pointer.)
The FloatTensor copied the numpy array's data pointer, not its contents, so both objects refer to the same raw data. The reference is kept safe by the Python reference counting mechanism.
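A small check (a sketch, not from the original slides) that the numpy array and the tensor created with torch.from_numpy() really point to the same memory address:

>>> np_array = np.ones((2, 2))
>>> torch_array = torch.from_numpy(np_array)
>>> np_array.__array_interface__['data'][0] == torch_array.data_ptr()
True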
TENSOR STORAGE
The abstraction responsible for holding the data isn't actually the Tensor, but the Storage.

struct C10_API StorageImpl final : (...) {
  // (...)
 private:
  // (...)
  caffe2::TypeMeta data_type_;
  DataPtr data_ptr_;
  int64_t numel_;
  Allocator* allocator_;
};

It holds a pointer to the raw data and contains information such as the size and the allocator;
Storage is a dumb abstraction: there is no metadata telling us how to interpret the data it holds;
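You can peek at a tensor's storage from Python; a minimal sketch (the exact storage class and printed values vary across PyTorch versions):

>>> t = torch.ones(2, 2)
>>> storage = t.storage()
>>> len(storage)             # number of elements held by the storage
4
>>> storage.element_size()   # bytes per element (float32)
4
>>> storage.data_ptr()       # address of the raw data (value will differ)
140188061175872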
TENSOR STORAGE
The Storage abstraction is very powerful because it decouples the raw data from how we interpret it;
We can have multiple tensors sharing the same storage but with different interpretations (also called views), without duplicating memory:

>>> tensor_a = torch.ones((2, 2))
>>> tensor_b = tensor_a.view(4)
>>> tensor_a_data = tensor_a.storage().data_ptr()
>>> tensor_b_data = tensor_b.storage().data_ptr()
>>> tensor_a_data == tensor_b_data
True

tensor_b is a different view (interpretation) of the same data present in the underlying storage that is shared between both tensors.
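Because the storage is shared, a write through one view is visible through the other; a small sketch (continuing the example above) to illustrate:

>>> tensor_b[0] = 42.0   # write through the flat view
>>> tensor_a             # the 2x2 view sees the same change
tensor([[42.,  1.],
        [ 1.,  1.]])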
MEMORY ALLOCATORS (CPU/GPU)
The tensor storage can be allocated either in CPU memory or on the GPU, therefore a mechanism is required to switch between these different allocations:

struct Allocator {
  virtual ~Allocator() {}
  virtual DataPtr allocate(size_t n) const = 0;
  virtual DeleterFnPtr raw_deleter() const {...}
  void* raw_allocate(size_t n) {...}
  void raw_deallocate(void* ptr) {...}
};

There are Allocators that use CUDA allocators such as cudaMallocHost() when the storage will be used together with the GPU, or POSIX functions such as posix_memalign() for data in CPU memory.
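From Python you can explicitly ask for pinned (page-locked) host memory, which is what the cudaMallocHost()-backed allocator provides; a small sketch (requires a CUDA-enabled build):

>>> t = torch.ones(1000)
>>> t.is_pinned()
False
>>> t_pinned = t.pin_memory()   # copies the data into page-locked host memory
>>> t_pinned.is_pinned()
True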
THE BIG PICTURE
(Diagram: a Tensor points to a Storage; the Storage holds the DataPtr to the raw data and a pointer to the Allocator.)
The Tensor has a Storage, which in turn has a pointer to the raw data and to the Allocator that allocates memory according to the destination device.
JIT - JUST-IN-TIME COMPILER
PyTorch is eager by design, which means that it is easily hackable to debug, inspect, etc;
However, this poses problems for optimization and for decoupling it from Python (the model itself is Python code);
PyTorch 1.0 introduced torch.jit, which has two main methods to convert a PyTorch model into a serializable and optimizable format;
TorchScript was also introduced as a statically-typed subset of Python;
JIT - JUST-IN-TIME COMPILER
Two very different worlds with their own requirements:
EAGER MODE: prototype, debug, train, experiment;
SCRIPT MODE: optimization, other languages, deployment.
You move from eager mode to script mode through tracing or scripting.
TRACING

def my_function(x):
    if x.mean() > 1.0:
        r = torch.tensor(1.0)
    else:
        r = torch.tensor(2.0)
    return r

>>> ftrace = torch.jit.trace(my_function, (torch.ones(2, 2)))
TRACING
To call the JIT'ed function, just call the forward() method:

>>> x = torch.ones(2, 2)
>>> ftrace.forward(x)
tensor(2.)

However, tracing will not record control flow such as if statements or loops; it executes the code with the given example inputs and records the operations into a graph. You can see this limitation below:

>>> x = torch.ones(2, 2).add_(1.0)
>>> ftrace.forward(x)
tensor(2.)

According to my_function(), the result should have been 1.0. Tracing also checks for differences between the traced graph and the Python function, but what about stochastic layers such as Dropout?
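You can inspect the recorded IR through the traced function's graph attribute; a sketch of how to check the missing control flow (the printed IR text differs between PyTorch versions):

>>> print(ftrace.graph)
# prints the recorded IR: it only contains the ops from the branch that was
# actually executed during tracing, and no control-flow (prim::If) node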
SCRIPTING
Another alternative is to use scripting, where you can use decorators such as @torch.jit.script:

@torch.jit.script
def my_function(x):
    if bool(x.mean() > 1.0):
        r = 1
    else:
        r = 2
    return r
SCRIPTING
The my_function() is now a ScriptModule:

>>> type(my_function)
torch.jit.ScriptModule

When we check the results again:

>>> x = torch.ones(2, 2)
>>> my_function(x)
2
>>> x = torch.ones(2, 2).add_(1.0)
>>> my_function(x)
1

The control-flow logic was preserved!
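Unlike the traced version, the scripted function's IR keeps both branches; a quick way to see this (again, the exact IR text varies by version):

>>> print(my_function.graph)
# the compiled IR now contains a prim::If node with both branches,
# which is why the control flow survives compilation and serialization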
WHY TORCHSCRIPT ?
The concept of having a well-defined Intermediate Representation (IR) is very powerful; it is also the main concept behind the LLVM platform;
This opens the door to:
Decouple the model (computational graph) from the Python runtime;
Use it in production with C++ (no GIL) or other languages;
Capitalize on whole-program optimizations;
Split the development world of hackable and easy-to-debug code from the world of putting these models in production and optimizing them.
BUILDING THE IR
To build the IR, PyTorch leverages the Python Abstract Syntax Tree (AST), which is a tree representation of the syntactic structure of the source code.

>>> ast_mod = ast.parse("print(1 + 2)")
>>> astpretty.pprint(ast_mod.body[0], show_offsets=False)
Expr(
    value=Call(
        func=Name(id='print', ctx=Load()),
        args=[
            BinOp(
                left=Num(n=1),
                op=Add(),
                right=Num(n=2),
            ),
        ],
        keywords=[],
    ),
)
EXECUTING
Just like the Python interpreter executes your code, PyTorch has an interpreter that executes the IR instructions:

bool runImpl(Stack& stack) {
  auto& instructions = function->instructions;
  size_t last = instructions.size();
  while (pc < last) {
    auto& inst = instructions[pc];
    try {
      loadTensorsFromRegisters(inst.inputs, stack);
      size_t new_pc = pc + 1 + inst.callback(stack);
      for (int i = inst.outputs.size - 1; i >= 0; --i) {
        int reg = get(inst.outputs, i);
        registers[reg] = pop(stack);
      }
      pc = new_pc;
      // (...) omitted
OPTIMIZATIONS
Many optimizations can be applied to the computational graph of the model, such as Loop Unrolling:

// before: one iteration at a time
for (i ..; i += 1)
  for (j ..)
    code(i, j)

// after: unrolled by a factor of 4, plus a remainder loop for leftover iterations
for (i ..; i += 4)
  for (j ..)
    code(i, j)
    code(i + 1, j)
    code(i + 2, j)
    code(i + 3, j)
USING THE MODEL IN C++
PyTorch also has a C++ API that you can use to load/train models in C++. This is good for production, mobile, embedded devices, etc.
Example of loading a traced model with the PyTorch C++ API:

#include <torch/script.h>

int main(int argc, const char* argv[]) {
    auto module = torch::jit::load("resnet.pt");
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));
    at::Tensor output = module->forward(inputs).toTensor();
}
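The "resnet.pt" file loaded above is produced on the Python side; a minimal sketch of how such a traced model could be exported (the model and file names here are just placeholders):

import torch
import torchvision

# trace a torchvision ResNet with an example input and serialize it to disk
model = torchvision.models.resnet18(pretrained=True).eval()
example = torch.ones(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("resnet.pt")   # this file can now be loaded from C++ with torch::jit::load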
ISSUES WITH TUTORIALS
Be careful with online tutorials using Flask, etc. They are simple, but they often fail on good practices:
They often use JSON and base64 to serialize images, which adds ~33% overhead per call (for uncompressed data);
They don't pay attention to zero-copy practices, so they often transform, reshape, convert to numpy, convert to PyTorch, etc;
They often use HTTP/1;
They seldom do batching (important for GPUs);
They never put that "production" code in production.
PREFER BINARY SERIALIZATION FORMATS
Prefer good binary serialization methods such as Protobuf, which offers a schema and a schema-evolution mechanism. Example from the EuclidesDB RPC messages:

message AddImageRequest {
    int32 image_id = 1;
    bytes image_data = 2;
    // This field can encode JSON data
    bytes image_metadata = 3;
    repeated string models = 4;
}

* http://euclidesdb.readthedocs.io
AVOID EXTRA COPIES
Be careful to avoid extra copies of your tensors, especially during pre-processing;
You can use in-place operations. They are a functional anti-pattern because they introduce side-effects, but that is a fair price to pay for performance;
Caveat: in-place operations don't make much sense when you need gradients. PyTorch uses tensor versioning to catch that:

>>> a = torch.tensor(1.0, requires_grad=True)
>>> y = a.tanh()
>>> y.add_(2.0)
>>> y.backward()  # error!
>>> a._version
0
>>> y._version
1
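Combining the serialization and zero-copy tips: if an image arrives as raw bytes (e.g. the image_data field of the Protobuf message above), it can become a tensor without extra copies; a sketch, assuming the bytes are already raw uint8 pixels in HxWxC layout (function name and parameters are illustrative):

import numpy as np
import torch

def bytes_to_tensor(image_data, height, width):
    # np.frombuffer reinterprets the buffer without copying it;
    # note: if image_data is an immutable bytes object, the array is
    # read-only and PyTorch may warn, so pass a writable buffer
    # (e.g. bytearray) if you plan to mutate the tensor in place
    array = np.frombuffer(image_data, dtype=np.uint8)
    array = array.reshape(height, width, 3)
    # torch.from_numpy shares the same memory: still no copy
    return torch.from_numpy(array)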
A TALE OF TWO HTTPS
(Diagram: request/response timelines for HTTP 1.0, HTTP 1.1 with pipelining, HTTP 1.1 head-of-line blocking, and HTTP 2.0 multiplexing.)
Use HTTP 2.0 if possible, and avoid head-of-line blocking;
Even better, you can use frameworks such as gRPC, which uses HTTP/2.0 and Protobuf.
BATCHING
Batching data is a way to amortize the performance bottleneck.
(Diagram: without batching, each request hits the GPU individually; with batching, requests are grouped into batches before being sent to the GPU.)
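A minimal sketch of the idea (not from the slides): collect the tensors of several pending requests and run a single forward pass over the stacked batch. The helper name and shapes are hypothetical.

import torch

def predict_batched(model, pending_requests):
    # pending_requests: list of tensors shaped (3, 224, 224), one per request
    batch = torch.stack(pending_requests)   # shape: (N, 3, 224, 224)
    with torch.no_grad():
        outputs = model(batch)               # one GPU call amortized over N requests
    return [output for output in outputs]    # split the results back per request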