Many web developers treat the browser as a black box and ignore the complexity of the network underneath. This talk is an introduction to the excellent book "High Performance Browser Networking" by Ilya Grigorik.
Table: Some latencies

Route                      Distance    Light in vacuum   Light in fiber
Taipei to Tokyo            2115 km     7 ms              10 ms
Taipei to Paris            9845 km     32 ms             49 ms
Taipei to SF               10346 km    34 ms             51 ms
Equatorial circumference   40075 km    133 ms            200 ms

Rule of thumb: speed of light in fiber is about 2 × 10^8 m/s.

Grégoire [email protected] — Understand the network — April 30, 2014
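The rule of thumb above turns into a quick back-of-the-envelope calculator (a minimal sketch; the distances come from the table, and the result is a physical floor — real routes are longer and routers add queuing delay):

```python
# One-way propagation delay from the rule of thumb:
# light travels through fiber at roughly 2 * 10^8 m/s.
FIBER_SPEED_M_S = 2e8

def fiber_delay_ms(distance_km: float) -> float:
    """Best-case one-way latency in milliseconds over a fiber path."""
    return distance_km * 1000 / FIBER_SPEED_M_S * 1000

for route, km in [("Taipei to Tokyo", 2115),
                  ("Taipei to Paris", 9845),
                  ("Equatorial circumference", 40075)]:
    print(f"{route}: {fiber_delay_ms(km):.0f} ms")
```

Note how close the computed values land to the "light in fiber" column: at these distances, the speed of light itself is the dominant cost, which is why reducing round trips matters so much.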
Measure latency

traceroute (Linux), tracert (Windows)

$> traceroute google.com
traceroute to google.com (74.125.224.102), 64 hops max, 52 byte packets
 1  10.1.10.1 (10.1.10.1)  7.120 ms  8.925 ms  1.199 ms
 2  96.157.100.1 (96.157.100.1)  20.894 ms  32.138 ms  28.928 ms
 3  x.santaclara.xxxx.com (68.85.191.29)  9.953 ms  11.359 ms  9.686 ms
 4  x.oakland.xxx.com (68.86.143.98)  24.013 ms  21.423 ms  19.594 ms
 5  68.86.91.205 (68.86.91.205)  16.578 ms  71.938 ms  36.496 ms
 6  x.sanjose.ca.xxx.com (68.86.85.78)  17.135 ms  17.978 ms  22.870 ms
 7  x.529bryant.xxx.com (68.86.87.142)  25.568 ms  22.865 ms  23.392 ms
 8  66.208.228.226 (66.208.228.226)  40.582 ms  16.058 ms  15.629 ms
 9  72.14.232.136 (72.14.232.136)  20.149 ms  20.210 ms  18.020 ms
10  64.233.174.109 (64.233.174.109)  63.946 ms  18.995 ms  18.150 ms
11  x.1e100.net (74.125.224.102)  18.467 ms  17.839 ms  17.958 ms
A few shortcomings

No state.
No acknowledgement.
No congestion control.
No message splitting.
No ordered-delivery guarantee.
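A minimal sketch of those properties using Python's socket API: a datagram is sent with no handshake, no acknowledgement, and no retransmission (it arrives here only because both ends sit on loopback):

```python
import socket

# A receiver bound to an ephemeral port on loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# The sender just fires the datagram: no connection setup, no ack,
# no congestion control, and no guarantee of delivery or ordering.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

data, peer = receiver.recvfrom(1024)
print(data)  # b'hello' -- reliable here only because it never left the host

sender.close()
receiver.close()
```

Over a real network, any of those datagrams could be dropped, duplicated, or reordered, and neither side would be told; that is the trade UDP makes for its low overhead.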
Computational cost

“SSL/TLS is not computationally expensive anymore.”
— Adam Langley (Google), 2010

“We have found that modern software-based TLS implementations running on commodity CPUs are fast enough to handle heavy HTTPS traffic load without needing to resort to dedicated cryptographic hardware. We serve all of our HTTPS traffic using software running on commodity hardware.”
— Doug Beaver (Facebook)
Session resumption

Re-use an already negotiated key to avoid a full round trip. A must-have in most circumstances.
Problem on the server side: it must maintain a cache entry for each session ID. This gets even trickier when multiple servers need to share the same cache.
Session ticket

Encrypt all the session data and store it on the client; only the server knows the key. No server-side caching is needed. But every server must be initialized with the same secret key (a concern with multiple servers behind a load balancer), and some infrastructure is needed to roll out new keys.
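Both mechanisms map onto real nginx directives; a hedged sketch of a typical configuration (the file path is a placeholder):

```nginx
ssl_session_cache      shared:SSL:10m;          # resumption cache shared across worker processes
ssl_session_timeout    10m;                     # how long a session stays resumable
ssl_session_tickets    on;                      # session tickets (RFC 5077)
ssl_session_ticket_key /etc/nginx/ticket.key;   # must be identical on every server behind the LB
```

With `ssl_session_ticket_key` left unset, each nginx instance generates its own random key, which silently defeats resumption across a load-balanced pool — exactly the multi-server concern above.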
TLS record

Figure: TLS record

A record can be split across multiple TCP packets, but the record must be complete before its data can be decrypted.
Google’s implementation

As of early 2014: the TLS record size is set to fit into one TCP packet for the first 1 MB, then raised to 16 KB. It is reset to the initial value after ≈ 1 s of inactivity.
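That behaviour can be sketched as a small state machine (an illustrative model, not Google's actual code; the constants follow the slide, and 1400 bytes is an assumed approximation of one TCP packet's payload):

```python
import time

SMALL_RECORD = 1400        # fits one TCP packet (approx. one MSS)
LARGE_RECORD = 16 * 1024   # maximum TLS record size
BOOST_BYTES = 1 << 20      # first 1 MB is sent with small records
IDLE_RESET_S = 1.0         # reset after ~1 s of inactivity

class RecordSizer:
    """Dynamic TLS record sizing: start small, grow, reset when idle."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.sent = 0
        self.last = clock()

    def next_size(self) -> int:
        now = self.clock()
        if now - self.last > IDLE_RESET_S:
            self.sent = 0          # connection went idle: start small again
        self.last = now
        return SMALL_RECORD if self.sent < BOOST_BYTES else LARGE_RECORD

    def note_sent(self, n: int) -> None:
        self.sent += n
```

Small records keep time-to-first-byte low while the congestion window is small; large records amortize the per-record framing and MAC overhead once the connection is warmed up.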
Rapid history

HTTP 0.9: limited to HTML and ASCII. Connection closed after the response.
HTTP 1.0: introduced status codes and request/response headers. The response is no longer limited to HTML. Connection still closed after the response.
HTTP 1.1: connection re-use by default. Complex, with tons of options (content encoding, caching directives, language negotiation, and so on).
HTTP 2.0: work in progress. SPDY is a working implementation.
Browser optimizations

Modern browsers do a lot of work under the hood to provide a better experience:
DNS pre-resolve and TCP pre-connect.
Speculative optimization.
Resource pre-fetching and prioritization.
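Pages can also opt in to some of these optimizations explicitly through resource hints (standard `<link rel>` values; the URLs are placeholders):

```html
<link rel="dns-prefetch" href="//cdn.example.com">        <!-- resolve the hostname early -->
<link rel="preconnect" href="https://cdn.example.com">    <!-- DNS + TCP (+ TLS) ahead of time -->
<link rel="prefetch" href="/likely-next-page.js">         <!-- fetch a probably-needed resource at idle -->
```

These hints let the page tell the browser what its heuristics cannot guess, such as a third-party host that will only appear after a script runs.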
Pipelining concerns

Clients and servers must both support it. The application must handle aborted connections. It must be protected from broken intermediaries (use HTTPS).
Head-of-line blocking: this problem is what actually makes pipelining hard to use in practice. Instead, browsers open 6 TCP connections to a server and process requests in parallel.
Multiple TCP connections

In practice, browsers open 6 TCP connections per host. Why 6? Experimentation.
Pros: no head-of-line blocking; more bytes can be sent within the first cwnd.
But: more complexity to manage the socket collections; more memory and CPU; limited application parallelism; resource exhaustion.