• instrumented version of a j.u.c.ExecutorService • pluggable control mechanism to grow or shrink the pool • provides stats, e.g. QUEUE_LATENCY, TASK_ARRIVAL_RATE, etc. • task queue & j.u.c.RejectedExecutionException
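A minimal sketch of building such an executor with aleph.flow (backed by dirigiste); the utilization target and thread cap below are illustrative values, not recommendations:

  (require '[aleph.flow :as flow])

  ;; grows/shrinks the pool to keep ~90% utilization, capped at 512 threads
  (def executor
    (flow/utilization-executor 0.9 512))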
non-blocking IO • unified communication design built around Pipeline, Channel & Handler • a lot of ready-to-use handlers and codecs • ChannelFuture, ChannelPromise • *ByteBuf, zero-copy, smart allocations, leak detector
"low-level" • aleph.netty defines a lot of bridges • helpers to deal with ByteBufs • ChannelFuture → manifold's deferred • Channel represented as manifold's stream • a few macros to define ChannelHandlers • a lot more!
defines SSL context injection • builds io.netty.bootstrap.ServerBootstrap • sets childHandler to pipeline-initializer • binds to the socket and waits for the Channel to be ready
wait until the Channel is registered on Netty's Pipeline instance • call the pipeline-builder provided as an argument • passing the instance of Pipeline as an argument • clean up after itself
up handlers, notably HttpServerCodec, HttpServerExpectContinueHandler • request handler is either ring-handler or raw-ring-handler • main task: read the HTTP request and pass it to handle-request
handler (provided by the user) on a given executor • (or inlined!) • catches j.u.c.RejectedExecutionException and passes it to rejected-handler • (which by default answers with 503) • sends the response when ready
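A minimal sketch of wiring this together, assuming the :executor and :rejected-handler options described above:

  (require '[aleph.http :as http]
           '[aleph.flow :as flow])

  (defn handler [req]
    {:status 200 :headers {"content-type" "text/plain"} :body "hello"})

  (http/start-server handler
    {:port 8080
     ;; run user handlers on a dedicated, instrumented executor
     :executor (flow/utilization-executor 0.9 512)
     ;; called on j.u.c.RejectedExecutionException (503 by default)
     :rejected-handler (fn [req] {:status 503 :body "overloaded"})})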
> Host: localhost:8080 ... • RFC 7230 #5.4 • ... a server MUST respond with a 400 ... • this is enforced neither by Aleph nor by Netty • it's not that practical nowadays...
TCP connection for multiple HTTP requests/responses • HTTP/1.0 Connection: keep-alive • HTTP/1.1: all connections are persistent by default • HTTP/1.1 Connection: close when necessary
uses pools • aleph.http/connection-pool builds flow/instrumented-pool • the generate callback in the instrumented-pool creates a new connection • the keep-alive? option is set to true by default
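A minimal sketch of building a custom pool and using it for a request (option values are illustrative):

  (require '[aleph.http :as http])

  (def pool
    (http/connection-pool
      {:connections-per-host 8
       :connection-options {:keep-alive? true}}))

  @(http/get "http://localhost:8080/" {:pool pool})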
is set here • meaning you can mess this up a bit by setting the header manually • Aleph server detects the keep-alive "status" here and here • and uses it here to send the response • Aleph server adds the header automatically • ... still!
a public API • connection is a function: request → deferred response • manifold streams to represent requests & responses (we can have many) • netty/create-client to build Netty's channel
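A minimal sketch of the request → deferred response contract, using manifold to chain on the result:

  (require '[aleph.http :as http]
           '[manifold.deferred :as d])

  (-> (http/get "http://localhost:8080/")
      (d/chain :status (partial println "status:"))
      (d/catch Exception #(println "request failed:" %)))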
set a few options: SO_REUSEADDR, MAX_MESSAGES_PER_READ • detects whether to use EpollEventLoopGroup or NioEventLoopGroup • sets handler to pipeline-initializer • connects to the remote address
here! • updates the Pipeline instance with a few new handlers, most notably: • HttpClientCodec with appropriate settings • the "main" handler with Aleph's client logic • the pipeline-transform option might be useful to rebuild the Pipeline when necessary
• raw-client-handler returns the body as a manifold stream of Netty's ByteBufs • client-handler converts the body to an InputStream of bytes (additional copying but less friction) • both implementations are kinda tricky • most of the complexity: buffering all the way down & chunks
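A minimal sketch of the raw variant on the client: with :raw-stream? the body is a manifold stream of ByteBufs that the consumer must release explicitly:

  (require '[aleph.http :as http]
           '[manifold.stream :as s])
  (import '(io.netty.buffer ByteBuf)
          '(io.netty.util ReferenceCountUtil)
          '(java.nio.charset StandardCharsets))

  (def raw-pool
    (http/connection-pool {:connection-options {:raw-stream? true}}))

  (let [{:keys [body]} @(http/get "http://localhost:8080/" {:pool raw-pool})]
    (s/consume
      (fn [^ByteBuf buf]
        (println (.toString buf StandardCharsets/UTF_8))
        (ReferenceCountUtil/release buf))
      body))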
• "jumps" to the executor specified (or default-response- executor) • this might throw j.u.c.RejectedExecutionException • aleph.http/request is responsible for cleaning up after response is ready and on timeouts • also responsible for "top-level" middlewares: redirects & cookies
connection-pool) • waits for the connection to be realized (either ready/reused or connecting) • "sends" the request by applying the connection function • chains on the response and waits for :aleph/complete • disposes the connection from the pool when not keep-alive and on errors
PoolTimeout, ConnectionTimeout, RequestTimeout, ReadTimeout • never perform async operations w/o timeout • flexible error handling, easier to debug (reasoning is different) • you need this when implementing proxies or deciding on retries
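A minimal sketch with each timeout given explicitly (values in milliseconds, purely illustrative):

  (require '[aleph.http :as http])

  @(http/get "http://localhost:8080/slow"
     {:pool-timeout       100     ;; acquiring a connection from the pool
      :connection-timeout 1000    ;; establishing the TCP connection
      :request-timeout    5000    ;; the whole request/response exchange
      :read-timeout       10000}) ;; reading the response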
be "persistent" forever • not always the best option ! • idle-timeout option is available both for the client and the server since • when set, just updates the Pipeline builder • heavy lifting is done by Netty's IdleStateHandler • catching IdleStateEvent to close the connection
waiting on responses • "allowed" with HTTP/1.1, not widely used (e.g. not used in modern browsers) • might dramatically reduce the number of TCP/IP packets • Aleph • supports pipelining on the server • does not support pipelining on the client
to trace what's going on with your connections • at least state changes: opened, closed, acquired, released • easiest way: inject a ChannelHandler that listens to all events and logs them • to catch acquire and release you need to wrap flow/instrumented-pool
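A minimal sketch of the "easiest way": injecting Netty's LoggingHandler via :pipeline-transform so channel events (registered, active, inactive, reads, writes, ...) get logged:

  (require '[aleph.http :as http])
  (import '(io.netty.handler.logging LoggingHandler LogLevel))

  (http/start-server handler
    {:port 8080
     :pipeline-transform
     (fn [pipeline]
       (.addFirst pipeline "connection-logger"
                  (LoggingHandler. LogLevel/DEBUG)))})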
getting a header by name is not just taking an element from a map • multiple values are concatenated with "," • not always the best option • required by the Ring Spec
sending the request • generates a random boundary, sets the appropriate header • would be helpful to "remember" the boundary generated in the request (e.g. for testing)!
) • clj-http uses org.apache.http.entity.mime.MultipartEntityBuilder • Aleph implements it "from scratch" on the client • supported Content-Transfer-Encodings • no server-side support • yada's implementation with manifold's stream
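A minimal sketch of the client-side support; the part keys shown (:name, :content, :mime-type) are the commonly used ones, see aleph.http.multipart for the full set:

  (require '[aleph.http :as http]
           '[clojure.java.io :as io])

  @(http/post "http://localhost:8080/upload"
     {:multipart [{:name "title" :content "cat picture"}
                  {:name      "file"
                   :content   (io/file "/tmp/cat.png")
                   :mime-type "image/png"}]})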
transmit body • server replies with status code 100 Continue or 417 Expectation Failed • client sends the body • potentially less pressure on the network when sending large requests • rarely used in practice
the request • server: detecting last chunk of the request • client: reading the body • client: detecting last chunk • :max-chunk-size and :response-buffer-size options
when :body is a seq, iterator or stream • if the Content-Length header is not set explicitly • detecting client disconnects is still kinda tough • think about buffering and throttling in advance, this talk might help
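A minimal sketch of a chunked response: returning a manifold stream as :body (with no Content-Length) makes Aleph send the body chunk by chunk:

  (require '[aleph.http :as http]
           '[manifold.stream :as s])

  (defn streaming-handler [_]
    (let [body (s/stream 16)]
      ;; a toy producer: emits ten chunks, then closes the stream
      (future
        (dotimes [i 10]
          @(s/put! body (str "chunk " i "\n")))
        (s/close! body))
      {:status 200
       :headers {"content-type" "text/plain"}
       :body body}))

  (http/start-server streaming-handler {:port 8080})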
forces send-file-body to use send-chunked-file instead of send-file-region • why? send-file-region uses zero-copy file transfer with io.netty.channel.DefaultFileRegion • which does not support user-space modifications, e.g. compression!
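A minimal sketch of a file response; Aleph takes the zero-copy path when nothing (SSL, compression) forces the chunked one:

  (require '[clojure.java.io :as io])

  (defn file-handler [_]
    {:status 200
     :headers {"content-type" "application/octet-stream"}
     :body (io/file "/var/data/big.bin")})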
Protocol" • handshaking using HTTP Upgrade header (compatibility) • Aleph uses manifold's SplicedStream to represent duplex channel • supports Text and Binary frames, replies to Ping frames • a lot of cases and corners in the protocol (duplex communication is hard)
mind the difference with aleph.http/websocket-connection! • http.client/websocket-connection builds a Channel with netty/create-client • websocket-client-handler creates a duplex stream and a handler
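A minimal sketch of the client side: the connection is a single manifold duplex stream, so put! sends a frame and take! receives one:

  (require '[aleph.http :as http]
           '[manifold.stream :as s])

  (def conn @(http/websocket-client "ws://localhost:8080/ws"))

  @(s/put! conn "hello")
  @(s/take! conn)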
the idea here is to process the HTTP/1.1 request first and then perform the "upgrade" • http.server/websocket-upgrade-request? might be useful to "test" the request
to http.server/initialize-websocket-handler • initialize-websocket-handler builds and runs the handshaker • the .websocket? mark is set to modify response-sending behavior • the Pipeline is rebuilt appropriately • 2 streams spliced into one, as for the client
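A minimal sketch of the server side: upgrade the request, then echo frames back by connecting the duplex stream to itself:

  (require '[aleph.http :as http]
           '[manifold.deferred :as d]
           '[manifold.stream :as s])

  (defn echo-handler [req]
    (-> (d/let-flow [conn (http/websocket-connection req)]
          (s/connect conn conn)
          nil)
        (d/catch (fn [_] {:status 400 :body "expected a websocket request"}))))

  (http/start-server echo-handler {:port 8080})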
events is "almost RFC" • client sends a CloseFrame before closing the connection • on receiving a CloseFrame, saves the status & reason • server sends a CloseFrame w/o closing the connection • as that will be done by Netty • Netty's behavior is "more RFC-ish"
extension since • fine-grained Ping/Pong support is still an open question • to add the ability to send http/websocket-ping manually and wait for the Pong • helpful for heartbeats, online presence detection, etc. • pipeline-transform might be used to extend both server and client
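A minimal sketch, assuming the websocket-ping helper mentioned above, which returns a deferred realized when the matching Pong arrives:

  (require '[aleph.http :as http])

  ;; conn is the duplex stream from websocket-client / websocket-connection
  @(http/websocket-ping conn)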
part is mostly parameter juggling • supports epoll detection • and a flexible configuration format for name server providers • aleph.http/create-connection uses InetSocketAddress/createUnresolved
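A hedged sketch, assuming a :dns-options map on the connection pool (the option names here follow aleph.netty/dns-resolver-group and are an assumption):

  (require '[aleph.http :as http])

  (def pool
    (http/connection-pool
      {:dns-options {:name-servers ["8.8.8.8" "1.1.1.1"]}}))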
infrastructure • used pretty heavily even for internal networks (yeah, service mesh!) • long story, available in Aleph since • the implementation is not compatible with the clj-http API, works on the connection-pool level only • heavy lifting is done by io.netty/netty-handler-proxy
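A hedged sketch of the connection-pool-level configuration; the :proxy-options keys shown are an assumption based on Aleph's client proxy support:

  (require '[aleph.http :as http])

  (def pool
    (http/connection-pool
      {:connection-options
       {:proxy-options {:host "proxy.internal" :port 3128}}}))

  @(http/get "http://example.com/" {:pool pool})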
the application layer to negotiate which protocol should be used • replaced NPN (Next Protocol Negotiation Extension) • emerged from SPDY development
should be added to the Pipeline earlier • delegates to different engines, like OpenSSL, BoringSSL or even the JDK • "Don't use the JDK for ALPN! But if you absolutely have to, here's how you do it... :)", grpc-java
7540 • high-level compatibility with HTTP/1.1 • features (notably): • compressed headers (HPACK) • server push • multiplexing over a single TCP connection • more!
does not • the Ring spec does not cover all HTTP/2 features • could be done in a separate library • with a smart fallback to Aleph on ALPN (when necessary) • work has started, very slow progress