import socket

# Minimal blocking server: accept a single client and print whatever it sends.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 8000))
server.listen(1)
client, addr = server.accept()
while True:
    data = client.recv(4096)
    if not data:
        # An empty read means the client closed the connection.
        break
    print(data)
client.close()
server.close()
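A minimal client sketch for exercising the server above (not part of the original slides); the message it sends is an arbitrary placeholder:

import socket

# Connect to the print-only server above and send it one message.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 8000))
client.sendall(b'hello server\n')
client.close()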
import socket

# Serve clients one at a time; a client can also ask to be disconnected
# by sending the literal bytes b'CLOSE\r\n'.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 8000))
server.listen(8)
while True:
    client, addr = server.accept()
    while True:
        data = client.recv(4096)
        if not data:
            client.close()
            break
        print(data)
        if data == b'CLOSE\r\n':
            client.close()
            break
server.close()
import threading

# The opening lines of this slide are cut off; the handler name and the
# ident assignment are reconstructed from the socketserver example below.
def handler(client):
    ident = threading.get_ident()
    print('Got connection while in %s' % ident)
    client.send(bytes('You have connected to %s\n' % ident, encoding='utf-8'))
    while True:
        data = client.recv(4096)
        if not data:
            print('Thread %s ending' % (ident))
            break
        print('Thread %s received %s' % (ident, data))
    client.close()
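The accept loop that hands connections to this handler is not shown in the fragment above; a minimal sketch, assuming the function is named handler and the server listens on 127.0.0.1:9000 (both assumptions):

import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 9000))
server.listen(8)
while True:
    client, addr = server.accept()
    # One thread per connection; the thread owns the client socket.
    threading.Thread(target=handler, args=(client,)).start()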
import socketserver
import threading

class DemoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        ident = threading.get_ident()
        print('Got connection while in %s' % ident)
        self.request.sendall(bytes('You have connected to %s\n' % ident, encoding='utf-8'))
        while True:
            data = self.request.recv(4096)
            if not data:
                break
            print(data)

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

server = ThreadedTCPServer(('127.0.0.1', 9001), DemoHandler)
server.serve_forever()
to the Unix select() system call. The first three arguments are sequences of ‘waitable objects’: either integers representing file descriptors or objects with a parameterless method named fileno() returning such an integer:

• rlist: wait until ready for reading
• wlist: wait until ready for writing
• xlist: wait for an “exceptional condition” (see the manual page for what your system considers such a condition)

Empty sequences are allowed, but acceptance of three empty sequences is platform-dependent. (It is known to work on Unix but not on Windows.) The optional timeout argument specifies a time-out as a floating point number in seconds. When the timeout argument is omitted the function blocks until at least one file descriptor is ready. A time-out value of zero specifies a poll and never blocks. The return value is a triple of lists of objects that are ready: subsets of the first three arguments. When the time-out is reached without a file descriptor becoming ready, three empty lists are returned.

rs, ws, es = select.select(conns, [], [], 20)
[Diagram: select()-based server on 127.0.0.1:7000. If the server socket is readable, a new connection has arrived and its socket is added to the list of client sockets; if a client socket is readable, call recv(): received data is processed, while no data (or a timeout) means the client is removed.]
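A minimal sketch of the select()-based loop the diagram describes; only the 20-second timeout comes from the earlier snippet, while the address, backlog, and buffer size are assumptions:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 7000))
server.listen(8)

conns = [server]  # the server socket plus all connected client sockets
while True:
    rs, ws, es = select.select(conns, [], [], 20)
    if not rs:
        # Timeout: nothing became readable within 20 seconds.
        continue
    for sock in rs:
        if sock is server:
            # The server socket is readable: a new connection is waiting.
            client, addr = server.accept()
            conns.append(client)
        else:
            # A client socket is readable: call recv() on it.
            data = sock.recv(4096)
            if not data:
                # No data means the client closed the connection.
                conns.remove(sock)
                sock.close()
            else:
                print(data)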
which didn’t get approved
• “This is a proposal for asynchronous I/O in Python 3, starting at Python 3.3…[the] proposal includes a pluggable event loop, transport and protocol abstractions similar to those in Twisted, and a higher-level scheduler based on yield from (PEP 380).”
Inspecting generator state

import inspect

# The opening lines of this slide are cut off; a simple countdown
# generator is assumed so the inspection calls below make sense.
def testing(y):
    while y > 0:
        yield y
        y = y - 1

x = testing(5)
print(inspect.getgeneratorstate(x))
print(inspect.getgeneratorlocals(x))
print(next(x))
print(inspect.getgeneratorlocals(x))
print(next(x))
print(inspect.getgeneratorlocals(x))
print(next(x))
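Assuming the countdown generator reconstructed above, these calls would print roughly the following: the state is GEN_CREATED and the locals dict is empty before the first next(), after which the locals track y as the generator is resumed:

GEN_CREATED
{}
5
{'y': 5}
4
{'y': 4}
3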
# The slide's opening line is cut off; testing3() is reconstructed as a
# two-value generator based on the '10 yield 20' fragment.
def testing3():
    yield 10
    yield 20

def testing2():
    yield 100
    yield from testing3()
    yield 200

def testing1():
    yield 100
    yield testing3()
    yield 200

print([x for x in testing1()])
print([x for x in testing2()])
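With that reconstruction, the two prints would show the difference between yield and yield from: testing1() yields the testing3() generator object itself (its repr will vary from run to run), while testing2() delegates to it and flattens its values:

[100, <generator object testing3 at 0x...>, 200]
[100, 10, 20, 200]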
magically use more CPUs
◦ if processing the data is CPU intensive, then you will want to distribute that load over multiple CPUs
◦ asyncio has support for handling callbacks within threads, but this has the same kinds of issues as threading in Python normally has (one way to push CPU-bound work out of the event loop is sketched after this list)
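A minimal sketch of offloading CPU-bound work to a process pool so the event loop stays responsive; the crunch() function, the pool size, and the argument value are illustrative assumptions, and the old-style @asyncio.coroutine / yield from syntax is used to match the surrounding examples:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Hypothetical CPU-bound work; runs in a worker process, not in the event loop.
    return sum(i * i for i in range(n))

@asyncio.coroutine
def main(loop, executor):
    # run_in_executor hands the blocking call to the pool and returns a
    # future the event loop can wait on without stalling other tasks.
    result = yield from loop.run_in_executor(executor, crunch, 10000000)
    print('CPU-bound result:', result)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    with ProcessPoolExecutor(max_workers=4) as executor:
        loop.run_until_complete(main(loop, executor))
    loop.close()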
import asyncio
import aiohttp
import asyncssh

@asyncio.coroutine
def get_http_body(url):
    print('Fetching HTTP file')
    response = yield from aiohttp.request('GET', url)
    print('Got HTTP response')
    return (yield from response.read())

@asyncio.coroutine
def fetch_file_from_sftp(hostname, username, filename):
    with (yield from asyncssh.connect(hostname, username=username)) as conn:
        print('Connected to SSH server')
        with (yield from conn.start_sftp_client()) as sftp:
            print('Retrieving file')
            yield from sftp.get(filename)
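A hedged sketch of driving both coroutines from one event loop; the URL, hostname, username, and filename below are placeholders, not values from the original:

loop = asyncio.get_event_loop()
tasks = [
    get_http_body('http://example.com/data.txt'),
    fetch_file_from_sftp('sftp.example.com', 'demo', 'data.txt'),
]
# gather() schedules both coroutines on the same loop and waits for both.
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()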