I’m tremendously excited this week; it looks like the work I did on supporting asynchronous sockets and select in Jython has paid off.
CPython excels in a very specific area: asynchronous (aka non-blocking) network servers. Back in the days before multi-threading was invented, asynchronous designs were the only available mechanism for managing more than one connection in a single process. Although it was possible to spawn new processes to service incoming requests, that made communicating and sharing information and resources (e.g. database connections) between server components considerably more complex.
With an asynchronous design, all of the incoming socket connections are managed by a single process. Current socket connections are kept in a simple data structure, such as a list, along with state information relating to each connection, and perhaps a handler, i.e. a piece of code that services the incoming requests. The single process uses the unix select call (or one of its descendants, such as poll) to monitor the state of every connection simultaneously, invoking the associated handlers when a network event of interest to them occurs (i.e. the reactor pattern). And therein lies the problem.
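To make the pattern concrete, here is a minimal sketch of such a reactor: one process, one select() call watching every connection in a plain list, and a handler invoked for each event. The names (`connections`, `handle`) and the single-iteration loop are my own illustration, not any particular framework’s API.

```python
import select
import socket

# A listening socket on an ephemeral loopback port.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen(5)
server.setblocking(False)

connections = [server]          # the "simple data structure" of live sockets

def handle(sock):
    """Handler: it must return quickly, or it stalls every other client."""
    if sock is server:
        conn, _ = sock.accept()         # new incoming connection
        conn.setblocking(False)
        connections.append(conn)
    else:
        data = sock.recv(4096)
        if data:
            sock.sendall(data)          # a trivial echo service
        else:                           # peer closed the connection
            connections.remove(sock)
            sock.close()

# One iteration of the event loop; a real server loops forever.
readable, _, _ = select.select(connections, [], [], 0.5)
for sock in readable:
    handle(sock)
```

A real server would wrap the last four lines in `while True:`; everything else stays the same no matter how many clients connect.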
When the async server calls the handler to service a request, that handler cannot delay before it returns; it must return very quickly, or it holds up the entire server. Returning quickly is easy if you’re doing trivial processing, e.g. an echo or chat server. But if you need to connect to a database, or even do something as simple as a DNS lookup, that requires forming another network connection, sending a request packet and waiting for a response. If your database or DNS library blocks until the response arrives, hundreds of milliseconds or even seconds later, then the entire network server is held up, and clients may wait seconds for service; clearly unacceptable. So with asynchronous servers, you must always use libraries that are aware they are operating in an asynchronous environment, can surrender control while awaiting a network response, and can add themselves to the server’s reactor so that they are notified when their response arrives.
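The mechanics of “surrendering control” come down to non-blocking sockets. A minimal sketch, using only the standard socket and select modules: instead of blocking inside connect(), the library starts a non-blocking connect, hands the socket to the reactor, and resumes only when select() reports it writable. The listening socket here just gives the connect something local to reach.

```python
import select
import socket

# Something local to connect to, purely for illustration.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket()
client.setblocking(False)
# connect_ex() returns immediately; on a non-blocking socket an
# EINPROGRESS/EWOULDBLOCK result is normal, not an error.
client.connect_ex(listener.getsockname())

# An async-aware library would now register `client` with the reactor
# and return control. Waiting for writability signals completion:
_, writable, _ = select.select([], [client], [], 5.0)
assert client in writable   # connection established; handler may resume
```

The key point is that no call in this sequence waits on the network except select() itself, which is exactly the call the reactor was already making on behalf of every other connection.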
While asynchronous architectures thrived in many areas, they proved to be overly complex for the mainstream programming market. Making all libraries async-aware, and tracking all of the interactions between them, was complex, difficult to understand, and thus error-prone. Multi-threading was invented to circumvent these problems. Instead of a single thread of execution controlling network service, separate threads of execution could be created, giving the illusion that there were multiple copies of the server code executing simultaneously, inside the same process, sharing memory, etc. In reality, all the threads were running on the same linear processor, and the available timeslices were divided between them, in much the same way that most modern operating systems timeslice a single processor between multiple processes. The threads would generally run the same instructions, but each would have its own program counter and call stack. Threads made programmers’ lives much easier, since threads could be designed to execute in a linear fashion. If a thread needed to pause at any stage, e.g. to await the response to a network request, it simply suspended itself, and a different thread would be selected and given CPU time. At low cost.
Well, not quite.
There are problems with threads. Threads can have very large memory overheads, perhaps measured in megabytes each. Threads must be synchronised so that they don’t corrupt shared data structures; race conditions can be very hard to recognise when they happen, and complex to avoid in design. And threads can deadlock, which can be enormously difficult to debug.
The overhead of threads is such that it is rare to see a thread-based server outperform a well designed asynchronous server on identical hardware. Threaded servers rarely service more than a couple of thousand network connections simultaneously; many can only service hundreds of requests per second. There have been incredible increases in all forms of hardware capacity in recent decades; commodity boxes come with gigahertz processors, gigabytes of memory, terabytes of hard disk, and gigabits of network bandwidth. But although thread-based servers have vastly improved in design, they still cannot match the performance of asynchronous servers, nor address the C10K problem.
The C10K problem is a simple one: how to service 10,000 simultaneous network connections. I refer you to the C10K website for enormously detailed technical information on this complex problem. Suffice it to say that asynchronous architectures, with their much smaller memory usage and their freedom from locking, synchronisation and context-switching, are generally considered far more performant than threaded architectures.
So where does Python come in? It so happens that Python is an excellent language for writing asynchronous servers. Python’s expressiveness and conciseness make it an excellent choice for developing simple and elegant solutions to the problems posed by asynchronous designs. Python was first published in 1991, years before Java, and did not develop multi-threading until much later; Python’s threading design borrows much from Java’s. Before threading came to Python, asynchronous designs were the approach of choice; none more so than Sam Rushing’s medusa, which is still considered the archetypal asynchronous design, and which was adapted into Python as the asyncore module.
But asyncore is basic, and doesn’t provide non-blocking support for the protocols necessary in a real-world application: database protocols, DNS lookups, etc. There are, however, two heavyweights in the Python asynchronous world that have the required support, in a robust form. Zope Corporation was there first with their venerable Zope product, arguably the grand-daddy of all Python web frameworks. But dissatisfaction with Zope’s complexity led to a new initiative: Twisted. I won’t try to introduce Twisted here; take a look at the names of the sponsors on the Twisted website to see how important a framework it is.
And that’s why I’m excited; Twisted could soon be running on Jython!
Back in the old days, Jython 2.1 could not do asynchronous socket operations. Jython 2.1 was written in the days when the only socket support in Java was java.net, which was threading-oriented. Asynchronous support only arrived in version 1.4 of the JVM, in the shape of the java.nio package. Now, after several years in the field, Java has robust asynchronous support.
I saw the possibility of using java.nio to enable Jython to do asynchronous sockets a couple of years ago. I developed the support and checked it into Jython 2.2 last year, in the hope that it would draw the attention of the developers of the big CPython asynchronous frameworks. Java has some distinct advantages over CPython in the high-performance network serving world, the most obvious being the ability to run on all cores of a multi-core processor simultaneously. CPython is restricted in this regard by its Global Interpreter Lock, which prevents multiple threads running pure Python code from executing on multiple cores or processors simultaneously. In Java, and thus in Jython, every single core in a multi-core system can be running its own reactor, actually simultaneously.
It’s looking more and more like those CPython asynchronous frameworks could be coming to Jython soon.
The founder and chief architect of Twisted, Glyph Lefkowitz, logged three bugs (1119, 1120, 1121) against the Jython socket module last week; I fixed them yesterday. I’m glad to say that they all related to corner cases and unusual usage; so far there have been no reports of serious bugs in the main functionality of the socket module. So hopefully the road to running Twisted on Jython is clear and obstacle-free, at least at the level of socket communications.
And the Zope people have an ongoing project to get Zope running on Jython; see the Zope-dev archives for ongoing updates.
Yes, exciting times indeed!
If you are interested in reading further about asynchronous I/O in the java.nio package, I thoroughly recommend Ron Hitchens’ book Java NIO.