- For example, earlier versions of some containers used a thread pool that allocated a thread per client socket connection and recycled the thread only once the connection closed. This puts a cap on the number of clients that can be supported, since beyond a few hundred threads context switching becomes severe. Newer versions of the same containers have abandoned this model for good reason. Most application servers and containers now provide a thread-per-request model, in which each HTTP request is read and processed by a thread from a thread pool, and the thread is recycled back once the request has been processed. But this goes to show that not all application servers and containers are made equal.
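The thread-per-request idea can be sketched with a plain `ExecutorService`. This is a minimal illustration, not container code: the request IDs and the handler body are hypothetical stand-ins for HTTP parsing and Servlet dispatch.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerRequestSketch {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool: a thread serves one request at a time and is then
        // recycled, instead of being tied to one client connection for life.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger processed = new AtomicInteger();

        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                // Hypothetical request handling; a real container would parse
                // the HTTP request and dispatch to a Servlet here.
                processed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("processed=" + processed.get());
    }
}
```

Ten requests are served by only four threads, which is exactly why the cap on concurrent clients disappears in this model.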
- The mechanism the server uses to read the request as part of the socket communication may also vary. Servers can adopt native IO or Java NIO to implement either the reactor pattern or the proactor pattern. The proactor pattern is more sophisticated, scalable and efficient than its reactor counterpart, but a true proactor implementation requires extensive OS support and native IO. Using Java NIO's non-blocking reads, a near-proactor implementation can be achieved.
As in the reactor approach, threads from the thread pool invoke the Servlet, but the read is actually done from an in-memory buffer rather than from the client socket channel.
Let us compare the two approaches.
In the reactor variation, since the application thread pool reads from the actual socket, memory consumption is kept low: no data is buffered up other than the actual request data that the application will use. When the possible memory consumption due to unknown request sizes is a concern, this is the optimal approach.
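A minimal reactor can be sketched with Java NIO: a `Selector` only de-multiplexes readiness events, and a pool thread then reads from the socket channel itself. This is a toy sketch, not container code; the in-process client, the port selection, and the single-read request handling are simplifying assumptions.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.*;

public class ReactorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> result = new CompletableFuture<>();

        // Simulated client: connects and sends one small request.
        new Thread(() -> {
            try (SocketChannel client = SocketChannel.open(
                    new InetSocketAddress("127.0.0.1", port))) {
                client.write(ByteBuffer.wrap(
                        "GET /demo".getBytes(StandardCharsets.UTF_8)));
            } catch (IOException ignored) { }
        }).start();

        // Reactor loop: the selector only signals readiness; the actual
        // (potentially slow) socket read happens on a pool thread.
        while (!result.isDone()) {
            selector.select(100);
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    key.cancel();                       // hand off to the pool
                    SocketChannel ch = (SocketChannel) key.channel();
                    pool.execute(() -> {
                        try {
                            ByteBuffer buf = ByteBuffer.allocate(64);
                            ch.read(buf);  // pool thread reads the socket itself
                            buf.flip();
                            result.complete(
                                StandardCharsets.UTF_8.decode(buf).toString());
                            ch.close();
                        } catch (IOException e) {
                            result.completeExceptionally(e);
                        }
                    });
                }
            }
        }
        System.out.println("request=" + result.get(5, TimeUnit.SECONDS));
        pool.shutdown();
        server.close();
    }
}
```

Note that the pool thread performs the read directly on the channel: if the client were slow, that thread would be occupied until the data trickled in, which is exactly the weakness discussed next.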
On the down-side, consider clients sending data to the server over a fragile, low-bandwidth medium such as GPRS. I happen to have some experience developing server-side communicators for GPRS. GPRS uses unused time division multiple access (TDMA) slots of a GSM system to transfer data, for example during gaps and pauses in speech. This means that data can transfer very slowly depending on the voice traffic and, moreover, it arrives in bursts rather than as a complete packet. The time gap between bursts is arbitrary, depending on the availability of time slots. So if the read is done by the Servlet in the context of the thread pool, a precious container thread stays blocked until more data arrives on the socket. This can bring down the scalability of the application, so for low-bandwidth transport mediums such a server could prove sub-optimal.
In comparison, the proactor variation would have much better throughput and scalability over low-bandwidth transport mediums, since the event de-multiplexer takes care of buffering data in memory and the thread pool reads and processes completed requests from in-memory buffers. On the down-side, this may be unnecessary when you are assured of excellent bandwidth on your LAN or WAN, and it would be sub-optimal under memory constraints. It also requires a very careful implementation of the container or server, with stringent memory handling and timeouts, and users of such containers may have to read the documentation very carefully to configure them optimally. If not, it could even lead to memory leaks.
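The near-proactor variation can be sketched by letting the de-multiplexer thread absorb each burst into a per-connection buffer, handing the request to the worker pool only once it is complete. Again a toy sketch under assumptions: the "protocol" here is newline-terminated, the two-burst client simulates a slow GPRS link, and real containers would need the memory limits and timeouts mentioned above.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.*;

public class ProactorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        ExecutorService workers = Executors.newFixedThreadPool(2);
        CompletableFuture<String> handled = new CompletableFuture<>();

        // Simulated low-bandwidth client: the request arrives in two bursts,
        // terminated by '\n'.
        new Thread(() -> {
            try (SocketChannel c = SocketChannel.open(
                    new InetSocketAddress("127.0.0.1", port))) {
                c.write(ByteBuffer.wrap("POST /da".getBytes(StandardCharsets.UTF_8)));
                Thread.sleep(200);                        // gap between bursts
                c.write(ByteBuffer.wrap("ta\n".getBytes(StandardCharsets.UTF_8)));
            } catch (Exception ignored) { }
        }).start();

        while (!handled.isDone()) {
            selector.select(100);
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    // The per-connection buffer lives in the key attachment.
                    ch.register(selector, SelectionKey.OP_READ, new StringBuilder());
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    StringBuilder acc = (StringBuilder) key.attachment();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    // The de-multiplexer absorbs each burst immediately; no
                    // worker thread ever blocks waiting for the slow client.
                    if (ch.read(buf) > 0) {
                        buf.flip();
                        acc.append(StandardCharsets.UTF_8.decode(buf));
                    }
                    int end = acc.indexOf("\n");
                    if (end >= 0) {                       // request is complete
                        String request = acc.substring(0, end);
                        key.cancel();
                        ch.close();
                        // Only completed requests reach the worker pool.
                        workers.execute(() -> handled.complete(request));
                    }
                }
            }
        }
        System.out.println("handled=" + handled.get(5, TimeUnit.SECONDS));
        workers.shutdown();
        server.close();
    }
}
```

The worker pool sees the request only after both bursts have been accumulated, at the cost of the server holding the partial buffer in memory for the whole transfer.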
Most application servers and Servlet containers actually use the reactor pattern, because most web-site-based applications can assume good bandwidth from clients, and because of the hazards of inefficient memory management. Personally, I was hit by this behavior when I used the WebLogic 8.1 Application Server and its Servlet container to handle HTTP POST data from GPRS-based clients.
In general, for designing server-side applications that use HTTP purely as a transport medium, I would recommend a proactor pattern-based approach. To handle memory constraints, a proper timeout mechanism should be implemented to detect idling clients and clean up their buffers.
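Such a timeout mechanism can be sketched as a periodic reaper over per-connection state. The connection IDs, maps, and threshold below are hypothetical; in a real server the de-multiplexer would refresh the last-activity timestamp on every read.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleBufferReaper {
    public static void main(String[] args) throws Exception {
        // Hypothetical per-connection state: last-activity timestamps and
        // partially accumulated request buffers.
        Map<String, Long> lastActivity = new ConcurrentHashMap<>();
        Map<String, StringBuilder> buffers = new ConcurrentHashMap<>();
        long idleTimeoutMs = 150;

        lastActivity.put("conn-1", System.currentTimeMillis());
        buffers.put("conn-1", new StringBuilder("partial request"));

        ScheduledExecutorService reaper =
                Executors.newSingleThreadScheduledExecutor();
        reaper.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            for (Map.Entry<String, Long> e : lastActivity.entrySet()) {
                if (now - e.getValue() > idleTimeoutMs) {
                    // Idle too long: discard the half-filled buffer so slow
                    // or dead clients cannot pin server memory indefinitely.
                    buffers.remove(e.getKey());
                    lastActivity.remove(e.getKey());
                }
            }
        }, 50, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(400);  // let the idle timeout elapse for the demo
        System.out.println("buffers=" + buffers.size());
        reaper.shutdown();
    }
}
```

In a real container the reaper would also close the idle socket channel; here the demo only shows the buffer being reclaimed.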