Axional Server includes a high-performance object pool, which is the base of the JDBC pool. This section describes the pool architecture.

1 Architecture

The pool handles the creation, life cycle and destruction of pooled objects.

The pool contains two queues: one for used objects and one for live free objects. The general life cycle of a pool operation is:

  1. A thread requests an object from the pool.
  2. If an object is available in the free queue, it is moved to the used queue, health checked, and returned to the caller if sane. If it fails the health check, a new object is created to replace it.
  3. If there are no free objects and the used count is below the pool size limit, a new object is created, placed on the used queue, and returned.
  4. If the pool is full and no free object becomes available within the timeout period, an exception is thrown.
  5. Objects in the free queue are periodically health checked; those that have timed out are closed and removed.
  6. If the pool has a checkout limit, each object is checked against it; objects that reach the limit are destroyed.
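The acquire path described in steps 2 to 4 can be sketched as follows. This is a minimal illustrative implementation, not the actual Axional Server code: the class and member names (SimplePool, acquire, release, factory, healthCheck) are hypothetical.

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Illustrative sketch only: names and structure are hypothetical,
// not the actual Axional Server classes.
class SimplePool<T> {
    private final Queue<T> used = new ConcurrentLinkedQueue<>();       // checked-out objects
    private final BlockingQueue<T> free = new LinkedBlockingDeque<>(); // idle objects
    private final AtomicInteger total = new AtomicInteger();
    private final int maxSize;
    private final long acquireTimeoutSeconds;
    private final Supplier<T> factory;
    private final Predicate<T> healthCheck;

    SimplePool(int maxSize, long acquireTimeoutSeconds,
               Supplier<T> factory, Predicate<T> healthCheck) {
        this.maxSize = maxSize;
        this.acquireTimeoutSeconds = acquireTimeoutSeconds;
        this.factory = factory;
        this.healthCheck = healthCheck;
    }

    T acquire() throws InterruptedException {
        // Step 2: reuse a free object if one passes the health check.
        T obj = free.poll();
        if (obj != null) {
            if (!healthCheck.test(obj)) {
                obj = factory.get(); // replace the broken object
            }
            used.add(obj);
            return obj;
        }
        // Step 3: grow the pool while under the size limit.
        if (total.incrementAndGet() <= maxSize) {
            obj = factory.get();
            used.add(obj);
            return obj;
        }
        total.decrementAndGet();
        // Step 4: pool is full; wait for a free object or time out.
        obj = free.poll(acquireTimeoutSeconds, TimeUnit.SECONDS);
        if (obj == null) {
            throw new IllegalStateException("pool busy: no free object within timeout");
        }
        used.add(obj);
        return obj;
    }

    void release(T obj) {
        used.remove(obj);
        free.add(obj); // back to the free queue
    }
}
```

Releasing an object moves it from the used queue back to the free queue, where it becomes available to the next caller.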

Figure: BlockingQueuePool lifecycle


The used queue is a FIFO queue based on ConcurrentLinkedQueue.

The free queue can be configured as FIFO, LIFO, or ARRAY.


In general, any of the three queue types sustains about one million I/O requests per second with a queue size of 10 and 10 concurrent threads.

2 Parameters

You can tune the pool for specific tasks by providing a custom configuration.

  Implementation               Base class           Characteristics
  LinkedFIFOBlockingQueuePool  LinkedBlockingDeque  FIFO behavior, highest performance, low resource consumption, zero waits
  LinkedLIFOBlockingQueuePool  LinkedBlockingQueue  LIFO behavior, good performance, high resource consumption, low waits
  ArrayBlockingQueuePool       ArrayBlockingQueue   Fixed-size ARRAY behavior, medium resource consumption, low waits

Each pool can have specific characteristics defined through a pool configuration. The main pool configuration class is BlockingQueuePoolConfiguration.

2.1 Pool configuration

Pools are created using the BlockingQueuePoolManager. Each pool has its own set of configuration parameters that define either pool or object behavior.

  setPoolType (Type, default LIFO, allowed LIFO | FIFO | ARRAY)
      The type of blocking pool.

  setPoolMaxSize (Integer, default 10, must be > 1)
      Maximum number of elements to keep in the pool.

  setPoolExtraSize (Integer, default 0)
      Maximum number of extra elements that can be added to the pool when
      the pool fires a busy exception.

  setPoolMaxEmptyCount (Integer, default 0, 0 = not applied)
      When objects are recycled on a check-out count basis, close the pool
      if it has been found empty up to PoolMaxEmptyCount times. Use this
      option with caution: if the application holds live references to the
      pool, they will become invalid. It should only be used when references
      to the pool are handled by the BlockingQueuePool class.

  setPoolCheckIntervalSeconds (Integer, default 30, 0 = not applied)
      Interval between health-check verifications of objects in the free pool.

  setPoolObjectMaxIdleSeconds (Integer, default 30, must be > 1)
      Number of seconds an object can stay idle in the pool before it is closed.

  setPoolObjectMaxCheckOutCount (Integer, default 0, 0 = not applied)
      Maximum number of times an object can be checked out of the pool
      before being destroyed.

  setPoolObjectAcquireTimeoutSeconds (Integer, default 30, must be > 1)
      Number of seconds before a request for an object times out when the
      pool is full.

  setPoolObjectBornDieTimeoutSeconds (Integer, default 30, must be > 1)
      Number of seconds before the creation or destruction of an object
      times out.
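A configuration might look like the following sketch. The setter names are taken from the table above; the constructor signature and the chosen values are illustrative assumptions, not verified against the actual API.

```java
// Hypothetical usage sketch; exact constructor and enum names may differ.
BlockingQueuePoolConfiguration config = new BlockingQueuePoolConfiguration();
config.setPoolType(Type.FIFO);                  // LinkedFIFOBlockingQueuePool
config.setPoolMaxSize(20);                      // keep at most 20 pooled objects
config.setPoolCheckIntervalSeconds(60);         // health check the free queue every minute
config.setPoolObjectMaxIdleSeconds(120);        // close objects idle for over 2 minutes
config.setPoolObjectMaxCheckOutCount(1000);     // recycle an object after 1000 checkouts
config.setPoolObjectAcquireTimeoutSeconds(10);  // fail fast when the pool stays full
```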

2.2 JDBC Pool configuration

BlockingQueuePoolConfiguration is extended by the JDBC implementation class JDBCPoolConfiguration, which adds JDBC-specific properties for JDBC pool handling.

  setJDBCQueryTimeout (Integer, default 300)
      The time in seconds after which a running query is cancelled.

  setJDBCCacheSize (Integer, default 100)
      The number of individual query ResultSets to keep in the LRU cache.

  setJDBCMaxCacheRows (Integer, default -1)
      The maximum number of rows allowed in a cached ResultSet.

  setJDBCInitialSQL (String)
      An initial SQL statement to be sent to each new connection.

  setJDBCProperties (Properties)
      The properties to be sent to the JDBC driver before acquiring a
      connection.
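A JDBC pool configuration might then be set up as in this sketch. The setter names come from the table above; the initial SQL statement and driver properties are hypothetical examples.

```java
// Hypothetical sketch; values are illustrative only.
JDBCPoolConfiguration config = new JDBCPoolConfiguration();
config.setJDBCQueryTimeout(300);     // cancel queries running longer than 5 minutes
config.setJDBCCacheSize(100);        // keep up to 100 ResultSets in the LRU cache
config.setJDBCMaxCacheRows(10000);   // do not cache ResultSets larger than 10k rows
config.setJDBCInitialSQL("SET LOCK MODE TO WAIT");  // example statement per new connection

Properties driverProps = new Properties();
driverProps.setProperty("user", "dbuser");          // example driver property
config.setJDBCProperties(driverProps);
```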

3 Thread executors

The pool may periodically run tasks we call cleaners. A cleaner is a timed event that is executed periodically to ensure the objects in the pool are valid. These tasks are performed using a Scheduler.

Additionally, some operations may take time and block the pool. For example, in JDBC pools, when a connection is returned it has to be passivated. The passivation process may close open JDBC statements, and this operation may block or hang. To avoid these situations, passivation is executed in a ThreadPoolExecutor with the timeout indicated by the PoolObjectBornDieTimeoutSeconds configuration parameter.
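The timeout-guarded pattern can be sketched with a standard Future: the potentially blocking cleanup runs on the executor, and the caller waits at most the configured number of seconds before cancelling it. The class and method names here are illustrative, not the actual Axional Server API.

```java
import java.util.concurrent.*;

// Sketch of timeout-guarded passivation: the possibly hanging cleanup
// runs on an executor; the caller gives up after timeoutSeconds.
class PassivateExample {
    static boolean passivate(ExecutorService executor, Runnable cleanup, long timeoutSeconds) {
        Future<?> f = executor.submit(cleanup);
        try {
            f.get(timeoutSeconds, TimeUnit.SECONDS);
            return true;                 // cleanup finished in time
        } catch (TimeoutException e) {
            f.cancel(true);              // interrupt the hung cleanup
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;                // cleanup failed or caller interrupted
        }
    }
}
```

Cancelling the Future interrupts the worker thread, so a hung JDBC close cannot block the pool indefinitely; at worst it ties up one executor thread until the driver responds to the interrupt.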

The number of processors and the queue size of the ThreadPoolExecutor have default values that can be modified with system properties.

  Action     Processor           Threads                                          Queue
  Cleaner    Scheduler           JVM processors                                   Unbounded
  Passivate  ThreadPoolExecutor  JVM processors * passivateProcessorsScaleFactor  passivateQueueSize (default 32)

The default values can be changed by setting Java system properties at startup:

  • deister.axional.server.pool.queues.BlockingQueuePool.passivateProcessorsScaleFactor : 2
  • deister.axional.server.pool.queues.BlockingQueuePool.passivateQueueSize : 32

4 Thread pooling

The pool is used to manage thread concurrency, limiting resource consumption while trying to give fast response times.

4.1 Little’s Law

Little’s law says that the number of requests in a system equals the rate at which they arrive, multiplied by the average amount of time it takes to service an individual request. This law is so common in our every day lives that it’s surprising that it wasn’t proposed until the 1950s and only proven in the 1960s.

Here’s an example of one form of Little’s law in action. Have you ever stood in a line and tried to figure out how long you’d be standing in it? You would consider how many people are in line and then have a quick glance at how long it took to service the person at the front of the queue. You would then multiply those two values together, producing an estimate of how long you’d be in line. If instead of looking at the length of the line, you observed how often new people were joining the line and then multiplied that value by the service time, you’d then know the average number of people that are either in the line or being serviced.

There are numerous other similar games you can play with Little’s law that will answer other questions like “how much time on average does a person stand in the queue waiting to be served?” and so on.

$$ L = A \cdot W $$

where L is the average number of customers in the system, A is the arrival rate, and W is the average waiting time.

In a similar vein we can use Little’s law to determine thread pool size. All we have to do is measure the rate at which requests arrive and the average amount of time to service them. We can then plug those measurements into Little’s law to calculate the average number of requests in the system. If that number is less than the size of our thread pool then the size of the pool can be reduced accordingly. On the other hand if the number is larger than the size of our thread pool, things get a little more complicated.
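As a worked example of this sizing rule (the numbers are illustrative): with requests arriving at 50 per second and an average service time of 0.2 seconds, Little's law gives L = 50 * 0.2 = 10 requests in the system on average, so a thread pool of about 10 threads keeps up without queueing.

```java
// Little's law: L = A * W, the average number of requests in flight.
class LittlesLaw {
    static double averageInSystem(double arrivalRatePerSecond, double serviceTimeSeconds) {
        return arrivalRatePerSecond * serviceTimeSeconds; // L = A * W
    }
}
```

If the measured L exceeds the current pool size, the pool is a bottleneck, provided the limiting resource (CPU, memory, database connections) has capacity for more threads.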

In the case where we have more requests waiting than being executed the first thing we need to determine is if there’s enough capacity in the system to support a larger thread pool. To do that we must determine what resource is most likely to limit the application’s ability to scale. In this article we’ll assume that it’s CPU, though be aware that in real life it’s very likely to be something else. The easiest situation is that you have enough capacity to increase the size of the thread pool. If you don’t you’ll have to consider other options, be it tuning your application, adding more hardware, or some combination of both.