Axional Server includes a high-performance JDBC pool with database-agent-specific adaptive features. Applications can operate transparently on different database agents in a heterogeneous environment at the same time.
Each time an application attempts to access a back-end store (such as a database), it requires resources to create, maintain, and release a connection to that data store. To mitigate the strain that this process can place on overall application resources, the application server enables you to establish a pool of back-end connections that applications can share on an application server. Connection pooling spreads the connection overhead across several user requests, thereby conserving application resources for future requests.
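The principle can be sketched with a minimal, hypothetical pool built on a blocking queue (the names SimplePool and the factory wiring are illustrative, not part of the Axional API): connections are created once up front and then borrowed and returned, so the creation cost is paid only at startup.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Illustrative only: a minimal fixed-size pool, not the Axional implementation.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pay the creation cost once, at startup
        }
    }

    // Borrowing blocks until a connection is free, so demand is naturally throttled.
    public T borrow() throws InterruptedException {
        return idle.take();
    }

    // Returning makes the connection available to the next request.
    public void release(T conn) {
        idle.offer(conn);
    }
}
```

Because borrow() blocks when the pool is exhausted, the pool size becomes an upper bound on concurrent back-end connections.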
1 Architecture overview
2 JDBC pool architecture
The JDBC pool is built on a hierarchical structure: a node contains servers, each server contains databases, and each database contains one or more connection pools.
3 Connection pool size
The goal of tuning the connection pool is to ensure that each thread that needs a connection to the database has one, and that requests are not queued up waiting to access the database. For most applications, each task performs a query against the database. Since each thread performs a task, each concurrent thread needs a database connection. Typically, all requests come in over HTTP and are executed on a Web container thread. Therefore, the maximum connection pool size should be at least as large as the maximum size of the Web container thread pool.
Be aware, though, that this is not a best practice for all scenarios. Using a connection pool as large as or larger than the Web container thread pool ensures that no threads wait for a connection and provides the maximum performance for a single server. However, in an environment where numerous application servers all connect to the same back-end database, careful consideration should be given to the connection pool size. If, for example, there are ten application servers all accessing a database with 50 connections in each connection pool, up to 500 connections could be requested from the database at a time. This type of load could easily cause problems on the database.
A better approach here is the "funneling" method, in which the number of Web container threads is larger than the number of connections in the connection pool. This ensures that under extreme load, not all active threads get a connection to the database at a single moment. This produces longer response times, but makes the environment much more stable.
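The arithmetic behind this sizing decision is simple; the helper below (hypothetical, for illustration only) computes the worst-case aggregate demand and a funneled per-server pool size:

```java
// Illustrative sizing arithmetic for the scenario described above.
public class PoolSizing {
    // Worst case: every server can request its full pool at once.
    static int worstCaseDemand(int servers, int poolSizePerServer) {
        return servers * poolSizePerServer;
    }

    // Funneling: size each pool so the aggregate stays within what the
    // database can handle, letting extra Web threads queue for connections.
    static int funneledPoolSize(int servers, int dbConnectionLimit) {
        return dbConnectionLimit / servers;
    }
}
```

With ten servers and 50 connections each, worstCaseDemand(10, 50) is 500; if the database should see at most 200 connections, funneledPoolSize(10, 200) caps each pool at 20.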
Pool sizes can be defined in the config.xml file, and each application may programmatically define its own pool sizes.
4 Pool metrics
The JDBC pool implementation contains exhaustive metrics that can be examined from the server consoles (SWT, WEB, or TCP). The operation rate is a meter metric which measures mean throughput (SQL operations) as well as one-minute, five-minute, and one-hour EWMA (exponentially weighted moving average) throughputs.
5 Statement cache
The JDBC pool can have its own cache of SQL statements. The cache can be stored in memory or in memory-mapped files, depending on the pool configuration settings.
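A statement cache of this kind is commonly implemented as a bounded LRU map; the following sketch (illustrative, not the Axional implementation) shows the in-memory variant using LinkedHashMap's access order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative in-memory LRU cache keyed by SQL text.
public class StatementCache<V> extends LinkedHashMap<String, V> {
    private final int capacity;

    public StatementCache(int capacity) {
        super(16, 0.75f, true); // access order: reads refresh an entry
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > capacity; // evict the least recently used statement
    }
}
```

A memory-mapped variant would keep the same eviction policy but back the values with files instead of heap objects.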
6 Blob handling
The JDBC pool handles blobs using the JDBCBlob wrapper class. When a Blob is retrieved from a JDBCResultSet, a new instance of JDBCBlob is returned as one of two subclasses:
- JDBCBlobDISK - a disk file with blob data read from the database
- JDBCBlobDBMS - a lightweight reference that fires a direct SQL query to fetch the data when the Blob's getBytes or getInputStream is called
When a blob is required, JDBCResultSet.getBlob() will check that:
- a primary table can be determined from the SQL statement parser
- the connection has one or more JDBCConnectionListener(s)
- a JDBCConnectionListener returns, for the table name and column, a metadata description (JDBCBlobMetaDataDBMS) to load the blob using the table's primary key data
6.1 JDBCBlobDISK
As seen before, JDBCBlobDISK uses a temporary file to store data retrieved from a database Blob. Data is stored in a temporary directory under JDBCBlobMetaDataDISK/database.
When blob data is required, the class will:
- getBytes() - loads the data from the file and returns a byte array
- getBinaryStream() - returns a FileInputStream
The file remains on disk during the life of the JDBCBlobDISK reference in memory. Once the reference is garbage collected, the finalize() method destroys the file.
In some cases, these blob references are stored in a session cache so they can be retrieved from an emitted page request. In those cases, note that the data will remain alive until the Blob is removed from the session.
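Both access methods follow the standard java.sql.Blob contract; using the JDK's SerialBlob as a stand-in for a pool-managed blob, the access pattern looks like this (note that getBytes positions are 1-based in JDBC):

```java
import java.io.InputStream;
import java.sql.Blob;

public class BlobAccess {
    // Read the whole blob via getBytes (positions are 1-based in JDBC).
    static byte[] viaGetBytes(Blob blob) throws Exception {
        return blob.getBytes(1, (int) blob.length());
    }

    // Read the whole blob via its binary stream.
    static byte[] viaStream(Blob blob) throws Exception {
        try (InputStream in = blob.getBinaryStream()) {
            return in.readAllBytes();
        }
    }
}
```

The same calls work whether the underlying subclass keeps the data on disk or fetches it lazily from the database.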
6.2 JDBCBlobDBMS
JDBCBlobDBMS is an in-memory reference describing how to obtain the blob bytes. It does not store any blob content, as it is a simple reference to the database connection and the SQL statement required to get the blob data.
When blob data is required, the class will:
- acquire a connection
- getBytes() - gets the data from the database into memory and returns a byte array
- getInputStream() - gets the data from the database into memory and returns a ByteArrayInputStream
- free the connection
As can be seen, the data is always stored in memory before being sent back to the caller. This is necessary because the JDBC connection will be freed before the data can be reached by the caller.
...
JDBCConnectionWrapper con = getConnection();
JDBCResultSet rs = con.executeQuery("SELECT ...");
JDBCBlob blob = rs.getBlob();
rs.close();
con.freeConnection();
return blob;
6.3 Select a blob from database
When you select a blob from the database, it is necessary to add the primary key or rowid to the query in order to recover the blob when it is needed. If not, an error occurs with a message stating that the primary key or rowid is required.
<select prefix='m_' >
    <columns>file_name, file_dcrc, file_estado, file_data</columns>
    <from table='textract_file' />
    <where>
        file_seqno = 15 AND file_format = 'AEB43'
    </where>
</select>
This query produces an error because there is no column in the query from which to obtain the blob data when it is needed.
Message: Can not load BLOB reference cause pk column 'file_seqno' is not selected in query and no alternative rowid is found
Trace:
java.sql.SQLException: Can not load BLOB reference cause pk column 'file_seqno' is not selected in query and no alternative rowid is found
    at deister.axional.server.jdbc.blob.impl.dbms.JDBCBlobDBMS.<init>(JDBCBlobDBMS.java:101)
    at deister.axional.server.jdbc.impl.JDBCResultSet.getBlob(JDBCResultSet.java:2701)
So you can add the primary key column, as shown below:
<select prefix='m_' >
    <columns>file_seqno, file_name, file_dcrc, file_estado, file_data</columns>
    <from table='textract_file' />
    <where>
        file_seqno = 15 AND file_format = 'AEB43'
    </where>
</select>
7 Temporary tables
Temporary tables can be handled in JDBC using the @ prefix, which allows automatic naming and cleanup. Temporary tables created using the @ prefix can be viewed from the JDBC console under the TEMP tables section.
Temp table management is especially important in a multiuser environment. Consider the following example:
SELECT * FROM TABLE INTO TEMP tmp1;
This SQL statement will not work when used from shared JDBC connections because it can generate the same table identifier many times. Even worse, in Oracle or DB2, global temporary tables are shared across connections, so a table name identifier can collide even across different JDBC connections. To avoid this, you can use the temporary prefix operator:
SELECT * FROM TABLE INTO TEMP @tmp1;
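A plausible sketch of what the @ prefix does (the exact Axional naming scheme is not documented here, so this is an assumption) is to rewrite @tmp1 into a unique identifier before the statement reaches the database:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: rewrite @name temp table references into unique names.
public class TempNameRewriter {
    private static final AtomicLong SEQ = new AtomicLong();

    // e.g. "SELECT * FROM t INTO TEMP @tmp1" -> "SELECT * FROM t INTO TEMP tmp1_7"
    static String rewrite(String sql) {
        long id = SEQ.incrementAndGet();
        return sql.replaceAll("@(\\w+)", "$1_" + id);
    }
}
```

Because every rewrite produces a fresh suffix, two statements using @tmp1 on shared connections never collide on the same table identifier.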
7.1 Regex parser of temp tables
JDBC temp parser uses regular expressions to locate temporary table references in SQL text.
Unfortunately, Java's built-in regex support has problems with regexes containing repetitive alternative paths (that is, (A|B)*). Such a pattern is compiled into a recursive call, which results in a StackOverflowError when used on a very large string.
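The failure mode is easy to reproduce with a toy pattern (this demo is illustrative; it is not the parser's actual expression):

```java
import java.util.regex.Pattern;

public class RegexStackDemo {
    // A repeated alternation: each matched unit adds stack frames.
    private static final Pattern ALT = Pattern.compile("(a|b)*");

    // Returns true if the match completed, false if the stack overflowed.
    static boolean matchesSafely(String input) {
        try {
            return ALT.matcher(input).matches();
        } catch (StackOverflowError e) {
            return false;
        }
    }
}
```

matchesSafely("abab") completes normally, while a very long input such as "ab".repeat(500_000) overflows the default thread stack on typical JVMs.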
The parser has two regular expressions that are used sequentially if one fails. If the first parse fails, you will see the following in the console log:
Temp parser StackOverflowError, retrying with alternative expression
To determine the current stack size of your JVM, you can type:
$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
8 Pool statistics
JDBC pools acquire statistics for every operation and even keep a queue of the last statements. Some internal flags can be used to configure the queue sizes.
According to the initial values, history and statistical queues are configured with the following ratios:
| Type       | Metric        | Node | Server | Database | Pool |
|------------|---------------|------|--------|----------|------|
| Statistics | TOP SQL time  | 1    | 1      | 1/2      | 1/4  |
| Statistics | TOP SQL usage | 1    | 1      | 1/2      | 1/4  |
For example, if the node is configured to have a history size of 1000 elements, each server will also have 1000 elements, each database 500, and each pool up to 250.
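Applying those ratios is straightforward; the helper below (illustrative) derives each level's queue size from the configured node size:

```java
// Illustrative: derive queue sizes from the node history size
// using the ratios 1 (server), 1/2 (database), 1/4 (pool).
public class HistorySizes {
    static int serverSize(int nodeSize)   { return nodeSize; }
    static int databaseSize(int nodeSize) { return nodeSize / 2; }
    static int poolSize(int nodeSize)     { return nodeSize / 4; }
}
```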
9 Message queue
The JDBC pool includes a message queue for transaction event processing.
A transaction message is sent to the queue when an insert, delete, or update operation is detected.
The transaction message holds the statement parameters (when prepared) and can map columns to arguments using SQL statement analysis. This way, a transaction message can determine the table affected, the operation type (insert, delete, or update), the database, the remote user that initiated the operation (if defined), etc.
On every transaction, a message wrapper (JDBCMessageWrapper) is built and offered to a general queue controlled by JDBCNode. The message wrapper contains the JDBCTransactionMessage in its envelope.
The queue is responsible for taking message wrappers from the queue using a dedicated thread and calling the handle method of each message wrapper processed.
Applications can register listeners on a JDBCConnection to handle transaction messages. When a message wrapper is taken from the queue, its handle method is called. This way, the connection can forward the call to its registered listeners.
The queue size is controlled by the JDBCNode.messageQueueSize environment variable, with a default size of 1000 elements. In general, applications should handle message processing fast enough that the queue does not overflow; otherwise, messages will be lost. In that case, you can increase the queue size.
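The overflow behavior described above can be sketched with a bounded queue whose non-blocking offer drops the message when the queue is full (illustrative; JDBCNode's actual queue implementation is not shown here):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative: a bounded message queue that drops messages on overflow.
public class MessageQueueDemo {
    private final BlockingQueue<String> queue;
    private long dropped;

    public MessageQueueDemo(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    // offer() is non-blocking: if the queue is full, the message is lost.
    public boolean publish(String message) {
        boolean accepted = queue.offer(message);
        if (!accepted) {
            dropped++; // in the real pool this surfaces as lost messages
        }
        return accepted;
    }

    public long droppedCount() {
        return dropped;
    }
}
```

A larger capacity delays overflow, but a slow consumer will still eventually lose messages, which is why fast listener processing matters.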
TransactionMessages are used in Axional Studio to:
- Register transaction history
- Restart cron tasks if changed
- Forward transaction changes to web applications via websockets
10 Supported databases
- IBM Informix
- IBM DB2 UDB
- IBM DB2 400
- Apache Derby
- Apache Hive