Axional Server's multi-tier architecture helps make complex enterprise applications manageable and scalable. Yet, as the number of servers and tiers increases, so does the communication among them, which can degrade overall application performance.

In most applications, data is retrieved from a database, and database operations are expensive and time-consuming.

If a web application accesses the database on every request, its performance will suffer. In enterprise applications this can be mitigated with object caching, which allows applications to share objects across requests and users, and coordinates the objects' life cycles across processes.

Using caching technology at strategic points across the multi-tier model can help reduce the number of back-and-forth communications. Furthermore, although cache repositories require memory and CPU resources, using caches can nonetheless lead to overall performance gains by reducing the number of expensive operations (such as database accesses and Web page executions). However, ensuring that caches retain fresh content and invalidate stale data is a challenge, and keeping multiple caches in sync in clustered environments is even more so.

By storing frequently accessed or expensive-to-create objects in memory, object caching eliminates the need to repeatedly create and load data. Instead of releasing objects immediately after use, the cache keeps them in memory and reuses them for subsequent client requests, avoiding their expensive reacquisition.

1 Cache types

Axional Server provides several types of caches that share a common interface, VTable, and in some cases also implement JVMMemoryWarningListener.

  • The VTable interface includes functions for cache introspection.
  • The JVMMemoryWarningListener interface allows caches to receive notifications on low memory conditions.

Caches are divided into the following types:

  • Maps, including VTableMap, VTableHashMap, VTableConcurrentHashMap and VTableLRUHashMap, may be used to store key / value pairs.
  • Queues, including VTableQueue, VTableBlockingQueue, VTableConcurrentLimitedDeque, VTableConcurrentLinkedQueue, VTableFixedSizePriorityQueue and VTableLinkedBlockingDeque, may be used to store objects in bounded or unbounded queues.
  • Sets, including a generic VTableSet, may be used to store collections of unique elements.
  • Futures, including a generic VTableFutureMap, may be used to store key / value pairs in a map using future objects.
  • Weak references, including a generic VTableWeakList, may be used to store a WeakReference to elements in a list (a queue).
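To illustrate the idea behind the Map-style caches, a least-recently-used map can be sketched with standard JDK classes. This is only an analogue of what a class like VTableLRUHashMap does, not its actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache analogue: a LinkedHashMap in access order that evicts
// the eldest (least recently used) entry once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true -> LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With a capacity of 2, putting a third entry evicts whichever of the first two was touched least recently.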

2 Cache eviction

2.1 Underlying change

When a cache reflects data from a file system or a JDBC source, it may try to detect whether the underlying cached object has changed.

Axional Server JDBC modules that use caches to store SQL result sets include SQL analysis to determine whether a row has been updated. However, it is not always possible to determine reliably that a row has changed, because changes may be introduced through stored procedure calls, triggers or other database connections.

Implementing cache eviction or refresh on a JDBC source is therefore complex and strongly application-dependent.
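For a file-system source, by contrast, a common approach is to record the file's last-modified timestamp at load time and compare it on each access. The sketch below uses only standard JDK classes; the class and method names are hypothetical and not part of the Axional Server API:

```java
import java.io.File;

// Hypothetical cached entry that remembers the source file's
// last-modified timestamp at the moment it was loaded.
class FileCacheEntry {
    private final File source;
    private final long loadedAtModTime;
    private final String content;

    FileCacheEntry(File source, String content) {
        this.source = source;
        this.content = content;
        this.loadedAtModTime = source.lastModified();
    }

    // The entry is stale when the underlying file changed after caching.
    boolean isStale() {
        return source.lastModified() != loadedAtModTime;
    }

    String content() { return content; }
}
```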

2.2 Time based

Caches have no built-in control over how long an object stays alive in the cache, so entries live forever unless otherwise specified. For Map caches whose elements should expire after a period of non-use, a VTableMapEvictableScheduler can be used.

For example:

VTableLRUHashMap<String, Node> map = ...;
new VTableMapEvictableScheduler<String, Node>(map, 60000);

This adds a task that removes objects (Node in the example) that have not been used for more than one minute.

3 Cache coordination

Cache coordination across multiple servers is delegated to application implementations and is supported using SOAP message broadcasting to the servers in a group.
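Conceptually, coordination means broadcasting an invalidation message to every peer in the group so each one drops its stale copy. The sketch below uses hypothetical names and an in-memory peer abstraction; in Axional Server the actual transport is a SOAP message broadcast:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical peer abstraction; the real transport is a SOAP
// message sent to each server in the group.
interface CachePeer {
    void invalidate(String cacheName, Object key);
}

class CacheCoordinator {
    private final List<CachePeer> peers = new ArrayList<>();

    void addPeer(CachePeer peer) { peers.add(peer); }

    // Broadcast an invalidation so every peer evicts the stale entry.
    void broadcastInvalidate(String cacheName, Object key) {
        for (CachePeer peer : peers) {
            peer.invalidate(cacheName, key);
        }
    }
}
```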

4 Cache release on low memory condition

When the server reaches a low memory condition on a tenured pool, caches receive a memory warning notification. Each cache decides, according to its nature, how to evict elements.

For example, a map implementation will:

  • Sort elements by usage (least used first)
  • Remove the 50% least-used elements from the list

List<K> list = m_map
    .limit(m_map.size() / 2);
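That strategy can be sketched self-contained with standard JDK classes. The usage-count map and the names below are hypothetical; the actual VTable map keeps its own usage statistics:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// On a low-memory warning, evict the least-used half of a map.
class UsageEvictor {
    // usage maps each key to how often it has been accessed.
    static <K, V> void evictLeastUsedHalf(Map<K, V> cache, Map<K, Long> usage) {
        List<K> byUsage = new ArrayList<>(cache.keySet());
        // Sort keys by usage count, least used first.
        byUsage.sort(Comparator.comparingLong((K k) -> usage.getOrDefault(k, 0L)));
        // The first half of the sorted list is the 50% least used.
        List<K> toRemove = byUsage.subList(0, byUsage.size() / 2);
        for (K key : toRemove) {
            cache.remove(key);
            usage.remove(key);
        }
    }
}
```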

5 Cache inspection

VTable caches of types Map, Queue, Future and Set can be inspected using console interfaces. Moreover, by selecting a cache entry, its content can be inspected using the VTableRow interface.