Java objects reside in an area called the heap. The heap is created when the JVM starts up and may increase or decrease in size while the application runs. When the heap becomes full, garbage is collected: objects that are no longer used are cleared, making space for new objects.

The heap is sometimes divided into two areas (or generations) called the nursery (or young generation) and the old generation. The young generation is the part of the heap reserved for allocation of new objects. When the nursery becomes full, garbage is collected by running a special young collection, in which all objects that have lived long enough in the nursery are promoted (moved) to the old space, freeing up the nursery for more object allocation. When the old space becomes full, garbage is collected there, a process called an old collection.

1 Introduction

Axional Server has three storage tiers, summarized here:

  • Memory store – Heap memory that holds a copy of the hottest subset of data from the off-heap store. Subject to Java GC.
  • Off-heap store – Limited in size only by available RAM. Not subject to Java GC. Can store serialized data only. Provides overflow capacity to the memory store.
  • Disk store – Backs up in-memory data and provides overflow capacity to the other tiers. Can store serialized data only.

2 Memory store (On heap)

The JVM has a heap that is the runtime data area from which memory for all class instances and arrays is allocated. It is created at JVM start-up. The heap size may be configured with the following VM options:

  • -Xmx<size> - to set the maximum Java heap size
  • -Xms<size> - to set the initial Java heap size
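
For example, to start the JVM with an initial heap of 512 MB and a maximum heap of 4 GB (the sizes shown are only illustrative):

java -Xms512m -Xmx4g ...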

Heap memory for objects is reclaimed by an automatic memory management system which is known as a garbage collector. The heap may be of a fixed size or may be expanded and shrunk, depending on the garbage collector's strategy.

The memory store is always enabled and exists in heap memory. For the best performance, allot as much heap memory as possible without triggering GC pauses, and use the off-heap store to hold the data that cannot fit in the heap.

The memory store has the following characteristics:

  • Accepts all data, whether serializable or not
  • Fastest storage option
  • Backed by VTable caches (see Cache section)

3 Off-heap store

The off-heap store extends the memory store to memory outside of the object heap. This store, which is not subject to Java GC, is limited only by the amount of RAM available.

3.1 Allocating Direct Memory in the JVM

The off-heap store uses the direct-memory portion of the JVM. You must allocate sufficient direct memory for the off-heap store by using the JVM property MaxDirectMemorySize.

For example, to allocate 2GB of direct memory in the JVM:

java -XX:MaxDirectMemorySize=2G ...

Note the following about allocating direct memory:

  • If you configure off-heap memory but do not allocate direct memory with -XX:MaxDirectMemorySize, the default value for direct memory depends on the JVM version. Oracle HotSpot defaults to the maximum heap size (the -Xmx value), although some early versions used a smaller fixed default.
  • MaxDirectMemorySize must be added to the local node's startup environment.
  • Direct memory is part of the Java process's native memory and is separate from the object heap allocated by -Xmx. The value allocated by MaxDirectMemorySize must not exceed physical RAM, and is likely to be less than total available RAM due to other memory requirements.
  • The amount of direct memory allocated must be within the constraints of available system memory and configured off-heap memory.
  • The maximum amount of direct memory space you can use depends on the process data model (32-bit or 64-bit) and the associated operating system limitations, the amount of virtual memory available on the system, and the amount of physical memory available on the system.

3.2 Off-heap store performance

3.2.1 Read / Write performance

Because off-heap data is stored in bytes, only data that is Serializable is suitable for the off-heap store.
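
For instance, a value class intended for the off-heap store must implement java.io.Serializable; the class below is purely illustrative and is not part of Axional Server:

import java.io.Serializable;

// Values placed in the off-heap store are serialized to bytes, so they must be Serializable
public class CustomerSnapshot implements Serializable {

    private static final long serialVersionUID = 1L;

    private final long id;
    private final String name;

    public CustomerSnapshot(long id, String name) {
        this.id = id;
        this.name = name;
    }
}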

Since serialization and deserialization take place when putting to and getting from the off-heap store, it is theoretically slower than the memory store. This difference, however, is mitigated when the GC overhead associated with larger heaps is taken into account.

3.2.2 Compressed References

For 64-bit JVMs running Java 6 Update 14 or higher, consider enabling compressed references to improve overall performance. For heaps up to 32GB, this feature causes references to be stored at half the size, as if the JVM is running in 32-bit mode, freeing substantial amounts of heap for memory-intensive applications. The JVM, however, remains in 64-bit mode, retaining the advantages of that mode.

For Oracle HotSpot, compressed references are enabled using the option -XX:+UseCompressedOops.
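
For example, to run a 64-bit JVM with a large heap and compressed references enabled (the heap size is only illustrative):

java -Xmx30g -XX:+UseCompressedOops ...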

3.3 Implementation

Because the off-heap store is still managed in memory, it is slightly slower than the on-heap store, but still faster than the disk store. Off-heap allocations come with an extra cost, as specified in the JDK documentation:

"A direct byte buffer may be created by invoking the allocateDirect factory method of this class. The buffers returned by this method typically have somewhat higher allocation and deallocation costs than non-direct buffers. The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious. It is therefore recommended that direct buffers be allocated primarily for large, long-lived buffers that are subject to the underlying system's native I/O operations. In general it is best to allocate direct buffers only when they yield a measureable gain in program performance."

As the quote above indicates, allocating direct memory in many small blocks is inefficient and comes with additional costs.

Axional Server provides a MergingByteBufferPoolFactory. This factory allows applications to acquire large segments of long-lived off-heap memory and allocate and free smaller sub-blocks from them.

import deister.axional.server.java.nio.directmemory.allocator.MergingByteBufferAllocator;
import deister.axional.server.java.nio.directmemory.buffer.MemoryBuffer;

...

// initialize an off-heap memory buffer that will use chunks of 1 megabyte
MergingByteBufferAllocator buff = new MergingByteBufferAllocator(1, 1024 * 1024);

// allocate some blocks
MemoryBuffer buffer1 = buff.allocate(512);
MemoryBuffer buffer2 = buff.allocate(800);
MemoryBuffer buffer3 = buff.allocate(1000);
...
// free a block when it is no longer needed
buffer2.free();

3.3.1 Off-heap strings

OffHeapString is a utility class that provides off-heap storage for large strings by using a MergingByteBufferPoolFactory.

This class provides a String-like implementation of CharSequence to store large strings in off-heap memory pools managed by a MergingByteBufferPoolFactory.

OffHeapString text = new OffHeapString("This is a long text ...");
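
Since the class implements CharSequence, the stored text can be read through the standard CharSequence methods; the continuation below relies only on that contract:

// Read the off-heap text through the CharSequence interface
int length = text.length();
char first = text.charAt(0);
CharSequence part = text.subSequence(0, 4);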

3.4 Monitoring

A global memory overview can be seen from the Axional Server Console, showing the heap memory pools of types Eden, Survivor and Old generation and the non-heap or VM-specific memory pools.

  • Memory heap pools - shows a table with all memory pools
  • Memory buffer pools - shows a table with buffer pools (direct or mapped)
  • Memory garbage collectors - shows the activity of the garbage collectors

The direct cache tab shows information about uses of the MergingByteBufferPool, including the allocator class (in the example, OffHeapString), the number of segments, the segment size and the detail of each segment including its allocated blocks.
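
The same heap, buffer pool and garbage collector figures shown in the console can also be obtained programmatically through the standard java.lang.management API. The following is a brief sketch; MemoryOverview is an illustrative name and not part of Axional Server:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryOverview {

    public static void print() {
        // Heap and non-heap memory pools (Eden, Survivor, Old Gen, Metaspace, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("pool %-25s used=%d max=%d%n",
                    pool.getName(), pool.getUsage().getUsed(), pool.getUsage().getMax());
        }

        // Direct and mapped buffer pools backing off-heap and memory-mapped storage
        for (BufferPoolMXBean buffers : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffers %-8s count=%d used=%d%n",
                    buffers.getName(), buffers.getCount(), buffers.getMemoryUsed());
        }

        // Garbage collector activity
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %-20s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}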

4 Disk store (memory mapped files)

The disk store provides a thread-safe disk-spooling facility that can be used for additional storage. This is accomplished by using memory-mapped files.

Storage does not persist across system restarts.

Memory-mapped files are special files that a Java program can access directly from memory. This is achieved by mapping the whole file, or a portion of it, into memory; the operating system takes care of loading the requested pages and writing them back to the file, while the application only deals with memory, which results in very fast I/O operations. The memory used to load a memory-mapped file lies outside the Java heap space.

  • MemoryMappedCache<K,V> - a Map implementation backed by a memory-mapped file.
  • MemoryMappedArrayList<T> - a List implementation backed by a memory-mapped file, used by JDBC ResultSets when cache storage is set to disk.
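
Both classes build on the standard java.nio memory-mapped file API described above. The following is a minimal sketch of that underlying mechanism; MappedScratchFile is an illustrative name and not part of Axional Server:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedScratchFile {

    // Map a file region into memory. The mapped pages live outside the Java heap
    // and the operating system loads and writes them on demand
    public static MappedByteBuffer map(Path file, long size) throws IOException {
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            return channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }
}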

5 Memory overload

A memory monitor, JVMMemoryWarningMonitor, is configured on server startup to check for low memory conditions on the tenured memory pool. A low memory condition is determined by a high water mark of used memory, which the server sets to 80% by default.

When the memory monitor detects a low memory condition:

  • An internal variable is flagged to indicate the low memory condition. Memory-intensive applications may use this variable to determine whether a given process should be stopped. For example, a loop that generates a large Excel file using Apache POI may check it.
  • All listeners receive a low-condition signal, so applications may decide to free caches, etc.

A periodic task, MemoryWarning-{poolname}, checks for recovery from the low memory condition; when memory recovers it switches off the low memory flag and signals a console notification.
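
A minimal sketch of such a recovery check, using only the standard java.lang.management and java.util.concurrent APIs, is shown below; the class name and the low memory flag are assumptions made for illustration:

import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class MemoryRecoveryTask {

    // Hypothetical low memory flag raised by the monitor
    private final AtomicBoolean lowMemory = new AtomicBoolean(true);

    public void schedule(MemoryPoolMXBean tenuredPool, double highWaterMark) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage usage = tenuredPool.getUsage();
            if (usage.getMax() > 0 && (double) usage.getUsed() / usage.getMax() < highWaterMark) {
                // Usage fell back below the high water mark: clear the flag,
                // notify the console and stop checking
                lowMemory.set(false);
                scheduler.shutdown();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}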

5.1 Low memory condition event

When a low memory condition event is fired by the JVM, it is processed by JVMMemoryWarningMonitor.
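
The exact flow inside JVMMemoryWarningMonitor is internal to Axional Server. The sketch below only illustrates how such a monitor is typically wired to the JVM through the standard java.lang.management notification API; the class name LowMemoryWatcher and the handling code are assumptions:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class LowMemoryWatcher {

    // Install a usage threshold (e.g. 0.80 for 80%) on the heap pools that support it
    // and listen for the JVM's MEMORY_THRESHOLD_EXCEEDED notification
    public static void install(double highWaterMark) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * highWaterMark));
                }
            }
        }

        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        NotificationListener listener = (notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(notification.getType())) {
                // A real monitor would raise the internal low memory flag here
                // and signal registered listeners so they can free caches
                System.err.println("Low memory condition: " + notification.getMessage());
            }
        };
        emitter.addNotificationListener(listener, null, null);
    }
}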


5.2 OOM protection

Some core services in Axional Server include OOM protection mechanisms. These mechanisms may abort a task if the server enters a low memory condition.

5.2.1 Apache FOP processor

The FOP process that generates a PDF document may consume a lot of memory, especially for large documents that include images or SVG graphics.

To protect the server, the FOP processor introduces two techniques (see the sketch after this list):

  • Set FOUserAgent.setConserveMemoryPolicy to true to allow document rendering data to be stored on disk
  • Add an OOM DelegatingFOEventHandler during processing that will stop rendering if the JVM enters a low memory condition
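
A minimal sketch of the first technique, using the standard FOP embedding API, is shown below; file names are illustrative and exact details may vary with the FOP version:

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.fop.apps.FOUserAgent;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class ConservativePdfRender {

    public void render(File foFile, File pdfFile) throws Exception {
        FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());

        // Allow FOP to spill intermediate rendering data to disk instead of keeping it on the heap
        FOUserAgent userAgent = fopFactory.newFOUserAgent();
        userAgent.setConserveMemoryPolicy(true);

        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(pdfFile))) {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, userAgent, out);

            // Run an identity transformation from the XSL-FO source into FOP's content handler
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.transform(new StreamSource(foFile),
                                  new SAXResult(fop.getDefaultHandler()));
        }
    }
}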

5.2.2 Evictable caches

Caches marked as evictable will be released during a low memory condition event (LMCE) that is not recoverable (the LMCE ends while the low memory condition persists).

6 Summary

The following classes provide storage for long-lived objects.

Class | Type | Store | Application
VTable | Interface | on-heap | Cache classes for Maps, Queues, Sets, Futures and Weak objects
MergingByteBufferPoolFactory | Factory | off-heap | Low-level implementation of off-heap caches
OffHeapString | CharSequence | off-heap | Allows creating large String objects whose references are kept in memory while their text is stored in a MergingByteBufferPoolFactory block
MemoryMappedCache<K,V> | Map | disk | A Map-like implementation whose values are stored in a memory-mapped file
MemoryMappedArrayList<V> | List | disk | A List implementation whose values are stored in a memory-mapped file