1 IWA Configuration

After you install Informix Warehouse Accelerator, you must perform several configuration steps:

  • On the computer where Informix Warehouse Accelerator is installed, log on as user root.
  • Open the $IWA_INSTALL_DIR/dwa/etc/dwainst.conf configuration file. Review and edit the values in the file, especially the network interface value for the DRDA_INTERFACE parameter. Examples of network interface values are: eth0, lo, peth0. The default value is lo.
  • Run the ondwa setup command. This command creates the files and subdirectories that are required to run the Informix Warehouse Accelerator instance.
  • Run the ondwa start command. This command starts all of the Informix Warehouse Accelerator nodes.
  • Run the ondwa getpin command. Use this command to retrieve the IP address, port number, and pairing code that are needed to connect a database server to an accelerator server.
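
Taken together, a minimal first-time sequence could look like this (the getpin output is illustrative; it matches the pairing example later in this article):

# ondwa setup
# ondwa start
# ondwa getpin
192.168.1.3 21022 4302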

2 Environment variables

Let's consider the following shell script:

# Derive the IWA installation directory from the home directory
# of the user that owns the installation (field 6 of /etc/passwd).
IWADIR=`grep "^informix:" /etc/passwd | awk -F: '{ print $6 }'`
export IWADIR

# Make the ondwa tools available on the PATH.
PATH=${IWADIR}/bin:${PATH}
export PATH

Note that the username "informix" should be changed if IWA was installed under a different user account.

It should be stored as:

/etc/profile.d/iwa.sh

and the following permissions should be set:

chmod 755 /etc/profile.d/iwa.sh
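
To verify that the script takes effect, a new login shell should have the variable set. A quick check (the path shown assumes the default informix home directory used elsewhere in this article):

$ su - informix -c 'echo $IWADIR'
/home/informix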

3 Configure server memory

To process queries efficiently, verify that you have the optimal configuration for the operating-system kernel parameters, shared memory, and swap space on the computer where Informix® Warehouse Accelerator is installed. You can also monitor the virtual memory usage to see if you need to reduce the total size of the loaded data marts or add physical memory.

3.1 Kernel parameters

For IWA to work properly it may be necessary to adjust some memory-related kernel parameters of the Linux operating system. Below is a short list of these parameters with a brief description of each. If a parameter needs to be changed, this is best done in the file /etc/sysctl.conf to ensure that the change remains in effect after a reboot of the system. Directly after editing this file, the change can be made effective by running the command sysctl -p as user root.

3.1.1 SHMMAX

The SHMMAX kernel parameter defines the maximum size in bytes of a single shared memory segment. For Informix Warehouse Accelerator, the optimal value of the SHMMAX kernel parameter is the size of physical memory. The minimum value is the value of the WORKER_SHM parameter (given in MB) multiplied by 1048576.

  • To see the current value of SHMMAX (in bytes) in the kernel, use the following command:

    $ sysctl kernel.shmmax

  • To change the value of the SHMMAX kernel parameter:

    • Add the following line to the /etc/sysctl.conf file, where bytes is the number of bytes:

      kernel.shmmax = bytes

    • Run the following command for the setting to take effect:

      $ sysctl -p
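
For example, with WORKER_SHM=80000 (the 80 GB value used in the memory example later in this article), the minimum would be 80000 * 1048576 = 83886080000 bytes, i.e. in /etc/sysctl.conf:

kernel.shmmax = 83886080000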

3.1.2 SHMALL

Defines the maximum size in pages of all shared memory on the system. Normally the default value for this parameter is 2097152 (pages). With the default page size of 4096 bytes this amounts to 8 GB. If in doubt about the actual page size on a specific system, run the command getconf PAGE_SIZE. If a machine has more physical memory than what is covered by SHMALL, this parameter should be increased accordingly.

  • To permanently change this parameter, set kernel.shmall in /etc/sysctl.conf. For example, the following setting allows shared memory of up to 256 GB with a page size of 4 KB (67108864 pages * 4096 bytes = 256 GB):

    kernel.shmall = 67108864

  • Run the following command for the setting to take effect:

    $ sysctl -p
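
To size SHMALL for the whole physical memory of a machine, the page size and total page count can be read directly from the system (a sketch; _PHYS_PAGES is a glibc extension available on Linux):

$ getconf PAGE_SIZE      # page size in bytes, usually 4096
$ getconf _PHYS_PAGES    # number of physical memory pages

Setting kernel.shmall to the _PHYS_PAGES value allows all of the physical memory to be used as shared memory.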

3.1.3 vm.overcommit

To avoid issues that might arise if the Linux kernel runs out of memory, set the vm.overcommit_memory parameter and the vm.overcommit_ratio parameter to values that are optimal for Informix Warehouse Accelerator.

vm.overcommit_memory

Controls the memory overcommit behavior of the system. If enabled, processes can allocate more memory than is actually available. Three different values are possible:

  • 0 : Heuristic memory overcommit. The system decides when and where to allow memory over-commitment. This is the default and used for typical systems.
  • 1 : Memory over-commitment is always allowed. May be useful for some scientific applications.
  • 2 : Do not allow memory over-commitment. The total allocated memory on the system cannot exceed the swap space plus a configurable percentage of physical RAM. With a sensible percentage of RAM configured this will avoid having processes killed because of an OOM (out-of-memory) situation.
For IWA the preferred setting of overcommit_memory is 2.

vm.overcommit_ratio

Defines the percentage of physical memory to include in the memory overcommit calculation, which is done in the following way:

mem_alloc_limit = swap_space + phys_mem * (overcommit_ratio / 100)

Where swap_space is the total size of all swap areas and phys_mem is the size of the physically installed memory in the system. For example, a machine with 128 GB of RAM, 4 GB of swap, and overcommit_ratio = 99 gets an allocation limit of 4 + 128 * 0.99, i.e. about 130.7 GB. For IWA a proven setting of overcommit_ratio is 99.

To configure vm.overcommit values:

  • Add the following lines to the /etc/sysctl.conf file:

    vm.overcommit_memory = 2
    vm.overcommit_ratio = 99
  • Run the sysctl -p command for the settings to take effect.
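
To verify the settings and the resulting allocation limit, read the values back; the CommitLimit line in /proc/meminfo reflects the formula above:

$ sysctl vm.overcommit_memory vm.overcommit_ratio
$ grep CommitLimit /proc/meminfo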

3.2 Size of /dev/shm

The shared memory is used to hold the data mart data. The more worker nodes that you designate, the faster the data is loaded from the database server. However, the more worker nodes you designate, the more memory you need because each worker node stores a copy of the data in the dimension tables. If you do not have sufficient memory assigned to the coordinator node and to the worker nodes, you might receive errors when you load data from the database server.

Informix Warehouse Accelerator uses /dev/shm for the shared memory of the coordinator node and the worker nodes. If you change the size of /dev/shm, it must remain smaller than the total physical memory, to leave enough memory for other tasks.

  • For Informix Warehouse Accelerator installed on a single computer, the sum of the values of the WORKER_SHM parameter and the COORDINATOR_SHM parameter must fit in the available space of /dev/shm.
  • For Informix Warehouse Accelerator installed on a hardware cluster, the value of the WORKER_SHM parameter / (NUM_NODES parameter - 1) must fit in the available space of /dev/shm on each worker node. For example, with NUM_NODES=5 and WORKER_SHM=160000, each of the four worker nodes needs 40000 MB in /dev/shm. The value of the COORDINATOR_SHM parameter must fit in the available space of /dev/shm on the coordinator node.

You specify the values for the WORKER_SHM parameter and the COORDINATOR_SHM parameter in the dwainst.conf configuration file. To check the size of the available space in /dev/shm, run this command:

$ df -h /dev/shm

To permanently change the size of /dev/shm, perform the following steps:

  • Open the file /etc/fstab as user root with an editor.
  • Locate the entry for /dev/shm and set the size option for tmpfs to the desired size. For example, to set the size to 15 GB the entry should look like this:

    tmpfs    /dev/shm    tmpfs    defaults,size=15g    0     0

    (See the man page for mount for more information on the tmpfs file system options and the man page for fstab for the format of entries in the /etc/fstab file.)

  • Save the changes in the editor to the file. To make the change effective immediately, without a reboot of the system, remount the file system with a command like:

    mount -o remount /dev/shm

3.3 Swap space

Even on systems that have a large amount of total memory, configuring the swap space can be an advantage. Swap space can prevent unexpected situations where the system might need more memory than is available.

Check the total and the used swap space (in megabytes) with this command:

free -m
total       used       free     shared    buffers     cached
Mem:          2010        664       1345          0        124        357
-/+ buffers/cache:        182       1827
Swap:         3138          0       3138

Configure the swap space as recommended by your Linux vendor. As a general rule for a large-memory machine, the following values could be used:

RAM        Swap space
512 GB     32 GB
1 TB       64 GB
2 TB       96 GB
4 TB       128 GB
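
If additional swap space is needed, a swap file is a common approach on Linux (a sketch; size and path are placeholders):

# fallocate -l 64G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile

To make the swap file permanent, add a line such as "/swapfile none swap defaults 0 0" to /etc/fstab.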

4 Configure IWA settings

The principal configuration parameters for the number and type of IWA nodes and for their memory and CPU resources are:

  • NUM_NODES: Total number of IWA nodes.
  • WORKER_SHM: Combined size of shared memory used by all worker nodes for data mart data; that is, all data marts must fit into this amount of memory.
  • COORDINATOR_SHM: Combined size of shared memory used by all coordinator nodes.
  • CORES_FOR_LOAD_THREADS_PERCENTAGE: Percentage of CPU resources used by each IWA node for data mart loading tasks.
  • CORES_FOR_SCAN_THREADS_PERCENTAGE: Percentage of CPU resources used by each IWA node for query acceleration tasks.
These parameters are configured in the dwainst.conf file, as sketched below.
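
A sketch of the corresponding section of dwainst.conf (the values are illustrative, not defaults):

NUM_NODES=2
WORKER_SHM=80000
COORDINATOR_SHM=8000
CORES_FOR_LOAD_THREADS_PERCENTAGE=100
CORES_FOR_SCAN_THREADS_PERCENTAGE=100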

For the configuration of IWA shared memory, the WORKER_SHM parameter is the most important one, as it essentially determines how big the data marts can be. The configured amount of memory is not allocated up front; shared memory for data marts is allocated as data marts are created and loaded with data. Once the WORKER_SHM limit is reached, no more data can be loaded.

In addition to the shared memory, IWA threads also use temporary (private) memory, both for data load processing and for query acceleration. WORKER_SHM must therefore be configured to leave enough room for these additional memory requirements. On an SMP system the worker node also has to share resources with the coordinator node, whose threads use private memory for the final processing of results. For small result sets this is almost negligible, but for large result sets, even when only the first 10 records are selected, the amount of memory the coordinator needs for result processing can be considerable. This, too, must be accounted for when determining the value of WORKER_SHM. To maximize memory and CPU utilization it may be worth considering the different application scenarios.

The dwainst.conf parameter COORDINATOR_SHM determines the size of the shared memory for all coordinator nodes combined, which is distributed evenly among the coordinator nodes. Usually COORDINATOR_SHM should be much smaller than WORKER_SHM. If more than one coordinator node exists (i.e. NUM_NODES is greater than 7), a coordinator can assume the role of a worker node in case a worker node fails and terminates. For this to work correctly, the coordinator node also creates a file in the /dev/shm directory with the same size as a worker node, but in normal operation it will not use all of this memory. In fact a coordinator starts using shared memory only when it is needed: a single coordinator starts using shared memory as data marts get loaded with data, but additional coordinators do not immediately use shared memory at that time.

A typical approach to configuration is to first figure out how much memory of the machine is needed for other tasks, such as the operating system and possibly a co-resident Informix server. The remaining memory (which normally should still be the larger part) is then available for IWA. As a rule of thumb, between 55% and 60% of it can be configured as WORKER_SHM and 5% to 6% as COORDINATOR_SHM, leaving the remaining 34% to 40% for temporary memory requirements during data load or query processing. This temporary memory cannot be configured; unlike with the Informix server, it is not part of the shared memory, but is allocated and released as private memory by each process as needed.

Edit the file dwainst.conf to configure the memory for all worker and coordinator nodes. The sum of the memory for the workers and the coordinator must fit in the configured server memory /dev/shm.

For example, if your system has:

/dev/shm        128G

a typical configuration could be:

# Worker shared memory
# SHM (in Megabyte) for all worker nodes.
# Minimum value is 1 percent of physical memory.
WORKER_SHM=80000

# Coordinator shared memory
# SHM (in Megabyte) for all coordinator nodes.
# Minimum value is 1 percent of physical memory.
COORDINATOR_SHM=8000
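
In this example 80000 + 8000 = 88000 MB (about 86 GB) of the 128 GB in /dev/shm are configured, leaving the rest for temporary requirements. A small shell sketch to cross-check a configuration (it assumes the dwainst.conf path from the installation chapter and GNU coreutils for df --output):

# read the configured values from dwainst.conf
CONF=$IWA_INSTALL_DIR/dwa/etc/dwainst.conf
WORKER=$(awk -F= '/^WORKER_SHM=/ {print $2}' "$CONF")
COORD=$(awk -F= '/^COORDINATOR_SHM=/ {print $2}' "$CONF")
# size of /dev/shm in MB
SHM=$(df -BM --output=size /dev/shm | tail -1 | tr -dc '0-9')
echo "configured: $((WORKER + COORD)) MB, /dev/shm: ${SHM} MB"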

5 Disk space

TO DO

This section is incomplete and will be completed as soon as possible.

6 IWA service

Let's consider the following shell script:

#!/bin/sh
#
# chkconfig: 345 99 01
# description:  IBM Informix Warehouse Accelerator.
#
# Author:       DEISTER
#
# Initial release: Apr 02
# Aug 02: Added subsys lock file for Mandrake init control
# Oct 02: If variables are already defined, don't override them
#
# Absolutely no warranty -- use at your own risk
 
IWADIR=/home/informix
export IWADIR
 
start() {
        if [ `$IWADIR/bin/ondwa status | grep -c "ID"` -eq 0 ]
        then
            echo -n "Starting IWA ..."
            $IWADIR/bin/ondwa start
            sleep 5
            # create the subsys lock file that stop() removes
            touch /var/lock/subsys/informix
            echo "done"
            $IWADIR/bin/ondwa status
            echo ""
        fi
}
 
stop() {
        if [ `$IWADIR/bin/ondwa status | grep -c "ID"` -eq 1 ]
        then
            echo -n "Shutting down IWA ... "
            $IWADIR/bin/ondwa stop
            rm -f /var/lock/subsys/informix
            echo "done"
        fi
}
 
restart() {
        stop
        start
}
 
status() {
        if [ `$IWADIR/bin/ondwa status | grep -c "ID"` -eq 1 ]
        then
            echo -n "IWA is running... "
            echo ""
            $IWADIR/bin/ondwa status
            echo ""
        else
            echo -n "IWA is STOPPED !!!"
            echo ""
        fi
}
 
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        restart
        ;;
  status)
        status
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

It should be stored as:

/etc/init.d/iwa

The following permissions should be set for the script:

chmod 755 /etc/init.d/iwa

and it must be added to the list of system services:

# systemctl enable iwa
iwa.service is not a native service, redirecting to /sbin/chkconfig.
    Executing /sbin/chkconfig iwa on
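
After that, the service can be controlled through the usual interfaces, either the SysV wrapper or systemd:

# service iwa status
# systemctl start iwa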

7 Associate IWA to database server

You create an accelerator by connecting the Informix database server to the accelerator server (pairing). An accelerator is a logical entity that contains information for a connection from the database server to the accelerator server and for the data marts that are associated with that connection.

Set up the connection by using the ondwa getpin command to retrieve the IP address, port number, and pairing code from the accelerator server. For example:

# ondwa getpin
192.168.1.3 21022 4302

Now the accelerator can be created by using the SQL administration function ifx_setupDWA. The syntax is as follows:

EXECUTE FUNCTION ifx_setupDWA('ACC_NAME', 'IP_ADDRESS', 'PORT', 'PIN_CODE');

where ACC_NAME is an arbitrary name for the logical accelerator that will be created, and the other three parameters should be replaced by the values from the getpin command. In our example:

$ dbaccess demodb -
Database selected.

> EXECUTE FUNCTION ifx_setupDWA('IWA1DEMODB', '192.168.1.3', '21022', '4302');

(expression)   The operation was completed successfully.

1 row(s) retrieved.

Check the official IBM documentation for other methods to pair an IWA instance to the database server.

8 Configure IWA for hardware cluster of nodes

Informix Warehouse Accelerator (IWA) can take advantage of clustered hardware.
In a cluster system, one accelerator server acts as the coordinator node (IWA uses the first cluster node), while each extra cluster node that you add to the cluster becomes a worker node.

On the shared cluster file system, create a directory to use as the accelerator server storage directory. The storage directory must be accessible with the same path definition on all cluster nodes. See the section on the storage directory below.

Notice that, because the configuration replicates the connection interface to the database server, this connection cannot go through shared memory and needs a socket approach (i.e. a network interface such as eth0 or em0).

Files created on the shared storage must have the same owner, group, and IDs. This means some prerequisites have to be checked before stepping forward:

  • On each node, the informix user ID and group ID have to be equal.
  • Network interfaces must have the same naming on all nodes.

The configuration file dwainst.conf stores the definition of the interface used to access the database; it must be identical on all nodes.

  • Open the $IWA_INSTALL_DIR/dwa/etc/dwainst.conf configuration file. Review and edit the values in the file, especially the network interface value for the DRDA_INTERFACE parameter. Examples of network interface values are: eth0, lo, peth0. The default value is lo.
  • At the DRDA_INTERFACE parameter, specify the network interface value that you identified above.
  • At the DWADIR parameter, assign the file path of the storage directory that you created on the shared cluster file system. On all cluster nodes, the DWADIR parameter must contain the same file path.
  • At the CLUSTER_INTERFACE parameter, specify the name of the network device used to connect the cluster nodes. For example, eth0.

If each cluster node runs only a single coordinator node or a single worker node, then also add the following parameters and values:

  • CORES_FOR_SCAN_THREADS_PERCENTAGE=100
  • CORES_FOR_LOAD_THREADS_PERCENTAGE=100
  • CORES_FOR_REORG_THREADS_PERCENTAGE=25

[informix@iwaserver etc]$ ondwa setup
Checking for DWA_CM_node0 on iwaserver: The authenticity of host 'iwaserver (192.168.0.11)' can't be established.
ECDSA key fingerprint is SHA256:8FGNvPyi3QfaWfPLKtSWf+xP27CTtPP1QKSLiKyh2B4.
ECDSA key fingerprint is MD5:dd:28:72:e1:33:68:b0:ae:71:99:21:72:7e:b3:28:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'iwaserver,192.168.0.11' (ECDSA) to the list of known hosts.
informix@iwaserver's password:
stopped

If you do not already have a default sbspace created and configured in the
Informix database server:
a. Create the sbspace. Example:
   onspaces -c -S sbspace1 -p $INFORMIXDIR/tmp/sbspace1 -o 0 -s 30000
b. In the ONCONFIG configuration file, set the name of the default sbspace in
   the SBSPACENAME configuration parameter. Example:
   SBSPACENAME sbspace1
c. Restart the Informix database server.

[root@iwaserver etc]# ondwa status
Retrieving from DWA_CM_node0 on iwaserver:

ID   | Role        | Cat-Status  | HB-Status    | Hostname         | System ID
-----+-------------+-------------+--------------+------------------+------------
0    | COORDINATOR | ACTIVE      |   Healthy    | p710             | 1
1    | WORKER      | ACTIVE      |   Healthy    | pbpiwa           | 2

Cluster is in state    : Fully Operational
Expected node count    : 1 coordinator and 1 worker nodes

8.1 Configuration of the storage directory

A clustered IWA requires the same path on all cluster nodes to access the storage directory. In addition, the file system has to support parallel access from all nodes.

There are several file system solutions, depending on the installed OS. For instance:

  • Red Hat can take advantage of the Oracle Cluster File System; see the details for OCFS and OCFS2.
  • For CentOS, Debian, Red Hat and others there is GlusterFS, along with variants like Ganesha or GPFS.

The next example shows the configuration on CentOS using GlusterFS; other operating systems may require different solutions. For more details, see the GlusterFS installation documentation.

Installing GlusterFS

First of all, disable the firewall and SELinux on all servers. Next, install the packages, then enable and start the service. Instructions are available in the CentOS 7 documentation.

On iwasrv1 (the server exporting the volume):

# set up the required repositories
yum -y install epel-release
yum -y install yum-priorities

wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

# install, enable and start the GlusterFS server
yum -y install centos-release-gluster
yum -y install glusterfs-server

systemctl enable glusterd.service
systemctl start glusterd.service

# create and start a volume backed by the local brick directory
gluster volume create gvoliwasrv1 transport tcp iwasrv1:/dataocz/gvol
gluster volume start gvoliwasrv1
On iwasrv2 (the client):

# install the GlusterFS client and prepare the mount point
yum -y install glusterfs-client

mkdir /mnt/glusterfs
Mount the volume:

mount.glusterfs iwasrv1:/gvoliwasrv1 /mnt/glusterfs
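
To make the mount persistent across reboots, an /etc/fstab entry like the following can be used (a sketch; the _netdev option delays mounting until the network is up). The mount point is then the path that the DWADIR parameter in dwainst.conf must reference on every node:

iwasrv1:/gvoliwasrv1   /mnt/glusterfs   glusterfs   defaults,_netdev   0 0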