Enterprise Replication (ER) can be simply defined as the process of propagating a defined and configured replicate set from one database server to another. Database replication is important because it enables enterprises to provide users with access to current data where and when they need it. It can also provide a wide spectrum of benefits, including improved performance when centralized resources get overloaded, increased data availability, capacity relief, and support for data warehousing to facilitate decision support.

Enterprise Replication is an asynchronous, log-based data replication tool. ER does not compete with user transactions to capture data for replication; instead, data is collected from the logical logs (often called the redo logs). An advantage of log-based data replication is that it can be used across heterogeneous operating systems (OS) and multiple versions of Informix Dynamic Server (IDS). For example, IDS 11 on Linux can replicate data with IDS Version 10 on AIX.

ER should not be confused with High Availability Data Replication (HDR). Before considering ER concepts, first take a look at the table below, which lists the differences between the high availability solutions (HDR, RSS, and SDS) and ER. ER and HDR are two separate replication offerings, but they can be used together.

| High Availability Data Replication (HDR, RSS, and SDS) | Enterprise Replication (ER) |
|---|---|
| Primary purpose is high availability. | Primary purpose is data distribution. |
| Provides a single primary and one or more secondaries. | Allows a configurable number of sources and targets. |
| Primary and secondary must run the same executables and have a similar disk layout. | Source and target do not have to be the same; replication to other OS/IDS versions is allowed. |
| Primary and secondary are like mirror images. | Source and target can be totally different. |
| Provides instance-wide replication. | Supports customized replication. |
| Updatable secondary servers allow DML statements in transactions on the secondary. | Allows full transactional support on both source and target servers. |
| Replication can be synchronous. | Replication is asynchronous. |

1 Basic concepts

The basic building blocks of any replication network are the replication servers and the replicates. Replicates may be grouped into replicate sets. In this section, we describe these concepts in more detail.

  • Enterprise Replication Server: The basic element of any ER network is a database server. Any instance of IDS that is properly licensed for replication can be a database server. It maintains information about the replication environment, which tables and columns are replicated, and the conditions under which the data is replicated. This information is stored in a database, syscdr, that the database server creates upon initialization.
  • Replicates and replicate sets: Each database server doing replication knows what should and should not be replicated based on the set of replicates that are defined for that database server. A replicate can be considered as a set of rules that define what to replicate, when to replicate, where to replicate, and how to handle any conflicts during the replication process.
  • Replication templates: A replication template provides a mechanism to set up and deploy replication when a large number of tables and servers are involved.
  • Master replicate: A master replicate is a replicate that guarantees data integrity by verifying that replicated tables on different servers have consistent column attributes.
  • Replication key: A replication key consists of one or more columns that uniquely identifies each replicated row. The replication key must be the same on all servers that participate in the replicate. Typically, the replication key is a primary key constraint. Otherwise, you can specify ERKEY shadow columns or another unique index as the replication key.
  • Participant: A participant specifies the data (database, table, and columns) to replicate and the database servers to which the data replicates.
  • Replicate set: A replicate set combines several replicates to form a set that can be administered together as a unit.

2 Enterprise Replication domain

The entire group of replication servers is called the replication domain.

  • Any node within the domain can replicate data with any other node in the domain.
  • Servers in the domain can be configured as root, nonroot, or leaf servers.

2.1 ER Topology

Enterprise Replication uses the following terms to describe a hierarchical routing topology:

Root server An Enterprise Replication server that is the uppermost level in a hierarchically organized set of information. All root database servers within Enterprise Replication must be fully interconnected.
Nonroot server An Enterprise Replication server that is not a root database server but has a complete global catalog and is connected to its parent and to its children.
Parent-child A relationship between database servers in a tree data structure in which the parent is one step closer to the root than the child.
Leaf server A database server that has a limited catalog and no children.

A root server is fully connected to all other root servers. It has information about all other replication servers in its replication environment. Figure 10 shows an environment with four root servers.

A nonroot server is similar to a root server except that it forwards all replicated messages for other root servers (and their children) through its parent. All nonroot servers are known to all root and other nonroot servers. A nonroot server might or might not have children. All root and nonroot servers are aware of all other servers in the replication environment.

All domain servers should be defined in the sqlhosts file of every root and nonroot server, because the full ER catalog is synchronized among all root and nonroot servers.
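For illustration only, a sqlhosts definition for one ER server typically pairs a group entry with the server connection entry (the host name and port below are hypothetical; the i= option assigns the group a unique identifier):

```
g_dbgrid101   group     -         -     i=101
ol_dbgrid101  onsoctcp  host101   9088  g=g_dbgrid101
```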

The following example defines an ER domain with a complex topology:

cdr define server --connect=ol_dbgrid101 --idle=0 --init                              g_dbgrid101 --ats=/home/informix/cdr/ats/dbgrid101 --ris=/home/informix/cdr/ris/dbgrid101

cdr define server --connect=ol_dbgrid102 --idle=0 --init           --sync g_dbgrid101 g_dbgrid102 --ats=/home/informix/cdr/ats/dbgrid102 --ris=/home/informix/cdr/ris/dbgrid102
cdr define server --connect=ol_dbgrid103 --idle=0 --init           --sync g_dbgrid101 g_dbgrid103 --ats=/home/informix/cdr/ats/dbgrid103 --ris=/home/informix/cdr/ris/dbgrid103
cdr define server --connect=ol_dbgrid201 --idle=0 --init           --sync g_dbgrid101 g_dbgrid201 --ats=/home/informix/cdr/ats/dbgrid201 --ris=/home/informix/cdr/ris/dbgrid201
cdr define server --connect=ol_dbgrid202 --idle=0 --init --nonroot --sync g_dbgrid201 g_dbgrid202 --ats=/home/informix/cdr/ats/dbgrid202 --ris=/home/informix/cdr/ris/dbgrid202
cdr define server --connect=ol_dbgrid203 --idle=0 --init --nonroot --sync g_dbgrid201 g_dbgrid203 --ats=/home/informix/cdr/ats/dbgrid203 --ris=/home/informix/cdr/ris/dbgrid203

cdr define server --connect=ol_dbgrid301 --idle=0 --init --nonroot --sync g_dbgrid102 g_dbgrid301 --ats=/home/informix/cdr/ats/dbgrid301 --ris=/home/informix/cdr/ris/dbgrid301
cdr define server --connect=ol_dbgrid302 --idle=0 --init --nonroot --sync g_dbgrid102 g_dbgrid302 --ats=/home/informix/cdr/ats/dbgrid302 --ris=/home/informix/cdr/ris/dbgrid302
cdr define server --connect=ol_dbgrid303 --idle=0 --init --leaf    --sync g_dbgrid302 g_dbgrid303 --ats=/home/informix/cdr/ats/dbgrid303 --ris=/home/informix/cdr/ris/dbgrid303

cdr define server --connect=ol_dbgrid401 --idle=0 --init --leaf    --sync g_dbgrid101 g_dbgrid401 --ats=/home/informix/cdr/ats/dbgrid401 --ris=/home/informix/cdr/ris/dbgrid401
cdr define server --connect=ol_dbgrid402 --idle=0 --init --leaf    --sync g_dbgrid101 g_dbgrid402 --ats=/home/informix/cdr/ats/dbgrid402 --ris=/home/informix/cdr/ris/dbgrid402
cdr define server --connect=ol_dbgrid403 --idle=0 --init --leaf    --sync g_dbgrid101 g_dbgrid403 --ats=/home/informix/cdr/ats/dbgrid403 --ris=/home/informix/cdr/ris/dbgrid403

3 Defining the replicates

3.1 Primary target or update-anywhere

The most important part of planning the data flow is defining its direction. There are two common scenarios that can be defined with the cdr define replicate command:

  • Primary-target replication: Restricts the data flow to one direction.
  • Update-anywhere replication: Allows data changes to be replicated among all participating servers.

3.1.1 Primary target

Primary target replication restricts the data flow in one direction. It defines one replication server as the primary server from where all changes are distributed to the other participating servers. Any changes that are made on the other servers related to the same table and to the defined data set of the replicate will not be replicated, but they are allowed.

A primary-target replication system can provide one-to-many or many-to-one replication:

  • One-to-many replication: In one-to-many (distribution) replication, all changes to a primary database server are replicated to many target database servers. Use this replication model when information gathered at a central site must be disseminated to many scattered sites.
  • Many-to-one replication: In many-to-one (consolidation) replication, many primary servers send information to a single target server. Use this replication model when many sites are gathering information (for example, local field studies for an environmental study) that needs to be centralized for final processing.

The role of each server, to act either as a primary or as a recipient of the data, is identified by the letter P or R when defining the replication between two tables with the cdr define replicate command. The following example replicates the customer table in the dbs1 database from the primary server g_dbsrv1 to the dbs2 database on the target server g_dbsrv2:

cdr define replicate -M g_dbsrv1 -C always -S row -A -R customer \
"P dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
"R dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"

Info

The primary target data flow allows setting the Ignore and the Always-Apply conflict resolution strategy.

3.1.2 Update-anywhere

Unlike the primary-target definition, update-anywhere allows data changes to be replicated among all participating servers. The update-anywhere scenario is the default behavior of the cdr define replicate command. The following example shows how to specify an update-anywhere scenario.

cdr define replicate -M g_dbsrv1 -C always -S row -A -R customer \
"dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
"dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"

Info

For the update-anywhere replicate definition, all conflict resolution strategy rules are allowed. However, for time stamp and SPL-based conflict resolution, be sure that the shadow columns are created on the base table.

3.2 Conflict resolution

When data is replicated from one database server to another and the target database server is also making changes simultaneously to the local data, replication conflicts could occur at the target database server. To ease the conflict resolution process, Enterprise Replication provides several conflict resolution rules. These rules enable ER to automatically detect and resolve the conflicts.

Enterprise Replication provides the following conflict resolution rules:

  • Ignore: ER does not attempt to resolve the conflict.
  • Always-apply: ER does not attempt to resolve conflict, but always applies the data to the target.
  • Time stamp: The row or transaction with the most recent time stamp is applied.
  • SPL routine: Enterprise Replication uses a routine written in Stored Procedure Language (SPL) that the user provides to determine which data should be applied.
  • Time stamp with SPL routine: If the time stamps are identical, Enterprise Replication invokes an SPL routine that the user provides to resolve the conflict.

3.2.1 Ignore

With the Ignore conflict resolution strategy, the server does not attempt to resolve conflicts. In case of a conflict, the row is discarded and, if so configured, an RIS/ATS file is written by the server. The behavior of the Ignore strategy is summarized in the table below.

| Row exists | Insert | Update | Delete |
|---|---|---|---|
| No | Insert the row | Discard | Discard |
| Yes | Discard | Update the row | Delete the row |

3.2.2 Always-apply

In contrast to the Ignore strategy, Always-Apply will always attempt to apply the row in case of a collision. In order to achieve that, in certain conflict situations the requested operation will be transformed to apply the row. For example, an attempt to insert a row that already exists will finally be applied as an update.

Use the always-apply conflict-resolution rule only with a primary-target replication system. If you use always-apply with an update-anywhere replication system, your data might become inconsistent.

The following table describes how the always-apply conflict-resolution rule handles INSERT, UPDATE, and DELETE operations.

| Row exists | Insert | Update | Delete |
|---|---|---|---|
| No | Insert the row | Insert the row | N/A |
| Yes | Update the row | Update the row | Delete the row |

3.2.3 Time stamp

The replication server on the target side of a defined replicate resolves collisions based on a comparison of time stamps that are attached to the rows. To enable the server to do the time stamp comparison, the base table has to be created or altered with crcols before the replicate can be created. These are two columns, named cdrserver and cdrtime, also referred to as shadow columns.

The time stamp resolution rule behaves differently depending on the scope defined in the replication using the -S option of the cdr define replicate command:

  • Row scope (-S row): Enterprise Replication evaluates one row at a time. The row with the most recent time stamp wins the conflict and is applied to the target database servers. If an SPL routine is defined as a secondary conflict-resolution rule, the routine resolves the conflict when the row times are equal.
  • Transaction scope (-S trans): Enterprise Replication evaluates the most recent row-update time among all the rows in the replicated transaction. This time is compared to the time stamp of the appropriate target row. If the time stamp of the replicated row is more recent than the target, the entire replicated transaction is applied. If a routine is defined as a secondary conflict resolution rule, it is used to resolve the conflict when the time stamps are equal.

A secondary routine is run only if Enterprise Replication evaluates rows and discovers equal time stamps.

If no secondary conflict-resolution rule is defined and the time stamps are equal, the transaction from the database server with the lower value in the cdrserver shadow column wins the conflict.

The following table describes how the time stamp conflict resolution rule handles INSERT, UPDATE, and DELETE operations.

| Row exists | Time stamp | Insert | Update | Delete |
|---|---|---|---|---|
| No | N/A | Insert the row | Insert the row | Insert the row into the delete table |
| Yes | The time stamp of the changed row is newer than that of the existing row | Update the row | Update the row | Delete the row and insert the row into the delete table |
| Yes | The time stamp of the changed row is older than that of the existing row | Discard | Discard | Discard |
| Yes | The time stamps are equal | Update the row or secondary conflict resolution | Update the row or secondary conflict resolution | Delete the row or secondary conflict resolution |

3.2.4 SPL routine

The ER system gives the DBA the flexibility to specify an SPL-based stored procedure (SP) either as the primary conflict resolution strategy or as a secondary rule behind the time stamp rule. As with the time stamp strategy, the replication server requires the shadow columns (CRCOLS) to exist on the base table. With the SP strategy, the replication server also automatically creates a shadow table at replicate creation time.

The SP has to follow some requirements for the input and return values. The following input parameters are expected:

  • Local server name.
  • Local time stamp of the row (NULL if the row does not exist).
  • An indicator of whether the row exists in the shadow (delete) table.
  • Remote server name.
  • Time stamp of the remote operation (the requested change).
  • The operation itself (I - insert, U - update, D - delete).
  • Definition of all columns for the local row.
  • Definition of all columns for the change in the row.

From the stored procedures, the following values are expected to be returned:

  • An indicator of the database operation to be performed:
    • A - Accept the replicated row and apply the column values returned by the SPL routine.
    • S - Accept the replicated row and apply the column values as received from the other site.
    • O - Discard the replicated row.
    • X - Abort the transaction.
  • An integer return code for the RIS files that indicates the reason why the row was discarded
  • The columns that will finally be applied to the target row

The following example shows how to create a replicate with stored procedure conflict resolution.

CREATE PROCEDURE conflict_customer_sp
(
   --required server parameter for the change and the existing row attrib
   localServer CHAR(18),
   localTS DATETIME YEAR TO SECOND,
   localDelete CHAR(1),
   replServer CHAR(18),
   replTS DATETIME YEAR TO SECOND,
   replDbop CHAR(1),
   
   --define all the local row columns
   local_customer_num LIKE customer.customer_num,
   local_fname LIKE customer.fname,
   local_lname like customer.lname,
   local_company LIKE customer.company,
   local_address1 LIKE customer.address1,
   local_address2 LIKE customer.address2,
   local_city LIKE customer.city,
   local_state like customer.state,
   local_zipcode like customer.zipcode,
   local_phone like customer.phone,
   
   --define all the remote changes on the row
   remote_customer_num LIKE customer.customer_num,
   remote_fname LIKE customer.fname,
   remote_lname like customer.lname,
   remote_company LIKE customer.company,
   remote_address1 LIKE customer.address1,
   remote_address2 LIKE customer.address2,
   remote_city LIKE customer.city,
   remote_state like customer.state,
   remote_zipcode like customer.zipcode,
   remote_phone like customer.phone
) RETURNING
  --return an Operation what to do with the change , and a reason code
  CHAR(1), INT,
  --return the values for the to be applied changes here for customer tab
  INTEGER, char(15), CHAR(15),CHAR(20),CHAR(20),CHAR(20),CHAR(15),
  CHAR(2), CHAR(5) , CHAR(18);

  -- row exists: decide based on the time stamp comparison
  IF localTS is NOT NULL THEN
     IF ((replTS > localTS) OR  ((replTS = localTS) AND (replServer < localServer))) THEN
        -- apply the change
        RETURN 'A', 600,
        remote_customer_num, remote_lname, remote_fname,remote_company,
        remote_address1,remote_address2,remote_city,remote_state,
        remote_zipcode, remote_phone;
     ELSE
        -- reject the change request RIS file will be created
        RETURN 'O', 700,
        local_customer_num, local_lname, local_fname,local_company,
        local_address1,local_address2,local_city,local_state,local_zipcode,
        local_phone;
      END IF;
   -- local row does not exist -- apply the row
   ELIF localTS IS NULL THEN
     RETURN 'A', 800,
     remote_customer_num, remote_lname, remote_fname,remote_company,
     remote_address1,remote_address2,remote_city,remote_state,
     remote_zipcode, remote_phone;
  END IF;
END PROCEDURE;
cdr define replicate -C conflict_customer_sp -S trans -A customer \
"dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
"dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"

Info

You cannot use an SPL routine as a conflict resolution rule if you set the replicate to replicate only the modified columns.

3.3 Replicate set

In a complex replication scheme, there may be many replicates, and some of them should likely be treated as a group. That is, the set of replicates should be started, stopped, changed, and so forth, as a group. This is usually required in order to keep the data in the target database consistent when data from related tables is replicated.

To create a replicate set, use the cdr define replicateset command. Enterprise Replication supports these types of replicate sets:

  • exclusive: Replicates can belong to only one replicate set. Include the --exclusive option in the cdr define replicateset command.
  • non-exclusive: Default. Replicates can belong to one or more non-exclusive replicate sets.

An exclusive replicate set has the following characteristics:

  • All replicates in an exclusive replicate set have the same state and frequency settings.
  • When you create the replicate set, Enterprise Replication sets the initial state of the replicate set to active.
  • You can manage the replicates in an exclusive replicate set only as part of the set. Enterprise Replication does not support the following actions for the individual replicates in an exclusive replicate set:
    • Starting a replicate
    • Stopping a replicate
    • Suspending a replicate
    • Resuming a replicate
  • Replicates that belong to an exclusive replicate set cannot belong to any other replicate sets.

Important

You cannot change an exclusive replicate set to non-exclusive.
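As a sketch, an exclusive replicate set could be created and then started as a unit with the cdr commands (the set and replicate names below are hypothetical):

```shell
cdr define replicateset --exclusive sales_set repl_customer repl_orders
cdr start replicateset sales_set
```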

3.4 Shadow columns

Shadow columns are hidden columns on replicated tables that contain values that are supplied by the database server. The database server uses shadow columns to perform internal operations.

You can add shadow columns to your replicated tables with the CREATE TABLE or ALTER TABLE statement. To view the contents of shadow columns, you must explicitly specify the columns in the projection list of a SELECT statement; shadow columns are not included in the results of SELECT * statements.

You must define the shadow columns on both the source server and the target server.

3.4.1 CRCOLS

The CRCOLS shadow columns, cdrserver and cdrtime, support conflict resolution. These two columns are hidden shadow columns because they cannot be indexed and cannot be viewed in the system catalog tables. In an update-anywhere replication environment, you must provide for conflict resolution using a conflict resolution rule. When you create a table that uses the time stamp, time stamp plus SPL, or delete wins conflict resolution rule, you must define the shadow columns, cdrserver and cdrtime on both the source and target replication servers. If you plan to use only the ignore or always-apply conflict resolution rule, you do not need to define the cdrserver and cdrtime shadow columns for conflict resolution.

Example

New table:

create table "informix".customer
(
customer_num serial not null ,
fname char(15),
lname char(15),
company char(20),
address1 char(20),
address2 char(20),
city char(15),
state char(2),
zipcode char(5),
phone char(18),
primary key (customer_num)
) with crcols;

Existing table:

alter table customer add crcols;

Warning

The row size is incremented by 8 bytes. Therefore, add these columns only in an update-anywhere scenario with a conflict resolution rule other than ignore or always-apply; otherwise, the size of the tables on disk is increased without need.
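Because the crcols shadow columns are hidden from SELECT *, they must be named explicitly in the projection list to inspect their values, for example:

```sql
-- cdrserver and cdrtime never appear in SELECT * results
select cdrserver, cdrtime, customer_num from customer;
```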

3.4.2 REPLCHECK

The REPLCHECK shadow column, ifx_replcheck, supports faster consistency checking. This column is a visible shadow column because it can be indexed and can be viewed in the system catalog table. If you want to improve the performance of the cdr check replicate or cdr check replicateset commands, you can add the ifx_replcheck shadow column to the replicate table, and then create an index that includes the ifx_replcheck shadow column and your replication key columns.

Example

New table:

create table "informix".customer
(
customer_num serial not null ,
fname char(15),
lname char(15),
company char(20),
address1 char(20),
address2 char(20),
city char(15),
state char(2),
zipcode char(5),
phone char(18),
primary key (customer_num)
) with REPLCHECK;

Existing table:

alter table customer add REPLCHECK;
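To obtain the faster consistency checking described above, an index combining ifx_replcheck with the replication key columns can then be created; a sketch (the index name is hypothetical):

```sql
-- combine the REPLCHECK shadow column with the replication key
create unique index ix_cust_replcheck
   on customer (ifx_replcheck, customer_num);
```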

3.4.3 ERKEY

The ERKEY shadow columns, ifx_erkey1, ifx_erkey2, and ifx_erkey3, are used as the replication key on replicated tables. Replicated tables must use a primary key constraint, a unique index or constraint, or the ERKEY shadow columns as the replication key.

After you create the ERKEY shadow columns, a new unique index and a unique constraint are created on the table using these columns. Enterprise Replication uses that index as the replication key.

Example

New table:

create table "informix".customer
(
customer_num serial not null ,
fname char(15),
lname char(15),
company char(20),
address1 char(20),
address2 char(20),
city char(15),
state char(2),
zipcode char(5),
phone char(18),
primary key (customer_num)
) with ERKEY;

Existing table:

alter table customer add ERKEY;

3.5 Shadow table

There is a special requirement for handling delete operations on rows in a table participating in a time stamp based replicate. Because a delete removes the row from the target table, there is no way to compare time stamps between an already deleted row and late-arriving changes that were made earlier to the same primary key. To ensure that no ghost rows appear, replicates defined with time stamp conflict resolution create a shadow table, also called the delete table, where the deleted rows are kept for time stamp comparison.

The schema of the delete table follows that of the master table, including all fragmentation definitions.

Example

Define and initiate a replication on the customer table with the timestamp conflict resolution rule:

cdr define replicate -M g_dbsrv1 -C timestamp -S row -A -R customer \
"dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
"dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"

cdr start rep customer

Find the identifier of the delete table that was created:

dbaccess syscdr - -

Database selected.

> select * from deltabdef where tabname = 'customer';

tabname   customer
owner     informix
dbname    dbs1
deltabid  2

1 row(s) retrieved.

The schema of the delete table:

dbaccess dbs1 - -

Database selected.

> INFO COLUMNS FOR customer;


Column name          Type                                    Nulls

customer_num         serial                                  no
fname                char(15)                                yes
lname                char(15)                                yes
company              char(20)                                yes
address1             char(20)                                yes
address2             char(20)                                yes
city                 char(15)                                yes
state                char(2)                                 yes
zipcode              char(5)                                 yes
phone                char(18)                                yes

> INFO COLUMNS FOR cdr_deltab_000002;


Column name          Type                                    Nulls

cdrserver            integer                                 yes
cdrtime              integer                                 yes
customer_num         serial                                  yes
fname                char(15)                                yes
lname                char(15)                                yes
company              char(20)                                yes
address1             char(20)                                yes
address2             char(20)                                yes
city                 char(15)                                yes
state                char(2)                                 yes
zipcode              char(5)                                 yes
phone                char(18)                                yes

A dedicated thread named CDRDTCleaner periodically removes expired rows from the delete tables. You can check statistics about the delete table cleaner with the following onstat command:

onstat -g dtc

IBM Informix Dynamic Server Version 12.10.FC3WE -- On-Line -- Up 14 days 20:44:58 -- 181392 Kbytes

---- Delete Table Cleanup Status as of (1420640774) 2015/01/07 15:26:14
        NOT RUNNING
        rows deleted     = 0
        lock timeouts    = 0
        cleanup interval = 300
        list size        = 3
        last activity    = (1420640654) 2015/01/07 15:24:14

Id      Database                         Last Cleanup Time
        Replicate            Server              Last Log Change
=========================================================
000001  syscdr                           (1419865527) 2014/12/29 16:05:27
        _ifx_qod_control     g_dbsrv2            (1419871019) 2014/12/29 17:36:59
        _ifx_qod_control     g_dbsrv1            (1419865527) 2014/12/29 16:05:27
000002  dbs1                             <never cleaned>
        customer             g_dbsrv2            <no log events>
        customer             g_dbsrv1            <no log events>

Important

Do not drop shadow tables created by Enterprise Replication. Shadow tables are automatically dropped when the replicate defined on the table is deleted.

3.6 Triggers

Triggers defined on the table on the target side of a replicate require special treatment. With the default definition of a replicate, a trigger defined on the target table does not fire. The execution of triggers on the target has to be enabled at replicate definition time with the -T or --firetrigger option.

In contrast to the general server behavior, where an UPDATE trigger fires only when the column on which it is defined is changed, any update replicated from an ER source server, regardless of which column is updated, causes the trigger to fire, as shown in the following example:

Example

On the target server dbsrv2, create the following table and a trigger that fires only when fname is changed:

create table customer_log
(
cust_num integer,
cust_name char(32),
username char(32),
update_time datetime year to minute,
old_fname char(20),
new_fname char(20)
);
create trigger customer_upd update of fname on customer
referencing old as prv new as nxt
for each row
(
insert into customer_log
(cust_num,cust_name,username,update_time,old_fname,new_fname)
values
(prv.customer_num ,prv.fname ,USER , CURRENT year to fraction(3) ,prv.fname,nxt.fname )
);

The following replication is defined on the dbsrv1 server:

cdr define replicate -M g_dbsrv1 -C always -S row -A -R -T customer \
"P dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
"R dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"

cdr start rep customer

Run this update on the dbsrv1 server:

update customer set phone="11111" where customer_num=101;

On the dbsrv2 server you can verify that the trigger has fired even though the fname column was not modified:

select * from customer_log;

cust_num     101
cust_name    walter
username     informix
update_time  2014-12-29 18:32
old_fname    walter
new_fname    walter

3.7 cdrsession

The cdrsession option to the DBINFO() function detects if an INSERT, UPDATE, or DELETE statement is being performed as part of a replicated transaction.

You might want to design triggers, stored procedures, or user-defined routines to take different actions depending on whether a transaction is being performed as part of Enterprise Replication. The cdrsession option of the DBINFO() function returns 1 if the thread performing the database operation is an Enterprise Replication apply or sync thread; otherwise, the function returns 0.

The following example shows an SPL function that uses the cdrsession option to determine if a thread is performing an Enterprise Replication operation:

CREATE FUNCTION iscdr ()
RETURNING int;

DEFINE iscdrthread int;
SELECT DBINFO('cdrsession') into iscdrthread
from systables where tabid = 1;
RETURN iscdrthread;

END FUNCTION;
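A trigger can call such a function to take a different action for replicated rows, as suggested above. The following sketch relies on the iscdr() function just shown; the orders_audit table and trigger name are hypothetical, not part of this example schema. It audits only locally initiated updates and skips rows applied by ER:

```sql
-- Hypothetical example: audit locally initiated updates only.
-- Assumes the iscdr() SPL function defined above and an audit
-- table of your own design.
create trigger orders_upd_audit update on orders
referencing old as prv new as nxt
for each row when (iscdr() = 0)
(
insert into orders_audit (order_num, username, update_time)
values (prv.order_num, USER, CURRENT year to minute)
);
```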

4 Informix configuration parameters

The onconfig parameters that affect Enterprise Replication are detailed below. These parameters apply to both the source server and the destination server of the data:

Parameter Description
CDR_DBSPACE Specifies the dbspace where the syscdr database is created. If it is not set, then syscdr is created in the root dbspace.
CDR_QHDR_DBSPACE Specifies the location of the dbspace that Enterprise Replication uses to store the transaction record headers spooled from the send and receive queues. By default, Enterprise Replication stores the transaction record headers in the root dbspace.
CDR_QDATA_SBSPACE Specifies the list of up to 32 names of sbspaces that Enterprise Replication uses to store spooled transaction row data. Enterprise Replication creates one smart large object per transaction. If CDR_QDATA_SBSPACE is configured for multiple sbspaces, then Enterprise Replication uses all appropriate sbspaces in round-robin order.
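Putting these together, a minimal onconfig fragment for ER might look like the following sketch (d_syscdr and s_syscdr are the example space names used later in this document; the spaces must exist before the server is defined as a replication server):

```
CDR_DBSPACE        d_syscdr
CDR_QHDR_DBSPACE   d_syscdr
CDR_QDATA_SBSPACE  s_syscdr
```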

5 Setting up Enterprise Replication

This section explains how to set up Enterprise Replication between two Informix instances, dbsrv1 (server 31-v-db-i) and dbsrv2 (server 38-v-db-i).

5.1 Network environment

The network environment must be prepared for each database server participating in Enterprise Replication. The following files are involved in the replication network configuration:

5.1.1 Servers

The /etc/hosts file specifies the names and IP addresses of all the servers.

  • Content of the file on the 31-v-db-i server:
    192.168.10.31   31-v-db-i       dbsrv1
    192.168.10.38   38-v-db-i       dbsrv2
  • Content of the file on the 38-v-db-i server:
    192.168.10.31   31-v-db-i       dbsrv1
    192.168.10.38   38-v-db-i       dbsrv2

5.1.2 Trusted servers

The /etc/hosts.equiv file specifies the names of trusted servers.

  • Content of the file on the 31-v-db-i server:
    38-v-db-i
  • Content of the file on the 38-v-db-i server:
    31-v-db-i

5.1.3 Services

The /etc/services file specifies the service name associated with the port number.

  • Content of the file on the 31-v-db-i server:
    sqlexec         9088/tcp                # IBM Informix SQL Interface
  • Content of the file on the 38-v-db-i server:
    sqlexec         9088/tcp                # IBM Informix SQL Interface

5.1.4 Sqlhosts

The $INFORMIXDIR/etc/sqlhosts file specifies the connectivity between replication servers, including server pools, connection security, and network security.

Important

Enterprise Replication requires that all database servers participating in replication be members of database server groups.
  • Content of the file on the 31-v-db-i server:
    dbsrv1        onipcshm        dbsrv1  on_dbsrv1
    
    g_dbsrv1      group           -       -       i=1
    ol_dbsrv1     onsoctcp        dbsrv1  sqlexec g=g_dbsrv1
    
    g_dbsrv2      group           -       -       i=2
    ol_dbsrv2     onsoctcp        dbsrv2  sqlexec g=g_dbsrv2
  • Content of the file on the 38-v-db-i server:
    dbsrv2          onipcshm        dbsrv2  on_dbsrv2
    
    g_dbsrv2         group           -       -       i=2
    ol_dbsrv2        onsoctcp        dbsrv2  sqlexec g=g_dbsrv2
    
    g_dbsrv1         group           -       -       i=1
    ol_dbsrv1        onsoctcp        dbsrv1  sqlexec g=g_dbsrv1

5.1.5 Connectivity test

  • 31-v-db-i server:
    dbaccess sysmaster - -
    
    Database selected.
    
    > select name from sysmaster@g_dbsrv2:sysdatabases where name = 'sysmaster';
    
    name  sysmaster
    
    1 row(s) retrieved.
  • 38-v-db-i server:
    dbaccess sysmaster - -
    
    Database selected.
    
    > select name from sysmaster@g_dbsrv1:sysdatabases where name = 'sysmaster';
    
    name  sysmaster
    
    1 row(s) retrieved.

5.2 Storage

5.2.1 Logical logs

The database server uses the logical log to store a record of changes to the data since the last archive. Enterprise Replication requires the logical log to contain entire row images for updated rows, including deleted rows.

The database server normally logs only columns that have changed. This behavior is called the logical-log record reduction option. Enterprise Replication deactivates this option for tables that participate in replication.

Logical logs must be configured correctly for Enterprise Replication. Use the following guidelines when configuring your logical log files:

  • Make sure that all logical log files are approximately the same size.
  • Make the size of the logical log files large enough so that the database server switches log files no more than once every 15 minutes during normal processing.
  • Plan to have sufficient logical-log space to hold at least four times the maximum transaction size.
  • Set LTXEHWM (long-transaction, exclusive-access, high-watermark) 30 percent larger than LTXHWM (long-transaction high-watermark).

Important

If you specify that the database server allocate logical log files dynamically (DYNAMIC_LOGS), it is recommended that you set LTXEHWM to no higher than 70 when using Enterprise Replication.
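As an illustrative sketch of these guidelines (the dbspace name and all values are examples, not recommendations): with a maximum transaction of roughly 50 MB you would plan at least 200 MB of logical-log space, and with LTXHWM at 50 you would set LTXEHWM about 30 percent higher, keeping it at or below 70 when DYNAMIC_LOGS is enabled:

```shell
# Example only: add logical-log space and adjust the watermarks.
onparams -a -d llogdbs -s 102400   # add a 100 MB logical log (size in KB)
onmode -wf LTXHWM=50               # long-transaction high-watermark
onmode -wf LTXEHWM=65              # ~30% above LTXHWM, below the 70 ceiling
```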

5.2.2 Define ER related dbspaces

ER functionality relies on a system-defined database named syscdr. This database is created automatically on each server when the server is defined as a replication server. Before that transition is made, you must specify a normal dbspace for the location of the syscdr database, a dbspace for the send and receive queue header data, and an sbspace for the row data that will be sent out for application to the target replication server. The syscdr database and the queue header information can share the same dbspace.

To continue with our example, the dbspace d_syscdr and the sbspace s_syscdr have been created on both servers dbsrv1 and dbsrv2:

onspaces -c -d d_syscdr -p /INFORMIXDEV/d_syscdr_00 -o 4 -s 1000000
onspaces -c -S s_syscdr -p /INFORMIXDEV/s_syscdr_00 -o 4 -s 1000000 -Df LOGGING=ON

5.2.3 ATS and RIS Files

When defining the replication server with the cdr define server command, the DBA can specify file system directories for writing error files: aborted transaction spooling (ATS) files and detailed row information spooling (RIS) files. These files typically record the time and the reason, given as the SQL error, why a transaction apply was aborted on the target replication server. The RIS file also contains the details of the failed row, for a later manual apply or manual conflict resolution.

You should create a separate directory to store ATS and RIS files. If you do not create a separate directory and specify it when you define the replication server, Enterprise Replication stores the ATS and RIS files in the /tmp directory on UNIX and the %INFORMIXDIR%\tmp directory on Windows.

  1. Create a directory for Enterprise Replication to store ATS and RIS files. You can create two directories if you want to generate both types of file and store them in separate directories.
    • If you are using primary-target replication, create the directory on the target system.
    • If you are using update-anywhere replication and have a conflict resolution rule other than ignore or always-apply enabled, create the directory on all participating replication systems.
  2. When you define or modify a replication server, specify the location of the ATS and RIS directory by using the --ats and --ris options of the cdr define server command or the cdr modify server command.
  3. When you define or modify a replicate, specify that ATS and RIS file generation is enabled by using the --ats and --ris options of the cdr define replicate command or the cdr modify replicate command.

To continue with our example, the following directories have been created on both servers dbsrv1 and dbsrv2:

mkdir -p /data/IFMX-ERATS
chown informix:informix /data/IFMX-ERATS

mkdir -p /data/IFMX-ERRIS
chown informix:informix /data/IFMX-ERRIS
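If a replication server was already defined without these directories, they can be set afterwards, as step 2 above notes. A sketch using the directories created here (run once per server, with that server's own group name):

```shell
# Point an existing replication server at the ATS and RIS directories.
cdr modify server --ats=/data/IFMX-ERATS --ris=/data/IFMX-ERRIS g_dbsrv2
```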

5.3 Informix configuration file

On both servers dbsrv1 and dbsrv2, edit the Informix configuration file (onconfig) and modify the following parameters:

Parameter Description Value
CDR_DBSPACE Specifies the dbspace where the syscdr database is created. If it is not set, then syscdr is created in the root dbspace. d_syscdr
CDR_QHDR_DBSPACE Specifies the location of the dbspace that Enterprise Replication uses to store the transaction record headers spooled from the send and receive queues. By default, Enterprise Replication stores the transaction record headers in the root dbspace. d_syscdr
CDR_QDATA_SBSPACE Specifies the list of up to 32 names of sbspaces that Enterprise Replication uses to store spooled transaction row data. Enterprise Replication creates one smart large object per transaction. If CDR_QDATA_SBSPACE is configured for multiple sbspaces, then Enterprise Replication uses all appropriate sbspaces in round-robin order. s_syscdr
CDR_FEATURES Required only when some participants run a version as old as 11. To maintain compatibility with such older versions, add this parameter with this value to the onconfig of Informix versions 12 and 14. NOSMX4ER

5.4 Time synchronization

Whenever you use time stamp conflict resolution, it is important to have a tool (for example, the sys_timesync script) to keep the server clocks synchronized.
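For example, assuming an NTP client is installed on both hosts (the commands vary by platform and are shown here only as a sketch), you can verify that the clocks agree before relying on time stamp resolution:

```shell
# Sketch: verify clock synchronization on each replication server.
ntpq -p     # check the configured NTP peers (ntpd)
date -u     # run on both hosts; the UTC times should match
```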

5.5 Defining replication servers

  • dbsrv1 server:

    Using the cdr utility, you define the dbsrv1 server as the root node of the replication domain by executing the following command:

    cdr define server -A /data/IFMX-ERATS -R /data/IFMX-ERRIS -I g_dbsrv1
  • dbsrv2 server:

    Using the cdr utility, you define the dbsrv2 server as a leaf node with dbsrv1 as its parent, by executing the following command:

    cdr define server -A /data/IFMX-ERATS -R /data/IFMX-ERRIS -L -S g_dbsrv1 -I g_dbsrv2

5.6 Check replication servers

  • dbsrv1 server:

    cdr list server
    SERVER                 ID STATE    STATUS     QUEUE  CONNECTION CHANGED
    -----------------------------------------------------------------------
    g_dbsrv1                1 Active   Local           0
    g_dbsrv2                2 Active   Connected       0 Dec  5 13:29:09
  • dbsrv2 server:

    cdr list server
    SERVER                 ID STATE    STATUS     QUEUE  CONNECTION CHANGED
    -----------------------------------------------------------------------
    g_dbsrv1                1 Active   Connected       0 Dec  5 13:29:08
    g_dbsrv2                2 Active   Local           0

5.7 Add new replicate

  • dbsrv1 server:

    Table customer:

    cdr define replicate -M g_dbsrv1 -C always -S row -A -R customer \
    "P dbs1@g_dbsrv1:informix.customer" "SELECT * FROM customer" \
    "R dbs2@g_dbsrv2:informix.customer" "SELECT * FROM customer"
    cdr start rep customer

    Table orders:

    cdr define replicate -M g_dbsrv1 -C always -S row -A -R orders \
    "P dbs1@g_dbsrv1:informix.orders" "SELECT * FROM orders" \
    "R dbs2@g_dbsrv2:informix.orders" "SELECT * FROM orders"
    cdr start rep orders
  • dbsrv1 server or dbsrv2 server (either one):

    Table customer:

    cdr sync repl -m g_dbsrv1 -r customer g_dbsrv2
    cdr check repl -m g_dbsrv1 -r customer g_dbsrv2

    Table orders:

    cdr sync repl -m g_dbsrv1 -r orders g_dbsrv2
    cdr check repl -m g_dbsrv1 -r orders g_dbsrv2

6 Restrictions

6.1 Exclusive lock on a database

You cannot apply an exclusive lock to a database that is involved in replication, nor perform operations that require an exclusive lock, for example dbexport. To run such an operation, stop Enterprise Replication, perform the dbexport, and then restart Enterprise Replication:

cdr stop
dbexport dbs1
cdr start

6.2 Change the logging mode of a database

The databases and tables on all server instances involved in replication must be created with logging enabled. You also cannot change the logging mode of a database whose tables participate in Enterprise Replication. To change it, stop Enterprise Replication, change the logging mode, and then restart Enterprise Replication:

cdr stop
ontape -s -N dbs1 -t /dev/null
ontape -s -U dbs1 -t /dev/null
cdr start

6.3 Views and synonyms

Replication is restricted to tables. You cannot define a replicate on a view or a synonym.

7 cdr commands

7.1 Enterprise

7.1.1 Stop enterprise replication

cdr stop

7.1.2 Start enterprise replication

cdr start

7.1.3 Start enterprise replication with empty queues

cdr cleanstart

7.1.4 Profile

cdr view profile
ER PROFILE for Node g_dbsrv2            ER State Active

DDR - Running                           SPOOL DISK USAGE
  Current        60:168505344             Total                   500000
  Snoopy         60:168501316             Metadata Free            25057
  Replay         60:147685400             Userdata Free           466271
  Pages from Log Lag State   108860
                                        RECVQ
SENDQ                                     Txn In Queue                 0
  Txn In Queue               0            Txn In Pending List          0
  Txn Spooled                0
  Acks Pending               0          APPLY - Running
                                          Txn Processed              153
NETWORK - Running                         Commit Rate               0.00
  Currently connected to 1 out of 1       Avg. Active Apply         1.01
  Msg Sent               10370            Fail Rate                 0.00
  Msg Received             617            Total Failures               0
  Throughput            231.71            Avg Latency               0.00
  Pending Messages           0            Max Latency                  0
                                          ATS File Count               0
                                          RIS File Count               0
---------------------------------------------------------------------------
ER PROFILE for Node g_dbsrv1            ER State Active

DDR - Running                           SPOOL DISK USAGE
  Current        11:41148416              Total                   500000
  Snoopy         11:41144388              Metadata Free            25057
  Replay         11:41082904              Userdata Free           466271
  Pages from Log Lag State   189953
                                        RECVQ
SENDQ                                     Txn In Queue                 0
  Txn In Queue               0            Txn In Pending List          0
  Txn Spooled                0
  Acks Pending               0          APPLY - Running
                                          Txn Processed                0
NETWORK - Running                         Commit Rate               0.00
  Currently connected to 1 out of 1       Avg. Active Apply         0.00
  Msg Sent                  80            Fail Rate                 0.00
  Msg Received              25            Total Failures               0
  Throughput             15.22            Avg Latency               0.00
  Pending Messages           0            Max Latency                  0
                                          ATS File Count               0
                                          RIS File Count               0
---------------------------------------------------------------------------

7.2 Server

7.2.1 Create an enterprise server

This command defines the servers involved in replication and their relationships.

  • Replication domain root node:
    cdr define server -A /data/IFMX-ERATS -R /data/IFMX-ERRIS -I g_dbsrv1
  • Replication domain leaf node:
    cdr define server -A /data/IFMX-ERATS -R /data/IFMX-ERRIS -L -S g_dbsrv1 -I g_dbsrv2

The meaning of each option is described below:

  • -A: Specifies the directory to store aborted transaction spooling files for replicate transactions that fail to be applied.
  • -R: Specifies the directory to store row information spooling files for replicate row data that fails conflict resolution or encounters replication-order problems.
  • -L: Defines the server as a leaf server in an existing domain. The server that is specified by the --sync (-S) option becomes the parent of the leaf server.
  • -S: Adds a server to the existing domain of which the sync_server is a member.
  • -I: Adds server_group to the replication domain.

7.2.2 List

cdr list server
SERVER                 ID STATE    STATUS     QUEUE  CONNECTION CHANGED
-----------------------------------------------------------------------
g_dbsrv1                1 Active   Local           0
g_dbsrv2                2 Active   Connected       0 Dec  5 13:29:09

7.2.3 Suspend

cdr suspend server g_dbsrv2

7.2.4 Resume

cdr resume server g_dbsrv2

7.3 Replicate

7.3.1 Create a replicate

  • Create a new replicate named orders for the orders table between the source server g_dbsrv1 and the destination server g_dbsrv2:

    cdr define replicate -M g_dbsrv1 -C always -S row -A -R orders \
    "P dbs1@g_dbsrv1:informix.orders" "SELECT * FROM orders" \
    "R dbs2@g_dbsrv2:informix.orders" "SELECT * FROM orders"
  • Create a new replicate named gvenpedh_note for the gvenpedh_note table between the source server g_dbsrv1 and the destination server g_dbsrv2; in this case the table has no primary key, so the columns of a unique index are given as the replication key with -k:

    cdr define replicate -M g_dbsrv1 -C always -S row -A -R gvenpedh_note -k "cabid,linid,tipnot,orden" \
    "P dbs1@g_dbsrv1:informix.gvenpedh_note" "SELECT * FROM gvenpedh_note" \
    "R dbs2@g_dbsrv2:informix.gvenpedh_note" "SELECT * FROM gvenpedh_note"

7.3.2 Start

Start orders replicate:

cdr start rep orders

7.3.3 Stop

Stop orders replicate:

cdr stop rep orders

7.3.4 Suspend

Suspend orders replicate:

cdr suspend rep orders

7.3.5 Resume

Resume orders replicate:

cdr resume rep orders

7.3.6 Remove

Remove orders replicate:

cdr delete rep orders

7.3.7 List

  • List all replicates:

    cdr list rep
    CURRENTLY DEFINED REPLICATES
    -------------------------------
    REPLICATE:        customer
    STATE:            Active ON:g_dbsrv1
    CONFLICT:         Always Apply
    FREQUENCY:        immediate
    QUEUE SIZE:       0
    PARTICIPANT:      dbs1:informix.customer
    OPTIONS:          row,ris,ats,fullrow
    REPLID:           65538 / 0x10002
    REPLMODE:         PRIMARY  ON:g_dbsrv1
    APPLY-AS:         INFORMIX ON:g_dbsrv1
    REPLTYPE:         Master
    
    REPLICATE:        orders
    STATE:            Inactive ON:g_dbsrv1
    CONFLICT:         Always Apply
    FREQUENCY:        immediate
    QUEUE SIZE:       0
    PARTICIPANT:      dbs1:informix.orders
    OPTIONS:          row,ris,ats,fullrow
    REPLID:           65540 / 0x10004
    REPLMODE:         PRIMARY  ON:g_dbsrv1
    APPLY-AS:         INFORMIX ON:g_dbsrv1
    REPLTYPE:         Master
  • List a replicate:

    cdr list rep customer
    DEFINED REPLICATES ATTRIBUTES
    ------------------------------
    REPLICATE:        customer
    STATE:            Active ON:g_dbsrv1
    CONFLICT:         Always Apply
    FREQUENCY:        immediate
    QUEUE SIZE:       0
    PARTICIPANT:      dbs1:informix.customer
    OPTIONS:          row,ris,ats,fullrow
    REPLID:           65538 / 0x10002
    REPLMODE:         PRIMARY  ON:g_dbsrv1
    APPLY-AS:         INFORMIX ON:g_dbsrv1
    REPLTYPE:         Master
  • Show the definition of a replicate:

    cdr list rep brief customer
    REPLICATE            TABLE                                    SELECT
    ----------------------------------------------------------------------------
    customer             dbs1@g_dbsrv1:informix.customer          select t.customer_num, t.fname, t.lname, t.company, t.address1, t.address2, t.city, t.state, t.zipcode, t.phone from customer t
    customer             dbs2@g_dbsrv2:informix.customer          select t.customer_num, t.fname, t.lname, t.company, t.address1, t.address2, t.city, t.state, t.zipcode, t.phone from customer t

7.3.8 Check

  • Check the status of a replicate:

    cdr check repl -m g_dbsrv -r cpar_parprel g_dbsrv_er
    Aug 31 2016 10:44:35 ------   Table scan for cpar_parprel start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv                8964         0         0         0         0
    g_dbsrv_er             8964         0         0         0         0
    
    Aug 31 2016 10:44:37 ------   Table scan for cpar_parprel end   ---------
  • Check the synchronization state of a replicate and display the detail of the inconsistent rows:

    cdr check repl --verbose -m g_dbsrv -r gcomfacl g_dbsrv_er
    ------------------------------------------------------------------
    row missing on <g_dbsrv_er>
    key:linid:1717463
    ------------------------------------------------------------------
    row missing on <g_dbsrv_er>
    key:linid:1717464
    ------------------------------------------------------------------
    row missing on <g_dbsrv_er>
    key:linid:1717465
    ------------------------------------------------------------------
    row missing on <g_dbsrv_er>
    key:linid:1717466
    ------------------------------------------------------------------
    row missing on <g_dbsrv_er>
    key:linid:1717467
    ------------------------------------------------------------------
    
    ------------------------------------------------------------------
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv             1612082         0         0         0         0
    g_dbsrv_er          1609757         2      2327         0         0
    
    WARNING: replicate is not in sync
    Aug 31 2016 11:27:27 ------   Table scan for gcomfacl end   ---------
    
    command failed -- WARNING: replicate is not in sync (178)
  • Check the synchronization status of a replicate and repair the inconsistencies encountered:

    cdr check repl --repair -m g_dbsrv -r cpar_prearti g_dbsrv_er
    Aug 31 2016 11:45:35 ------   Table scan for cpar_prearti start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv             1237741         0         0         0     70322
    g_dbsrv_er          1237730         0        11       213         0
    
    
    The repair operation completed. Validating the repaired rows ...
    Validation of repaired rows failed.
    WARNING: replicate is not in sync
    Aug 31 2016 11:49:35 ------   Table scan for cpar_prearti end   ---------
    
    command failed -- WARNING: replicate is not in sync (178)

7.3.9 Modify

  • Modify the replicate test1 to fire triggers:

    cdr modify rep test1 -T y

7.4 Replicate set

7.4.1 Create a replicate set

  • Create set1 as a non-exclusive replicate set consisting of the two replicates customer and orders:

    cdr define replicateset set1 customer orders
  • Create set2 as an exclusive replicate set consisting of the two replicates state and manufact:

    cdr define replicateset --exclusive set2 state manufact

7.4.2 List

  • List all replicate sets:

    cdr list replicateset
    Ex T REPLSET                PARTICIPANTS
    -----------------------------------------------
    N  N set1                   customer, orders
    N  N set2                   manufact, state
  • List all the replicates defined in the set2 replicate set:

    cdr list replicateset set2
    REPLICATE SET:set2
    
    CURRENTLY DEFINED REPLICATES
    -------------------------------
    REPLICATE:        state
    STATE:            Inactive ON:g_dbsrv1
    CONFLICT:         Always Apply
    FREQUENCY:        immediate
    QUEUE SIZE:       0
    PARTICIPANT:      dbs1:informix.state
    OPTIONS:          row,ris,ats,fullrow
    REPLID:           65544 / 0x10008
    REPLMODE:         PRIMARY  ON:g_dbsrv1
    APPLY-AS:         INFORMIX ON:g_dbsrv1
    REPLTYPE:         Master
    
    REPLICATE:        manufact
    STATE:            Inactive ON:g_dbsrv1
    CONFLICT:         Always Apply
    FREQUENCY:        immediate
    QUEUE SIZE:       0
    PARTICIPANT:      dbs1:informix.manufact
    OPTIONS:          row,ris,ats,fullrow
    REPLID:           65545 / 0x10009
    REPLMODE:         PRIMARY  ON:g_dbsrv1
    APPLY-AS:         INFORMIX ON:g_dbsrv1
    REPLTYPE:         Master

7.4.3 Modify

  • Delete the customer replicate from the replicate set named set1:

    cdr change replicateset set1 --d customer
  • Add the customer replicate to the replicate set named set2:

    cdr change replicateset set2 --a customer

7.4.4 Remove

Remove the replicate set named set2:

cdr delete replicateset set2

7.4.5 Check

  • Check the synchronization status of the replicates included in the replicate set named set2:

    cdr check replset -m g_dbsrv1 -s set2 g_dbsrv2
    Dec 11 2014 17:14:56 ------   Table scan for manufact start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 9         0         0         0         0
    g_dbsrv2                 8         0         1         1         0
    
    WARNING: replicate is not in sync
    Dec 11 2014 17:14:57 ------   Table scan for manufact end   ---------
    
    
    Dec 11 2014 17:14:57 ------   Table scan for state start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 0         0         0         0         0
    g_dbsrv2                 0         0         0         0         0
    
    Dec 11 2014 17:14:57 ------   Table scan for state end   ---------
    
    
    Dec 11 2014 17:14:57 ------   Table scan for customer start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                28         0         0         0         0
    g_dbsrv2                28         0         0         0         0
    
    Dec 11 2014 17:14:57 ------   Table scan for customer end   ---------
    
    command failed -- WARNING: set is not in sync (213)
  • Check the synchronization state of the replicates included in the replicate set named set2, showing the details of the inconsistent rows:

    cdr check replset --verbose -m g_dbsrv1 -s set2 g_dbsrv2
    Dec 11 2014 17:16:41 ------   Table scan for manufact start  --------
    
    data mismatch
    key: manu_code:ANZ
    
    column: lead_time
    g_dbsrv1:   5
    g_dbsrv2:<NULL>
    ------------------------------------------------------------------
    row missing on <g_dbsrv2>
    key:manu_code:SMT
    ------------------------------------------------------------------
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 9         0         0         0         0
    g_dbsrv2                 8         0         1         1         0
    
    WARNING: replicate is not in sync
    Dec 11 2014 17:16:42 ------   Table scan for manufact end   ---------
    
    
    Dec 11 2014 17:16:42 ------   Table scan for state start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 0         0         0         0         0
    g_dbsrv2                 0         0         0         0         0
    
    Dec 11 2014 17:16:42 ------   Table scan for state end   ---------
    
    
    Dec 11 2014 17:16:42 ------   Table scan for customer start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                28         0         0         0         0
    g_dbsrv2                28         0         0         0         0
    
    Dec 11 2014 17:16:42 ------   Table scan for customer end   ---------
    
    command failed -- WARNING: set is not in sync (213)
  • Check the synchronization status of the replicates included in the replicate set named set2 and repair the inconsistencies found:

    cdr check replset --repair -m g_dbsrv1 -s set2 g_dbsrv2
    Dec 11 2014 17:21:17 ------   Table scan for manufact start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 9         0         0         0         2
    g_dbsrv2                 8         0         1         1         0
    
    
    The repair operation completed. Validating the repaired rows ...
    Validation completed successfully.
    Dec 11 2014 17:21:17 ------   Table scan for manufact end   ---------
    
    
    Dec 11 2014 17:21:17 ------   Table scan for state start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                 0         0         0         0         0
    g_dbsrv2                 0         0         0         0         0
    
    
    Dec 11 2014 17:21:17 ------   Table scan for state end   ---------
    
    
    Dec 11 2014 17:21:17 ------   Table scan for customer start  --------
    
    Node                  Rows     Extra   Missing  Mismatch Processed
    ---------------- --------- --------- --------- --------- ---------
    g_dbsrv1                28         0         0         0         0
    g_dbsrv2                28         0         0         0         0
    
    
    Dec 11 2014 17:21:17 ------   Table scan for customer end   ---------

8 Tools

8.1 Check replicate

#!/bin/sh
# ======================================================================
# This document contains intellectual property of DEISTER S.A.
# It may not be published, copied, or transferred, in whole or in
# part, without the written permission of DEISTER S.A.
#
# By using this software, you acknowledge and agree that you use it
# at your own risk and that none of the parties involved in creating,
# producing, or delivering the service is liable for any direct,
# incidental, consequential, indirect, or punitive damages, or for
# any loss, cost, or expense (including legal fees, expert fees, or
# other disbursements) that may arise directly or indirectly from its
# use, including but not limited to damage caused by viruses, defects,
# human action or inaction of the computer system, telephone line,
# hardware, software, or program errors.
#
# description: Script to run cdr check replicate from cron.
#
# processname: ifx_check_replicates.sh
# ======================================================================
 
INFORMIXDIR=${INFORMIXDIR:-`grep "^informix:" /etc/passwd | awk -F: '{ print $6 }'`}
INFORMIXSERVER=${INFORMIXSERVER:-`uname -n|awk -F'.' '{print $1}'`}
export INFORMIXDIR INFORMIXSERVER
 
STATUS=OK
MOTIVE=""
LOGSYS=/usr/tmp/`basename $0 .sh`.`date +%w`.log
 
# Usage message.
usage () {
   STATUS=ERROR
   MOTIVE="Usage: `basename $0` --type <replset|repl> \
                                --master <server-master> \
                                --target <server-target> \
                                --repl  <rep1,rep2, ...,repn> \
                                [--verbose <yes|no>] \
                                [--repair <yes|no>] \
                                [--mailto <"user1@domain.es,user2@domain.es,...">] \
                                [--mailerror <"user1@domain.es,user2@domain.es,...">]"
   echo >> $LOGSYS
   echo "Error: $1" >> $LOGSYS
   echo >> $LOGSYS
   echo $MOTIVE >> $LOGSYS
   exit 1
}
 
#send email
do_mail ()
{
   mail -s "$STATUS `basename $0` at `date`" -r `hostname` "$1" < $LOGSYS
}
 
# ************************************************************************
#
# Main
#
# ************************************************************************
# Reset the log file if it was last modified before today.
if [ -f "$LOGSYS" ]; then
   CHECKLOG=`find $LOGSYS -daystart -mtime -1 -print`
   if [ -z "$CHECKLOG" ]; then
        cat /dev/null > $LOGSYS
   fi
fi
 
echo "#% "                                                    >>$LOGSYS
echo "#% START"                                               >>$LOGSYS
echo "#% name: `basename $0 .sh`"                             >>$LOGSYS
echo "#% desc: Executes  cdr check replicate"                 >>$LOGSYS
echo "#% "                                                    >>$LOGSYS
echo "#% host: `uname -n`"                                    >>$LOGSYS
echo "#% date: `date +%d-%m-%Y`"                              >>$LOGSYS
echo "#% time: `date +%H:%M`"                                 >>$LOGSYS
echo "#% "                                                    >>$LOGSYS
echo " "                                                      >>$LOGSYS
 
# -------------------------------------------------------------------
# Parse and verify arguments
# -------------------------------------------------------------------
#Print argument
echo "Arguments:" >> $LOGSYS
echo $* >> $LOGSYS
echo >> $LOGSYS
echo >> $LOGSYS
 
#default value
VERBOSE=NO
REPAIR=NO
 
while echo $1 | grep ^- > /dev/null; do
    case $1 in
       --mailto|--mailerror|--type|--master|--target|--repl|--verbose|--repair)
           # Uppercase the option name, strip the dashes, and assign
           # the following argument to the resulting variable.
           eval "$(echo $1 | tr 'a-z' 'A-Z' | sed 's/-//g')=\"\$2\"";;
       --) break;;
       -*) usage "unknown argument: $1";;
        *) break;;
    esac
    shift
    shift
done
 
#Required params
REQPARAMS=( type master target repl )
for PARAM in "${REQPARAMS[@]}" ; do
   arg=`echo $PARAM | tr 'a-z' 'A-Z'`
   if [ -z "${!arg}" ]; then
      usage "missing argument --$PARAM"
   fi
done
 
if [ $TYPE == 'replset' ] ; then
     FLAG=s
elif [ $TYPE == 'repl' ] ; then
     FLAG=r
else
     usage "unknown value $TYPE"
fi
 
if [ $VERBOSE == 'YES' ] || [ $VERBOSE == 'yes' ]; then
     VERBOSE="--verbose"
else
     VERBOSE=""
fi
 
if [ $REPAIR == 'YES' ] || [ $REPAIR == 'yes' ]; then
     REPAIR="--repair"
else
     REPAIR=""
fi
 
# -------------------------------------------------------------------
# Process
# -------------------------------------------------------------------
#SPLIT LISTREP TO ARRAY
LISTREP=(${REPL//,/ })
for REP in "${LISTREP[@]}"; do
   CMD="$INFORMIXDIR/bin/cdr check $TYPE $VERBOSE $REPAIR -m $MASTER -$FLAG $REP $TARGET"
   echo $CMD >> $LOGSYS
   su - informix -c "$CMD" >> $LOGSYS 2>&1
   RC=$?
   if [ $RC -ne 0 ]; then
      STATUS=ERROR
      MOTIVE="ERROR $RC"
   fi
done
 
echo ""                                                    >>$LOGSYS
echo "#% status: $STATUS"                                  >>$LOGSYS
echo "#% motive: $MOTIVE"                                  >>$LOGSYS
echo "#% enddate: `date +%d-%m-%Y`"                        >>$LOGSYS
echo "#% endtime: `date +%H:%M`"                           >>$LOGSYS
echo ""                                                    >>$LOGSYS
echo "END $STATUS `basename $0 .sh` `date` "               >>$LOGSYS
echo "===================================================" >>$LOGSYS
 
# ************************************************************************
#
# Mail the result of the process to the administrator.  This way the
# administrator always knows whether the process ran correctly or not.
# For this to work, make sure the machine is allowed to relay mail.
#
# ************************************************************************
if [ -n "$MAILTO" ]; then
     do_mail "$MAILTO"
fi
if [ -n "$MAILERROR" ] && [ "$STATUS" != "OK" ]; then
     do_mail "$MAILERROR"
fi
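
Once installed, the script can be scheduled from cron. The entry below is only a sketch: the installation path, replicate names, and mail address are illustrative (the server group names g_dbsrv1 and g_dbsrv2 are the ones used in the examples above), and a crontab entry must fit on a single line.

```shell
# Hypothetical crontab entry: check the "customer" and "state" replicates
# between g_dbsrv1 and g_dbsrv2 every night at 02:00, mailing only on error.
0 2 * * * /usr/local/bin/ifx_check_replicates.sh --type repl --master g_dbsrv1 --target g_dbsrv2 --repl customer,state --mailerror "dba@domain.es"
```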

8.2 Nagios Plugins

#!/bin/bash
# Nagios plugin: check the state of the Enterprise Replication
# servers reported by "cdr list server".
INFORMIXDIR=`grep "^informix:" /etc/passwd | cut -f6 -d:`
INFORMIXSERVER=`uname -n|awk -F'.' '{print $1}'`
export INFORMIXDIR INFORMIXSERVER
 
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
 
FILE_RES=/tmp/check_ifmx_repl.result
if ! $INFORMIXDIR/bin/cdr list server > $FILE_RES 2>&1; then
   echo "UNKNOWN: cdr list server failed"
   exit $STATE_UNKNOWN
fi
critical=0
text="All servers are active"
while read line; do
  # Skip the header and separator lines of the "cdr list server" output.
  if [[ $line =~ ^SERVER ]] || [[ $line =~ ^- ]]; then
     continue
  fi
  # Split the line into fields: SERVER ID STATE STATUS ...
  servers=(${line})
  # The STATE column should read "Active".
  if [[ ! ${servers[2]} =~ ^Active ]]; then
     # Drop the OK message on the first problem, then accumulate.
     [ $critical -eq 0 ] && text=""
     critical=$((critical + 1))
     text="$text${servers[0]}: ${servers[2]}; "
  fi
  # The STATUS column should read "Connected" ("Local" for the local server).
  if [[ ! ${servers[3]} =~ ^Connected ]] && [[ ! ${servers[3]} =~ ^Local ]]; then
     [ $critical -eq 0 ] && text=""
     critical=$((critical + 1))
     text="$text${servers[0]}: ${servers[3]}; "
  fi
done < $FILE_RES
 
if [ $critical -ge 1 ]; then
     echo "CRITICAL: $text"
     exit $STATE_CRITICAL
else
     echo "OK: $text"
     exit $STATE_OK
fi
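
To wire the plugin into Nagios, a command and a service definition are needed. The object definitions below are a sketch only: the plugin path, host name, and template name (`generic-service`) are assumptions to adapt to your installation.

```
# Hypothetical Nagios object definitions; adjust paths and names.
define command {
    command_name    check_ifmx_repl
    command_line    /usr/lib/nagios/plugins/check_ifmx_repl.sh
}

define service {
    use                     generic-service
    host_name               dbsrv1
    service_description     Informix ER servers
    check_command           check_ifmx_repl
}
```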