IBM Informix® allows you to generate events from the execution of triggers through the use of callback functions. It is possible to write a generic function that generates an audit trail of the information manipulated through SQL and captured through triggers.
This article describes how these facilities can be used in different contexts to generate auditing records.
1 Triggers
A trigger is an action that is executed when an operation is executed on a table. All versions of Informix have supported triggers on the following SQL operations: INSERT, UPDATE, and DELETE.
Informix Version 9.2x added support for SELECT triggers, and Version 9.40 added support for "INSTEAD OF" triggers, which let you replace the triggering operation with the one defined in the trigger. This effectively allows you to update a non-updatable view.
2 Trigger syntax
The syntax for triggers is described by the following simplified syntax diagrams.
In these syntax diagrams, you select one item from the list in braces ("{" and "}") separated by a pipe symbol ("|"). The expressions surrounded by brackets ("[" and "]") represent optional items that may be required depending on the previous choices. This will be clarified shortly with examples.
Listing 1. Sample trigger syntax
CREATE TRIGGER trigger_name {INSERT | UPDATE | DELETE | SELECT} ON table_name
    [REFERENCING OLD AS old_name]
    [REFERENCING NEW AS new_name]
    [BEFORE [WHEN (condition)] (action)]
    [FOR EACH ROW [WHEN (condition)] (action)]
    [AFTER [WHEN (condition)] (action)]
    [ENABLED | DISABLED]

CREATE TRIGGER trigger_name INSTEAD OF {INSERT ON | UPDATE ON | DELETE ON} view_name
    [REFERENCING OLD AS old_name]
    [REFERENCING NEW AS new_name]
    FOR EACH ROW [WHEN (condition)] (action)
    [ENABLED | DISABLED]
You can choose from three types of actions, all of which can be used in the same trigger:
- Before: Execute this action once before the triggering action executes
- For each row: Execute this action after each row is processed
- After: Execute this action once after the triggering action has executed, even if no rows were processed
Note that each triggering action includes an optional condition that evaluates if the action will be executed. This can be useful when you want to generate auditing records only for suspicious processing like salary changes of more than 10%, for example. Each type of action (before, for each row, after) can contain multiple conditions and multiple actions, including multiple actions per condition.
Listing 2. Sample trigger creation from the Informix SQL syntax manual
CREATE TRIGGER up_trigger UPDATE OF unit_price ON stock
    REFERENCING OLD AS pre NEW AS post
    FOR EACH ROW WHEN (post.unit_price > pre.unit_price * 2)
        (INSERT INTO warn_tab VALUES (pre.stock_num, pre.order_num,
            pre.unit_price, post.unit_price, CURRENT));
This statement inserts a row in the warn_tab table if the new unit_price is more than twice as much as the old one.
The action does not have to be an SQL statement. It can also be either an EXECUTE PROCEDURE statement or an EXECUTE FUNCTION statement.
3 Processing rows
We saw in the syntax diagrams and in the example above that we can reference both the before and after images of the row being processed. Note, however, that we cannot pass the row reference directly to a function or a procedure. For example, the following CREATE statement fails:
Listing 3. Create statement that fails
CREATE TRIGGER tab1instrig INSERT ON tab1
    REFERENCING NEW AS post
    FOR EACH ROW (EXECUTE PROCEDURE do_auditing('INSERT', post))
You can work around this problem by defining a row as you pass the arguments to the function. Assuming a two-column table, the previous statement would become:
Listing 4. Successful create statement
CREATE TRIGGER tab1instrig INSERT ON tab1
    REFERENCING NEW AS post
    FOR EACH ROW (EXECUTE PROCEDURE do_auditing('INSERT',
        ROW(post.pkid, post.col2)::ROW(pkid integer, col2 varchar(30))))
This example shows that it is not sufficient to create a row: we must also include its definition through the casting operator (::). Before we can address this problem, we first have to look at a few extensibility concepts introduced in Informix 9.x.
4 Object-relational features
Informix 9.x introduces object-relational features. A few of these features facilitate the implementation of auditing functions. They are the new data type ROW and the user-defined functions (UDFs).
The ROW data type can be equated to a table definition: it defines multiple columns that are grouped together into a tuple. ROW types can be either named or unnamed. For example, you can define a named row type as follows:
Listing 5. Define row type
CREATE ROW TYPE zipcode_t ( state CHAR(2), code CHAR(5) );
We can create elements of that type using the zipcode_t name. We can also create unnamed row types as we did in the trigger example above. The unnamed row type was created with the expression:
ROW(post.pkid, post.col2)::ROW(pkid integer, col2 varchar(30))
A ROW type can be used to create typed tables or as the data type for a column in a table. In the case of the zipcode_t type, it could be used in a table definition:
Listing 6. Table definition
CREATE TABLE customer (
    FirstName varchar(30),
    . . .
    zip zipcode_t,
    . . .
);
User-defined functions (or procedures) can accept ROW types as arguments. Here is an example that formats each returned row as XML:
SELECT genxml2('customer', customer) FROM customer;
This statement passes a row as the second argument of the genxml2() function. This argument, which has the same name as the table in the FROM clause, represents a row from the customer table. What is passed as an argument is an unnamed row type. For this reason, genxml2() takes a first argument that gives a name for the row, which is then used as the top-level element name in the XML representation. For more information on generating XML from Informix, see the article "Generating XML from Informix" listed in the reference section at the end of this article.
A ROW type is self-describing. When a UDF receives a ROW as argument, it can find out the number of columns that are defined, their names, their types and their content. A UDF can be defined as receiving a generic row. At runtime, it can extract enough information from the row to decide on the type of processing.
5 Generating auditing records
With what we have just learned about ROW types and UDFs, we can see that it is possible to create an auditing function that works for any table in your database. If we plan to write to an auditing table, we have to make sure that we match the audit table definition, no matter which table is being audited.
A simple approach is to generate the audit record in an XML representation. We could then use an audit table with the following format:
Listing 7. Audit table format
CREATE TABLE auditTable (
    id      SERIAL PRIMARY KEY,
    tabname VARCHAR(128),
    log     LVARCHAR(30000)
);
We can create a function, do_auditing(), that takes up to four arguments: the table name, the trigger type (INSERT, UPDATE, DELETE, or SELECT), and the before and after images of the row.
6 Trigger introspection
Informix Version 9.40.xC4 introduced a set of functions in the DataBlade API to retrieve context information from within a UDF. The DataBlade API is the C programming interface to the database server. This implies that the trigger introspection facility can only be used if the function or procedure called in the trigger is written in "C."
From this point on, the example code will assume the use of the stores7 demo database. If we wanted to create an audit of inserts into the customer table, we could create a function do_auditing1() and use it in the CREATE TRIGGER as follows:
CREATE TRIGGER custinstrig INSERT ON customer FOR EACH ROW (EXECUTE PROCEDURE do_auditing1() )
The do_auditing1() function retrieves the row information and any other information that could be useful for the auditing. The trigger introspection functions include:
- mi_integer mi_hdr_status(): the returned status indicates whether the function is executing in an HDR environment and, if so, whether on the primary or the secondary server
- mi_string *mi_trigger_tabname(mi_integer flags): returns the name of the triggering table or view. The flags argument indicates the format of the table name: whether it includes the schema name, owner name, and so on
- mi_integer mi_trigger_event(): returns trigger information (the operation and whether it is a before, after, for-each, or instead-of event)
- mi_integer mi_trigger_level(): returns the nesting level of the trigger
- mi_string *mi_trigger_name(): returns the name of the trigger
- MI_ROW *mi_trigger_get_old_row(): returns the before image of the row
- MI_ROW *mi_trigger_get_new_row(): returns the after image of the row
With these functions, we can find out everything about the trigger and its context. We can now write the do_auditing1() "C" procedure to provide the database modification auditing.
The first thing to do is make sure we are in a trigger and processing each row:
Listing 8. Making sure we are in a trigger and processing each row
trigger_operation = mi_trigger_event();
if (trigger_operation & MI_TRIGGER_NOT_IN_EVENT) {
    /* not in a trigger! generate an exception */
    mi_db_error_raise(NULL, MI_EXCEPTION,
        "do_auditing1() can only be called within a trigger!", NULL);
    return;
}
/* Make sure this is in a FOR EACH type of trigger */
if (0 == (trigger_operation & MI_TRIGGER_FOREACH_EVENT)) {
    /* not in a for each trigger! generate an exception */
    mi_db_error_raise(NULL, MI_EXCEPTION,
        "do_auditing1() must be in a FOR EACH trigger operation", NULL);
    return;
}
Once we know we are in the right context, we can prepare a log record based on the type of operation executed. The following code excerpt illustrates how it can be done:
Listing 9. Preparing a log record
trigger_operation &= (MI_TRIGGER_INSERT_EVENT | MI_TRIGGER_UPDATE_EVENT |
                      MI_TRIGGER_DELETE_EVENT | MI_TRIGGER_SELECT_EVENT);
/* Call the appropriate function */
switch (trigger_operation) {
case MI_TRIGGER_INSERT_EVENT:
    pdata = doInsertCN();
    break;
. . .
Once the log record has been created, the last thing we have to do is insert it into the auditing table:
Listing 10. Insert into audit table
. . .
sprintf(psql, "INSERT INTO auditTable VALUES(0, '%s', '%s')", tabname, pdata);
sessionConnection = mi_get_session_connection();
ret = mi_exec(sessionConnection, psql, MI_QUERY_NORMAL);
. . .
For all the details of the do_auditing1() implementation, please consult the example code provided with this article.
7 Getting other useful information
The do_auditing1() function records the table name and the row being added, modified, or removed. The DataBlade API provides the following functions to help you further identify the statements:
- mi_get_database_info(): Retrieve basic information such as database name and username.
- mi_get_id(): Retrieve either the statement id or the session id.
- mi_get_transaction_id(): Obtain the current transaction id.
You can also retrieve the username of the user executing the trigger by using the SQL built-in function USER (see the Informix 11.50 SQL Syntax manual, pages 4-71).
8 Transaction boundary
The implementation of do_auditing1() runs within the context of the current transaction. This means that if the transaction ends with a rollback, the record is removed from the auditTable table, which is the desired behavior for this implementation. In this case, if an auditing program needs to know about the changes to the auditTable table, it must read the table at some time interval. Depending on how quickly it must react to these records, that could be every few seconds or, if the records can be processed at a more leisurely pace, every few minutes or hours.
What if we want to write to a file outside the database or send the auditing record on a message queue? In this case, the operation cannot be completed until we know that the transaction has been committed. For this purpose, we need to be able to react to events.
9 Event processing
The DataBlade API provides ways to register callback functions that wait for specific events. This mechanism allows us to complete our auditing operation when the transaction completes. To implement the trigger, we follow the general approach illustrated in the following figure.

When a statement executes (1), the trigger is called (2). The trigger registers a callback function (3) to write the result to a file. It also creates the auditing record and stores it in memory (4). This is a special type of memory available through the DataBlade API where you give it a name and can retrieve a reference to it by name.
A transaction can complete after one row is processed but can also finish after multiple rows. When this happens (5), Informix calls the callback function (6). The callback function can read the records that were saved in named memory (7) and write each record to a file (8).
The processing for this approach is similar to do_auditing1(); let's call the new function do_auditing2(). It adds the creation of the named memory segment, the registration of a callback function, and the writing to files. The memory allocation is shown in Listing 11:
Listing 11. Memory allocation
sessionId = mi_get_id(sessionConnection, MI_SESSION_ID);
/* Retrieve or create session memory */
sprintf(buffer, "logger%d", sessionId);
if (MI_OK != mi_named_get(buffer, PER_SESSION, &pmem)) {
    /* wasn't there, allocate it */
    if (MI_OK != mi_named_zalloc(sizeof(NamedMemory_t), buffer,
                                 PER_SESSION, &pmem)) {
        mi_db_error_raise(NULL, MI_EXCEPTION,
            "Logger memory allocation error", NULL);
    }
}
We first retrieve the session ID to create a unique name for our named memory. We then try to get access to it. If it fails, this means that it is the first time this session called this trigger function. We then allocate the memory. Note that the third argument to the mi_named_zalloc() function is PER_SESSION. This means that the memory is allocated with a PER_SESSION duration. Once the user disconnects from the database server, the session disappears. Since the named memory was allocated on a PER_SESSION duration, the named memory is also freed.
The second addition to the code concerns the registration of a callback function.
Listing 12. Register the callback
/* Register the callback */
if (pmem->gothandle == 0) {
    cbhandle = mi_register_callback(NULL, MI_EVENT_END_XACT,
                                    cbfunc, (void *)pmem, NULL);
    if (cbhandle == NULL)
        mi_db_error_raise(NULL, MI_EXCEPTION,
            "Callback registration failed", NULL);
    pmem->gothandle = 1;
}
This code registers a callback function called cbfunc(). A pointer to the named memory is passed as an argument to mi_register_callback(). The cbfunc() function can then use it directly in its code. The function definition is:
MI_CALLBACK_STATUS MI_PROC_CALLBACK cbfunc(MI_EVENT_TYPE event_type, MI_CONNECTION *conn, void *event_data, void *user_data)
The key decision made in cbfunc() is whether to write to the audit file. This is done with two tests: one looks at the type of event (MI_EVENT_END_XACT), and the other checks whether the transaction ended in a commit (MI_NORMAL_END) or a rollback (MI_ABORT_END). Listing 13 provides code illustrating this:
Listing 13. Testing if we should write to the audit file
if (event_type == MI_EVENT_END_XACT) {
    . . .
    change_type = mi_transition_type(event_data);
    switch (change_type) {
    case MI_NORMAL_END:
        . . .
    case MI_ABORT_END:
        . . .
Note that even in the case of a rollback, the callback function must do some cleanup, removing all the records that were part of the transaction from the named memory.
The DataBlade API provides functions to write to operating system files. The callback function creates a unique file name to write the audit records stored in named memory.
Listing 14. Writing to operating system files
sprintf(buffer, "%s%d_%d.xml", LOGGERFILEPREFIX, pmem->sessionId, pcur->seq);
fd = mi_file_open(buffer, O_WRONLY | O_APPEND | O_CREAT, 0644);
ret = mi_file_write(fd, pcur->xml, strlen(pcur->xml));
mi_file_close(fd);
10 The fastpath interface
The DataBlade API provides functions, called the fastpath interface, to call another UDR. The description of this interface is beyond the scope of this article. You can find more information on the fastpath interface in the documentation provided in the reference section later in this article.
This interface could be used to call other functions, such as the ones defined in the MQSeries® DataBlade.
11 What about Java?
The fine-grained auditing capability described in this article is better implemented in "C" so you can take advantage of the introspection feature that allows you to implement a generic function that works for any table. This does not mean that Java™ cannot be used for part of the processing.
The advantage of using Java user-defined functions or procedures is that you have access to all the capabilities of the Java environment. This includes communication classes such as socket connections, HTTP protocol, and so on.
To demonstrate one way to use Java in our auditing functions, consider a new function, do_auditing3(). This function provides the same processing as do_auditing2() but changes the callback function slightly.
Instead of using the DataBlade API functions to write to a file, this callback function uses the fastpath interface to call a Java user-defined procedure that writes to a file. This Java procedure is defined as follows:
Listing 15. Java function
CREATE PROCEDURE writeFile(lvarchar, lvarchar) EXTERNAL NAME 'audit_jar:RecordAudit.writeFile(java.lang.String, java.lang.String)' LANGUAGE JAVA;
The first argument represents the file name, and the second argument the audit record. The callback function executes the Java procedure using the fastpath interface: it first obtains a reference to the function and then executes it with the appropriate arguments. This is demonstrated in the following code:
Listing 16. Finding and executing the Java procedure
fn = mi_routine_get(conn, 0, "writeFile(lvarchar, lvarchar)"); . . . ret = mi_routine_exec(conn, fn, &ret, buffer, pcur->xml); . . .
In the mi_routine_exec() call, the arguments buffer and pcur->xml are the arguments to the writeFile() procedure. The function reference fn must be released once we are done with it:
mi_routine_end(conn, fn);
12 Example code
As the "root" user, install the following packages:
# yum install unzip
# yum install gcc
As the informix user, create the folder $INFORMIXDIR/extend/auditing/, which will hold the program code.
$ mkdir $INFORMIXDIR/extend/auditing/
Download the source code (http://public.dhe.ibm.com/software/dw/data/dm-0410roy/auditing.zip) and place it in the folder you just created:
$ curl -o $INFORMIXDIR/extend/auditing/auditing.zip http://public.dhe.ibm.com/software/dw/data/dm-0410roy/auditing.zip
$ ls $INFORMIXDIR/extend/auditing/auditing.zip
Unzip the downloaded file:
$ cd $INFORMIXDIR/extend/auditing/
$ unzip auditing.zip
Edit the UNIX.mak file and set the TARGET variable:
TARGET=$(INFORMIXDIR)/incl/dbdk/makeinc.linux86_64
Edit the UNIX.mak file and comment out the lines that refer to the Java class; they are not necessary for our example:
RecordAudit.class: RecordAudit.java
	#javac RecordAudit.java

RecordAudit.jar: RecordAudit.class
	#jar cf RecordAudit.jar RecordAudit.class
Edit the file auditing1.c and modify the following INSERT to use an explicit column list, which avoids SQL errors if columns are later added to the table:
sprintf(psql, "INSERT INTO auditTable (id,tabname,log) VALUES(0, '%s', '%s')", tabname, pdata);
Edit the file audit_util.c and change the UPDATE element from lowercase to uppercase:
sprintf(&buffer[pbufLen], "</UPDATE></%s>", ptabname);
Change NULL values to an empty string:
Edit the file audit_util.c and modify the doUpdateCN function:
/* we should do this test */
case MI_NULL_VALUE:
    /*pcast = "NULL";*/  /*OLD*/
    pcast = "";          /*NEW*/
    break;
case MI_NORMAL_VALUE:
    pcast = do_cast(conn, datum, tid, lvarTid);
    break;
} /* end switch */
if (0 == strcmp(poldcolname, pnewcolname)) {
    switch (mi_value(newRow, j, &datum, &collen)) {
    case MI_NULL_VALUE:
        /*pcast2 = "NULL";*/  /*OLD*/
        pcast2 = "";          /*NEW*/
        break;
    case MI_NORMAL_VALUE:
        pcast2 = do_cast(conn, datum, tid, lvarTid);
        break;
    } /* end switch */
Fix characters that may cause errors:
- Replace single quotes with double quotes.
- Replace new line with blank character.
Edit the file audit_util.c again and add the following function after the definition of the fixname function:
/*--------------------------------------------------------------*/
void fixXMLValue(mi_string *in)
{
    char temp[BUFSIZE];
    int i, j, pos;

    strcpy(temp, in);
    j = strlen(temp);
    pos = 0;
    for (i = 0; i < j; i++) {
        switch (temp[i]) {
        case '&' : strcpy(&in[pos], "&amp;");  pos = pos + 5; break;
        case '\"': strcpy(&in[pos], "&quot;"); pos = pos + 6; break;
        case '\'': strcpy(&in[pos], "&apos;"); pos = pos + 6; break;
        case '<' : strcpy(&in[pos], "&lt;");   pos = pos + 4; break;
        case '>' : strcpy(&in[pos], "&gt;");   pos = pos + 4; break;
        case '\n': in[pos] = ' '; pos++; break;
        case '\r': in[pos] = ' '; pos++; break;
        default  : in[pos] = temp[i]; pos++; break;
        }
    }
    in[pos] = '\0';
}
Inside the function doUpdateCN, add the two following lines before the first occurrence of the line pbufLen = strlen(buffer);

/*ADD*/
fixXMLValue(pcast);
fixXMLValue(pcast2);
/*BEFORE*/
pbufLen = strlen(buffer);
Inside the function doInsertCN, add the following line before the first occurrence of the line posi = strlen(buffer);

/*ADD*/
fixXMLValue(pcast);
/*BEFORE*/
posi = strlen(buffer);
Inside the function doDeleteCN, add the following line before the first occurrence of the line posi = strlen(buffer);

/*ADD*/
fixXMLValue(pcast);
/*BEFORE*/
posi = strlen(buffer);
Skip columns of BLOB or CLOB type:
Edit the file audit_util.c and change the following code fragment inside the function doInsertCN:

case MI_NORMAL_VALUE:
    /*OLD:*/
    /*pcast = do_cast(conn, datum, tid, lvarTid);*/
    /*NEW*/
    if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
        mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
        pcast = "";
    else
        pcast = do_cast(conn, datum, tid, lvarTid);
Edit the file audit_util.c and change the following code fragment inside the function doDeleteCN:

case MI_NORMAL_VALUE:
    /*OLD:*/
    /*pcast = do_cast(conn, datum, tid, lvarTid);*/
    /*NEW*/
    if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
        mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
        pcast = "";
    else
        pcast = do_cast(conn, datum, tid, lvarTid);
Edit the file audit_util.c and change the following code fragments inside the function doUpdateCN:

case MI_NORMAL_VALUE:
    /*OLD:*/
    /*pcast = do_cast(conn, datum, tid, lvarTid);*/
    /*NEW*/
    if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
        mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
        pcast = "";
    else
        pcast = do_cast(conn, datum, tid, lvarTid);

case MI_NORMAL_VALUE:
    /*OLD*/
    /*pcast2 = do_cast(conn, datum, tid, lvarTid);*/
    /*NEW*/
    if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
        mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
        pcast2 = "";
    else
        pcast2 = do_cast(conn, datum, tid, lvarTid);
    break;
Fix a bug with the column order on update:
Edit the file audit_util.c and replace the function doUpdateCN with the following:
mi_string *doUpdateCN()
{
    MI_CONNECTION *conn;
    MI_ROW *oldRow, *newRow;
    MI_TYPEID *tid, *lvarTid;
    MI_TYPE_DESC *td;
    MI_ROW_DESC *rdOld, *rdNew;
    MI_DATUM datum;
    mi_lvarchar *lvarret;
    mi_integer i, j, k, len, posi, colCountOld, colCountNew, collen;
    mi_string *buffer, *ptabname, *poldcolname, *pnewcolname, *pcast, *pcast2;
    mi_integer pbufLen;

    DPRINTF("logger", 90, ("Entering doUpdateCN()"));
    conn = mi_get_session_connection();

    /* get the rows */
    oldRow = mi_trigger_get_old_row();
    newRow = mi_trigger_get_new_row();
    rdOld = mi_get_row_desc(oldRow);
    rdNew = mi_get_row_desc(newRow);
    colCountOld = mi_column_count(rdOld);
    colCountNew = mi_column_count(rdNew);
    DPRINTF("logger", 90, ("Column count before: %d, after: %d",
            colCountOld, colCountNew));
    tid = mi_rowdesc_typeid(rdOld);
    lvarTid = mi_typename_to_id(conn, mi_string_to_lvarchar("lvarchar"));
    td = mi_type_typedesc(NULL, tid);

    /* prepare the output buffer */
    ptabname = mi_trigger_tabname(MI_TRIGGER_CURRENTTABLE | MI_TRIGGER_TABLENAME);
    len = strlen(ptabname);
    for (i = 0; i < len; i++)
        if (0 == isgraph(ptabname[i])) {
            ptabname[i] = 0;
            DPRINTF("logger", 90, ("Found a non-printable character in tabname"));
        }
    buffer = (mi_string *)mi_alloc(BUFSIZE);
    sprintf(buffer, "<%s><UPDATE>", ptabname);

    /* Process each column */
    for (i = 0; i < colCountOld; i++) {
        /* get column name and type id */
        j = -1;
        poldcolname = mi_column_name(rdOld, i);
        for (k = 0; k < colCountNew; k++) {
            pnewcolname = mi_column_name(rdNew, k);
            if (0 == strcmp(poldcolname, pnewcolname)) {
                j = k;
                break;
            }
        }
        tid = mi_column_type_id(rdOld, i);
        switch (mi_value(oldRow, i, &datum, &collen)) {
        /* we should do this test */
        case MI_NULL_VALUE:
            pcast = "";
            break;
        case MI_NORMAL_VALUE:
            /*pcast = do_cast(conn, datum, tid, lvarTid);*/
            if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
                mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
                pcast = "";
            else
                pcast = do_cast(conn, datum, tid, lvarTid);
            break;
        } /* end switch */
        if (j >= 0) {
            switch (mi_value(newRow, j, &datum, &collen)) {
            case MI_NULL_VALUE:
                pcast2 = "";
                break;
            case MI_NORMAL_VALUE:
                /*pcast2 = do_cast(conn, datum, tid, lvarTid);*/
                if (mi_typeid_equals(tid, mi_typestring_to_id(conn, "blob")) == MI_TRUE ||
                    mi_typeid_equals(tid, mi_typestring_to_id(conn, "clob")))
                    pcast2 = "";
                else
                    pcast2 = do_cast(conn, datum, tid, lvarTid);
                break;
            } /* end switch */
        } else {
            pcast2 = pcast;
        }
        fixXMLValue(pcast);
        fixXMLValue(pcast2);
        pbufLen = strlen(buffer);
        sprintf(&buffer[pbufLen], "<%s><old>%s</old><new>%s</new></%s>",
                poldcolname, pcast, pcast2, poldcolname);
        pbufLen = strlen(buffer);
    } /* end for */
    pbufLen = strlen(buffer);
    sprintf(&buffer[pbufLen], "</UPDATE></%s>", ptabname);
    DPRINTF("logger", 90, ("Exiting doUpdateCN()"));
    return(buffer);
}
Then compile the source code:
make -f UNIX.mak
Create the following tables in the database whose tables you want to audit:
CREATE TABLE auditTable (
    id      SERIAL NOT NULL,
    logdate DATETIME YEAR TO SECOND DEFAULT CURRENT YEAR TO SECOND NOT NULL,
    loguser CHAR(20) DEFAULT USER NOT NULL,
    tabname VARCHAR(128),
    log     LVARCHAR(30000)
);
CREATE UNIQUE INDEX p_auditTable ON auditTable (id);
ALTER TABLE auditTable ADD CONSTRAINT PRIMARY KEY (id) CONSTRAINT p_auditTable;
CREATE TABLE auditTable_off (
    tabname VARCHAR(128)
);
CREATE UNIQUE INDEX p_auditTable_off ON auditTable_off (tabname);
ALTER TABLE auditTable_off ADD CONSTRAINT PRIMARY KEY (tabname) CONSTRAINT p_auditTable_off;
Create the SPL routine in the database whose tables you want to audit:
CREATE PROCEDURE do_auditing1()
    EXTERNAL NAME "$INFORMIXDIR/extend/auditing/linux-linux86_64/auditing.bld(do_auditing1)"
    LANGUAGE C;
Create the triggers in the database whose tables you want to audit:
CREATE TRIGGER capuntes_ins_aud INSERT ON capuntes
    FOR EACH ROW WHEN ((SELECT COUNT(*) FROM auditTable_off
                        WHERE tabname IN ('all','capuntes')) = 0)
    (EXECUTE PROCEDURE do_auditing1());
CREATE TRIGGER capuntes_upd_aud UPDATE ON capuntes
    FOR EACH ROW WHEN ((SELECT COUNT(*) FROM auditTable_off
                        WHERE tabname IN ('all','capuntes')) = 0)
    (EXECUTE PROCEDURE do_auditing1());
CREATE TRIGGER capuntes_del_aud DELETE ON capuntes
    FOR EACH ROW WHEN ((SELECT COUNT(*) FROM auditTable_off
                        WHERE tabname IN ('all','capuntes')) = 0)
    (EXECUTE PROCEDURE do_auditing1());
13 Removing Routines from the Shared Library
echo 'EXECUTE FUNCTION IFX_UNLOAD_MODULE("$INFORMIXDIR/extend/auditing/linux-linux86_64/auditing.bld(do_auditing1)","C");' |dbaccess sysadmin
14 Parsing XML log column
This script parses the XML stored in the log column of the audit table and returns a virtual table with the values in table-row format.
Arguments:
- p_tabname: Table on which we want to get logs.
- p_loguser: Filter by user, or all (%).
- p_event: Filter by event (INSERT, UPDATE, or DELETE), or all (%).
- p_fecini: Start day for logs.
- p_fecfin: End day for logs.
- p_cond: Filter on columns in the audited table.
- p_xmlids: Use the Informix XML functions (S) or the xsql-script function (N).
Example:
<call syscode='true' name = 'auditTable_find'> <args> <arg>wic_user</arg> <arg>LIKE '%'</arg> <arg>%</arg> <arg>01-01-2010</arg> <arg>01-01-2018</arg> <arg>user_code matches "deister*"</arg> <arg>N</arg> </args> </call>
<xsql-script name='auditTable_find'>
  <args>
    <arg name='p_tabname' type='string'/>
    <arg name='p_loguser' type='string'/>
    <arg name='p_event'   type='string'/>
    <arg name='p_fecini'  type='date'/>
    <arg name='p_fecfin'  type='date'/>
    <arg name='p_cond'    type='string'/>
    <arg name='p_xmlids'  type='string'/> <!-- S/N -->
  </args>
  <body>
    <!-- Helper: return the text under an XPath, or NULL when the node is absent -->
    <function name='local_get_value'>
      <args>
        <arg name='p_path' type='string'/>
        <arg name='p_root'/>
      </args>
      <body>
        <set name='m_elem'>
          <dom.getElementByXPath xpath='#p_path'>
            <p_root/>
          </dom.getElementByXPath>
        </set>
        <if><expr><isnotnull><m_elem/></isnotnull></expr>
          <then>
            <set name='m_node'>
              <dom.node.getFirstChild>
                <m_elem/>
              </dom.node.getFirstChild>
            </set>
          </then>
          <else>
            <return><null/></return>
          </else>
        </if>
        <if><expr><isnotnull><m_node/></isnotnull></expr>
          <then>
            <return>
              <dom.node.getNodeValue>
                <m_node/>
              </dom.node.getNodeValue>
            </return>
          </then>
        </if>
        <return><null/></return>
      </body>
    </function>

    <!-- Column metadata of the audited table drives the type casts below -->
    <set name='v_metadata'>
      <connection.metadata.getColumns>
        <system.dbms.getCode/>
        <null/>
        <p_tabname/>
        <string>%</string>
      </connection.metadata.getColumns>
    </set>

    <set name='m_update_old'><string/></set>
    <set name='m_update_new'><string/></set>
    <set name='m_insert'><string/></set>
    <set name='m_delete'><string/></set>
    <set name='m_select'><string/></set>

    <!-- For each column, map its metadata type to an Informix type and build
         one extractvalue() projection list per event type -->
    <foreach>
      <in prefix='m_'>
        <v_metadata/>
      </in>
      <do>
        <switch name='m_type_name' regexp='true'>
          <case value='char|varchar|lvarchar|nchar|nvarchar'>
            <set name='m_type'><string><m_type_name/>(<m_column_size/>)</string></set>
            <set name='m_value'><m_column_name/></set>
          </case>
          <case value='blob'>
            <set name='m_type'><string>lvarchar</string></set>
            <set name='m_value'><string>''</string></set>
          </case>
          <case value='serial'>
            <set name='m_type'><string>integer</string></set>
            <set name='m_value'>0</set>
          </case>
          <case value='decimal'>
            <set name='m_type'><string><m_type_name/>(<m_column_size/>,<m_decimal_digits/>)</string></set>
            <set name='m_value'>0.0</set>
          </case>
          <case value='datetime'>
            <switch name='m_column_size'>
              <case value='1128'>
                <set name='m_type'><string>datetime hour to minute</string></set>
                <set name='m_value'><string>CURRENT</string></set>
              </case>
              <case value='3080'>
                <set name='m_type'><string>datetime year to minute</string></set>
                <set name='m_value'><string>CURRENT</string></set>
              </case>
              <default>
                <set name='m_type'><string>datetime year to second</string></set>
                <set name='m_value'><string>CURRENT</string></set>
              </default>
            </switch>
          </case>
          <default>
            <set name='m_type'><m_type_name/></set>
            <set name='m_value'><string>''</string></set>
          </default>
        </switch>
        <set name='m_update_old'>
          <string><m_update_old/> <string.nl/>,extractvalue(log, '/<p_tabname/>/UPDATE/<m_column_name/>/old')::<m_type/> as <m_column_name/></string>
        </set>
        <set name='m_update_new'>
          <string><m_update_new/> <string.nl/>,extractvalue(log, '/<p_tabname/>/UPDATE/<m_column_name/>/new')::<m_type/> as <m_column_name/></string>
        </set>
        <set name='m_insert'>
          <string><m_insert/> <string.nl/>,extractvalue(log, '/<p_tabname/>/INSERT/<m_column_name/>')::<m_type/> as <m_column_name/></string>
        </set>
        <set name='m_delete'>
          <string><m_delete/> <string.nl/>,extractvalue(log, '/<p_tabname/>/DELETE/<m_column_name/>')::<m_type/> as <m_column_name/></string>
        </set>
        <set name='m_select'>
          <string><m_select/> <string.nl/>,<m_value/>::<m_type/> as <m_column_name/></string>
        </set>
      </do>
    </foreach>

    <if><expr><eq><p_xmlids/><string>S</string></eq></expr>
      <then>
        <!-- Server-side mode (p_xmlids = 'S'): let extractvalue()/existsnode()
             shred the XML log directly in SQL -->
        <drop table='@tmp_result' onexception='ignore'/>
        <union type='all' intotemp='@tmp_result'>
          <select>
            <columns>
              auditTable.id, auditTable.logdate, auditTable.loguser,
              "UPDATE" <alias name='event'/>, "OLD" <alias name='type'/>
              #m_update_old
            </columns>
            <from table='auditTable'/>
            <where>
              auditTable.tabname = <p_tabname/>
              AND auditTable.loguser #p_loguser
              AND <date>auditTable.logdate</date> BETWEEN <p_fecini/> AND <p_fecfin/>
              AND existsnode(log, "/#p_tabname/UPDATE") = 1
            </where>
          </select>
          <select>
            <columns>
              auditTable.id, auditTable.logdate, auditTable.loguser,
              "UPDATE" <alias name='event'/>, "NEW" <alias name='type'/>
              #m_update_new
            </columns>
            <from table='auditTable'/>
            <where>
              auditTable.tabname = <p_tabname/>
              AND auditTable.loguser #p_loguser
              AND <date>auditTable.logdate</date> BETWEEN <p_fecini/> AND <p_fecfin/>
              AND existsnode(log, "/#p_tabname/UPDATE") = 1
            </where>
          </select>
          <select>
            <columns>
              auditTable.id, auditTable.logdate, auditTable.loguser,
              "INSERT" <alias name='event'/>, "NEW" <alias name='type'/>
              #m_insert
            </columns>
            <from table='auditTable'/>
            <where>
              auditTable.tabname = <p_tabname/>
              AND auditTable.loguser #p_loguser
              AND <date>auditTable.logdate</date> BETWEEN <p_fecini/> AND <p_fecfin/>
              AND existsnode(log, "/#p_tabname/INSERT") = 1
            </where>
          </select>
          <select>
            <columns>
              auditTable.id, auditTable.logdate, auditTable.loguser,
              "DELETE" <alias name='event'/>, "OLD" <alias name='type'/>
              #m_delete
            </columns>
            <from table='auditTable'/>
            <where>
              auditTable.tabname = <p_tabname/>
              AND auditTable.loguser #p_loguser
              AND <date>auditTable.logdate</date> BETWEEN <p_fecini/> AND <p_fecfin/>
              AND existsnode(log, "/#p_tabname/DELETE") = 1
            </where>
          </select>
        </union>
      </then>
      <else>
        <!-- Client-side mode: create an empty temp table with the right shape,
             then parse each log document with DOM and insert the decoded rows -->
        <drop table='@tmp_result' onexception='ignore'/>
        <select intotemp='@tmp_result'>
          <columns>
            0::INTEGER <alias name='id'/>,
            CURRENT::DATETIME YEAR TO SECOND <alias name='logdate'/>,
            ""::CHAR(20) <alias name='loguser'/>,
            ""::CHAR(10) <alias name='event'/>,
            ""::CHAR(3) <alias name='type'/>
            #m_select
          </columns>
          <from table='#p_tabname'/>
          <where>1 = 0</where>
        </select>
        <foreach>
          <select prefix='add_'>
            <columns>
              auditTable.id, auditTable.logdate, auditTable.loguser, auditTable.log
            </columns>
            <from table='auditTable'/>
            <where>
              auditTable.tabname = <p_tabname/>
              AND auditTable.loguser #p_loguser
              AND <date>auditTable.logdate</date> BETWEEN <p_fecini/> AND <p_fecfin/>
            </where>
          </select>
          <do>
            <unset name='add_new_id'/>
            <unset name='add_new_logdate'/>
            <unset name='add_new_loguser'/>
            <unset name='add_new_type'/>
            <unset name='add_new_event'/>
            <unset name='add_old_id'/>
            <unset name='add_old_logdate'/>
            <unset name='add_old_loguser'/>
            <unset name='add_old_type'/>
            <unset name='add_old_event'/>
            <set name='m_root'><dom.parse><add_log/></dom.parse></set>
            <unset name='add_log'/>
            <!-- The root's first child element names the event (INSERT/UPDATE/DELETE) -->
            <set name='add_event'>
              <dom.element.getTagName>
                <dom.element.getFirstChildElement>
                  <m_root/>
                </dom.element.getFirstChildElement>
              </dom.element.getTagName>
            </set>
            <foreach>
              <in prefix='m_'>
                <v_metadata/>
              </in>
              <do>
                <unset name='add_new_#m_column_name'/>
                <unset name='add_old_#m_column_name'/>
                <unset name='add_#m_column_name'/>
                <switch name='add_event'>
                  <case value='INSERT'>
                    <set name='add_type'><string>NEW</string></set>
                    <local_get_value into='add_#m_column_name'>
                      <string>/<p_tabname/>/INSERT/<m_column_name/></string>
                      <m_root/>
                    </local_get_value>
                  </case>
                  <case value='DELETE'>
                    <set name='add_type'><string>OLD</string></set>
                    <local_get_value into='add_#m_column_name'>
                      <string>/<p_tabname/>/DELETE/<m_column_name/></string>
                      <m_root/>
                    </local_get_value>
                  </case>
                  <case value='UPDATE'>
                    <!-- An UPDATE yields two result rows: the OLD and the NEW image -->
                    <set name='add_old_id'><add_id/></set>
                    <set name='add_old_logdate'><add_logdate/></set>
                    <set name='add_old_loguser'><add_loguser/></set>
                    <set name='add_old_event'><add_event/></set>
                    <set name='add_old_type'><string>OLD</string></set>
                    <local_get_value into='add_old_#m_column_name'>
                      <string>/<p_tabname/>/UPDATE/<m_column_name/>/old</string>
                      <m_root/>
                    </local_get_value>
                    <set name='add_new_id'><add_id/></set>
                    <set name='add_new_logdate'><add_logdate/></set>
                    <set name='add_new_loguser'><add_loguser/></set>
                    <set name='add_new_event'><add_event/></set>
                    <set name='add_new_type'><string>NEW</string></set>
                    <local_get_value into='add_new_#m_column_name'>
                      <string>/<p_tabname/>/UPDATE/<m_column_name/>/new</string>
                      <m_root/>
                    </local_get_value>
                  </case>
                  <default>
                    <exception><string>Event not implemented</string></exception>
                  </default>
                </switch>
              </do>
            </foreach>
            <if><expr><eq><add_event/><string>UPDATE</string></eq></expr>
              <then>
                <insert table='@tmp_result' prefix='add_new_'/>
                <insert table='@tmp_result' prefix='add_old_'/>
              </then>
              <else>
                <insert table='@tmp_result' prefix='add_'/>
              </else>
            </if>
          </do>
        </foreach>
      </else>
    </if>

    <!-- Final filter and ordering over the reconstructed rows -->
    <vtable name='vresult'>
      <select>
        <columns>*</columns>
        <from table='@tmp_result'/>
        <where>
          event LIKE <p_event/>
          AND #p_cond
        </where>
        <order>id, type DESC</order>
      </select>
    </vtable>
    <drop table='@tmp_result' onexception='ignore'/>
    <return><vresult/></return>
  </body>
</xsql-script>
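The script's client-side branch walks each audit record's XML with DOM calls, expecting one document per operation shaped as /table/EVENT/column, with old and new child nodes when the event is UPDATE. As a minimal sketch of that extraction outside the database, here is the same traversal with Python's standard library; the customer table and its column values are invented sample data, not part of the article:

```python
import xml.etree.ElementTree as ET

# Hypothetical audit log document for an UPDATE on a table named "customer".
# The trigger stores /customer/UPDATE/<column>/old and .../new nodes;
# INSERT and DELETE logs would hold the value directly under /customer/EVENT/<column>.
log = """
<customer>
  <UPDATE>
    <fname><old>John</old><new>Jon</new></fname>
    <balance><old>100.00</old><new>250.00</new></balance>
  </UPDATE>
</customer>
"""

root = ET.fromstring(log)

# The root's first child element names the event, as in the xsql-script
event = root[0].tag
assert event == "UPDATE"

# Collect (old, new) pairs per column, mirroring the local_get_value() lookups
changes = {}
for col in root.find("UPDATE"):
    changes[col.tag] = (col.findtext("old"), col.findtext("new"))

print(changes)
# {'fname': ('John', 'Jon'), 'balance': ('100.00', '250.00')}
```

In the server-side mode the same paths are evaluated in SQL instead, through expressions such as `extractvalue(log, '/customer/UPDATE/fname/old')` guarded by `existsnode(log, '/customer/UPDATE') = 1`.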