Server administration is an application module used to monitor and analyze Informix database server operational parameters. It also includes a complete set of management procedures to administer a server instance.
These procedures are based on the IBM Informix SQL Administration API functions.
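For reference, the SQL Administration API is invoked with the task() or admin() functions while connected to the sysadmin database. A minimal sketch (the checkpoint argument is only an illustrative command):

```sql
-- Connect to the sysadmin database, where the SQL Administration API is registered.
DATABASE sysadmin;

-- task() returns a descriptive message; admin() returns a numeric command ID.
EXECUTE FUNCTION task("checkpoint");
EXECUTE FUNCTION admin("checkpoint");
```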
1 General
This area provides access to key instance information, such as the configuration parameters currently in use, the default parameters set in the configuration file, and global instance profile information like the total amount of disk I/O, memory cache performance, shared memory use, etc.
Some of the most important information you can get from this area is the machine information statistics and the license usage statistics. These two areas are used to review instance license compliance and, even though they are documented, they are not clearly exposed to the user. With this information the DBA can learn how the instance is being used and whether all license restrictions are respected.
1.1 Checkpoints
The Checkpoints tab shows the most recent checkpoints executed on the server.
1.2 Onconfig
The Onconfig tab shows all engine configuration parameters, with both the values defined in the configuration file and the values currently in effect.
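The same data can be queried from the sysmaster database; a sketch assuming the standard sysconfig view with its cf_original and cf_effective columns:

```sql
-- Compare the value in the configuration file with the value currently in effect.
SELECT cf_name, cf_original, cf_effective
  FROM sysmaster:sysconfig
 WHERE cf_original <> cf_effective
 ORDER BY cf_name;
```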
1.3 Profile
Provides relevant information about the server profile, including global data for disk activity: reads and writes, sequential scans, waits produced by locks or by buffer contention, etc.
1.4 Shared Memory
Shows information about critical shared memory parameters, including the server state in the sh_mode parameter.
Its possible values are:
Value | Explanation |
---|---|
0 | Initialization |
1 | Quiescent |
2 | Recovery |
3 | Backup |
4 | Shutdown |
5 | Online |
6 | Abort |
7 | Single_user |
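The same state code can be read with a query; a sketch assuming the sysmaster:sysshmvals view exposes the sh_mode column:

```sql
-- Read the current server state code (see the table above for its meaning).
SELECT sh_mode
  FROM sysmaster:sysshmvals;
```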
1.5 Machine Info
Shows information about the physical server where the instance is currently running.
1.6 License Info
Shows the last 5 years of information about parameters relevant for licensing, like maximum memory allocated, number of virtual processors, or total disk space used.
1.7 Server Features
Shows the last 5 years of statistics about server usage. This information can also be relevant for license monitoring, but it includes other global parameters about server configuration and usage.
2 Health
This menu entry shows information about event alarms generated by the database server or alerts generated by the Scheduler. Alerts that are associated with built-in tasks and sensors are automatically added to this list.
The user can dismiss an alert to remove it from the list or ask for it to be rechecked.
3 Logs
You can review logs generated by the engine and logs generated by the onbar archive system. This entry allows you to manage tasks that rotate log files automatically and keep only a limited number of log files on your server.
You can also view the history of all the SQL administration API functions that were run in the previous 30 days.
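This history comes from the command_history table of the sysadmin database; a minimal sketch of querying it directly:

```sql
-- List the SQL administration API commands run during the last 30 days.
SELECT cmd_exec_time, cmd_user, cmd_hostname, cmd_executed, cmd_ret_status
  FROM sysadmin:command_history
 WHERE cmd_exec_time > CURRENT - 30 UNITS DAY
 ORDER BY cmd_exec_time DESC;
```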

4 Tasks
You can use the Scheduler to create jobs to run administrative tasks or collect information at predictable times.
The Scheduler uses SQL
statements instead of operating system job scheduling tools.
The Scheduler has four different job types that you can choose from:
- Task: Runs an action at a specific time and frequency.
- Sensor: Runs an action at a specific time and frequency to collect data, create a results table, store the data in the results table, and purge old data after a specified time.
- Startup task: A task that runs only when the server moves from quiescent mode to online mode.
- Startup sensor: A sensor that runs only when the server moves from quiescent mode to online mode.
The action of a task or sensor can be one or more SQL
statements, user-defined routines, or stored procedures.
In addition to defining an action for a task or sensor, you can also use the Scheduler to:
- Associate tasks and sensors into functional groups
- Track the execution time and return value each time a task or sensor is run
- Define alerts with varying severity
- Define thresholds to control when tasks or sensors are run
The Scheduler contains built-in tasks and sensors that run automatically. You can modify the built-in tasks and sensors and define your own tasks and sensors.
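Tasks and sensors are rows in the ph_task table of the sysadmin database. A minimal sketch of defining a task; the column list is reduced to the most common columns, and the names and values shown are illustrative and should be checked against the ph_task schema:

```sql
DATABASE sysadmin;

-- Define a task that forces a checkpoint every day at 02:00.
INSERT INTO ph_task (tk_name, tk_type, tk_group, tk_description,
                     tk_execute, tk_start_time, tk_frequency)
VALUES ("example_daily_checkpoint",              -- task name (illustrative)
        "TASK",                                  -- job type: TASK, SENSOR, STARTUP TASK, STARTUP SENSOR
        "USER",                                  -- functional group (illustrative)
        "Force a checkpoint once a day",
        "EXECUTE FUNCTION task('checkpoint')",   -- the action: SQL, UDR, or stored procedure
        DATETIME(02:00:00) HOUR TO SECOND,       -- start time
        INTERVAL (1) DAY TO DAY);                -- run frequency
```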
4.1 Modify tasks
To enable or disable a task, or to change its execution parameters or scheduling, go to the Axional DBStudio menu and choose Server Information > Tasks.
Select the task you want to modify and click on the Name button. This opens the Task information window.
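The same change can be made directly in the ph_task table; a sketch assuming the tk_enable flag (the task name is illustrative):

```sql
-- Disable a task; set tk_enable = 't' to enable it again.
UPDATE sysadmin:ph_task
   SET tk_enable = 'f'
 WHERE tk_name = 'example_daily_checkpoint';
```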
5 Databases
This menu option allows you to analyze database contents from the server instance perspective. You can get all database contents, including tables and indexes, with their current size, dbspace, and compression statistics.
By clicking on a database name button, you can access database management. This option allows you to change the database logging mode to: non-logging, buffered, unbuffered, or ANSI. If the selected database is sysadmin, you can also reset the sysadmin database and create a new sysadmin database in the dbspace selected in the combo box.
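The logging change corresponds to the alter logmode SQL Administration API command; a minimal sketch (the database name is illustrative, and the mode codes are assumed to be n, u, b, and a):

```sql
-- Switch a database to unbuffered logging
-- ("n" = non-logging, "u" = unbuffered, "b" = buffered, "a" = ANSI).
EXECUTE FUNCTION task("alter logmode", "stores_demo", "u");
```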
By selecting a database in the right-side menu, you can get the list of its tables and explore their content just by selecting one table. Clicking on a table name button opens the extended table menu, which allows you to check the table data in the engine (oncheck command).
6 Storage
The storage section gives a complete view of all Informix storage objects, from databases to dbspaces and chunks.
6.1 Monitoring space usage
Review information about space usage and the storage spaces for a database server. For a specific storage space, review information about the space usage and optimization, and the chunks, tables, and indexes in the space.
To monitor space usage:
- On the menu of the server connection, click Server Information > Storage > Overview. The page displays panels of information about space usage for the database server.
- To display detailed information about dbspaces, click a dbspace name in the list. To obtain detailed information for chunks, click on the chunk number in the chunk list panel.
You can also get detailed information about a dbspace by selecting it in the tree menu expanded under the overview node. Click on the dbspace for which you want detailed information.
6.2 Managing space usage
Manage the storage space for a database server.
6.2.1 Creating a dbspace
To create a new dbspace, fill in the form with the dbspace name and the properties of its first chunk.
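The form corresponds to the create dbspace SQL Administration API command; a minimal sketch (path and sizes are illustrative):

```sql
-- Create a dbspace with a 1 GB first chunk at offset 0 (sizes in KB).
EXECUTE FUNCTION task("create dbspace", "datadbs1",
                      "/dev/informix/datadbs1", "1048576", "0");
```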
6.2.2 Modifying a space
You can modify a space by expanding its available space.
On the menu of the Axional DBStudio, click: Server Information > Storage > DBSpaces.
- On the dbspaces list panel, select the space and click on the space name button.
- Open the Expand tab.
- Select the expansion mode: automatically expand the space, or add a chunk.
- Complete the fields on the page.
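An equivalent call through the SQL Administration API is sketched below; the modify space expand command name and its argument order are assumptions to verify against the IBM documentation:

```sql
-- Expand a dbspace by 500 MB (size in KB), letting the server choose
-- between extending an existing chunk and adding one from the storage pool.
EXECUTE FUNCTION task("modify space expand", "datadbs1", "512000");
```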
6.2.3 Drop a space
If all the chunks belonging to a dbspace are empty, you can drop the dbspace.
On the menu of the Axional DBStudio, click: Server Information > Storage > DBSpaces.
- On the dbspaces list panel, select the space and click on the space name button.
- Open the Drop tab.
- Press the Confirm Drop button.
This procedure drops the dbspace and all chunks belonging to it.
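The equivalent SQL Administration API call; a minimal sketch (the dbspace name is illustrative):

```sql
-- Drop an empty dbspace and all of its chunks.
EXECUTE FUNCTION task("drop dbspace", "datadbs1");
```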
6.2.4 Add a chunk
To add a new chunk to a dbspace, use the "Modifying a space" procedure described previously.
Choose to expand the dbspace by adding a new chunk and fill in the form data to add the chunk to the dbspace.
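The equivalent SQL Administration API call; a minimal sketch (path, size, and offset are illustrative):

```sql
-- Add a 1 GB chunk (size and offset in KB) to an existing dbspace.
EXECUTE FUNCTION task("add chunk", "datadbs1",
                      "/dev/informix/datadbs1b", "1048576", "0");
```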
6.2.5 Modifying a chunk
This procedure allows you to specify whether the database server can automatically extend the chunk when necessary, or to expand the available size of a chunk immediately.
To modify a chunk:
On the menu of the Axional DBStudio, click: Server Information > Storage > Chunks.
- On the chunks list panel, select the chunk and click on the #Chunk button.
- If the chunk is not expandable, you can mark it as expandable by using the Mark expandable tab.
- Open the Expand tab.
- Fill in the fields to configure the space to be added to the expandable chunk.
- Press the Expand chunk button.
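A hedged sketch of the equivalent SQL Administration API calls; the exact argument lists for modify chunk extendable and modify chunk extend should be verified against the IBM documentation:

```sql
-- Mark a chunk as extendable, identified by its path and offset in KB.
EXECUTE FUNCTION task("modify chunk extendable", "/dev/informix/datadbs1b", "0");

-- Immediately extend chunk number 12 by 100 MB (size in KB).
EXECUTE FUNCTION task("modify chunk extend", "12", "102400");
```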
6.2.6 Drop a chunk
This procedure removes a chunk from a dbspace, optionally returning its space to the storage pool.
To drop a chunk:
On the menu of the Axional DBStudio, click: Server Information > Storage > Chunks.
- On the chunks list panel, select the chunk and click on the #Chunk button.
- Open the Drop tab.
- Indicate whether you want the chunk space to be returned to the storage pool or just discarded.
- Press the Drop button.
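The equivalent SQL Administration API call; a minimal sketch (dbspace, path, and offset are illustrative):

```sql
-- Drop a chunk from a dbspace, identified by its path and offset in KB.
EXECUTE FUNCTION task("drop chunk", "datadbs1", "/dev/informix/datadbs1b", "0");
```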
6.3 Adding storage space automatically
Every instance of Informix has a storage pool.
The storage pool contains information about the directories, cooked files, and raw devices that the server can use, if necessary, to automatically expand an existing dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace.
When the storage space falls below a threshold defined in the SP_THRESHOLD configuration parameter, Informix can automatically run a task that expands the space, either by extending an existing chunk in the space or by adding a new chunk.
Configure the database server to add storage space automatically when more space is needed. Add entries to the storage pool that the server can use to expand a space and set the threshold for expanding a space.
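A sketch of adjusting the threshold with the SQL Administration API; the modify config persistent command is assumed to be available on your server version:

```sql
-- Expand a space automatically when its free space drops below 10 percent
-- (SP_THRESHOLD can also be set to an absolute number of KB).
EXECUTE FUNCTION task("modify config persistent", "SP_THRESHOLD", "10");
```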
6.3.1 Add a storage pool
On the menu of the Axional DBStudio, click: Server Information > Storage > Add Storage Pool.
Specify the information requested:
- The path for the file, directory, or device that the server can use when additional storage space is required.
- The offset in KB into the device where Informix can begin allocating space.
- The total space available to Informix in this entry. The server can allocate multiple chunks from this amount of space.
- The minimum size in KB of a chunk that can be allocated from the device, file, or directory. The smallest chunk that you can create is 1000 KB; therefore, the minimum chunk size that you can specify is 1000 KB.
- Priority (1 = high; 2 = medium; 3 = low). The server attempts to allocate space from a high-priority entry before it allocates space from a lower-priority entry.
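The equivalent SQL Administration API call; a minimal sketch with illustrative values (offset, total size, and minimum chunk size in KB; priority 1-3):

```sql
-- Register a 10 GB directory entry in the storage pool, allocating chunks
-- of at least 100 MB, with high priority.
EXECUTE FUNCTION task("storagepool add", "/storage/ifmxpool",
                      "0", "10485760", "102400", "1");
```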
6.3.2 Modify a storage pool
Modify the directories, files, and devices that the database server can use to add storage space automatically when more space is needed.
On the menu of the Axional DBStudio, click: Server Information > Storage > Storage Pools.
Select the storage pool you want to modify and click on the corresponding ID button.
Complete the fields in the page.
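A sketch of the equivalent SQL Administration API call; the entry ID shown in the list is assumed to be the first argument, and all values are illustrative:

```sql
-- Change the total size, minimum chunk size (both in KB), and priority of entry 1.
EXECUTE FUNCTION task("storagepool modify", "1", "20971520", "102400", "2");
```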
6.3.3 Drop a storage pool
To delete a storage pool entry, follow the same procedure as for modifying it, but select the Drop tab and confirm by clicking the Drop pool button.
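The equivalent SQL Administration API call; a minimal sketch (the entry ID is illustrative):

```sql
-- Remove entry 1 from the storage pool.
EXECUTE FUNCTION task("storagepool delete", "1");
```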
6.4 View physical log info
This option shows information related to physical log storage and usage. It also allows you to move the physical log to another dbspace, perform a checkpoint, or drop a physical log dbspace once the physical log has been moved to another dbspace and the physical log dbspace is empty.
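Moving and resizing the physical log corresponds to the alter plog SQL Administration API command; a sketch with illustrative values (the command name and argument order are assumptions to verify against the IBM documentation):

```sql
-- Move the physical log to another dbspace and resize it to 200 MB (size in KB).
EXECUTE FUNCTION task("alter plog", "plogdbs", "204800");
```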
6.5 View all logical logs
This option provides the complete list of logical logs and allows you to administer them by dropping logs or creating new ones.
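A sketch of the equivalent SQL Administration API calls (the dbspace name, size, and log number are illustrative):

```sql
-- Add a 100 MB logical log (size in KB) in a given dbspace.
EXECUTE FUNCTION task("add log", "logdbs", "102400");

-- Drop logical log number 23 (it must not be the current log).
EXECUTE FUNCTION task("drop log", "23");
```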
6.6 Monitoring temporary space usage
This option provides the complete list of temporary tables allocating space in temporary dbspaces.
6.7 Optimizing storage space
Compress tables, indexes, and fragments; consolidate free space (repack); return free space to the dbspace (shrink); and merge extents (defragment). Uncompress tables and table fragments.
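A sketch of the corresponding SQL Administration API calls; the command names belong to the table compress family and, together with the illustrative table, database, and owner arguments, should be verified against the IBM documentation:

```sql
-- Compress a table, consolidate its free space, and return it to the dbspace.
EXECUTE FUNCTION task("table compress repack shrink",
                      "customer", "stores_demo", "informix");

-- Undo the compression later if needed.
EXECUTE FUNCTION task("table uncompress", "customer", "stores_demo", "informix");
```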
6.8 Managing recovery logs
View and manage the checkpoints, logical logs, and physical logs for a database server.
7 Sessions
The sessions section allows you to analyze current client connections, what is currently being executed in the server, and locked resources with their lock waiters.
7.1 Current sessions information
This option shows all currently open sessions. If a session is killable, clicking on the "Kill" button kills that session.
Field | Description |
---|---|
Session | Session ID |
User | User name who created session |
Host | Client host from which the session was created |
database | Database where session is connected |
Age | Time elapsed since the session was created |
Memory | Total memory allocated for this session |
IO Wait | Time this session spent waiting for I/O |
CPU Time | Total CPU Time consumed by this session |
Run Threads | Number of threads currently running statements in this session |
is_killable | Indicates whether the session can be killed by pressing the button. |
By clicking on a session row, you can get advanced information about the selected session:
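Killing a session corresponds to the onmode -z command, which is also available through the SQL Administration API; a minimal sketch (the session ID is illustrative):

```sql
-- Terminate session 1234.
EXECUTE FUNCTION task("onmode", "z", "1234");
```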
7.1.1 Session current statement
The SQL Current tab shows information about the currently executing SQL statement.
7.1.2 Session SQL Trace History
If tracing is enabled, you will see the last SQL statements for this session.
Field | Description |
---|---|
sql_id | Unique SQL execution ID |
sql_stmtname | Statement type displayed as a word |
sql_statement | SQL statement that ran |
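Tracing is enabled through the SQL Administration API; a sketch with illustrative settings (number of trace buffers, buffer size in KB, trace level, and scope):

```sql
-- Keep 1000 trace entries of 2 KB each, at low detail, for all users.
EXECUTE FUNCTION task("set sql tracing on", "1000", "2", "low", "global");

-- Turn tracing off again.
EXECUTE FUNCTION task("set sql tracing off");
```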
7.1.3 Session locks held
Field | Description |
---|---|
table_name | Database and table name |
lock_type | Type of lock |
lock_held | Lock held time |
others_waiting_for_lock | NONE or Session ID of the user waiting for the lock. If more than one user is waiting, only the first session ID appears. |
rowid | Real rowid if key lock (0 indicates a table lock.) |
index_number | Key number of index key lock |
key_item_locked | Key value locked |
7.1.4 Session threads
Field | Description |
---|---|
name | The name of the thread. |
thread_id | The numeric identifier of the thread. |
statedesc | Thread state description. |
wait_reason | Reason the thread is waiting. |
num_scheduled | Number of times the thread was scheduled. |
total_time | Time spent running on a VP. |
time_slice | Time slice (total_time/num_scheduled). |
vpid | The ID of the virtual processor that the thread was last scheduled to run on. |
vpclass | Classname of VP. |
thread_priority | The priority of the thread. |
7.1.5 Session memory pools
7.1.6 Session network usage
7.1.7 Session environment variables
7.1.8 Session Profile
The session profile provides information about memory, buffer, and disk utilization for the selected session.
Field | Description |
---|---|
nreads | Total number of read operations |
nwrites | Total number of write operations |
isreads | Total number of ISAM read operations |
iswrites | Total number of ISAM write operations |
isdeletes | Total number of ISAM delete operations |
iscommit | Total number of ISAM commit transaction operations |
upf_bufreads | Number of buffer reads performed in session |
upf_bufwrites | Number of buffer writes performed in session |
upf_seqscans | Number of sequential scans performed in session |
7.2 SQL Statements
SQL Statements shows similar information to the previous Sessions menu option, but is focused on the SQL statements currently executing in each session, from a performance point of view.
You can introspect more session information in a similar way to the Sessions menu. The results for SQL Current, SQL Trace, Locks, Threads, etc. are the same and can be obtained by selecting a session in the Sessions panel.
Field | Description |
---|---|
Session | Session number ID |
User | User name for the connection |
Host | Client host from which the connection has been established |
Database | Database where the session is connected |
Age | Time since session opening |
Running | Shows whether the SQL statement is currently running or has already finished |
Isolation | Isolation mode |
Lock mode | Lock mode |
SQL Error | SQL error number for the last SQL execution |
ISAM-Error | ISAM error number for the last SQL execution |
SQL Statement | Last executed SQL statement |
FE Version | Client Informix connection protocol version |
FE Program | Client program that established the session connection |
7.3 Active Threads
Use this menu option to obtain a list of threads. By default, you get only active threads, but you can change the filtering behavior by selecting which thread states to show, using the filtering button.
- Active threads are threads currently running in the server. Each SQL statement runs in one or more sqlexec-type threads. Other threads usually running in the server are network polling threads looping to receive client statements.
- Ready threads are threads ready to run but waiting for enough server resources.
- Waiting threads are threads in the system that are currently in the wait queue and not currently executing.
- Yield-status threads are not doing anything particularly important; if any other threads or processes need to run, they should run. Otherwise, the current thread continues to run.
Field | Description |
---|---|
tcb | Is the address for the thread, in hexadecimal value. It matches column 'rstcb' in 'onstat -g ath' and column 'address' in 'onstat -u' outputs. |
rstcb | RSAM thread control block. From this number you can access user thread and session information: onstat -u. If rstcb = 0 it means this thread is internal and not related to any user session. |
priority | Priority of the thread |
State | |
expression | |
name | Thread name |
session ID | Is the session ID, the unique identifier for the session. Details on the session can be seen with 'onstat -g ses sid' command. It matches column 'sid' in 'onstat -g ntt' output. |
username | The operating system user ID that established the connection. |
pid | The identifier of the operating system process from which the connection was established. |
hostname | |
sqs_statement | SQL statement running in the thread |
scs_sqlstatement | Extended SQL Statement running in thread |
7.4 Transactions
This option shows a list of all open transactions.
Field | Description |
---|---|
txid | id of transaction |
address | address of transaction struct |
sid | session id creator of transaction |
username | session username |
hostname | session client hostname |
log_begin | Logical log ID where transaction begins |
log_end | Current Logical log ID where transaction is registering operations |
filltime | time when first logical log used by this transaction was filled. |
filltime_duration | Time since the first logical log used by this transaction was filled |
rb_time | Estimated rollback time |
tx_state | Transaction state |
longtx | Has a long-transaction rollback been triggered by this transaction? |
sqs_dbname | Database name where transaction is executed |
sqs_statement | Current SQL Statement executing |
istar_coord | istar coordinator |
7.5 Server locks
This option provides information about all the currently active table locks in the database server.
Field | Description |
---|---|
dbsname | Database name |
tabname | Table name |
rowidlk | Real rowid, if it is an index key lock |
keynum | Shows whether the lock is on the table, or the key number if it is an index key lock |
type | Type of lock |
owner | Session ID of the lock owner |
waiter | Session ID of the user waiting for the lock. If more than one user is waiting, only the first session ID appears. |
partnum | Partition number |
grtime | Grant time |
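The same information is available from the sysmaster database; a minimal sketch assuming the standard syslocks view:

```sql
-- List active locks, grouped by database and table.
SELECT dbsname, tabname, type, owner, waiter, rowidlk, keynum
  FROM sysmaster:syslocks
 ORDER BY dbsname, tabname;
```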
7.6 Waiting locks
Shows lock intents waiting for resources already locked.
Field | Description |
---|---|
dbsname | Database name |
tabname | Table where resource is locked |
rowidr | Real rowid if key lock |
keynum | Keynum of item lock |
type | Lock type |
owner | Session ID currently locking resource |
ownername | User name owner of session currently locking resource |
waiter | Session ID waiting for resource to unlock |
waitname | User name owner of session waiting for resource to unlock |
7.7 Waiting resources
List of sessions currently waiting for a condition: latch, lock, buffer, check point, etc.
Field | Description |
---|---|
id | Session ID |
username | User owner of session |
is_wlatch | Session waiting for a latch |
is_wlock | Session waiting for a lock |
is_wbuff | Session waiting for buffer acquisition |
is_wckpt | Session waiting for check point to finish |
is_wlogbuf | Session waiting for logical log buffer |
is_wtrans | Session waiting for transaction |
8 Network
Show a list containing information about the instance network operation.
Field | Description |
---|---|
net_id | Netscb id |
sid | Session id |
net_netscb | Netscb ptr |
net_client_type | Client type Int |
net_client_name | Client protocol name |
net_read_cnt | Number of network reads |
net_write_cnt | Number of network writes |
net_open_time | Time this session connected |
net_last_read | Time of the last read from the network |
net_last_write | Time of the last write from the network |
net_stage | Connect / Disconnect / Receive |
net_options | Options from sqlhosts |
net_protocol | Protocol |
net_type | Type of network protocol |
net_server_fd | Server fd |
net_poll_thread | Poll thread |
9 Backup
9.1 Backup size estimation
Shows an estimation of the backup size in kilobytes. This estimation does not take into account the compression method defined in BACKUP_FILTER when the real backup is written to disk.
To calculate the estimation, it aggregates all used pages plus the physical log size plus the current logical log.
9.2 Backup traffic
Logfile traffic produced over the last 24 hours
Field | Description |
---|---|
time | Time period corresponding to last 24H |
logtraffic_24 | Size of all logical logs filled in the last 24 hours |
logfiles_24h | Number of logical logs filled in last 24 H |
9.3 Backup status
Field | Description |
---|---|
dbsnum | Dbspace number |
name | Dbspace name |
oldestlevel0 | Time of last level 0 archive |
oldestlevel1 | Time of last level 1 archive |
oldestlevel2 | Time of last level 2 archive |
dbstype | Type of dbspace |
9.4 Backup configuration
Specify the utility to use to back up the storage spaces. You can choose between two options:
- Use onbar utility command options to back up to tape.
- Use ontape utility command options to back up to tape.
Specify backup target storage (Tape, Disk file or Disk directory) and configuration parameters for ontape:
- Path (TAPEDEV): The tape device path for backups.
- Block size (TAPEBLK): The tape block size, in KB, for backups.
- Device size (TAPESIZE): The maximum amount of data to put on one backup tape.
Schedule and enable backup copies for each level (0,1 and 2):
- Start Time (Hours, Minutes and Seconds).
- Mark the options for each day of the week.
Finally, a summary of the configuration made in the previous steps is shown.
9.5 Backup logs
Field | Description |
---|---|
user | User who launched the backup and the host from which it was launched. |
cmd_exec_time | The time that the command started to execute backup. |
Command | The executed command. |
Return status | Return codes indicate whether the function succeeded or failed. If this value is less than zero, the function failed; otherwise, it succeeded. |
cmd_ret_msg | Return message. |
10 High Availability
This menu section shows all information about the current status and statistics of the high-availability system. This includes HDR nodes and RSS nodes.
10.1 Type
Field | Description |
---|---|
ha_type | Server type. |
ha_primary | Primary server name. |
ha_secondary | HDR or SD or RS secondary server name. |
10.2 Workload
Show workload statistics on each of the secondary servers.
Field | Description |
---|---|
Secondary | Name of secondary server. |
Last update | Time at which workload last updated. |
wl_ttype | This row contains the ready queue size, user CPU time, and system CPU time. |
wl_01 | Most recent workload activity. |
wl_02 | Second most recent workload activity. |
wl_03 | Third most recent workload activity. |
wl_04 | Fourth most recent workload activity. |
wl_05 | Fifth most recent workload activity. |
wl_06 | Sixth most recent workload activity. |
wl_07 | Seventh most recent workload activity. |
wl_08 | Eighth most recent workload activity. |
wl_09 | Ninth most recent workload activity. |
wl_10 | Tenth most recent workload activity. |
wl_11 | Eleventh most recent workload activity. |
wl_12 | Twelfth most recent workload activity. |
wl_13 | Thirteenth most recent workload activity. |
wl_14 | Fourteenth most recent workload activity. |
wl_15 | Fifteenth most recent workload activity. |
wl_16 | Sixteenth most recent workload activity. |
wl_17 | Seventeenth most recent workload activity. |
wl_18 | Eighteenth most recent workload activity. |
wl_19 | Nineteenth most recent workload activity. |
wl_20 | Twentieth most recent workload activity. |
10.3 Lag Time
Show HA server latency information.
Field | Description |
---|---|
Key | Name of the latency metric. |
Value | Value of the latency metric. |
Info | Description of the latency metric. |
10.4 Proxy
Show Proxy Agent information.
Field | Description |
---|---|
tid | ID of the proxy agent thread. |
flags | flags of thread. |
proxy_id | Proxy distributor ID. |
source_session_id | ID of session on source node. |
proxy_txn_id | Number of current transaction. |
current_seq | Sequence number of current operation. |
sqlerrno |
SQL error number. |
iserrno | ISAM/RSAM error number. |
10.5 Multiplexer
Show SMX connection information.
Field | Description |
---|---|
address | SMX pipe address. |
name | Target server name. |
encryption_status | Enabled/Disabled. |
buffers_sent | Number of buffers sent. |
buffers_recv | Number of buffers received. |
bytes_sent | Number of bytes sent. |
bytes_recv | Number of bytes received. |
reads | Number of read calls. |
writes | Number of write calls. |
retries | Number of write call retries. |
11 Connection Manager
12 Replication
12.1 Connection Manager
12.2 Clusters
12.3 Grid
List ER Grids defined in server.
12.4 ER Domain
Shows list of server groups integrated in ER Domain.
Server relationships can be defined according to these group types:
Root server | An Enterprise Replication server that is at the uppermost level in a hierarchically organized set of information. The root is the point from which database servers branch into a logical sequence. All root database servers within Enterprise Replication must be fully interconnected. |
Nonroot server | An Enterprise Replication server that is not a root database server but has a complete global catalog and is connected to its parent and to its children |
Leaf server | A database server that has a limited catalog and no children. |
By selecting one server group, you can get, in the area below, all replicates defined for this group.
Field | Description |
---|---|
serverid | Group ID for servers defined in sqlhosts file. |
dbsrvnm | Name of the server group. |
ishub | Is a root server? |
isleaf | Is a leaf server? |
servstate | Current status of server group. |
rootserverid | If the server is a "non-root" server, this field shows the ID of the master or root server to which the current server is connected. It defines a relationship between database servers in a tree data structure in which the parent is one step closer to the root than the child. |
12.5 ER Layout
12.6 Node Details
12.6.1 Summary
12.6.2 Capture
12.6.3 Send Queue
12.6.4 Disk Usage
12.6.5 Apply
12.6.6 ATS
12.6.7 RIS
12.6.8 Errors
12.6.9 Configuration
Summarizes all configuration parameters related to Enterprise Replication usage.
12.7 Replicates
12.7.1 Templates
12.7.2 Replicate Set
12.7.3 Replicates
12.7.4 Task Status
13 Instance Administration
13.1 Memory
13.1.1 Memory Segments
Shows all memory segments allocated by the engine. Administration options allow you to allocate new virtual memory segments or to drop all unused free memory segments currently allocated by the server.
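A hedged sketch of the corresponding SQL Administration API calls; both the add memory command and the onmode pass-through argument shown here are assumptions to verify against the IBM documentation for your version:

```sql
-- Allocate an extra 128 MB virtual memory segment (size in KB).
EXECUTE FUNCTION task("add memory", "131072");

-- Free unused virtual memory segments (equivalent to onmode -f).
EXECUTE FUNCTION task("onmode", "f");
```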
13.1.2 Low memory manager
13.1.3 Memory Cache
13.2 Virtual Processors
13.3 Update Statistics
13.4 Accelerators
The Accelerators node inside the Administration section allows us to monitor the state of all the IWA instances attached to the DB engine. We can also create additional logical accelerators and attach them to the DB engine, as well as operate with the datamarts defined on the accelerators. The datamart creation wizard simplifies the definition of new datamarts.
13.4.1 Introduction
The Accelerators node in DBStudio
offers us two sections for the administration of
IWA instances on a particular DB engine:
- The list of all IWA instances currently attached to the DB
- The form for defining a new accelerator and submitting the data to the DB.
13.4.2 Operating with existing accelerators
Once the accelerators have been created, they can be shown in the Instances tab from the Accelerators menu. Clicking on the blue column containing the accelerator name of the displayed rows will lead to the "Remove Accelerator" screen. However, if we wish to operate upon a certain accelerator, click on the row instead (on id, Host IP or Service Port). This will open three new tabs: Active Marts, Create Mart Wizard and Create Mart.
View active marts
The Active Marts menu will display every available DataMart for that accelerator. Similarly to the Instances tab, clicking on the blue column will prompt a window to perform a drop, load/refresh or enable/disable on the DataMart. To see further information on a specific DataMart, just like in the Instances tab, click on the row of that DataMart (not on the blue cell). Doing so will open three new tabs: Mart Definition, Mart Diagram and Modify Mart.
View mart definition as XML
On the Mart Definition tab, the XML used to generate such datamart is displayed. However, modifying the DataMart by editing this XML is not possible. This is instead done on the Modify Mart tab

View mart diagram
On the Mart Diagram menu, we can see the columns and relations for such datamart displayed in a typical table diagram. Again, this is purely informational and does not change the datamart definition.

Modify datamart (wizard)
Finally, the Modify Datamart wizard will open the Create Mart Wizard (section 2.2), but loading the data from the already defined XML displayed in the Mart Definition tab. Although the detailed steps for using the Wizard can be found in section 2.2, the outline of modifying the datamart is the following:
1. Review the selected columns and table. To add a new table click on the left menu with the "Tables" header and to add or remove a column click on the checkbox of the right menu with the "Columns" header

2. Review the links. To remove one, click on the red cross of such link, and to add a new one, first select the two tables involved, and then click first on the row that contains the source column and then on the row that contains the end column. If done correctly, this will draw a line between these two rows

3. Review the XML generated by the new changes. Note that there is some information missing compared to the initial XML. This is because such information is not relevant to the DataMart.

Create a datamart (wizard)
The Axional DBStudio
application allows for an easy setup of an Informix DataMart
with the DataMart Creation Wizard. This is particularly useful when creating
a DataMart from scratch, that is, without a previous xml, although it can also be created
directly from such xml. With the DataMart Creation Wizard, we are able to select
which tables and columns we want to store and define links between distinct columns of
different tables (links within the same table are also allowed).
Selection of the columns to store
- First, we need to select the columns to be stored. To do so, click on a table on the left menu (it will turn grey when you hover it). After selecting a table, every available column for that specific table will appear on the right menu.
- This time, selecting the columns can be achieved just by clicking on the checkbox (or the whole row, it doesn't matter) or selecting them all by clicking on the checkbox at the top, next to the "Columns" header. The selected columns and tables will be displayed at the bottom of the Wizard.
- Finally, the Fact Table has to be defined in order to continue on to the next step. To do so, simply click on the radio button on the left of each information row

In case a mistake was made, the selected entries can be deleted just by clicking on the red cross at the bottom panel of the screen.
Definition of the links between tables
The second step of the Wizard is optional, and it is used to build connections between the selected columns chosen at the first step.
- To build a link between two different columns, we must first select the two tables that contain such columns. This is done on the select menus at the top (store and employee in the picture below).
- Once the tables have been selected, the columns chosen at the first step will be displayed. To link two of these columns, click on a column on one side and then click a column on the opposite side ('store_id' on both sides in the example picture below).
- During this step, the cardinality or type of each link (1:n, n:m or n:1) is also chosen. This can be done by clicking on the dropdown select at the start of each link at the bottom menu, and selecting the desired option. This decision will also force which table will be the "Parent Table" and which one will result on the "Dependent Table", since the dependent table can only have cardinality "n" with respect to the parent.

Note: If done correctly, the Wizard will draw a line between these columns and will show the created link at the bottom of the screen. If the link isn't built, that's probably because it was being built on the same side (same table). Also, keep in mind that the Wizard will try to modify the link if that's possible, that is, when the starting column of the link already has a link but the ending column does not, or vice versa. If both the starting and the ending columns are already linked, no link will be produced.
Also, in the rare case the link has to be built within the same table, that table has to be selected on both menus, since links on the same side are not allowed.
Review of the information
At the last step, the xml that will be used to generate the DataMart will be displayed. If everything seems correct, click on the Create Datamart button to attempt the creation of the datamart, and if something seems wrong, data can always be modified on the previous steps just by clicking the "Back" button.

Create a datamart (by uploading XML definition)
To build a DataMart directly from an XML, the file path to the XML and the database are necessary. To select these, use the dropdown select for the database and write the full file path on the "Local File Name" field :

13.4.3 Create a new accelerator instance
Given an IWA instance previously installed and configured on a host, we will be able to connect the accelerator to the Database Server by using this form:

We are required to fill the following information:
- Name of the new logical accelerator. This is the name that will be used to refer to this particular IWA instance when issuing operations on the accelerator (create, load or drop a mart, etc...).
- IP of the host where IWA service is installed.
- Port number where IWA service is attached in the specified host.
- PIN to be used as authorization.
It is necessary to use the provided ondwa getpin command on the host where the IWA service was installed. This gives us a valid authorization PIN number and all the information we need to fill in the previous form:
$ ondwa getpin 192.168.10.19 21022 5893
13.5 Configuration
13.6 User Privileges
Access to SQL Admin API privileges is not available on servers below 12.10.
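Privileges on the SQL Administration API are managed with the grant admin and revoke admin commands on 12.10 and later; a sketch with an illustrative user name and privilege group:

```sql
-- Grant a user the right to run operator-level admin API commands.
EXECUTE FUNCTION task("grant admin", "dba_user", "operator");

-- Revoke it again.
EXECUTE FUNCTION task("revoke admin", "dba_user");
```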
14 Performance
14.1 SQL Explorer
Use the SQL
Explorer to perform query drill-down. The SQL
Explorer uses SQL
tracing to gather statistical
information about each SQL
statement executed on the system and shows statement history.
The SQL Explorer allows you to analyze SQL statement activity across the whole server, making it possible to identify the worst-performing or most frequently used statements, and which statements should be improved to increase global server performance.
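When tracing is active, the collected statements can also be queried from the sysmaster database; a sketch assuming the standard syssqltrace view:

```sql
-- Ten slowest traced statements.
SELECT FIRST 10 sql_id, sql_sid, sql_runtime, sql_statement
  FROM sysmaster:syssqltrace
 ORDER BY sql_runtime DESC;
```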
14.1.1 Tracing Admin
This option is not available on servers below 12.10
14.1.2 Activity Summary
14.1.3 Transactions
14.1.4 SQL Statements
14.1.5 SQL Statements (Detailed)
14.2 History
This option shows history graphs of the evolution of some performance items.
To collect historical monitoring statistics, you need to activate the task named: ""
14.3 Sessions
This menu entry shows all currently open sessions in the server from a performance perspective. Basically, you can get all sessions along with the wait time produced by I/O, the CPU time spent, and the memory consumed by each session.
15 Instance status reports
A highly useful group of commonly used reports is ready to be executed at any time.
16 Execute onstat commands
Monitor servers remotely by using the onstat utility in the Axional DBStudio.
You can use the onstat utility to check the status and monitor the activities of a database server.
Some onstat command executions have links defined to drill down by executing another related onstat command:
e.g. onstat -g sql returns all sessions and what type of SQL command each is currently executing; this screen allows you to click on the session number to get more information about the current statement executed by that session.
To use the onstat utility:
- On the menu, expand Server Information and then click onstat Utility.
- Type an onstat option. You do not have to type onstat.
- Click Run button.
16.1 Hyperlinks
Some commands' output may contain hyperlinks. Click on the hyperlink to see related onstat information in a new window.