Cordes and Graefe

Cordes und Graefe KG in Bremen, Germany, ran a batch job that created different levels of sales statistics from more than 30 million order line records. To create the statistics, additional information had to be fetched from other databases (customer data, rebates, etc.): a total of five other files were accessed by key for every record. The job ran at a low CPU percentage for around 5 hours on an iSeries server with abundant CPU, main storage, and disk capacity.
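
To make the access pattern concrete, the sketch below mimics the structure of the job: one keyed read per auxiliary file for every order line, each of which meant a synchronous (random) disk I/O on the real system. Python and sqlite3 are used purely as stand-ins here, and all table, column, and variable names are hypothetical, not taken from the actual application, which ran on the iSeries server.

    # Illustrative stand-in only - the real job ran on an iSeries server.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (custno TEXT PRIMARY KEY, region TEXT);
        CREATE TABLE rebates   (code   TEXT PRIMARY KEY, percent REAL);
        INSERT INTO customers VALUES ('C001', 'North'), ('C002', 'South');
        INSERT INTO rebates   VALUES ('R1', 0.05), ('R2', 0.10);
    """)

    order_lines = [              # stands in for the > 30 million records
        ("C001", "R1", 100.0),
        ("C002", "R2", 250.0),
    ]

    stats = {}
    for custno, rebate_code, amount in order_lines:
        # One keyed read per auxiliary file per order line; on the real
        # system each of these was a synchronous database read.
        region, = conn.execute(
            "SELECT region FROM customers WHERE custno = ?",
            (custno,)).fetchone()
        percent, = conn.execute(
            "SELECT percent FROM rebates WHERE code = ?",
            (rebate_code,)).fetchone()
        stats[region] = stats.get(region, 0.0) + amount * (1 - percent)

    print(stats)      # {'North': 95.0, 'South': 225.0}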

Analyzing the job with GiAPA (Global i Applications Performance Analyzer) from iPerformance made the reason for the low CPU percentage easy to see: the job spent most of its time waiting for physical disk I/Os to complete, caused by millions of synchronous database reads. GiAPA also showed that the time was spent in QDBGETKY (read a record by key) and identified the programs and source statement numbers issuing these reads. The names of the files involved could be seen as well, although these were of course already known.

The random access to several large database files prevented the operating system's expert cache from making the needed records available in advance, and the files were too large to be kept in main storage.

However, only a few fields were needed from the files that were read randomly. iPerformance therefore suggested reading each of these files once at job start, using sequential blocked access, and loading the key fields and the few bytes of data needed into user indexes.
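
The sketch below continues the hypothetical sqlite3 stand-in from the first example and shows the suggested pre-load step: each auxiliary file is read once from start to end, and only the key plus the few fields actually needed are kept in memory. On the real system this was done with sequential blocked reads into user indexes; plain Python dictionaries play that role here.

    # Pre-load step: one sequential pass over each auxiliary file, keeping
    # only the key and the handful of fields the statistics job needs.
    customer_region = {}
    for custno, region in conn.execute("SELECT custno, region FROM customers"):
        customer_region[custno] = region      # key -> the one field needed

    rebate_percent = {}
    for code, percent in conn.execute("SELECT code, percent FROM rebates"):
        rebate_percent[code] = percent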

All the random database accesses could then be replaced by index search operations, and the indexes would be small enough for storage management to keep them in main storage.
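
Continuing the same sketch, the main loop then becomes pure in-memory work: every former keyed database read is replaced by a lookup in the structures built at job start.

    # Rewritten main loop: no database access per order line any more.
    stats = {}
    for custno, rebate_code, amount in order_lines:
        region = customer_region[custno]          # in-memory lookup
        percent = rebate_percent[rebate_code]     # in-memory lookup
        stats[region] = stats.get(region, 0.0) + amount * (1 - percent)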

The strategy proved successful: the new version of the program needs only around 40 minutes of elapsed time, and the total CPU time used also decreased. The job's CPU percentage is now very high, since it no longer has to wait for data to be fetched from disk, but because it runs at the low batch job priority this never disturbs any interactive jobs.