April 24, 2018 | IBM i

IBM i Storage: Better Performance Through Hardware



Even though the new POWER9 scale-out servers can support many terabytes of data, it’s still important to optimize storage utilization. Data drives business, which is why AI is a hot topic. It’s why IBM invented Watson.


Data speed also drives business. Without optimized data, your business can’t react as quickly as it must. The business is asking IT to provide more data, so it can make better business decisions. IBM i systems are well positioned to assist companies with things like AI with Watson and temporal support for databases. Unfortunately, these technologies will result in exponential data growth, which makes storage performance even more important than before.


Today, let’s discuss how you can minimize IBM i storage usage and increase file access speeds for applications, using the right hardware resources and proper systems maintenance.

Increasing Memory, Better Storage Controllers & Switching to SSDs Improve Storage Performance

With the new IBM POWER9 hardware, CPW benchmark results have increased by 1.5 times over the previous POWER8 hardware. Does the increased CPW help storage perform better? No. A faster processor (higher CPW) only lets your system process information faster; it won't improve storage performance.


Conversely, increasing your IBM POWER memory does have a positive effect on IBM i storage performance. Page faults happen when requested data is not found in main memory and must be moved in from storage. When there is a high volume of movement between main memory and storage, latency increases and users become unhappy. Applications with a high I/O volume can always benefit from increased memory; however, adding more memory alone may not be enough to correct your performance issues.


Storage controller cards can also improve storage throughput. Many new IBM i storage controllers, such as the EJ14 PCI Express (PCIe) controllers, provide up to 12 GB of write cache built right into the card. This provides increased memory and throughput options that can also speed up storage-to-main-memory transfers, which, again, decreases latency.


Making use of SSDs also improves your storage performance. Solid State Drives (SSDs) offer lower latency than Hard Disk Drives (HDDs) because they have no moving parts. SSDs use flash memory to read and write data, while HDDs must move physical arms over a spinning platter to reach it. If you have an application that experiences high transfer volume, SSDs will offer a significant improvement in performance.


SSDs are more expensive than HDDs, but they can handle far more input/output operations per second (IOPS). Since they offer improved performance over HDDs, you can support the same workload with fewer drives. With HDDs, you may need to add an extra drive or two just to spread the load across more arms; since SSDs have no arms, you can often install fewer of them and avoid buying extra drives solely to increase the system's arm count.
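As a rough illustration of why fewer SSDs can carry the same workload, here is a back-of-the-envelope sizing sketch. The per-drive IOPS figures are illustrative order-of-magnitude assumptions, not measured values for any specific IBM i drive:

```python
import math

def drives_needed(workload_iops: int, per_drive_iops: int) -> int:
    """Minimum number of drives to satisfy a sustained IOPS workload."""
    return math.ceil(workload_iops / per_drive_iops)

# Assumed (illustrative) per-drive capabilities:
HDD_IOPS = 200      # a fast spinning disk, order of magnitude
SSD_IOPS = 20000    # an enterprise SSD, order of magnitude

workload = 10000    # sustained IOPS the application generates

print(drives_needed(workload, HDD_IOPS))  # 50 HDDs just to spread the I/O
print(drives_needed(workload, SSD_IOPS))  # 1 SSD (capacity permitting)
```

Capacity, redundancy, and RAID overhead would raise these counts in practice, but the gap in arm-limited IOPS is the point.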


Optimally, you can take a hybrid approach to your storage by implementing SSDs for your more I/O-intensive applications and HDDs for data that is accessed less often. For archive purposes, you should move the files you no longer need to access off to tape, or use a virtual tape library (VTL), to save space, which will also help improve storage performance.

Monitoring Storage Utilization

The first step to ensuring that your IBM i storage is optimized is to monitor storage utilization. Storage utilization is simply the percentage of storage that is currently in use. When your storage gets too full, it doesn't just impact system performance; it can cause your system to crash. Although there is some debate as to the best value to set your storage pool utilization monitoring to, IBM sets this value to 90% utilization by default. That is, once storage pool utilization reaches 90%, the operating system will automatically alert you through QSYSOPR messages that a storage full condition is approaching.
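The utilization check itself is simple percentage arithmetic. A minimal sketch of the calculation, with a hypothetical pool size rather than values read from a live system:

```python
def utilization_pct(used_bytes: int, capacity_bytes: int) -> float:
    """Percentage of the storage pool currently in use."""
    return used_bytes / capacity_bytes * 100

def over_threshold(used_bytes: int, capacity_bytes: int,
                   threshold_pct: float = 90.0) -> bool:
    """True once utilization has crossed the alert threshold."""
    return utilization_pct(used_bytes, capacity_bytes) >= threshold_pct

# Hypothetical 4 TB pool with 3.7 TB in use: ~92.5%, past the 90% default
TB = 1024 ** 4
print(over_threshold(int(3.7 * TB), 4 * TB))  # True
```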


Storage can fill up for two different reasons: the storage pool may have been slowly filling for years and just reached the threshold percentage, or a runaway job may be looping, filling up a file or generating thousands of spooled files that quickly consume system storage. If the system has been slowly filling when it hits the storage threshold, you'll probably have at least some time to add more storage before the system crashes due to storage overflow.


Although 90% is the default storage threshold value, it may not give you enough time to react and clear out excess storage before the system crashes when a runaway job is filling your storage. Doing nothing once your IBM i storage hits 90% isn't an option if you want to keep your system running.


To guard against runaway jobs filling up storage and crashing your system, you can do two things. First, you can change your default storage threshold to a value less than 90% (say 80% or 85%). In an emergency where a looping job is filling up storage and threatening to crash the system, the lower threshold will give you more time to react and clean up the situation. You can change the default storage threshold value in either your system's System Service Tools (SST) menu or inside the Dedicated Service Tools (DST) menu.


The second thing you can do is use an automated IBM i monitoring package such as SEA’s absMessage software to look for the system message that your IBM i storage pool has filled up past the system storage threshold. You can program your monitoring software to look for the CPF0907 message (Serious storage condition may exist. Press HELP) and then alert your first responders through email, text message, or tweet that they must check out this critical storage issue.
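In outline, what such a monitor does is watch the message stream for the critical ID and fan out an alert. A simplified sketch of that logic, where the message feed and the notification function are hypothetical stand-ins, not the absMessage product or an IBM i API:

```python
CRITICAL_IDS = {"CPF0907"}  # "Serious storage condition may exist"

def notify_first_responders(msg_id: str, text: str) -> None:
    """Hypothetical stand-in for email/text/tweet alerting."""
    print(f"ALERT {msg_id}: {text}")

def scan_messages(messages):
    """Scan a feed of (msg_id, text) pairs and alert on critical storage messages."""
    alerted = []
    for msg_id, text in messages:
        if msg_id in CRITICAL_IDS:
            notify_first_responders(msg_id, text)
            alerted.append(msg_id)
    return alerted

# Illustrative feed of QSYSOPR-style messages
feed = [
    ("CPF1124", "Job started."),
    ("CPF0907", "Serious storage condition may exist. Press HELP."),
]
print(scan_messages(feed))  # ['CPF0907']
```

A real monitoring package does the hard parts this sketch skips: reading the live QSYSOPR queue, de-duplicating repeats, and escalating until someone responds.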

Maintenance Is Key to Keeping Your Storage Performing at Its Peak

Now that you better understand how hardware resources affect storage performance and why it's important to monitor utilization, let's talk about file maintenance. The sheer volume of data that companies hold today is staggering. Ensuring that your system is running lean and that you are not saving unnecessary objects will help keep your utilization in line.


Companies are notorious for keeping files they don't need for long periods of time. While it's great that you're performing backups, you could be wasting time and money backing up data you just don't need. Often companies leave unnecessary spooled files or journals on their systems for long periods. What they don't realize is that they are backing up those files needlessly day after day, which adds time to the backups and consumes space on the backup media.


Not only will managing your storage, and deleting or archiving obsolete objects to outside media, help improve performance, it will also help you save money on additional system resources. By better utilizing the storage you have, you can extend the life of your system and your storage. Regular maintenance is key to better storage utilization. In a future post, we will take a closer look at the types of objects you should be cleaning up to keep storage utilization down, including journal receivers, log files, and save files, as well as using file reorganization to improve storage performance.


There are several things that can affect the performance of your storage including the POWER hardware itself, and the maintenance you do or do not do. Please feel free to contact us here at SEA for more information on handling your IBM i storage. We can help ensure you understand how to get the most out of your IBM i.