Mainframe applications have traditionally enjoyed a consistent, well-defined software management process, where update/install cycles typically take 1-2 months from QA into production.
Unfortunately, that long development process no longer works well, particularly when mainframe applications must stay synchronized with fast-moving distributed applications such as mobile apps, Web services, and analytics. Using DevOps and Agile methodologies, distributed applications change daily or weekly; they require continuous update and continuous improvement. Mainframe back-office applications must accelerate their release schedules to keep pace with this fast-changing distributed world.
Solving the mainframe DevOps challenge
Agile and DevOps require getting mainframe software into production more quickly, with the same or better quality when it goes live. However, production control managers are hesitant to risk production quality by promoting changes that have not been well tested and have not been shown to be quality deployments.
To meet this challenge, mainframe shops must create DevOps methods to do the following:
1. Get mainframe batch applications to quality sooner.
2. Ensure and prove they can release quality applications to production more quickly.
The full loop for getting to quality and accelerating application releases
As shown in figure 1, we have identified a full loop for getting to quality releases. This loop helps achieve the twin goals of getting to quality sooner and ensuring quality applications can be more quickly released to production. Our loop consists of these components:
1. Adopt shift-left thinking and tools to find and eliminate defects, and to automate job changes, before releasing applications to quality assurance or production.
2. Implement metrics to identify usage and defect patterns.
3. Take actions to enable continuous improvement.
Here is how each item in our getting to quality loop relates to the mainframe DevOps challenge.
The high cost of traditional development
In traditional mainframe development environments, most defects are discovered and fixed during the quality assurance (QA) and production (PROD) stages. The farther a change moves through the SDLC, the more defects appear.
The cost of finding defects in QA and PROD is exponentially higher than finding them in DEV, both because defect corrections must be sent back to DEV (triggering application rework) and because of the cost of a defect failing in production. Traditional development practices therefore take much longer to get to quality.
You can see this issue in figure 2. The left Y-axis represents the number of defects found, while the right Y-axis represents the cost of finding them. The red line shows the effect of finding more defects in QA and PROD than in DEV: defect detection rises as a change moves through QA and PROD, and the cost of fixing those defects rises with it. More defects in PROD mean more costly rework to fix production issues, and lower JCL quality.
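To make the cost argument concrete, here is an illustrative calculation using the commonly cited rule of thumb that a defect costs roughly 10x more to fix in QA and 100x more in PROD than in DEV. The multipliers, defect counts, and unit cost below are assumptions for the example, not figures from this article:

```python
# Illustrative defect-cost model: relative cost multipliers per SDLC stage.
# The 1x/10x/100x ratios are a common industry rule of thumb, not measured data.
STAGE_COST_MULTIPLIER = {"DEV": 1, "QA": 10, "PROD": 100}

def total_defect_cost(defects_by_stage, unit_cost=500):
    """Total cost of fixing defects, given the stage where each was found."""
    return sum(count * STAGE_COST_MULTIPLIER[stage] * unit_cost
               for stage, count in defects_by_stage.items())

# Traditional pattern: the same 70 defects, but most surface late.
traditional = {"DEV": 10, "QA": 40, "PROD": 20}
# Shift-left pattern: the same 70 defects, caught mostly in DEV.
shifted = {"DEV": 55, "QA": 12, "PROD": 3}

print(total_defect_cost(traditional))  # prints 1205000
print(total_defect_cost(shifted))      # prints 237500
```

Even with identical defect counts, moving detection into DEV cuts the total cost by roughly a factor of five in this toy scenario, which is the shape of the red-versus-blue comparison in figure 2.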
Shift-left flips defect detection patterns
Finding and fixing defects in development is far less costly than finding and fixing them in production. That is the idea behind the shift-left concept.
Under shift-left practices, more testing (particularly integration testing) occurs as early as possible during development. The goal is to find most defects in development rather than in QA and production.
Notice the effect of shifting left on the blue line in figure 2. Using shift-left, more defects are found in DEV than in QA/PROD, and they are fixed sooner in the development process. The cost of fixing defects also shifts left to DEV, where it is exponentially lower than in QA/PROD.
Fewer PROD defects mean higher quality JCL. Higher quality JCL incurs fewer expensive production failures and less application rework. Applications can get to quality much sooner, accelerating release schedules by eliminating the costs and delays of production failures and rework cycles.
It is optimal to push defect detection as far left in the SDLC as possible. Shift-left produces the highest quality in production (a defect-free environment) in the fastest and least costly way.
Shift-left requires that your developers have the proper tools to get code to quality before it is promoted to QA and PROD. Tools such as SEA's JCLplus+ enforce site standards and perform JCL runtime simulation against the production environment where those jobs will run. The right tools allow code to be validated with the true production security rules, production libraries, and control members, in the environment where the changes will eventually live.
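As a rough illustration of the kind of check such a tool automates, here is a hypothetical standards scanner in Python. The two rules and their messages are invented for this example; they are not JCLplus+'s actual rules or interface:

```python
import re

# Hypothetical site-standard checks of the kind a JCL validation tool
# automates; the rules and messages here are illustrative only.
def check_site_standards(jcl_text):
    """Return a list of standards violations found in a JCL member."""
    violations = []
    for num, line in enumerate(jcl_text.splitlines(), start=1):
        # Rule 1 (illustrative): every JOB card must carry accounting info.
        if " JOB " in line and "ACCT" not in line:
            violations.append(f"line {num}: JOB card missing accounting info")
        # Rule 2 (illustrative): production JCL must not reference test libraries.
        if re.search(r"DSN=TEST\.", line):
            violations.append(f"line {num}: reference to TEST library in production JCL")
    return violations

sample = """//PAYROLL  JOB (ACCT123),'NIGHTLY PAY'
//STEP1    EXEC PGM=PAYCALC
//INPUT    DD DSN=TEST.PAYROLL.DATA,DISP=SHR"""
print(check_site_standards(sample))
```

Real products go much further (runtime simulation against the actual production libraries and security rules), but the principle is the same: reject a nonconforming job in DEV instead of discovering it in PROD.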
Impact analysis tools are another valuable component for ensuring that any defects introduced by your JCL Proc changes are detected before leaving DEV. Tools such as SEA's XREFplus+ DBR let you create a database of application components, including the datasets, Jobs, Procs, programs, DB2, and scheduling components in your installation. These collections allow programmers to query which other jobs will be affected when they change a Proc, a program, or another component. Such tools help DEV understand the effects of a new change, so developers do not introduce errors while making other changes.
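The kind of query described above can be sketched with a minimal cross-reference index. This is a hypothetical Python illustration of the idea, not XREFplus+ DBR's actual data model or API, and the component names are invented:

```python
from collections import defaultdict

# Hypothetical cross-reference index of the kind an impact-analysis
# tool builds; names and relationships here are illustrative only.
class XrefIndex:
    def __init__(self):
        # component name -> set of jobs that reference it
        self._used_by = defaultdict(set)

    def record(self, job, components):
        """Register that a job references the given Procs/programs/datasets."""
        for comp in components:
            self._used_by[comp].add(job)

    def impacted_jobs(self, component):
        """Which jobs must be retested if this component changes?"""
        return sorted(self._used_by.get(component, set()))

xref = XrefIndex()
xref.record("PAYROLL", ["PAYPROC", "PAYCALC", "PROD.PAY.MASTER"])
xref.record("PAYRPT",  ["PAYPROC", "RPTGEN"])
xref.record("GLPOST",  ["GLPROC", "PROD.PAY.MASTER"])

# Changing the shared Proc PAYPROC affects two jobs:
print(xref.impacted_jobs("PAYPROC"))  # prints ['PAYROLL', 'PAYRPT']
```

A production tool indexes far more component types (DB2 objects, scheduler dependencies, control members), but the developer-facing question is the same: "if I change this, what else do I have to test?"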
You can also improve schedule release speed and detect defects in DEV by integrating your JCL management tools with your source change management system. As change packages are promoted through the SDLC, a product like JCLplus+ will send a simulation test over to the appropriate LPAR for testing, retrieve the results, and allow or reject the promotion based on the results. Package promotion can serve as the last gateway to ensure you are sending high quality code to production.
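The promotion gateway described above can be sketched as a simple gate function. Here `run_simulation` is a stand-in for whatever simulation hook your tooling provides on the target LPAR; it is not a real product API, and the package shape is invented for the example:

```python
# Hypothetical promotion gate: before a change package leaves DEV, a
# runtime simulation runs against the target environment, and promotion
# is allowed or rejected based on the result.
def promote_package(package, run_simulation):
    """Promote a change package only if its simulation run is clean."""
    result = run_simulation(package)
    if result["errors"]:
        return {"promoted": False, "reason": result["errors"]}
    return {"promoted": True, "reason": []}

# Stubbed simulation: flags any member whose name marks it as failing.
def fake_simulation(package):
    errors = [m for m in package["members"] if m.endswith(".BAD")]
    return {"errors": errors}

clean = {"name": "CHG1001", "members": ["PAYROLL.JCL", "PAYPROC.PROC"]}
print(promote_package(clean, fake_simulation))  # promoted: True
```

The design point is that the gate is automatic and binary: a failed simulation blocks the promotion without waiting for a human reviewer to notice the problem.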
These JCL management tool capabilities—enforcing site standards, running production simulations, performing impact analysis, and integrating with source change management—enable you to detect defects and get to quality sooner while your install/update changes are still in DEV. They eliminate rework cycles and production failures by finding defects before the code is promoted.
Closing the loop with metrics and continuous improvement
The last two pieces that close the full loop for getting to quality are metrics and action for continuous improvement. JCL management tools such as JCLplus+ can automatically create, collect, and report on JCL metrics. Metrics, combined with continuous-improvement actions, let you do several important things in getting to quality, including:
- Report on what defects were found and validate tool usage.
- Create a baseline for measuring release schedule acceleration.
- Identify and implement SDLC improvements and compare against the baseline.
- Show trends on how you are getting to quality more quickly than you were before.
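As a minimal illustration of one such metric, a baseline-versus-current comparison of where defects are caught could be computed like this (stage names and counts are invented for the example; a real tool would collect them automatically):

```python
# Illustrative metrics roll-up: compare the current release cycle against
# a baseline to show shift-left progress. All numbers here are made up.
def shift_left_ratio(defects_by_stage):
    """Fraction of all defects that were caught in DEV."""
    total = sum(defects_by_stage.values())
    return defects_by_stage.get("DEV", 0) / total if total else 0.0

baseline = {"DEV": 10, "QA": 40, "PROD": 20}   # before shift-left
current  = {"DEV": 55, "QA": 12, "PROD": 3}    # after

print(f"baseline: {shift_left_ratio(baseline):.0%} of defects caught in DEV")
print(f"current:  {shift_left_ratio(current):.0%} of defects caught in DEV")
```

Tracking this one ratio per release cycle gives production control managers the trend line they need: a rising DEV share is direct evidence the shop is getting to quality sooner.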
With metrics and continuous improvement initiatives, production control managers can look at performance and see that getting to quality no longer takes 1-2 months. Seeing that you now do it in a few weeks gives them the confidence to promote changes to production more quickly, meeting DevOps and Agile requirements.
A full loop
With products like JCLplus+ and XREFplus+ DBR, you can create the full loop for getting to quality releases faster, and for ensuring and proving you can release quality applications to production more quickly. This process can deliver significant improvements toward the DevOps and Agile goals mainframe shops need to meet.
SEA has significant experience in providing JCL management tools for IBM Z mainframes. Please feel free to contact SEA for more information on how to improve your DevOps processes.