Have you ever wished there was an easy way to move IBM i workloads from one server to another without any downtime? If you have, then you’re in luck – such a capability already exists on the IBM i server. It’s called live partition mobility (LPM).
LPM is an established virtualization technology that allows an active or inactive IBM i logical partition (LPAR) to be moved from one physical machine to another one without bringing the LPAR down.
What LPM does almost seems like magic. All of the system resources associated with an LPAR — including the processor state, memory, attached virtual devices, and connected users — are moved nearly instantaneously to the target system, with no impact on users. Even though the underlying server on which the IBM i operating system and applications are running is changing, the OS and applications remain usable throughout the migration process. It’s a nifty trick, for sure, and one that provides a number of benefits.
Reasons to consider LPM for your shop
In the high availability category, one could use LPM to keep an application running while its underlying server is down for regular maintenance or a backup, or while upgrading to a new physical machine. Another use IBM recommends is keeping an LPAR/application combination running when an error is detected on the production server and the user wants to avoid unexpected downtime.
Workload balancing is another LPM use case. The technology can be used to easily move a production LPAR from a smaller system without enough resources to a larger system with more computing resources. It can also be used to combine workloads from multiple smaller systems onto one larger system.
However, there’s a word of caution on workload balancing. While workload balancing is one use case for LPM, IBM advises users that LPM does not currently support automatic workload balancing. In other words, it’s up to the administrator to manually move the workloads.
Lastly, LPM is useful in cloud environments, where the underlying hardware is already abstracted away from the user. In this use case, the technology can be used for workload balancing or high availability by a public or private cloud provider. Considering the recent rise in interest in public cloud computing – not to mention a desire by IBM i shops to treat their existing on-premises systems like their own private cloud resource — this is certainly a valuable way to leverage LPM.
LPM was added to IBM i in 2012 with version 7.1 Technology Refresh 4, bringing IBM i up to par with AIX and Linux on Power Systems, which have had the technology since 2007. LPM also makes IBM i more competitive with Windows and Linux on x86 servers, which have a similar capability in the form of VMware’s vMotion software.
IBM i LPM requirements
You can get LPM by licensing PowerVM Enterprise Edition, IBM’s flagship hypervisor software for hardware, network, and storage virtualization. Most IBM i customers will opt to manage LPM through the Hardware Management Console (HMC), although it also supports the Systems Director Management Console (SDMC) and the Integrated Virtualization Manager (IVM).
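For readers who drive their systems from the HMC command line, a migration is typically handled with the `migrlpar` command. The fragment below is an illustrative sketch only: the managed system and partition names are placeholders, and the exact options available depend on your HMC level, but the validate-then-migrate pattern is the general shape of an HMC-driven move.

```shell
# Validate that partition PRODLPAR can move from managed system SRCSYS
# to TGTSYS (all three names are placeholders; substitute your own).
migrlpar -o v -m SRCSYS -t TGTSYS -p PRODLPAR

# If validation passes, perform the actual live migration.
migrlpar -o m -m SRCSYS -t TGTSYS -p PRODLPAR

# Check the status of in-flight migrations on the source system.
lslparmigr -r lpar -m SRCSYS
```

Running the validation operation first is worthwhile in practice, since it surfaces configuration mismatches (storage, network, firmware) before the partition is touched.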
LPM requires virtualized external storage through the Virtual I/O Server (VIOS), as opposed to native internal disk. It also requires two relatively equivalent Power Systems servers that feature the same CPU, memory, interactive features, and software tier groups, although they don’t have to be identical.
The LPM technology works by essentially copying the memory associated with an LPAR asynchronously from one system to another. This creates a “clone” of the LPAR on the target machine. Because the LPAR keeps running during the copy, some already-copied memory pages change and must be sent again. Once the asynchronous replication process has converged to the point where only a small amount of changed memory remains, the LPAR is transitioned to the target machine, and a final synchronous pass copies any remaining memory pages at that time.
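As an illustration of this pre-copy pattern (and emphatically not of IBM’s actual implementation), the loop can be sketched in Python. Memory is modeled as a dict of pages, `workload` stands in for the still-running partition dirtying pages between passes, and the function name and threshold value are invented for the sketch.

```python
def precopy_migrate(source, workload, threshold=4, max_rounds=10):
    """Simulate iterative pre-copy memory migration (illustrative only).

    source    -- dict mapping page id -> page contents on the source machine
    workload  -- callable standing in for the running LPAR; it mutates
                 `source` and returns the set of page ids it dirtied
    threshold -- end the asynchronous phase once this few pages remain
    """
    target = {}
    rounds = 0
    to_copy = set(source)            # the first pass copies every page
    # Asynchronous phase: the partition keeps running, so each pass
    # leaves behind freshly dirtied pages that must be copied again.
    while len(to_copy) > threshold and rounds < max_rounds:
        for page in to_copy:
            target[page] = source[page]
        to_copy = workload()         # pages changed during this pass
        rounds += 1
    # Synchronous phase: the partition is briefly paused while the
    # last few dirty pages are copied, then resumes on the target.
    for page in to_copy:
        target[page] = source[page]
    return target, rounds
```

Each pass shrinks the dirty set until it falls under the threshold, at which point a short synchronous copy completes the “clone” on the target.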
The hardware requirements for LPM are slightly different for Power Systems running IBM i than for those running AIX or Linux. First, IBM i shops must be running on a POWER7 server or later, with firmware release 740.40 or 730.51, or later. The HMC must be running version V7R7.5.0.0 or later, and VIOS must be used for storage (no internal DASD allowed).
Within the VIOS family, LPM users are free to use Virtual SCSI (VSCSI), Virtual Fibre Channel (NPIV), or Virtual Ethernet, according to IBM. However, both the source and target systems must be on the same Ethernet network.
IBM i LPM restrictions
There are some restrictions to LPM that one should keep in mind. For starters, you can’t use LPM to move workloads from Server A to Server B while simultaneously moving workloads from Server B to Server A. You also can’t move workloads from Server A to Server C at the same time you’re moving them to Server B.
LPM also requires LPARs to be backed by physical volumes, and none of them can be assigned to a VSCSI-connected tape or optical storage device. There are also restrictions on the use of VSCSI server adapters, according to this developerWorks story, and NPIV users should be aware that other restrictions apply as well.
What about IBM i LPP and third-party application licensing?
Licensing implications also arise with LPM. Obviously, the user must own the source and target systems that LPM is being used with. When it comes to software, IBM’s own software is covered through its virtualization capacity license counting rules. If customers have not obtained the necessary permanent license for IBM’s licensed program products (LPP), then IBM’s temporary licensing scheme and the 70-day clock kick in as soon as the migration is complete.
When it comes to third-party applications, each ISV will have its own rules for handling the licensing of its software on servers, so it’s up to users to check in advance on those requirements.
LPM opens up a number of enticing new possibilities for managing IBM i workloads in a flexible manner. Combined with IBM’s existing dynamic LPAR capability, customers today are essentially free to move their IBM i workloads wherever and however they want.
For more information on LPM, check out this IBM Knowledge Center article, which has a link to the definitive 170-page PDF on LPM.