Automatic load balancing overview

Automatic load balancing improves I/O resource management by reacting dynamically to load changes over time, automatically adjusting volume controller ownership to correct load imbalances as workloads shift across the controllers.

The workload of each controller is continually monitored and, with cooperation from the multipath drivers installed on the hosts, can be automatically brought into balance whenever necessary. When workload is automatically re-balanced across the controllers, the storage administrator is relieved of the burden of manually adjusting volume controller ownership to accommodate load changes on the storage array.

When Automatic Load Balancing is enabled, it performs the following functions:
  • Automatically monitors and balances controller resource utilization.

  • Automatically adjusts volume controller ownership when needed, thereby optimizing I/O bandwidth between the hosts and the storage array.

Enabling and disabling Automatic Load Balancing

Automatic Load Balancing is enabled by default on all storage arrays.

You might want to disable Automatic Load Balancing on your storage array for the following reasons:
  • You do not want to automatically change a particular volume's controller ownership to balance workload.

  • You are operating in a highly tuned environment where load distribution is purposefully set up to achieve a specific distribution between the controllers.
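If you do decide to disable the feature, this is typically done from the storage management CLI. The following is a hedged sketch using SMcli syntax from SANtricity-based management software; the array name "Array1" is a placeholder, and you should verify the exact parameter against your management software's CLI reference:

```
# Disable Automatic Load Balancing on the array named "Array1" (placeholder name)
SMcli -n Array1 -c "set storageArray autoLoadBalancingEnable=false;"
```

Setting the parameter back to true re-enables the feature.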

Host types that support the Automatic Load Balancing feature

Even though Automatic Load Balancing is enabled at the storage array level, the host type you select for a host or host cluster has a direct influence on how the feature operates.

When balancing the storage array's workload across controllers, the Automatic Load Balancing feature attempts to move volumes that are accessible by both controllers and that are mapped only to a host or host cluster capable of supporting the Automatic Load Balancing feature.

This behavior prevents a host from losing access to a volume due to the load balancing process; however, the presence of volumes mapped to hosts that do not support Automatic Load Balancing affects the storage array's ability to balance workload. For Automatic Load Balancing to balance the workload, the multipath driver must support TPGS and the host type must be included in the following table.

Note: For a host cluster to be considered capable of Automatic Load Balancing, all hosts in that group must be capable of supporting Automatic Load Balancing.
Host type supporting Automatic Load Balancing    With this multipath driver
Windows or Windows Clustered                     MPIO with DSM
Linux DM-MP (Kernel 3.10 or later)               DM-MP with scsi_dh_alua device handler
VMware                                           Native Multipathing Plugin (NMP) with VMW_SATP_ALUA Storage Array Type plug-in
Note: With minor exceptions, host types that do not support Automatic Load Balancing continue to operate normally whether or not the feature is enabled. One exception: after a controller failover, the storage array moves unmapped or unassigned volumes back to the owning controller when the data path returns. Any volumes mapped or assigned to non-Automatic Load Balancing hosts are not moved.

See the Lenovo Interoperability Matrix for compatibility information for specific multipath driver, OS level, and controller-drive tray support.

DSM considerations

The following are considerations for multipath driver installation.


Use the host type “Linux DM-MP (Kernel 3.10 or later)” for all Linux operating systems.

For RHEL 7, RHEL 8, SLES 11 SP4, SLES 12, SLES 15 and higher, a modification to /etc/multipath.conf is required.
  1. Apply the following recommended settings:
    devices {
    		device {
    			vendor "LENOVO"
    			product "DE_Series"
    			product_blacklist "Universal Xport"
    			path_grouping_policy "group_by_prio"
    			path_checker "rdac"
    			features "2 pg_init_retries 50"
    			hardware_handler "1 rdac"
    			prio "rdac"
    			failback immediate
    			rr_weight "uniform"
    			no_path_retry 30
    			retain_attached_hw_handler yes
    			detect_prio yes
    		}
    }
  2. Restart multipath service.

  3. If SAN boot, rebuild initramfs after updating multipath.conf file.
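On systemd-based distributions such as RHEL 7/8 and SLES 12/15, steps 2 and 3 might look like the following. This is a sketch; the service name and initramfs tooling can vary by distribution, so confirm against your OS documentation:

```
# Restart the multipath service so it picks up the new /etc/multipath.conf
systemctl restart multipathd

# SAN boot only: rebuild the initramfs so the boot image carries the updated configuration
dracut -f
```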

For RHEL 6.7 – 6.9, use the multipath.conf settings above and additionally do the following:
  1. Execute the following command to add "rdloaddriver=scsi_dh_alua" to the bootloader configuration (in menu.lst): grubby --update-kernel=ALL --args=rdloaddriver=scsi_dh_alua

  2. If SAN boot, rebuild initramfs to include scsi_dh_alua module: dracut -f --add-drivers scsi_dh_alua /boot/initramfs-$(uname -r).img $(uname -r)

  3. Reboot the host.


If SAN booting, install with a single path, then install the DSM driver and reboot before presenting the remaining paths to the host.

If not SAN booting, install the DSM before presenting LUNs to the host.


After install, manually create the claim rule with the following command, then reboot:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V LENOVO -M DE_Series -c tpgs_on -P VMW_PSP_RR -e "Lenovo DE-Series arrays with ALUA support"
For iSCSI SAN boot, ensure any vmkernel interface that participates in SAN boot has a MAC address that matches the physical adapter. The default adapter created during installation matches by default. Use the following command when creating an additional vmkernel interface to specify the MAC address:
esxcli network ip interface add -i [vmkernel name] -M "[MAC address]" -p "[Portgroup name]" -m [MTUSIZE]

For more information about DSM, refer to the ThinkSystem DE Series Hardware Installation and Maintenance Guide.

Verifying OS compatibility with the Automatic Load Balancing feature

Verify OS compatibility with the Automatic Load Balancing feature before setting up a new (or migrating an existing) system.

  1. Go to the Lenovo Interoperability Matrix to find your solution and verify support.

  2. For Linux operating systems, update and configure the /etc/multipath.conf file.

  3. Ensure that both retain_attached_hw_handler and detect_prio are set to yes for the applicable vendor and product, or use default settings.
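A quick way to confirm step 3 is to grep the configuration file for the two settings. The sketch below writes a minimal example fragment to a temporary file purely for illustration; the temp path and the abbreviated fragment are assumptions, not the full recommended configuration. On a real host, point the grep commands at /etc/multipath.conf instead.

```shell
# Write a minimal multipath.conf fragment to a temp file (illustration only,
# not the complete recommended device stanza)
cat > /tmp/multipath.conf.test <<'EOF'
devices {
    device {
        vendor "LENOVO"
        product "DE_Series"
        retain_attached_hw_handler yes
        detect_prio yes
    }
}
EOF

# Both settings must be "yes" for Automatic Load Balancing to work correctly
if grep -q 'retain_attached_hw_handler yes' /tmp/multipath.conf.test && \
   grep -q 'detect_prio yes' /tmp/multipath.conf.test; then
    echo "ALB-required settings present"
fi
```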