NVMe initiator configuration in a RoCE environment includes installing and configuring the rdma-core and nvme-cli packages, configuring initiator IP addresses, and setting up the NVMe-oF layer on the host.

What you’ll need

You must be running a compatible operating system: RHEL 8, RHEL 9, or SUSE Linux Enterprise Server 12 or 15 with the latest service pack. See the Lenovo Storage Interoperation Center (LSIC) for a complete list of the latest requirements.

Steps
  1. Install the rdma-core and nvme-cli packages:

    SLES 12 or SLES 15

    # zypper install rdma-core
    # zypper install nvme-cli

    RHEL 8 or RHEL 9

    # yum install rdma-core
    # yum install nvme-cli
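
    To confirm that both packages installed successfully, you can query the RPM database (this works on both SLES and RHEL):

    # rpm -q rdma-core nvme-cli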
  2. Get the host NQN, which you will use when you configure the host on the array.

    # cat /etc/nvme/hostnqn
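
    The output is a single NVMe Qualified Name; the value is host-specific. For example (illustrative only):

    nqn.2014-08.org.nvmexpress:uuid:f82f8104-2b33-4ca6-9c66-2e6f2d81cf39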
  3. Set up IPv4 addresses on the Ethernet ports used to connect NVMe over RoCE. For each network interface, create a configuration file that contains the variables for that interface.

    The values used in this step depend on your server hardware and network environment; the variables include IPADDR and GATEWAY. These are example instructions for SLES and RHEL:

    SLES 12 or SLES 15

    Create the example file /etc/sysconfig/network/ifcfg-eth4 with the following contents.

    BOOTPROTO='static'
    BROADCAST=
    ETHTOOL_OPTIONS=
    IPADDR='192.168.1.87/24'
    GATEWAY='192.168.1.1'
    MTU=
    NAME='MT27800 Family [ConnectX-5]'
    NETWORK=
    REMOTE_IPADDR=
    STARTMODE='auto'

    Then, create the example file /etc/sysconfig/network/ifcfg-eth5:

    BOOTPROTO='static'
    BROADCAST=
    ETHTOOL_OPTIONS=
    IPADDR='192.168.2.87/24'
    GATEWAY='192.168.2.1'
    MTU=
    NAME='MT27800 Family [ConnectX-5]'
    NETWORK=
    REMOTE_IPADDR=
    STARTMODE='auto'
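
    Note: on SLES, wicked normally takes the default route from /etc/sysconfig/network/routes or an ifroute-<interface> file rather than from the ifcfg file. If the gateway is not applied after the interface comes up, create a route file such as /etc/sysconfig/network/ifroute-eth4 containing a line like the following (assuming the addresses above):

    default 192.168.1.1 - eth4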

    RHEL 8 or RHEL 9

    Create the example file /etc/sysconfig/network-scripts/ifcfg-eth4 with the following contents. Note that RHEL ifcfg files use their own variable names (ONBOOT rather than STARTMODE, and a separate PREFIX rather than a /24 suffix on IPADDR); on RHEL 8 and RHEL 9, NetworkManager reads these files.

    DEVICE=eth4
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.1.87
    PREFIX=24
    GATEWAY=192.168.1.1

    Then, create the example file /etc/sysconfig/network-scripts/ifcfg-eth5:

    DEVICE=eth5
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.2.87
    PREFIX=24
    GATEWAY=192.168.2.1
  4. Enable the network interfaces:

    # ifup eth4
    # ifup eth5
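
    If the ifup command is not available (for example, on a minimal RHEL 9 installation), you can bring the interfaces up with NetworkManager instead, and then confirm that each interface is up with the expected address:

    # nmcli device connect eth4
    # nmcli device connect eth5
    # ip addr show eth4
    # ip addr show eth5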
  5. Set up the NVMe-oF layer on the host. Create the following file under /etc/modules-load.d/ so that the nvme-rdma kernel module is loaded automatically at every boot:

    # cat /etc/modules-load.d/nvme-rdma.conf
      nvme-rdma
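
    Files under /etc/modules-load.d/ are processed at boot, so also load the module for the current session:

    # modprobe nvme-rdma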

    To verify the nvme-rdma kernel module is loaded, run this command:

    # lsmod | grep nvme
    nvme_rdma              36864  0
    nvme_fabrics           24576  1 nvme_rdma
    nvme_core             114688  5 nvme_rdma,nvme_fabrics
    rdma_cm               114688  7 rpcrdma,ib_srpt,ib_srp,nvme_rdma,ib_iser,ib_isert,rdma_ucm
    ib_core               393216  15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,ib_srp,nvme_rdma,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,qedr,ib_cm
    t10_pi                 16384  2 sd_mod,nvme_core
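
    With the module loaded, you can also confirm that the host's RDMA-capable ports are visible and active. The rdma utility ships with iproute2 on current RHEL and SLES releases; the exact output depends on your adapter:

    # rdma link show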