Start with the hardware. Insert two identical storage devices into the empty bays of the storage hosts — one drive per server. To maintain redundancy, storage must always be expanded with matching drives so they can form a mirrored (redundant) array.
Note: If the storage host uses hardware RAID, each new drive must first be added to its own RAID 0 array using remote management tools such as iLO, iDRAC, or IPMI, or directly through the RAID controller interface if physical access to the server is available.
Next, check the existing storage setup by running:
# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec 6 16:13:09 2016
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0

errors: No known data errors
This command shows the current configuration of the storage pool. In the example system, there are two storage devices in the NETSTOR pool, configured as a mirror (mirror-0). After confirming the existing pool layout, the next step is to prepare the new drives that will become part of mirror-1 in the same NETSTOR pool.
To identify the block device names assigned to the newly added disks, run:
# ls -lah /dev/disk/by-id
drwxr-xr-x 2 root root 400 Jul 24 13:42 .
drwxr-xr-x 8 root root 160 Jul 24 13:28 ..
lrwxrwxrwx 1 root root 9 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part9 -> ../../sda9
lrwxrwxrwx 1 root root 9 Jul 24 13:30 ata-ST31000520AS_5VX0BZPV -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul 24 13:30 ata-ST31000520AS_5VX0BZPV-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Jul 24 13:42 ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX -> ../../sdd
lrwxrwxrwx 1 root root 9 Jul 24 13:30 scsi-360000000000000000e00000000010001 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jul 24 13:30 scsi-360000000000000000e00000000010001-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Jul 24 13:42 wwn-0x11769037186453098497x -> ../../sdd
lrwxrwxrwx 1 root root 9 Jul 24 13:30 wwn-0x3623791645033518541x -> ../../sda
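In this example, the newly inserted drive is the WDC disk (ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX), which has no -partN entries and maps to /dev/sdd. To double-check before partitioning, you can also list the disk by size, model, and serial number (assuming it maps to /dev/sdd as above):
# lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdd
The serial number should match the drive you just installed, and no partitions should be listed underneath it.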
Once the block device name of the new disk is identified, create a partition table and prepare the drive for use. Use parted to create a GPT partition table:
# parted /dev/<device> --script -- mktable gpt
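For example, using the stable by-id path of the new disk identified above (this assumes the new drive is the WDC disk from the listing; substitute the path of your own device):
# parted /dev/disk/by-id/ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX --script -- mktable gpt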
Important: The partition label must follow the format SW3-NETSTOR-SRVx-y, where x is the server number and y is the disk number (e.g., SW3-NETSTOR-SRV2-1 = disk one on server two).
Next, create a partition whose label follows this convention. In this example the new drive is the second disk on the secondary server (server two), so the command would be:
# parted /dev/<device> --script -- mkpart "SW3-NETSTOR-SRV2-2" 1 -1
At this point, the new partition has been created and labeled correctly, ready to be added to the zpool mirror.
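Before moving on, you can confirm that the new label is visible to the system; the by-partlabel path is the same one used later when expanding the pool:
# ls -l /dev/disk/by-partlabel/
The listing should now include the newly created label, in this example SW3-NETSTOR-SRV2-2.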
The next step is to edit the configuration file to include the location of the secondary disk:
# nano /etc/sysmonit/mirror.cfg
Add each new target name (the partition label created above) to the nvme_targets list of its node, after the existing entry and separated by a comma. After editing, the file should look similar to the following example:
"storage": {
"pool_name": "NETSTOR",
"nodes": [
{
"id": "9b41c9b2ee1cb5eb47917f4d301cf9aa",
"address": "2.2.2.20",
"port": 4420,
"nvme_targets": [
"SW3-NETSTOR-SRV1-1",
"SW3-NETSTOR-SRV1-2"
],
"subsystem": "sw-mirror"
},
{
"id": "13da151e3e1e486959db9c08dcb76458",
"address": "2.2.2.21",
"port": 4420,
"nvme_targets": [
"SW3-NETSTOR-SRV2-1",
"SW3-NETSTOR-SRV2-2"
"SW3-NETSTOR-SRV2-2"
],
"subsystem": "sw-mirror"
}
]
}
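If mirror.cfg is a complete JSON document, it can be worth sanity-checking the syntax after editing, since a missing comma will break parsing (a hedged example, assuming python3 is available on the storage host):
# python3 -m json.tool /etc/sysmonit/mirror.cfg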
Next, the newly added storage must be exported to the network so the primary server can detect the device.
# sw-nvme expand-pool --path /dev/disk/by-id/ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX
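Optionally, you can verify that the new namespace is now advertised over the network by running a discovery from a host with the standard nvme-cli installed (a hedged example; the address and port are taken from the mirror.cfg shown earlier):
# nvme discover -t tcp -a 2.2.2.21 -s 4420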
This completes the required steps on the secondary server.
Note: All steps from the beginning of this guide up to this point should also be performed on the primary server to ensure redundancy and prevent data loss in case of failover.
On Primary Storage Server
Once the same steps have been completed on the primary storage server, we can proceed to add the new storage to the NETSTOR pool.
Check the current configuration using:
# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: scrub in progress since Wed Dec 9 16:08:22 2020
        1,72G scanned at 587M/s, 28,5K issued at 9,50K/s, 114G total
        0B repaired, 0,00% done, no estimated completion time
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0

errors: No known data errors
Next, expand the pool by adding the new logical drives. Be careful with this step — verify the names of the logical drives to ensure the correct disks are being added.
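One way to double-check is to list the partition labels currently visible to the primary server before running the add command (the grep pattern assumes the naming convention used in this guide):
# ls -l /dev/disk/by-partlabel/ | grep NETSTOR
Proceed only once both new labels (SW3-NETSTOR-SRV1-2 and SW3-NETSTOR-SRV2-2) appear in the output.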
# zpool add NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2 -f
After adding the new drives, the pool should display the newly added mirror (mirror-1).
# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec 6 16:13:09 2016
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0
          mirror-1              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-2  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-2  ONLINE       0     0     0

errors: No known data errors
Wait for the zpool to complete resilvering before performing any further operations. This concludes the storage pool expansion procedure.
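To monitor progress, re-run zpool status periodically, or watch it refresh automatically (assuming the watch utility is available):
# watch -n 10 zpool status NETSTOR
For reference, the sw-nvme commands used on SERVERware storage hosts are summarized in the table below.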
| Command | Description |
|---|---|
| sw-nvme list | Lists all connected devices with /dev/nvme-fabrics |
| sw-nvme discover | Discover all devices exported on the remote host with given IP and port |
| sw-nvme connect | Import remote device from given IP, port and NQN |
| sw-nvme disconnect | Remove the imported device from the host |
| sw-nvme disconnect-all | Remove all imported devices from the host |
| sw-nvme import | Import remote devices from a given JSON file |
| sw-nvme reload-import | Import remote devices from JSON file after disconnecting all current imports |
| sw-nvme enable-modules | Enable necessary kernel modules for NVMe/TCP |
| sw-nvme enable-namespace | Enable namespace with given ID |
| sw-nvme disable-namespace | Disable namespace with given ID |
| sw-nvme load | Export remote devices from a given JSON file |
| sw-nvme store | Save system configuration in JSON format if devices are exported manually |
| sw-nvme clear | Remove exported device from system configuration; with 'all' removes all configurations |
| sw-nvme export | Export device on port with given NQN |
| sw-nvme export-stop | Remove device being exported on port with given ID |
| sw-nvme reload-configuration | Export remote devices from JSON file after removing all current exports |
| sw-nvme replace-disk | Combine 'clear all' and 'reload-configuration' for easier disk replacement on SERVERware |
| sw-nvme expand-pool | Update export configuration and add new namespace into sw-mirror subsystem for SERVERware |