
SSD7000 Series Performance Test Guide (Linux)


This knowledge base (KB) article applies to the following NVMe RAID AICs.

 

Table 1: Supported NVMe RAID AICs

SSD7101A-1    SSD7104     SSD7105     SSD7204
SSD7140A      SSD7540     SSD7749M    SSD7749M2
SSD7749E      SSD7505     SSD7202     SSD7502
SSD7120       SSD7180     SSD7184     SSD7580B
SSD7580C

 

  • Steps

1. Download the Performance Test tool.

We recommend using the fio utility to test the NVMe RAID array’s performance in a Linux environment.

1) Download fio (the following example uses an Ubuntu 20.04 system):

#apt-get install fio
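Before running any jobs, it can be worth confirming that fio is actually on the PATH and reporting a version. The helper below is not part of the original KB, just a small sanity check; the `fio_ok` name and the version-string test are assumptions:

```shell
# Sanity check (not from the KB): verify that a given fio binary
# runs and reports a fio-style version string before benchmarking.
fio_ok() {
    v=$("$1" --version 2>/dev/null) || return 1
    case "$v" in
        fio-*) return 0 ;;   # e.g. "fio-3.28"
        *)     return 1 ;;
    esac
}
```

Usage: `fio_ok fio && echo "fio is ready"`.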


2. Check the PCIe Lane assignment.

WebGUI:

1) Start the WebGUI management software and click the Physical--Enclosure 1 tab.

a. SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally.

b. SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally.

c. SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.

2) If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.

CLI:

1) Open a command terminal and enter the following command to start the CLI:

#hptraidconf

2) Enter the following command to check the PCIe Lane assignment:

HPT CLI> query enclosures

a. SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally.

b. SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally.

c. SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.

3) If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.
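Outside the WebGUI/CLI, the negotiated link can also be read directly from sysfs on Linux. The snippet below is a sketch, not part of the HighPoint tools: the PCI address `0000:21:00.0` is an illustrative example (locate the real one with `lspci | grep -i highpoint`), and the `SYSFS_PCI` variable exists only to make the helper easy to retarget:

```shell
# Read the negotiated PCIe link speed and width from sysfs for a
# device at a given PCI address. SYSFS_PCI defaults to the real
# sysfs tree; the address in the example is hypothetical.
SYSFS_PCI=${SYSFS_PCI:-/sys/bus/pci/devices}

link_status() {
    dev="$SYSFS_PCI/$1"
    printf '%s x%s\n' "$(cat "$dev/current_link_speed")" \
                      "$(cat "$dev/current_link_width")"
}
# Example: link_status 0000:21:00.0
```

For an SSD7500 Series AIC running at full speed you would expect output along the lines of `16.0 GT/s PCIe x16`.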

3. Configure the RAID Array (e.g. RAID 0)

1) Create a RAID array using the WebGUI or CLI.

WebGUI:

a. To configure the NVMe RAID array, access the WebGUI management software, and click the Logical tab.

b. Click Create Array and configure the NVMe SSDs as a RAID 0 array.

CLI:

a. Open a command terminal and enter the following command to start the CLI:

#hptraidconf

b. Enter the following command to create the RAID array:

HPT CLI> create RAID0 disks=* capacity=* init=quickinit bs=512K

2) Format the RAID array; use the following command:

#mkfs.ext4 /dev/hptblock0n* -E lazy_itable_init=0,lazy_journal_init=0

3) Mount the disk:

#mount /dev/hptblock0n* /mnt
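It is easy to point fio at an unmounted directory by mistake. The helper below is not from the KB (the `check_mount` name is an assumption); it prints the filesystem type if the given mount point appears in `/proc/mounts`, and nothing otherwise:

```shell
# Print the filesystem type of a given mount point, or nothing if
# it is not mounted, by scanning /proc/mounts (Linux-specific).
check_mount() {
    awk -v m="$1" '$2 == m { print $3 }' /proc/mounts
}
# Example: check_mount /mnt   (expect "ext4" after the mkfs/mount steps)
```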

4. Start the Performance Test (e.g. RAID 0)

Single CPU performance test

1) Open a command terminal and select the performance test script that corresponds with the number of CPUs in the system.

2M sequential read performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-read

2M sequential write performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-write

4K random read performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-read

4K random write performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-write
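The four scripts above differ only in access pattern, block size, and job count, so they can be wrapped in one small helper. This is a convenience sketch, not part of the KB; `run_fio` and the `FIO_BIN`/`TARGET` variables are assumptions (set `FIO_BIN=echo` to dry-run the command lines before committing to four 60-second tests):

```shell
#!/bin/sh
# Wrap the KB's four fio jobs in one helper. FIO_BIN and TARGET
# are overridable so the commands can be dry-run or retargeted.
FIO_BIN=${FIO_BIN:-fio}
TARGET=${TARGET:-/mnt/test1.bin}

run_fio() {
    # $1 = job name, $2 = rw pattern, $3 = block size, $4 = numjobs
    "$FIO_BIN" --filename="$TARGET" --direct=1 --rw="$2" \
        --ioengine=libaio --bs="$3" --iodepth=64 --size=10G \
        --numjobs="$4" --runtime=60 --time_based=1 \
        --group_reporting --name="$1"
}
# Example:
#   run_fio test-seq-read   read      2m 1
#   run_fio test-rand-write randwrite 4k 8
```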

 

Multi-CPU performance test

1) First, confirm which CPU corresponds with the PCIe slot the card is installed in, and then specify that CPU for the performance test.

a. Use the following command to view the NUMA node corresponding to each CPU, and confirm the cpus value that corresponds with each CPU:

#lscpu

In the following example, the node corresponding to CPU1 is 0, and the node corresponding to CPU2 is 1.

The cpus corresponding to CPU1 are: 0-11, 24-35

The cpus corresponding to CPU2 are: 12-23, 36-47

b. Confirm which PCIe slot of the motherboard the HighPoint NVMe RAID controller is installed in. If that slot corresponds to CPU1, specify a cpu value belonging to CPU1 during the performance test. The scripts below use several workers to correspond with the number of cpus.
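The cpus string for a node can be pulled out of the lscpu output automatically instead of being copied by hand. The `node_cpus` helper below is a sketch (the function name is an assumption); it relies on lscpu's standard `NUMA nodeN CPU(s):` line format:

```shell
# Extract the CPU list for a given NUMA node from lscpu output,
# ready to pass to "taskset -c". Assumes lscpu's standard
# "NUMA nodeN CPU(s):   0-11,24-35" line format.
node_cpus() {
    lscpu | awk -v n="$1" -F: \
        '$1 == ("NUMA node" n " CPU(s)") { gsub(/ /, "", $2); print $2 }'
}
# Example: taskset -c "$(node_cpus 0)" fio ...
```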

2M sequential read performance script:

# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-read

2M sequential write performance script:

# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-write

4K random read performance script:

# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-read

4K random write performance script:

# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-write

