Thursday, September 12, 2013

Performance test on SSD and NFS

Note:
The first of the two test targets is a 2.5TB RAID array (/ssd, 12x 2.5" SSDs, XFS).
The other is a 400GB PCIe SSD (/ssd2, ext4).
Both are attached to the same server, which has only a GbE network, so NFS performance on them will be limited by the network.
All clients use GbE as well.
The server and all clients run CentOS 6.4.

The tests are done with Bonnie++. The four test types are:
1. Large-file sequential I/O (>1GB)
2. 1MB file creation, 1024 files
3. 100KB file creation, 2048 files
4. 10KB file creation, 4096 files
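For reference, the four runs above map to Bonnie++ invocations roughly like the following. This is a dry-run sketch, not my exact command lines; the 2GB sequential size and the /ssd path are assumptions (Bonnie++'s -n count is given in multiples of 1024 files, which matches the counts above nicely):

```shell
# Dry-run sketch of the four Bonnie++ invocations (just prints the commands).
# -d = test directory, -s = sequential-test file size in MB (0 skips it),
# -n = file-creation test: "count(in multiples of 1024):max_bytes:min_bytes"
DIR=/ssd    # or /ssd2, or the NFS mount point on a client

run() { echo "bonnie++ -d $DIR $*"; }   # drop the 'echo' to actually run

run -s 2048 -n 0                  # 1. large sequential file (2GB, an assumption)
run -s 0 -n 1:1048576:1048576     # 2. 1024 x 1MB files
run -s 0 -n 2:102400:102400       # 3. 2048 x 100KB files
run -s 0 -n 4:10240:10240         # 4. 4096 x 10KB files
```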

Read tests are done only on large files. These hosts have a lot of RAM, so to get accurate numbers I should create a file big enough to overflow the page cache (>2x the RAM size is recommended). But doing that for the file-creation tests would cost too much time, so... (in short, I'm lazy about this)
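If I weren't lazy about it, the sequential file size would be derived from the machine's RAM, something like this Linux-only sketch (/proc/meminfo parsing; the /ssd path is again just an example):

```shell
# Linux-only sketch: size Bonnie++'s -s to 2x physical RAM (in MB) so the
# page cache can't hide the disk, per the >2x RAM recommendation above.
ram_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
echo "bonnie++ -d /ssd -s $((ram_mb * 2)) -n 0"
```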

Another post on Bonnie++ will be here soon ;)

Let's look at local SSD performance first (all speeds in MB/s):


            1GB read   1GB write   1MB write   100KB write   10KB write
/ssd          739.02      699.54      631.00        173.50        21.04
/ssd2        1223.17     1152.45      831.00        644.20       104.50

Woo! We get over 1.2GB/s on the PCIe SSD!
The high IOPS of SSDs really wins when small files are involved.
The SSD RAID, however, suffers from its RAID controller and loses to the single SSD.
Maybe this particular model is just not optimized for SSDs, so don't assume this holds for all RAID hardware.

Going through NFS slows things down, of course. Here are the results for 1 client, 8 clients, and 16 clients running simultaneously:
(I forgot to take the large-file read numbers in the multi-client tests. Spare me; they would just be limited by GbE anyway...)
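The multi-client runs are just the same Bonnie++ command kicked off on every client at roughly the same time. A sketch via ssh; the client hostnames and the /mnt/ssd mount point are made up, and this version only prints the commands:

```shell
# Hypothetical client names and NFS mount point; prints the per-client commands.
cmds=$(for i in $(seq 1 8); do
    # In a real run each ssh would be backgrounded with '&' so all
    # clients start together, followed by a 'wait'.
    echo "ssh client$i bonnie++ -d /mnt/ssd -s 0 -n 4:10240:10240"
done)
echo "$cmds"
```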

NFS test             1GB read   1GB write   1MB write   100KB write   10KB write
1 client    /ssd       117.74      102.00       77.00         33.40         6.56
            /ssd2      113.09      101.25       75.00         56.50        17.39
8 clients   /ssd            -      110.87      112.00        118.10        64.60
            /ssd2           -      105.94      111.00        115.70        66.32
16 clients  /ssd            -      111.10      102.00        119.40        95.76
            /ssd2           -       80.47      108.00        117.90       103.26

Comparing NFS to local-disk performance shows how heavy the network overhead is when small files are involved.
