
SCI 2011Sep29 Review of latest SPECsfs2008 performance results

We return now to file system performance and analyze the latest SPECsfs® 2008* benchmark results. There has been only one new NFS benchmark submission since our last report, an HDS 3090-G2 (powered by BlueArc®) cluster run, and it did not break into our usual throughput or ORT top 10 charts. There were also no new CIFS benchmark submissions since our last report. But first, let's correct a mistake from our last SPECsfs2008 report.

Latest SPECsfs2008 results

[Figure: scatter plot of NFS throughput per disk (blue) and CIFS throughput per disk (red), with a linear regression line drawn for each; the CIFS regression line has a steeper slope than the NFS line.]

(SCISFS110929-001) (c) 2011 Silverton Consulting, All Rights Reserved

 

Figure 1 Scatter plot of “NFS throughput” per disk vs. “CIFS throughput” per disk

We made a mistake in the previous version of this chart, so we have fixed and updated it here. The main differences from the original are the removal of multiple EMC Isilon NFS and CIFS runs and, of course, the addition of the latest HDS NAS 3090-G2 NFS submission, which lands in the lower left corner.

As it turns out, Isilon had been using SSDs all along in its submissions, and we had not caught this before. We don't know how we missed it, as Isilon's benchmark reports clearly indicated the use of SSDs. Be that as it may, we apologize for any confusion we may have caused.

The correlations between number of disks and protocol throughput are still quite good: 0.98 for CIFS and 0.82 for NFS. That said, without the EMC Isilon CIFS submissions we have only 15 CIFS runs versus 37 NFS runs, and the CIFS results skew toward low-end systems. Nonetheless, the results are still impressive and clearly show an advantage for CIFS, at least with respect to throughput per disk spindle deployed.
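For readers who want to reproduce this kind of fit themselves, here is a minimal sketch of the per-protocol regression behind Figure 1. The (disk count, throughput) pairs below are made-up illustrative values, not the actual SPECsfs2008 submission data:

```python
# Minimal sketch of the per-protocol fit behind Figure 1, using
# hypothetical (disk spindles, SPECsfs2008 ops/sec) pairs -- NOT the
# published submission data.
import numpy as np

nfs_runs = [(24, 20000), (48, 46000), (96, 85000), (292, 193000)]    # illustrative
cifs_runs = [(8, 12000), (20, 32000), (65, 97000), (280, 420000)]    # illustrative

def fit(runs, label):
    disks, ops = map(np.array, zip(*runs))
    slope, intercept = np.polyfit(disks, ops, 1)   # linear regression line
    r = np.corrcoef(disks, ops)[0, 1]              # Pearson correlation
    print(f"{label}: slope = {slope:.0f} ops/sec per disk, r = {r:.2f}")

fit(nfs_runs, "NFS")
fit(cifs_runs, "CIFS")
```

With the real submission data, this is the calculation that yields the 0.98 (CIFS) and 0.82 (NFS) correlations cited above, and the steeper CIFS slope visible in Figure 1.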

[Figure: column plot of NFS throughput operations per second per disk spindle, with Avere (2-node, 6-node and 1-node) taking the top 3 spots.]

(SCISFS110929-002) (c) 2011 Silverton Consulting, All Rights Reserved

Figure 2 Top 10 NFS throughput operations per second per disk drive

Higher is better on this chart. The newest entry, the HDS 3090-G2 BlueArc system, comes in at #10. BlueArc (and HDS, its former OEM partner and now parent company) hold 7 of the top slots here, and Avere has the rest. The results above seem to indicate that the HDS 3090 system uses BlueArc's Mercury (midrange) controller, but from the reports it could just as easily have been the Titan (high-end) controller.

As you may recall, the Avere system is a NAS virtualization engine that sits in front of other NAS boxes. Note that this chart excludes any and all submissions using SSD or NAND-based caching. In all honesty, though, the Avere systems have lots of cache (163GB and 424GB of RAM for #1 and #2, respectively), and the latest HDS entry is no slouch either, with 184GB of RAM caching spread across the VSP and BlueArc controllers. We may need to establish a cutoff limit for RAM here as well.
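To make the ranking method concrete, here is a rough sketch of how a chart like Figure 2 gets built: divide each submission's ops/sec by its spindle count, drop anything using SSD or NAND caching, and keep the top 10. The system names and numbers below are placeholders, not the published results:

```python
# Sketch of the "top 10 ops/sec per disk" ranking behind Figure 2.
# Systems and values are placeholders; the uses_ssd flag mirrors our
# rule of excluding submissions with SSD or NAND-based caching.
from dataclasses import dataclass

@dataclass
class Run:
    system: str
    ops_per_sec: float
    disks: int
    uses_ssd: bool = False

runs = [
    Run("Vendor A 2-node", 131000, 80),                  # illustrative values
    Run("Vendor B cluster", 193000, 292),
    Run("Vendor C w/ SSD", 1100000, 864, uses_ssd=True), # would be excluded
]

eligible = [r for r in runs if not r.uses_ssd]
top10 = sorted(eligible, key=lambda r: r.ops_per_sec / r.disks, reverse=True)[:10]
for rank, r in enumerate(top10, 1):
    print(f"#{rank} {r.system}: {r.ops_per_sec / r.disks:,.0f} ops/sec per disk")
```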

Even though there have been no new CIFS submissions, we provide a CIFS counterpart to Figure 2 below, because we have not shown one recently.

[Figure: column plot of CIFS throughput per disk spindle, with the Fujitsu TX300 taking the #1 and #3 slots and the Apple Xserve taking second.]

(SCISFS110929-003) (c) 2011 Silverton Consulting, All Rights Reserved

 

Figure 3 Top 10 CIFS throughput operations per second per disk drive

In Figure 3, the Fujitsu TX300 S5 RAID 50, Apple's Xserve, and the Fujitsu TX300 S5 RAID 0 (ouch!) take top honors with 20, 65 and 20 disks, respectively. In contrast, EMC's Celerra VG8 had 280 disks and the Huawei Symantec system 1,344, considerably more drives than the entry-level systems that dominate this chart.
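The arithmetic behind the per-spindle comparison is simple division, which is why low-spindle boxes float to the top of this chart. The sketch below uses the disk counts quoted above with made-up ops/sec figures, purely to show the effect:

```python
# Per-disk throughput is total ops/sec divided by spindle count.
# Disk counts come from the text above; ops/sec values are hypothetical.
configs = {
    "Fujitsu TX300 S5 (20 disks)": (40_000, 20),
    "Apple Xserve (65 disks)": (100_000, 65),
    "EMC Celerra VG8 (280 disks)": (280_000, 280),
    "Huawei Symantec (1344 disks)": (700_000, 1344),
}
for name, (ops, disks) in configs.items():
    print(f"{name}: {ops / disks:,.0f} ops/sec per disk")
```

Even when the big arrays post much higher absolute throughput, dividing by hundreds or thousands of spindles pulls their per-disk numbers well below the small configurations.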

 

 

Significance

We believe CIFS is still winning its horse race against NFS, but future submissions could tip the scales either way. More CIFS submissions, especially at the enterprise level (and without the use of SSDs), would help.

As an aside, we went looking to determine which version of SMB (CIFS) SPECsfs2008 uses, but we could not find one stated in the reports. This probably means it uses SMB 1 rather than Microsoft's latest SMB 2.1, which might speed it up even more. SPECsfs2008 does, on the other hand, clearly state that it uses NFSv3; it's unclear to us whether NFSv4 would speed up NFS throughput per disk or not.

As always, we welcome any recommendations for improving our SPECsfs2008 analysis. For the discriminating storage analyst, or for anyone who wants to learn more, we now include a top 30 version of these and all our other charts, plus further refined performance analysis, in our NAS briefing, which is available for purchase from our website.

[This performance dispatch was originally sent out to our newsletter subscribers in September of 2011.  If you would like to receive this information via email please consider signing up for our free monthly newsletter (see subscription request, above right) or subscribe by email and we will send our current issue along with download instructions for this and other reports.  Also, if you need an even more in-depth analysis of NAS storage system features and performance please take the time to examine our recently updated (September, 2014) NAS Buying Guide available for purchase from our website.]

~~~~

Silverton Consulting, Inc. is a Storage, Strategy & Systems consulting services company based in the USA, offering products and services to the data storage community.

* SPECsfs2008 results from http://www.spec.org/sfs2008/results/

