This is Part 5 in an ongoing series on disk performance. You can read the entire series by starting at the Introduction.

In part 2 and part 3 of this series I looked at RAID 10 and RAID 5 performance, respectively. Now I’ll show how the two rate against each other. For this comparison I’ll use all three of my test harnesses (view the test harness specifics here). For all comparisons I am using a 64 KB RAID stripe, a 64 KB partition offset, and a 64 KB allocation unit size. As in my previous posts, I’ll focus on OLTP data activity (8 KB random reads/writes).
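A quick aside on why the stripe, offset, and allocation unit all line up at 64 KB: when the partition offset is a multiple of the stripe size, an 8 KB request that starts on an 8 KB boundary never straddles two stripes, so each I/O is serviced by a single disk. Here’s a small sketch of that check (my own illustration, not part of the test harness):

```python
STRIPE = 64 * 1024   # RAID stripe size in bytes
OFFSET = 64 * 1024   # partition offset in bytes
IO_SIZE = 8 * 1024   # 8 KB OLTP-style I/O

def crosses_stripe(lba_offset: int) -> bool:
    """True if an IO_SIZE request starting at lba_offset spans two stripes."""
    start = (OFFSET + lba_offset) // STRIPE
    end = (OFFSET + lba_offset + IO_SIZE - 1) // STRIPE
    return start != end

# With a 64 KB-aligned offset, no 8 KB-aligned I/O ever splits:
print(any(crosses_stripe(i * IO_SIZE) for i in range(10_000)))  # False

# Shift the partition to a 512-byte offset (a classic misalignment) and some do:
OFFSET = 512
print(any(crosses_stripe(i * IO_SIZE) for i in range(10_000)))  # True
```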

Test Harness #1: Dell PowerEdge 2950
These tests were performed using 4 physical SCSI drives with 8 threads against a 64 GB data file at an I/O queue depth of 8.
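To make the workload concrete, here’s a rough sketch of what a tool generating “8 KB random reads with 8 threads” does under the hood. This is only an approximation: a real benchmark tool uses unbuffered, asynchronous I/O to keep 8 I/Os outstanding per thread, while this synchronous sketch keeps just one per thread (and os.pread is POSIX-only). The file path is a hypothetical stand-in for the test file.

```python
import os, random, threading, time

PATH = "testfile.dat"   # hypothetical stand-in for the 64 GB test file
IO_SIZE = 8 * 1024      # 8 KB reads
THREADS = 8             # as in this harness
DURATION = 30           # seconds per run
counts = [0] * THREADS

def worker(idx: int) -> None:
    fd = os.open(PATH, os.O_RDONLY)
    blocks = os.fstat(fd).st_size // IO_SIZE
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        # Read 8 KB from a random 8 KB-aligned offset in the file
        os.pread(fd, IO_SIZE, random.randrange(blocks) * IO_SIZE)
        counts[idx] += 1
    os.close(fd)

workers = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
for t in workers: t.start()
for t in workers: t.join()

total = sum(counts)
print(f"{total / DURATION:.0f} IOs/sec, {total * IO_SIZE / DURATION / 2**20:.2f} MBs/sec")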

Here’s what 8 KB random reads look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random reads, PowerEdge 2950]

And here’s what 8 KB random writes look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random writes, PowerEdge 2950]

Test Harness #2: Dell PowerVault 220s
Unlike the PowerEdge 2950, which uses local drives, the 220s is a standalone enclosure commonly run in “split bus” mode, which splits the drives in the enclosure into two groups and dedicates an independent SCSI bus to each group. This set of tests was run in split bus mode using 12 physical SCSI drives (i.e. 6 drives per SCSI bus). I tested workloads of 4, 8, and 16 threads against a 512 GB data file; each set of tests used an I/O queue depth of 8.

Here’s what 8 KB random reads look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random reads, PowerVault 220S]

And here’s what 8 KB random writes look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random writes, PowerVault 220S]

Test Harness #3: Dell PowerVault MD1000
The PowerVault MD series is Dell’s current offering in direct-attached storage (DASD) enclosures. The MD1000 is the replacement for the 220s, supports both SAS and SATA drives, and, just like the 220s, can operate in split bus mode. As with my 220s tests, this series was run in split bus mode using 12 physical SAS drives with workloads of 4, 8, and 16 threads against a 512 GB data file at an I/O queue depth of 8.

Here’s what 8 KB random reads look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random reads, PowerVault MD1000]

And here’s what 8 KB random writes look like:

[Charts: IOs/sec, MBs/sec, and avg latency for 8 KB random writes, PowerVault MD1000]

Conclusion
It’s common knowledge that RAID 5 offers better read performance while RAID 10 offers better write performance. The performance graphs from each test illustrate that, but to drive the point home I’ll translate the numbers into percentages. The chart below shows the RAID 10 metrics relative to RAID 5. Red numbers indicate where RAID 10 performed worse; black numbers indicate where RAID 10 performed better. (Higher values are better for IOs/sec and MBs/sec, so negative numbers mean a drop in performance. Lower values are better for latency, so a negative number in that column means latency dropped, which is an increase in performance.)

[Chart: RAID 10 performance relative to RAID 5]
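For anyone who wants to reproduce the chart’s arithmetic, the percentages work out like this. The numbers below are made up for illustration, not my measured data:

```python
def pct_change(raid10: float, raid5: float) -> float:
    """RAID 10 relative to RAID 5: positive = RAID 10 higher."""
    return (raid10 - raid5) / raid5 * 100

def pct_slower(slow: float, fast: float) -> float:
    """How much slower 'slow' is than 'fast', as in 'RAID 5 is X% slower'."""
    return (fast - slow) / fast * 100

# Hypothetical write IOs/sec for illustration only:
raid10_iops, raid5_iops = 860.0, 300.0
print(f"RAID 10 vs RAID 5: {pct_change(raid10_iops, raid5_iops):+.0f}%")  # +187%
print(f"RAID 5 is {pct_slower(raid5_iops, raid10_iops):.0f}% slower")     # 65%
```

(Remember the sign flip for latency: lower is better, so a negative change there is a win for RAID 10.)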

I knew that RAID 5 write performance was slower than RAID 10, but roughly 65% slower in both IOs/sec and MBs/sec? Wow! Arm yourself with this knowledge the next time somebody suggests that it isn’t so bad to use RAID 5 instead of RAID 10, especially if your database is write intensive.

RAID 10 Performance From RAID 5?
As I was writing this series my friend Andy Warren asked me an interesting question: what would it take to get RAID 10 performance from a RAID 5 configuration? (Obviously he was talking about write performance, since RAID 5 already offers better read performance.) Honestly, I had never thought of that angle before, but I can see how it might make sense in some circumstances.

I’ll pretend that I’m being asked to set up a RAID 5 array on a PowerVault MD1000 that gives me performance equivalent to the RAID 10 array I configured on the PowerEdge 2950. For the sake of this exercise I’ll ignore latency and focus on IOs/sec and MBs/sec. Here comes the voodoo math: at 8 threads on a 12 disk RAID 5 configuration on the MD1000, 8 KB random writes come out to 71.23 IOs/sec and 0.56 MBs/sec per disk (that’s just the total numbers divided by 12). The RAID 10 results for the 4 drive configuration on the PowerEdge 2950 were 617.12 IOs/sec and 4.82 MBs/sec; to reach those numbers at the same per-disk rates I’d need 617.12 ÷ 71.23 ≈ 8.7, which rounds up to 9 disks in a RAID 5 configuration, more than twice the 4 disks the RAID 10 array required to begin with.
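If you want to check the voodoo math yourself, it’s a one-liner in each direction (the inputs are the per-disk and target numbers quoted above):

```python
import math

# RAID 5 per-disk rates: MD1000, 12 disks, 8 threads, 8 KB random writes
per_disk_iops = 71.23
per_disk_mbps = 0.56

# RAID 10 targets: PowerEdge 2950, 4 drives
target_iops = 617.12
target_mbps = 4.82

print(math.ceil(target_iops / per_disk_iops))  # 9 disks to match IOs/sec
print(math.ceil(target_mbps / per_disk_mbps))  # 9 disks to match MBs/sec
```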

So can you get RAID 10 performance out of RAID 5? Sort of. Granted, my math was simplistic at best, but it’s obvious that you can’t just add a few extra disks to a RAID 10 array, reconfigure them as RAID 5, and expect to see equivalent performance. It’s probably going to take over 2X the disks, and practically speaking the numbers just don’t add up to make it worthwhile.

Stay tuned: in Part 6 I will take a look at how RAID 10 compares against RAID 1.
