Read vs Write performance

michaelj
Posts: 1
Joined: Mon Oct 22, 2012 8:46 am

Read vs Write performance

Post by michaelj »

Hi,

We've recently purchased an F400 array and we're seeing read performance which is a lot better than write performance.

The array is made up of the following:
    2 x Controllers
    SSD tier: 16 x 100GB RAID5 (3+1)
    FC tier: 32 x 600GB FC 15k RAID5 (3+1)
    NL tier: 16 x 2000GB SATA 7.2k RAID6 (8+2)
    4 disk enclosures, so the RAID configuration withstands the loss of one enclosure (a rough usable-capacity sketch is below).
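
For reference, here is a rough sketch of the raw vs. usable capacity those RAID layouts imply. This is my own back-of-the-envelope Python; it ignores chunklet and sparing overhead, so the real usable figures will be somewhat lower:

    # Rough usable-capacity math for the three tiers described above.
    # Assumes the stated RAID layouts only; ignores chunklet/sparing
    # overhead, so actual 3PAR usable capacity will be somewhat lower.
    tiers = [
        # (name, drive count, drive size GB, data disks, parity disks)
        ("SSD RAID5 (3+1)", 16, 100, 3, 1),
        ("FC  RAID5 (3+1)", 32, 600, 3, 1),
        ("NL  RAID6 (8+2)", 16, 2000, 8, 2),
    ]
    for name, drives, size_gb, data, parity in tiers:
        raw_gb = drives * size_gb
        usable_gb = raw_gb * data / (data + parity)
        print(f"{name}: raw {raw_gb / 1000:.1f} TB, usable ~{usable_gb / 1000:.1f} TB")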


I've tested using SQLIO with a 40 GB test file to try to rule out results being skewed by the cache. Adaptive Optimization (AO) is also turned off while benchmarking each tier. The x-axis on the charts shows the number of outstanding queued requests.

[Attached charts: SQLIO read and write results for each tier, plotted against outstanding queued requests]

We're happy with the fact that the SSD tier really boosts random I/O, which is why we purchased it. I understand that RAID 5 and RAID 6 traditionally incur write penalties from parity calculation compared to RAID 10. I've seen stats before showing reads faster than writes on the P2000, P4500 and EVA, but on those arrays the gap normally closes, and writes sometimes even come out ahead, for larger block-size sequential workloads as opposed to random ones.

Does the 3PAR have a significant performance penalty for RAID 5 over RAID 10? (Presales told us around 9%, which in our case was acceptable compared to the extra capacity RAID 10 mirroring would require.)

The I/O we're getting should still suit our needs, but I'm just wondering if this is expected behavior, and whether there is any tweaking, or options we can turn on, to improve the write performance on the array?
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Read vs Write performance

Post by Richard Siemers »

Interesting data, thanks for sharing.

This is an example of using the right tool for the right job. Random small-block I/O is ideal on SSD because there is no seek time associated with a spindle and a head. Switching to large-block sequential I/O has little to no benefit for SSD, and a huge benefit for spinning disk. It looks like all three tiers are read-bottlenecking at the same physical location, capping at about 2.5 gigabits per second (5000 * 64 KB).
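
To make the cap arithmetic explicit, here is the same math as a tiny Python sketch (just restating the numbers above):

    # Convert an IOPS plateau at a fixed block size into bandwidth.
    iops = 5000      # observed read plateau in the 64 KB test
    block_kb = 64    # benchmark block size

    mb_per_s = iops * block_kb / 1024   # KB/s -> MB/s
    gbit_per_s = mb_per_s * 8 / 1000    # rough MB/s -> Gbit/s
    print(f"{iops} IOPS * {block_kb} KB = {mb_per_s:.0f} MB/s ~= {gbit_per_s:.1f} Gbit/s")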

The 64 KB test is minimizing the SSD's strengths and the FC/SATA disks' weaknesses, and it is also exposing a weakness of SSD: the "block rewrite penalty". Empty SSD blocks are ready to be written to in 4 KB pages; however, editing or rewriting an existing page requires the entire 512 KB erase block to be read, erased, and rewritten.
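
As a toy illustration of that penalty, here is the worst-case read-modify-write cost using the page and block sizes above. This is only a sketch; real drives mitigate it with spare area and wear leveling:

    # Worst-case cost of rewriting one page in place when the drive
    # has no pre-erased block to redirect the write to.
    page_kb = 4           # NAND program (page) size
    erase_block_kb = 512  # NAND erase block size

    host_write_kb = page_kb
    internal_write_kb = erase_block_kb  # whole block read, erased, reprogrammed
    amplification = internal_write_kb / host_write_kb
    print(f"{amplification:.0f}x write amplification: "
          f"{internal_write_kb} KB reprogrammed to change {host_write_kb} KB")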

The RAID 5 penalty for 3+1 is about 9%, but it gets higher as more devices are added to the RAID set. RAID 5 (8+1) is about a 19% penalty vs. RAID 10.
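
For anyone who wants to model this themselves, the textbook small-write model counts back-end I/Os per host write. Note these are the generic textbook numbers, not the 3PAR-specific percentages quoted above:

    # Classic small random-write penalty: back-end I/Os per host write.
    # RAID 10 writes two mirror copies; RAID 5 reads old data + old
    # parity, then writes new data + new parity; RAID 6 adds a second
    # parity read/write.
    penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    backend_iops = 10000  # hypothetical raw back-end write IOPS
    for level, ios in penalty.items():
        print(f"{level}: ~{backend_iops / ios:.0f} host write IOPS "
              f"from {backend_iops} back-end IOPS")

The 3PAR's write cache tries to coalesce small writes into full-stripe writes, which is presumably how the quoted penalty ends up far below the textbook 4x for RAID 5.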

As far as tweaking goes, I think your 64 KB sequential write benchmark is not a typical real-world pattern, except maybe when doing a full restore to SSD. However, keep an eye out for SSD drive firmware enhancements. Manufacturers are tweaking their wear-leveling code to maximize the opportunity for writes to land on a clean 4 KB page, thus avoiding the block rewrite penalty. I have yet to hear or read of any integrated HP intellectual property in the area of flash write performance, so I think they pretty much leverage the SSD drive's internal technology, like STEC's CellCare.

--Richard
Attachment: image001.png (64.3 KiB)
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Conrad
Posts: 2
Joined: Fri Sep 07, 2012 2:42 pm

Re: Read vs Write performance

Post by Conrad »

You may also be running into cage limits or other systemic limits. The 30K random reads figure seems low.