Read vs Write performance
Posted: Tue Oct 23, 2012 10:44 am
Hi,
We've recently purchased an F400 array and we're seeing read performance which is a lot better than write performance.
The array is made of the following:
- 2 x Controllers
- SSD tier: 16 x 100GB RAID5 (3+1)
- FC tier: 32 x 600GB FC 15k RAID5 (3+1)
- NL tier: 16 x 2000GB SATA 7.2k RAID6 (8+2)
- 4 disk enclosures, so the RAID configuration withstands the loss of one enclosure.
I've tested with SQLIO using a 40GB test file to try to rule out the results being skewed by cache, and AO (Adaptive Optimization) was turned off while benchmarking each tier. The x-axis shows the number of queued requests.
We're happy that the SSD tier really boosts random I/O, which is why we purchased it. I understand that RAID 5 and 6 traditionally incur write penalties from parity calculation compared to RAID10. I've seen figures before showing reads faster than writes on P2000, P4500 and EVA arrays, but the gap normally narrows, and sometimes even reverses, for larger block size sequential workloads as opposed to random ones on those arrays.
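The read/write gap follows from the classic RAID write penalty: each random small-block write costs 2 back-end I/Os on RAID10, 4 on RAID5 (read data, read parity, write data, write parity) and 6 on RAID6. A back-of-envelope sketch in Python; the per-disk IOPS figures are assumptions for illustration, not measurements from this array, and it ignores 3PAR wide striping and write cache:

```python
# Rough front-end IOPS per tier from classic RAID write penalties.
# Disk counts match the F400 config above; per-disk IOPS are assumed.

def frontend_iops(disks, disk_iops, read_fraction, write_penalty):
    """Front-end IOPS achievable for a given read/write mix.

    Reads cost 1 back-end I/O each; writes cost `write_penalty`.
    """
    backend = disks * disk_iops
    write_fraction = 1.0 - read_fraction
    return backend / (read_fraction * 1 + write_fraction * write_penalty)

tiers = {
    "SSD RAID5 (3+1)": (16, 5000, 4),  # assumed ~5000 IOPS per SSD
    "FC  RAID5 (3+1)": (32, 180, 4),   # assumed ~180 IOPS per 15k disk
    "NL  RAID6 (8+2)": (16, 75, 6),    # assumed ~75 IOPS per 7.2k disk
}

for name, (disks, iops, penalty) in tiers.items():
    reads = frontend_iops(disks, iops, 1.0, penalty)   # 100% random read
    writes = frontend_iops(disks, iops, 0.0, penalty)  # 100% random write
    print(f"{name}: ~{reads:,.0f} read IOPS vs ~{writes:,.0f} write IOPS")
```

Whatever per-disk figures you plug in, pure random writes land at 1/4 of reads on the RAID5 tiers and 1/6 on the RAID6 tier, which is why a large read/write gap on random workloads is expected rather than a fault.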
Does the 3PAR have a significant performance penalty for RAID5 over RAID10? (Presales quoted us around 9%, which in our case was acceptable compared to the extra capacity RAID10 mirroring would require.)
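The capacity side of that trade-off is simple arithmetic: RAID5 (3+1) leaves 75% of raw capacity usable, while RAID10 mirroring leaves 50%. Using the FC tier from the config above as the worked example:

```python
# Usable capacity of the FC tier (32 x 600GB) under RAID5 (3+1) vs RAID10.
raw_gb = 32 * 600

raid5_usable = raw_gb * 3 / 4   # RAID5 (3+1): 1 parity chunk per 4
raid10_usable = raw_gb / 2      # RAID10: every chunk is mirrored

print(f"Raw:          {raw_gb:,.0f} GB")
print(f"RAID5 (3+1):  {raid5_usable:,.0f} GB usable")
print(f"RAID10:       {raid10_usable:,.0f} GB usable")
# RAID10 yields a third less usable space than RAID5 (3+1) here.
```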
The I/O we're getting should still suit our needs, but I'm wondering whether this is expected behaviour, and whether there is any tweaking, or any options we can turn on, to improve write performance on the array?