
RAID Performance and Rebuild Times

Posted: Mon Feb 15, 2010 2:46 pm
by sremsing
Has anyone done any tests to measure the performance and rebuild times of the various RAID levels and set sizes that 3PAR supports? I have done this for other vendors but have not had time with our 3PAR.

Re: RAID Performance and Rebuild Times

Posted: Mon Mar 15, 2010 11:11 am
by Richard Siemers
Rebuild times are heavily dependent on the amount of capacity used, because RAID is done at the chunklet level instead of the whole-disk level. This is also why there are no spare drives, just spare chunklets spread across all your drives.
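To put rough numbers on that capacity dependence, here is a quick back-of-the-envelope sketch in Python. It is not a 3PAR tool, and the chunklet size, drive size, fill level, and relocation rate are all placeholder assumptions, so treat the output as illustrative only.

Code:
# Rough model: chunklet-level sparing only relocates the chunklets that are
# actually in use on the failed drive, whereas a dedicated-spare rebuild
# copies/recomputes the whole disk. All figures below are assumptions.
CHUNKLET_MB = 256          # assumed chunklet size
DRIVE_GB = 1000            # raw size of the failed drive
USED_FRACTION = 0.40       # assumed fill level of the array
RELOCATE_MB_PER_SEC = 80   # assumed aggregate relocation rate

used_chunklets = int(DRIVE_GB * 1024 * USED_FRACTION / CHUNKLET_MB)
chunklet_rebuild_min = used_chunklets * CHUNKLET_MB / RELOCATE_MB_PER_SEC / 60
whole_disk_rebuild_min = DRIVE_GB * 1024 / RELOCATE_MB_PER_SEC / 60

print(f"Chunklets to relocate: {used_chunklets}")
print(f"Chunklet-level rebuild estimate: {chunklet_rebuild_min:.0f} minutes")
print(f"Whole-disk rebuild at same rate: {whole_disk_rebuild_min:.0f} minutes")

The point is simply that at 40% used, the chunklet approach moves roughly 40% of the data a whole-disk rebuild would have to.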

To date we have only had one 1 TB drive replaced, and that was a pre-failure replacement. The process went like this:

3PAR support remotely issued servicemag commands that evacuated all the data off the magazine containing the failing drive. The tech arrived, pulled the magazine, replaced the drive, and reinstalled the magazine. Remote support confirmed the fix, then issued servicemag commands that moved all the data back onto the magazine.

As for performance testing, I do have some data we collected during our eval window that I can share. Look for a new reply; it will take me a bit to find it.

Re: RAID Performance and Rebuild Times

Posted: Mon Mar 15, 2010 1:33 pm
by Richard Siemers
Your SE should have access to detailed performance comparisons between the various RAID types supported by the 3PAR. I suggest you use that free resource to get results from more thorough testing to answer your question.

Here we tested 3PAR's RAID 5 (3+1) against a competitor's disk-based RAID 6. This information is provided as-is, without any promises, commitments, or obligations as to accuracy. Our environment is unique to us, but unlike most vendor benchmarks, mine actually compares two storage vendors using identical tests and equipment. You are strongly encouraged to perform your own benchmarks in your own environment to obtain your own results.

8k "Fifty-Fifty" refers to 8k io blocks, 50% read 50% write, 50% sequential 50% random patterns. We used IOmeter and aprox 8 SAN attached Windows hosts to generate the load. All luns were spread accross all spindles, no isolation/segregation was done. Each test machine had 4 luns each of different sizes ranging from 100g to 2 TB. Each LUN was filled to capacity with random data to eliminate "Fake Reads" due to thin provisioning utilized by both test systems.
Attachment: benchmarks1.JPG (Benchmarks)