RAID Performance and Rebuild Times
Has anyone done any tests to measure the performance and rebuild times of the various RAID formats and set sizes that 3PAR supports? I have done this for other vendors but have not had time with our 3PAR.
Steve Remsing
Sr. Research Engineer
The Dow Chemical Company
- Richard Siemers
Re: RAID Performance and Rebuild Times
Rebuild times depend heavily on the amount of capacity in use. This is because RAID is done at the chunklet level instead of the whole-disk level, which is also why there are no dedicated spare drives, just spare chunklets spread across all of your drives.
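To make that concrete, here is a rough back-of-the-envelope sketch. Every number in it (drive size, rebuild rate, spindle count, capacity used) is an illustrative assumption, not a 3PAR specification:

```python
# Illustrative only: compares a traditional whole-disk rebuild, which is
# bottlenecked by the write speed of a single hot spare, against a
# chunklet-style distributed rebuild that only copies the space in use
# and spreads the work across many drives. All figures are assumptions.

DRIVE_TB = 1.0                 # assumed drive size (TB)
USED_FRACTION = 0.40           # assumed fraction of the drive holding data
PER_DISK_MBPS = 50.0           # assumed sustained rebuild rate per spindle
SPINDLES_SHARING_REBUILD = 40  # assumed drives holding spare chunklets

TB_TO_MB = 1_000_000

def hours(megabytes: float, rate_mbps: float) -> float:
    """Convert a copy of `megabytes` at `rate_mbps` MB/s into hours."""
    return megabytes / rate_mbps / 3600

# Whole-disk rebuild: every block, written to one spare drive.
whole_disk = hours(DRIVE_TB * TB_TO_MB, PER_DISK_MBPS)

# Chunklet rebuild: only the used chunklets, written in parallel to
# spare chunklets spread across many drives.
chunklet = hours(DRIVE_TB * USED_FRACTION * TB_TO_MB,
                 PER_DISK_MBPS * SPINDLES_SHARING_REBUILD)

print(f"Whole-disk rebuild: ~{whole_disk:.1f} h")
print(f"Chunklet rebuild:   ~{chunklet:.2f} h")
```

The exact figures are not the point; the point is that a chunklet rebuild scales with used capacity and with the number of spindles sharing the work, while a whole-disk rebuild is gated by one spare drive.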
To date we have had only one 1 TB drive replaced, and it was replaced pre-failure. The process went like this:
3PAR support remotely issued servicemag commands that evacuated all the data off the magazine containing the failing drive. The tech arrived, pulled the magazine, replaced the drive, and reinstalled the magazine. Remote support confirmed the fix, then issued servicemag commands that moved all the data back onto the magazine.
As for performance testing, I do have some data we collected during our eval window that I can share. Look for a new reply; it will take me a bit to find it.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
- Richard Siemers
Re: RAID Performance and Rebuild Times
Your SE should have access to detailed performance comparisons between the various RAID types that 3PAR supports. I suggest you use that free resource to get results from more thorough testing to answer your question.
Here we tested 3PAR's RAID 5 (3+1) against a competitor's disk-based RAID 6. This information is provided as-is, without any promises, commitments, or obligations as to accuracy. Our environment is unique to us, but unlike most vendor benchmarks, mine actually compares two storage vendors using identical tests and equipment. You are strongly encouraged to run your own benchmarks in your own environment and obtain your own results.
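For context on why these two layouts differ on paper, here is a quick worked comparison using textbook numbers. The RAID 6 group width (6+2) is an assumption for illustration; only the RAID 5 3+1 width comes from our actual setup:

```python
# Textbook usable-capacity and small-write-penalty figures for the two
# layouts under test. The RAID 6 width (6+2) is assumed for illustration.

def raid_summary(name: str, data_disks: int, parity_disks: int,
                 write_penalty: int) -> str:
    usable = data_disks / (data_disks + parity_disks)
    return (f"{name}: {usable:.0%} usable, "
            f"{write_penalty} back-end I/Os per small random write")

# RAID 5 small write: read data + read parity + write data + write parity = 4
print(raid_summary("RAID 5 3+1", 3, 1, 4))
# RAID 6 small write: read data + read 2 parities + write data + write 2 parities = 6
print(raid_summary("RAID 6 6+2", 6, 2, 6))
```

On a workload that is 50% writes, like the one described next, that difference in small-write penalty is usually what dominates.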
8k "Fifty-Fifty" refers to 8k io blocks, 50% read 50% write, 50% sequential 50% random patterns. We used IOmeter and aprox 8 SAN attached Windows hosts to generate the load. All luns were spread accross all spindles, no isolation/segregation was done. Each test machine had 4 luns each of different sizes ranging from 100g to 2 TB. Each LUN was filled to capacity with random data to eliminate "Fake Reads" due to thin provisioning utilized by both test systems.
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.