Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 6:48 am
by slink
Wonder if I could get some help to try and understand how to size a 3PAR based on some performance figures from an existing EVA. I know it's a fine art with a gazillion variables but some general guidance would be good.
I don't have access to the servers accessing the array, only a dump of some EVAPerf data, from which, to my untrained eye, the existing EVA appears capable of good performance but is getting hammered and the latency is terrible. I've been asked to replace it with a 3PAR that will satisfy the performance and capacity requirements.
The key stats I have are:
HP EVA 8100
No thin provisioning, 95TB allocated
Peak frontend controller reaching around 12,000 IOPS
Throughput peak of around 472MB/Sec
Read latency response times reaching 23ms average, 40ms at the 95th percentile, and 202ms max
DG1 (FATA 48 x 1TB 7.2K disks, VRAID5, 60/40 r/w):
Peak backend disk performance reaching ~7,392 IOPS
95th percentile across all backend disk drives: ~5,760 IOPS
DG2 (FC 144 x 600GB 10K disks, VRAID5, 70/30 r/w):
Peak backend disk performance reaching ~31,824 IOPS
95th percentile across all backend disk drives: ~8,928 IOPS
DG3 (FC 36 x 600GB 10K disks, VRAID1, 30/70 r/w):
Peak backend disk performance reaching ~5,796 IOPS
95th percentile across all backend disk drives: ~72 IOPS
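For context on how figures like these get pulled out of a dump, here is a rough sketch of extracting peak and 95th-percentile IOPS from an EVAPerf CSV export. The column name is an assumption for illustration only; real evaperf output varies by subcommand and version, so match it to the headers in your own dump.

```python
import csv

def iops_stats(path, iops_column="Total IOPS"):
    """Peak and 95th-percentile IOPS from an EVAPerf CSV export.

    The column name is an assumption for illustration; adjust it
    to match the headers in your actual evaperf dump.
    """
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append(float(row[iops_column]))
    samples.sort()
    peak = samples[-1]
    # nearest-rank style 95th percentile over the sorted samples
    p95 = samples[int(0.95 * (len(samples) - 1))]
    return peak, p95
```

The peak tells you the worst interval the array actually served; the 95th percentile is usually the more honest sizing figure, since a single spike shouldn't dictate the spindle count.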
I've tried to find 3PAR sizer/calculators to punch some of these figures into but seems there's nothing specific out there or it's restricted to HP reps and resellers. Where can I start with this trying to calculate and size for myself in the world of the 3PAR?
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 2:38 pm
by skumflum
I think someone has made some bad decisions sizing and partitioning your EVA. About 50% of the IO is on 48 FATA disks! Sorry, but this is asking for trouble. Moreover, why not use all the 600GB disks in one disk group and capitalise on all the spindles?
There is a tool available from HP, but you have to be an HP employee or certified partner to get a license. To complicate matters, 3PAR has a unique (in the world of HP) feature called Adaptive Optimization, where data is moved between tiers.
I would advise you to spend some money on an experienced 3PAR guy. You are about to make a significant investment, and the money spent to get some help is just a drop in the ocean.
[edit] I called for help
[/edit]
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 3:44 pm
by slink
Yes, I don't disagree with you about the config of the existing EVA, and I do indeed have some meetings lined up with HP to help with this. For my own knowledge, though, I was just wondering how to interpret these figures: is it possible to reverse engineer the requirements from the performance data of an existing SAN, or do you have to go to each attached host, benchmark IOPS there, and total it all up?
That max backend IOPS of 31,824: does that mean any new array has to be able to reach that value but at low latency, or does it mean the EVA maxed out at that value and the connecting hosts actually wanted more?
I was thinking that there should be a value for average total read requests and average total write requests. Adding those gives an average total IO across the entire array, which is effectively the IOPS requirement. It would then be a case of doing some calculations to reach that value (with maybe 20% headroom for growth) at low latencies (I was thinking <5ms), which would give me the number and type of disks I needed.
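A minimal sketch of that arithmetic; the 20% growth figure and the <5ms goal are my own numbers from above, not 3PAR guidance:

```python
def frontend_target(avg_read_iops, avg_write_iops, growth=0.20):
    """Frontend IOPS target: observed average load plus growth headroom.

    This only sets the IOPS figure; the latency goal (e.g. <5ms)
    is a separate constraint the chosen disk mix must meet.
    """
    return (avg_read_iops + avg_write_iops) * (1 + growth)

# e.g. 8,000 average reads + 4,000 average writes with 20% headroom
# gives a 14,400 IOPS frontend target
```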
I'm aware of stuff like AO and the small 9% write penalty of 3PAR's RAID5 (3+1) compared to RAID1, and all of the virtualisation going on, which essentially negates many of the traditional concerns. So it can be treated as one piece of converged monolithic storage, and it should just come down to IOPS, latency and the usable capacity required. A calculator would be really useful for that.
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 5:00 pm
by skumflum
The frontend IOPS is the IO generated by the applications to the storage array. The backend IOPS is the IO that the controllers send to the physical disks. The backend value is higher than the frontend due to RAID write overheads.
I am not sure about the vRAID5 write penalty, but traditional RAID5 costs 4 backend IOs for every frontend write.
As the 3PAR should have only a 9% penalty, you should not necessarily aim for the current total backend IO.
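The frontend-to-backend relationship above can be sketched as follows; the write penalties are the conventional per-write backend I/O counts for traditional RAID, not measured 3PAR figures:

```python
def backend_iops(frontend_iops, read_fraction, write_penalty):
    """Backend disk IOPS implied by a given frontend load.

    Each read costs ~1 backend I/O; each write costs
    `write_penalty` backend I/Os (conventionally RAID1 ~2,
    RAID5 ~4, RAID6 ~6 for traditional implementations).
    """
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1.0 - read_fraction)
    return reads + writes * write_penalty

# 12,000 frontend IOPS at 70/30 r/w on traditional RAID5:
# 8,400 reads + 3,600 * 4 = 22,800 backend IOPS
```

Running the same frontend load through different penalties shows why comparing raw backend IOPS across arrays with different RAID behaviour is misleading.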
A simple rule of thumb (random IOPS per spindle):
NL: 75
FC 10k: 150
FC 15k: 200
Please bear in mind that a simple summary of IOPS values does not paint the whole picture. It's way more complicated than that, and it all depends on your environment.
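Taken with that caveat, the rule of thumb turns into a spindle count like this; the dictionary keys and the 20% default headroom are illustrative choices, not fixed values:

```python
import math

# Rule-of-thumb random IOPS per spindle, as listed above
IOPS_PER_DISK = {"NL": 75, "FC_10k": 150, "FC_15k": 200}

def spindles_needed(backend_iops, disk_type, headroom=0.20):
    """Spindles needed to serve a backend IOPS figure plus headroom."""
    target = backend_iops * (1 + headroom)
    return math.ceil(target / IOPS_PER_DISK[disk_type])

# e.g. ~22,800 backend IOPS on FC 10k with 20% headroom -> 183 spindles
```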
Random small-block IO is ideal for SSD: there is almost no seek time, unlike a mechanical drive. In contrast, large-block and sequential r/w gains almost nothing from SSD.
We use 3-tier AO and it is almost unsettling to see how much data is migrated to NL disk, but the total service time is very good.
[edit] However, if you intend to use AO it is important that the FC tier is large and fast enough to accommodate the initial placement of data [/edit]
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 5:15 pm
by slink
My understanding of how the 3PAR works, with its chunklets and LDs taking even smaller parts (step size), is that there is effectively no such thing as "sequential" read or write; everything is random.
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 5:34 pm
by skumflum
It would be wrong to say that everything is random. RAID is achieved by striping the data across chunklets that are spread across multiple disks and across disk magazines. These chunklets are sub-allocated in 128MB regions. There can be many blocks in 128MB, so it still makes sense to speak of random/sequential access.
It is these regions that AO moves between tiers.
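To make the region granularity concrete, here is a toy sketch. The 128MB figure is from the explanation above; the function name and offsets are purely illustrative:

```python
REGION_BYTES = 128 * 1024 * 1024  # 128MB AO region

def region_index(vv_offset_bytes):
    """Index of the region containing a given VV byte offset.

    AO promotes and demotes whole regions, so blocks within the
    same region always share a tier, even when host I/O to them
    is random at the block level.
    """
    return vv_offset_bytes // REGION_BYTES

# 2,048 consecutive 64KB writes all land in region 0, so at region
# granularity that workload still looks "sequential" to AO
```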
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 5:54 pm
by slink
Hmm, I guess the way I visualise things might be wrong then. As I understand it, LDs are the RAID across the chunklets, but the CPGs sit on the LDs and take up only a small portion of each LD (the step size, which is in KB). Therefore, any data sitting in a VV on a CPG is actually spread across very small parts of slightly larger chunklets, spread across many disks in the array, and nothing about that spread is contiguous. The chunklets can come from any disk in the array, within the constraints of the HA configuration (cage, mag, R1, R6 etc.), so I'm not sure how there can be anything sequential about how the data is written or read.
Re: Sizing 3PAR based on EVAPerf results
Posted: Thu Jan 23, 2014 6:20 pm
by skumflum
You are correct, LDs are collections of chunklets arranged in rows of RAID sets. I can see I should have clarified that.
A CPG does not use space in itself. It's more like a pool of LDs that can be allocated to VVs.
On thin VVs, 3PAR automatically assigns more space to the VV by mapping new regions from logical disks.
You are correct that it's not sequential end-to-end, but the data is not fragmented at the KB level.
Re: Sizing 3PAR based on EVAPerf results
Posted: Tue Jan 28, 2014 11:07 am
by Cleanur
Frontend IOPS, I/O size and read/write ratios are the key figures.
Ask to speak to HP Storage presales or your channel partner's presales; they can take the EVAPerf output and pull it directly into a 3PAR sizing tool. They'll still need to do some massaging of the configuration to meet your exact requirements, but the sizing service is free, and they'll provide you with the expected 3PAR numbers, including latency, along with ballpark pricing.
You might also want them to run a thin analysis (again, it's free) to see how much space you can potentially save by going 3PAR. That way you can lose some of the spindles and make up the performance with some SSD. If you do take this route along with AO as mentioned above, make sure you have an adequate middle tier to land new writes before they get promoted/demoted.
Once they have the numbers, you can sit down and easily work through a few different combinations to best suit your requirements going forward.