PD Performance against VV Performance

nsnidanko
Posts: 116
Joined: Mon Feb 03, 2014 9:40 am

Re: PD Performance against VV Performance

Post by nsnidanko »

slink wrote:I understand you can't do like-for-like comparisons between FE and BE, but I am confused by how my 3PAR is graphing total back-end IOPS way beyond what I would expect.

I have a 4-node 7400 with 160 10K FC drives, and the PD IOPS are showing as topping out at 500 IOPS when running a 4K 100% random read IOmeter test with front-end latency <4 ms.

[chart: per-PD IOPS peaking around 500 during the 100% random read test]

The other confusing thing is when it graphs IOPS like this:

[chart: PD IOPS during the 50/50 random read/write test - red = reads, blue = writes, light blue = total]

This is a 4K 100% random 50/50 r/w test. This is a Physical Disk chart: the red line is reads, the blue is writes, and the light blue on top is total IOPS. This is just weird, because those drives are not capable of doing >50,000 random IOPS. The host doing the test is reporting average latencies of ~5 ms and ~25,000 IOPS split 50/50 r/w, so it is more like the red OR the blue line alone is the total IOPS. I would expect to see the red and blue lines closer to the 10,000 IOPS line, and the light blue line around where the red and blue are currently sitting. Why does the 3PAR report IOPS like this? 50,000 IOPS on a random read/write test with 160x 10K spinning disks? No way.


We were astonished by the same PD graph - our 10K drives were showing over 700 IOPS. After countless hours with support trying to get an explanation (oh boy, support was terrible) we finally got an engineer who explained that these IOPS include outstanding I/O as well (whatever is sitting in the queue waiting to be processed). According to HP it's the sum of current and outstanding I/O. Basically, for 10K PDs, if you see over 160 IOPS for a prolonged period during production, you need to beef up your back end (by adding more drives or using AO). I believe that if you run a system check while your drives are experiencing such high I/O, they will get mentioned in the report.
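
If it helps, here is a rough sketch of the accounting as I understood it from the engineer. HP never gave us an exact formula, so every number below is hypothetical, picked only to match the scale of slink's 160-drive test:

Code:

    # The PD chart reportedly sums I/O serviced during the sample interval
    # with I/O still outstanding in the queues, so a deep queue inflates
    # the charted "IOPS" well past what the spindles can actually service.
    # All numbers are hypothetical; the exact accounting is undocumented.

    INTERVAL_S = 1.0          # assumed sampling interval
    DRIVES = 160
    SERVICED_PER_DRIVE = 160  # rough random-IOPS ceiling for a 10K FC drive

    completed = DRIVES * SERVICED_PER_DRIVE  # ~25,600 I/Os actually finished
    outstanding = 24_000                     # hypothetical queued I/Os counted in

    serviced_iops = completed / INTERVAL_S
    charted_iops = (completed + outstanding) / INTERVAL_S

    print(f"serviced ~{serviced_iops:,.0f} IOPS, charted ~{charted_iops:,.0f} IOPS")

That shape would put the charted total near 50,000 while the spindles never exceed ~25,600, which is roughly what slink's second graph shows.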
slink
Posts: 77
Joined: Wed May 01, 2013 5:39 pm

Re: PD Performance against VV Performance

Post by slink »

Thanks for the info, but that makes no sense at all: outstanding I/O isn't IOPS, it's a queue. If I'm looking at an IOPS chart for physical disks, I want to see how many they are actually doing, not how many they are doing plus what they are going to do.

I get that it might be useful to see what is being demanded of the drives, but that could be a separate chart/option. An IOPS chart should be just that: how many IOPS the drives are doing.

(not questioning you or the engineer - I believe the information, I just don't understand why it is the way it is)
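
To put numbers on that distinction: outstanding I/O is a count of requests in flight, while IOPS is a rate, and Little's Law ties the two together. A quick sketch using the figures from my test (the rounding is hypothetical):

Code:

    # Little's Law: in-flight requests = throughput x latency.
    # With the host reporting ~25,000 IOPS at ~5 ms average latency,
    # only ~125 I/Os are outstanding at any instant - a queue depth,
    # not a rate you can meaningfully add to IOPS.

    host_iops = 25_000   # host-reported throughput from the 50/50 test
    latency_s = 0.005    # ~5 ms average host latency

    in_flight = host_iops * latency_s
    print(f"~{in_flight:.0f} I/Os in flight at any instant")

So even if queued requests are being folded into the chart, ~125 in-flight I/Os shouldn't be able to double a 25,000 IOPS figure unless they are being counted again on every sample.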
nsnidanko
Posts: 116
Joined: Mon Feb 03, 2014 9:40 am

Re: PD Performance against VV Performance

Post by nsnidanko »

slink wrote:Thanks for the info, but that makes no sense at all: outstanding I/O isn't IOPS, it's a queue. If I'm looking at an IOPS chart for physical disks, I want to see how many they are actually doing, not how many they are doing plus what they are going to do.

I get that it might be useful to see what is being demanded of the drives, but that could be a separate chart/option. An IOPS chart should be just that: how many IOPS the drives are doing.

(not questioning you or the engineer - I believe the information, I just don't understand why it is the way it is)


I agree these graphs are deceiving. I had just had enough of support and took this explanation as "reasonable". If you're interested in pursuing this further with HP support, please PM me your email and I'll forward everything I have from this case.
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: PD Performance against VV Performance

Post by Richard Siemers »

slink wrote:The other confusing thing is when it graphs IOPS like this:

[chart: PD IOPS during the 50/50 random read/write test - red = reads, blue = writes, light blue = total]

This is a 4K 100% random 50/50 r/w test. This is a Physical Disk chart: the red line is reads, the blue is writes, and the light blue on top is total IOPS. This is just weird, because those drives are not capable of doing >50,000 random IOPS. The host doing the test is reporting average latencies of ~5 ms and ~25,000 IOPS split 50/50 r/w, so it is more like the red OR the blue line alone is the total IOPS. I would expect to see the red and blue lines closer to the 10,000 IOPS line, and the light blue line around where the red and blue are currently sitting. Why does the 3PAR report IOPS like this? 50,000 IOPS on a random read/write test with 160x 10K spinning disks? No way.


Not sure how it's clocking that many IOPS per disk; it has to be including something else, like back-end cache, in that equation. Regarding FE-to-BE ratios, isn't each BE write 2 I/Os - write, then read to verify? Also be aware of any active snapshots you have that might be forcing copy-on-write operations. There are also IOPS going to the .admin and .srdata LUNs.
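
Rough back-of-envelope for the 50/50 test, taking ~2 back-end I/Os per front-end write as the assumption (mirrored write or write-plus-verify; RAID 5/6 would cost more):

Code:

    # FE-to-BE conversion for the 50/50 test, assuming ~2 back-end I/Os
    # per front-end write. The multiplier and mechanism are assumptions,
    # not documented figures; reads are taken to pass through 1:1.

    host_iops = 25_000
    reads = host_iops // 2    # 12,500 front-end reads
    writes = host_iops // 2   # 12,500 front-end writes
    BE_PER_WRITE = 2          # assumed back-end cost of one FE write

    backend_iops = reads + writes * BE_PER_WRITE
    print(f"expected back-end ~{backend_iops:,} IOPS")  # ~37,500

That only gets you to ~37,500 before snapshot copy-on-write and the .admin/.srdata traffic, so write amplification alone still doesn't explain 50,000.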
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.