I understand you can't do like-for-like comparisons between front end and back end, but I am confused by how my 3PAR is graphing total IOPS at the back end way beyond what I would expect.
I have a 4-node 7400 with 160 10K FC drives in it, and the PD IOPS are showing as topping out at 500 IOPS when running a 4K 100% random read IOmeter test, with front-end latency <4ms.
The other confusing thing is when it graphs IOPS like this:
This is a 4K 100% random 50/50 r/w test. It's a Physical Disk chart: the red line is reads, the blue line is writes, and the top light-blue line is total IOPS. This is just weird, because those drives are not capable of doing >50,000 random IOPS. The host running the test is reporting average latencies of ~5ms and ~25,000 IOPS split 50/50 r/w, so it's more like the red OR the blue line alone is the real total IOPS. I would expect the red and blue lines to sit closer to the 10,000 IOPS line, and the light-blue line to be around where the red and blue currently are. Why does the 3PAR report IOPS like this? 50,000 IOPS on a random read/write test with 160x 10K spinning disks? No way.
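To show why 50,000 backend IOPS looks impossible, here is a back-of-envelope sketch. The ~150 IOPS per 10K spindle and the RAID 1 write penalty of 2 are common rules of thumb I'm assuming, not 3PAR specifics:

```python
# Back-of-envelope check for 160 x 10K FC drives.
# IOPS_PER_10K_DRIVE is an assumed rule-of-thumb figure, not a measured spec.
DRIVES = 160
IOPS_PER_10K_DRIVE = 150            # assumed small-block random IOPS per spindle

raw_backend_iops = DRIVES * IOPS_PER_10K_DRIVE
print(raw_backend_iops)             # 24000 -- close to what the host reports

# With a 50/50 read/write mix and an assumed RAID 1 write penalty of 2,
# the host-visible ceiling is even lower than the raw spindle total:
read_frac, write_frac = 0.5, 0.5
raid1_write_penalty = 2
host_iops_ceiling = raw_backend_iops / (read_frac + write_frac * raid1_write_penalty)
print(round(host_iops_ceiling))     # 16000
```

Either way you slice it, the spindles top out in the low tens of thousands, nowhere near the 50,000 the light-blue line shows.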
We were astonished by the same PD graph: our 10K drives were showing over 700 IOPS each. After countless hours with support trying to get an explanation (oh boy, support was terrible) we finally got an engineer who explained that these PD IOPS counters include queued I/O as well (whatever is in the queue waiting to be processed). According to HP, it's the sum of current and outstanding I/O. Basically, for 10K PDs, if you see over 160 IOPS per drive for a prolonged period during production, you need to beef up your back end (by adding more drives or AO). I believe if you run a system check while your drives are experiencing such high I/O, they will be mentioned in the report.
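A toy sketch of what that engineer described, with made-up numbers (none of this is from a real 3PAR counter): if the chart plots serviced I/O plus outstanding queued I/O, the graph roughly doubles under heavy load.

```python
# Toy model: the PD counter is the sum of I/Os actually serviced plus I/Os
# still outstanding in the queue, so the graph can show far more "IOPS"
# than the spindles physically deliver. All numbers are illustrative.
serviced_per_sec = 25_000    # what the host-side IOmeter test reports
outstanding_ios  = 25_000    # queued I/O waiting on the PDs at sample time

reported_iops = serviced_per_sec + outstanding_ios
print(reported_iops)                      # 50000 -- the inflated light-blue line

# Per-PD view across 160 drives: ~312 "IOPS" per spindle on the chart,
# even though each 10K drive is only servicing ~156 real IOPS.
print(reported_iops // 160, serviced_per_sec // 160)
```

This also lines up with the rule of thumb above: a 10K PD sustaining well over ~160 charted IOPS is really telling you the queue is building, i.e. the back end needs more spindles.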