Unusual performance

DaveN
Posts: 3
Joined: Wed Feb 05, 2014 8:32 pm

Unusual performance

Post by DaveN »

At the moment we are running validation on a brand-new 3PAR 7400 with 264 x 450GB 10,000rpm SAS drives in a full 4-node mesh.

This is designed to be our performance storage that backs our primary Oracle and Exchange systems.

What we have noticed is that sequential read is much slower than sequential write, mainly at lower block sizes. Attached is a graph from the 3PAR management console for the particular VV we are testing on.

This is from a server with 4 paths to the 3PAR. There are 2 ports on the server connected to 4 ports on the 3PAR via a 4Gb FC fabric.

The test is running 16k blocks from 8 workers in sequential read and then sequential write.
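
For reference, the shape of the test is roughly this (a minimal Python sketch, not the actual benchmark tool we used; the device path and region sizes are placeholders):

```python
# Minimal sketch of the workload shape: 8 workers, sequential 16k reads.
# PATH and SPAN are placeholders, not our real test configuration.
import os
from multiprocessing import Process

BLOCK = 16 * 1024           # 16k block size
WORKERS = 8                 # 8 workers
SPAN = 256 * 1024 * 1024    # per-worker region (placeholder)
PATH = "/dev/sdX"           # test device (placeholder)

def sequential_read(worker_id):
    fd = os.open(PATH, os.O_RDONLY)
    try:
        os.lseek(fd, worker_id * SPAN, os.SEEK_SET)
        remaining = SPAN
        while remaining > 0:
            buf = os.read(fd, BLOCK)   # one sequential 16k read
            if not buf:
                break
            remaining -= len(buf)
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Note: without O_DIRECT, the host page cache and readahead are free
    # to merge these reads into larger back-end requests.
    procs = [Process(target=sequential_read, args=(i,)) for i in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The write pass is the same loop with os.O_WRONLY and os.write.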

Is this normal behaviour for a 3PAR system? HP refuse to say whether this is expected or not, but they tell us that our SAN is correctly configured and show us other graphs with much higher throughput (e.g. large blocks, random reads and writes).

It isn't a "real world" example of our workload, but it struck us as anomalous and worthy of an explanation before we commission the array into production.

Any suggestions or explanations?

Thanks,
Dave

[Attachment: 3parperf.PNG — graph of read vs write performance at 16k blocks with 8 workers.]
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Unusual performance

Post by Richard Siemers »

Looks like something went haywire with your read test... it's showing a block size of 425k, which is further supported by the fewer than 25 IOPS.

I would recommend re-validating that your read workload is actually issuing 16k I/Os.
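
To spell out the arithmetic (the throughput figure below is an illustrative placeholder, not read from your graph):

```python
# Average I/O size = throughput / IOPS.
throughput_bytes = 10.4 * 1024 * 1024   # ~10.4 MB/s (hypothetical figure)
iops = 25                               # the <25 IOPS noted above
avg_io_kib = throughput_bytes / iops / 1024
print(f"average I/O size: {avg_io_kib:.0f} KiB")   # -> ~426 KiB
```

If that comes out near 16 KiB from the host's counters but near 425 KiB on the array, the requests are being transformed somewhere in between.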
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
DaveN
Posts: 3
Joined: Wed Feb 05, 2014 8:32 pm

Re: Unusual performance

Post by DaveN »

I've double checked the test and they are using 16k blocks.

If I pull up the performance graphs for the data ports on the 3PAR the IO size on the port matches the block size I'm using.

However, as you noticed, the IO size on the VV is much, much larger.

Maybe it's pulling back "step size"-sized chunks off the disk to satisfy each 16k request? Though our step size is 128k, so we'd be looking at roughly 3.5 steps to process each 16k request.

I can't find a neat multiple of our step size, our 5+1 redundancy, or anything else that explains this.
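
For what it's worth, here's my check of the obvious geometry candidates against the observed 425k:

```python
# Does the ~425 KiB VV I/O size line up with any obvious array geometry?
observed_kib = 425
candidates = {
    "step size (128k)": 128,
    "full 5+1 stripe (5 data steps x 128k)": 5 * 128,
    "host request size (16k)": 16,
}
for name, size_kib in candidates.items():
    print(f"{name}: {observed_kib / size_kib:.2f}x")
# step size: 3.32x, full stripe: 0.66x, 16k requests: 26.56x
# -> no clean integer multiple of the array geometry
```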

Thanks,
Dave
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Unusual performance

Post by Richard Siemers »

I suspect thin provisioning mojo, and perhaps cache, are at play here. My guess is you might be "read benchmarking" a thin-provisioned disk that is mostly empty. Try blasting your test LUN full of random data (not zeros) and then re-running the test to see if it behaves differently.
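
Something along these lines would do the pre-fill (a rough sketch only; the device path and fill size are placeholders, so point it at your test LUN and size it accordingly):

```python
# Pre-fill a test LUN with non-zero data before read benchmarking.
# PATH and TOTAL are placeholders.
import os

PATH = "/dev/sdX"          # test LUN (placeholder)
CHUNK = 1024 * 1024        # write in 1 MiB chunks
TOTAL = 10 * 1024 ** 3     # fill amount (placeholder: 10 GiB)

fd = os.open(PATH, os.O_WRONLY)
try:
    written = 0
    while written < TOTAL:
        # os.urandom avoids zero-filled blocks, which a thin-provisioned
        # array can detect and skip allocating
        written += os.write(fd, os.urandom(CHUNK))
finally:
    os.close(fd)
```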

Also, what do the stats look like from the host's perspective, rather than the IMC real-time view?
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Cleanur
Posts: 254
Joined: Wed Aug 07, 2013 3:22 pm

Re: Unusual performance

Post by Cleanur »

You should be looking at VLUNs, not VVs. The VV numbers include cache and internal I/O, so they are not representative of front-end performance. VV numbers are typically used for performance troubleshooting, not benchmarking.
DaveN
Posts: 3
Joined: Wed Feb 05, 2014 8:32 pm

Re: Unusual performance

Post by DaveN »

Ahhh many thanks!

Now I can see what's happening, and I have tracked it back to read/write consolidation occurring on the host.
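
The numbers line up with that, too:

```python
# ~425k I/Os at the VV from 16k host requests implies the host is merging
# roughly this many sequential requests per back-end I/O:
print(425 / 16)   # ~26.6
```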

Just running some more testing to confirm my suspicions.

Thanks,
Dave