Performance

joe.beaton
Posts: 3
Joined: Wed Feb 05, 2014 11:11 am

Performance

Post by joe.beaton »

Hi All,

This is my first post, I've been lurking for a while.

We've got a 7200 connected to 4 ESXi 5.1 hosts. The 7200 has 24 x 15k 300GB disks. The hosts are FC attached via a HP B-Series switch.

If I run HD_Speed (http://www.steelbytes.com/?mid=20) from within a W2K8 R2 VM, I get very different results depending on the block size.

8KB block size = Average 27MB/s
32KB block size = Average 63MB/s
128KB block size = Average 94MB/s
512KB block size = Average 138MB/s
2MB block size = Average 197MB/s
4MB block size = Average 236MB/s

Does anyone know if this is normal? If possible, could someone run the same test and post their results so I can compare? Do those numbers look OK?

Thanks

Joe
joe.beaton
Posts: 3
Joined: Wed Feb 05, 2014 11:11 am

Re: Performance

Post by joe.beaton »

Hi,

Sorry, I forgot to say: we're not experiencing any particular performance problems. This is more of an academic question about why smaller block sizes give much worse throughput, whether that's normal, and whether those numbers are reasonable in general.

Thanks again,

Joe
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: Performance

Post by Richard Siemers »

This is normal and to be expected with storage and networking. Your MB/s and IO/s benchmarks will have an inverse relation to each other: as one goes up, the other goes down.

The key to benchmarking is to simulate your real workload the best you can. Here are a couple factors to pay attention to:

Block size: larger blocks carry more payload per operation.
Read vs write ratio: reads are generally faster than writes.
Sequential vs random ratio: sequential reads and writes are faster than random ones.
Queue depth (# of queued IOs allowed): how many transactions can be outstanding before pausing to wait.
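You can see the inverse relation in Joe's own numbers by converting each MB/s figure into IO/s (throughput divided by block size). A minimal Python sketch using the figures from the first post; the results themselves are just arithmetic:

```python
# Implied IO/s from Joe's HD_Speed results: IOPS = throughput / block size.
KB = 1024
MB = 1024 * KB

# block size (bytes) -> measured average throughput (bytes/s), from the first post
results = {
    8 * KB: 27 * MB,
    32 * KB: 63 * MB,
    128 * KB: 94 * MB,
    512 * KB: 138 * MB,
    2 * MB: 197 * MB,
    4 * MB: 236 * MB,
}

for block_size, throughput in results.items():
    iops = throughput / block_size
    print(f"{block_size // KB:>5} KB blocks: {throughput // MB:>4} MB/s = {iops:>7.0f} IO/s")
```

IO/s drops from about 3,456 at 8 KB to about 59 at 4 MB while MB/s climbs, which is the inverse relation in action: the fixed per-operation overhead dominates small transfers, so small blocks post low MB/s even though the array is servicing many more operations per second.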
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.