
High IOPS during large deletions on RHEL cluster NFS storage

Posted: Sat Nov 02, 2013 8:02 pm
by woodunn
We have an 8 node V800 which is only serving storage to a two-node active/passive clustered RHEL NFS gateway. It has 12 x 12 TB TPVV volumes presented to it. The filesystems are mounted with the "discard" option so that space is reclaimed on the array (via SCSI UNMAP) when deletions occur.
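
For context, this is the usual online-discard setup on the gateway. A minimal sketch; the device path, mount point, and ext4 filesystem here are assumptions, so adjust to your own layout:

    # /etc/fstab on the NFS gateway: mount the 3PAR LUN with online discard,
    # so deletes are translated into SCSI UNMAPs back to the array
    /dev/mapper/mpatha  /export/nfs1  ext4  defaults,discard  0 2

    # or switch an already-mounted filesystem over without a reboot
    mount -o remount,discard /export/nfs1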

When a large deletion is done (around 1 TB), the IOPS go through the roof and affect all other VLUNs sharing the same spindles.

Is anyone aware of a way to control how large changes like this hit the controllers? Could it be that the TPVVs need to be fully provisioned?

Re: High IOPS during large deletions on RHEL cluster NFS storage

Posted: Sat Nov 02, 2013 8:06 pm
by woodunn
I have attached a System Reporter capture if anyone is interested.

Re: High IOPS during large deletions on RHEL cluster NFS storage

Posted: Tue Nov 05, 2013 3:43 pm
by Architect
Interesting issue. Did you log a support case for this with HP? It might be wise to do so, so they can check out what is going on.

Re: High IOPS during large deletions on RHEL cluster NFS storage

Posted: Sun Nov 10, 2013 12:15 am
by Richard Siemers
Well, this is good news. I did not know about the discard mount option.

Check out this document:
http://people.redhat.com/lczerner/disca ... 1_Brno.pdf

It shows a 63% performance hit on GFS and 18% on ext4 with discard enabled...
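
If I am reading the deck right, the lower-impact alternative it describes is batched discard (FITRIM), i.e. trimming free space on a schedule instead of per-delete. A rough sketch; the mount point and schedule are illustrative, and fstrim ships with util-linux on recent RHEL 6 releases:

    # trim all free space in one pass, reporting how much was discarded
    fstrim -v /export/nfs1

    # cron entry: run the batched trim nightly at 02:00 instead of
    # mounting with the discard option
    0 2 * * * /sbin/fstrim /export/nfs1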

Re: High IOPS during large deletions on RHEL cluster NFS storage

Posted: Mon Nov 11, 2013 7:55 am
by woodunn
We have ended up remounting with the discard option removed. This has settled the performance hit on the array, but now we cannot reclaim space after a delete unless we manually run a cleanup across the CPG (sketch below)!
HP are working with us on this, as we have a TPVV licence that is useless if the TPVV volumes end up acting as thickly provisioned. As with our RC issue on large volumes, a code update will be part of the Feb 2014 release.
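
For reference, the manual cleanup we run is from the 3PAR CLI, roughly as below; the CPG name is ours, so substitute your own, and check the command options for your InForm OS release:

    # consolidate logical disk space in the CPG and return unused
    # space to the free pool (3PAR CLI)
    compactcpg NFS_CPG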