High IOPs during large deletions on RHEL cluster NFS storage

We have an 8 node V800 which is only serving storage to a two-node active/passive clustered RHEL NFS gateway. It has 12 x 12TB TPVV volumes presented to it. The NFS storage is mounted with the "unmap" (discard) option to enable the reclamation of space when deletions occur.

When a large deletion is done (of 1TB), the IOPs go nuts and affect all other VLUNs sharing the same spindles.

Is anyone aware of a way to control the way in which large changes like this affect the controllers? Could it be that the TPVVs need to be fully provisioned?
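For reference, an online-discard mount on RHEL looks something like the following; the device, filesystem type and mount point here are placeholders for illustration, not our actual config:

  # /etc/fstab entry with online discard enabled (hypothetical device and mount point)
  /dev/mapper/vg_nfs-lv_export   /export   ext4   defaults,discard   0 0

  # or enable it on the fly for an already-mounted filesystem
  mount -o remount,discard /export

With discard in the mount options, every unlink is translated into SCSI UNMAPs to the array as the blocks are freed, which is why a single 1TB delete turns into a sustained burst of back-end work.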
Re: High IOPs during large deletions on RHEL cluster NFS storage
I have attached a System Reporter capture if anyone is interested.
Attachments:
- largedelete_production_02112013.csv (15.04 KiB): System Reporter evidence of the impact of a large delete
Re: High IOPs during large deletions on RHEL cluster NFS storage
Interesting issue. Did you log a support case for this with HP? It might be wise to do so, to have them check out what is going on.
The goal is to achieve the best results by following the client's wishes. If they want a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.
Re: High IOPs during large deletions on RHEL cluster NFS storage
Well, this is good news; I did not know about the discard mount option.
Check out this document:
http://people.redhat.com/lczerner/disca ... 1_Brno.pdf
63% performance hit on GFS, 18% on ext4...
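The alternative those slides benchmark is batched discard: drop the discard mount option and issue the trims in one pass with fstrim whenever it suits you. A rough sketch, assuming fstrim is available on the gateway nodes and the filesystem is mounted at a hypothetical /export:

  # one-off batched discard of all free space on the mounted filesystem
  fstrim -v /export

  # or schedule it off-peak from /etc/crontab instead of paying the
  # online-discard penalty on every delete (02:00 daily, adjust to taste)
  0 2 * * * root /sbin/fstrim /export

That way the array still gets its UNMAPs and can reclaim the space, but only at a time of your choosing.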
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: High IOPs during large deletions on RHEL cluster NFS storage
We have ended up remounting with the unmap feature removed. This has settled the performance hit on the array, but now we cannot reclaim space after a delete unless we manually run a cleanup across the CPG!
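For anyone following along, the array-side cleanup is along the lines of a CPG compact from the 3PAR CLI; the CPG name below is just a placeholder and this is the generic form rather than our exact procedure:

  # check the CPG's space usage, then consolidate and return unused LD space
  showcpg NFS_CPG
  compactcpg NFS_CPG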
HP are working with us on this, as we have a TPVV licence that is useless if our TPVV volumes end up acting as thickly provisioned. As with our RC issue on large volumes, a code update will be part of the Feb 14 release.