Maybe slightly off-topic, but if you spot less-than-optimal performance, please look at the growth sizes and step sizes of your CPGs.
- growth sizes for FC and NL should be 32GB * the number of node pairs (so for a 4-node 7400, it should be 64GB). For SSD it should be 8GB * the number of node pairs (16GB for a 4-node 3PAR)
- default step sizes depend on the RAID level used, the set size and the drive type. I often advise removing the -ss value altogether from the CPG configuration, to force the 3PAR to use the defaults (which are always right). Of course there are very heavily tuned CPGs where you might need to vary the step size based on the underlying application, but usually the default is best.
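The growth-size rule above is simple enough to express as a small calculation. This is just a sketch in Python (not a 3PAR CLI command; the function name is mine), returning the value in MiB, the unit that setcpg -sdgs expects:

```python
# Recommended CPG growth size: 32 GiB per node pair for FC/NL,
# 8 GiB per node pair for SSD (per the rules above).
GROWTH_PER_NODE_PAIR_GIB = {"FC": 32, "NL": 32, "SSD": 8}

def recommended_growth_mib(devtype: str, node_count: int) -> int:
    """Return the recommended growth size in MiB for a device type
    on a system with the given (even) number of nodes."""
    node_pairs = node_count // 2          # nodes always come in pairs
    return GROWTH_PER_NODE_PAIR_GIB[devtype] * node_pairs * 1024

# A 4-node system has 2 node pairs:
#   FC  -> 64 GiB = 65536 MiB
#   SSD -> 16 GiB = 16384 MiB
```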
For example, via the CLI, show the CPG configuration like this:
Code:
showcpg -sdg
The output will look something like:
Code:
Id Name Domain Warn Limit Grow Args
Some_FC_Raid5_CPG SomeDomain - - 16384 -t r5 -ha cage -ssz 4 -ss 32 -ch first -p -devtype FC
In this example both the growth size is too small (16GB instead of 64GB) and the step size is wrong (32KB instead of 128KB). Setting the right values is done like this:
Code:
setcpg -sdgs 65536 -t r5 -ha cage -ssz 4 -ss 128 -ch first -p -devtype FC Some_FC_Raid5_CPG
The system will require a "tunesys" afterwards to adjust the step sizes of all previously created LDs with the "wrong" specifications. Please suspend AO (Adaptive Optimization) during the tunesys to prevent nasty issues.
Below is a reference with the right step sizes per drive type / RAID level / set size:
For HDDs (both NL and FC):
Code:
R1: 256k
R5: 128k
R6 4+2: 128k
R6 6+2: 64k
R6 8+2: 64k
R6 10+2: 64k
R6 14+2: 32k
For SSDs:
Code:
R0: 32k
R1: 32k
R5: 64k
R6 6+2: 64k
R6 8+2: 64k
R6 10+2: 64k
R6 14+2: 32k
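The two tables above can be collapsed into a small lookup table, e.g. in Python (a sketch for reference only; the names are mine, not 3PAR CLI):

```python
# Default step sizes in KiB, matching the unit of the setcpg -ss flag.
HDD_STEP_KIB = {  # applies to both NL and FC drives
    "R1": 256, "R5": 128,
    "R6 4+2": 128, "R6 6+2": 64, "R6 8+2": 64,
    "R6 10+2": 64, "R6 14+2": 32,
}
SSD_STEP_KIB = {
    "R0": 32, "R1": 32, "R5": 64,
    "R6 6+2": 64, "R6 8+2": 64,
    "R6 10+2": 64, "R6 14+2": 32,
}

def step_size_kib(devtype: str, layout: str) -> int:
    """Return the default step size in KiB for a drive type
    ("FC", "NL" or "SSD") and RAID layout (e.g. "R5", "R6 6+2")."""
    # NL and FC share the HDD table; only SSD differs.
    table = SSD_STEP_KIB if devtype == "SSD" else HDD_STEP_KIB
    return table[layout]
```

This makes it easy to check an existing CPG's -ss value against the recommended default, like the 32 vs 128 mismatch in the R5 FC example above.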