So some of those settings may not be ideal unless you are a 3PAR ninja and know exactly what you want, in which case you probably wouldn't be asking for help here... so I am going to assume you are not a ninja, just a regular 3PAR user like many of us.
Quote:
createcpg -t r5 -ssz 4 -rs 6 -p -nd 0,1 -devtype SSD ssd-r5-ssz8-a
I notice the -ha is missing. I believe the default "high availability" setting is cage... so in order to create an r5 ssz 4 CPG you would need to have at least 4 disk shelves with SSDs installed. You may be forced to use -ha mag to force this CPG to be created, but I would like to give you some more info before you do that.
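For reference, if you do end up dropping to magazine-level availability, the command would look something like this (a sketch only; the CPG name is just an example, and you should verify the flags against your 3PAR OS CLI reference first):

```
# Force magazine-level availability so the CPG can be created
# with fewer than 4 SSD-populated cages (sketch -- verify flags first):
createcpg -t r5 -ssz 4 -ha mag -p -devtype SSD ssd-r5-ssz4-a
```

Just understand what you are giving up: with -ha mag, losing a whole cage can take the CPG's LDs offline, which is exactly what cage availability protects against.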
If you only have 2 nodes, you can leave out the -nd 0,1. If you have 4 (or more) nodes, you should discuss that with your HP professionals (or perhaps us here online), as it is best practice and best for performance for your CPGs to span as many drives as possible. I would recommend 1 CPG for all of your SSD, 1 for all of your FC, and 1 for all of your nearline. If your intention is to isolate "spindles" from each other as per old-school legacy storage/database best practice, I recommend re-evaluating that plan and downloading some of the HP 3PAR implementation guides that discuss best practices for Exchange, SQL, or Oracle specifically.
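A simple one-CPG-per-tier layout would look something like this (a sketch with made-up CPG names and typical set sizes; leaving out -nd lets the CPG span all nodes, and you would pick RAID levels to suit your own needs):

```
# One CPG per drive tier, spanning all nodes by default (sketch):
createcpg -t r1 -ha cage -p -devtype SSD cpg-ssd-r1
createcpg -t r5 -ssz 8 -ha cage -p -devtype FC cpg-fc-r5
createcpg -t r6 -ssz 8 -ha cage -p -devtype NL cpg-nl-r6
```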
-rs 6 ; This one throws me off the most: what is the design/intent here in manually setting the row size instead of letting the system decide that?
RAID 5 set size 4 on SSD. SSD space is so expensive, I totally understand the desire for RAID 5 to get 25% more usable space; I argued this case myself for my own site. However, RAID 5 on SSD is counterproductive to performance goals. The RAID 5 write penalty combined with SSD re-write penalties, spread across your set size, degrades performance significantly. Not to mention the parity writes put a higher re-write load on the SSDs and wear them out faster. RAID 1 is the best option to deliver SSD performance for both reads and writes, plus if you only have 2 disk shelves, it's easier to maintain cage availability.
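To put numbers on the trade-off, here's a quick back-of-the-envelope comparison. This is generic RAID math, not anything 3PAR-specific:

```python
def usable_fraction(raid, set_size=None):
    """Fraction of raw capacity left for data (generic RAID math)."""
    if raid == "r1":
        return 0.5                        # mirrored: half the raw space
    if raid == "r5":
        return (set_size - 1) / set_size  # one parity member per set
    raise ValueError(raid)

def write_penalty(raid):
    """Back-end I/Os generated per host write."""
    return {"r1": 2,             # write to both mirror copies
            "r5": 4}[raid]       # read data, read parity, write data, write parity

r1 = usable_fraction("r1")
r5 = usable_fraction("r5", set_size=4)
print(f"RAID 1       usable: {r1:.0%}, write penalty {write_penalty('r1')}x")
print(f"RAID 5 (3+1) usable: {r5:.0%}, write penalty {write_penalty('r5')}x")
```

So with ssz 4 you get 75% of raw space usable instead of 50%, but every host write costs twice as many back-end I/Os, and those extra parity writes land on the flash.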
If you could share your config, model number, number of nodes, number of shelves, number of SSD drives... and your goal, myself or someone else will be able to provide better information.
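If you are not sure where to pull that from, these standard CLI commands should cover it (a sketch; exact output columns vary by 3PAR OS version):

```
showsys                   # model and system details
shownode                  # controller node count and status
showcage                  # drive shelves (cages)
showpd -p -devtype SSD    # SSD drive count and placement
```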