Yes, I would split the SSDs between the cages. The reason is that the external cage is on a separate SAS loop from the internal cage (on a 2-node system), which gives you 2x the back-end connectivity to the drives. Remember that drives must be installed into a shelf in pairs, so if you only buy 2 drives, they both go into the same shelf.
So if you only have 4 SSDs and you set the CPG set size to 3+1, we need to call out two things. First, the minor one: express layout would be used, which has one node own all 4 of those drives. If you obsess over reports showing how evenly work is balanced across the 2 nodes, this could trigger some OCD, as one node will be doing all the work associated with those 4 SSDs. My guess is your bottlenecks are entirely at the spinning-disk tier and your nodes have ample performance headroom, so this shouldn't be an issue.
Second, and more severe: as noted by Mammagut, there is an advisory about using a set size equal to the total number of drives. The issue is that when one drive fails AND growth requires creating a new LD, the LD creation will fail because there are no longer enough drives available to meet the CPG set size requirement.
https://support.hpe.com/hpesc/public/do ... 45368en_us

Have you considered using some of the SSD capacity as Adaptive Flash Cache? What is the average read/write ratio on the system?
AFC:
https://h20195.www2.hpe.com/v2/getpdf.a ... 397ENW.pdf