
3par 8400: zone hosts to node pairs or all four nodes?

Posted: Thu May 05, 2022 7:30 am
by Gatorfreak
When we originally got our 4-node 3PAR 8400, HPE recommended zoning servers to node pairs.
Example:
host A -> nodes 0 & 1
host B -> nodes 2 & 3

We now have an HPE Synergy system which can manage/automate the zoning for us but the default behavior is to zone hosts to all storage nodes. So this got me thinking if the HPE recommendation has changed over the years.

Thanks

Re: 3par 8400: zone hosts to node pairs or all four nodes?

Posted: Thu May 05, 2022 11:15 am
by MammaGutt
The recommendation has always been 4 paths, so you should have zoned one path to each of the 4 nodes from day 1.

Re: 3par 8400: zone hosts to node pairs or all four nodes?

Posted: Fri May 06, 2022 4:16 am
by ailean
I think I started off with all nodes originally, with 2 fabrics that gave 8 active paths. When peer persistence came along there was some mention somewhere of the max paths per host being 8 and so I changed to using node pairs (4 active paths and 4 standby paths).

I split my clusters up so they were always on the same node pairs to hopefully balance things.

I've not really noticed any issues either way, and I'm not sure if the max-paths figure ever got clarified/updated. The main hit when a node is down tends to be the IO handling on the SSD side, and as there are never more than 2 nodes connected to a disk shelf, there's little you can do to prevent that (other than having twice as much CPU/memory in each node than can ever be needed :) ).
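The path arithmetic above can be sketched as a back-of-envelope check. This is a hypothetical helper, not 3PAR tooling; it assumes a simplified topology where each node presents one port per fabric and each host has one HBA port per fabric:

```python
# Back-of-envelope path counting for the two zoning options
# (simplified, assumed topology: one node port and one host HBA
# port per fabric).

def active_paths(nodes_zoned: int, fabrics: int) -> int:
    """Active paths a host sees = zoned nodes x fabrics."""
    return nodes_zoned * fabrics

MAX_PATHS_PER_HOST = 8  # the limit mentioned for peer persistence

# Zoning to all 4 nodes across 2 fabrics:
all_nodes = active_paths(4, 2)      # 8 active paths

# Zoning to a node pair, with peer persistence adding the same
# number of standby paths from the secondary array:
pair_active = active_paths(2, 2)    # 4 active paths
pair_total = pair_active * 2        # 4 active + 4 standby = 8

print(all_nodes, pair_active, pair_total)  # 8 4 8
```

Either way the host lands at the 8-path figure once peer persistence standby paths are counted, which is presumably why the node-pair layout was attractive under that limit.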

Re: 3par 8400: zone hosts to node pairs or all four nodes?

Posted: Sun May 08, 2022 10:22 am
by MammaGutt
ailean wrote:I think I started off with all nodes originally, with 2 fabrics that gave 8 active paths. When peer persistence came along there was some mention somewhere of the max paths per host being 8 and so I changed to using node pairs (4 active paths and 4 standby paths).

I split my clusters up so they were always on the same node pairs to hopefully balance things.

I've not really noticed any issues either way, and I'm not sure if the max-paths figure ever got clarified/updated. The main hit when a node is down tends to be the IO handling on the SSD side, and as there are never more than 2 nodes connected to a disk shelf, there's little you can do to prevent that (other than having twice as much CPU/memory in each node than can ever be needed :) ).


The biggest issue you'll run into with a lot of paths is the maximum logins per host port on the 3PAR, and possibly the maximum number of paths to storage for a VMware host or cluster if you have several arrays.

The positive point of zoning to all nodes is that load is spread equally across all nodes and, to some degree, across ports (which are all active in a 3PAR)..... As for SSDs on a node pair, the only thing you can do is ensure you have enough SSDs to saturate the nodes... then it doesn't really matter, as the nodes are maxed out no matter what :)
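To make the login concern concrete, here is a rough sketch. The host counts are made-up examples, and the actual per-port login limit varies by array model and firmware, so check your own array's documented limits:

```python
# Rough fabric-login counting per 3PAR host port
# (hypothetical host counts, simplified one-port-per-fabric hosts).

def logins_per_array_port(hosts: int, host_ports_per_fabric: int = 1) -> int:
    """Fabric logins landing on a single array host port."""
    return hosts * host_ports_per_fabric

hosts = 64

# All-node zoning: every host logs into a port on every node,
# so each array port carries a login from every host on its fabric.
all_node_logins = logins_per_array_port(hosts)

# Node-pair zoning: hosts are split between the two node pairs,
# so each array port only sees its half of the hosts.
pair_logins = logins_per_array_port(hosts // 2)

print(all_node_logins, pair_logins)  # 64 32
```

Node-pair zoning halves the logins each array port has to carry, which is the trade-off against the even load spread you get from zoning to all nodes.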