Upgrade a 7400 2-node to a 7400 4-node

Hello,
We have the following situation: we are using a 3PAR 7400 2-node with 48 x 300 GB SAS disks in a 4-enclosure configuration.
We would like to add 40 x 1 TB NL SFF disks with AO.
In the near future it may be handy to add 2 more nodes so we have more room for expansion.
But if we want to do that, according to the best practices we should also buy the same number of disks for the other 2 nodes. Is this really necessary, or can we migrate the current 300 GB disks to the new enclosures? That would mean 24 x 300 GB disks for nodes 0/1 and 24 x 300 GB disks for nodes 2/3.
Love to hear your migration thoughts.
Re: Upgrade a 7400 2-node to a 7400 4-node
If you have a 3PAR 7400 2-node with 48 x 300 GB SAS disks in a 4-enclosure configuration, you must have 48 slots available for new disks, since each shelf (cage) carries 24 spindles. Typically you want to add the same number of shelves for the new nodes (4 in your case); that is best practice. It is not the drive count but the shelf count that should match the new nodes.
Whether to add new nodes depends on how many hosts you currently have on your host ports and what performance or throughput you are seeing; this can easily be monitored in real time via the performance graphs, or with System Reporter if you have it. Make sure the new disks are grouped together in vertical fashion when you add them; redundancy is based on the RAID type and the HA setting used when building your CPGs. Since these are new disks and NL, they will have their own CPGs. For example, if you build CPGs with RAID 6 (6+2), your set size would be 8. When a VV is created, the first set of 8 stripes across the 4 shelves vertically, starts again on the first shelf on the second row, and stripes vertically until the first set of 8 is complete; the process then repeats until all drives are used, with chunklets spread out across all drives.
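As a rough illustration only, an NL RAID 6 CPG like that could be created from the CLI roughly as below. The CPG name is made up, and you should verify the exact createcpg options against your InForm OS release:

    # Hypothetical example: RAID 6 (6+2), set size 8, cage-level HA, NL disks only
    createcpg -t r6 -ssz 8 -ha cage -p -devtype NL NL_r6_cpg
    # Check the result
    showcpg NL_r6_cpg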
Re: Upgrade a 7400 2-node to a 7400 4-node
I don't agree 100% with what Trireed is saying. I believe matching the spindle counts between node pairs is more important than matching the shelf counts. I feel this way primarily because, after 2 shelves, they become daisy-chained off each other.
If nodes 0 and 1 share 6 cages, 3 cages on one loop and 3 cages on the other, with 4 drives per cage for a total of 24 drives, and you want to add 2 more nodes with 2 more cages, then 12 drives per shelf makes for a balanced workload after you run tunesys: 24 spindles on each node pair, 12 drives on each of the 4 loops.
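As a rough sketch of the checks and rebalance that go with this (standard 3PAR CLI commands; verify the options on your InForm OS version):

    showcage           # cages and how many drives each node pair sees
    showpd -c          # per-disk chunklet usage, to spot imbalance
    tunesys            # rebalance chunklets after adding or moving hardware
    showtask -active   # watch the rebalance task progress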
Back to your main question... ideally, you want an even balance of NL and SAS across your node pairs for best performance. Yes, you can avoid doubling up your SAS spindle counts, IF you don't mind a gruesome, involved process of rebalancing your data multiple times to free up SAS spindles to move around. You will want to co-plan this with 3PAR sales engineers/support. The easiest approach I would suggest is to install your NL drives equally balanced between all 4 nodes. Create a new NL-only CPG and Dynamic Optimize ALL of your LUNs to the NL. Then work with support to confirm that the SAS drives to be moved are free and clear of data, have them run through the commands to remove those spindles from the hardware config, move them to the new node pair, and add that hardware back into the config. At that point you could use AO policies to move your hot data to SAS, or you can DO complete VVs back.
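For the DO step, a minimal sketch, assuming a CPG named NL_r6_cpg and a volume named myvv (both hypothetical names); the spindle-removal work in between should still be driven with support as noted above:

    # Move a virtual volume's user space fully onto the NL CPG (Dynamic Optimization)
    tunevv usr_cpg NL_r6_cpg myvv
    # Once the freed SAS drives are physically moved to the new node pair,
    # admit the new hardware and rebalance
    admithw
    tunesys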
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.