HPE Storage Users Group
https://3parug.net/

8200 SSD RAID configuration
https://3parug.net/viewtopic.php?f=18&t=3392
Page 1 of 1

Author:  Sagi [ Sat Apr 25, 2020 4:16 pm ]
Post subject:  8200 SSD RAID configuration

Hi all,

I would like to get some assistance.
I am planning to add SSD disks to a 3PAR 8200 that currently has only FC disks. The disk size is 920GB, and I am planning to add 8 SSDs in total.
Which setup should I choose for better performance, RAID 5 (3+1) or (7+1), and what is the best practice?

Here is the 3PAR OS version: 3.3.1 (MU3)
Number of current FC disks: 32 (1 CPG of RAID 5 7+1)

Thanks! :)

Author:  MammaGutt [ Sun Apr 26, 2020 3:54 am ]
Post subject:  Re: 8200 SSD RAID configuration

Performance is about the same.
RAID overhead will be less with 7+1, but there is an advisory out stating not to use a set size equal to the number of drives on 2-node systems.

If you plan on using AO only, I don't see a problem. If you plan for pure SSD volumes, I wouldn't go higher than 6+1.
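
For reference, this is roughly how a dedicated SSD CPG with an explicit set size would be created from the CLI; the CPG name is just a placeholder and the options are worth double-checking against the 3.3.1 CLI reference:

createcpg -t r5 -ssz 4 -ha mag -p -devtype SSD SSD_r5 (set size 4 = 3+1; -ssz 8 would be 7+1)
showcpg -sdg SSD_r5 (confirm the RAID type and set size the CPG will grow LDs with)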

Author:  Sagi [ Sun Apr 26, 2020 7:56 am ]
Post subject:  Re: 8200 SSD RAID configuration

MammaGutt wrote:
Performance is about the same.
RAID overhead will be less with 7+1, but there is an advisory out stating not to use a set size equal to the number of drives on 2-node systems.

If you plan on using AO only, I don't see a problem. If you plan for pure SSD volumes, I wouldn't go higher than 6+1.



I am not planning to use AO; my new CPG will be pure SSD and the current CPG pure FC.
Another question, please: to save slots in my 3PAR cages, I think I will instead go with 4 SSDs of 1.92TB (RAID 5 3+1). Should I split them, 2 SSD PDs per cage?

Thanks!

Author:  Richard Siemers [ Tue Apr 28, 2020 9:02 pm ]
Post subject:  Re: 8200 SSD RAID configuration

Yes, I would split the SSD drives between the cages. The reason is that the external cage is on a separate SAS loop from the internal cage (on a 2-node system), so you get 2x the back-end connectivity to the drives. Remember that drives must be installed into a shelf in pairs; if you only buy 2 drives, they both go into the same shelf.
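
Once the drives are admitted, you can sanity-check how they landed from the CLI, something like:

showpd -p -devtype SSD (lists only the SSD PDs with their CagePos, so you can confirm 2 per cage)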

So if you only have 4 SSDs and you set the CPG set size to 3+1... we need to call out 2 things. The first is minor: express layout would be used, which has 1 node own all 4 of those drives. If you obsess over reports that show how evenly the workload is spread across the 2 nodes, this could trigger some OCD, as one node will be doing all the work associated with those 4 SSD drives. My guess is your bottlenecks are 100% at the spinning-disk level and your nodes have ample performance surplus, so this shouldn't be an issue.

Secondly, and more severe: as noted by MammaGutt, there is an advisory about using a set size equal to the total number of drives. The issue is that when 1 drive fails AND growth requires creating a new LD, that process will fail because there are not enough drives to meet the CPG set size requirement.

https://support.hpe.com/hpesc/public/do ... 45368en_us
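
If the CPG has already been created with too large a set size, my understanding is the growth parameters can be adjusted afterwards (only LDs grown after the change pick up the new layout); roughly, with the CPG name again a placeholder:

showcpg -sdg SSD_r5 (check the RAID type / set size the CPG currently grows with)
setcpg -t r5 -ssz 4 SSD_r5 (drop the set size to 3+1 so growth still works with only 4 drives)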

Have you considered using some of the SSD capacity as Adaptive Flash Cache? What is the average r/w ratio on the system?
AFC: https://h20195.www2.hpe.com/v2/getpdf.a ... 397ENW.pdf
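
If AFC is of interest, the basic setup is only a couple of commands; a rough sketch, with the size just an example and the syntax worth verifying against the AFC guide:

createflashcache 768g (reserve part of the SSD capacity per node pair as flash cache)
setflashcache enable sys:all (enable flash cache for all volumes)
statcache (watch the flash cache hit rates once it is running)

For the read/write ratio, statvlun or the System Reporter data in SSMC will show reads versus writes per VLUN.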

Author:  Sagi [ Wed May 13, 2020 1:15 pm ]
Post subject:  Re: 8200 SSD RAID configuration

Richard Siemers wrote:
Yes, I would split the SSD drives between the cages. The reason is that the external cage is on a separate SAS loop from the internal cage (on a 2-node system), so you get 2x the back-end connectivity to the drives. Remember that drives must be installed into a shelf in pairs; if you only buy 2 drives, they both go into the same shelf.

So if you only have 4 SSDs and you set the CPG set size to 3+1... we need to call out 2 things. The first is minor: express layout would be used, which has 1 node own all 4 of those drives. If you obsess over reports that show how evenly the workload is spread across the 2 nodes, this could trigger some OCD, as one node will be doing all the work associated with those 4 SSD drives. My guess is your bottlenecks are 100% at the spinning-disk level and your nodes have ample performance surplus, so this shouldn't be an issue.

Secondly, and more severe: as noted by MammaGutt, there is an advisory about using a set size equal to the total number of drives. The issue is that when 1 drive fails AND growth requires creating a new LD, that process will fail because there are not enough drives to meet the CPG set size requirement.

https://support.hpe.com/hpesc/public/do ... 45368en_us

Have you considered using some of the SSD capacity as Adaptive Flash Cache? What is the average r/w ratio on the system?
AFC: https://h20195.www2.hpe.com/v2/getpdf.a ... 397ENW.pdf



Hi Richard,

So I split the SSDs, 2 PDs per shelf.
Another question, please: since I ordered refurbished disks, they show up in the system with data (allocated capacity). How can I wipe the data from the SSD disks only, without any impact on the current FC disks? I attached a screenshot.

Thank you!

Attachments:
3PAR.PNG

Author:  Richard Siemers [ Wed May 13, 2020 11:57 pm ]
Post subject:  Re: 8200 SSD RAID configuration

I don't believe they will import with data on them. I would dig deeper into where that space is going from the command line. IMC is deprecated; it's the legacy GUI, and 3.3.1 has features in it that IMC doesn't know about. SSMC is the new management GUI for 3PAR.

showcpg
showvv

Is dedupe enabled on the SSD CPG?
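
To narrow down where that allocated space is going, a few CLI checks along these lines (the CPG name is just an example):

showcpg (user/snap/admin space used per CPG)
showvv -cpg SSD_CPG (any VVs already sitting in the SSD CPG)
showspace -cpg SSD_CPG (estimated free space remaining for that CPG)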

Author:  MammaGutt [ Thu May 14, 2020 4:53 am ]
Post subject:  Re: 8200 SSD RAID configuration

I agree with Richard.

When you admit any drives to the array, it will initialize the chunklets.

Are you sure your existing drives weren't very full, and that adding the new drives automatically triggered a tunesys that spread existing data onto them?
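
Whether a tunesys (or any other background task) kicked off after the drives were admitted should be easy to check:

showtask -active (lists running tasks; a tunesys would show up here)
showtask -d <task_ID> (details for a specific task from the list above)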

Edit: or even better, you are using the "default" sparing algorithm and the capacity shown as used is spare capacity.

showpd -c
and
showsys -param
should confirm.
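
If it is spare capacity, showspare should make it obvious:

showspare (lists the chunklets reserved as spares on each PD)

With the default sparing algorithm, spare chunklets are reserved on new drives as they are admitted, which could be exactly what is showing up as allocated capacity.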

Double edit:
You initially talked about 8x 920GB, but I see 4x 1.92TB. I'm pretty sure it is not supported to have fewer than 6 SSDs behind a node pair (unless they are dedicated to Adaptive Flash Cache).
