HPE Storage Users Group

A Storage Administrator Community




 Post subject: 8200 SSD RAID configuration
PostPosted: Sat Apr 25, 2020 4:16 pm 

Joined: Sun Aug 04, 2019 9:26 am
Posts: 11
Hi all,

I would like some assistance.
I am planning to add SSD disks to a 3PAR 8200 that currently has only FC disks.
The disks are 920GB each, and I am planning to add 8 of them in total.
Which setup should I choose to get better performance, RAID 5 (3+1) or (7+1), and what is the best practice?

Here is the 3PAR OS version: 3.3.1 (MU3)
Number of current FC disks: 32 (1 CPG, RAID 5 7+1)

Thanks! :)


 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Sun Apr 26, 2020 3:54 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Performance is about the same.
RAID overhead will be less with 7+1, but there is an advisory out stating not to use a set size equal to the number of drives on 2-node systems.

If you plan on using AO only, I don't see a problem. If you plan for pure SSD volumes, I wouldn't go higher than 6+1.
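
For reference, creating the SSD CPG with an explicit set size would look something like this (the CPG name is just a placeholder, and I'd double-check the flags with "help createcpg" on your 3.3.1 system):

createcpg -t r5 -ssz 4 -ha mag -p -devtype SSD SSD_r5

-ssz is the full set size (data + parity), so 3+1 = 4 and 7+1 = 8. With only 2 cages you can't satisfy -ha cage for those set sizes, so magazine-level availability is what you'd end up with.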

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Sun Apr 26, 2020 7:56 am 

Joined: Sun Aug 04, 2019 9:26 am
Posts: 11
MammaGutt wrote:
Performance is about the same.
RAID overhead will be less with 7+1, but there is an advisory out stating not to use a set size equal to the number of drives on 2-node systems.

If you plan on using AO only, I don't see a problem. If you plan for pure SSD volumes, I wouldn't go higher than 6+1.



I am not planning to use AO; my new CPG will be pure SSD and the current CPG pure FC.
Another question, please: to save slots in my 3PAR cages, I am now looking at a setup of 4 SSDs of 1.92TB each (RAID 5 3+1). Should I split them as 2 SSD PDs per cage?

Thanks!


 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Tue Apr 28, 2020 9:02 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Yes, I would split the SSD drives between the cages. The reason is that the external cage is on a separate SAS loop from the internal cage (on a 2-node system), giving you 2x the back-end connectivity to the drives. Remember that drives must be installed into a shelf in pairs; if you only buy 2 drives, they both go into the same shelf.
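
Once the drives are admitted, you can sanity-check the placement from the CLI with something like this (pattern flags from memory, verify with "help showpd"):

showpd -p -devtype SSD

The CagePos column shows which cage/magazine each SSD landed in, so you can confirm you have 2 per cage.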

So if you only have 4 SSDs and you set the CPG set size to 3+1... we need to call out 2 things. The first is minor: express layout would be used, which has 1 node own all 4 of those drives. If you obsess over reports that show how evenly work is balanced across the 2 nodes, this could trigger some OCD, as one node will be doing all the work associated with those 4 SSDs. My guess is your bottlenecks are entirely at the spinning-disk level and your nodes have ample performance headroom, so this shouldn't be an issue.

The second is more severe: as noted by MammaGutt, there is an advisory about using a set size equal to the total number of drives. The issue is that when 1 drive fails AND growth requires creating a new LD, that process will fail because there are not enough drives left to meet the CPG set size requirement.

https://support.hpe.com/hpesc/public/do ... 45368en_us
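
If you do end up in that situation, my reading of the advisory is that the workaround is to lower the CPG set size so a failed drive still leaves enough members for new LDs, along the lines of (syntax from memory, verify with "help setcpg"):

setcpg -t r5 -ssz 3 <SSD CPG name>

Note this only affects LDs created by future growth; existing LDs keep their old layout until a tunesys re-lays them out.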

Have you considered using some of the SSD capacity as Adaptive Flash Cache? What is the average r/w ratio on the system?
AFC: https://h20195.www2.hpe.com/v2/getpdf.a ... 397ENW.pdf
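
If the workload turns out to be read-heavy, carving part of the SSD capacity into AFC is roughly (sizes and flags from memory of 3.3.1, verify with "help createflashcache"):

createflashcache 512g
setflashcache enable sys:all

and statcache will show the flash cache hit rates once it is in use.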

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Wed May 13, 2020 1:15 pm 

Joined: Sun Aug 04, 2019 9:26 am
Posts: 11
Richard Siemers wrote:
Yes, I would split the SSD drives between the cages. The reason is that the external cage is on a separate SAS loop from the internal cage (on a 2-node system), giving you 2x the back-end connectivity to the drives. Remember that drives must be installed into a shelf in pairs; if you only buy 2 drives, they both go into the same shelf.

So if you only have 4 SSDs and you set the CPG set size to 3+1... we need to call out 2 things. The first is minor: express layout would be used, which has 1 node own all 4 of those drives. If you obsess over reports that show how evenly work is balanced across the 2 nodes, this could trigger some OCD, as one node will be doing all the work associated with those 4 SSDs. My guess is your bottlenecks are entirely at the spinning-disk level and your nodes have ample performance headroom, so this shouldn't be an issue.

The second is more severe: as noted by MammaGutt, there is an advisory about using a set size equal to the total number of drives. The issue is that when 1 drive fails AND growth requires creating a new LD, that process will fail because there are not enough drives left to meet the CPG set size requirement.

https://support.hpe.com/hpesc/public/do ... 45368en_us

Have you considered using some of the SSD capacity as Adaptive Flash Cache? What is the average r/w ratio on the system?
AFC: https://h20195.www2.hpe.com/v2/getpdf.a ... 397ENW.pdf



Hi Richard,

So I split the SSDs as 2 PDs per shelf.
Another question, please: since I ordered refurbished disks, they show up in the system with data on them (allocated capacity). How can I wipe the data from the SSD disks only, without any impact on the current FC disks? I attached a screenshot.

Thank you!


Attachment: 3PAR.PNG
 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Wed May 13, 2020 11:57 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
I don't believe they will import with data on them. I would dig deeper into where that space is going from the command line. IMC is deprecated; it's the legacy GUI, and 3.3.1 has features that IMC doesn't know about. SSMC is the new management GUI for 3PAR.

showcpg
showvv

Is dedupe enabled on the SSD CPG?
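
Something like this should show where that allocated capacity is going (options as I remember them on 3.3.1, check "help showspace" and "help showpd"):

showspace -cpg <CPG name>
showpd -space

showpd -space breaks each PD's capacity into volume, spare and free, which should tell you whether that "allocated" space is real data or just reserved spares.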

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: 8200 SSD RAID configuration
PostPosted: Thu May 14, 2020 4:53 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
I agree with Richard.

When you admit any drives to the array, it will initialize the chunklets.

Are you sure that your existing drives weren't very full, and that adding the new ones automatically triggered a tunesys to spread existing data across the new drives?
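
An easy way to check is to look for an active tune task (from memory, see "help showtask"):

showtask -active

and "showtask -d <task_id>" for the details of a specific task.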

Edit: or even better, you are using the "default" sparing algorithm and the capacity shown as used is spare capacity.

showpd -c
and
showsys -param
should confirm.
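
In the showsys -param output, look at the SparingAlgorithm line (Default/Minimal/Maximal/Custom). To see the chunklets actually reserved as spares (again from memory, check "help showspare"):

showspare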

Double edit:
You initially talked about 8x 920GB, but I see 4x 1.92TB. I'm pretty sure it is not supported to have fewer than 6 SSDs behind a node pair (unless they are dedicated to Adaptive Flash Cache).

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.

