Change CPG from Mag to Cage
Posted: Tue Jan 22, 2013 12:46 am
by iOsX
Unfortunately, I have no idea why I can't perform the simple action of changing the CPG availability.
Code:
V400_zz cli% showcpg -sdg FC_EXCH_OPT_cpg
------(MB)------
Id Name Warn Limit Grow Args
23 FC_EXCH_OPT_cpg - - 16384 -t r5 -ha mag -ss 128 -p -devtype FC
In the GUI I see Availability: Magazine. When I try to change it with right-click > Edit, then Next > Availability: Cage (Default) and Finish, I don't get what I want: the status does not change to Cage, it stays at Mag.
Why?
But if I do the following in the CLI:
Code:
V400_zz cli% showcpg -sdg FC_EXCH_OPT_cpg
------(MB)------
Id Name Warn Limit Grow Args
23 FC_EXCH_OPT_cpg - - 16384 -t r5 -ss 128 -p -devtype FC
V400_zz cli%
V400_zz cli% setcpg -ha cage FC_EXCH_OPT_cpg
V400_zz cli% showcpg -sdg FC_EXCH_OPT_cpg
------(MB)------
Id Name Warn Limit Grow Args
23 FC_EXCH_OPT_cpg - - 16384 -ha cage -p -devtype FC
In the GUI, the Availability status then changes to Port, and the Disk Filter shows the -ha cage line.
What does that mean?
Re: Change CPG from Mag to Cage
Posted: Tue Jan 22, 2013 2:11 pm
by Richard Siemers
Personally, I don't recommend changing the settings of a CPG, because the change only applies to new growth; all your existing data will remain at the OLD settings. It would be best to create a new CPG with the settings you want and use Dynamic Optimizer to move the VVs from the old CPG to the new one. This ensures your LUN is completely converted and evenly balanced.
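As a rough sketch only (the new CPG name and the VV name below are made-up placeholders, and you should confirm the exact options against the CLI reference for your InForm OS version), the sequence would look something like this:
Code:
createcpg -t r5 -ha cage -ss 128 -p -devtype FC FC_EXCH_OPT_cage_cpg
tunevv usr_cpg FC_EXCH_OPT_cage_cpg <vv_name>
Repeat the tunevv for each VV in the old CPG, then retire the old CPG once it is empty.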
The command line "showcpg -sdg" shows you the parameters for how NEW GROWTH will be created. I am not sure about this, so this is speculation, but the setcpg command may be doing less safety checks than the GUI does. So the GUI may be preventing you from making a mistake, and the comand line is doing exactly what you tell it to do.
In this case, you are forcing the CPG to use cage HA... this requires that you have at least 4 cages for RAID 5. If you don't have 4 cages, the GUI won't let you proceed. If you somehow force the CPG to take that setting anyway (command line?), then the CPG will fail to auto-grow as needed, because there is no free space left that matches the growth arguments.
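If you want to check that up front, these standard commands should show how many cages you have and where your FC drives sit (the -p -devtype filter just mirrors the pattern already used in your CPG; double-check the syntax on your release):
Code:
showcage
showpd -p -devtype FC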
I am confused by your screenshot... it says that you're using RAID 5 with a set size of 2, and I don't think it is physically possible to have RAID 5 with only 2 members. The default set size for RAID 5, when not specified, is 4. Please run the following command, and let's look at the output:
showld -d -cpg FC_EXCH_OPT_cpg
Re: Change CPG from Mag to Cage
Posted: Tue Jan 22, 2013 10:23 pm
by iOsX
Oh, thanks a lot for the reply.
Code:
V400_zz cli% showld -d -cpg FC_EXCH_OPT_cpg
LDs for SA:
Id Name CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail -----CreationTime------ -CreationPattern-
560 tp-23-sa-0.0 FC_EXCH_OPT_cpg 1 0/1/2/3 4096 12288 4 256 3 0 cage port 2012-09-20 13:51:10 UTC -p -devtype FC
561 tp-23-sa-0.1 FC_EXCH_OPT_cpg 1 1/0/3/2 4096 12288 4 256 3 0 cage port 2012-09-20 13:51:10 UTC -p -devtype FC
562 tp-23-sa-0.2 FC_EXCH_OPT_cpg 1 2/3/0/1 4096 12288 4 256 3 0 cage port 2012-09-20 13:51:10 UTC -p -devtype FC
563 tp-23-sa-0.3 FC_EXCH_OPT_cpg 1 3/2/1/0 4096 12288 4 256 3 0 cage port 2012-09-20 13:51:10 UTC -p -devtype FC
---------------------------------------------------------------------------------------------------------------------------------------------
4 16384 49152
LDs for SD:
Id Name CPG RAID Own SizeMB RSizeMB RowSz StepKB SetSz Refcnt Avail CAvail -----CreationTime------ -CreationPattern-
564 tp-23-sd-0.0 FC_EXCH_OPT_cpg 5 0/1/2/3 76800 102400 5 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
565 tp-23-sd-0.1 FC_EXCH_OPT_cpg 5 1/0/3/2 129024 172032 6 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
566 tp-23-sd-0.2 FC_EXCH_OPT_cpg 5 2/3/0/1 55296 73728 6 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
567 tp-23-sd-0.3 FC_EXCH_OPT_cpg 5 3/2/1/0 46080 61440 5 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
569 tp-23-sd-0.5 FC_EXCH_OPT_cpg 5 0/1/3/2 258048 344064 14 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
573 tp-23-sd-0.9 FC_EXCH_OPT_cpg 5 2/3/1/0 215040 286720 14 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
575 tp-23-sd-0.11 FC_EXCH_OPT_cpg 5 3/2/1/0 215040 286720 14 128 4 0 mag port 2012-09-20 13:51:10 UTC -p -devtype FC
865 tp-23-sd-0.13 FC_EXCH_OPT_cpg 5 0/1/3/2 215040 286720 1 128 4 0 mag port 2012-10-29 11:08:32 UTC -p -devtype FC
866 tp-23-sd-0.15 FC_EXCH_OPT_cpg 5 3/2/1/0 218112 290816 1 128 4 0 mag port 2012-10-29 11:08:32 UTC -p -devtype FC
869 tp-23-sd-0.24 FC_EXCH_OPT_cpg 5 2/3/0/1 227328 303104 1 128 4 0 mag port 2012-10-30 11:09:16 UTC -p -devtype FC
871 tp-23-sd-0.27 FC_EXCH_OPT_cpg 5 0/1/2/3 319488 425984 13 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
872 tp-23-sd-0.28 FC_EXCH_OPT_cpg 5 0/1/3/2 258048 344064 14 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
873 tp-23-sd-0.29 FC_EXCH_OPT_cpg 5 1/0/2/3 319488 425984 13 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
874 tp-23-sd-0.30 FC_EXCH_OPT_cpg 5 1/0/3/2 258048 344064 14 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
875 tp-23-sd-0.31 FC_EXCH_OPT_cpg 5 2/3/0/1 319488 425984 13 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
876 tp-23-sd-0.32 FC_EXCH_OPT_cpg 5 2/3/1/0 258048 344064 14 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
877 tp-23-sd-0.33 FC_EXCH_OPT_cpg 5 3/2/0/1 319488 425984 13 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
878 tp-23-sd-0.34 FC_EXCH_OPT_cpg 5 3/2/1/0 258048 344064 14 128 4 0 mag port 2012-11-07 08:45:56 UTC -p -devtype FC
879 tp-23-sd-0.35 FC_EXCH_OPT_cpg 5 0/1/2/3 319488 425984 13 128 4 0 mag port 2012-11-07 08:46:05 UTC -p -devtype FC
880 tp-23-sd-0.37 FC_EXCH_OPT_cpg 5 1/0/3/2 319488 425984 13 128 4 0 mag port 2012-11-07 08:46:05 UTC -p -devtype FC
881 tp-23-sd-0.38 FC_EXCH_OPT_cpg 5 1/0/2/3 215040 286720 14 128 4 0 mag port 2012-11-07 08:46:05 UTC -p -devtype FC
882 tp-23-sd-0.39 FC_EXCH_OPT_cpg 5 2/3/0/1 359424 479232 13 128 4 0 mag port 2012-11-07 08:46:05 UTC -p -devtype FC
883 tp-23-sd-0.41 FC_EXCH_OPT_cpg 5 3/2/1/0 359424 479232 13 128 4 0 mag port 2012-11-07 08:46:05 UTC -p -devtype FC
1034 tp-23-sd-0.6 FC_EXCH_OPT_cpg 5 1/0/2/3 181248 241664 1 128 4 0 mag port 2012-12-04 19:40:30 UTC -p -devtype FC
1138 tp-23-sd-0.4 FC_EXCH_OPT_cpg 1 0/1/2/3 12288 24576 4 256 2 0 cage port 2013-01-22 11:51:31 UTC -p -devtype FC
1139 tp-23-sd-0.7 FC_EXCH_OPT_cpg 1 1/0/3/2 12288 24576 4 256 2 0 cage port 2013-01-22 11:51:31 UTC -p -devtype FC
1140 tp-23-sd-0.8 FC_EXCH_OPT_cpg 1 2/3/1/0 12288 24576 4 256 2 0 cage port 2013-01-22 11:51:31 UTC -p -devtype FC
1153 tp-23-sd-0.16 FC_EXCH_OPT_cpg 1 3/2/0/1 8192 16384 4 256 2 0 cage port 2013-01-22 19:10:50 UTC -p -devtype FC
------------------------------------------------------------------------------------------------------------------------------------------------
28 5765120 7716864
Re: Change CPG from Mag to Cage
Posted: Wed Jan 23, 2013 8:18 am
by Richard Siemers
CAvail is your current availability, and Avail is what it's set to be. Your current availability is dynamic, based on the health of the system: you could be set for CAGE, but if a single cage goes down, your current availability will drop, since you already have one cage offline. Port is higher availability than cage; it means you can lose an entire back-end loop and ALL the cages attached to that "port". However, on the T-Series there is only one cage per port, so the two mean exactly the same thing. In that output you can see that several LDs are set to cage but are getting "port" instead. On a T-Series, Port and Cage are technically synonyms.
So, do you see in the second part of the output that all your data LDs created earlier were set to MAG but are getting PORT anyway, while the new ones from this week are set to cage? You have the luxury of space (and at least 4 shelves), so even though the admin set the CPG to MAG, the 3PAR automatically put that data in "better than requested" locations, balanced across the shelves. The problem is that you can't always guarantee that capacity luxury will be available for the system to make that decision, and future growth or rebalance operations could put an entire LD on one disk shelf, because MAG safe was permitted.
If this were my data, I would use Dynamic Optimizer to tune that data so it was all set to cage safe. DO is the recommended/supported method for us users to fix issues like this.
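While the tunes run (and afterwards), you can watch the background tasks and re-check the LD layout with the same command from earlier; these are standard commands, but verify them against your CLI version:
Code:
showtask -active
showld -d -cpg FC_EXCH_OPT_cpg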
It might be worth calling support to see if there is a supported method to convert those existing LDs to cage safe. I don't touch LDs, or recommend touching them, without manufacturer supervision, preferably from someone who works in Sunnyvale, California (3PAR's old HQ, and where I believe level 2 support up through engineering are located).
Re: Change CPG from Mag to Cage
Posted: Tue Feb 12, 2013 5:34 am
by Vicente
Hi.
You can change the CPG layout on the fly, but be careful: the changes only apply to new LDs. The old LDs keep the old layout.
If you want the new layout to apply to all LDs, you need to run a tunesys against this CPG.
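As a hedged sketch only (tunesys flags, and whether it can be scoped to a single CPG, vary between InForm OS releases, so check the CLI reference for your version), a dry run first shows what it would move:
Code:
tunesys -dr
tunesys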
bye