Just wanted to get everyone's opinion on this. I've read that having separate CPGs makes reporting and capacity planning easier.
I am aware of the best practices around CPGs:
• The number of CPGs should be kept to a minimum.
• Maximum VVs per CPG is 8,192 for 3PAR StoreServ 7000 (3PAR OS 3.1.2)
• Maximum TPVVs per CPG is 4,095.
I just wanted to know your thoughts on whether I should leave the two default CPGs (RAID 1 FC, RAID 6 NL) alone or create separate CPGs for ESX hosts, Windows, HP-UX, etc.
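For reference, the two defaults show up in showcpg, and creating a per-OS CPG would be something like the below. This is only a sketch: ESX_FC_R5 is just a made-up name, and the RAID level, set size and availability options are example values, so check createcpg against your InForm OS release.

    # List the existing CPGs and their space usage
    showcpg

    # Example: a RAID 5 (3+1) CPG restricted to FC drives, cage-level availability
    createcpg -t r5 -ssz 4 -ha cage -p -devtype FC ESX_FC_R5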
Re: CPG for each OS type ?
We base it on the application platform. All our AIX LPARs are in 3 data-tiered CPGs, all VMs in 3, and then all databases, regardless of whether they run on AIX or in a VM, get their own. The DB grouping is Oracle on AIX LPARs plus SQL on VMs. This allows us to run AO configs against the different CPG groupings to tweak each platform as we want.
AIX_FC_R1_S1
AIX_FC_R5_S4
AIX_NL_R5_S4
DB_FC_R1_S1
DB_FC_R5_S4
DB_FC_R5_S8
VM_FC_R1_S1
VM_FC_R5_S4
VM_NL_R5_S4
Also, for each group we create a fourth SNAP CPG:
SNAP_VM_NL_R5_S8
SNAP_AIX_NL_R5_S4
SNAP_DB_NL_R5_S4
Then we pin our large file servers to a single CPG:
CIFS_NL_R5_S4
with snap space:
SNAP_CIFS_NL_R5_S8
Yes, there are some basically redundant CPGs, but as you mentioned, this does help us track real usage across data tiers.
I have a 440 TB V400 with over 600 exported volumes supporting 800 VMs and 140 AIX LPARs.
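In case it helps anyone replicate this, here's roughly the CLI side for one platform grouping. A sketch only: the set sizes are my reading of the names, vm_datastore01 and AO_VM are made-up names, and the createaocfg flags are as I remember them from the 3.1.x CLI, so verify against your CLI reference before running any of it.

    # The three VM tiers plus the NL snap CPG
    createcpg -t r1 -p -devtype FC VM_FC_R1_S1
    createcpg -t r5 -ssz 4 -p -devtype FC VM_FC_R5_S4
    createcpg -t r5 -ssz 4 -p -devtype NL VM_NL_R5_S4
    createcpg -t r5 -ssz 8 -p -devtype NL SNAP_VM_NL_R5_S8

    # A TPVV whose user space grows in FC while snapshot space lands on cheap NL
    createvv -tpvv -snp_cpg SNAP_VM_NL_R5_S8 VM_FC_R5_S4 vm_datastore01 2T

    # The AO config that ties the three tiers together for the VM platform
    createaocfg -t0cpg VM_FC_R1_S1 -t1cpg VM_FC_R5_S4 -t2cpg VM_NL_R5_S4 AO_VM

The separate SNAP CPGs are what make the snapshot placement possible, and one AO config per platform grouping is what lets us tune each platform independently.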
Re: CPG for each OS type ?
To me, the ideal number of CPGs is 1.
There are only a few reasons to make more. Here are some examples:
Different RAID types
Virtual Domains
Setting limits (no barfing emoticon available)
In our environment I originally had one CPG per host. That was on an E200 with about 20 hosts, so no big deal. Then came T800s and several hundred hosts; that didn't get us much for the effort except a lot of sprawl. Now I have one CPG per RAID type per domain.
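The consolidated layout is also tiny to express. A sketch with made-up domain names (createcpg's -domain option scopes a CPG to a virtual domain; confirm the flag on your release):

    createcpg -domain Prod -t r1 -p -devtype FC Prod_FC_R1
    createcpg -domain Prod -t r5 -ssz 4 -p -devtype FC Prod_FC_R5
    createcpg -domain Dev -t r5 -ssz 4 -p -devtype FC Dev_FC_R5

A handful of CPGs cover what used to take hundreds, and showcpg output stays readable.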
Re: CPG for each OS type ?
I tend to make new CPGs and then kill the default ones the system creates itself. I've seen some crazy stuff in the default CPGs, like R6 on SSD :->.
I'll usually make four (in a system with no SSD):
Platinum: R0 in FC - used for test and dev only
Gold: R1 in FC
Silver: R5 in FC
Bronze: R1 in NL
I know R1 in NL sounds a bit mad, but usually you run out of IOPS on those slow NL drives long before you run out of space. Using R1 means you can actually put a lot more data to work on those disks (and you get less stranded capacity), and the disks can contribute to performance to some degree.
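Rough illustrative numbers (rule-of-thumb values, not benchmarks): assume a 7.2K NL spindle is good for about 75 back-end IOPS and the workload is random-write-heavy. RAID 5 has a back-end write penalty of 4, so each spindle sustains about 75 / 4 ≈ 18 host writes/s; RAID 1's penalty of 2 gives about 75 / 2 ≈ 37. You give up usable capacity (50% for R1 versus 75% for R5 3+1), but each disk absorbs roughly twice the write load before it saturates, so less of the capacity you do have ends up stranded behind the IOPS wall.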
PearMotion
2xNode 3PAR StoreServ 7200
4xNode 3PAR StoreServ 7400 with LFF and SFF
2xNode 3PAR StoreServ 10400
Re: CPG for each OS type ?
Do you really notice a performance difference when creating different RAID levels on FC using the same number of disks? Or is the difference only measurable, not noticeable?
Re: CPG for each OS type ?
HP had some numbers showing that their RAID 5 at 3+1 gets about 95% of the performance of RAID 1. Our reasoning is that for anything that does not need the ultimate in performance, we take the space savings with RAID 5. I don't know if I see any measurable differences, but then we are not measuring things that precisely, and you would have to do the measurements in a controlled environment to really test that out.
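If anyone does want to measure it, the live counters are easy to get at. Options are as I remember them from the 3.1.x CLI, so verify on your release:

    # Front-end IOPS and service times per exported volume
    # (-ni hides idle volumes, -rw splits reads and writes, -d 5 samples every 5s)
    statvlun -ni -rw -d 5

    # Back-end physical-disk load, to see whether the spindles are the bottleneck
    statpd -rw -d 5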