CPG Census

kmccaf1
Posts: 1
Joined: Sat Apr 26, 2014 12:11 pm

CPG Census

Post by kmccaf1 »

Hello all,

First poster here. I'm excited to join this community of the "enlightened"--3PAR is truly outstanding storage and I've been continually impressed with my 7200's performance and ease of administration from day 1.

My array: 7200, 4 cages, (44) 900GB 10k, (20) 300GB 15k, and expanding this week with (24) 1TB NL.

Since I've had the array (a little over a year) I've run a single CPG --> FC_10k_R5_SS4, cage availability.
I was new to 3PAR and short on time (one of those shops where I have to do it all...you know the feeling) so I threw everything on that one CPG--a fully virtualized hybrid environment with Hyper-V and VMware hosts, with VMs including fileshares, SQL, Exchange 2010...the list goes on. It's all been running along swiftly.
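
For anyone following along, a CPG like that is typically created with something along these lines (a rough sketch--double-check the flags against your 3PAR OS / InForm CLI version):

    # R5, set size 4 (3+1), cage-level availability, restricted to 10k FC drives
    createcpg -t r5 -ssz 4 -ha cage -p -devtype FC -rpm 10 FC_10k_R5_SS4
    # confirm layout and growth settings
    showcpg FC_10k_R5_SS4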

Now that I've had a bit more time and experience with the array, and we're adding some NL drives and AO, I've decided it may be time to optimize my workloads rather than running everything at blazing speed. Frankly, I don't need our fileshare to break the sound barrier, and that CPG is costlier, where disk space is concerned, than one with a larger set size, magazine availability, etc.
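
Roughly what I have in mind, sketched as CLI (the CPG and volume names here are just examples, and the flags are my best recollection of the 3PAR CLI--verify against your OS version):

    # larger set size (7+1) spends less capacity on parity than 3+1, and mag availability relaxes the cage-level layout constraint
    createcpg -t r5 -ssz 8 -ha mag -p -devtype NL NL_7k_R5_SS8
    # Dynamic Optimization move of a low-priority volume into the new CPG (volume name is illustrative)
    tunevv usr_cpg NL_7k_R5_SS8 fileshare_vv01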

I was wondering if any of you would be interested in posting some of your CPG layouts and their purposes/uses--I think it'd be helpful for new(er) 3PAR users.

Here's what I have so far...again:

    FC_10K_R5_SS4 - 90% of my environment (mixed workload, 400 user Exchange, a few SQL servers, other random servers)

    FC_15K_R5_SS4 - exclusively used for accounting SQL server (though, that particular server lived on the 10k CPG for most of its life and performed very, very well. Even thinking about moving it back to 10k)

Thanks in advance for your input!
hdtvguy
Posts: 576
Joined: Sun Jul 29, 2012 9:30 am

Re: CPG Census

Post by hdtvguy »

We had CPGs based on tiers and applications, but have since realized that causes more trouble than it is worth if you run your system tight. We used to have:

VM_FC_R5_S4 (300GB 15K)
VM_SAS_R5_S6 (900GB 10K)
VM_NL_R5_S4 (2TB 7K)

AIX_FC_R5_S4
AIX_SAS_R5_S6
AIX_NL_R5_S4

DB_FC_R1_S2
DB_FC_R5_S4
DB_SAS_R5_S6


This worked well in allowing us to tune our 3 major platforms as needed. The problem is that if you run lean on free space, the wasted space in each CPG creates undue disk space issues array-wide. We are moving our databases to a separate 7400 and will have a single AO policy and set of CPGs for that array. On the V400 we will then merge the existing 2 AO policies and CPGs into a single AO policy with 3 CPGs.
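
If it helps to picture the end state, it's roughly this (names are made up, and the createaocfg syntax should be verified against your OS release):

    # one AO config spanning three CPGs, fastest tier first
    createaocfg -t0cpg FC15K_R5 -t1cpg SAS10K_R5 -t2cpg NL_R5 -mode Balanced AO_V400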

Some notes on AO. I personally feel AO worked better prior to 3.1.2, when it ran off-controller. 3.1.2 moved AO onto the controller nodes and they keep "tweaking" how AO works. I find AO is marginally effective in that it is really only good at finding consistent hot spots or stale spots; if your data usage patterns are random, AO is too slow to react, and by the time it moves the blocks it is too late. Also, AO can only move so much data at a time, and the less free space you have on your system, the less effective AO becomes.

My current philosophy is to try and get the default user CPG right up front and err on the side of performance. For our database array we did not even purchase NL drives, because we find it is too easy to oversubscribe them and cause performance issues. If I were buying my V400 again I would fill it with 900GB drives and some 2 or 3TB drives, use the NL drives for snapshots only and really low-use workloads like file servers, and put everything else on the 900GB drives. I would still create some CPG tiers within the 900GB drives to be space efficient, but the "value proposition" of NL is just overshadowed by the performance hit when you oversubscribe them. Our 3PAR sales team basically says all their customers oversubscribe their NL drives, which means most customers are pushing their NL drives harder than their limits.
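
By "CPG tiers within the 900GB drives" I mean something along these lines (illustrative names only, a sketch rather than our actual config):

    # same 900GB 10K spindles, different RAID/set size = different performance vs. capacity trade-offs
    createcpg -t r1 -ha cage -p -devtype FC -rpm 10 FC900_R1_S2
    createcpg -t r5 -ssz 8 -ha cage -p -devtype FC -rpm 10 FC900_R5_S8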
Cleanur
Posts: 254
Joined: Wed Aug 07, 2013 3:22 pm

Re: CPG Census

Post by Cleanur »

Upvoting this post.
Last edited by Cleanur on Thu May 01, 2014 7:08 am, edited 2 times in total.
spencer.ryan
Posts: 35
Joined: Tue Feb 11, 2014 11:33 am

Re: CPG Census

Post by spencer.ryan »

We have 112 disks with a mix of SSD, FC, and NL.

We have three CPGs (R5, R5, and R6) that are part of a "performance" AO policy. No VVs are created directly in the SSD tier. Most things live on FC, but some archival-type stuff is presented from NL.


Another R1 CPG exists on the SSDs for our VMware View/VDI pool. The only reason it exists is that we need AO to not move data out of the SSDs. We have linked clones, and a lot of the data sits "idle" much of the time, except when users need it. I also think I'll switch that CPG to R5; the performance gains of R1 are minimal on this platform, and the space reclaimed is welcome.
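
For reference, the shape of that setup in CLI terms would be roughly the following (CPG and volume names are made up, and the exact createaocfg/tunevv options should be checked against your 3PAR OS release):

    # three-tier AO policy; the VDI CPG is deliberately left out of it so AO never relocates that data
    createaocfg -t0cpg SSD_R5 -t1cpg FC_R5 -t2cpg NL_R6 -mode Performance AO_PERF
    # standalone R1 CPG on SSD for the View/VDI linked clones
    createcpg -t r1 -ha cage -p -devtype SSD SSD_R1_VDI
    # a later DO retune into an R5 SSD CPG would reclaim the mirror overhead (volume name is illustrative)
    tunevv usr_cpg SSD_R5 vdi_pool_vv01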
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: CPG Census

Post by Richard Siemers »

My main systems are T800s installed 5-6 years ago, starting with InServ 2.4, when there were recommended limits on how many VVs you should put into a single CPG. Thankfully that has changed a bit over the years.

We have 2x T800s... each has 4x nodes, 16 total drive cages, 448 FC 15k, 112 NL, 16 SSD.

Current Day CPGs:

DEV_TIER2 = FC 15K R5 (8+1) mag safe
DEV_TIER3 = NL 7.2K R5 (8+1) mag safe
PRD_TIER0 = SSD R1 (1+1) cage safe + AO target only, no VVs
PRD_TIER1 = FC 15K R5 (5+1) cage safe + AO candidates
PRD_TIER2 = FC 15K R5 (6+1) cage safe
PRD_TIER3 = NL 7.2K R5 (6+1) cage safe
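
For newer folks reading along, the "(8+1) mag safe" shorthand maps to CLI options roughly like this (a sketch only--verify against your InForm OS release):

    # R5 8+1 = set size of 9 chunklets, magazine-level availability, 15K FC drives
    createcpg -t r5 -ssz 9 -ha mag -p -devtype FC -rpm 15 DEV_TIER2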
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.