CPG, Volumes, Virtual Volumes Best Practices

mohuddle
Posts: 62
Joined: Thu May 08, 2014 4:43 pm

CPG, Volumes, Virtual Volumes Best Practices

Post by mohuddle »

What is the normal/best practice for laying out CPGs? Am I correct in thinking a CPG is the container that VVs and VLUNs are carved from?

As an example, let's say we have some database servers (physical), a handful of ESX hosts with some Windows VMs, and some ESX hosts with Linux VMs.

Would I normally create a CPG for each one of those 'types' of servers and services?
afidel
Posts: 216
Joined: Tue May 07, 2013 1:45 pm

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by afidel »

No, normally you'll have as few CPGs as your requirements allow. We have three sets of CPGs: one for nonprod that never tiers up to SSD, one for databases and database-like applications that never tiers down to NL, and a general set that moves between all three tiers. The nonprod and prod tier-1 CPGs also differ in set size: RAID5-8 for nonprod versus RAID5-4 for the high-performance tier-1 CPG.
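
For what it's worth, those CPGs boil down to a couple of createcpg calls along these lines (CPG names invented, and double-check the option syntax against the CLI reference for your InForm OS version):

    # general/nonprod CPG: RAID 5 with an 8-disk set size (7+1) on FC
    createcpg -t r5 -ssz 8 -p -devtype FC FC_r5_general
    # high-performance tier-1 CPG: RAID 5 with a 4-disk set size (3+1)
    createcpg -t r5 -ssz 4 -p -devtype FC FC_r5_tier1

Which tiers a volume can move between is then just a matter of which of those CPGs you put into each AO configuration.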
mohuddle
Posts: 62
Joined: Thu May 08, 2014 4:43 pm

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by mohuddle »

Thanks afidel, very helpful and concise answer.
Davidkn
Posts: 237
Joined: Mon May 26, 2014 7:15 am

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by Davidkn »

That's right, as few as possible.

See it as a RAID policy: Windows servers, Linux, VMware, etc. can all use the same policy if the data needs to be protected at the same RAID level.

You could have important data on a cage-availability CPG and less important data on a magazine-availability CPG, for example.

I normally just have 3, although this does depend on the number of disk types.
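
In CLI terms, the cage vs mag split is just the -ha flag when you create the CPG, something along these lines (names made up, so treat it as a sketch rather than gospel):

    # important data: layout must survive the loss of a whole drive cage
    createcpg -t r5 -ssz 4 -ha cage -p -devtype FC FC_r5_cage
    # less important data: only needs to survive the loss of a drive magazine
    createcpg -t r5 -ssz 8 -ha mag -p -devtype FC FC_r5_mag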
Richard Siemers
Site Admin
Posts: 1333
Joined: Tue Aug 18, 2009 10:35 pm
Location: Dallas, Texas

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by Richard Siemers »

I would like to emphasize the point that a CPG is more of a policy that defines new growth for the VVs/LUNs assigned to it, as opposed to a "container that LUNs are carved from".

How many disk types are you working with, and do you have Adaptive Optimization?

Even if you only have one disk type, you might choose a RAID 5 configuration on production that grants you "cage safe" high availability, while your dev CPG may use a larger RAID set that is only "magazine safe".
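
A quick sketch of the "policy, not container" point, with made-up volume and CPG names (tunevv needs a Dynamic Optimization license, and check the exact syntax against your CLI help, since I'm writing this from memory):

    # a thin VV that allocates new space from FC_r5_prod as it grows
    createvv -tpvv FC_r5_prod app_vol.0 500g
    createvlun app_vol.0 0 apphost01
    # later, re-point the VV at a different CPG/policy and migrate its existing regions
    tunevv usr_cpg NL_r6_dev app_vol.0

The host still sees the same VLUN throughout; what changes is the CPG policy governing where new (and, with DO, existing) space lives.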
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Schmoog
Posts: 242
Joined: Wed Oct 30, 2013 2:30 pm

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by Schmoog »

In my environments, I generally create CPGs for system groups. So for instance I have a VMware CPG, an Exchange CPG, etc.

The reason I did that is so I can run performance reports against the CPG and view the performance of a particular group of hosts/services in System Reporter.
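
From memory, the reports I pull are along these lines, though the exact System Reporter command names and options vary by OS version, so check the CLI help before relying on them:

    # space history for the CPG backing one group of systems
    srcpgspace -hourly vmware_cpg
    # LD-level performance for that same CPG
    srstatld -cpg vmware_cpg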
RitonLaBevue
Posts: 390
Joined: Fri Jun 27, 2014 2:01 am

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by RitonLaBevue »

...
Designing CPGs around System Reporter use... first time I hear of such a practice :)
Cleanur
Posts: 254
Joined: Wed Aug 07, 2013 3:22 pm

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by Cleanur »

Reporting against departmental, customer, or application-specific CPGs is pretty common. It's also what some people use Virtual Domains for on larger systems, as it eases the reporting burden if you can look at a CPG or Domain rather than each individual VV or VV set. The key is not to go too mad, especially on smaller systems, otherwise you run the risk of wasting space and actually increasing overall management overhead.
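
For the Domain route it's just a case of tagging the CPG and hosts with a domain at creation time, roughly like this (domain, CPG and host names are only examples):

    createdomain Finance
    createcpg -domain Finance -t r5 -ssz 8 -p -devtype FC Finance_FC_r5
    createhost -domain Finance finhost01 <WWNs>

Most of the show/stat reporting can then be filtered per domain rather than per VV or VV set.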
hdtvguy
Posts: 576
Joined: Sun Jul 29, 2012 9:30 am

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by hdtvguy »

Schmoog wrote:In my environments, I generally create CPGs for system groups. So for instance I have a VMware CPG, an Exchange CPG, etc.

The reason I did that is so I can run performance reports against the CPG and view the performance of a particular group of hosts/services in System Reporter.



We did something similar: I had three AO configs, each with three CPGs, one for VMware, one for AIX, and one for databases. We abandoned that because, as our array got tight on space, it became a nightmare to juggle growth warnings to make sure we had enough space in each tier of disk. We instead went to a single AO config and three CPGs on each array. It helps that our databases got peeled off onto their own 7400. I do have front-end ports dedicated to each environment, so if I want to see how much IO or latency a given environment is seeing, I run SR against the front-end port reports.

I also find that unless you have a lot of active data, AO, regardless of mode (Performance, Balanced, Cost), tends to move data down, and you never want to run out of your lowest tier of disk, since if a higher tier fills the array will always write new data to the lowest tier. Keeping that in mind, if you run your array tight, then having multiple AO configs and CPG sets is a nightmare to manage.
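
For reference, collapsing down to one AO config per array was basically just this (our CPG names, and the createaocfg/startao syntax is from memory, so verify it against the CLI help on your OS level):

    # a single AO config spanning all three tiers on the array
    createaocfg -t0cpg SSD_r5 -t1cpg FC_r5 -t2cpg NL_r6 -mode Balanced AO_all
    # analyze the last 24 hours of samples and move regions accordingly (we schedule this nightly)
    startao -btsecs -24h AO_all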
slink
Posts: 77
Joined: Wed May 01, 2013 5:39 pm

Re: CPG, Volumes, Virtual Volumes Best Practices

Post by slink »

hdtvguy wrote:It helps that our databases got peeled off onto their own 7400.

This is OT but can I ask what the reasons were for doing that? Business, technical or both?

I'm working on a consolidation project and we've had several heated discussions around whether to adopt a common platform approach with our SQL DBs on the same storage as the Hyper-V VMs.

We've reached a consensus now and we're going for all the eggs in one basket, with a view to using Priority Optimization, so I'd be interested to hear your experiences and what drove you to split out your workloads.
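
The Priority Optimization side of that, as I understand it, is just QoS rules against VV sets, something like the below; the set name is hypothetical and I haven't actually run this on our kit yet:

    # cap the Hyper-V VM volumes so the SQL VV set keeps some headroom
    setqos -io 20000 vvset:hyperv_vols
    showqos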

Thanks