
CPG set size

Posted: Fri May 02, 2014 3:04 am
by borkowski
Hi,

My company has ordered a 7200 system with 16 NL drives (all in one cage), and I've been wondering what the best set size to choose would be.

Since RAID 6 is recommended for NL drives, would 6+2 be the best set size option (given that each controller node "owns" half of the disks during regular operation)? Will I even be offered set sizes larger than 6+2 during CPG creation?

Thanks for your help.

Borkowski

Re: CPG set size

Posted: Fri May 02, 2014 11:31 am
by Cleanur
Yes, the max set size you can create with 16 drives is 6+2 (2 controllers x 6+2 LDs = 16 disks).
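
From the CLI, that would be created with something like the following (the CPG name is just an example; -ssz is the total set size, so 8 means 6+2):

    createcpg -t r6 -ssz 8 -p -devtype NL NL_r6_cpg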

Re: CPG set size

Posted: Fri May 02, 2014 4:31 pm
by Richard Siemers
Cleanur wrote:Yes, the max set size you can create with 16 drives is 6+2 (2 controllers x 6+2 LDs = 16 disks).

And if that errors out, you may need to check the "advanced options" and select "mag safe" instead of the default "cage safe". Your mileage may vary based on your versions of InServ and the IMC.
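
The CLI equivalent of that checkbox is the -ha option, e.g. (same example CPG name as above):

    createcpg -t r6 -ha mag -ssz 8 -p -devtype NL NL_r6_cpg

With all 16 drives in a single cage there aren't enough cages to spread a 6+2 set across, so cage-level availability can't be satisfied and mag-level is the way to go.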

Re: CPG set size

Posted: Fri May 02, 2014 4:53 pm
by borkowski
Thanks for your replies, just a few more questions :)

What happens when we decide to expand the system with more disks? E.g., if we opt for an additional 8 disks, would it be best to add them to the CPG and then change the set size to 10+2? Do we need to worry about set sizes and always try to achieve a "balanced" system in terms of controller load?

Re: CPG set size

Posted: Fri May 02, 2014 5:13 pm
by Richard Siemers
It's a good idea to rethink your set size, or to choose quantities that complement your existing set size, when adding more disks. tunesys and/or Dynamic Optimizer can be used after the install to rebalance.

Technically, if your CPG is set to use NL disks and you add more NL disks, you don't have to do anything to start using them... data will start landing on them as long as the CPG specs match the disk type... but rebalancing is ideal.
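
As a rough sketch of that workflow (behaviour varies by InForm OS version, so treat this as an outline rather than a recipe):

    showpd -p -devtype NL    # confirm the new NL drives are admitted and normal
    tunesys                  # re-level chunklets across old and new drives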

Re: CPG set size

Posted: Wed May 07, 2014 11:41 am
by Cleanur
As Richard said, if you go with mag-level availability and just add drives, the data will simply land on their chunklets using the same stripe size. However, running tunesys after adding the drives would re-level the data across old and new disks and so spread the I/O more evenly.

It might be worth adjusting the stripe size to gain better capacity efficiency, assuming you have enough drives per node: e.g. 6+2 (16 disks) has a 25% parity overhead, whereas 8+2 (20 disks) has a 20% overhead, and 10+2 (24 disks) has 16.7%. So you get more usable space the wider the stripe, but remember you can have multiple CPGs sharing the same disks, so it's very flexible.
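
If you did want the wider stripe after adding the 8 drives, the set size can be changed on the existing CPG; as far as I know setcpg takes the same LD parameters as createcpg, and the change only applies to LDs created from that point on, e.g.:

    setcpg -t r6 -ssz 12 NL_r6_cpg    # 10+2 for newly created LDs

Existing LDs keep their old 6+2 geometry until something like tunesys rewrites them.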

Re: CPG set size

Posted: Fri Jun 27, 2014 4:22 am
by RitonLaBevue
tunesys is not really the command to use; it does not work as expected.
It appeared in 3.1.1 and was buggy. It changed in 3.1.2, and more fixes came in 3.1.3. With 3.1.3 MU1 we tested it and...
We prefer to use tunevv or tuneld, depending on the situation (AO or no AO, disks full or not...).
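
For reference, a per-VV tune looks something like this (the VV and CPG names are made up; -f skips the confirmation prompt):

    tunevv usr_grp my_vv -f -cpg NL_r6_cpg_new

That moves the VV's user space onto LDs built with the target CPG's characteristics, so you can re-level one VV at a time instead of letting tunesys touch the whole system.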