CPG set size

Hi,
my company has ordered a 7200 system with 16 NL drives (all in one cage) and I've been wondering what would be the best set size to choose.
Since RAID 6 is recommended for NL drives, I'm wondering whether 6+2 would be the best set size, given that each controller node "owns" half of the disks during regular operation. Will I even be offered set sizes larger than 6+2 during CPG creation?
Thanks for your help.
Borkowski
Re: CPG set size
Yes, the maximum set size you can create with 16 drives is 6+2 (2 controllers x (6+2) LDs = 16 disks).
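In CLI terms that would be something like the following sketch (the CPG name here is just a placeholder; note that -ssz counts data plus parity chunklets, so 6+2 means -ssz 8):

    # RAID 6, set size 8 (6 data + 2 parity), restricted to NL drives
    createcpg -t r6 -ssz 8 -p -devtype NL NL_R6_CPG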
Re: CPG set size
Cleanur wrote: Yes, the maximum set size you can create with 16 drives is 6+2 (2 controllers x (6+2) LDs = 16 disks).
And if that errors out, you may need to check the "advanced options" and select "mag safe" instead of the default "cage safe". Your mileage may vary based on the versions of InServ and IMC.
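The reason cage-safe can fail here: with all 16 drives in a single cage, no layout of an 8-wide set can survive the loss of an entire cage, while mag-level availability (surviving the loss of a drive magazine) is still achievable. In CLI terms that's the -ha option (same placeholder CPG name as above):

    # mag-level availability instead of the cage-level default
    createcpg -t r6 -ssz 8 -ha mag -p -devtype NL NL_R6_CPG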
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: CPG set size
Thanks for your replies, just a few more questions.
What happens when we decide to expand the system with more disks? E.g. if we opt for an additional 8 disks, would it be best to add them to the CPG and then change the set size to 10+2? Is it necessary to worry about set sizes and always try to achieve a "balanced" system in terms of controller load?
Re: CPG set size
It's a good idea to rethink your set size, or choose quantities that complement your existing set size, when adding more disks. tunesys and/or Dynamic Optimizer can be used after the install to rebalance.
Technically, if your CPG is set to use NL disks and you add more NL disks, you have to do nothing to start using them: data will start landing on them as long as the CPG specs match the disk type. But rebalancing is ideal.
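If you go the tunesys route, a minimal sketch (the -dr dry-run flag is from memory, so verify it against the CLI help on your InForm OS release):

    tunesys -dr   # preview the proposed chunklet moves without making them
    tunesys       # re-level data across the old and new disks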
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: CPG set size
As Richard said, if you go with mag-level availability and just add drives, the data will just land on their chunklets using the same stripe size. However, running tunesys after adding drives would re-level the data across the old and new disks and so spread the I/O more evenly.
It might be worth adjusting the stripe size to gain better capacity efficiency, assuming you have enough drives per node. E.g. 6+2 (16 disks) has a 25% parity overhead, whereas 8+2 (20 disks) has a 20% overhead and 10+2 (24 disks) has 16.7%. So you get more usable space the wider the stripe, but remember you can have multiple CPGs sharing the same disks, so it's very flexible.
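Those percentages are simply parity chunklets over total chunklets per set:

    6+2  -> 2/8  = 25.0%
    8+2  -> 2/10 = 20.0%
    10+2 -> 2/12 = 16.7%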
Re: CPG set size
tunesys is not really the command to use; it does not work as expected.
It appeared with 3.1.1 and was buggy. Things were supposed to change in 3.1.2... and then to be fixed in 3.1.3...
Even with 3.1.3 MU1 we tested it and...
We prefer to use tunevv or tuneld, depending on the situation (AO or no AO, disks full or not...).
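For example, tunevv can re-lay out a single VV within the same (or another) CPG; something like the following sketch, where the VV and CPG names are placeholders and the exact syntax should be checked against the CLI reference for your release:

    tunevv usr_cpg NL_R6_CPG -f my_vv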