Data Distribution Across PDs - PDs in Cage 1 Filling Up
Posted: Thu Jun 16, 2022 2:51 pm
Hello,
I’ve been asked to take over our 4-cage 7400c iSCSI 3PAR, which hasn’t been touched in many years. The cages hold a mix of two different sizes of FC drives as well as some SSDs. An incident occurred where the smaller FC drives (500GB, versus 1TB) in Cage 1 filled up completely, causing an outage. We worked with HPE and were able to free up some space; they also had us temporarily change the set size of the offending RAID 5 CPG until we could procure additional disks. While we still had support with HPE, I was able to update the Service Processor and the OS. However, our support expired before the new disks arrived.
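For context, this is how I’ve been checking per-disk fullness (a minimal sketch; option syntax may vary slightly by InForm OS version, so check the CLI help):

    # Chunklet usage per physical disk: totals, used, spare, free
    showpd -c

    # Same view, restricted to the drives in cage 1
    showpd -c -p -cg 1

A drive showing next to nothing in the free-chunklet column is what I mean by “full” here.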
I added the new disks, eight drives across cages 2 and 3, and the system discovered them automatically, though I ran admitpd just to be sure. I then changed the set size back to 6 from 3, as HPE had advised. Since then I have run tunesys multiple times and can see data being distributed to the new disks, which is good. However, data does not seem to be moving off the smaller 500GB drives in Cage 1: no matter how many times I run tunesys, even with chunkpct and dskpct set to 2% and a nodepct of 10%, those drives will not drop below 90% full.
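For reference, this is roughly the sequence I ran (the CPG name FC_r5 is a placeholder for ours; double-check the exact option syntax against the CLI help on your OS version):

    # Admit the newly installed drives (they were already discovered, but just in case)
    admitpd

    # Put the RAID 5 set size back to 6 (5 data + 1 parity) on the affected CPG
    setcpg -ssz 6 FC_r5

    # Rebalance, first with tighter chunklet/disk thresholds, then by node
    tunesys -chunkpct 2 -dskpct 2
    tunesys -nodepct 10

As I understand it, setcpg only changes how future LDs are created; LDs that already exist keep the set size they were built with until something re-lays them out.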
My question: how do I get data distributed off these nearly full disks, and how do I ensure that future CPG growth is spread evenly across all disks?
Any advice would be greatly appreciated; this user group has already helped me with general understanding as well as a few specific issues I’ve run into. I’m new to administering a SAN, this system was dumped in my lap, and sadly the previous administrator refuses to give me any assistance. Given the state the system was in when I was asked to take it over, though, I’m not sure I’d put much stock in his opinion anyway.
Thanks in advance to you 3PAR experts!