HPE Storage Users Group https://3parug.net/

ld:tp-1-sd-0.4 LD has 80 remote chunklets https://3parug.net/viewtopic.php?f=18&t=3841
Author: sebastian7780_2 [ Fri Oct 07, 2022 10:53 am ]
Post subject: ld:tp-1-sd-0.4 LD has 80 remote chunklets
Hello everyone. After the node change I had several errors like this. I fixed most of them with "tuneld -f tp-1-sd-0.13"; only 3 I cannot repair:

pdch ld:tp-1-sd-0.4 LD has 80 remote chunklets
pdch ld:tp-1-sd-0.13 LD has 86 remote chunklets
pdch ld:tp-1-sd-0.15 LD has 161 remote chunklets

Running "tuneld -f xxxxxx" gives me this error:

2022-10-07 12:46:12 -03 Created task.
2022-10-07 12:46:12 -03 Updated Executing "tuneld tp-1-sd-0.13" as 0:14437
2022-10-07 12:46:13 -03 Updated ****
2022-10-07 12:46:13 -03 Updated **** tuneld started
2022-10-07 12:46:13 -03 Updated ****
2022-10-07 12:46:14 -03 Updated Error - unable to create new LD - error: Could not find enough available disk space.
2022-10-07 12:46:14 -03 Updated No tuning possible
2022-10-07 12:46:14 -03 Error
2022-10-07 12:46:14 -03 Error Task exited with status 1
2022-10-07 12:46:14 -03 Failed Could not complete task.

Any ideas? As always, thanks.
Author: MammaGutt [ Fri Oct 07, 2022 2:29 pm ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
Is your system very full and unbalanced? You could try compactcpg as well. |
Author: sebastian7780_2 [ Mon Oct 10, 2022 8:33 pm ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
MammaGutt wrote: Is your system very full and unbalanced? You could try compactcpg as well.

Hello. Thanks for your help. It's all compacted. I have 22 TB free.
Author: MammaGutt [ Tue Oct 11, 2022 2:17 am ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
sebastian7780_2 wrote: Hello. Thanks for your help. It's all compacted. I have 22 TB free.

Are you able to complete tuneld after running compactcpg? If you have a balanced system and 22 TB of free capacity in the correct tier, you shouldn't have any issues running tuneld.
Author: sebastian7780_2 [ Tue Oct 11, 2022 8:22 am ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
MammaGutt wrote: Are you able to complete tuneld after running compactcpg? If you have a balanced system and 22 TB of free capacity in the correct tier, you shouldn't have any issues running tuneld.

Mammagutt, I have run tunesys several times. This is the "checkhealth -svc -detail" log:

PD Cage:2 PDs FC/10K/900GB unbalanced. Primary path: 8 on Node:0, 6 on Node:1
pdch ld:tp-1-sd-0.4 LD has 80 remote chunklets
pdch ld:tp-1-sd-0.15 LD has 161 remote chunklets

I was able to repair only one LD after compacting and running "tunesys -chunkpct 1".

showpd -path:

                            ---------Paths---------
Id CagePos Type -State-     A           B           Order
 0 2:8:0   FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 1/0
 1 2:4:0   FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 0/1
 3 2:6:0   FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 0/1
 4 0:4:0   FC   normal      1:0:1       0:0:1       1/0
 5 0:5:0   FC   normal      1:0:1       0:0:1       0/1
 6 0:6:0   FC   normal      1:0:1       0:0:1       1/0
 7 0:7:0   FC   normal      1:0:1       0:0:1       0/1
 8 0:8:0   FC   normal      1:0:1       0:0:1       1/0
 9 0:9:0   FC   normal      1:0:1       0:0:1       0/1
10 0:10:0  FC   normal      1:0:1       0:0:1       1/0
11 0:11:0  FC   normal      1:0:1       0:0:1       0/1
12 2:7:0   FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 1/0
13 2:9:0   FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 0/1
14 2:10:0  FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 1/0
15 2:11:0  FC   normal      1:0:2\1:0:1 0:0:2\0:0:1 0/1
16 1:0:0   NL   normal      1:0:1       0:0:1       1/0
17 1:1:0   NL   normal      1:0:1       0:0:1       0/1
18 1:4:0   NL   normal      1:0:1       0:0:1       0/1
19 1:5:0   NL   normal      1:0:1       0:0:1       1/0
20 1:8:0   NL   normal      1:0:1       0:0:1       1/0
21 1:9:0   NL   normal      1:0:1       0:0:1       0/1
22 1:12:0  NL   normal      1:0:1       0:0:1       0/1
23 1:13:0  NL   normal      1:0:1       0:0:1       1/0
24 1:16:0  NL   normal      1:0:1       0:0:1       1/0
25 1:17:0  NL   normal      1:0:1       0:0:1       0/1
26 1:20:0  NL   normal      1:0:1       0:0:1       0/1
27 1:21:0  NL   normal      1:0:1       0:0:1       1/0
28 2:0:0   SSD  normal      1:0:2       0:0:2       0/0
29 2:1:0   SSD  normal      1:0:2       0:0:2       0/0
30 2:2:0   SSD  normal      1:0:2       0:0:2       0/0
31 2:3:0   SSD  normal      1:0:2       0:0:2       0/0
32 0:0:0   SSD  normal      1:0:1\1:0:2 0:0:1\0:0:2 0/0
33 0:1:0   SSD  normal      1:0:1\1:0:2 0:0:1\0:0:2 0/0
34 0:2:0   SSD  normal      1:0:1\1:0:2 0:0:1\0:0:2 0/0
35 0:3:0   SSD  normal      1:0:1\1:0:2 0:0:1\0:0:2 0/0
36 0:12:0  FC   normal      1:0:1       0:0:1       1/0
37 0:13:0  FC   normal      1:0:1       0:0:1       0/1
38 0:14:0  FC   normal      1:0:1       0:0:1       1/0
39 0:15:0  FC   normal      1:0:1       0:0:1       0/1
40 2:5:0   FC   normal      1:0:2       0:0:2       0/1
41 0:17:0  FC   normal      1:0:1       0:0:1       0/1
42 0:18:0  FC   normal      1:0:1       0:0:1       1/0
43 0:19:0  FC   normal      1:0:1       0:0:1       0/1
44 0:20:0  SSD  normal      1:0:1       0:0:1       0/0
45 0:21:0  SSD  normal      1:0:1       0:0:1       0/0
46 0:22:0  SSD  normal      1:0:1       0:0:1       0/0
47 0:23:0  SSD  normal      1:0:1       0:0:1       0/0
48 2:12:0  FC   normal      1:0:2       0:0:2       1/0
49 2:13:0  SSD  normal      1:0:2       0:0:2       0/0
50 2:14:0  SSD  normal      1:0:2       0:0:2       0/0
51 2:15:0  FC   normal      1:0:2       0:0:2       0/1
52 2:16:0  FC   normal      1:0:2       0:0:2       1/0
53 2:17:0  FC   normal      1:0:2       0:0:2       0/1
54 2:18:0  FC   normal      1:0:2       0:0:2       1/0
55 2:19:0  FC   normal      1:0:2       0:0:2       0/1
56 2:20:0  SSD  normal      1:0:2       0:0:2       0/0
57 2:21:0  SSD  normal      1:0:2       0:0:2       0/0
58 2:22:0  SSD  normal      1:0:2       0:0:2       0/0
59 2:23:0  SSD  normal      1:0:2       0:0:2       0/0
60 0:16:0  FC   normal      1:0:1       0:0:1       1/0
-----------------------------------------------------
60 total

Do you think the balance error is related? How can it be solved? Do you need any other log? As I was saying, it all started after replacing a failed node. Thanks for your help.
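[Editor's note] The imbalance that checkhealth reports for the cage-2 FC drives can be cross-checked from the `showpd -path` output above. The sketch below is not a 3PAR tool, just an illustration: it tallies primary ownership per node for one cage and device type, under the assumption that the first field of the Order column ("1/0" → node 1) names the primary node, a reading that reproduces checkhealth's 8-on-Node:0 / 6-on-Node:1 split for this data:

```python
from collections import Counter

# The 14 cage-2 FC rows from the showpd -path output above
# (columns: Id CagePos Type State A B Order).
SHOWPD_ROWS = r"""
 0 2:8:0   FC normal 1:0:2\1:0:1 0:0:2\0:0:1 1/0
 1 2:4:0   FC normal 1:0:2\1:0:1 0:0:2\0:0:1 0/1
 3 2:6:0   FC normal 1:0:2\1:0:1 0:0:2\0:0:1 0/1
12 2:7:0   FC normal 1:0:2\1:0:1 0:0:2\0:0:1 1/0
13 2:9:0   FC normal 1:0:2\1:0:1 0:0:2\0:0:1 0/1
14 2:10:0  FC normal 1:0:2\1:0:1 0:0:2\0:0:1 1/0
15 2:11:0  FC normal 1:0:2\1:0:1 0:0:2\0:0:1 0/1
40 2:5:0   FC normal 1:0:2       0:0:2       0/1
48 2:12:0  FC normal 1:0:2       0:0:2       1/0
51 2:15:0  FC normal 1:0:2       0:0:2       0/1
52 2:16:0  FC normal 1:0:2       0:0:2       1/0
53 2:17:0  FC normal 1:0:2       0:0:2       0/1
54 2:18:0  FC normal 1:0:2       0:0:2       1/0
55 2:19:0  FC normal 1:0:2       0:0:2       0/1
"""

def primary_node_counts(rows, cage, devtype):
    """Tally PDs of one type in one cage by assumed primary node.

    Assumption (not documented behavior): the first field of the
    Order column is the primary node number.
    """
    counts = Counter()
    for line in rows.strip().splitlines():
        fields = line.split()
        if len(fields) < 7:
            continue  # skip blank/partial lines
        cagepos, dtype = fields[1], fields[2]
        if dtype != devtype or not cagepos.startswith(f"{cage}:"):
            continue
        primary = fields[-1].split("/")[0]
        counts[f"Node:{primary}"] += 1
    return counts

print(primary_node_counts(SHOWPD_ROWS, 2, "FC"))
# -> matches checkhealth: 8 on Node:0, 6 on Node:1
```

A 14-drive group split 8/6 cannot be balanced perfectly, which is why tunesys keeps flagging it; the one-off imbalance is cosmetic compared to the out-of-space error.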
Author: MammaGutt [ Tue Oct 11, 2022 10:27 am ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
What does "tuneld -f tp-1-sd-0.15" say? Also, are you sure you got the cabling correct after the node replacement? Something looks funky.
Author: sebastian7780_2 [ Tue Oct 11, 2022 12:53 pm ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
MammaGutt wrote: What does "tuneld -f tp-1-sd-0.15" say? Also, are you sure you got the cabling correct after node replacement? Something looks funky.

Sure. If I run "tuneld -f tp-1-sd-0.15" the job is never registered in the tasks. Instantly the mail arrives saying that it was not possible to execute... there is no space. The task log is:

2022-10-11 14:47:28 -03 Created task.
2022-10-11 14:47:28 -03 Updated Executing "tuneld tp-1-sd-0.15" as 0:14794
2022-10-11 14:47:29 -03 Updated ****
2022-10-11 14:47:29 -03 Updated **** tuneld started
2022-10-11 14:47:29 -03 Updated ****
2022-10-11 14:47:30 -03 Updated Error - unable to create new LD - error: Could not find enough available disk space.
2022-10-11 14:47:30 -03 Updated No tuning possible
2022-10-11 14:47:30 -03 Error
2022-10-11 14:47:30 -03 Error Task exited with status 1
2022-10-11 14:47:30 -03 Failed Could not complete task.

The cables are OK:

3par_Ducasse cli% checkhealth cabling
Checking wiring
The following components are healthy: cabling

3par_Ducasse cli% showspace
--Estimated(MB)---
 RawFree UsableFree
 5931008    2965504

Thanks.
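[Editor's note] The showspace figures above are worth converting by hand, because they are much smaller than the "22 TB free" mentioned earlier. Plain arithmetic, assuming the CLI's MB are binary MiB (as 3PAR output generally reports):

```python
# showspace values from the output above, in MiB
raw_free_mb = 5931008
usable_free_mb = 2965504

# MiB -> TiB (1 TiB = 1024 * 1024 MiB)
raw_free_tib = raw_free_mb / 1024**2        # 5.65625 TiB raw
usable_free_tib = usable_free_mb / 1024**2  # 2.828125 TiB usable

print(f"RawFree: {raw_free_tib} TiB, UsableFree: {usable_free_tib} TiB")
```

So the array only estimates about 2.8 TiB of usable free space under the default LD parameters, which may explain why tuneld fails with "Could not find enough available disk space" even though other views report 22 TB free.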
Author: MammaGutt [ Tue Oct 11, 2022 2:36 pm ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
Okay.... tp-1..... means this is an LD for CPG #1.

Issue "showcpg" and find the CPG with ID 1. Then do "showspace -cpg <cpgname>". Then do "showpd -c -p -devtype <SSD, FC or NL depending on the CPG>" and look at how much free space you have for that tier.

You could do "showpd" ... the path showing "*" indicates which node owns a PD. Verify that this is equal on both nodes.

You can do "showld" to see the size of the two LDs. A normal LD is usually 256 GiB. If you have LDs that are bigger (SizeMB) than their usage (UsedMB), you should be able to reclaim that capacity by doing "compactcpg -f <cpgname>" .... Remember, the first digit in an LD name is the CPG ID.... And you should expect more than a few GiB of reclaimable space before bothering to do compactcpg on a CPG.
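[Editor's note] The SizeMB-vs-UsedMB check above can be scripted against captured output. A rough sketch, not a 3PAR tool: the column layout below is simplified and hypothetical (real `showld` output has more columns and varies by firmware), so treat it as illustrative only:

```python
def reclaimable_lds(showld_text):
    """Return (name, size_mb - used_mb) for LDs with unused allocated space.

    Assumes a simplified showld-style layout whose header contains
    'Name', 'SizeMB' and 'UsedMB' columns.
    """
    lines = [l for l in showld_text.strip().splitlines() if l.strip()]
    header = lines[0].split()
    name_i, size_i, used_i = (header.index(c) for c in ("Name", "SizeMB", "UsedMB"))
    results = []
    for line in lines[1:]:
        fields = line.split()
        if len(fields) < len(header):
            continue  # skip separators/totals
        size, used = int(fields[size_i]), int(fields[used_i])
        if size > used:
            results.append((fields[name_i], size - used))
    return results

# Hypothetical sample rows (names modeled on the thread, numbers invented)
sample = """\
Id Name         RAID Own SizeMB UsedMB
10 tp-1-sd-0.4  5    0/1 262144 262144
15 tp-1-sd-0.15 5    0/1 262144 180224
"""
print(reclaimable_lds(sample))  # [('tp-1-sd-0.15', 81920)]
```

Since the "1" after "tp-" is the CPG ID, a large total here would suggest `compactcpg -f` on CPG 1 is worth a try before retrying tuneld.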
Author: sebastian7780_2 [ Wed Oct 12, 2022 6:49 pm ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
MammaGutt wrote: Okay.... tp-1..... means this is an LD for CPG #1 (...)

Mammagutt, an update: after running AO and compactcpg several rounds, again and again, little by little I repaired the LDs. Thank you again.
Author: MammaGutt [ Thu Oct 13, 2022 12:32 am ]
Post subject: Re: ld:tp-1-sd-0.4 LD has 80 remote chunklets
sebastian7780_2 wrote: Mammagutt, an update: after running AO and compactcpg several rounds, again and again, little by little I repaired the LDs. Thank you again.

Ahhh, you're running AO.... AO, compactcpg and tuneld/tunesys are three tasks that fight each other. If two of them run at the same time, you're 99% sure that at least one will fail (or not fully complete). Next time, disable AO before you start and re-enable it when you're done. Then you'll be able to complete on the first attempt.
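[Editor's note] The takeaway above can be made into a pre-flight habit: before launching a tune, check captured `showtask -active` output for AO or compaction tasks already running. A sketch, not a 3PAR tool; the task-type strings in CONFLICTING are hypothetical placeholders, since the exact names shown by `showtask` vary by 3PAR OS version:

```python
# Task types assumed to conflict with a manual tune. These strings are
# placeholders; check what your array's `showtask` actually prints.
CONFLICTING = ("ao", "compact_cpg", "tunesys", "tuneld")

def conflicting_tasks(showtask_text):
    """Return ids of active tasks whose type matches a conflicting type.

    Assumes each line looks like: '<id> <type> <name> <status> ...'.
    """
    hits = []
    for line in showtask_text.strip().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1].lower() in CONFLICTING:
            hits.append(fields[0])
    return hits

# Hypothetical sample output
sample = """\
1234 ao          AO_run active
1235 compact_cpg FC_r5  active
"""
print(conflicting_tasks(sample))  # ['1234', '1235']
```

If the list is non-empty, wait for (or cancel) those tasks and pause the AO schedule before starting tunesys/tuneld.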
All times are UTC - 5 hours
Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group http://www.phpbb.com/