zotgene
Posts: 4 Joined: Fri Feb 05, 2021 5:58 pm
by zotgene » Fri Feb 05, 2021 6:12 pm
Hello everybody.
We had to move a 3PAR 7200, so we shut it down properly.
Once powered back on, we got a nasty surprise:
Code: Select all
InServ cli% showsysmgr
TOC quorum found for TOC 326028, but waiting for nodes 1 to boot up.
If the nodes are offline, use 'setsysmgr tocgen 326028'.
TOCs found:
Valid TOC: Name(3PAR-PEC) Sysid(28555) Gen(326028) Modified(2021-02-05 19:52:36 CET) Disk count(32/32)
32 disks were available to be examined for TOCs.
It has been like this for more than two hours, and node 1 shows a solid blue LED.
I tried:
Code: Select all
InServ cli% shutdownnode reboot 1
Error: System manager is still initializing - try this command later.
Use 'showsysmgr' to see the current system state.
Code: Select all
InServ cli% showversion
Release version 3.1.2 (MU2)
Patches: P11,P19,P25,P38,P39

Component Name    Version
CLI Server        3.1.2 (MU2)
CLI Client        3.1.2 (MU2)
System Manager    3.1.2 (P11)
Kernel            3.1.2 (MU2)
TPD Kernel Code   3.1.2 (MU2)
TPD Kernel Patch  3.1.2 (P38)
I am really desperate. Is there anything I can try?
zotgene
Posts: 4 Joined: Fri Feb 05, 2021 5:58 pm
by zotgene » Fri Feb 05, 2021 8:23 pm
I tried:
Code: Select all
InServ cli% setsysmgr tocgen 326028
Are you sure you want to run setsysmgr tocgen?
select q=quit y=yes n=no: y

InServ cli% showsysmgr
System is reporting an unknown initialization type: ms_tocgen
TOC quorum found for TOC 326028, but waiting for nodes 1 to boot up because we need to recover from a previous powerfail.
-You can use 'setsysmgr force_iderecovery' to force recovery with possible data loss.
TOCs found:
Valid TOC: Name(3PAR-PEC) Sysid(28555) Gen(326028) Modified(2021-02-05 19:52:36 CET) Disk count(32/32)
32 disks were available to be examined for TOCs.

InServ cli% setsysmgr force_iderecovery
Are you sure you want to run setsysmgr force_iderecovery?
select q=quit y=yes n=no: y

InServ cli% showsysmgr
System is reporting an unknown initialization type: ms_tocgen
TOC quorum found for TOC 326028, but waiting for nodes 1 to boot up because we need to recover from a previous powerfail.
-You can use 'setsysmgr force_iderecovery' to force recovery with possible data loss.
TOCs found:
Valid TOC: Name(3PAR-PEC) Sysid(28555) Gen(326028) Modified(2021-02-05 19:52:36 CET) Disk count(32/32)
32 disks were available to be examined for TOCs.
but after a while, without my touching the console, I got this:
Code: Select all
InServ cli% showsysmgr
System is recovering from a previous power failure.
TOC quorum found for TOC 326033, but waiting for nodes 1 to boot up.
If the nodes are offline, use 'setsysmgr tocgen 326033'.
TOCs found:
Valid TOC: Name(3PAR-PEC) Sysid(28555) Gen(326033) Modified(2021-02-06 02:05:55 CET) Disk count(32/32)
32 disks were available to be examined for TOCs.
MammaGutt
Posts: 1578 Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe
by MammaGutt » Fri Feb 05, 2021 9:40 pm
Sounds like a dead node.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
zotgene
Posts: 4 Joined: Fri Feb 05, 2021 5:58 pm
by zotgene » Fri Feb 05, 2021 10:02 pm
Yes, I think so.
After the commands above, the system started with just one node:
Code: Select all
InServ cli% shownode
                                                       Control Data    Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1628555-0 OK      Yes    Yes       Off          GreenBlnk    8192    8192            0
But I can also see this, and it worries me. What happened?
The LUNs and the ESX datastores look OK, but...
Code: Select all
---------------------------------------------------------
-----------------------Disk Inventory--------------------
---------------------------------------------------------
Id CagePos State    ----Node_WWN---- --MFR-- -----Model------ -Serial- -FW_Rev- Protocol MediaType
 0 1:7:0   degraded 5000CCA06F074FF7 HITACHI HCBRE0450GBAS10K W6G40NPG 3P00     SAS      Magnetic
 1 0:1:0   degraded 5000CCA02252725B HITACHI HCBRE0450GBAS10K KMWGAMDL 3P00     SAS      Magnetic
 2 0:2:0   degraded 5000CCA0225270EB HITACHI HCBRE0450GBAS10K KMWGAJEL 3P00     SAS      Magnetic
 3 0:3:0   degraded 5000CCA0225261C7 HITACHI HCBRE0450GBAS10K KMWG9J5L 3P00     SAS      Magnetic
 4 0:4:0   degraded 5000CCA0224D2AA3 HITACHI HCBRE0450GBAS10K KMWBELSF 3P00     SAS      Magnetic
 5 0:5:0   degraded 5000CCA0224EC4E7 HITACHI HCBRE0450GBAS10K KMWD9XSF 3P00     SAS      Magnetic
 6 0:6:0   degraded 5000CCA0224A1F87 HITACHI HCBRE0450GBAS10K KMW9SR8F 3P00     SAS      Magnetic
 7 0:7:0   degraded 5000CCA02252618B HITACHI HCBRE0450GBAS10K KMWG9HPL 3P00     SAS      Magnetic
 8 0:8:0   degraded 5000CCA022497DDF HITACHI HCBRE0450GBAS10K KMW9DYJF 3P00     SAS      Magnetic
 9 0:9:0   degraded 5000CCA022526197 HITACHI HCBRE0450GBAS10K KMWG9HTL 3P00     SAS      Magnetic
10 0:10:0  degraded 5000CCA022526483 HITACHI HCBRE0450GBAS10K KMWG9PUL 3P00     SAS      Magnetic
11 0:11:0  degraded 5000CCA0224ECF5F HITACHI HCBRE0450GBAS10K KMWDAMBF 3P00     SAS      Magnetic
12 0:12:0  degraded 5000CCA0224ECEFF HITACHI HCBRE0450GBAS10K KMWDALLF 3P00     SAS      Magnetic
13 0:13:0  degraded 5000CCA0224970BB HITACHI HCBRE0450GBAS10K KMW9D2DF 3P00     SAS      Magnetic
14 0:14:0  degraded 5000CCA02241D6B3 HITACHI HCBRE0450GBAS10K KMW56GTF 3P00     SAS      Magnetic
15 0:15:0  degraded 5000CCA0224D279F HITACHI HCBRE0450GBAS10K KMWBEDJF 3P00     SAS      Magnetic
16 1:0:0   degraded 5000CCA0224ECF03 HITACHI HCBRE0450GBAS10K KMWDALMF 3P00     SAS      Magnetic
17 1:1:0   degraded 5000CCA0224BD0BB HITACHI HCBRE0450GBAS10K KMWAPKML 3P00     SAS      Magnetic
18 0:0:0   degraded 5000CCA06F03187B HITACHI HCBRE0450GBAS10K W6G1PT2X 3P00     SAS      Magnetic
20 1:4:0   degraded 5000CCA0224BD173 HITACHI HCBRE0450GBAS10K KMWAPM3L 3P00     SAS      Magnetic
21 1:5:0   degraded 5000CCA0224B4FFF HITACHI HCBRE0450GBAS10K KMWADZVL 3P00     SAS      Magnetic
22 1:6:0   degraded 5000CCA0224DDCBF HITACHI HCBRE0450GBAS10K KMWBUGGL 3P00     SAS      Magnetic
24 1:8:0   degraded 5000CCA0224BE0F3 HITACHI HCBRE0450GBAS10K KMWARN3L 3P00     SAS      Magnetic
25 1:9:0   degraded 5000CCA0224BD08F HITACHI HCBRE0450GBAS10K KMWAPK8L 3P00     SAS      Magnetic
26 1:10:0  degraded 5000CCA0224ECF0B HITACHI HCBRE0450GBAS10K KMWDALPF 3P00     SAS      Magnetic
27 1:11:0  degraded 5000CCA0224ECF23 HITACHI HCBRE0450GBAS10K KMWDALWF 3P00     SAS      Magnetic
28 1:12:0  degraded 5000CCA0224A1F3B HITACHI HCBRE0450GBAS10K KMW9SPNF 3P00     SAS      Magnetic
29 1:13:0  degraded 5000CCA0224ECF57 HITACHI HCBRE0450GBAS10K KMWDAM9F 3P00     SAS      Magnetic
30 1:14:0  degraded 5000CCA0224ECF1B HITACHI HCBRE0450GBAS10K KMWDALUF 3P00     SAS      Magnetic
31 1:15:0  degraded 5000CCA0224D2A5F HITACHI HCBRE0450GBAS10K KMWBEL6F 3P00     SAS      Magnetic
32 1:2:0   degraded 5000CCA055052377 HITACHI HCBRE0450GBAS10K KHG2ULTR 3P00     SAS      Magnetic
33 1:3:0   degraded 5000CCA06F05985F HITACHI HCBRE0450GBAS10K W6G32D4G 3P00     SAS      Magnetic
--------------------------------------------------------------------------------------------------
zotgene
Posts: 4 Joined: Fri Feb 05, 2021 5:58 pm
by zotgene » Fri Feb 05, 2021 10:52 pm
Hmm, I think this is normal with just one node:
Code: Select all
InServ cli% showpd -s 27
Id CagePos Type -State-- -Detailed_State-
27 1:11:0  FC   degraded missing_A_port
-----------------------------------------
 1 total

InServ cli% showpd -s 18
Id CagePos Type -State-- -Detailed_State-
18 0:0:0   FC   degraded missing_A_port
-----------------------------------------
 1 total
ailean
Posts: 392 Joined: Wed Nov 09, 2011 12:01 pm
by ailean » Mon Feb 08, 2021 4:15 am
Yes, that would be expected with a node/path down. Have you tried attaching a serial connection to Node 1 during boot? You might at least get a failure reason (disk, RAM, etc.).
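For anyone following along: attaching to a node's serial maintenance port is just a terminal-emulator session. A minimal sketch from a Linux laptop, assuming the serial port runs at 9600 baud 8N1 (common for 3PAR maintenance ports, but check your model's docs) and that your USB-serial adapter enumerates as /dev/ttyUSB0 (a hypothetical path; adjust to your system):

```shell
# List candidate serial devices first; the ttyUSB0 name below is an assumption.
ls /dev/ttyUSB* /dev/ttyS* 2>/dev/null

# Attach with screen at 9600 baud (8 data bits, no parity, 1 stop bit is
# the screen default). Leave this running while the node boots so you can
# capture any POST/boot failure messages.
screen /dev/ttyUSB0 9600

# When done: Ctrl-A then K kills the screen session.
```
`minicom -D /dev/ttyUSB0 -b 9600` works equally well if you prefer minicom; the point is simply to watch the node's console output during boot, since a node stuck at a fixed blue LED often prints its failure reason only to serial.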