
Re: a couple of issues with 8440

Posted: Fri Jul 09, 2021 1:31 pm
by PaulC
sansw1 & sansw2 are Cisco 9148s; sansw4 is a Brocade 8/8.

As far as cabling goes, a lot has been moved around, but in the initial setup all the top FC connections on the back of the 3PAR controllers went to the 1st Cisco switch, and the bottoms went to the 2nd.

How is that not good?

The PDF, along with my text diagram, shows which nodes are connected and talking to which specific ports.

Re: a couple of issues with 8440

Posted: Fri Jul 09, 2021 3:42 pm
by MammaGutt
PaulC wrote: sansw1 & sansw2 are Cisco 9148s; sansw4 is a Brocade 8/8.

As far as cabling goes, a lot has been moved around, but in the initial setup all the top FC connections on the back of the 3PAR controllers went to the 1st Cisco switch, and the bottoms went to the 2nd.

How is that not good?

The PDF, along with my text diagram, shows which nodes are connected and talking to which specific ports.


I couldn't make sense of the PDF. Anyway, on cabling the 3PAR: if port 3:0:2 were to go down, the persistent port feature on the 3PAR would move that port's WWN to its partner port (2:0:2), which isn't in the same fabric, resulting in path failure. Also, the ports are not equally distributed, giving a potentially odd number of paths per fabric (and per host), which again could result in interesting performance issues.
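
To illustrate (just a sketch, assuming CLI access to the two Cisco 9148s and that sansw1/sansw2 are meant to be two separate fabrics), you can see which 3PAR target ports actually landed in which fabric from the switches themselves:

  show flogi database    (every port WWN currently logged in to that switch)
  show fcns database     (the fabric-wide name server view, with device types)

Each partner pair from showport (e.g. 3:0:2 and 2:0:2, port WWNs 2302... and 2202...) needs to sit in the same fabric for persistent ports to keep the path alive during a failover.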

As for the rest, I can only tell you what you need to do, since you're not giving me the information:
1. Collect the WWNs from the hosts.
2. Verify that those WWNs are visible in the FC switch name server (if they are not, there is a host or VC issue).
3. Verify that those WWNs are seen by the 3PAR (check showhost -d; if you don't find a WWN visible on a port, you need to check zoning).

If all three of these check out, then you start looking at the 3PAR, making sure the correct WWNs are mapped to the correct host with the correct persona, and that your volumes are exported to the correct hosts.
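
A rough command sketch of those three checks (assuming you can reach the Cisco switches, the Brocade and the 3PAR CLI; adjust port/host names to your setup):

  On the Cisco 9148s:   show fcns database      (step 2: are the host WWNs in the name server?)
  On the Brocade 8/8:   nsshow                  (same check on sansw4)
                        zoneshow                (is a host WWN zoned with a 3PAR port WWN?)
  On the 3PAR CLI:      showhost -d             (step 3: does each WWN show an N:S:P port, not "---"?)
                        showportdev ns 0:0:1    (what the 3PAR itself sees logged in on a given port)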

Re: a couple of issues with 8440

Posted: Fri Jul 09, 2021 3:55 pm
by PaulC
thanks

So this showhost -d command is from the 3PAR CLI, then.

I'll check that

Re: a couple of issues with 8440

Posted: Fri Jul 09, 2021 4:01 pm
by PaulC
cli% showhost -d
Id Name Persona -WWN/iSCSI_Name- Port IP_addr
1 FACS1 OpenVMS 5001438024293788 1:0:1 n/a
1 FACS1 OpenVMS 5001438024293788 0:0:2 n/a
1 FACS1 OpenVMS 5001438024293788 3:0:1 n/a
2 facs3 OpenVMS 50060B0000C26200 --- n/a
2 facs3 OpenVMS 201100110A000405 --- n/a
2 facs3 OpenVMS 201200110A000405 --- n/a
2 facs3 OpenVMS 201300110A000405 --- n/a
2 facs3 OpenVMS 201400110A000405 --- n/a
0 FACST1 OpenVMS 5001438001697484 2:0:1 n/a
0 FACST1 OpenVMS 5001438001697484 0:0:1 n/a
------------------------------------------------
10 total

showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner FailoverState
0:0:1 target ready 2FF70002AC01A6E1 20010002AC01A6E1 host FC FACST1 1:0:1 none
0:0:2 target ready 2FF70002AC01A6E1 20020002AC01A6E1 host FC FACS1 1:0:2 none
0:1:1 initiator ready 50002ACFF701A6E1 50002AC01101A6E1 disk SAS DP-1 - -
0:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC01201A6E1 free SAS DP-2 - -
0:3:1 peer offline - 480FCFA33925 free IP IP0 - -
1:0:1 target ready 2FF70002AC01A6E1 21010002AC01A6E1 host FC FACS1 0:0:1 none
1:0:2 target ready 2FF70002AC01A6E1 21020002AC01A6E1 free FC - 0:0:2 none
1:1:1 initiator ready 50002ACFF701A6E1 50002AC11101A6E1 disk SAS DP-1 - -
1:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC11201A6E1 free SAS DP-2 - -
1:3:1 peer offline - 3464A9EADB41 free IP IP1 - -
2:0:1 target ready 2FF70002AC01A6E1 22010002AC01A6E1 host FC FACST1 3:0:1 none
2:0:2 target ready 2FF70002AC01A6E1 22020002AC01A6E1 free FC - 3:0:2 none
2:1:1 initiator ready 50002ACFF701A6E1 50002AC21101A6E1 disk SAS DP-1 - -
2:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC21201A6E1 free SAS DP-2 - -
2:3:1 peer offline - 3464A9EABD0D free IP IP2 - -
3:0:1 target ready 2FF70002AC01A6E1 23010002AC01A6E1 host FC FACS1 2:0:1 none
3:0:2 target ready 2FF70002AC01A6E1 23020002AC01A6E1 free FC - 2:0:2 none
3:1:1 initiator ready 50002ACFF701A6E1 50002AC31101A6E1 disk SAS DP-1 - -
3:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC31201A6E1 free SAS DP-2 - -
3:3:1 peer offline - 3464A9EAB895 free IP IP3 - -
--------------------------------------------------------------------------------------------------------
20

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 1:04 am
by MammaGutt
You have no redundancy in your storage fabric.

FACS1 is only connected through SW2, and on only one FC port on the host.

FACST1 is only connected through SW1, and on only one FC port on the host.

facs3 is not connected to the 3PAR through any SAN switch. But I do see some of those WWNs in the nsshow from SW4, so that's most likely a zoning thing.
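
For example (a sketch, assuming SW4 is the Brocade and you still have its CLI), this is roughly what I'd look at to confirm that:

  switchshow      (which ports are online, and how many NPIV logins sit behind each)
  nsshow          (all WWNs in the local name server; look for the facs3 WWNs from showhost,
                   i.e. the 50:06:0b:... HBA WWN and the 20:1x:... ones)
  cfgactvshow     (the effective zoning; the host WWN and a 3PAR target WWN must share a zone)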

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 10:09 am
by PaulC
facs1 is plugged into 2 switches but has only 3 paths, not 4, and I still don't know why. It sees the 3PAR on the 4th but won't talk to it. I added another WWN on the 3PAR; let's see if that helps.

facst1 is connected to only one switch. I know this.

As far as the BL870c goes, well, I have nothing.

On each switch, all the ports are in one zone together.

What you also don't know, which is irrelevant to this issue, is that all the disks are shadowed to an EVA, and once I have viable connections I will be taking FACS1 down and replacing it with the BL870c.

The blade is my primary goal at the moment; why it can't talk to the 3PAR has me (and others) stumped.

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 12:33 pm
by PaulC
Here are the 2 FC modules. I have inserted the WWNs into the 3PAR (logged-in ones only).

let me know if there is anything else that would be helpful

thanks
Paul

cli% showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2 UniqNodeWwn VCN IntCoal TMWO Smart_SAN
0:0:1 host point auto 16Gbps disabled disabled disabled disabled disabled unknown
0:0:2 host point auto 16Gbps disabled disabled disabled disabled disabled unsupported
0:1:1 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
0:1:2 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
1:0:1 host point auto 16Gbps disabled disabled disabled disabled disabled unsupported
1:0:2 host point auto 16Gbps disabled disabled disabled disabled disabled unsupported
1:1:1 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
1:1:2 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
2:0:1 host point auto 16Gbps disabled disabled disabled disabled disabled unknown
2:0:2 host point auto 16Gbps disabled disabled disabled disabled disabled unsupported
2:1:1 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
2:1:2 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
3:0:1 host point auto 16Gbps disabled disabled disabled disabled disabled unsupported
3:0:2 host point auto 16Gbps disabled disabled disabled disabled disabled unknown
3:1:1 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
3:1:2 disk point 12Gbps 12Gbps n/a n/a n/a enabled n/a n/a
---------------------------------------------------------------------------------------------------
16
FACSSAN1 cli% showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol Label Partner FailoverState
0:0:1 target ready 2FF70002AC01A6E1 20010002AC01A6E1 host FC FACST1 1:0:1 none
0:0:2 target ready 2FF70002AC01A6E1 20020002AC01A6E1 host FC FACS1 1:0:2 none
0:1:1 initiator ready 50002ACFF701A6E1 50002AC01101A6E1 disk SAS DP-1 - -
0:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC01201A6E1 free SAS DP-2 - -
0:3:1 peer offline - 480FCFA33925 free IP IP0 - -
1:0:1 target ready 2FF70002AC01A6E1 21010002AC01A6E1 host FC FACS1 0:0:1 none
1:0:2 target ready 2FF70002AC01A6E1 21020002AC01A6E1 free FC - 0:0:2 none
1:1:1 initiator ready 50002ACFF701A6E1 50002AC11101A6E1 disk SAS DP-1 - -
1:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC11201A6E1 free SAS DP-2 - -
1:3:1 peer offline - 3464A9EADB41 free IP IP1 - -
2:0:1 target ready 2FF70002AC01A6E1 22010002AC01A6E1 host FC FACST1 3:0:1 none
2:0:2 target ready 2FF70002AC01A6E1 22020002AC01A6E1 free FC - 3:0:2 none
2:1:1 initiator ready 50002ACFF701A6E1 50002AC21101A6E1 disk SAS DP-1 - -
2:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC21201A6E1 free SAS DP-2 - -
2:3:1 peer offline - 3464A9EABD0D free IP IP2 - -
3:0:1 target ready 2FF70002AC01A6E1 23010002AC01A6E1 host FC FACS1 2:0:1 none
3:0:2 target ready 2FF70002AC01A6E1 23020002AC01A6E1 free FC - 2:0:2 none
3:1:1 initiator ready 50002ACFF701A6E1 50002AC31101A6E1 disk SAS DP-1 - -
3:1:2 initiator loss_sync 50002ACFF701A6E1 50002AC31201A6E1 free SAS DP-2 - -
3:3:1 peer offline - 3464A9EAB895 free IP IP3 - -
--------------------------------------------------------------------------------------------------------
20
FACSSAN1 cli% showhost
Id Name Persona -WWN/iSCSI_Name- Port
1 FACS1 OpenVMS 5001438024293788 1:0:1
5001438024293788 0:0:2
5001438024293788 3:0:1
500143802429378A ---
2 facs3 OpenVMS 50060B0000C26200 ---
201100110A000405 ---
201200110A000405 ---
201300110A000405 ---
201400110A000405 ---
201100110A004A6F ---
201200110A004A6F ---
201300110A004A6F ---
201400110A004A6F ---
100000110A000405 ---
100000110A004A6F ---
0 FACST1 OpenVMS 5001438001697484 2:0:1
5001438001697484 0:0:1
----------------------------------------
17 total

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 1:32 pm
by MammaGutt
Let me first start with "Yikes!". You have some serious issues here.

Uplink port 2 in VC FC Bay 4 and uplink ports 2 and 4 in Bay 3 are connected to the same SAN switch.

Uplink ports 1, 3 and 4 in Bay 4 are connected to another SAN switch.

Uplink ports 1 and 3 in Bay 3 are connected to yet another SAN switch.

For VC FC, all uplinks defined in the same SAN fabric must go to the same SAN switch/fabric. VC FC is based on NPIV and round-robin, so on every reboot the host can jump to another uplink and possibly end up in a different SAN switch/fabric.
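
One way to see that behaviour (a sketch, assuming the uplink in question goes to one of the 9148s; fc1/5 is just a placeholder port):

  show flogi database interface fc1/5

An NPIV uplink shows one FLOGI from the VC module itself plus an FDISC-based login per blade WWN behind it, and after a host reboot the blade's WWN can reappear behind a different uplink.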

Unless there is some very good reason for not doing so, you should get rid of one SAN switch. Connect all x:0:1 ports from the 3PAR to SW1 and all x:0:2 ports to SW2. Connect all uplinks from VC FC Bay 3 to SW1 and all from Bay 4 to SW2. Connect all "port 1" HBAs on rack hosts to SW1 and all "port 2" HBAs to SW2. Then create zoning where each host gets 2 paths to the 3PAR in each fabric, resulting in 4 paths in total per host.

For the BL870c you should only zone the HBA WWNs, the ones ending in 62:00 and 62:02. Everything starting with 20:11, 20:12, 20:13 etc. is the VC FC ports themselves and should not be zoned.
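
As a sketch only (Cisco MDS syntax, VSAN 1 assumed, zone and zoneset names made up), zoning the blade's ...62:00 HBA port in fabric A against two of the 3PAR x:0:1 targets from the layout above would look roughly like this:

  conf t
   zone name bl870_hba0_3par vsan 1
    member pwwn 50:06:0b:00:00:c2:62:00
    member pwwn 20:01:00:02:ac:01:a6:e1
    member pwwn 21:01:00:02:ac:01:a6:e1
   zoneset name fabricA_cfg vsan 1
    member bl870_hba0_3par
   zoneset activate name fabricA_cfg vsan 1

The matching zone on SW2 (fabric B) would use the ...62:02 WWN against two x:0:2 target ports, and with enhanced zoning you'd also need a zone commit vsan 1 at the end.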

My recommendation would be to get someone in to take a look at this for you. There is just so much stuff here that should be addressed.

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 2:12 pm
by PaulC
Yes, there is a good reason: nothing was working, so it turned into "let's throw everything at it and see what sticks!"

I did have it cabled that way originally, and hence how we got to where we are today.

This is what I've been missing!

"For the BL870c you should only zone the HBA WWNs, the ones ending in 62:00 and 62:02. Everything starting with 20:11, 20:12, 20:13 etc. is the VC FC ports themselves and should not be zoned."

Didn't make a difference to the blade... still seeing the same thing!

HP Smart Array P410i Controller (version 6.42)
1785-Drive Array Not Configured
No Drives Detected


Currently the controller is in HBA mode
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP Smart Array P410i Controller (version 6.42)
1785-Drive Array Not Configured
No Drives Detected


Currently the controller is in HBA mode
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
HP PCIe 2Port 8Gb Fibre Channel Adapter (driver 2.27, firmware 5.06.006)
ReconnectController(0,0,0) : Status = Success

Shell> map
map: Cannot find required map name.

Shell> map -r
map: Cannot find required map name.

Shell>

Re: a couple of issues with 8440

Posted: Sat Jul 10, 2021 2:58 pm
by MammaGutt
Okay...

So where is the zoning for this host? SW1, 2 or 4? Based on nsshow I know that SW4 is connected to both VC modules, and it's the module in Bay 3 that carries the 62:00 WWN you have probably zoned and added on the 3PAR.

Disconnect (or remove from the VC SAN fabric) ports 1 and 3 in the Bay 4 VC module and reboot the server.
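
After the reboot, a quick sanity check (a sketch; the host name facs3 is taken from your showhost output):

  On SW4 (Brocade):   nsshow                 (the ...62:00 WWN should now log in every time)
  On the 3PAR CLI:    showhost -d            (the blade's WWN should show an N:S:P port, not "---")
                      showvlun -host facs3   (confirm the exported volumes actually present paths)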