Demo Scenario HP 3Par 7000 Storage
Hey Guys,
First of all: I'm not a storage expert and I don't work in any IT department.
I'm currently creating a demo scenario for customer presentations with a focus on storage devices.
I decided to "document" an HP 3PAR StoreServ 7000, and it is almost done (SAN).
I've found almost all the information I need, but one piece (or maybe more) is still missing.
At the moment my scenario looks like this: see attachment.
The connections are also almost done, and this is where I'm missing information.
Can anyone tell me how to connect the HP 3PAR Service Processor?
How do I connect it to the controllers? Which PCI card (I guess) is missing?
Can anyone help me based on my second screenshot?
(I know the PDUs are still missing)
I used the following sources:
http://vmfocus.com/2013/08/22/3par-stor ... ice-guide/
http://h20628.www2.hp.com/km-ext/kmcsdi ... 9821-3.pdf
http://www8.hp.com/h20195/v2/GetPDF.asp ... 967ENW.pdf
http://h20628.www2.hp.com/km-ext/kmcsdi ... 2717-2.pdf
Re: Demo Scenario HP 3Par 7000 Storage
The Service Processor connects to the 3PAR nodes' management ports via the LAN. On the back of the SP, the lowest-numbered Ethernet port (of the four on the left of the SP) needs to be on the same subnet as the array's management ports.
Re: Demo Scenario HP 3Par 7000 Storage
Do I have to connect every "MGMT" port of the four controllers?
So an Ethernet switch is missing, I guess.
Why the hell should the SP be placed below the nodes?
emr_na-c03819821-3.pdf page 16
Another question: what type of connector (cable) does that "Intr-0" port (interconnect) use?
Re: Demo Scenario HP 3Par 7000 Storage
Yes, all four management ports are connected; however, only one is active at any time, so they share a single IP address which fails over between nodes as required, e.g. during online upgrades. Typically the SP and these four ports will connect to a management VLAN (on a switch) that is routable from the IMC/CLI etc.
If you intend to use Remote Copy over the built-in RCIP ports, then these should be connected to a different VLAN.
The SP sits at the bottom for ease of maintenance and to simplify cabling, so as not to interfere with active components if it needs to be serviced. It could easily sit elsewhere though, so long as it doesn't interfere with expansion or maintenance and is able to communicate with the controllers. Keeping it close by means you can always run a crossover cable if you have a problem in the network.
The node interconnect cables are proprietary PCIe multi-lane extenders: very low latency and high throughput, as they form the back-end mesh-active fabric for cluster communications.
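To make that failover behaviour a bit more concrete, here is a minimal Python sketch of the idea only (the IP address and node states are made up for illustration; this is not how the 3PAR firmware actually implements it): one shared management address, four candidate ports, and ownership moving to a surviving node when the active one drops.

```python
# Illustrative sketch only (not 3PAR code): four node management ports
# share one cluster management IP, and the active port fails over to a
# surviving node when the current owner goes down.

MGMT_IP = "10.0.10.50"           # hypothetical shared management address
nodes = {0: "up", 1: "up", 2: "up", 3: "up"}
active_node = 0                  # node currently answering on MGMT_IP


def fail_node(node_id: int) -> None:
    """Mark a node as down and move the management IP if it was the owner."""
    global active_node
    nodes[node_id] = "down"
    if node_id == active_node:
        survivors = [n for n, state in nodes.items() if state == "up"]
        if not survivors:
            raise RuntimeError("no surviving node to host the management IP")
        active_node = survivors[0]
        print(f"{MGMT_IP} failed over to node {active_node}")


fail_node(0)   # e.g. node 0 rebooting during an online upgrade
fail_node(2)   # the management IP stays reachable while any node is up
```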
Re: Demo Scenario HP 3Par 7000 Storage
Thank you very much for your help.
I think it's done. I decided to install a patch panel in this rack; the additional switch has been placed in another rack.
Can you tell me if this looks like a more or less realistic scenario?
For me it is important that there is nothing that's absolutely wrong.
I think every 3PAR looks different, but I hope this one is realistic.
FC panel
patch panel
san switch
2x filer
..x disk arrays
4 node enclosure
service processor
2x pdu
Re: Demo Scenario HP 3Par 7000 Storage
Arguably, to be realistic, there should be more than one SAN switch, allowing a SAN-A/SAN-B air-gapped fabrics model for complete path redundancy. Whether you want redundancy for the LAN side is up to your specific requirements.
While you did add PDUs to your corrected diagram, there aren't nearly enough outlets to go around to support everything. Depending on the racks used, it may be preferable to use vertical PDUs (assuming you do not block the removal of the hot-swappable power supplies). Note that you do not need to get HP's PDUs.
Re: Demo Scenario HP 3Par 7000 Storage
Well, you are right, a second SAN switch would be necessary for redundancy. It is just for the connection between the filers and the controllers. The controllers are also connected to two Nexus 7Ks (in another rack).
OK, the PDU extenders are not shown in my screenshot, so I think there are enough outlets.
Are these two PDUs powerful enough for these racks?
Re: Demo Scenario HP 3Par 7000 Storage
If the other ports go off to your Nexus, why not plumb the file controllers directly into the built-in controller ports, with no intervening FC switch? Lower cost and still NSPOF.
Re: Demo Scenario HP 3Par 7000 Storage
I'll connect them to the Nexus too. Well, you are right, in reality it would be a useless investment, but it looks nice in my scenario ^^ It is just a demo scenario in our DCIM software.
Do you know how many physical hosts could fit into this scenario? 8? 80? 800?
Let's say every host is an HP DL380 Gen8.
Re: Demo Scenario HP 3Par 7000 Storage
Assuming this is a 7400 four-node with the 16 additional FC ports, then you could have up to 2048 initiators; if each host is dual-pathed, that is 1024 hosts. Different combinations would give different numbers, e.g. if using iSCSI you have fewer host-facing ports available, so you can't support as many hosts fanning into the same system.
Even though the controllers support the above numbers, you still need to ensure you have enough disk I/O to support the total number of hosts being connected.
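As a quick sanity check on those numbers, here is a small Python sketch of the fan-in arithmetic (the 2048-initiator ceiling is the figure quoted above for the 7400 four-node; the path counts are just examples):

```python
# Worked example of the fan-in arithmetic from the reply above.
# The 2048-initiator ceiling is the figure quoted for a 7400 4-node;
# paths_per_host is a design choice (dual-pathed hosts), not a fixed limit.

max_initiators = 2048        # stated limit for the 7400 4-node configuration
paths_per_host = 2           # each DL380 Gen8 dual-pathed across the fabrics

print(max_initiators // paths_per_host)   # -> 1024 hosts

# With more paths per host the maximum fan-in shrinks accordingly:
for paths in (1, 2, 4):
    print(f"{paths} path(s) per host -> {max_initiators // paths} hosts")
```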