
Re: Demo Scenario HP 3Par 7000 Storage

Posted: Sun Oct 19, 2014 8:16 am
by Davidkn
I see three issues with this.

Firstly, you would absolutely have 2 FC switches, even for just the 2 file servers; what's the point of having two of them if you introduce a single point of failure?

Secondly, the controllers wouldn't go at the bottom of the rack. With the 4-node systems they go in the middle of the disk shelves: the disks for the top node pair go above, and those for the bottom pair go below.

Finally, you have a 4-port FC card in the controllers for host connections, and you say you are connecting to a Nexus 7000. The Nexus 7000 doesn't support native FC ports; you would need a Nexus 5x00.

So it depends how accurate you want to be, but those are a few pointers.

Re: Demo Scenario HP 3Par 7000 Storage

Posted: Thu Oct 23, 2014 3:11 am
by Dr.Pepper
Hmm, ok. So I guess the fabric is the next part that needs to be redesigned.

Currently I have placed 80x HP DL380 Gen8 in 8 racks (ok... for presentations I just need one of them...).
One rack with 4x Cisco MDS 9148 and two racks with Cisco N7K-C7010.
I guess I'm completely wrong with these switches. It should be a core-edge design.
I chose the big N7K because it just looked nice in presentations and there are other subsystems connected to it.
Are there any line cards I can use?
Current cards: N7K-132XP-12L, N7K-SUP1
What cards am I missing if I keep using these two N7Ks?

Re: Demo Scenario HP 3Par 7000 Storage

Posted: Sat Oct 25, 2014 3:01 pm
by Reactor
I design and engineer datacenter infrastructure for a significant portion of my job—and I have no clue where you're really going with this.

Creating a demonstrative scenario is fine, but the same questions that need to be answered in real projects also need to be answered in a demonstrative. What are the customer requirements? How do we weigh cost, performance, workloads, connectivity, access, serviceability, power, or available space? Each of these, and many others, factor heavily into a well-designed solution.

In other words, we see you're trying to build something, but without knowing what you're designing for, our answers are just shots in the dark.

Off the cuff, for example, you could do the following:
  • 2x Cisco Nexus 7706 for your cores.
    • Each with one or two N77-F324FQ-25 modules (40 Gbps ports, each divisible into 4x10Gbps ports), with FCoE licenses
    • Additional modules may be desired, e.g., N77-F348XP-23 for 1/10Gb connectivity.
  • 4x Cisco Nexus 2248UPQ fabric extenders, as top-of-rack switches, two in each server rack, with 40 servers per rack. Each would connect to a single N77 core.
  • 2x Cisco MDS 9250i for your SAN traffic, each connecting 12 FC ports of the 3PAR (one switch per SAN fabric) and sending the traffic upstream over FCoE to a single N77 core.
  • 80x dual-port 10 Gbps CNAs for your servers (e.g., Qlogic QLE8362, Emulex OCe14102-UX, or HP CN1100E).

This is a very basic, though modern, configuration that converges LAN & SAN traffic into your cores using FCoE. If wired up and configured correctly, you will maintain SAN-A/B separation of your SAN traffic while delivering LAN traffic over the same wire to your servers.
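
To make the FCoE piece a bit more concrete, here is a minimal NX-OS sketch for one fabric on one of the N77 cores. Treat it as an assumption-laden illustration: the VLAN/VSAN numbers, interface names, and the storage-VDC note are placeholders I picked, not part of your design, and on a real 7700 the FCoE feature set would need to be installed and licensed first. Fabric B would be the mirror image on the other core, with its own FCoE VLAN and VSAN.

  ! Fabric A on core A (on a Nexus 7700 this normally sits in a dedicated storage VDC)
  ! VLAN 1001, VSAN 10, and the port numbers are assumed purely for illustration
  feature-set fcoe

  vsan database
    vsan 10 name SAN-A

  ! Map the FCoE VLAN to the fabric's VSAN
  vlan 1001
    fcoe vsan 10

  ! Physical link carrying the converged traffic (e.g. the uplink from the MDS 9250i)
  interface Ethernet1/1
    switchport mode trunk
    switchport trunk allowed vlan 1001
    no shutdown

  ! Virtual FC interface bound to that Ethernet port
  interface vfc11
    bind interface Ethernet1/1
    switchport trunk allowed vsan 10
    no shutdown

  vsan database
    vsan 10 interface vfc11

The detail that matters is that each core only ever carries one fabric's FCoE VLAN/VSAN; that is what preserves the SAN-A/B separation even though LAN and SAN share the same wires.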