Hi 3PARUsers
I also got confirmation from HP that a 7400 4-node can only sustain a single node failure. I know this topic appeared in an earlier post and I will post the response from HP there for your perusal.
My question is: in light of this information, am I better off restricting a CPG's disk selection to one controller pair and its associated shelves? I have nothing set up yet; I'm just reading and trying to put a plan in place.
The "problem" I am trying to address is how to maximize the potential of a 7400 4-node. In my virtualized environment I have a lot of VMs (hosted on blades in a C7000 chassis, connecting to the SAN network via PTM) but little traffic, so the number of iSCSI initiators will become the limiting factor: only 64 are allowed per port. If I create a CPG using disks on a single controller pair and its connected disk cages, I could get away with connecting port 0:2:1 to switch-A and port 1:2:2 to switch-B. This would give me two paths to volumes created from that CPG and tolerance of both a switch failure and a node failure. Any thoughts?
SAN newbie and first post.
Thanks
Kieran
7400 4-node: zoning considerations
Re: 7400 4-node: zoning considerations
CPGs should use all the drives you have if possible; otherwise you are defeating the wide striping and also limiting I/O processing to the nodes that own those drives. People need to stop trying to second-guess the technology. It may not be perfect, but it works. I have a manager who is always trying to outguess how VMware will do CPU and RAM management, and I tell him to let the technology do what it was designed to do. If we have to spend this much time second-guessing the technology we purchase, then we purchased the wrong solution.
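For reference, a sketch of what this looks like in the 3PAR CLI. The flag names below are from memory and the CPG names are made up for illustration; verify everything against the HP 3PAR OS CLI reference for your firmware version before running it.

```shell
# A CPG created with no disk-pattern filter wide-stripes across every
# node's drives by default, which is what the advice above recommends:
createcpg -t r6 -ha cage CPG_FC_R6

# Restricting a CPG to a subset of nodes takes an explicit pattern
# filter (e.g. -p -nd to select by node), which is exactly the kind of
# segregation this thread advises against:
#   createcpg -t r6 -p -nd 0,1 CPG_nodes01
```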
Re: 7400 4-node: zoning considerations
Appreciate the reply.
I was experimenting today and found I couldn't segregate the drives in the way I suggested in my post, which gave me a strong indication that I was way off base with my thought process.
Old school thinking from a newbie.
Cheers
Re: 7400 4-node: zoning considerations
I agree with kluken. CPGs should use all the drives: more spindles, more performance. You might also want to check the docs, as I believe you can only have 32 initiators per port for VMware.
JD
- Richard Siemers
- Site Admin
Re: 7400 4-node: zoning considerations
CPG-wise, what kluken said: let your CPGs span all 4 nodes. But you can still do what you want and set up your hosts to talk to just 2 nodes. A single-pathed host connected to just 1 iSCSI port can still access a CPG spanning 4 nodes; there is a backplane through which the 4 nodes talk to each other and service data requests they themselves can't answer. This backplane is used even when you have 4 paths, one to each node, because round-robin (RR) multipathing does not guarantee that the node receiving a request is actually responsible for that block.
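The point about round-robin and the backplane can be shown with a toy model. This is an illustration only, not 3PAR's actual chunklet layout: it assumes blocks are striped round-robin across 4 nodes and that the multipath policy cycles paths in order, then counts how often an I/O happens to land on the node that owns its block.

```python
import random

NODES = 4  # 7400 4-node

def owner(block: int) -> int:
    # Assumed striped ownership, purely for illustration.
    return block % NODES

def rr_path(request_index: int) -> int:
    # Round-robin path selection: cycle through one path per node.
    return request_index % NODES

# Issue I/Os to random blocks and count how often the receiving
# node is also the owning node.
random.seed(1)
requests = [random.randrange(10**6) for _ in range(10_000)]
hits = sum(owner(blk) == rr_path(i) for i, blk in enumerate(requests))

# Only about 1 request in 4 lands on the owning node; the other ~75%
# are serviced over the backplane, even with a path to every node.
print(hits / 10_000)
```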
From the sound of it, you have a total of 8 iSCSI ports at 64 hosts max per port. That's 256 dual-pathed guests or 128 quad-pathed guests.
I hope this helps.
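That initiator budget is simple arithmetic; a quick sketch, using the figures from this thread (8 iSCSI ports, 64 initiators per port):

```python
# Back-of-envelope initiator budget for the 7400 4-node described above.
PORTS = 8                 # total iSCSI ports, per the thread
INITIATORS_PER_PORT = 64  # limit quoted in the thread

total = PORTS * INITIATORS_PER_PORT  # total initiator logins available
dual_pathed = total // 2             # hosts with 2 paths each
quad_pathed = total // 4             # hosts with 4 paths each

print(total, dual_pathed, quad_pathed)  # → 512 256 128
```

If JD's figure of 32 initiators per port for VMware applies instead, halve each of these numbers.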
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.