Getting rid of FC - Hyper-V Cluster best practice
Hi everyone,
I am trying to find a cost-effective solution to my problem.
At the moment we have a 3PAR 8200 connected via FC to an HPE enclosure over two SAN switches.
After some cloud migrations, and with the components reaching end of life, I am looking for a more cost-effective setup.
I might end up with only two DL560s replacing the HPE enclosure, but I do not want to buy new FC switches.
So which technology is best for building a Hyper-V cluster in that case: iSCSI or SMBv3 (or something else)?
Best regards
Stephan
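For context on the SMBv3 option: with SMB3 the Hyper-V hosts keep their VM files on a UNC path exported by a file server instead of on a block LUN. A minimal PowerShell sketch of what that looks like is below; the server, share and domain names are hypothetical placeholders, not anything from this thread, and a production setup would normally use a clustered Scale-Out File Server rather than a single share.

Code:
    # Sketch only: "FS01", "HV01"/"HV02" and "DOMAIN" are placeholder names.
    # On the file server: export a continuously available SMB3 share and
    # grant the Hyper-V hosts' computer accounts full access.
    New-SmbShare -Name "VMStore" -Path "D:\VMStore" -ContinuouslyAvailable $true `
        -FullAccess "DOMAIN\HV01$", "DOMAIN\HV02$", "DOMAIN\Domain Admins"

    # On a Hyper-V host: create a VM whose config and disk live on the share.
    New-VM -Name "TestVM" -MemoryStartupBytes 4GB -Generation 2 `
        -Path "\\FS01\VMStore" `
        -NewVHDPath "\\FS01\VMStore\TestVM\disk0.vhdx" -NewVHDSizeBytes 60GB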
Re: Getting rid of FC - Hyper-V Cluster best practice
Direct attach the hosts to the 3PAR without SAN switches?
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Getting rid of FC - Hyper-V Cluster best practice
We had it direct attached in the beginning... then reconfigured it. I need to rack my brain as to why it was not working right.
I also need to present the LUNs to our backup machine (an external ProLiant) to get faster backups and avoid putting load on the Hyper-V hosts.
I do not think we have a four-port controller in our 3PAR, so I would need to buy one anyway if I stay with FC.
If I have a dedicated two-port 25 Gb/s card for storage in each server, what would be the best configuration?
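For what it's worth, if the two 25 GbE ports end up carrying iSCSI, the usual pattern is one storage subnet per port with MPIO handling the two sessions. A rough PowerShell sketch under those assumptions follows; all addresses and the round-robin policy are placeholders to adapt, not a verified 3PAR-specific recipe.

Code:
    # Sketch only: IPs, subnets and target addresses are placeholders.
    # One-time setup: MPIO feature (reboot required), claim iSCSI devices,
    # set the default load-balance policy to round robin.
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

    # Make sure the iSCSI initiator service is running.
    Set-Service MSiSCSI -StartupType Automatic
    Start-Service MSiSCSI

    # One target portal per 25 GbE port, each in its own subnet.
    New-IscsiTargetPortal -TargetPortalAddress "10.10.1.50" -InitiatorPortalAddress "10.10.1.11"
    New-IscsiTargetPortal -TargetPortalAddress "10.10.2.50" -InitiatorPortalAddress "10.10.2.11"

    # One persistent, multipath-enabled session per path.
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
        -InitiatorPortalAddress "10.10.1.11" -TargetPortalAddress "10.10.1.50"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
        -InitiatorPortalAddress "10.10.2.11" -TargetPortalAddress "10.10.2.50"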
Re: Getting rid of FC - Hyper-V Cluster best practice
StephanG wrote: We had it direct attached in the beginning... then reconfigured it. [...] If I have a dedicated two-port 25 Gb/s card for storage in each server, what would be the best configuration?
Well, there are no onboard ports on the 3PAR for iSCSI, and if you go for CIFS through File Persona (which isn't recommended for virtualization) you would be limited to the 1 Gbit onboard RCIP port (also not recommended). So either way it seems to me that you either need to get additional HBAs for the 3PAR or continue with FC switches.
If you go for iSCSI you should have switches designed for iSCSI with DCBX, and if you don't have those it will also add to the cost, on top of the added complexity over direct-attached FC. Neither iSCSI nor File Persona supports direct attach.
Based on what you've posted I assume you ditched FlatSAN (not direct attach) because you ran out of ports, and based on the info provided I don't see a cheaper and less complex way out of this than direct attach.
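To illustrate the DCBX point: the host-side counterpart is Windows DCB/QoS, where the iSCSI traffic gets a priority tag, a bandwidth class and priority flow control that must line up with what the switches advertise. A hedged sketch with placeholder values (priority 4, 50% bandwidth, adapter names) might look like this:

Code:
    # Sketch only: priority 4, 50% bandwidth and the adapter names are
    # placeholders and must match the switch-side DCBX configuration.
    Install-WindowsFeature Data-Center-Bridging

    # Tag iSCSI traffic (TCP 3260) with 802.1p priority 4.
    New-NetQosPolicy "iSCSI" -iSCSI -PriorityValue8021Action 4

    # Reserve an ETS bandwidth class and enable PFC for that priority.
    New-NetQosTrafficClass "iSCSI" -Priority 4 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetQosFlowControl -Priority 4

    # Apply DCB on the storage NICs.
    Enable-NetAdapterQos -Name "SLOT1 Port1", "SLOT1 Port2"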
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Getting rid of FC - Hyper-V Cluster best practice
Thanks for your input.
Then I have to decide between buying new SAN switches or two new interface cards.
Re: Getting rid of FC - Hyper-V Cluster best practice
What's the model of the switches? Are you sure they are end of life?
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: Getting rid of FC - Hyper-V Cluster best practice
Hi Richard,
Not 100%.
I was searching at HPE; they said I have to look at Brocade, but I cannot find the EOL date there.
Product: HP SN3000B (it may be named differently at Brocade)
HPE
https://support.hpe.com/hpesc/public/do ... 90361en_us
Brocade (belongs to Broadcom now)
https://www.broadcom.com/support/fibre- ... orking/eol
I also asked Broadcom support; they sent me back the link I had already found.
I do not know why HPE, Brocade and the others don't just provide a simple PDF with the EOSL dates of all their products. I have wasted more than five hours of my life searching for end-of-life dates.
Re: Getting rid of FC - Hyper-V Cluster best practice
Looks like a Brocade 6505 switch; they have it listed as supported until 2025.
Most of the 16 Gb kit is safe for a couple of years; it was mostly the 8 Gb gear that Brocade EOL'd, I think last December.
The HPE QuickSpecs doc for the switch was updated last month, so it's still getting attention from them.
Last time we ordered a director I did have to poke a lot of HPE/Brocade tech people just to get a list of the HPE SFP model numbers in Brocade terms to verify compatibility; it can be a pain.
Re: Getting rid of FC - Hyper-V Cluster best practice
The SN3000B is a 24-port 16 Gbit/Gen 5 switch.
They are not end of life (yet). If my memory serves me right, they are good for at least a couple more years (2023?).
As with everything storage, you need a support contract to access software/patches.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: Getting rid of FC - Hyper-V Cluster best practice
Regarding direct attach: I am having trouble with this right now; please see my post:
viewtopic.php?f=18&t=3535