We are having an issue with a 3PAR host where we have a 12 TB VV presented as an RDM to a Windows 2012 VMware VM.
This drive appears to have poor performance compared with other similar drives, and the same VM has another RDM that appears to perform normally.
The ONLY difference I can see is that when we look at this LUN under Devices in VMware, it shows up as a 3PARdata iSCSI Disk, yet it has a valid FC identifier of naa.60002ac0000000000000000900003d6e. Also, looking at some of the performance counters in VMware, this LUN shows high Physical Device Latency at times.
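For reference, how the ESXi host identifies the device, and its latency, can be checked from the ESXi shell. A rough sketch (the naa ID is the one above; output field names may vary slightly by ESXi version):
Code:
# How ESXi identifies the device (vendor, model, display name)
esxcli storage core device list -d naa.60002ac0000000000000000900003d6e
# Per-path details; the Transport field shows fc vs iscsi for each path
esxcli storage core path list -d naa.60002ac0000000000000000900003d6e
# Live latency: run esxtop, press 'u' for the device view and watch DAVG/KAVG/GAVG
esxtop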
The disk showing up as iSCSI may be a red herring, but at this point I am just looking for anything where one of these things is not like the others.
Anyone ever seen anything like this?
3Par Fibre Channel Virtual Volume shows as ISCSI
Re: 3Par Fibre Channel Virtual Volume shows as ISCSI
What SCSI controller are you using in the VM? Paravirtual SCSI would be the recommended controller. What are you using to gauge performance for the RDM other than the VMware perf counters? Have you run any Windows perfmon counters? Are the NTFS cluster sizes the same for both RDMs?
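If it helps, both of those can be checked from inside the guest; a quick sketch (the drive letters D: and E: are placeholders for the two RDM volumes):
Code:
rem NTFS cluster size ("Bytes Per Cluster") for each RDM volume
fsutil fsinfo ntfsinfo D:
fsutil fsinfo ntfsinfo E:
rem 30 samples of read/write latency per physical disk
typeperf "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" -sc 30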
- Richard Siemers
Re: 3Par Fibre Channel Virtual Volume shows as ISCSI
Can you confirm the LUN settings on the 3PAR side to check the FC vs. iSCSI export?
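Something along these lines from the 3PAR CLI should show how the VV is actually exported (the VV name is a placeholder):
Code:
# Which hosts and ports the VV is exported to
showvlun -v My_12TB_VV
# Port protocols on the array: FC vs iSCSI
showport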
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: 3Par Fibre Channel Virtual Volume shows as ISCSI
It was presented as Fibre Channel, but it had previously been presented as iSCSI, so according to HP that is why it picked up the iSCSI disk identifier. It should not really affect anything; it is just a name.
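For anyone who hits the same thing, a quick sanity check that the oddly labelled device is still the same volume is to compare the naa. identifier with the VV WWN on the array. A sketch (the VV name is a placeholder):
Code:
# The VV WWN shown here should match the naa.60002ac... identifier reported by VMware
showvv -d My_12TB_VV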
Re: 3Par Fibre Channel Virtual Volume shows as ISCSI
Maybe slightly off-topic, but if you spot less-than-optimal performance, please look at the growth sizes and step sizes of your CPGs.
- growth sizes for FC and NL should be 32 GB * the number of node pairs (so assuming a 4-node 7400, it should be 64 GB). For SSD it should be 8 GB * the number of node pairs (16 GB for a 4-node 3PAR)
- default step sizes depend on the RAID level used, the set size and the drive type. Often I advise removing the -ss value altogether from the CPG configuration, to force the 3PAR to use the defaults (which are always right). Of course there are very heavily tuned CPGs where you could need to vary the step size based on the underlying application, but usually the default is best.
e.g. via CLI do the following to show the CPG configuration:
Code:
showcpg -sdg
output would be something like:
Code:
Id Name              Domain     Warn Limit  Grow Args
 0 Some_FC_Raid5_CPG SomeDomain    -     - 16384 -t r5 -ha cage -ssz 4 -ss 32 -ch first -p -devtype FC
In this example both the growth size is too small (16 GB instead of 64 GB) and the step size is wrong (32 KB vs 128 KB). Setting the right values is done like this (the -sdgs value is given in MB, so 64 GB = 65536):
Code:
setcpg -sdgs 65536 -t r5 -ha cage -ssz 4 -ss 128 -ch first -p -devtype FC Some_FC_Raid5_CPG
The system will require a "tunesys" afterwards to adjust the step sizes of all previously created LDs that still have the "wrong" specifications. Please suspend AO during the tunesys to prevent nasty issues.
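A rough sketch of that step, assuming recent 3PAR OS CLI behaviour (the task ID is an example; take the real one from the tunesys output or the showtask list):
Code:
# Re-layout existing LDs according to the updated CPG settings
tunesys
# Monitor progress of the tune task
showtask
showtask -d 1234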
Below is a reference with the right step sizes per drive type / RAID level / set size:
For HDDs (both NL and FC):
Code:
R1: 256k
R5: 128k
R6 4+2: 128k
R6 6+2: 64k
R6 8+2: 64k
R6 10+2: 64k
R6 14+2: 32k
For SSDs:
Code:
R0: 32k
R1: 32k
R5: 64k
R6 6+2: 64k
R6 8+2: 64k
R6 10+2: 64k
R6 14+2: 32k
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.