So, since VVols haven't yet arrived to save us...
I was curious how you all lay out your datastores and LUNs. Do you use RDMs, or only VMDKs?
With the advent of VAAI, T10, and specifically ATS, are you still limiting the number of VMs per datastore?
To date, we have limited our datastore size to about 3 TiB and put all non-boot drives on RDMs, which means we get 30-40 VMs per VMFS volume.
I am considering the option of just creating 16 TiB datastores (the 3PAR max) and making everything under 1 TiB a VMDK.
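For anyone toying with the same idea, here's a rough Python sketch of that policy, just to get a feel for the numbers (the disk names and sizes are made up, and the first-fit packing is only there for a quick datastore count, not a real placement plan):

TIB = 1024 ** 4
VMDK_CUTOFF = 1 * TIB          # the "under 1 TiB becomes a VMDK" rule from above
DATASTORE_SIZE = 16 * TIB      # 3PAR max volume size mentioned above

# (disk name, size in bytes) -- made-up example data, not a real inventory
disks = [("sql01-data", 3 * TIB), ("web01-d", 200 * 1024 ** 3), ("app02-e", 900 * 1024 ** 3),
         ("file01-f", 5 * TIB), ("app03-d", 700 * 1024 ** 3)]

vmdks = [d for d in disks if d[1] < VMDK_CUTOFF]
rdms  = [d for d in disks if d[1] >= VMDK_CUTOFF]

free = []                      # free bytes left on each planned datastore
placement = {}
for name, size in sorted(vmdks, key=lambda d: d[1], reverse=True):
    for i, space in enumerate(free):
        if size <= space:      # first datastore with room wins
            free[i] -= size
            placement[name] = i
            break
    else:
        free.append(DATASTORE_SIZE - size)
        placement[name] = len(free) - 1

print(f"{len(rdms)} disks stay RDMs, {len(vmdks)} VMDKs packed into {len(free)} datastore(s)")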
VMDK, VV, and lun layout
Re: VMDK, VV, and lun layout
And what is your version of vSphere?
Have you ever thought about restoring LUNs larger than 3 TB, and about the time lost restoring such big volumes?
Re: VMDK, VV, and lun layout
Schmoog wrote: So, since VVols haven't yet arrived to save us...
I was curious how you all lay out your datastores and LUNs. Do you use RDMs, or only VMDKs?
I view RDM as a technology that's had its day. Every major advantage no longer exists in current versions of ESXi, but all the drawbacks are still there.
There's still value in limiting VMs per LUN, though; one reason is that spreading the load across more LUNs gives you better multipath utilization.
Re: VMDK, VV, and lun layout
You are still limited to one queue per LUN, with a default queue depth of 64 (for FC).
We have a default of 4 TB LUNs at the moment, and that seems to be working OK,
but customer requests for larger VMDKs are becoming more and more frequent.
The only reason to use RDM today is MS clustering IMO.
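To put rough numbers on the queue point: with a single queue per LUN, the per-VM share of those 64 slots drops quickly as you stack busy VMs on a datastore. A back-of-envelope Python sketch (the VM counts are just illustrative, not measured):

LUN_QUEUE_DEPTH = 64    # default FC HBA LUN queue depth mentioned above

for active_vms in (4, 10, 20, 40):
    per_vm = LUN_QUEUE_DEPTH / active_vms
    print(f"{active_vms:>2} busy VMs on one LUN -> ~{per_vm:.1f} outstanding IOs each")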
Re: VMDK, VV, and lun layout
I am on vSphere 5.5.
The nice thing about RDMs is that they keep the datastore situation nice and tidy. And if I have to add or expand, I don't need to think about orphaned capacity, moving the VM around to a datastore with more space, etc.
That is true about the queue depth/multipathing. I hadn't thought of that.
Re: VMDK, VV, and lun layout
We have 850+ VMs in 7 clusters over 60 hosts. We use SRM for DR, so grouping VMs on datastores by similar DR recovery requirements is important to us. Also, I am still a bit old school and try to limit the number of VMs on a datastore to help disperse IO. I know 5.x made some changes in how the disk scheduler parameters operate, but you need to be careful that a VM with busy IO does not bottleneck a datastore.

We still use some RDMs because, even though 5.5 supports 2 TB+ VMDKs, you need to power off the VM to increase a VMDK that is larger than 2 TB. In those cases we use multiple VMDKs, or for our larger VMs we use physical RDMs so we can work with large volumes.
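For anyone wanting to find those VMs, here's a quick pyVmomi sketch (the vCenter hostname and credentials are placeholders) that lists every virtual disk over 2 TB, i.e. the ones that would need a power-off to grow as a VMDK:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()             # lab shortcut; don't do this in prod
si = SmartConnect(host="vcenter.example.com",      # placeholder vCenter and credentials
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    two_tb_kb = 2 * 1024 ** 3                      # 2 TiB expressed in KiB
    for vm in view.view:
        if not vm.config:                          # skip VMs with no config (e.g. orphans)
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.capacityInKB > two_tb_kb:
                print(f"{vm.name}: {dev.deviceInfo.label} = {dev.capacityInKB / 1024 ** 3:.2f} TiB")
    view.DestroyView()
finally:
    Disconnect(si)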
Re: VMDK, VV, and lun layout
Yup, still using pRDMs for our file servers with >2 TB volumes due to the stupid non-expandable large VMDK support in ESXi. I also recently found out that the 1024-path limit isn't going to be changed in any planned release of ESXi, so I have to do some re-engineering of our path policy: we have >700 paths today, I haven't even zoned in our new controllers yet, and when we move to Exchange 2013 we'll have several hundred additional paths for the new VMs if we continue our current policy.
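For the path-policy rework, the arithmetic to sanity-check against that limit looks roughly like this (the LUN counts, HBA ports, and zoned target ports below are hypothetical, not our actual fabric):

HOST_PATH_LIMIT = 1024        # per-host path limit discussed above

def paths_per_host(luns, hba_ports, targets_per_hba_port):
    # paths a host sees ~= LUNs presented * HBA ports * array target ports zoned per HBA port
    return luns * hba_ports * targets_per_hba_port

for luns in (44, 64, 128, 180):
    p = paths_per_host(luns, hba_ports=2, targets_per_hba_port=4)
    print(f"{luns:>3} LUNs x 2 HBA ports x 4 targets = {p:>4} paths "
          f"({'OVER the limit' if p > HOST_PATH_LIMIT else 'ok'})")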
Re: VMDK, VV, and lun layout
I find VMware lagging in keeping up. Path limits, SCSI ID limits... it seems very much outdated in many areas.