vSphere datastore size

Posted: Thu Jan 02, 2014 4:27 am
by skumflum
Size should not be an issue according to the VMware vSphere 5 best practices guide, but what VV size is sensible?

I was planning to make 4TB VVs, but then I was thinking… why not 8TB or more?

The only reasons to hold back (that I can think of) are the hypothetical case where we need to empty a datastore, and recovery time after a disaster. A smaller size is easier to handle.

Re: vSphere datastore size

Posted: Thu Jan 02, 2014 10:57 am
by hdtvguy
It depends on what you are doing. We used to be vigilant about not having more than 10-12 VMs on a datastore due to old issues with VMDK/VMX file locking. Now we are driven more by SRM protection groups and recovery plans, but I still try to keep it to 10-12, which is rarely a constraint since many of our protection groups hold only a handful of VMs. I also have a 9TB datastore with a single VM in it, where that VM has several 2TB VMDKs. It really depends on what you are trying to accomplish.

Re: vSphere datastore size

Posted: Tue Jan 07, 2014 11:30 am
by Richard Siemers
The number of VMs per LUN is something you should do the math on, as the queue depth is a shared resource per LUN.

Benchmarks on my system led me to believe that my LUNs see no performance increase beyond roughly 29 queued IOs, so I prefer a queue depth of 32 over the new default of 64 in ESX 5.x. However, my SAN hardware is older and uses 4Gb ports, so doing your own testing is always recommended.

Using 32 queued IOs per LUN as our starting point, we figured 10-16 VMs per datastore was the sweet spot. Multiplied by our average VMDK size, that led us to a default datastore size of 1TB, with some 2TB datastores for special one-off needs. It is also really easy to grow a datastore/VV if you start the VV off too small, so I think it's best to start small and adjust upward as needed.
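
As a rough sketch of that back-of-envelope math, here is the kind of arithmetic involved. The average VMDK size and headroom factor below are placeholder assumptions for illustration only, not figures from the thread; plug in your own benchmark results and inventory numbers.

```python
# Back-of-envelope datastore sizing based on a shared per-LUN queue depth.
# All input values are illustrative assumptions; substitute your own.

LUN_QUEUE_DEPTH = 32      # queued IOs per LUN (benchmarked sweet spot above)
VMS_PER_DATASTORE = 12    # target VM count per datastore (within the 10-16 range)
AVG_VMDK_SIZE_GB = 60     # hypothetical average VMDK size in GB
GROWTH_HEADROOM = 1.3     # hypothetical 30% headroom for snapshots and growth

# Each VM's fair share of the LUN queue if every VM is busy at once.
ios_per_vm = LUN_QUEUE_DEPTH / VMS_PER_DATASTORE

# Rough capacity needed for the target VM count, rounded to a whole TB.
raw_capacity_gb = VMS_PER_DATASTORE * AVG_VMDK_SIZE_GB * GROWTH_HEADROOM
datastore_size_tb = max(1, round(raw_capacity_gb / 1024))

print(f"~{ios_per_vm:.1f} outstanding IOs per VM under contention")
print(f"~{raw_capacity_gb:.0f} GB needed -> start with a {datastore_size_tb} TB "
      f"datastore and grow the VV later if required")
```

With these assumed inputs the result lands near the 1TB default mentioned above; a larger average VMDK size or higher VM count pushes the answer toward the 2TB one-off case instead.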