3PAR VV for datastores Vmware
Hello,
We use a 3PAR 8400 (4 nodes) with VMware vSphere 6.5.
Currently we have 4 VVs for 4 VMware datastores (1:1 mapping), one VV per datastore.
Each host has 2 HBAs (16 Gbps).
Each VV is sized at 15 TB (dedup, thin).
We host about 150 VMs (application, database, and so on), a very eclectic mix!
I have heard many different things about maximum VV sizing. Do you think 15 TB per VV is too big?
Thanks for your advice.
Best regards,
Speedypizz
Re: 3PAR VV for datastores Vmware
Personally I'd prefer smaller VVs for running VMs on; we currently use 1-2 TB VVs for general VMs but have larger VVs for specific large-data VMs (up to around 10 TB at the moment).
VMware can now be configured to distribute VMs across the datastores automatically, so there is little management overhead even with lots of VVs (a rough sketch of that kind of setup follows at the end of this post).
It's normally the number of VMs per datastore that drives this; our VMware team tends to prefer between 10 and 20 small VMs per datastore, so they have been happy to stick to 1-2 TB VVs.
However, I would be happy if our Solaris team would use bigger VVs; they tend to like loads of 20 GB ones, and getting them to use fewer 200 GB ones was challenging.
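For reference, a minimal pyVmomi sketch of that kind of setup: grouping several small datastores into a datastore cluster and enabling Storage DRS so vCenter places VMs across them. This is only a sketch under assumptions; the vCenter address, credentials and names are placeholders, and the Storage DRS options shown are illustrations rather than recommendations.
Code: Select all
# Hedged pyVmomi sketch: create a datastore cluster (StoragePod) and enable
# Storage DRS on it so vCenter distributes VMs across the member datastores.
# Hostname, credentials and names are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcenter.example.local",        # placeholder
                       user="administrator@vsphere.local",  # placeholder
                       pwd="secret")                         # placeholder
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]   # assumes the first child is your datacenter

    # A StoragePod is the API name for a datastore cluster.
    pod = dc.datastoreFolder.CreateStoragePod(name="3par-small-vv-cluster")

    # Enable Storage DRS in fully automated mode with I/O load balancing.
    pod_cfg = vim.storageDrs.PodConfigSpec(enabled=True,
                                           defaultVmBehavior="automated",
                                           ioLoadBalanceEnabled=True)
    sdrs_spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_cfg)
    content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=sdrs_spec, modify=True)

    # Existing datastores would then be moved into the pod (it is a folder):
    # pod.MoveIntoFolder_Task(list=[ds1, ds2, ...])
finally:
    Disconnect(si)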
Re: 3PAR VV for datastores Vmware
I would say "maybe".
Remember that each VMware host has one I/O queue per datastore, so one VM on one host might impact other VMs on the same host using the same datastore more than it would if those other VMs were on another host or another datastore.
I second Ailean. I prefer smaller datastores/VVs with fewer VMs, and I have no issue creating huge datastores for large single VMs.
Your problem will become apparent if you have 15 TB datastores with 150 VMs of 100 GB each and suddenly two of those start running IOMeter.
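To put rough numbers on that, here is a back-of-the-envelope Python sketch (not a real performance model). The queue depth of 64 and the 15 VMs per host are assumptions; substitute the DQLEN and VM density you actually see with esxtop.
Code: Select all
# Back-of-the-envelope sketch of the per-host, per-datastore device queue
# described above. ASSUMED_QUEUE_DEPTH and VMS_PER_HOST are assumptions; the
# real queue depth (DQLEN / Disk.SchedNumReqOutstanding) depends on the HBA
# driver and host settings.

ASSUMED_QUEUE_DEPTH = 64   # outstanding I/Os one host keeps in flight per datastore device
VMS_PER_HOST = 15          # e.g. ~150 VMs spread over ~10 ESXi hosts

def per_vm_queue_share(datastore_count: int) -> float:
    """Rough fair share of one host's device queue per VM, if every VM on
    that host and datastore is pushing I/O at the same time."""
    vms_sharing_one_queue = max(VMS_PER_HOST / datastore_count, 1.0)
    return ASSUMED_QUEUE_DEPTH / vms_sharing_one_queue

for datastores in (4, 12):
    print(f"{datastores} datastores -> ~{per_vm_queue_share(datastores):.0f} "
          f"outstanding I/Os per VM under full contention")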
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: 3PAR VV for datastores Vmware
OK,
Thank you all for your opinions.
It's clearer in my mind now.
Speedypizz
Re: 3PAR VV for datastores Vmware
I have seen multiple cases where datastores became corrupted and unusable.
If you have a big one, you will have a big issue.
Re: 3PAR VV for datastores Vmware
We run multiple small datastores (4 TiB is the size we chose) rather than giant ones.
Take a look at VVols; that's our next step. Each VM gets its own set of volumes from the 3PAR.
Re: 3PAR VV for datastores Vmware
markinnz wrote: We run multiple small datastores (4 TiB is the size we chose) rather than giant ones.
Take a look at VVols; that's our next step. Each VM gets its own set of volumes from the 3PAR.
VVols are generally a good idea, but for siloed organisations they may cause the same conflicts as hyper-converged... Server/virtualisation teams don't always understand storage, and with VVols everything is controlled from vCenter, so storage teams will have limited access/visibility.
Also, I still think there are some storage features available on 3PARs (and other systems) which are not yet available with VVols, because the storage system doesn't control the volumes.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: 3PAR VV for datastores Vmware
MammaGutt wrote:
VVols are generally a good idea, but for siloed organisations they may cause the same conflicts as hyper-converged... Server/virtualisation teams don't always understand storage, and with VVols everything is controlled from vCenter, so storage teams will have limited access/visibility.
Also, I still think there are some storage features available on 3PARs (and other systems) which are not yet available with VVols, because the storage system doesn't control the volumes.
Well, this particular storage admin is now also a VMware admin (taking over from the inside).
I'd be keen to know of any pitfalls of VVols; I'm going to be trialling them in a test bed once I get back to NZ and get the firewalls/routing between the 3PAR and vCenter sorted out.
I do know that the 3.2.2 code exposes fewer 3PAR features for use by vCenter (just basic creation and snapshots, I think), but 3.3.1 also gives control of replication. In time it should just get better as HPE and VMware sort the code out.
Re: 3PAR VV for datastores Vmware
The main limitation of VVols with the latest 3PAR/VMware code is no Peer Persistence support, I believe; the last update seemed to improve scaling to larger setups.
I did poke our HPE reps last week about this, and they are off to speak with the developers soon, so we might get some feedback.
Re: 3PAR VV for datastores Vmware
I think somewhere above you may have also asked about Thick provisioning on Thin provisioning... just not in those exact words.
I am a fan of the VMDK files being Thick Eager Zeroed and the 3PAR VVs being thin provisioned. It helps the ESX admins keep from overloading a single datastore with too many guests, and it also takes the headaches of thin provisioning off their plate completely (and off their processors). The eager zeroes help the 3PAR reclaim any space that may have been missed by T10 UNMAP.
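As an illustration of where that choice lives in the API, a minimal pyVmomi sketch of an Eager Zeroed Thick disk spec. The controller key, unit number and capacity are placeholder values; attaching it to a real VM would need a ReconfigVM_Task call.
Code: Select all
# Minimal pyVmomi sketch of the "Eager Zeroed Thick VMDK on a thin 3PAR VV"
# layout: the VirtualDisk backing carries the thin/thick/eager-zero choice.
# Controller key, unit number and capacity are placeholders.
from pyVmomi import vim

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode="persistent",
    thinProvisioned=False,   # thick at the VMFS layer...
    eagerlyScrub=True)       # ...and zeroed up front (Eager Zeroed Thick)

disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=1000,                 # placeholder SCSI controller key
    unitNumber=1,                       # placeholder unit number
    capacityInKB=100 * 1024 * 1024)     # 100 GiB example disk

disk_change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)

config_spec = vim.vm.ConfigSpec(deviceChange=[disk_change])
# vm.ReconfigVM_Task(spec=config_spec) would then add the disk to an existing VM.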
Also, ESXi 6.5 with VMFS6 supports automatic unmap of deleted data; ESXi 6.5 with VMFS5 does not. Remember to check your datastore VMFS versions.
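If you want to check that across the whole inventory rather than datastore by datastore, a small pyVmomi sketch (the vCenter address and credentials are placeholders):
Code: Select all
# Hedged pyVmomi sketch: print each VMFS datastore's filesystem version so
# you can spot the ones still on VMFS5. Hostname and credentials are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcenter.example.local",        # placeholder
                       user="administrator@vsphere.local",  # placeholder
                       pwd="secret")                         # placeholder
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        info = ds.info
        if isinstance(info, vim.host.VmfsDatastoreInfo):   # skip NFS/vSAN/VVol datastores
            print(f"{ds.name}: VMFS {info.vmfs.version}")
finally:
    Disconnect(si)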
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.