
Odd VVol issue

Posted: Sun May 03, 2020 9:38 pm
by markinnz
This is an odd one :-)

I've started rolling out VVols into our production ESX clusters, and after storage-migrating the first VM I noticed an oddity.

Even though there is a VM Storage Policy created and applied as default to the VVol datastore, the "Swap" volume for the VM was using a different CPG to the rest of the VM's volumes.

I created a new CPG for the VVols to use (just for logical separation) called "SSD_R5_HA_VVol"; the array already has a CPG called "SSD_R5_HA". The VM Storage Policy looks like this:

Code:

Name   LDV_3PAR_Thin_SSD_R5
Description   HPE 8400 3PAR
Rule-set 1: HPE 3PAR StoreServ
Placement
Storage Type   HPE 3PAR StoreServ
Common Provisioning Group   SSD_R5_HA_VVol
Snapshot Common Provisioning Group   SSD_R5_HA_VVol
Thin Persistence   Enabled
Thin Deduplication   Disabled


When I storage-migrated a VM, you can see it used the correct CPGs:

Code:

akl-ldv-3par-1 cli% showvvolvm -d -sc LDV-ITVFCorp-VVols
                                                               ------(MB)------
VM_Name   UUID                                 Num_vv Num_snap Physical Logical GuestOS               VM_State  UsrCPG         SnpCPG         Container          CreationTime
test01 501e348d-3965-1912-2fa2-2b7776f70703      5        0    72775  221184 windows9Server64Guest PoweredOn SSD_R5_HA_VVol SSD_R5_HA_VVol LDV-ITVFCorp-VVols 2020-05-01 15:46:56 NZST
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        1 total                                     5             72775  221184


The rules have been applied OK:

Code:

akl-ldv-3par-1 cli% showvvolvm -sp -sc LDV-ITVFCorp-VVols test01
VM_Name   SP_Name              SP_Constraint_List
test01 LDV_3PAR_Thin_SSD_R5 CPG=SSD_R5_HA_VVol
                               SnapshotCPG=SSD_R5_HA_VVol
                               ThinPersistence=Enabled
                               Deduplication=Disabled
---------------------------------------------------------
        1 total


The virtual volumes look like this:

Code:

akl-ldv-3par-1 cli% showvvolvm -vv -sc LDV-ITVFCorp-VVols test01
                                                     ------(MB)------
VM_Name   VV_ID VVol_Name             VVol_Type Prov Physical Logical
test01 87385 cfg-test01-6e2485c7 Config    tpvv     3876    4096
          87386 dat-test01-a7000170 Data      tpvv    32329   81920
          87387 dat-test01-cd1fe59a Data      tpvv    13604   81920
          87388 dat-test01-ec2fa9c1 Data      tpvv     8923   40960
          87390 swp-test01-0ec91562 Swap      full    14043   12288
---------------------------------------------------------------------
        1 total                     5                   72775  221184


But when you pull up the details on each volume, the "swap" volume is not right. Below are a "data" volume and the "swap":

Code:

akl-ldv-3par-1 cli% showvv -cpgalloc dat-test01-ec2fa9c1
   Id Name                  Prov Compr Dedup Type UsrCPG         SnpCPG
87388 dat-test01-ec2fa9c1 tpvv No    No    base SSD_R5_HA_VVol SSD_R5_HA_VVol
-------------------------------------------------------------------------------
    1 total
akl-ldv-3par-1 cli% showvv -cpgalloc swp-test01-0ec91562
   Id Name                  Prov Compr Dedup Type UsrCPG    SnpCPG
87390 swp-test01-0ec91562 full NA    NA    base SSD_R5_HA --
------------------------------------------------------------------
    1 total


I can only assume it is because the "swap" has to be a fully provisioned volume ... I just don't know why it can't also use the same CPG.

Any ideas? (I also have an HPE case logged, but that'll take them a while to work on .. a brain here might already know!)

Re: Odd VVol issue

Posted: Mon May 04, 2020 6:30 am
by MammaGutt
Just my thought....

The swap volume should only contain pages swapped out of "physical" memory. If you were ever to roll back a snapshot, the VM would have to be restarted, and the swap is expected to be gone, just like the rest of the "physical" memory, which is lost on every reboot.

Edit: a bit of Googling gives me this one as an example:
https://support.purestorage.com/Solutio ... FlashArray

Pure states:
The swap vVol, which only exists when a VM is powered on, is never replicated.
Swap vVols are never assigned a storage policy


https://h20195.www2.hpe.com/V2/GetPDF.a ... 302ENW.pdf

HPE states:
A swap VVol is created only upon first power-on of the VM and deleted when a VM is powered off. This volume contains copies of memory pages that can no longer be kept in host memory due to physical memory constraints. It functions much like a normal VM swap file does—the size is calculated by the size of the memory allocated to a VM. By default, this VVol is thick provisioned.
(i.e., not assigned a storage policy)

Re: Odd VVol issue

Posted: Mon May 04, 2020 6:35 am
by markinnz
MammaGutt wrote:Just my thought....

The swap volume should only contain pages swapped out of "physical" memory. If you were ever to roll back a snapshot, the VM would have to be restarted, and the swap is expected to be gone, just like the rest of the "physical" memory, which is lost on every reboot.


Ok .. but .. I'm trying to work out why 3PAR, VASA or VMware used a CPG for the Swap volume that is different from the one used for all the other volumes :)

Re: Odd VVol issue

Posted: Mon May 04, 2020 6:37 am
by MammaGutt
markinnz wrote:
MammaGutt wrote:Just my thought....

The swap volume should only contain pages swapped out of "physical" memory. If you were ever to roll back a snapshot, the VM would have to be restarted, and the swap is expected to be gone, just like the rest of the "physical" memory, which is lost on every reboot.


Ok .. but .. I'm trying to work out why 3PAR, VASA or VMware used a CPG for the Swap volume that is different from the one used for all the other volumes :)


See my edited response above.

Swap is never assigned a storage policy.

Re: Odd VVol issue

Posted: Mon May 04, 2020 9:27 am
by Richard Siemers
There is some logic used by VASA to determine where to put VVols not covered by a policy, detailed below. Alternatively, you could use Virtual Domains to lock down the list of CPGs available to your VASA user.

This is covered in the VMware Implementation Guide, in the section on CPG allocation starting at page 141, specifically page 144.
https://support.hpe.com/hpesc/public/do ... lang=en-us


The VASA Provider uses the following algorithm to select CPGs for newly created VVols:

1. If there are any CPG names that begin with vvol_, all other CPGs are eliminated from consideration for provisioning.

2. If the space available to any of the CPGs is less than 10% of the total space originally provisioned for that CPG, these CPGs are eliminated from consideration for provisioning, unless all CPGs are under the same space pressure.

3. Of the remaining CPGs under consideration, a balance of performance and availability is used to consider which CPG to provision the VVols. The order of CPGs selected depends on the CPG configuration drive type (NL, FC, SSD) and RAID (r0, r1, r5, r6) settings.

The VASA Provider chooses the first CPG to match the preferred drive/raid order as follows:

1. FCr5
2. FCr6
3. FCr1
4. NLr6
5. NLr1
6. NLr5
7. SSDr5
8. SSDr6
9. SSDr1
10. FCr0
11. NLr0
12. SSDr0
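The three rules above can be sketched in Python. This is a paraphrase of the documented selection algorithm, not HPE's actual VASA code; the CPG names and free-space numbers below are made up for illustration:

```python
# Sketch of the documented VASA CPG-selection algorithm (an illustration,
# not HPE's implementation). Each CPG is a dict with a name, a drive/RAID
# "kind" (e.g. "SSDr5"), and free/total space in MB.

# Preferred drive/RAID order from the implementation guide.
PREFERENCE = ["FCr5", "FCr6", "FCr1", "NLr6", "NLr1", "NLr5",
              "SSDr5", "SSDr6", "SSDr1", "FCr0", "NLr0", "SSDr0"]

def select_cpg(cpgs):
    """Return the name of the CPG the provider would pick, or None."""
    # Rule 1: if any CPG name begins with 'vvol_', consider only those.
    prefixed = [c for c in cpgs if c["name"].startswith("vvol_")]
    candidates = prefixed or list(cpgs)

    # Rule 2: drop CPGs with less than 10% of their provisioned space
    # free, unless every candidate is under the same space pressure.
    roomy = [c for c in candidates if c["free_mb"] >= 0.10 * c["total_mb"]]
    candidates = roomy or candidates

    # Rule 3: take the first candidate matching the preferred order.
    for kind in PREFERENCE:
        for c in candidates:
            if c["kind"] == kind:
                return c["name"]
    return None

# Hypothetical array with a vvol_-prefixed CPG alongside an ordinary one:
cpgs = [
    {"name": "SSD_R5_HA",   "kind": "SSDr5", "free_mb": 500_000, "total_mb": 1_000_000},
    {"name": "vvol_VMware", "kind": "SSDr5", "free_mb": 400_000, "total_mb": 1_000_000},
]
print(select_cpg(cpgs))  # the vvol_ prefix wins: vvol_VMware
```

Note how rule 1 explains the workaround discussed in this thread: as soon as one vvol_-prefixed CPG exists, every unprefixed CPG drops out of consideration for policy-less VVols such as swap.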

Re: Odd VVol issue

Posted: Mon May 04, 2020 4:27 pm
by markinnz
Ahhh! Thank you Richard! Your answer is waaaaaayyyyy better than HPE's response :-)

They said they have no control over which CPG "swap" uses .. but that document says I can create a CPG with a prefix of "vvol_", and any VVols created without a storage profile will default to a CPG starting with "vvol_".

So I can bodge a fix then :-)

Thanks!

Re: Odd VVol issue

Posted: Mon May 04, 2020 8:07 pm
by markinnz
And just to confirm ... I created a new CPG called "vvol_VMware", storage-migrated a VM, and the VM storage policy made sure the data volumes all used the CPG specified in the policy, while the "Swap" volume (which gets no policy applied) defaulted to using the "vvol_VMware" CPG instead of what it was picking earlier.
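For anyone wanting to check the same thing across many volumes, the `showvv -cpgalloc` output is easy to screen-scrape. A minimal sketch, using the swap-volume output captured earlier in this thread as sample input (the whitespace-separated column layout is an assumption about the CLI output format):

```python
# Audit sketch: parse `showvv -cpgalloc` output and flag VVols whose
# UsrCPG doesn't match the CPG named in the storage policy. SAMPLE is
# the swap-volume output from earlier in this thread.
SAMPLE = """\
   Id Name                  Prov Compr Dedup Type UsrCPG    SnpCPG
87390 swp-test01-0ec91562 full NA    NA    base SSD_R5_HA --
"""

def audit(output, expected_cpg):
    """Yield (vv_name, usr_cpg) for volumes not using the expected CPG."""
    for line in output.strip().splitlines()[1:]:     # skip the header row
        fields = line.split()
        if line.startswith("-") or len(fields) < 8:  # skip separators/totals
            continue
        name, usr_cpg = fields[1], fields[6]
        if usr_cpg != expected_cpg:
            yield name, usr_cpg

mismatches = list(audit(SAMPLE, "SSD_R5_HA_VVol"))
print(mismatches)  # [('swp-test01-0ec91562', 'SSD_R5_HA')]
```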

Ta muchly!

Re: Odd VVol issue

Posted: Mon May 04, 2020 11:09 pm
by Richard Siemers
Thanks for sharing the results! Glad I could help.

Re: Odd VVol issue

Posted: Mon Sep 28, 2020 9:48 am
by T16
This is still a seriously annoying problem.

I can have a policy applied to a VVol container in vSphere, manually create a folder on the VVol container in vCenter, and actually WATCH IT get created in totally the wrong CPG from the one specified by the storage policy applied to the VVol container. It's ridiculous.

OK, so you can force use of a CPG by prefixing it with vvol_, but then what are you supposed to do if you already have AO-defined CPGs and you want to use them for the VVols? You can only have one CPG in each tier dedicated to AO, so it's either rename the AO CPGs with "vvol_" in front, OR create a second AO policy and apply it to just two new CPGs defined for VVols only... what a mess.

I would add, we have one array which behaves just fine, where the swap, new folders, anything goes into the correct CPGs as per the applied storage policy, and another array where swap and new folders go into our REPLICAS CPG, while new VMs go into the AO CPG correctly defined in the policy. Why one array ignores it and the other is fine... who knows, probably just random.

Re: Odd VVol issue

Posted: Mon Sep 28, 2020 10:06 am
by T16
I do see that on the array which is causing issues, the CPG ID is set to 0 (our replica CPG), so I guess when VMware doesn't apply a policy for a swap, OR for a new folder manually created on the VVol container, the array is just going straight to the CPG with ID 0 first.

Can we change the CPG ID?

EDIT: I gave up in the end on trying to do anything fancy, and just prefixed our existing two AO CPGs with vvol_ to catch any shenanigans from VMware not applying a storage policy in certain situations. Slightly messy in that it ruins our naming convention for those two AO CPGs compared to the rest, but hey ho.. life goes on!