HPE Storage Users Group

A Storage Administrator Community




 Post subject: cleanup after TDVV2 to TDVV3 migration
PostPosted: Tue Apr 10, 2018 9:10 am 

Joined: Fri Jan 20, 2017 9:39 am
Posts: 58
I just finished moving all my VVs to TDVV3

Do I need to do anything to reclaim space? The CPG that held my TDVV2 volumes is now showing nothing in SSMC (I thought I might have to run a compact or even delete the CPG, but it looks like that's not the case, as everything just magically disappeared after I deleted the last VV in the CPG).

I ask because when I look in SSMC and add up the total capacity of each CPG and compare with what System > Capacity shows, I have about 8TB more allocated under system than in the CPGs.

In System capacity I see:
Block: 38640 GiB
System: 8455 GiB
Total: 47095 GiB

If I add up all my CPGs I get 30912 GiB.

Am I missing 8TB of space somewhere? I was expecting to have more free space than when I started, due to the changes in how dedupe works in TDVV3, but it looks like there's been little to no change after the migration.

Just for fun I ran a tunesys (with the analyze option), and the only thing of interest is shown below. Interestingly, the amount of data it seems to want to move (roughly 30TB) is about what the CPGs add up to. Coincidence?



*
Device type: SSD100 Average Usage per node 43.09% threshold 40.09%
-Node Disk availability & percentage use-

*
*********************************************************
* Running 144 LD re-layout tunes. Note that these
* run one at a time (maxtasks does NOT apply to these tunes)
*********************************************************
*
*
*******************************************************************************
* Dry run - The following (Individual) LD tunes would be performed:
*******************************************************************************
*
tuneld -f -tunesys tp-3-sa-0.0 (Reserved space: 1856MiB)
tuneld -f -tunesys tp-3-sa-0.1 (Reserved space: 1856MiB)
tuneld -f -tunesys tp-3-sd-0.0 (Reserved space: 262144MiB)
<long list of tuneld removed>
*
*********************************************************
* Dry Run completed
*********************************************************
*
Number of tunes suggested: 144 (30260GiB to move)
Completed scheduled task.


 Post subject: Re: cleanup after TDVV2 to TDVV3 migration
PostPosted: Tue Apr 10, 2018 9:32 am 

Joined: Wed Nov 09, 2011 12:01 pm
Posts: 392
Make sure you are looking at "Raw Allocated Capacity" when comparing CPG sizes to system sizes; otherwise the RAID overhead is hidden.

Typically, once a CPG has had all its volumes removed it will drop to zero usage, although I think there are still some background tasks that zero-fill the unallocated chunklets.
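For a quick cross-check from the CLI, these are the commands I'd start with (a rough pointer only; exact output columns vary by InForm OS version, and I'm going from memory on showsys -space, so check the CLI help):

showcpg -r      # per-CPG logical space plus the raw (RAID-inclusive) space behind it
showpd -space   # per-PD breakdown, including spare and free space
showsys -space  # system-wide raw capacity summary to compare against SSMC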


 Post subject: Re: cleanup after TDVV2 to TDVV3 migration
PostPosted: Tue Apr 10, 2018 11:11 am 

Joined: Fri Jan 20, 2017 9:39 am
Posts: 58
ailean wrote:
Make sure you are looking at "Raw Allocated Capacity" when comparing CPG sizes to system sizes; otherwise the RAID overhead is hidden.

Typically, once a CPG has had all its volumes removed it will drop to zero usage, although I think there are still some background tasks that zero-fill the unallocated chunklets.


Where is this Raw Allocated Capacity shown? Do I need to check the command line to verify that the numbers add up?

I found showld -d, which has SizeMB and RSizeMB columns that roughly match the 30TB and 38TB numbers, and showpd -space, which seems to show the 38TB plus the 8TB of spare space. Is that where I need to look? All the other show commands I've tried give me numbers closer to the 30TB value.


 Post subject: Re: cleanup after TDVV2 to TDVV3 migration
PostPosted: Tue Apr 10, 2018 12:29 pm 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 505
Typically, whenever you look at a CPG's capacity the RAID overhead is already factored in, so you get a logical view. When looking at the system you get a physical (raw) view instead; this is because you may have several CPGs, each with different overheads, and raw space that hasn't yet been assigned to any CPG, so the overhead can't be calculated for it.

The showcpg command will give you the logical space, i.e. only the user data consumed in the CPG.

showcpg -space gives you the same, plus efficiency ratios.

showspace -cpg gives you the same, plus the potential free raw and logical space.

showcpg -r will give you both the logical and the physical raw space, i.e. data plus RAID overhead consumed.

Divide the logical by the raw and that gives you your RAID overhead percentage.

e.g. 25155200 logical / 33540235 raw = 0.75, i.e. 25% overhead, so 3+1 or 6+2.
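If you'd rather script that check than do it by eye, here's a minimal sketch using the example figures above (plain awk, nothing 3PAR-specific; plug in your own numbers from showcpg -r):

awk 'BEGIN { logical=25155200; raw=33540235; printf "usable %.2f, RAID overhead %.0f%%\n", logical/raw, (1 - logical/raw)*100 }'

That prints "usable 0.75, RAID overhead 25%", which lines up with a 3+1 or 6+2 layout.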

