HPE Storage Users Group

A Storage Administrator Community




 Post subject: 8200 new disks
PostPosted: Thu Mar 02, 2023 3:19 pm 

Joined: Wed Feb 15, 2023 6:45 am
Posts: 8
Hello all.
There's a single-shelf 8200 with 8 x 3.84 TB SSDs (pd0 - pd7), to which another 16 x 3.84 TB have been added (pd8 - pd23).
Now, more than 12 hours later, I'm observing something strange in the space distribution.


cli% showpd -c
--------- Normal Chunklets --------- ---- Spare Chunklets ----
-- Used -- -------- Unused --------- - Used - ---- Unused ----
Id CagePos Type State Total OK Fail Free Uninit Unavail Fail OK Fail Free Uninit Fail
0 0:0:0 SSD normal 3575 32 0 3293 0 0 0 2 0 248 0 0
1 0:1:0 SSD normal 3575 453 0 2872 0 0 0 2 0 248 0 0
2 0:2:0 SSD normal 3575 34 0 3291 0 0 0 0 0 250 0 0
3 0:3:0 SSD normal 3575 454 0 2871 0 0 0 0 0 250 0 0
4 0:4:0 SSD normal 3575 34 0 3291 0 0 0 0 0 250 0 0
5 0:5:0 SSD normal 3575 454 0 2871 0 0 0 0 0 250 0 0
6 0:6:0 SSD normal 3575 36 0 3289 0 0 0 0 0 250 0 0
7 0:7:0 SSD normal 3575 494 0 2831 0 0 0 0 0 250 0 0
8 0:8:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
9 0:9:0 SSD normal 3575 689 0 2636 0 0 0 0 0 250 0 0
10 0:10:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
11 0:11:0 SSD normal 3575 686 0 2639 0 0 0 0 0 250 0 0
12 0:12:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
13 0:13:0 SSD normal 3575 688 0 2637 0 0 0 0 0 250 0 0
14 0:14:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
15 0:15:0 SSD normal 3575 696 0 2629 0 0 0 0 0 250 0 0
16 0:16:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
17 0:17:0 SSD normal 3575 690 0 2635 0 0 0 0 0 250 0 0
18 0:18:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
19 0:19:0 SSD normal 3575 697 0 2628 0 0 0 0 0 250 0 0
20 0:20:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
21 0:21:0 SSD normal 3575 697 0 2628 0 0 0 0 0 250 0 0
22 0:22:0 SSD normal 3575 910 0 2415 0 0 0 0 0 250 0 0
23 0:23:0 SSD normal 3575 695 0 2630 0 0 0 0 0 250 0 0
-------------------------------------------------------------------------------------------
24 total 85800 14809 0 64991 0 0 0 4 0 5996 0 0

cli% showpd -space
----------------------(MiB)----------------------
Id CagePos Type -State- Size Volume Spare Free Unavail Failed
0 0:0:0 SSD normal 3660800 34816 253952 3372032 0 0
1 0:1:0 SSD normal 3660800 465920 253952 2940928 0 0
2 0:2:0 SSD normal 3660800 34816 256000 3369984 0 0
3 0:3:0 SSD normal 3660800 464896 256000 2939904 0 0
4 0:4:0 SSD normal 3660800 34816 256000 3369984 0 0
5 0:5:0 SSD normal 3660800 464896 256000 2939904 0 0
6 0:6:0 SSD normal 3660800 36864 256000 3367936 0 0
7 0:7:0 SSD normal 3660800 505856 256000 2898944 0 0
8 0:8:0 SSD normal 3660800 931840 256000 2472960 0 0
9 0:9:0 SSD normal 3660800 705536 256000 2699264 0 0
10 0:10:0 SSD normal 3660800 931840 256000 2472960 0 0
11 0:11:0 SSD normal 3660800 702464 256000 2702336 0 0
12 0:12:0 SSD normal 3660800 931840 256000 2472960 0 0
13 0:13:0 SSD normal 3660800 704512 256000 2700288 0 0
14 0:14:0 SSD normal 3660800 931840 256000 2472960 0 0
15 0:15:0 SSD normal 3660800 712704 256000 2692096 0 0
16 0:16:0 SSD normal 3660800 931840 256000 2472960 0 0
17 0:17:0 SSD normal 3660800 706560 256000 2698240 0 0
18 0:18:0 SSD normal 3660800 931840 256000 2472960 0 0
19 0:19:0 SSD normal 3660800 713728 256000 2691072 0 0
20 0:20:0 SSD normal 3660800 931840 256000 2472960 0 0
21 0:21:0 SSD normal 3660800 713728 256000 2691072 0 0
22 0:22:0 SSD normal 3660800 931840 256000 2472960 0 0
23 0:23:0 SSD normal 3660800 711680 256000 2693120 0 0
-------------------------------------------------------------------------
24 total 87859200 15168512 6139904 66550784 0 0

cli% showcpg
----Volumes---- -Usage- -----------(MiB)------------
Id Name Warn% VVs TPVVs TDVVs Usr Snp Base Snp Free Total
2 SSD_r6 - 3 3 0 3 3 11088000 1536 55680 11145216
--------------------------------------------------------------------
1 total 3 3 11088000 1536 55680 11145216


cli% showspace -cpg SSD_r6
------------------------(MiB)------------------------
CPG --------EstFree--------- -----------Efficiency------------
Name RawFree LDFree OPFree Base Snp Free Total Compact Dedup Compress DataReduce Overprov
SSD_r6 64815104 48611328 - 11088000 1536 55680 11145216 2.22 - - - 0.39



cli% showtask -t 14
Id Type Name Status Phase Step -------StartTime------- ------FinishTime------- -Priority- ---User----
12539 system_task auto_admithw done --- --- 2023-03-02 00:01:09 GMT 2023-03-02 00:07:21 GMT n/a sys:3parsys
12540 system_tuning tunesys done --- --- 2023-03-02 00:07:18 GMT 2023-03-02 05:33:12 GMT n/a 3parsvc
12542 system_task auto_admithw done --- --- 2023-03-02 00:09:09 GMT 2023-03-02 00:09:13 GMT n/a sys:3parsys
12544 background_command tuneld done --- --- 2023-03-02 00:12:22 GMT 2023-03-02 00:13:08 GMT n/a 3parsvc
12545 move_regions tuneld: tp-2-sa-0.0 done --- --- 2023-03-02 00:12:23 GMT 2023-03-02 00:12:55 GMT med 3parsvc
12547 background_command tuneld done --- --- 2023-03-02 00:13:22 GMT 2023-03-02 00:17:58 GMT n/a 3parsvc
12548 move_regions tuneld: tp-2-sd-0.44 done --- --- 2023-03-02 00:13:24 GMT 2023-03-02 00:17:16 GMT med 3parsvc
12550 move_regions tuneld: tp-2-sd-0.44 done --- --- 2023-03-02 00:17:28 GMT 2023-03-02 00:17:48 GMT med 3parsvc
12551 background_command tuneld done --- --- 2023-03-02 00:18:23 GMT 2023-03-02 00:26:01 GMT n/a 3parsvc
12552 move_regions tuneld: tp-2-sd-0.0 done --- --- 2023-03-02 00:18:24 GMT 2023-03-02 00:22:50 GMT med 3parsvc
~CUT many moves and tunelds~
12751 background_command tuneld done --- --- 2023-03-02 05:24:59 GMT 2023-03-02 05:32:52 GMT n/a 3parsvc
12752 move_regions tuneld: tp-2-sd-0.43 done --- --- 2023-03-02 05:25:01 GMT 2023-03-02 05:28:52 GMT med 3parsvc
12755 move_regions tuneld: tp-2-sd-0.43 done --- --- 2023-03-02 05:29:04 GMT 2023-03-02 05:32:52 GMT med 3parsvc
12756 compact_cpg SSD_r6 done --- --- 2023-03-02 05:32:59 GMT 2023-03-02 05:32:59 GMT n/a 3parsvc


cli% showdate
Node Date
0 2023-03-02 13:21:33 GMT
1 2023-03-02 13:21:32 GMT

As I understand it, admithw and tunesys started and completed automatically.
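(Details of an individual run can be checked with showtask -d and the task Id, e.g.:

cli% showtask -d 12540

for the tunesys task in the list above.)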

But as you can see, the even "old" disks have only ~34 GB used, while the odd "old" disks have ~455 GB used.
The new even disks have 910 GB used, and the new odd ones ~690 GB.
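(For reference, each chunklet is 1 GiB, so the showpd -c counts line up with the showpd -space figures. For pd0, for example:

(32 used normal + 2 used spare) chunklets x 1024 MiB = 34816 MiB

which matches its Volume column above.)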

And there are some used spare chunklets reported:
cli% showspare -used
Pdid Chnk LdName LdCh State Usage Media Sp Cl From To
0 3110 log0.0 5 normal ld valid Y N 7:3572 ---
0 3111 log0.0 13 normal ld valid Y N 7:3571 ---
1 3110 log1.0 3 normal ld valid Y N 7:3567 ---
1 3111 log1.0 13 normal ld valid Y N 7:3566 ---
---------------------------------------------------------
Total chunklets: 4


And showspace reports zeroes:

cli% showspace
--Estimated(MB)---
RawFree UsableFree
0 0


Is that normal?

Can I just initiate tunesys to rebalance the array while workloads are active?

Thanks


 Post subject: Re: 8200 new disks
PostPosted: Fri Mar 03, 2023 12:29 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Yes, you can do tunesys.
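If you want to preview what it plans to move first, a dry run is an option (a minimal sketch; the -dr flag runs the analysis without actually moving anything, assuming your OS version supports it):

cli% tunesys -dr
cli% tunesys
cli% showtask -active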


If the system is heavily loaded, it is wise to watch the performance metrics: tunesys generates a lot of data movement, and you might see increased latency and reduced performance, especially if you're doing data reduction or generally pushing the system to its limits.
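For example, something along these lines to keep an eye on PD and VLUN latency while it runs (adjust the delay and iteration counts to taste):

cli% statpd -d 10 -iter 6
cli% statvlun -ni -rw -d 10 -iter 6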

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.

