
3Par High I/O per Disk

Posted: Wed Mar 31, 2021 1:02 pm
by xbennetxk
VMware cluster
Oracle 2-node RAC

Running the health check shows this:
3-PAR_7400 cli% checkhealth -svc -detail
Checking alert
Checking ao
Checking cabling
Checking cage
Checking cert
Checking dar
Checking date
Checking file
Checking fs
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking pdch
Checking port
Checking qos
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Checking sp
Component ---------------Summary Description--------------- Qty
Alert New alerts 10
cert Certificates that will expire soon 1
PD Disks experiencing a high level of I/O per second 36
---------------------------------------------------------------
3 total 47

Component -----Identifier------ ------------------------------------Detailed Description-------------------------------------
Alert sw_vv:54:MailOnPrem Thin provisioned VV MailOnPrem has reached reserved allocation warning of 510G (85% of 600G)
Alert sw_sysmgr Total NL raw space usage at 24376G (above 75% of total 32496G)
Alert sw_vv:35:RAG_VDI Thin provisioned VV RAG_VDI has reached reserved allocation warning of 1638G (80% of 2048G)
Alert sw_vv:14:ITMisc Thin provisioned VV ITMisc has reached reserved allocation warning of 1638G (80% of 2048G)
Alert sw_vv:4:SHARES Thin provisioned VV SHARES has reached reserved allocation warning of 3200G (80% of 4000G)
Alert sw_vv:52:libelle12a Thin provisioned VV libelle12a has reached reserved allocation warning of 40G (80% of 50G)
Alert sw_vv:53:libelle12c Thin provisioned VV libelle12c has reached reserved allocation warning of 40G (80% of 50G)
Alert sw_vv:57:MailServers Thin provisioned VV MailServers has reached reserved allocation warning of 240G (80% of 300G)
Alert sw_sysmgr Total FC raw space usage at 12509G (above 50% of total 19656G)
Alert sw_sysmgr Total SSD raw space usage at 2992G (above 75% of total 3568G)
cert HP_3PAR 7400c-1674164 Certificate for Service:unified-server* will expire in 21 days
PD disk:0 Disk is experiencing a high level of I/O per second: 350.0
PD disk:1 Disk is experiencing a high level of I/O per second: 287.8
PD disk:2 Disk is experiencing a high level of I/O per second: 338.6
PD disk:3 Disk is experiencing a high level of I/O per second: 288.8
PD disk:4 Disk is experiencing a high level of I/O per second: 296.2
PD disk:5 Disk is experiencing a high level of I/O per second: 264.6
PD disk:8 Disk is experiencing a high level of I/O per second: 342.6
PD disk:9 Disk is experiencing a high level of I/O per second: 265.4
PD disk:10 Disk is experiencing a high level of I/O per second: 350.6
PD disk:11 Disk is experiencing a high level of I/O per second: 357.0
PD disk:12 Disk is experiencing a high level of I/O per second: 302.4
PD disk:13 Disk is experiencing a high level of I/O per second: 293.0
PD disk:16 Disk is experiencing a high level of I/O per second: 219.4
PD disk:17 Disk is experiencing a high level of I/O per second: 134.2
PD disk:18 Disk is experiencing a high level of I/O per second: 139.2
PD disk:19 Disk is experiencing a high level of I/O per second: 218.6
PD disk:20 Disk is experiencing a high level of I/O per second: 227.4
PD disk:21 Disk is experiencing a high level of I/O per second: 152.4
PD disk:22 Disk is experiencing a high level of I/O per second: 137.0
PD disk:23 Disk is experiencing a high level of I/O per second: 220.6
PD disk:24 Disk is experiencing a high level of I/O per second: 218.2
PD disk:25 Disk is experiencing a high level of I/O per second: 141.4
PD disk:26 Disk is experiencing a high level of I/O per second: 157.2
PD disk:27 Disk is experiencing a high level of I/O per second: 220.0
PD disk:28 Disk is experiencing a high level of I/O per second: 342.8
PD disk:29 Disk is experiencing a high level of I/O per second: 276.6
PD disk:31 Disk is experiencing a high level of I/O per second: 288.8
PD disk:32 Disk is experiencing a high level of I/O per second: 272.4
PD disk:33 Disk is experiencing a high level of I/O per second: 275.2
PD disk:36 Disk is experiencing a high level of I/O per second: 347.4
PD disk:37 Disk is experiencing a high level of I/O per second: 271.2
PD disk:38 Disk is experiencing a high level of I/O per second: 356.2
PD disk:39 Disk is experiencing a high level of I/O per second: 280.2
PD disk:40 Disk is experiencing a high level of I/O per second: 304.2
PD disk:41 Disk is experiencing a high level of I/O per second: 304.2
PD disk:44 Disk is experiencing a high level of I/O per second: 299.0
--------------------------------------------------------------------------------------------------------------
47 total

Can anyone point me in a direction that could help me resolve this?

I looked into this blog post: https://allanmcaleavy.com/2017/11/16/3p ... he-delack/
and found this:
3-PAR_7400 cli% statcmp -iter 1
13:00:46 03/31/2021 ---- Current ----- --------- Total ----------
Node Type Accesses Hits Hit% Accesses Hits Hit% LockBlk
0 Read 8488 6060 71 8488 6060 71 67520
0 Write 1802 795 44 1802 795 44 779929
1 Read 10482 8046 77 10482 8046 77 53556
1 Write 1526 538 35 1526 538 35 741572

Queue Statistics
Node Free Clean Write1 WriteN WrtSched Writing DcowPend DcowProc RcpyRev
0 30740 409260 3158 859 1101 51 0 0 0
1 30813 407822 3610 677 1250 60 0 0 0

Temporary and Page Credits
Node Node0 Node1 Node2 Node3 Node4 Node5 Node6 Node7
0 80 38301 --- --- --- --- --- ---
1 38169 5 --- --- --- --- --- ---

Page Statistics
------------CfcDirty------------ --------------CfcMax-------------- ------------------DelAck------------------
Node FC NL SSD_150KRPM SSD_100KRPM FC NL SSD_150KRPM SSD_100KRPM FC NL SSD_150KRPM SSD_100KRPM
0 3228 352 0 1539 28800 7200 0 19200 129133401 53566067 0 559798
1 3091 552 0 1902 28800 7200 0 19200 131177596 48422219 0 459045

Re: 3Par High I/O per Disk

Posted: Thu Apr 01, 2021 2:57 am
by MammaGutt
The simple answer: more of everything.

At the time you ran checkhealth, you had more I/O than the threshold for both NL and FC drives.

Looking at the last page, you have DelAck on FC, NL and SSD. DelAck is a counter that increments each time the controllers have been unable to destage write cache to the backend (disk) and an I/O had to be written directly to disk rather than to write cache. DelAck is reset at node reboot.
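If you want to see whether those counters are still climbing right now, rather than just being history since the last node reboot, run statcmp over a few intervals and watch the DelAck columns. From memory, -d sets the sample interval in seconds, so double-check against your CLI version:

3-PAR_7400 cli% statcmp -d 30 -iter 10

If the FC and NL DelAck numbers keep increasing between iterations while the hosts are busy, the backend is still being overrun in real time and not just historically.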

The bigger answer is that you could probably tweak this for some minor improvements, but since you have at some point been struggling in every tier, I don't see a permanent fix without either more hardware to handle the load or reducing the load by removing services.
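If you want to narrow down where the load is actually coming from before deciding what to move off the array, something along these lines is what I would start with (flags from memory, so verify them against your CLI version):

3-PAR_7400 cli% statvlun -ni -rw -iter 1
3-PAR_7400 cli% statpd -iter 1
3-PAR_7400 cli% showpd

statvlun with -ni skips idle LUNs and -rw splits reads from writes, so the busiest VVs and hosts stand out; statpd shows which drives (and therefore which tier) are saturated; showpd gives you the drive count per type, so you can see how few spindles are carrying that load.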

Re: 3Par High I/O per Disk

Posted: Thu Apr 01, 2021 9:43 am
by xbennetxk
MammaGutt wrote:The simple answer: more of everything.

At the time you ran checkhealth, you had more I/O than the threshold for both NL and FC drives.

Looking at the last page, you have DelAck on FC, NL and SSD. DelAck is a counter that increments each time the controllers have been unable to destage write cache to the backend (disk) and an I/O had to be written directly to disk rather than to write cache. DelAck is reset at node reboot.

The bigger answer is that you could probably tweak this for some minor improvements, but since you have at some point been struggling in every tier, I don't see a permanent fix without either more hardware to handle the load or reducing the load by removing services.


Thanks for looking at this.
Do I need to worry about DelAck if I'm not using iSCSI? I was checking this VMware KB: https://kb.vmware.com/s/article/1002598 and saw these mentioned there.

Would there be a benefit in rebooting the nodes?

Re: 3Par High I/O per Disk

Posted: Thu Apr 01, 2021 12:31 pm
by MammaGutt
DelAck = Delayed Acknowledgement.

It is not an iSCSI thing, so yes. You should worry.

Rebooting nodes is probably not the answer. The problem visible from your output is that you are putting X amount of IOPS/load on the system and the system's hardware configuration isn't sized to handle it.
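As a very rough sanity check (rule-of-thumb numbers, not HPE specs): a 7.2K NL drive is usually good for something like 75 backend IOPS and a 10K FC drive for around 150. Compare that against the per-disk figures checkhealth flagged, or watch them live with:

3-PAR_7400 cli% statpd -iter 1

Your checkhealth output shows the NL and FC disks sustaining roughly 135 to 357 IOPS each, which is well beyond what they can comfortably deliver, and that is exactly the situation in which DelAck starts counting up.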

The only way to get more backend performance is adding more drives to share the load... however, a 7k system (released in 2012) isn't really worth putting a lot of money into. 8k/Primera/Nimble would be the natural upgrade path within HPE, with the 8k looking like the least attractive option since it has been announced end of sale.
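And if you do end up adding drives to the existing 7400, keep in mind that, as far as I recall, the array does not restripe existing data onto the new spindles by itself; you need to run a rebalance afterwards:

3-PAR_7400 cli% tunesys

tunesys redistributes chunklets across the enlarged drive set so the new disks actually share the existing load. On a busy system it can run for quite a while, so schedule it accordingly.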