Bug With “enq: DX - contention” in Wait Class “Other”
Recently I got a request about a database responding slowly, with operations behaving unusually. As a first step in investigating the slowness, I reviewed CPU utilization at the OS level: how many processes were running and which of them were consuming CPU. Around 30 Oracle database processes turned out to be consuming between 90% and 100% CPU.
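Before going further, it helps to map the top CPU-consuming OS PIDs back to database sessions. Something like the following works (the &os_pid substitution variable is a placeholder; substitute a PID reported by top or ps):

SQL> select s.sid, s.serial#, s.username, s.program, p.spid
       from v$session s, v$process p
      where s.paddr = p.addr
        and p.spid = '&os_pid';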
Database Version: 11.1.0.7.0 on Linux
I then connected to the database and pulled an AWR report to see the top wait events and what could be causing the performance issue. One wait event accounted for 85% of activity; below is the information from the advisory report, where “enq: DX - contention” is reported in wait class “Other”.
   Description                   Active Sessions       Recommendations
                                 Percent of Activity
   ----------------------------  --------------------  ---------------
 1 Unusual "Other" Wait Event      1.71 | 85.54               5
 2 Unusual "Other" Wait Event       .29 | 14.24               4

Finding 1: Unusual "Other" Wait Event
Impact is 1.71 active sessions, 85.54% of total activity.
---------------------------------------------------------
Wait event "enq: DX - contention" in wait class "Other" was consuming
significant database time.
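For reference, the findings above come from the standard AWR/ADDM report scripts shipped under $ORACLE_HOME; a typical invocation from SQL*Plus is:

-- AWR report:
SQL> @?/rdbms/admin/awrrpt.sql
-- ADDM advisory findings:
SQL> @?/rdbms/admin/addmrpt.sql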
Usually, my first approach for any unusual “Other” wait event is to look at locks and at session blocking as a preliminary step. As expected, I found blocking sessions.
BLOCKING
------------------------------
SID 42 is blocking 43
SID 28 is blocking 138
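One way to produce that listing is via the BLOCKING_SESSION column of v$session (the formatting here is just an example):

SQL> select 'SID ' || blocking_session || ' is blocking ' || sid as blocking
       from v$session
      where blocking_session is not null;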
Now let’s look in more detail with the view “v$session_wait“: which event each session is waiting on, its state, and how much time it is spending in each wait.
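For example, filtering on the SIDs involved in the blocking chains (the microsecond columns exist in v$session_wait from 11g onwards):

SQL> select event, state, wait_time_micro,
            time_remaining_micro, time_since_last_wait_micro
       from v$session_wait
      where sid in (28, 42, 43, 138);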
EVENT                           STATE    WAIT_TIME_MICRO  TIME_REMAINING_MICRO  TIME_SINCE_LAST_WAIT_MICRO
------------------------------  -------  ---------------  --------------------  --------------------------
enq: DX - contention            WAITING          1108966               4891034                           0
inactive transaction branch     WAITING           336971                663029                           0
SQL*Net more data to client     WAITING    1094640039070                    -1                           0
SQL*Net more data to client     WAITING    1097358489246                    -1                           0
For the above sessions, all the events are in the WAITING state, and you can see the amount of time waited (WAIT_TIME_MICRO): if the session is currently waiting, the value is the time spent in the current wait; if not, it is the amount of time waited in the last wait. The TIME_REMAINING_MICRO column has multiple meanings depending on its value: if it is greater than 0, it is the amount of time remaining in the current wait; if it is -1, the session can wait indefinitely in the current wait. And in the view “v$session“, a WAIT_TIME of zero means the session is currently waiting.
SQL> select sid, event, seq#, p1, p1raw, p2, p3 from v$session where wait_time = 0;

 SID EVENT                                SEQ#         P1 P1RAW                    P2         P3
---- ------------------------------ ---------- ---------- ---------------- ---------- ----------
  28 SQL*Net more data to client          2074 1413697536 0000000054435000       8144          0
  42 SQL*Net more data to client         19526 1413697536 0000000054435000       8150          0
  43 enq: DX - contention                49987 1146617862 0000000044580006         62          0
 138 enq: DX - contention                49215 1146617862 0000000044580006         30          0
So all of these sessions are waiting, and they are exactly the SIDs involved in the blocking chains above. 🙂
Now, should we just kill the blocking sessions? Of course that resolves the immediate issue, but what if the same problem comes back again and again? For further analysis of such blocks/hangs, it is suggested to generate a system state dump using “ALTER SESSION SET EVENTS ‘IMMEDIATE TRACE NAME SYSTEMSTATE LEVEL 258’;”. The system state dump gave me much more information, as shown below.
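For completeness, both actions look like this (the SID/SERIAL# pairs are the blockers identified above; v$diag_info is the standard 11g way to locate the resulting trace file):

-- Kill the identified blockers:
SQL> alter system kill session '42,19547';
SQL> alter system kill session '28,8779';

-- If the issue recurs, capture a system state dump first (as SYSDBA):
SQL> alter session set events 'immediate trace name systemstate level 258';
SQL> select value from v$diag_info where name = 'Default Trace File';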
Waiter:
Current Wait Stack:
 0: waiting for 'SQL*Net more data to client'
    driver id=54435000, #bytes=1fd0, =0
    wait_id=2073 seq_num=2074 snap_id=1
    wait times: snap=18228 min 40 sec, exc=18228 min 40 sec, total=18228 min 40 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    in_wait=1 iflags=0x5a0
There is at least one session blocked by this session.
Dumping one waiter:
  inst: 1, sid: 138, ser: 6349
  wait event: 'enq: DX - contention'
    p1: 'name|mode'=0x44580006
    p2: 'transaction entry #'=0x1e
    p3: '0'=0x0
  row_wait_obj#: 4294967295, block#: 0, row#: 0, file# 0
  min_blocked_time: 3 secs, waiter_cache_ver: 64532
Wait State: auto_close=0 flags=0x22 boundary=(nil)/-1
Blocker:
Current Wait Stack:
 0: waiting for 'enq: DX - contention'
    name|mode=44580006, transaction entry #=3e, 0=0
    wait_id=313066 seq_num=50927 snap_id=1
    wait times: snap=2.163796 sec, exc=2.163796 sec, total=2.163796 sec
    wait times: max=6.000000 sec
    wait counts: calls=2 os=2
    in_wait=1 iflags=0x15a0
There is at least one session blocking this session.
Dumping one blocker:
  inst: 1, sid: 42, ser: 19547
Wait State: auto_close=0 flags=0x22 boundary=(nil)/-1
Session Wait History:
 0: waited for 'inactive transaction branch'
Apart from the wait events, the same sessions, shown below, consumed considerable CPU time. Other sessions queued behind them may exhibit higher than usual waits on the related events “enq: DX - contention” and “inactive transaction branch“.
   SID  Serial#  Status  Oracle User  O/S User  O/S PID  Session Program      Machine      CPU
------  -------  ------  -----------  --------  -------  -------------------  --------  -------
    42    19547  ACTIVE  RMSADMIN     vaira        6296  oracle@rmsuatdb (    rmsuatdb   24,753
    28     8779  ACTIVE  RMSADMIN     vaira       28500  oracle@rmsuatdb (    rmsuatdb    2,531
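For reference, per-session CPU figures like the ones above can be pulled with a join against v$sesstat (the statistic ‘CPU used by this session’ is standard and reported in centiseconds; the exact query layout here is just a sketch):

SQL> select s.sid, s.serial#, s.username, st.value as cpu_used
       from v$session s, v$sesstat st, v$statname n
      where st.sid = s.sid
        and n.statistic# = st.statistic#
        and n.name = 'CPU used by this session'
      order by st.value desc;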
So far, based on the findings: the high CPU comes from sessions trying to abort distributed transactions; these sessions hold a DX enqueue, which is associated with distributed transaction activity. If you look in detail at the system state dump files, you can see the kssdct() operation; it should be short-lived and should not appear in successive system states. For this same combination of wait events and high CPU, Bug 8552942 is listed for 10.2.0.4 and a patch is available for that fix. However, if no patch is available for later versions (e.g., 11gR1), the next step is to submit the alert log, traces, and system state dumps to Oracle Support and request a patch.
–Happy Reading–