Thread 1 cannot allocate new log sequence and Checkpoint not complete
When you see these messages, Oracle wants to reuse an online redo log file, but the checkpoint position is still in that log; Oracle must wait until the checkpoint completes before the file can be overwritten.
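You can verify this from SQL*Plus: a redo group whose checkpoint has not yet completed shows status ACTIVE in v$log, and Oracle cannot overwrite it until it goes INACTIVE. A quick check (standard dictionary view; output will vary per database):
SQL> select group#, thread#, sequence#, status from v$log;
Sessions stuck behind this condition typically wait on the 'log file switch (checkpoint incomplete)' wait event.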
CAUSE:
In this situation either DBWR is writing too slowly, log switches are happening before the log is completely full, or the log files are too small.
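To see how big your redo logs currently are and where the members live, you can join v$log and v$logfile (both standard views; the column list here is just a suggested starting point):
SQL> select l.group#, l.thread#, l.bytes/1048576 size_mb, l.status, f.member
from v$log l, v$logfile f
where l.group# = f.group#
order by l.group#;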
Thread 1 advanced to log sequence 38379 (LGWR switch)
  Current log# 3 seq# 38379 mem# 0: +OCMDB/OCMDB/onlinelog/group_3.263.853784771
Thread 1 advanced to log sequence 38380 (LGWR switch)
  Current log# 1 seq# 38380 mem# 0: +OCMDB/OCMDB/onlinelog/group_1.261.853784763
Mon Dec 29 05:17:19 2014
Thread 1 cannot allocate new log, sequence 38381
Checkpoint not complete
  Current log# 1 seq# 38380 mem# 0: +OCMDB/OCMDB/onlinelog/group_1.261.853784763
Thread 1 advanced to log sequence 38381 (LGWR switch)
  Current log# 2 seq# 38381 mem# 0: +OCMDB/OCMDB/onlinelog/group_2.262.853784767
Mon Dec 29 05:17:40 2014
Thread 1 cannot allocate new log, sequence 38382
Checkpoint not complete
Notice in the excerpt above that the two 'cannot allocate new log' messages (sequences 38381 and 38382) are only 21 seconds apart, which shows how quickly the logs are cycling. This occurs when there is a large amount of DML (updates) in the system, and you may need more redo log groups in your running instance. Adding more redo groups to your running database can help you get rid of this.
ADDING REDO LOGS
SQL> ALTER DATABASE ADD LOGFILE GROUP …. ;
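For example, to add a fourth group of 200M in the same ASM disk group the existing logs use (+OCMDB, as seen in the alert log above); the group number and size are illustrative, so adjust them for your environment:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('+OCMDB') SIZE 200M;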
If you have small redo logs and you see many log switches, then increasing the redo log size might help.
Step 1: Switch the logfile to make group 1 'INACTIVE'.
SQL> Alter system switch logfile;
SQL> select group#, status from v$log;
GROUP# STATUS
---------- ----------------
1 INACTIVE
2 ACTIVE
3 CURRENT
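Note: a group can only be dropped once it is INACTIVE. If group 1 still shows ACTIVE after the switch, you can force the checkpoint to complete and check again:
SQL> alter system checkpoint;
SQL> select group#, status from v$log;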
Step 2: Drop and recreate the redo log group with a size greater than before.
SQL> alter database drop logfile group 1;
SQL> alter database add logfile group 1 <…………..> size 100M reuse;
Repeat steps 1 and 2 until you have dropped and recreated all redo log groups with the bigger size.
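Once done, you can confirm the new sizes from v$log:
SQL> select group#, bytes/1048576 size_mb, status from v$log;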
It is recommended to have about 4-5 log switches per hour. You can use the script below to find the number of log switches on an hourly basis.
SCRIPTS FOR CHECKING ARCHIVELOG GENERATION
SQL> set lines 200 pages 2000
SQL> SELECT to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from v$log_history
group by to_char(first_time,'YYYY-MON-DD')
order by 1;
Archivelog generation on a daily basis:
SQL> select trunc(COMPLETION_TIME,'DD') Day,
       thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,
       round((sum(BLOCKS*BLOCK_SIZE)/1048576)/1024) GB,
       count(*) Archives_Generated
     from v$archived_log
     group by trunc(COMPLETION_TIME,'DD'), thread#
     order by 1;
Archivelog generation on an hourly basis:
SQL> select trunc(COMPLETION_TIME,'HH') Hour,
       thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576)/1024 GB,
       count(*) Archives
     from v$archived_log
     group by trunc(COMPLETION_TIME,'HH'), thread#
     order by 1;
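The hourly MB column from this script can also guide the redo size chosen in step 2 above. As a purely illustrative example: if the busiest hour generates about 2000 MB of redo and you are targeting 4-5 switches per hour, each redo log should be roughly 2000/5 = 400M rather than 100M.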