Oracle LogMiner Configuration (Illustrated)


LogMiner Configuration Manual

1 Introduction to LogMiner

1.1 Introduction to LogMiner

Oracle LogMiner is a very practical and useful tool that Oracle has shipped with its products since 8i. With it you can easily extract the contents of Oracle online and archived redo log files; in particular, it can reconstruct all DML and DDL operations performed against the database. The tool is especially well suited to debugging, auditing, and rolling back a particular transaction.

The LogMiner tool actually consists of a group of PL/SQL packages and some dynamic views (part of the packages built into Oracle 8i). It is released as part of the Oracle 8i database and is a completely free tool. Compared with other built-in Oracle tools, however, it is somewhat complex to use, mainly because it provides no graphical user interface (GUI).

1.2 Uses of LogMiner

Before Oracle 8i, Oracle provided no tools to help database administrators read and interpret the contents of redo log files. When a system problem occurred, all an ordinary database administrator could do was package up all the log files, send them to Oracle's technical support, and then wait quietly for technical support's final answer. Starting with 8i, however, Oracle provides a powerful tool for this: LogMiner.

LogMiner can be used to analyze online log files as well as offline (archived) log files; it can analyze the redo log files of its own database, and it can also be used to analyze the redo log files of another database.

Generally speaking, the main uses of LogMiner are:

1. Tracking database changes: changes can be tracked offline, without affecting the performance of the online system.

2. Rolling back changes: specific data changes can be rolled back, reducing the need for point-in-time recovery.

3. Optimization and capacity planning: analyzing the data in the log files reveals the data growth pattern.

1.3 Usage in Detail

1.3.1 Installing LogMiner

Before using LogMiner, confirm whether your Oracle installation includes the LogMiner analysis packages; on Windows, Oracle 10g and above generally include them by default. If you are not sure, log in to the running system as a DBA and check whether the dbms_logmnr and dbms_logmnr_d packages exist. If they do not, you must first install the LogMiner tool by running the following two scripts:

1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as a DBA user. The first script creates the DBMS_LOGMNR package, which is used to analyze log files. The second script creates the DBMS_LOGMNR_D package, which is used to create data dictionary files.
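To check whether the packages already exist, a query along the following lines can be run as a DBA (a sketch; the status values you see depend on your installation):

```sql
-- check whether the LogMiner packages are installed
SELECT object_name, object_type, status
  FROM dba_objects
 WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
   AND object_type IN ('PACKAGE', 'PACKAGE BODY');
```

If the query returns no rows, run the two installation scripts above.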

After installation, the following procedures and views are available:

Type      | Name                     | Use
----------|--------------------------|------------------------------------------------
Procedure | dbms_logmnr_d.build      | Creates a data dictionary file
Procedure | dbms_logmnr.add_logfile  | Adds a log file to the list of files to analyze
Procedure | dbms_logmnr.start_logmnr | Starts LogMiner, using the dictionary file and optional restrictions to determine which log contents to analyze
Procedure | dbms_logmnr.end_logmnr   | Stops the LogMiner analysis
View      | v$logmnr_dictionary      | Shows information about the dictionary file used to resolve object IDs to object names
View      | v$logmnr_logs            | Lists the log files registered when LogMiner was started
View      | v$logmnr_contents        | After LogMiner starts, can be queried at the SQL prompt to read the redo log contents

1.3.2 Creating a Data Dictionary File

The LogMiner tool actually consists of two built-in PL/SQL packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR). Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary as a text file. The dictionary file is optional, but without it LogMiner cannot translate the internal identifiers in the statements it reconstructs: table names, column names, and values will appear in hexadecimal form, which we cannot read directly. For example, consider the following SQL statement:

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'Zhang San'); 

Without a dictionary file, LogMiner would reconstruct it as something like the following:

insert into Object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65')); 

The purpose of creating a data dictionary file is to let LogMiner resolve internal references to their real names, instead of the internal hexadecimal form. The data dictionary file is a text file, created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that the database's data dictionary also changes, the dictionary file needs to be re-created. Another situation: to analyze the redo log files of another database, the dictionary file must be rebuilt from the data dictionary of the database that produced those logs.

Before creating a data dictionary file, configure the LogMiner folder:

CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
alter system set utl_file_dir='D:\oracle\oradata\practice\LOGMNR' scope=spfile;

Log in as a DBA user and create the dictionary file in the LogMiner folder configured above:

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');

1.3.3 Adding Log Files for Analysis

Oracle LogMiner can analyze both online and archived (offline) log files. Log files are added for analysis with the dbms_logmnr.add_logfile procedure: the first file is added with the dbms_logmnr.NEW option, and subsequent files with the dbms_logmnr.ADDFILE option.

1. Create the list

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\REDO03.LOG',options=>dbms_logmnr.NEW);
END;
/

2. Add other log files to the list

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001',options=>dbms_logmnr.ADDFILE);
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00003_0817639922.001',options=>dbms_logmnr.ADDFILE);
END;
/

1.3.4 Analyzing Logs with LogMiner

LogMiner analysis can be run in two ways: unrestricted or restricted. An unrestricted analysis processes all the log files added to the list; a restricted analysis processes only the portions of the log files that match the specified restrictions.

1. Unrestricted

EXECUTE dbms_logmnr.start_logmnr(dictfilename=>'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

2. Restricted

By setting various parameters of DBMS_LOGMNR.START_LOGMNR (see the table below for their meaning), the scope of the log file analysis can be narrowed. For example, by setting the start-time and end-time parameters we can limit the analysis to a single time range.

Parameter    | Type      | Default    | Meaning
-------------|-----------|------------|------------------------------------------------------
StartScn     | Number    | 0          | Analyze only the portion of the redo logs with SCN >= StartScn
EndScn       | Number    | 0          | Analyze only the portion of the redo logs with SCN <= EndScn
StartTime    | Date      | 1998-01-01 | Analyze only the portion of the logs with timestamp >= StartTime
EndTime      | Date      | 2988-01-01 | Analyze only the portion of the logs with timestamp <= EndTime
DictFileName | Character |            | Path of the data dictionary file for the database

As in the example below, we analyze only the logs of June 8, 2013:

EXECUTE dbms_logmnr.start_logmnr(
  DictFileName => 'D:\..\practice\LOGMNR\dictionary.ora',
  StartTime => to_date('2013-6-8 00:00:00','YYYY-MM-DD HH24:MI:SS'),
  EndTime => to_date('2013-6-8 23:59:59','YYYY-MM-DD HH24:MI:SS'));

The starting and ending SCN can also be set to limit the scope of the analysis:

EXECUTE dbms_logmnr.start_logmnr(
  DictFileName => 'D:\..\practice\LOGMNR\dictionary.ora',
  StartScn => 20,
  EndScn => 50);

1.3.5 Observing the Results (v$logmnr_contents)

At this point, the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents contains all the information LogMiner obtained.

SELECT sql_redo FROM v$logmnr_contents;

If we just want to know what a particular user did to a particular table, that can be obtained with a query like the following, which returns all the operations user LOGMINER performed on the EMP table.

SELECT sql_redo FROM v$logmnr_contents WHERE username='LOGMINER' AND seg_name='EMP';

The main columns of v$logmnr_contents are:

No. | Name             | Meaning
----|------------------|------------------------------------------------------------
1   | SCN              | System change number (SCN) of the specific data change
2   | TIMESTAMP        | Time the data change occurred
3   | COMMIT_TIMESTAMP | Time the data change was committed
4   | SEG_OWNER        | Owner of the segment where the change occurred
5   | SEG_NAME         | Name of the changed segment
6   | SEG_TYPE         | Type of the changed segment (numeric)
7   | SEG_TYPE_NAME    | Type name of the segment where the change occurred
8   | TABLE_SPACE      | Tablespace of the changed segment
9   | ROW_ID           | Row ID of the specific changed row
10  | SESSION_INFO     | Session information of the user process that made the change
11  | OPERATION        | Operation recorded in the redo record (e.g. INSERT)
12  | SQL_REDO         | SQL statement that reapplies (redoes) the change in the redo record (forward)
13  | SQL_UNDO         | SQL statement that rolls back (undoes) the change in the redo record (reverse)

One point needs emphasis: the analysis results in v$logmnr_contents exist only during the lifetime of the session in which we ran 'dbms_logmnr.start_logmnr'. This is because all LogMiner results are kept in PGA memory: no other process can see them, and when the process ends, the results disappear with it.

Finally, use DBMS_LOGMNR.END_LOGMNR to terminate the log analysis; the PGA memory area is released, and the results no longer exist.
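The call itself takes no parameters; it is simply:

```sql
-- stop LogMiner and release the PGA memory used by the analysis
EXECUTE dbms_logmnr.end_logmnr;
```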

2 Oracle Database Settings for Data Synchronization

LogMiner uses Oracle views to examine the executed SQL statements, which requires the following four setup steps:

1. Set the database to archive-log mode.

2. Set the LogMiner dictionary file path.

3. Create a data synchronization user (e.g. user name LOGMINER, with DBA privileges).

4. Verify that the configuration succeeded.

2.1 Setting the Database to Archive-Log Mode

2.1.1 Checking Whether the Database Is in Archive-Log Mode

Connect to the database using SqlPlus or the command line interface (the command line is used below).

Start the SqlPlus program:

sqlplus /nolog

Log in to the source database as a DBA user:

conn system/system@practice as sysdba

-- check whether the PRACTICE database is in archive-log mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

If the database is shown to be in archive-log mode, the archiving setup below can be skipped; if it is shown to be in noarchive-log mode, the following settings are needed.

In this walkthrough the output showed the database was not archiving, so archiving had to be configured.

2.1.2 Configuring Archiving

Create an ARCHIVE folder; its path depends on the server. In this walkthrough it is set to "D:\oracle\oradata\practice\ARCHIVE".

-- set archived log file path

ALTER SYSTEM SET log_archive_dest="D:\oracle\oradata\practice\ARCHIVE";

-- log file name format:

ALTER SYSTEM SET log_archive_format="ARC%S_%R.%T" SCOPE=SPFILE;

-- after the changes, shut down the database and restart it in MOUNT mode

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- set the database archiving mode

ALTER DATABASE ARCHIVELOG;

(Note: if restarting the database fails, see the troubleshooting section in chapter 4.)

2.1.3 Verifying That Archiving Was Set Successfully

-- check whether the PRACTICE database is in archive-log mode

SELECT dbid, name, log_mode FROM v$database;
-- or
ARCHIVE LOG LIST;

-- verify that the parameter settings have taken effect

SELECT dest_id, status, destination FROM v$archive_dest WHERE dest_id =1;

Once the parameter settings have taken effect, open the database:

ALTER DATABASE OPEN;

2.2 LogMiner Settings

2.2.1 Creating the LogMiner Folder

Create a LOGMNR folder, path "D:\oracle\oradata\practice\LOGMNR".

2.2.2 Setting the LogMiner Dictionary File Path

Configure the path for the data dictionary file:

CREATE DIRECTORY utlfile AS 'D:\oracle\oradata\practice\LOGMNR';
alter system set utl_file_dir='D:\oracle\oradata\practice\LOGMNR' scope=spfile;

2.2.3 Enabling Supplemental Logging

Enable supplemental logging:

alter database add supplemental log data;

2.2.4 Restarting the Database and Verifying

-- after the changes, shut down and restart the database

SHUTDOWN IMMEDIATE;
STARTUP;

-- check that the LogMiner folder is set

SHOW PARAMETER utl_file_dir;

2.3 Creating a Data Synchronization User

Create a LOGMINER user in the database; the user must have DBA privileges.

Create the LOGMINER user in the source database and grant it DBA privileges:

CREATE USER LOGMINER IDENTIFIED BY LOGMINER;
GRANT CONNECT, RESOURCE, DBA TO LOGMINER;

3 Examples of Reading Logs with LogMiner

Before LogMiner can read archived or online logs, the settings from chapter 2 must be in place; after that, both archived and online logs can be analyzed. In particular, supplemental logging must be enabled: without it, LogMiner cannot show DDL statements. According to our test results, only DDL issued after supplemental logging was enabled can be viewed; DDL issued before that cannot.
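Whether supplemental logging is already enabled can be checked in v$database, for example with the following query (a sketch; on most versions the column returns NO when supplemental logging is off):

```sql
-- check whether minimal supplemental logging is enabled (NO means it is off)
SELECT supplemental_log_data_min FROM v$database;
```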

3.1 Reading the Online Log with LogMiner

3.1.1 Preparing Test Data

Log in as the LOGMINER user (not as DBA) and create the AAAAA table (in Oracle 11g, note the case of the username and password):

CONNECT LOGMINER/LOGMINER@PRACTICE
CREATE TABLE AAAAA(field001 varchar2(100));
INSERT INTO AAAAA (field001) values ('000000');
INSERT INTO AAAAA (field001) values ('0000010');
commit;

3.1.2 Creating the Data Dictionary File

Because database objects have changed, the data dictionary file must be re-created.

Log in as LOGMINER (with DBA privileges) and generate the dictionary file:

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');

3.1.3 Confirming the Current Online Log File

-- determine which online log file is current

SELECT group#, sequence#, status, first_change#, first_time FROM V$log ORDER BY first_change#;

The output shows that online log REDO03 is in the ACTIVE state.
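If it is not obvious which file belongs to that log group, the group-to-file mapping can be checked in v$logfile, for example:

```sql
-- map redo log groups to their member files
SELECT group#, member FROM v$logfile ORDER BY group#;
```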

3.1.4 Adding the Log File for Analysis

Add the online log file for analysis:

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\REDO03.LOG',options=>dbms_logmnr.NEW);
END;
/

3.1.5 Analyzing with LogMiner

-- start the LogMiner analysis

EXECUTE dbms_logmnr.start_logmnr(dictfilename=>'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.1.6 Observing the Results

-- query the related log records

SELECT sql_redo, sql_undo, seg_owner
  FROM v$logmnr_contents
 WHERE seg_name='AAAAA'
   AND seg_owner='LOGMINER';

3.2 Reading the Archive Log with LogMiner

3.2.1 Preparing Test Data

Log in as the LOGMINER user (not as DBA) and create the EMP table (in Oracle 11g, note the case of the username and password):

CONN LOGMINER/LOGMINER@PRACTICE
CREATE TABLE EMP
       (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
        ENAME VARCHAR2(10),
        JOB VARCHAR2(9),
        MGR NUMBER(4),
        HIREDATE DATE,
        SAL NUMBER(7,2),
        COMM NUMBER(7,2),
        DEPTNO NUMBER(2));

Insert data into EMP:

INSERT INTO EMP VALUES (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
COMMIT;

-- force a log switch, then find the archived log files in the v$archived_log view

CONNECT system/system@practice as sysdba
ALTER SYSTEM SWITCH LOGFILE;
select sequence#, FIRST_CHANGE#, NEXT_CHANGE#, name from v$archived_log order by sequence# desc;

3.2.2 Creating the Data Dictionary File

Ensure that the LogMiner settings from section 2.2 are in place.

Log in as LOGMINER (with DBA privileges) and generate the dictionary file:

CONN LOGMINER/LOGMINER@PRACTICE AS SYSDBA
EXECUTE dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location =>'D:\oracle\oradata\practice\LOGMNR');

3.2.3 Adding the Log File for Analysis

Add the archived log file for analysis:

BEGIN
dbms_logmnr.add_logfile(logfilename=>'D:\oracle\oradata\practice\ARCHIVE\ARC00002_0817639922.001',options=>dbms_logmnr.NEW);
END;
/

3.2.4 Analyzing with LogMiner

-- start the LogMiner analysis

EXECUTE dbms_logmnr.start_logmnr(dictfilename=>'D:\oracle\oradata\practice\LOGMNR\dictionary.ora');

3.2.5 Observing the Results

-- query the related log records

SELECT sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name='EMP'
   AND seg_owner='LOGMINER';

4 Miscellaneous

4.1 Troubleshooting

4.1.1 Error ORA-12514

If an ORA-12514 error occurs:

Modify the listener.ora file, located under \NETWORK\ADMIN in the Oracle installation directory (in this walkthrough, "D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora"), adding the following settings:

     (SID_DESC =
       (GLOBAL_DBNAME = practice)
       (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
       (SID_NAME = practice)
      )

Restart the TNSListener service for the settings to take effect.

4.1.2 Error ORA-16018

If an ORA-16018 error occurs:

The problem is that Flashback Database is enabled, so by default archived files are saved to the flash recovery area path. A simple solution is to add the scope=spfile parameter when setting the archive file path:

-- set archived log file path

ALTER SYSTEM SET log_archive_dest="D:\oracle\oradata\practice\ARCHIVE" scope=spfile;

This way the flashback path itself is unaffected, but the archived log files are saved to the specified folder instead of the flash recovery area.
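The current flash recovery area location can be inspected, for example, with the following (this parameter exists in 10g and later):

```sql
-- show the flash recovery area path
SHOW PARAMETER db_recovery_file_dest
```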

4.2 Further LogMiner Information

See the official Oracle documentation for LogMiner at the following address:

http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm


Posted by Jamie at December 01, 2013 - 8:54 AM