Datapump and flashback - why it might not be doing what you think....



Have you been merrily using flashback_scn/flashback_time and assuming that everything you extract is consistent? Well, I've got a short example to show that it may not be behaving exactly as you think. In practical terms this may not be an issue for most export/import tasks, but it's something to be aware of as it may catch you out at some point, and it's good to fully understand what is happening.

The problem I refer to is metadata changes during the time the export process is running. You may have assumed that flashback makes the entire extract consistent at the point you specify - it's not. It only makes the table data consistent at that point in time; object DDL is extracted as it stands when the export process gets to it, not as of the SCN you told it to be consistent to.

As an example that illustrates the point well, I'm going to create a very basic table with an id column which is populated using a trigger/sequence on insert (this is simplified in 12c with identity columns, and in fact that creates an interesting quirk of its own which I'll discuss in my next post).

 So first up, let's create those simple objects
 
SQL> create user flashdemo identified by flashdemo;

User created.

SQL> grant dba to flashdemo; -- being lazy here......

Grant succeeded.

SQL> create table flashdemo.tab1 (id number);

Table created.

SQL> create sequence flashdemo.demo_seq;

Sequence created.

CREATE OR REPLACE TRIGGER flashdemo.demo_trg
BEFORE INSERT ON flashdemo.tab1
FOR EACH ROW
BEGIN
 :new.id := demo_seq.nextval;
END;
/

Trigger created.








OK - that's the objects created, now let's put some data in

 SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> select * from flashdemo.tab1;

        ID
----------
         1

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from flashdemo.tab1;

        ID
----------
         1
         2
         3
         4
         5



So now we have 5 rows, let's find out the current SCN so we can export using that number


SQL> SELECT dbms_flashback.get_system_change_number FROM dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 3786730
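As an aside, there are a couple of other ways to capture a consistency point - the sketch below assumes 12c and the necessary privileges on v$database:

```sql
-- Alternative ways to capture the consistency point
select current_scn from v$database;              -- same kind of number, from v$database
select timestamp_to_scn(systimestamp) from dual; -- convert a timestamp to an SCN
```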





Let's now check the details of the sequence


SQL> select * from dba_sequences where sequence_name='DEMO_SEQ'
  2  /

SEQUENCE_OWNER  SEQUENCE_NAME   MIN_VALUE  MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER PARTITION_COUNT S K
--------------- --------------- ---------- ---------- ------------ - - ---------- ----------- --------------- - -
FLASHDEMO       DEMO_SEQ                 1 1.0000E+28            1 N N         20          21                 N N




If we now insert 100 rows using the neat connect by trick, and commit that

SQL> insert into flashdemo.tab1 select null from dual connect by level <=100;

100 rows created.

SQL> commit;

Commit complete.
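At this point a quick flashback query (essentially the same mechanism datapump will use for the table data) confirms that the rows as of the SCN we captured are still the original five:

```sql
-- Flashback query: read TAB1 as it stood at the SCN captured earlier.
-- expdp does much the same for TABLE_DATA when flashback_scn is given.
select count(*) from flashdemo.tab1 as of scn 3786730;
-- expect 5, even though the table now holds 105 rows
```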


The number of rows in the table is now 105, and looking at the details of the sequence again we can see the LAST_NUMBER column is now 121 (where previously it was 21). The fact that it doesn't match the actual last number issued is a feature of the caching of sequence numbers.

SQL> select * from dba_sequences where sequence_name='DEMO_SEQ'
  2  /

SEQUENCE_OWNER  SEQUENCE_NAME   MIN_VALUE  MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER PARTITION_COUNT S K
--------------- --------------- ---------- ---------- ------------ - - ---------- ----------- --------------- - -
FLASHDEMO       DEMO_SEQ                 1 1.0000E+28            1 N N         20         121                 N N
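The jump from 21 to 121 is the cache at work: LAST_NUMBER is the high-water mark written to the dictionary each time a fresh batch of 20 values is cached, not the last value actually handed out. If that bothers you, a NOCACHE sequence keeps the two in step (at a performance cost) - a quick sketch, using a hypothetical second sequence purely for illustration:

```sql
-- With NOCACHE every nextval is written through to the dictionary,
-- so LAST_NUMBER tracks real usage (demo_seq2 is just for this sketch)
create sequence flashdemo.demo_seq2 nocache;
select flashdemo.demo_seq2.nextval from dual;   -- returns 1
select last_number from dba_sequences
 where sequence_owner = 'FLASHDEMO'
   and sequence_name  = 'DEMO_SEQ2';            -- 2, the next value to issue
```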




So now let's export the schema using the SCN we determined earlier on

[oracle@server]:DEMO:[/oracle/ENDCON/oradata2/DEMO/controlfile]# expdp flashdemo/flashdemo flashback_scn=3786730 reuse_dumpfiles=y

Export: Release 12.1.0.1.0 - Production on Wed Jun 4 13:06:10 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "FLASHDEMO"."SYS_EXPORT_SCHEMA_01":  flashdemo/******** flashback_scn=3786730 reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "FLASHDEMO"."TAB1"                          5.062 KB       5 rows
Master table "FLASHDEMO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for FLASHDEMO.SYS_EXPORT_SCHEMA_01 is:
  /oracle/12.0.0.1/rdbms/log/expdat.dmp
Job "FLASHDEMO"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jun 4 13:06:42 2014 elapsed 0 00:00:32
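If you'd rather work with a time than an SCN, flashback_time does the same job (and in legacy mode the old consistent=y parameter gets translated into it). A hedged sketch of the equivalent command:

```
expdp flashdemo/flashdemo flashback_time=systimestamp reuse_dumpfiles=y
```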


Now let's drop the user completely and reimport (another nice feature of datapump is that it will create the user automatically, unlike the old exp)

SQL> drop user flashdemo cascade;

User dropped.

[oracle@server]:DEMO:[/oracle/ENDCON/oradata2/DEMO/controlfile]# impdp / schemas=flashdemo

Import: Release 12.1.0.1.0 - Production on Wed Jun 4 13:43:23 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01":  /******** schemas=flashdemo
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "FLASHDEMO"."TAB1"                          5.062 KB       5 rows
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully completed at Wed Jun 4 13:43:25 2014 elapsed 0 00:00:01


Right - as expected, we have exported/imported the 5 rows that existed at the SCN in question

SQL> select * from tab1;

        ID
----------
         1
         2
         3
         4
         5



.......but let's check what the sequence definition looks like

SQL> select * from user_sequences;

SEQUENCE_NAME   MIN_VALUE  MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER PARTITION_COUNT S K
--------------- ---------- ---------- ------------ - - ---------- ----------- --------------- - -
DEMO_SEQ                 1 1.0000E+28            1 N N         20         121                 N N


And we can see that the LAST_NUMBER value is 121 - the value that existed at the time of the datapump extract, not at the flashback_scn we specified!

So this now means if we insert into the table....
 
SQL> insert into tab1 values(null);

1 row created.

SQL> commit;

Commit complete.

SQL>  select * from tab1;

        ID
----------
         1
         2
         3
         4
         5
       121

6 rows selected.


We get the nextval of 121........
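If the gap matters, the clean-up after import is to rebuild the sequence from the real data. ALTER SEQUENCE ... RESTART is only documented in much later releases, so the portable approach on 12c is drop and recreate - the start value below is just max(id)+1 from the imported rows in this demo:

```sql
-- Recreate the sequence so it carries on from the imported data
drop sequence flashdemo.demo_seq;
create sequence flashdemo.demo_seq start with 6;  -- max(id)+1 for this demo
```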

Now in this case it probably doesn't matter (unless you really care about gaps in sequence numbers), but this issue affects all object DDL: views, packages, procedures, indexes etc.

Unless you completely stop all DDL activity, the export is not truly consistent, and this could in theory cause you some problems.

Just keep this in the back of your mind when doing exports - you really need to understand how your application works to judge whether this is an issue or not.....

Oh, and by the way, if a table's DDL is changed during an extract (i.e. you add a column to the table just before it is reached in the export job) the job will throw an error and that table will not be extracted at all, as DDL against the table invalidates the consistent read (truncate is included in this, as that too is DDL). It's pretty unlikely you'll see this, but for completeness here is the error you are likely to get:

ORA-01466: unable to read data - table definition has changed
