datapump, flashback and 12c identity columns



In my last post I showed how object DDL is not extracted as of the flashback SCN specified to datapump (only the data is). In doing that experiment I initially did the testing using identity columns and found that these actually work differently in datapump from how I thought they would. It didn't make the point I wanted in my last post, so I redid the demo the old way......

This post is about that initial experience and how identity columns and their system-generated sequences seem to obey flashback_scn after all - as far as I know this is the only exception to this rule.

So let's set up the test case, which is almost a repeat of the last one apart from using the identity column feature. I'll skip most of the 'chat' between the steps and cut to the chase at the end.


SQL> create user flashdemo identified by flashdemo;

User created.

SQL> grant dba to flashdemo;

Grant succeeded.


Here is the identity column in use - the syntax looks a little strange, as if this couldn't possibly be a valid create table command, but it works just fine....


SQL> create table flashdemo.tab1 (id number generated by default on null as identity);

Table created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.


This has automatically created a sequence for us:

SQL>  select sequence_name from dba_sequences where sequence_owner='FLASHDEMO';

SEQUENCE_NAME
--------------------------------------------------------------------------------
ISEQ$$_20144
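
If you want to confirm which sequence backs which column, a quick check (a sketch, assuming the 12c dictionary view DBA_TAB_IDENTITY_COLS is available to you) is:

```sql
-- Map identity columns to their system-generated sequences
select table_name, column_name, sequence_name
  from dba_tab_identity_cols
 where owner = 'FLASHDEMO';
```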


And we can see the row was created fine:

SQL> select * from flashdemo.tab1;

        ID
----------
         1


Let's create some more:


SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> insert into flashdemo.tab1 values (null);

1 row created.

SQL> select * from flashdemo.tab1;

        ID
----------
         1
         2
         3
         4
         5


Now get the SCN as before:

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 3778438


Check the sequence again

SQL> select * from dba_sequences where sequence_name='ISEQ$$_20144';

SEQUENCE_OWNER  SEQUENCE_NAME   MIN_VALUE   MAX_VALUE  INCREMENT_BY  C  O  CACHE_SIZE  LAST_NUMBER  S  K
--------------  -------------  ----------  ----------  ------------  -  -  ----------  -----------  -  -
FLASHDEMO       ISEQ$$_20144            1  1.0000E+28             1  N  N          20           21  N  N

Create some more rows, commit, and check the sequence again:

SQL> insert into flashdemo.tab1 select null from dual connect by level <=100;

100 rows created.

SQL> commit;

Commit complete.


SQL> select * from dba_sequences where sequence_name='ISEQ$$_20144';

SEQUENCE_OWNER  SEQUENCE_NAME   MIN_VALUE   MAX_VALUE  INCREMENT_BY  C  O  CACHE_SIZE  LAST_NUMBER  S  K
--------------  -------------  ----------  ----------  ------------  -  -  ----------  -----------  -  -
FLASHDEMO       ISEQ$$_20144            1  1.0000E+28             1  N  N          20          121  N  N



Now export/import as before, using the SCN we grabbed above:

[oracle@server]:DEMO:[/oracle/ENDCON/oradata2/DEMO/controlfile]# expdp flashdemo/flashdemo flashback_scn=3778438

Export: Release 12.1.0.1.0 - Production on Wed Jun 4 12:23:22 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "FLASHDEMO"."SYS_EXPORT_SCHEMA_01":  flashdemo/******** flashback_scn=3778438
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "FLASHDEMO"."TAB1"                          5.070 KB       5 rows
Master table "FLASHDEMO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for FLASHDEMO.SYS_EXPORT_SCHEMA_01 is:
  /oracle/12.0.0.1/rdbms/log/expdat.dmp
Job "FLASHDEMO"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jun 4 12:24:02 2014 elapsed 0 00:00:37


SQL> drop user flashdemo cascade;

User dropped.

SQL>

[oracle@server]:DEMO:[/oracle/ENDCON/oradata2/DEMO/controlfile]# impdp / schemas=flashdemo

Import: Release 12.1.0.1.0 - Production on Wed Jun 4 12:25:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01":  /******** schemas=flashdemo
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "FLASHDEMO"."TAB1"                          5.070 KB       5 rows
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "OPS$ORACLE"."SYS_IMPORT_SCHEMA_01" successfully completed at Wed Jun 4 12:25:47 2014 elapsed 0 00:00:11


The table looks fine:

SQL> select * from tab1;

        ID
----------
         1
         2
         3
         4
         5


Check the sequence:

SQL> select * from dba_sequences where sequence_name='ISEQ$$_20144';

no rows selected

SQL> select * from user_sequences;

SEQUENCE_NAME   MIN_VALUE   MAX_VALUE  INCREMENT_BY  C  O  CACHE_SIZE  LAST_NUMBER  S  K
-------------  ----------  ----------  ------------  -  -  ----------  -----------  -  -
ISEQ$$_20283            1  1.0000E+28             1  N  N          20            6  N  N


The name has changed (as it was system generated originally), but note that LAST_NUMBER is completely correct for the data as of the SCN, and reflects a definition that never existed in the original database - LAST_NUMBER was never 6 there!
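
We can also check how the identity column was recreated by pulling the DDL for the imported table (a quick sketch, assuming you have access to DBMS_METADATA; exact output varies by version and transform settings):

```sql
-- The identity clause in the generated DDL should show a START WITH
-- that matches the exported data rather than the original sequence
-- definition
set long 10000
select dbms_metadata.get_ddl('TABLE', 'TAB1', 'FLASHDEMO') from dual;
```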

Let's insert a row to check:

SQL> insert into flashdemo.tab1 values (null);

1 row created.


SQL> select * from tab1;

        ID
----------
         1
         2
         3
         4
         5
         6

6 rows selected.


Magically it has carried on the sequence numbering correctly (even accounting for numbers that would have been lost to the caching!).

Oracle seems to have implemented some special logic to export/import identity columns - you can see in both the export and import logs that there is a specific extra step:

Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN

I'm guessing that the export process actually finds the max value from the table (using undo for the SCN-based extract), or that the current value is being stored in the data dictionary. I'm assuming it's the former, which would be very quick if the column is indexed - storing actual last sequence values in the dictionary would kill performance on highly concurrent systems, I would have thought.
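
Something like the following is what I imagine that lookup could be (pure speculation on my part, not Oracle's actual internal code - just an ordinary flashback query against the table as of the export SCN):

```sql
-- Speculative illustration only: the highest identity value as of the
-- flashback SCN, plus one, would give the START WITH for the
-- recreated sequence
select max(id) + 1 as restart_with
  from flashdemo.tab1 as of scn 3778438;
```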

Anyway - an interesting little change that has crept in with 12c.
