Database migration with datapump 10g->12c



Finally one of our 'legacy' systems is being brought up to date after many years of languishing on Oracle 9. We're so far behind that a straightforward migration route (at least one that runs in a reasonable timeframe) isn't available, so we are left with a convoluted route. There are reasons for this that I won't go into for now - in fact I'm only going to cover one section of the process here.

So the general plan is:

1. Create a standby of the 9i db on another server
2. Break the standby link, leaving the original db shut down and untouched
3. Activate the standby copy and upgrade it to 10.2
4. Datapump over a network link from 10.2 into a pre-created 12.1 shell db
5. Run the app upgrade in 12.1

So there are a lot of steps there (and in fact a huge amount of complexity with interlinked systems), so the actual process is quite horrendous.

The bit I'm going to talk about for now, though, is step 4 - the datapump extract - there are maybe some useful elements here that other people can take away:

So before the datapump process can begin I need to build an empty shell database. For completeness I've covered that here - we use a minimal config and OMF to keep things really simple, so don't be surprised that this section is quite short....

(I'm using 12.1.0.1.5 by the way - the maximum version supported by the third party vendor.)

To start with I create an init file with this content (and add an entry into oratab for this db - shown after the parameters):

*.compatible='12.1.0.1'
*.db_create_file_dest='/oracle/RICHLINE/oradata/RICHLINE/oradata'
*.db_name='RICHLINE'
*.db_recovery_file_dest='/oracle/RICHLINE/oradata/RICHLINE/recovery_area'
*.db_recovery_file_dest_size=322122547200
*.diagnostic_dest='/oracle/admin/RICHLINE'
*.job_queue_processes=0
*.LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.sga_max_size=6442450944
*.sga_target=6442450944
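
For the oratab entry mentioned above, it's just the usual SID:ORACLE_HOME:startup-flag format - something like this (the home path is only an example, use whatever your 12.1 home actually is):

RICHLINE:/oracle/product/12.1.0.1/dbhome_1:N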


I then do the following to create the base db:

SQL> startup nomount -- no db to actually do anything with yet - this just allocates memory and processes
SQL> create spfile from pfile; -- make sure we have an spfile so things like the control_files parameter get recorded automatically
SQL> shutdown; -- needed so the next startup picks up the spfile
SQL> startup nomount;
SQL> create database character set WE8ISO8859P1;

Database created.

That's it - surprisingly short (as I've mentioned in a previous post). Now we just have to run catalog/catproc, which I won't paste in full here.....
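
(For anyone who hasn't built a database by hand before, that step is essentially just the standard dictionary scripts run from SQL*Plus as SYSDBA - roughly:)

SQL> @?/rdbms/admin/catalog.sql    -- data dictionary views
SQL> @?/rdbms/admin/catproc.sql    -- built in pl/sql packages
SQL> @?/sqlplus/admin/pupbld.sql   -- product user profile (normally run as SYSTEM)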

So now we have a working database - albeit with nothing in it.

Now I create some user tablespaces (I base these on what exists in the source db - I'm not going to let datapump do the tablespaces, as it's cleaner to create them manually rather than messing with conversion routines).

With OMF that's just

create tablespace xx;
create temporary tablespace temp;

which will create a datafile that autoextends up to 32GB, and if you need more than 32GB you just add another file via

alter tablespace xx add datafile;
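
To save working out the list by hand, something along these lines run against the source database gives me the application tablespaces to create (just a rough sketch - with OMF the sizing looks after itself):

select 'create tablespace ' || tablespace_name || ';'
from   dba_tablespaces
where  contents = 'PERMANENT'
and    tablespace_name not in ('SYSTEM','SYSAUX');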

So now we have somewhere to put our data, we need to crank up datapump. In my case I'm going to pull everything over a db link rather than creating a file, copying it over and then importing - this means the import starts loading data as soon as the job starts.

To enable this I need a db link from this 12c db to my source db - so:

create database link sourcedb connect to x identified by y using 'sourcetnsentry';
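
It's worth a quick sanity check that the link actually works before going any further:

select * from dual@sourcedb;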

Once I have that working I can create my datapump parfile (a parfile is better here to avoid escape character nightmares).

So here it is:

NETWORK_LINK=SOURCEDB
full=y
PARALLEL=8
LOGFILE=impdp_.log
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','APX','SYSMAN','SYS','ORA_AUD','CDCSUB','ANONYMOUS','CDCPUB','CDCUTILS','PERFSTAT','XDB','AUDSYS','HP_DBSPI','WMSYS','AUDSYS','OJVMSYS','TSMSYS')"
EXCLUDE=tablespace,password_verify_function,profile,password_history,statistics
streams_configuration=N
keep_master=y
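
With that saved as a parfile (imp_full.par is just my illustrative name here), kicking the job off is a one-liner along the lines of:

impdp system parfile=imp_full.par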


A few comments here to explain some of the parfile settings (ignoring the obvious ones):

1. I'm doing a full import - this means I get roles, public synonyms etc. - much more than just schema exports. However, I'm explicitly excluding some schemas as they spew out errors on the import side and I don't need any of them - they either already exist or are schemas I don't want to move.
2. As mentioned, I exclude tablespaces, but I also exclude password_verify_function and profile as there is a bug in my version and these have to be done manually - this may or may not be the case for you - see MOS note 1909477.1 for that.
3. Password history is excluded as it spews loads of errors and I don't really need it anyway.
4. Stats are also excluded, for a couple of reasons - and a point of note here, as there is sometimes a misconception: datapump does not gather stats, it uses dbms_stats to explicitly set the values in use in the source system. For some reason this is very slow, and in my case it's quicker to gather them again than to set them - which is crazy, as surely gathering stats has to set them using similar routines? Anyway, collecting fresh stats in 12c is probably a good idea anyway, as a lot has changed since 10g (a sketch of what I mean follows this list).
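
For the re-gather in point 4, something along these lines once the import has finished is all I have in mind (the schema name and degree are just illustrative - adjust to suit):

SQL> exec dbms_stats.gather_schema_stats(ownname => 'APPSCHEMA', degree => 8);
SQL> exec dbms_stats.gather_dictionary_stats;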

So anyway, notes out of the way - I kick off the process and it merrily runs - until it hits this:

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS [TABLE:"APPSCHEMA"."AQ$_TRADE_DATA_T_G"]
ORA-01801: date format is too long for internal buffer

Again, this is a bug caused by the database having been upgraded from a really old version - see MOS note 1311659.1. It's fixed by a dictionary hack:

UPDATE IND$ SET SPARE6 = SYSDATE WHERE TO_CHAR (SPARE6, 'DD-MON-YY') ='00-000-00';

Don't try that without Oracle Support........

So I retry.

And I get this message for a couple of tables, but the job runs to completion:

ORA-31679: Table data object "appschema"."CAML" has long columns, and longs can not be loaded/unloaded using a network link

So LONG columns can't be moved over a db link. In my case that's no big issue - the tables in question are tiny and there are no PK/FK constraints on them - they can be exported to a file separately and copied over.
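
For completeness, the handful of LONG tables can be moved afterwards with a traditional file-based export/import - roughly along these lines (the directory object, file names and table_exists_action choice are illustrative, and the table list is whatever the import reported). On the 10.2 source:

expdp system tables=APPSCHEMA.CAML directory=DATA_PUMP_DIR dumpfile=long_tabs.dmp logfile=expdp_long_tabs.log

then copy the dump file to the 12c server and load it with:

impdp system tables=APPSCHEMA.CAML directory=DATA_PUMP_DIR dumpfile=long_tabs.dmp logfile=impdp_long_tabs.log table_exists_action=append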

Reviewing the remaining errors, one cause is obvious and fixing it will resolve a number of them:

PLS-00201: identifier 'DBMS_JAVA.START_IMPORT' must be declared

Java was not installed in the db as we didn't think the app used it - but it seems it does - so I now have to run initjvm to add it in. Easy enough.
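
For reference, adding the JVM is just the standard scripts run as SYSDBA - this is the general shape of it (check the relevant MOS note for your exact version, as extra scripts may be needed depending on which components you want):

SQL> @?/javavm/install/initjvm.sql
SQL> @?/rdbms/admin/catjava.sql
SQL> @?/rdbms/admin/utlrp.sql    -- recompile invalid objects afterwards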

A re-run now is largely error free (after resetting the environment - using flashback in my case)

However it takes a while, and there are a few simple things I can do to speed it up. Here is the graph that covers most of the import progress:


As you can see, a few things stand out here that we can do something about - I call this tuning by colours.....

The brown is because the redo logs are tiny (I just have the OMF defaults), so I create some new groups and drop the small ones:

alter database add logfile size 1G; -- run 3 times to create 3 new groups, then drop the tiny default ones

alter database drop logfile group 1;
alter system switch logfile;   -- move off the current group so it can be dropped
alter system checkpoint;       -- make sure the old group is no longer needed
alter database drop logfile group 2;
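
A group can only be dropped once it's INACTIVE, so a quick look at v$log between steps tells you when it's safe:

select group#, bytes/1024/1024 as mb, status from v$log;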

The dark blue at the end is largely caused by the index builds. Some of that work is unavoidable (the data has to be read), but some of it is sort I/O, which I can reduce by increasing the PGA:

SQL> alter system set pga_aggregate_target=4G;

After doing that the entire job runs in 55 minutes.

For reference, there is 60GB of application data and 7GB of indexes.

From what I've seen previously this is actually pretty slow, and I think there are a few reasons for that worth mentioning:

1. The source db is on a really old AIX 5.2 box, so that's not helping
2. 10g doesn't help either - there is a bug where partition unload doesn't seem to work properly over a db link from 10g to 12c
3. There is one table accounting for about 85% of the data, and it's very wide thanks to multiple CHAR columns (each row is about 2K)

For reference, the import generated 81GB of redo - more than I had imagined, but roughly comparable with the amount of data being loaded.

So what was the point of all that?

Well, it shows how simple the migration can be - you just have to put in a little groundwork first to make it as quick as possible and iron out all the creases.

And for a further bit of info - a full export of the 12c database (to disk) takes 8 minutes, which is more in line with what I've seen in other testing on newer servers and versions. Datapump can be incredibly quick.
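
(For what it's worth, that test was nothing clever - just a plain parallel full export along these lines, with the parallel and file settings illustrative rather than tuned:)

expdp system full=y parallel=8 directory=DATA_PUMP_DIR dumpfile=full12c_%U.dmp logfile=expdp_full12c.log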

Comments

  1. This is an excellent article. I am working on converting our 11g database on Windows to 12c on Linux, and I found the lessons from your experience very helpful. Thanks so much for taking the time to share!
