Datapump export from all PDBs in one go?

Again inspired by an OTN question, I came up with a solution that's perhaps a useful concept for solving lots of similar requirements.

In this case the poster wanted a script to export each PDB and wondered if this was somehow possible in a single command. Well, it isn't, but the following solution seems quite neat to me (though you may disagree....)

In order to make the datapump export possible we need to create a directory object in each PDB (with the same name in every one) - DATA_PUMP_DIR looks like it should be ready-made for this, but for some reason it doesn't work in the PDBs.

So let's create that first (run this in each PDB):

create directory tmp as '/tmp';

(you could use the method I show later to do this in every PDB at once........)
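As a taste of the technique used later in this post, here's a hypothetical sketch of running that CREATE DIRECTORY statement in every PDB at once with catcon.pl (the standard $ORACLE_HOME layout and the /tmp paths are my assumptions - adjust for your install):

```shell
# Hypothetical sketch: run the CREATE DIRECTORY statement in every PDB via catcon.pl
# (assumes a standard $ORACLE_HOME layout; -C excludes the root and seed)
echo "create directory tmp as '/tmp';" > /tmp/mkdir_tmp.sql
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -C 'CDB$ROOT PDB$SEED' -b /tmp/mkdir_tmp /tmp/mkdir_tmp.sql
```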

Now that we have that, I create a small file with a bit of PL/SQL to run the datapump export:

cat $HOME/demo.sql

declare
  l_dp_handle      NUMBER;
  v_job_state      varchar2(4000);
  v_day            varchar2(20);
begin
  select SYS_CONTEXT('USERENV', 'CON_NAME')||'-'||rtrim(to_char(sysdate,'DAY'))
    into v_day
    from dual;
  l_dp_handle := DBMS_DATAPUMP.open(operation   => 'EXPORT',
                                    job_mode    => 'FULL',
                                    remote_link => NULL,
                                    version     => 'LATEST');
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_day||'.dmp',
                         directory => 'TMP',
                         reusefile => 1);
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_day||'.log',
                         directory => 'TMP',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1);
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_dp_handle, v_job_state);
end;
/

I make the file name the container name followed by a dash and the day of the week.
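As an aside, the naming scheme can be mimicked outside the database; a hypothetical shell sketch (GNU date assumed) that reproduces the name the PL/SQL builds for a PDB called MARKER on the date used in this demo:

```shell
# Hypothetical sketch: reproduce the CONTAINER-DAY dump-file name outside the database.
# The rtrim() in the PL/SQL is needed because to_char(sysdate,'DAY') pads the day
# name with trailing spaces to a fixed width; %A in GNU date has no such padding.
con_name=MARKER
day=$(date -d 2015-01-28 +%A | tr '[:lower:]' '[:upper:]')
echo "${con_name}-${day}.dmp"    # MARKER-WEDNESDAY.dmp
```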

So now we have something that will create an export when run against a PDB - but how do we make it run against all PDBs automatically?

This is where we borrow catcon.pl - the utility Oracle itself uses to run scripts in every container of a CDB.

You can read up on all the options and what it does elsewhere - I'll just show the end command I came up with:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -C 'CDB$ROOT PDB$SEED' -b /tmp/demo /home/oracle/demo.sql

I'm specifically excluding the seed and the root as I don't care about them - that's what the -C option does.

When run, this gives the following output:

catcon: ALL catcon-related output will be written to /tmp/demo_catcon_8698.lst
catcon: See /tmp/demo*.log files for output generated by scripts
catcon: See /tmp/demo_*.lst files for spool files, if any completed successfully

There is a load of output from catcon under /tmp, but all I want to look at is the dump files and log files (my PDBs are called MARKER and REMOTEDB):

ls -l /tmp

-rw-r--r-- 1 oracle oinstall     9098 Jan 28 20:34 MARKER-WEDNESDAY.log
-rw-r----- 1 oracle oinstall  2080768 Jan 28 20:34 MARKER-WEDNESDAY.dmp
-rw-r--r-- 1 oracle oinstall     8954 Jan 28 20:34 REMOTEDB-WEDNESDAY.log
-rw-r----- 1 oracle oinstall  2068480 Jan 28 20:34 REMOTEDB-WEDNESDAY.dmp

So that looks good - and checking the last few lines of a log file I see this:

. . exported "C##LINK"."DEMO"                            5.054 KB       1 rows
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Wed Jan 28 20:34:50 2015 elapsed 0 00:02:33

There you go - maybe not something I'd implement for real, but it's a useful technique nonetheless.

