Datapump, partitioning and parallel

One of our systems is reasonably large (~2TB) and has 4 tables with over a billion rows each (and by billion I mean 1,000 million - there still seems to be confusion, at least on my part, about what counts as a billion and what as a trillion, and it seems to vary from country to country). Anyway, that's not really relevant to what I want to say - let's just say the tables are quite big.

The tables are all related, and the larger tables are children of some much smaller tables (no prizes for guessing that one). There are performance issues with this system (again, no prizes for that one :-)) and I've been looking for a more efficient way to structure the tables.

In most cases the number of distinct values for the foreign key column (which is generally what we are joining and filtering on in multi-table joins) is relatively small (approx 40,000), and there are good referential constraints between everything.

This led me down the path of trying to interval partition the tables on this FK column - that way, for queries (which almost always include a single value for the FK column) we can just do a full table scan of the single interval partition. Currently the complexity of the queries (and the multiple joins) is sending the optimizer down the wrong path a lot of the time.
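As a sketch (using the table and column names that appear later in the post), this is the shape of query the idea is meant to help - with the table interval partitioned on the FK column, a single-value predicate should let the optimizer prune to one partition and full-scan just that:

    -- Hypothetical query shape; the literal FK value is illustrative.
    select *
    from   tp_part.model_scenario_data
    where  model_scenario_id = 401414;  -- one FK value => one interval partition
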

Currently we're on 11.2, but we plan to upgrade soon to 12c, where we can also combine interval with reference partitioning so that the optimizer can hopefully pull all the related records back very quickly.

So it sounds good in theory - and I went about trying to do some tests. I found out some useful stuff about Data Pump, and discovered some of its restrictions as well as some of the limitations of partitioning itself.

So here we go.

To partition the table I decided to create a new, empty partitioned table and do an export/import into it from the original table. I could have used a loopback database link to do this in one step, but decided to keep it simple and just export to a file and then load that in. The basic new table script is like this - I've skipped most of the columns for brevity. As you can see, it's interval partitioned on the MODEL_SCENARIO_ID column:

   (    "ID" NUMBER(19,0) NOT NULL ENABLE,
        ...
   )
   partition by range (model_scenario_id)
   interval (1)
   (partition p1 values less than (2));

I then exported the original table. I took the opportunity here to try out some stuff with Data Pump and parallel, which gave some interesting results.

The first thing I did was run the export with parallel=4 - here is what the activity looked like in Cloud Control.
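The command was along these lines (directory, dumpfile and credential details here are illustrative, not the exact ones I used - note the %U substitution so parallel workers can write to separate files):

    expdp system/... tables=TP_PART.MODEL_SCENARIO_DATA \
      directory=DATA_PUMP_DIR dumpfile=msd%U.dmp logfile=msd_exp.log \
      parallel=4
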

You can see that only 3 sessions show as active. Data Pump had created two workers; one sat idle while the other exported the table at parallel degree 3. From the screenshot below you can see those 3 sessions all running a 'create table' command (this seems a little odd - but Data Pump is basically doing an insert into an external table, which counts as a create table).

 Here we can see the waits for one of the individual sessions

 And here is the sql monitoring report as it was running (which is a feature i think is just great)

Once it had finished, the summary report shows that the elapsed/clock time is 39.5 minutes but the 'database' time is 2 hours (basically 3 sessions of 40 minutes each).

Quite impressive, you might think - 1.4 billion rows in 40 minutes, a lot faster than it would have been with a single thread...?

Well this is where it's interesting - what if we repeat that whole process but with no parallel at all?

What's interesting is what it looks like when it completes... (notice the tick at the top showing it is complete - the screenshot was taken at the exact point it finished, so it looks like the graph might continue, but it doesn't)

So the overall time is 28.8 minutes! So a single thread is faster than 3 parallel threads?

The difference is the method Data Pump uses to extract the data (see more info here on that access method). With a single thread, direct path is used; as soon as we specify parallel, the external table method is used.

It seems, at least in this basic case, that direct path is 3 or 4 times faster than external table access.
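If you want to pin the behaviour down rather than leave the choice to Data Pump, there is an ACCESS_METHOD parameter (the import log further down this post uses it with external_table). Something like this - names illustrative again:

    expdp system/... tables=TP_PART.MODEL_SCENARIO_DATA \
      directory=DATA_PUMP_DIR dumpfile=msd.dmp logfile=msd_exp.log \
      access_method=direct_path
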

Something to be well aware of if you are using datapump for large objects!

So anyway, after my brief testing I carried on with my plan to load the data into the pre-created partitioned table.

So I just import, tell the process to ignore the fact that the table already exists, and it will create all the partitions I need?

Well, I thought it should - but here's what happens when you try that:

Import: Release - Production on Fri Jan 9 16:35:27 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning option
Master table "ORACLE"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ORACLE"."SYS_IMPORT_FULL_01":  /******** dumpfile=MSDnoparallel.dmp directory=uresh table_exists_action=truncate logfile=imp.log access_method=external_table
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "TP_PART"."MODEL_SCENARIO_DATA" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "TP_PART"."MODEL_SCENARIO_DATA" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-14400: inserted partition key does not map to any partition
Job "ORACLE"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Jan 9 16:35:39 2015 elapsed 0 00:00:12

It throws an error - and this is where it got a little interesting: why wasn't it working?

A quick MOS search revealed there are actually bugs in this area and it just plain doesn't work - at least not in 11.2.

Hmm - that's annoying.

So what I then did was import the table back in under a different name (with REMAP_TABLE), and then tried an insert as select * from that staging copy into the partitioned table.
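The workaround looks something like this (the staging table name MSD_STAGE is mine for illustration; the insert hint is optional but usual for a bulk load like this):

    -- Step 1: import the data under a different table name, e.g.
    --   impdp ... dumpfile=MSDnoparallel.dmp \
    --     remap_table=MODEL_SCENARIO_DATA:MSD_STAGE
    -- Step 2: load the partitioned table with a plain insert/select
    insert /*+ append */ into tp_part.model_scenario_data
    select * from tp_part.msd_stage;
    commit;
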

And then this happened

ORA-14300: partitioning key maps to a partition outside maximum permitted

Strange - there were only 40,000 distinct values. But hold on - what is the range of those values?


MIN(MODEL_SCENARIO_ID) MAX(MODEL_SCENARIO_ID)
---------------------- ----------------------
                401414               10822814

Ah - there lies the problem. Even if I'd started at the minimum value, max and min are more than 10 million apart (10,822,814 - 401,414). That means interval partitioning has to create (or at least reserve) that many partitions.
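A quick sanity check worth running before choosing interval partitioning - how many intervals the key range would actually imply (table/column names as above):

    -- With interval(1), partitions needed ~= max - min + 1
    select min(model_scenario_id)                              as min_id,
           max(model_scenario_id)                              as max_id,
           max(model_scenario_id) - min(model_scenario_id) + 1 as intervals_needed
    from   tp_part.model_scenario_data;
    -- here: 10822814 - 401414 + 1 = 10,421,401 intervals - way over the limit
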

A quick Google reveals (and even in 12c it's the same) that the maximum number of partitions is roughly 1 million (1024K - 1 = 1,048,575, to be exact)...

Aaaargh - so the whole exercise is pointless really. The partitioning approach I wanted to try cannot be done - back to the drawing board...

In order to salvage a little something extra, I thought I would do a further test of the partitioning and Data Pump/parallel behaviour, as it might be useful longer term.

I created a 4-way hash partitioned version of the table and then did a further test.
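The hash-partitioned version was along these lines (again trimmed to a couple of columns for brevity - the full column list is assumed):

    create table tp_part.model_scenario_data_hash
    ( id                number(19,0) not null,
      model_scenario_id number(19,0) not null
      -- ... remaining columns ...
    )
    partition by hash (model_scenario_id)
    partitions 4;
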

Running the export with parallel 4 then resulted in the following behaviour

Each partition was unloaded with direct path at the same time (so it parallelized across the partitions, but not within them) - this gave the fastest runtime so far, about 25 minutes. I'm sure it would have been faster still if the files had been spread over different devices, etc.

Just out of interest I then ran the export with parallel=8 to see what it would do (thinking it would just use parallel 2 for each partition). What it actually did was export 2 of the partitions with direct path and the other 2 with external table at parallel 3 - so I'm not sure what the algorithm is actually doing... :-)

Anyway, here is what that looked like while it was running.

The serial direct path exports finished much faster than the parallel ones....

So, some interesting conclusions:

Partitioning is limited to (roughly) 1 million partitions
Importing into a pre-created partitioned table just plain failed (at least for this example)

Most interesting, though, is the behaviour of parallel, direct path and external tables - in some cases not using parallel may be the better idea, even though with Data Pump parallel is almost always better.

This is probably an edge case though, arising when you export a very limited number of objects - as soon as you export multiple tables/schemas, parallel is almost never used within single objects; it's used to do multiple objects at the same time.


  1. Hi Rich,
    Btw, what happened to the performance of the tables? Anything interesting there?

  2. Hi Anil,
    I never got that far. I need to see if any other partition scheme could help. We're also considering In-Memory, but it depends on how much it's likely to cost...


  3. I was also curious about performance and partitioning. We have somewhat similar situations. What looks best suited in this scenario is to RANGE partition the MODEL_SCENARIO_DATA table on a DATE column (I am sure there must be some date column like created_datetime etc.) followed by REFERENCE partitioning on (model_scenario_id), which is supported in 11.2. You may have to add the date column (partition key) as one of the predicates in users' queries though. Hope this helps.