Partitioning to the rescue



One of our core third-party applications has some functionality where the results of certain calculations are extracted once per day into customer-defined tables for use by downstream systems and interfaces. These results (and there are various different calculations outputting to various different tables) are retained, based on business requirements, for periods of a week, a month, or sometimes up to 6 months or a year. Once the retention period is exceeded, the oldest day is deleted as part of the extraction process.

(That sounded really dull when I read it back... we basically have a table that contains a rolling date range of data.)

This all works fine, but over time and with increasing data volumes the performance degrades. An index on the extraction date helps initially (all interfaces always query based on this extract date, by the way), but the constant cycle of inserting an entire day of data and deleting an entire day of data (to maintain the retention history) ends up pretty much destroying the index. This is one of the few cases where rebuilding indexes really is required.

This index rebuild helps but then again over time performance tails off.
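For completeness, that rebuild is just the standard online one - the index name here is made up for illustration:

alter index TARGETSCHEMA.uepf_extract_date_idx rebuild online;  -- hypothetical index name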

There has to be a better way to deal with this - and indeed there is - let's attack this with partitioning.

But first up let's show the current system in action - here is the delete statement removing the earliest, out-of-range date.
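In essence it's a statement of this shape (the date and use case literals here are illustrative):

delete from TARGETSCHEMA.user_eet_position_feed
 where business_date = to_date('11/12/2014','dd/mm/yyyy')  -- the oldest retained day (illustrative date)
   and use_case = 'Pre';                                   -- illustrative use case value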


And that same delete in SQL monitoring (I love this feature, by the way)


Anyway, you get the idea - the cardinality estimate is wrong, as you can see from the SQL monitoring screen. Maybe a histogram would help correct the numbers, but the plan would be the same - and you can see it's taking ages.

The story is a similar one for the insert - it's an incredibly simple plan (as you would expect); it just takes forever because of the index maintenance.

OK, so let's sort this - interval partitioning to the rescue.

In this case we want a partition per day, as that fits the pattern of creation and deletion. We also want to subpartition by list on an additional column, as the batch always chooses from a set series of values and downstream systems query on this - an ideal candidate for partitioning.

Oh - and we also want to implement this with no outage, as the table is in use all the time, just to make it a little bit more tricky...

So we have to use DBMS_REDEFINITION for this - but in this very simple case that's actually not difficult at all.
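Before starting, it's worth a quick check that the table qualifies for online redefinition at all - with no primary key we have to validate against the rowid option. Something like this (it raises an error if the table can't be redefined):

EXEC DBMS_REDEFINITION.can_redef_table('TARGETSCHEMA', 'USER_EET_POSITION_FEED', options_flag=>DBMS_REDEFINITION.CONS_USE_ROWID);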

The first thing to do is work out the syntax for the CTAS statement to create an empty composite interval/list partitioned table - this is easier said than done - in fact the most useful thing about this post may be the syntax for doing this :-)

So here it is

create table TARGETSCHEMA.uepf_redef
PARTITION BY RANGE (business_date)
INTERVAL (NUMTODSINTERVAL (1, 'DAY'))
SUBPARTITION BY LIST (use_case)
SUBPARTITION TEMPLATE(
                SUBPARTITION sp1 VALUES ('FP_BS_SUP_L_PE_0'),
                SUBPARTITION sp2 VALUES ('Pre'),
                SUBPARTITION sp3 VALUES ('Post'),
                SUBPARTITION sp4 VALUES ('FP_BS_ALL_L_PW_0'),
                SUBPARTITION sp5 VALUES ('FP_BS_SAL_P_PE_0'),
                SUBPARTITION sp6 VALUES ('FP_BS_SUP_P_PE_0'),
                SUBPARTITION sp7 VALUES ('FP_BS_CAP_P_PE_0'),
                SUBPARTITION sp8 VALUES ('FP_BS_SAL_P_PE_1'),
                SUBPARTITION sp9 VALUES ('Scd'),
                SUBPARTITION sp10 VALUES ('FP_BS_SAL_L_PE_0'),
                SUBPARTITION sp11 VALUES ('FP_BS_SAL_P_PE_2'),
                SUBPARTITION sp12 VALUES ('FP_BS_SUP_P_PE_1'),
                SUBPARTITION sp17 VALUES (DEFAULT)
    )
(
  PARTITION p1 VALUES LESS THAN (to_date('11/06/2015','dd/mm/yyyy'))
)
as select * from TARGETSCHEMA.user_eet_position_feed where 1=0;

So at the top level the table is partitioned by range on business_date, and we will end up with a separate partition per day; each of these partitions is then subpartitioned by list into 17 different values (including the default to capture anything not listed).

So that's 17 subpartitions being created every day.

Note here - the first partition date chosen is important. Any values less than this all end up in the first partition - the interval mechanism does not create individual partitions prior to the first partition boundary. This caught me out on my first attempt.
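Once data starts arriving you can sanity check the daily partitions (and the subpartitions spawned from the template) in the data dictionary - something like:

select partition_name, subpartition_count, high_value
  from user_tab_partitions
 where table_name = 'UEPF_REDEF'
 order by partition_position;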

So now we have an empty table with the correct definition - next we need to sync the data over and swap this table with the original one. Some quick redef steps then:

Do initial sync of data - we have no PK so rowid has to be used

EXEC DBMS_REDEFINITION.start_redef_table('TARGETSCHEMA', 'USER_EET_POSITION_FEED', 'UEPF_REDEF',options_flag=>DBMS_REDEFINITION.CONS_USE_ROWID);

Run any grants that exist on the original table against this new one - i.e. grant xxx on UEPF_REDEF to whoever;

Gather stats on this new table

EXEC DBMS_STATS.GATHER_table_STATS (OWNNAME => 'TARGETSCHEMA', TABNAME => 'UEPF_REDEF',GRANULARITY=>'ALL',estimate_percent=>null,degree=>8);

Now everything is ready (and in this simple case I have no FK/PK and no indexes at all - I'll come to that in a minute).
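If a lot of change has built up since the initial sync, you can optionally apply it ahead of the switch so the final step holds its lock for as short a time as possible:

EXEC DBMS_REDEFINITION.sync_interim_table('TARGETSCHEMA', 'USER_EET_POSITION_FEED', 'UEPF_REDEF');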

So we switch things over

EXEC DBMS_REDEFINITION.finish_redef_table('TARGETSCHEMA', 'USER_EET_POSITION_FEED', 'UEPF_REDEF');

Now the dictionary has been changed and the table names swapped - so now our active application table is composite partitioned with no outage.

So let's now test what the SQL plan looks like.

For the same delete statement (well, with modified dates/use case - but the same data volume and data distribution) we now get this


And in SQL monitoring we now see this


And we can see the whole thing took only a couple of minutes to delete - where the other one was still merrily going after 10 minutes or more. Note that the stats are also very good, as all of the data in question is now in a single subpartition.

And back to the index point - it's simply not required. Partition pruning is performing the same function in a much better way, and the index just gets in the way. Due to the way the data is inserted and queried, an index is never going to be useful here.
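You can see the pruning for yourself in the plan - with both partitioning keys in the predicate, the PSTART/PSTOP columns pin the access to a single subpartition (the literals here are just for illustration):

explain plan for
select *
  from TARGETSCHEMA.user_eet_position_feed
 where business_date = to_date('15/06/2015','dd/mm/yyyy')
   and use_case = 'Pre';

select * from table(dbms_xplan.display);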

Beware simple tests though - a count(*) on this new table will perform better with an index (using the index as a skinny version of the table). However, in normal application access this statement would never run, so the test is not valid and may mislead you.

That's a great result - with no application code changes we improved the system substantially. The insert is also improved, as there is no longer a huge index to maintain - so we help both phases of the batch job.

The next stage is to now replace the delete with a drop partition statement

We tested this already and the statements below work very well

alter table xx set interval ();                            -- temporarily disable interval partitioning
alter table xx drop partition P1;                          -- where P1 is the oldest partition
alter table xx set interval (NUMTODSINTERVAL (1, 'DAY'));  -- re-enable the daily interval

The first statement is a little quirk: you can't drop the first partition of an interval-partitioned table directly (at least not in 11.2), so you have to disable interval partitioning and then re-enable it after the drop.

This reduces the delete step to almost no time at all - however, to implement this we need an application code change, which will of course take longer.
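One wrinkle for that code change: the daily partitions created by the interval mechanism get system-generated names (SYS_Pnnnn), so the job first has to look up the name of the oldest partition before it can drop it - along these lines:

select partition_name
  from (select partition_name
          from user_tab_partitions
         where table_name = 'USER_EET_POSITION_FEED'
         order by partition_position)
 where rownum = 1;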

A nice demo of some Oracle features which solve an application issue quite elegantly, I think you'll agree.

