If you want to find out how long individual objects are taking to export, in most cases you can use METRICS=Y to add summary information to the export logfile - but it's not very granular and doesn't give stats for tables. There is a way to do this for tables, however, as long as you retain the master table after the export has finished.
If you add KEEP_MASTER=Y to your datapump command then the master table, where datapump tracks all of its metadata, is retained after the job completes.
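For example, a schema-mode export with both parameters might look something like this (SCOTT, DPUMP_DIR and the file names are just placeholders - substitute your own):

expdp system schemas=SCOTT directory=DPUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log metrics=y keep_master=y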
You can then query this table to get the elapsed runtimes - the information isn't stored in the simplest format to query, but a simple analytic function gives you what you want (in this example the master table is called OPS$ORACLE.SYS_EXPORT_SCHEMA_03).
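If you're not sure what the master table is called, it takes the same name as the datapump job, and jobs whose master table has been retained should still be visible in dba_datapump_jobs, so something like this will list them:

select owner_name, job_name, state
  from dba_datapump_jobs;

Once you know the name, the query below pulls the per-table figures out of it.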
select tab_owner, tab_name, completed_rows, tab_size, elapsed_time
  from (select base_object_schema,
               object_name,
               -- the stats row for each table has a null object_name; lag()
               -- copies the name/owner down from the preceding metadata row
               (lag(object_name, 1) over (order by process_order)) as tab_name,
               (lag(base_object_schema, 1) over (order by process_order)) as tab_owner,
               completed_rows,
               (dump_length / (1024 * 1024 * 1024)) || 'GB' as tab_size,
               elapsed_time
          from ops$oracle.sys_export_schema_03
         where object_type_path = 'SCHEMA_EXPORT/TABLE/TABLE_DATA'
           and process_order >= 1)
 where object_name is null;
This gives one row per table showing the owner, table name, number of rows exported, the size in GB and the elapsed time.
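One thing to remember: KEEP_MASTER leaves the master table behind as an ordinary table in the exporting user's schema, so once you've extracted the timings it's worth cleaning it up - something like:

drop table ops$oracle.sys_export_schema_03 purge;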