
Transporting tablespaces between databases: a procedure and example - Oracle DBA

The following steps summarize the process of TRANSPORTING A TABLESPACE. Details for each step are provided in the subsequent example.

1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. You can skip this step if you are transporting the tablespace set to the same platform.
2. Pick a self-contained set of tablespaces.
3. Generate a transportable tablespace set.

A transportable tablespace set (or transportable set) consists of the datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for those tablespaces. You use Data Pump (expdp) or the original export utility (exp) to perform the export.

 


 

TRANSPORTABLE TABLESPACE

Note:

If any of the tablespaces contain XMLTypes, you must use exp.

If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.

Note:

This method of generating a transportable tablespace set requires that you temporarily make the tablespaces read-only. If this is undesirable, you can use the alternative method known as transportable tablespace from backup.

4. Transport the tablespace set.

Copy the datafiles and the export file to a place that is accessible to the target database.

If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.

5. Import the tablespace set.

Invoke the Data Pump import utility (impdp) or imp to import the metadata for the set of tablespaces into the target database.

Note:

If any of the tablespaces contain XMLTypes, you must use imp.

Example:

The steps for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed that the following datafiles and tablespaces exist:

Tablespace            Datafile
sales_1               /u01/oracle/oradata/salesdb/sales_101.dbf
sales_2               /u01/oracle/oradata/salesdb/sales_201.dbf

Step 1: determine if platforms are supported and determine endianness:

This step is only necessary if you are transporting the tablespace set to a platform different from the source platform.

If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise, you must do a conversion of the tablespace set either at the source or target database.

If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport.


SQL> SELECT d.platform_name, endian_format
  2  FROM v$transportable_platform tp, v$database d
  3  WHERE tp.platform_name = d.platform_name;

The following is the query result from the source platform:

PLATFORM_NAME                    ENDIAN_FORMAT
-------------------------------- --------------
Solaris[tm] OE (32-bit)          Big

The following is the result from the target platform:

PLATFORM_NAME                    ENDIAN_FORMAT
-------------------------------- --------------
Microsoft Windows NT             Little

You can see that the endian formats are different, so a conversion is necessary before transporting the tablespace set.


Step 2: pick a self-contained set of tablespaces:

There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context, “self-contained” means that there are no references from inside the set of tablespaces pointing outside of the tablespaces. Some examples of self-contained tablespace violations are:

  • An index inside the set of tablespaces is for a table outside of the set of tablespaces.

Note:

It is not a violation if a corresponding index for a table is outside of the set of tablespaces.

  • A partitioned table is partially contained in the set of tablespaces.

The tablespace set you want to copy must contain either all partitions of a partitioned table or none of the partitions of a partitioned table. If you want to transport a subset of a partitioned table, you must exchange the partitions into tables.

  • A referential integrity constraint points to a table across a set boundary.

When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether or not a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers.

  • A table inside the set of tablespaces contains a LOB column that points to LOBs outside the set of tablespaces.
  • An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces.

To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially granted to SYS) to execute this procedure.

When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self-containment. You can optionally specify whether constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE.

The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace point-in-time recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set.

For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i, because the index and data would be inconsistent after the transport. A full containment check ensures that there are no dependencies going outside or coming into the transportable set.
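If you do need that stricter check, the same procedure can take an additional Boolean for full containment. A minimal sketch, assuming the three-argument form of transport_set_check (tablespace list, include constraints, full check):

SQL> EXECUTE dbms_tts.transport_set_check('sales_1,sales_2', TRUE, TRUE);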

Note:

The default for transportable tablespaces is to check for self-containment rather than full containment.

The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE):

SQL> EXECUTE dbms_tts.transport_set_check('sales_1,sales_2', TRUE);

After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set.

SQL> SELECT * FROM transport_set_violations;

VIOLATIONS
---------------------------------------------------------------------------
Constraint dept_fk between table jim.emp in tablespace sales_1 and table
jim.dept in tablespace other
Partitioned table jim.sales is partially contained in the transportable set

These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.
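The partitioned-table violation, by contrast, is typically resolved by exchanging the offending partitions into standalone tables before the transport (see Step 2). A minimal sketch, using a hypothetical partition name sales_q1 and an exchange table created inside the transportable set:

-- Hypothetical partition and exchange-table names; the exchange table must
-- match the partition's column structure.
SQL> CREATE TABLE jim.sales_q1 TABLESPACE sales_1
  2  AS SELECT * FROM jim.sales WHERE 1 = 0;

SQL> ALTER TABLE jim.sales EXCHANGE PARTITION sales_q1 WITH TABLE jim.sales_q1;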

Step 3: generate a transportable tablespace set:

Any privileged user can perform this step. However, you must have been assigned the EXP_FULL_DATABASE role to perform a transportable tablespace export operation.


Note:

This method of generating a transportable tablespace set requires that you temporarily make the tablespaces read-only. If this is undesirable, you can use the alternative method known as transportable tablespace from backup.
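If you do take the backup-based route, RMAN provides a TRANSPORT TABLESPACE command for this purpose. A minimal sketch, with hypothetical destination directories that you would adapt to your environment:

RMAN> TRANSPORT TABLESPACE sales_1, sales_2
2> TABLESPACE DESTINATION '/disk1/transport_dest'
3> AUXILIARY DESTINATION '/disk1/aux_dest';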

After ensuring that you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions:

1. Make all tablespaces in the set you are copying read-only.

SQL> ALTER TABLESPACE sales_1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE sales_2 READ ONLY;
Tablespace altered.

2. Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.


Note:

If any of the tablespaces have XMLTypes, you must use exp instead of Data Pump. Ensure that the CONSTRAINTS and TRIGGERS parameters are set to y (the default).
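For that XMLType case, a classic exp invocation over the same tablespace set might look like the following sketch (the connect string shown assumes a SYSDBA connection, and its quoting varies by shell):

$ exp \'sys/password AS SYSDBA\' TRANSPORT_TABLESPACE=y
    TABLESPACES=sales_1,sales_2 CONSTRAINTS=y TRIGGERS=y FILE=expdat.dmp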

SQL> host

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
  TRANSPORT_TABLESPACES=sales_1,sales_2

You must always specify transport_tablespaces, which determines the mode of the export operation. In this example:

  • The dumpfile parameter specifies the name of the structural information export file to be created, expdat.dmp.
  • The directory parameter specifies the default directory object that points to the operating system or Automatic Storage Management location of the dump file. You must create the directory object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to PUBLIC (see the sketch after this list).
  • Triggers and indexes are included in the export operation by default.
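For reference, creating and granting such a directory object could look like the following sketch (the operating system path is an assumption; adjust it to your environment):

-- Hypothetical OS path; the CREATE ANY DIRECTORY privilege is required.
SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO PUBLIC;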


  

If you want to perform a transport tablespace operation with a strict containment check, use the transport_full_check parameter, as shown in the following example:

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
  TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=y

In this example, the Data Pump export utility verifies that there are no dependencies between the objects inside the transportable set and objects outside the transportable set. If the tablespace set being transported is not self-contained, the export fails and indicates that the transportable set is not self-contained. You must then return to Step 2 to resolve all violations.

Notes:

The Data Pump utility is used to export only data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation goes relatively quickly even for large tablespace sets.
3. When finished, exit back to SQL*Plus:

$ exit

If sales_1 and sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces:

4. From SQL*Plus, return to the host system:

SQL> host

5. The RMAN CONVERT command is used to do the conversion. Start RMAN and connect to the target database:

$ rman target /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation. All rights reserved.
connected to target database: salesdb (DBID=3295731590)

6. Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE sales_1,sales_2
2> TO PLATFORM 'Microsoft Windows NT'
3> FORMAT '/temp/%U';

Starting backup at 08-APR-03
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/salesdb/sales_101.dbf
converted datafile=/temp/data_d-10_i-3295731590_ts-admin_tbs_fno-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_201.dbf
converted datafile=/temp/data_d-10_i-3295731590_ts-example_fno-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-03

7. Exit Recovery Manager:

RMAN> exit
Recovery Manager complete.


Step 4: transport the tablespace set:

Transport both the datafiles and the export file of the tablespaces to a place that is accessible to the target database.

If both the source and destination are file systems, you can use:

  • Any facility for copying flat files (for example, an operating system copy utility or FTP)
  • The DBMS_FILE_TRANSFER package
  • RMAN
  • Any facility for publishing on CDs

If either the source or destination is an Automatic Storage Management (ASM) disk group, you can use:

  • FTP to or from the /sys/asm virtual folder in the XML DB repository
  • The DBMS_FILE_TRANSFER package (see the sketch after this list)
  • RMAN
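As one illustration of the DBMS_FILE_TRANSFER option listed above, a local copy of a datafile between two directory objects might look like the following sketch (src_dir and dst_dir are hypothetical directory objects created beforehand):

SQL> BEGIN
  2    DBMS_FILE_TRANSFER.COPY_FILE(
  3      source_directory_object      => 'SRC_DIR',
  4      source_file_name             => 'sales_101.dbf',
  5      destination_directory_object => 'DST_DIR',
  6      destination_file_name        => 'sales_101.dbf');
  7  END;
  8  /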

Caution:

Exercise caution when using the UNIX dd utility to copy raw-device files between databases. The dd utility can be used to copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file.

It is difficult to ascertain actual datafile size for a raw-device file because of hidden control information that is stored as part of the datafile. Thus, it is advisable when using the dd utility to specify copying the entire source raw-device file contents.
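As a sketch of that advice, with hypothetical device and target names, copy the entire raw-device contents rather than a block range:

# Hypothetical raw device and target path; bs only affects copy speed.
$ dd if=/dev/rdsk/c1t1d0s4 of=/stage/sales_101.dbf bs=1048576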

If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, you must do so now. This example assumes that you have completed the following steps before the transport:

1. Set the source tablespaces to be transported to read-only.
2. Use the export utility to create an export file (in our example, expdat.dmp).

Datafiles that are to be converted on the target platform can be moved to a temporary location on the target platform. However, all datafiles, whether already converted or not, must be moved to a designated location on the target database.

Now use RMAN to convert the transported datafiles to the endian format of the destination host and deposit the results in /hq/finance/dbs/tru, as shown in this hypothetical example:

RMAN> CONVERT DATAFILE
2> '/hq/finance/work/tru/tbs_31.f',
3> '/hq/finance/work/tru/tbs_32.f',
4> '/hq/finance/work/tru/tbs_41.f'
5> TO PLATFORM="Solaris[tm] OE (32-bit)"
6> FROM PLATFORM="HP Tru64 UNIX"
7> DB_FILE_NAME_CONVERT=
8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
9> PARALLELISM=5;

You identify the datafiles by filename, not by tablespace name. Until the tablespace metadata is imported, the local instance has no way of knowing the desired tablespace names. The source and destination platforms are optional. RMAN determines the source platform by examining the datafile, and the target platform defaults to the platform of the host running the conversion.

 


 

Step 5: import the tablespace set:

Note:

If you are transporting a tablespace with a block size different from the standard block size of the database receiving the tablespace set, then you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database's parameter file.

For example, if you are transporting a tablespace with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it is not already included in the parameter file, this parameter can be set using the ALTER SYSTEM SET statement.
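A minimal sketch of that setting (the cache size shown is an arbitrary placeholder; size it for your workload):

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE = 32M;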

Any privileged user can perform this step. To import a tablespace set, perform the following tasks:

  1. Import the tablespace metadata using the Data Pump import utility, impdp:

Note:

If any of the tablespaces contain XMLTypes, you must use imp instead of Data Pump.

$ impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
  TRANSPORT_DATAFILES=
  /salesdb/sales_101.dbf,
  /salesdb/sales_201.dbf
  REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)

In this example, we specify the following:

  • The dumpfile parameter specifies the exported file containing the metadata for the tablespaces to be imported.
  • The directory parameter specifies the directory object that identifies the location of the dump file.
  • The transport_datafiles parameter identifies all of the datafiles containing the tablespaces to be imported.
  • The remap_schema parameter changes the ownership of database objects. If you do not specify remap_schema, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by dcranney in the source database will be owned by smith in the target database after the tablespace set is imported. Similarly, objects owned by jfee in the source database will be owned by williams in the target database. In this case, the target database is not required to have users dcranney and jfee, but must have users smith and williams.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no errors have occurred.

When dealing with a large number of datafiles, specifying the list of datafile names on the command line can be laborious, and it can even exceed the command-line length limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:

$ impdp system/password PARFILE='par.f'

where the parameter file, par.f, contains the following:

DIRECTORY=dpump_dir
DUMPFILE=expdat.dmp
TRANSPORT_DATAFILES="'/db/sales_jan','/db/sales_feb'"
REMAP_SCHEMA=dcranney:smith
REMAP_SCHEMA=jfee:williams

2. If required, put the tablespaces into read/write mode as follows:

SQL> ALTER TABLESPACE sales_1 READ WRITE;
SQL> ALTER TABLESPACE sales_2 READ WRITE;
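As a quick sanity check after the import, you can confirm the result from the data dictionary; in most releases the PLUGGED_IN column of DBA_TABLESPACES shows YES for transported tablespaces:

SQL> SELECT tablespace_name, status, plugged_in
  2  FROM dba_tablespaces
  3  WHERE tablespace_name IN ('SALES_1', 'SALES_2');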
