Specific AMP journal dump/restore: Journal dump/restore operations that involve specific AMPs impact a subset of the vprocs. Note that usage statistics may show activity on AMPs other than those indicated in the utility statement. This occurs when the Permanent Journal (PJ) contains single afterimages; the system stores single afterimages on an AMP in the cluster designated as backup. Since all single afterimages for a given AMP are stored on a single backup AMP, the backup AMPs show measurable usage.
Cluster-level dump/restore: If the dump/restore operation takes advantage of the ability to perform ARC at the cluster level, the resources consumed during that operation are almost entirely within that cluster.
All-AMP dump performed with fewer sessions than AMPs: During dump operations, each HUTPARSE session is associated with an AMP until the AMP dumps an entire subtable's worth of data. If there are fewer sessions than AMPs, resource consumption tends to move from AMP to AMP until all AMPs have dumped their portion of the table. The process starts over again on the next subtable to be dumped.
Restore from a dump performed with fewer sessions than AMPs: If the system performs a dump operation with fewer sessions than AMPs, resource usage is skewed, and the system restores data to the AMPs unevenly.
As an example, suppose the system uses only one HUTPARSE session in a dump operation. Given either a large table or a fast ResUsage sampling rate, you could watch resource usage move from AMP to AMP as the HUTPARSE session transfers data from each AMP to the host (or client), one AMP at a time.
During the restore operation, there is no direct tie between sessions and AMPs; any session transfers data to whichever AMP it is to be stored on.
Since the system wrote the data to the dump tape one AMP at a time, the system bombards each AMP with data until that AMP is restored, then starts loading data onto the next AMP. You could watch resource usage move from AMP to AMP as the HUTPARSE sessions transfer data from the host (or client) to the AMPs, in some AMP order.
Given an LSN and user name, you can accumulate all the work a job performs into a single total, which may supply the most useful level of information.
B - 20 Teradata RDBMS Database Administration
Appendix B: Import/Export Utilities
Monitoring an ARC Job
Monitoring HUT Locks
During an Archive/Recovery operation, the utility places locks on the entities affected by the operation. These locks remain active even across a Teradata RDBMS restart, and must be explicitly released with the RELEASE LOCK statement or with the RELEASE LOCK option available on the ARCHIVE, ROLLBACK, ROLLFORWARD, RESTORE, and BUILD statements.
When monitoring Host Utility (HUT) locks placed on a database or table, Teradata Manager reports the object that is locked, but not necessarily which session holds the lock that is blocking others (the blocker).
This occurs because HUT locks remain until they are explicitly removed, even though the session that placed the HUT lock may no longer be active; for example, that session may have logged off or been aborted.
In this case, Teradata Manager does not tell you who is causing the lock and returns the Blk_x_HostId, Blk_x_SessNo, and Blk_x_UserId fields as NULLs.
Use the ShowLocks utility to obtain the name of the user who placed the HUT lock.
For more information on ShowLocks, see Teradata RDBMS Utilities.
Releasing HUT Locks with RELEASE LOCK
The RELEASE LOCK command removes HUT locks from the identified databases or tables.
If the locks were placed at the database level, then they must be released at the database level.
To release HUT locks, log on to an ARC session and submit the RELEASE LOCK command.
You must have the ARCHIVE or RESTORE privilege on, or be the owner of, the database or table for which the locks are to be released.
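As a minimal sketch, an ARC script that releases HUT locks might look like the following; the logon string, user, password, and database name (Payroll) are illustrative placeholders, not values from this document:

```
LOGON tdpid/dbadmin,password;  /* log on to an ARC session       */
RELEASE LOCK (Payroll);        /* release HUT locks on database  */
LOGOFF;
```

Because locks placed at the database level must be released at the database level, the object named in RELEASE LOCK should match the level at which the original ARCHIVE or RESTORE placed the locks. See Teradata Archive/Recovery Utility Reference for the exact syntax and options.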
For more information on HUT locks and the RELEASE LOCK command, see Teradata Archive/Recovery Utility Reference.
Monitoring a FastExport Job
This section describes how the nature of the work done during the data export phase, from the Teradata RDBMS server to the client, determines where and when the work is charged to the different sessions, and consequently how Teradata Manager displays the resource usage data.
When running a FastExport job, a user is logged on to two Teradata SQL sessions and also, under the same user name, to some number of EXPORT sessions. The system logs on the EXPORT sessions not only under the same user name but also under the same LSN as the Teradata SQL sessions. An LSN is associated with each session when the session logs on, and it identifies the collection of sessions performing the same job. To see the total impact of a FastExport job, calculate the total impact of all sessions with the same LSN.
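As a sketch only: if per-session usage samples have been captured into a user-created table, the per-job total described above could be computed by grouping on LSN and user name. The table and column names below (SessionUsage, LSN, UserName, CPUTime, IOCount) are hypothetical illustrations, not Teradata Data Dictionary objects:

```sql
/* Hypothetical staging table of per-session usage samples;
   names are illustrative, not Teradata system views. */
SELECT UserName,
       LSN,
       SUM(CPUTime) AS TotalCPUTime,
       SUM(IOCount) AS TotalIOCount
FROM   SessionUsage
GROUP  BY UserName, LSN;
```

Each resulting row totals the work of one job, because the two Teradata SQL sessions and all EXPORT sessions of a FastExport job share the same LSN.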