Teradata RDBMS Database Administration. NCR, 2004. 616 p.

Teradata RDBMS Database Administration
Chapter 13: Database Analysis

Building Baseline Transaction Profiles

Metric      Description
I/O rate    Average number of I/O operations per transaction.
Path time   Time a resource spends per transaction or row, which you can calculate as resource utilization divided by throughput rate. For example, a CPU utilization of 70% means the CPU is busy 70% of 1 second, or 0.7 of a second, or 700 milliseconds. If the processing throughput rate is 10 transactions per AMP per second, calculate the path time by dividing 700 milliseconds by 10 transactions; the result is 70 milliseconds per transaction.

Guidelines for Profile Building

The following procedure offers some guidelines for building a baseline profile:

1. Run each key operation on a dedicated system (where no other jobs are running), or run a workload that is consistent and known at a non-peak time. For more information, see "Target Level Emulation (TLE)" on page 13-3 and "Scheduling Workloads with Teradata Dynamic Query Manager (TDQM)" on page 15-12.

2. Selectively log query processing times and behaviors, including the objects targeted, using DBQL. If the queries are processed efficiently and the indexes are used as expected, define the tables, indexes, and queries of a test workload and populate the test tables. (If any indexes produce unexpected results, use the Teradata Index Wizard to analyze them again.) For more information, see Chapter 14: "Tracking Processing Behavior with the Database Query Log (DBQL)".

3. Run your approved queries against your test tables. Then run ResUsage reports for several periods to establish typical processing results. For more information, see "ResUsage Reports" on page D-11.

4. When ResUsage reports prove that your test indexes provide:
   - Even row distribution across the disks
   - AMP task parallelism
   - Balanced resource consumption throughout the life of the query
   incorporate the indexes in your production tables. The associated workload and query library can then serve as your baseline profile. For more information, see "Query Optimization" in Teradata RDBMS SQL Reference, Volume 2.
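The even-row-distribution check above can be sketched with Teradata's hash functions (HASHROW, HASHBUCKET, HASHAMP), which map a primary index value to the AMP that stores the row. A minimal sketch, assuming a hypothetical test table `test_orders` with primary index column `order_id`:

```sql
-- Count rows per AMP for the primary index column.
-- A heavily skewed result (one AMP holding far more rows than the
-- others) suggests the candidate index distributes rows poorly.
SELECT HASHAMP(HASHBUCKET(HASHROW(order_id))) AS amp_no,
       COUNT(*)                               AS row_count
FROM   test_orders
GROUP  BY 1
ORDER  BY row_count DESC;
```

A roughly equal row_count across all AMPs indicates the even distribution that step's criteria call for.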


Chapter 14: Tracking Processing Behavior with the Database Query Log (DBQL)

The Database Query Log (DBQL) is an optional feature that you can employ to log query processing activity for later analysis. Query counts and response times can be charted, and SQL text and processing steps can be compared, to fine-tune your applications for optimum performance.

This chapter:

- Discusses the purpose and application of the DBQL feature
- Describes the components of the DBQL feature
- Explains how the log tables are created and populated
- Details the logging fields
- Explains how to:
  - Determine what data you want captured
  - Enable execution of the special Teradata SQL BEGIN/END QUERY LOGGING statements
  - Specify the statement options to capture the data you want
- Provides scenarios of possible logging rules and resulting output
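Collection is switched on and off with the BEGIN/END QUERY LOGGING statements mentioned above. A minimal sketch, assuming a hypothetical user `mktg_user` (exact options vary by Teradata release):

```sql
-- Start default logging for one user.
BEGIN QUERY LOGGING ON mktg_user;

-- ... queries submitted by mktg_user are now logged ...

-- Stop collection for that user.
END QUERY LOGGING ON mktg_user;
```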


The DBQL Components

Introducing DBQL

DBQL provides a series of predefined tables that can store, based on rules you specify, historical records of queries and their duration, performance, and target activity.

DBQL is flexible enough to log information on the variety of SQL requests that run on Teradata, from short transactions to longer-running analysis and mining queries. You begin and end collection for a user or group of users, and/or for one account or a list of accounts.

Collection options include:

- Default logging, which reports for each query at least the leading SQL characters, the time of receipt, the number of processing steps completed, the time the first step was dispatched, and the times the first and last response packets were returned to the host.
- Summary logging, which reports at each logging interval the count of all queries that completed processing within each specified time interval.
- Threshold logging, which can log a combination of default and summary data:
  - Default data for each query that ran beyond the threshold limit
  - Summary counts of all queries that ran within the threshold time
- Detail logging, which includes:
  - Default data
  - Any or all of the following:
    - Step-level activity, including parallel steps
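The collection options above correspond to WITH and LIMIT clauses on BEGIN QUERY LOGGING. A hedged sketch (the user names are hypothetical, and option syntax may vary by Teradata release):

```sql
-- Summary logging: count completed queries in the intervals
-- bounded by the three values given.
BEGIN QUERY LOGGING LIMIT SUMMARY = 5, 10, 30 ON ALL;

-- Threshold logging: default rows for queries that run beyond
-- the threshold, summary counts for queries within it.
BEGIN QUERY LOGGING LIMIT THRESHOLD = 5 ON etl_user;

-- Detail logging with step-level activity.
BEGIN QUERY LOGGING WITH STEPINFO ON dba_user;
```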