Another example is reported by Collantes et al. in Analytical Chemistry [69:1392-1397 (1997)]. They employed WPT for preprocessing of HPLC data and examined several classifiers as potential tools for pharmaceutical fingerprinting pattern recognition. The HPLC data for each L-tryptophan sample were preprocessed with the Haar wavelet function in the WPT treatment. The coefficients thus obtained in the wavelet domain were then sorted in descending order, and a small portion of the sorted coefficients was fed as inputs to ANN, KNN, and SIMCA classifiers to classify the samples according to manufacturer. They concluded that WPT preprocessing provides a fast and efficient way of encoding chromatographic data into a highly reduced set of numerical inputs for the classification models.
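The feature-extraction idea described above can be sketched in a few lines of numpy. The following is a minimal illustration, not the authors' implementation: a full Haar wavelet packet decomposition (computed by hand, since the Haar filters are simple sums and differences), followed by sorting the coefficients in descending order of magnitude and keeping only the largest few as classifier inputs. The function names are illustrative.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: pairwise sums and differences."""
    x = x.reshape(-1, 2)
    s = (x[:, 0] + x[:, 1]) / np.sqrt(2)   # approximation coefficients
    d = (x[:, 0] - x[:, 1]) / np.sqrt(2)   # detail coefficients
    return s, d

def haar_packet(x):
    """Full Haar wavelet packet decomposition (signal length must be a power of 2).

    Every node of the WPT tree is split down to single coefficients, so the
    concatenated leaves form an orthonormal transform of the input signal.
    """
    nodes = [np.asarray(x, dtype=float)]
    while nodes[0].size > 1:
        new_nodes = []
        for node in nodes:
            s, d = haar_step(node)
            new_nodes.extend([s, d])
        nodes = new_nodes
    return np.concatenate(nodes)

def wpt_features(x, k=8):
    """Sort the packet coefficients in descending order of magnitude and
    keep the k largest as a compact feature vector for a classifier."""
    coeffs = haar_packet(x)
    order = np.argsort(-np.abs(coeffs))
    return coeffs[order][:k]
```

The resulting k-dimensional feature vectors can then be passed to any standard classifier (nearest-neighbor, ANN, SIMCA); because the transform is orthonormal, the largest coefficients capture most of the signal energy.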
For classification and regression purposes, an adaptive wavelet algorithm (AWA) using higher-multiplicity wavelets has been proposed. Detailed descriptions of the method can be found in Chapter 8 and Chapter 18 of the book Wavelets in Chemistry. The wavelet neural network (WNN), discussed in the following section, can also be used for classification and pattern recognition.
5.5.3. Combined Method of Wavelet Transform and Chemical Factor Analysis
There are several ways to combine WT and chemical factor analysis (CFA) for different purposes. In Section 5.3.5, for example, WT is used for background removal in order to obtain the correct rankmap of a data matrix.
Principal-component analysis (PCA) is an important and basic method in CFA. However, because the principal components (PCs) are calculated as the eigenvectors of the variance–covariance matrix, computation of the
PCs is time-consuming. In order to speed up the calculation, a fast approximate PCA has been proposed that uses WT or WPT as a tool for data compression. As with the combined methods for classification and pattern recognition, the fast approximate PCA consists of two main steps: compress the data set, then perform PCA on the compressed representation. As an example, one algorithm for WPT compression and PCA of a set of spectral signals can be summarized as follows:
1. Calculate the ``variance spectrum'' by
\[
v_j = \frac{1}{m}\sum_{i=1}^{m}\left(x_{ij} - \bar{x}_j\right)^2 \qquad (j = 1, \ldots, n) \tag{5.83}
\]
where $m$ denotes the number of objects, $n$ the number of variables, $x_{ij}$ an element of the data matrix $\mathbf{X}_{m \times n}$, and $\bar{x}_j$ the mean of the $j$th column, calculated as
\[
\bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ij} \tag{5.84}
\]
2. Decompose the variance spectrum into the WPT coefficients represented by the WPT tree.
3. Search the WPT tree for the best basis.
4. Compress the coefficients according to a selected criterion.
5. Decompose all the spectra into WPT coefficients and compress them in the same way as in steps 2 and 4.
6. Perform PCA on the compressed coefficients.
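The six steps above can be sketched in numpy. This is a simplified illustration under stated assumptions, not the published algorithm: a plain Haar wavelet packet decomposition is used, and selecting the positions of the largest coefficients of the variance spectrum stands in for the best-basis search of step 3. All names are illustrative.

```python
import numpy as np

def haar_packet(x):
    """Full Haar wavelet packet decomposition (length must be a power of 2)."""
    nodes = [np.asarray(x, dtype=float)]
    while nodes[0].size > 1:
        new_nodes = []
        for node in nodes:
            node = node.reshape(-1, 2)
            s = (node[:, 0] + node[:, 1]) / np.sqrt(2)
            d = (node[:, 0] - node[:, 1]) / np.sqrt(2)
            new_nodes.extend([s, d])
        nodes = new_nodes
    return np.concatenate(nodes)

def fast_approx_pca(X, k=16):
    """Approximate PCA of X (m objects x n variables) computed on k
    wavelet-packet coefficients instead of all n original variables."""
    # Step 1: the "variance spectrum" -- variance of each column of X.
    var_spec = np.mean((X - X.mean(axis=0))**2, axis=0)
    # Steps 2-4: decompose it and keep the positions of the k largest
    # coefficients (a simple stand-in for the best-basis search).
    keep = np.argsort(-np.abs(haar_packet(var_spec)))[:k]
    # Step 5: decompose every spectrum and compress it at the same positions.
    C = np.array([haar_packet(row)[keep] for row in X])
    # Step 6: PCA on the compressed coefficients via SVD of the centered matrix.
    Cc = C - C.mean(axis=0)
    U, S, Vt = np.linalg.svd(Cc, full_matrices=False)
    scores = U * S          # PC scores of the m objects
    return scores, Vt       # Vt rows are loadings in the compressed domain
```

Because the SVD is performed on an m x k matrix rather than m x n, the cost of the eigendecomposition drops accordingly; the trade-off is that the loadings live in the compressed wavelet domain.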
Using WT or WPT as a denoising tool for CFA is another type of combination of WT and CFA. An example of the resolution of multicomponent chromatograms by window factor analysis (WFA) with WT preprocessing can be found in the Journal of Chemometrics [12:85-93 (1998)]. In this paper, the resolution and quantitative determination of a multicomponent chromatogram with strong noise were investigated. It was shown that, for a noisy chromatographic data matrix, both the resolved chromatographic profiles and the quantitative results of WFA can be greatly improved by WT preprocessing. Figure 5.63a,b shows the resolved chromatograms without and with the WT preprocessing, respectively. The dotted lines in the figure are the experimental chromatograms
Figure 5.63. Resolved chromatograms by WFA without (a) and with (b) the WT preprocessing. The dotted lines represent the experimental chromatograms of the standard samples for comparison [from J. Chemometr. 12:85-93 (1998)].
of the standard samples. From this figure we can see the effect of WT preprocessing.
5.5.4. Wavelet Neural Network
The wavelet neural network (WNN) is a combination of the wavelet transform and the artificial neural network (ANN): it uses wavelet functions instead of the traditional sigmoid function as the transfer function in each neuron. Two different models have been proposed for different applications: (1) one for general purposes such as quantitative prediction, classification, and pattern recognition; and (2) one for signal compression. Their architectures are illustrated in Figure 5.64a,b, respectively. In the first model, the architecture is almost exactly the same as that of the ANN except that the transfer function is replaced by a wavelet function ψ_{a,b}(t). In the second model, the input is a parameter t_i describing the position in the signal to be compressed, such as the wavenumber for an IR or NIR spectrum or the retention time for a chromatogram. In some of the literature this model is described as having one input neuron, but it would be more accurate to say that the model has no input layer, because the input parameter t_i is fed directly into the hidden-layer neurons without any processing. Furthermore, there is only one output neuron, because the output is the magnitude of the signal at position t_i.
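The signal-compression model can be sketched concretely. The following is a minimal numpy illustration under simplifying assumptions, not a full WNN training procedure: the Mexican-hat wavelet serves as the hidden-layer transfer function, the dilation (a) and translation (b) parameters are fixed on a grid rather than learned, and the single output weight vector then has a closed-form least-squares solution. All names are illustrative.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) wavelet used as the hidden-layer transfer function."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wnn_compress(t, f, scales, shifts):
    """Signal-compression WNN sketch.

    Input: the position t_i of each sample; output: the signal magnitude f(t_i).
    Each hidden neuron computes psi((t - b) / a) for one fixed (a, b) pair, and
    the output neuron forms a weighted sum of the hidden activations. With a and
    b fixed, the output weights are found by linear least squares.
    """
    # Hidden-layer activation matrix: one column per (a, b) neuron.
    A = np.array([[mexican_hat((ti - b) / a) for a in scales for b in shifts]
                  for ti in t])
    w, *_ = np.linalg.lstsq(A, f, rcond=None)
    return w, A @ w        # output weights and the reconstructed signal
```

The compression comes from storing only the weights (one per hidden neuron) instead of all signal samples; a peak sampled at 128 points can be represented by a few dozen weights, and the network evaluated at any position t reconstructs the signal magnitude there.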