InTech, 2012. — 300 p.
It is more than a century since Karl Pearson introduced the concept of Principal Component Analysis (PCA). Today it is a widely used tool in data analysis across many fields. PCA is a technique for dimensionality reduction: it transforms data from a high-dimensional space into a space of lower dimension. The advantages of this subspace are numerous. First, the reduced representation retains most of the useful information while suppressing noise and other undesirable artifacts. Second, data processing requires less time and memory. Third, it provides a way to understand and visualize the structure of complex data sets. Finally, it helps identify new, meaningful underlying variables.
Strictly speaking, PCA itself does not reduce the dimension of the data set; it only rotates the axes of the data space to align with the directions of maximum variance. The axis of greatest variance is called the first principal component. The next axis, orthogonal to the first and oriented to capture the next greatest variance, is called the second principal component, and so on. Dimension reduction is achieved by using only the first few principal components as a basis for the new space; the remaining components typically carry little variance and may be dropped with minimal loss of information.
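As a concrete illustration (not drawn from the book's text), a minimal NumPy sketch of this procedure might look as follows; the function name pca and all variable names are hypothetical:

```python
import numpy as np

def pca(X, k):
    """Illustrative sketch: project rows of X onto the first k principal components."""
    X_centered = X - X.mean(axis=0)          # center each feature
    cov = np.cov(X_centered, rowvar=False)   # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh suits symmetric matrices
    order = np.argsort(eigvals)[::-1]        # sort axes by decreasing variance
    components = eigvecs[:, order[:k]]       # top-k eigenvectors as the new basis
    return X_centered @ components           # coordinates in the reduced space

# Example: reduce 5-dimensional points to 2 dimensions
X = np.random.rand(100, 5)
Y = pca(X, 2)   # Y has shape (100, 2)
```

The eigenvectors of the covariance matrix are exactly the rotated axes described above, ordered by the variance they capture, so truncating to the first k of them performs the dimension reduction.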
Originally, PCA was an orthogonal transformation suited to linear data. Real-world data, however, are often nonlinear, and some of them, especially multimedia data, are multilinear. PCA is therefore no longer limited to linear transformations: many extensions enable nonlinear and multilinear transformations via manifold-based, kernel-based, and tensor-based techniques. This generalization makes PCA useful for a wider range of applications.
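For instance, the kernel-based extension replaces the covariance eigenproblem with an eigendecomposition of a centered Gram matrix. The following is a minimal sketch under the assumption of an RBF kernel; it is illustrative only, and the names kernel_pca and gamma are hypothetical:

```python
import numpy as np

def kernel_pca(X, k, gamma=1.0):
    """Illustrative sketch of kernel PCA with an RBF (Gaussian) kernel."""
    # Pairwise squared distances and the RBF Gram matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Top-k eigenvectors of the centered Gram matrix give the nonlinear components
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:k]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas   # projections of the training points
```

Because the data appear only through inner products in the kernel matrix, the nonlinear mapping is never computed explicitly; this is the idea behind the kernel-based chapters in this volume.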
In this book the reader will find applications of PCA in fields such as image processing, biometrics, face recognition, and speech processing. The book also covers the core concepts and state-of-the-art methods in data analysis and feature extraction.
Two-Dimensional Principal Component Analysis and Its Extensions
Application of Principal Component Analysis to Elucidate Experimental and Theoretical Information
Principal Component Analysis: A Powerful Interpretative Tool at the Service of Analytical Methodology
Subset Basis Approximation of Kernel Principal Component Analysis
Multilinear Supervised Neighborhood Preserving Embedding Analysis of Local Descriptor Tensor
Application of Linear and Nonlinear Dimensionality Reduction Methods
Acceleration of Convergence of the Alternating Least Squares Algorithm for Nonlinear Principal Components Analysis
The Maximum Non-Linear Feature Selection of Kernel Based on Object Appearance
FPGA Implementation for GHA-Based Texture Classification
The Basics of Linear Principal Components Analysis
Robust Density Comparison Using Eigenvalue Decomposition
Robust Principal Component Analysis for Background Subtraction: Systematic Evaluation and Comparative Analysis
On-Line Monitoring of Batch Process with Multiway PCA/ICA
Computing and Updating Principal Components of Discrete and Continuous Point Sets