Springer, 2017. — 292 p. — ISBN 978-3-319-59974-8
This book is intended for use in advanced graduate courses in statistics and machine learning, as well as for experimental neuroscientists seeking to understand statistical methods at a deeper level and for theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation, and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, through which it also aims to convey an understanding of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. In this way, computational models in neuroscience become not only explanatory frameworks but also powerful, quantitative data-analytical tools in their own right, enabling researchers to look beyond the data surface and unravel underlying mechanisms. Interactive examples of most methods are provided through a package of MATLAB routines, encouraging a playful approach to the subject and giving readers a better feel for the practical aspects of the methods covered.
Statistical Inference
Statistical Models
Goals of Model-Based Analysis and Basic Definitions
Principles of Statistical Parameter Estimation
Solving for Parameters in Analytically Intractable Situations
Statistical Hypothesis Testing
Regression Problems
Multiple Linear Regression and the General Linear Model (GLM)
Multivariate Regression and the Multivariate General Linear Model
Canonical Correlation Analysis (CCA)
Ridge and LASSO Regression
Local Linear Regression (LLR)
Basis Expansions and Splines
k-Nearest Neighbors for Regression
Artificial Neural Networks as Nonlinear Regression Tools
Classification Problems
Discriminant Analysis
Fisher’s Discriminant Criterion
Logistic Regression
k-Nearest Neighbors (kNN) for Classification
Maximum Margin Classifiers, Kernels, and Support Vector Machines
Model Complexity and Selection
Penalizing Model Complexity
Estimating Test Error by Cross-Validation
Estimating Test Error by Bootstrapping
Curse of Dimensionality
Variable Selection
Clustering and Density Estimation
Density Estimation
Clustering
Determining the Number of Classes
Mode Hunting
Dimensionality Reduction
Principal Component Analysis (PCA)
Canonical Correlation Analysis (CCA) Revisited
Fisher Discriminant Analysis (FDA) Revisited
Factor Analysis (FA)
Multidimensional Scaling (MDS) and Locally Linear Embedding (LLE)
Independent Component Analysis (ICA)
Linear Time Series Analysis
Basic Descriptive Tools and Terms
Linear Time Series Models
Autoregressive Models for Count and Point Processes
Granger Causality
Linear Time Series Models with Latent Variables
Computational and Neurocognitive Time Series Models
Bootstrapping Time Series
Nonlinear Concepts in Time Series Analysis
Detecting Nonlinearity and Nonparametric Forecasting
Nonparametric Time Series Modeling
Change Point Analysis
Hidden Markov Models
Time Series from a Nonlinear Dynamical Systems Perspective
Discrete-Time Nonlinear Dynamical Systems
Continuous-Time Nonlinear Dynamical Systems
Statistical Inference in Nonlinear Dynamical Systems
Reconstructing State Spaces from Experimental Data
Detecting Causality in Nonlinear Dynamical Systems