Publication date: 2006-01-10

Publisher: 高等教育出版社 (Higher Education Press)

Book information for 《非线性时间序列—建模、预报及应用》:
  • 高等教育出版社 (Higher Education Press)
  • ISBN: 9787040173574
  • 1
  • 247358
  • Hardcover (精装)
  • 16 开
  • 2006-01-10
  • 670
  • 408
  • Science (理学)
  • Mathematics (数学)
About the Book

This book presents recent research on the theory and methods of nonlinear time series, with an emphasis on the nonparametric and semiparametric techniques developed over the past decade. It gives a detailed account of how these techniques are applied to time series in the state domain, the frequency domain, and the time domain. To reflect how parametric and nonparametric methods are integrated in time series analysis, it also systematically covers recent results on the main parametric nonlinear time series models, such as ARCH/GARCH models and threshold models. In addition, the book includes a concise introduction to linear ARMA models. To illustrate how nonparametric techniques can uncover the local structure of high-dimensional data, the book draws on many data sets arising from real problems, and the analysis of these examples demonstrates the relevant analytic techniques and tools. Only a basic knowledge of probability theory and statistics is needed to read this book.
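
For orientation, here is a minimal sketch, in standard textbook notation rather than the book's own, of the general form of two of the parametric models mentioned above and of a nonparametric autoregression; the symbols a_i, r, \alpha_i, and f are generic placeholders, and \{\varepsilon_t\} denotes an i.i.d. mean-zero noise sequence:
  • ARCH(1):  X_t = \sigma_t \varepsilon_t, \quad \sigma_t^2 = \alpha_0 + \alpha_1 X_{t-1}^2, \quad \alpha_0 > 0,\ \alpha_1 \ge 0
  • Threshold AR (two regimes, delay 1):  X_t = (a_1 X_{t-1} + \varepsilon_t)\,\mathbf{1}\{X_{t-1} \le r\} + (a_2 X_{t-1} + \varepsilon_t)\,\mathbf{1}\{X_{t-1} > r\}
  • Nonparametric AR(1):  X_t = f(X_{t-1}) + \varepsilon_t, with the function f left unspecified and estimated by smoothing methods such as local linear fitting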

This book is suitable for graduate students in statistics, practitioners of applied time series analysis, and researchers of all kinds in this field. It is also a useful reference for researchers working in other branches of statistics, as well as in econometrics, empirical finance, population biology, and ecology.

Contents

 Front Matter
 1 Introduction
  1.1 Examples of Time Series
  1.2 Objectives of Time Series Analysis
  1.3 Linear Time Series Models
   1.3.1 White Noise Processes
   1.3.2 AR Models
   1.3.3 MA Models
   1.3.4 ARMA Models
   1.3.5 ARIMA Models
  1.4 What Is a Nonlinear Time Series?
  1.5 Nonlinear Time Series Models
   1.5.1 A Simple Example
   1.5.2 ARCH Models
   1.5.3 Threshold Models
   1.5.4 Nonparametric Autoregressive Models
  1.6 From Linear to Nonlinear Models
   1.6.1 Local Linear Modeling
   1.6.2 Global Spline Approximation
   1.6.3 Goodness-of-Fit Tests
  1.7 Further Reading
  1.8 Software Implementations
 2 Characteristics of Time Series
  2.1 Stationarity
   2.1.1 Definition
   2.1.2 Stationary ARMA Processes
   2.1.3 Stationary Gaussian Processes
   2.1.4 Ergodic Nonlinear Models∗
   2.1.5 Stationary ARCH Processes
  2.2 Autocorrelation
   2.2.1 Autocovariance and Autocorrelation
   2.2.2 Estimation of ACVF and ACF
   2.2.3 Partial Autocorrelation
   2.2.4 ACF Plots, PACF Plots, and Examples
  2.3 Spectral Distributions
   2.3.1 Periodic Processes
   2.3.2 Spectral Densities
   2.3.3 Linear Filters
  2.4 Periodogram
   2.4.1 Discrete Fourier Transforms
   2.4.2 Periodogram
  2.5 Long-Memory Processes∗
   2.5.1 Fractionally Integrated Noise
   2.5.2 Fractionally Integrated ARMA Processes
  2.6 Mixing∗
   2.6.1 Mixing Conditions
   2.6.2 Inequalities
   2.6.3 Limit Theorems for α-Mixing Processes
   2.6.4 A Central Limit Theorem for Nonparametric Regression
  2.7 Complements
   2.7.1 Proof of Theorem 2.5(i)
   2.7.2 Proof of Proposition 2.3(i)
   2.7.3 Proof of Theorem 2.9
   2.7.4 Proof of Theorem 2.10
   2.7.5 Proof of Theorem 2.13
   2.7.6 Proof of Theorem 2.14
   2.7.7 Proof of Theorem 2.22
  2.8 Additional Bibliographical Notes
 3 ARMA Modeling and Forecasting
  3.1 Models and Background
  3.2 The Best Linear Prediction—Prewhitening
  3.3 Maximum Likelihood Estimation
   3.3.1 Estimators
   3.3.2 Asymptotic Properties
   3.3.3 Confidence Intervals
  3.4 Order Determination
   3.4.1 Akaike Information Criterion
   3.4.2 FPE Criterion for AR Modeling
   3.4.3 Bayesian Information Criterion
   3.4.4 Model Identification
  3.5 Diagnostic Checking
   3.5.1 Standardized Residuals
   3.5.2 Visual Diagnostic
   3.5.3 Tests for Whiteness
  3.6 A Real Data Example—Analyzing German Egg Prices
  3.7 Linear Forecasting
   3.7.1 The Least Squares Predictors
   3.7.2 Forecasting in AR Processes
   3.7.3 Mean Squared Predictive Errors for AR Processes
   3.7.4 Forecasting in ARMA Processes
 4 Parametric Nonlinear Time Series Models
  4.1 Threshold Models
   4.1.1 Threshold Autoregressive Models
   4.1.2 Estimation and Model Identification
   4.1.3 Tests for Linearity
   4.1.4 Case Studies with Canadian Lynx Data
  4.2 ARCH and GARCH Models
   4.2.1 Basic Properties of ARCH Processes
   4.2.2 Basic Properties of GARCH Processes
   4.2.3 Estimation
   4.2.4 Asymptotic Properties of Conditional MLEs∗
   4.2.5 Bootstrap Confidence Intervals
   4.2.6 Testing for the ARCH Effect
   4.2.7 ARCH Modeling of Financial Data
   4.2.8 A Numerical Example: Modeling S&P 500 Index Returns
   4.2.9 Stochastic Volatility Models
  4.3 Bilinear Models
   4.3.1 A Simple Example
   4.3.2 Markovian Representation
   4.3.3 Probabilistic Properties∗
   4.3.4 Maximum Likelihood Estimation
   4.3.5 Bispectrum
  4.4 Additional Bibliographical Notes
 5 Nonparametric Density Estimation
  5.1 Introduction
  5.2 Kernel Density Estimation
  5.3 Windowing and Whitening
  5.4 Bandwidth Selection
  5.5 Boundary Correction
  5.6 Asymptotic Results∗
  5.7 Complements—Proof of Theorem 5.3
  5.8 Bibliographical Notes
 6 Smoothing in Time Series
  6.1 Introduction
  6.2 Smoothing in the Time Domain
   6.2.1 Trend and Seasonal Components
   6.2.2 Moving Averages
   6.2.3 Kernel Smoothing
   6.2.4 Variations of Kernel Smoothers
   6.2.5 Filtering
   6.2.6 Local Linear Smoothing
   6.2.7 Other Smoothing Methods
   6.2.8 Seasonal Adjustments
   6.2.9 Theoretical Aspects∗
  6.3 Smoothing in the State Domain
   6.3.1 Nonparametric Autoregression
   6.3.2 Local Polynomial Fitting
   6.3.3 Properties of the Local Polynomial Estimator
   6.3.4 Standard Errors and Estimated Bias
   6.3.5 Bandwidth Selection
  6.4 Spline Methods
   6.4.1 Polynomial Splines
   6.4.2 Nonquadratic Penalized Splines
   6.4.3 Smoothing Splines
  6.5 Estimation of Conditional Densities
   6.5.1 Methods of Estimation
   6.5.2 Asymptotic Properties∗
  6.6 Complements
   6.6.1 Proof of Theorem 6.1
   6.6.2 Conditions and Proof of Theorem 6.3
   6.6.3 Proof of Lemma 6.1
   6.6.4 Proof of Theorem 6.5
   6.6.5 Proof for Theorems 6.6 and 6.7
  6.7 Bibliographical Notes
 7 Spectral Density Estimation and Its Applications
  7.1 Introduction
  7.2 Tapering, Kernel Estimation, and Prewhitening
   7.2.1 Tapering
   7.2.2 Smoothing the Periodogram
   7.2.3 Prewhitening and Bias Reduction
  7.3 Automatic Estimation of Spectral Density
   7.3.1 Least-Squares Estimators and Bandwidth Selection
   7.3.2 Local Maximum Likelihood Estimator
   7.3.3 Confidence Intervals
  7.4 Tests for White Noise
   7.4.1 Fisher’s Test
   7.4.2 Generalized Likelihood Ratio Test
   7.4.3 χ²-Test and the Adaptive Neyman Test
   7.4.4 Other Smoothing-Based Tests
   7.4.5 Numerical Examples
  7.5 Complements
   7.5.1 Conditions for Theorems 7.1–7.3
   7.5.2 Lemmas
   7.5.3 Proof of Theorem 7.1
   7.5.4 Proof of Theorem 7.2
   7.5.5 Proof of Theorem 7.3
  7.6 Bibliographical Notes
 8 Nonparametric Models
  8.1 Introduction
  8.2 Multivariate Local Polynomial Regression
   8.2.1 Multivariate Kernel Functions
   8.2.2 Multivariate Local Linear Regression
   8.2.3 Multivariate Local Quadratic Regression
  8.3 Functional-Coefficient Autoregressive Model
   8.3.1 The Model
   8.3.2 Relation to Stochastic Regression
   8.3.3 Ergodicity∗
   8.3.4 Estimation of Coefficient Functions
   8.3.5 Selection of Bandwidth and Model-Dependent Variable
   8.3.6 Prediction
   8.3.7 Examples
   8.3.8 Sampling Properties∗
  8.4 Adaptive Functional-Coefficient Autoregressive Models
   8.4.1 The Models
   8.4.2 Existence and Identifiability
   8.4.3 Profile Least-Squares Estimation
   8.4.4 Bandwidth Selection
   8.4.5 Variable Selection
   8.4.6 Implementation
   8.4.7 Examples
   8.4.8 Extensions
  8.5 Additive Models
   8.5.1 The Models
   8.5.2 The Backfitting Algorithm
   8.5.3 Projections and Average Surface Estimators
   8.5.4 Estimability of Coefficient Functions
   8.5.5 Bandwidth Selection
   8.5.6 Examples
  8.6 Other Nonparametric Models
   8.6.1 Two-Term Interaction Models
   8.6.2 Partially Linear Models
   8.6.3 Single-Index Models
   8.6.4 Multiple-Index Models
   8.6.5 An Analysis of Environmental Data
  8.7 Modeling Conditional Variance
   8.7.1 Methods of Estimating Conditional Variance
   8.7.2 Univariate Setting
   8.7.3 Functional-Coefficient Models
   8.7.4 Additive Models
   8.7.5 Product Models
   8.7.6 Other Nonparametric Models
  8.8 Complements
   8.8.1 Proof of Theorem 8.1
   8.8.2 Technical Conditions for Theorems 8.2 and 8.3
   8.8.3 Preliminaries to the Proof of Theorem 8.3
   8.8.4 Proof of Theorem 8.3
   8.8.5 Proof of Theorem 8.4
   8.8.6 Conditions of Theorem 8.5
   8.8.7 Proof of Theorem 8.5
  8.9 Bibliographical Notes
 9 Model Validation
  9.1 Introduction
  9.2 Generalized Likelihood Ratio Tests
   9.2.1 Introduction
   9.2.2 Generalized Likelihood Ratio Test
   9.2.3 Null Distributions and the Bootstrap
   9.2.4 Power of the GLR Test
   9.2.5 Bias Reduction
   9.2.6 Nonparametric versus Nonparametric Models
   9.2.7 Choice of Bandwidth
   9.2.8 A Numerical Example
  9.3 Tests on Spectral Densities
   9.3.1 Relation with Nonparametric Regression
   9.3.2 Generalized Likelihood Ratio Tests
   9.3.3 Other Nonparametric Methods
   9.3.4 Tests Based on Rescaled Periodogram
  9.4 Autoregressive versus Nonparametric Models
   9.4.1 Functional-Coefficient Alternatives
   9.4.2 Additive Alternatives
  9.5 Threshold Models versus Varying-Coefficient Models
  9.6 Bibliographical Notes
 10 Nonlinear Prediction
  10.1 Features of Nonlinear Prediction
   10.1.1 Decomposition for Mean Square Predictive Errors
   10.1.2 Noise Amplification
   10.1.3 Sensitivity to Initial Values
   10.1.4 Multiple-Step Prediction versus a One-Step Plug-in Method
   10.1.5 Nonlinear versus Linear Prediction
  10.2 Point Prediction
   10.2.1 Local Linear Predictors
   10.2.2 An Example
  10.3 Estimating Predictive Distributions
   10.3.1 Local Logistic Estimator
   10.3.2 Adjusted Nadaraya-Watson Estimator
   10.3.3 Bootstrap Bandwidth Selection
   10.3.4 Numerical Examples
   10.3.5 Asymptotic Properties
   10.3.6 Sensitivity to Initial Values: A Conditional Distribution Approach
  10.4 Interval Predictors and Predictive Sets
   10.4.1 Minimum-Length Predictive Sets
   10.4.2 Estimation of Minimum-Length Predictors
   10.4.3 Numerical Examples
  10.5 Complements
  10.6 Additional Bibliographical Notes
 References
 Index
 Copyright