Abstract of Essays on noncausal and noninvertible time series

Weifeng Jin

    Over the last two decades, there has been growing interest among economists in nonfundamental univariate processes, generally represented by noncausal and noninvertible processes. Noninvertible moving average (MA) models are commonly employed in Macroeconomics to generate distinct impulse response functions that still make sense economically (Lippi and Reichlin (1993)) or to feature unusual information flows influenced by agents' foresight of fiscal policies (Leeper et al. (2013)); for a comprehensive survey of applications of noninvertible MA models, see Alessi et al. (2011). Noncausal autoregressive (AR) models have become increasingly popular due to their ability to capture nonlinear dynamics prevalent in Macroeconomics and Finance, such as volatility clustering, asymmetric cycles, and local explosiveness (Fries and Zakoïan (2019); Gouriéroux and Zakoïan (2017)). In particular, the incorporation of both past and future components into noncausal processes makes them attractive options for modeling forward-looking behavior in economic activities as an alternative to noninvertible MA processes, and to some extent improves forecasting performance (Hecq et al. (2020); Lanne and Luoto (2013)). However, the classical techniques for time series models, which rely on second-order moments or Gaussian likelihood functions, are largely limited to the causal and invertible counterparts. This dissertation seeks to contribute to the field by providing tools for testing and estimation that are robust to noncausal and noninvertible time series.
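To fix ideas, the locally explosive, bubble-like behavior of purely noncausal processes can be seen in a small simulation. The sketch below (ours, not from the dissertation; all function names are illustrative) generates a noncausal AR(1), x_t = φ x_{t+1} + ε_t, by running the recursion backwards in time with heavy-tailed Cauchy innovations:

```python
import numpy as np

def simulate_noncausal_ar1(phi, n, rng, burn=500):
    """Simulate x_t = phi * x_{t+1} + eps_t by recursing backwards in time.

    |phi| < 1 gives a strictly stationary, purely noncausal solution.
    Cauchy innovations illustrate the locally explosive, bubble-like
    episodes that make these models attractive in Finance.
    """
    eps = rng.standard_cauchy(n + burn)
    x = np.zeros(n + burn)
    # x_t depends on the future, so iterate from the last index backwards.
    for t in range(n + burn - 2, -1, -1):
        x[t] = phi * x[t + 1] + eps[t]
    # Keep the early part of the sample; the burn-in (including the zero
    # initial condition) sits at the "future" end of the array.
    return x[:n]

rng = np.random.default_rng(0)
x = simulate_noncausal_ar1(phi=0.8, n=1000, rng=rng)
print(x[:5])
```

The same recursion run forwards would instead produce a causal AR(1); the two have identical autocorrelation structures, which is why second-order methods cannot tell them apart.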

    In Chapter I, titled "Quantile Autoregression-Based Non-causality Testing", we provide an overview of the applications of general autoregressive (AR) processes, namely mixed causal non-causal AR processes, in Economics and Finance, highlighting their ability to reproduce nonlinear dynamics. We specifically investigate the statistical properties of empirical conditional quantiles of non-causal processes, following the seminal work on non-Gaussian processes by Rosenblatt (2000). First, we show that the quantile autoregression (QAR) estimates for non-causal non-Gaussian processes do not remain constant across quantiles, unlike their causal counterparts. Building upon this finding, we propose the first testing strategy for non-causality using the Kolmogorov-Smirnov (KS) constancy test developed by Koenker and Xiao (2006) in the QAR framework. The test follows a classical linear hypothesis and adopts the KS norm over a compact quantile interval of interest. A martingale transformation is applied to the test statistic to restore its distribution-free property. Although the constancy-based test cannot ensure consistency, because the functional form of the true conditional quantiles of non-causal processes is unknown, its accessibility and straightforwardness make it a touchstone for non-causality testing in practice. Second, we demonstrate that non-causal autoregressive processes admit nonlinear representations of their conditional quantiles for at least one τ ∈ (0, 1) given past observations, indicating that linear models are misspecified for the conditional quantiles of non-causal processes. Exploiting this second property, we propose two additional testing strategies for non-causality of non-Gaussian processes within the QAR framework, resorting to specification tests for quantile regressions (Escanciano and Velasco (2010); Escanciano and Goh (2014)), hereafter the EV test and the EG test.
Both approaches translate the targeted hypothesis into unconditional moment conditions, with test statistics constructed on the residual processes indexed by τ ∈ (0, 1) and following a Cramér-von Mises (CvM) norm over the considered interval. The information contained in the interval is fully exploited to guarantee the consistency of both specification-based tests for non-causality. The EV approach uses subsampling to approximate the critical value, owing to the estimation of nuisance parameters involved in the asymptotic distribution of the test statistic. Alternatively, the EG approach imposes an orthogonality condition on the Taylor expansion of the test statistic around the true parameter to alleviate the asymptotic effect of nuisance parameters, and approximates the critical value with the aid of a multiplier bootstrap. The details of these three testing procedures are discussed in the paper.
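The varying-slope phenomenon underlying the constancy-based test can be sketched numerically. The code below (ours; it is only an illustration of the phenomenon, not the chapter's actual test, which standardises the coefficients and applies a martingale transformation) fits a linear QAR(1) by minimising the check loss at several quantiles and reports a raw KS-type statistic, sup_τ |b̂(τ) − b̂(0.5)|, for a causal and a noncausal AR(1) driven by the same skewed innovations:

```python
import numpy as np
from scipy.optimize import minimize

def qar1_slope(x, tau):
    """Fit Q_tau(x_t | x_{t-1}) = a + b x_{t-1} by minimising the check
    loss rho_tau(u) = u (tau - 1(u < 0)); return the slope estimate."""
    y, z = x[1:], x[:-1]
    def check_loss(theta):
        u = y - theta[0] - theta[1] * z
        return np.sum(u * (tau - (u < 0)))
    # Two parameters only, so Nelder-Mead is adequate for this sketch.
    res = minimize(check_loss, x0=np.array([0.0, 0.5]),
                   method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 2000})
    return res.x[1]

def ks_constancy(x, taus):
    """Raw KS-type statistic: sup over a quantile grid of the deviation
    of the QAR slope from its value at the median."""
    b_med = qar1_slope(x, 0.5)
    return max(abs(qar1_slope(x, t) - b_med) for t in taus)

rng = np.random.default_rng(1)
n, phi = 2000, 0.7
eps = rng.exponential(size=n + 200) - 1.0  # skewed, non-Gaussian innovations

# Causal AR(1): x_t = phi x_{t-1} + eps_t (forward recursion).
causal = np.zeros(n + 200)
for t in range(1, n + 200):
    causal[t] = phi * causal[t - 1] + eps[t]
causal = causal[200:]

# Noncausal AR(1): x_t = phi x_{t+1} + eps_t (backward recursion).
noncausal = np.zeros(n + 200)
for t in range(n + 198, -1, -1):
    noncausal[t] = phi * noncausal[t + 1] + eps[t]
noncausal = noncausal[:n]

taus = [0.2, 0.35, 0.65, 0.8]
kc = ks_constancy(causal, taus)
kn = ks_constancy(noncausal, taus)
print(kc, kn)
```

For the causal series the QAR slopes should be roughly constant at φ across quantiles, so the statistic reflects only sampling noise; for the noncausal series the slopes vary with τ, which is the property the constancy test exploits.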

    The proposed testing procedures fill an important gap in the literature on testing for non-causality and can serve as a model-selection step before estimation. Furthermore, they retain power even for high-order AR processes in the presence of a mix of causality and non-causality. Owing to the advantages of quantile regression, our proposed tests, based on a linear quantile specification, offer an advantage over conditional mean regression by being robust to outliers, making them suitable for the heavy-tailed processes commonly encountered in Finance.

    To evaluate the performance of the proposed tests in finite samples, we conduct Monte Carlo experiments in which we simulate AR processes of different orders driven by different innovations, covering symmetric and asymmetric, heavy-tailed and light-tailed, and bounded- and unbounded-support distributions. We report empirical rejection rates under the null and alternative hypotheses and compare the three testing procedures in terms of size and power. Size-wise, the EG test has the appealing attribute of maintaining undistorted size at the nominal level in most scenarios, while the EV test fluctuates around the nominal level (mostly below the desired level) and the constancy-based test over-rejects in the presence of heavy tails. As sample sizes increase, all three procedures yield more stable rejection rates around the nominal level. Regarding power, the EG test is extraordinarily competent for asymmetric distributions, and the EV test achieves the highest power among the three in the symmetric cases. By contrast, the constancy test can be a powerful tool for detecting non-causality in processes with heavy tails. To further illustrate the applicability of our proposed non-causality tests, we apply them to six time series from financial markets studied in Fries and Zakoïan (2019) to investigate the presence of speculative bubbles. The test results show strong evidence favoring non-causal processes in three series and mild evidence in one series, which does not deviate much from the results obtained by Fries and Zakoïan (2019). Finally, we discuss tentative extensions of our specification-based approaches to AR processes driven by heteroskedastic innovations, with some numerical experiments to illustrate the extension. In the simulations, the performance of QAR estimates of non-causal processes at extreme quantiles is also explored, which provides another perspective for identifying noncausal processes driven by skewed innovations.

    In the second chapter, "Estimation of Time Series Models Using the Empirical Distribution of Residuals", we introduce a novel estimation technique for general linear time series models, potentially noninvertible and noncausal, that utilizes the empirical cumulative distribution function of the residuals. Given that every weakly stationary process admits a causal and invertible representation (Brockwell and Davis, 2009, p. 105), the identification and estimation of noncausality and noninvertibility are only meaningful under non-Gaussianity. Since noncausal and noninvertible ARMA processes share the same autocorrelation structure as their causal and invertible counterparts, the classical methods based on the variance-covariance matrix or Gaussian likelihood functions fail to distinguish noncausality (noninvertibility) from causality (invertibility). The existing estimation techniques applicable to noncausal and/or noninvertible processes fall broadly into two strands: non-Gaussian maximum likelihood estimation (MLE), as in Breidt et al. (1991), Huang and Pawitan (2000), and Lii and Rosenblatt (1992, 1996), where some distributional knowledge is needed a priori; and minimum distance (MD) estimation exploiting information on non-Gaussianity from higher-order moments/cumulants or characteristic functions in either the time or the frequency domain, see Gospodinov and Ng (2015), Velasco (2022), and Velasco and Lobato (2018), which to some extent requires finiteness of higher-order moments of the innovations.
Inspired by Velasco (2022), who converts the test statistic proposed by Hong (1999), capturing pairwise dependence through empirical characteristic functions, into a criterion for identifying and estimating general linear time series, we employ a general dependence measure that characterizes the pairwise dependence of residuals through their empirical cumulative distribution functions (cdf). The general dependence is defined as the general covariance between u_t and u_{t-j}, i.e., the distance between the joint cdf of any pair of residuals (u_t, u_{t-j}) and the product of their marginal cdfs. This measure can also be interpreted as a generalization of the standard covariance between u_t and u_{t-j} obtained by applying an indicator transformation to the given pair of random variables, which enables capturing both linear and nonlinear dependence between variables (Hoeffding (1948); Skaug and Tjøstheim (1993)). The spectral distribution function (Hong (2000)) allows us to incorporate the general pairwise dependence at all lags into one statistic. The estimation method considers an L2 distance between the proposed dependence measure in the unrestricted case and the restricted one applied to the residuals. The complete information in the joint distribution of the residuals is exploited to achieve identification under the iid assumption, and the estimate is obtained by minimizing the sample loss function given the observations. Consistency of the estimates of the model parameters is established under regularity conditions. Because the original loss function is non-differentiable, we investigate the asymptotic distribution of the estimates by employing a smoothed cdf to approximate the indicator function; for a fixed smoothing parameter, the asymptotic distribution depends on that parameter. When the smoothing parameter approaches zero instead, extra conditions on its convergence rate are imposed to maintain the classical rate of asymptotic normality: the smoothing parameter should shrink neither so fast that the smoothed cdf behaves like the non-smooth indicator function, nor so slowly that the asymptotic bias from the approximation ceases to be negligible. Efficiency improvements can be achieved by properly choosing the scaling parameter for the residuals. In addition, we provide the calculation of the standard errors of the estimates.
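The building blocks of this loss can be sketched in a few lines. The code below (ours; the grid of evaluation points, the logistic smoother, and the diagonal pairing of grid points are simplifying assumptions, not the chapter's exact construction) computes the empirical general covariance at lag j and a CvM-type aggregation over lags, with an optional smoothed-cdf version of the indicator:

```python
import numpy as np

def general_covariance(u, j, pts, h=None):
    """Empirical general covariance at lag j, evaluated at paired points:
        gamma_j(a, a) = P_n(u_t <= a, u_{t-j} <= a) - F_n(a) F_n(a).
    With h > 0, the indicator 1(u <= a) is replaced by a smoothed
    (logistic) cdf, mirroring the smoothing used for asymptotics."""
    x, y = u[j:], u[:-j]                  # pairs (u_t, u_{t-j})
    def ind(data):
        d = pts[:, None] - data[None, :]  # (n_points, n_pairs)
        if h is None:
            return (d >= 0).astype(float)     # exact indicator
        return 1.0 / (1.0 + np.exp(-d / h))   # smoothed approximation
    Ix, Iy = ind(x), ind(y)
    joint = (Ix * Iy).mean(axis=1)            # joint empirical cdf
    return joint - Ix.mean(axis=1) * Iy.mean(axis=1)

def cvm_loss(u, max_lag, h=None):
    """L2 (CvM-type) aggregation of gamma_j over lags, using empirical
    quantiles of the residuals as evaluation points; a sketch of the
    kind of sample loss minimised over model parameters."""
    pts = np.quantile(u, np.linspace(0.1, 0.9, 9))
    return sum((general_covariance(u, j, pts, h) ** 2).sum()
               for j in range(1, max_lag + 1))

rng = np.random.default_rng(2)
iid = rng.standard_t(df=5, size=2000)
loss_ind = cvm_loss(iid, max_lag=5)          # indicator version
loss_smooth = cvm_loss(iid, max_lag=5, h=0.1)  # smoothed version
print(loss_ind, loss_smooth)
```

For iid residuals each gamma_j is O(n^{-1/2}), so the loss is near zero; model parameters that leave serial dependence in the residuals inflate it, which is what drives identification.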

    This method has several appealing attributes compared to the existing alternatives. First, it achieves identification of the model parameters without imposing causality or invertibility. Second, only some regular smoothness conditions are required, instead of stringent moment conditions or parametric distributional knowledge. Unlike other procedures, our proposed method does not involve any subjective choice of lag windows or trimming parameters. Moreover, the cdf is more robust to outliers and less computationally cumbersome than approaches based on characteristic functions. Owing to the flexibility of the cdf, the method can be tentatively extended to time series models with different dependence structures for the innovations, such as quantile independence or conditional mean independence.

    In the Monte Carlo experiments, we investigate the finite sample properties of the estimates for processes simulated from different non-Gaussian innovations. In the first experiment, we examine the finite sample performance of the estimates based on the indicator function and report the proportion of correct identification. The results show that the method performs better when the innovations exhibit heavy tails or asymmetry, which coincides with the conclusion of Velasco and Lobato (2018) that excess skewness and kurtosis contribute to the identification of noncausality. In the second experiment, we evaluate the numerical approximation performance of the smoothed cdf estimates for different values of the smoothing parameter. The relative root mean squared error (RRMSE) of the estimates indicates that the approximation performance does not vary much across choices of the smoothing parameter and improves as the sample size increases from 100 to 200. An empirical application illustrates the methodology by fitting autoregressive models with a noncausal representation to the daily trading volume of Microsoft stock, where volatility clustering is captured without additionally introducing ARCH.

    In the third chapter, "Directional Predictability Tests", joint with Carlos Velasco, we propose new tests of predictability for non-Gaussian sequences that may display general nonlinear dependence in higher-order properties. Predictability tests have typically taken the form of either martingale difference testing or white noise testing. Initially, we investigate the general dependence structure of processes driven by all-pass (AP) filters applied to innovation sequences under iid or mds conditions, and show that such processes have predictability at third and fourth orders despite being linearly unpredictable. Given these preliminaries, we construct different predictability tests based on testing the null of a martingale difference against parametric alternatives that introduce linear or nonlinear dependence, as generated by ARMA and AP-restricted ARMA models (causal and non-invertible ARMA in our context), respectively. Additionally, we develop tests for linear predictability under the white noise null hypothesis parameterized by an all-pass model driven by martingale difference innovations, and tests of non-linear predictability on ARMA residuals. Namely, we aim to test three hypotheses: the AP hypothesis with mds innovations against noninvertible ARMA (linear predictability), the mds hypothesis against AP-restricted ARMA (non-linear predictability), and the mds hypothesis against unrestricted but non-invertible ARMA (general predictability). We also provide asymptotic analysis of the properties of the new tests against the different alternatives. The testing hypotheses follow Lanne et al. (2013), who propose a Wald test based on non-Gaussian ML estimation applicable only to iid innovations. In contrast, we extend the analysis from processes generated by iid innovations to mds innovations, which can account for possible higher-order dependence.
Unlike Lanne et al. (2013), our robust Lagrange Multiplier (LM) tests are developed from a loss function based on pairwise dependence measures that identify the predictability of levels. More specifically, our LM tests are based on a discrepancy measure that accounts for higher-order dependence in the mds innovations (Velasco (2022)). Notably, our tests do not require prior distributional knowledge of non-Gaussian innovations or estimation of the complete model.
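The key property of all-pass filters, being serially uncorrelated yet dependent at higher orders, can be checked numerically. The sketch below (ours, not from the chapter) applies an AP(1) filter, y_t = φ y_{t-1} + ε_t − (1/φ) ε_{t-1}, whose transfer function has constant modulus, to iid Student-t innovations and compares the lag-1 autocorrelation of the levels with that of the squares:

```python
import numpy as np

def all_pass1(eps, phi, burn=500):
    """Apply the all-pass(1) filter
        y_t = phi * y_{t-1} + eps_t - (1/phi) * eps_{t-1}.
    The AR and MA roots are reciprocal, so the spectral density is flat:
    y_t is white noise regardless of the iid innovation distribution."""
    y = np.zeros_like(eps)
    for t in range(1, len(eps)):
        y[t] = phi * y[t - 1] + eps[t] - eps[t - 1] / phi
    return y[burn:]

def acf1(x):
    """Lag-1 sample autocorrelation."""
    xc = x - x.mean()
    return (xc[1:] * xc[:-1]).sum() / (xc ** 2).sum()

rng = np.random.default_rng(3)
eps = rng.standard_t(df=5, size=50_500)  # non-Gaussian iid innovations
y = all_pass1(eps, phi=0.5)

r_levels = acf1(y)        # ~ 0: no linear predictability
r_squares = acf1(y ** 2)  # generally nonzero: higher-order dependence
print(r_levels, r_squares)
```

This is exactly why second-order (white noise) diagnostics cannot reject an all-pass alternative, and why the chapter's tests target third- and fourth-order predictability instead.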

    Some numerical exercises investigate the finite sample performance of our proposed directional tests. We follow the experiment in Lanne et al. (2013) with both iid and GARCH(1,1) innovations and compare our methods with alternative ones. In terms of size, the LM tests of the nonlinear predictability and general predictability hypotheses have empirical size closer to the nominal level than the test of the linear predictability hypothesis, which requires estimating the restricted model under the null. The tests generally perform better under asymmetric innovations than under symmetric ones. Regarding power, the LM tests have little power in symmetric cases compared to asymmetric cases across all models, hypotheses, and sample sizes. Furthermore, LM tests that impose the iid condition through the loss function show more power than those exploiting only the mds condition, as expected, since the mds-based tests do not use the full information in the innovations. Our finite sample analysis shows that the performance of the new tests is reasonable for moderate sample sizes, but power depends on the true distribution of the innovations, so it seems worth exploring alternative characterizations of past information to those implied by the characteristic function, as well as optimal weighting and scaling. In the empirical application, we study the predictability of four series of quarterly US returns for market and value-weighted size-ordered portfolios, imposing iid and mds assumptions on the model errors, respectively. Some test results coincide with those obtained by Lanne et al. (2013), while others lead to different conclusions. From this empirical application, we find that Wald tests could be more powerful than LM tests; pseudo likelihood ratio tests based on our proposed loss functions under iid and mds assumptions could also be considered.

