Essays on threshold models with unit roots

  • Author: Jun Yi Peng Zhou
  • Thesis supervisor: Jesús Gonzalo Muñoz
  • Defense: Universidad Carlos III de Madrid (Spain), 2019
  • Language: Spanish
  • Thesis committee: María Dolores Gadea Rivas (chair), Nazarii Salish (secretary), José Olmo Badenas (member)
  • Doctoral program: Doctoral Program in Economics, Universidad Carlos III de Madrid
  • Abstract
    • Empirical time series models in economics are overwhelmingly linear. When the 2008 crisis started, it was argued in several central bank meetings that this linearity was the reason forecasts failed, so there is a need for nonlinear empirical models. It is well established that many economic series contain dominant, smooth components, even after the removal of simple deterministic trends. Since the seminal work of Nelson and Plosser (1982), unit root (UR) models have adequately captured this characteristic, to the point that unit roots have become a "stylized fact" for most macroeconomic and financial time series. The presence of unit roots adds an extra level of difficulty to the study of nonlinear time series models. The challenge is to develop nonlinear models with persistent variables.

      In this dissertation, we study threshold models with unit roots from two different perspectives: on the one hand, a univariate analysis; on the other, a multivariate analysis.

      The first chapter, titled "Threshold Stochastic Unit Root Models", is a project co-authored with Jesús Gonzalo and Raquel Montesinos. In this study, we present the univariate analysis by introducing a new class of stochastic unit root (STUR) processes. STUR models arise naturally in economic theory, as well as in many macroeconomic and financial applications. They can be stationary in some periods or regimes and mildly explosive in others, a characteristic that makes them not difference stationary. If a series shows evidence of non-stationarity that is not removable by differencing, it is inappropriate to estimate conventional ARIMA or cointegration/error-correction models, because the properties of the estimators and tests involved are not the same as in the standard difference stationary case. This problem is not detected by standard unit root tests, such as the Dickey-Fuller test, because they cannot easily distinguish between exact unit roots and stochastic unit roots. In order to obtain a better statistical distinction between these two types of unit roots, McCabe and Tremayne (1995) proposed a locally best invariant test of the null hypothesis of difference stationarity against a stochastic unit root alternative. The application of this parameter constancy test to the macroeconomic variables analyzed in Nelson and Plosser (1982) suggests that about half of them are not difference stationary. Hence, the notion that some economic time series are nonstationary in a somewhat more general way needs to be considered and, consequently, more elaborate modeling and estimation techniques need to be explored.

      In this study, we assume that the economy stays in a "good" or "bad" state for several periods until certain determining variables overpass some fundamental values. This assumption is naturally captured by modeling the evolution of economic variables via threshold models. To model the random behavior of the largest root of an ARMA process, we propose a threshold autoregressive (TAR) model where the largest root is less than one in some regimes and larger than one in others, in such a way that on average it is equal to one. This threshold autoregressive stochastic unit root (TARSUR) model presents several advantages over other nonlinear models. First, its computational simplicity: all parameters are estimated by least squares regression. Second, the t-statistic used to test the hypothesis of no threshold effects versus threshold effects follows, in some cases, a standard asymptotic distribution, so there is no need to generate new critical values. Third, the threshold variable is suggested by economic theory, and it will likely provide an explanation or cause for the existence of a unit root, which to the best of our knowledge is still lacking in the econometric literature. Fourth, in many situations, threshold models are easier to use for forecasting than random coefficient models.
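
      As an illustration only (this is not the authors' code), a minimal sketch of a two-regime TARSUR data-generating process can be written as follows; the regime coefficients 0.97 and 1.03, the threshold value, and the threshold variable are hypothetical choices.

        import numpy as np

        rng = np.random.default_rng(0)
        T = 500
        rho = {"mean_reverting": 0.97, "explosive": 1.03}  # roots below and above one, averaging close to one
        tau = 0.0                                          # hypothetical threshold value
        z = rng.normal(size=T)                             # stationary threshold variable (e.g., GDP growth)
        eps = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            # the regime is decided by the lagged threshold variable
            r = "mean_reverting" if z[t - 1] <= tau else "explosive"
            y[t] = rho[r] * y[t - 1] + eps[t]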

      In this study, we state the conditions for the TARSUR model to be strictly stationary and covariance stationary. It is well known that the unit root model is neither strictly nor covariance stationary, but if we allow the coefficient to vary over time, the model can achieve strict/covariance stationarity. The intuition behind this result is that all these conditions require the process not to stay too long in the explosive regime, so that the mean-reverting regimes dominate the explosive states. We also analyze the impulse response function (IRF) and show that the TARSUR process can generate a wide variety of behavior depending on the transition probabilities between the explosive and the mean-reverting states. When the probability of remaining in the same state is higher than that of switching regimes, the IRF behaves similarly to that of an explosive autoregressive process. When the probability of staying in the same state equals the probability of switching, the IRF behaves like that of a UR process. If the probability of switching regimes is higher than that of staying, the IRF behaves like that of a stationary autoregressive process.
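
      The role of the transition probabilities can be visualized with a small Monte Carlo exercise (a sketch under assumed numbers, not the chapter's derivation): the simulated IRF is the average gap between a shocked path and a baseline path when the regime follows a two-state chain with staying probability p_stay.

        import numpy as np

        def tarsur_irf(p_stay, horizon=40, n_sims=2000, rho=(0.97, 1.03), shock=1.0, seed=0):
            """Simulated IRF: average difference between a shocked and a baseline path."""
            rng = np.random.default_rng(seed)
            irf = np.zeros(horizon)
            for _ in range(n_sims):
                state = rng.integers(2)
                y_base, y_shock = 0.0, shock
                for h in range(horizon):
                    irf[h] += (y_shock - y_base) / n_sims
                    if rng.random() > p_stay:            # switch regime with probability 1 - p_stay
                        state = 1 - state
                    eps = rng.normal()
                    y_base = rho[state] * y_base + eps
                    y_shock = rho[state] * y_shock + eps
            return irf

        # p_stay > 0.5: explosive-looking IRF; p_stay = 0.5: unit-root-like; p_stay < 0.5: mean-reverting
        for p in (0.8, 0.5, 0.2):
            print(p, tarsur_irf(p)[-1])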

      To detect the TARSUR process, we present two independent tests. First, we develop a test of the null hypothesis that the mean of the coefficients is equal to one. The proposed statistic converges to a distribution that shows a discontinuity depending on whether the coefficients change over time. When the coefficient is fixed, under the null hypothesis the statistic converges to the same distribution as the Dickey-Fuller test. When the coefficient changes over time and the process is covariance stationary, the statistic converges to a standard normal. To solve this problem, we extend results from the near unit root literature by allowing the time-varying coefficients to move around unity. With this extension, we obtain a continuous distribution in which a nuisance parameter captures the variability of the coefficient. Since the nuisance parameter is unknown and cannot be estimated, we propose the use of subsampling to obtain valid critical values for this test. Second, we present a test for the presence of threshold effects. When the threshold value is known, the proposed statistic is the t-statistic, which converges to a standard normal distribution. When the threshold value is unknown, the proposed statistic is the supremum of the Wald test. We show that its distribution is the same as when testing for structural breaks. This result is relevant because the critical values are widely available in the literature. We show through Monte Carlo simulation that both tests perform very well in finite samples. Finally, we present four empirical applications where some theoretical and empirical controversy exists about the randomness of the unit root in the AR representation.
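
      As a rough illustration of the second test (the statistic only; the critical values come from the structural break literature), a sup-Wald scan over candidate threshold values could look like the sketch below, with hypothetical variable names and a 15% trimming fraction.

        import numpy as np

        def sup_wald(y, z, trim=0.15, n_grid=50):
            """Sup-Wald statistic for equal AR(1) coefficients across the two regimes defined by z."""
            dy, y_lag, z_lag = np.diff(y), y[:-1], z[:-1]
            candidates = np.quantile(z_lag, np.linspace(trim, 1 - trim, n_grid))
            best = -np.inf
            for tau in candidates:
                d = (z_lag <= tau).astype(float)
                X = np.column_stack([y_lag * d, y_lag * (1 - d)])   # one coefficient per regime
                beta = np.linalg.lstsq(X, dy, rcond=None)[0]
                u = dy - X @ beta
                sigma2 = u @ u / (len(dy) - X.shape[1])
                V = sigma2 * np.linalg.inv(X.T @ X)
                R = np.array([[1.0, -1.0]])                         # H0: same coefficient in both regimes
                diff = R @ beta
                wald = float(diff @ np.linalg.solve(R @ V @ R.T, diff))
                best = max(best, wald)
            return best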

      In the first application, we estimate the TARSUR model for the real U.S. stock price, using the change in real U.S. GDP as the threshold variable. Applying the proposed tests, we conclude that the stock price follows a TARSUR process. When the increment of GDP is lower than 78.71, the U.S. stock price is in the mean-reverting regime with autoregressive parameter 0.976. When the increment of GDP is higher than 78.71, the U.S. stock price is in the explosive state with autoregressive parameter 1.023. Concluding that stock prices follow a TARSUR process opens the possibility of return predictability. We also show that the TARSUR model offers better predictions than the random walk model in terms of mean square error.

      In the second application, we analyze the U.S. house price using the GDP per capita growth rate as the threshold variable. As before, using the testing strategy, we conclude that the house price follows a TARSUR process. When GDP per capita growth is lower than 0.28%, the house price is in the stationary regime with autoregressive coefficient 0.97. If GDP per capita growth is higher than 0.28%, the house price is in the explosive regime with autoregressive coefficient 1.02. This result offers a plausible explanation for bubbles in housing prices, since the model can generate both explosive and mean-reverting regimes.

      In the third application, we analyze the nonlinear behavior of the U.S. interest rate, using the annual change in the unemployment rate as the threshold variable. Using the testing methodology proposed in the study, we conclude that the U.S. interest rate follows a TARSUR process. If the annual change in unemployment is lower than 0.4%, the interest rate is in the "explosive" state with an autoregressive coefficient of 1.006, which is close to one. If the annual change in unemployment is higher than 0.4%, the interest rate is in the mean-reverting state with an autoregressive coefficient of 0.968. This result is significant because it shows the nonlinear behavior of U.S. interest rates and the cause of this non-linearity. In the last application, we analyze the USD/pound nominal exchange rate, but we are unable to reject the fixed unit root hypothesis.

      In the second chapter, we present the multivariate analysis in "Multiple Long Run Equilibria Through Cointegration Eyes." Economic theory assesses the interrelation between economic variables with unit roots via long-run equilibrium relationships. When the relationship between variables is linear, we can test the existence of such relationships through the concept of cointegration. Indeed, when two or more economic variables are in equilibrium, they must be cointegrated. For example, the literature suggests links between short-term and long-term interest rates, and a relation between prices and dividends via present value models (PVM).

      Nearly all economic models in macroeconomics are highly nonlinear, and this gives us good reason to think that the actual data-generating process of macroeconomic series is nonlinear; for instance, DSGE models predict a complicated nonlinear relationship between the variables and between the past and the future. Other examples include nonlinear Taylor rules, the environmental Kuznets curve, models of financial bubbles, and nonlinear growth models.

      Economic theory has developed models with multiple equilibria: Azariadis and Drazen (1990) propose a Diamond-type model that allows multiple, locally stable equilibria, while Benhabib, Schmitt-Grohé and Uribe (2001) explore the conditions under which the Taylor rule generates multiple equilibria. Time series econometrics, however, has not considered this type of nonlinear cointegration with persistent variables.

      The goal of this study is to analyze the presence of multiple long-run equilibria via a threshold cointegration framework in which the non-linearity arises from introducing state-dependent behavior in the long-run equilibrium relationship. Specifically, we introduce threshold effects in the long-run equilibrium relationships to capture different relations between nonstationary variables during different stages of the business cycle. We also introduce methods to test for the presence of threshold cointegration and to perform inference about the presence of multiple equilibria. Our analysis focuses on threshold effects induced by observable factors dictated by economic theory (e.g., economic growth, unemployment growth), which are assumed to be stationary. The advantages that threshold models offer over other nonlinear models are a straightforward estimation by least squares and an intuitive economic interpretation of the nonlinear relation. Very often, economic theory does not specify the type of non-linearity linking different economic variables, but a threshold specification can be viewed as an approximation to a more general class of nonlinear processes.
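
      A minimal estimation sketch for this framework, assuming one regressor, a stationary threshold variable, and a known threshold value (all names are hypothetical): the regime-specific long-run coefficients are obtained by least squares on the regressor interacted with the regime indicator.

        import numpy as np

        def threshold_cointegration_ols(y, x, z, tau):
            """OLS of y on x with a different long-run coefficient in each regime (z <= tau vs. z > tau)."""
            d = (z <= tau).astype(float)
            X = np.column_stack([np.ones_like(x), x * d, x * (1 - d)])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            residuals = y - X @ beta          # these residuals feed the cointegration (KPSS) check below
            return beta, residuals            # beta[1], beta[2]: cointegrating coefficients per regime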

      Inference tools to assess both the presence of nonlinear cointegration and threshold effects within the long-run equilibrium regression are essential in applied work. Omitting nonlinear components in the long-run equilibrium relationship produces an inconsistent estimate of the cointegrating vector. We propose the use of the KPSS test to assess the presence of threshold cointegration by checking whether the residuals of the cointegration equation follow an I(0) process. We derive the asymptotic distribution of the KPSS test and show that it is the same as when testing for linear cointegration. This result is relevant since the critical values are already tabulated and available in the literature. Concluding that the residual of the long-run equation is an I(0) process is not enough to assess the existence of multiple equilibria. The second test checks for the presence of threshold effects. As in the first chapter of this dissertation, the proposed test depends on whether the threshold value is known or unknown and, in the latter case, on whether it is identified under the null hypothesis. When the threshold value is known, the proposed statistic is the t-test, and we show that it converges to a standard normal distribution. When the threshold value is unknown but identified under the null, we can estimate the threshold parameter in a first stage and use this estimate to construct the t-test and perform standard inference. Using the estimated threshold value as if it were known is only possible because we prove that the least squares estimate of the threshold parameter is super-consistent; in other words, it converges to the population parameter very fast. If the threshold value is unknown and unidentified, the proposed test is the supremum of the Wald statistic and converges to the same distribution as when testing for structural breaks.
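
      The KPSS statistic applied to those residuals can be computed along the following lines (a textbook sketch with a common Bartlett-kernel bandwidth, not necessarily the chapter's exact implementation):

        import numpy as np

        def kpss_stat(u, lags=None):
            """KPSS statistic for the null hypothesis that the residual series u is I(0)."""
            T = len(u)
            if lags is None:
                lags = int(np.floor(4 * (T / 100) ** 0.25))   # usual automatic bandwidth choice
            e = u - u.mean()
            S = np.cumsum(e)
            lrv = e @ e / T                                   # long-run variance, Bartlett kernel
            for k in range(1, lags + 1):
                w = 1 - k / (lags + 1)
                lrv += 2 * w * (e[k:] @ e[:-k]) / T
            return (S @ S) / (T ** 2 * lrv)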

      In this study, we present two empirical applications. In the first, we analyze the presence of multiple long-run equilibria between the short-term and long-term U.S. interest rates, using the annual growth rate of the production index (PI) as the threshold variable. Performing the testing procedure developed here, we conclude that the short-term and long-term interest rates present two equilibria. When the annual growth rate of the PI is above 4.8%, we are in an expansion period, and the cointegrating vector is (1, -0.909), implying that the long-term interest rate is higher than the short-term interest rate. When industrial activity slows down and the annual growth of the PI is below 4.8%, the cointegrating vector is (1, -1.244), indicating that the short-term interest rate is higher than the long-term interest rate. These results offer a plausible explanation of why the slope of the yield curve can be a good predictor of economic health.

      In the second application, we investigate the nonlinear link between U.S. stock prices and dividends using the volatility index (VIX) as the threshold variable. Performing the testing procedure developed here, we conclude that there are two equilibria between stock prices and dividends, driven by the perceived volatility in the market. When volatility is high (the VIX is above 19.57), the implicit discount rate is 12.17%. When volatility is low (the VIX is below 19.57), the implicit discount rate is 1.92%. This result is consistent with economic theory because the return on a risky asset must be higher in periods when volatility is high and lower when volatility is low.

      The third chapter continues within the multivariate framework, introducing the project titled "Quasi-Error Correction Model." Single economic variables, observed as time series, move freely along aimless paths, and yet we may find pairs of series moving closely together, never too far from each other. Economic theory asserts the existence of long-run equilibrium relationships between economic variables, and cointegration is a method to study empirically the forces that keep these variables moving together in the long run. When the variables are cointegrated, the Granger representation theorem ensures the existence of an error correction model (ECM) representation, which describes how the variables respond to deviations from the equilibrium. One can see the ECM as an attractor through which the long-run equilibrium is maintained.
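
      For concreteness, a minimal two-variable sketch of the adjustment step (illustrative only, not the chapter's specification) regresses the change in the dependent variable on the lagged equilibrium error; a coefficient between -1 and 0 means deviations are partially corrected each period.

        import numpy as np

        def ecm_adjustment(y, x, beta):
            """OLS estimate of alpha in dy_t = alpha * (y_{t-1} - beta * x_{t-1}) + e_t."""
            ect = y[:-1] - beta * x[:-1]        # lagged equilibrium error
            dy = np.diff(y)
            alpha = (ect @ dy) / (ect @ ect)    # univariate OLS without intercept
            return alpha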

      The objective of this study is to analyze the ECM representation when the economic variables present multiple long-run equilibria driven by the business cycle. We study the simplest case, in which the long-run equilibrium equation presents a threshold effect, indicating the presence of multiple cointegration relations, while keeping a common adjustment mechanism. This work is a stepping stone towards the most general case, in which both the cointegrating vectors and the adjustment mechanism present threshold effects.

      A critical point of linear cointegration is that taking the exogenous variable to be an I(1) process implies that the dependent variable is also an I(1) process. The ECM is then balanced, in the sense that the right-hand side and the left-hand side of the equation have the same order of integration. The traditional definition of integration cannot classify the stochastic properties of nonlinear random variables, so the quasi-error correction model (QECM) representation is not balanced in terms of integration. This happens because the QECM contains an extra regressor, representing the switching between equilibria, that does not fit the definition of integration. To move away from the tight constraints of the definition of integration, we use the concept of summability introduced by Berenguer-Rico and Gonzalo (2014). In this study, we show that the QECM is balanced under the definition of summability. We also show that the least squares estimate of the adjustment mechanism is consistent and has a normal distribution, allowing us to construct confidence intervals and perform hypothesis tests. Finally, we present an empirical application to U.S. interest rates on instruments with different maturities. The least squares estimate of the error correction term is -0.0584, which represents how much of the deviation is corrected each period when the short-term and long-term interest rates drift apart.

