Market Predictability

 
In this paper we study what professional forecasters predict. We use spectral analysis and state space modeling to decompose economic time series into a trend, business-cycle, and irregular component. To examine which components are captured by professional forecasters, we regress their forecasts on the estimated components extracted from both the spectral analysis and the state space model. For both decomposition methods we find that the Survey of Professional Forecasters can predict almost all variation in the time series due to the trend and business-cycle, but the forecasts contain little or no significant information about the variation in the irregular component.
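As a rough illustration of the approach, the sketch below decomposes a simulated series with a structural state space model from statsmodels and then regresses a placeholder forecast series on the extracted components. The model specification, the data, and the stand-in SPF series are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: state space decomposition into trend, cycle and
# irregular, followed by a regression of forecasts on the components.
# Series, model spec and the stand-in "SPF" forecasts are toy choices.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 200)))   # toy quarterly series

# Local linear trend + stochastic (damped) cycle; the residual is the
# irregular component.
uc = sm.tsa.UnobservedComponents(
    y, level="local linear trend",
    cycle=True, stochastic_cycle=True, damped_cycle=True,
)
res = uc.fit(disp=False)
trend = res.level.smoothed
cycle = res.cycle.smoothed
irregular = y.to_numpy() - trend - cycle

# Stand-in forecast series: tracks trend + cycle but not the irregular part.
forecast = trend + cycle + rng.normal(0, 0.5, len(y))

# Loadings near 1 on trend/cycle and near 0 on the irregular component
# would mirror the paper's finding.
X = sm.add_constant(np.column_stack([trend, cycle, irregular]))
print(sm.OLS(forecast, X).fit().params)
```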
 
In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects, and is bound to discredit skillful forecasts when the signal-to-noise ratio in the data generating process is low. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster’s dilemma. For probabilistic forecasts, proper weighted scoring rules have been proposed as decision-theoretically justifiable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments, and a real-data study on probabilistic forecasts of U.S. inflation and gross domestic product (GDP) growth, we illustrate and discuss the forecaster’s dilemma along with potential remedies.
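One widely used proper weighted scoring rule is the threshold-weighted CRPS. The sketch below scores an ensemble forecast numerically, once with a uniform weight and once with an indicator weight on the right tail; the threshold, the grid, and all data are illustrative assumptions.

```python
# Sketch: threshold-weighted CRPS for an ensemble forecast, approximated
# on a fixed grid. A tail-indicator weight emphasizes extremes while the
# score remains proper, avoiding the forecaster's dilemma.
import numpy as np

def tw_crps(ensemble, obs, grid, weight):
    """Approximate integral of w(z) * (F_hat(z) - 1{obs <= z})^2 dz."""
    F = (ensemble[:, None] <= grid[None, :]).mean(axis=0)  # empirical CDF
    ind = (obs <= grid).astype(float)
    dz = grid[1] - grid[0]
    return float(np.sum(weight(grid) * (F - ind) ** 2) * dz)

rng = np.random.default_rng(1)
ensemble = rng.normal(0.0, 1.0, size=500)     # predictive sample
grid = np.linspace(-8.0, 8.0, 2001)

r = 1.5                                       # tail threshold (assumption)
unit_w = lambda z: np.ones_like(z)            # ordinary CRPS
tail_w = lambda z: (z >= r).astype(float)     # emphasis on the right tail

for obs in (0.2, 2.8):                        # ordinary vs extreme outcome
    print(f"y={obs}: CRPS={tw_crps(ensemble, obs, grid, unit_w):.4f}, "
          f"twCRPS={tw_crps(ensemble, obs, grid, tail_w):.4f}")
```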


 
In recent years, survey-based measures of expectations and disagreement have received increasing attention in economic research. Many forecast surveys ask their participants for fixed-event forecasts. Since fixed-event forecasts have seasonal properties, researchers often use an ad-hoc approach in order to approximate fixed-horizon forecasts using fixed-event forecasts. In this work, we derive an optimal approximation by minimizing the mean-squared approximation error. Like the approximation based on the ad-hoc approach, our approximation is constructed as a weighted sum of the fixed-event forecasts, with easily computable weights. The optimal weights tend to differ substantially from those of the ad-hoc approach. In an empirical application, it turns out that the gains from using optimal instead of ad-hoc weights are very pronounced. While our work focuses on the approximation of fixed-horizon forecasts by fixed-event forecasts, the proposed approximation method is very flexible. The forecast to be approximated as well as the information employed by the approximation can be any linear function of the underlying high-frequency variable. In contrast to the ad-hoc approach, the proposed approximation method can make use of more than two such information-containing functions.
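A small numerical contrast of the two weighting schemes on simulated data: the ad-hoc weights are calendar shares of the target window, while the minimum-MSE weights are the least-squares projection coefficients. The data-generating process and the month count are assumptions made for illustration.

```python
# Sketch: approximating a fixed-horizon forecast by a weighted sum of two
# fixed-event forecasts. Ad-hoc weights come from the calendar; optimal
# weights minimize the mean-squared approximation error. Toy data only.
import numpy as np

rng = np.random.default_rng(2)
T = 240
x1 = rng.normal(2.0, 1.0, T)                 # current-year fixed-event forecast
x2 = 0.6 * x1 + rng.normal(0, 0.8, T)        # next-year fixed-event forecast
y = 0.3 * x1 + 0.7 * x2 + rng.normal(0, 0.3, T)  # fixed-horizon target

k = 4                                        # months left in current year (assumption)
w_adhoc = np.array([k / 12, 1 - k / 12])

# Optimal weights: minimize E[(y - w'x)^2], i.e. w = (X'X)^{-1} X'y
X = np.column_stack([x1, x2])
w_opt = np.linalg.solve(X.T @ X, X.T @ y)

for name, w in (("ad-hoc ", w_adhoc), ("optimal", w_opt)):
    mse = np.mean((y - X @ w) ** 2)
    print(f"{name}: weights={np.round(w, 3)}, MSE={mse:.4f}")
```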


 
We propose a Bayesian estimation method for Vector Autoregressions (VARs) featuring asymmetric priors and time-varying volatilities that allows for a possibly very large cross-sectional dimension of the system, N. The method is based on a simple triangularisation which allows one to simulate the conditional mean coefficients of the VAR by drawing them equation by equation. This strategy reduces the computational complexity by a factor of N² with respect to the existing algorithms routinely used in the literature and by practitioners. Importantly, our new estimation algorithm can be easily obtained by modifying just one of the steps of the existing algorithms. We illustrate the benefits of our proposed estimation method with numerical examples and empirical applications in the context of forecasting and structural analysis.
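A heavily simplified sketch of the key step: once the system is triangularised, each equation's conditional-mean coefficients can be drawn from a small univariate conjugate posterior, so the cost scales with N rather than with one joint draw. The prior, the fixed volatilities, and the toy dimensions below are placeholders, not the paper's full algorithm.

```python
# Sketch: the equation-by-equation step for drawing VAR conditional-mean
# coefficients. In a triangularised system each equation is treated as a
# univariate regression, so the draw is N small posteriors instead of one
# joint high-dimensional one. Priors and volatilities are placeholders.
import numpy as np

rng = np.random.default_rng(3)
N, p, T = 4, 2, 120                       # small toy dimensions
Y = rng.normal(size=(T, N))
X = np.column_stack([np.ones(T)] + [np.roll(Y, l, axis=0) for l in range(1, p + 1)])
X, Y = X[p:], Y[p:]                       # drop rows contaminated by wraparound
K = X.shape[1]                            # 1 + N*p regressors per equation

V0_inv = np.eye(K) / 10.0                 # prior precision (placeholder)
sigma2 = np.ones(N)                       # volatilities, fixed for the sketch

def draw_equation(y_j, X_j, s2):
    """One conjugate normal draw of a single equation's coefficients."""
    P = X_j.T @ X_j / s2 + V0_inv         # posterior precision
    b = np.linalg.solve(P, X_j.T @ y_j / s2)
    L = np.linalg.cholesky(np.linalg.inv(P))
    return b + L @ rng.normal(size=len(b))

# Equation-by-equation: each draw conditions only on its own regression,
# so the cost grows with N instead of with one joint N*K system.
B = np.column_stack([draw_equation(Y[:, j], X, sigma2[j]) for j in range(N)])
print(B.shape)                            # (K, N) coefficient draw
```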
 
We analyze forecasts of consumption, nonresidential investment, residential investment, government spending, exports, imports, inventories, gross domestic product, inflation, and unemployment prepared by the staff of the Board of Governors of the Federal Reserve System for meetings of the Federal Open Market Committee from 1997 to 2008, called the Greenbooks. We compare the root mean squared error, mean absolute error, and the proportion of directional errors of Greenbook forecasts of these macroeconomic indicators to the errors from three forecasting benchmarks: a random walk, a first-order autoregressive model, and a Bayesian model averaged forecast from a suite of univariate time-series models commonly taught to first-year economics graduate students. We estimate our forecasting benchmarks both on end-of-sample vintage and real-time vintage data. We find that Greenbook forecasts significantly outperform our benchmark forecasts for horizons less than one quarter ahead. However, by the one-year forecast horizon, typically at least one of our forecasting benchmarks performs as well as Greenbook forecasts. Greenbook forecasts of personal consumption expenditures and unemployment tend to do relatively well, while Greenbook forecasts of inventory investment, government expenditures, and inflation tend to do poorly.
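The sketch below computes the three error measures for two of the benchmarks (random walk and AR(1)) on a simulated series; the real exercise uses vintage data and adds the Bayesian model averaged benchmark, both omitted here.

```python
# Sketch: RMSE, MAE and directional errors for random walk and AR(1)
# one-step-ahead forecasts on a toy series.
import numpy as np

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(0.2, 1.0, 200))      # toy quarterly level series

rw = y[:-1]                                   # random walk: y_hat[t+1] = y[t]
dy = np.diff(y)
phi = np.polyfit(dy[:-1], dy[1:], 1)[0]       # AR(1) on differences (fit once on
ar1 = y[1:-1] + phi * dy[:-1]                 # the full sample, for brevity)

def rmse_mae(actual, fcst):
    err = actual - fcst
    return np.sqrt(np.mean(err ** 2)), np.mean(np.abs(err))

print("RW  rmse/mae:", rmse_mae(y[1:], rw))
print("AR1 rmse/mae:", rmse_mae(y[2:], ar1))

# Directional errors: predicted and realized changes disagree in sign.
# (The random walk predicts "no change", so its direction is undefined.)
dir_err = np.mean(np.sign(ar1 - y[1:-1]) != np.sign(y[2:] - y[1:-1]))
print("AR1 directional error rate:", round(float(dir_err), 3))
```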
 
Macroeconomists are increasingly working with large Vector Autoregressions (VARs) where the number of parameters vastly exceeds the number of observations. Existing approaches either involve prior shrinkage or the use of factor methods. In this paper, we develop an alternative based on ideas from the compressed regression literature. It involves randomly compressing the explanatory variables prior to analysis. A huge-dimensional problem is thus turned into a much smaller, more computationally tractable one. Bayesian model averaging can be done over various compressions, attaching greater weight to compressions which forecast well. In a macroeconomic application involving up to 129 variables, we find compressed VAR methods to forecast better than either factor methods or large VAR methods involving prior shrinkage.
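A sketch of the compression idea on a generic high-dimensional regression: random Gaussian projections shrink N regressors to m, a small model is fit per projection, and forecasts are averaged with fit-based weights. The BIC weighting below is an illustrative stand-in for the paper's model-averaging scheme.

```python
# Sketch: randomly project N regressors down to m dimensions, fit a small
# model per projection, and average forecasts with BIC-based weights.
import numpy as np

rng = np.random.default_rng(5)
T, N, m, n_draws = 150, 60, 5, 20
X = rng.normal(size=(T, N))                     # stand-in for lagged VAR regressors
beta = np.zeros(N); beta[:3] = [0.8, -0.5, 0.3]
y = X @ beta + rng.normal(0, 0.5, T)

fcsts, bics = [], []
x_new = rng.normal(size=N)                      # regressors for the forecast period
for _ in range(n_draws):
    Phi = rng.normal(size=(m, N)) / np.sqrt(N)  # random compression matrix
    Z = X @ Phi.T                               # compressed regressors (T x m)
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ b
    bics.append(T * np.log(resid @ resid / T) + m * np.log(T))
    fcsts.append((Phi @ x_new) @ b)

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                    # BMA-style weights over compressions
print("averaged forecast:", float(w @ np.array(fcsts)))
```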
 
Despite the extensive literature on cross-sectional aspects of momentum, time-variation in momentum profitability receives little attention. We present a comprehensive examination of the time-series predictability of momentum profits. We uncover a list of intriguing features of time-variation in momentum profits: (1) market volatility has significant power to forecast momentum payoffs, which is even more robust than that of market state or business cycle variables; (2) the time-series predictability is centered on loser stocks; and (3) the time-series patterns appear to be at odds with the cross-sectional results. These new findings jointly present a tough challenge to existing theories on momentum.
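Finding (1) is the kind of result a predictive regression delivers; a sketch on simulated data, with HAC standard errors, is below. The series and coefficients are invented for illustration.

```python
# Sketch: predictive regression of monthly momentum payoffs on lagged
# realized market volatility, with HAC (Newey-West) standard errors.
# Both series are simulated stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 480
vol = np.abs(rng.normal(0.04, 0.015, T))          # market volatility in month t
mom = 0.01 - 0.15 * vol + rng.normal(0, 0.03, T)  # momentum payoff in month t+1

X = sm.add_constant(vol)
res = sm.OLS(mom, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(res.params)    # a negative slope: high volatility forecasts low payoffs
print(res.tvalues)
```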
 
Garry119:
All right. The market is absolutely predictable: after up comes down, and after down comes up. And so on, constantly)))
Are you sure?
 

Trading Systems

Forecasting interest rates is of great concern for financial researchers, economists and players in the fixed income markets. The purpose of this study is to develop an appropriate model for forecasting short-term interest rates, i.e., the commercial paper rate, the implicit yield on 91-day treasury bills, the overnight MIBOR rate and the call money rate. The short-term interest rates are forecast using univariate models (Random Walk, ARIMA, ARMA-GARCH and ARMA-EGARCH), and the appropriate model for forecasting is determined over a six-year period from 1999. The results show that the interest rate time series exhibit volatility clustering, and hence GARCH-based models are more appropriate for forecasting than the other models. It is found that for the commercial paper rate the ARIMA-EGARCH model is the most appropriate, while for the implicit yield on 91-day treasury bills, the overnight MIBOR rate and the call money rate, the ARIMA-GARCH model is the most appropriate for forecasting.
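A sketch of the model fitting with the `arch` package: an AR(1) mean with GARCH(1,1) and EGARCH(1,1) volatility on a simulated series standing in for one of the rates, followed by one-step-ahead forecasts. Model orders and data are assumptions.

```python
# Sketch: AR-GARCH vs AR-EGARCH on a toy stand-in for daily changes in a
# short rate (e.g., the call money rate), with one-step-ahead forecasts.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
rate_changes = rng.standard_t(df=5, size=1500) * 0.1   # toy daily changes

for vol in ("GARCH", "EGARCH"):
    am = arch_model(rate_changes, mean="AR", lags=1, vol=vol, p=1, q=1)
    res = am.fit(disp="off")
    fc = res.forecast(horizon=1)
    print(f"{vol}: BIC={res.bic:.1f}, "
          f"mean={float(fc.mean.iloc[-1, 0]):.4f}, "
          f"var={float(fc.variance.iloc[-1, 0]):.5f}")
```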

 
Using two sets of data, including daily prices (open, close, high and low) of all S&P 500 stocks between 1992 and 1996, we perform a statistical test of the predictive capability of candlestick patterns. Out-of-sample tests indicate statistical significance at the level of 36 standard deviations from the null hypothesis, with a profit of almost 1% during a two-day holding period. The test is essentially non-parametric: it utilizes standard definitions of three-day candlestick patterns and removes conditions on magnitudes. The results provide evidence that traders are influenced by price behavior. To the best of our knowledge, this is the first scientific test to provide strong evidence in favor of any trading rule or pattern on a large unrestricted scale.
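A sketch of the test design on simulated prices: a three-day pattern is defined by candle direction only (magnitudes removed), and the two-day holding return after the pattern is compared with the unconditional one via a simple z-statistic. The "three white soldiers"-style pattern and all inputs are illustrative assumptions.

```python
# Sketch: direction-only three-day candlestick pattern, two-day holding
# return, and a z-test against the unconditional mean. Simulated prices.
import numpy as np

rng = np.random.default_rng(8)
n = 10_000
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))
opn = close * np.exp(rng.normal(0, 0.003, n))          # toy open prices

white = close > opn                                    # white (up) candle
# pattern on days t-2, t-1, t: three white candles in a row
pattern = np.concatenate([[False, False], white[2:] & white[1:-1] & white[:-2]])

ret2 = close[2:] / close[:-2] - 1      # two-day return starting at day t
valid = pattern[:-2]                   # pattern days with a full holding period

k = int(valid.sum())
diff = ret2[valid].mean() - ret2.mean()
z = diff / (ret2.std(ddof=1) / np.sqrt(k))
print(f"pattern count={k}, mean excess 2-day return={diff:.5f}, z={z:.2f}")
```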