## Abstract

This thesis answers the research question: “Why is it that quantitative models of the long-run, future stock return distribution are not widely applied in pension counseling today for risk quantification purposes?”, with the delimitation that only one specific type of explanation is sought. Two scenarios are considered: one where a long sample of usable historical data on the equity premium is available, and one where only a short sample is available to the forecaster of the expected annualized ten-year equity premium. In each scenario, it is investigated how the fundamental difference between applying a historical average of the realized equity premium and manually adjusting for unusual market P/E-values, both in the historical sample and at the time of forecasting, leads in practice to diverging or ambiguous forecasts of the annualized ten-year equity premium. Since the estimation of a central tendency is a prerequisite for numerical risk quantification, this provides one feasible explanation for why quantitative models of the long-run, future stock return distribution are not widely applied in pension counseling today: human judgment by the forecaster is necessary whether a long or a short sample is used, and the resulting forecast therefore cannot be presented to customers as a fact. In Chapter 1, the theoretical background of the historical average approach is presented, including the central role of efficient market theory and the efficient market hypothesis (EMH). According to this school of thought, the best estimator of the expected annualized ten-year equity premium is the arithmetic average of the historical equity premium over a long data sample (100 years). Because any single country's history is one random materialization of the underlying data-generating process (DGP), a cross-sectional world average approximates the unknown parameter value best, but it can still only be estimated with uncertainty.
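The historical average approach above can be sketched in a few lines. This is a minimal illustration with invented numbers, not actual market data: the arithmetic mean of annual realized equity premiums serves as the point estimate, and the standard error of the mean shows why even a long sample leaves the estimate uncertain.

```python
import statistics

# Hypothetical annual equity premiums (stock return minus risk-free rate).
# These figures are invented stand-ins for a long historical sample.
premiums = [0.12, -0.05, 0.08, 0.15, -0.20, 0.10, 0.07, 0.03, -0.08, 0.11]

# Arithmetic-average estimator of the expected annual equity premium.
estimate = statistics.mean(premiums)

# Standard error of the mean: the sampling uncertainty around the estimate,
# which shrinks only with the square root of the sample length (hence the
# case for something like 100 years of data).
std_error = statistics.stdev(premiums) / len(premiums) ** 0.5

print(f"estimated premium: {estimate:.4f}, standard error: {std_error:.4f}")
```

With only ten invented observations the standard error is large relative to the estimate, which is exactly the point: the parameter can only be estimated with uncertainty.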
A necessary condition for the historical average to converge towards the true parameter value is that there are no structural breaks in the data sample. In a Danish context, this may not be the case. In Chapter 2, another school of thought is presented, which uses valuation ratios such as the D/P-ratio or the P/E-ratio to measure deviations of the stock market level from fundamental value. Historically, the D/P-ratio performed well in this respect until the mid-1980s, when increased leverage and changed corporate payout policy permanently altered the payout ratio and thus the dividend yield. The price-smoothed earnings ratio, which adjusts for cyclical variation in year-to-year earnings growth (business cycles) and, unlike the D/P-ratio, is unaffected by corporate capital structure and payout policy, is therefore found to be a better measure of market mis-valuation; it has performed well in this respect over 140 years of US long data. However, this school of thought commits a logical fallacy when it concludes on this basis that the EMH does not hold and that the ten-year real stock return is predictable by the price-smoothed earnings ratio. Scrutiny of the entire empirical data basis of this school of thought revealed that the historical, in-sample relationship between the price-smoothed earnings ratio and the subsequent ten-year return on the US stock market, proxied by the S&P 500, could not be used for forecasting purposes: in technical terms, a unit root was found in the residuals of the regression. The logical fallacy is thus to infer that market irrationality would cease after exactly ten years, which is untenable; there is no empirical basis for predicting when markets will revert to fundamental value.
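The construction of the price-smoothed earnings ratio can be illustrated as follows. This is a sketch with invented figures, not actual S&P 500 data: the current real price is divided by the average of real earnings over the preceding ten years, which smooths out business-cycle swings in the denominator.

```python
# Sketch of the price-smoothed earnings ratio: real price over the
# ten-year average of real (inflation-adjusted) earnings.

def smoothed_pe(real_price, real_earnings_last_10y):
    """Price divided by ten-year average real earnings."""
    assert len(real_earnings_last_10y) == 10
    avg_earnings = sum(real_earnings_last_10y) / 10
    return real_price / avg_earnings

# Hypothetical real earnings per share over ten years; note the
# cyclical dips that a one-year P/E would overreact to.
earnings = [4.2, 4.5, 3.9, 5.1, 5.6, 4.8, 5.9, 6.3, 5.4, 6.0]

print(smoothed_pe(100.0, earnings))
```

A high value of this ratio relative to its historical norm would, on this school's reading, indicate a market level above fundamental value.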
In summary, it is concluded that the price-smoothed earnings ratio is a good measure of stock market deviation from fundamental value, and that the state of this variable at the time of forecasting should be manually incorporated by the analyst who forecasts the expected ten-year equity premium from a long historical sample. In the case study, I investigate the situation where only a short sample is available as a basis for forecasting. The data sample spans December 1999 to December 2009, which I am aware is not enough data to forecast the expected equity premium ten years into the future. I do it anyway because this is the short data sample that was available, and it serves very well to illustrate the practical problems forecasters face when unusual P/E-values are present in the historical sample. The Tech Bubble and its bust play an important role in this data sample and are the ultimate reason why the two approaches view this short sample differently and yield significantly different forecasts from it. Putting the case result into perspective, it is argued that in a Danish context a short data sample will very likely require forecasters of the expected excess return on a diversified Danish stock portfolio to manually take the development of the P/E-ratio over the sampling period into account. In an international context, such adjustments to the average historical equity premium are likewise argued to be likely given a short data sample. I am aware of the narrow scope of this thesis, since other political and legal reasons also exist for why numerical risk quantification is not seen in pension counseling today. My thesis provides the reader with a feasible explanation for this phenomenon from the field of finance.
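Why the two approaches diverge on a short sample can be sketched with a toy decomposition. All numbers here are invented and only mimic a bubble-and-bust pattern like the case study's: the realized annualized return over the window is split into a fundamental part and a revaluation part driven by the drift in the P/E-ratio, which the historical-average approach takes at face value and the valuation-ratio approach strips out.

```python
# Toy decomposition of a realized return over a short window into a
# fundamental component and a P/E-revaluation component. Invented inputs:
pe_start, pe_end = 30.0, 15.0    # unusually high P/E at the start (bubble), halved by the end
years = 10
price_return_total = -0.10       # total realized price change over the window

# Annualized effect of the P/E drift alone over the window.
revaluation = (pe_end / pe_start) ** (1 / years) - 1

# The historical-average approach reads off the realized figure directly.
realized_annualized = (1 + price_return_total) ** (1 / years) - 1

# The valuation-ratio approach strips the revaluation drag out before
# forecasting, since an unusual starting P/E is not expected to recur.
adjusted = (1 + realized_annualized) / (1 + revaluation) - 1

# The adjusted figure is noticeably higher than the raw realized one:
# the same short sample yields significantly different forecasts.
print(realized_annualized, adjusted)
```

The sign flip between the two outputs, negative raw versus positive adjusted, is the kind of ambiguity the case study highlights.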

| Education | MSc in Applied Economics and Finance (Graduate Programme), Final Thesis |
|---|---|
| Language | English |
| Publication date | 2010 |
| Number of pages | 113 |