Prof. Jayanth R. Varma's Financial Markets Blog

Bayesians in finance redux

In November last year, I wrote a brief post about Bayesians in finance. The post was brief because I thought that what I was saying was obvious. A long and inconclusive exchange with Naveen in the comments section of another post has convinced me that a much longer post is called for. The Bayesian approach is perhaps not as obvious as I assumed.

When finance professors walk into a classroom, they want to build on what the statistics professors have covered in their courses. When I am teaching portfolio theory, I do not want to spend half an hour explaining the meaning of covariance; I would like to assume that the statistics professor has already done that. That is how division of labour is supposed to work in a pin factory or in a university.

Unfortunately, there is a problem with this division of labour – most statistics professors teach classical statistics. That is true even of those statisticians who prefer Bayesian techniques in their research work! The result is that many finance students wrongly think that when the finance professors talk of expected returns, variances and betas, they are referring to the classical concepts grounded in relative frequencies. Worse still, some students think that the means and covariances used in finance are sample means and sample covariances and not the population means and covariances.

In business schools like mine, where the case method dominates the pedagogy, these errors are probably less common (or at least do less damage) because in the case context the need for judgemental estimates of almost everything of interest becomes painfully obvious to the students. The certainties of classical statistics dissolve into utter confusion when confronted with messy “case facts”, and this is entirely a good thing.

But if cases are not used, or are used only sparingly, and the statistics courses are predominantly classical, there is a very serious danger that finance students end up thinking of the probability concepts in finance in classical relative-frequency terms.

Nothing could be farther from the truth. To see how differently finance theory looks at these things, it is instructive to go back to some of the key papers that established and developed modern portfolio theory over the years.

Here is how Markowitz begins his Nobel prize-winning paper (“Portfolio Selection”, Journal of Finance, 1952) more than half a century ago:

The process of selecting a portfolio may be divided into two stages. The first stage starts with observation and experience and ends with beliefs about the future performances of available securities. The second stage starts with the relevant beliefs about future performances and ends with the choice of portfolio.

Many finance students would probably be astonished to read words like observation, experience, and beliefs instead of terms like historical data and maximum likelihood estimates. This was the paper that gave birth to modern portfolio theory and there is no doubt in Markowitz’ mind that the probability distributions (and the means, variances and covariances) are subjective beliefs and not classical relative frequencies.

Markowitz is also crystal clear that what matters is not the historical data but beliefs about the future – historical data is of interest only in so far as it helps form those beliefs about the future. He also seems to take it for granted that different people will have different beliefs. He is helping each individual solve his or her portfolio problem and is not bothered about how these choices affect the equilibrium prices in the market.
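Markowitz’s second stage is mechanical once the beliefs are formed. Here is a minimal sketch, assuming made-up subjective beliefs: the expected returns, covariance matrix and risk-free rate below are illustrative judgements about the future, not estimates from any data set.

```python
import numpy as np

# Stage two of Markowitz's process: from beliefs to a portfolio.
# The inputs are hypothetical subjective beliefs about FUTURE returns,
# not sample statistics computed from historical data.
mu = np.array([0.08, 0.12, 0.10])        # believed expected returns
cov = np.array([[0.04, 0.01, 0.00],      # believed covariance matrix
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
rf = 0.03                                # risk-free rate

# Tangency (maximum Sharpe ratio) portfolio: weights proportional to
# the solution of cov @ w = mu - rf, normalised to sum to one.
w = np.linalg.solve(cov, mu - rf)
w = w / w.sum()
print(w.round(3))
```

Two investors with different beliefs feed different inputs into the same machinery and quite properly end up holding different portfolios.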

When William Sharpe developed the Capital Asset Pricing Model that won him the Nobel prize, he was trying to determine the market equilibrium and he had to assume that all investors have the same beliefs but did so with great reluctance:

... we assume homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments – the expected values, standard deviations and correlation coefficients described in Part II. Needless to say, these are highly restrictive and undoubtedly unrealistic assumptions. However, ... it is far from clear that this formulation should be rejected – especially in view of the dearth of alternative models.

But finance theory quickly went back to the idea that investors had different beliefs. Treynor and Black (“How to use security analysis to improve portfolio selection,” Journal of Business, 1973) interpreted the CAPM as saying that:

...in the absence of insight generating expectations different from the market consensus, the investor should hold a replica of the market portfolio.

Treynor and Black devised an elegant model of portfolio choice for investors with out-of-consensus beliefs:

The viewpoint in this paper is that of an individual investor who is attempting to trade profitably on the difference between his expectations and those of a monolithic market so large in relation to his own trading that market prices are unaffected by it.

Similar ideas can be seen in the popular Black-Litterman model (“Global Portfolio Optimization,” Financial Analysts Journal, September-October 1992). Black and Litterman started with the following postulates (a toy sketch of the combination follows the list):

  1. We believe there are two distinct sources of information about future excess returns – investor views and market equilibrium.
  2. We assume that both sources of information are uncertain and are best expressed as probability distributions.
  3. We choose expected excess returns that are as consistent as possible with both sources of information.
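Postulates 2 and 3 have a simple mechanical core. Here is a one-asset sketch with my own toy numbers, not the paper’s: treat the market equilibrium and the investor’s view as two normal distributions for the same excess return and combine them by precision weighting, which is exactly what Bayes’ theorem does with normal distributions.

```python
# Toy one-asset version of the Black-Litterman combination.
# All numbers below are hypothetical illustrations.
mu_eq, sd_eq = 0.07, 0.02      # equilibrium excess return and its uncertainty
mu_view, sd_view = 0.10, 0.03  # investor's view and its own uncertainty

# Precision-weighted average: each source counts in proportion to 1/variance.
prec_eq, prec_view = 1 / sd_eq**2, 1 / sd_view**2
mu_post = (prec_eq * mu_eq + prec_view * mu_view) / (prec_eq + prec_view)
sd_post = (prec_eq + prec_view) ** -0.5
print(round(mu_post, 4), round(sd_post, 4))  # about 0.0792 and 0.0166
```

The combined expected return sits between the two sources, closer to the one held with more confidence; the full model does the same thing with vectors of returns and matrices of uncertainties.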

Even if we stick to the market consensus, the CAPM beta itself has to be interpreted with care. The derivation of the CAPM makes it clear that the beta is actually the ratio of a covariance to a variance and both of these are parameters of the subjective probability distribution that defines the market consensus. Statisticians instantly recognize that the ratio of a covariance to a variance is identical to the formula for a regression coefficient and are tempted to reinterpret the beta as such.
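In symbols, writing r_i for the security return and r_m for the market return:

$$ \beta_i = \frac{\mathrm{Cov}(r_i,\, r_m)}{\mathrm{Var}(r_m)} $$

Both moments here are parameters of the subjective consensus distribution over future returns, not of an empirical distribution of past returns.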

This may be formally correct, but it is misleading because it suggests that the beta is defined in terms of a regression on past data. That is not the conceptual meaning of beta at all. Rosenberg and Guy explained the true meaning of beta very elegantly in their paper (“Prediction of beta from investment fundamentals”, Financial Analysts Journal, 1976) introducing what are now called fundamental betas:

It is instructive to reach a judgement about beta by carrying out an imaginary experiment as follows. One can imagine all the various events in the economy that may occur, and attempt to answer in each case the two questions: (1) What would be the security return as a result of that event? and (2) What would be the market return as a result of that event?
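As a toy illustration of this imaginary experiment (the events, probabilities and returns below are all hypothetical judgements, not data), beta falls out as the ratio of the subjective covariance to the subjective market variance:

```python
import numpy as np

# Rosenberg and Guy's imaginary experiment, made concrete with three
# imagined economic events and judgemental probabilities.
p = np.array([0.2, 0.5, 0.3])           # subjective event probabilities
r_m = np.array([-0.20, 0.05, 0.25])     # market return in each event
r_s = np.array([-0.30, 0.04, 0.35])     # security return in each event

mu_m, mu_s = p @ r_m, p @ r_s           # subjective expected returns
cov = p @ ((r_s - mu_s) * (r_m - mu_m)) # subjective covariance
var = p @ ((r_m - mu_m) ** 2)           # subjective market variance
print(round(cov / var, 3))              # the fundamental beta, about 1.45
```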

This approach is conceptually revealing but is not always practical (though if you are willing to spend enough money, you can access the fundamental betas computed by firms like Barra, which Barr Rosenberg founded and later left). In practice, our subjective belief about the true beta of a company combines several inputs beyond a mechanical regression on past data.

Much of the above discussion is equally valid for estimating Fama-French and other multi-factor betas, for estimating volatility (used for valuing options and for computing convexity effects), for estimating default correlations in credit risk models, and for many other problems.

Good classical statisticians are quite smart and in a practical context would do many of the things discussed above when they have to actually estimate a financial parameter. In my experience, they usually agree that (a) there is a lot of randomness in historical returns; (b) the data generating process does not remain unchanged for too long; (c) therefore in practice there is not enough data to avoid sampling error; and (d) hence it is desirable to use a method in which sampling error is curtailed by fundamental judgement.
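Point (d) is, in effect, shrinkage. A minimal sketch with hypothetical numbers: pull a noisy regression beta toward a judgemental prior of 1.0 (the market-wide average), weighting each by its precision, in the spirit of the Vasicek adjustment:

```python
# Curtailing sampling error with judgement: shrink a historical beta
# toward a judgemental prior. All numbers are hypothetical.
beta_hat, se_hat = 1.45, 0.30      # regression estimate and standard error
beta_prior, sd_prior = 1.00, 0.25  # prior belief and its uncertainty

w = (1 / se_hat**2) / (1 / se_hat**2 + 1 / sd_prior**2)
beta_post = w * beta_hat + (1 - w) * beta_prior
print(round(beta_post, 2))         # about 1.18: the noisy 1.45 is pulled in
```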

On the other side, Bayesians shamelessly use classical tools because Bayes theorem is an omnivore that can digest any piece of information whatever its source and put it to use to revise the prior probabilities. In practical terms, Bayesians and classical statisticians may end up doing very similar stuff.
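As a toy example of the omnivore at work (all numbers invented for illustration), a subjective prior about a fund manager’s skill can digest a thoroughly classical observation, one year of outperformance:

```python
# Bayes' theorem updating a subjective prior with a single observation.
prior = 0.10          # subjective prior probability the manager has skill
p_beat_skill = 0.70   # chance of beating the index in a year, given skill
p_beat_luck = 0.50    # chance of beating it by pure luck

# Observed: the manager beat the index this year. Posterior probability:
posterior = prior * p_beat_skill / (
    prior * p_beat_skill + (1 - prior) * p_beat_luck)
print(round(posterior, 3))  # about 0.135: one data point barely moves the prior
```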

The advantage of shifting to Bayesian statistics and subjective probabilities is primarily conceptual and theoretical. It would eliminate confusion in the minds of students on the ontological status of the fundamental constructs of finance theory.

I am therefore now debating in my own mind whether finance professors should spend some classroom time discussing subjective probabilities.

What would it be like to begin the first course in finance with a case study of subjective probabilities – something like the delightful paper by Karl Borch (“The monster in Loch Ness”, Journal of Risk and Insurance, 1976)? Borch analyzes the probability that the Loch Ness monster exists (and would be captured within a one-year period), given that a large company had to pay a rather high 0.25% premium to obtain a million-pound insurance cover from Lloyd’s of London against that risk. This is obviously a question which a finance student cannot refuse to answer; yet there is no obvious way to interpret the probability in relative frequency terms.
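One hedged way to read the number (my arithmetic, not Borch’s, ignoring Lloyd’s expense loading and profit margin): an actuarially fair premium equates the premium to the expected payout, so a 0.25% premium on a £1,000,000 cover implies

$$ p \approx \frac{\text{premium}}{\text{cover}} = \frac{2{,}500}{1{,}000{,}000} = 0.0025 $$

and since real premiums include loadings, 0.25% is better read as an upper bound on the underwriters’ subjective probability. Whatever this number is, it is not a relative frequency.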

Posted at 9:57 am IST on Sun, 7 Mar 2010

Categories: Bayesian probability, CAPM, post crisis finance, statistics
