Law, Madoff, fairness and interest rates
I would grant that there is probably no fair way for the courts to deal with the mess created by the Madoff fraud. But I am intrigued by the discussions about fairness in the ruling of the US Bankruptcy Court about the rights of the Madoff victims.
I have nothing to say about the part of the judgement which interprets the law, and will confine myself to the fairness example that the judge discusses (page 32):
Investor 1 invested $10 million many years ago, withdrew $15 million in the final year of the collapse of Madoff’s Ponzi scheme, and his fictitious last account statement reflects a balance of $20 million. Investor 2 invested $15 million in the final year of the collapse of Madoff’s Ponzi scheme, in essence funding Investor 1’s withdrawal, and his fictitious last account statement reflects a $15 million deposit. Consider that the Trustee is able to recover $10 million in customer funds and that the Madoff scheme drew in 50 investors, whose fictitious last account statements reflected “balances” totaling $100 million but whose net investments totaled only $50 million.
The judge believes that Investor 1 has no net investment “because he already withdrew more than he deposited” while Investor 2 has a $15 million net investment. Since the recovery of $10 million is 20% of the $50 million net investment of all investors put together, Investor 2 is entitled to $3 million and Investor 1 is entitled to nothing.
The court states that Madoff apparently started his Ponzi scheme (“investment advisory services”) in the 1960s. Since the fraud was exposed at the end of 2008, the Ponzi scheme went on for maybe 40 years. Let us therefore take “many years ago” in the judge’s example to mean 20 years ago.
Between 1988 and 2008, the 3-month US Treasury Bill yield averaged a little over 4%, so that Investor 1’s $10 million, compounded at the risk-free rate, would be worth about $22 million in 2008. After the withdrawal of $15 million, there would still be $7 million left – a little less than half of Investor 2’s $15 million. If you believe in the time value of money, Investor 1 should get a little less than half of what Investor 2 gets. The judge thinks Investor 1 should get nothing.
Alternatively, if you believe that the purchasing power of money is important, then the US consumer price inflation during those 20 years averaged about 3%. The $10 million that Investor 1 put in two decades ago would be worth $18 million in 2008 dollars and Investor 1 would still have $3 million of net investment left after the withdrawal of $15 million. Yet the judge thinks he should get nothing.
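For readers who want to check the arithmetic, here is a quick sketch in Python of the two computations above; the 20-year horizon, the 4% T-bill yield and the 3% inflation rate are the rounded assumptions already stated in the text:

```python
deposit = 10.0      # Investor 1's deposit in $ million, assumed made 20 years ago
withdrawal = 15.0   # withdrawn in the final year, $ million

# Time value of money at the risk-free rate (~4%)
tbill_value = deposit * 1.04 ** 20       # about 21.9
print(tbill_value - withdrawal)          # about 6.9: a little less than half of 15

# Purchasing power at ~3% consumer price inflation
cpi_value = deposit * 1.03 ** 20         # about 18.1
print(cpi_value - withdrawal)            # about 3.1: still a positive net investment
```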
Posted at 3:08 pm IST on Thu, 11 Mar 2010 permanent link
Categories: bankruptcy, fraud, law
Report on Rating Agency Regulation in India
Last week, the Reserve Bank of India published the Report of the Committee on Comprehensive Regulation of Credit Rating Agencies appointed by the Government of India (more precisely, by the High Level Coordination Committee on Financial Markets). This was accompanied by a study by the National Institute for Securities Markets entitled An assessment of the long term performance of the Credit Rating Agencies in India.
The report provides a comprehensive analysis of the issues mentioned in the terms of reference for the committee. Unfortunately, those terms of reference did not include what I believe are the only two questions worth looking at about credit rating in the aftermath of the global financial crisis:
- How should India eliminate or at least reduce the use of credit ratings in financial sector regulations?
- How should India try to introduce greater competition in credit rating?
Rating agencies are fond of saying that “AAA” is just the shortest editorial in the world. Regulators should take the rating agencies at their word and act accordingly. They should give as little regulatory sanction to these ratings as they give to a newspaper editorial, and they should make it as easy to start a rating agency as it is to start a newspaper. These are the two issues that I think need urgent consideration.
As I pointed out in this blog post last year, the US is an outlier in terms of the use of credit ratings in its regulations, and since India has largely adopted US style regulations, it too is an outlier. By unilateral action, India can eliminate all use of credit ratings except what is required by Basel-II. Even Basel-II is not something for which Indian regulators can disown responsibility – India is now a member of the Basel Committee. Indian regulators should be providing thought leadership on eliminating credit rating from Basel-III or Basel-IV.
I am disappointed that India’s apex regulatory forum (the High Level Coordination Committee on Financial Markets), having recognized the important role of credit rating agencies in the global crisis, did not bother to ask the truly important questions. All the more so because the report did a good job of addressing the questions that were referred to it in the terms of reference. If only the same bunch of competent people had been asked the right questions!
Posted at 6:05 pm IST on Tue, 9 Mar 2010 permanent link
Categories: credit rating, regulation
Bayesians in finance redux
In November last year, I wrote a brief post about Bayesians in finance. The post was brief because I thought that what I was saying was obvious. A long and inconclusive exchange with Naveen in the comments section of another post has convinced me that a much longer post is called for. The Bayesian approach is perhaps not as obvious as I assumed.
When finance professors walk into a classroom, they want to build on what the statistics professors have covered in their courses. When I am teaching portfolio theory, I do not want to spend half an hour explaining the meaning of covariance; I would like to assume that the statistics professor has already done that. That is how division of labour is supposed to work in a pin factory or in a university.
Unfortunately, there is a problem with this division of labour – most statistics professors teach classical statistics. That is true even of those statisticians who prefer Bayesian techniques in their research work! The result is that many finance students wrongly think that when the finance professors talk of expected returns, variances and betas, they are referring to the classical concepts grounded in relative frequencies. Worse still, some students think that the means and covariances used in finance are sample means and sample covariances and not the population means and covariances.
In business schools like mine, where the case method dominates the pedagogy, these errors are probably less common (or at least do less damage) because in the case context, the need for judgemental estimates for almost everything of interest becomes painfully obvious to the students. The certainties of classical statistics dissolve into utter confusion when confronted with messy “case facts”, and this is entirely a good thing.
But if cases are not used or used sparingly, and the statistics courses are predominantly classical, there is a very serious danger that finance students end up thinking of the probability concepts in finance in classical relative frequency terms.
Nothing could be farther from the truth. To see how differently finance theory looks at these things, it is instructive to go back to some of the key papers that established and developed modern portfolio theory over the years.
Here is how Markowitz begins his Nobel prize winning paper (“Portfolio Selection”, Journal of Finance, 1952), written more than half a century ago:
The process of selecting a portfolio may be divided into two stages. The first stage starts with observation and experience and ends with beliefs about the future performances of available securities. The second stage starts with the relevant beliefs about future performances and ends with the choice of portfolio.
Many finance students would probably be astonished to read words like observation, experience, and beliefs instead of terms like historical data and maximum likelihood estimates. This was the paper that gave birth to modern portfolio theory and there is no doubt in Markowitz’ mind that the probability distributions (and the means, variances and covariances) are subjective beliefs and not classical relative frequencies.
Markowitz is also crystal clear that what matters is not the historical data but beliefs about the future – historical data is of interest only in so far as it helps form those beliefs about the future. He also seems to take it for granted that different people will have different beliefs. He is helping each individual solve his or her portfolio problem and is not bothered about how these choices affect the equilibrium prices in the market.
When William Sharpe developed the Capital Asset Pricing Model that won him the Nobel prize, he was trying to determine the market equilibrium, and so he had to assume that all investors have the same beliefs – but he did so with great reluctance:
... we assume homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments – the expected values, standard deviations and correlation coefficients described in Part II. Needless to say, these are highly restrictive and undoubtedly unrealistic assumptions. However, ... it is far from clear that this formulation should be rejected – especially in view of the dearth of alternative models
But finance theory quickly went back to the idea that investors had different beliefs. Treynor and Black (“How to use security analysis to improve portfolio selection,” Journal of Business, 1973) interpreted the CAPM as saying that:
...in the absence of insight generating expectations different from the market consensus, the investor should hold a replica of the market portfolio.
Treynor and Black devised an elegant model of portfolio choice when investors hold out-of-consensus beliefs:
The viewpoint in this paper is that of an individual investor who is attempting to trade profitably on the difference between his expectations and those of a monolithic market so large in relation to his own trading that market prices are unaffected by it.
Similar ideas can be seen in the popular Black Litterman model (“Global Portfolio Optimization,” Financial Analysts Journal, September-October 1992). Black and Litterman started with the following postulates:
- We believe there are two distinct sources of information about future excess returns – investor views and market equilibrium.
- We assume that both sources of information are uncertain and are best expressed as probability distributions.
- We choose expected excess returns that are as consistent as possible with both sources of information.
Even if we stick to the market consensus, the CAPM beta itself has to be interpreted with care. The derivation of the CAPM makes it clear that the beta is actually the ratio of a covariance to a variance and both of these are parameters of the subjective probability distribution that defines the market consensus. Statisticians instantly recognize that the ratio of a covariance to a variance is identical to the formula for a regression coefficient and are tempted to reinterpret the beta as such.
This may be formally correct, but it is misleading because it suggests that the beta is defined in terms of a regression on past data. That is not the conceptual meaning of beta at all. Rosenberg and Guy explained the true meaning of beta very elegantly in their paper (“Prediction of beta from investment fundamentals”, Financial Analysts Journal, 1976) introducing what are now called fundamental betas:
It is instructive to reach a judgement about beta by carrying out an imaginary experiment as follows. One can imagine all the various events in the economy that may occur, and attempt to answer in each case the two questions: (1) What would be the security return as a result of that event? and (2) What would be the market return as a result of that event?
This approach is conceptually revealing but is not always practical (though if you are willing to spend enough money, you can access the fundamental betas computed by firms like Barra which Barr Rosenberg founded and later left). In practice, our subjective belief about the true beta of a company involves at least the following inputs:
- The beta is equal to unity unless there is enough reason to believe otherwise. The value of unity (the beta of an average stock) provides an important anchor which must be taken into account even when there is other evidence. It is not uncommon to find that simply equating beta to unity outperforms the beta estimated by naive regression.
- What this means is that betas obtained by other means must be shrunk towards unity. An estimated beta exceeding one must be reduced and an estimated beta below one must be increased. One can do this through a formal Bayesian process (for example, by using a Bayes-Stein shrinkage estimator), or one can do it purely subjectively based on the confidence that one has in the original estimate (a minimal sketch of such shrinkage follows this list).
- The beta depends on the industry to which the firm belongs. Since portfolio betas can be estimated more accurately than individual betas, this is often the most important input into arriving at a judgement about the true beta of a company.
- The beta depends on the leverage of the company and if the leverage of the company is significantly different from that of the rest of the industry, this needs to be taken into account by unlevering and relevering the beta.
- The beta estimated by regressing the returns of the stock on the market over different time periods provides useful information about the beta provided the business mix and the leverage have not changed too much over the sample period. Since this assumption usually precludes very long sample periods, the beta estimated through this route typically has a large confidence band and becomes meaningful only when combined with the other inputs.
- Subjective beliefs about possible future changes in the beta because of changing business strategy or financial strategy must also be taken into account.
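To make the shrinkage idea concrete, here is a minimal sketch. The fixed 50% weight on the prior is purely illustrative – a formal Bayes-Stein scheme would derive the weight from the relative precision of the prior and the sample – and the simulated returns stand in for real data:

```python
import numpy as np

def shrunk_beta(stock_ret, mkt_ret, prior_beta=1.0, weight=0.5):
    """Blend the regression beta with a prior belief (unity or an industry beta)."""
    cov = np.cov(stock_ret, mkt_ret, ddof=1)
    sample_beta = cov[0, 1] / cov[1, 1]   # regression slope = covariance / variance
    return weight * prior_beta + (1 - weight) * sample_beta

# Illustration: 60 months of simulated returns with a "true" beta of 1.4
rng = np.random.default_rng(0)
mkt = rng.normal(0.01, 0.05, 60)
stock = 1.4 * mkt + rng.normal(0.0, 0.08, 60)
print(shrunk_beta(stock, mkt))   # the noisy sample estimate, pulled towards 1.0
```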
Much of the above discussion is valid for estimating Fama-French betas and other multi-factor betas, for estimating the volatility (used for valuing options and for computing convexity effects), for estimating default correlations in credit risk models and many other contexts.
Good classical statisticians are quite smart and in a practical context would do many of the things discussed above when they have to actually estimate a financial parameter. In my experience, they usually agree that (a) there is a lot of randomness in historical returns; (b) the data generating process does not remain unchanged for too long; (c) therefore in practice there is not enough data to avoid sampling error; and (d) hence it is desirable to use a method in which sampling error is curtailed by fundamental judgement.
On the other side, Bayesians shamelessly use classical tools because Bayes theorem is an omnivore that can digest any piece of information whatever its source and put it to use to revise the prior probabilities. In practical terms, Bayesians and classical statisticians may end up doing very similar stuff.
The advantage of shifting to Bayesian statistics and subjective probabilities is primarily conceptual and theoretical. It would eliminate confusion in the minds of students on the ontological status of the fundamental constructs of finance theory.
I am now therefore debating in my own mind whether finance professors should spend some time in the classroom discussing subjective probabilities.
What would it be like to begin the first course in finance with a case study of subjective probabilities – something like the delightful paper by Karl Borch (“The monster in Loch Ness”, Journal of Risk and Insurance, 1976)? Borch analyzes the probability that the Loch Ness monster exists (and would be captured within a one year period), given that a large company paid a rather high 0.25% premium to obtain a million pound insurance cover from Lloyd’s of London against that risk. This is obviously a question which a finance student cannot refuse to answer; yet there is no obvious way to interpret this probability in relative frequency terms.
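One crude way to make the question concrete – this is not Borch’s analysis, and the 50% loading assumed below is purely illustrative – is to back an implied probability out of the premium:

```python
sum_insured = 1_000_000          # pounds
premium = 0.0025 * sum_insured   # the 0.25% premium quoted above

# If Lloyd's prices at or above the expected loss, the premium rate itself
# bounds the implied one-year capture probability from above:
print(premium / sum_insured)             # 0.0025

# With an assumed 50% loading for expenses and profit, a point estimate:
print(premium / (1.5 * sum_insured))     # roughly 0.0017
```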
Posted at 9:57 am IST on Sun, 7 Mar 2010 permanent link
Categories: Bayesian probability, CAPM, post crisis finance, statistics
Greek bond issue
That Greece could borrow money at all (even if at 3% above the risk-free rate) seems to have calmed the markets a great deal. I am reminded of this piece of Rothschild wisdom:
You are certainly right that there is much to be earned from a government which has no money. But you have to take risks.
That is James Rothschild writing to Nathan Rothschild nearly two centuries ago as quoted by Niall Ferguson, The House of Rothschild: Money’s Prophets 1798-1848, Chapter 4.
Posted at 7:19 pm IST on Fri, 5 Mar 2010 permanent link
Categories: sovereign risk
Regulation by placebo
This is a very nice phrase that I picked up from SEC Commissioner Kathleen Casey’s speech dissenting from the short selling rules that the SEC introduced recently:
But this is regulation by placebo; we are hopeful that the pill we’ve just had the patient take, although lacking potency, will convince him that everything is all right.
Casey’s speech itself was a bit of political grandstanding and was in the context of an SEC vote that went on predictable party lines. I am not therefore inclined to take the speech too seriously. But the phrase “regulation by placebo” very elegantly captures a phenomenon that is all too common in financial sector regulation all over the world.
Securities regulators, banking regulators and other financial regulators have this great urge to be seen to be doing something, regardless of whether that something is the right thing or not. The result is often a half-hearted measure that does not stop the wrongdoing but convinces the public that the evildoers have been kept at bay.
Regular readers of my blog know that I am against short sale restrictions in general. At the very least, I would like short sale restrictions to be accompanied by corresponding and equally severe restrictions on leveraged longs. If you are not allowed to short a stock when it has dropped 10%, then you should not be allowed to buy a stock (with borrowed money) when the stock has risen 10%. Market manipulation is done far more often by longs than by shorts!
Posted at 6:58 pm IST on Wed, 3 Mar 2010 permanent link
Categories: regulation, short selling
Regulation of mutual funds
Morley and Curtis wrote a very interesting paper earlier this month on the regulation of mutual funds. Their fundamental point is that the open-end mutual fund presents governance problems of a completely different nature from that of normal companies.
Unit holders do not sell their shares in the market; they redeem them from the issuing funds for cash. This uniquely effective form of exit almost completely eliminates the role of voice – investors have no incentive to use voting or any other form of activism.
Morley and Curtis advocate product-style regulation of mutual funds. As I understand this, we must treat unit holders as customers and not as owners of the fund. They also advocate regulations that make it easier for investors to exercise exit rights effectively.
I think this insight is fundamentally correct. In the US context where mutual funds are organized as companies with unit holders as shareholders, this implies a huge change in the regulatory framework.
In the Indian context, mutual funds are organized as trusts and investors are legally the beneficiaries of the trust rather than the owners. Indian regulation already uses exit as a regulatory device. Whenever there is any change in the fundamental characteristics of a fund or in the ownership of the asset manager, the fund is required to provide an opportunity to investors to exit at net asset value without any exit load.
However, the trust structure creates another set of confusing and meaningless legal requirements. The governance is divided between the board of the asset management company and the trustees of the trust. This creates a duplication of functions and regulators might hope that if one of them fails, the other would still operate.
It is however more likely that each level might rely on the other to do the really serious work. The job might be done less effectively than if the locus of regulation is made clearer. There is probably merit in creating a brain-dead trust structure and making the board of the asset management company the primary focus of regulation. This is more consistent with the idea of unit holders as customers.
One implication that Morley and Curtis do not draw from their analysis is that closed-end funds are dramatically different from open-end funds and require totally different regulatory structures. Regulations in most countries tend to regard these two types of funds as minor variants of each other and therefore apply similar regulatory regimes to both. If Morley and Curtis are right, we must treat open-end unit holders as customers and closed-end unit holders as owners.
The governance of a closed-end fund should more closely mimic that of a normal corporation. Regulations should permit the unit holders of a closed-end fund to easily throw out the asset manager or even to wind up the fund. The trust structure in India does not give unit holders formal ownership rights – explicit regulations are required to vest them with such rights.
Posted at 7:58 pm IST on Wed, 24 Feb 2010 permanent link
Categories: mutual funds, regulation
Short selling and public issues
When the Indian government company NTPC was conducting a public offering of shares, it was alleged that many institutional investors sold NTPC shares short on a large scale (by selling stock futures). This gave rise to some talk about suspending futures trading in the shares of a company while a public issue is in progress. Thankfully, the government and the regulators did not do anything as foolish as this.
The US has a different approach to the problem – Rule 105 (Regulation M) prohibits the purchase of offering shares by any person who sold short the same securities within five business days before the pricing of the offering. Last month, the SEC brought charges against some hedge funds for violating this rule.
Obviously, Rule 105 is a far better solution than shutting down the whole market, but it is necessary to ask whether even this is necessary. Take for example, the SEC’s argument that:
Short selling ahead of offerings can reduce the proceeds received by public companies and their shareholders by artificially depressing the market price shortly before the company prices its offering.
We can turn this around to say:
Short sale restrictions ahead of offerings can allow companies to sell their shares to the public at inflated prices by artificially increasing the market price shortly before the company prices its offering.
Why do regulators have to assume that issuers of capital are saints and that investors are the sinners in all this? Provided there is complete transparency about short selling (and open interest in the futures market), it is difficult to see why short selling with the full intention to cover in the public issue should depress the issue price.
The empirical evidence is that an equity issue has a negative impact on the share price. This is partly due to the signalling effect of raising equity rather than debt, and partly due to the need to induce a portfolio rebalancing of all investors to accommodate the new shares.
Now imagine that a hedge fund short sells with the intention to buy back in the issue. Since the short seller is committed to buying in the issue, a part of the issue is effectively pre-sold. To this extent, the price impact of an equity issue is reduced. While the short selling could depress prices, this would be offset by the lower price impact of the issue itself.
In short, the short sellers would not change prices at all. What they would do is to advance the effective date of the public issue. If there is a 100 million share issue happening on the 15th and the hedge funds short 20 million shares on the 10th, then somebody has to take the long position of 20 million shares on the 10th itself. For this amount of portfolio rebalancing to happen on the 10th, there has to be a price adjustment and this can be quite visible.
But the flip side is that on the 15th there are only 80 million shares to be bought by long only investors. There is less price adjustment required on that date. The total portfolio adjustment required with or without short selling is the same – 100 million shares. The only question is whether the price adjustment happens on the 15th or earlier.
In an efficient market, the impact of unrestricted short selling would be to force the entire price adjustment to happen on the announcement date itself. The issue itself would then have zero price impact and this would be a good thing.
Because of limited short selling in the past, we are accustomed to issues being priced at a discount to the market price on the pricing date. With unlimited short selling, this would disappear. If the short selling were excessive, the issue may even be priced at a premium as the shorts scramble to cover their positions. It will take some time for market participants to adjust to the new environment. Regulators should just step back and let the market participants learn and adjust.
Posted at 8:30 pm IST on Sat, 20 Feb 2010 permanent link
Categories: equity markets, short selling
Taxation of securities
I wrote a column in the Financial Express today about the taxation of securities.
Over a period of time, several distortions have crept into the taxation of investment income in India. The budget later this month provides an opportunity to correct some of these without waiting for the sweeping reforms proposed in the direct taxes code. I would highlight the non-taxation of capital gains on equities, the securities transaction tax (STT) and the taxation of foreign institutional investors.
In the current system, the capital gain on sale of equity shares is not taxable provided the sale takes place on an exchange. Of course, there is STT, but the STT rate is a tiny fraction of the rate that applies to normal capital gains.
The taxation of capital gains is itself very low compared to the taxation of normal income. There is no justification for taxing capital gains at a lower rate than, say, salary income. After all, a substantial part of the salary of skilled workers is a return on the investment in human capital that led to the development of their skills.
Why should returns on human capital be taxed at normal rates and returns on financial capital at concessional rates? Even within financial assets, why should, say, interest income be taxed at normal rates while capital gains are taxed at concessional rates?
The discussion paper on the direct taxes code argues that a concessional rate is warranted because the cumulative capital gains of several years are brought to tax in one year and this would push the tax rate to a higher slab. This factor is in many cases overwhelmed by the huge benefit that arises from deferment of tax.
For example, consider an asset bought for Rs 100 that appreciates at the rate of 10% per annum for 20 years so that it fetches Rs 673 when it is sold. A 30% tax on the capital gain of Rs 573 amounts to Rs 172 and the owner is left with Rs 501 after tax. By contrast, if the owner had received interest at 10% every year and paid taxes at a lower slab of 20% each year, the post-tax return would have been 8%. Compounding 8% over 20 years would leave the investor with only Rs 466. In other words, 20% tax paid each year is a stiffer drag on returns than a 30% tax that can be deferred till the time of sale.
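A short sketch of this comparison, using the numbers in the example above:

```python
principal, years = 100.0, 20

# Capital gains route: 10% pre-tax compounding, 30% tax deferred until sale
final = principal * 1.10 ** years                  # about Rs 673
after_cg_tax = final - 0.30 * (final - principal)  # about Rs 501

# Interest route: 10% taxed at 20% every year, i.e. 8% post-tax compounding
after_interest_tax = principal * 1.08 ** years     # about Rs 466

print(round(after_cg_tax), round(after_interest_tax))   # 501 466
```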
In practice, of course, indexation and the periodic re-basing of the original cost of the asset make the tax burden on capital gains even lighter. There is no economic or moral justification for subsidising the wealthy in this manner.
Ideally, tax rates should be calibrated in such a manner that equal pre-tax rates of return translate into equal post-tax rates of return regardless of the form in which that return is earned. This might be too much to ask for, but a near zero tax rate for capital gains earned on equity shares makes a mockery of the tax system in the country and should be redressed as soon as possible.
The STT is by its very nature a bad tax because it is unrelated to whether the transaction resulted in a profit or a loss. The real reason for the STT was to make the process of tax collection easier. In this sense, the STT is best regarded as a form of the nefarious system of tax farming that is shunned by modern nation states.
Some attempts are being made to justify the STT as a form of Tobin tax on financial transactions. Without entering into a debate on whether a Tobin tax is good or bad, it should be pointed out that the STT is not a Tobin tax. This is evident from the fact that delivery-based transactions attract STT at rates several times higher than on the presumably more speculative non-delivery-based transactions. The rate on delivery-based transactions is far higher than any reasonable Tobin tax.
A final argument for the STT in lieu of capital gains taxes is that foreigners would otherwise pay very little tax in India. But many other countries tax foreign portfolio investors at low rates. Indian investors can make portfolio investments in the US (within the limit of $200,000 per annum permitted by RBI). They would not pay income taxes in the US on their income from this investment and would pay only Indian taxes.
There is symmetry here to the US portfolio investor paying taxes in the US but not in India. The only difference is that the foreign investor into India has to come through Mauritius while the portfolio investor into the US can go in directly.
We should also exempt foreign portfolio investors from taxation without forcing them to come via Mauritius. The real problem with the Mauritius loophole is that it allows even non-portfolio investors to avoid Indian taxes, but that is a different topic altogether.
Posted at 2:11 pm IST on Fri, 19 Feb 2010 permanent link
Categories: equity markets, taxation
Currency values since the gold standard
I did some analysis of how the values of various currencies have behaved in the hundred years or so since the last decade of the gold standard. I found the results interesting and am posting them here.
I focus on the twelve countries which are either part of the G7 today, or were one of the eight great powers before World War I, or figure in the top seven traded currencies according to the BIS survey of 2007. I have started with the gold parities during the last few years of the gold standard from 1900-1914. All the currencies of interest were on the gold standard by 1900 and did not change their parities during this period.
I then convert the gold parities into exchange rates against the US dollar. Next, I take into account all the redenominations in which some hyper-inflating currencies had a few zeroes lopped off during the last hundred years. Finally, I take into account the re-denomination of several European currencies into the euro. This leads to the re-denominated gold standard value in USD. I would welcome corrections of any errors that you may find in my data and my computations.
The re-denominated gold standard value tells us what the exchange rate of the modern currency should be to preserve the gold standard value of the old currency through all the redenominations. I compare this with the actual value of the modern currency and compute an annual percentage change in currency value (taking the time period from the gold standard days as a nice round hundred years).
Only two currencies have appreciated against the US dollar over this long period – the Swiss franc and the Dutch guilder – while the Canadian dollar has held its own. Switzerland has enjoyed a great deal of geo-political luck during this period, but the performance of the Dutch guilder is truly amazing. The euro is commonly regarded as the successor currency of the Deutsche Mark, but if we go back beyond World War II, it makes greater sense to regard it as the successor of the Dutch guilder.
The data has been split into two tables to fit the width of the page better. Countries have been listed in the order of their current GDP.
| Country | US | Japan | Germany | France | UK | Italy |
|---|---|---|---|---|---|---|
| Gold Standard Currency | dollar | yen | mark | franc | pound | lira |
| Grams of gold | 1.505 | 0.752 | 0.358 | 0.290 | 7.322 | 0.290 |
| Gold standard value in USD | 1.000 | 0.500 | 0.238 | 0.193 | 4.866 | 0.193 |
| Re-denomination | | | 1.00E+12 x 1.95583 | 100 x 6.55957 | | 1936.27 |
| Current Currency | dollar | yen | euro | euro | pound | euro |
| Re-denominated gold standard value in USD | 1.0000 | 0.5000 | 4.66E+11 | 126.5684 | 4.8665 | 373.6078 |
| Current value in USD (mid Feb 2010) | 1.00 | 0.01 | 1.36 | 1.36 | 1.56 | 1.36 |
| Annual change | 0.00% | -3.74% | -23.33% | -4.43% | -1.13% | -5.46% |
| Country | Canada | Russia | Switzerland | Australia | Netherlands | Austria |
|---|---|---|---|---|---|---|
| Gold Standard Currency | dollar | ruble | franc | pound | guilder | krone |
| Grams of gold | 1.505 | 0.774 | 0.290 | 7.322 | 0.605 | 0.305 |
| Gold standard value in USD | 1.000 | 0.515 | 0.193 | 4.866 | 0.402 | 0.203 |
| Re-denomination | | 5.00E+15 | | 0.5 | 2.20371 | 1.00E+4 x 13.7603 |
| Current Currency | dollar | ruble | franc | dollar | euro | euro |
| Re-denominated gold standard value in USD | 1.0000 | 2.57E+15 | 0.1930 | 2.4332 | 0.8858 | 2.79E+04 |
| Current value in USD (mid Feb 2010) | 0.96 | 0.03 | 0.93 | 0.90 | 1.36 | 1.36 |
| Annual change | -0.04% | -32.22% | 1.58% | -0.99% | 0.43% | -9.45% |
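The “Annual change” row in both tables is just a geometric average over the assumed hundred years. A quick sketch of the computation (the yen figure uses roughly 1.1 US cents, which is what the 0.01 in the table rounds from):

```python
def annual_change(gold_std_usd, current_usd, years=100):
    # geometric average annual change against the US dollar
    return (current_usd / gold_std_usd) ** (1.0 / years) - 1

print(annual_change(0.5, 0.011))       # yen: about -3.7% a year
print(annual_change(0.193, 0.93))      # Swiss franc: about +1.6% a year
print(annual_change(4.66e11, 1.36))    # euro via the mark: about -23.3% a year
```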
Not surprisingly, gold itself has done better than any currency, appreciating 4% annually against the US dollar and 2.4% annually against even the Swiss franc. I do not know what the average Swiss interest rate has been during this period; it is conceivable that it compensates for most or even all of this depreciation.
What about the Indian rupee? From its gold standard parity of 32 US cents (Rs 15 to the British pound), it has fallen to about 2 US cents – an annual depreciation of 2.67%. This is bad, but better than the Japanese yen and the French franc. The rupee entered the gold standard at a low value reflecting the depreciation of the old silver rupee during the global demonetization of silver in the late nineteenth century. The depreciation of the rupee began only in 1967. The last fifty years would be a lot worse than the last hundred years.
Posted at 6:05 pm IST on Thu, 18 Feb 2010 permanent link
Categories: currency, gold, international finance
SEC under Schapiro after one year
Mary Schapiro took over as the Chairman of the US SEC a year ago when the SEC’s reputation was in tatters. In a speech yesterday, she reviewed the achievements of the past year.
First of all, she believes that the problems of the SEC were at the top, and that once the leadership was changed, the rest of the organization had the competence and attitude required to deal with its challenges. Schapiro says: “having served as Commissioner 20 years earlier, I knew what the agency was capable of ... through this process, I witnessed firsthand the dedication and expertise that I had long believed embodied this agency”.
Schapiro believes that the new hires she has made and the new technological initiatives that she has undertaken will be sufficient to enable the SEC to recover its lost glory. My own sense is that the problems are not merely at the top but permeate the whole organization. The report on the Madoff investigations revealed problems at all levels of the SEC (see my blog post last year).
Schapiro lists the SEC’s achievements in enforcement during the last year and refers to the case that has been brought against State Street Bank. She says nothing about the Bank of America case where the judge has taken the SEC to task, and subsequently the New York Attorney General has seized the initiative.
On the rule making agenda too, Schapiro is unable to list much of significance. Even the utterly misguided short sale restrictions imposed during the crisis are cited as an achievement. Some minor changes on proxy rules, rating agencies and money market funds are also cited, though none of these went far enough to make a serious difference.
Schapiro came to the SEC with high expectations after a succession of lacklustre leaders. I think she needs to do a lot more to fulfill these expectations.
Posted at 10:27 pm IST on Sat, 6 Feb 2010 permanent link
Categories: regulation
The Volcker rule
I wrote a column in the Financial Express today on the Volcker rule and other proposals of President Obama.
President Obama has proposed the ‘Volcker rule’ preventing banks from running hedge funds, private equity funds or proprietary trading operations unrelated to serving their customers. Simultaneously, he also proposed size restrictions to prevent the financial sector from consolidating into a few large firms. While this might look like unwarranted government meddling in the functioning of the financial sector, I argue that, in fact, free market enthusiasts should welcome these proposals.
Obama has chosen to frame the proposal as a kind of morality play in which the long-suffering public get their revenge against greedy bankers. While that might make political sense, the reality is that the proposals are pro free markets. To understand why this is so, we must go back to the moral hazard roots of the global financial crisis.
These roots go back to 1998 when the US Fed bailed out the giant hedge fund, LTCM. The Fed orchestrated an allegedly private sector bailout of LTCM, but more importantly, it also flooded the world with liquidity on such a scale that it not only solved LTCM’s problems, but also ended the Asian crisis almost overnight.
LTCM had no retail investors that needed to be protected. The actual reason for its bailout was the same as the reason for the bailout of AIG a decade later. Both these bailouts were in reality bailouts of the banks that would have suffered heavily from the chaotic bankruptcy of these entities.
Back in 1998, the large global banks themselves ran proprietary trading books that were also short liquidity and short volatility on a large scale like LTCM. A panic liquidation of LTCM positions would have inflicted heavy losses on the banks and so the Fed was compelled to intervene.
From a short-term perspective, the LTCM bailout was a huge success, but it engendered a vast moral hazard play. The central bank had now openly established itself as the risk absorber of last resort for the entire financial sector. The existence of such an unwarranted safety net made the financial markets complacent about risk and leverage and set the stage for the global financial crisis.
Those of us who like free markets abhor moral hazard and detest bailouts. The ideal world is one in which there is no deposit insurance and the governments do not bail out banks and their depositors. Since this is politically impossible, the second best solution is to limit moral hazard as much as possible.
If banking is an island in which the laws of capitalism are suspended, this island should be as small as possible, and the domain of truly free markets—free of government meddling and moral hazard—should be as large as possible. Looked at this way, the Volcker rule is a step in the right direction. If banks are not shadow LTCMs, then at least the LTCMs of the world can be allowed to fail.
The post-Lehman policy of extending the government safety net to all kinds of financial entities amounted to a creeping socialisation of the entire global financial system. The Volcker rule is the first and essential step in de-socialising the financial sector by limiting socialism to a small walled garden of narrow banking while letting the rest of the forest grow wild and free.
What about the second part of the Obama proposal seeking to limit the size of individual banks? I see this as reducing oligopolies and making banking more competitive. Much of the empirical evidence today suggests that scale economies in banking are exhausted at levels far below those of the largest global banks, and there is some evidence that scale diseconomies set in at a certain level.
There is very little reason to believe that banks with assets exceeding, say, $100 billion are the result of natural scale economies. On the contrary, they appear to be the result of an artificial scale economy caused by the too-big-to-fail (TBTF) factor. The larger the bank, the more likely it is to be bailed out when things go wrong. It is therefore rational for a customer to bank with an insolvent mega-bank rather than with a well-run small bank.
This creates a huge distortion in which banks seek to recklessly grow to become eligible for the TBTF treatment. Well-run banks that grow in a prudent manner are put at a competitive disadvantage. This makes the entire financial sector less competitive and less efficient.
The Obama size restrictions will reduce the distortions created by the TBTF factor and will make banking more competitive. One could argue that the restrictions do not go far enough because they legitimise the mega firms that already exist and only seek to prevent them from becoming even bigger. Nevertheless, they are a step in the right direction: they do not undo the damage that has already been done, but they prevent further damage.
Posted at 2:27 pm IST on Tue, 26 Jan 2010 permanent link
Categories: regulation
Computational and sociological analyses of financial modeling
I have been reading a number of papers that examine financial modeling in the context of the current crisis from a computational complexity and sociology of knowledge point of view:
- Looking Out, Locking In: Financial Models and the Social Dynamics of Arbitrage Disasters, by Daniel Beunza and David Stark, September 2009
- Credit Models and the Crisis, or: How I learned to stop worrying and love the CDOs by Damiano Brigo, Andrea Pallavicini and Roberto Torresetti, December 2009
- The Credit Crisis as a Problem in the Sociology of Knowledge, by Donald MacKenzie, November 2009
- Computational complexity and informational asymmetry in financial products, by Sanjeev Arora, Boaz Barak, Markus Brunnermeier and Rong Ge, October 2009.
I liked all these papers and learned a lot from each of them, which is not the same as saying that I agree with all of them.
The paper that I liked most was Beunza and Stark, which is really about cognitive interdependence and systemic risk. Their work is based on an ethnographic study of financial modeling carried out over a three-year period at a top-ten global investment bank. Some of their conclusions are:
Using models in reverse, traders find out what their rivals are collectively thinking. As they react to this knowledge, their actions introduce a degree of interdependence ...
Quantitative tools and models thus give back with one hand the interdependence that they took away with the other. They hide individual identities, but let traders know what the consensus is. Arbitrageurs are thus not embedded in personal ties, but neither are they disentangled from each other.
Scopic markets are fundamentally different from traditional social settings in that the tool, not the network, is the central coordinating device.
Instead of ascribing crises to excessive risk-taking, misuse of the models, or irreflexive imitation, our notion of reflexive modeling offers an account of crises in which problems unfold in spite of repeated reassurances, early warnings, and an appreciation for independent thinking.
Implicit in the behavioral accounts of systemic risk is an emphasis on the individual biases and limitations of the investors. At the extreme, investors are portrayed as reckless gamblers, mindless lemmings, or foolish users of models they do not understand. By contrast, our detailed examination of the tools of arbitrage offers a theory of crisis that does not call for any such bias. The reflexive risks that we identified befall on arbitrageurs that are smart, creative, and reflexive about their own limitations.
Though the paper is written in a sociological language, what it most reminded me of was Aumann’s paper more than 30 years ago on “Agreeing to disagree” (The Annals of Statistics, 1976). What Beunza and Stark describe as reflexivity is closely related to Aumann’s celebrated theorem: “If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal.”
The Brigo et al paper is mathematically demanding as they take “an extensive technical path, starting with static copulas and ending up with dynamic loss models.” But it is very useful in explaining why the Gaussian copula model is still used in its base correlation formulation though its limitations have been known for several years. My complaint about the paper is that it focuses too much on the difficulties in fitting the Gaussian copula to observed market prices and too little on the difficulties of using it to estimate the impact of plausible stress events.
MacKenzie focuses on “evaluation cultures” which are broader than just models. They are “pockets of local consensus on how financial instruments should be valued.” He argues that “‘Greed’ – the egocentrically-rational pursuit of profits and bonuses – matters, but the calculations that the greedy have to make are made within evaluation cultures”. MacKenzie highlights “the peculiar status of the ABS CDO as what one might call an epistemic orphan – cognitively peripheral to both its parent cultures, corporate CDOs and ABSs.”
The Arora et al paper is probably the most mathematical of the lot. It essentially shows that an originator can put bad loans into CDOs in such a way that it is computationally infeasible for the investors to figure this out even ex post.
However, for a real-life buyer who is computationally bounded, this enumeration is infeasible. In fact, the problem of detecting such a tampering is equivalent to the so-called hidden dense subgraph problem, which computer scientists believe to be intractable ... Moreover, under seemingly reasonable assumptions, there is a way for the seller to ‘plant’ a set S of such over-represented assets in a way that the resulting pooling will be computationally indistinguishable from a random pooling.
Furthermore, we can show that for suitable parameter choices the tampering is undetectable by the buyer even ex post. The buyer realizes at the end that the financial products had a higher default rate than expected, but would be unable to prove that this was due to the seller’s tampering.
The derivatives that Arora et al discuss are weird binary CDOs and my interpretation of this result is that in a rational market, these kinds of exotic derivatives would never be created or traded. Nevertheless, this is an important way of looking at how computational complexity can reinforce information asymmetry under certain conditions.
Posted at 6:10 pm IST on Thu, 21 Jan 2010 permanent link
Categories: behavioural finance
More on OTC derivatives
I have received several comments by email on my blog post yesterday on OTC derivatives. This post responds to some of them and adds some more material on the subject.
One of the papers that I did not refer to in yesterday’s post was a paper by Riva and White on the evolution of the clearing house model for account period settlement at the Paris Bourse during the nineteenth century. Streetwise Professor linked to a conference paper version of this in his post while Ajay Shah pointed me to an NBER version of the same paper.
The account period settlement at the Paris Bourse was similar to the ‘badla’ system that prevailed in India until the beginning of this century. All trades during a month were settled at the end of the month so that the stock market at the beginning of the month was actually a one month forward market. At the end of the month, the settlement could be postponed for another month on paying the price difference and a market determined backwardation or contango charge. In fact, this system for trading individual stocks continued even after the introduction of stock index futures (CAC 40) in the Paris market. Crouhy and Galai, “The settlement day effect in the French Bourse,” (Journal of Financial Services Research, 1992) provide a good description of this market and explore the working of the cost of carry model in this market.
Coming back to Riva and White, they document the emergence of a clearing house model in which the Paris Bourse guaranteed all trades on the exchange. The bourse not only guaranteed settlement of trades between two brokers but also repaid the losses suffered by the defaulting broker’s clients (except during a period from 1882 to 1895). This clearing house guarantee was supported by a capital requirement for all brokers and by a guarantee fund, but not by any margins. The entire process seems to have been driven by the government and the central bank.
By contrast, the US futures exchanges introduced initial and variation margins well before they introduced the clearinghouse in 1883 as documented in the Kroszner paper that I mentioned in my post yesterday. The existence of margins eliminates most of the moral hazard problems that plagued the clearing house in Paris and required state intervention of some form or the other. The US thus saw a private ordering emerging without any involvement of the state.
India ran the ‘badla’ system in the nineteenth and through most of the twentieth century without any margins and without any clearinghouse guarantee. While France solved the risk problem in its usual dirigiste style and the US solved it using private ordering, India seems to be a case of state failure and market failure until the last decade of the twentieth century. In fact, the problem was solved only a few years before the abolition of ‘badla’ itself.
Now I turn to some other papers relevant to the regulation of OTC derivatives.
Viral Acharya and several coauthors have written extensively on the regulation of OTC derivatives, and I will mention two of these papers. Acharya and Engle have a nice paper that explains the key issues in the context of the proposed US legislation.
Acharya and Bisin have a conference paper explaining how an exchange is able to price counterparty risk better than the OTC market because it is able to see the entire portfolio of the counterparty. Streetwise Professor criticizes this on the ground that exchanges charge the same price to everybody and do not discriminate. I think this criticism is incorrect – exchanges do not discriminate in the sense that they apply the same risk model to everybody, but the standard SPAN type model is a portfolio model where the margin is levied not on an individual position but on a portfolio. The incremental margin requirement for any position thus depends on what else is in the portfolio. Thus the risk is priced differentially.
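A toy sketch of why portfolio margining prices risk differentially. This is a drastically simplified scenario-grid margin in the spirit of SPAN – made-up scenarios, futures-only positions and linear P&L, not the actual SPAN algorithm:

```python
import numpy as np

scenarios = np.array([-0.15, -0.05, 0.0, 0.05, 0.15])  # assumed price moves

def margin(positions):
    """Margin = worst portfolio loss across the scenario grid."""
    pnl = sum(qty * scenarios * 100 for qty in positions)  # 100 = contract size
    return max(0.0, -pnl.min())

print(margin([+1]))        # 15.0 for a standalone long future
print(margin([-1]))        # 15.0 for a standalone short future
print(margin([+1, -1]))    # 0.0: adding the short to the long book reduces the
                           # margin, so its incremental requirement is negative
```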
The Acharya and Bisin paper must be read in conjunction with a paper by Duffie and Zhu, who show that the efficiency gain from central clearing is best realized when there is a single clearing house for all derivatives; the gains may disappear if there are separate clearing houses for different products, and even more so when there are competing clearing houses for the same product. This in my view is only an efficiency issue and does not detract from the reduction of systemic risk from the use of central clearing.
For those interested in data about the magnitudes involved in these markets in terms of risk and collateral requirements, a good source is an IMF Working Paper on “Counterparty Risk, Impact on Collateral Flows, and Role for Central Counterparties.” For more detailed information about the CDS market, there is an ECB paper on “Credit default swaps and counterparty risk”.
Posted at 6:10 pm IST on Wed, 13 Jan 2010 permanent link
Categories: derivatives, exchanges, regulation
Regulation of OTC Derivatives
Last month, the UK Financial Services Authority (FSA) and the Treasury put out a document entitled “Reforming OTC Derivative Markets: A UK perspective.” My one line summary of this document is that the UK does not wish to make any significant changes to the regulations of the OTC markets. A cynic would say that this is explained by the fact that London dominates the OTC derivative markets globally.
This month there was a nice paper by Darrell Duffie and two co-authors for the New York Fed advocating not just central clearing, but also encouraging the use of exchanges and electronic trading platforms, as well as post-trade price transparency. I like this report though the Streetwise Professor thinks that this is tantamount to socialist planning.
Streetwise Professor has been arguing in a series of posts on his blog that OTC markets have evolved naturally and must therefore represent an efficient outcome absent demonstrable externalities. This argument deserves serious consideration and is one to which I am sympathetic.
One of the early and clear enunciations of the private ordering argument is a paper written over ten years ago by Randall Kroszner (“Can the Financial Markets Privately Regulate Risk?,” Journal of Money, Credit & Banking, 1999), which describes the historical evolution of exchange clearing in the last century and compares it to the development of the OTC markets. As I re-read this paper, I was struck by two statements:
- Kroszner argues that the large derivative dealer “effectively creates its own ‘mini’ derivatives exchange, with its own netting, clearing, and settlement system,” with the International Swap Dealers Association (ISDA) providing partial standardization of contractual terms.
- Kroszner also points out that: “Credit rating agencies are the effective regulators in setting standards for capital, collateral, and conduct, much like clearinghouses and government regulators, but do not have a direct financial stake in the transactions.”
This analysis provides one perspective on what went wrong during the crisis: the “regulation” provided by the rating agencies was an absolute disaster, and the “mini derivatives exchange” run by the large derivative dealers turned out to be far less robust than the real derivatives exchanges.
At the same time, private parties have no incentive to move from the failed model to the robust model because the failed model now comes with the wrapper of a “Too Lehman-like To Fail” guarantee from the government. In the absence of this government guarantee, private ordering might have been relied on to do the right thing, but in its presence, things are different.
Posted at 2:45 pm IST on Tue, 12 Jan 2010 permanent link
Categories: derivatives, exchanges, regulation
Currency in circulation
The Chief Cashier of the Bank of England, Andrew Bailey, gave a fascinating speech last month on various aspects of currency in circulation in the UK. The key point is that while cheques have been in terminal decline in the UK (see this blog post), cash has in recent years been growing not only in notional value, but even as a percentage of GDP.
Bailey thinks that people are increasingly using cash as a store of value (and not just as a medium of exchange) partly because of a lack of confidence in the banking system and partly because at near zero interest rates, the opportunity cost of holding cash is negligible. If people are indeed stuffing cash under their mattresses, it tells us how harsh the global financial crisis has been.
Another interesting point is about the quality of notes. In India, ATMs typically give out better notes than the bank branches do, but Bailey tells us that in the UK, it is the reverse. The new generation of ATMs can dispense notes so soiled that a human teller would regard them as unfit to be given out. One almost hopes that India remains stuck in the old generation of easily jammed ATMs that dispense relatively clean notes.
The most important point that Bailey makes is that “banknotes are central-bank money in a form that can be held by the public, in other words the retail equivalent of reserve accounts at the central bank.” I would like to push this point further – if technology allows electronic accounts to be maintained at near zero cost, should not the central banks provide electronic reserve accounts to all citizens? As India moves toward issuing a unique identity number to every citizen, would it not be nice for each such number to be linked to a no frills account at the RBI?
Why in other words should only the rich and powerful institutions have access to central bank money? When a variety of tax laws and money laundering laws attempt to prevent the use of central bank money in the form of cash, should they not then facilitate the use of central bank money in the form of electronic reserve accounts at the central bank?
The shocking thing in finance is that financial market settlements do not reach the highest standard of DVP (delivery versus payment) – simultaneous and irrevocable payment in central bank money. If you go to a grocery shop, pay for your purchases in cash and walk out with your goods, the transaction conforms to the highest standard of DVP because cash is central bank money. In most financial markets, on the other hand, we do not get this level of DVP because the settlement systems do not settle in central bank money. This is a shame.
Posted at 1:36 pm IST on Mon, 11 Jan 2010 permanent link
Categories: currency
Foreign Investment: Direct versus Portfolio
Ever since the Asian crisis, there has been a sort of consensus that foreign direct investment (FDI) is the best and most stable form of capital inflows while foreign portfolio flows (FII in Indian parlance) are more volatile and therefore less desirable.
Ever since the global crisis began, I have been reading a lot of financial history (starting with the last 500 years and slowly going further back). It now appears to me that the aversion to the volatility induced by portfolio flows is extremely short sighted.
In the long run, the volatility gets washed out and what counts is the average growth rate of the economy. The short term (high frequency) is all noise (volatility) while the signal (mean) is apparent only in long time series (low frequency). Lessons drawn from short time series of data are probably wrong.
Looking back at some of the major emerging markets of the nineteenth century (the US, Canada, Australia and Argentina) puts things in a totally different perspective. I particularly enjoyed the discussion of nineteenth century US in Chapters 7 and 8 of Atack and Neal (The Origin and Development of Financial Markets and Institutions: From the Seventeenth Century to the Present, Cambridge University Press, 2009).
Of all the big emerging markets of the nineteenth century, the US relied most on portfolio flows and Argentina relied the most on foreign direct investment. By 1890, the results of the different trajectories were quite apparent.
In the long run, the volatility of the growth rate is largely irrelevant; it is the average that counts. Despite frequent financial crises and corporate bankruptcies, the US grew faster. More importantly, it was also able (despite the damage inflicted by populist politicians like Andrew Jackson) to build a domestic financial system that ultimately made it less dependent on foreign markets and institutions.
Applying that historical lesson would suggest that India should remain friendly to foreign portfolio flows while developing domestic financial markets. We must simply learn to live with the volatility and occasional crises that come in their wake.
Posted at 1:37 pm IST on Fri, 1 Jan 2010 permanent link
Categories: international finance