More on Flash Crash Report
I have a column in yesterday’s Financial Express with a more detailed discussion about the flash crash report that I blogged about (here / here) last week.
The Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC), which regulate the US equity futures markets and equity markets respectively, have released a hundred-page report on the flash crash of May 6, 2010. On the afternoon of May 6, the broad market index in the US dropped by over 5% in the space of less than five minutes, only to bounce back in the next five minutes. Then, even as the broad index was recovering, several stocks crashed to near zero. For example, Accenture fell from $30 to $0.01 in the space of seven seconds and then snapped back to the old level within two minutes.
The worst sufferers were retail investors who found their orders executing at absurd prices. A retail sell order (possibly a stop-loss order) might have been triggered when Accenture was trading at $30, but the order might have ended up being executed at $0.01. Some, but not all, of the damage was undone when the exchanges cancelled all trades that were more than 60% away from the pre-crash prices.
The CFTC-SEC report claims that the crash was triggered when a mutual fund sold $4.1 billion worth of index futures contracts very rapidly. The mutual fund’s strategy was to sell one contract for every ten contracts sold by other traders, so that it would account for about 9% of all trading during each minute until the entire order was executed. The large order allegedly confused the high frequency traders (HFTs) who, having bought from the mutual fund, found themselves holding a hot potato and then tried to pass the potato around by trading rapidly with each other. The result was a sharp rise in the HFTs’ trading volume, and this higher volume fooled the mutual fund’s algorithm into selling even faster to maintain the desired 9% participation rate. This set up a vicious circle of sharp price declines.
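To make the mechanics concrete, here is a minimal sketch of how a volume-participation sell algorithm of this kind might work. The 9% target comes from the report; the function name, the one-minute slicing and the illustrative volume numbers are my own assumptions, not a description of the actual algorithm the fund used.

```python
def participation_sell_schedule(volume_by_minute, total_quantity, target=0.09):
    """Toy volume-participation seller: in each minute, sell roughly
    `target` times the previous minute's market volume until the parent
    order is done. Note the feedback loop: if the algorithm's own selling
    (and the trading it triggers) inflates volume, the next slice gets
    bigger -- the mechanism the report blames for the accelerating sales."""
    remaining = total_quantity
    schedule = []
    for prev_volume in volume_by_minute:
        if remaining <= 0:
            break
        slice_qty = min(remaining, int(target * prev_volume))
        schedule.append(slice_qty)
        remaining -= slice_qty
    return schedule

# Hypothetical illustration: market volume doubles each minute as the sell-off feeds on itself
print(participation_sell_schedule([20000, 40000, 80000, 160000], 75000))
# [1800, 3600, 7200, 14400]
```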
This story makes for a great movie plot, but in an official investigative report one expects to see evidence. Sadly, the report provides no econometric tests like vector autoregressions or Granger causality tests on tick-by-tick data to substantiate its story. Nor are there any computer simulations (using agent-based models) to show that the popular HFT algorithms would exhibit the alleged behaviour when confronted by a large price-insensitive seller.
Moreover, the data in the report itself casts doubt on the story. More than half of the mutual fund’s $4.1 billion trade was executed after prices began to recover. And the report suggests that the hot potato trading was set off by the selling of a mere 3,300 contracts ($180 million notional value). In one of the most liquid futures markets in the world, $180 million is not an outlandishly large trade. Surely, there must have been such episodes in the past and if there is a hot potato effect, it must have been observed. The report is silent on this. Further, the one thing that HFTs are good at is analysing past high frequency data to improve their algorithms. Would they not then have observed the hot potato effect in the past data and modified their algorithms to cope with that? Finally, during the most intense period of the alleged hot potato trading, the HFTs were net buyers and not net sellers. This suggests that perhaps the potato was not so hot after all.
The analysis in the report is even more flawed when it comes to the issue that concerns retail investors most—the carnage in individual stocks that began two or three minutes after the index began its recovery. The report bases its conclusions on this issue almost entirely on extensive interviews with the big Wall Street firms—market makers, HFTs and other brokers.
Astonishingly, the regulator did not find it necessary to interview retail investors at all. This is like a policeman investigating a theft without talking to the victim. There is no discussion of whether retail investors were confused, misled or exploited. For example, the report dismisses concerns about delays in the public price dissemination because all the big firms subscribe to premium data services that did not suffer delays. If delayed data led to wrong decisions by retail traders, that apparently is of no concern to the regulators.
In the post-crisis, post-Madoff world, we expect two things from regulatory investigations. First, we expect regulators to have the capability to investigate complex situations using state-of-the-art analytical tools. Second, we expect them to carry out an unbiased investigation without giving high-profile regulated firms undue importance.
On both counts, the CFTC-SEC report is disappointing. After five months of effort, they do not seem to have come to grips with the terabytes of data that are available. The analysis does not seem to go beyond presenting an array of impressive graphs. Most importantly, the regulators appear to still be cognitively captured by the big securities firms and are, therefore, reluctant to question current market structures and practices.
Posted at 11:33 am IST on Sat, 16 Oct 2010 permanent link
Categories: exchanges, investigation
Flash crash report is superficial and disappointing
The joint report by the US CFTC and SEC on the flash crash of May 6, 2010 was released late last week. I found the report quite disappointing and superficial. Five months in the making, the report provides a lot of impressive graphs, but few convincing answers and explanations.
Consider the key findings:
- “One key lesson is that under stressed market conditions, the automated execution of a large sell order can trigger extreme price movements, especially if the automated execution algorithm does not take prices into account.” This is referring to a sell order by a mutual fund for 75,000 index futures contracts with a notional value of $4.1 billion that was executed with a target participation rate of 9%. Roughly speaking, this participation rate implies that in any time interval, the algorithm tries to execute a quantity equal to one-tenth of what other traders are executing in the aggregate. The discussion in the report is hopelessly vague about what happened, but it suggests that the crucial problems described in the next point below were triggered when the first 3,300 contracts had been sold. While the report explains at length that 75,000 was a truly large order, it is clear that 3,300 contracts ($180 million) is not an outlandishly large trade. Surely, there must have been episodes in the past of a few thousand contracts being sold very quickly and the report could have provided a comparison of what happened on May 6 with what happened on those dates. And if the problem was market stress, then understanding the nature of that stress is critical.
- “Moreover, the interaction between automated execution programs and algorithmic trading strategies can quickly erode liquidity and result in disorderly markets.” The claim is that the large order confused the high frequency traders (HFTs), who increased their trading volume, and this higher volume fooled the mutual fund’s algorithm into selling even faster. As far as I can see, this is pure speculation. I would have liked to see this phenomenon demonstrated using agent-based models or some other sound methodology.
- “As the events of May 6 demonstrate, especially in times of significant volatility, high trading volume is not necessarily a reliable indicator of market liquidity.” This is referring to the fact that: “Moreover, compared to the three days prior to May 6, there was an unusually high level of ‘hot potato’ trading volume – due to repeated buying and selling of contracts – among the HFTs, especially during the period between 2:41 p.m. and 2:45 p.m. Specifically, between 2:45:13 and 2:45:27, HFTs traded over 27,000 contracts, which accounted for about 49 percent of the total trading volume, while buying only about 200 additional contracts net.” However, no explanation at all is provided for this – if HFTs were desperately trying to pass around the hot potatoes that they had acquired minutes earlier, why were they buying 200 more hot potatoes?
Even if we were to accept the conclusion of the report that a mutual fund selling $4 billion of index futures in 20 minutes is an adequate explanation of the flash crash in the index markets, there is still the issue of what happened in specific stocks. The index began its recovery at 2:45:28, but the carnage in individual stocks happened a few minutes later, at 2:48 or 2:49. The report attributes this to the withdrawal of liquidity by market makers at around 2:45. At this point, the report relies on extensive interviews with market makers and fails to substantiate key assertions with hard facts.
Some portions of the report look more like a journalist’s casual empiricism than the hard analysis that one expects in a fact finding report. For example, on page 66, there is a discussion of data about a single market maker that concludes with a whole string of tainted phrases: “If this example is typical ... it seems that ... This suggests that ... From this example it does not seem that ... ” I can understand all this five weeks after the crash, not five months later.
A few less important quibbles:
- I also found some of the graphs difficult to interpret. For example, the charts on the order book are colour coded relative to the mid-price of the stock. Since this price was falling dramatically, it is difficult to see what parts of the order book were actually being eliminated through order execution or order cancellation. That prices can fall to near zero only if the buy side of the order book is exhausted is a tautology. One wants to see how and when the buy orders were cancelled or got executed.
- Most of the reported data for individual stocks is at one minute intervals and some at fifteen minute intervals when other analysts have been looking at events on a millisecond time scale. At a one minute time scale, several things happen simultaneously – for example, prices fall and order books shrink – but it is difficult to see what happened first. Causality is hard, but is it too much to ask for at least the sequencing to be described accurately?
- The report suggests that the order books of ETFs had less depth far from the mid quote and this led to the disproportionate incidence of broken trades in them. However, in the first part of the report, it is shown that the ETF on the S&P 500 (known as SPY) performed better than the index future itself. This suggests that not all ETFs are similar in the fragility of their order book, and I would have liked to see some exploration of this issue.
In conclusion, the report leaves me disappointed on the three critical questions that I asked myself after reading it:
- How far does the report provide confidence to an investor that with the corrective action taken since May 6, 2010, market prices are reliable? I think the report is totally unconvincing on this score.
- Does the report provide evidence that the post-Madoff SEC (and CFTC) can analyze a complex situation and arrive at a top quality analysis? Despite the high calibre of resources that have been recruited into the SEC in the last year or two, the quality of the report leaves much to be desired.
- Does the report show that regulators have escaped cognitive capture by their regulatees? I am sorely disappointed on this score. The report draws on extensive interviews with traditional equity market makers, high-frequency traders, internalizers, and options market makers. Apparently, nobody thought it fit to interview retail investors to understand how market distortions and data feed delays affected their order placement strategies. For example, Nanex has claimed that the Dow Jones index was delayed by 80 seconds. Were retail investors who follow the Dow Jones thinking that the market index was still falling even when professionals could see that it was recovering? Did this cause panic selling? For a cognitively captured regulator, it is sufficient to report that “Most of the firms we interviewed ... subscribe directly to the proprietary feeds offered by the exchanges. These firms do not generally rely on the consolidated market data to make trading decisions and thus their trading decisions would not have been directly affected by the delay in data in this feed.”
Posted at 2:19 pm IST on Sun, 3 Oct 2010 permanent link
Categories: exchanges, investigation
BIS Confirms Huge Offshore Rupee Market
In my post early this month, I very tentatively argued that data from the BIS and the RBI could be put together to suggest that half the rupee-dollar market was outside India. Most people whom I talked to said that this was unlikely and that there was probably some error either in the data or in my analysis.
But now the BIS has published a paper on offshore foreign exchange markets which gives clearer data. According to Table 7 of this paper, 52% ($10.8 billion) of the total rupee forward and forex swap market ($20.8 billion) is offshore and only 48% ($10.0 billion) is onshore. We still need to wait for November for more detailed data on other segments of the rupee market, but $10.8 billion a day is much larger than most estimates that I have seen or heard of the offshore non deliverable forward market.
The same table provides data for 2007 as well – only 30% ($3.6 billion) of the rupee forward and forex swap market was offshore then. In just three years, the offshore market has tripled in size! A footnote in the table cautions us that the mandatory reporting of trades in the rupee (and several other emerging market currencies) following their reclassification as major currencies would have increased the reported size of the offshore markets.
According to the BIS Paper, the offshore markets are even bigger for the Chinese renminbi (63% is offshore but much of that is in Hong Kong) and the Brazilian real (82% is offshore). The paper argues that offshore non deliverable markets in the Brazilian real, Chinese renminbi and Indian rupee are now so large that “adding an offshore deliverable money and bond market may not represent a large change.”
Suddenly, we are waking up to a much more internationalized currency than any of us were aware of. I have long argued that Indian capital controls are more “sand in the wheels” than effective barriers to capital flows. The data points in the same direction – policy makers must recognize that India has a de facto open capital account.
Posted at 2:51 pm IST on Sun, 26 Sep 2010 permanent link
Categories: international finance
Stock Exchange Regulation and Competition
This column of mine regarding stock exchange regulation and competition appeared in the Financial Express today. Coincidentally, yesterday evening, the Securities and Exchange Board of India released its order rejecting the application of MCX-SX to start trading equities in India. My column was written early this week, well before SEBI passed its order. I am not yet masochistic enough to sit up all night to read a 68-page order and then write a column about it.
The ongoing dispute regarding the shareholding pattern of MCX-SX is an opportunity to rethink the current regulatory conception of the stock exchange as the extended regulatory arm of the state. Given this regulatory conception, the ownership structure (legal ownership, ownership of economic interests, and ownership of control rights) of the stock exchange becomes a matter of public policy. However, the requirement to have dispersed shareholding is likely to result in an unacceptably anti-competitive outcome.
There is a different way of looking at stock exchanges—not as frontline regulators, but as the equivalent of a shopping mall for securities. We do expect shopping malls to comply with the building safety code, but do not expect them to “regulate” the shop owners. If we treat stock exchanges the same way, we would expect them to comply with basic regulations regarding trading systems and infrastructure, but would not expect them to regulate either the listed companies or the stock brokers.
We could not have thought of stock exchanges like this a century ago because there were no securities regulators in those days and the central banks too did not bother to regulate the markets. For example, in the US a hundred years ago, it was left to the New York Stock Exchange to demand that companies publish their annual accounts (and to delist even large companies like Procter and Gamble for refusing to do so). Similarly, in those days, it was the London Stock Exchange that imposed free float requirements (67% free float!) because there was no other regulator to do so. Today, we expect the securities regulators, the company law departments and the accounting bodies to perform much of this regulatory role.
The time has come to ask whether the stock exchange should be a listing authority at all in an era of demutualised stock exchanges and alternate trading systems. There are stock exchanges elsewhere in the world that are listed on themselves; this appears to me to be as absurd as a snake swallowing its own tail. Of course, the alternative of a stock exchange listing on a rival exchange is only slightly less laughable. Some countries have shifted the listing function into an arm of the regulator itself and I think there is much to commend such a move.
Another problematic area is that of market surveillance. In an era of highly interconnected markets, the idea of each exchange performing surveillance on its own market is an anachronism. A stock exchange that sees only the trades happening on its own platform is no match for a market manipulator who trades in multiple cash and derivative exchanges (as well as the OTC markets) and shifts positions across these markets to avoid detection. The flash crash in the US markets on May 6, 2010, has highlighted the folly of relying on surveillance by the exchanges. Some of the best analyses of the events of that day have come not from the exchanges or the regulators but from data feed companies that specialise in processing high frequency data from multiple trading venues.
The final regulatory barrier to free competitive entry into the stock exchange industry comes from the extreme systemic importance of the clearing corporation of a major exchange. In the UK, the ability to outsource clearing to LCH.Clearnet has been very important in the emergence of alternate trading venues. Indian regulators should also explore such a solution.
Shorn of listing, surveillance and clearing, a stock exchange would be very much like a shopping mall and it would be possible to permit free entry without any significant regulatory barriers. The regulators should then be blithely unconcerned about who owns, controls or runs an exchange. By unleashing competition, this could help bring down costs and improve service levels.
In the interest of ensuring competitive outcomes, I think it would be useful to also dismantle the utterly dysfunctional “fit and proper” regime throughout the financial sector. The global financial crisis has shown that there is scarcely any bank or financial intermediary in the world that is “fit and proper” enough to be entrusted with any significant fiduciary responsibility without intrusive supervision and stringent regulation. The illusion of a “fit and proper” regime only serves to discourage private sector due diligence.
I believe that regulators worldwide should accept this reality and abandon the “fit and proper” requirement altogether. Resources devoted to screening applicants at the point of granting a licence are much better spent supervising those who are already licensed. Today, the position is the opposite. In India, banks and stock exchanges have retained their licences long after they had deteriorated to the point where they would not get a licence if they were applying for it afresh. This is an intensely perverse, anti-competitive situation.
In short, the problems relating to the shareholding pattern of stock exchanges highlighted by the MCX-SX episode should be solved not through legal hair-splitting but through more robust regulatory frameworks.
Posted at 10:50 am IST on Fri, 24 Sep 2010 permanent link
Categories: exchanges, regulation
Teji-Mandi goes to Chicago
Options with one-day maturity (known as Teji and Mandi for call and put options respectively) were popular in Indian equity markets during the 1970s and 1980s though they were prohibited under the Securities Contracts (Regulation) Act, 1956. With the equity market reforms of the 1990s, these contracts disappeared completely.
One-day options are now being proposed by the Chicago Board Options Exchange (hat tip FT Alphaville). In its regulatory filing, the CBOE says:
The Exchange believes that Daily Option Series will provide investors with a flexible and valuable tool to manage risk exposure, minimize capital outlays, and be more responsive to the timing of events affecting the securities that underlie option contracts. In particular, the Exchange seeks to introduce Daily Option Series to provide market participants with a tool to hedge overnight and weekend risk, as well as the risk of special events such as earnings announcements and economic reports, and pay a fraction of the premium of a standard or weekly option (due to the very small time value priced into the option premium). The Exchange believes that daily expirations would allow market participants to purchase an option based on a precise timeframe thereby allowing them to tailor their investment or hedging needs more effectively.
Regulatory fashions come and go – often, a financial innovation is only the reintroduction of something that existed decades or centuries ago. Much of what happens in modern equity markets (good or bad) can be traced back to early 17th Century Amsterdam. (See for example, Geoffrey Poitras, From Antwerp to Chicago: The History of Exchange Traded Derivative Security Contracts and Jose Luis Cardoso, “Confusion de confusiones: ethics and options on seventeenth-century stock exchange markets”, Financial History Review (2002), 9:109-123).
I have long believed that what was really wrong with things like Teji, Mandi or Badla in pre-reform Indian equity markets was not the instruments themselves but the absence of robust risk management and the lack of safeguards against market manipulation. There is nothing wrong with daily options – many high frequency trading strategies might effectively be replicating short maturity options already. In fact, I wonder whether there is merit in options with even shorter maturities – hourly, if not shorter.
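The CBOE’s point about the small time value of a one-day option is easy to check with a rough Black-Scholes calculation. The parameters below (spot 100, at-the-money strike, 20% volatility, zero rates) are purely illustrative assumptions, not market data.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, vol, t_years, rate=0.0):
    """Plain Black-Scholes call price (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# At-the-money call premiums for different maturities (per 100 of spot)
for label, t in [("one-day", 1 / 252), ("weekly", 5 / 252), ("monthly", 21 / 252)]:
    print(f"{label:8s}: {bs_call(100, 100, 0.20, t):.2f}")
# Roughly 0.50, 1.12 and 2.30: the one-day premium is a small fraction of the longer-dated ones.
```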
Posted at 5:03 pm IST on Sat, 11 Sep 2010 permanent link
Categories: derivatives
Is half the rupee-dollar market outside India?
I do not know whether I am reading the data wrong, but the preliminary results of the BIS Triennial Survey on foreign exchange turnover (April 2010) appear to suggest that nearly half the rupee-dollar market is outside India.
For the first time, the 2010 BIS survey includes the rupee as a “main currency” and provides data about the USD/INR turnover. According to Table 4, the average daily turnover in USD/INR was $36 billion. Table 5 tells us that the average daily foreign exchange turnover in India was only $27.4 billion. That might suggest that India accounts for 75% of the USD/INR market.
However, not all of the Indian market is USD/INR. According to RBI data (Table 47 of the RBI Bulletin of June 2010), in April 2010, INR against all foreign currencies accounted for only 71% of the market in India; almost 30% was trading of various foreign currencies against each other. In this computation, I have taken both sides of the merchant trades and only one side of the inter-bank trades, as the BIS data is on a net basis. Of course, the BIS also does cross border netting, which might change the numbers a bit, but I would think the percentages might not be affected too much.
If we make the reasonable assumption that the entire INR versus foreign currency market in India is actually INR versus USD and take 71% of $27.4 billion as the USD/INR market in India, we get $19.5 billion, which is only 54% of the $36 billion global USD/INR market. If these calculations are approximately correct, India accounts for only a little more than half of the total global rupee-dollar market.
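For anyone who wants to check the arithmetic, the back-of-the-envelope computation in the last two paragraphs is simply this (figures as quoted above, in USD billion per day):

```python
# Back-of-the-envelope check of the figures quoted above (USD billion per day)
global_usdinr = 36.0       # BIS Table 4: global USD/INR turnover
india_total_fx = 27.4      # BIS Table 5: total FX turnover in India
inr_share_in_india = 0.71  # RBI Table 47: share of INR vs foreign currency trades in India

usdinr_in_india = inr_share_in_india * india_total_fx
print(f"USD/INR turnover in India: ${usdinr_in_india:.1f} bn")                                # ~19.5
print(f"India's share of the global USD/INR market: {usdinr_in_india / global_usdinr:.0%}")   # ~54%
```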
One other relevant data point is that according to the BIS Table 5, India’s share of the global foreign exchange market has dropped from 0.9% in 2007 to 0.5% in 2010.
Unfortunately, while the BIS links to various central banks that publish their national results at the same time as the BIS, the RBI is not among them. So we do not have more data to verify these computations.
Posted at 3:18 pm IST on Fri, 3 Sep 2010 permanent link
Categories: international finance
How Fast Can Traders Add and Multiply?
I have written a paper entitled “When Index Dissemination Goes Wrong: How Fast Can Traders Add and Multiply?” It has also been uploaded at SSRN.
The abstract is as follows:
This paper studies an episode of dissemination of wrong stock index values in real time due to a software bug in the Indian Nifty index futures market on the morning of January 18, 2006.
The episode provides an opportunity to test various models of cognitive biases and bounded rationality highlighted in behavioural finance. The paper provides strong evidence against cognitive biases like “anchoring and adjustment” (Tversky and Kahneman, 1974) that one might expect under such situations even though the cognitive task involved is quite simple. The futures market tracked the true Nifty index which it could not see while completely ignoring the wrong Nifty index that it could see.
However, the paper demonstrates that market efficiency failed in more subtle ways. There is evidence of a partial breakdown of price discovery in the futures markets and a weakening of the bonds linking futures and cash markets.
This evidence is consistent with the centrality of “market devices” as argued in “actor network theory” in economic sociology (Muniesa, Millo and Callon, 2007 and Preda, 2006). Well functioning markets today depend critically on a whole set of information and communication technologies. Any failures in these material, socio-technical aspects of markets can make markets quite fragile even if behavioural biases are largely absent.
Posted at 9:58 am IST on Wed, 1 Sep 2010 permanent link
Categories: behavioural finance, benchmarks, exchanges
Loss absorbency of regulatory capital
The Basel committee on banking supervision has put out a proposal to ensure the loss absorbency of regulatory capital at the point of non-viability.
The proposal points out that all regulatory capital instruments are loss absorbent in insolvency and liquidation – “they will only receive any repayment in liquidation if all depositors and senior creditors are first repaid in full.” However, the financial crisis has revealed that many regulatory capital instruments do not always absorb losses in situations in which the public sector provides support to distressed banks that would otherwise have failed.
The solution that is proposed is as follows:
All non-common Tier 1 instruments and Tier 2 instruments at internationally active banks must have a clause in their terms and conditions that requires them to be written-off on the occurrence of ... the earlier of: (1) the decision to make a public sector injection of capital, or equivalent support, without which the firm would have become non-viable, as determined by the relevant authority; and (2) a decision that a write-off, without which the firm would become non-viable, is necessary, as determined by the relevant authority.
I am unable to understand how such tweaking of contractual terms will get around the fundamental problem that governments are unwilling to impose losses on banks and their stakeholders. In the United States, when the government injected TARP capital into the banks, it forced even the healthy banks to take the capital in order to eliminate any stigma associated with TARP capital. Even if the proposed clause had been present in the regulatory capital instruments issued by the insolvent banks of that time, the clause would clearly not have been triggered by an injection of TARP capital in this gentle form.
Posted at 3:51 pm IST on Fri, 27 Aug 2010 permanent link
Categories: banks, leverage, regulation
Two curve models
Updated: corrected the attribution of the second quote.
I am posting for comments an introductory note on two curve models. This note, which I wrote largely to improve my own understanding of the subject, represents my interpretation of the results presented in the various papers listed in the bibliography and does not claim to contain any original content.
The following two quotations give a sense of what two curve models are all about:
LCH.Clearnet Ltd (LCH.Clearnet), which operates the world’s leading interest rate swap (IRS) clearing service, SwapClear, is to begin using the overnight index swap (OIS) rate curves to discount its $218 trillion IRS portfolio. Previously, in line with market practice, the portfolio was discounted using LIBOR. ... After extensive consultation with market participants, LCH.Clearnet has decided to move to OIS to ensure the most accurate valuation of its portfolio for risk management purposes. (LCH.Clearnet, June 17, 2010)
Ten years ago if you had suggested that a sophisticated investment bank did not know how to value a plain vanilla interest rate swap, people would have laughed at you. But that isn’t too far from the case today. (Deus Ex Machiatto, June 23, 2010)
The topic is quite complex even by the standards of this blog, and it takes me twelve pages of occasionally dense mathematics to explain what I have understood of its basic ideas. To fully understand two curve models you would need to read a lot more of even denser mathematics. If you do not have the patience for that, you should just ignore this post.
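For readers who want just the flavour of the subject before (or instead of) reading the note, the central idea can be compressed into one formula: project forward LIBOR off one curve and discount off the OIS curve. In the simplest single-currency setting, and glossing over the subtleties that the note actually spends its pages on, the value of a swap that pays a fixed rate K and receives floating looks roughly like:

```latex
% Dual-curve valuation sketch: accrual fractions \delta_i, payment dates T_i
V_{\mathrm{swap}}(0) \;\approx\; \sum_{i=1}^{n} \delta_i \,\bigl[ F_i(0) - K \bigr]\, P_{\mathrm{OIS}}(0, T_i)
```

Here P_OIS(0, T_i) is the discount factor read off the OIS curve, while F_i(0) is the forward LIBOR for the i-th period implied by a separate projection curve. In the old single-curve world both came from the same LIBOR curve and the forwards were pinned down by the discount factors; once the two curves are allowed to differ, that link breaks, which is why even a plain vanilla swap has become harder to value than it used to be.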
What I would really love is for some of my readers to provide comments and suggestions to improve this note and correct any errors that might be there.
Posted at 3:49 pm IST on Fri, 20 Aug 2010 permanent link
Categories: bond markets, derivatives, post crisis finance, risk management
RBI on new bank licences in India
I have a column in today’s Financial Express about the RBI’s discussion paper on new bank licences.
The Reserve Bank of India (RBI) has begun a very open and transparent process of thinking about new bank licences, with a discussion paper that outlines the key issues, presents the pros and cons of the alternatives and also highlights the international experience.
While discussing these important issues, it is necessary to keep two things in mind. First, we must learn the right lessons from the experience of the new bank licences given out in the post-reforms period. Second, the global financial crisis has changed the way we think about bank regulation and competition.
Of the 10 new banks licensed in the first phase, the majority were failures in the broadest sense, but a few became outstanding successes. The viewpoint in the RBI discussion paper is that we must identify the causes of the failures and avoid making the same mistakes when granting new licences.
I am of the completely opposite persuasion. The success of the 1993 experiment was that enough licences were granted to permit a few success stories to emerge, despite a low success rate. Capitalism to me is about liberal experimentation and ruthless selection. What is wonderful about the 1993 experiment is that the failures (although many) were on a small scale and were (with one exception) quite painless, while the successes were outstanding. This makes for an extremely favourable risk-reward ratio.
While handing out new licences, the goal should not be to avoid failures; it should be to maintain the same attractive risk-reward ratio. I believe that this again requires the same approach—granting many licences, allowing the market to weed out failures at an early stage, and giving enough freedom to allow the successes to bloom.
It is impossible to figure out right now what will or will not work in the emerging banking environment. It is very unlikely that a successful bank emerging from the new set of licences will be a clone of, say, HDFC Bank. HDFC Bank and its peers succeeded by identifying what the then existing Indian and foreign banks were not doing well or not doing at all, and then setting out to deliver that with high levels of efficiency. But thanks to their very success, that space has now become overcrowded and hypercompetitive. A new bank starting today will have to find a new space in which to make its mark.
No regulator can predict what that new business model will be. As Hayek once wrote: “Competition is valuable only because, and so far as, its results are unpredictable and on the whole different from those which anyone has, or could have, deliberately aimed at.”
What is important is to keep failures small and manageable, and the way to do that is to allow banks to start small. The RBI discussion paper veers towards allowing only large banks in the belief that this will keep out people who are not serious. This is a mistake that financial regulators around the world appear to make.
I still remember that at the height of the Asian Crisis, one of the few healthy and solvent Indonesian banks was one of their smallest banks (Bank NISP). But the Indonesian central bank’s response (probably encouraged by the IMF) was to impose one of the highest minimum capital requirements in the world. It would have been utterly hilarious were it not so tragic.
Equating money with seriousness is a misconception unique to the financial elite. The rest of the world does not think that a student who has got admission by paying a large donation or capitation fee is a more serious student than the one who came in on the merit list. But financial regulators in India and elsewhere have an abiding belief in the ennobling power of money. Sebi has also been talking about increasing capital requirements for its regulatees.
I believe, on the other hand, that the global financial crisis has indicated that in the world of finance, size is evil in itself. Simon Johnson’s brilliant new book 13 Bankers: The Wall Street Takeover and the Next Financial Meltdown argues for a size limit of “no more than 4% of GDP for all banks and 2% of GDP for investment banks.”
By this standard, which I consider quite reasonable, India has seven banks above the size limit, including one that is almost 20% of GDP (I have taken the bank asset data from the RBI’s Statistical Tables Relating to Banks in India, 2008-09). Of these seven banks, only one is in the private sector, but two other large private banks are growing fast enough to cross the size limit in the next few years.
I believe, therefore, that RBI would grant a large number of new bank licences that will rapidly bring down the concentration in the banking system. India needs a lot of banks that are small enough to fail and fewer that are too big to fail.
Somewhere in the long chain from the keyboard to the printed newspaper what should have been a “should” or a “must” in the beginning of the last paragraph became “would.” Who am I to predict what the RBI would or would not do? I do hope however that my unintended prediction turns out right!
Posted at 10:13 am IST on Fri, 13 Aug 2010 permanent link
Categories: banks, regulation
Has the greatest financial risk gone away?
I have long argued that the greatest global financial risk is not toxic derivatives or bad loans – it is the unnerving possibility that P=NP. P≠NP is a conjecture about an abstruse problem in mathematics, but too much of computer security depends on it. It is likely that if P=NP, then many financial assets that are recorded as electronic entries could suddenly evaporate because those entries could all be hacked. Since almost all financial assets today are in electronic form, that would be the end of finance as we know it.
During the last couple of days, a purported proof that P≠NP has been circulating on the web (hat tip Bruce Schneier). The hundred-page paper by Vinay Deolalikar of HP Research Labs, Palo Alto utilizes and expands “upon ideas from several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics” to obtain the purported proof. We still do not know whether the proof is correct (see here and here).
It reminds me of the early days of Wiles’s proof of Fermat’s last theorem and Perelman’s proof of the Poincaré conjecture. Everybody agrees that it is a serious attempt, but nobody yet knows whether the proof is right. But if Deolalikar is right, the biggest financial risk of all has gone away.
Posted at 5:38 pm IST on Tue, 10 Aug 2010 permanent link
Categories: mathematics, technology
RBI proposes CDS market by dealers and for dealers
The Reserve Bank of India has released the report of the Internal Group on Introduction of Credit Default Swaps for Corporate Bonds in India.
What is proposed is a market by dealers and for dealers. Users can only buy CDS protection, and they have to buy them from dealers (banks and other regulated entities) who are the only people allowed to sell CDS. But the most diabolical recommendation is the following:
The users can, however, unwind their bought protection by terminating the position with the original counterparty. ... Users are not permitted to unwind the protection by entering into an offsetting contract. [Paragraph 2.7.6(ii) on page 19]
This leaves the unwinding users at the complete mercy of the original dealer from whom they bought CDS protection – that dealer can fleece the users knowing full well that they cannot go elsewhere. Under these terms, it would be utterly imprudent for a company to use CDS at all. Well designed corporate risk management policies should demand the availability of competitive quotes both at inception and at unwind, and should therefore completely prohibit the use of the proposed CDS market. Of course, India has a number of imprudent companies with poor risk management policies; perhaps the market proposed by the RBI is suitable only for them.
The other frightening part of the proposals is that at a time when the entire world is worried about the dangers of an opaque CDS market, the report envisages the creation of a CDS market without a trade repository, let alone a clearing mandate. The report envisages a trade reporting platform at some unspecified future date, but the establishment of this platform is not a precondition for CDS trading to begin. As far as clearing is concerned, the report makes the right noises, but it is clear that the RBI is not very keen on this.
Even when the trade repository starts functioning, it is unclear what transparency it would bring. First of all, the recommendation uses the word “may” which deprives it of operational significance:
The reporting platform may collect and make available data to the regulators for surveillance and regulatory purposes and also publish, for market information, relevant price and volume data on CDS activities such as notional and gross market values for CDS reference entities broken down by maturity, ratings etc., gross and net market values of CDS contracts and concentration level for major counterparties. [Paragraph 4.2.1 page 40]
Second, the report provides a trade reporting format (Form I in Annex IV) and this format does not include any data on prices at all. This means that even when the reporting platform starts working, it would not provide price transparency even on a post trade basis. What more could the dealer wish for when it comes to fleecing the customer?
One relatively minor issue that I am not able to figure out is whether the RBI intends CDS to be used to hedge loans and not only bonds. The report clearly states that only bonds can be reference obligations for CDS, but it is silent on whether loans can be deliverable obligations. Some parts of the report appear to have been written vaguely, perhaps deliberately, to allow loans to be hedged. For example, “The users can buy CDS for amounts not higher than the face value of credit risk held by them” (Paragraph 2.7.6(i), page 19). That would allow loans to be hedged, and what is deliverable would presumably be decided by the Determination Committee, which can be counted on to go with the banks on this issue. Whether loans can be hedged is not terribly important, but if the intention is to permit it, why not say so explicitly?
Coming back to the important prudential issue, I believe that India needs a CDS market, but I am concerned that a CDS market as proposed by the RBI would create more systemic risks than it would eliminate. If these are the only terms on which a CDS market can be had, it would be better for the country that we do not create such a market at all.
Posted at 9:11 pm IST on Mon, 9 Aug 2010 permanent link
Categories: bond markets, derivatives
Criticism of monetary policy
There has been a lively debate in India about senior central bank officials criticizing monetary policy decisions in which they may have participated. This debate has tended to focus on the harm that such alleged indiscretion can do, while I think the important question is how to design the conduct of monetary policy so that open debate does not cause harm.
Consider this passage from a paper published last month by a member of the US FOMC, the committee that decides monetary policy in that country:
The U.S. is closer to a Japanese-style outcome today than at any time in recent history. In part, this uncomfortably close circumstance is due to the interest rate policy being pursued by the FOMC.
Or this from a UK MPC member who last month titled his speech provocatively as “How long should the song remain the same?”:
The normal monetary policy reaction to a sustained period of above target inflation would be to tighten policy, to create demand conditions which are more conducive to restraining price increases and bringing inflation back to target. But so far, the Committee has not supported that course of action – and is keeping monetary policy extremely loose. ...
Last month, however, I dissented from this approach and voted for a small rise in interest rates. And in today’s speech I want to set out the thinking behind my view of current economic prospects and the implications for UK monetary policy that led me to that decision. ...
The MPC has a clear remit, which is to keep inflation on target at 2% over the medium term. ... we need to adjust the policy settings we put in place to head off the downside risks to inflation identified in the immediate aftermath of the big financial shocks in late 2008 and early 2009.
I think such open debate and criticism strengthens the conduct of monetary policy by allowing divergent points of view to be heard and considered. Alternative analytical frameworks can thus be developed and are available to the policy makers if and when they choose to change their mind. My knowledge of either the theory or practice of monetary policy is very limited, but I like to believe that monetary policy is closer to a science that progresses through informed debate, rather than a dark art that derives its mystique and efficacy from a veil of secrecy.
Unfortunately, criticism of monetary policy decisions by senior officials themselves can be prevented from doing harm only in a culture of transparency, where minutes of monetary policy deliberations are published openly so that dissenting voices are not misinterpreted by the markets.
Posted at 12:59 pm IST on Mon, 9 Aug 2010 permanent link
Categories: monetary policy
Time stamping by stock exchanges
I just finished reading an interesting study of the flash crash in the US on May 6, 2010 by Nanex, a data feed company that provides high frequency real-time trade and quote data for all US equity, option, and futures exchanges (over one million updates per second). The study was published in mid-June and linked by Abnormal Returns a week later, but I got around to reading it only today after several blogs talked about it two days ago.
The study claims:
Beginning at 14:42:46, bids from the NYSE started crossing above the National Best Ask prices in about 100 NYSE listed stocks, expanding to over 250 stocks within 2 minutes (See Part 1, Chart 1-b). Detailed inspection indicates NYSE quote prices started lagging quotes from other markets; their bid prices were not dropping fast enough to keep below the other exchange’s falling offer prices. The time stamp on NYSE quotes matched that of other exchange quotes, indicating they were valid and fresh.
With NYSE’s bid above the offer price at other exchanges, HFT systems would attempt to profit from this difference by sending buy orders to other exchanges and sell orders to the NYSE. Hence the NYSE would bear the brunt of the selling pressure for those stocks that were crossed.
Minutes later, trade executions from the NYSE started coming through in many stocks at prices slightly below the National Best Bid, setting new lows for the day. (See Part 1, Chart 2). This is unexpected, the execution prices from the NYSE should have been higher -- matching NYSE’s higher bid price, unless the time stamps are not reflecting when quotes and trades actually occurred.
If the quotes sent from the NYSE were stuck in a queue for transmission and time stamped ONLY when exiting the queue, then all data inconsistencies disappear and things make sense. In fact, this very situation occurred on 2 separate occasions at October 30, 2009, and again on January 28, 2010. (See Part 2, Previous Occurrences).
If this is really true, then instead of criticizing only high frequency traders, we must also direct some of the blame at the exchanges, which behaved irresponsibly. Why can the exchanges not provide time stamps both for when a quote entered the queue and for when it exited the queue?
Organizations with none of the self regulatory responsibilities of an exchange do this kind of thing routinely. One of the things that I read today was this study about the dissemination of press releases at public websites. Yahoo! Finance provides two timestamps when it reports a press release on its web site – the first is the timestamp of the press release itself and the second is the timestamp of when it was published on Yahoo! Finance. By comparing these two timestamps, the study concludes that “the average delay was 83 seconds. The fastest was 24 seconds and the slowest was 237 seconds or almost 4 minutes. The median was 80 seconds.”
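Computing the dissemination delay from two such timestamps is trivial; here is a sketch with made-up times (the 83-second gap is chosen to match the reported average, not taken from any actual release):

```python
from datetime import datetime

# Hypothetical example: timestamp on the press release vs. time of publication on the aggregator
released = datetime(2010, 8, 4, 9, 30, 0)
published = datetime(2010, 8, 4, 9, 31, 23)

delay_seconds = (published - released).total_seconds()
print(f"Dissemination delay: {delay_seconds:.0f} seconds")  # 83 seconds in this made-up example
```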
It did not require a regulator framing rules for a public website to provide two timestamps in its stories. But perhaps exchanges will do the right thing only if they receive a direction from the regulator. And perhaps the regulator will find itself easily persuaded that this simple thing is too costly, complicated or confusing to implement.
Posted at 9:21 pm IST on Wed, 4 Aug 2010 permanent link
Categories: exchanges, regulation, technology
Risk Management for Derivative Exchanges
I wrote a chapter on the risk management lessons that the global financial crisis holds for derivative exchanges, for a book edited by Robert W. Kolb, Lessons from the Financial Crisis: Causes, Consequences, and Our Economic Future.
During the global financial crisis, no major derivative clearinghouse in the world encountered distress, while many banks were pushed to the brink and beyond. This was despite the exchanges having to deal with more volatile assets—equities are about twice as volatile as real estate, and natural gas is about 10 times more volatile than real estate. Clearly, risk management at the world’s leading exchanges proved to be superior to that of the banks. The global financial crisis has shown that the quality of risk management models does matter.
Three important lessons have emerged from this experience:
- The quality of risk management models can be measured along two independent dimensions: crudeness versus sophistication and fragility versus robustness. The crisis of 2007-2009 has shown that of these two dimensions, the second dimension (robustness) is far more important than the first dimension (sophistication).
- An apparent structural change in the economy and the financial markets may only be a temporary change in the volatility regime. Risk models that ignore this can be disastrous.
- Risk models of the 1990s, based on normal distributions, linear correlations, and value at risk, are obsolete not only in theory but also in practice (a rough illustration of this last point follows the list).
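A crude illustration of the last point: a model calibrated to a normal distribution understates tail losses relative to a heavier-tailed distribution with exactly the same standard deviation. The simulation below uses arbitrary assumptions of my own (unit-variance returns, Student-t with 3 degrees of freedom) and is not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two return series with the same standard deviation but very different tails
normal_returns = rng.standard_normal(n)
fat_returns = rng.standard_t(df=3, size=n)
fat_returns /= fat_returns.std()  # rescale to unit standard deviation

for label, r in [("normal", normal_returns), ("Student-t (df=3)", fat_returns)]:
    var_99 = -np.percentile(r, 1)        # 99% value at risk
    es_99 = -r[r <= -var_99].mean()      # expected shortfall beyond the 99% VaR
    print(f"{label:18s}  99% VaR = {var_99:.2f}   99% ES = {es_99:.2f}")
# The fat-tailed series has a larger VaR and a much larger expected shortfall,
# even though a volatility-based model would treat the two as identical.
```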
Most of the chapter deals with these lessons from the crisis of 2007-2009. In the final section, it argues that as derivative exchanges prepare to trade and clear ever more complex products, it is important that they refine and develop their risk models even further so that they can survive the next crisis.
The chapter is based largely on a paper that I wrote in February 2009.
Posted at 1:12 pm IST on Mon, 2 Aug 2010 permanent link
Categories: derivatives, exchanges, post crisis finance, risk management, statistics
SEC No Action Letter on Rating Non Disclosure
Updated July 27, 2010: Added link and corrected title of the Reform Act.
The US SEC issued a “No Action Letter” last week to negate a key provision of the Dodd-Frank Wall Street Reform and Consumer Protection Act on the day that it came into force. The “No Action Letter” is self explanatory:
Items 1103(a)(9) and 1120 of Regulation AB require disclosure of whether an issuance or sale of any class of offered asset-backed securities is conditioned on the assignment of a rating by one or more rating agencies. If so conditioned, those items require disclosure about the minimum credit rating that must be assigned and the identity of each rating agency. Item 1120 also requires a description of any arrangements to have such ratings monitored while the asset-backed securities are outstanding.
Effective today, Section 939G of the Dodd-Frank Act provides that Rule 436(g) shall have no force or effect. As a result, disclosure of a rating in a registration statement requires inclusion of the consent by the rating agency to be named as an expert. We note that the NRSROs have indicated that they are not willing to provide their consent at this time. In order to facilitate a transition for asset-backed issuers, the Division will not recommend enforcement action to the Commission if an asset-backed issuer as defined in Item 1101 of Regulation AB omits the ratings disclosure required by Item 1103(a)(9) and 1120 of Regulation AB from a prospectus that is part of a registration statement relating to an offering of asset-backed securities.
This no-action position will expire with respect to any registered offerings of asset-backed securities commencing with an initial bona fide offer on or after January 24, 2011.
The relevant portion of Rule 436 is as follows:
(a) If any portion of the report or opinion of an expert or counsel is quoted or summarized as such in the registration statement or in a prospectus, the written consent of the expert or counsel shall be filed as an exhibit to the registration statement and shall expressly state that the expert or counsel consents to such quotation or summarization.
(g) Notwithstanding the provisions of paragraphs (a) and (b) of this section, the security rating assigned to a class of debt securities, a class of convertible debt securities, or a class of preferred stock by a nationally recognized statistical rating organization ... shall not be considered a part of the registration statement.
I can understand a “No Action Letter” that says that, as an interim measure, the identity of the rating agency need not be disclosed, but I am amazed to find that the SEC is allowing the entire “ratings disclosure” to be omitted. The fact that an issue is conditioned on a minimum rating requirement is, I think, a material fact.
Also, I do not understand why the SEC cannot simply state that the rating agency shall not be regarded as an expert and require the registration statement to make the same statement. Rating reports should be regarded as being in the same category as press reports and editorials. Ultimately, the SEC should simply abolish the whole category of nationally recognized statistical rating organizations (NRSROs).
I have blogged about rating agency regulatory reform several times in the last three years:
- Non-use of ratings in SEC regulations
- Stocktaking on the use of credit ratings
- Report on Rating Agency Regulation in India
Posted at 3:03 pm IST on Tue, 27 Jul 2010 permanent link
Categories: credit rating, regulation
Time for a Financial Sector Appellate Tribunal
I wrote a column in the Financial Express today on why a Financial Sector Appellate Tribunal is superior to the bureaucratic solution created by last month’s ordinance to deal with turf battles between financial sector regulators.
The old saying that “your freedom to swing your fist ends where my nose begins” is as true of regulators as it is of individuals. In a regime of multiple regulators, the autonomy of each regulator is effectively limited by the autonomy of other regulators. What this means is that regulatory autonomy is a delusion and regulatory heteronomy is the reality.
The only real question is whether this heteronomy should be judicial or bureaucratic. I argued for the judicial option in these columns four months ago (‘Fill the gaps with apex regulator’, FE, March 19). Some degree of competition between regulators is a healthy regulatory dynamic, but ultimately any dispute between two regulators must be resolved in the courts.
My recommendation was based on the well-established proposition that the legislature frames laws, the judiciary interprets them and the executive implements the law as so interpreted. If there is a dispute about a law, the judiciary can step in and interpret the law or the legislature can step in and rewrite the law to eliminate the ambiguity. The executive has to await guidance from either of these two branches. I realise that this principle is perhaps totally old-fashioned in an environment where all three branches of government are increasingly inclined to step on each other’s turf.
However, the judicial option at least had the advantage of being acceptable to the regulators. Three months ago, when the government suggested that the dispute between Sebi and Irda regarding the regulation of Ulips be resolved by the court, none of the regulators complained about loss of regulatory autonomy.
Last month, however, the President promulgated the Securities and Insurance Laws (Amendment and Validation) Ordinance, 2010, which not only settled the Ulips dispute in favour of Irda legislatively, but also provided a new bureaucratic arbitration mechanism for certain future disputes.
Most of the regulators are upset with this on the ground that it undermines their autonomy. This is not quite the correct way of looking at it because what it does is to replace judicial arbitration of disputes by bureaucratic arbitration. A better reason for scepticism is that, in general, bureaucratic arbitration is inferior in terms of process and in terms of outcomes.
The drafting of the ordinance itself is a good example of how bureaucratic processes tend to go wrong. The intention of the new section 45Y that has been inserted into the RBI Act is to ensure that future disputes can be resolved quickly. However, as one reads the section, one realises that this section is hopelessly inadequate.
First of all, section 45Y deals only with instruments. It essentially says that if any difference of opinion arises as to whether a certain instrument is a hybrid or composite instrument and falls under the jurisdiction of RBI, Sebi or Irda, then such difference of opinion shall be referred to a joint committee consisting of the finance minister, two top finance ministry officials and the key financial regulators.
Because the Ulips dispute was about a certain instrument, the government created a statute to deal with disputed instruments. What happens if the next dispute is about institutions and intermediaries? For example, RBI may want to regulate as an NBFC an entity that Sebi regulates as a capital market intermediary. Section 45Y is helpless to deal with this dispute because the dispute is not about instruments.
The second problem with the statute is that it says: “The Joint Committee shall follow such procedure as it may consider expedient and give, within a period of three months... its decisions thereon to the Central Government.” One would have liked to see an explicit provision of decision making by majority or qualified majority. The fundamental problem with the existing HLCC is its quasi-consensual and secretive procedure and its unwillingness to rely on transparent voting. The joint committee inherits this fatal weakness.
The third problem is that the ordinance provides that the decision of the joint committee shall be binding on the regulators—RBI, Sebi, Irda and PFRDA. It does not say that the decision is binding on anybody else. In particular, it is not binding on any of the regulated entities.
Suppose, for example, the joint committee decides that a particular product offered by a bank is actually a security that falls under the jurisdiction of Sebi. If Sebi then imposes a penalty on the bank, the latter could well go to court challenging the jurisdiction of Sebi. Neither the bank nor the court is bound by the decision of the joint committee. The decision is binding on RBI, but surely RBI cannot impose a penalty for violation of a Sebi regulation.
I remain convinced that when we have swinging regulatory fists and bleeding regulatory noses, a judicial solution is far more viable and sensible than section 45Y. The time for a Financial Sector Appellate Tribunal is now.
Posted at 10:34 am IST on Wed, 21 Jul 2010 permanent link
Categories: law, regulation
RBI on India and the Global Financial Crisis
The Report on Currency and Finance 2008-09 published by the Reserve Bank of India this month is on the “Global Financial Crisis and the Global Economy.” Well over two-thirds of this 380 page report is about the global crisis itself and does not contain anything new. But about a hundred pages are devoted to the impact of the global crisis on India and to the policy responses in India.
There are a number of interesting empirical analyses in these two chapters of the report. Well, there is an occasional piece of silly econometrics like the regression in levels between trending variables in footnote 9 on page 277; the absurdly high r-square of 0.9981 should have alerted the authors to the possibility (near certainty?) that this is a spurious regression. However, most of the empirical analyses do appear to be sound econometrically.
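To see why an R-squared of 0.9981 in a levels regression between trending variables proves very little, here is a minimal sketch (simulated data, not the RBI's series) that regresses two unrelated trending series on each other and then repeats the exercise in first differences:

```python
import numpy as np
import statsmodels.api as sm

# Two unrelated series that merely share an upward drift: any levels relationship is spurious.
rng = np.random.default_rng(0)
n = 500
x = np.cumsum(0.5 + rng.standard_normal(n))
y = np.cumsum(0.3 + rng.standard_normal(n))

# Regression in levels: R-squared is typically very close to 1 and t-statistics look "significant".
levels = sm.OLS(y, sm.add_constant(x)).fit()
print("Levels R-squared:          ", round(levels.rsquared, 4))

# Regression in first differences: the apparent relationship disappears.
diffs = sm.OLS(np.diff(y), sm.add_constant(np.diff(x))).fit()
print("First-difference R-squared:", round(diffs.rsquared, 4))
```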
A few results that I found interesting:
- “An empirical analysis confirmed that the regional stock markets in Asia including India, Hong Kong, and Singapore and global markets such as the US, UK, and Japan shared a single long-run co-integrating relationship in terms of stock price indices measured in US dollars rather than the local currencies (Table 5.10). The Indian market held the key to this integration process. This was evident from the analysis that, excluding India, the other five stock markets did not show a co-integrating relationship. The coefficients of the long-run co-integration vector showed that the impact of the global markets on the Indian stock market was more pronounced than the impact of the regional markets.” (page 213-214)
- “a bivariate VAR model revealed that there was a significant Granger causal relationship from FII flows to the Indian stock market. The feedback causality from the BSE index to FII flows was also significant, albeit at a 10 per cent level of significance. ... FII investment and money supply do exert significant influence on the movements of stock markets in India; while the relationship is the other way round in the case of investments by mutual funds wherein stock markets cause their variations.” (page 216-217). I am aware of some earlier studies that indicated Granger causality ran the other way, so it would be useful to replicate this result over longer time periods (a sketch of how such a bivariate test can be run appears after this list). Unfortunately, here as in most of their other regressions, the RBI does not indicate the sample period used.
- The report asserts that the financial stress index (FSI) for India “exhibited synchronised movements with that of the US, Western Europe, Japan and aggregate advanced economies (Figures V.20 & V.21). This either shows that financial stress in India and other advanced countries is driven by a common factor or that transmission of stress to Indian markets from advanced markets is contemporaneous.” (page 231-232). Unfortunately, the report does not present any statistical analysis regarding this and relies on the visual evidence of a chart plotting the FSI in India and other countries. Looking carefully at this chart, however, it appears that the only synchronized movement was in October 2008 (after Lehman). Neither before nor after do I see a synchronized movement in the chart.
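As an illustration of the kind of bivariate Granger test referred to above, the sketch below uses statsmodels on simulated placeholder series; the names fii_flows and bse_returns are mine, not the RBI's data, and a real replication would need the actual series and a stated sample period.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated placeholder data standing in for daily FII flows and BSE index returns.
rng = np.random.default_rng(1)
n = 1000
fii_flows = rng.standard_normal(n)
bse_returns = np.empty(n)
bse_returns[0] = rng.standard_normal()
for t in range(1, n):
    # Build in some dependence on lagged flows so the test has something to detect.
    bse_returns[t] = 0.3 * fii_flows[t - 1] + rng.standard_normal()

data = pd.DataFrame({"bse_returns": bse_returns, "fii_flows": fii_flows})

# Tests the null that the second column does NOT Granger-cause the first column.
grangercausalitytests(data[["bse_returns", "fii_flows"]], maxlag=5)
```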
Posted at 2:40 pm IST on Fri, 9 Jul 2010 permanent link
Categories: crisis
Drunken trading and risk limits
The Financial Services Authority of the UK put out an order last month regarding a drunken broker (Steven Perkins) who bought $520 million of crude oil futures sitting at home at night with his laptop (hat tip Finance Professor).
I find it amazing that somebody sitting at home with a laptop between 1:00 am and 4:00 am can execute over 2,000 buy trades worth over half a billion dollars when his broking firm is essentially an execution-only oil brokerage, and Perkins himself (like almost all other brokers in the firm) was barred from doing proprietary trading.
What does it say about the risk management and control systems at the brokerage firm (PVM)? The FSA explicitly says that it “makes no criticism of PVM”:
Mr Perkins’ behaviour was contrary to PVM’s policies and procedures and the FSA makes no criticism of PVM in this notice. Mr Perkins was immediately suspended by PVM on 30 June 2009 and his employment later terminated.
I would however think that there are issues of risk management that cannot be wished away. The Telegraph reports that the transaction caused a loss to PVM of $9.8 million and resulted in the brokerage reporting a loss of $7.6 million for the year. The implication is that the normal profits of the firm were $2.2 million. A drunken broker supposed to execute only client transactions lost in a single night an amount which was more than four times the normal annual profits of the entire brokerage firm!
I think that financial firms need better risk management systems and need to think carefully about operational risk. When Perkins lied to his company that the trade was for a client, he did not claim that it was for a large oil company; he said that it was for a “local trader”, which is how independent oil traders are referred to. There should have been counterparty risk controls kicking in when a half-billion-dollar trade is made in the dead of night for a client who is a small-time local trader. Some alerts should have been triggered when the purchases by a single broker during the three hour window were more than 17 times the average daily volume of the entire market for this period of the night.
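As a sketch of the kind of controls I have in mind (the thresholds and numbers below are hypothetical, not anything PVM or the FSA describe), a broker's systems could flag this sort of order flow as it builds up:

```python
from datetime import time

def overnight_trade_alerts(client_qty, market_avg_qty_for_window, trade_time,
                           client_type, client_limit,
                           night_start=time(22, 0), night_end=time(6, 0),
                           volume_multiple=5.0):
    """Return hypothetical alerts for one client's cumulative buying in a time window."""
    alerts = []
    is_night = trade_time >= night_start or trade_time <= night_end
    if is_night and client_qty > volume_multiple * market_avg_qty_for_window:
        alerts.append("Night-time volume alert: client flow is many times normal market volume")
    if client_qty > client_limit:
        alerts.append("Counterparty limit alert: flow exceeds the client's credit limit")
        if client_type == "local_trader":
            alerts.append("Suitability alert: very large position attributed to a small local trader")
    return alerts

# Hypothetical example: a 'local trader' accumulating far more than the market normally trades at night.
print(overnight_trade_alerts(client_qty=7_000_000, market_avg_qty_for_window=400_000,
                             trade_time=time(2, 30), client_type="local_trader",
                             client_limit=100_000))
```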
Interestingly, PVM is owned by senior brokers themselves and the poor risk management cannot be attributed to corporate governance issues. I think it reflects the failure of systems and processes at many financial firms to adjust to the modern reality of round-the-clock high-speed electronic trading.
Posted at 11:36 am IST on Fri, 2 Jul 2010 permanent link
Categories: risk management
Is UK imitating Ireland on Financial Regulation?
There is too little detail in the new UK plan (also here) to abolish the Financial Services Authority (FSA) by folding it into a subsidiary of the Bank of England. But the more I think about it, the more it looks like the pre-crisis Irish model.
If imitation is the sincerest form of flattery, Ireland must be feeling quite flattered right now. It is a different matter that the Irish Central Bank itself in its post mortem of the crisis now thinks that their model had nothing to commend it:
Though few would now defend the institutional structure invented for the organisation in 2003, it would be hard to show that its complexity materially contributed to the major failures that occurred. (page 42)
The grass is always greener on the other side!
Posted at 9:09 pm IST on Fri, 18 Jun 2010 permanent link
Categories: regulation
Why market surveillance can no longer be left to the exchanges
I wrote a column in the Financial Express today arguing that the financial market regulators need to get directly involved in real time market surveillance.
Traditionally, securities regulators globally have regarded the exchanges as the front line regulators with primary responsibility for market surveillance. As a result, regulators have traditionally not invested in the computing resources and the human capital required to perform real time surveillance themselves. A number of developments are making this model unviable in the developed markets and the same factors are at work, a little more slowly, in India as well.
I think it is time for Indian regulators like Sebi, FMC and RBI to develop in-house real time market surveillance capabilities rather than rely on the capabilities that may currently exist at the exchanges or exchange-like entities that they supervise (NSE, BSE, MCX, NCDEX, NDS).
I believe there are two key factors that make this regulatory shift necessary. First is the dramatic change in the nature of exchanges themselves. In the past, exchanges were regarded as ‘utilities’ providing key financial infrastructure and regulatory services. In recent years, they have evolved into businesses just like any other financial services business. Many observers in India (including some of the exchanges themselves) have been concerned about this transformation, but this is a global phenomenon and it is delusional to deny this reality. Concomitantly, there has been a blurring of the line between exchanges and brokers. Globally, alternative trading systems and dark pools have gained market share in recent years, and the operators of these systems are half way between traditional exchanges and large broker dealers, in terms of their business models and regulatory incentives.
In India, too, we have seen the blurring of the line between exchanges and non-exchanges. Examples include the subsidiaries of regional stock exchanges that trade on national exchanges; the exchanges in the commodity space whose promoters had or have large trading arms; and RBI regulated entities that perform many functions of an exchange but are not legally classified as exchanges.
The second and even more important factor is the rise of algorithmic and high frequency trading that links different exchanges together at much shorter time scales than in the past. Each exchange looking only at the trading in its own system has only a very limited view of what is happening in the market as a whole. It becomes very much like the story of the six blind men and the elephant.
The best example of this is the flash crash in the US on May 6, 2010. The US SEC, which like other regulators had never dirtied its hands with real time surveillance, found itself struggling to figure out what happened in those few turbulent minutes on that day. In an interim report, the SEC stated: “To conduct this analysis, we are undertaking a detailed market reconstruction, so that cross-market patterns can be detected and the behaviour of stocks or traders can be analysed in detail. Reconstructing the market on May 6 from dozens of different sources and calibrating the time stamps from each source to ensure consistency across all the data is consuming a significant amount of SEC staff resources. The data are voluminous and include hundreds of millions of records comprising an estimated five to ten terabytes of information.”
This is what happens when a regulator leaves it to others to do its job, but is forced one day to do the job itself. Is it not scandalous that a systemically important institution like an exchange or a depository is not required to synchronise its clocks to a standard time (say GPS time) with an error of not more than a few microseconds at worst? Exchanges are willing to spend a fortune to bring down the latency of their trading engine to a millisecond or so to attract trading volume, but are unwilling to spend a modest amount to synchronise their clocks because nobody asked them to.
There is another important hidden message in this. Modern finance is increasingly high frequency finance and those who do not dirty their hands with it become increasingly out of touch with the reality of financial markets. Doctoral students in finance today, for example, have to learn the econometrics of high frequency data and grapple first hand with the challenges of handling this data.
Unless regulators collect this high frequency data and encourage their staff to explore it, they risk becoming progressively disconnected from the reality that they are supposed to regulate. Interestingly, the US derivatives regulator, CFTC, is moving rapidly to develop this capability. They already collect all trade data on a T+1 basis and run their own surveillance software on that data. Over the next year, they hope to enhance this to receive the entire order book data from the exchanges that they regulate. All regulators worldwide need to move in that direction.
It is true that this will be difficult, expensive and time-consuming for Indian regulators. That is all the more reason to start immediately.
Posted at 12:19 pm IST on Wed, 16 Jun 2010 permanent link
Categories: equity markets, insider trading, manipulation
Why do we not have high quality investigative reports in India?
As the global financial crisis rolls on, I have read a bunch of outstanding investigative reports from around the world:
- Perhaps the best of these is the April 2010 report of the Special Investigative Commission set up by the parliament of Iceland (the Althingi) to investigate and analyse the processes leading to the collapse of the three main banks in Iceland. The Commission consisted of a Supreme Court judge, the Parliamentary Ombudsman and an academic. Only a small part of the report – the executive summary (18 pages), Chapter 18 (65 pages) and Chapter 21 (160 pages) – is available in English. But what is available is truly impressive in terms of the detailed factual presentation, the dispassionate analysis of what went wrong and the balanced indictment of those who were responsible.
- Much narrower in scope but equally impressive in terms of factual detail is the report of the Examiner appointed by the bankruptcy court for Lehman Brothers. At 2,200 pages with 8,000 footnotes, this document is a goldmine of authentic information about what happened at that one company.
- Another report that I liked was the external evaluation of the active management of Norway’s sovereign wealth fund commissioned by the government in response to the sharp fall in its market value during 2008. This study was carried out by three academics in the US and the UK. The response of Norway’s central bank (which manages the wealth fund) is also very interesting. Of course, Norway is no newcomer to transparency. During the crisis of 2008, everybody was reading the detailed reports that Norway had published about their banking crisis of the early 1990s.
Surprisingly, apart from Lehman, there are too few high quality official investigative reports about other US financial firms that have suffered badly in the crisis. We do not really know what happened at Merrill Lynch or Citi. We know a lot more about the problems at UBS for example thanks to the shareholders report produced at the insistence of the Swiss regulators. US regulators have been acting as if everything related to the crisis is a state secret. On AIG, we know a little more due to the efforts of the Congressional Oversight Panel, but even this picture is incomplete.
The US does however have an adversarial system of congressional testimony and all this testimony is available online. Apart from the Financial Crisis Inquiry Commission there are many congressional committees that have held hearings and released valuable information during the process. The quality of the official reports that emerge at the end is not that important. Years ago while studying what happened at Enron, I had found the same thing – the testimony and transcripts of the hearings are far more useful than the reports themselves.
While the investigative reports of the Nordic countries have been exemplary, and those of the US have been very good, India has simply been unable to produce data rich investigative reports of a quality useful to a researcher. A Joint Parliamentary Committee (JPC) was set up to investigate the securities scam of 1991, but the factual details in this report were not sufficient from a researcher’s point of view. The Indian parliamentary committees tend to hold closed door meetings and treat all testimony as confidential. The regulators have also not filled the void. In the early days of the scam, the Reserve Bank of India published the Janakiraman report which was rich in factual detail, but this report had a narrow remit. Even today, I do not think that we have a comprehensive, authentic data source on what happened in this scam two decades ago.
The situation is not any better in the Ketan Parikh scam that took place a decade ago in technology stocks. Again there was a JPC on this scam, but again the report was not data rich. Turning to more recent times, the Satyam fraud took place a year and a half ago and we still have no authentic information at all on what happened.
I do not believe that developing countries are incapable of producing good investigative reports. I remember being impressed with the Nukul Commission Report in Thailand in the aftermath of the Asian Crisis of 1997, but the report is not available online and I have not been able to refresh my memory. Outside of finance, the Truth and Reconciliation Commission in South Africa produced a series of reports with an enormous amount of factual detail and careful analysis.
India’s failure to produce outstanding investigative reports into financial disasters is something that needs to be remedied. As Andrew Lo has been arguing for some time now, such detailed post mortems are very important. Lo recommends that even the US should go much further than it is doing currently:
The most pressing regulatory change with respect to the financial system is to provide the public with information regarding those institutions that have “blown up”, i.e., failed in one sense or another. This could be accomplished by establishing an independent investigatory agency or department patterned after the National Transportation Safety Board, e.g., a “Capital Markets Safety Board”, in which a dedicated and experienced team of forensic accountants, lawyers, and financial engineers sift through the wreckage of every failed financial institution and produces a publicly available report documenting the details of each failure and providing recommendations for avoiding such fates in the future.
Posted at 4:19 pm IST on Fri, 11 Jun 2010 permanent link
Categories: crisis, investigation
How do regulators cope with terabytes of data?
Traditionally, securities regulators have coped with the deluge of high frequency data by not asking for the data in the first place. The exchanges are supposed to be the front line regulators and leaving the dirty work to them allows the US SEC and its fellow regulators around the world to avoid drowning under terabytes of data.
But the flash crash seems to be changing that. The US SEC had to figure out what happened in those few minutes on May 6, 2010. When it attempted to reconstruct the market using data from different exchanges, it ended up with nearly 10 terabytes of data. The SEC says in its joint report with the CFTC on the preliminary findings about the flash crash:
To conduct this analysis, we are undertaking a detailed market reconstruction, so that cross-market patterns can be detected and the behavior of stocks or traders can be analyzed in detail. Reconstructing the market on May 6 from dozens of different sources and calibrating the time stamps from each source to ensure consistency across all the data is consuming a significant amount of SEC staff resources. The data are voluminous, and include hundreds of millions of records comprising an estimated five to ten terabytes of information. (page 72)
It turns out that the CFTC which regulates the futures exchanges is well ahead in the learning curve as far as the terabytes of data are concerned:
The CFTC also collects trade data on a daily, transaction date + 1 (“T+1”), basis from all U.S. futures exchanges through “Trade Capture Reports.” Trade Capture Reports contain trade and related order information for every matched trade facilitated by an exchange, whether executed via open outcry or electronically, or non-competitively (e.g., block trades, exchange for physical, etc.). Among the data included in the Trade Capture Report are trade date, product, contract month, trade execution time, price, quantity, trade type (e.g., open outcry outright future, electronic outright option, give-up, spread, block, etc.), trader ID, order entry operator ID, clearing member, opposite broker and opposite clearing member, order entry date, order entry time, order number, customer type indicator, trading account numbers, and numerous other data points. Additional information is also required for options on futures, including put/call indicators and strike price, as well as for give-ups, spreads, and other special trade types.
All transactional data is received overnight, loaded in the CFTC’s databases, and processed by specialized software applications that detect patterns of potentially abusive trades or otherwise raise concern. Alerts are available to staff the following morning for more detailed and individualized analysis using additional tools and resources for data mining, research, and investigation.
Time and sales quotes for pit and electronic transactions are also received from the exchanges daily. CFTC staff is able to access the market quotes to validate alerts as well as reconstruct markets for the time periods in question. Currently, staff is working with exchanges to receive all order book information in addition to the executed order information already provided in the Trade Capture Report. This project is expected to be completed within the next year; at present such data remains available to staff through “special calls” (described below) requesting exchange data. (page B-15 in the Appendix)
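Purely as an illustration of how one such record might be represented (the field names follow the quoted description, but the structure and the values are my assumptions, not the CFTC's actual schema):

```python
from dataclasses import dataclass
from datetime import date, time
from decimal import Decimal

@dataclass
class TradeCaptureRecord:
    # A simplified, hypothetical subset of the fields listed in the CFTC's description.
    trade_date: date
    product: str
    contract_month: str
    execution_time: time
    price: Decimal
    quantity: int
    trade_type: str            # e.g. "electronic outright future", "block", "spread"
    trader_id: str
    clearing_member: str
    opposite_broker: str
    customer_type_indicator: str
    trading_account: str

record = TradeCaptureRecord(
    trade_date=date(2010, 5, 6), product="index future", contract_month="JUN10",
    execution_time=time(14, 45, 13), price=Decimal("1120.25"), quantity=75,
    trade_type="electronic outright future", trader_id="T123", clearing_member="CM01",
    opposite_broker="B456", customer_type_indicator="4", trading_account="ACCT-0001",
)
```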
However, the flash crash did not put the CFTC’s data handling abilities to the test because most of the action was in the cash equity market and the only action in the derivatives exchanges was in a handful of index futures and options contracts.
Finally, I am puzzled by the statement of the SEC quoted above that “calibrating the time stamps from each source to ensure consistency across all the data is consuming a significant amount of SEC staff resources.” Regulators should perhaps require that exchanges synchronize their computer clocks with GPS time to achieve accuracy of a few microseconds. With the exchange latency times close to a millisecond these days, normal NTP (internet) accuracy of 10 milliseconds or so is grossly inadequate. I would not be surprised if some exchanges do not even have formal procedures to ensure accuracy of their system clocks.
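A toy calculation shows why this matters: with trading engine latencies around a millisecond, a clock error at normal NTP accuracy is enough to scramble the apparent sequence of events across venues. The events and the 10 millisecond offset below are made up purely for illustration.

```python
# Illustrative only: event times in milliseconds on a common "true" timeline.
true_events = [
    ("venue_A", "large sell order hits book", 0.0),
    ("venue_B", "quotes pulled",              1.2),
    ("venue_A", "price drops",                2.5),
]

# Suppose venue B's clock runs 10 ms fast (roughly NTP-level error) while venue A is accurate.
clock_offset_ms = {"venue_A": 0.0, "venue_B": 10.0}
stamped = [(venue, label, t + clock_offset_ms[venue]) for venue, label, t in true_events]

print("True sequence:    ", [label for _, label, _ in sorted(true_events, key=lambda e: e[2])])
print("Recorded sequence:", [label for _, label, _ in sorted(stamped, key=lambda e: e[2])])
# The recorded timestamps suggest venue B reacted *after* venue A's price drop,
# reversing the apparent chain of causation.
```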
All of which goes to show that the traditional securities regulator strategy of not dirtying its hands with high frequency data is a big mistake. This should be a wake-up call for regulators around the world.
Posted at 6:00 pm IST on Thu, 3 Jun 2010 permanent link
Categories: regulation, technology
FASB says IFRS is for less developed financial reporting systems
The FASB’s criticism is buried inside a couple of hundred pages of dense accounting proposals, but it is unusually direct and clear:
What may be considered an improvement in jurisdictions with less developed financial reporting systems applying International Financial Reporting Standards (IFRS) may not be considered an improvement in the United States.
This is part of its description of why the FASB is putting aside its convergence project with IASB and is pushing ahead on its own on a new accounting standard for financial instruments. The IASB’s description of the divergence is more muted:
However, the ... efforts [of the FASB and IASB] to achieve a common and improved financial instruments standard have been complicated by the establishment of different project timetables to respond to their respective stakeholder groups in the light of the financial crisis.
The stumbling block in improving the accounting for financial instruments is not technical but political. The key ideas for reform were put forth in a report ten years ago by a Joint Working Group consisting of representatives from the FASB and IASB as well as standard setters from twelve other countries (Joint Working Group of Standard Setters, “Recommendations on Accounting for Financial Instruments and Similar Items”, FASB, Financial Accounting Series, No. 215-A December 22, 2000).
From there we have progressed to the pot calling the kettle black. The global financial crisis has not been good for the reputation of either the FASB or the IASB. Both have appeared vulnerable to pressure from the politicians and lobbying by the big banks.
Posted at 8:10 pm IST on Wed, 2 Jun 2010 permanent link
Categories: accounting
When is a foreign exchange swap not really a foreign exchange swap?
The answer is when it is a swap between the US Federal Reserve and a foreign central bank under the famed swap lines. Last month, the New York Fed described the operational mechanics of these swap lines in considerable detail in its publication Current Issues in Economics and Finance:
The swaps involved two transactions. At initiation, when a foreign central bank drew on its swap line, it sold a specified quantity of its currency to the Fed in exchange for dollars at the prevailing market exchange rate. At the same time, the Fed and the foreign central bank entered into an agreement that obligated the foreign central bank to buy back its currency at a future date at the same exchange rate. ...
The foreign central bank lent the borrowed dollars to institutions in its jurisdiction ... And the foreign central bank remained obligated to return the dollars to the Fed and bore the credit risk for the loans it made.
At the conclusion of the swap, the foreign central bank paid the Fed an amount of interest on the dollars borrowed that was equal to the amount the central bank earned on its dollar lending operations. In contrast, the Fed did not pay interest on the foreign currency it acquired in the swap transaction, but committed to holding the currency at the foreign central bank instead of lending it or investing it. This arrangement avoided the reserve-management difficulties that might arise at foreign central banks if the Fed were to invest its foreign currency holdings in the market.
What this means is that the foreign currency (say, the euro) that the Fed purportedly receives under the swap is completely fictitious because (a) the Fed earns no interest on the euros and (b) the euros are not available to the Fed if it wishes to lend the euros to a US bank or for any other purpose. In fact, in April 2009, the Fed entered into a different swap agreement with the ECB and other central banks to obtain foreign currency liquidity. This would not have been needed at all if the original swap had been a genuine swap.
The so-called swap is simply a loan of dollars to the foreign central bank. Why does the Fed want to call it a swap instead of a loan? I think the reason for this use (or rather abuse) of terminology is that a swap with a foreign central bank sounds politically more palatable than a loan to a foreign central bank. All the more so when the swap is for an unlimited amount!
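To make the point concrete, here is a small cash-flow sketch of a single drawing under such a line, following the mechanics in the Fed's description quoted above; the amounts, exchange rate, interest rate and term are entirely made up.

```python
# Hypothetical numbers for one swap drawing, following the mechanics quoted above.
spot_rate = 1.40                      # dollars per euro at initiation (assumed)
dollars_drawn = 10e9                  # foreign central bank draws $10 billion (assumed)
euros_delivered = dollars_drawn / spot_rate

# At maturity the SAME exchange rate applies, so the Fed takes no currency risk
# and earns nothing on the euros, which it neither lends nor invests.
dollars_returned = dollars_drawn
interest_to_fed = dollars_drawn * 0.01 * 84 / 360    # assumed 1% earned over an 84-day term

print(f"Euros held idle by the Fed:    {euros_delivered:,.0f}")
print(f"Dollars repaid at maturity:    {dollars_returned:,.0f}")
print(f"Interest passed on to the Fed: {interest_to_fed:,.0f}")
# Economically, the Fed's position is simply a dollar loan to the foreign central bank.
```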
This is another reminder that deceptive disclosure practices are not limited to companies like Enron or to sovereigns like Greece – they are all pervasive.
Posted at 9:10 pm IST on Tue, 1 Jun 2010 permanent link
Categories: derivatives, international finance, law
Am on vacation
I am on vacation for the rest of this month. There will be no posts during this period. I shall try to moderate comments during this period, but there are bound to be delays.
Posted at 10:18 pm IST on Tue, 11 May 2010 permanent link
Categories: miscellaneous
Grand daddy of algorithmic trading bites the dust?
The ten minutes of mayhem in the US stock market last Thursday may have involved the oldest form of algorithmic trading – the stop loss order. We do not often think of the stop loss order as algorithmic trading, but that is what it is. If its conditions are satisfied, it executes without seeking any confirmation from the person who placed the order.
These days, it is usually a computer which implements the stop loss order, but even in the old days when a human broker implemented it, the stop loss was an algorithm. Whether the hardware on which an algorithm runs is made of silicon or of carbon is totally immaterial.
When I blogged last week about the dangers of market orders, I did not realize that many of the market orders that executed at absurd prices on Thursday afternoon in the US might have been stop loss orders. When the stop loss limit is breached, the stop loss order becomes a market order and executes blindly against whatever bids are available. This can be a prescription for disaster in fast moving markets as an avalanche of stop losses can eat through the entire order book and execute at penny prices as well.
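A toy simulation of that mechanism (an entirely made-up order book, not data from May 6) shows how stop losses converted into market orders can walk down the bid side to stub quotes:

```python
# Illustrative only: a thin bid side of an order book as (price, quantity) pairs,
# with a "stub" quote far below the market.
bids = [(30.00, 200), (29.50, 150), (28.00, 100), (25.00, 100), (0.01, 1_000_000)]

def execute_market_sell(book, quantity):
    """Fill a market sell order against the best bids; return fills, the remaining book and last price."""
    fills, remaining, book = [], quantity, list(book)
    while remaining > 0 and book:
        price, size = book[0]
        take = min(size, remaining)
        fills.append((price, take))
        remaining -= take
        book[0] = (price, size - take)
        if book[0][1] == 0:
            book.pop(0)
    return fills, book, fills[-1][0]

# A stop loss triggered near $30 becomes a 400-share market order...
fills, book, last_price = execute_market_sell(bids, 400)
print("First order fills:", fills, "last price:", last_price)

# ...the lower last price triggers further stops, and the next wave of market orders
# eats through what is left of the book and executes at the stub quote.
fills2, book, last_price = execute_market_sell(book, 500)
print("Cascade fills:", fills2, "last price:", last_price)
```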
Ironically, stop loss orders might not only have been a big victim of Thursday’s “flash crash,” but might have been one of its major causes as well. Stop losses are inherently destabilizing as they aggravate the current trend. By demanding liquidity when it is least available, they degrade market quality. By making investors complacent about risks (the stop loss order is supposed to limit losses!), they tend to make investors reckless. All the more reason why these people ought not to have been bailed out by cancelling trades.
Posted at 10:13 pm IST on Tue, 11 May 2010 permanent link
Categories: equity markets, risk management
Market panics and bailout manias
People seem to be debating whether it was computers or humans that panicked and caused a temporary intraday drop of almost 10% in the US stock market yesterday. What we do know is that the bailout mania that followed was due entirely to humans. This is what the Nasdaq media release says:
The NASDAQ Stock Market had no technology or system issues associated with the trading that occurred between 2:00 and 3:00 p.m. ET today. The NASDAQ Stock Market operated continuously and its close process ran successfully.
In addition, there is no indication at this time that a NASDAQ market participant experienced a technological failure in connection with this event. NASDAQ has coordinated a process among U.S. Exchanges and therefore, pursuant to rule 11890(b), NASDAQ, on its own motion, will cancel all trades executed between 14:40:00 and 15:00:00 greater than or less than 60% away from the consolidated last print in that security at 14:40:00 or immediately prior. This decision cannot be appealed. NASDAQ has coordinated this decision with all other UTP Exchanges. NASDAQ will be canceling trades on the participant’s behalf.
Make no mistake. This is a bailout as bad and sordid as all the bailouts that we saw in the financial sector in 2008. It is bad because it reduces incentives for the firms to discipline their traders (or redesign their computer algorithms) to reduce the risk of such problems.
Anybody who uses a large market sell order instead of a marketable limit order during a falling market (and that too without looking at the order book) is begging to receive a price of zero for the stock. The market is happy to oblige. These people deserve the price that they got. There is no need for the exchange to ride to their rescue by cancelling their trades.
I wrote a few posts about this kind of thing four years ago.
Why have things not changed? Because it is so easy to bail out everybody; it is much harder to change things.
Posted at 2:19 pm IST on Fri, 7 May 2010 permanent link
Categories: behavioural finance, crisis
Goldman Sachs and the economic function of investment banks
I wrote a column in the Financial Express on what the securities fraud case against Goldman Sachs tells us about the economic function of investment banks.
Regardless of its ultimate outcome, the SEC’s case against Goldman Sachs alleging securities fraud has already transformed the debate on financial sector reforms in the US. More importantly, I believe the case has raised disturbing questions about the economic function performed by investment banks in modern financial markets.
At the centre of the SEC case is the Abacus deal that Goldman brought to market in early 2007. The structure was created at the request of the hedge fund, Paulson & Co, which wanted to bet on the collapse of the US housing market by taking a short position on subprime securities. Goldman created synthetic subprime securities through the Abacus vehicle and sold these to the German bank, IKB. Paulson took the opposite (short) position on these securities.
The prospectus of Abacus highlighted the role of a reputed CDO manager, ACA Management, in selecting the portfolio of subprime assets underlying the Abacus deal and gave full details of this portfolio. But the 196-page prospectus did not mention the fact that Paulson had played a role in selecting the portfolio. This non-disclosure is a key element in the SEC case against Goldman.
In addition, the SEC alleges that Goldman misled ACA about Paulson’s intentions. Apparently, ACA believed that Paulson intended to buy the equity (first loss) piece of Abacus and was, therefore, motivated to exclude truly bad assets from the portfolio. In reality, Paulson planned to take a short position in Abacus and wished to stuff it with the worst possible assets.
It is difficult to predict the outcome of the SEC case because there are few precedents for invoking the anti-fraud provisions of US securities law in similar situations. The SEC might be in uncharted waters here, but it is pursuing a civil case where the standards of proof are lighter. Moreover, Goldman would certainly not relish having to defend its unsavoury conduct in a jury trial.
It is true that during the crisis the SEC acquired a reputation for incompetence (for example, Madoff and Stanford), which makes people sceptical about the Goldman case as well, but the new director of enforcement, Robert Khuzami, whom the SEC hired last year, has a formidable reputation from his days as a federal prosecutor.
Interestingly, Goldman in its defence thinks of itself more as a broker-dealer in complex derivatives and less as an issuer or underwriter of the Abacus securities. Broker-dealers have no obligation to disclose the identity or motivations of either counterparty to the other. It is true that Goldman could have achieved the same economic effect as Abacus by intermediating a credit default swap (CDS) between IKB and Paulson, but that is not what it chose to do. It chose to issue securities in which a CDS was embedded.
I am, however, less interested in whether the SEC wins this case or not. I am more concerned about the role of investment banks like Goldman in modern financial markets. In an ideal, perfectly efficient market, buyers and sellers would deal with each other directly through an electronic limit order book without any gatekeepers or intermediaries. In reality, intermediaries are needed to solve the problem of information asymmetry where one side knows a lot more about the transaction than the other.
It follows that the value added by an investment bank is measured by the extent to which it reduces information asymmetry. Otherwise, it is only exploiting oligopolistic rents or earning the rewards of excess leverage made possible by implicit ‘too big to fail’ guarantees.
From this perspective, the major banks of the 19th century or early 20th century like Rothschilds, Barings or JP Morgan did serve an economically useful function. Academic studies have shown that sovereign bonds underwritten by these major banks during that period had significantly lower default rates than other sovereign bonds (Flandreau, et al, 2009, The End of Gatekeeping: Underwriters and the Quality of Sovereign Bond Markets, 1815-2007, NBER Working Paper 15128).
Similarly, financial historians tell us that 19th century investment banks like JP Morgan played a critical role in bridging the information asymmetry between US railroads and their British investors. To their contemporaries, rich and powerful bankers like the Rothschilds and the Morgans were among the many ugly faces of capitalism. But the hard facts show that while they did not claim to be doing God’s work, they did do something useful.
The Abacus case makes one wonder whether modern investment banks do play any such useful role. The Goldman defence asserts that sophisticated investors like ACA and IKB were capable of looking after their own interests and did not need help from Goldman or anybody else. If there are no information asymmetries to be resolved or if modern investment banks have too little reputational capital to resolve them, then it is not at all clear what economic function they perform in today’s highly liquid and sophisticated markets.
Posted at 12:02 pm IST on Wed, 5 May 2010 permanent link
Categories: financial history, fraud, regulation
Principles versus rules: HSBC Mutual Fund in India
An order passed by the Securities and Exchange Board of India (SEBI) on Friday regarding a mutual fund run by HSBC in India provides a fascinating example of the advantages of principles based regulation.
Indian regulations require that before a mutual fund makes a “change in any fundamental attribute” of any mutual fund scheme it should not only inform all unit holders but also give them a costless exit option (Regulation 18(15A) of the Mutual Funds Regulations). The regulations do not define what a fundamental attribute is, so that, absent any further “clarifications” by the regulator, we would have a very sensible principles based regulation.
In 2009, the HSBC Mutual Fund made the following changes in one of its mutual fund schemes, the “HSBC Gilts – Short Term Plan.”
- It changed the name of the scheme to “HSBC Gilt Fund”
- It changed the ceiling on the modified duration of the fund from 5 years to 15 years
- It changed the benchmark index from a sub index covering the 1-3 year maturity to a composite index covering all maturities.
Under a principles based regulation, there is no question that this would be a change in the fundamental attribute of the scheme. In fact, it changes the nature of the scheme so drastically that it is conceivable that many investors in the original scheme might not wish to remain invested in the changed scheme.
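A back-of-envelope calculation shows how drastic the duration change alone is, using the standard approximation that the percentage price change is roughly minus the modified duration times the yield change (the 100 basis point move below is an assumed scenario):

```python
# Approximate NAV impact of a parallel yield move at the old and new duration ceilings.
def approx_price_change(modified_duration, yield_change):
    return -modified_duration * yield_change

for ceiling_years in (5, 15):                             # old and new modified-duration ceilings
    impact = approx_price_change(ceiling_years, 0.01)     # assumed 100 basis point rise in yields
    print(f"Duration ceiling {ceiling_years:>2} years: approximate NAV impact {impact:.0%}")
# A fund run at the new ceiling can lose roughly three times as much for the same rate move.
```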
Unfortunately, the SEBI regulations were not truly principles based. Way back in 1998, SEBI issued clarifications that replaced the nice principles based regulation with a set of bright red lines by giving a laundry list of things that are fundamental attributes. Neither the change of name, nor the change in the modified duration, nor the change in the benchmark index figured in this list.
The regulator was forced to accept that HSBC was technically correct in claiming that it had not changed any fundamental attribute of its scheme.
The moral of the story is that while principles based regulation is genuinely hard both for regulator and regulatees, rules based regulation is often a farce.
Posted at 7:41 pm IST on Mon, 26 Apr 2010 permanent link
Categories: bond markets, regulation
Lehman and the Derivative Exchanges
The unredacted Volume 5 of the Lehman examiner’s report released earlier this week provides details about how CME handled the Lehman default by auctioning the positions of Lehman to other large dealers. The table below summarizes the data given in the report.
| Asset Class | Negative Option Value | Span Risk Margin | Total Collateral | Price paid by CME to buyer | Loss to Exchange | Percentage Loss to Exchange | Loss to Lehman |
|---|---|---|---|---|---|---|---|
| Energy Derivatives | 372 | 261 | 633 | 707 | 74 | 12% | 335 |
| FX Derivatives | -4 | 12 | 8 | 2 | -6 | -72% | 6 |
| Interest Rate Derivatives | 93 | 130 | 223 | 333 | 110 | 49% | 240 |
| Equity Derivatives | -5 | 737 | 732 | 445 | -287 | -39% | 450 |
| Agricultural Derivatives | -5 | 55 | 50 | 52 | 2 | 4% | 57 |
| Total auctioned by CME | 451 | 1195 | 1646 | 1539 | -107 | -6% | 1088 |
| Natural Gas Derivatives sold by Lehman itself | 482 | 129 | 611 | 622 | 11 | 2% | 140 |
| Grand Total | 933 | 1324 | 2257 | 2161 | -96 | -4% | 1228 |
The negative option value is as of the close of business before the Lehman bankruptcy, and the loss to Lehman is computed as the excess of the price paid by CME to the buyer over this negative option value. Futures positions are presumably assumed to have zero value after they have been marked to market. On the other hand, CME incurs a loss only if it pays a price in excess of the collateral provided by Lehman. For comparison purposes, the same computation is done for the positions sold by Lehman itself, though, in this case, the exchange does not make any profit or loss.
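The derived columns can be reproduced mechanically from the report's raw figures; the sketch below recomputes them for a few rows of the table as a consistency check.

```python
# Recompute the derived columns of the table above from the raw figures.
rows = {
    # asset class: (negative option value, total collateral, price paid by CME to buyer)
    "Energy Derivatives":        (372, 633, 707),
    "Interest Rate Derivatives": ( 93, 223, 333),
    "Equity Derivatives":        ( -5, 732, 445),
}

for name, (neg_option_value, collateral, price_paid) in rows.items():
    loss_to_exchange = price_paid - collateral      # positive: CME paid out more than it held
    pct_loss = loss_to_exchange / collateral
    loss_to_lehman = price_paid - neg_option_value  # excess paid to the buyer over the marked value
    print(f"{name}: exchange {loss_to_exchange:+d} ({pct_loss:+.0%}), Lehman {loss_to_lehman:+d}")
```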
What I find puzzling here is that in the case of interest rate derivatives, CME had to pay the winning dealer a price of about 1.5 times the collateral available. Had it not been for excess collateral in other asset classes, the CME might have had to take a large loss. Was the CME seriously undermargined, or was the volatility in the days after the Lehman default so high, or was this the result of a panic liquidation by the CME?
We do have an independent piece of information on this subject. LCH.Clearnet in London also had to liquidate Lehman’s swap positions amounting to $9 trillion of notional value. LCH has stated here and here that the Lehman “default was managed well within Lehman margin held and LCH.Clearnet will not be using the default fund in the management of the Lehman default.”
A number of questions arise in this context:
- Did LCH.Clearnet charge higher margins than CME? It is interesting in this context that only a few days ago, Risk Magazine quoted LCH as saying that one of its rivals (IDCH) charges too low a margin: “bordering on reckless.” But LCH did not make this claim about CME.
- During the week after Lehman’s default, was there a big difference in the price behaviour of the swaps cleared at LCH and the bond futures and eurodollar futures cleared at CME?
- Was Lehman arbitraging between swaps and eurodollar futures so that its positions in the two exchanges were in opposite directions? In this case, price movements might have produced a profit at LCH and a loss at CME.
- Were the proprietary positions of Lehman at CME more risky than its (customer?) positions at LCH which might have been more balanced?
- Did the “panic” liquidation by CME exacerbate the loss? LCH hedged the position over a period of about a week and then auctioned off a hedged book.
- Did dealers trade against the Lehman book after the CME disclosed the book to potential bidders a couple of days before the auction? Or did each dealer think that the others would trade against the book? This problem did not arise at LCH because only hedged books were auctioned and the unhedged book was not disclosed to others.
In the context of the ongoing debate about better counterparty risk management (including clearing) of OTC derivatives, I think the regulators should release much more detailed information about what happened. Unfortunately, in the aftermath of the crisis, it is only the courts that have been inclined to release information – regulators and governments like to regard all information as state secrets.
Posted at 7:56 pm IST on Sat, 17 Apr 2010 permanent link
Categories: bankruptcy, crisis, derivatives, exchanges, risk management
SEBI, IRDA and the courts
I wrote a column in the Financial Express today about letting the courts resolve disputes between two financial regulators.
When I wrote a month ago “At a crunch, I do not see anything wrong in a dispute between two regulators... being resolved in the courts,” (‘Fill the gaps with apex regulator’, FE, March 19); I did not imagine that my wish would be fulfilled so soon. The dispute between Sebi and Irda regarding Ulips seems to be headed to the courts for resolution. There is nothing unseemly or unfortunate about this development. On the contrary, I believe that this is the best possible outcome.
An independent regulator should be willing and able to carry out the mission laid down in its statute, without worrying about whether its actions would offend another regulator. Its primary loyalty should be to its regulatory mandate and not to any supposed comity of regulators. Equally, if a regulator intrudes on the mandate of another, the other regulator or its regulatees should have no compunctions in challenging it in a court of law.
In any case, the idea that regulators share cordial relationships with each other is a myth. Turf wars are the rule and not the exception. In the UK, for example, after Northern Rock, the Bank of England and the FSA began to talk to each other only through the press, it was obvious to all that the relationship was extremely bitter. In the US, during the crisis, severe strains were evident between the Fed and the FDIC. The relationship between the SEC and the CFTC has, of course, been strained for decades.
In the financial sector in particular, we want strong-willed regulators to act on the courage of their conviction. Since many of their regulatees probably have outsized egos, perhaps it does not hurt to have a regulator with an exaggerated sense of self-importance. We do not want regulators who are too nice to their regulatees. It follows then that we cannot wish that regulators be too nice to each other either.
What we need instead is a mechanism to deal with the problem of regulatory overreach—democracies thrive on checks and balances. Regulatory overreach is a problem, even when it does not involve another regulator at all. Instead of hoping that regulators will always exercise self-restraint, we need a process to deal with the consequences of regulators overstepping the line.
The best mechanism is a robust appellate process—appellate tribunals and beyond them, the regular judiciary. Regulators, too, must be accountable to the rule of law and an appellate process is the only way to ensure this. The judicial process is as capable of resolving disputes between two regulators as it is of resolving disputes between the regulator and its regulatees.
In the context of the dispute between Sebi and Irda, many people have argued that a bureaucratic process of resolving disputes is preferable to a judicial process. There is, however, little evidence for such a view from around the world. Bureaucratic processes are less likely to provide a lasting solution and more likely to produce unseemly compromises that paper over the problem.
The two-decade-long dispute in the US between the SEC and CFTC about equity futures provides an interesting case study to demonstrate this. In the early 1980s, the SEC and the CFTC came to an agreement (the Shad Johnson accord) dividing up the regulatory jurisdiction of stock index futures and index options, but they were unable to agree on the regulation of single stock futures. Futures on narrow indices were left somewhere in the middle, with the CFTC having regulatory jurisdiction but the SEC having a veto power on the introduction of the contract itself.
In the late 1990s, when the SEC barred the Chicago Board of Trade (CBOT) from trading futures on the Dow Jones Utilities and Transportation indices, CBOT took the SEC to court and won. The court sternly declared that, “SEC is not entitled to adopt a ‘my way or the highway’ view by using its approval power—as a lever.” With the Shad Johnson accord in tatters, the two regulators were finally forced to sort out the regulation of single stock futures—a matter that they had not been able to settle by bureaucratic processes for two decades.
It is evident that the resolution of the single stock futures dispute would not have happened without judicial intervention. For two decades, inter-regulatory coordination mechanisms in the US, like the President’s Working Group on Financial Markets were not able to resolve the matter—it was too convenient for both regulators to agree to disagree.
An important advantage of judicial resolution is that regulatory conflicts that have the most serious impact on the markets are more likely to be litigated than those that are less damaging. It is, therefore, more likely that the final outcome would be socially and economically efficient. There are no such incentives to guide a bureaucratic solution towards the social optimum.
Posted at 12:58 pm IST on Sat, 17 Apr 2010 permanent link
Categories: law, regulation
The SEC and the Python
Last week, the SEC put out a 667 page proposal regarding disclosures for asset backed securities. What I found exciting was this:
We are proposing to require that most ABS issuers file a computer program that gives effect to the flow of funds, or “waterfall,” provisions of the transaction. We are proposing that the computer program be filed on EDGAR in the form of downloadable source code in Python. ... (page 205)
Under the proposed requirement, the filed source code, when downloaded and run by an investor, must provide the user with the ability to programmatically input the user’s own assumptions regarding the future performance and cash flows from the pool assets, including but not limited to assumptions about future interest rates, default rates, prepayment speeds, loss-given-default rates, and any other necessary assumptions ... (page 210)
The waterfall computer program must also allow the use of the proposed asset-level data file that will be filed at the time of the offering and on a periodic basis thereafter. (page 211)
This is absolutely the right way to go particularly when coupled with the other proposal that detailed asset level data be also provided in machine readable (XML) format. For a securitization of residential mortgages for example, the proposal requires disclosure of as many as 137 fields (page 135) on each of the possibly thousands of mortgages in the pool.
Waterfall provisions in modern securitizations and CDOs are horrendously complicated and even the trustees who are supposed to implement these provisions are known to make mistakes. A year ago, Expect[ed] Loss gave an example where approximately $4 million was paid to equity when that amount should have been used to pay down senior notes (hat tip Deus Ex Macchiato).
Even when the trustees do not make a mistake, the result is not always what investors had expected. A few months ago, FT Alphaville reported on two Abacus deals where the documentation allowed the issuer (Goldman Sachs) to use its “sole discretion” to redeem the notes without regard to seniority. People realized that this was possible only when Goldman Sachs actually paid off (at face value) some junior tranches of these CDOs at the expense of senior tranches.
When provisions become complex beyond a point, computer code is actually the simplest way to describe them and requiring the entire waterfall to be implemented in open source software is a very good idea. The SEC does not say so, but it would be useful to add that if there is a conflict between the software and textual description, the software should prevail.
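The SEC proposal does not prescribe what such a program should look like, but even a stripped-down sequential-pay example makes the point that code is less ambiguous than prose. This is a minimal sketch, not the waterfall of any real deal:

```python
def sequential_waterfall(available_cash, tranche_balances):
    """Pay tranches strictly in order of seniority until the cash runs out.

    tranche_balances: list of (name, outstanding balance) from most senior to most junior.
    Returns the payment to each tranche and the cash left over for the equity piece.
    """
    payments, cash = {}, available_cash
    for name, balance in tranche_balances:
        pay = min(cash, balance)
        payments[name] = pay
        cash -= pay
    return payments, cash

# Hypothetical period: $12 million of collections against three tranches.
payments, to_equity = sequential_waterfall(
    12_000_000,
    [("Class A (senior)", 8_000_000),
     ("Class B (mezzanine)", 5_000_000),
     ("Class C (junior)", 3_000_000)],
)
print(payments, "equity:", to_equity)
```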
Now to the inevitable question — Why Python? The SEC actually asks for comments on whether they should mandate Perl, Java or something else instead. I use Perl quite extensively, but the idea that Perl is a suitable language for implementing a transparency requirement is laughable. Perl is a model of powerful but unreadable and cryptic code. As for Java and C-Sharp, there is little point in having open source code if the interpreter is not also open source. I do not use Python myself, but it appears to be a good choice for the task at hand.
It is gratifying that the SEC continues the one good thing that Cox initiated when he was Chairman – the use of technology as a key regulatory tool.
Posted at 6:25 pm IST on Fri, 16 Apr 2010 permanent link
Categories: bond markets, derivatives, regulation, technology
Icesave: What is in a name?
The Special Investigation Committee set up by the Icelandic parliament (Althingi) to investigate and analyse the processes leading to the collapse of the three main banks in Iceland submitted its report this week. A portion of the report is available in English.
One of the interesting stories in the report (Chapter 18, page 5) is about the choice of the brandname Icesave for the deposit accounts offered by the Icelandic Bank, Landsbanki in the UK and in the Netherlands. The SIC states:
... Arnason [CEO of Landsbanki] also described how the brand name Icesave was created. He claimed that Landsbanki representatives had initially thought it was negative for an Icelandic bank to market deposit accounts in the UK. An advertising agency employed by the bank pointed out that it would never be possible to conceal the origin of the bank and, therefore, it would be better to simply advertise it especially. As a result, the brand name “Icesave” was created.
... Research indicated that a simple and clear message together with a strong link to Iceland would prove beneficial.
I think this has some implications for the literature on the relationship between geographical names and stock prices. For example, Kee-Hong Bae and Wei Wang show that during the China stock market boom in 2007, Chinese stocks listed in the US that had China or Chinese in their names significantly outperformed US listed Chinese stocks that did not have China or Chinese in their names. (“What’s in a ‘China’ Name? A Test of Investor Sentiment Hypothesis”, http://ssrn.com/abstract=1411788)
What the Icesave example shows is that the choice of the name is not independent of the advertising, pricing and other strategies of the company. Some of what appears to be the result of a name change might in fact be due to other changes in the company’s business and strategy.
This might be true even in the case of other studies about the impact of name changes on the stock price. For example, Cooper, Dimitrov, and Rau (“A Rose.com by Any Other Name”, Journal of Finance, 56 (2001), 2371–2387) found that stock prices rose 74% when firms changed their names to dotcom names in 1999. Similarly, Rau, Patel, Osobov, Khorana and Cooper (Journal of Corporate Finance, 11 (2005), 319-335) showed that stock prices rose when firms removed dotcom from their names after the bubble burst.
It is possible that these name changes were also accompanied by changes in business strategies.
Posted at 4:58 pm IST on Wed, 14 Apr 2010 permanent link
Categories: crisis, investigation
IFRS in the Indian financial sector: Regulatory Capture?
A group constituted by the Ministry of Corporate Affairs with representation of all major financial sector regulators in India has approved a road map for the convergence to international accounting standards (IFRS) by insurance companies, banking companies and non-banking finance companies.
First of all, why should there be a different road map for the financial sector? Why not let financial entities be subject to the same road map as the rest of the corporate sector? The only plausible argument is that the most important change from Indian accounting standards to IFRS would be the treatment of financial instruments (IAS 39) and this impacts the financial sector more than any other sector.
But this argument is rather weak because there are other sectors which are disproportionately impacted by IFRS and there is no kid glove treatment for those sectors. The accounting treatment for agriculture for example changes quite substantially under IFRS. But agriculture does not have a powerful set of regulators protecting their regulatees while the financial sector does.
What I found even more interesting was the different treatment of insurance companies and banks within the financial sector itself. Insurance companies will adopt IFRS in 2012 while banks get an extra year. Is this because insurance companies do not stand to lose much from IFRS and might even stand to gain, while banks stand to lose a lot more?
If one looks only at the complexity of the transition to IFRS, it is not possible to argue that the transition is easier for insurance companies than for banks. Insurance companies too have large investment portfolios and they too will have to contend with all the complexities of IAS 39. In addition, there is an entire accounting standard (IFRS 4) for the insurance industry and IFRS 4 is by no means a model of simplicity. The insurance regulator (IRDA) has a 200 page report describing the implications of IFRS for Indian insurance companies.
Nor is it true that contemplated changes in IFRS will impact banks more and that therefore it makes sense for them to transition directly to the revised standards as and when they come out. IFRS 4 relating to insurance is explicitly described as Phase I of the IASB’s insurance project and Phase II promises drastic and fundamental changes in the accounting approach.
No, I do not see any strong argument why it is in the public interest for insurance companies to converge to IFRS a year ahead of banks. It is obvious however that it is in the interest of the banks themselves to postpone IFRS because of the stringent treatment of held to maturity investments. A cynic would say that regulators in every country and every sector are in danger of being captured by their regulatees.
I think this is a powerful reason for not mixing up regulatory capital and accounting capital. It would be nice if regulators could accept that accounting is for investors and agree to stay away from interfering in it. Regulators are free to collect whatever data they want and to define capital and profits in whatever way they want. They are free to ignore everything that the accountants put out. That would make it easier for accounting standards to provide what is most relevant and useful for investors.
Posted at 2:02 pm IST on Tue, 13 Apr 2010 permanent link
Categories: accounting, banks, regulation
Prudent at night but reckless during the day
I have been thinking a lot about what the court examiner’s report on Lehman tells us about other banks. Looking at the many things mentioned in the report, my conclusion is that even banks that are prudent at night become quite reckless during the day. Banks that are careful about their end-of-day (overnight) exposures seem happy to assume very large exposures during the day, provided they believe that the position will be unwound before the close of the day.
My first example of this phenomenon is a repo transaction undertaken by Barclays after it bought a major part of the Lehman broker dealer business (LBI) in the bankruptcy court. The examiner describes this transaction and its consequences in detail in his report:
The parties then began to implement ... a repo transaction between LBI and Barclays under which Barclays would send $45 billion in cash to JPMorgan for the benefit of LBI, and Lehman would pledge securities to Barclays. Barclays planned to wire the $45 billion cash to JPMorgan in $5 billion components, and Barclays (actually, Barclays’ triparty custodian bank, BNYM) would receive collateral to secure each $5 billion cash transfer. (Page 2165, Volume 5)
Shortly after noon on Thursday, Barclays wired the initial $5 billion of cash to JPMorgan for the benefit of LBI. (Page 2166, Volume 5)
... a senior executive from JPMorgan then contacted Diamond, and asked Barclays to send the $40 billion in cash all at once to expedite the process. According to Ricci, the JPMorgan executive provided Diamond with assurances that, if Barclays sent the $40 billion in cash, JPMorgan would follow up promptly in delivering the remaining collateral. Early Thursday evening, Barclays wired the remaining $40 billion in cash. Barclays did not receive $49.6 billion in securities that evening. Although both the FRBNY and DTCC kept their securities transfer facilities open long after their usual closing times, by 11:00 p.m. on Thursday evening, September 18, Barclays had received collateral with a marked value of only approximately $42 billion. (Page 2167, Volume 5)
To put matters in perspective, $40 billion was roughly equal to the total shareholders’ equity of Barclays at that time (according to the June 30, 2008 balance sheet, shareholders’ equity was £22.3 billion or $40.5 billion at the exchange rate of 1.82 $/£ on September 18, 2008). In other words, Barclays was willing to take an unsecured intraday exposure to another bank roughly equal to its entire net worth. I am sure that an overnight unsecured exposure of this magnitude would have been regarded as reckless and irresponsible, but an intraday exposure of the same size was evidently acceptable.
My second example is the triparty clearing bank services provided by JPMorgan to Lehman and other broker-dealers. The examiner’s report provides a lucid explanation of the whole matter:
In a triparty repo, a triparty clearing bank such as JPMorgan acts as an agent, facilitating cash transactions from investors to broker- dealers, which, in turn, post securities as collateral. The broker-dealers and investors negotiate their own terms; JPMorgan acts only as an agent. Triparty repos typically mature overnight ... Each night collateral is allocated to investors ... The investors, in turn, provide overnight ... funding to the broker-dealer. The following morning, JPMorgan “unwinds” the triparty repos, returning cash to the triparty investors and retrieving the securities posted the night before by the broker-dealer. These securities then serve as collateral against the risk created by JPMorgan’s cash advance to investors. During the business day, broker-dealers arrange the funding that they will need at the close of business through new triparty-repo agreements. This new funding must repay the cash that JPMorgan advanced during the business day... (Page 1086-87, Volume 4)
The premise of a triparty repo is that it constitutes secured funding in which the lender (investor) has the opportunity to sell the collateral immediately upon a broker-dealer’s (borrower’s) failure to pay maturing principal. (Page 1092, Volume 4)
To guard against the possibility of the investor realizing less than the loan amount in a liquidation scenario, the borrower must pledge additional “margin” (i.e., additional collateral) to the lender – for example, $100 million of Treasury securities in exchange for $98 million in cash. (Page 1092, Volume 4)
As triparty-repo agent to broker-dealers, JPMorgan was effectively their intraday triparty lender. When JPMorgan paid cash to the triparty investors in the morning and received collateral into broker-dealer accounts (which secured its cash advance), it bore a similar risk for the duration of the business day that triparty lenders bore overnight. If a broker-dealer such as LBI defaulted during the day, JPMorgan would have to sell the securities it was holding as collateral to recoup its morning cash advance. (Page 1093, Volume 4)
Through February 2008, JPMorgan gave full value to the securities pledged by Lehman in the NFE calculation and did not require a haircut for its effective intraday triparty lending. Consequently, through February 2008, JPMorgan did not require that Lehman post the margin required by investors overnight to JPMorgan during the day. (Page 1094, Volume 4)
That last paragraph left me stunned. Why would the clearing bank not impose a haircut/margin on its intraday secured lending, when the repo lenders do require such a haircut on their overnight lending? It makes no sense to me. First, the clearing bank is taking a large concentrated exposure, while the overnight exposure gets distributed over a large number of lenders. If anything, the intraday lender should be more worried and should be charging a higher margin. Second, most financial asset prices are more volatile when the markets are open than when they are closed. Since prices are expected to change more during the day than during the night, the intraday lender actually needs a higher margin. Yet, the intraday lender did not ask for any margin at all till February 2008!
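To see just how perverse this is, consider a back-of-the-envelope calculation. The sketch below is purely my own illustration (the 15% annual volatility, the day/night split of variance and the three-sigma rule are assumptions, not numbers from the report); it sizes a haircut as a multiple of the volatility accruing over the exposure window.

```python
# Back-of-the-envelope comparison of intraday and overnight haircuts.
# All numbers are illustrative assumptions, not figures from the examiner's report.
import math

def haircut(daily_vol, variance_share, num_sigmas=3):
    """Haircut sized as num_sigmas times the volatility accruing over the exposure
    window, where variance_share is that window's share of one day's variance."""
    return num_sigmas * daily_vol * math.sqrt(variance_share)

daily_vol = 0.15 / math.sqrt(250)   # ~0.95% daily vol from an assumed 15% annual vol

# Assume, purely for illustration, that two-thirds of a day's variance accrues
# while markets are open and one-third accrues overnight.
print("Overnight haircut: %.2f%%" % (100 * haircut(daily_vol, 1.0 / 3.0)))
print("Intraday haircut:  %.2f%%" % (100 * haircut(daily_vol, 2.0 / 3.0)))
```

On these assumptions the intraday haircut should, if anything, be larger than the overnight one; instead it was zero.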
For a moment, I thought that the clearing bank was not charging margins because it was willing to take some amount of unsecured exposure to the broker-dealer and it was willing to dispense with the margin under the assumption that the margin would be less than the unsecured exposure that it was willing to have. But no, the examiner’s report clearly states that the margin free secured lending was over and above the maximum unsecured lending that JPMorgan was willing to provide:
JPMorgan used a measurement for triparty and all other clearing exposure known as Net Free Equity (“NFE”). In its simplest form, NFE was the market value of Lehman securities pledged to JPMorgan plus any unsecured credit line JPMorgan extended to Lehman minus cash advanced by JPMorgan to Lehman. An NFE value greater than zero indicated that Lehman had not depleted its available credit with JPMorgan. (Page 1093, Volume 4)
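In code, the examiner’s simplified description of NFE is a one-line formula. The sketch below merely paraphrases that description (the function name and the numbers are my own); the point is that the pledged collateral enters at full market value with no haircut.

```python
def net_free_equity(collateral_market_value, unsecured_credit_line, cash_advanced):
    """Simplest form of the NFE measure as described by the examiner: collateral
    at full market value, plus any unsecured credit line, minus cash advanced."""
    return collateral_market_value + unsecured_credit_line - cash_advanced

# Illustrative numbers only: $20bn of collateral counted at full value, a $2bn
# unsecured line and $21bn of cash advanced still leaves NFE positive,
# i.e. credit still available.
print(net_free_equity(20e9, 2e9, 21e9))  # 1e9 > 0
```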
Yet, on a reading of the entire examiner’s report, JPMorgan comes across as a bank with a very robust risk management culture. Again and again one sees that in the most turbulent of times, the bank is seen to be sensitive to various market risks and operational risks and appears to have taken corrective action very quickly. The only conclusion that I can come to is that even well run banks are complacent about intraday risks.
Why should banks be prudent at night but reckless during the day? Probably it has to do with the fact that nobody prepares intraday balance sheets, so positions that are reversed before the close of the day do not appear in any external reports (and probably not in many internal reports either). Or perhaps it has to do with the primordial fear of darkness dating back to our evolutionary struggles in the African savannah. As the biologists remind us, you can take the man out of the savannah, but you cannot take the savannah out of the man.
Posted at 4:55 pm IST on Thu, 1 Apr 2010 permanent link
Categories: risk management
Indian Financial Stability and Development Council
I wrote a column in the Financial Express today about the proposal to create a Financial Stability and Development Council in India as a potential precursor to an apex regulatory body.
The announcement in the Budget speech this year about the setting up of a Financial Stability and Development Council (FSDC) has revived the long-standing debate about an apex regulatory body. Much of the debate on FSDC has focused on the politically important but economically trivial question of the chairmanship of the council. I care little about who heads FSDC—I care more about whether it has a permanent and independent secretariat. And I care far more about what the FSDC does.
The global financial crisis has highlighted weaknesses in the regulatory architecture around the world. Neither the unified regulator of the UK nor the highly fragmented regulators of the US came out with flying colours in dealing with the crisis. Everywhere, the crisis has brought to the fore the problems of regulatory overlap and underlap. In every country, there are areas where multiple regulators are fighting turf wars over one set of issues, while more pressing regulatory issues fall outside the mandate of any regulator. Regulation and supervision of systemically important financial conglomerates is an area seen as critical in the aftermath of the crisis. It is an area that has been highly problematic in India.
The most important failure (and bail-out) of a systemically important financial institution in India in recent times was the rescue of UTI, which did not completely fall under any regulator’s jurisdiction. The most systemically important financial institution in India today is probably the LIC, whose primary regulator has struggled to assert full regulatory jurisdiction over it. Even the remaining three or four systemically critical financial conglomerates in India are not subject to adequate consolidated financial supervision. The global crisis has shown that the concept of a lead regulator as a substitute for effective consolidated supervision is a cruel joke. The court examiner’s report in the Lehman bankruptcy released this month describes in detail how the ‘consolidated supervision’ by the US SEC of the non-broker-dealer activities of Lehman descended into a farce. Even before that we knew what happened when a thrift regulator supervised the derivative activities of AIG.
Consolidated supervision means a lot more than just taking a cursory look at the consolidated balance sheet of a financial conglomerate. An important lesson from the global crisis is that we must abandon the silly idea that effective supervision can be done without a good understanding of each of the key businesses of the conglomerate. High-level consolidated supervision of the top five or top ten financial conglomerates is, I think, the most important function that the FSDC should perform drawing on the resources of all the sectoral regulators as well as the staff of its own permanent secretariat.
Another important function is that of monitoring regulatory gaps and taking corrective action at an early stage. Unregulated or inadequately supervised segments of the financial sector are often the source of major problems. Globally, we have seen the important role played by under-regulated mortgage brokers in the sub-prime crisis.
In India, we have seen the same phenomenon in the case of cooperative banks, plantation companies and accounting/auditing deficiencies in the corporate sector. Cooperative banks were historically under-regulated because RBI believed that their primary regulator was the registrar of cooperative societies. The registrar, of course, did not bother about prudential regulation. Similarly, in the mid-1990s, plantation companies and other collective investment schemes were regulated neither as mutual funds nor as depository institutions. Only after thousands of investors had been defrauded was the regulatory jurisdiction clarified.
As far as accounting and auditing review is concerned, the regulatory vacuum has not been filled even after our experience with Satyam. Neither Sebi nor the registrar of companies undertakes the important task of reviewing published accounting statements for conformity with accounting standards. There is an urgent need for a body like FSDC that systematically identifies these regulatory gaps and develops legislative, administrative and technical solutions to these problems. By contrast, I believe that the role of ‘coordination’ between regulators emphasised in the current title of the high-level coordination committee is the least important role of an FSDC. Some degree of competition and even turf war between two regulators is a healthy regulatory dynamic.
At a crunch, I do not see anything wrong in a dispute between two regulators (or between one regulator and regulatees of another regulator) being resolved in the courts. After all, the Indian constitution gives the judiciary the power to resolve disputes even between two governments!
My favourite example from the US is the court battle between the SEC and the derivative exchanges (supported by their regulator, the CFTC) that led to the introduction of index futures in that country. A truly independent regulator should be able and willing to go to court against another arm of the government in order to perform its mission.
Posted at 10:19 am IST on Fri, 19 Mar 2010 permanent link
Categories: regulation
Lehman and its computer systems
Perhaps, I have a perverse interest in the computer systems of failed financial firms – I blogged about Madoff and his AS400 last year. Even while struggling to cope with the fantastic 2,200 page report of the court examiner on Lehman, I homed in on the discussion about Lehman’s computer systems:
At the time of its bankruptcy filing, Lehman maintained a patchwork of over 2,600 software systems and applications. ... Many of Lehman’s systems were arcane, outdated or non-standard. Becoming proficient enough to use the systems required training in some cases, study in others, and trial and error experimentation in others. ... Lehman’s systems were highly interdependent, but their relationships were difficult to decipher and not well documented. It took extraordinary effort to untangle these systems to obtain the necessary information.
My limited experience suggests that outdated and unusable software is a problem in most large organizations. I do hope that the ongoing consumerization of information technology will help reduce these problems by putting intense pressure on corporate IT to reform their ways. Perhaps, organizations should consider releasing the source code of most of their proprietary software on their own intranet to help manage the complexity and user unfriendliness of their systems. Consumerization plus crowd sourcing might just be able to tame the beast.
Posted at 6:20 pm IST on Thu, 18 Mar 2010 permanent link
Categories: bankruptcy, technology
Law, Madoff, fairness and interest rates
I would grant that there is probably no fair way for the courts to deal with the mess created by the Madoff fraud. But I am intrigued by the discussions about fairness in the ruling of the US Bankruptcy Court about the rights of the Madoff victims.
I have nothing to say about the part of the judgement which interprets the law, and will confine myself to the fairness example that the judge discusses (page 32):
Investor 1 invested $10 million many years ago, withdrew $15 million in the final year of the collapse of Madoff’s Ponzi scheme, and his fictitious last account statement reflects a balance of $20 million. Investor 2 invested $15 million in the final year of the collapse of Madoff’s Ponzi scheme, in essence funding Investor 1’s withdrawal, and his fictitious last account statement reflects a $15 million deposit. Consider that the Trustee is able to recover $10 million in customer funds and that the Madoff scheme drew in 50 investors, whose fictitious last account statements reflected “balances” totaling $100 million but whose net investments totaled only $50 million.
The judge believes that Investor 1 has no net investment “because he already withdrew more than he deposited” while Investor 2 has a $15 million net investment. Since the recovery of $10 million is 20% of the $50 million net investment of all investors put together, Investor 2 is entitled to $3 million and Investor 1 is entitled to nothing.
The court states that Madoff apparently started his Ponzi scheme (“investment advisory services”) in the 1960s. Since the fraud was exposed at the end of 2008, the Ponzi scheme went on for maybe 40 years. Let us therefore take “many years ago” in the judge’s example to mean 20 years ago.
Between 1988 and 2008, the 3-month US Treasury Bill yield averaged a little over 4%, so the value in 2008 of Investor 1’s $10 million compounded at the risk-free rate would be about $22 million. After the withdrawal of $15 million, there would still be $7 million left – a little less than half of Investor 2’s $15 million. If you believe in the time value of money, Investor 1 should get a little less than half of what Investor 2 gets. The judge thinks Investor 1 should get nothing.
Alternatively, if you believe that the purchasing power of money is important, then the US consumer price inflation during those 20 years averaged about 3%. The $10 million that Investor 1 put in two decades ago would be worth $18 million in 2008 dollars and Investor 1 would still have $3 million of net investment left after the withdrawal of $15 million. Yet the judge thinks he should get nothing.
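Both adjustments are easy to verify. The sketch below reproduces the arithmetic using the rounded averages assumed above (a 4% Treasury Bill yield and 3% consumer price inflation over 20 years); these are rounded figures, not exact historical averages.

```python
# Reproduce the two adjustments for Investor 1 (deposited $10m twenty years
# before the collapse, withdrew $15m in the final year).
deposit, withdrawal, years = 10e6, 15e6, 20
tbill_rate, inflation_rate = 0.04, 0.03   # rounded averages assumed in the text

# Time-value view: compound the deposit at the risk-free rate to 2008 dollars.
value_at_tbill = deposit * (1 + tbill_rate) ** years
print("Compounded at T-bill rate:       $%.1fm" % (value_at_tbill / 1e6))                 # ~$22m
print("Net investment after withdrawal: $%.1fm" % ((value_at_tbill - withdrawal) / 1e6))  # ~$7m

# Purchasing-power view: index the deposit by consumer price inflation instead.
value_at_cpi = deposit * (1 + inflation_rate) ** years
print("Indexed to inflation:            $%.1fm" % (value_at_cpi / 1e6))                   # ~$18m
print("Net investment after withdrawal: $%.1fm" % ((value_at_cpi - withdrawal) / 1e6))    # ~$3m
```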
Posted at 3:08 pm IST on Thu, 11 Mar 2010 permanent link
Categories: bankruptcy, fraud, law
Report on Rating Agency Regulation in India
Last week, the Reserve Bank of India published the Report of the Committee on Comprehensive Regulation of Credit Rating Agencies appointed by the Government of India (more precisely, the High Level Coordination Committee on Financial Markets). This was also accompanied by a study by the National Institute for Securities Markets entitled An assessment of the long term performance of the Credit Rating Agencies in India
The report provides a comprehensive analysis of the issues mentioned in the terms of reference for the committee. Unfortunately, those terms of reference did not include what I believe are the only two questions worth looking at about credit rating in the aftermath of the global financial crisis:
- How should India eliminate or at least reduce the use of credit ratings in financial sector regulations?
- How should India try to introduce greater competition in credit rating?
Rating agencies are fond of saying that “AAA” is just the shortest editorial in the world. Regulators should take the rating agencies at their word and act accordingly. They should give as little regulatory sanction for these ratings as they do to the editorial in a newspaper. Also, regulators should make it as easy to start a rating agency as it is to start a newspaper. These are the two issues that I think need urgent consideration.
As I pointed out in this blog post last year, the US is an outlier in terms of the use of credit ratings in its regulations, and since India has largely adopted US style regulations, it too is an outlier. By unilateral action, India can eliminate all use of credit ratings except what is required by Basel-II. Even Basel-II is not something for which Indian regulators can disown responsibility – India is now a member of the Basel Committee. Indian regulators should be providing thought leadership on eliminating credit rating from Basel-III or Basel-IV.
I am disappointed that India’s apex regulatory forum (the High Level Coordination Committee on Financial Markets), having recognized the important role of credit rating agencies in the global crisis, did not bother to ask the truly important questions. All the more so because the report did a good job of addressing the questions that were referred to it in the terms of reference. If only the same bunch of competent people had been asked the right questions!
Posted at 6:05 pm IST on Tue, 9 Mar 2010 permanent link
Categories: credit rating, regulation
Bayesians in finance redux
In November last year, I wrote a brief post about Bayesians in finance. The post was brief because I thought that what I was saying was obvious. A long and inconclusive exchange with Naveen in the comments section of another post has convinced me that a much longer post is called for. The Bayesian approach is perhaps not as obvious as I assumed.
When finance professors walk into a classroom, they want to build on what the statistics professors have covered in their courses. When I am teaching portfolio theory, I do not want to spend half an hour explaining the meaning of covariance; I would like to assume that the statistics professor has already done that. That is how division of labour is supposed to work in a pin factory or in a university.
Unfortunately, there is a problem with this division of labour – most statistics professors teach classical statistics. That is true even of those statisticians who prefer Bayesian techniques in their research work! The result is that many finance students wrongly think that when the finance professors talk of expected returns, variances and betas, they are referring to the classical concepts grounded in relative frequencies. Worse still, some students think that the means and covariances used in finance are sample means and sample covariances and not the population means and covariances.
In business schools like mine where the case method dominates the pedagogy, these errors are probably less (or at least do less damage) because in the case context, the need for judgemental estimates for almost everything of interest becomes painfully obvious to the students. The certainties of classical statistics dissolve into utter confusion when confronted with messy “case facts”, and this is entirely a good thing.
But if cases are not used or used sparingly, and the statistics courses are predominantly classical, there is a very serious danger that finance students end up thinking of the probability concepts in finance in classical relative frequency terms.
Nothing could be farther from the truth. To see how differently finance theory looks at these things, it is instructive to go back to some of the key papers that established and developed modern portfolio theory over the years.
Here is how Markowitz begins his Nobel prize winning paper (“Portfolio Selection”, Journal of Finance, 1952) more than half a century ago:
The process of selecting a portfolio may be divided into two stages. The first stage starts with observation and experience and ends with beliefs about the future performances of available securities. The second stage starts with the relevant beliefs about future performances and ends with the choice of portfolio.
Many finance students would probably be astonished to read words like observation, experience, and beliefs instead of terms like historical data and maximum likelihood estimates. This was the paper that gave birth to modern portfolio theory and there is no doubt in Markowitz’ mind that the probability distributions (and the means, variances and covariances) are subjective beliefs and not classical relative frequencies.
Markowitz is also crystal clear that what matters is not the historical data but beliefs about the future – historical data is of interest only in so far as it helps form those beliefs about the future. He also seems to take it for granted that different people will have different beliefs. He is helping each individual solve his or her portfolio problem and is not bothered about how these choices affect the equilibrium prices in the market.
When William Sharpe developed the Capital Asset Pricing Model that won him the Nobel prize, he was trying to determine the market equilibrium and he had to assume that all investors have the same beliefs but did so with great reluctance:
... we assume homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments – the expected values, standard deviations and correlation coefficients described in Part II. Needless to say, these are highly restrictive and undoubtedly unrealistic assumptions. However, ... it is far from clear that this formulation should be rejected – especially in view of the dearth of alternative models
But finance theory quickly went back to the idea that investors had different beliefs. Treynor and Black (“How to use security analysis to improve portfolio selection,” Journal of Business, 1973) interpreted the CAPM as saying that:
...in the absence of insight generating expectations different from the market consensus, the investor should hold a replica of the market portfolio.
Treynor and Black devised an elegant model of portfolio choice when investors had out of consensus beliefs.
The viewpoint in this paper is that of an individual investor who is attempting to trade profitably on the difference between his expectations and those of a monolithic market so large in relation to his own trading that market prices are unaffected by it.
Similar ideas can be seen in the popular Black Litterman model (“Global Portfolio Optimization,” Financial Analysts Journal, September-October 1992). Black and Litterman started with the following postulates:
- We believe there are two distinct sources of information about future excess returns – investor views and market equilibrium.
- We assume that both sources of information are uncertain and are best expressed as probability distributions.
- We choose expected excess returns that are as consistent as possible with both sources of information.
Even if we stick to the market consensus, the CAPM beta itself has to be interpreted with care. The derivation of the CAPM makes it clear that the beta is actually the ratio of a covariance to a variance and both of these are parameters of the subjective probability distribution that defines the market consensus. Statisticians instantly recognize that the ratio of a covariance to a variance is identical to the formula for a regression coefficient and are tempted to reinterpret the beta as such.
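The formal identity is easy to demonstrate numerically; the simulated data below are, of course, my own illustration and not anything from the papers cited.

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 1000)                  # simulated market returns
stock = 1.3 * market + rng.normal(0.0, 0.01, 1000)    # a stock with true beta of 1.3

beta_as_ratio = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
beta_as_regression = np.polyfit(market, stock, 1)[0]   # slope of the OLS regression
print(round(beta_as_ratio, 4), round(beta_as_regression, 4))  # the two coincide
```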
This may be formally correct, but it is misleading because it suggests that the beta is defined in terms of a regression on past data. That is not the conceptual meaning of beta at all. Rosenberg and Guy explained the true meaning of beta very elegantly in their paper (“Prediction of beta from investment fundamentals”, Financial Analysts Journal, 1976) introducing what are now called fundamental betas:
It is instructive to reach a judgement about beta by carrying out an imaginary experiment as follows. One can imagine all the various events in the economy that may occur, and attempt to answer in each case the two questions: (1) What would be the security return as a result of that event? and (2) What would be the market return as a result of that event?
This approach is conceptually revealing but is not always practical (though if you are willing to spend enough money, you can access the fundamental betas computed by firms like Barra which Barr Rosenberg founded and later left). In practice, our subjective belief about the true beta of a company involves at least the following inputs:
- The beta is equal to unity unless there is enough reason to believe otherwise. The value of unity (the beta of an average stock) provides an important anchor which must be taken into account even when there is other evidence. It is not uncommon to find that simply equating beta to unity outperforms the beta estimated by naive regression.
- What this means is that betas obtained by other means must be shrunk towards unity. An estimated beta exceeding one must be reduced and an estimated beta below one must be increased. One can do this through a formal Bayesian process (for example, by using a Bayes-Stein shrinkage estimator), or one can do it purely subjectively based on the confidence that one has in the original estimate (see the sketch after this list).
- The beta depends on the industry to which the firm belongs. Since portfolio betas can be estimated more accurately than individual betas, this is often the most important input into arriving at a judgement about the true beta of a company.
- The beta depends on the leverage of the company and if the leverage of the company is significantly different from that of the rest of the industry, this needs to be taken into account by unlevering and relevering the beta.
- The beta estimated by regressing the returns of the stock on the market over different time periods provides useful information about the beta provided the business mix and the leverage have not changed too much over the sample period. Since this assumption usually precludes very long sample periods, the beta estimated through this route typically has a large confidence band and becomes meaningful only when combined with the other inputs.
- Subjective beliefs about possible future changes in the beta because of changing business strategy or financial strategy must also be taken into account.
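Here is a minimal sketch of the shrinkage and leverage adjustments described in the list above. The fixed shrinkage weight (standing in for a full Bayes-Stein estimator), the Hamada unlever/relever formula with riskless debt and a constant tax rate, and all the numbers are my own simplifying assumptions, not a prescription for how the estimation must be done.

```python
def shrink_beta(estimated_beta, weight=0.33):
    """Shrink a raw beta estimate towards the prior of unity; a fixed weight
    stands in here for a formal Bayes-Stein shrinkage factor."""
    return weight * 1.0 + (1.0 - weight) * estimated_beta

def unlever_beta(equity_beta, debt_to_equity, tax_rate=0.30):
    """Hamada formula with riskless debt: asset beta from equity beta."""
    return equity_beta / (1.0 + (1.0 - tax_rate) * debt_to_equity)

def relever_beta(asset_beta, debt_to_equity, tax_rate=0.30):
    """Hamada formula in reverse: equity beta at a given leverage."""
    return asset_beta * (1.0 + (1.0 - tax_rate) * debt_to_equity)

# Illustration: an industry portfolio beta of 1.2 at a typical debt-to-equity of 0.5,
# relevered to a firm whose own debt-to-equity is 1.0, alongside the firm's own
# regression beta of 1.6 shrunk towards unity. A judgemental estimate would weigh
# these (and the other inputs in the list) together.
beta_from_industry = relever_beta(unlever_beta(1.2, 0.5), 1.0)
beta_from_regression = shrink_beta(1.6)
print(round(beta_from_industry, 2), round(beta_from_regression, 2))
```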
Much of the above discussion is valid for estimating Fama-French betas and other multi-factor betas, for estimating the volatility (used for valuing options and for computing convexity effects), for estimating default correlations in credit risk models and many other contexts.
Good classical statisticians are quite smart and in a practical context would do many of the things discussed above when they have to actually estimate a financial parameter. In my experience, they usually agree that (a) there is a lot of randomness in historical returns; (b) the data generating process does not remain unchanged for too long; (c) therefore in practice there is not enough data to avoid sampling error; and (d) hence it is desirable to use a method in which sampling error is curtailed by fundamental judgement.
On the other side, Bayesians shamelessly use classical tools because Bayes theorem is an omnivore that can digest any piece of information whatever its source and put it to use to revise the prior probabilities. In practical terms, Bayesians and classical statisticians may end up doing very similar stuff.
The advantage of shifting to Bayesian statistics and subjective probabilities is primarily conceptual and theoretical. It would eliminate confusion in the minds of students on the ontological status of the fundamental constructs of finance theory.
I am now therefore debating in my own mind whether finance professors should spend some time in the classroom discussing subjective probabilities.
What would it be like to begin the first course in finance with a case study of subjective probabilities – something like the delightful paper by Karl Borch (“The monster in Loch Ness”, Journal of Risk and Insurance, 1976)? Borch analyzes the probability that the Loch Ness monster exists (and would be captured within a one-year period), given that a large company had to pay a rather high 0.25% premium to obtain a million pound insurance cover from Lloyd’s of London against that risk. This is obviously a question which a finance student cannot refuse to answer; yet there is no obvious way to interpret this probability in relative frequency terms.
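For what it is worth, here is the trivial calculation a student might start from: back out the capture probability that would make the quoted premium actuarially fair. The loading assumption is mine, not Borch’s, and his paper does much more than this.

```python
def implied_probability(premium_rate, loading=0.0):
    """Capture probability that makes the premium actuarially fair, if a fraction
    `loading` of the premium goes to expenses and profit rather than expected claims."""
    return premium_rate * (1.0 - loading)

# A 0.25% premium on a 1 million pound cover is 2,500 pounds. With no loading the
# implied one-year capture probability is 0.25%; if (say) half the premium is
# absorbed by expenses and profit, it is 0.125%.
for loading in (0.0, 0.5):
    print("loading %.0f%% -> implied probability %.3f%%" %
          (100 * loading, 100 * implied_probability(0.0025, loading)))
```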
Posted at 9:57 am IST on Sun, 7 Mar 2010 permanent link
Categories: Bayesian probability, CAPM, post crisis finance, statistics
Greek bond issue
That Greece could borrow money at all (even if it is at 3% above the risk free rate) seems to have calmed the markets a great deal. I am reminded of this piece of Rothschild wisdom:
You are certainly right that there is much to be earned from a government which has no money. But you have to take risks.
That is James Rothschild writing to Nathan Rothschild nearly two centuries ago as quoted by Niall Ferguson, The House of Rothschild: Money’s Prophets 1798-1848, Chapter 4.
Posted at 7:19 pm IST on Fri, 5 Mar 2010 permanent link
Categories: sovereign risk
Regulation by placebo
This is a very nice phrase that I picked up from SEC Commissioner Kathleen Casey’s speech dissenting from the short selling rules that the SEC introduced recently:
But this is regulation by placebo; we are hopeful that the pill we’ve just had the patient take, although lacking potency, will convince him that everything is all right.
Casey’s speech itself was a bit of political grandstanding and was in the context of an SEC vote that went on predictable party lines. I am not therefore inclined to take the speech too seriously. But the phrase “regulation by placebo” very elegantly captures a phenomenon that is all too common in financial sector regulation all over the world.
Securities regulators, banking regulators and other financial regulators have this great urge to be seen to be doing something, regardless of whether that something is the right thing or not. The result is often a half-hearted measure that does not stop the wrongdoing but convinces the public that the evildoers have been kept at bay.
Regular readers of my blog know that I am against short sale restrictions in general. At the very least, I would like short sale restrictions to be accompanied by corresponding and equally severe restrictions on leveraged longs. If you are not allowed to short a stock when it has dropped 10%, then you should not be allowed to buy a stock (with borrowed money) when the stock has risen 10%. Market manipulation is done far more often by longs than by shorts!
Posted at 6:58 pm IST on Wed, 3 Mar 2010 permanent link
Categories: regulation, short selling
Regulation of mutual funds
Morley and Curtis wrote a very interesting paper earlier this month on the regulation of mutual funds. Their fundamental point is that the open-end mutual fund presents governance problems of a completely different nature from those of normal companies.
Unit holders do not sell their shares in the market; they redeem them from the issuing funds for cash. This uniquely effective form of exit almost completely eliminates the role of voice – investors have no incentive to use voting or any other form of activism.
Morley and Curtis advocate product-style regulation of mutual funds. As I understand this, we must treat unit holders as customers and not as owners of the fund. They also advocate regulations that make it easier for investors to exercise exit rights effectively.
I think this insight is fundamentally correct. In the US context where mutual funds are organized as companies with unit holders as shareholders, this implies a huge change in the regulatory framework.
In the Indian context, mutual funds are organized as trusts and investors are legally the beneficiaries of the trust rather than the owners. Indian regulation already uses exit as a regulatory device. Whenever there is any change in the fundamental characteristics of a fund or in the ownership of the asset manager, the fund is required to provide an opportunity to investors to exit at net asset value without any exit load.
However, the trust structure creates another set of confusing and meaningless legal requirements. The governance is divided between the board of the asset management company and the trustees of the trust. This creates a duplication of functions and regulators might hope that if one of them fails, the other would still operate.
It is however more likely that each level might rely on the other to do the really serious work. The job might be done less effectively than if the locus of regulation is made clearer. There is probably merit in creating a brain-dead trust structure and making the board of the asset management company the primary focus of regulation. This is more consistent with the idea of unit holders as customers.
One implication that Morley and Curtis do not draw from their analysis is that closed-end funds are dramatically different from open-end funds and require totally different regulatory structures. Regulations in most countries tend to regard these two types of funds as minor variants of each other and therefore apply similar regulatory regimes to both. If Morley and Curtis are right, we must treat open-end unit holders as customers and closed-end unit holders as owners.
The governance of a closed-end fund should more closely mimic that of a normal corporation. Regulations should permit the unit holders of a closed-end fund to easily throw out the asset manager or even to wind up the fund. The trust structure in India does not give unit holders formal ownership rights – explicit regulations are required to vest them with such rights.
Posted at 7:58 pm IST on Wed, 24 Feb 2010 permanent link
Categories: mutual funds, regulation
Short selling and public issues
When the Indian government company NTPC was conducting a public offering of shares, it was alleged that many institutional investors short sold NTPC shares on a large scale (by selling stock futures). This gave rise to some talk about suspending futures trading in the shares of a company while a public issue is in progress. Thankfully, the government and the regulators did not do anything as foolish as this.
The US has a different approach to the problem – Rule 105 (Regulation M) prohibits the purchase of offering shares by any person who sold short the same securities within five business days before the pricing of the offering. Last month, the SEC brought charges against some hedge funds for violating this rule.
Obviously, Rule 105 is a far better solution than shutting down the whole market, but it is necessary to ask whether even this is necessary. Take for example, the SEC’s argument that:
Short selling ahead of offerings can reduce the proceeds received by public companies and their shareholders by artificially depressing the market price shortly before the company prices its offering.
We can turn this around to say:
Short sale restrictions ahead of offerings can allow companies to sell their shares to the public at inflated prices by artificially increasing the market price shortly before the company prices its offering.
Why do regulators have to assume that issuers of capital are saints and that investors are the sinners in all this? Provided there is complete transparency about short selling (and open interest in the futures market), it is difficult to see why short selling with the full intention to cover in the public issue should depress the issue price.
The empirical evidence is that an equity issue has a negative impact on the share price. This is partly due to the signalling effect of raising equity rather than debt, and partly due to the need to induce a portfolio rebalancing of all investors to accommodate the new shares.
Now imagine that a hedge fund short sells with the intention to buy back in the issue. Since the short seller is committed to buying in the issue, a part of the issue is effectively pre-sold. To this extent, the price impact of an equity issue is reduced. While the short selling could depress prices, this would be offset by the lower price impact of the issue itself.
In short, the short sellers would not change prices at all. What they would do is to advance the effective date of the public issue. If there is a 100 million share issue happening on the 15th and the hedge funds short 20 million shares on the 10th, then somebody has to take the long position of 20 million shares on the 10th itself. For this amount of portfolio rebalancing to happen on the 10th, there has to be a price adjustment and this can be quite visible.
But the flip side is that on the 15th there are only 80 million shares to be bought by long only investors. There is less price adjustment required on that date. The total portfolio adjustment required with or without short selling is the same – 100 million shares. The only question is whether the price adjustment happens on the 15th or earlier.
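A stylised tally of the example above makes the timing point explicit; the numbers simply restate the text, and nothing about actual market impact is being estimated.

```python
# Shares the long-only public must absorb, with and without pre-issue short selling.
issue_size = 100e6
shorted_on_the_10th = 20e6

without_shorting = {"10th": 0.0, "15th": issue_size}
with_shorting = {
    "10th": shorted_on_the_10th,                # longs take the other side of the shorts
    "15th": issue_size - shorted_on_the_10th,   # shorts buy back 20m of the 100m issue
}

for label, schedule in [("without shorting", without_shorting),
                        ("with shorting", with_shorting)]:
    print(label, schedule, "total =", sum(schedule.values()))   # 100m either way
```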
In an efficient market, the impact of unrestricted short selling would be to force the entire price adjustment to happen on the announcement date itself. The issue itself would then have zero price impact and this would be a good thing.
Because of limited short selling in the past, we are accustomed to issues being priced at a discount to the market price on the pricing date. With unlimited short selling, this would disappear. If the short selling were excessive, the issue may even be priced at a premium as the shorts scramble to cover their positions. It will take some time for market participants to adjust to the new environment. Regulators should just step back and let the market participants learn and adjust.
Posted at 8:30 pm IST on Sat, 20 Feb 2010 permanent link
Categories: equity markets, short selling
Taxation of securities
I wrote a column in the Financial Express today about the taxation of securities.
Over a period of time, several distortions have crept into the taxation of investment income in India. The budget later this month provides an opportunity to correct some of these without waiting for the sweeping reforms proposed in the direct taxes code. I would highlight the non-taxation of capital gains on equities, the securities transaction tax (STT) and the taxation of foreign institutional investors.
In the current system, the capital gain on sale of equity shares is not taxable provided the sale takes place on an exchange. Of course, there is STT, but the STT rate is a tiny fraction of the rate that applies to normal capital gains.
The taxation of capital gains is itself very low compared to the taxation of normal income. There is no justification for taxing capital gains at a lower rate than, say, salary income. After all, a substantial part of the salary of skilled workers is a return on the investment in human capital that led to the development of their skills.
Why should returns on human capital be taxed at normal rates and returns on financial capital at concessional rates? Even within financial assets, why should, say, interest income be taxed at normal rates while capital gains are taxed at concessional rates?
The discussion paper on the direct taxes code argues that a concessional rate is warranted because the cumulative capital gains of several years are brought to tax in one year and this would push the tax rate to a higher slab. This factor is in many cases overwhelmed by the huge benefit that arises from deferment of tax.
For example, consider an asset bought for Rs 100 that appreciates at the rate of 10% per annum for 20 years so that it fetches Rs 673 when it is sold. A 30% tax on the capital gain of Rs 573 amounts to Rs 172 and the owner is left with Rs 501 after tax. By contrast, if the owner had received interest at 10% every year and paid taxes at a lower slab of 20% each year, the post-tax return would have been 8%. Compounding 8% over 20 years would leave the investor with only Rs 466. In other words, 20% tax paid each year is a stiffer drag on returns than a 30% tax that can be deferred till the time of sale.
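The deferral arithmetic is easy to verify; the sketch below merely reproduces the numbers in the paragraph above (a 10% pre-tax return over 20 years, a 30% capital gains tax paid at sale versus a 20% tax on interest paid every year).

```python
initial, pre_tax_return, years = 100.0, 0.10, 20

# Capital gains route: the full 10% compounds untaxed; 30% tax on the gain at sale.
terminal = initial * (1 + pre_tax_return) ** years           # ~Rs 673
after_cgt = terminal - 0.30 * (terminal - initial)            # ~Rs 501

# Interest route: 20% tax every year leaves an 8% post-tax return to compound.
after_annual_tax = initial * (1 + pre_tax_return * (1 - 0.20)) ** years   # ~Rs 466

print(round(terminal), round(after_cgt), round(after_annual_tax))
```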
In practice, of course, indexation and the periodic re-basing of the original cost of the asset makes the tax burden on capital gains even lighter. There is no economic or moral justification for subsidising the wealthy in this manner.
Ideally, tax rates should be calibrated in such a manner that equal pre-tax rates of return translate into equal post-tax rates of return regardless of the form in which that return is earned. This might be too much to ask for, but a near zero tax rate for capital gains earned on equity shares makes a mockery of the tax system in the country and should be redressed as soon as possible.
The STT is by its very nature a bad tax because it is unrelated to whether the transaction resulted in a profit or a loss. The real reason for the STT was to make the process of tax collection easier. In this sense, the STT is best regarded as a form of the nefarious system of tax farming that is shunned by modern nation states.
Some attempts are being made to justify the STT as a form of Tobin tax on financial transactions. Without entering into a debate on whether a Tobin tax is good or bad, it should be pointed out that the STT is not a Tobin tax. This is evident from the fact that delivery-based transactions attract STT at rates several times higher than on the presumably more speculative non-delivery-based transactions. The rate on delivery-based transactions is far higher than any reasonable Tobin tax.
A final argument for STT in lieu of capital gains is that foreigners pay lower taxes in India. Many other countries tax foreign portfolio investors at low rates. Indian investors can make portfolio investments in the US (within the limit of $200,000 per annum permitted by RBI). They would not pay income taxes in the US on their income from this investment and would pay only Indian taxes.
There is symmetry here to the US portfolio investor paying taxes in the US but not in India. The only difference is that the foreign investor into India has to come through Mauritius while the portfolio investor into the US can go in directly.
We should also exempt foreign portfolio investors from taxation without forcing them to come via Mauritius. The real problem with the Mauritius loophole is that it allows even non-portfolio investors to avoid Indian taxes, but that is a different topic altogether.
Posted at 2:11 pm IST on Fri, 19 Feb 2010 permanent link
Categories: equity markets, taxation
Currency values since the gold standard
I did some analysis of how the values of various currencies have behaved in the hundred years or so since the last decade of the gold standard. I found the results interesting and I am posting them here.
I focus on the twelve countries which are either part of the G7 today, or were one of the eight great powers before World War I, or figure in the top seven traded currencies according to the BIS survey of 2007. I have started with the gold parities during the last few years of the gold standard from 1900-1914. All the currencies of interest were on the gold standard by 1900 and did not change their parities during this period.
I then convert the gold parities into exchange rates against the US dollar. Next, I take into account all the redenominations in which some hyper-inflating currencies had a few zeroes lopped off during the last hundred years. Finally, I take into account the re-denomination of several European currencies into the euro. This leads to the re-denominated gold standard value in USD. I would welcome corrections of any errors that you may find in my data and my computations.
The re-denominated gold standard value tells us what the exchange rate of the modern currency should be to preserve the gold standard value of the old currency through all the redenominations. I compare this with the actual value of the modern currency and compute an annual percentage change in currency value (taking the time period from the gold standard days as a nice round hundred years).
Only two currencies have appreciated against the US dollar over this long period – the Swiss Franc and the Dutch guilder – while the Canadian dollar has held its own. Switzerland has enjoyed a great deal of geo-political luck during this period, but the performance of the Dutch guilder is truly amazing. The euro is commonly regarded as a successor currency of the Deutsch Mark, but if we go back beyond World War II, it makes greater sense to regard it as the successor of the Dutch guilder.
The data has been split into two tables to fit the width of the page better. Countries have been listed in the order of their current GDP.
Country | US | Japan | Germany | France | UK | Italy |
Gold Standard Currency | dollar | yen | mark | franc | pound | lira |
Grams of gold | 1.505 | 0.752 | 0.358 | 0.290 | 7.322 | 0.290 |
Gold standard value in USD | 1.000 | 0.500 | 0.238 | 0.193 | 4.866 | 0.193 |
Re-denomination | | | 1.00E+12 x 1.95583 | 100 x 6.55957 | | 1936.27 |
Current Currency | dollar | yen | euro | euro | pound | euro |
Re-denominated gold standard value in USD | 1.0000 | 0.5000 | 4.66E+11 | 126.5684 | 4.8665 | 373.6078 |
Current value in USD (mid Feb 2010) | 1.00 | 0.01 | 1.36 | 1.36 | 1.56 | 1.36 |
Annual change | 0.00% | -3.74% | -23.33% | -4.43% | -1.13% | -5.46% |
Country | Canada | Russia | Switzerland | Australia | Netherlands | Austria |
Gold Standard Currency | dollar | ruble | franc | pound | guilder | krone |
Grams of gold | 1.505 | 0.774 | 0.290 | 7.322 | 0.605 | 0.305 |
Gold standard value in USD | 1.000 | 0.515 | 0.193 | 4.866 | 0.402 | 0.203 |
Re-denomination | | 5.00E+15 | | 0.5 | 2.20371 | 1.00E+4 x 13.7603 |
Current Currency | dollar | ruble | franc | dollar | euro | euro |
Re-denominated gold standard value in USD | 1.0000 | 2.57E+15 | 0.1930 | 2.4332 | 0.8858 | 2.79E+04 |
Current value in USD (mid Feb 2010) | 0.96 | 0.03 | 0.93 | 0.90 | 1.36 | 1.36 |
Annual change | -0.04% | -32.22% | 1.58% | -0.99% | 0.43% | -9.45% |
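The “annual change” row is just a hundredth root: the current dollar value relative to the re-denominated gold standard value, annualised over a round hundred years. The sketch below reproduces the computation for a few currencies using the table values above.

```python
def annual_change(current_value, gold_standard_value, years=100):
    """Annualised percentage change over the (roughly) hundred-year period."""
    return (current_value / gold_standard_value) ** (1.0 / years) - 1

# Inputs taken from the tables above: re-denominated gold standard value in USD,
# and the mid-February 2010 value in USD.
for name, old, new in [("UK pound", 4.8665, 1.56),
                       ("Swiss franc", 0.1930, 0.93),
                       ("Austrian krone (euro)", 2.79e4, 1.36)]:
    print("%-22s %+.2f%% per year" % (name, 100 * annual_change(new, old)))
```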
Not surprisingly, gold itself has done better than any currency, appreciating 4% annually against the US dollar and 2.4% annually against even the Swiss franc. I do not know what the average Swiss interest rate has been during this period; it is conceivable that it compensates for most or even all this depreciation.
What about the Indian rupee? From its gold standard parity of 32 US cents (Rs 15 to the British pound), it has fallen to about 2 US cents – an annual depreciation of 2.67%. This is bad, but better than the Japanese yen and the French franc. The rupee entered the gold standard at a low value reflecting the depreciation of the old silver rupee during the global demonetization of silver in the late nineteenth century. The depreciation of the rupee began only in 1967. The last fifty years would be a lot worse than the last hundred years.
Posted at 6:05 pm IST on Thu, 18 Feb 2010 permanent link
Categories: currency, gold, international finance
SEC under Schapiro after one year
Mary Schapiro took over as the Chairman of the US SEC a year ago when the SEC’s reputation was in tatters. In a speech yesterday, she reviewed the achievements of the past year.
First of all, she believes that the problems of the SEC were at the top and that if the leadership at the top is changed, the rest of the organization had the competence and attitude required to deal with its challenges. Schapiro says: “having served as Commissioner 20 years earlier, I knew what the agency was capable of ... through this process, I witnessed firsthand the dedication and expertise that I had long believed embodied this agency”
Schapiro believes that the new hires she has made and the new technological initiatives that she has undertaken will be sufficient to enable the SEC to recover its lost glory. My own sense is that the problems are not merely at the top but permeate the whole organization. The report on the Madoff investigations revealed problems at all levels of the SEC (see my blog post last year).
Schapiro lists the SEC’s achievements in enforcement during the last year and refers to the case that has been brought against State Street Bank. She says nothing about the Bank of America case where the judge has taken the SEC to task, and subsequently the New York Attorney General has seized the initiative.
On the rule-making agenda also, Schapiro is unable to list much of significance. Even the utterly misguided short sale restrictions of the crisis are cited as an achievement. Some minor changes on proxy rules, rating agencies and money market funds are also cited, though none of these changes went far enough to make a serious difference.
Schapiro came to the SEC with high expectations after a succession of lacklustre leaders. I think she needs to do a lot more to fulfill these expectations.
Posted at 10:27 pm IST on Sat, 6 Feb 2010 permanent link
Categories: regulation
The Volcker rule
I wrote a column in the Financial Express today on the Volcker rule and other proposals of President Obama.
President Obama has proposed the ‘Volcker rule’ preventing banks from running hedge funds, private equity funds or proprietary trading operations unrelated to serving their customers. Simultaneously, he also proposed size restrictions to prevent the financial sector from consolidating into a few large firms. While this might look like unwarranted government meddling in the functioning of the financial sector, I argue that, in fact, free market enthusiasts should welcome these proposals.
Obama has chosen to frame the proposal as a kind of morality play in which the long-suffering public get their revenge against greedy bankers. While that might make political sense, the reality is that the proposals are pro free markets. To understand why this is so, we must go back to the moral hazard roots of the global financial crisis.
These roots go back to 1998 when the US Fed bailed out the giant hedge fund, LTCM. The Fed orchestrated an allegedly private sector bailout of LTCM, but more importantly, it also flooded the world with liquidity on such a scale that it not only solved LTCM’s problems, but also ended the Asian crisis almost overnight.
LTCM had no retail investors that needed to be protected. The actual reason for its bailout was the same as the reason for the bailout of AIG a decade later. Both these bailouts were in reality bailouts of the banks that would have suffered heavily from the chaotic bankruptcy of these entities.
Back in 1998, the large global banks themselves ran proprietary trading books that were also short liquidity and short volatility on a large scale like LTCM. A panic liquidation of LTCM positions would have inflicted heavy losses on the banks and so the Fed was compelled to intervene.
From a short-term perspective, the LTCM bailout was a huge success, but it engendered a vast moral hazard play. The central bank had now openly established itself as the risk absorber of last resort for the entire financial sector. The existence of such an unwarranted safety net made the financial markets complacent about risk and leverage and set the stage for the global financial crisis.
Those of us who like free markets abhor moral hazard and detest bailouts. The ideal world is one in which there is no deposit insurance and the governments do not bail out banks and their depositors. Since this is politically impossible, the second best solution is to limit moral hazard as much as possible.
If banking is an island in which the laws of capitalism are suspended, this island should be as small as possible, and the domain of truly free markets—free of government meddling and moral hazard—should be as large as possible. Looked at this way, the Volcker rule is a step in the right direction. If banks are not shadow LTCMs, then at least the LTCMs of the world can be allowed to fail.
The post-Lehman policy of extending government safety net to all kinds of financial entities amounted to a creeping socialisation of the entire global financial system. The Volcker rule is the first and essential step in de-socialising the financial sector by limiting socialism to a small walled garden of narrow banking while letting the rest of the forest grow wild and free.
What about the second part of the Obama proposal seeking to limit the size of individual banks? I see this as reducing oligopolies and making banking more competitive. Much of the empirical evidence today suggests that scale economies in banking are exhausted at levels far below those of the largest global banks, and there is some evidence that scale diseconomies set in at a certain level.
There is very little reason to believe that banks with assets exceeding, say, $100 billion are the result of natural scale economies. On the contrary, they appear to be the result of an artificial scale economy caused by the too-big-to-fail (TBTF) factor. The larger the bank, the more likely it is to be bailed out when things go wrong. It is therefore rational for a customer to bank with an insolvent mega-bank rather than with a well-run small bank.
This creates a huge distortion in which banks seek to recklessly grow to become eligible for the TBTF treatment. Well-run banks that grow in a prudent manner are put at a competitive disadvantage. This makes the entire financial sector less competitive and less efficient.
The Obama size restrictions will reduce the distortions created by the TBTF factor, and will make banking more competitive. One could argue that the restrictions do not go far enough because they legitimise the mega firms that already exist and only seek to prevent them from becoming even bigger. Nevertheless, they are a step in the right direction: they do not undo the damage that has already been done, but they prevent further damage.
Posted at 2:27 pm IST on Tue, 26 Jan 2010 permanent link
Categories: regulation
Computational and sociological analyses of financial modeling
I have been reading a number of papers that examine financial modeling in the context of the current crisis from a computational complexity and sociology of knowledge point of view:
- Looking Out, Locking In: Financial Models and the Social Dynamics of Arbitrage Disasters, by Daniel Beunza and David Stark, September 2009
- Credit Models and the Crisis, or: How I learned to stop worrying and love the CDOs by Damiano Brigo, Andrea Pallavicini and Roberto Torresetti, December 2009
- The Credit Crisis as a Problem in the Sociology of Knowledge, by Donald MacKenzie, November 2009
- Computational complexity and informational asymmetry in financial products, by Sanjeev Arora, Boaz Barak, Markus Brunnermeier and Rong Ge, October 2009.
I liked all these papers and learned a lot from each of them, which is not the same as saying that I agree with all of them.
The paper that I liked most was Beunza and Stark which is really about cognitive interdependence and systemic risk. Their work is based on an ethnographic study of financial modeling carried out over a three year period at a top-ten global investment bank. Some of their conclusions are:
Using models in reverse, traders find out what their rivals are collectively thinking. As they react to this knowledge, their actions introduce a degree of interdependence ...
Quantitative tools and models thus give back with one hand the interdependence that they took away with the other. They hide individual identities, but let traders know what the consensus is. Arbitrageurs are thus not embedded in personal ties, but neither are they disentangled from each other.
Scopic markets are fundamentally different from traditional social settings in that the tool, not the network, is the central coordinating device.
Instead of ascribing crises to excessive risk-taking, misuse of the models, or irreflexive imitation, our notion of reflexive modeling offers an account of crises in which problems unfold in spite of repeated reassurances, early warnings, and an appreciation for independent thinking.
Implicit in the behavioral accounts of systemic risk is an emphasis on the individual biases and limitations of the investors. At the extreme, investors are portrayed as reckless gamblers, mindless lemmings, or foolish users of models they do not understand. By contrast, our detailed examination of the tools of arbitrage offers a theory of crisis that does not call for any such bias. The reflexive risks that we identified befall on arbitrageurs that are smart, creative, and reflexive about their own limitations.
Though the paper is written in a sociological language, what it most reminded me of was Aumann’s paper more than 30 years ago on “Agreeing to disagree” (The Annals of Statistics, 1976). What Beunza and Stark describe as reflexivity is closely related to Aumann’s celebrated theorem: “If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal.”
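For those who want the formal statement, here is a compact rendering of Aumann's result in my own notation (a paraphrase, not a quotation from the paper):

```latex
% Aumann (1976), "Agreeing to Disagree" -- compact restatement, my notation.
% Agents i = 1, 2 share a common prior P on a state space \Omega and have
% information partitions \mathcal{P}_1 and \mathcal{P}_2. Their posteriors
% for an event A are
%   q_i(\omega) = P(A \mid \mathcal{P}_i(\omega)).
% Theorem: if at the true state \omega it is common knowledge that
% q_1 = a_1 and q_2 = a_2, then
\[
  a_1 \;=\; a_2 .
\]
% Loosely, this is the "reverse engineering" in Beunza and Stark: traders use
% their models in reverse to back out what rivals' posteriors must be, and the
% resulting common knowledge pulls estimates together.
```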
The Brigo et al paper is mathematically demanding as they take “an extensive technical path, starting with static copulas and ending up with dynamic loss models.” But it is very useful in explaining why the Gaussian copula model is still used in its base correlation formulation though its limitations have been known for several years. My complaint about the paper is that it focuses too much on the difficulties in fitting the Gaussian copula to observed market prices and too little on the difficulties of using it to estimate the impact of plausible stress events.
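For readers who have never worked with it, here is a minimal sketch of the model in question, the one-factor Gaussian copula, with toy parameters of my own choosing rather than anything calibrated to market data. A single common factor drives correlated defaults, and tranche losses are obtained by Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

def copula_default_counts(n_names=125, p_default=0.02, rho=0.3,
                          n_sims=100_000, seed=0):
    """One-factor Gaussian copula: name i defaults when
    sqrt(rho)*M + sqrt(1-rho)*Z_i falls below Phi^{-1}(p_default)."""
    rng = np.random.default_rng(seed)
    barrier = norm.ppf(p_default)                  # default threshold
    M = rng.standard_normal((n_sims, 1))           # common factor
    Z = rng.standard_normal((n_sims, n_names))     # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * Z  # latent credit quality
    return (X < barrier).sum(axis=1)               # defaults per scenario

# Toy example: expected loss of a 3%-7% tranche, assuming 40% recovery
defaults = copula_default_counts()
portfolio_loss = defaults / 125 * (1 - 0.40)
tranche_loss = np.clip(portfolio_loss - 0.03, 0.0, 0.04) / 0.04
print("expected 3%-7% tranche loss:", tranche_loss.mean())
```

The base correlation formulation that Brigo et al dissect exists precisely because no single value of rho in a setup like this can match the market prices of all tranches at once; a separate correlation is backed out for each equity-tranche detachment point.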
MacKenzie focuses on “evaluation cultures”, which are broader than just models. They are “pockets of local consensus on how financial instruments should be valued.” He argues that “‘Greed’ – the egocentrically-rational pursuit of profits and bonuses – matters, but the calculations that the greedy have to make are made within evaluation cultures”. MacKenzie highlights “the peculiar status of the ABS CDO as what one might call an epistemic orphan – cognitively peripheral to both its parent cultures, corporate CDOs and ABSs.”
The Arora et al paper is probably the most mathematical of the lot. It essentially shows that an originator can put bad loans into CDOs in such a way that it is computationally infeasible for the investors to figure this out even ex post.
However, for a real-life buyer who is computationally bounded, this enumeration is infeasible. In fact, the problem of detecting such a tampering is equivalent to the so-called hidden dense subgraph problem, which computer scientists believe to be intractable ... Moreover, under seemingly reasonable assumptions, there is a way for the seller to ‘plant’ a set S of such over-represented assets in a way that the resulting pooling will be computationally indistinguishable from a random pooling.
Furthermore, we can show that for suitable parameter choices the tampering is undetectable by the buyer even ex post. The buyer realizes at the end that the financial products had a higher default rate than expected, but would be unable to prove that this was due to the seller’s tampering.
The derivatives that Arora et al discuss are weird binary CDOs and my interpretation of this result is that in a rational market, these kinds of exotic derivatives would never be created or traded. Nevertheless, this is an important way of looking at how computational complexity can reinforce information asymmetry under certain conditions.
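To make the planting idea concrete, here is a toy sketch in my own construction and parameters (the paper's actual objects are binary CDOs, and its formal reduction is to the planted dense subgraph problem): a seller who knows which assets are lemons quietly over-represents them in a hidden subset of pools. With these toy numbers the concentration may well show up in simple ex-post statistics; the paper's point is that there are parameter regimes in which no computationally bounded buyer can reliably detect the planted set or prove the tampering.

```python
import numpy as np

rng = np.random.default_rng(1)

n_assets, n_pools, pool_size = 1000, 100, 30
n_lemons = 50                              # assets the seller knows will default
lemon_ids = np.arange(n_lemons)
good_ids = np.arange(n_lemons, n_assets)

def honest_pools():
    """Honest seller: every pool is a uniformly random draw from all assets."""
    return [rng.choice(n_assets, size=pool_size, replace=False)
            for _ in range(n_pools)]

def planted_pools(n_target=10, lemons_per_pool=5):
    """Cheating seller: a hidden subset of pools gets extra lemons."""
    pools = honest_pools()
    for t in rng.choice(n_pools, size=n_target, replace=False):
        bad = rng.choice(lemon_ids, size=lemons_per_pool, replace=False)
        good = rng.choice(good_ids, size=pool_size - lemons_per_pool,
                          replace=False)
        pools[t] = np.concatenate([bad, good])
    return pools

def defaults_per_pool(pools):
    """What a buyer observes ex post: defaulted assets in each pool."""
    return sorted(int((p < n_lemons).sum()) for p in pools)

print("honest :", defaults_per_pool(honest_pools())[-10:])
print("planted:", defaults_per_pool(planted_pools())[-10:])
```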
Posted at 6:10 pm IST on Thu, 21 Jan 2010 permanent link
Categories: behavioural finance