We assume ...
The Basel Committee on Banking Supervision (BCBS) issued a Consultative Document on "Revisions to the Basel Securitisation Framework" earlier this month. Under "Key assumptions and theoretical underpinnings", I found this gem:
Another important assumption embedded in the RBA recalibration is that the same ratings for structured finance and corporate exposures imply the same expected loss rates for investors. One implication of this is that it is assumed that rating agencies will “fix” or have fixed the errors in rating methodologies for structured finance that were revealed during the recent crisis.
Just to put that assumption in perspective, the plot below shows the difference in default rate over the last three decades (and not just during the crisis) between Asset Backed Securities (ABS) or structured finance and corporate paper with the same ratings. The source for the underlying data is Standard and Poor’s via a recent paper by Gorton and Metrick (Table 8) that I hope to blog about soon.
I used to make the assumption that the regulators will “fix” or have fixed the errors in regulatory capital calculation methodologies for structured finance that were revealed during the recent crisis. That assumption must now be abandoned.
Posted at 10:01 pm IST on Sat, 29 Dec 2012 permanent link
Categories: credit rating
Mutual funds bailing out affiliated funds
Two years ago, I blogged about a paper showing that mutual funds support the share prices of their parent banks in Spain. The authors of that paper (Golez and Marin) argued that Spain is a country “where this type of activities are not closely monitored nor severely prosecuted and punished by the authorities”. In reality, however, this kind of behaviour knows no geographical boundaries.
Pinto and Schmidt study mutual funds in the United States and show that when a mutual fund has difficulty selling illiquid shares in response to redemption pressures, other funds in the same family very conveniently buy the shares and avoid a fire sale. The buying fund (usually a larger and more liquid fund in the same family) suffers a performance loss by absorbing these shares without a sufficiently large price discount.
The takeaway is that if you are buying an illiquid mutual fund, you should buy a fund run by a large mutual fund family to benefit from the liquidity insurance provided by its affiliates. But if you are buying a liquid mutual fund, then you should avoid funds that belong to large families so as to avoid the performance drag of supporting their affiliates.
Posted at 9:06 pm IST on Mon, 24 Dec 2012 permanent link
Categories: banks, mutual funds
Are bond market bubbles quiet or loud?
Harrison Hong and David Sraer have a nice paper on quiet bubbles at NBER. They argue that while equity bubbles are very loud (high trading volumes in the bubble stocks), debt market bubbles are very quiet (trading volumes actually decline in bubble bonds). This prediction of their theoretical model is vindicated in the empirical data about bond trading volumes during the 2003-2007 credit boom.
In the theoretical model, the quietness of the bubble is driven by the limited upside in bond prices which makes heterogeneity of beliefs less important. By contrast loud equity bubbles arise from large disagreements about fundamentals coupled with short sale restrictions.
This characterization is both interesting and plausible. But I think that it ignores the relative importance of the primary market in bonds as compared to equities. The annual issuance of stocks is less than two days’ trading in the secondary market, which means that the primary market can be ignored in any discussion of the loudness of an equity bubble. In corporate bonds, however, annual issuance amounts to 82 days of secondary market trading, and this is by no means negligible. (Both these pieces of data are from Dealbreaker). Issuance of bonds is analytically the same as shorting of bonds, and a large primary market also serves to attenuate short sale restrictions in the bond markets.
It appears to me that bond market bubbles are very loud when looked at in terms of a rise in issuance activity. It is perhaps for this reason that most attempts to measure credit market bubbles focus on the growth in the amount of credit outstanding which is entirely a measure of the primary market activity.
A closely related point is that the limited upside in bond prices means that a lot of leverage is required to exploit any divergence of opinion. This too leads to increased issuance of debt (typically short term) which leads to increased fragility of the financial system. This implies that bond market bubbles also pop very loudly.
Posted at 10:59 am IST on Sat, 22 Dec 2012 permanent link
Categories: bond markets, bubbles
Single stock futures and promoter share pledges
I participated in a discussion on CNBC TV18 about the trading of single stock futures on companies whose promoters have pledged a significant portion of their shareholding. (You can watch this show here (Part 10) though I participated only by audio).
The discussion was about a proposal that companies whose promoters have pledged a significant fraction of their shareholding should be punished by stopping trading in single stock futures in the shares of these companies. My views were as follows:
- Allowing trading of single stock futures should not be seen as a mark of prestige for the company or any form of a reward or seal of approval. The single stock future exists to meet the need of investors and not the needs of the companies or of the exchanges.
- Single stock futures are the easiest way to short shares, and so banning single stock futures would be a mild form of banning short selling. Short selling is an absolutely essential counterweight for the leveraged long. Restricting short selling when the promoters are leveraged long would essentially be a way of bailing them out and protecting them from sharp falls in their share prices.
- Large leveraged longs create a cliff risk for the share price – if the price falls far enough to produce large margin calls for the leveraged buyers, then the price can just fall off a cliff due to distress liquidation of the pledged shares by the lenders. The answer to this is more stringent margins when cliff risk is high either because of a large leveraged long position (or on the opposite side, because of a large short interest).
Posted at 9:18 pm IST on Mon, 17 Dec 2012 permanent link
Categories: corporate governance, derivatives, short selling
The absurdity of the leveraged super senior trade
The Financial Times has a couple of stories (behind a paywall) about the Leveraged Super Senior (LSS) trades of Deutsche Bank during the financial crisis. Grossly oversimplified, the story is roughly on the following lines:
- Let us say that in the heady days before the global financial crisis, a US bank seeks protection against catastrophic default losses on a portfolio of leading companies from around the world. Say on a portfolio of $125 million, it is willing to absorb the first $25 million of credit losses and wants protection only on losses above this threshold. This is like a catastrophe insurance in that losses on this scale would probably require a second great depression.
- The German bank provides this catastrophe insurance for a modest premium to many such banks on a truly colossal scale – apparently to the tune of $130 billion.
- The German bank then turns to a bunch of Canadian pension funds to offload this default risk, and the Canadians invest in massive amounts of Leveraged Super Senior (LSS) securities that embed this catastrophic default risk for a modest risk premium. At this point, it appears that the Germans have locked in a tiny spread and gotten rid of all the risk.
- There is a catch though. The Canadian fund that bought an LSS on say a billion dollar notional put in only $100 million of cash collateral. And no, we are not talking about counter party risk here. The Canadian fund did not assume a $ 1 billion obligation. They simply had an option to post more collateral and keep the security alive if losses threatened to eat away the original $100 million of cash collateral. But if they chose not to do so, the Canadians were perfectly within their rights to walk away, and the German bank will simply have to unwind the whole structure at prevailing market prices (assuming there is a market at that point).
- The German bank models the LSS on the assumption that the Canadians will keep posting more collateral. These models imply that the LSS is almost as good as an outright hedge of the entire $ 1 billion notional, and subtract a small amount (the “gap option”) to account for the risk that the Canadians will walk away.
- A proper analysis is provided by Gregory’s 2008 paper, which is also referenced in the Financial Times story. Figure 2 of that paper provides a succinct summary of the situation. Gregory says that instead of treating the (Canadian) LSS as a $1 billion hedge less a small correction for the gap option, we should treat it as only a $100 million hedge plus a small correction for the deleveraging option. Gregory also argues that in most situations it would be suboptimal for the (Canadian) investors to post more collateral, and therefore this positive correction is quite small. In other words, the gap option approach is really wrong. I strongly recommend reading Gregory’s paper in its entirety; it may appear mathematically forbidding, but it is a lot more readable than it looks. (A stylized numerical sketch of this decomposition appears just after this list.)
- On top of all this, it is now being alleged that the German bank at one stage even stopped bothering to subtract the small gap option.
- During the global financial crisis, when the risk of a second great depression began to appear a little less remote, the German bank woke up to the fact that the Canadian LSS was denominated in Canadian dollars while the protection that it had provided to the US banks was denominated in US dollars. There was a currency mismatch and somebody had to worry about how the CAD/USD exchange rate would behave in an end of the world scenario.
- The only person that they could find willing to take a view on this fiendishly complex “quanto” risk (and put his money where his mouth is) was Warren Buffett. His Berkshire Hathaway pocketed a $75 million premium for covering this risk, but very cleverly limited its risk to $3 billion. I suspect that Warren Buffett did not try to value this quanto derivative at all, but simply calculated that he was being paid a 2.5% premium to cover this risk. In comparison to most insurance deals, this must have appeared to be a very fat premium. To an insurer who is accustomed to working with physical probabilities rather than the risk neutral probabilities that are really relevant here, this must have looked like a bet that one could take blindfolded, and maybe that is what Berkshire did. Warren Buffett screams about weapons of mass destruction when he loses money on derivatives; he just keeps quiet and pockets the cash when he makes money on them.
- The German bank concluded that since losses in excess of $3 billion were extremely unlikely, the quanto risk was completely covered and they could stop worrying about it.
- If you are worried that a large portfolio of top grade global corporations could experience default losses of 20-25%, whom would you buy insurance on this from? Most certainly not from a bank! The best run bank in the world would be broke long before this scale of default losses appears on a high quality corporate credit portfolio. This is worse than buying insurance on the Titanic from somebody who is himself on the Titanic. It is like buying insurance on a lifeboat (after the Titanic has sunk) from somebody who is swimming in the water without even a raft. There is only one situation where this hedge makes economic sense – if you are sure that you are dealing with a systemically important bank that would be bailed out if it fails. In this case, of course, you are buying insurance from the German taxpayer and that probably makes sense. The other reason for doing this is not economic at all – maybe the only reason for doing the trade was to save regulatory capital. In this case, of course, it does not matter whether you would ever collect on this insurance; maybe you would not be around to collect either.
- The Canadian pension fund which posts collateral upfront is obviously a far more credible seller of protection than a bank that is probably levered 30:1. The only problem is that the buyer now needs to model the “gap risk” and a large global bank obviously has a comparative advantage in browbeating its regulators into accepting a deeply flawed valuation of a complex hedge.
- I am not totally convinced that even Berkshire Hathaway is a credible seller of protection on a great depression risk particularly on a complex quanto risk.
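To make the Gregory point concrete, here is the stylized sketch promised above: a minimal Python fragment with hypothetical numbers (a $1 billion notional and $100 million of posted collateral) contrasting the “full hedge less gap option” view with the “collateral-capped hedge” view. It is an illustration of the decomposition, not Gregory’s model or Deutsche Bank’s actual book.

```python
notional = 1_000.0        # $ million of super-senior protection notionally bought from the LSS
collateral = 100.0        # $ million of cash actually posted by the LSS investor

def lss_hedge_payoff(tranche_loss, investor_tops_up):
    """What the bank actually recovers from the LSS for a given tranche loss."""
    if investor_tops_up:
        return min(tranche_loss, notional)      # behaves like a full hedge
    return min(tranche_loss, collateral)        # investor walks away: hedge capped at the collateral

for tranche_loss in (20.0, 100.0, 400.0):
    full_hedge_view = min(tranche_loss, notional)
    capped_hedge_view = min(tranche_loss, collateral)
    shortfall = full_hedge_view - lss_hedge_payoff(tranche_loss, investor_tops_up=False)
    print(f"loss {tranche_loss:6.1f}: full-hedge view {full_hedge_view:6.1f}, "
          f"collateral-capped view {capped_hedge_view:6.1f}, "
          f"shortfall if the investor walks away {shortfall:6.1f}")
```

Since topping up is usually suboptimal for the investor, the shortfall in the large-loss scenarios is anything but a “small” gap option, which is exactly Gregory’s point.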
All this reminds me of those fallacious mathematical proofs that use division by zero to prove that 1 equals 2. All these proofs work by creating a significant amount of needless complexity in the midst of which the audience does not notice that somewhere in the long chain of reasoning, you have actually divided by zero. The same thing is happening here: you need a significant amount of complexity to ensure that the regulators do not notice that some risks have slipped through the cracks, waiting to be picked up by the unwary taxpayer.
Posted at 5:03 pm IST on Sun, 9 Dec 2012 permanent link
Categories: derivatives, risk management
When regulation collides with free speech
In a ruling earlier this week that has implications for regulations in other fields (including finance), the US Court of Appeals (second circuit) concluded that “the government cannot prosecute pharmaceutical manufacturers and their representatives under the FDCA for speech promoting the lawful, off-label use of an FDA-approved drug.” The US Food and Drug Administration when approving a drug for certain (on-label) purposes does not prohibit physicians from prescribing the drug for other (off-label) purposes, but prohibits the drug companies from marketing the drug for off-label purposes. The Court ruled that the FDA could ban off-label use if it chose to, but could not permit such use and then require some parties to keep quiet about it:
... prohibiting off-label promotion by a pharmaceutical manufacturer while simultaneously allowing off-label use “paternalistically” interferes with the ability of physicians and patients to receive potentially relevant treatment information; such barriers to information about off-label use could inhibit, to the public’s detriment, informed and intelligent treatment decisions. (Page 44)
Financial regulators are also in the habit of regulating speech in all kinds of situations – the US SEC’s infamous quiet period rule is a good example. The Circuit Court ruling quotes a Supreme Court judgement that “regulating speech must be a last – not first – resort”. This is something that all regulators particularly in the financial sector must bear in mind.
Posted at 2:06 pm IST on Thu, 6 Dec 2012 permanent link
Categories: law, regulation
Nice Finance Quotes
I came across two nice quotes related to finance recently:
- “Risk is the pollution created by the process of making money. So where you find people making one you will surely find them hiding the other. ” — David Malone (Golem XIV) (h/t Deus Ex Macchiato).
I think of this as a more forceful way of stating the “No free lunch” form of the Efficient Market Hypothesis in the post crisis world where there is no risk free asset.
- “Accounting is the beginning of all economic wisdom, but not the end.” — Willem H. Buiter and Ebrahim Rahbari, Target2 Redux: The simple accountancy and slightly more complex economics of Bundesbank loss exposure through the Eurosystem.
In my experience, most finance MBAs are unwilling to accept the first half of the statement, while most accountants ignore the second half of the statement.
Posted at 3:10 pm IST on Fri, 30 Nov 2012 permanent link
Categories: accounting, miscellaneous, risk management
What is front running?
A recent order of the Securities Appellate Tribunal in India has raised quite a furore over the precise meaning of front running. The Tribunal ruled that Regulation 4(2)(q) of the Fraudulent and Unfair Trade Practices Regulations prohibits front running by intermediaries like stock brokers but not by others.
Many people argue (a) that front running by any entity should be prohibited, and (b) that even in the absence of such a prohibition, front running is a fraud on the market that is covered by the general anti fraud regulations. I do not wish to make any comment on the particular case that was decided by the Tribunal, but I do think that we should be careful about criminalizing any and all forms of front running. Front running by brokers and other intermediaries is a breach of the fiduciary obligation that they owe their clients, but it is not self evident that every Tom, Dick and Harry on the planet has any fiduciary obligation not to front run orders that they expect other people to place. In fact, such front running is legal and common throughout the world.
The most famous example of such legal front running occurred when the giant fund LTCM (Long Term Capital Management) was near its death in 1998. LTCM brought in Goldman Sachs to help raise new money to recapitalize LTCM. Goldman flatly refused to sign a Non Disclosure Agreement (NDA) when requested to do so, but LTCM was so desperate that they let Goldman do due diligence anyway. What happened thereafter is well described by many people. Here is the description from Sebastian Mallaby’s More Money Than God: Hedge Funds and the Making of a New Elite, New York, Penguin Press (pages 239-240):
[Goldman's] proprietary trading desk was selling positions that resembled LTCM’s, feeding on Long-Term like a hyena feeding on a trapped but living antelope. The firm made only a qualified effort to defend what it was up to. A Goldman trader in London was quoted as saying: “If you think a gorilla has to sell, then you sure want to sell first. We are very clear on where the line is; that’s not illegal.” Corzine himself conceded the possibility that Goldman “did things in markets that might have ended up hurting LTCM. We had to protect our own positions. That part I’m not apologetic for.”
The critical point that separates Goldman’s actions from the zone of illegality is its refusal to sign the NDA. This very clearly highlights that the prohibition of front running is rooted in a fiduciary obligation – take that fiduciary duty away and there is nothing immoral or even illegal about trading ahead of somebody else. In fact, the practice is so widespread that in the finance literature, there is a technical term for it – predatory trading (Markus K. Brunnermeier and Lasse Heje Pedersen (2005) “Predatory Trading”, The Journal of Finance, 60(4), pp. 1825-1863). Brunnermeier and Pedersen identify several situations where this kind of front running occurs routinely:
- Hedge funds with (nearing) margin calls may need to liquidate, and this could be known to certain counterparties such as the bank financing the trade.
- Similarly, traders who use portfolio insurance, stop loss orders, or other risk management strategies can be known to liquidate in response to price drops.
- A short-seller may need to cover his position if the price increases significantly or if his share is recalled (i.e., a “short squeeze”).
- Certain institutions have an incentive to liquidate bonds that are downgraded or in default.
- Intermediaries who take on large derivative positions must hedge them by trading the underlying security.
This list is by no means exhaustive – there is a whole industry devoted to trading ahead of index mutual funds at an index reconstitution, or ahead of a commodity Exchange Traded Fund rolling its futures positions over to the next month.
Front running can happen even with much more imprecise forecasts of other people’s orders. A paper forthcoming in the Journal of Financial Economics (Sophie Shive and Hayong Yun “Are mutual funds sitting ducks?” available here or in working paper version here) shows that:
We find that patient traders profit from the predictable, flow-induced trades of mutual funds. In anticipation of a 1%-of-volume change in mutual fund flows into a stock next quarter, the institutions in the same 13F category as hedge funds trade 0.29–0.45% of volume in the current quarter. ... A one standard deviation higher measure of anticipatory trading by a hedge fund is associated with a 0.9% higher annualized four-factor alpha. A one standard deviation higher measure of anticipation of a mutual fund’s trades by institutions is associated with a 0.07–0.15% lower annualized four-factor alpha.
On the opposite side there is a paper showing that hedge funds short stock ahead of expected sales by mutual funds experiencing large redemptions (Joseph Chen, Samuel Hanson, Harrison Hong, Jeremy C. Stein (2008) “Do Hedge Funds Profit From Mutual-Fund Distress?”, NBER Working Paper No. 13786)
In short, finance is an ugly world very similar to the African Savannah where the lion lives only if it can outrun the slowest gazelle and the gazelle lives only if it can outrun the fastest lion. We may not like it, but I am not sure that it is practical or desirable to criminalize all this.
I repeat that I am not expressing any view on the case that was before Tribunal; I am only responding to suggestions that any and all forms of front running should be criminalized.
Posted at 5:20 pm IST on Mon, 26 Nov 2012 permanent link
Categories: insider trading, manipulation, regulation
Is finance dumbing us down?
Gerald Crabtree’s scary paper on “Our Fragile Intellect” (h/t Paul Kedrosky) has only one sentence about finance, but it is a damning one:
Needless to say a hunter gather that did not correctly conceive a solution to providing food or shelter probably died along with their progeny, while a modern Wall Street executive that made a similar conceptual mistake would receive a substantial bonus.
I have absolutely no idea whether Crabtree’s speculations about the genetic fragility of human intelligence are correct or not, but I am certain that the problem of moral hazard in finance is a real one.
Meanwhile, finance students who complain about the technical complexity of modern finance may do well to ponder Crabtree’s claim that “life as a hunter gather required at least as much abstract thought as operating successfully in our present society”. Computing the right way to hedge a complicated derivative is hard, but as Crabtree would say “non-verbal comprehension of things such as the aerodynamics and gyroscopic stabilization of a spear while hunting a large dangerous animal” is probably as hard or harder.
Posted at 1:35 pm IST on Mon, 26 Nov 2012 permanent link
Categories: behavioural finance, regulation
Irrational but arbitrage free
Last month I blogged about the Palm-3Com episode as an instance of prices being horribly wrong without there being an arbitrage opportunity (“free lunch”). Last week John Hempton of Bronte Capital had a blog post about Great Northern Iron Ore Properties (GNIOP) whose overpricing is extremely easy to establish. The post concludes by saying:
Because that is so well known the stock has a 20 percent borrow cost -- roughly offsetting the profit you will get from shorting it. In that sense there is a rational market. But that is the only sense there is a rational market. People own this. They will lose money.
Great Northern Iron Ore Properties is a trust set up in 1906 by the Great Northern Railway for regulatory reasons (apparently, under the Hepburn Act of 1906, no railroad was permitted to haul commodities that it had produced itself).
The 1906 agreement states that the Trust shall continue for twenty years after the death of the last survivor of eighteen persons named in the Trust Agreement. I would imagine that this provision has something to do with the rule against perpetuities. Anyway, the last survivor of these eighteen persons died on April 6, 1995, and the Trust therefore terminates twenty years later, on April 6, 2015.
All this is very clearly disclosed on the GNIOP website and in SEC filings. It is clear therefore that investors in GNIOP will get dividends for a couple of years and then a final dividend in 2015; these dividends can be estimated within reasonable bounds and discounting this short stream of dividends gives the fundamental value of GNIOP. But a careless investor who applies a PE multiple to GNIOP wrongly assuming a perpetual stream of dividends would arrive at an absurdly high valuation and would consider the stock hugely undervalued. Apparently some investors (humans and computers) who use simple stock screens based on PE ratios are eagerly buying the stock making it overvalued in relation to the true short stream of dividends.
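As a back-of-the-envelope sketch of how large the gap between the two approaches can be, here is a minimal calculation with made-up per-unit numbers (I have not modelled GNIOP’s actual distributions):

```python
# Hypothetical per-unit numbers purely for illustration, not GNIOP's actual payouts.
annual_dividend = 15.0      # dividend expected for each of the remaining years
r = 0.08                    # discount rate
final_distribution = 10.0   # terminal distribution when the trust winds up in 2015

# Value of the genuinely finite stream: two more annual dividends plus the wind-up payment
finite_value = (annual_dividend / (1 + r)
                + annual_dividend / (1 + r) ** 2
                + final_distribution / (1 + r) ** 2.5)

# What a careless perpetuity (PE-multiple) shortcut implies for the same dividend
perpetuity_value = annual_dividend / r

print(f"finite-life value  : {finite_value:6.1f}")     # roughly 35
print(f"perpetuity 'value' : {perpetuity_value:6.1f}") # 187.5, more than five times too high
```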
Rational investors see a free lunch and step in to short the stock. Markets abhor free lunches, and the stock borrow cost rises to the point where it eliminates the free lunch. But eliminating the free lunch is not enough to correct the distorted price.
Posted at 8:54 pm IST on Mon, 19 Nov 2012 permanent link
Categories: arbitrage, market efficiency
On rating Rembrandts
No, this post is not about the masterpieces of the Dutch painter Rembrandt van Rijn, but about the Rembrandt CPDO (Constant Proportion Debt Obligation) notes created by the Dutch bank ABN Amro in 2006. The Federal Court of Australia ruled last week that:
S&P’s rating of AAA of the Rembrandt 2006-2 and 2006-3 CPDO notes was misleading and deceptive and involved the publication of information or statements false in material particulars and otherwise involved negligent misrepresentations to the class of potential investors in Australia ... because by the AAA rating there was conveyed a representation that in S&P’s opinion the capacity of the notes to meet all financial obligations was “extremely strong” and a representation that S&P had reached this opinion based on reasonable grounds and as the result of an exercise of reasonable care when neither was true and S&P also knew not to be true at the time made. (Summary, Para 53)
The judgement is indeed very long – Felix Salmon says that Jayne Jagot’s judgement “runs to an astonishing 635,500 words, or almost 1,500 pages: it’s literally longer than War and Peace”. I agree with Felix that the judge does a remarkable job of understanding this complex instrument and analyzing the intricacies of rating it. She has obviously benefited from the testimony of numerous experts, but she still deserves full credit for the clarity of her analysis of the key drivers of the performance of a CPDO. The court is thus able to arrive at a cogently argued conclusion that “S&P’s modelling and assignment of the AAA rating was not such as a reasonably competent ratings agency could have carried out and assigned in all of the circumstances.” (Summary, Para 27)
There is a wealth of information in the judgement about the modelling of CPDOs, but from the point of view of legal liability of the rating agency, there are three crucial hurdles to overcome:
- Investors have a responsibility to perform their own credit assessment.
- The court cannot become the regulator of rating agency methodology.
- Rating agencies do not insure investment performance.
The court deals with each of these forcefully, but I am sure there will be a great deal of debate about whether the views of the court are correct. On the investors’ responsibility to perform their own credit assessment, the court says:
I consider the proposition that a prudent person must not invest in any product they do not themselves understand problematic. It suggests that a prudent person could never take and rely on advice. It suggests that a prudent person who had been advised that a particular investment should be made must reject the advice if they themselves are capable of understanding the advice but incapable of understanding the way in which the investment operates. It is the equivalent of saying that only people who truly understand the principles of flight should be allowed to travel by plane. It seems to me that the rigidity of the proposition is a recipe for imprudence. Prudent people do not assume they know or can know everything. They do not assume that they are best placed to assess every fact, matter or thing. They do not assume that their own limitations dictate what can and cannot prudently be done. Prudence does not involve solipsism. (Para 1472).
Prudent people seek to identify others who are best placed and have demonstrated they can be trusted to assess relevant facts, matters and things. ... All of the councils relied on: (i) the belief, which LGFS’s conduct had induced, that LGFS, as specialists in local government financial markets and investments, had applied its expertise to the CPDO and assessed it to be a suitable investment for councils to make, and (ii) the belief that S&P had applied its expertise as a body specialising in assessing the creditworthiness of financial products and had concluded that this product warranted the highest possible rating of AAA in respect of interest and principal. The councils’ beliefs to this effect were reasonable in the circumstances and, indeed, were correct. For the councils to refuse to invest in these circumstances, by reason only of the fact that they did not understand how the product operated, does not accord with the dictates of prudence. (Para 1473).
On the issue of the court becoming the regulator of rating agency methodology, the judgement says:
It is also not the case that the councils “seek to place the Court in the untenable position of being the regulator of rating agency methodology”. This is an inaccurate description of the issues in this case. As will be apparent from the discussion and findings below this is not a case about alternative methods of rating, questions of reasonable qualitative judgment or whether one or other method or judgment is to be preferred or is superior to another. This is a case about what S&P did and did not do and whether any reasonable ratings agency could have so conducted itself. It is not a case about the appropriateness or otherwise of a rating. It is a case about negligence and misleading and deceptive conduct. (Para 2482)
On the argument that rating agencies are not insurers, the court says:
The imposition of a duty of care in this case does not transform S&P into an insurer of investment performance. It does no more than ensure that S&P, if it chooses to earn money from holding itself out as having specialised expertise in ascertaining the creditworthiness of structured financial products, knowing that it can do so because many potential investors do not have or cannot practically access the same expertise, exercises reasonable care in the assigning of ratings to structured financial products. The criterion for potential liability in respect of such a duty of care is not the performance of the product. The performance of the product determines the potential for loss and thus completion of the potential cause of action. But breach of the proposed duty cannot be determined by reference to the performance of the product. As S&P correctly said the assigning of a rating of a structured financial product embodies a forward-looking opinion about creditworthiness assigned at a particular time. The ratings agency either did or did not exercise reasonable care at that particular time. (Para 2799)
But I am not a lawyer, and my interest is not in legal liability but in financial modelling. I have been asking myself a more fundamental question – could this note have been rated at all (not a AAA rating, but any rating at all)? The Australian court does conclude that the Rembrandts were securities and not derivatives for the purposes of the Corporations Act, but economically they are more derivatives than bonds. As the court put it: “However else it might be described, the CPDO was ultimately an extraordinarily complicated bet on the future performance of two CDS indices over a period of up to 10 years.” (Summary, Para 9). The risk in the Rembrandts is market risk rather than credit risk. Yes, the derivatives are credit derivatives, but the Rembrandts could be cashed out at a 90% loss of principal not because there were too many defaults on the names underlying the credit derivatives, but because of the movement of the credit spread. Counter-intuitively, this could happen if the credit spread were too low rather than too high.
Another way of looking at it is that the Rembrandts were a bet that the risk premium embedded in the credit spread (more precisely the CDS spread) could be harvested in a “safe” manner. If the credit spread were say 1% while the expected default losses were only 0.20%, the notes would be expected to make 0.80% annually by selling CDS (before accounting for leverage which could be as high as 15:1). In finance jargon, the Rembrandts were betting that the risk neutral expected default loss (say 1%) is much higher than the real expected default loss (say 0.20%) and the balance is just a risk premium. The complexity of the CPDO structure is all about (a) making this a leveraged bet and (b) dynamically adjusting the leverage ratio to deliver a bimodal outcome where either the investor gets back full principal with a coupon 1.90% above the risk free rate or gets cashed out at a 90% loss of principal. To a finance theorist, there is something absurd about a risk free (AAA) instrument yielding 1.90% above the risk free rate. It is almost axiomatic that there is no risk free way of harvesting risk premia.
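To see why there is no risk-free way of harvesting this premium, here is a minimal sketch of the carry arithmetic using the hypothetical numbers above; the 15:1 leverage is from the text, while the index spread duration and the 50 basis point spread move are my own illustrative assumptions, not anything from S&P’s model or the actual Rembrandt structure.

```python
leverage = 15                  # CPDO leverage, up to about 15:1
cds_spread = 0.010             # 1.00% spread earned by selling CDS protection
expected_default_loss = 0.002  # 0.20% "real-world" expected annual default loss

carry = cds_spread - expected_default_loss     # 0.80% per unit of exposure
levered_carry = leverage * carry               # about 12% of NAV per year

# The levered carry looks more than enough to pay risk-free + 1.90%, but the
# same leverage applies to mark-to-market moves in the CDS spread:
spread_move = 0.005            # a 50 basis point move in the index spread (assumed)
spread_duration = 4.5          # rough spread duration of the index in years (assumed)
mtm_swing = leverage * spread_duration * spread_move   # about 34% of NAV

print(f"levered carry            : {levered_carry:.1%} per year")
print(f"NAV swing on a 50bp move : {mtm_swing:.1%}")
```

The levered mark-to-market sensitivity is what creates the bimodal cash-in or cash-out outcome: the carry is harvested only if the spread path cooperates.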
It appears to me that such instruments should not be rated at all. Analyzing the probability of loss in the Rembrandt makes no sense when it does not take into account the fact that the loss in case of default is 90% and not the much smaller losses in AAA corporate bonds. Comparing the loss probability of the Rembrandt with that of a AAA corporate bond over a ten year horizon is meaningless since unlike AAA rated corporate bonds which default only after several years, the biggest risk of loss in a CPDO like Rembrandt is in the early years when the leverage is very high. A 0.28% probability of loss over ten years might be consistent with a AAA rating, but a AAA rated corporate bond also has less than 0.01% default probability over the first two years and not 0.06% as one might expect if one tried to spread the 0.28% out equally over ten years.
Posted at 3:33 pm IST on Mon, 12 Nov 2012 permanent link
Categories: credit rating, derivatives, law, regulation
Parochialism of national and global exchanges
I blogged three years ago about how Indian exchanges pretend to be national in their scope, but shut down when conditions in their home city make it convenient to take a holiday. Their equally parochial regulators are also complicit in this. (I have been critical of the 9/11 closures in the US as well).
This week as hurricane Sandy hit the east coast of the US, it was the turn of the big US exchanges with global footprints to reveal their parochialism. Their regulator was also happy to endorse the decision of these exchanges to shut down. It was left to a former Chairman of the US SEC, Arthur Levitt to state the obvious:
If you’re going to have a stock exchange, it should have a backup facility of some sort so that regional events don’t cause its closure, ... This should not happen to the world’s most prominent exchange.
The response of the NYSE CEO was that Arthur Levitt “maybe a little out of date with the facts.” No, it is the exchanges and their regulators who are out of date with the facts – somebody forgot to tell them that modern exchanges are not trading floors subject to the vagaries of the local weather, but electronic networks which can be rerouted very easily. And no, the difficulty that New York brokers face in getting to their offices is no excuse for shutting a national exchange. By this logic, they should have shut the NYSE when hurricane Katrina struck New Orleans; surely, brokers based there would have had great difficulty reaching their offices.
In my experience, backup sites in the financial industry are a big joke. Typically, these systems are set up only to satisfy box-ticking regulators who require backup sites to exist but do not bother to check whether they are actually adequate. Many of these backup systems have significantly less processing capacity than the main site. Moreover, they are not designed to run the full suite of software that runs on the main system. Given the willingness of spineless regulators worldwide to shut down national financial market places at the drop of a hat, this reluctance to spend money on genuine backup sites is fully rational.
I am convinced that regulators should simply force each institution to operate out of its backup site on a few random days each year. They should get very minimal notice (otherwise, they would fly down their entire management team to the backup site to make it work). Accountable algorithms that I blogged about recently are ideal to ensure that the dates are indeed randomly chosen.
Posted at 7:43 pm IST on Sat, 3 Nov 2012 permanent link
Categories: exchanges
Purported solution of Palm-3Com relative pricing puzzle
Martin Cherkes and Chester Spatt claim in a recent paper to have solved the puzzle about the relative pricing of Palm and 3Com. During the dot com bubble, 3Com sold 5% of its Palm subsidiary to the public, and announced its intention to spin off the remaining 95% to its shareholders. The market valued the common stock portion of Palm owned by 3Com at more than the whole of 3Com implying a negative value (-$22 billion) to the residual business of 3Com. This was implausible because 3Com had positive value before it acquired Palm, and after the spin off was complete, the residual part of 3Com was valued at $5 billion.
Cherkes and Spatt “solve” this problem by using the forward price of the Palm share instead of the spot price. Of course, single stock futures were not available in the US in those days; so they use synthetic futures prices computed from the options market (long call plus short put). The forward price is well below the spot price and based on this price, 3Com appears to be correctly valued. They also showed that as changes in the expected spin off date altered the maturity of the required synthetic future, all the relative prices adjusted to keep the valuation correct.
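For readers who want the mechanics, here is a minimal sketch of the synthetic forward implied by put-call parity; the option prices below are made-up numbers, not the actual Palm quotes used by Cherkes and Spatt.

```python
import math

K = 95.0     # common strike of the call and put (hypothetical)
C = 12.0     # call price (hypothetical)
P = 25.0     # put price (hypothetical)
r = 0.06     # risk-free rate
T = 0.5      # time to the expected spin-off, in years

# Put-call parity: C - P = (F - K) * exp(-r * T), so the implied forward is
F = K + (C - P) * math.exp(r * T)
print(f"synthetic forward: {F:.2f}")   # about 81.6, well below a spot of, say, 100
```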
This is definitely an important addition to what we know about the 3Com puzzle – at the very least, it shows that the no arbitrage conditions and the law of one price were satisfied even at the peak of the dot com frenzy. But I do not believe that this is a complete solution. It is a little like claiming to solve the uncovered interest parity puzzle by pointing out that covered interest parity does hold. Yes, it is nice to know that covered interest parity is not violated and there are no risk free arbitrage opportunities available. But this only substitutes one problem for another: the forward rate is now biased and one has to appeal to some kind of time varying risk premium to explain this away.
The same problem does come up here. How does one justify the depressed forward price of the Palm stock? Cherkes and Spatt argue that this is explained by the securities lending fees that could be earned on the Palm stock. These fees arise because rational investors want to short Palm stock and buy 3Com to arbitrage the difference away. Since there are too few Palm stocks available (only 5% of the shares have been sold to the public), what happens is that the lending fees rise to the point where the arbitrage is no longer available. This is just like the currency forward premium rising till it equals the interest differential and the risk free arbitrage opportunity is eliminated.
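The analogy with covered interest parity can be made precise with a one-line cash-and-carry relation in which the lending fee plays the role of the foreign interest rate (or a dividend yield); the 40% fee below is a made-up number chosen only to show the direction of the effect.

```python
import math

S = 100.0    # spot price (hypothetical)
r = 0.06     # risk-free rate
q = 0.40     # securities lending fee earned by whoever holds and lends the stock (hypothetical)
T = 0.5      # horizon in years

# Cash-and-carry with the lending fee treated like a dividend yield
F = S * math.exp((r - q) * T)
print(f"forward: {F:.2f}")   # about 84.4, well below spot
```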
The fundamental problem remains – either or both of the Palm and 3Com stock were mispriced. As usual, the “there is no free lunch” version of the Efficient Markets Hypothesis holds, but the “prices are correct” version fails.
Posted at 11:39 am IST on Tue, 30 Oct 2012 permanent link
Categories: arbitrage, behavioural finance, market efficiency
Selective Price Sensitive Disclosure by Government Functionaries
I was mulling over the interesting paper on “Selective disclosure by federal officials and the case for an FGD (Fairer Government Disclosure) regime” by Donna M. Nagy and Richard W. Painter when I came across this bombshell from the Chair of the UK Statistics Authority to the Prime Minister of the UK (h/t FT Alphaville):
I was made aware during the course of yesterday afternoon of your remarks at Prime Minister’s Questions in respect of the economy, in particular your statement that “the good news will keep coming”. This was ahead of this morning’s Office for National Statistics release of the preliminary estimate of Gross Domestic Product for the third quarter of 2012, to which you receive pre-release access up to 24 hours ahead of publication.
... The Pre-Release Access to Official Statistics Order 2008 states that recipients of pre-release access must not disclose ‘any suggestion of the size or direction of any trend’ indicated by the statistic to which the recipient has been given such access. It is clear from media reports that, although this may not have been your intent, your remarks were indeed widely interpreted as providing an indication about the GDP figures.
This episode is yet another reminder that the selective release of price sensitive information by government functionaries is a serious problem. Nagy and Painter propose that government functionaries must be subject to a regime of fair disclosure similar to that imposed on corporate insiders by Regulation FD in the US. (They explain that the recently enacted STOCK Act that deals with insider trading by members of the US Congress does not deal with selective disclosure.) Nagy and Painter also point out that there are a number of legal problems in creating such a regime because of the enhanced constitutional protection to communications between federal officials and members of the public because “speech on public issues occupies the highest rung of the hierarchy of First Amendment values, and is entitled to special protection.” But they believe that a Fair Government Disclosure regime can be created that addresses these concerns.
In India also we have seen selective (and even misleading) disclosure of information by government functionaries. There is a need to develop mechanisms to reduce the chance of such events.
Posted at 7:02 pm IST on Fri, 26 Oct 2012 permanent link
Categories: law, regulation
Luddites in technology company finance departments
I was fascinated by yesterday's fiasco in which Google filed its draft earnings statement with the SEC prematurely – the principal giveaway was a press release that said right at the top “PENDING LARRY QUOTE”. Several newspapers reported a statement from Google stating:
Earlier this morning RR Donnelley, the financial printer, informed us that they had filed our draft 8K earnings statement without authorization. We have ceased trading on NASDAQ while we work to finalize the document. Once it's finalized we will release our earnings, resume trading on NASDAQ and hold our earnings call as normal at 1:30 PM PT.
Interestingly, this statement is nowhere to be seen on the Google Investor Relations web site or in the SEC filings. Obviously, Regulation FD does not cover everything!
Footnoted.com raises the very interesting question as to why Google would use RR Donnelley to file its financial statements. This was the same thought that came to my mind – surely, a company whose software can navigate driverless cars, or translate automatically from one language to another, or find almost anything that there is on the vast world wide web should not find it too hard to click the “Send” button.
But Google is not alone in this. Some of India's largest technology companies, which make money by running the most challenging business processes for their clients, turn to RR Donnelley to file their financial statements with the SEC. This is part of a broader phenomenon that I see all the time. Technology companies which use very sophisticated information technology in their core operations often have a fair share of Luddites in their corporate finance departments.
I am reminded of an episode almost an eternity ago, when some of my coffee loving colleagues and I were stuck in one of India's largest coffee plantations. My colleagues spent the better part of a day driving around the whole place in search of a cup of fresh coffee. It was all in vain; the management of the coffee plantation had not yet given up the old colonial mindset in which tea was the beverage of choice, and coffee was only something that you sold to make your money. The attitude in technology company corporate finance departments is very similar – technology is something to be monetized and not necessarily to be used.
Posted at 11:37 am IST on Fri, 19 Oct 2012 permanent link
Categories: technology
Predictable unpredictable numbers compromise Chip and PIN cards
A group of researchers at the University of Cambridge has a paper describing serious security weaknesses in Chip and PIN or EMV cards (h/t Bruce Schneier). EMV or “Chip and PIN”, the leading system for card payments world-wide, relies on a chip in the card that executes an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate an unpredictable number for each transaction to ensure that it is fresh. The ATM sends this unpredictable number to the card along with various transaction fields. The card responds with an authorization request cryptogram (ARQC), which is calculated over the supplied data. If properly implemented, this ARQC allows the ATM or POS terminal to verify that the card is alive, present, and engaged in the transaction.
The reality is very different. The Cambridge researchers discovered that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply the “unpredictable” number which is the heart and soul of the entire protocol. Moreover, the fault actually lies with the EMV designers themselves:
The first flaw is that the EMV protocol designers did not think through carefully enough what is required for it to be “unpredictable”. The specifications and conformance testing procedures simply require that four consecutive transactions performed by the terminal should have unique unpredictable numbers ... Thus a rational implementer who does not have the time to think through the consequences will probably prefer to use a counter rather than a cryptographic random number generator (RNG); the latter would have a higher probability of failing conformance testing (because of the birthday paradox).
If the “unpredictable number” can actually be predicted, it is possible to perform all kinds of “pre-play” attacks. A crooked merchant can harvest an ARQC while having custody of the card in his POS terminal and then replay it at an ATM without the card being present and execute transactions there.
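To make the flaw concrete, here is a minimal sketch of the two kinds of terminal behaviour the paper contrasts; this is purely illustrative, since a real terminal generates the UN inside its EMV kernel.

```python
import secrets

class CounterTerminal:
    """Weak implementation: the 'unpredictable number' is just an incrementing counter."""
    def __init__(self):
        self.counter = 0
    def unpredictable_number(self) -> bytes:
        self.counter += 1
        return self.counter.to_bytes(4, "big")    # trivially predictable

class RandomTerminal:
    """What the protocol intends: a fresh 4-byte value from a cryptographic RNG."""
    def unpredictable_number(self) -> bytes:
        return secrets.token_bytes(4)

# An attacker who can predict the UN a terminal will issue at some future time can
# harvest an ARQC for that UN today ("pre-play") and replay it later as if the
# card were present.
print(CounterTerminal().unpredictable_number().hex())   # 00000001
print(RandomTerminal().unpredictable_number().hex())    # unpredictable
```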
The researchers conclude:
Just as the world’s bank regulators were gullible in the years up to 2008 in accepting the banking industry’s assurances about its credit risk management, so also have regulators been credulous in accepting industry assurances about operational risk management.
Posted at 9:41 pm IST on Sat, 13 Oct 2012 permanent link
Categories: technology
Accountable algorithms
Ed Felten argues that with modern cryptography it is possible to make randomized algorithms accountable (h/t Bruce Schneier). This means that the public can verify that the algorithm was executed correctly in a particular case even though the algorithm used random numbers to make it unpredictable.
Felten’s idea is to use one random number to achieve unpredictability and another random number to achieve randomness. The authority running the algorithm chooses the first random number (R) secretly and then commits it (the cryptographic equivalent of putting it in a tamper proof sealed envelope which will be opened later). Then, it chooses the second random number (Q) publicly (for example, by rolling the dice in public). The two random numbers are added and the sum (R + Q) is the input to the algorithm. Note that the public cannot verify that R was chosen randomly, but this does not matter because even if R is non random, R + Q is still random.
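Here is a minimal sketch of the commit-then-combine idea, using a salted SHA-256 hash as the commitment; the specific construction is my illustration, not Felten’s code.

```python
import hashlib
import secrets

# Authority's side: pick R secretly and publish a commitment to it.
R = secrets.randbelow(2**128)
salt = secrets.token_bytes(16)
commitment = hashlib.sha256(salt + R.to_bytes(16, "big")).hexdigest()
print("published commitment:", commitment)     # published before Q is known

# Public randomness: Q is chosen in the open (dice roll, lottery draw, ...).
Q = 7_342_901                                  # stand-in for the public draw

# The selection algorithm is seeded with (R + Q): unpredictable to the public
# because R was secret, and honest even if R was biased, because Q was not
# known when R was committed.
seed = (R + Q) % 2**128

# Verification: after the fact, the authority reveals R and salt, and anyone
# can check the commitment and recompute the seed.
assert hashlib.sha256(salt + R.to_bytes(16, "big")).hexdigest() == commitment
print("seed used by the algorithm:", seed)
```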
Felten’s examples are not from finance, but I find the finance applications quite fascinating. For example, the income tax department selects some individuals randomly for detailed scrutiny. Using Felten’s ideas, it is possible for an individual who is selected for scrutiny to verify that this scrutiny is the result of a genuine random selection and not of the assessing officer’s bias. It is possible to do this without making the selection predictable.
As a second example, suppose a stock exchange wants to look at prices at random times because if fixed times are chosen, there is greater risk of the prices being manipulated. The random time must be unpredictable to participants. But after the fact, we want to be able to verify that the time was chosen randomly and that some exchange official did not deliberately choose a specific time “after the fact” with knowledge of the actual prices. Felten’s ideas can be used to solve this problem as well.
In a comment on his second post, Felten introduces even more interesting ideas. For a financial example, consider an organization which requires certain employees to take prior approval before trading stocks on personal account. Suppose the compliance officer disallows a trade on the ground that the particular stock is on a negative list of stocks that cannot be traded. How does the employee verify that the compliance officer is not lying if the list itself is secret? Felten’s method can be used to deal with this problem. The compliance officer should publicly announce the root hash of a Merkle tree containing the restricted list of stocks. This root hash by itself reveals nothing. Now the compliance officer can reveal a single path in the Merkle tree which allows the employee to verify that the stock in question is on the list. But this would not reveal anything about what else is on the list.
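Here is a minimal sketch of that Merkle tree construction; it is my own illustrative implementation with made-up ticker symbols, not anything from Felten’s post.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Build the tree bottom-up; returns all levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2:                      # duplicate the last node if the level is odd
            level = level + [level[-1]]
        levels.append([h(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def proof_for(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling % 2 == 1))   # (hash, sibling is on the right?)
        index //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

# In a real scheme the leaves would be salted so the proof does not allow a
# dictionary attack on the rest of the list; the tickers here are hypothetical.
restricted = ["ACME", "GLOBEX", "INITECH", "HOOLI"]
leaves = [h(s.encode()) for s in restricted]
levels = build_levels(leaves)
root = levels[-1][0]                            # only this root hash is published

i = restricted.index("GLOBEX")
print(verify(leaves[i], proof_for(levels, i), root))   # True: GLOBEX is on the list
```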
A lot of regulations are written assuming that the people implementing the regulation are honest. This assumption is clearly inappropriate. The Right to Information Act ensures only transparency; it does not guarantee accountability in the presence of randomization. We should require that all algorithms that are used during the implementation and enforcement of the regulations should be accountable in Felten’s sense.
Posted at 9:52 pm IST on Sat, 29 Sep 2012 permanent link
Categories: law, regulation, technology
More on Abolishing IPOs
I received many comments on my blog post regarding Pritchard’s proposal on abolishing IPOs. Several comments suggested that while this might reduce losses by retail investors on hot IPOs, it would not eliminate them. I agree completely. Another set of comments asked whether the proposal would deny investors the opportunity to earn high rates of return in IPOs. This is a more subtle issue because the average rate of return on IPOs is nothing great (for the buy and hold investor, it is in fact a below average rate of return). However, many IPOs are “lottery stocks” – though the expected return is low, there is a small probability of a very high return (similar to that in a lottery ticket).
If one assumes that retail investors are leverage constrained, these lottery stocks might be attractive to some categories of investors. I recall reading long ago that when Thai telecom tycoon Thaksin Shinawatra wanted to take his company public, he chose to launch the IPO just before the launch of the telecom satellite that was crucial for his business. By doing so, he offered investors an opportunity to gamble on the successful launch of the satellite. Both the IPO and the satellite launch were successful, and years later, he went on to become the Prime Minister of his country.
If one thinks about it carefully, Pritchard’s proposal would not rule out such IPOs, because the only requirement is a seasoning period of continuing disclosures. He does not propose that IPO should happen only after the business model stabilizes.
Posted at 5:27 pm IST on Sun, 23 Sep 2012 permanent link
Categories: equity markets
Abolishing IPOs
Adam Pritchard has a provocative paper arguing that Initial Public Offerings (IPOs) must simply be abolished (“Revisiting ‘Truth in securities revisited’: Abolishing IPOs and harnessing markets in the public good”). He suggests that “companies bec[o]me public, with required periodic disclosures to a secondary market, before they [a]re allowed to make public offerings”.
Pritchard writes:
No one believes that IPOs reflect an efficient capital market. In fact the evidence is fairly strong that IPOs are inefficient. IPOs are bad deals.
IPOs are bad for companies, bad for insiders, and bad for investors. The only parties that clearly benefit from these deals are the individuals who service them: accountants, lawyers, and underwriters.
Despite the provocative language, what Pritchard is referring to is simply the robust empirical result of short-term underpricing (which makes IPOs bad for companies and insiders) and long-term underperformance (which makes IPOs bad for investors). Pritchard correctly attributes these problems to information asymmetry between issuers and investors.
His solution is to create separate primary and secondary markets for private and public companies, and make the transition between them depend on (a) minimum size requirements and (b) acceptance of enhanced disclosure obligations. The primary and secondary markets for private companies would exclude retail investors. Retail investors would be restricted to public companies; moreover, public companies would have been seasoned in the private market before becoming public. During the seasoning period, would-be public companies would file annual reports and quarterly reports on the same lines as public companies. Price discovery would happen in the private secondary market (markets like SecondMarket and SharesPost) on the basis of these public disclosures.
After the seasoning period is over, the company trades in public markets open to retail investors. Pritchard believes that the primary market for these companies should simply be the secondary market itself – so called “At the Market” offerings.
Overall, I like these ideas as they have the potential to make the equity markets more efficient. The only thing that I do not like is Pritchard’s idea that the private markets can be opened not only to Qualified Institutional Buyers (QIBs) but also to Accredited Investors. I have been reading Jennifer Johnson’s paper describing the accredited investor idea as a Ponzi scheme run by regulators (“Fleecing grandma: a regulatory Ponzi scheme”).
Posted at 12:52 pm IST on Sat, 8 Sep 2012 permanent link
Categories: equity markets
Resolving Central Counter Parties (CCPs) by selective tear-ups
In July 2012, the CPSS (Committee on Payment and Settlement Systems of the Bank for International Settlements) and IOSCO (International Organization of Securities Commissions) put out for consultation a report on the resolution of CCPs (Recovery and resolution of financial market infrastructures: Consultative report).
Buried deep inside the report is a proposal that would permit the orderly failure of even systemically important CCPs. The idea is that the CCP could simply tear up some of its settlement guarantees and wash its hands of positions that it is unable to honour. The CPSS-IOSCO document says:
... contracts could be given a final value based on the price at which the most recent variation margin payment obligations from and to participants had been calculated. To the extent that defaulting participants with out-of-the-money positions had been unable to pay variation margin to the CCP, the CCP’s obligations and variation margin payments to all in-the-money participants could be haircut pro rata to the size of their variation margin claims. This would have the effect of allocating in full the losses that had been suffered, and limiting exposure to future losses by eliminating unmatched positions or the possibility of further obligations arising on these unmatched positions. All other contracts – probably the vast majority of the contracts cleared – could remain in force. (para 3.13)
The idea seems to be that if huge price swings and defaults in some particular segment of the CCP’s activities inflict life threatening losses on the CCP, then the resolution mechanism steps in, cuts this segment loose and allows this segment to die. The remaining segments of the CCP can continue to function unimpeded.
Another way of looking at this is that all settlement guarantees provided by the CCP are loss limited by deep out of the money options that kick in when the CCP enters resolution. If I buy a future at 500, I would normally expect the CCP to honour this contract however much the asset price rises in value. Selective tear up means that if the asset price shoots up to say 5,000 and so many sellers default that the default losses overwhelm the capital of the CCP, it (the CCP or the resolution authority) may simply haircut me and forcibly close out my position at 4,000. It is as if along with buying the future at 500, I also sold a call to the CCP at 4,000.
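To make the option analogy concrete, here is a minimal sketch in Python (the entry price of 500 and the tear-up price of 4,000 are purely hypothetical) of what a selective tear-up does to the payoff of the long futures position:

```python
def long_futures_pnl(settle_price, entry_price=500.0, tear_up_price=None):
    """P&L on a long futures position entered at entry_price.

    If the CCP enters resolution and tears up the contract at
    tear_up_price, the buyer is cashed out at that level instead of the
    final settlement price -- equivalent to having sold the CCP a call
    struck at the (ex ante unknown) tear-up price."""
    if tear_up_price is None:
        return settle_price - entry_price
    return min(settle_price, tear_up_price) - entry_price

for s in (400, 2_000, 5_000):
    print(f"settle {s}: full guarantee {long_futures_pnl(s):+.0f}, "
          f"with tear-up at 4,000 {long_futures_pnl(s, tear_up_price=4_000):+.0f}")
```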
The big difference is that ex ante, I do not know the strike price of this call. If I had a choice of executing my buy trade at different exchanges (with different CCPs), I would clearly choose the CCP with the highest expected strike price for the call that it would wrest from me in resolution. That gives me an incentive to choose the CCP that risk manages this contract well – high margins, aggressive intra-day margin calls, and intense scrutiny of concentrated positions. Volumes in each asset class would drift to the exchange or CCP that imposes strict risk management in that asset class. Instead of a race to the bottom, there would be a race to the top. Exchanges and CCPs would try to compete on the basis of the most exacting margin requirements. Healthy competition among CCPs would be possible.
Absent any segregation of business segments, a large CCP which clears many different products has a huge incumbency advantage. It can enter a new product segment with low margins and grab market share. People would still trade there relying on the total resources of the CCP (across all segments) even if they know that on a standalone basis, this segment of the CCP is not a reliable guarantor of trades because of the inadequate margins. In effect, the established segments of the CCP would subsidize the new segment and allow it to drive new entrants out of business. The threat of selective tear-up by a resolution authority has the potential to limit such cross subsidies and make the market for CCP services more contestable and competitive.
Incidentally, the use of haircuts to provide partial insulation of different segments of a CCP from losses in other segments is nothing new. For example, LCH.Clearnet runs a Swap Clear service for Interest Rate Swaps which is structured in such a manner that other segments of LCH are partially insulated from losses in this segment. LCH.Clearnet default fund rules (especially SwapClear Default Fund Supplement rules S8-S11) provide for haircuts if the resources available in the Swap Clear segment are inadequate to meet the obligation of the CCP. My memory is that when the Swap Clear service was first started, the old members of LCH were worried about the potential large losses in this segment being allocated to them, and this separation of segments was worked out to allay their concerns.
The advantage of building selective tear-up into the resolution process is that this allows a carve-out of segments to happen ex post after life threatening losses have materialized. This makes a resolution (without bailout) of a CCP more credible, palatable and feasible. While the large global CCPs came out of the 2008 crisis unscathed, I fear that the next crisis will not be so kind to them. I consider it highly likely that within the next decade a prominent CCP in a G7 country would need to be resolved.
Posted at 3:38 pm IST on Tue, 4 Sep 2012 permanent link
Categories: bankruptcy, exchanges
Structured by cows or by foxes?
An Instant Message Dialog in which a rating agency employee claimed that “it could be structured by cows and we would rate it” has been repeatedly quoted as evidence of the failures of the rating agencies in rating complex structured products in the build up to the global financial crisis. It also finds mention in a ruling earlier this month by the New York District Court allowing a case against a rating agency to go to trial (h/t FT Alphaville).
I find this puzzling because the least dangerous structured products to rate are those designed by incompetent simpletons. These would more likely correspond to the random samples to which statistical modelling is easiest to apply. The hardest instruments to rate are those put together by cunning foxes rather than by dumb cows. The cunning foxes are likely to design instruments with an intent to game the rating agency models. Model errors that may be harmless in the context of randomly designed pools could be disastrous when the pool is designed to include the worst securities that could scrape through the rating agency’s models.
That the rating agency employees did not realize this is strange. However, the fact that after years of being exposed to Abacus and Magnetar, many commentators do not seem to realize this is even more puzzling.
Posted at 7:57 pm IST on Sun, 26 Aug 2012 permanent link
Categories: behavioural finance, bond markets
Corporate Hedging and Distorted Benchmarks
I wrote the following short piece on the subject of “Corporate Hedging and Distorted Benchmarks” for the magazine CFO Connect.
Background
The ongoing investigations into the manipulation of Libor fixing have highlighted the possibility that important benchmarks underlying corporate hedges may be manipulated by large players with the possible tacit acquiescence of the global regulators. Similarly, recent developments in key crude oil benchmarks (Brent and WTI) have demonstrated that these benchmarks can be distorted by factors that could not have been anticipated a few years ago. How does this affect corporate risk management? What can corporate risk managers do to make their hedging programmes more resilient?
There are some corporate hedges that are completely unaffected by the distortion or manipulation of benchmarks. This comfortable situation arises when the hedge is designed in such a way that the benchmark in question is completely eliminated from the all-in hedged cost. For example, consider the following:
- A company borrows under a floating rate instrument where it is required to pay Libor + 3%
- It enters into an interest rate swap under which it pays 5% fixed and receives Libor.
- Clearly, the hedged cost is Libor + 3% + 5% - Libor = 8%. Whatever happens to Libor, the company’s borrowing cost is guaranteed to be 8% fixed and the company does not care whether Libor is manipulated upward or downward.
The crucial feature of the above example is that the hedge has no “basis risk” at all because the hedging instrument exactly matches the risk exposure and the risk is neatly cancelled out (Libor - Libor = 0). Not all real life hedges are so neat and simple – “basis risk” is quite common.
Consider another example which illustrates the problem:
- A company (for example, an oil refinery) is exposed to crude oil price risk.
- It hedges this risk using Brent crude futures.
- In reality, it sources its crude from say Saudi Arabia and not from the Brent oil fields.
- Now the crude oil price does not cancel out completely. Instead, there is a “basis risk” where: basis = Saudi crude price - Brent crude price.
- The company might believe that the “basis risk” is negligible or at least much lower than the original crude oil price risk. While crude prices can move from $50 to $150 within a few months, the price basis between Saudi crude and Brent crude might be expected to change by only $3 or $5.
- Now the company has to worry about whether somebody is manipulating the Brent price or whether other distortions are creeping in which were not anticipated when the hedge was set up.
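The arithmetic of this example can be sketched in a few lines of Python (all the prices below are made up). The hedged cost collapses to the initial Brent futures price plus the Saudi-Brent basis at maturity, so the only thing the refinery is really exposed to is the basis:

```python
def hedged_cost(saudi_spot_T, brent_fut_0, brent_fut_T):
    """Cost of crude for a refinery that buys Saudi crude in the spot market
    and holds a long Brent futures hedge entered at brent_fut_0.

    hedged cost = saudi_spot_T - (brent_fut_T - brent_fut_0)
                = brent_fut_0 + basis_T, where basis_T = saudi_spot_T - brent_fut_T
    """
    return saudi_spot_T - (brent_fut_T - brent_fut_0)

brent_0 = 100.0
# Crude rallies by $50 but the basis stays near -$3: the hedge works well.
print(hedged_cost(saudi_spot_T=147.0, brent_fut_0=brent_0, brent_fut_T=150.0))
# The same rally, but a distorted benchmark blows the basis out to +$10.
print(hedged_cost(saudi_spot_T=160.0, brent_fut_0=brent_0, brent_fut_T=150.0))
```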
Libor manipulation
Libor fixing methodology
Libor stands for the London Interbank Offered Rate – the rate at which the large banks are able to borrow on an unsecured basis for short maturities. The official definition is that Libor is: “The rate at which an individual contributor panel bank could borrow funds, were it to do so by asking for and then accepting interbank offers in reasonable market size, just prior to 11.00am London time.”
There are two problems with this benchmark. First, as the definition makes very clear, Libor is not the rate at which a bank has actually borrowed – it is the rate at which it could borrow funds. Second, unlike say a stock market, where all trades take place in public and transaction prices are known to all, the interbank borrowing is a bilateral market about which there is very little transparency. Therefore, if a bank says that it could borrow at 2.45%, it is not easy for anybody to verify whether this is a reasonable estimate at all.
The Libor computation tries to ameliorate this problem by polling several banks, dropping the bottom 25% and the top 25% of all quotes and averaging the central 50%. If one rogue bank submits an unreasonable quote, it is likely to fall in the top or bottom 25% which are dropped and therefore the final average may not be contaminated by this rogue quote.
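The trimming is easy to illustrate with a stylized sixteen-bank panel (the quotes below are invented, and the real fixing has additional rules about panel composition and rounding). A single rogue quote is simply discarded, but a handful of colluding banks that shade their quotes just enough to stay inside the middle half can still move the average:

```python
def trimmed_mean_fixing(quotes):
    """Drop the top and bottom quartiles of the submitted quotes and
    average the middle half -- a simplified version of the Libor trimming."""
    q = sorted(quotes)
    k = len(q) // 4
    middle = q[k:len(q) - k]
    return sum(middle) / len(middle)

honest = [2.40, 2.42, 2.43, 2.44, 2.45, 2.45, 2.46, 2.46,
          2.47, 2.48, 2.48, 2.49, 2.50, 2.51, 2.52, 2.55]

# One wildly high quote simply gets trimmed away ...
one_rogue = honest[:-1] + [3.50]
# ... but four colluders replacing their low quotes with quotes that land
# inside the middle half drag the fixing upwards.
collusion = honest[4:] + [2.50, 2.50, 2.51, 2.51]

print(trimmed_mean_fixing(honest), trimmed_mean_fixing(one_rogue),
      trimmed_mean_fixing(collusion))
```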
The Libor computation has one final problem which people did not worry about in the good old days, but which is probably very important in retrospect. The entire Libor computation methodology and process are managed by the British Bankers’ Association and not by an official body independent of the banks being polled.
Pre-crisis manipulation
Evidence that has become available in recent weeks indicates that prior to the global financial crisis, some banks were trying to manipulate Libor to suit their own exposures. Banks are large players in interest rate swaps and other derivative markets based on Libor. The British Bankers’ Association was of course aware of this possibility and stipulated that “The rates must be submitted by members of staff at a bank with primary responsibility for management of a bank’s cash, rather than a bank’s derivative book.”
When regulators in the US and the UK examined all the internal emails as part of their investigation, they found that the derivative traders were routinely requesting the “submitters” to submit false quotes designed to suit the positions of those traders. The submitters were routinely complying with these requests.
In a few cases, requests were coming from traders at other banks and these were also being accommodated. Such collusion between banks would of course imply that the simple expedient of dropping the top and bottom quartiles of quotes would no longer be sufficient to prevent the average itself from being manipulated.
Manipulation during the crisis
The other evidence that has become available is about the period during the global financial crisis, when people were scared about the solvency of banks and were unwilling to lend to weak banks. During this period, it appears that most banks were systematically under-reporting the rates at which they could borrow.
Comparison with the Credit Default Swap (CDS) market which measures the credit worthiness of banks suggests that Libor might have been understated by several percentage points.
It also appears that the Bank of England and the Federal Reserve Board in the US were aware of this and it is suggested that they tacitly approved of this practice. Regulators apparently feared that if the true borrowing cost of banks were widely known, that could add to the panic in the markets.
Example of Failed Libor Hedges
The case of US municipalities provides a very interesting example of how hedges based on Libor can go very badly wrong when the underlying benchmark is distorted or manipulated.
US municipalities traditionally borrowed using auction rate securities. The interest rate was a floating rate, but instead of being set as a spread over Libor, it was determined by periodic auctions. Historically, these auction rates (adjusted for the tax free status of municipal bonds) tended to be very close to Libor. It was common for them to hedge their interest rate risk by using interest rate swaps based on Libor. In normal times, this hedge worked quite well.
During the crisis however, the municipalities faced very steep borrowing rates in their auction rate securities (some auctions actually failed). This was partly driven by the general lack of liquidity in the market and partly by perceived risk of insolvency of some municipalities. Of course, the banks also faced similar perceived risk of insolvency, but because of the manipulation, the reported Libor did not reflect the same degree of stress.
As a result, the floating rate (Libor) that the municipalities received on their swaps was much lower than the floating rate (auction rates) that they paid on their borrowings. The actual borrowing cost turned out to be far in excess of the fixed rate that they thought they had locked in through the hedges.
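A stylized calculation (with invented rates and a hypothetical 4.5% swap rate) shows how the supposedly locked-in cost comes apart when the two floating legs diverge:

```python
def muni_net_cost(auction_rate, libor, swap_fixed=0.045):
    """Net borrowing cost for a municipality that pays the auction rate on its
    bonds and has swapped to fixed: it receives Libor and pays swap_fixed.

    net cost = auction_rate - libor + swap_fixed
    The hedge is clean only so long as the auction rate tracks Libor."""
    return auction_rate - libor + swap_fixed

# Normal times: the auction rate tracks Libor and the cost stays near 4.5%.
print(f"{muni_net_cost(auction_rate=0.051, libor=0.050):.2%}")
# Crisis: auctions clear at 8% while the reported Libor is held down at 2%.
print(f"{muni_net_cost(auction_rate=0.080, libor=0.020):.2%}")
```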
This is a dramatic example of how a “basis risk” which was regarded as modest and manageable during normal times can become life threatening when the underlying benchmarks are distorted.
Crude oil benchmark distortion
Almost all crude oil trading in the world (both the physical market and the derivative markets) is based on a handful of benchmarks of which the two most prominent are Brent and WTI (West Texas Intermediate). As is to be expected, WTI used to be the dominant benchmark in the US while Brent was the benchmark of choice elsewhere in the world. Based on the quality of the crude (for example, the sulphur content), the crude from a specific oilfield might trade at a premium or discount to Brent or WTI. A long term supply contract between an oil producer and an importer might therefore specify the price as simply Brent + $3 or Brent – $1.
In recent years, the emergence of shale oil has drastically altered the supply-demand balance within the US. The Cushing region on which the WTI benchmark is based has become an oil surplus region and WTI prices have become artificially depressed. This situation is expected to be resolved as new pipelines are built and existing pipelines are reversed to run in the opposite direction. In the meantime, retail gasoline prices in the US appear to have completely decoupled from WTI prices and seem to be much more closely aligned to Brent prices.
At the same time, Brent has been affected by declining production in the North Sea oilfields on which this benchmark is based. The short supply of Brent has led to a rise in prices to the extent that Brent now trades at a premium to WTI though historically, WTI was more expensive because of its superior quality.
A lot of oil price hedgers have struggled to cope with the unexpected blow-out of “basis risk” due to the historically unprecedented distortion of the two principal benchmarks. To make matters worse, the methodology underlying crude oil benchmarks suffers from the same infirmities as Libor, possibly on a larger scale. The markets depend on prices reported by private agencies like Platts which are completely unregulated.
Broader lessons for corporate hedging
Some of us are fond of joking that much of what passes for hedging is actually speculating on the “basis”. Like all good jokes, this joke too has some grain of truth in it. For example, US municipalities were to some extent taking a speculative position that their borrowing cost would not materially exceed Libor on a tax adjusted basis. Their only real cause for complaint is that Libor was manipulated and did not reflect free market outcomes.
In reality, however, “basis risk” is impossible to eliminate completely. Liquid derivative markets must perforce be based on liquid benchmarks, and a specific company’s costs are unlikely to exactly mirror these benchmarks. Moreover, a hedge with significant “basis risk” is likely to be much less risky than a completely unhedged position. Thus even an imperfect hedge is risk reducing and cannot fairly be described as speculative risk taking.
The exception is when hedging is used to justify high levels of leverage. Many of the problems that banks have faced during and after the crisis were due to this. A mistaken belief that “basis risk” is negligible leads to the assumption that the hedged position is practically risk free and can be supported by astronomical levels of leverage. Even modest movements in the “basis” can then wipe out the capital and expose the bank to risk of insolvency.
Outside of finance, some manufacturing companies might be making similar mistakes. By underestimating the basis risk, they may be emboldened to adopt risky financial and operating policies that they might not have chosen if they were fully aware of the “basis risk”. These are the companies that can be truly described as speculating on the “basis”.
Posted at 8:12 pm IST on Sun, 19 Aug 2012 permanent link
Categories: benchmarks, derivatives, risk management
Anchoring bias as a regulatory tool
The anchoring bias is a well known phenomenon in behavioural finance. As Tversky and Kahneman described it long ago (Amos Tversky and Daniel Kahneman (1974), “Judgment under Uncertainty: Heuristics and Biases”, Science, New Series, 185(4157), pp. 1124-1131):
In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. ... adjustments are typically insufficient. That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.
Milind Kulkarni from FinIQ, a leading structured products solution provider, gave me some information on an interesting regulatory measure by the Central Bank of Taiwan that exploits this behavioural bias to protect retail investors. Though he could not find an official English language text of the regulation, his colleague was able to provide a translation of the Chinese text:
When a bank buys an option from the client (to create yield enhancement) which is collateralized by the client’s deposit which happens to be the call currency with matching notional amount, in the event of the option exercise by the bank the client’s deposit (in call currency) will be retained (bought) by the bank and the alternate (put) currency will be repaid to the client at the strike rate, leading to potential capital loss to the client, such loss should not be more than 30% of the capital at any cost.
This means that the bank must sell a 70% out-of-the-money call option back to the client to create an airbag-type protection against extreme capital loss. In short, the client sells a regular near-ATM call option to the bank and buys back a deep OTM call option from the bank.
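A stylized payoff sketch (the strikes and spot rates are hypothetical, and the loss is measured in the put currency as a fraction of the near-ATM strike) shows how the deep OTM call caps the loss at 30%:

```python
def capital_loss(spot_T, strike, cap_strike=None):
    """Client's loss per unit of deposit, measured in the put currency and
    expressed as a fraction of the near-ATM strike.

    The client is short a call struck at `strike`; with the airbag the client
    is also long a deep OTM call struck at cap_strike, which caps the loss
    at (cap_strike - strike) / strike."""
    loss = max(0.0, spot_T - strike)
    if cap_strike is not None:
        loss -= max(0.0, spot_T - cap_strike)
    return loss / strike

K = 30.0          # hypothetical near-ATM strike (put currency per unit)
cap = 1.30 * K    # airbag strike chosen so that the maximum loss is 30%
for s in (29.0, 33.0, 45.0, 90.0):
    print(f"spot {s}: no airbag {capital_loss(s, K):.0%}, "
          f"with airbag {capital_loss(s, K, cap):.0%}")
```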
The interesting part of this regulation is that it does not rule out short term toxic products in which the retail investor’s annualized rate of return is hugely negative. If you lose 30% every month, you can lose practically everything pretty quickly. Even products that have a maturity of several months or even a year do not need to produce potential losses of 30% to create meaningful yield enhancement. On the other end of the scale, some of the most toxic products do not produce capital losses at all. Consider for example, some of the highly toxic principal protected Power Reverse Dual Currency (PRDC) notes that suddenly became 30 year near-zero coupon bonds when the yen moved sharply during the global financial crisis. The present value loss can be huge even if the principal is fully protected.
The practical effect of the Taiwanese regulation is not therefore so much economic as behavioural. When the product is structured with a 30% airbag, the structure draws the investor’s attention to the potential loss of 30%. Of course, investors know that the 30% loss is extremely unlikely, but 30% is now the anchor from which an adjustment is made to estimate the likely loss. This probably leads to an overestimate of the true loss. In the absence of the airbag, the bank probably tries to deflect the investor’s attention away from possible losses. The smart investor would of course take the bank’s sales pitch with a pinch of salt. But now the anchor is zero loss and insufficient adjustment from this anchor leads to an underestimate of potential losses.
Posted at 9:22 pm IST on Wed, 15 Aug 2012 permanent link
Categories: behavioural finance, derivatives
Minimum balance at risk for all safe assets
Last month, the Federal Reserve Bank of New York published a staff report with a very interesting proposal to reduce the systemic risk of runs on money market mutual funds (Patrick E. McCabe, Marco Cipriani, Michael Holscher and Antoine Martin, “The Minimum Balance at Risk: A Proposal to Mitigate the Systemic Risks Posed by Money Market Funds”, Federal Reserve Bank of New York, Staff Report No. 564, July 2012).
I found the proposal very innovative and my only quibble with the proposal is that I see no need at all to limit the idea to just money market mutual funds. I think that the same idea can be applied to bank deposits, liquid mutual funds and many other pools that offer high levels of liquidity.
The proposal is that when an investor redeems his or her investment, a small percentage (say 3-5%) of the investment is held back for a short period (say 30 days). If losses are detected at the fund during this period, the balance held back from the redeeming investor is available to absorb the losses. McCabe and his co-authors show that it is possible to design the loss allocation mechanism in such a way that runs on the fund are discouraged without eliminating market discipline. A fund that pursues risky investment strategies would see redemptions from rational investors who anticipate losses in the long term (beyond 30 days). But investors who did not redeem before the losses are revealed do not gain anything by redeeming at the last minute. This eliminates panic runs and allows orderly liquidation.
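A highly stylized sketch of the mechanism (the 5% holdback and the loss numbers are assumptions, and the paper’s actual loss-allocation waterfall is considerably more elaborate):

```python
def redemption_with_mbr(amount, holdback=0.05, fund_loss_rate=0.0):
    """Stylized minimum-balance-at-risk redemption.

    The redeeming investor receives (1 - holdback) of the amount immediately;
    the holdback stays in the fund for 30 days, subordinated so that it absorbs
    losses revealed in that window before non-redeeming investors do."""
    paid_now = amount * (1 - holdback)
    held_back = amount * holdback
    loss_absorbed = min(held_back, amount * fund_loss_rate)
    return paid_now + (held_back - loss_absorbed)

# No losses surface within 30 days: the investor eventually gets 100%.
print(f"{redemption_with_mbr(100_000):,.0f}")
# A 2% loss surfaces within 30 days: the recent redeemer bears it out of the
# holdback, so running for the exit just before the loss gains nothing.
print(f"{redemption_with_mbr(100_000, fund_loss_rate=0.02):,.0f}")
```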
I think this idea could be extended to bank deposits and many other savings vehicles. All “safe assets” or “informationally insensitive assets” to use Gorton’s phrase could be subject to this rule to prevent disorderly runs without requiring taxpayer bailouts.
The authors themselves suggest that small balances could be exempted from some of the subordination requirements and clearly insured deposits do not need to be subject to the minimum balance at risk requirement. The major impact of the proposal would be on large investors, and I do not believe that large investors have any god given right to safe and liquid assets. In fact, society can make such assets available to them only by imposing losses on the taxpayer.
Pozsar and Singh have pointed out that:
Asset managers do not just invest long-term, but also have a large demand for money (or more precisely, money-market instruments). ... The money demand aspect of the asset management complex ... involves massive volumes of reverse maturity transformation, whereby significant portions of long-term savings are transformed into short-term savings. It is due to portfolio allocation decisions, the peculiarities of modern portfolio management and the routine lending of securities for use as collateral. This reverse maturity transformation occurs in spite of the long-term investment horizon of the households whose funds are being managed. This reverse maturity transformation is the dominant source of marginal demand for money-type instruments in the financial system.
If the minimum balance at risk leads to a re-engineering of the asset management industry to reduce the demand for safe and liquid assets, I think that would be a good thing.
Posted at 5:01 pm IST on Fri, 10 Aug 2012 permanent link
Categories: mutual funds, regulation
Statistics for finance in a post crisis world
I made a presentation on “Statistics for finance in a post crisis world” at the Sixth Statistics Day Conference organized by the Reserve Bank of India on July 17, 2012. Bullet points from my slides are given below.
Big Data
Example: US Flash Crash
- The Flash Crash refers to unusual price movements in the US stock market on May 6, 2010.
- Market index dropped by over 5% in the space of less than five minutes only to bounce back in the next five minutes.
- The crash was even worse in individual stocks. For example, Accenture fell from $30 to $0.01 in the space of seven seconds and then snapped back to the old level within two minutes.
- The joint study by the US SEC and the CFTC collected hundreds of millions of records comprising an estimated five to ten terabytes of information.
- The study however ended up aggregating the data into one minute intervals (and sometimes fifteen minute intervals) when the interesting events were happening at millisecond time frames.
- Big data became ordinary data by discarding data!
- And then analysis of the Flash Crash could use traditional methods.
Big Data in Finance
- Financial markets are the major source of big data. The US data feed company, Nanex, provides quote by quote data for US financial markets:
- 4.5 million quotes per second
- 8 billion quotes per trading day
- Trade repositories for OTC derivative markets will also create large databases.
- In traditional banking, massive amounts of data are available in core banking systems and are potentially available to regulators, who by and large are not equipped to use them.
- Some experts are recommending that financial entities must provide disclosures with “gigabyte richness” (Hu, Henry T.C., “Too Complex to Depict? Innovation, ‘Pure Information,’ and the SEC Disclosure Paradigm”, Texas Law Review, Vol. 90, No. 7, 2012)
Hidden Data
Shadow Banking and Hidden Credit
- Shadow banking has been estimated to be very large in the developed world – comparable in size to regular banking.
- These forms of shadow banking are not very prevalent in emerging markets like India, but there are other forms of credit that are beneath the radar of official statistics.
- During 2008, there were concerns in India about:
- Suppliers’ credit and buyers’ credit (hidden short term foreign currency corporate debt).
- Pledge of shares by promoters (hidden credit to the corporate sector/promoters)
- Today gold loans may be becoming a large hidden source of credit to the household sector
- Partly unsecured personal loans (inadequate or overvalued or undermargined gold collateral).
- Some recent estimates suggest that the Chinese corporate sector has hidden dollar debt of $800 billion!
Hidden credit: the data challenge
- Data is hidden because somebody wants to hide it.
- Often credit hides in the shadows to evade regulation.
- Regulating one part of shadow banking only drives the activity somewhere else – even more underground.
- How can regulators collect data unobtrusively without driving the activity underground?
- Collect data through some industry association?
- Collect data from other elements of the chain?
- Use random sampling with anonymized data collection?
- Use econometric models for estimation?
Hidden Debt
- During a crisis, policy makers try to protect the banks and strategic non-bank entities while letting other businesses fend for themselves.
- Examples of this during 2008 include Korea and Russia.
- From this point of view, it is important to monitor the liabilities of the protected core – on balance sheet and off balance sheet, explicit and implicit.
- This is not easy. For example, consider the SIVs and ABCP conduits of banks in the US during the crisis.
- Indian banks have large operations outside India:
- Large derivative books
- Maturity mismatches and liquidity risk
- Credit-linked notes referencing Indian corporates
- Indian corporate sector has increasingly opaque off balance sheet structures outside India.
Hidden Risks
- During periods of stress, hidden risks constitute a vulnerability that can lead to unanticipated defaults that put strain on the banking system.
- This vulnerability can also put constraints on policy makers:
- Policy makers may be reluctant to crystallize losses for entities that are economically important.
- Vulnerable entities may be politically powerful and may be able to successfully lobby against economic policy measures that would crystallize the hidden risk.
- For example, foreign exchange risk in the corporate sector may force policy makers into a suboptimal defence of the currency.
Hidden Foreign Exchange Risks
- Best known source of foreign exchange risk is unhedged foreign borrowing by the corporate sector.
- But mishedged currency exposures are as important as unhedged exposures:
- The exotic currency derivatives disaster of 2008.
- Similar problem in Korea at the same time was euphemistically described as “overhedging”
- Currency risks are also embedded in commodity prices:
- During the last year or so, global gold prices have fallen while domestic prices have remained stable or even risen. Leveraged gold buyers have accumulated a large currency exposure.
Hidden Interest Rate Risk: Household Sector
- The Indian household sector today has a large interest rate exposure through floating rate home loans.
- At some point, this could become a constraint on monetary policy.
- There is a need to quantify the impact of interest rate reset on the distribution of household debt service ratios.
- Extreme example of this problem was the ERM crisis of 1992. Swedish interest rate defence of the krona had such a big impact on mortgage payments that the Prime Minister called an “all party” meeting to discuss monetary policy!
Broken Data
Only Traded Prices are Real
- Consider two different kinds of “prices”
- Prices at which actual trades have happened (for example, the weighted average call rate)
- Price at which people claim that they can trade or could have traded (LIBOR or MIBOR)
- During the last few decades, very large credit and derivative markets have come to rely on the second kind of price which we now know is more fiction than reality.
- This problem is present in many other markets as well. For example, practically all crude oil transactions (both physical and derivative) rely on polled prices reported by agencies like Platts.
- It is necessary to wean these markets away from polled prices and use actual traded prices wherever possible.
Enronic Accounting
- A decade after the Enron collapse, Enronic accounting is alive and well and not just in the private sector.
- Some of the sovereign debt problem in Europe is due to falsified public debt data.
- Much of the data in financial institution balance sheets is widely believed to be unreliable:
- Enronic off-balance sheet liabilities like the SIVs and ABCPs
- Level 3 (or “marked to myth”) assets
- Banking book assets which are badly impaired but are carried at book value.
Forensic statistics
- Statisticians need to approach certain kinds of financial data with deep distrust.
- This requires a change of mindset.
- Mainstream statistics discards outliers and works with the remaining nice data.
- Sometimes the outliers are the data and the rest is just noise.
- Nice data is often the result of fraudulent smoothing.
- Using the right statistical tools, we may be able to ferret out the fraud.
- Statistical analysis has an important role in design of markets and benchmarks.
Broken Models
The Gaussian Distribution
- The Gaussian distribution is to be found everywhere in nature, but it is rarely found in finance.
- Most financial asset prices are closer to Student-t with 4-10 degrees of freedom.
- The really bad distributions (for example, the distribution of default losses in a large credit portfolio) have almost pathological levels of skewness and kurtosis.
- The multivariate Gaussian is even rarer as relationships between variables usually exhibit strong tail dependence.
- Non Gaussian copulas are therefore needed in addition to non Gaussian univariate distributions.
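A quick check with scipy (the Student-t is scaled to unit variance so that the comparison with the standard normal is like for like) shows how much likelier large moves are under a t distribution with 4 degrees of freedom:

```python
from math import sqrt
from scipy.stats import norm, t

nu = 4
# Var(T_nu) = nu / (nu - 2), so X = T * sqrt((nu - 2) / nu) has unit variance.
scale = sqrt((nu - 2) / nu)

for k in (3, 5, 8):
    p_gauss = norm.cdf(-k)
    p_t = t.cdf(-k / scale, df=nu)
    print(f"P(loss worse than {k} sigma): normal {p_gauss:.1e}, "
          f"Student-t({nu}) {p_t:.1e}")
```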
Non Gaussian Copulas and Marginals
- The marginal distribution determines the thickness of the tail
- The copula determines the tail dependence – whether large losses on one asset are accompanied by large losses on other assets
- Gaussian marginal with Gaussian copula: thin tails, low tail dependence – low risk
- Gaussian marginal with Student-t copula: thin tails, high tail dependence – high risk
- Student-t marginal with Gaussian copula: fat tails, low tail dependence – high risk
- Student-t marginal with Student-t copula: fat tails, high tail dependence – very high risk
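A short simulation (a sketch using numpy and scipy, with an assumed correlation of 0.5 and 4 degrees of freedom) makes the tail dependence comparison concrete: both copulas below share the same correlation matrix, but the Student-t copula produces joint 1% tail events far more often than the Gaussian copula:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(42)
n, rho, nu = 1_000_000, 0.5, 4
cov = [[1.0, rho], [rho, 1.0]]

# Correlated standard normals -> Gaussian copula uniforms.
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u_gauss = norm.cdf(z)

# Dividing by sqrt(chi2_nu / nu) gives a bivariate t -> Student-t copula uniforms.
w = rng.chisquare(nu, size=n) / nu
u_t = t.cdf(z / np.sqrt(w)[:, None], df=nu)

# How often do BOTH assets land in their worst 1% at the same time?
q = 0.01
for name, u in [("Gaussian copula", u_gauss), ("Student-t copula", u_t)]:
    joint = np.mean((u[:, 0] < q) & (u[:, 1] < q))
    print(f"{name}: joint 1% tail probability {joint:.5f} "
          f"(independence would give {q * q:.5f})")
```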
Gaussian Copulas and CDOs
- CDO pricing before the crisis and even today is based on the Gaussian copula
- This model required unrealistically high correlations to fit the observed prices of the senior and super-senior tranches of a CDO (the correlation skew).
- Even that was sometimes insufficient. There were situations in which no correlations could be found to match observed market prices.
- The paper by Donald MacKenzie and Taylor Spears (“‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling”, June 2012) describes how banks ended up using the Gaussian copula to price CDOs even though the quants themselves were unhappy with the model.
The Way Forward
Simpler finance, maybe. Complex statistics, surely.
- The global financial crisis has led to calls for simplifying finance. This has not happened so far, and it is doubtful whether it will happen anytime soon.
- What is clear is that financial statistics will become a lot more complex:
- Terabyte and potentially petabyte data
- Indirect statistical estimates of the shadow financial sector
- Forensic statistical analysis
- Pervasive use of non Gaussian distributions and copulas
Posted at 9:53 pm IST on Thu, 19 Jul 2012 permanent link
Categories: post crisis finance, statistics
What is a price?
As I keep thinking about Libor fixing (see my post last week on this), I have realized that the word price is used in many ways to mean many things, not all of which deserve to be called a price:
- An actual traded price
- This is the simplest and perhaps most unambiguous definition of a price. The only problem with this notion is due to illiquidity. If the asset is highly illiquid, there may be no recent traded price. More commonly and more importantly, the traded price is subject to the bid-ask bounce – a trade initiated by the seller executes at the bid price while a buyer initiated trade executes at the ask price. If the stock is traded frequently enough and the bid-ask spread is small in relation to the desired level of accuracy, the traded price is a clean and transparent definition of price.
- The mid price
- Even if the stock is modestly illiquid, there is often an ask price and a bid price in the order book and the average of these is a reasonable approximation to the true price. It is probably better however to use the entire bid-ask interval instead of just the mid price to communicate the range of uncertainty about the true price. Moreover, the bid and ask are valid only for small transaction sizes. It may be better to use the full information in the order book to do an impact cost calculation and present the bid and ask for a more reasonable order size.
- An average of traded prices
- Quite often closing prices on an exchange are determined as averages of prices during the last few minutes of trading – though in some cases, “few” gets stretched to quite a long period. This averages out the bid-ask bounce and is a tolerable approximation if the volatility of the “true” price during the averaging period is small in relation to the impact cost of a reasonable trade. Sometimes, the averaging is designed to deal with attempts to manipulate the closing price and then it may be reasonable in the above comparison to use the impact cost of the expected trade size of a potential market manipulator which may be significantly larger than typical trade sizes. An alternative to averaging is to use a call auction to determine the closing price.
- Polled or indicative prices
- Libor and the well known US Constant Maturity Treasury (CMT) fall in this category. The attempt here is to average over market participants’ quotes about what they believe is the true price. The difference between polled prices and traded prices is like the difference between an opinion poll and an actual election. I think it is a mistake to base large derivative markets on “opinion polls”.
- Model prices
- In the absence of traded prices, it is common to use a pricing model to estimate prices. Of course, there are several shades of grey here: accountants talk about Level One, Level Two and Level Three assets to capture some of the greyness. Outside of finance, hedonic estimates of the price of real goods are also model prices. For an even more extreme case, one could consider a surveyor’s real estate valuation opinion as a model price where the model is less precisely articulated. At the opposite end in terms of formalization of models, the equilibrium prices derived out of general equilibrium models are also model prices with the added twist that in many of these models, the no trade theorem is actually in force and the model price is an estimate of the price at which nobody wishes to trade. My own view on this is that model prices are valuation opinions and not prices.
Where does that leave us? I think that for liquid assets, actual traded prices (perhaps determined by a call auction) are the best way to define the price. For illiquid assets, it is best to recognize that there is no unique price and to use a price interval as the best way to communicate the range of uncertainty involved. I do not understand why physicists are quite happy to say that the gravitational constant in appropriate SI units is 6.67384 ± 0.00080, but in finance and economics we are unwilling to say that the price of an asset is 103.23 ± 0.65.
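To illustrate the interval idea, here is a small sketch with a made-up order book: the effective bid and ask for a reasonable order size, read off by walking the book, convey far more than a single mid price does:

```python
# Hypothetical order book: (price, quantity) sorted best-first.
bids = [(103.10, 500), (103.05, 800), (102.95, 2000)]
asks = [(103.30, 400), (103.40, 900), (103.60, 2500)]

def average_fill_price(levels, size):
    """Average price obtained by walking the book to fill `size` units."""
    filled, cost = 0, 0.0
    for price, qty in levels:
        take = min(qty, size - filled)
        cost += take * price
        filled += take
        if filled == size:
            return cost / size
    return None  # the order exceeds the displayed depth

size = 1500
bid_for_size = average_fill_price(bids, size)
ask_for_size = average_fill_price(asks, size)
mid = (bids[0][0] + asks[0][0]) / 2
print(f"mid price {mid:.2f}, but for {size} units the market is really "
      f"{bid_for_size:.2f} / {ask_for_size:.2f}")
```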
Posted at 2:27 pm IST on Mon, 9 Jul 2012 permanent link
Categories: benchmarks, manipulation
Libor, the Gaussian Copula and the Sociology of Finance
I have blogged about the sociology of finance several times (for example in 2010, and in 2011). Two pieces that I read (or in one case re-read) recently have reinforced my view that this literature is important for understanding modern finance.
When the penalties imposed on Barclays by the UK FSA and the US CFTC brought Libor back into the limelight, I found myself re-reading MacKenzie’s fascinating description of the Libor fixing (Donald MacKenzie, “What’s in a Number?”, London Review of Books, 30(18), 25 September 2008, pages 11-12) based on his ethnographic study carried out prior to the financial crisis.
None of the finance textbooks describe the actual mechanics of the Libor fixing as well as this piece. Every source on Libor recites the standard definition that Libor is “The rate at which an individual contributor panel bank could borrow funds, were it to do so by asking for and then accepting interbank offers in reasonable market size, just prior to 11.00am London time.” But one has to read MacKenzie to understand how this hypothetical condition (“were it to do so”) is actually operationalized. Similarly, MacKenzie tells us very casually that a mere $50 million or so may fall short of reasonable market size which for the major currencies would be of the order of several hundred millions.
The second paper that I have been reading also co-authored by MacKenzie is weightier and more recent (Donald MacKenzie and Taylor Spears, “‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling”, June 2012). This paper discusses the well known (and by now notorious) Gaussian copula model for pricing CDOs.
The central claim in this paper is that Gaussian copula models were and are crucial to intra- and inter-organizational co-ordination, while simultaneously being ‘othered’ by the modellers themselves. The word ‘other’ might be a simple word, but it has a complex meaning here. What is being argued is that the modellers steeped in the culture of no-arbitrage modelling never ‘naturalized’ the Gaussian copula and did not even regard it as a proper model. The dissonance between actuarial models and no-arbitrage models is also brought out very well. I found myself thinking that the battle between CreditMetrics and CreditRisk+ more than a decade ago was also one between actuarial models and no-arbitrage models.
As an aside, the authors also bring up the issue of counterperformativity (models being invalidated by their widespread adoption): “models used for governance are undermined by being gamed; models used to hedge derivatives are undermined by the effects of that hedging on the market for the underlying asset”. They also speculate on the possibility of ‘deliberate counterperformativity’: “the employment of a model that one knows overestimates the probability of ‘bad’ events, with a view to reducing the likelihood of those events.”
Posted at 12:32 pm IST on Thu, 5 Jul 2012 permanent link
Categories: behavioural finance, derivatives, mathematics, risk management
Questioning the benefits of 1930s US securities reforms
Cheffins, Bank and Wells posted an interesting paper earlier this month on SSRN (“Questioning ‘Law and Finance’: US Stock Market Development, 1930-70”) arguing that the creation of the US SEC and the associated legislation did not energize the development of the US securities markets.
- The number of stockholders flat-lined and maybe even fell between the early 1930s and the early 1950s, despite the US population increasing by over 20%.
- The total equity market capitalization was less than 10% of GDP in the 1940s and 1950s, while it had been much higher in the mid 1930s.
- There was hardly any issuance of stock by new companies (IPOs) in the 1930s, 1940s and 1950s.
After this long period of stagnation and decline, the US stock markets began to recover and grow in the late 1950s and early 1960s. Cheffins et al. argue that the SEC cannot claim any credit for this: Seligman’s influential history describes the SEC during the 1950s as having “reached its nadir” when “its enforcement and policy-making capabilities were less effective than at any other period in its history.” (Seligman, Joel. The Transformation of Wall Street: A History of the Securities and Exchange Commission and Modern Corporate Finance. Boston: Houghton, Mifflin, 1982, page 265).
One counter-argument could be that the decline of the stock market development in the two decades after the formation of the SEC was due to the Great Depression and the World War and not due to the reforms themselves. Unfortunately for this view, the under-regulated “over the counter” (OTC) market grew from 16% to 61% as a percentage of total national stock exchange sales between 1935 and 1961.
Cheffins et al. add to the sceptical literature going back to Stigler and Benston about the contribution of the SEC and the 1930s reforms for the securities markets (Stigler, George J. (1964) “Public Regulation of the Securities Markets”, The Journal of Business, 37(2), 117-142 and Benston, George J. (1973) “Required Disclosure and the Stock Market: An Evaluation of the Securities Exchange Act of 1934”, The American Economic Review, 63(1), 132-155).
One could also argue that broader macroeconomic governance reforms have played a bigger role than micro regulatory reforms. Sylla, for example, gives credit to the Hamiltonian reforms of the 1790s for the remarkable growth of securities markets in the US. Sylla points out that by the early nineteenth century, “the United States led the world in the proportion of financial assets held in the form of corporate stock.” and that “By the third and fourth decades of the nineteenth century, there was probably no place in the world as ‘well banked’ and ‘security marketed’ as the northeastern United States.” (Sylla, Richard (1998) “U.S. Securities Markets and the Banking System, 1790-1840”, Federal Reserve Bank of St. Louis Review, May/June 1998, 83-98).
If I may now indulge in some self promotion, MastersinAccounting.info have put my blog in their list of Top 10 Finance Professor Blogs. Their review says: “... Varma writes a comprehensive series of posts on his subject of choice and does so with real insight and obvious passion for the topic. A professor at the Indian Institute of Management, Varma studies financial markets and the regulation of markets throughout the world and uses his knowledge and experience to give readers a perspective on current issues as well as the history of world markets.”
Posted at 5:42 pm IST on Mon, 18 Jun 2012 permanent link
Categories: law, market efficiency, regulation
Regulation as a response to state failure
Normally, one thinks of regulation as a response to market failure, but Luigi Zingales has a piece (behind a paywall) in the Financial Times earlier this week (“Why I was won over by Glass-Steagall”) in which the principal argument seems to be that a regulatory separation between commercial banking and investment banking is required to deal with a state failure. Zingales argues that:
Under the old regime, commercial banks, investment banks and insurance companies had different agendas, so their lobbying efforts tended to offset one another. But after the restrictions ended, the interests of all the major players were aligned. This gave the industry disproportionate power in shaping the political agenda.
The result according to Zingales was “a demise of public equity markets and an explosion of opaque over-the-counter ones.”:
With the repeal of Glass-Steagall, investment banks exploded in size and so did their market power. As a result, the new financial instruments (such as credit default swaps) developed in an opaque over-the-counter market populated by a few powerful dealers, rather than in a well regulated and transparent public market.
Adam Levitin at Credit Slips makes the same point in greater detail with some good examples:
Glass-Steagal also split the financial services industry politically and enabled the different parts of the industry to be played against each other. Commercial banks, investment banks, and insurance companies fought each other for turf for decades. This mattered in terms of regulation because regulation is a political game.
Because of Glass-Steagal, the financial services industry did not present a monolith in terms of lobbying, and a Congressman could afford to take a stand against one part of the industry because there would be campaign contributions forthcoming from the other parts of the industry. This is how William O. Douglas got the Trust Indenture Act of 1939 passed – he made concessions to the commercial banks in order to get their support for legislation that kept the investment banks out of the indenture trustee business. In the agencies, each part of the industry had its pet group of regulators who would push back against other regulators when they thought that there was an encroachment on their turf, which is the basic nature of deregulation – allowing greater activities than previously allowed. And it even mattered in the courts, as the insurance and investment banking industries financed major litigation challenges to commercial bank deregulation.
... Sarbanes-Oxley passed in part because of a split between the Business Roundtable and the US Chamber of Commerce. And in the financial institutions space, the Durbin Interchange Amendment passed because it posed banks against another heavy duty group, retailers.
One can dispute several elements of this narrative. Zingales’ CDS example is perhaps the easiest to refute. Insurance companies should according to the Zingales-Levitin theory have strenuously argued that CDS is an insurance contract and therefore should be their exclusive preserve. Their lobbying and litigation should have prevented the commercial and investment banks from walking away with the CDS market. Despite the repeal of Glass Steagall, most of the big insurers were independent of the leading players in the CDS market, yet they made no serious attempt to block the growth of CDS prior to the crisis. Even after the crisis, much of the movement for regulating CDS as insurance has come from academics and regulators and not from insurance companies.
Yet, I think the idea that market fragmentation guards against state failure is a very interesting perspective on how one should go about designing a regulatory architecture. After all, the life cycle of financial market bubbles is much shorter than that of political bubbles (to borrow an elegant phrase from the George Soros speech about Europe earlier this month). Market failures can be very ghastly, but perhaps they correct faster than state failures.
Posted at 1:03 pm IST on Wed, 13 Jun 2012 permanent link
Categories: regulation
Elected Regulators
Rose and LeBlanc posted a paper last month (Rose, Amanda M. and LeBlanc, Larry J. , Policing Public Companies: An Empirical Examination of the Enforcement Landscape and the Role Played by State Securities Regulators (May 23, 2012). Available at SSRN: http://ssrn.com/abstract=2065378) showing that in the US elected regulators (typically attorneys general) at the state level were far more aggressive in pursuing securities related enforcement than non elected regulators:
states with elected enforcers brought matters at more than four times the rate of other states, and states with an elected Democrat serving as the securities regulator brought matters at nearly seven times the rate of other states.
Of course, this is completely consistent with the incentive structures facing elected and appointed regulators. Appointed regulators do not gain much from pursuing complex matters; as many of the reports about the SEC failures during recent years have shown, SEC enforcement staff are incentivized to pursue a numbers game – pursuing a large number of easy, low-risk and low-cost cases was the best way to make the internal appraisal reports look good. On the other hand, elected regulators have incentives to pursue high risk, high stakes actions. Success could help the elected regulator move on to a higher political position – Spitzer became Governor of New York after a very controversial stint as attorney general.
The difficulty with this model (as with any other high power incentives) is the possibility of harassment of innocent people to gain political mileage. The solution is obviously an appellate process that limits the ability of the regulator to unilaterally destroy legitimate businesses and people. The US has got this reasonably right, but regulators can still put pressure on regulatees to pay fines and settle cases that lack merit to avoid expensive litigation.
The big issue with the paper is whether the empirical results are driven entirely by New York. Tables T.7 and T.8 (page 23) show that the effect remains very strong even if New York is excluded. But the statistical regressions reported in the paper do not use a New York dummy.
Posted at 7:00 pm IST on Sun, 10 Jun 2012 permanent link
Categories: regulation
Sovereign default and international law
The ongoing sovereign debt crisis in Europe and elsewhere has made it necessary for finance professionals to understand the legal niceties of sovereign defaults. I have been reading Michael Waibel’s book Sovereign Defaults Before International Courts and Tribunals, Cambridge University Press, 2011 which covers sovereign defaults and international law over the last two centuries.
Chapter 4 of the book dealing with “Monetary reform and sovereign default” is particularly interesting in the context of the current difficulties in the euro zone. From my point of view, the problem with this chapter (and the book in general) is that it is too narrowly focused on the law – the book discusses the legal arguments and outcomes of a legal dispute extensively, but it has very little discussion about the underlying financial transaction or the movement of exchange rates. This makes it difficult to understand the economic significance of many of the disputes.
One of the interesting things that I learnt from this book is that defaulting on a debt does not violate any international law at all. Mere non payment of debt when it falls due is only a breach of contract; there is a violation of international law only if the sovereign repudiates the debt. Waibel quotes Feilchenfeld’s very elegant phrasing of this distinction: “... international law will guarantee to the creditor the existence of debt and of a debtor, but not the existence of a good debt or a rich debtor”. (page 299)
The book devotes a whole chapter to the doctrine of “financial necessity” as an excuse for non performance of an obligation. It quotes the judgement of the tribunal in the Russia Indemnity Case (Russia v Turkey) that the state’s “first duty was to itself. Its own preservation was paramount” (page 97). Similarly, another tribunal held that “the duty of a government to ensure the proper functioning of its essential public services outweighs that of paying its debts” (page 98).
As a practical matter, this principle is of help to a sovereign only if its debt is governed by its own ‘municipal law’. (International law uses the term ‘municipal law’ to denote everything except international law – it includes national, provincial and local laws). For example, until the restructuring earlier this year, most of the Greek debt was governed by Greek law; but post restructuring, most of the debt is now governed by English law.
When sovereign debt is governed by foreign law, the sovereign usually is bound by the jurisdiction of a foreign court and the dispute is then resolved according to the ‘municipal law’ of that financial centre – usually London or New York. One of the developments in international law during the last century has been the progressive erosion of sovereign immunity in international law when it comes to sovereign debt. This trend is very nicely discussed by Panizza, Sturzenegger, and Zettelmeyer in a recent paper in the Journal of Economic Literature (“The Economics and Law of Sovereign Debt and Default”, Journal of Economic Literature, 2009, 47:3, 1-47).
In short, the only relevant law appears to be the ‘municipal law’ under which the debt was issued – international law is by and large irrelevant. For any financial institution that has bought a lot of local law sovereign debt of almost any sovereign in the world, this is not very good news.
Posted at 4:02 pm IST on Thu, 7 Jun 2012 permanent link
Categories: credit rating, law, sovereign risk
Does finance need 128 bit integers?
A blog post yesterday at Marginal Revolution raised the interesting question of whether stock prices should be quoted in increments of one-hundredth of a cent instead of one cent. That is a very complex question in market micro structure that I do not wish to get into now. But the comment thread on this blog post got into a much more interesting question of how to represent money in a computer.
It is well known that using a floating point number to represent money is a very bad idea – every computer scientist knows that we must use integers for this purpose even if fractions are involved. If we do not have to deal with fractions of a cent, then internally everything should be stored in cents so that a million dollars is represented by 100 million which is well within the range of a 32 bit integer (for example, the long int in ISO C++). If fractions of a cent (say milli-cents) are possible, then everything should be stored in milli-cents and a million dollars is represented by 100 billion which is beyond the range of a 32 bit integer but is well within the range of the more modern 64 bit integer (for example, the long integer in Java).
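The pitfall is easy to demonstrate. A minimal Python illustration of why fractional amounts should be carried as integer cents rather than as floating point dollars:

```python
# Floating point: three ten-cent payments do not sum to exactly thirty cents.
payments = [0.10, 0.10, 0.10]
print(sum(payments) == 0.30)         # False
print(repr(sum(payments)))           # 0.30000000000000004

# Keeping the amounts in integer cents gives exact arithmetic.
payments_in_cents = [10, 10, 10]
print(sum(payments_in_cents) == 30)  # True
```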
Dick King commenting at the Marginal Revolution blog post points out that if fractions of one-tenth of a micro-cent are permitted, then a 64 bit integer would allow us to represent up to around a trillion dollars. As he says: “If AAPL ever reaches a market cap of $1 trillion and you decide you want to buy it all you will not be able to place an ordinary order on an ordinary exchange ... sorry about that.” If you really want to be precise about these things, the 64 bit integer takes us only up to a little over $922 billion if you want negative numbers also; if only unsigned quantities are required, then it could take us to $1.8 trillion. But as a rough order of magnitude, we can take one trillion as the upper limit on monetary quantities that can be represented to an accuracy of one micro-cent using 64 bit integers.
I would think that there might be situations where much more than a trillion – perhaps, even a quadrillion – might be needed. In the days before the euro, we used to joke that the word quadrillion was invented to count the Italian public debt (in lire of course). But more seriously, the total open interest in all the global derivative markets is not far short of a quadrillion dollars. Equally, there may be situations where micro cents or nano cents might make sense for micro payments for charging internet transactions. For example, the digital currency Bitcoin (which is valued at approximately one US dollar) allows subdivision up to 8 decimal places or one hundredth of one millionth of a Bitcoin. If we need a single representation for all monetary quantities from 10^15 (one quadrillion) to 10^-8 (one hundredth of one millionth), the 64 bit integer is simply insufficient. Perhaps, finance will at some point need a 128 bit (16 byte) long integer.
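A quick back-of-the-envelope check of these ranges in Python (as an aside, the $922 billion figure corresponds to integer units of 10^-7 dollars, that is, a tenth of a micro-dollar):

```python
INT64_MAX = 2**63 - 1
INT128_MAX = 2**127 - 1

# Largest representable amount (in dollars) for a signed 64-bit integer,
# at various resolutions (dollars per integer unit).
for label, unit in [("1 cent", 1e-2), ("1 milli-cent", 1e-5),
                    ("1 micro-cent", 1e-8), ("0.1 micro-dollar", 1e-7)]:
    print(f"64-bit at {label}: about {INT64_MAX * unit:,.0f} dollars")

# A signed 128-bit integer with units of 1e-15 dollars comfortably covers
# a quadrillion (1e15) dollars.
print(f"128-bit at 1e-15 dollars: about {INT128_MAX * 1e-15:.3e} dollars")
```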
Of course, some people do argue that data structures like Java’s BigInteger, which allow integers of arbitrary size, should be used. But this arbitrary size comes at a very heavy price. It appears that a Java BigInteger takes about 80 bytes, which is five times as much as a 128 bit (16 byte) integer. The performance penalties would also be substantial. While a 128 bit integer would not be sufficient to count the number of protons in the universe, it should be adequate for the full range of monetary quantities that we are likely to encounter for a long time – it will take us from 10^21 to 10^-15 with a couple of digits to spare.
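The range arithmetic above is easy to verify. A quick back-of-the-envelope check (my own computation, with each resolution expressed as dollars per integer unit):

    # Back-of-the-envelope check of the ranges discussed above.
    INT64_MAX = 2**63 - 1
    INT128_MAX = 2**127 - 1

    def max_dollars(int_max, dollars_per_unit):
        return int_max * dollars_per_unit

    print(max_dollars(INT64_MAX, 1e-7))    # 10^-7 dollar units: ~9.2e11, i.e. ~$922 billion
    print(max_dollars(INT64_MAX, 1e-8))    # one micro-cent:     ~9.2e10, i.e. ~$92 billion

    # A quadrillion dollars at micro-cent accuracy needs 10^23 units, far beyond
    # the 64 bit range but trivial for 128 bits:
    print(10**15 * 10**8 > INT64_MAX)      # True: does not fit in 64 bits
    print(10**15 * 10**8 < INT128_MAX)     # True: fits comfortably in 128 bits

    # 128 bits covers 10^21 dollars at 10^-15 dollar accuracy with room to spare:
    print(10**21 * 10**15 < INT128_MAX)    # True (10^36 versus roughly 1.7 x 10^38)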
Posted at 2:04 pm IST on Fri, 25 May 2012 permanent link
Categories: technology
Hedging at negative cost?
I have been reading the transcript of the conference call in which JPMorgan Chase reported a $2 billion loss on a position that was intended to hedge tail risk (h/t for the transcript to Deus Ex Macchiato). Much has been written about the hedge that JPM Chairman, Jamie Dimon, himself described as “a bad strategy ... badly executed ... poorly monitored.” I want to focus instead on another interesting statement that he made about the hedge:
It was there to deliver a positive result in a quite stressed environment and we feel we can do that and make some net income
Note the tense of that verb “feel”: he does not say “felt”, he says “feel” – after that $2 billion loss, he still thinks that you can set up a hedge which makes money! The Chairman of one of the largest banks in the world – a bank which is still well respected for highly sophisticated risk management – thinks that a tail risk hedge need not cost money, but can actually make money. In other words, there are negative cost hedges out there that can protect you against tail risk.
If you believe in the Efficient Markets Hypothesis (EMH), you know that this is not possible – there is no free lunch. Sure, you can hedge against tail risk, but that will cost you money, and in turbulent markets, it will cost you a good deal of money. The global financial crisis was in a sense the revenge of the Efficient Markets Hypothesis. Those who ignored the “no free lunch” principle and chased illusory excess returns were ruined (or would have been ruined but for their successfully persuading the state to bail them out). The biggest moral hazard of the egregious bail outs of 2008 is that the financial sector has still not internalized the “no free lunch” principle of the Efficient Markets Hypothesis. That is a tragedy for which surely the taxpayer will one day have to pay once again.
In fact, the term hedge seems to have a very different meaning in the financial sector than in the corporate sector (or perhaps, I should say the old fashioned non-financialized part of the corporate sector). If you are an airline that hedges oil price risk, chances are that you are more prudent (more risk averse) than the airline that does not hedge its risk. This is because all airlines face somewhat similar oil price risks and the one that hedges is probably less risky. At least that would be the case if the airline does not use oil price hedging to justify an excessively high level of debt in its capital structure (that is why I began by confining my remarks to the old fashioned non-financialized corporate sector).
In the financial sector (and in highly financialized industrial companies as well), things are very different. The bank that puts on a hedge does not necessarily keep its portfolio unchanged. On the contrary, it uses the hedge to take on more risks on the underlying portfolio. The total hedged portfolio is not necessarily less risky than the original unhedged portfolio. Chances are that the hedged portfolio is riskier – much riskier.
At a theoretical level, this was established more than three decades ago in a very interesting and highly readable paper by Hayne E. Leland (“Who Should Buy Portfolio Insurance?”, The Journal of Finance, 1980, 35(2), pp. 581-594.) Leland started with a very simple observation: since derivatives are zero sum games, for every buyer of portfolio insurance, there must be a seller. He then asked the obvious question – which investors would buy insurance and which would sell it.
If one were naive, one might be tempted to answer that the buyers of insurance must be either bearish on stocks or highly risk averse, while the sellers must be bullish on stocks or highly risk tolerant. Leland’s answer was totally different. He showed that the bears should be selling insurance and the bulls should be buying it. The reason is that the bulls would load themselves up so heavily on stocks (possibly borrowing to buy stocks) that they need downside protection to maintain the position at all. On the other hand, if you are so bearish on stocks that you have put all your money in bonds, clearly you are not going to be buying portfolio insurance!
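A crude way to see the logic is to compare terminal payoffs. The numbers below are entirely made up for illustration (Leland's actual argument is in terms of optimal payoff functions and beliefs, not this caricature):

    # Stylized payoffs for an initial wealth of 100: a levered bull holding a
    # protective put versus a bear sitting entirely in bonds.
    def levered_bull_with_put(s, strike=80, borrowed=100, put_cost=5):
        stock = 2 * s                      # 200 of stock bought at a price of 100
        put = 2 * max(strike - s, 0)       # puts covering the whole position
        return stock + put - borrowed - put_cost

    def all_bonds(s, rate=0.03):
        return 100 * (1 + rate)            # independent of the stock price s

    for s in [40, 80, 100, 130]:
        print(s, levered_bull_with_put(s), round(all_bonds(s), 1))

    # Without the puts the levered bull is wiped out at s = 40; with them the
    # downside is floored at 2*strike - borrowed - put_cost = 55. The all-bond
    # bear has nothing to insure and is the natural seller of the puts.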
The situation regarding risk aversion is more complex. Everything depends on how risk tolerance increases with wealth and it will take too long to describe that argument here. The interested reader should read Leland’s original paper.
Anyway, the key point is that the hedge permits the underlying portfolio to become riskier and more toxic. It is like the old adage that the brakes make the car go faster. So when the banks argue that they need complex derivative products to hedge their risks, what they really mean is that they need these derivatives to create very risky asset portfolios while managing the downside risk up to the point where it can be palmed off to the taxpayer.
To quote another adage (this time from the world of financial trading itself), hedging in the financial world is nothing but speculation on the “basis”; it has little to do with risk reduction.
Posted at 10:21 pm IST on Sun, 20 May 2012 permanent link
Categories: derivatives, risk management
Automating financial advice
Two months back, Abnormal Returns wrote a post entitled “You are not all that unique an investor” which linked to a short survey of the online money management space at World Beta. There are a number of websites in the US that provide personalized financial advice based on software. When one probes further, however, it is clear that this field is still evolving and has a long way to go. One website does not provide advice on asset allocation; it only compares funds that you are already holding with other funds in the same category and recommends cheaper or better performing funds from the same category. Another site emphasises its ability to give personalized advice, but it is only in the legal fine print that I could find a disclosure that the advice is based on software tools.
But today I was reading an NBER working paper by Mullainathan, Noeth and Schoar entitled “The market for financial advice: an audit study” and I realized that software does not have to be particularly good to be competitive with traditional advisors. The bar for that is so low that existing software is probably good enough and of course the software will get better. On the other hand, five years after the financial crisis, there is no evidence whatsoever that traditional financial advisors are becoming any less conflicted.
Mullainathan, Noeth and Schoar used an audit methodology where they hired trained auditors to meet with financial advisers with different types of portfolios and submit a detailed report of their interaction with the adviser (for a total of 284 client visits). They find that “advisers not only fail to de-bias their clients but they often reinforce biases that are in the interests of the advisors. Advisers encourage returns-chasing behavior and push for actively managed funds that have higher fees, even if the client starts with a well-diversified, low-fee portfolio.”
It does not even appear that the traditional adviser personalizes the advice adequately: “advisers are less likely to ask younger or female auditors some basic question about their financial situation, and it also leads to worse advice since the adviser does not have full information.”
I think that financial advice is an industry ripe for disruptive transformation through the internet and software.
Posted at 9:01 pm IST on Sun, 13 May 2012 permanent link
Categories: technology
Government cash management and liquidity squeezes
India witnesses predictable periodic liquidity squeezes due to large outflows of money from the banking system around the dates on which advance tax instalments are due to the government. The central bank does take some offsetting action to pump liquidity into the banking system, but these actions are often not quite adequate. Sometimes, the liquidity situation is fully restored only as the government starts spending out of the tax receipts. In India, we have gotten used to this as if it were the natural and unavoidable state of affairs.
It was therefore interesting to read a nice paper from the New York Federal Reserve describing how the US has solved this problem completely. The paper by Paul J. Santoro is about the evolution of treasury cash management during the financial crisis, but it is its description of the pre-crisis system that is of interest for the advance tax problem. The US Treasury’s cash balance is also “highly volatile: between January 1, 2006, and December 31, 2010, it varied from as little as $3.1 billion to as much as $188.6 billion”. But this volatility does not create any problem either for the banking system or the central bank.
The Treasury divides its cash balance between two types of accounts: a Treasury General Account (TGA) at the Federal Reserve and Treasury Tax and Loan Note accounts (TT&L accounts) at private depository institutions.
If, in the pre-crisis regime, the Treasury had deposited all of its receipts in the TGA as soon as they came in, and if it had held the funds in the TGA until they were disbursed, the supply of reserves available to the banking system – and hence the overnight federal funds rate – would have exhibited undesirable volatility. To dampen the volatility, the Fed would have had to conduct frequent and large-scale open market operations, draining reserves when TGA balances were declining and adding reserves when TGA balances were rising. A more efficient strategy, and the one used by the Treasury in its Tax and Loan program, was to seek to maintain a stable TGA balance.
Each morning Treasury cash managers and analysts at the Federal Reserve Bank of New York estimated the current day’s receipts and disbursements. During a telephone conference call at 9 a.m., they combined the estimates with the previous day’s closing TGA balance, scheduled payments of principal and interest, scheduled proceeds from sales of new securities, and other similar items to produce an estimate of the current day’s closing balance. If the estimated closing balance exceeded the target, the Treasury would invest the excess at investor institutions that had sufficient free collateral and room under their balance limits to accept additional funds. If the estimated balance was below target, the Treasury would call for funds from retainer and investor institutions to make up the shortfall.
The key role in this system is played by retainer and investor institutions with whom the Treasury maintains its TT&L balances. The Santoro paper describes their role as follows:
A retainer institution also accepted tax payments but, subject to a limit specified by the institution and pledge of sufficient collateral, retained the payments in an interest-bearing “Main Account” until called for by the Treasury. If a Main Account balance exceeded the institution’s limit, or if it exceeded the collateral value of the assets pledged by the institution, the excess was transferred promptly to the TGA.
An investor institution did everything a retainer institution did and, as described below, also accepted direct investments from the Treasury. The investments were credited to the institution’s Main Account and had to be collateralized.
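The daily routine described in these extracts is essentially a simple feedback rule: estimate the day's closing TGA balance and either place the excess with TT&L institutions or call funds back. A stylized sketch of that rule (my own paraphrase, with invented numbers and function names, not the Treasury's actual systems):

    # Stylized pre-crisis TGA targeting: keep the Fed account close to a target
    # and let the collateralized TT&L accounts absorb the fluctuations.
    TGA_TARGET = 5_000   # $ million, illustrative only

    def closing_balance_estimate(opening, receipts, disbursements,
                                 debt_service, new_issue_proceeds):
        return opening + receipts + new_issue_proceeds - disbursements - debt_service

    def daily_action(estimate, free_collateral_capacity):
        """Return (direct investment in TT&L accounts, call for funds) for the day."""
        if estimate > TGA_TARGET:
            excess = estimate - TGA_TARGET
            # only what collateralized TT&L capacity will absorb can be placed out
            return min(excess, free_collateral_capacity), 0
        return 0, TGA_TARGET - estimate

    est = closing_balance_estimate(opening=6_000, receipts=30_000,
                                   disbursements=28_000, debt_service=4_000,
                                   new_issue_proceeds=3_000)
    print(daily_action(est, free_collateral_capacity=1_500))
    # estimate is 7,000: invest 1,500 of the 2,000 excess, call nothing back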
During the crisis, as the Fed expanded its balance sheet and banks ended up holding vast excess reserves, the pre-crisis policy of stabilizing the TGA balance ceased to be relevant. Moreover, with the Fed paying interest on excess reserves, depositing money in TT&L accounts would have been an additional subsidy to the banks. Therefore, the Treasury moved to a policy of keeping almost all its cash in the TGA (allowing it to become volatile). As and when monetary policy normalizes, the pre-crisis system will probably come back:
Nevertheless, a significant decline in excess reserves resulting from a shift in monetary policy may once again make it necessary to target a more stable TGA, so that TGA volatility does not cause undesirable federal funds rate volatility and interfere with the implementation of monetary policy.
In short, the advance tax related liquidity squeezes in India are simply the outcome of faulty government cash management practices. Other countries solved this problem long ago (the late 1970s in the case of the US) and the solution is simple and effective. All that is lacking in India is the willingness to do the sensible thing.
Posted at 5:45 pm IST on Mon, 7 May 2012 permanent link
Categories: monetary policy, taxation
Disclosure of risk factors
I have long felt that the risk factors that are disclosed in most offer documents are next to useless in assessing the risk of a security. In utter frustration, I have often wondered whether it would be better to replace all that legalese with a simple empirical fact embellished with a nice skull and crossbones symbol:
☠ Numerous studies covering many different countries have shown that over the long term, initial public offerings tend to underperform the rest of the stock market. Subscribing to these offerings can therefore be injurious to your wealth.
Of course, the same studies also document a large positive initial return to investors who sell immediately after listing, but that is not a risk factor!
Tom C. W. Lin has a different idea in his paper, “A Behavioral Framework for Securities Risk” (34 Seattle University Law Review 325 (2011)).
In order to better capture the advantages of disclosure-based risk regulations given the behavioral tendencies of investors, this Article proposes a behavioral framework for Risk Factors built on (1) the relative likelihood of the risks and (2) the relative impact of dynamic risks. This framework makes risk disclosures more accessible and meaningful to investors and would serve as the new default for public firms. An important feature of the new default is that firms will be able to opt out of the new framework if they believe that the existing Risk Factors requirements are more appropriate. But these firms would need to explain to investors why they opted out. This new default framework would be spatially, optically, and substantively superior to the current framework for investors.
Tom Lin phrases the entire proposal in terms of behavioural finance, but nothing in the proposal depends on behavioural finance. Classifying risks on the basis of likelihood (or frequency) and impact is perfectly rational, and is in fact standard practice in risk management. Thanks to the Basel regulations for operational risks, at least the financial sector has plenty of experience doing this. So it cannot be claimed that it is not feasible.
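The classification is also trivial to operationalize. A minimal sketch of a likelihood-impact ordering (the risk factors and scores below are invented for illustration; they are not from Lin's paper):

    # Order risk factors by a simple likelihood x impact score, the way an
    # operational risk register would, instead of undifferentiated legalese.
    risk_factors = [
        # (description, likelihood 1-5, impact 1-5) -- illustrative entries only
        ("Loss of a key customer contract",       3, 4),
        ("Failure of a major IT system",          2, 5),
        ("Adverse change in tax law",             2, 3),
        ("Litigation over intellectual property", 1, 4),
    ]

    ranked = sorted(risk_factors, key=lambda r: r[1] * r[2], reverse=True)
    for description, likelihood, impact in ranked:
        print(f"{likelihood * impact:>2}  {description}")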
I think this is definitely worth trying out, and if it works, we may not need the skull and crossbones after all.
Posted at 1:42 pm IST on Wed, 2 May 2012 permanent link
Categories: equity markets, regulation
US Department of Labour degrades BLS data releases
When I argued in my blog post a few days back that monopoly providers of official data are unlikely to innovate, I still did not imagine that lack of accountability could lead to their actually degrading their data releases. But that is exactly what the US Department of Labour is proposing to do as regards the BLS (Bureau of Labor Statistics) non farm payroll data release which is perhaps the most powerful market moving data release on the planet today. The proposed draft rules and a transcript of a conference call on the subject are available at the website of the Department of Labour (hat tip for the links and for the whole story to FT Alphaville).
I remember reading a couple of papers by Ederington and Lee two decades ago on the BLS data releases and marvelling both at the ingenuity of the data release system and the speed with which markets process the information. The two papers by Louis H. Ederington and Jae Ha Lee are “How Markets Process Information: News Releases and Volatility”, The Journal of Finance, 48(4),1993, 1161-1191 and “The Short-Run Dynamics of the Price Adjustment to New Information”, The Journal of Financial and Quantitative Analysis, 30(1), 1995, 117-134. The JFQA paper describes the release system as follows:
The release is distributed to reporters with a “need for timely access” about 30 minutes prior to the scheduled release time. While the reporters may type their reports, they cannot leave the room or use the phone. Approximately one minute before the scheduled release time, the reporters are allowed to plug in their modems or pick up the phones but the lines are dead until the scheduled time.
The same paper also describes the speed with which the eurodollar futures market processes the information as follows:
Using 10-second returns and tick-by-tick data, we find that prices adjust in a series of numerous small, but rapid, price changes that begin within 10 seconds of the news release and are basically completed within 40 seconds of the release.
We must remember that this was two decades ago, when modern high frequency trading was practically absent and eurodollar futures trading still took place in the trading pit.
The Department of Labour now plans to degrade the data release system. In the conference call, it described the proposed changes in the system as follows:
Currently, organizations that participate in our lock-ups use their own computer and phone equipment, which is installed in our facilities. The telephone and data lines they use belong to and have been maintained by them. That, too, is changing.
... all currently participating organizations should plan to have their equipment removed.
... The department’s main lock-up facility will be ... reconfigured with new computer equipment, and telephone and data lines. The Labor Department will own and maintain that equipment and those lines.
... Each work station will offer a telephone, monitor, mouse and keyboard. The server and network gear will be located in the lock-up room within a locked cage but separate from the workspace area. Users will be able to log onto their desktops at assigned work spaces. Those who want time to prepare notes or drafts can take advantage of the extra half hour to use Microsoft Word, which will be loaded on the computers.
... There will be a new rule that personal effects must be placed in lockers outside the lock-up facilities before entering the rooms. However, carrying in paper research notes and other paper materials will be allowed. Carrying in pens and pencils will not be permitted. The department will provide writing instruments as well as plain paper for notetaking inside the lock-up rooms.
... You can’t bring in discs or thumb drives or any type of electronic devices.
The Department of Labour will thus control what historical or background data is available to the reporters (they cannot bring anything in electronic form), and the Department of Labour will also control what software is available to the reporters (only Microsoft Word).
To understand how this degrades the information processing, let us go back to the market reacting to the release within 10 seconds in the age of human trading on the pit two decades ago. The only way this can happen is that a lot of analysis takes place prior to the data release and contingent trading strategies are worked out and well rehearsed in advance. The market thus waits not for the raw data, but for an interpreted news report that places the raw data in the context of all the consensus estimates, past trends and other background information and allows a pre-rehearsed trading strategy to be invoked. By reducing the quality of this analysis (by disallowing the tools required to do this), the Department of Labour is degrading the quality of its data release.
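To see what that pre-release analysis looks like, consider a caricature of a contingent strategy keyed to the surprise relative to consensus (a sketch of my own, with invented numbers, not a description of any actual desk's rules):

    # A pre-rehearsed contingent strategy: everything is worked out before the
    # release, so the only thing left to do at release time is a table lookup.
    CONSENSUS_PAYROLLS = 160_000      # illustrative consensus estimate
    STDEV_OF_SURPRISES = 75_000       # illustrative historical dispersion

    def planned_trade(actual_payrolls):
        surprise = (actual_payrolls - CONSENSUS_PAYROLLS) / STDEV_OF_SURPRISES
        if surprise > 1.0:
            return "sell eurodollar futures"    # strong report: rates expected to rise
        if surprise < -1.0:
            return "buy eurodollar futures"     # weak report: rates expected to fall
        return "do nothing"

    print(planned_trade(310_000))     # sell eurodollar futures
    print(planned_trade(150_000))     # do nothing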
What is more, the Department of Labour has no real reasons for making these changes. Look at this exchange in the transcript:
Daniel Moss: Bloomberg News. I’m just wondering, why is the Labor Department choosing to do this now? What is the problem that you believe you are trying to fix given the master switch is already in place working effectively?
Carl Fillichio: [Department of Labour] It’s been, as I mentioned, 10 years since we took a holistic view of the lock- up, and times have certainly changed. ...
Daniel Moss: What is the problem that you imagine you’re trying to fix given there is an effective master switch there already that controls access out of the room for the information?
Carl Fillichio: There’s nothing we necessarily expect. I think we’re doing prudent business management of reviewing our systems and looking at the changes in technology and the way that the news is delivered and have decided that now is the correct time to institute these changes. ...
Daniel Moss: Do I interpret your response, Carl, as meaning there's no current problem?
Carl Fillichio: What I’m trying to do is prevent a problem, Daniel.
Daniel Moss: What is the problem you think, you imagine that this will prevent?
Carl Fillichio: I think we’re going to move on. Operator, we’ll take the next question.
What we see clearly in this exchange is the total lack of accountability. I am reminded of the famous lines in Shakespeare’s Julius Caesar (Act II, Scene II, 72-76):
DECIUS BRUTUS: Most mighty Caesar, let me know some cause,
Lest I be laugh'd at when I tell them so.
CAESAR: The cause is in my will: I will not come;
That is enough to satisfy the Senate.
As for the governmental monopolist’s preference for a private sector monopoly (Microsoft Office), that is probably the subject of another post.
Posted at 9:54 pm IST on Thu, 19 Apr 2012 permanent link
Categories: regulation, technology
Exit policy for financial institutions
I wrote a piece in the Mint newspaper yesterday arguing that instead of worrying about granting licences, financial regulators need to focus on exit policies:
Rogues thrive most in regulatory regimes designed to keep them out. This paradox arises because regulations that try to keep them out also unintentionally keep out the good entrants. The few rogues who do get in are able to thrive because they are shielded from competitors who might otherwise have driven them out of the market. Therefore, an open entry policy coupled with a ruthless exit policy might be the best way to keep the system clean.
In India, there are two areas where regulators are struggling to arrive at the right entry policy – stock exchanges and banks. It is a fact that it is very hard to get a licence for a new stock exchange or a new bank, but it is also very hard to cancel the licence of an existing stock exchange or an existing bank. We have many existing licensed stock exchanges and many existing banks that are so poorly run that they would be unlikely to get a licence today if they were applying for the first time. Yet, it is not easy to kick them out.
If anything, there is a case for a complete reversal of this policy regime. It should be very easy to start a new stock exchange or a new bank, but it should be much harder to retain the licence. The rationale for this approach is that it is very difficult for any regulator to figure out whether a particular set of promoters will be able to run a proposed stock exchange or bank well enough. This question is completely hypothetical and speculative at the entry stage, and the regulators are forced to extrapolate from the promoters’ experience in other sectors or environments to assess whether the prospective licensees will be able to do well in the new venture. In contrast, it is much easier to assess whether an existing bank or stock exchange is well run or not. It is easier because we are now dealing with a question of fact and not speculating about hypothetical future possibilities.
It is easier for rogues to get past stringent entry barriers because they can spend enough money to acquire all the trappings of respectability. They can easily meet minimum capital requirements. They do often succeed in hiring distinguished personalities to serve on their boards (or in senior management positions) and lobby on their behalf. They can engage expensive consultants to prepare impressive business plans. Of course, once they have got the licence, they can discard these ostensible plans, sideline the “distinguished” personalities, and get on with their real business plans.
For these reasons, tough entry barriers do not really keep out the rogues. For example, of the 10 new banks licensed in India in the first phase in the 1990s, the majority were failures in the broadest sense. There were serious and honest promoters who failed because of environmental changes or genuine management mistakes, but that cannot be said of all the banks that failed. A tiny number of those who did not deserve banking licences did manage to obtain them despite very tough entry standards set by a regulatory process that was widely regarded as free of corruption. At the same time, a large number of serious professionals with valuable ideas would have been denied a licence because of the tight entry norms.
The principal objection to the idea of relying on a strong exit policy to keep the rogues in check is the problem of “too big to fail”. This objection has, I think, lost its force after the global financial crisis. It is now accepted that a financial intermediary that is too big to fail is simply too big to exist. Current regulatory thinking is that big institutions should prepare “living wills” or “funeral plans” that ensure that they can die gracefully.
The idea of “funeral plans” applies with equal force to stock exchanges and clearing corporations as well. There is a high probability that a large clearing corporation in a Group of Twenty (G-20) country (or for that matter even in a G-7 country) will fail over the next five years or so. Rather than pretend that central counterparties can never fail, we should be working hard to ensure that they could fail without dragging the whole system down with them. This means multiple central counterparties, which in turn means higher margins and collateral requirements. In a post crisis world, a lower level of leverage is probably not a bad thing.
It is often argued that stock exchanges are a natural monopoly because liquidity begets more liquidity and trading gravitates towards the most liquid trading venue. In reality, however, liquidity has many dimensions. A dark pool can be the best source of liquidity for some investors, while being a terrible trading venue for others. Moreover smart order routing can aggregate liquidity across multiple venues.
A financial system with numerous small banks, many exchanges and multiple central counterparties would be more robust. It would also have fewer rogues, or at least fewer big rogues who can do great damage.
Posted at 5:21 pm IST on Tue, 17 Apr 2012 permanent link
Categories: bankruptcy, regulation
Crowd sourcing official statistics
Yesterday, the Indian government admitted a huge error in the Index of Industrial Production (IIP) data for January 2012 and corrected the growth rate from a healthy 6.8% to a dismal 1.1%:
... during the compilation of IIP for January, 2012, the sugar production was wrongly taken as 134.08 lakh tonnes in place of actual figure of 58.09 lakh tonnes. ... Immediately after detection of the error, the revised IIP numbers and growth rates for the month of January, 2012 have been compiled. ... the IIP for January 2012 has been revised from 187.9 to 177.9 and, therefore, growth rate over the corresponding period of previous year has been revised from 6.8% to 1.1%.
In my view, the fact that the government has a monopoly in the production of official statistics leads to poor quality, low accountability and lack of innovation. Perhaps, these problems are worse in an emerging economy and the costs of the public sector monopoly are less severe in developed countries. But the problem is not confined to emerging markets.
Even in the US, the seasonal adjustments used for various official statistics have been called into question (see for example, here and here). The whole process of seasonal adjustment is ripe for disruptive innovation. First of all, the reliance on a Gregorian calendar for seasonal adjustment is increasingly inappropriate in a world where some of the fastest growing economies with large populations base their principal holidays on a lunar calendar (China, India and the entire Islamic world). China’s influence on commodity prices is so great that it is possible that the commodity price components of seasonally adjusted indices even in the developed world are distorted by the incorrect use of Gregorian seasonality adjustment. Via inventories and collateralized commodity financing, this might be an issue for some financial data series as well. Who knows, some large global central banks may be getting their monetary policy wrong because of Gregorian seasonal adjustments!
Secondly, I would argue that the whole idea of seasonality adjustment is an abdication of responsibility by the econometrician. Wherever we use time as an independent variable, it is a proxy for omitted variables that are more fundamental. A time trend, for example, proxies for variables like population growth, technological progress, inflation and productivity improvements. A seasonality adjustment is also a proxy for more fundamental physical and economic variables like temperature, rainfall, holidays, advance tax payment due dates, government bond issuance calendars and the like. It is far better to model these variables directly so that the economic model is more robust and meaningful. The belief that economic variables have a different behaviour in different months solely because of the position of the sun in the zodiac is astrology and not economics. Seasonality adjustments need to move from the age of astrology to the age of econometrics.
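To make the contrast concrete, here is a minimal sketch of the two specifications on simulated data (the variables, coefficients and the use of the statsmodels formula interface are all my own illustration; only the form of the specification matters, not the numbers):

    # Month dummies versus fundamental drivers: same left-hand side, two models.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 120                                   # ten years of monthly data
    df = pd.DataFrame({
        "month": np.tile(np.arange(1, 13), n // 12),
        "temperature": 20 + 10 * np.sin(np.linspace(0, 20 * np.pi, n)),
        "festival_days": rng.integers(0, 5, n),          # lunar-calendar holidays
        "advance_tax_due": np.tile([0, 0, 1] * 4, n // 12),
    })
    df["output"] = (100 - 0.8 * df["temperature"] - 2.0 * df["festival_days"]
                    - 1.5 * df["advance_tax_due"] + rng.normal(0, 1, n))

    # The astrological specification: behaviour differs by calendar month.
    dummies = smf.ols("output ~ C(month)", data=df).fit()
    # The structural specification: behaviour responds to the underlying drivers.
    drivers = smf.ols("output ~ temperature + festival_days + advance_tax_due",
                      data=df).fit()
    print(dummies.rsquared, drivers.rsquared)

The structural regression keeps working when a lunar festival drifts across Gregorian months; the dummy specification by construction cannot.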
Such radical changes are unlikely to happen so long as official data is provided by a monopolist (whether in the public or in the private sector). The time has come in my view to crowd source the creation of official statistics. The government should simply make the digitized raw data publicly available and should not publish anything else. There would be no official Index of Industrial Production, but the government website would have the raw production statistics submitted by various businesses. Yes, not the aggregate sugar production, but the sugar production of each sugar mill in the country. Every user would be free to choose what outlier tests to run, what aggregation algorithm (for example, mean, median or trimmed mean) to apply on this raw data, which base year and which base year weights to adopt, and which index computation methodology (Laspeyres or Paasche, arithmetic mean or geometric mean) to use in computing indices at whatever level of aggregation or disaggregation he or she wishes.
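For example, given the mill-level raw data, two users could quite legitimately compute two different production indices. A minimal sketch with invented numbers (two commodities standing in for the whole IIP basket):

    # Two users, same raw data, different index formulas.
    base = {"sugar": {"price": 30.0, "qty": 240.0},     # illustrative numbers
            "steel": {"price": 55.0, "qty": 100.0}}
    current = {"sugar": {"price": 36.0, "qty": 200.0},
               "steel": {"price": 60.0, "qty": 110.0}}

    def laspeyres_quantity(base, current):
        """Production index weighting quantities by base-period prices."""
        num = sum(base[c]["price"] * current[c]["qty"] for c in base)
        den = sum(base[c]["price"] * base[c]["qty"] for c in base)
        return 100 * num / den

    def paasche_quantity(base, current):
        """Production index weighting quantities by current-period prices."""
        num = sum(current[c]["price"] * current[c]["qty"] for c in base)
        den = sum(current[c]["price"] * base[c]["qty"] for c in base)
        return 100 * num / den

    print(round(laspeyres_quantity(base, current), 1))   # 94.9
    print(round(paasche_quantity(base, current), 1))     # 94.3

Replacing the weighted sums with a median or a trimmed mean of the quantity relatives, or changing the base year, would give yet other defensible answers from the same raw data.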
The lack of an authoritative index may also reduce systemic risk in the economy because different indices computed by different agencies may give different pictures of the economy. We would probably have less herding and more muted boom-bust cycles. Like the story of the six blind men and the elephant, each of the competing privately produced indices would be a partial and therefore incomplete view. That however is far superior to one blind man and the elephant – because the one blind man does not even know that his understanding is incomplete.
Posted at 11:48 am IST on Fri, 13 Apr 2012 permanent link
Categories: market efficiency, regulation
The social utility of hedging
I have been engaged in a stimulating email conversation with Vivek Oberoi on the social utility of hedging. The hedger is clearly better off by hedging and reducing risk, but Vivek’s question was whether society as a whole could be worse off. I found the discussion quite interesting and thought it worthwhile to widen the conversation by sharing it on this blog. Moreover, it is heartening to see people in the financial industry introspect about the social utility of their industry. Perhaps, this will encourage others in the financial industry to look at their own work more critically.
My position is that hedging has powerful redistributive effects but is socially useful so long as (a) the hedging is carried out in liquid derivative markets and (b) hedgers do not suffer too much from the Endowment Effect. Vivek of course is not convinced. Anyway, here is the conversation so far:
Vivek Oberoi writes:
If I buy tickets to travel for my vacation in December today, I am indifferent to any subsequent change in the price of oil. To be able to sell me the ticket forward, a risk averse airline will hedge their fuel for the sale. It too is now indifferent to the price of oil. The airline and I have made allocational choices based on forward price of oil today. If oil price on the day of the flight is different from what it is today, there will be an allocational loss. Both the airline and I may be better off (we are risk averse). But there will be dampening of the price signal. That will lead to an allocational loss (negative externality?) to society.
My response:
One key question is whether the forward contracts can be sold to a third party or can be unwound with the original party at market related prices. The ticket cannot, but the oil hedge can. Illiquid derivatives can be harmful for allocative efficiency. For example:
- You may be willing to accept $500 in return for postponing your vacation by a couple of days
- Somebody who needs to take that flight due to a personal emergency may be willing to pay $1000 premium to get on that flight.
The airline and the government would step in and say that you cannot do the trade. The wrong person gets on the plane and the outcome is inefficient.
But if you were allowed to do the trade then the Coase Theorem implies that allocative efficiency is achieved regardless of the initial allocation of property rights. In other words, it does not matter whether you owned the ticket or the other person owned the ticket in the beginning; after the bargaining and trading, the right person will get on the plane. The initial ownership will only determine who is richer/poorer at the end of the trade. The Coase Theorem requires low transaction costs which would be the case in liquid futures markets and in liquid OTC markets, but not in highly customized and illiquid bilateral forward contracts.
Behavioural finance will of course have a different take on this. The Endowment Effect could imply a loss of allocative efficiency due to derivative contracts. In the corporate context, you need some takeover threats from asset strippers (who would monetize the fuel hedges and then shut down the airline) to prevent the loss of allocative efficiency caused by managers suffering from the Endowment Effect.
Vivek Oberoi continues:
Imagine a risk-averse consumer of oil. He needs 1 unit of oil to drive to office. The price of oil is USD 100/unit. The consumer has USD 100. If the price of oil goes up to, say 150, he will have to take public transport. To keep things simple, assume the public transport ticket will cost 100. If the price goes down to 50 he will use the money saved to see a movie. The consumer is risk-averse. He dislikes the thought of using public transport more than the enjoyment of seeing a movie.
At time t0 the consumer gets into a fixed price contract for the purchase of 1 unit of oil at time t1. Assume oil prices spike to USD 150/bbl. Now the customer has a choice. He can either use the oil to drive to work. Or he can sell the oil for USD 150/unit. Use USD 100 of that for public transport and the remaining USD 50 for the movie. The essence of risk-aversion is that the combination of using public transport and seeing the movie will not be as good as driving to work.
My response:
I would interpret the situation a little differently. First of all, we can avoid expected values by just focusing on the case where the price of oil is 150. We must still take into account the non-linearity (concavity) of the utility function, which is what leads to risk aversion, but we can avoid probabilities and expectations.
In the $150 price scenario, the choices of the hedger are
- Spending $100 to drive to work and
- Spending $50 (net) to take the train leaving $50 surplus for the movie ticket.
The person who did not hedge also has two choices:
- Spending $150 to drive to work
- Spending $100 to take the train.
What is the difference? It is as if the hedger won a lottery ticket with a prize of $50. That is all. For the hedger, the car and the train are both cheaper by $50, but the relative cost of car versus train is the same for both hedger and non hedger (150-100=100-50=50).
You are right in saying that at the higher level of wealth induced by winning the lottery ticket, the consumer may be willing to pay $150 to drive to work and so he will not sell the forward contract while at the lower level of wealth, he would take the train. That is the result of the concavity of the utility function (risk aversion). Yet, allocative efficiency is achieved in both cases. The hedger taking the car is not allocative inefficiency – it is simply the redistributive effect of the lottery ticket. Exactly as the Coase theorem would say, the initial allocation of property rights (whether or not there is a forward contract) gives rise to windfall gains and losses (lottery prizes), but there is no loss of allocative efficiency.
Vivek Oberoi continues:
A similar case can be built for a risk-averse producer. Imagine a USD 50/unit drop in oil price will lead to a shutdown of his fields. He hedges to avoid that eventuality. An outcome in which he gets USD 50 in cash and shuts down his field is worse than him producing 1 unit.
My response:
Absolutely correct. The consumer wants to buy a lottery ticket that gives a $50 prize when oil is at $150. The producer wants to buy a lottery ticket that gives a $50 prize when oil is at $50. Each is willing to sell the lottery ticket that the other wants in order to pay for the lottery ticket that he wants. Risk aversion (concave utility functions) is what makes this trade possible. More generally, one party is a put buyer and the other is a call buyer. The combination of these two lotteries (options) is a forward contract. Again the Coase theorem says that the trade that they agree to do is allocatively efficient.
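The payoff arithmetic behind that last statement is worth writing out. A minimal sketch with an illustrative forward price of $100 (my own numbers):

    # The consumer's "lottery ticket" is a call struck at 100, paid for by
    # selling the producer the put he wants. Long call + short put = long forward.
    STRIKE = 100.0

    def call(s):
        return max(s - STRIKE, 0.0)

    def put(s):
        return max(STRIKE - s, 0.0)

    def long_forward(s):
        return s - STRIKE                  # bought forward at 100

    for s in [50.0, 100.0, 150.0]:
        consumer = call(s) - put(s)        # long call, short put
        producer = put(s) - call(s)        # long put, short call
        print(s, consumer, producer, long_forward(s))
    # At every price the consumer's position equals the long forward payoff and
    # the producer's equals the short forward payoff: the two lottery tickets
    # net out to an ordinary fixed-price contract.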
Your arguments do of course bring out some counter intuitive aspects of derivative markets:
- Not all hedgers who cancel their hedges opportunistically are evil. In fact, this behaviour is sometimes necessary to preserve allocative efficiency.
- Not all asset stripping corporate raiders are evil. Sometimes they are necessary to overcome the Endowment Effect and restore allocative efficiency by monetizing derivatives and real options.
Vivek Oberoi responds to my response:
- As you say, the redistributive effect of the lottery and the concavity of utility function ensures that the consumer of oil drives to work. That decision is allocationally inefficient. The consumer has no incentive to reduce his consumption of oil by taking public transport when the price of oil goes up to USD 150/bbl. Similarly the producer has no incentive (and no surplus funds with which) to increase his production of oil. The price signal is being damped.
- This transaction results in a net welfare gain for both the consumer and the producer. They are risk averse after all. But for society (i.e everyone besides the two principals) as a whole there is a welfare loss (negative externality). The price at which the derivative contract was struck and the ultimate price of oil are contractually unarbitragable. This reverses the gains from trade. The economic effect of the transaction *on society* is identical to that of price fixing by a government.
My response to his response to my response:
I do not agree that there is any inefficiency for the following reasons:
- Because the demand of the hedgers has become inelastic, the price of oil will rise more than it otherwise would. For example, with no hedging oil may go to say 120 to force everybody to cut consumption by 10%. With hedging suppose half the consumers do not cut consumption at all. Then oil may rise to 150 to force the remaining half of the consumers to cut consumption by 20% so that demand still equals supply. The cost of adjusting to a shortage of oil has been shifted from one set of consumers to another set. This is redistribution and not inefficiency.
- Probably the oil field got developed only because of hedging. If there is a risk that oil may go to $50, the oil company may be reluctant to invest in the wells and pipelines and so on.
- This illustrates why hedging is needed: decisions today depend on the oil price five years from now. Futures markets lead to efficient outcomes in this case. Similar issue arises with the consumer's decision to buy a car – the risk that oil may go to $150 may make him reluctant to buy the car in the first place.
- You are focusing on the price signal from the spot markets. The more powerful price signals come from the futures market. This signal drives oil exploration, purchase of cars, power plants and so on. The long run supply and demand responses are much stronger because of the futures markets.
I think you are underestimating the power of the Coase theorem when markets are deep and liquid.
Posted at 6:09 pm IST on Wed, 11 Apr 2012 permanent link
Categories: derivatives, risk management
Pricing of liquidity
Prior to the crisis, liquidity risk was underpriced or even ignored. Now, the pendulum has swung to the other extreme, but the result may once again be that liquidity is mispriced.
The Financial Stability Institute set up by the Bank for International Settlements and the Basel Committee on Banking Supervision has published a paper “Liquidity transfer pricing: a guide to better practice” by Joel Grant of the Australian Prudential Regulation Authority. The paper argues that a matched maturity transfer pricing method based on the swap yield curve does not price liquidity at all:
These banks came to view funding liquidity as essentially free, and funding liquidity risk as essentially zero. ... If we assume that interest rate risk is properly accounted for using the swap curve, then a zero spread above the swap curve implies a zero charge for the cost of funding liquidity.
I find myself in total disagreement with this assertion. The standard liquidity preference theory of the term structure says that the long term interest rate is equal to the expected average short term interest rate plus a liquidity premium. So matched maturity transfer pricing does price liquidity. If you accept the market liquidity premium as correct, then one can go further and say that the swap based approach prices liquidity perfectly; but I do not wish to push the argument that far. I would only say that Grant’s argument would hold true only under the pure expectations theory of the term structure, and in this case, the entire market is, by definition, placing a zero price on liquidity.
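A small numerical illustration of the point, with invented numbers:

    # If the 5-year swap rate embeds a term/liquidity premium over the expected
    # average of future short rates, then matched maturity transfer pricing at
    # the swap rate is already charging something for funding liquidity.
    expected_short_rates = [0.030, 0.032, 0.034, 0.036, 0.038]   # illustrative path
    expected_average = sum(expected_short_rates) / len(expected_short_rates)

    swap_rate_5y = 0.040                   # observed 5-year swap rate (illustrative)
    liquidity_premium = swap_rate_5y - expected_average

    print(f"expected average short rate: {expected_average:.2%}")
    print(f"premium charged by swap-based transfer pricing: {liquidity_premium:.2%}")
    # Only under the pure expectations theory is this premium zero -- and then
    # the market itself is pricing liquidity at zero.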
The paper argues that matched maturity transfer pricing must be based on the bank’s borrowing yield curve – the bank’s fixed rate borrowing cost is converted into a floating rate cost (using an “internal swap”) and the spread of this floating rate borrowing cost over the swap yield curve is treated as a liquidity premium. I believe that the error in this prescription is that it conflates credit and liquidity risk. The spread above the swap curve reflects the term structure of the bank’s default risk. Grant seems to recognize this, but then he ignores the problem:
This ... reflects both idiosyncratic credit risks and market access premiums and is considered to be a much better measure of the cost of liquidity.
I believe that there is a very big problem in including the bank’s default risk premium in pricing the assets that the bank is holding. The problem is that the bank’s default risk depends on the asset quality of the bank. Transfer pricing based on this yield curve can thus set up a vicious circle that turns a healthy bank into a toxic bank. A high transfer price of funds means that the bank is priced out of the market for low risk assets and the bank ends up with higher risk assets. The higher risk profile of the bank increases its borrowing cost and therefore its transfer price. This pushes the bank into even more risky assets and the vicious circle continues until the bank fails or is bailed out.
This problem is well known even in corporate finance where a firm is engaged in many different lines of business. There the solution is to use a divisional cost of capital which ignores the risk of the company as a whole and focuses on the risk of the division in question. The use of a corporate cost of capital in diversified companies leads to the lower risk businesses being starved of funds while the high risk businesses are allowed to grow. Ultimately, the corporate cost of capital also rises. Divisional cost of capital solves this problem.
It would be very odd if a regulatory guide to best practice ignores all this learning and pushes banks in the wrong direction. We should not lose sight of the simple principle that assets must be priced based on the characteristics of the asset and not the characteristics of the owner of the asset.
Posted at 3:34 pm IST on Thu, 29 Mar 2012 permanent link
Categories: regulation
Globalized Finance
It is interesting to find a well known G-SIFI (Global Systemically Important Financial Institution) being described as:
a London based hedge fund, headed by a rajestani, masquerading as a German bank
In all fairness, the description is perhaps partly facetious and in any case, I doubt whether this G-SIFI is either as globalized or as important as the Rothschilds (another Anglo-German combination) were in their heyday.
If you are keen to verify your guess of the identity of the G-SIFI in question, go to this dealbreaker.com story, scroll down to the comments, and read the comment of Edmond Dantes, from which the above quote is taken.
Posted at 8:37 pm IST on Fri, 23 Mar 2012 permanent link
Categories: banks
Reviving structural models: Pirrong tackles commodity price dynamics
The last quarter century has seen the slow death of structural models in finance and the relentless rise of reduced form models. I have argued that this leads to models that are “over-calibrated to markets and under-grounded in fundamentals”, and was therefore quite happy to see Craig Pirrong revive structural models with his recent book on Commodity Price Dynamics.
Ironically, it was a paper based on a structural model that made it possible to jettison structural models. The 1985 paper by Cox, Ingersoll and Ross (“An Intertemporal General Equilibrium Model of Asset Prices”, Econometrica, 1985, 53(2), 363-384) took a structural model of a very simple economy and showed that asset prices must equal discounted values of the asset payoff after making a risk adjustment in the drift term of the dynamics of the state variables. This was a huge advance because it became possible for modellers to simply assume a set of relevant state variables, calibrate the drift adjustments (risk premia) to other market prices, and value derivatives without any direct reference to fundamentals at all.
Over time, reduced form models swept through the whole of finance. Structural (Merton) models of credit risk were replaced by reduced form models. Structural models of the yield curve (based on the mean reversion and other dynamics of the short rate) were replaced by the Libor Market Model (LMM). In commodity price modelling, fundamentals were swept aside, and replaced by an unobservable quantity called the convenience yield.
All this was useful and perhaps necessary because the reduced form models were eminently tractable and could be made to fit market prices quite closely. By contrast, structural models were either intractable or too oversimplified to fit market prices well enough. Yet, there is reason to worry that the use of reduced form models has gone beyond the point of diminishing returns. It is worth trying to reconnect the models to fundamentals.
This is what Pirrong is trying to do in the context of commodity prices. What he has done is to abandon the idea of closed form solutions and rely on computing power to solve the structural models numerically. I believe this is a very promising idea, though Pirrong’s approach stretches computing feasibility to its limits.
Pirrong regards the spot commodity price as a function of one state variable (inventory, denoted x) and two fundamentals (denoted by y and z, representing demand shocks with different degrees of persistence, or a supply shock and a demand shock). As long as inventory is non zero, the spot price must equal the discounted forward price, where the forward price in turn satisfies a differential equation of the Black-Scholes type. The level of inventory is the result of an inter-temporal optimization problem.
Pirrong solves all these problems numerically using a discrete grid of values for x, y and z. Moreover, to use numerical methods, time (t) must also be discretized – Pirrong uses a time interval of one day and the forward prices are for one-day maturity. After discretization, the optimization becomes a stochastic dynamic programming problem. For each day on the grid, a series of problems have to be solved to get the spot price and forward price functions. For each value of inventory in the x grid, a two dimensional partial differential equation has to be solved numerically to get the grid of forward prices associated with that level of inventory. Then for each point in the x-y-z grid, a fixed point (or root finding) problem has to be solved to determine the closing inventory at that date. Once opening and closing inventories are known, the spot price is determined by equating supply and demand. All this has to be repeated for each date: the dynamic programming problem has to be solved recursively starting from the terminal date.
In this process, the computation of forward prices assumes a spot price function, and the spot price function assumes a forward price function. The solution of the stochastic dynamic programming problem consists essentially of iterating this process until the process converges (the new value of the spot price function is sufficiently close to the previous value).
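To give a flavour of what this iteration looks like, here is a drastically simplified sketch of my own: one inventory state, a single two-state demand shock instead of Pirrong's two fundamentals, linear demand, invented parameters, and none of his PDE machinery. It is meant only to show the structure of iterating between a guessed price function and the implied storage decision:

    # Toy rational-storage model solved by iterating on the spot price function.
    import numpy as np

    a, b = 100.0, 1.0              # inverse demand: p = (a + y - consumption) / b
    supply = 50.0                  # fixed production per period
    delta = 0.95                   # one-period discount factor
    carry_cost = 0.5               # storage cost per unit per period

    y_states = np.array([-10.0, 10.0])             # demand shocks
    trans = np.array([[0.8, 0.2], [0.2, 0.8]])     # Markov transition probabilities
    x_grid = np.linspace(0.0, 60.0, 61)            # inventory grid

    def inv_demand(consumption, y):
        return (a + y - consumption) / b

    # initial guess: prices assuming nothing is ever carried forward
    P = np.array([[inv_demand(supply + x, y) for y in y_states] for x in x_grid])

    for iteration in range(600):
        P_new = np.empty_like(P)
        for i, x in enumerate(x_grid):
            available = supply + x
            for j, y in enumerate(y_states):
                def expected_price(x_next):
                    nxt = [np.interp(x_next, x_grid, P[:, k]) for k in range(2)]
                    return float(trans[j] @ np.array(nxt))
                if delta * expected_price(0.0) - carry_cost <= inv_demand(available, y):
                    x_opt = 0.0            # stockout: nothing carried forward
                else:
                    lo, hi = 0.0, min(available, x_grid[-1])
                    for _ in range(40):    # bisect for the carry-out x'
                        mid = 0.5 * (lo + hi)
                        gap = (inv_demand(available - mid, y)
                               - (delta * expected_price(mid) - carry_cost))
                        if gap < 0.0:
                            lo = mid       # spot still below full carry: store more
                        else:
                            hi = mid
                    x_opt = 0.5 * (lo + hi)
                P_new[i, j] = inv_demand(available - x_opt, y)
        if np.max(np.abs(P_new - P)) < 1e-4:
            break
        P = P_new

    print(iteration, P[0], P[-1])   # spot prices at empty and at full inventory

Even this toy version needs many sweeps over the grid; adding a second fundamental, a realistic grid and a PDE solve for the forward curve at every step makes it easy to see why the full problem is so much more expensive.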
Pirrong reports that the solution of the stochastic dynamic programming problem takes six hours on a 1.2GHz computer. To calibrate the volatility, persistence and correlation of the fundamentals to observed data, it is necessary to run an extended Kalman Filter and the stochastic dynamic programming problem has to be solved for each value of these parameters. All in all, the computational process is close to the limits of what is possible without massive distributed computing. Pirrong reports that when he tried to add one more state variable, the computations did not converge despite running for 20 days on a fast desktop computer.
Though the numerical solution used only one-day forward prices, it is possible to obtain longer maturity (one-year and two-year) forward prices as well as option prices by solving the Black-Scholes type partial differential equation numerically. Pirrong shows that models of this type are able to explain several empirical phenomena.
Perhaps, it should be possible to use models of this kind elsewhere in finance. Term structure models are one obvious problem with similarities to the storage problem.
Posted at 2:41 pm IST on Thu, 15 Mar 2012 permanent link
Categories: commodities, derivatives, interesting books
Glimcher: Foundations of Neuroeconomic Analysis
Over the last several weeks, I have been slowly assimilating Paul Glimcher’s Foundations of Neuroeconomic Analysis. Most of the neuroeconomics that I had read previously was written by economists (particularly behavioural economists) who have ventured into neuroscience. Glimcher is a neuroscientist who has ventured into psychology and economics. It appears to me that this makes a very profound difference.
First of all, neuroscientists (and biologists in general) treat the human brain (and more generally the animal brain) with an enormous amount of respect. The biologists’ view is that an organ that has evolved over hundreds of millions of years must be pretty close to perfection. For example, Glimcher points out that the ability of a rod cell in the human eye to detect a single photon of light “places human vision at the physical limits of light sensitivity imposed by quantum physics” (page 145). Similarly, the detection of image features in the visual cortex uses Gabor functions which also have well known optimality properties (page 237).
This view needs to be reconciled with the findings of psychologists and behavioural economists that the human brain makes the most egregious mistakes on very simple verbal problems. Glimcher provides one answer – evolution performs a constrained optimization in which greater accuracy has to be constantly balanced against greater computational costs (the brain consumes a disproportionate amount of energy despite its small size). Once again, this trade-off is carried out in a near perfect manner (pages 276-278). I would think that Gigerenzer’s Rationality for Mortals is another way of looking at this puzzle – many of these verbal problems are totally different from the problems that the brain has encountered during millions of years of evolution.
The second profound difference is that biologists do not put human behaviour on a totally different pedestal from animal behaviour. They tend to believe that the neural processes of a rhesus monkey are very similar to that of human beings. After all, they are separated by a mere 25 million years of evolution (page 169). Economists and psychologists probably have a much more anthropocentric view of the world. On this, I am with the biologists; in the whole of human history, anthropocentrism has at almost all times and in almost all contexts been a delusion.
This leads to a third big difference in neuroeconomics itself. Much of Glimcher’s book is based on studies of single neurons or multiple neurons and is therefore extremely precise and detailed. Highly intrusive single neuron studies are obviously much easier to do on animals than on human beings. Much of the neuroeconomics written by economists is therefore based on functional magnetic resonance imaging (functional MRI or fMRI) which provides only a very coarse grained picture of what is going on inside the brain but is easy to do on human beings. The problem is that if one reads only the fMRI based neuroeconomics, one gets the feeling that neuroscience is highly speculative and imprecise.
Glimcher’s book also leads to a view of economics in which economic constructs like utility and maximization are reified in the form of physical representations inside the brain. I am tempted to call this Platonic economics (drawing an analogy with Platonic realism in philosophy), but Glimcher refers to this as “because models” instead of “as if models” – individuals do not act as if they maximize expected utility; they actually compute expected utility and maximize it. There are neural processes that actually encode expected utility and there are neural processes that actually compute the argmax of a function.
One of the interesting aspects of this process of reification is the detailed discussion of the neural mechanisms behind the “reference point” of prospect theory. Glimcher argues that “all sensory encoding is reference dependent: nowhere in the nervous system are the objective values of consumable rewards encoded.” Glimcher raises the tantalising possibility that temporal difference learning models could allow the reference point to be unambiguously identified (page 321 et seq).
Another important observation is that directly experienced probabilities and verbally communicated probabilities are totally different things. When random events are directly experienced, there are neural mechanisms that compute the expected utility directly without probabilities and utilities being separately available for subsequent processing. As predicted by learning theory, these probabilities reflect an underweighting of low-probability events (because of a high learning rate). Symbolically communicated probabilities are a different thing altogether, where we find the standard Kahneman-Tversky phenomenon of overweighting of low-probability events.
Expected subjective values constructed from highly symbolic information are an evolutionarily new event, although they are also hugely important features of our human economies, and it may be the novelty of this kind of expectation that is problematic. ... If [symbolically communicated probabilities] is a phenomenon that lies outside the range of human maximization behavior, then we may need to rethink key elements of the neoclassical program. (page 373)
This too is probably related to Gigerenzer’s finding that frequencies work much better than probabilities in symbolically communicated problems and that single event probabilities are handled very badly.
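A crude way to see how a high learning rate produces underweighting of rare events is to simulate a simple reward-learning rule. The sketch below is my own illustration and is not taken from Glimcher; the delta-rule update and the parameter values (a 5% event, a learning rate of 0.3) are assumptions chosen purely for illustration.

```python
import random

# A minimal sketch, assuming a standard delta-rule (TD(0)-style) update with
# a constant learning rate; the parameters p, alpha and n_steps are my own
# illustrative choices, not anything taken from Glimcher's book.

def fraction_underweighted(p=0.05, alpha=0.3, n_steps=10_000, seed=1):
    rng = random.Random(seed)
    v = 0.0          # learned estimate of the expected reward
    below = 0        # steps on which the rare event is effectively underweighted
    for _ in range(n_steps):
        reward = 1.0 if rng.random() < p else 0.0
        v += alpha * (reward - v)        # exponentially weighted recent experience
        if v < p:
            below += 1
    return below / n_steps

print(f"fraction of time the estimate sits below the true probability: "
      f"{fraction_underweighted():.2f}")

# The long-run mean of v is unbiased, but on most steps the recent history
# contains no rare event, so the moment-to-moment estimate (which is what
# drives choice) underweights it. A described probability of 5%, by contrast,
# arrives as a symbol and is, if anything, overweighted.
```

Nothing in this toy rules out other mechanisms; it only shows that a high learning rate is sufficient to produce the asymmetry between experienced and described probabilities.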
Posted at 8:09 pm IST on Wed, 14 Mar 2012 permanent link
Categories: behavioural finance, interesting books
Decumulation phase of retirement savings
During the last couple of decades, pension reforms have focused much attention on the accumulation phase, in which individuals build up their retirement savings. Well designed defined contribution schemes have incorporated insights from neo-classical and behavioural finance to create low cost, well diversified savings vehicles with simple default options. Much less attention has been paid to the decumulation phase after retirement, where the savings are drawn down.
A report last month from the National Association of Pension Funds (NAPF) and the Pensions Institute in the UK argues that investor ignorance, combined with a lack of transparency and undesirable industry practices, leads to large losses for investors. According to the report:
Each annual cohort of pensioners loses in total around £500m-£1bn in lifetime income. This could treble as schemes mature and auto-enrolment brings 5-8m more employees into the system.
This represents 5-10% of the annual amount consumers spend on annuities.
The report makes a number of excellent suggestions including creating a default option for annuitization. I would argue that a more radical approach would ultimately be needed.
In the accumulation phase, the key advance was the distinction between systematic/market risk and diversifiable/idiosyncratic risk. By restricting choice to well diversified portfolios, the investor’s decision is dramatically simplified – all that needs to be chosen is the desired exposure to market risk (proxied by the percentage allocation to equities).
The corresponding distinction in the decumulation phase is between aggregate mortality risk (what I like to call macro-mortality risk) and individual specific mortality risk (micro-mortality). Given a large pool of investors in any defined contribution scheme and some degree of compulsory annuitization, it can be assumed that micro-mortality risk is largely diversified away. Compulsory annuitization eliminates adverse selection to a great extent and large pools provide diversification.
What is left is therefore the risk of a change in population-wide life expectancy or macro-mortality risk. It is not at all self-evident that insurance companies are well equipped to manage this risk. Perhaps, capital markets can deal with this risk better by spreading the risk across large pools of investment capital. In fact, it would make sense for many individuals in the accumulation phase to bear life-expectancy risk (as it increases the period during which their savings can accumulate). At least since the days of the Damsels of Geneva more than two centuries ago, pools of investment capital have been quite willing to speculate on diversified mortality risk. Shiller’s proposal regarding macro futures is another way of implementing this idea.
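To make the micro/macro distinction concrete, here is a stylized simulation of my own (not drawn from the NAPF report or any of the papers mentioned above): the per-member payout uncertainty coming from individual deaths shrinks as the pool grows, while a population-wide shift in survival probability affects every pool equally.

```python
import random
import statistics

# A stylized sketch with made-up numbers: a one-year survival probability of
# 0.98 or 0.99, and one unit paid to each survivor. Purely illustrative.

def payout_per_head(pool_size, survival_prob, n_trials=500, seed=42):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        survivors = sum(rng.random() < survival_prob for _ in range(pool_size))
        outcomes.append(survivors / pool_size)
    return statistics.mean(outcomes), statistics.stdev(outcomes)

for n in (100, 10_000):
    for sp in (0.98, 0.99):
        mean, sd = payout_per_head(n, sp)
        print(f"pool {n:>6}, survival {sp}: mean payout {mean:.4f}, std dev {sd:.5f}")

# The standard deviation (micro-mortality risk) falls roughly as
# 1/sqrt(pool size), while the shift in mean payout when survival moves from
# 0.98 to 0.99 (macro-mortality risk) is the same for every pool size --
# pooling cannot diversify it away.
```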
If we separate out macro mortality risk, then the decumulation phase of retirement savings can be commoditized in exactly the same way that indexation allowed the commoditization of the accumulation phase.
Posted at 10:08 pm IST on Thu, 8 Mar 2012 permanent link
Categories: pension
Stock market watching
From an interview with the President of the European Central Bank, Mario Draghi in the Wall Street Journal (WSJ) yesterday:
WSJ: What’s the first statistic you look at in the morning?
Draghi: Stock markets.
WSJ: Do you look at the euro exchange rate?
Draghi: Not in the early morning.
I am surprised that he did not mention the TED Spread or some other interest rate spread. And even within the stock market, it would appear that he is looking at market levels and not something like VIX. Are inflation targeting central banks actually closet asset price targeters?
Interesting to compare the Draghi quote with a controversial statement in the Indian parliament by former central banker, then finance minister, and future prime minister, Manmohan Singh in 1992:
But that does not mean that I should lose my sleep simply because stock market goes up one day and falls next day.
This provoked a retort from a parliamentary committee a year later:
It is good to have a Finance Minister who does not lose his sleep easily, but one would wish that when such cataclysmic changes take place all around, some alarm would ring to disturb his slumber
Posted at 2:44 pm IST on Fri, 24 Feb 2012 permanent link
Categories: equity markets, monetary policy
Intra day exposures once again
No, I am not talking about my obsession with intra-day risks (see here and here), but about the New York Fed’s uncharacteristically blunt criticism of the big clearing banks on this issue:
... the amount of intraday credit provided by clearing banks has not yet been meaningfully reduced, and therefore, the systemic risk associated with this market remains unchanged.
These structural weaknesses are unacceptable and must be eliminated.
... the Task Force [of the clearing banks] ... has not proved to be an effective mechanism for managing individual firms’ implementation of process changes
The Fed’s response is to step in directly to ensure that practices change:
... the New York Fed will intensify its direct oversight of the infrastructure changes
Ideas that have surfaced and could be considered include restrictions on the types of collateral that can be financed in tri-party repo and the development of an industry-financed facility to foster the orderly liquidation of collateral in the event of a dealer's default.
Nor is the criticism restricted to the banks; the Fed is equally critical of the non bank participants in this market:
Ending tri-party repo market participants’ reliance on intraday credit from the tri-party clearing banks remains a critical financial stability policy goal.
The Federal Reserve and other regulators will be monitoring the actions of market participants to ensure that timely action is being taken to reduce sources of instability in this market.
The background to all this is the strange way in which the tri-party repo market operates in the US. Leveraged investors in various securities finance their positions using overnight repos, which can be regarded as a form of secured borrowing. The securities in question are not, however, pledged directly to the lenders; they are held with the clearing bank (hence the name tri-party). The next morning, the lenders get their money back, but the borrower does not repay it out of its own resources – the clearing bank lends the money intra-day. Over the course of the day, the borrower drums up a new set of lenders willing to lend against its securities that night, and the bank again ends the day without any exposure to the borrower.
The weakness here is that the repo lenders are relying on the clearing bank to get their money back in the morning; the bank is relying on the repo lenders to get its money back in the evening; and both could become complacent about the risks involved. It is a little like two people passing a hot potato back and forth to each other, and pretending that there is no longer any hot potato to worry about. (It is actually worse than that because while the potato would cool in a few minutes, the securities underlying the repo may take years to mature – assuming that they do not default in between.) Of course, everybody wakes up at times of stress, and then the clearing bank is in the position of having to decide each day whether to throw the borrower into bankruptcy by refusing to clear its repos. This is hardly the way to organize such a large and systemically important market.
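A stylized timeline of the daily cycle described above – my own simplification with hypothetical labels, not anything taken from the Fed’s statement – makes it clear that the exposure never disappears; it only changes hands.

```python
from dataclasses import dataclass

# A stylized sketch of the daily tri-party cycle; the phase names and comments
# are my own hypothetical labels used only to illustrate who bears the exposure.

@dataclass
class Phase:
    time: str
    holder_of_exposure: str   # who is financing the dealer's securities
    comment: str

DAILY_CYCLE = [
    Phase("overnight",        "repo lenders",  "cash lent against securities held at the clearing bank"),
    Phase("morning unwind",   "clearing bank", "lenders are repaid; the bank extends intraday credit"),
    Phase("during the day",   "clearing bank", "dealer lines up a new set of lenders for tonight"),
    Phase("evening settlement", "repo lenders", "new overnight repos settle; the bank's exposure goes to zero"),
]

for p in DAILY_CYCLE:
    print(f"{p.time:<18} exposure sits with {p.holder_of_exposure:<13} -- {p.comment}")

# The 'hot potato' never disappears: at every moment either the lenders or the
# clearing bank is financing the dealer's entire book, and the stressed question
# each morning is whether the bank is still willing to take it.
```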
We should all be happy that, for once, the Federal Reserve seems to be taking things seriously instead of succumbing to regulatory capture.
Posted at 6:27 pm IST on Thu, 16 Feb 2012 permanent link
Categories: risk management
Intra-day laxity: MF Global edition
I blogged two years back about the tendency in finance to be prudent at night but reckless during the day in the context of the Lehman bankruptcy. A related phenomenon (compliant at night but transgressing during the day) is seen in the MF Global bankruptcy, according to the preliminary trustee report released earlier this week:
The investigation to date has found that transactions regularly moved between accounts and that funds believed to be in excess of segregation requirements in the commodities segregated accounts were used to fund other daily activities of MF Global ... apparently with the assumption that funds would be restored by the end of the day. By Wednesday, October 26th, as the result of increasing demands for funds or collateral throughout MF Global, funds did not return as anticipated. As these withdrawals occurred, a lack of intraday accounting visibility existed, caused in part by the volume of transactions being executed ... (Paragraph 7, emphasis added)
Of course, I am not a lawyer, but it appears to me that such intra-day laxity is not consistent with the Commodity Exchange Act or the CFTC Regulations:
... all money, securities, and property received by such ... [futures commission merchant] to margin, guarantee, or secure the trades or contracts of any customer ... shall be separately accounted for and shall not be commingled with the funds of such commission merchant ... (Section 4d of the Commodity Exchange Act)
Each futures commission merchant shall treat and deal with the customer funds of a commodity customer or of an option customer as belonging to such commodity or option customer. All customer funds shall be separately accounted for, and shall not be commingled with the money, securities or property of a futures commission merchant or of any other person ... (CFTC Regulation 1.20)
... futures commission merchant ... [may add] to such segregated customer funds such amount or amounts of money, from its own funds or unencumbered securities from its own inventory, of the type set forth in §1.25, as it may deem necessary to ensure any and all commodity or option customers’ accounts from becoming undersegregated at any time. The books and records of a futures commission merchant shall at all times accurately reflect its interest in the segregated funds. (CFTC Regulation 1.23, emphasis added)
It would appear to me that the words “at any time” and “at all times” prohibit intra-day withdrawal “with the assumption that funds would be restored by the end of the day” as well as “lack of intraday accounting visibility”.
As an aside, it is interesting to note that after looking at 800 computer drives and 100 terabytes of data, the trustees still do not know where the money has gone:
For three months the Trustee’s investigative team has worked to understand what happened during the final days of MF Global when cash and related securities movements were not always accurately and promptly recorded due to the chaotic situation and the complexity of the transactions. With these preliminary investigative conclusions in hand, the Trustee’s investigative team will analyze where the property wired out of bank accounts established to hold segregated and secured property ultimately ended up. (Paragraph 6)
The Trustee’s investigators, including the legal and forensic accounting teams, have conducted over 50 witness interviews, preserved secure access to thousands of boxes of hard copy documents, imaged over 800 computer drives, and are maintaining over 100 terabytes of data. (Paragraph 12)
Posted at 4:05 pm IST on Thu, 9 Feb 2012 permanent link
Categories: risk management
Market microstructure: Limit orders and order flow
One of my favourite post crisis themes has been the idea that market microstructure has macro consequences. I touch upon this in my paper on post crisis finance (mentioned in my blog posts here and here). Two papers that I read last month are related to this theme.
Psy-Fi blog pointed me towards a paper by Linnainmaa showing that the underperformance of individual investors’ portfolios can be attributed largely to their use of limit orders. When one looks at trades, it may appear that these investors were stupidly selling when the smart money was buying in response to good news. Linnainmaa’s point is that quite often, this apparent “selling” is not really active selling; their limit orders were simply being hit by the smart money. Looking at the totality of these investors’ orders may show no sell bias at all. What happens is that when the smart money is buying, the individual investors’ buy limit orders do not execute while their sell limit orders do.
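This selection effect is easy to reproduce in a toy simulation. The sketch below is my own construction, not Linnainmaa’s methodology: an uninformed investor posts perfectly symmetric limit orders around the prior price, yet the trades that actually execute are systematically on the wrong side of the news.

```python
import random

# A toy model, not Linnainmaa's methodology: news moves the fundamental value,
# informed traders move the price to that value, and the uninformed investor's
# symmetric limit orders (one tick above and below the prior price) execute
# only when the price moves through them.

def simulate(n_days=100_000, tick=1.0, news_vol=2.0, seed=0):
    rng = random.Random(seed)
    sells = buys = 0
    news_when_selling = 0.0
    for _ in range(n_days):
        news = rng.gauss(0.0, news_vol)      # change in fundamental value
        if news >= tick:                     # smart money buys, lifting the sell limit
            sells += 1
            news_when_selling += news
        elif news <= -tick:                  # smart money sells, hitting the buy limit
            buys += 1
    print(f"executed sells: {sells}, executed buys: {buys}")   # roughly equal
    print(f"average news on days the investor ended up selling: "
          f"{news_when_selling / sells:+.2f}")

simulate()
# The order flow has no sell bias (sells and buys execute about equally often),
# but every executed sell coincides with good news: conditioning on execution
# makes a passive, uninformed investor look like an active contrarian seller.
```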
If Linnainmaa is right, then one implication of this use of limit orders by uninformed traders is that the smart money is able to buy shares too cheaply – the price does not move as much as it should. This phenomenon may also be contributing to the well known momentum effect. This leads straight on to the second paper that I read recently (I do not recall to whom I owe a hat tip for this paper).
Beber, Brandt and Kavajecz published their paper “What Does Equity Sector Orderflow Tell Us About the Economy?” recently (the working paper version is available here). They not only show that the order flow into defensive sectors of the stock market forecasts recessions, but also that order flow does a better job than prices or returns. One possible explanation of why order flow is more informative than prices is of course the phenomenon described by Linnainmaa – since uninformed limit orders absorb some of the impact of informed buying, the full information content of the informed order flow is not impounded in prices.
This is of course totally different from genuine equilibrium models with representative agents, where prices are not only fully informative but also move with zero trading volume, implying that order flows and trading volumes are totally uninformative (or rather totally irrelevant).
Posted at 9:49 pm IST on Sun, 5 Feb 2012 permanent link
Categories: exchanges
Safe (or informationally insensitive) assets
Gary Gorton and his co-authors have produced a large literature on what they call safe assets (assets whose prices are informationally insensitive). They published two new papers this month on collateral crises and on the constant share of safe assets through the last half century. Their earlier papers on being slapped by the invisible hand and the run on repo are quite well known. The basic argument of this literature is that:
- Safe assets serve an important social function.
- Safe assets are in short supply – the demand for these assets exceeds the stock of government securities and other obvious safe assets.
- The shadow banking system is an important source of supply of safe assets.
- The shadow banking system and the safe assets that it creates must be protected from “runs” in the same way that bank deposits are protected.
A more radical version of this idea can be found in a paper by Morgan Ricks which argues that only licensed money-claim issuers should be permitted to issue short term debt and that all this debt should then be explicitly insured by the government.
Much of what we know about the demand for safe assets comes from the work of IMF economist Manmohan Singh (not to be confused with the Indian Prime Minister!). In a series of papers on the use of collateral in OTC derivatives, counterparty risk and central counterparties, collateral velocity, rehypothecation, and the reverse maturity transformation by asset managers, Singh and his co-authors have documented the need for safe assets in derivative markets and asset management.
What emerges from this discussion is that much of the demand for safe assets comes from sophisticated financial institutions and sovereign reserve managers. To my mind, this completely undermines the case for any form of subsidy for the creation of safe assets. The literature on participation in equity markets (which can be regarded as a proxy for risk taking in financial markets) demonstrates that participation is determined to a great extent by intelligence (Grinblatt et al), cognitive ability (Christelis et al), education (Cole and Shastry) and financial literacy (Rooij et al).
Most of the demanders of safe assets are big institutions (according to Manmohan Singh’s work), and one would expect them to possess a sufficient pool of intelligence, cognitive ability, education and financial literacy to be able to invest in risky assets. In some cases, portfolio risk may actually be lower if safe assets are replaced by equities. For example, Manmohan Singh explains how the securities lending activities of asset managers create a reverse maturity transformation – they convert the long term investment portfolio of households into a demand for short term assets (collateral). To the extent that equities are correlated with each other, it is plausible that collateral in the form of stocks similar to those that are lent out might reduce risk. To the extent that the borrower of the stocks is engaged in a “pair trade”, a natural supply of such collateral might exist.
I suspect that the demand for safe assets is better explained by a rational tradeoff between the costs and benefits of risk assessment (in a manner that bears some similarities to the rational inattention model of Sims). I therefore look at the huge demand for safe assets as a consequence of the moral hazard engendered by repeated bail-outs of the financial sector. Even sophisticated investors may find it optimal not to make a serious risk assessment of an asset which has little idiosyncratic risk and is exposed only to systemic risk, if the probability of such an asset (or rather its investors) being bailed out is quite high.
When one reads Gorton carefully, it becomes apparent that the safe (or informationally insensitive) assets are not risk free – they are only free of idiosyncratic risk. Systemic risk is less subject to information asymmetry and therefore does not pose the problems that Gorton attributes to risky assets in general. But then the ability of the state to insure against systemic risk is highly suspect, because if such insurance is attempted on a sufficiently large scale, the result is likely to be a sovereign debt crisis when the systemic risk event materializes. Capitalism to my mind is about accepting and dealing with failure, while the path that Gorton and Ricks are proposing is the path of socialism.
I see a similarity between the desire of the rentier class for safe assets and the desire of the working class for defined benefit pension plans. In both cases, the desire is to shift the risks to the taxpayers and thereby avoid the cognitive burden of making informed choices. In the case of the working class, society has over the last few decades rejected the demand for “informationally insensitive” pensions (defined benefit plans) despite the fact that lower levels of financial education might make the cognitive burden quite high for many of these people. I see no reason why the rentier class should receive a more favourable treatment.
Posted at 9:39 pm IST on Thu, 26 Jan 2012 permanent link
Categories: bond markets