Legal theory of finance
The Journal of Comparative Economics (subscription required) has a special issue on Law in Finance (The CLS Blue Sky Blog has a series of posts summarizing and commenting on this work – see here, here and here). The lead paper in the special issue by Katharina Pistor presents what she calls the Legal Theory of Finance (LTF); the other papers are case studies of different aspects of this research programme.
Most finance researchers are aware of the Law and Finance literature (La Porta, Shleifer, Vishny and a host of others), but Pistor argues that “Law & Finance is ... a theory for good times in finance, not one for bad times.” She argues that though finance contracts may appear to be clear and rigid, they are in reality in the nature of incomplete contracts because of imperfect knowledge and inherent uncertainty. When tail events materialize, it is desirable to rewrite the contracts ex post. This can be done in two ways: first, by the taxpayer bailing out the losers; second, by an elastic interpretation of the law.
One of the shrill claims of the LTF is that legal enforcement is much more elastic at the centre while being quite rigid at the periphery. Bailouts are also more likely at the centre. I do not see anything novel in this observation, which should be obvious to anybody who has not forgotten the first word of the phrase “political economy”. It should also be obvious to anybody who has read Shakespeare’s great play about finance (The Merchant of Venice), and noted how differently the law was applied to Jews and Gentiles. It has also been all too visible throughout the global financial crisis and now in the eurozone crisis.
Another persistent claim is that all finance requires the backstop of the sovereign state which is the sole issuer of paper money. This is in some sense true of most countries for the last hundred years or so, though I must point out that the few financial markets that one finds in Somalia or Zimbabwe function only because they are not dependent on the state. The LTF claim on the primacy of the state was certainly not true historically. Until the financial revolution in Holland and later in England, merchants were more creditworthy than sovereigns. Bankers bailed out the state and not the other way around.
Most of the case studies in the special issue do not seem to be empirically grounded in the way that we have come to expect in modern finance. I was not expecting any fancy econometrics, but I did expect to see the kind of rich detail that I have seen in the sociology of finance literature. The only exception was the paper by Akos Rona-Tas and Alya Guseva on “Information and consumer credit in Central and Eastern Europe”. I learned a lot from this paper and will probably blog about it some day, but it seemed to be only tangentially about the LTF.
Posted at 1:40 pm IST on Thu, 25 Jul 2013 permanent link
Categories: law, regulation
Dubious legal foundations of modern finance?
I have been reading a 2008 paper by Kenneth C. Kettering (“Securitization and Its Discontents: the Dynamics of Financial Product Development”) arguing that securitization is built on dubious legal foundations – specifically, that there are possible conflicts with aspects of fraudulent transfer law. Kettering argues that securitization is an example of a financial product that has become so widely used that it cannot be permitted to fail, notwithstanding its dubious legal foundations.
I am not a lawyer (and Kettering’s paper is over 150 pages long), and therefore I am unable to comment on the legal validity of his claims. But I recall reading Annelise Riles’s book Collateral Knowledge: Legal Reasoning in the Global Financial Markets (University of Chicago Press, 2011), which makes somewhat similar claims. Her ethnographic study, however, was focused on Japan, and when I read that book, I had assumed that the problems were specific to that country.
Posted at 9:55 pm IST on Sun, 14 Jul 2013 permanent link
Categories: bond markets, derivatives, law
Non-discretionary portfolio management
Last month, the Reserve Bank of India (RBI) released draft guidelines on wealth management by banks. I have no quarrels with the steps that the RBI has taken to reduce mis-selling. My comments are related to something that they did not change:
4.3.2 PMS-Non-Discretionary
4.3.2.1 The non-discretionary portfolio manager manages the funds in accordance with the directions of the client. Thus under Non-Discretionary PMS, the portfolio manager will provide advisory services enabling the client to take decisions with regards to the portfolio. The choice as well as the timings of the investment decisions rest solely with the investor. However the execution of the trade is done by the portfolio manager. Since in non-discretionary PMS, the portfolio manager manages client portfolio/funds in accordance with the specific directions of the client, the PMS Manager cannot act independently.
4.3.2.2 Banks may offer non-discretionary portfolio management services.
...
4.3.2.3 Portfolio Management Services (PMS)- Discretionary: The discretionary portfolio manager individually and independently manages the funds of each client in accordance with the needs of the client. Under discretionary PMS, independent charge is given by the client to the portfolio manager to manage the portfolio/funds. ... Banks are prohibited from offering discretionary portfolio management services. (emphasis added)
I am surprised that regulators have learnt nothing from the 2010 episode in which an employee of a large foreign bank was able to misappropriate billions of rupees from high net worth individuals, including one of India’s leading business families (see for example here, here and here).
My takeaway from that episode was that discretionary PMS is actually safer and more customer friendly than non-discretionary PMS. After talking to numerous people, I am convinced that the so-called non-discretionary PMS is pure fiction. In reality, there are only two ways to run a large investment portfolio:
- The advisory model where the bank provides investment advice and the client takes investment decisions and also handles execution, custody and accounting separately.
- The de facto discretionary PMS where the bank takes charge of everything. The fiction of a non-discretionary PMS is maintained by the customer signing off on each transaction, often by signing blank cheques and other documents.
When you think carefully about it, the bundling of advice, execution, custody and accounting without accountability is a serious operational risk. One could in fact argue that the RBI should ban non-discretionary PMS and allow only discretionary PMS. Discretionary PMS is relatively safe because the bank has unambiguous responsibility for the entire operational risk.
The only argument for non-discretionary PMS might be if the PMS provider is poorly capitalized or otherwise not very reliable. But in this case, the investor should be imposing strict segregation of functions and should never be entrusting advice, execution, custody and accounting to the same entity.
Posted at 1:59 pm IST on Sun, 7 Jul 2013 permanent link
Categories: mutual funds, regulation
Consumer protection may be a bigger issue than systemic risk
Since the global financial crisis, policy makers and academics alike have focused attention on systemic risk, but consumer protection is an equally big, if not bigger, issue that has not received equal attention. John Lanchester has a long (6700 word) essay in the London Review of Books, arguing that the PPI (payment protection insurance) mis-selling scandal in the UK was bigger than all other banking scandals – the London Whale, UBS (Adoboli), HBOS, Libor rigging and several others.
Lanchester argues the case not only because the costs of the PPI scandal could reach £16-25 billion ($24-37 billion), but also because it happened at the centre of the banks’ retail operations and involved a more basic breach of what banking is supposed to be about. Interestingly, the huge total cost of the scandal is the aggregation of small average payouts of only £2,750 to each affected customer, indicating that the mis-selling was so pervasive as to become an integral part of the business model itself.
Posted at 12:40 pm IST on Tue, 2 Jul 2013 permanent link
Categories: behavioural finance, regulation
Starred items from Google Reader
Updated: In the comments, Maries pointed me to Mihai Parparita’s Reader is Dead tools (see also here and here). Though Google Reader has officially shut down, it is still accessible, Mihai’s tools are still working, and I was able to create a multi-GB archive of everything that existed in my Google Reader. But the tools required to read this archive are still under development. So in the meantime, I still need my old code and maybe more such code to read all the XML and JSON files in these archives.
With Google Reader shutting down, I have been experimenting with many other readers including Feedly and The Old Reader. Since many feed readers are still being launched and existing readers are being improved, I may keep changing my choice over the next few weeks. Importing subscriptions from Google Reader to any feed reader is easy using Google Takeout. The problem is with the starred items. I finally sat down and wrote a python script that reads the starred.json file that is available from Google Takeout and writes out an html file containing all the starred items.
Python’s json library makes reading and parsing the json file a breeze. By looking at some of the entries, I think I have figured out the most important elements of the structure. I am not sure that I have understood everything, and so suggestions for improving the script are most welcome.
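For what it is worth, here is a stripped-down sketch of the script. The field names are simply what I inferred from my own starred.json, so treat them as assumptions rather than a documented schema:

```python
import json

# A stripped-down sketch of the script. The field names are what I inferred
# from my own starred.json; other exports may differ.
with open("starred.json", encoding="utf-8") as f:
    data = json.load(f)

entries = []
for item in data.get("items", []):
    title = item.get("title", "(untitled)")
    # the post's own URL usually sits in the first "alternate" element
    link = (item.get("alternate") or [{}])[0].get("href", "")
    # full feeds carry the post under "content", truncated ones under "summary"
    body = (item.get("content") or item.get("summary") or {}).get("content", "")
    entries.append('<h2><a href="%s">%s</a></h2>\n%s' % (link, title, body))

with open("starred.html", "w", encoding="utf-8") as f:
    f.write("<html><body>\n" + "\n<hr>\n".join(entries) + "\n</body></html>")
```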
Where the original feed does not contain the entire post, but only a summary, ideally I would like to follow the link, convert the web page to PDF and add a link pointing to the converted PDF file. This would protect against link rot. I tried doing this with wkhtmltopdf, but I was not satisfied with the quality of the conversion. Any suggestions for doing this would be most welcome. Ideally, I would like to use Google Chrome’s ability to print a web page as PDF, but I could not find any command-line options to automate this from within the python script.
Posted at 12:07 pm IST on Tue, 2 Jul 2013 permanent link
Categories: miscellaneous, technology
What is the worth of net worth?
The Securities and Exchange Board of India (SEBI) announced today:
Presently, mutual funds are not allowed to appoint a custodian belonging to the same group, if the sponsor of the mutual fund or its associates hold 50 per cent or more of the voting rights of the share capital of such a custodian or where 50 per cent or more of the directors of the custodian represent the interests of the sponsor or its associates.
The Board has decided that the custodian in which the sponsor of a mutual fund or its associates are holding 50 percent or more of the voting rights of the share capital of the custodian, would be allowed to act as custodian subject to fulfilling the following conditions i.e. (a) the sponsor should have net worth of atleast Rs.20,000 crore at all points of time, ...
To provide some perspective on this, the last reported net worth of Lehman was $19.283 billion, which is about five times the Rs.20,000 crore (roughly $3.5 billion at mid-2013 exchange rates) stipulated in the above announcement. (The Lehman figure is from the quarterly 10-Q report filed by Lehman on July 10, 2008, about two months before it filed for bankruptcy.)
Even assuming that the reported net worth is reliable, what I fail to understand is the implicit assumption in the world of finance that wealthy people are somehow more honest than poor people. As far as I am aware, the evidence for this is zero. This widely prevalent view is simply the result of intellectual capture by the plutocracy.
Capital in finance has only one function – to absorb losses. I would have understood if SEBI had proposed that a variety of sins of the custodian would be forgiven if it (the custodian and not its sponsor) had a ring fenced net worth of Rs.20,000 crore invested in high quality assets.
Posted at 9:37 pm IST on Tue, 25 Jun 2013 permanent link
Categories: regulation
CBOE 2013 versus BSE 1993
Earlier this week, the US SEC imposed a fine on the Chicago Board Options Exchange (CBOE) for conduct reminiscent of what used to happen in the Bombay Stock Exchange (BSE) two decades ago. In the early 1990s, the BSE board was dominated by broker members, and allegations of favouritism, conflict of interest and neglect of regulatory duties were very common. At that time, many of us believed that these were the kinds of problems that the US SEC had solved way back in the late 1930s under Chairman Douglas. India might have been six decades behind the US, but it is widely accepted that securities market reforms in the 1990s solved this problem in India, though this solution might have created a different set of problems.
The SEC order reveals problems at the CBOE which are very similar to those that the BSE used to have in the early 1990s:
- The SEC order concerns the CBOE’s treatment of a member firm whose CEO was a member of CBOE’s Board of Directors and sat on CBOE’s Audit Committee (para 30 and 60). Incidentally, if you glossed over these two sentences while reading the entire 32-page order, you would completely miss this crucial fact. The SEC clearly did not want to highlight this issue at all and did not mention it in its press release.
- “Not only did CBOE fail to adequately detect violations and investigate and discipline one of its members, CBOE also took misguided and unprecedented steps to assist that same member which was under investigation by the Commission’s Enforcement Division staff and failed to provide information to Commission staff when requested.” (para 22)
- The CBOE helped the member firm to respond to a notice from the SEC by providing it with a summary of its (CBOE’s) investigative file despite knowing that CBOE investigations, and information obtained from other regulators during those investigations, were to be kept confidential. The member firm asked CBOE to provide “review, modification and insight” into its response to the SEC notice and the CBOE edited the response and emailed the “redlined” edits to the member firm. (para 34-35)
- When informed that the CBOE Department of Member Firm Regulation planned to issue a notice to this member firm for operating a non-registered dealer, CBOE’s former President and Chief Operating Officer asked that the notice not be issued until after an upcoming meeting with the member firm’s CEO (para 60).
- “CBOE made several financial accommodations to certain members [including the above member], and not to others, that were not authorized by existing rules. ... The accommodations were made for business reasons and were authorized by senior CBOE business executives who lacked an understanding of CBOE’s legal obligations as a self-regulatory organization.” (para 65 and 66)
- “CBOE staff responsible for the Exchange’s Reg. SHO surveillance never received any formal training on Reg. SHO, were instructed to read the rules themselves, did not have a basic understanding of what a failure to deliver was, and were unaware of the relationship between failures to deliver and a clearing firm’s net short position at the Depository Trust and Clearing Corporation (‘DTCC’). ... In fact, the investigator primarily responsible for monitoring the Reg. SHO surveillance from the third quarter 2009 to the second quarter 2010 had never even read the rule in its entirety, but only briefly perused it.” (para 14)
In financial regulation, no problems are permanently solved – potential problems just remain dormant, ready to resurface under more favourable conditions.
Posted at 2:29 pm IST on Sat, 15 Jun 2013 permanent link
Categories: exchanges, regulation
The Baselization of CCPs
There was a time when central counterparties (CCPs) used robust and coherent risk measures. Way back in 1999, Artzner et al. could write that “We do not know of organized exchanges using value at risk as the basis of risk measurement for margin requirements” (Artzner, Delbaen, Eber and Heath (1999), “Coherent measures of risk”, Mathematical Finance, 9(3), 203-228, Remark 3.9 on page 217). During the global financial crisis, while Basel style risk management failed spectacularly, exchanges and their CCPs coped with the risks quite well. (I wrote about that here and here).
But things are changing as CCPs gear up to clear OTC derivatives. The robust risk management of CCPs is not percolating to the OTC world; instead, the model-risk infested risk measures of the OTC dealers are spreading to the CCPs. The OTC Space has a nice discussion of how the systems that CCPs use to margin OTC derivatives are different from the systems that they use for exchange traded derivatives. No, the CCPs are sticking to expected shortfall and are not jumping into value at risk. But their systems are becoming more model dependent, more dynamic (and therefore procyclical) and more sensitive to recent market conditions. These are the characteristics of Basel (even with Basel’s proposed shift to expected shortfall), and these characteristics are gradually spreading to the CCP world.
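For readers who have not seen the two risk measures side by side, here is a minimal sketch on simulated (purely illustrative) P&L: value at risk is just a quantile of the loss distribution, while expected shortfall averages all the losses beyond that quantile, which is what makes it coherent in the Artzner et al. sense:

```python
import numpy as np

# Minimal sketch on simulated, heavy-tailed P&L (illustrative only):
# value at risk (VaR) is a quantile of the loss distribution, while
# expected shortfall (ES) averages all the losses beyond that quantile.
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=4, size=100_000)   # fat-tailed profit and loss

alpha = 0.99
var = -np.quantile(pnl, 1 - alpha)   # 99% VaR: loss exceeded 1% of the time
es = -pnl[pnl <= -var].mean()        # 99% ES: average loss in that 1% tail
print("99%% VaR = %.2f, 99%% ES = %.2f" % (var, es))   # ES always exceeds VaR
```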
I am not convinced that this is going to end well, but then CCPs are also rapidly becoming CDO-like (see my post here) and therefore their failure in some segments might not matter anymore.
Posted at 3:48 pm IST on Sun, 9 Jun 2013 permanent link
Categories: banks, exchanges, risk management
The NASDAQ Facebook Fiasco and Open Sourcing Exchange Software
Last week, the US SEC issued an order imposing a $10 million fine on NASDAQ for the software errors that caused a series of problems during the Facebook IPO on May 18, 2012. I think the SEC has failed in its responsibilities because this order does nothing whatsoever to solve the problems that it has identified. The order reveals the complete cognitive capture of the SEC and other securities regulators worldwide by the exchanges that they regulate.
The entire litany of errors during the Facebook IPO demonstrates that critical financial market infrastructures like exchanges and depositories should be forced to publish the source code of the systems through which their rules and bylaws are implemented. Of course, the exchanges will complain about the dilution of their “intellectual property”. But the courts have whittled down the “intellectual property” embedded in standard-essential patents and this principle applies with even greater force to software which implements rules and bylaws that are effectively subordinate legislation. Financial regulators have simply fallen behind the times in this respect.
What is the point of an elaborate process of filing and approval for rule changes, if there is no equivalent process for the actual software that implements the rule? The SEC order shows several instances where the lack of disclosure or approval processes for software changes made a complete mockery of the disclosure or approval processes for the rules and regulations themselves:
- “While NASDAQ’s rules had previously provided for a randomization interval at the conclusion of the DOP, in 2007, NASDAQ filed a rule change removing the randomization period from its rule. ... However, the randomization function had never been removed from NASDAQ’s systems, and therefore the IPO Cross Application for Facebook – and for all other companies that had an IPO on NASDAQ since August 31, 2007 – was run after a randomized period of delay in contravention of NASDAQ’s rules.” (para 16).
- While installing an upgrade to the NASDAQ trading systems, an “employee misinterpreted the instructions associated with the upgrade and assumed that the SHO Through application was not needed and could be removed from the system. As a result, the employee removed the SHO Through application.” A second employee, who was responsible for checking the work of the first employee, “also misinterpreted the upgrade instructions to mean that the SHO Through application could be removed”. Personnel running the daily configuration test for the exchange’s trading systems “received a system alert based on the fact that the SHO Through application was no longer part of the system. ... they also thought the SHO Through application could be removed.” The error was detected only several days later in response to an enquiry from a trading member. (para 52-54)
- With inadequate understanding of the software bug that was causing problems in the Facebook IPO, the exchange implemented a hasty software change to bypass a validation check with full knowledge that this would cause “the exchange itself to take the opposite side of the mismatch” caused by the removal of the validation check. However, “NASDAQ did not have a rule that allowed NASDAQ ... to assume an error position in any listed security.” (para 24 and 28)
The Facebook fiasco was itself the result of an infinite loop in the software. This infinite loop would almost certainly have been detected if the source code had been publicly released and discussed with the same attention to detail that characterizes rule changes.
The lack of well-defined processes for software testing is revealed in this tidbit: “Given the heightened anticipation for the Facebook IPO, NASDAQ took steps during the week prior to the IPO to test its systems in both live trading and test environments. Among other things, NASDAQ conducted intraday test crosses in NASDAQ’s live trading environment, which allowed member firms to place dummy orders in a test security (symbol ZWZZT) during a specified quoting period. NASDAQ limited the total number of orders that could be received in the test security to 40,000 orders. On May 18, 2012, NASDAQ members entered over 496,000 orders into the Facebook IPO cross.” It should be obvious that the one thing that could have been anticipated prior to the Facebook IPO was the vastly greater volumes than in small-time IPOs. Doing a test that excluded this predictable issue is laughable. Proper rules would have required the postponement of the IPO when the volume exceeded the tested capacity of the system.
It is my considered view that the SEC and other securities regulators worldwide are complicit in the fraud that exchanges perpetrate on investors in their greed to protect the alleged “intellectual property” embedded in their software. I have been writing about this for a dozen years now: (1, 2, 3, and 4). So the chances of anything changing any time soon are pretty remote.
Posted at 6:38 pm IST on Sat, 1 Jun 2013 permanent link
Categories: exchanges, technology
St Petersburg once again
One and a half years ago, I blogged about a paper by Peters that purported to resolve the St Petersburg paradox by using time averages. I finally got around to writing this up as a working paper (also available at SSRN). The content is broadly similar to the blog post except for some more elaboration and the introduction of a time-reversed St Petersburg game as a further rebuttal of the time resolution idea.
Posted at 12:37 pm IST on Thu, 30 May 2013 permanent link
Categories: behavioural finance, mathematics, statistics
Currency versus stocks
In India, we are accustomed to seeing the rupee and the stock market moving in the same direction as both respond to foreign investment flows. In recent weeks, the pattern has changed as a weakening rupee has coincided with a rising stock market. Another country with the same pattern is Japan, where some commentators have argued that the pattern makes sense if foreign investors are hedging the currency risk while buying stocks. This is an interesting idea with potential relevance to India – foreigners can be long the private sector (equities) and short the government (currency).
Posted at 2:28 pm IST on Tue, 28 May 2013 permanent link
Categories: international finance
Macroprudential policy or financial repression
Douglas J. Elliott, Greg Feldberg, and Andreas Lehnert published a FEDS working paper last week entitled The History of Cyclical Macroprudential Policy in the United States. In gory detail, the paper describes every conceivable credit restriction that the US has imposed at some time or the other over some eight decades. It appears to me that most of them are best characterized as financial repression and not macroprudential policy. If one adopts the authors’ logic, one could go back to the middle ages and describe the usury laws as macroprudential policy.
Some two decades ago, we thought that financial repression had been more or less eliminated in the developed world, and was being gradually eliminated in the developing world as well. Post crisis, as much of the developed world deals with the sustainability of sovereign debt, financial repression is back in fashion, and macroprudential regulation provides a wonderful figleaf.
Posted at 12:07 pm IST on Mon, 20 May 2013 permanent link
Categories: monetary policy, regulation
The CDO'ization of everything
Six years ago, when the global financial crisis began, Collateralized Debt Obligations (CDOs) were regarded as the villains that were the source of all problems. Today, the wheel has come full circle, and CDO-like structures have become the solution to all problems.
- All government rescue packages around the world were structured like CDOs. The typical recipe was as follows: all problem assets were pooled into a CDO-like structure (for example, Maiden Lane); the pool was tranched; the beneficiary institution held the equity tranche (first dollar of loss subject to a modest limit); and the central bank or government held the senior piece. The toxic assets disappeared from the balance sheet of the tottering TBTF institution which therefore became solvent (or a little less insolvent). The central bank then embarked on unconventional monetary policy actions that boosted asset prices and ensured that the senior piece became worth par.
- The same strategy was adopted for dealing with the sovereign debt crisis in the eurozone. The European Stability Mechanism (ESM), like its predecessor the EFSF, was a CDO-like structure that received a AAA rating because of over-collateralization, though its sponsors included a number of beleaguered nations whose creditworthiness was hardly pristine.
- Of late, the strategy has been applied retrospectively to failing institutions in Europe. Post facto, banks have become CDOs, with lower tranches being written off in typical CDO style without a formal bankruptcy process. The Dutch bank SNS Bank (and its parent SNS Reaal) was turned into a CDO by government decree overnight. On January 31, 2013, you might have thought that you had lent money to SNS Bank; on February 1, you were told that you were actually holding a tranche of a CDO, and that this tranche had been written down to zero (the decree actually called it expropriation). Meanwhile, the bank continued to operate normally and the depositors were fine because they were holding a senior tranche that was still unimpaired.
- The strategy has been pushed even further in the Cyprus package. In this case, the depositors were told that their tranche was also impaired.
- The clearing corporations that were the oases of stability during 2008 have also joined the game. They are now openly saying that their obligation to guarantee all trades is not really a guarantee anymore. If your trade has been novated by the clearing corporation, then you actually hold a highly senior tranche of a CDO: if the losses eat through all junior tranches, then first your mark to market gain will be haircut, and if that is not enough then your contracts will simply be torn up at some settlement price that ensures the continued solvency of the clearing corporation. After that, the clearing corporation will continue to operate normally. The Bank of England recently published a Financial Stability Paper entitled “Central counterparty loss-allocation rules” by David Elliott which describes in gory detail how far this process has progressed around the world.
The biggest innovation in the CDO was actually a contractual bankruptcy process that is lightning fast and extremely low cost (see my blog posts on the Gorton-Metrick and Squire papers that argue this in detail). The world is gradually coming around to realizing that normal bankruptcy does not work for the financial sector and that the contractual CDO alternative is far better. In 2006, I wrote that the invention of CDOs had made banks and other legacy financial institutions unnecessary. The crisis seems to be turning that speculation into reality.
Posted at 11:53 am IST on Thu, 16 May 2013 permanent link
Categories: derivatives
Interest rate models and central bank corridors
In my blog post last month about interest rate models at the zero bound, I did not consider the effect of central bank corridor policies. I realized that this was an important omission when I looked at the decision of the European Central Bank (ECB) a couple of days ago to lower the main refinancing rate (the central rate in the corridor) by 0.25% and the marginal lending facility rate (the upper rate in the corridor) by 0.50%. Why was one rate lowered by twice as much as the other? The answer is that with the deposit rate (the lower rate in the corridor) stuck at zero since July 2012, the only way to keep the corridor symmetric is to set the upper rate to be exactly twice the central rate. So the marginal lending facility rate will always change by twice the change in the main refinancing rate!
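The arithmetic is trivial, but a minimal sketch makes the point; the levels below are the May 2013 ones, if I have them right:

```python
# The corridor arithmetic in miniature. With the deposit (lower) rate stuck
# at zero, symmetry requires upper - central = central - lower, so the upper
# rate must be exactly twice the central rate. Levels are the May 2013 ones.
lower = 0.0
for central in (0.0075, 0.0050):          # before and after the 0.25% cut
    upper = central + (central - lower)   # keep the corridor symmetric
    print("central %.2f%% -> upper %.2f%%" % (central * 100, upper * 100))
# central 0.75% -> upper 1.50%
# central 0.50% -> upper 1.00%  (a 0.50% move, twice the cut in the central rate)
```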
Of course, despite the deeply (biologically) ingrained love of symmetry, central banks can decide to abandon symmetry and move the central and upper rates independently. In fact, the historical data shows that in the first three months of the ECB’s existence, the corridor was not symmetric around the central rate, but since April 1999, the symmetry has been maintained.
Modelling short term rates in a symmetric corridor floored at zero is problematic. The log normal model has problems because it does not allow rates to be zero. Yet it is the natural way to model the proportionate changes in the central and upper rates.
Posted at 1:46 pm IST on Fri, 3 May 2013 permanent link
Categories: derivatives, monetary policy, post crisis finance, risk management, statistics
Seigniorage, Tobin tax, fiat money, gold and Bitcoin
It is obvious that fiat money leads to seigniorage income for the sovereign, but one would imagine that a decentralized open source money like Bitcoin (see my blog post earlier this month) would not allow anybody to earn seigniorage income. When one examines the Bitcoin design, however, one finds that it allows those with enough computing power to extract two forms of seigniorage income:
- In the early years of Bitcoin, computing power allows the mining of new bitcoins. This is pure seigniorage.
- When most of the coins have been mined, computing power can be used to charge transaction fees on every bitcoin transaction. This is also seigniorage income in the form of an all-encompassing Tobin tax beyond the wildest dreams of the proponents of that tax.
Is this a design flaw or is it a necessary feature? After careful consideration, I think it is necessary. A monetary system can be sustained only if there are people with the incentive to invest in the maintenance of the system. In the case of fiat money, the sovereign expends considerable effort in preventing counterfeiting. One might think that commodity money like gold does not require such effort. But the historical evidence suggests otherwise:
- After the collapse of the Roman empire, “within a generation, by about A.D. 435, coin ceased to be used [in Britain] as a medium of exchange ... although many survived as jewellery, or were used for gifts or for compensation.” (Christine Desan, “Coin Reconsidered: The Political Alchemy of Commodity Money”, quoting Peter Spufford.) With nobody having enough seigniorage income to try and maintain the system, commodity money was simply re-purposed to non-monetary uses, and Britain relapsed into a barter economy.
- Christine Desan also points out that a monetary system based on silver was reestablished centuries later by sovereigns who extracted seigniorage income by charging a 5-10% spread between the mint and melting points of the metal.
- On the other hand, Luther and White have several papers showing that after the collapse of the Somali government, the old currency continued to circulate and local warlords maintained the money supply by counterfeiting the old currency notes to earn seigniorage income.
All this suggests that any form of money (whether fiat, commodity or a decentralized open source money like Bitcoin) needs some form of seigniorage to sustain it.
Posted at 10:17 pm IST on Sat, 27 Apr 2013 permanent link
Categories: blockchain and cryptocurrency, monetary policy
Interest rate modelling at the zero lower bound
A long time ago, before the Libor Market Model came to dominate interest rate modelling, a lot of attention was paid to how interest rate volatility depended on the level of interest rates. If rates are moving up and down by 0.5% around a level of 3%, how much movement is to be expected when the level changes to 6%? One school of thought argued that rates would continue to fluctuate ±0.5%; this very conveniently allowed the modeller to assume that rates follow the normal distribution. An opposing school argued that a fluctuation of ±0.5% around a level of 3% was actually a fluctuation of 1⁄6 of the level. Therefore when the level shifts to 6%, the fluctuation would be ±1% to preserve the same proportionality of 1⁄6 of the level. This was also convenient as modellers could assume that interest rates are log-normally distributed.
It was also possible to take a middle ground – the celebrated square root model related the fluctuations to the square root of the level. A doubling of the level from 3% to 6% would cause the fluctuation to rise by a factor of √2 from 0.5% to 0.71%. People generalized this even further by assuming that the fluctuations scaled as (level)^λ, where λ=0 gives the normal model, λ=1 leads to the log-normal, and λ=0.5 yields the square root model. Of course, there is no need to restrict oneself to just one of these three magic values. The natural thing for any statistician to do is to estimate λ from the data using standard maximum likelihood or other methods. Long ago, I did do such estimations for Indian interest rates.
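A quick and dirty sketch of such an estimation: since the fluctuations scale as (level)^λ, a regression of the log of absolute rate changes on the log of the rate level recovers λ as the slope. The simulated series and all parameter values below are purely illustrative:

```python
import numpy as np

# Quick and dirty estimation of lambda in dr = sigma * r^lambda * dW: since
# log|dr| = log sigma + lambda * log r + noise, regressing log|dr| on log r
# recovers lambda as the slope. The simulated series is illustrative only;
# an actual rate series would go in its place.
rng = np.random.default_rng(0)
lam_true, sigma, dt = 0.5, 0.05, 1.0 / 250
r = [0.06]
for _ in range(5000):
    dr = sigma * r[-1] ** lam_true * np.sqrt(dt) * rng.normal()
    r.append(max(r[-1] + dr, 1e-6))
r = np.array(r)
dr = np.diff(r)

mask = dr != 0
X = np.column_stack([np.ones(mask.sum()), np.log(r[:-1][mask])])
slope = np.linalg.lstsq(X, np.log(np.abs(dr[mask])), rcond=None)[0][1]
print("estimated lambda = %.2f" % slope)  # ~0 normal, ~0.5 square root, ~1 log-normal
```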
The Libor Market Model killed this cottage industry. It was most natural to assume log normal distributions for the interest rates and then let the option implied volatility smile deal with departures from this distributional assumption. And there matters rested until the problem resurfaced when interest rates were driven down to zero after the global financial crisis. The difficulty is that zero is an inaccessible boundary point for a log normal process. A log normal process (geometric Brownian motion) cannot reach zero (in any finite time) starting from any positive rate, and if you somehow started it out from zero, it could never leave zero (because the volatility becomes zero).
The regulatory push to mandate central clearing for OTC derivatives has turned this esoteric modelling issue into an important policy concern because central clearing counterparties (CCPs) have to set margins for a variety of interest rate derivatives where the modelling of volatility becomes a first order issue. A variety of different approaches are being taken. The OTCSpace blog links to a couple of practitioner oriented discussions on this subject (here and here). Among the solutions being proposed are the following:
- Shift to a normal model: This would eliminate under-margining at zero interest rates, but potentially create severe under-margining at high rates.
- Combine normal and log-normal fluctuations: The idea is that there are two sources of fluctuations in interest rates – one behaves in a “normal” and the other in a “log-normal” manner. This may be intractable for valuation purposes, but might be acceptable for risk modelling since it solves the under-margining problem at both ends of the interest rate spectrum.
- Interest rate plus a small constant is log-normal: For example, assume that the fluctuations in interest rates are proportional to the level of rates plus 1% (a minimal sketch of this idea follows the list).
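Concretely, in the sketch below (all parameters purely illustrative) the shifted rate r + s follows a geometric Brownian motion, so proportional fluctuations survive at the zero bound and the rate is floored at −s rather than at zero:

```python
import numpy as np

# Minimal sketch of the shifted log-normal idea (all parameters illustrative):
# r + s follows a geometric Brownian motion, so the volatility of the rate
# stays positive at r = 0, and the rate is floored at -s rather than at zero.
rng = np.random.default_rng(0)
s, sigma, dt = 0.01, 0.5, 1.0 / 250   # 1% shift, 50% proportional volatility
path = [0.001]                        # start almost at the zero bound
for _ in range(250):
    shifted = (path[-1] + s) * np.exp(
        -0.5 * sigma ** 2 * dt + sigma * np.sqrt(dt) * rng.normal())
    path.append(shifted - s)
print("min rate = %.4f, max rate = %.4f" % (min(path), max(path)))
# unlike a pure log-normal model, the rate can fall through zero (down to -1%),
# so margin models built on it do not see volatility die out at the bound
```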
As an aside, I believe that the zero lower bound is actually a bound not on the interest rate, but on the contango on money. In other words, the zero lower bound is simply the proposition that money (being the unit of account itself) can neither be in contango nor in backwardation. The standard cost of carry model for futures pricing tells us that the contango on money is equal to the risk free interest rate PLUS the storage cost of money MINUS the convenience yield. It is this contango that is constrained to be zero.
If the convenience yield of money is larger than the storage costs (as it usually is in normal times), the contango is zero when the interest rate is positive. In an era of unlimited monetary easing, the convenience yield of money can become very small and the zero contango implies a slightly negative interest rate since the storage cost is not zero. For physical currency, the storage cost is high because of the need to guard against theft. For insured bank deposits, the bank needs to recoup deposit insurance in some form through various fees. Of course, uninsured bank deposits are not money – they are simply a form of haircut prone debt (think Cyprus). Actually, Cyprus makes one sceptical about whether even insured bank deposits are money.
Posted at 4:48 pm IST on Tue, 16 Apr 2013 permanent link
Categories: derivatives, monetary policy, post crisis finance, risk management, statistics
Bitcoin, negative interest rates and the future of money
I believe that everybody who is interested in money should study the digital currency Bitcoin very carefully because monetary innovations can have long lasting consequences even when they fail miserably:
- China’s experiments with paper money ended in inflationary disaster (as almost all fiat money appears to do), but they succeeded in replacing China’s long-standing bronze coin standard with a silver unit of account (see for example, Richard von Glahn (2010), “Monies of Account and Monetary Transition in China, Twelfth to Fourteenth Centuries”, Journal of the Economic and Social History of the Orient, 53(3), 463-505)
- Johan Palmstruch who brought paper money to Europe and founded the world’s first central bank (the Sveriges Riksbank of Sweden) was sentenced to death. Though he was reprieved, he still lost everything and ended up in jail.
There is little doubt in my mind that digital currencies represent a vast technical and conceptual advance over the currencies in existence today. This would remain true even if Bitcoin implodes in a collapsing bubble or is destroyed by technical flaws in its design or implementation.
Nemo at self-evident.org has an excellent ten part series providing a gentle introduction to all the mathematics that one needs to understand how Bitcoin works. This is a good starting point for somebody wanting to go on to Satoshi Nakamoto’s seminal paper introducing the idea of Bitcoin.
From a finance point of view, what is most interesting about Bitcoin is that it is perhaps the first currency to be designed with a strong deflationary bias. There is an upper limit on the number of bitcoins that can ever be created, and even lost bitcoins cannot be replaced (normal central banks, by contrast, replace worn-out notes with newly printed ones). In paper currencies, if I lose a currency note, somebody else probably finds it, and so the note remains in circulation. By contrast, Bitcoin is so designed that if the owner loses a bitcoin, the “finder” cannot use it, and so the lost bitcoin ceases to exist for all practical purposes. (If you are puzzled by the apparently inconsistent capitalization of bitcoin/Bitcoin in this paragraph, you may want to read this).
While most fiat currencies end up printing notes in higher and higher denominations to combat inflation, Bitcoin is designed to combat deflation by using smaller and smaller denominations like milli bitcoins and micro bitcoins, all the way down to the smallest unit, named the Satoshi, which is equivalent to 10 nano bitcoins. As a result, the zero interest rate lower bound could be an even more serious problem for Bitcoin than for existing currencies.
One theoretical possibility is that the deflation overshoots significantly so that the currency can experience a mild inflation from that point onward somewhat on the lines of the Dornbusch overshooting model. But for that to work on a sustained basis, there would need to be periodic bouts of intense episodic deflation. The sharp appreciation of the bitcoin in the last few weeks in response to the Cyprus crisis suggests one way in which this could happen, but that would be a nightmare to model.
Posted at 5:44 pm IST on Wed, 10 Apr 2013 permanent link
Categories: blockchain and cryptocurrency
Option pricing with bimodal distributions
Jack Schwager’s book Hedge Fund Market Wizards has a chapter on James Mai of Cornwall Capital in which Mai talks about seeking opportunities in mispriced options. Many of us know about Mai from Michael Lewis’s The Big Short, which described how Mai made money by betting against subprime securities. But in the Schwager book, Mai talks mainly about options. Specifically, at page 232, Mai discusses opportunities “where the market assigned normal probability distributions to situations that clearly had bimodal outcomes”.
At first reading, I thought that Mai was simply talking about fat tails and the true volatility being higher than the option implied volatility. But on closer reading, this does not appear to be the case. At one point in the interview, Mai talks about the market underestimating the volatility of the distribution; at another, he describes the market making mistakes in the mean of the option implied distribution. So it does appear that Mai is distinguishing between errors in the mean, the volatility and the shape of the distribution.
This set me thinking about whether the bimodality of the distribution would make a big difference if the market assumes a (log) normal distribution with the correct mean and variance. Bimodality is very different from fat tails. In fact, if the distribution around each of the two modes is tight, then the tails are actually very thin. The departure from normality is actually a hollowing out of the middle of the distribution. For example, one may believe that a stock would either go to near zero (bankruptcy) or would double (if the risk of bankruptcy is eliminated) – the probability that the stock would remain close to the current level may be thought to be quite small. Mai himself discusses such an example.
To understand the phenomenon, let us take an extreme case of bimodality where there are actually only two outcomes. For simplicity, I assume that the risk free rate is zero. To facilitate comparison with the log normal distribution, I assume that the distribution of log asset prices is symmetric. If the current asset price is equal to 1, then by log symmetry, the two outcomes must be H and 1⁄H. Since the two possible outcomes of the log price are ± ln H, the volatility is ln H assuming that the option maturity is 1. The risk neutral probabilities of the two outcomes (p and 1 − p) are easy to compute. Since the risk free rate is zero, p H + (1 − p) 1⁄H = 1 implying that p = 1 ⁄ (1 + H) and 1 − p = H ⁄ (1 + H). (Unless H is quite large, these probabilities are not very far from 1⁄2).
With all these computations in place, it is straightforward to compare the true bimodal option price with that obtained by the Black Scholes formula using the correct volatility. The plot below is for H = 1.2.
It might appear that the impact of the bimodal distribution is quite small. However, the important question is what is the expected return from buying an option at the wrong (Black Scholes) price in the market and holding it to maturity. The plot below shows that the best strategy is to buy an option with a strike about 6% out of the money. This earns a return of almost 31% (there is a 45% chance of earning a return of 188% and a 55% chance of losing 100%).
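A minimal sketch that reproduces these numbers (H = 1.2, zero rate, one year maturity):

```python
import numpy as np
from scipy.stats import norm

# Reproduces the two-outcome example: H = 1.2, zero rate, maturity 1, so the
# volatility is ln H and the risk-neutral up-probability is p = 1/(1 + H).
H = 1.2
sigma = np.log(H)
p = 1.0 / (1 + H)          # probability of H; 1 - p is the probability of 1/H

def bs_call(K, S=1.0):
    d1 = (np.log(S / K) + 0.5 * sigma ** 2) / sigma
    return S * norm.cdf(d1) - K * norm.cdf(d1 - sigma)

for K in (1.00, 1.06, 1.12):
    bimodal = p * max(H - K, 0) + (1 - p) * max(1 / H - K, 0)
    bs = bs_call(K)
    print("K = %.2f: bimodal = %.4f, Black-Scholes = %.4f, return = %+.0f%%"
          % (K, bimodal, bs, 100 * (bimodal / bs - 1)))
# buying the 6% out of the money call at the Black-Scholes price earns about
# 31% on average: a 45% chance of +188% and a 55% chance of -100%
```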
The bimodal example tells us that even with thin tails and no under estimation of volatility (no Black Swan events), there can be significant opportunities in the option market arising purely from the shape of the distribution. How would one detect whether the market is already implying a bimodal outcome? This is easily done by looking at the volatility smile. If the market is using a bimodal distribution, the volatility smile would be an inverted U shape which is very different from that normally observed in most asset markets.
Posted at 10:19 pm IST on Tue, 9 Apr 2013 permanent link
Categories: derivatives
Financial Sector Legislative Reforms Commission
The Financial Sector Legislative Reforms Commission submitted its report a few days ago. The Commission also submitted a draft law to replace many of the financial sector laws in India. Since I was a member of this Commission, I have no comments to add.
Posted at 3:06 pm IST on Sat, 30 Mar 2013 permanent link
Categories: law, regulation
Big data crushes the regulators
I have repeatedly blogged (for example, here and here) about the urgent need for financial regulators to get their act together to deal with the big data generated by the financial markets that these regulators are supposed to regulate. The reality, however, is that the regulators are steadily falling behind.
Last week, Commissioner Scott D. O’Malia of the US Commodities and Futures Trading Commission delivered a speech in which he admitted that “Big Data Is the Commission’s Biggest Problem”. This is what he had to say (emphasis added):
This brings me to the biggest issue with regard to data: the Commission’s ability to receive and use it. One of the foundational policy reforms of Dodd-Frank is the mandatory reporting of all OTC trades to a Swap Data Repository (SDR). The goal of data reporting is to provide the Commission with the ability to look into the market and identify large swap positions that could have a destabilizing effect on our markets. Since the beginning of 2013, certain market participants have been required to report their interest rate and credit index swap trades to an SDR.
Unfortunately, I must report that the Commission’s progress in understanding and utilizing the data in its current form and with its current technology is not going well. Specifically, the data submitted to SDRs and, in turn, to the Commission is not usable in its current form. The problem is so bad that staff have indicated that they currently cannot find the London Whale in the current data files. Why is that? In a rush to promulgate the reporting rules, the Commission failed to specify the data format reporting parties must use when sending their swaps to SDRs. In other words, the Commission told the industry what information to report, but didn’t specify which language to use. This has become a serious problem. As it turned out, each reporting party has its own internal nomenclature that is used to compile its swap data.
The end result is that even when market participants submit the correct data to SDRs, the language received from each reporting party is different. In addition, data is being recorded inconsistently from one dealer to another. It means that for each category of swap identified by the 70+ reporting swap dealers, those swaps will be reported in 70+ different data formats because each swap dealer has its own proprietary data format it uses in its internal systems. Now multiply that number by the number of different fields the rules require market participants to report.
To make matters worse, that’s just the swap dealers; the same thing is going to happen when the Commission has major swap participants and end-users reporting. The permutations of data language are staggering. Doesn’t that sound like a reporting nightmare? Aside from the need to receive more uniform data, the Commission must significantly improve its own IT capability. The Commission now receives data on thousands of swaps each day. So far, however, none of our computer programs load this data without crashing. This would seem odd with such a seemingly small number of trades. The problem is that for each swap, the reporting rules require over one thousand data fields of information. This would be bad enough if we actually needed all of this data. We don’t. Many of the data fields we currently receive are not even populated.
Solving our data dilemma must be our priority and we must focus our attention to both better protect the data we have collected and develop a strategy to understand it. Until such time, nobody should be under the illusion that promulgation of the reporting rules will enhance the Commission’s surveillance capabilities. As Chairman of the Technology Advisory Committee, I am more than willing to leverage the expertise of this group to assist in any way I can.
The regulators have only themselves to blame for this predicament. As I pointed out in a blog post nearly two years ago, the SEC and the CFTC openly flouted the express provision in the Dodd-Frank Act to move towards algorithmic descriptions of derivatives. I would simply repeat what I wrote then:
Clearly, the financial services industry does not like this kind of transparency and the regulators are so completely captured by the industry that they will openly flout the law to protect the regulatees.
Posted at 7:51 pm IST on Mon, 25 Mar 2013 permanent link
Categories: regulation, technology
JPMorgan London Whale and Macro Hedges
Last week, the US Senate Permanent Subcommittee on Investigations released a staff report on the London Whale trades in which JPMorgan Chase lost $6.2 billion last year. The 300 page report puts together a lot of data that was missing in the JPMorgan internal task force report which was published in January.
Unsurprisingly, the Senate staff report takes a very critical view of the JPMorgan trades, which the bank’s chairman described in a conference call last May as a “bad strategy ... badly executed ... poorly monitored.” Where I think the staff report goes overboard is in criticizing even the original, relatively simple hedging strategy that JPMorgan adopted during the global financial crisis (well before the complete corruption of the strategy in late 2011 and early 2012).
The staff report says:
A number of bank representatives told the Subcommittee that the SCP was intended to provide, not a dedicated hedge, but a macro-level hedge to offset the CIO’s $350 billion investment portfolio against credit risks during a stress event. In a letter to the OCC and other agencies, JPMorgan Chase even contended that taking away the bank’s ability to establish that type of hedge would undermine the bank’s ability to ride out a financial crisis as it did in 2009. The bank also contended that regulators should not require a macro or portfolio hedge to have even a “reasonable correlation” with the risks associated with the portfolio of assets being hedged. The counter to this argument is that the investment being described would not function as a hedge at all, since all hedges, by their nature, must offset a specified risk associated with a specified position. Without that type of specificity and a reasonable correlation between the hedge and the position being offset, the hedge could not be sized or tested for effectiveness. Rather than act as a hedge, it would simply function as an investment designed to take advantage of a negative credit environment. That the OCC was unable to identify any other bank engaging in this type of general, unanchored “hedge” suggests that this approach is neither commonplace nor useful.
I think everything about this paragraph is wrong and indeed perverse.
- What the crisis taught us is that tail risks are more important than any other risks, and far from criticizing tail hedges, policy makers should be doing everything possible to encourage them. That the US regulators could not find any other bank that implemented such tail hedges speaks volumes about the complacency of most bank managements. It is those banks that deserve to be criticized.
- We do not need correlations to size or test the effectiveness of macro hedges. Consider for example hedging a diversified equity portfolio with deep out of the money puts. For a complete tail hedge, the notional value of the put would be equal to the value of the portfolio itself. A beta equal to one might be a perfectly reasonable assumption for a diversified portfolio since a precise estimate of the tail beta might not be very easy. There is no need to compute a correlation between the put value and the portfolio value to determine the effectiveness of the hedge. Even the correlation between the index and the equity portfolio is not too critical because in a crisis, correlations can be expected to go to one. (A sketch of this sizing follows the list.)
- A put option of the kind described above is not an investment designed to take advantage of a stock market crash. Viewed as an investment, the most likely return on a deep out of the money put option is -100% (the put option expires worthless), just as the most likely return on a fire insurance policy is -100% because, in the most likely outcome, there are no fires and no insurance claims.
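A minimal sketch of the sizing argument in the second bullet, with all numbers purely illustrative:

```python
# Minimal sketch of the put-hedge sizing in the second bullet above (all
# numbers illustrative): deep out of the money index puts with notional
# equal to the portfolio value, assuming a tail beta of one.
portfolio = 100.0      # portfolio value
index0 = 100.0         # index level when the hedge is put on
strike = 70.0          # deep out of the money: 30% below spot
notional = portfolio   # complete tail hedge: notional = portfolio value

for move in (0.0, -0.20, -0.40, -0.60):
    index = index0 * (1 + move)
    put_payoff = notional * max(strike - index, 0) / index0
    hedged = portfolio * (1 + move) + put_payoff   # beta-one approximation
    print("market %+.0f%%: unhedged %5.1f, hedged %5.1f"
          % (100 * move, portfolio * (1 + move), hedged))
# beyond the strike, every further point of market loss is offset one for
# one; no correlation estimate was needed to size the hedge
```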
I think the problem with the JPMorgan hedges as they metamorphosed during 2011 was something totally different. The key is a statement that the JPMorgan Chairman made in the May 2012 conference call after the losses became clear:
It was there to deliver a positive result in a quite stressed environment and we feel we can do that and make some net income.
Sorry, tail hedges do not produce income, they cost money. Any alleged tail hedge that is expected to earn income under normal conditions is neither a hedge nor a speculative investment – it is just a disaster waiting to happen.
Posted at 5:25 pm IST on Sun, 17 Mar 2013 permanent link
Categories: risk management
Is India experiencing incipient capital flight?
A number of phenomena that we have observed in India over the last few years can be interpreted as incipient capital flight:
- Gold imports have risen sharply not only in value terms but also in terms of quantity. The nature of the gold demand has also changed. In recent years, we have been seeing a significant amount of gold being bought by the rich as an investment. A poor household buying gold jewelry could be interpreted as a form of social security, but a rich household buying gold bars and biscuits is a form of capital flight. Instead of converting INR into USD or CHF, many rich investors are converting INR into XAU.
- Many Indian business groups are investing more outside India than in India. Many of them are openly justifying it on the ground that the investment climate in India is poor. This is of course a form of capital flight.
- It is difficult to explain India’s large current account deficit and poor export growth solely on the basis of low growth in the developed world. First, many of our competitors in Asia and elsewhere are posting large trade surpluses in the same environment. Second, the depreciation of the Indian rupee has improved competitiveness of Indian companies in world markets. Indian companies used to complain loudly about their lack of competitiveness when the dollar was worth only 40 rupees, but with the dollar fetching 55 rupees, these complaints have disappeared. I fear that some of the current account deficit that we see today is actually disguised capital flight via under-invoicing of exports and over-invoicing of imports.
While economists have focused on the impossible trinity (open capital account, independent monetary policy and fixed exchange rates), I am more concerned about the unholy trinity that leads to full blown capital flight. This unholy trinity has three elements: (a) a de facto open capital account, (b) poor perceived economic fundamentals and (c) heightened political uncertainty. I believe that the first two elements of this unholy trinity are already in place; we can only hope that the 2014 elections do not deliver the third element.
While our policy makers keep up the pretence that India has a closed capital account, the reality is that during the last decade, the capital account has in fact become largely open. Outward capital flows were largely opened up by liberalizing outward FDI and allowing every person to remit $200,000 every year for investment outside India. This means that the first element of the unholy trinity (an open capital account) has been in place for some time now. If the other two elements were also to materialize, a full blown capital flight is perfectly conceivable. India’s reserves may appear comfortable in terms of number of months of imports, but in an open capital account, this is not a relevant metric. What is relevant is that India’s reserves are about 20% of the money supply (M3), and in a full blown capital flight, a large part of M3 is at risk of fleeing the country.
Posted at 2:04 pm IST on Wed, 6 Mar 2013 permanent link
Categories: international finance
More on 2014 as 1994 redux
Last week, I wrote a blog post on how 2014 may witness the same withdrawal of capital flows from emerging markets as was seen when the US Fed tightened interest rates in 1994. Over the weekend (India time), the Fed published a speech by Chairman Ben Bernanke which spells out the issues with surprising bluntness. The key points as I see them in this speech are:
- Long term US rates are likely to rise: “Overall, then, we anticipate that long-term rates will rise as the recovery progresses and expected short-term real rates and term premiums return to more normal levels. The precise timing and pace of the increase will depend importantly on how economic conditions develop, however, and is subject to considerable two-sided uncertainty. ”
- A repeat of 1994 cannot be ruled out: “... in 1994, 10-year Treasury yields rose about 220 basis points over the course of a year ... A rise of more than 200 basis points in a year is at the upper end of what is implied by the mean paths and uncertainty measures shown in charts 4 and 5, but these measures still admit a substantial probability of higher--and lower--paths”. That last sentence is scarier still: Bernanke is saying that it could be even worse than in 1994!
- The Fed is concerned about financial stability risks arising from a large rise in interest rates: “First, we have greatly increased our macroprudential oversight ... Second, ... we are using regulatory and supervisory tools to help ensure that financial institutions are sufficiently resilient to weather losses and periods of market turmoil ... Third, ... greater clarity concerning the likely course of the federal funds rate ... should ... reduce the risk that market misperceptions ... would lead to unnecessary interest rate volatility.”
Bernanke is clearly warning US financial institutions to prepare for the coming bond market sell-off. It is not Bernanke’s job to warn emerging markets, but to those emerging market policy makers who read the speech, the message is loud and clear – it is time for serious preparation.
Posted at 7:49 pm IST on Mon, 4 Mar 2013 permanent link
Categories: international finance, monetary policy
Looking at 2014 through the prism of 1994
Unless the United States shoots itself in the foot during the fiscal negotiations, it could conceivably be on the cusp of a recovery. There is a serious possibility that the unemployment rate starts falling towards 7%, and the US Fed begins to consider unwinding some of its unconventional monetary easing measures. Unconventional monetary policy is equivalent to a highly negative policy rate, and so a substantial monetary tightening can happen well before the Fed starts raising the Fed Funds rate.
The situation is reminiscent of 1994 when the US Fed tightened monetary policy as the economy recovered from the recession of the early 1990s. This monetary tightening is best known for the upheaval that it caused in the US bond markets, but the turbulence in US Treasuries lasted only a few months. The more lasting impact was on emerging markets as higher US yields dampened capital flows to emerging economies:
- The tightening phase lasted from early 1994 to early 1995 during which the Fed Funds rate was increased from 3% to 6%. In today’s situation the equivalent would be an unwinding of the Quantitative Easing (QE) programme beginning early 2014 possibly leading up to a nominal increase in the Fed Funds rate in late 2014 or early 2015.
- In 1994, 10-year US Treasury yields rose from around 6% to about 8% in less than a year, but during the later stages of the tightening, long term rates actually fell even as the policy rates were being raised. By late 1995, the 10-year yield had fallen back to 6%. The bond market upheaval was frightening to a levered investor in long term bonds, but immaterial for the buy and hold investor.
- The US stock market shrugged off the tightening completely. Moreover, in the late stages of the tightening, as people realized that the tightening was nearing its end, the US stock market took off. The S&P 500 rose by over a third during 1995.
- Even as US stocks were soaring, emerging market equities were hammered. The MSCI Emerging Markets Index lost about a third of its value in late 1994 and early 1995. In India, the Sensex fell from a peak of over 4,500 in late 1994 to below 3,000 in late 1996. The Sensex regained its 1994 peak only in the dot com boom of 1999.
- The Indian rupee also suffered during the period. After years of holding rock steady at 31.37 to the US dollar (with the RBI intervening only to prevent its appreciation), the rupee started falling in late 1995. This fall intensified in the late 1990s.
- The US tightening claimed its first emerging market victim in the Mexican peso crisis of late 1994 and early 1995.
- It is conceivable that the US tightening of 1994 set in motion forces that ultimately brought about the Asian Crisis of 1997.
History never repeats itself (though as Mark Twain remarked, it sometimes rhymes). Yet, there is reason to fear that a normalization of interest rates in the US in the coming year could be destabilizing to many emerging markets which are today bathed in the tide of liquidity unleashed by the US Fed and other global central banks. India, in particular, has become overly addicted to foreign capital flows to cover its large current account deficit, and any retrenchment of these flows in response to better opportunities in the US could be quite painful.
Posted at 4:31 pm IST on Wed, 27 Feb 2013 permanent link
Categories: international finance, monetary policy
Indian Gold ETFs become Gold ETNs
Last week, the Securities and Exchange Board of India allowed Indian gold Exchange Traded Funds (ETFs) to deposit their gold with a bank under a Gold Deposit Scheme instead of holding the gold in physical form. In the Gold Deposit Scheme, the bank does not act as a custodian of the gold. Instead, the bank lends the gold out to jewellers (and others) and promises to repay the gold on maturity.
In my view, use of the Gold Deposit Scheme will convert the Gold ETF into an ETN (Exchange Traded Note) or an ETP (Exchange Traded Product). The ETF does not hold gold – it only holds an unsecured claim against a bank and is thus exposed to the credit risk of the bank. If the bank were to fail, the ETF would stand in the queue as an unsecured creditor of the bank. The ETF therefore does not hold gold; it holds a gold linked note.
So far, the ETFs in India have been honest-to-God ETFs instead of the synthetic ETNs and ETPs that have unfortunately become so popular in Europe and elsewhere. With the new scheme, India has also joined the bandwagon of synthetic ETNs and ETPs masquerading as ETFs.
Truth in labelling demands that any ETF that uses the Gold Deposit Scheme should immediately be rechristened as an ETN. I also think that this is a change in a fundamental attribute of the ETF and should require unit holder approval.
From a systemic risk perspective, I fail to see why this concoction makes sense at all. It unnecessarily increases the inter-connectedness of the banking and mutual fund industries and aggravates systemic risk. A run on the bank could induce a run on the ETF and vice versa. All this is in addition to the maturity mismatch issues described by Kunal Pawaskar.
I can understand the desire to put idle gold to work, but that does not require the intermediation of the bank at all. The ETF can lend the gold directly against cash collateral with daily mark to market margins. Even if it were desired to use the services of a bank, there are better ways to do this than to treat the ETF just like any other retail depositor. For example, the bank could provide cash collateral to the ETF with daily mark to market margins. As is standard in such contracts, a portion of the interest that the ETF earns on the cash collateral would be rebated back to the bank to cover its hedging and custody costs.
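A minimal sketch of this alternative, with hypothetical numbers, might look as follows – the ETF lends gold against cash collateral and the required collateral is recomputed daily:

```python
# Minimal sketch (hypothetical numbers) of the alternative suggested above:
# the ETF lends gold against cash collateral with daily mark to market.
# A portion of the interest earned on the cash would be rebated to the bank.

OUNCES_LENT = 10_000
HAIRCUT = 1.02            # borrower posts 102% of the gold's value in cash

collateral = 0.0
for day, price in enumerate([1600.0, 1625.0, 1590.0, 1610.0], start=1):
    required = OUNCES_LENT * price * HAIRCUT
    flow = required - collateral    # +ve: borrower posts more cash, -ve: ETF returns excess
    collateral = required
    print(f"day {day}: gold={price:7.2f}  required={required:13,.0f}  margin flow={flow:+13,.0f}")
```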
Posted at 1:29 pm IST on Fri, 22 Feb 2013 permanent link
Categories: banks, gold, mutual funds
Disincentivising Cheques
More than a year ago, I blogged about how banks in India were perversely incentivising retail customers to use cheques instead of electronic transfers though the cost to the whole system of processing a cheque is much higher. I also hypothesized that it may well be rational for an individual bank to follow this perverse pricing under certain assumptions about price elasticity of demand.
Now the Reserve Bank of India has put out a discussion paper on Disincentivising Issuance and Usage of Cheques. It discusses at length ways to disincentivize individuals, institutions and government departments from using cheques. I was surprised to find, however, that there was no proposal to disincentivize the banks themselves. I think it makes a lot more sense to impose a significant charge on the paying bank for every cheque that is presented for clearing. It can be left to the banks to decide whether (and how) to pass on the charge to some or all of their customers. The more important purpose of the charge would be to incentivize the banks to educate and incentivize their customers and also to make their payment gateways more user friendly. Why should the charge be on the paying banks? Because they own the relationship with the customer who writes the cheque, and also because they sit on the float when cheques are used.
Posted at 8:29 pm IST on Fri, 8 Feb 2013 permanent link
Categories: banks, technology
Financial Risk: Perspectives from psychology and biology
During the last week, I found myself reading two different perspectives on financial risk:
- A fascinating paper by Anat Bracha and Elke Weber entitled “A Psychological Perspective of Financial Panic” (h/t Mostly Economics).
- A marvellous book by John Coates called The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust.
The main thesis of Bracha and Weber is that:
... perceived control is a key concept in understanding mania and panic, as the need for control is a basic human need that contributes to optimism bias and affects risk perception more generally. Lack of control is therefore a violation of a basic need and will trigger episodes of panic and retreat to the safe and known.
The illusion of control refers to the human tendency to believe we can control or at least influence outcomes, even when these outcomes are the results of chance events.
The book by Coates is much more complex. The title itself requires a whole paragraph of explanation – it translates a French phrase that refers to the time around dusk when it is difficult to determine whether a shadow that one is seeing is that of a dog or a wolf, implying that the one could metamorphose into the other at any time.
From a biological perspective, it appears that:
... researchers have found that three types of situations signal threat and elicit a massive physiological stress response – those characterized by novelty, uncertainty and uncontrollability
...
Novelty, uncertainty and uncontrollability – the three conditions are similar in that when subjected to them we have no downtime, but are in a constant state of preparedness.
The uncontrollability that the psychologists emphasize is present in the biologist's description as well, but it does not seem to have a privileged position compared to other forms of risk – novelty and uncertainty. The biological response to all these forms of risk is the same – the body is flooded with stress hormones (mainly cortisol) which command the body to “shut down long term functions of the body and marshal all available resources, mainly glucose, for immediate use.”
More interesting is that the biological (unconscious) stress response closely mirrors objective reality, unlike the self reported (conscious) risk perception that is elicited by questionnaires. In his research with a group of bond market traders, Coates asked the traders to report their level of stress at the end of each day. This self reported stress was totally unrelated to their losing money, to swings in their P&L, and to the volatility in the market. At the same time, their cortisol level faithfully measured the volatility that the individual traders were experiencing. That is not all – the average cortisol level of this group of traders very closely tracked the implied volatility of options related to the bonds that they were trading.
Coates links this finding to what biologists had found with rats. After several days of being placed in an objectively dangerous situation, the rats got habituated to the situation and became outwardly calm. However, their stress hormones reflected the stress that existed. Again, the unconscious biology reflected the objective reality while the conscious behaviour did not.
This seems to suggest that the “illusion of control” that Bracha and Weber talk about may be an illusion that afflicts only the conscious mind and not the unconscious mind that governs actual risk taking. Biology teaches us to assume that millions of years of evolution have perfected the more primitive (unconscious) parts of the brain to achieve near optimal behaviour (at least relative to the original environment). The more recent (conscious) parts of the brain perhaps have still some way to go before reaching evolutionary perfection.
Posted at 10:58 am IST on Sun, 3 Feb 2013 permanent link
Categories: behavioural finance, interesting books
Sociology of the evolution of electronic trading
Donald MacKenzie has a couple of recent papers analyzing the evolution of electronic trading from a sociology of finance point of view. The first paper describes the emergence of ECNs (Electronic Communication Networks) in the United States, beginning with the Island system which became Instinet and was ultimately acquired by Nasdaq. The second paper describes the rise of electronic trading at the Chicago Mercantile Exchange.
I have in the past blogged about MacKenzie's previous works (here, and here) and find his approach useful. Others have been less impressed – one critic dismissed some of MacKenzie's previous works as “remarkable close-up studies ... without context ... all cogs and no car”. MacKenzie gets back at this criticism brilliantly at the end of the Island paper:
... historical change can involve shift in scale. In this paper, we have focussed on a small actor becoming big ... However, we could equally have told a story of big actors becoming small ... NYSE was a car, and has become a cog. Island was a cog that became a car ... Scales are indeed not stable, and cogs – and their histories – matter.
I knew most of the facts about Island from Scott Patterson's fascinating book on Dark Pools (subtitled “High-Speed Traders, A.I. Bandits, and the Threat to the Global Financial System”). Still, I learned a lot from MacKenzie's paper – the theoretical framework (particularly the idea of bricolage in the process of financial innovation) is quite valuable. I learned less from the paper on the CME, though, in this case, many of the facts were new to me.
Posted at 4:25 pm IST on Fri, 1 Feb 2013 permanent link
Categories: behavioural finance, technology
Pamper the consumers or the computer programmers?
In case you thought that the answer to this question is obvious, you should read the report of the Reserve Bank of India’s Technical Committee to Examine Uniform Routing Code and A/c Number Structure. While recommending 26-digit bank account numbers (IBAN) in India, the Committee has this to say:
6.5.3 The main disadvantage (if we really have to pamper to customers as the information can be easily displayed/stored on debit cards and cell phones, besides the traditional paper diary/chit of paper) of this IBAN option is that though it entails least effort from banks and facilitates faster IBAN implementation, it provides a more complex payment system interface to customers due to long IBAN string. In other words, while efforts at banks’ end will be minimized, the customers will still have to remember and provide the long IBAN, including check digits, for their payment system activities. (emphasis added)
In other words, the convenience of the banks’ computers and their programmers trumps the convenience of hundreds of millions of consumers.
Another troubling passage in the report is the following discussion about why the branch code cannot be omitted in the bank code (IFSC) that is used for electronic fund transfers:
Upon enquiring with banks, it is learnt that many banks have not built any check-digit in their account numbers. Thus, any inward remittance which comes to a bank will be processed even if there is any mistake in account number, as long as that account number exists in the beneficiary bank. In the absence of check digit in account numbers, many banks depend on the branch identifier to avoid credit being afforded to wrong accounts. This is a significant irreversible risk where wrong beneficiary would get the credit and customer would have no recourse – legal or moral
The idea that a branch identifier is a substitute for a check digit is a serious mistake. Any reasonable check digit should catch all single digit errors and most (if not all) transposition errors (where two neighbouring digits are interchanged). These are the most common errors in writing or typing a long number (the other common error of omitting a digit is easily caught even without a check digit because the number of digits in an account number is fixed for each bank). The use of the branch identifier on the other hand is not guaranteed to catch the most commonly occurring errors – many single digit errors would lead to a valid account number at the same branch. With the increasing use of electronic fund transfers (which ignore the name of the account holder and rely only on the account number), I would have thought that it would make sense to insist that all account numbers should have a check digit instead of insisting that the IFSC code should include a branch code. But that would place a greater burden on some overworked computer programmers in some banks – and regulators apparently think that systems people (unlike consumers) must be pampered at all costs.
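To make the point concrete, here is a sketch using the ISO 7064 mod 97-10 scheme that IBAN itself uses; this scheme detects all single digit errors and all adjacent transpositions:

```python
# Why a check digit works: a sketch of the ISO 7064 mod 97-10 scheme that
# IBAN itself uses. It detects all single digit errors and all adjacent
# transpositions, which a branch identifier cannot promise.

def iban_valid(iban: str) -> bool:
    s = iban[4:] + iban[:4]                       # move country code and check digits to the end
    digits = "".join(str(int(c, 36)) for c in s)  # A -> 10, B -> 11, ..., Z -> 35
    return int(digits) % 97 == 1

good = "DE89370400440532013000"                   # the commonly cited example IBAN
assert iban_valid(good)

one_digit_wrong = good[:12] + "5" + good[13:]             # a single mistyped digit
transposed = good[:12] + good[13] + good[12] + good[14:]  # two adjacent digits swapped
assert not iban_valid(one_digit_wrong)
assert not iban_valid(transposed)
print("check digit caught both common errors")
```

A branch identifier offers no such guarantee: a single mistyped digit can easily yield another valid account number at the same branch.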
The problem is not confined to banking. In the financial markets also, the convenience of the programmers often dictates the nature of market regulation, and the systems people are able to hold the regulator to ransom by simply asserting that software changes are too difficult. On the other hand, whenever I go to websites like stackoverflow in search of answers to some computing problem, I am constantly amazed that there are so many people able and willing to find solutions to the most difficult problems. In an ideal world, I think regulators would require every systemically important financial organization to have senior systems people with a reputation of say 10,000 at stackoverflow or some such metric of competence and a “can do” attitude.
While we have “fit and proper” requirements for the top management of banks and financial organizations, Basel and IOSCO do not impose any “fit and proper” requirement on the systems people. I think this needs to change because so much of risk comes from poorly designed and poorly maintained software.
Posted at 4:33 pm IST on Fri, 25 Jan 2013 permanent link
Categories: regulation, technology
Single factor asset pricing model with leverage shocks
I have been reading an interesting paper by Tobias Adrian, Erkko Etula and Tyler Muir proposing a single factor asset pricing model that is based on shocks to securities broker-dealer leverage. The performance of this single factor model in pricing the Fama-French and momentum portfolios seems to be as good as that of the four factor model that includes the three Fama-French factors (market, size and value) and the momentum factor. In addition, the leverage factor model prices risk free bond portfolios as well as the four factor model augmented with a factor for interest rate level.
The results seem too good to be true and Bayesian theory teaches us that surprising results are likely to be false even if they are published in a top notch peer reviewed journal (see for example here or here). (I do recall the incident a couple of years ago when the Chen-Zhang q-factor papers became “defunct” after a timing error was identified in the initial work.) Having said that, the Adrian-Etula-Muir paper has been around since 2008 and was last revised in March 2012. Maybe, it has survived long enough to be taken seriously.
Another possible criticism is that the Adrian-Etula-Muir paper does all the empirical analysis using the Fama-French style size-value-momentum portfolios and not on the individual stocks themselves. Falkenblog goes so far as to say “What I suspect, though I haven’t done the experiment, is that if you regress individual stocks against this factor there will be a zero correlation with returns.” My own intuition is that the effect would not weaken so dramatically in going from portfolios to individual stocks. In any case, asset pricing tests have to be based on portfolios to obtain statistical power – the correct question to ask is whether the correlation with a random well diversified portfolio is likely to be high.
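For readers unfamiliar with the methodology, the standard two-pass (Fama-MacBeth style) test on portfolios looks roughly like this – a schematic sketch on simulated data, not the authors’ actual code:

```python
# Schematic two-pass (Fama-MacBeth style) test of a one-factor model on
# test portfolios. A sketch of the methodology on simulated data, not the
# Adrian-Etula-Muir code.
import numpy as np

rng = np.random.default_rng(0)
T, N = 480, 25                              # months, test portfolios
factor = rng.normal(0.5, 2.0, T)            # factor realizations (e.g. leverage shocks)
true_betas = rng.uniform(0.5, 1.5, N)
returns = np.outer(factor, true_betas) + rng.normal(0.0, 2.0, (T, N))

# Pass 1: time-series regression gives each portfolio's factor beta.
betas = np.array([np.polyfit(factor, returns[:, i], 1)[0] for i in range(N)])

# Pass 2: each month, regress the cross-section of returns on the betas;
# the average slope estimates the factor risk premium.
premia = np.array([np.polyfit(betas, returns[t, :], 1)[0] for t in range(T)])
t_stat = premia.mean() / (premia.std(ddof=1) / np.sqrt(T))
print(f"estimated premium = {premia.mean():.2f}% per month, t = {t_stat:.1f}")
```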
Adrian-Etula-Muir motivate their finding with the argument that broker-dealer leverage proxies for the health of the financial sector as a whole, and that because of limited participation and other factors, the wealth of the financial intermediaries matters more than that of the representative household in forming the aggregate Stochastic Discount Factor (SDF). This appears to me to be a stretch because even if we focus on intermediaries, leverage is not the same thing as wealth.
My initial reaction was that the leverage factor is actually a liquidity factor, but their results show that leverage shocks are largely uncorrelated with the shocks to the Pastor-Stambaugh (2003) liquidity factor.
I wonder whether the leverage factor may be a very elegant way of picking up time varying risk aversion so that the single factor model is close to the CAPM with time varying risk aversion. The empirical results show that the leverage factor mimicking portfolio is very close to being mean variance efficient. If this is so, then we may have a partial return to the cosy world from which Fama and French evicted us a couple of decades ago.
Posted at 11:36 am IST on Mon, 21 Jan 2013 permanent link
Categories: factor investing
Financial stability, financial resilience and systemic risk
Last week, I found myself involved in a discussion arguing that systemic risk regulation is not the same as the pursuit of financial stability. This discussion helped to clarify my own thoughts on the subject.
There is no doubt that financial stability is currently a highly politically correct term: according to a working paper published by the International Monetary Fund (IMF) a year ago, the number of countries publishing financial stability reports increased from 1 in the mid 1990s to 50 by the mid 2000s and rose further to 80 in 2011. India and the United States have been among those that joined the bandwagon after the global financial crisis. Meanwhile the Financial Stability Board (which was first set up under a slightly different name after the Asian Crisis) has now been transformed into the apex forum for governing global financial regulation.
Yet, there has been a strong view that the pursuit of financial stability is a mistake. The best known proponent of this view was Hyman Minsky who was fond of saying that financial stability is inherently destabilizing. Post crisis, there has also been a great deal of interest in resilience as opposed to stability. The Macroeconomic Resilience blog has become particularly well known for arguing this case eloquently.
Rather than repeat what has been well articulated by these people, I have chosen to put together a totally politically incorrect table highlighting the contrast between financial stability and financial resilience.
Financial Stability | Financial Resilience
------------------- | --------------------
Rigidity and resistance to change | Adaptability and survival amidst change
Stasis and stagnation | Dynamism and progress
Pro-incumbent | Pro-competition
Too big to fail | Too big to exist
Great Moderation | New normal
Alan Greenspan | Hyman Minsky
To my mind, systemic risk regulation is the pursuit not of financial stability but of financial resilience.
Posted at 5:37 pm IST on Sun, 20 Jan 2013 permanent link
Categories: regulation
Why exchanges should be forced to use open source software
For more than a decade now, I have been arguing for the use of open source software in critical parts of the financial system like stock exchanges (here and here) and depositories (here). At the risk of sounding like a broken record, I want to come back to this in the light of the following cryptic announcement from the BATS exchange in the US two days ago:
Please be advised that BATS has determined that upon an NBBO update on BATS’ BYX Exchange, BZX Exchange and BATS Options, there are certain cases where the Matching Engine will allow for a trade through or an execution of a short sale order at a price that is equal to or less than the NBB when a short sale circuit breaker is in effect under Regulation SHO. These cases result from the sequencing of certain required events in the Matching Engine related to re-pricing and sliding orders in response to the NBBO update.
I found this almost impossible to understand as it is not clear whether the scenario “when a short sale circuit breaker is in effect” applies only to the second type of error (“execution of a short sale order at a price that is equal to or less than the NBB”) or also to the first type of error (“trade through” the NBBO). Focusing on the first type of error, we can make some headway by consulting the BATS exchange User Manual which describes the price sliding process with a numerical example:
Example of BATS Displayed Price Sliding:
NBBO:
10.00X10.01
BATS:
10.00X10.02
1) Buy BATS-Only Order at 10.03
2) Order is re-priced and ranked 10.01 and displayed down to 10.00 (10.01 would lock the NBBO)
3) NBBO goes to 10.00X10.02
4) Order is re-displayed at 10.01 using its existing priority
5) NBBO goes to 10.01X10.03
6) Order remains unchanged (it’s only allowed to unslide once after entry)
Note: Order will always execute at 10.01 regardless of its display price at the time
But even with this explanation, it is hard to understand the precise nature of the software bug. My first thought was that in the above example, if the NBBO moved to 9.99X10.00, the sliding order might execute at 10.01 if it were matched against an incoming order at the BATS exchange. On second thought, I ruled that out because it is too simple not to have been thought about during the software design. Maybe it is a more complex sequence of events, but the terse announcement from the exchange does not really tell us what happened. It is interesting that even when admitting to a serious error, the exchange does not consider it essential to be transparent about the error.
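To appreciate how delicate the sequencing is, here is a drastically simplified sketch of the displayed price sliding logic for a buy order, following the manual’s example; real matching engines handle vastly more cases, which is exactly where such bugs hide:

```python
TICK = 0.01

class SlidBuyOrder:
    """Drastically simplified sketch of displayed price sliding for a buy
    order, following the manual's example. Real engines handle many more
    cases; the point is only that re-pricing is delicately sequenced."""

    def __init__(self, limit, nbo):
        self.limit = limit
        self.unslid = False
        # Ranked at the price that would lock the NBO, displayed a tick below.
        self.ranked = round(min(limit, nbo), 2)
        self.displayed = round(self.ranked - TICK, 2)

    def on_nbbo_update(self, nbo):
        # May re-display (unslide) at its ranked price only once after entry.
        if not self.unslid and nbo > self.ranked:
            self.displayed = self.ranked
            self.unslid = True

order = SlidBuyOrder(limit=10.03, nbo=10.01)  # NBBO 10.00 x 10.01
print(order.ranked, order.displayed)          # 10.01 10.0
order.on_nbbo_update(10.02)                   # NBBO moves to 10.00 x 10.02
print(order.displayed)                        # 10.01
order.on_nbbo_update(10.03)                   # NBBO moves to 10.01 x 10.03
print(order.displayed)                        # unchanged: unslides only once
```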
Over a period of time, exchanges have been designing more and more complex order types. In some ways, these complex order types are actually the limiting case of co-location – instead of executing on the trader’s computer located close to the exchange server, the algorithm is now executing on the exchange server itself, indeed inside the core order matching engine. The same business logic that favours extensive co-location also favours ever increasing complexity in order types.
In this situation, it makes sense to mandate open source implementations of the core order matching engine. As I wrote six years ago:
It is also evident that in a complex trading system, the number of eventualities to be considered while testing the trading software is quite large. It is very likely that even a reasonable testing effort might not detect all bugs in the system.
Given the large externalities involved in bugs in such core systems, a better approach is needed. The open source model provides such an alternative. By exposing the source code to a large number of people, the chances of discovering any bugs increase significantly. Since there are many software developers building software that interacts with the exchange software, there would be a large developer community with the skill, incentive and knowledge required to analyse the trading software and verify its integrity. In my view, regulators and self regulatory organizations have not yet understood the full power of the open source methodology in furthering the key regulatory goals of market integrity.
But it is not just the exchanges. Regulators too write very complex regulations, and these too should ideally be written in the form of open source software. Instead, regulators all over the world write long-winded regulations and circulars which are open to many different implementations and which do not function as expected when they are most needed.
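To illustrate what machine-executable regulation could look like, here is a toy, hypothetical rendering of the Reg SHO circuit-breaker price test – the very rule whose violation BATS announced above. The real rule has many more qualifications; this is purely illustrative:

```python
# A toy, hypothetical rendering of one rule (the Reg SHO circuit-breaker
# price test) as executable code. The real rule has many qualifications.

def short_sale_permitted(order_price: float, nbb: float,
                         circuit_breaker_on: bool) -> bool:
    """With the circuit breaker on, a short sale must be priced above the NBB."""
    return (not circuit_breaker_on) or order_price > nbb

assert short_sale_permitted(10.02, nbb=10.01, circuit_breaker_on=True)
assert not short_sale_permitted(10.01, nbb=10.01, circuit_breaker_on=True)  # the scenario BATS admitted to
```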
Posted at 12:23 pm IST on Fri, 11 Jan 2013 permanent link
Categories: exchanges, technology
Liquidation efficiency of CCPs (clearing corporations)
Earlier this week, I wrote a blog post applying the Gorton-Metrick idea of contractual liquidation efficiency to CCPs or clearing corporations. After that, I came across an interesting paper by Richard Squire (December 2012) arguing that the only real benefit of a clearing house is speed and certainty of liquidation and that this benefit obtains even if the clearing house itself is insolvent.
Squire accepts the arguments of Pirrong and others that the risk reduction benefits of central clearing are dubious (risk reduction in one part of the system comes at the cost of greater risk elsewhere in the system). Yet CCPs are valuable because they speed up the bankruptcy process and give greater certainty to all creditors (even those who are outside the clearing house).
It is clear that Squire has a point. The worst part of the Lehman bankruptcy was that counterparties had their money trapped in the bankruptcy court for years without either liquidity or certainty.
Four years after Lehman filed for protection under Chapter 11, the Lehman estate still held $14.3 billion in restricted cash, which included $10.9 billion in a reserve fund for paying out unsecured claims. (Page 37)
Squire points out how the normal bankruptcy process is designed to be extremely slow:
To distribute assets among creditors, a bankruptcy trustee must do two things. First, she must determine what the assets are worth, which she can do through financial valuation methods or with an auction that converts the assets to cash. Second, she must determine the amount of the debtor’s liabilities, which requires her to collect all creditor proofs of claim and resolve challenges to their enforceability and amounts. Given these requirements, it is difficult to think of a slower rule for distributing debtor assets than the pro rata rule. Under that rule, each creditor is paid according to the ratio between the amount of his claim and the debtor’s total liabilities. It follows that all liabilities must be confirmed and valuated before any creditor can be paid. (Page 36)
The clearing house speeds up this process enormously and provides greater liquidity and certainty. More importantly, this is not at the cost of other creditors of the bankrupt entity:
Unlike netting’s purely redistributive consequences, its payout-acceleration benefit is not zero-sum. Thus, the faster payouts for the clearinghouse members are not the result of slower payouts for the outside creditors. To the contrary, netting simplifies the work of the failed member’s bankruptcy trustee, which might permit the outside creditors also to be paid more quickly than they would otherwise. ... And while the arithmetical amounts of their payouts will be reduced by netting’s redistributive effect, the loss may partly be neutralized by the fact that the smaller scope of the bankruptcy estate may save on administrative costs and hence leave more value left over for creditors. Netting therefore is clearly a source of value creation. (Page 38)
The most important part of the paper is the argument that the benefits of netting would remain even if the clearing house itself is bankrupt.
Whereas creditors typically insist on being paid in cash, they are generally willing to accept cancellation of their own debts as payment for their own claims. And netting within the clearinghouse increases the opportunities for this to occur. ... Because of netting, Firm A is, in effect, able to take [an IOU from Firm C] and force Firm B to accept it in satisfaction of Firm A’s debt to Firm B. And Firm B, in turn, can take the same IOU and use it to repay its $100 debt to Firm C. Since the IOU is now back in the hands of its issuer, it is cancelled. No cash has changed hands, and therefore none been paid into a bankruptcy estate. And because each transfer of the IOU occurs through setoff rights, the transfers can occur even if the clearinghouse is bankrupt. This capacity for a clearinghouse to transform a debt obligation into a medium of exchange as good as cash is of obvious social value during a liquidity shortage. (Page 42)
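Squire’s circular IOU example can be written out as a small netting computation (amounts as in his example):

```python
# Squire's circular IOU example as a computation: A owes B, B owes C, and
# C owes A $100 each. Gross claims of $300 would crawl through bankruptcy
# estates; netted through a clearinghouse, no cash needs to move at all.

obligations = {("A", "B"): 100, ("B", "C"): 100, ("C", "A"): 100}

net = {}
for (debtor, creditor), amount in obligations.items():
    net[debtor] = net.get(debtor, 0) - amount
    net[creditor] = net.get(creditor, 0) + amount

print("gross claims:", sum(obligations.values()))                     # 300
print("net positions:", net)                                          # all zero
print("cash that must move:", sum(v for v in net.values() if v > 0))  # 0
```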
I am now even more convinced that CCPs (clearing houses) must be designed to fail gracefully. Many of them have done so through loss allocation rules for each segment that effectively cap the liability of the CCP and make it less likely that it goes bust. We must extend the scope of these mechanisms to make it almost impossible for a CCP to become bankrupt just as securitization waterfalls make it almost impossible for an SPV to become bankrupt. Such rules are the only way to prevent the need for bailing out the CCP and engendering moral hazard through the process.
If we see CCPs not as a magic bullet to eliminate risk, but as a legal mechanism to achieve fast bankruptcy with high legal certainty for payouts, then the CCP begins to look less like over-regulated financial infrastructure and more like a CDO. This would be a great achievement because it solves the dilemma that forces regulators to either regulate CCPs as utilities and forgo the benefits of competition or allow free competition and see a race to the bottom in risk management. By pushing the risks of CCP failure back to the users of the CCP, a mandatory loss allocation mechanism (like a CDO waterfall clause) allows competition to work its usual magic without creating systemic risk or moral hazard. The world should then be able to withstand a credit event at even the largest CCPs like LCH.Clearnet, CME Clearing or Eurex Clearing. Similarly, India should then be able to withstand a credit event at its largest CCPs like CCIL or NSCCL.
Post crisis, regulators have expended much energy on resolution mechanisms to eliminate the “too big to fail” problem. I think resolution mechanisms need to draw upon lessons learnt from securitization and CDOs about how to make this work. I often say that the key purpose of resolution is not to ensure that firms do not die, but to ensure that when they do die, there are no stinking corpses. CDOs and securitization SPVs have shown how this can be done effectively – these methods have proven themselves on the ground and have stood the test of time. Instead of designing resolution mechanisms on a clean slate, regulators should take these proven methods and extend their scope and application to cover large swathes of the financial sector.
Posted at 7:05 pm IST on Sun, 6 Jan 2013 permanent link
Categories: bankruptcy, exchanges
Contractual living wills and liquidation efficiency
Gary Gorton and Andrew Metrick published a fantastic paper last month on “Securitization” (NBER Working Paper 18611). This paper contains a wealth of information, a detailed survey of the literature and a number of very interesting theoretical ideas. What I found most interesting is the idea that the most important benefit of securitization could be a reduction in bankruptcy costs. In passing, Gorton and Metrick talk about “contractual living wills”, a set of contractual arrangements in securitization that have some similarities to the living wills that are being proposed as mechanisms to enable easy resolution of banks in the post crisis regulatory reforms. I think this analogy is worth pursuing even further.
In a securitization, all the assets and liabilities are housed in a Special Purpose Vehicle (SPV) which is structured in such a way as to make bankruptcy all but impossible. Gorton and Metrick see this as a big part of the economic function of securitization:
... the SPV cannot become bankrupt. This was an innovation. That is, the design of SPVs to have this feature is an important part of the value of securitization. Moreover, it has economic substance. Since the cash flows are passive, there are no valuable control rights over corporate assets to be contested in a bankruptcy process. Thus, it is in all claimants’ interest to avoid a costly bankruptcy process. (Page 19)
If the assets perform badly and the cash flows from the assets are not sufficient to pay all the coupons, the SPV does not enter bankruptcy – instead the available funds are used to pay the senior claimants early while writing down the liabilities to the junior claimants. Gorton and Metrick call this a contractual living will (Page 8). But I think it is much more than the living wills that post crisis banks are being required to prepare for themselves. It is not just that the SPV waterfall rules are contractual and therefore self implementing unlike the wishful thinking that goes into the living wills of the banks. What is more important is that the SPV waterfall rules constitute a contractual bail-in arrangement whereby the junior claimants’ principal gets written down to restore the solvency of the SPV. Similarly liquidity problems are automatically addressed by extending maturities contractually. (It is not uncommon to see securitization structures in which the expected weighted average life of a securitization tranche is only 5 years, but its rated and legal final maturity is 30 years.)
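A securitization waterfall of this kind is simple enough to express in a few lines of code – a minimal sketch with hypothetical tranche sizes and cash flows:

```python
# Minimal sketch of a securitization-style "contractual living will": cash
# pays tranches in order of seniority and shortfalls write down the junior
# claims, with no bankruptcy process. Tranche sizes and cash are hypothetical.

def waterfall(cash, tranches):
    """tranches: list of (name, claim) ordered senior to junior."""
    for name, claim in tranches:
        paid = min(cash, claim)
        cash -= paid
        print(f"{name:>9}: claim={claim:5.0f}  paid={paid:5.0f}  written down={claim - paid:5.0f}")

# Assets underperform: only 80 of cash against 100 of claims.
waterfall(80, [("senior", 70), ("mezzanine", 20), ("equity", 10)])
```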
Gorton and Metrick are right to point out that some of these things are easy to do because the cash flows of an SPV are passive and therefore there is no judgement required to manage them. The SPV is “brain dead” and is completely governed by contract. But I think that resolution of banks and other financial institutions can learn a lot from the SPV liquidation arrangements. Failed institutions can often be put in run-off mode where most of the management can be passive. Private ordering usually fares better than complex regulatory mechanisms.
It is also possible for a business segment to be put into SPV style liquidation arrangements (with near zero bankruptcy costs) while the rest of the institution runs normally. Many central counterparties (CCPs or clearing corporations) have framed rules under which, if the losses in a particular segment exceed a certain threshold, loss allocation mechanisms kick in that would effectively shut down that segment – contractual bail-in eliminates bankruptcy. I think regulators should consider mandating such contractual provisions that make it impossible for a CCP to go bankrupt. CCPs should be allowed to fail, but the failure should not involve bankruptcy. Post crisis, many CCPs are beginning to clear very risky products that make it extremely likely that a large CCP in a G-7 country would fail in the next decade or so. Contractual living wills and contractual bail-ins would prevent such a failure from being a catastrophic event.
I think it is also possible to convert a failed bank into a CDO that is put into run-off mode with contractual provisions governing the loss allocations without any need for formal bankruptcy at all. Nearly seven years ago (well before the global crisis), I wrote in a blog post that “Having invented banks first, humanity found it necessary to invent CDOs because they are far more efficient and transparent ways of bundling and trading credit risk. Had we invented CDOs first, would we have ever found it necessary to invent banks?” Even if we do not want to replace all banks by CDOs, we can at least replace failed banks by CDOs that are “liquidation efficient” in Gorton and Metrick’s elegant phrase.
Posted at 8:51 pm IST on Tue, 1 Jan 2013 permanent link
Categories: bankruptcy, bond markets
We assume ...
The Basel Committee on Banking Supervision (BCBS) issued a Consultative Document on "Revisions to the Basel Securitisation Framework" earlier this month. Under "Key assumptions and theoretical underpinnings", I found this gem:
Another important assumption embedded in the RBA recalibration is that the same ratings for structured finance and corporate exposures imply the same expected loss rates for investors. One implication of this is that it is assumed that rating agencies will “fix” or have fixed the errors in rating methodologies for structured finance that were revealed during the recent crisis.
Just to put that assumption in perspective, the plot below shows the difference in default rate over the last three decades (and not just during the crisis) between Asset Backed Securities (ABS) or structured finance and corporate paper with the same ratings. The source for the underlying data is Standard and Poor’s via a recent paper by Gorton and Metrick (Table 8) that I hope to blog about soon.
I used to make the assumption that the regulators will “fix” or have fixed the errors in regulatory capital calculation methodologies for structured finance that were revealed during the recent crisis. That assumption must now be abandoned.
Posted at 10:01 pm IST on Sat, 29 Dec 2012 permanent link
Categories: credit rating
Mutual funds bailing out affiliated funds
Two years ago, I blogged about a paper showing that mutual funds support the share prices of their parent banks in Spain. The authors of that paper (Golez and Marin) however argued that Spain is a country “where this type of activities are not closely monitored nor severely prosecuted and punished by the authorities”. In reality, however, this kind of behaviour knows no geographical boundaries.
Pinto and Schmidt study mutual funds in the United States and show that when a mutual fund has difficulty selling illiquid shares in response to redemption pressures, other funds in the same family very conveniently buy the shares and avoid a fire sale. The buying fund (usually a larger and more liquid fund in the same family) suffers a performance loss by absorbing these shares without a sufficiently large price discount.
The takeaway is that if you are buying an illiquid mutual fund, you should buy a fund run by a large mutual fund family to benefit from the liquidity insurance provided by its affiliates. But if you are buying a liquid mutual fund, then you should avoid funds that belong to large families to avoid the performance drag arising from supporting their affiliates.
Posted at 9:06 pm IST on Mon, 24 Dec 2012 permanent link
Categories: banks, mutual funds
Are bond market bubbles quiet or loud?
Harrison Hong and David Sraer have a nice paper on quiet bubbles at NBER. They argue that while equity bubbles are very loud (high trading volumes in the bubble stocks), debt market bubbles are very quiet (trading volumes actually decline in bubble bonds). This prediction of their theoretical model is vindicated in the empirical data about bond trading volumes during the 2003-2007 credit boom.
In the theoretical model, the quietness of the bubble is driven by the limited upside in bond prices which makes heterogeneity of beliefs less important. By contrast loud equity bubbles arise from large disagreements about fundamentals coupled with short sale restrictions.
This characterization is both interesting and plausible. But, I think that it ignores the relative importance of the primary market in bonds as compared to equities. The annual issuance of stocks is less than two days’ trading in the secondary market which means that the primary market can be ignored in any discussion of the loudness of an equity bubble. In corporate bonds however, the annual issuance is 82 days of secondary market trading, and this is by no means negligible. (Both these pieces of data are from Dealbreaker). Issuance of bonds is analytically the same as shorting of bonds, and a large primary market also serves to attenuate short sale restrictions in the bond markets.
It appears to me that bond market bubbles are very loud when looked at in terms of a rise in issuance activity. It is perhaps for this reason that most attempts to measure credit market bubbles focus on the growth in the amount of credit outstanding which is entirely a measure of the primary market activity.
A closely related point is that the limited upside in bond prices means that a lot of leverage is required to exploit any divergence of opinion. This too leads to increased issuance of debt (typically short term) which leads to increased fragility of the financial system. This implies that bond market bubbles also pop very loudly.
Posted at 10:59 am IST on Sat, 22 Dec 2012 permanent link
Categories: bond markets, bubbles
Single stock futures and promoter share pledges
I participated in a discussion on CNBC TV18 about the trading of single stock futures on companies whose promoters have pledged a significant portion of their shareholding. (You can watch this show here (Part 10) though I participated only by audio).
The discussion was about a proposal that companies whose promoters have pledged a significant fraction of their shareholding should be punished by stopping trading in single stock futures in the shares of these companies. My views were as follows:
- Allowing trading of single stock futures should not be seen as a mark of prestige for the company or any form of a reward or seal of approval. The single stock future exists to meet the need of investors and not the needs of the companies or of the exchanges.
- Single stock futures are the easiest way to short shares, and so banning single stock futures would be a mild form of banning short selling. Short selling is an absolutely essential counterweight for the leveraged long. Restricting short selling when the promoters are leveraged long would essentially be a way of bailing them out and protecting them from sharp falls in their share prices.
- Large leveraged longs create a cliff risk for the share price – if the price falls far enough to produce large margin calls for the leveraged buyers, then the price can just fall off a cliff due to distress liquidation of the pledged shares by the lenders. The answer to this is more stringent margins when cliff risk is high either because of a large leveraged long position (or on the opposite side, because of a large short interest).
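A toy simulation (with made-up numbers) shows how the cliff works:

```python
# Toy illustration (made-up numbers) of cliff risk: above the lender's
# trigger the price moves normally; once the trigger is breached, pledged
# shares are dumped block by block and each distress sale pushes the price
# down further.

TRIGGER = 80.0
IMPACT_PER_BLOCK = 0.93    # each forced block sale knocks ~7% off the price
PLEDGED_BLOCKS = 5

price = 82.0
print(f"normal market: {price:.1f}")
price = 79.0               # a modest further fall breaches the trigger
for block in range(1, PLEDGED_BLOCKS + 1):
    price *= IMPACT_PER_BLOCK
    print(f"pledged block {block} dumped: price {price:.1f}")
```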
Posted at 9:18 pm IST on Mon, 17 Dec 2012 permanent link
Categories: corporate governance, derivatives, short selling
The absurdity of the leveraged super senior trade
The Financial Times has a couple of stories (behind a paywall) about the Leveraged Super Senior (LSS) trades of Deutsche Bank during the financial crisis. Grossly oversimplified, the story is roughly on the following lines:
- Let us say that in the heady days before the global financial crisis, a US bank seeks protection against catastrophic default losses on a portfolio of leading companies from around the world. Say on a portfolio of $125 million, it is willing to absorb the first $25 million of credit losses and wants protection only on losses above this threshold. This is like catastrophe insurance in that losses on this scale would probably require a second great depression.
- The German bank provides this catastrophe insurance for a modest premium to many such banks on a truly colossal scale – apparently to the tune of $130 billion.
- The German bank then turns to a bunch of Canadian pension funds to offload this default risk, and the Canadians invest in massive amounts of Leveraged Super Senior (LSS) securities that embed this catastrophic default risk for a modest risk premium. At this point, it appears that the Germans have locked in a tiny spread and gotten rid of all the risk.
- There is a catch though. The Canadian fund that bought an LSS on say a billion dollar notional put in only $100 million of cash collateral. And no, we are not talking about counterparty risk here. The Canadian fund did not assume a $1 billion obligation. They simply had an option to post more collateral and keep the security alive if losses threatened to eat away the original $100 million of cash collateral. But if they chose not to do so, the Canadians were perfectly within their rights to walk away, and the German bank would simply have to unwind the whole structure at prevailing market prices (assuming there is a market at that point).
- The German bank models the LSS on the assumption that the Canadians will keep posting more collateral. These models imply that the LSS is almost as good as an outright hedge of the entire $1 billion notional, and subtract a small amount (the “gap option”) to account for the risk that the Canadians will walk away.
- A proper analysis is provided by Gregory’s paper of 2008 which is also referenced in the Financial Times story. Figure 2 of this paper provides a succinct summary of the situation. Gregory says that instead of treating the (Canadian) LSS as a $1 billion hedge less a small correction for the gap option, we should actually treat it as only a $100 million hedge plus a small correction for the deleveraging option. Gregory also argues that under most situations, it would be suboptimal for the (Canadian) investors to post more collateral and therefore this positive correction is quite small. In other words, the gap option approach is really wrong. I strongly recommend reading Gregory’s paper in its entirety; it may appear mathematically forbidding, but it is a lot more readable than it looks. (A toy numerical sketch of Gregory’s point follows this list.)
- On top of all this, it is now being alleged that the German bank at one stage even stopped bothering to subtract the small gap option.
- During the global financial crisis, when the risk of a second great depression began to appear a little less remote, the German bank woke up to the fact that the Canadian LSS was denominated in Canadian dollars while the protection that it had provided to the US banks was denominated in US dollars. There was a currency mismatch and somebody had to worry about how the CAD/USD exchange rate would behave in an end of the world scenario.
- The only person that they could find willing to take a view on this fiendishly complex “quanto” risk (and put his money where his mouth is) was Warren Buffett. His Berkshire Hathaway pocketed a $75 million premium for covering this risk, but very cleverly limited its risk to $3 billion. I suspect that Warren Buffett did not try to value this quanto derivative at all, but simply calculated that he was being paid a 2.5% premium to cover this risk. In comparison to most insurance deals, this must have appeared to be a very fat premium. To an insurer who is accustomed to working with physical probabilities rather than the risk neutral probabilities that are really relevant here, this must have looked like a bet that one could take blindfolded, and maybe that is what Berkshire did. Warren Buffett screams about weapons of mass destruction when he loses money on derivatives; he just keeps quiet and pockets the cash when he makes money on them.
- The German bank concluded that since losses in excess of $3 billion were extremely unlikely, the quanto risk was completely covered and they could stop worrying about it.
- If you are worried that a large portfolio of top grade global corporations could experience default losses of 20-25%, whom would you buy insurance on this from? Most certainly not from a bank! The best run bank in the world would be broke long before this scale of default losses appears on a high quality corporate credit portfolio. This is worse than buying insurance on the Titanic from somebody who is himself on the Titanic. It is like buying insurance on a lifeboat (after the Titanic has sunk) from somebody who is swimming in the water without even a raft. There is only one situation where this hedge makes economic sense – if you are sure that you are dealing with a systemically important bank that would be bailed out if it fails. In this case, of course, you are buying insurance from the German taxpayer and that probably makes sense. The other reason for doing this is not economic at all – maybe the only reason for doing the trade was to save regulatory capital. In this case, of course, it does not matter whether you would ever collect on this insurance; maybe you would not be around to collect either.
- The Canadian pension fund which posts collateral upfront is obviously a far more credible seller of protection than a bank that is probably levered 30:1. The only problem is that the buyer now needs to model the “gap risk” and a large global bank obviously has a comparative advantage in browbeating its regulators into accepting a deeply flawed valuation of a complex hedge.
- I am not totally convinced that even Berkshire Hathaway is a credible seller of protection on a great depression risk particularly on a complex quanto risk.
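Gregory’s central point can be seen in a toy payoff comparison (hypothetical numbers):

```python
# Gregory's point with hypothetical numbers: if the LSS investor walks away
# rather than post more collateral, the protection buyer's recovery is capped
# at the collateral. The trade is a $100m hedge plus a small option, not a
# $1bn hedge minus a small "gap option".

NOTIONAL = 1_000     # $ million of super senior protection notionally bought
COLLATERAL = 100     # $ million of cash actually posted by the LSS investor

for loss in [0, 50, 100, 300, NOTIONAL]:  # losses on the protected tranche
    modelled = loss                        # hedge value assuming collateral is always topped up
    realistic = min(loss, COLLATERAL)      # hedge value if the investor simply walks away
    print(f"tranche loss {loss:>5}m: modelled hedge {modelled:>5}m, walk-away hedge {realistic:>4}m")
```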
All this reminds me of those fallacious mathematical proofs that use division by zero to prove that 1 equals 2. All these proofs work by creating a significant amount of needless complexity in the midst of which the audience does not notice that somewhere in the long chain of reasoning, you have actually divided by zero. The same thing is happening here; you need a significant amount of complexity to ensure that the regulators do not observe that some risks have slipped between the cracks unobserved waiting to be picked up by the unwary taxpayer.
Posted at 5:03 pm IST on Sun, 9 Dec 2012 permanent link
Categories: derivatives, risk management
When regulation collides with free speech
In a ruling earlier this week that has implications for regulations in other fields (including finance), the US Court of Appeals (second circuit) concluded that “the government cannot prosecute pharmaceutical manufacturers and their representatives under the FDCA for speech promoting the lawful, off-label use of an FDA-approved drug.” The US Food and Drug Administration, when approving a drug for certain (on-label) purposes, does not prohibit physicians from prescribing the drug for other (off-label) purposes, but prohibits the drug companies from marketing the drug for off-label purposes. The Court ruled that the FDA could ban off-label use if it chose to, but could not permit such use and then require some parties to keep quiet about it:
... prohibiting off-label promotion by a pharmaceutical manufacturer while simultaneously allowing off-label use “paternalistically” interferes with the ability of physicians and patients to receive potentially relevant treatment information; such barriers to information about off-label use could inhibit, to the public’s detriment, informed and intelligent treatment decisions. (Page 44)
Financial regulators are also in the habit of regulating speech in all kinds of situations – the US SEC’s infamous quiet period rule is a good example. The Circuit Court ruling quotes a Supreme Court judgement that “regulating speech must be a last – not first – resort”. This is something that all regulators particularly in the financial sector must bear in mind.
Posted at 2:06 pm IST on Thu, 6 Dec 2012 permanent link
Categories: law, regulation
Nice Finance Quotes
I came across two nice quotes related to finance recently:
- “Risk is the pollution created by the process of making money. So where you find people making one you will surely find them hiding the other. ” — David Malone (Golem XIV) (h/t Deus Ex Macchiato).
I think of this as a more forceful way of stating the “No free lunch” form of the Efficient Market Hypothesis in the post crisis world where there is no risk free asset.
- “Accounting is the beginning of all economic wisdom, but not the end.” — Willem H. Buiter and Ebrahim Rahbari, Target2 Redux: The simple accountancy and slightly more complex economics of Bundesbank loss exposure through the Eurosystem.
In my experience, most finance MBAs are unwilling to accept the first half of the statement, while most accountants ignore the second half of the statement.
Posted at 3:10 pm IST on Fri, 30 Nov 2012 permanent link
Categories: accounting, miscellaneous, risk management
What is front running?
A recent order of the Securities Appellate Tribunal in India has raised quite a furore over the precise meaning of front running. The Tribunal ruled that Regulation 4(2)(q) of the Fraudulent and Unfair Trade Practices Regulations prohibits front running by intermediaries like stock brokers but not by others.
Many people argue (a) that front running by any entity should be prohibited, and (b) that even in the absence of such a prohibition, front running is a fraud on the market that is covered by the general anti-fraud regulations. I do not wish to comment on the particular case that was decided by the Tribunal, but I do think that we should be careful about criminalizing any and all forms of front running. Front running by brokers and other intermediaries is a breach of the fiduciary obligation that they owe their clients, but it is not self-evident that every Tom, Dick and Harry on the planet has any fiduciary obligation not to front run orders that they expect other people to place. In fact, this form of front running is legal and common throughout the world.
The most famous example of such legal front running occurred when the giant hedge fund LTCM (Long Term Capital Management) was near its death in 1998. LTCM brought in Goldman Sachs to help raise new money to recapitalize itself. Goldman flatly refused to sign a Non-Disclosure Agreement (NDA) when requested to do so, but LTCM was so desperate that they let Goldman do due diligence anyway. What happened thereafter has been described by many people. Here is the account in Sebastian Mallaby’s More Money than God: Hedge Funds and the Making of a New Elite, New York, Penguin Press (pages 239-240):
[Goldman's] proprietary trading desk was selling positions that resembled LTCM’s, feeding on Long-Term like a hyena feeding on a trapped but living antelope. The firm made only a qualified effort to defend what it was up to. A Goldman trader in London was quoted as saying: “If you think a gorilla has to sell, then you sure want to sell first. We are very clear on where the line is; that’s not illegal.” Corzine himself conceded the possibility that Goldman “did things in markets that might have ended up hurting LTCM. We had to protect our own positions. That part I’m not apologetic for.”
The critical point that separates Goldman’s actions from the zone of illegality is its refusal to sign the NDA. This very clearly highlights that the prohibition of front running is rooted in a fiduciary obligation – take that fiduciary duty away and there is nothing immoral or even illegal about trading ahead of somebody else. In fact, the practice is so widespread that in the finance literature, there is a technical term for it – predatory trading (Markus K. Brunnermeier and Lasse Heje Pedersen (2005) “Predatory Trading”, The Journal of Finance, 60(4), pp. 1825-1863). Brunnermeier and Pedersen identify several situations where this kind of front running occurs routinely:
- Hedge funds with (nearing) margin calls may need to liquidate, and this could be known to certain counterparties such as the bank financing the trade.
- Similarly, traders who use portfolio insurance, stop loss orders, or other risk management strategies can be known to liquidate in response to price drops.
- A short-seller may need to cover his position if the price increases significantly or if his share is recalled (i.e., a “short squeeze”).
- Certain institutions have an incentive to liquidate bonds that are downgraded or in default.
- Intermediaries who take on large derivative positions must hedge them by trading the underlying security.
This list is by no means exhaustive – there is a whole industry devoted to front running index mutual funds as they trade at an index reconstitution, or commodity Exchange Traded Funds as they roll their futures positions over to the next month.
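To see the economics of predatory trading concretely, here is a toy simulation with a linear permanent price impact. This is only a stylized sketch (all numbers are hypothetical, and it is far cruder than the Brunnermeier and Pedersen model): the predator sells ahead of a known forced liquidation and buys back at the depressed price, and the profit comes out of the distressed seller’s pocket.

```python
# Toy sketch of predatory trading ahead of a forced liquidation.
# All numbers are hypothetical; price impact is linear and permanent,
# and the predator's own buy-back impact is ignored for simplicity.

IMPACT = 0.01    # price change per unit of net order flow (assumed)
P0 = 100.0       # pre-distress price (assumed)

def price(cumulative_net_flow):
    """Price after a given cumulative net order flow (sales negative)."""
    return P0 + IMPACT * cumulative_net_flow

forced_sale = 1000    # units the distressed fund must liquidate
predator_sale = 200   # units the predator sells ahead of the liquidation

# Predator sells first; its average fill is halfway down its own impact.
avg_sell_price = (price(0) + price(-predator_sale)) / 2
# The distressed fund then liquidates into an already falling market,
# and the predator covers at the post-liquidation price.
cover_price = price(-predator_sale - forced_sale)

pnl = predator_sale * (avg_sell_price - cover_price)
print(f"Predator sells at about {avg_sell_price:.2f}, covers at {cover_price:.2f}")
print(f"Predator profit: {pnl:.0f} (borne by the distressed seller)")
```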
Front running can happen even with much more imprecise forecasts of other people’s orders. A paper forthcoming in the Journal of Financial Economics (Sophie Shive and Hayong Yun “Are mutual funds sitting ducks?” available here or in working paper version here) shows that:
We find that patient traders profit from the predictable, flow-induced trades of mutual funds. In anticipation of a 1%-of-volume change in mutual fund flows into a stock next quarter, the institutions in the same 13F category as hedge funds trade 0.29–0.45% of volume in the current quarter. ... A one standard deviation higher measure of anticipatory trading by a hedge fund is associated with a 0.9% higher annualized four-factor alpha. A one standard deviation higher measure of anticipation of a mutual fund’s trades by institutions is associated with a 0.07–0.15% lower annualized four-factor alpha.
On the opposite side, there is a paper showing that hedge funds short stocks ahead of expected sales by mutual funds experiencing large redemptions (Joseph Chen, Samuel Hanson, Harrison Hong and Jeremy C. Stein (2008) “Do Hedge Funds Profit From Mutual-Fund Distress?”, NBER Working Paper No. 13786).
In short, finance is an ugly world, very similar to the African savannah where the lion lives only if it can outrun the slowest gazelle and the gazelle lives only if it can outrun the fastest lion. We may not like it, but I am not sure that it is practical or desirable to criminalize all this.
I repeat that I am not expressing any view on the case that was before the Tribunal; I am only responding to suggestions that any and all forms of front running should be criminalized.
Posted at 5:20 pm IST on Mon, 26 Nov 2012 permanent link
Categories: insider trading, manipulation, regulation
Is finance dumbing us down?
Gerald Crabtree’s scary paper on “Our Fragile Intellect” (h/t Paul Kedrosky) has only one sentence about finance, but it is a damning one:
Needless to say a hunter-gatherer that did not correctly conceive a solution to providing food or shelter probably died along with their progeny, while a modern Wall Street executive that made a similar conceptual mistake would receive a substantial bonus.
I have absolutely no idea whether Crabtree’s speculations about the genetic fragility of human intelligence are correct or not, but I am certain that the problem of moral hazard in finance is a real one.
Meanwhile, finance students who complain about the technical complexity of modern finance would do well to ponder Crabtree’s claim that “life as a hunter-gatherer required at least as much abstract thought as operating successfully in our present society”. Computing the right way to hedge a complicated derivative is hard, but as Crabtree would say, “non-verbal comprehension of things such as the aerodynamics and gyroscopic stabilization of a spear while hunting a large dangerous animal” is probably just as hard or harder.
Posted at 1:35 pm IST on Mon, 26 Nov 2012 permanent link
Categories: behavioural finance, regulation
Irrational but arbitrage free
Last month I blogged about the Palm-3Com episode as an instance of prices being horribly wrong without there being an arbitrage opportunity (“free lunch”). Last week John Hempton of Bronte Capital had a blog post about Great Northern Iron Ore Properties (GNIOP) whose overpricing is extremely easy to establish. The post concludes by saying:
Because that is so well known the stock has a 20 percent borrow cost -- roughly offsetting the profit you will get from shorting it. In that sense there is a rational market. But that is the only sense there is a rational market. People own this. They will lose money.
Great Northern Iron Ore Properties is a trust set up in 1906 by the Great Northern Railway for regulatory reasons (apparently, under the Hepburn Act of 1906, no railroad was permitted to haul commodities that it had produced itself).
The 1906 agreement states that the Trust shall continue for twenty years after the death of the last survivor of eighteen persons named in the Trust Agreement. I would imagine that this provision has something to do with the rule against perpetuities. Anyway, the last survivor of these eighteen persons died on April 6, 1995, and the Trust therefore terminates twenty years later, on April 6, 2015.
All this is very clearly disclosed on the GNIOP website and in SEC filings. It is clear, therefore, that investors in GNIOP will get dividends for a couple of years and then a final distribution in 2015; these dividends can be estimated within reasonable bounds, and discounting this short stream of dividends gives the fundamental value of GNIOP. But a careless investor who applies a PE multiple to GNIOP, wrongly assuming a perpetual stream of dividends, would arrive at an absurdly high valuation and consider the stock hugely undervalued. Apparently, some investors (humans and computers alike) who use simple stock screens based on PE ratios are eagerly buying the stock, making it overvalued in relation to the true short stream of dividends.
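A back-of-envelope calculation shows how large the error from the perpetuity assumption can be. The numbers below are made-up round figures, not estimates of GNIOP’s actual distributions; the point is only the gap between the two valuations.

```python
# Contrast between discounting a short remaining dividend stream and
# naively capitalizing the dividend as a perpetuity (which is what a
# simple PE or dividend-yield screen implicitly does). All inputs are
# hypothetical round numbers, not GNIOP's actual cash flows.

r = 0.10             # discount rate (assumed)
dividend = 15.0      # annual dividend per share (assumed)
final_payout = 8.0   # terminal distribution at wind-up (assumed)
years_left = 3       # trust terminates in a few years

# Correct: discount only the short remaining stream plus final payout.
fundamental = sum(dividend / (1 + r) ** t for t in range(1, years_left + 1))
fundamental += final_payout / (1 + r) ** years_left

# Careless: treat the dividend as a perpetuity.
naive = dividend / r

print(f"Fundamental value (finite stream): {fundamental:.2f}")   # about 43
print(f"Naive perpetuity value:            {naive:.2f}")         # 150
```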
Rational investors see a free lunch and step in to short the stock. Markets abhor free lunches, and the stock borrow cost rises to the point where it eliminates the free lunch. But that is not enough to correct the distorted price.
Posted at 8:54 pm IST on Mon, 19 Nov 2012 permanent link
Categories: arbitrage, market efficiency
On rating Rembrandts
No, this post is not about the masterpieces of the Dutch painter Rembrandt van Rijn, but about the Rembrandt CPDO (Constant Proportion Debt Obligation) notes created by the Dutch bank ABN Amro in 2006. The Federal Court of Australia ruled last week that:
S&P’s rating of AAA of the Rembrandt 2006-2 and 2006-3 CPDO notes was misleading and deceptive and involved the publication of information or statements false in material particulars and otherwise involved negligent misrepresentations to the class of potential investors in Australia ... because by the AAA rating there was conveyed a representation that in S&P’s opinion the capacity of the notes to meet all financial obligations was “extremely strong” and a representation that S&P had reached this opinion based on reasonable grounds and as the result of an exercise of reasonable care when neither was true and S&P also knew not to be true at the time made. (Summary, Para 53)
The judgement is indeed very long – Felix Salmon says that Jayne Jagot’s judgement “runs to an astonishing 635,500 words, or almost 1,500 pages: it’s literally longer than War and Peace”. I agree with Felix that the judge does a remarkable job of understanding this complex instrument and analyzing the intricacies of rating it. She has obviously benefited from the testimony of numerous experts, but she still deserves full credit for the clarity of her analysis of the key drivers of the performance of a CPDO. The court is thus able to arrive at a cogently argued conclusion that “S&P’s modelling and assignment of the AAA rating was not such as a reasonably competent ratings agency could have carried out and assigned in all of the circumstances.” (Summary, Para 27)
There is a wealth of information in the judgement about the modelling of CPDOs, but from the point of view of legal liability of the rating agency, there are three crucial hurdles to overcome:
- Investors have a responsibility to perform their own credit assessment.
- The court cannot become the regulator of rating agency methodology.
- Rating agencies do not insure investment performance.
The court deals with each of these forcefully, but I am sure there will be a great deal of debate about whether the views of the court are correct. On the investors’ responsibility to perform their own credit assessment, the court says:
I consider the proposition that a prudent person must not invest in any product they do not themselves understand problematic. It suggests that a prudent person could never take and rely on advice. It suggests that a prudent person who had been advised that a particular investment should be made must reject the advice if they themselves are capable of understanding the advice but incapable of understanding the way in which the investment operates. It is the equivalent of saying that only people who truly understand the principles of flight should be allowed to travel by plane. It seems to me that the rigidity of the proposition is a recipe for imprudence. Prudent people do not assume they know or can know everything. They do not assume that they are best placed to assess every fact, matter or thing. They do not assume that their own limitations dictate what can and cannot prudently be done. Prudence does not involve solipsism. (Para 1472).
Prudent people seek to identify others who are best placed and have demonstrated they can be trusted to assess relevant facts, matters and things. ... All of the councils relied on: (i) the belief – which LGFS’s conduct had induced – that LGFS, as specialists in local government financial markets and investments, had applied its expertise to the CPDO and assessed it to be a suitable investment for councils to make, and (ii) the belief that S&P had applied its expertise as a body specialising in assessing the creditworthiness of financial products and had concluded that this product warranted the highest possible rating of AAA in respect of interest and principal. The councils’ beliefs to this effect were reasonable in the circumstances and, indeed, were correct. For the councils to refuse to invest in these circumstances, by reason only of the fact that they did not understand how the product operated, does not accord with the dictates of prudence. (Para 1473).
On the issue of the court becoming the regulator of rating agency methodology, the judgement says:
It is also not the case that the councils “seek to place the Court in the untenable position of being the regulator of rating agency methodology”. This is an inaccurate description of the issues in this case. As will be apparent from the discussion and findings below this is not a case about alternative methods of rating, questions of reasonable qualitative judgment or whether one or other method or judgment is to be preferred or is superior to another. This is a case about what S&P did and did not do and whether any reasonable ratings agency could have so conducted itself. It is not a case about the appropriateness or otherwise of a rating. It is a case about negligence and misleading and deceptive conduct. (Para 2482)
On the argument that rating agencies are not insurers, the court says:
The imposition of a duty of care in this case does not transform S&P into an insurer of investment performance. It does no more than ensure that S&P, if it chooses to earn money from holding itself out as having specialised expertise in ascertaining the creditworthiness of structured financial products, knowing that it can do so because many potential investors do not have or cannot practically access the same expertise, exercises reasonable care in the assigning of ratings to structured financial products. The criterion for potential liability in respect of such a duty of care is not the performance of the product. The performance of the product determines the potential for loss and thus completion of the potential cause of action. But breach of the proposed duty cannot be determined by reference to the performance of the product. As S&P correctly said the assigning of a rating of a structured financial product embodies a forward-looking opinion about creditworthiness assigned at a particular time. The ratings agency either did or did not exercise reasonable care at that particular time. (Para 2799)
But I am not a lawyer, and my interest is not in legal liability but in financial modelling. I have been asking myself a more fundamental question: could this note have been rated at all (not a AAA rating, but any rating at all)? The Australian court does conclude that the Rembrandts were securities and not derivatives for the purposes of the Corporations Act, but economically they are more derivatives than bonds. As the court put it: “However else it might be described, the CPDO was ultimately an extraordinarily complicated bet on the future performance of two CDS indices over a period of up to 10 years.” (Summary, Para 9). The risk in the Rembrandts is market risk rather than credit risk. Yes, the derivatives are credit derivatives, but the Rembrandts could be cashed out at a 90% loss of principal not because there were too many defaults on the names underlying the credit derivatives, but because of the movement of the credit spread. Counter-intuitively, this could happen if the credit spread were too low rather than too high.
Another way of looking at it is that the Rembrandts were a bet that the risk premium embedded in the credit spread (more precisely, the CDS spread) could be harvested in a “safe” manner. If the credit spread were, say, 1% while the expected default losses were only 0.20%, the notes would be expected to make 0.80% annually by selling CDS (before accounting for leverage, which could be as high as 15:1). In finance jargon, the Rembrandts were betting that the risk-neutral expected default loss (say 1%) is much higher than the real-world expected default loss (say 0.20%), the balance being just a risk premium. The complexity of the CPDO structure is all about (a) making this a leveraged bet and (b) dynamically adjusting the leverage ratio to deliver a bimodal outcome in which the investor either gets back full principal with a coupon 1.90% above the risk-free rate or gets cashed out at a 90% loss of principal. To a finance theorist, there is something absurd about a risk-free (AAA) instrument yielding 1.90% above the risk-free rate. It is almost axiomatic that there is no risk-free way of harvesting risk premia.
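It is worth making the arithmetic of this bet explicit. The sketch below uses the illustrative numbers above (a 1% spread, 0.20% expected default losses, 15:1 leverage) together with an assumed spread duration for the CDS indices; it shows how an attractive-looking carry coexists with devastating mark-to-market risk:

```python
# Back-of-envelope arithmetic of the CPDO carry bet, using the
# illustrative numbers in the text plus an assumed spread duration.
# This is a sketch of the bet's logic, not of the actual Rembrandt
# structure or its leverage rules.

spread = 0.010          # CDS spread earned by selling protection
expected_loss = 0.002   # real-world expected default losses (assumed)
leverage = 15           # notional sold as a multiple of principal

# Carry if realized defaults match the real-world expectation:
carry = leverage * (spread - expected_loss)
print(f"Expected annual carry on principal: {carry:.1%}")    # 12.0%

# The same leverage magnifies mark-to-market moves in the spread.
spread_widening = 0.010   # a 1 percentage point widening
spread_duration = 4.5     # assumed duration of the CDS index position
mtm_loss = leverage * spread_duration * spread_widening
print(f"Mark-to-market loss on principal:  {mtm_loss:.1%}")  # 67.5%
# A move of this size pushes the structure toward its 90% cash-out
# trigger -- market risk that a credit rating is not designed to capture.
```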
It appears to me that such instruments should not be rated at all. Analyzing the probability of loss in the Rembrandt makes no sense without taking into account that the loss in case of default is 90%, and not the much smaller losses seen on AAA corporate bonds. Comparing the loss probability of the Rembrandt with that of a AAA corporate bond over a ten-year horizon is equally meaningless: AAA rated corporate bonds default, if at all, only after several years, whereas the biggest risk of loss in a CPDO like the Rembrandt is in the early years when the leverage is very high. A 0.28% probability of loss over ten years might be consistent with a AAA rating, but a AAA rated corporate bond also has less than a 0.01% default probability over the first two years, not the 0.06% one might expect by spreading the 0.28% evenly over ten years.
Posted at 3:33 pm IST on Mon, 12 Nov 2012 permanent link
Categories: credit rating, derivatives, law, regulation
Parochialism of national and global exchanges
I blogged three years ago about how Indian exchanges pretend to be national in their scope, but shut down when conditions in their home city make it convenient to take a holiday. Their equally parochial regulators are also complicit in this. (I have been critical of the 9/11 closures in the US as well).
This week, as hurricane Sandy hit the east coast of the US, it was the turn of the big US exchanges with global footprints to reveal their parochialism. Their regulator was also happy to endorse their decision to shut down. It was left to a former Chairman of the US SEC, Arthur Levitt, to state the obvious:
If you’re going to have a stock exchange, it should have a backup facility of some sort so that regional events don’t cause its closure, ... This should not happen to the world’s most prominent exchange.
The response of the NYSE CEO was that Arthur Levitt “may be a little out of date with the facts.” No, it is the exchanges and their regulators who are out of date with the facts – somebody forgot to tell them that modern exchanges are not trading floors subject to the vagaries of the local weather, but electronic networks that can be rerouted very easily. And no, the difficulty that New York brokers faced in getting to their offices is no excuse for shutting a national exchange. By this logic, they should have shut the NYSE when hurricane Katrina struck New Orleans; surely, brokers based there would have had great difficulty reaching their offices.
In my experience, backup sites in the financial industry are a big joke. Typically, these systems are set up only to satisfy box-ticking regulators who require backup sites to exist but do not bother to check whether they are actually adequate. Many backup systems have significantly less processing capacity than the main site. Moreover, they are not designed to run the full suite of software that runs on the main system. Given the willingness of spineless regulators worldwide to shut down national financial market places at the drop of a hat, this reluctance to spend money on genuine backup sites is entirely rational.
I am convinced that regulators should simply force each institution to operate out of its backup site on a few random days each year. Institutions should get very minimal notice (otherwise, they would fly their entire management team down to the backup site to make it work). The accountable algorithms that I blogged about recently are ideal for ensuring that the dates are indeed randomly chosen.
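A minimal sketch of what such an accountable drawing could look like, using a commit-reveal scheme: the regulator publishes a hash of a secret seed before the year begins, derives each institution’s surprise days from the seed, and reveals the seed after the fact so that anyone can verify the dates were not cherry-picked. All function names and parameters here are illustrative assumptions, not taken from any actual regulation.

```python
# Sketch of an accountable random drawing of backup-operation days via
# commit-reveal. Illustrative only; names and parameters are assumptions.

import hashlib
import hmac
from datetime import date, timedelta

def commitment(seed: bytes) -> str:
    """Published in advance: a hash commitment to the secret seed."""
    return hashlib.sha256(seed).hexdigest()

def draw_days(seed: bytes, institution: str, year: int, n: int = 3):
    """Derive n pseudo-random weekdays in the year for an institution."""
    days, counter = [], 0
    while len(days) < n:
        msg = f"{institution}|{year}|{counter}".encode()
        digest = hmac.new(seed, msg, hashlib.sha256).digest()
        offset = int.from_bytes(digest[:4], "big") % 365
        d = date(year, 1, 1) + timedelta(days=offset)
        if d.weekday() < 5 and d not in days:   # business days only
            days.append(d)
        counter += 1
    return sorted(days)

seed = b"regulator's secret seed for the year"  # revealed after year-end
print("Commitment:", commitment(seed))
print("Surprise backup days:", draw_days(seed, "XYZ Bank", 2013))
```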
Posted at 7:43 pm IST on Sat, 3 Nov 2012 permanent link
Categories: exchanges
Purported solution of Palm-3Com relative pricing puzzle
Martin Cherkes and Chester Spatt claim in a recent paper to have solved the puzzle about the relative pricing of Palm and 3Com. During the dot com bubble, 3Com sold 5% of its Palm subsidiary to the public and announced its intention to spin off the remaining 95% to its shareholders. The market valued 3Com’s stake in Palm at more than the whole of 3Com, implying a negative value (-$22 billion) for the residual business of 3Com. This was implausible because 3Com had positive value before it acquired Palm, and after the spin off was complete, the residual part of 3Com was valued at $5 billion.
Cherkes and Spatt “solve” this problem by using the forward price of the Palm share instead of the spot price. Of course, single stock futures were not available in the US in those days, so they use synthetic forward prices computed from the options market (long call plus short put). The forward price is well below the spot price, and based on this price, 3Com appears to be correctly valued. They also show that as changes in the expected spin-off date altered the maturity of the required synthetic forward, all the relative prices adjusted to keep the valuation correct.
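The synthetic forward construction is just put-call parity. Ignoring the complications of early exercise and dividends, a long European call plus a short put at a common strike K and maturity T replicates a forward purchase at K, so the option prices pin down the forward price:

```latex
C - P = e^{-rT}\,(F - K)
\qquad\Longrightarrow\qquad
F = K + e^{rT}\,(C - P)
```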
This is definitely an important addition to what we know about the 3Com puzzle – at the very least, it shows that the no-arbitrage conditions and the law of one price were satisfied even at the peak of the dot com frenzy. But I do not believe that this is a complete solution. It is a little like claiming to solve the uncovered interest parity puzzle by pointing out that covered interest parity holds. Yes, it is nice to know that covered interest parity is not violated and that there are no risk-free arbitrage opportunities available. But this only substitutes one problem for another: the forward rate is now biased, and one has to appeal to some kind of time-varying risk premium to explain this away.
The same problem does come up here. How does one justify the depressed forward price of the Palm stock? Cherkes and Spatt argue that it is explained by the securities lending fees that could be earned on Palm stock. These fees arise because rational investors want to short Palm and buy 3Com to arbitrage the difference away. Since too few Palm shares are available (only 5% of the shares had been sold to the public), the lending fees rise to the point where the arbitrage is no longer profitable. This is just like the currency forward premium rising until it equals the interest differential, eliminating the risk-free arbitrage opportunity.
The fundamental problem remains – either or both of the Palm and 3Com stock were mispriced. As usual, the “there is no free lunch” version of the Efficient Markets Hypothesis holds, but the “prices are correct” version fails.
Posted at 11:39 am IST on Tue, 30 Oct 2012 permanent link
Categories: arbitrage, behavioural finance, market efficiency
Selective Price Sensitive Disclosure by Government Functionaries
I was mulling over the interesting paper on “Selective disclosure by federal officials and the case for an FGD (Fairer Government Disclosure) regime” by Donna M. Nagy and Richard W. Painter when I came across this bombshell from the Chair of the UK Statistics Authority to the Prime Minister of the UK (h/t FT Alphaville):
I was made aware during the course of yesterday afternoon of your remarks at Prime Minister’s Questions in respect of the economy, in particular your statement that “the good news will keep coming”. This was ahead of this morning’s Office for National Statistics release of the preliminary estimate of Gross Domestic Product for the third quarter of 2012, to which you receive pre-release access up to 24 hours ahead of publication.
... The Pre-Release Access to Official Statistics Order 2008 states that recipients of pre-release access must not disclose ‘any suggestion of the size or direction of any trend’ indicated by the statistic to which the recipient has been given such access. It is clear from media reports that, although this may not have been your intent, your remarks were indeed widely interpreted as providing an indication about the GDP figures.
This episode is yet another reminder that the selective release of price sensitive information by government functionaries is a serious problem. Nagy and Painter propose that government functionaries must be subject to a regime of fair disclosure similar to that imposed on corporate insiders by Regulation FD in the US. (They explain that the recently enacted STOCK Act that deals with insider trading by members of the US Congress does not deal with selective disclosure.) Nagy and Painter also point out that there are a number of legal problems in creating such a regime because of the enhanced constitutional protection to communications between federal officials and members of the public because “speech on public issues occupies the highest rung of the hierarchy of First Amendment values, and is entitled to special protection.” But they believe that a Fair Government Disclosure regime can be created that addresses these concerns.
In India too, we have seen selective (and even misleading) disclosure of information by government functionaries. There is a need to develop mechanisms that reduce the chances of such episodes.
Posted at 7:02 pm IST on Fri, 26 Oct 2012 permanent link
Categories: law, regulation
Luddites in technology company finance departments
I was fascinated by yesterday's fiasco in which Google filed its draft earnings statement with the SEC prematurely – the principal giveaway was a press release that said right at the top “PENDING LARRY QUOTE”. Several newspapers reported a statement from Google stating:
Earlier this morning RR Donnelley, the financial printer, informed us that they had filed our draft 8K earnings statement without authorization. We have ceased trading on NASDAQ while we work to finalize the document. Once it's finalized we will release our earnings, resume trading on NASDAQ and hold our earnings call as normal at 1:30 PM PT.
Interestingly, this statement is nowhere to be seen on the Google Investor Relations web site or in the SEC filings. Obviously, Regulation FD does not cover everything!
Footnoted.com raises the very interesting question as to why Google would use RR Donnelley to file its financial statements. The same thought had crossed my mind – surely, a company whose software can navigate driverless cars, translate automatically from one language to another, or find almost anything on the vast world wide web should not find it too hard to click the “Send” button.
But Google is not alone in this. Some of India’s largest technology companies, which make money by running the most challenging business processes for their clients, turn to RR Donnelley to file their financial statements with the SEC. This is part of a broader phenomenon that I see all the time: technology companies that use very sophisticated information technology in their core operations often have a fair share of Luddites in their corporate finance departments.
I am reminded of an episode almost an eternity ago, when some of my coffee-loving colleagues and I were stuck in one of India’s largest coffee plantations. My colleagues spent the better part of a day driving around the whole place in search of a cup of fresh coffee. It was all in vain: the management of the coffee plantation had not yet given up the old colonial mindset in which tea was the beverage of choice, and coffee was merely something that you sold to make money. The attitude in technology company corporate finance departments is very similar – technology is something to be monetized, not necessarily something to be used.
Posted at 11:37 am IST on Fri, 19 Oct 2012 permanent link
Categories: technology
Predictable unpredictable numbers compromise Chip and PIN cards
A group of researchers at the University of Cambridge have a paper describing serious security weaknesses in Chip and PIN (EMV) cards (h/t Bruce Schneier). EMV, the leading system for card payments worldwide, relies on a chip in the card that executes an authentication protocol. This protocol requires point-of-sale (POS) terminals and ATMs to generate an unpredictable number (UN) for each transaction to ensure that it is fresh. The ATM sends this unpredictable number to the card along with various transaction fields, and the card responds with an authorization request cryptogram (ARQC) calculated over the supplied data. Properly implemented, the ARQC allows the ATM or POS terminal to verify that the card is alive, present, and engaged in the transaction.
The reality is very different. The Cambridge researchers discovered that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply the “unpredictable” number that is the heart and soul of the entire protocol. Moreover, the fault ultimately lies with the EMV designers themselves:
The first flaw is that the EMV protocol designers did not think through carefully enough what is required for it to be “unpredictable”. The specifications and conformance testing procedures simply require that four consecutive transactions performed by the terminal should have unique unpredictable numbers ... Thus a rational implementer who does not have the time to think through the consequences will probably prefer to use a counter rather than a cryptographic random number generator (RNG); the latter would have a higher probability of failing conformance testing (because of the birthday paradox).
If the “unpredictable number” can actually be predicted, all kinds of “pre-play” attacks become possible. A crooked merchant can harvest an ARQC while he has custody of the card in his POS terminal and then replay it at an ATM, executing transactions there without the card being present.
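A toy illustration of the weakness is below. The “terminal” here is hypothetical Python mimicking the weak implementations the paper describes (counters and timestamps), not any specific vendor’s code; the point is that an attacker who observes a few UNs can predict the next one.

```python
# Toy contrast between a counter-based "unpredictable number" (UN),
# which an attacker can predict, and a cryptographic RNG. Hypothetical
# code mimicking the weak implementations described in the paper.

import secrets

class WeakTerminal:
    """Supplies the EMV 'unpredictable number' from a plain counter."""
    def __init__(self):
        self.counter = 0x12345678   # persists across transactions

    def unpredictable_number(self) -> int:
        self.counter = (self.counter + 1) & 0xFFFFFFFF
        return self.counter

class SaneTerminal:
    """What the protocol actually needs: a cryptographic RNG."""
    def unpredictable_number(self) -> int:
        return secrets.randbits(32)

weak = WeakTerminal()
observed = [weak.unpredictable_number() for _ in range(3)]
predicted = (observed[-1] + 1) & 0xFFFFFFFF
assert weak.unpredictable_number() == predicted
# Knowing tomorrow's UN in advance, a crook with brief access to the card
# can harvest a matching ARQC today -- the essence of the pre-play attack.
print("Predicted next UN:", hex(predicted))
```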
The researchers conclude:
Just as the world’s bank regulators were gullible in the years up to 2008 in accepting the banking industry’s assurances about its credit risk management, so also have regulators been credulous in accepting industry assurances about operational risk management.
Posted at 9:41 pm IST on Sat, 13 Oct 2012 permanent link
Categories: technology