Why does SEC think that 333,251 minus 5 times 66,421 is not 1,146?
The CME futures contracts on the S&P 500 index come in two flavours – the big or full-size (SP) contract is five times the E-Mini (ES) contract. For clearing purposes, SP and ES contracts are fungible in a five to one ratio. The daily settlement price of both contracts is obtained by taking a volume weighted average price of the two contracts taken together, weighted in the same ratio.
Yet, according to a recent SEC order against Latour Trading LLC and Nicolas Niquet, a broker-dealer is required to maintain net capital on the two contracts separately. In Para 28 of its order, the SEC says that in February 2010, Latour held 333,251 long ES contracts and 66,421 short SP contracts, and it netted these out to a long position of 1,146 ES contracts requiring a net capital of $14,325. According to the SEC, these should not have been netted out and Latour should have held a net capital of $8.32 million ($4.17 million for the ES and $4.15 million for the SP). This is surely absurd.
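To make the arithmetic concrete, here is a minimal sketch in Python of the netting computation; the dollar figures for the capital charges are simply those quoted in the SEC order, not something I have computed.

```python
# Minimal sketch of the netting arithmetic in Para 28 of the SEC order.
# The capital charge figures are taken directly from the order; the only
# computation here is the 5:1 netting of ES against SP.

es_long = 333_251      # long E-Mini (ES) contracts
sp_short = 66_421      # short full-size (SP) contracts
ratio = 5              # one SP contract is economically five ES contracts

net_position = es_long - ratio * sp_short
print(net_position)    # 1,146 ES contracts net long

netted_capital = 14_325                   # charge on the 1,146 net contracts
separate_capital = 4_170_000 + 4_150_000  # ES and SP legs charged separately
print(netted_capital, separate_capital)   # $14,325 versus about $8.32 million
```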
It is not as if the SEC does not allow netting anywhere. It allows index products to be offset by qualified stock baskets (para 10). In other words, an approximate hedge (index versus an approximate basket) can be netted but an exact hedge (ES versus SP) cannot be netted.
PS: I am not defending Latour at all. The rest of the order makes clear that there was a great deal of incompetence and deliberate under-estimation of net capital going on. It is only on the ES/SP netting claim that I think the SEC regulations are unreasonable.
Posted at 9:43 pm IST on Sun, 28 Sep 2014 permanent link
Categories: derivatives, exchanges
Outsourcing financial repression to China and insourcing it back
It is well known that financial repression more or less disappeared in advanced economies during the 1980s and 1990s, but has been making a comeback recently. Is it possible that financial repression did not actually disappear, but was simply outsourced to China? And is the comeback that we are seeing after the Global Financial Crisis simply a case of insourcing the repression back?
This thought occurred to me after reading an IMF Working Paper on “Sovereign Debt Composition in Advanced Economies: A Historical Perspective”. What this paper shows is that many of the nice things that happened to sovereign debt in advanced economies prior to the Global Financial Crisis were facilitated by the robust demand for this debt by foreign central banks. In fact, the authors refer to this period not as the Great Moderation, but as the Great Accumulation. Though they do not mention China specifically, it is clear that the Great Accumulation was driven to a great extent by China. It is also clear that much of the Chinese reserve accumulation is made possible by the enormous financial repression within that country.
This leads me to my hypothesis that just as the advanced economies outsourced their manufacturing to more efficient manufacturers in China, they outsourced their financial repression to the most efficient manufacturer of financial repression – China. Now that China is becoming a less efficient and less willing provider of financial repression, advanced economies are insourcing this job back to their own central banks.
In this view of things, we overestimated the global reduction of financial repression in the 1990s and are overestimating the rise in financial repression since the crisis.
Posted at 11:59 am IST on Mon, 22 Sep 2014 permanent link
Categories: bond markets, international finance, monetary policy
Fama-French and Momentum Factors: Updated Data Library for Indian Market
A year ago, my colleagues, Prof. Sobhesh K. Agarwalla, Prof. Joshy Jacob and I created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market, and promised to keep updating the data on a regular basis. It has taken a while to deliver on that promise, but we have now updated the data library. More importantly, we believe that we have now set up a process to do this on a sustainable basis by working together with the Centre for Monitoring Indian Economy (CMIE), who were the source of the data anyway. CMIE agreed to implement our algorithms on their servers and give us the data files every month. That ensures more comprehensive coverage of the data and faster updates.
Posted at 9:10 pm IST on Sat, 13 Sep 2014 permanent link
Categories: factor investing
A benchmark is to price what a credit rating agency is to quality
Andrew Verstein has an interesting paper on the Law and Economics of Benchmark Manipulation. One of the gems in that paper is the title of this blog post: “A benchmark is to price what a credit rating agency is to quality.” Verstein is saying that just as credit rating agencies became destructive when their ratings were hardwired into various legal requirements, benchmarks also become dangerous when they are hardwired into various legal documents.
Just as in the case of rating agencies, in the case of price benchmarks also, regulators have encouraged reliance on benchmarks. Even in the equity world where exchange trading eliminates the need for many kinds of benchmarks, the closing price is an important benchmark which derives its importance mainly from its regulatory use. Verstein points out that “Indeed, it is hard to find an example of stock price manipulation that does not target the closing (or opening) price.” So we have taken a liquid and transparent market and conjured an opaque and vulnerable benchmark out of it. Regulators surely take some of the blame for this unfortunate outcome.
Another of Verstein’s points is that governments use benchmarks even when they know that they are broken: “the United States Treasury used Libor to make TARP loans during the financial crisis, despite being on notice that Libor was a manipulated benchmark.” In this case, Libor was not only manipulated but had become completely dysfunctional – I remember that the popular definition of Libor at that time was that it was the rate at which banks do not lend to each other in London. That was well before Libor became Lie-bor. The US government could easily have taken a reference rate from the US Treasury market or repo markets and then set a fat enough spread over that reference rate (say 1000 basis points) to cover the TED spread, the CDS spread, and a Bagehotian penal spread. By choosing not to do so they lent legitimacy to what they knew very well was an illegitimate benchmark.
Posted at 7:57 pm IST on Sun, 7 Sep 2014 permanent link
Categories: benchmarks, manipulation
Regulatory overreach: SEBI definition of research analyst
Yesterday, the Securities and Exchange Board of India (SEBI) issued regulations requiring all Research Analysts to be registered with SEBI. The problem is that the regulations use a very expansive definition of research analyst. This reminds me of my note of dissent to the report of the Financial Sector Legislative Reforms Commission (FSLRC) on the issue of definition of financial service. I wrote in that dissent that:
Many activities carried out by accountants, lawyers, actuaries, academics and other professionals as part of their normal profession could attract the registration requirement because these activities could be construed as provision of a financial service ... All this creates scope for needless harassment of innocent people without providing any worthwhile benefits.
Much the same could be said about the definition of research analyst. Consider for example this blog post by Prof. Aswath Damodaran of the Stern School of Business at New York University on the valuation of Twitter during its IPO. It clearly meets the definition of a research report in Regulation 2(w):
any written or electronic communication that includes research analysis or research recommendation or an opinion concerning securities or public offer, providing a basis for investment decision
Regulation 2(w) has a long list of exclusions, but Damodaran’s post does not fall under any of them. Therefore, Damodaran would clearly be a research analyst under several of the prongs of Regulation 2(u):
a person who is primarily responsible for:
- preparation or publication of the content of the research report; or
- providing research report; or
- offering an opinion concerning public offer,
with respect to securities that are listed or to be listed in a stock exchange
Under Regulation 3(1), Prof. Damodaran would need a certificate of registration from SEBI if he were to write a similar blog post about an Indian company. Or, under Regulation 4, he would have to tie up with a research entity registered in India.
Regulations of this kind are a form of regulatory overreach that must be prevented by narrowly circumscribing the powers of regulators in the statute itself. To quote another sentence that I wrote in the FSLRC dissent note: “regulatory self restraint ... is often a scarce commodity”.
Posted at 7:09 pm IST on Tue, 2 Sep 2014 permanent link
Categories: regulation
IPO as call option for insiders
A couple of weeks ago, Matt Levine at Bloomberg View described a curious incident of a company that was a public company for only six days before cancelling its public issue:
- On July 30, 2014, an Israeli company, Vascular Biogenics Ltd. (VBL) announced that it had priced its initial public offering (IPO) at $12 per share and that the shares would begin trading on Nasdaq the next day. The registration statement relating to these securities was filed with and was declared effective by the US Securities and Exchange Commission (SEC) on the same day.
- On August 8, VBL announced that it had cancelled its IPO.
What happened in between was that on July 31, the shares opened at $11.00 and sank further to close at $10.25 (a 15% discount to the IPO price) on a large volume of 1.5 million shares as compared to the total issue size of 5.4 million shares excluding the Greenshoe option (Source for price and volume data is Google Finance). This price drop was bad news for one of the large shareholders who had agreed to purchase almost 45% of the shares in the IPO. This insider was unwilling or unable to pay for the shares that he had agreed to buy. Technically, the underwriters were on the hook now, and the default could have triggered a spate of lawsuits. Instead, the company cancelled the IPO and the underwriting agreement. Nasdaq instituted a trading halt but the company appears to be still technically listed on Nasdaq.
Matt Levine does a fabulous job of dissecting the underwriting agreement to understand the legal issues involved. I am however more concerned about the relationship between the insider and the company. The VBL episode seems to suggest that if you are an insider in a company, a US IPO is a free call option. If the stock price goes up on listing, the insider pays the IPO price and buys the stock. If the price goes down, the insider refuses to pay and the company cancels the IPO.
Posted at 6:50 pm IST on Sat, 30 Aug 2014 permanent link
Categories: corporate governance, equity markets, insider trading
Mutual fund liquidity, valuation and gates
Last month, the US Securities and Exchange Commission (SEC) adopted rules allowing money market funds (MMFs) to restrict (or “gate”) redemptions when there is a liquidity problem. These proposals have been severely criticized on the ground that they could lead to pre-emptive runs as investors rush to the exit before the gates are imposed.
I think the criticism is valid though I was among those who recommended the imposition of gates in Indian mutual funds during the crisis of 2008. The difference is that I see gates as a solution not to a liquidity problem, but to a valuation problem. The purpose of the gate in my view is to protect remaining investors from the risk that redeeming investors exit the fund at a valuation greater than the true value of the assets. An even better solution to this valuation problem is the minimum balance at risk proposal that I blogged about two years ago.
Posted at 3:12 pm IST on Sat, 23 Aug 2014 permanent link
Categories: bond markets, mutual funds
Carry trades and the forward premium puzzle
Tarek Hassan and Rui Mano have an interesting NBER conference paper (h/t Econbrowser (Menzie Chinn)) that comes pretty close to saying that there is really no forward premium puzzle at all. Their paper itself tends to obscure the message using phrases like cross-currency, between-time-and-currency, and cross-time components of uncovered interest parity violations. So what follows is my take on their paper.
Uncovered interest parity says that ignoring risk aversion, currencies with high interest rates should be expected to depreciate so as to neutralise the interest differential. If not, risk neutral investors from the rest of the world would move all their money into the high yielding currency and earn higher returns. Similarly, currencies with low interest rates should be expected to appreciate to compensate for the interest differential so that risk neutral investors do not stampede out of the currency.
Violations of uncovered interest parity therefore have a potentially simple explanation in terms of risk premia. The problem is that the empirical relationship between interest differentials and currency appreciation is in the opposite direction to that predicted by uncovered interest parity. In a pooled time-series cross-sectional regression, currencies with high interest rates appreciate instead of depreciating. A whole investment strategy called the carry trade has been built on this observation. A risk based explanation of this phenomenon would seem to require implausible time varying risk premia. For example, if we interpret the pooled result in terms of a single exchange rate (say dollar-euro), the risk premium would have to keep changing sign depending on whether the dollar interest rate was higher or lower than the euro interest rate.
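The regression behind all this is the classic Fama regression (in my notation, for a single currency pair, not the notation of the paper):

$$\Delta s_{t+1} \;=\; \alpha \;+\; \beta\,(i_t - i^{*}_t) \;+\; \varepsilon_{t+1}$$

where $s$ is the log price of the foreign currency and $i_t - i^{*}_t$ is the domestic minus foreign interest differential. Uncovered interest parity predicts $\beta = 1$ (the high interest currency depreciates by the full differential), while the classic estimates deliver a $\beta$ close to zero or even negative, which is the forward premium puzzle.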
This is where Hassan and Mano come in with a decomposition of the pooled regression result. They argue that in a pooled sample, the result could be driven by currency fixed effects. For example, over their sample period, the New Zealand interest rate was consistently higher than the Japanese rate and an investor who was consistently short the yen and long the New Zealand dollar would have made money. The crucial point here is that a risk based explanation of this outcome would not require time varying risk premia – over the whole sample, the risk premium would be in one direction. What Hassan and Mano do not say is that a large risk premium would be highly plausible in this context. Japan is a net creditor nation and Japanese investors would require a higher expected return on the New Zealand dollar to take the currency risk of investing outside their country. At the same time, New Zealand is a net debtor country and borrowers there would pay a higher interest rate to borrow in their own currency than take the currency risk of borrowing in Japanese yen. It would be left to hedge funds and other players with substantial risk appetite to try and arbitrage this interest differential and earn the large risk premium on offer. Since the aggregate capital of these investors is quite small, the return differential is not fully arbitraged away.
Hassan and Mano show that empirically only the currency fixed effect is statistically significant. The time varying component of the uncovered interest parity violation within a fixed currency pair is not statistically significant. Nor is there a statistically significant time fixed effect related to the time varying interest differential between the US dollar and a basket of other currencies. To my mind, if there is no time varying risk premium to be explained, the forward premium puzzle disappears.
The paper goes on to show that the carry trade as an investment strategy is primarily about currency fixed effects. Hassan and Mano consider “a version of the carry trade in which we never update our portfolio. We weight currencies once, based on our expectation of the currencies’ future mean level of interest rates, and never change the portfolio thereafter.” This “static carry trade” strategy accounts for 70% of the profits of the dynamic carry trade that rebalances the portfolio each period to go long the highest yielding currencies at that time and go short the lowest yielding currencies at that time. More importantly, in the carry trade portfolio, the higher yielding currencies do depreciate against the low yielding currencies. It is just that the depreciation is less than the interest differential and so the strategy makes money. So uncovered interest parity gets the sign right and only the magnitude of the effect is lower because of the risk premium. There is a large literature showing that the carry trade loses money at times of global financial stress when investors can least afford to lose money and therefore a large risk premium is intuitively plausible.
Posted at 12:26 pm IST on Sat, 16 Aug 2014 permanent link
Categories: arbitrage, international finance
Tax avoidance with derivatives
Last month, the Permanent Subcommittee on Investigations of the United States Senate published a Staff Report on how hedge funds were using basket options to reduce their tax liability. The hedge fund’s underlying trading strategy involved 100,000 to 150,000 trades per day, and many of those trading positions lasted only a few minutes. Yet, because of the use of basket options, the trading profits ended up being taxed at the long term capital gains rate of 15-20% instead of the short term capital gains rate of 35%. The hedge fund saved $6.8 billion in taxes during the period 2000-2013. Perhaps more importantly, the hedge fund was also able to circumvent leverage restrictions.
The problem is that derivatives blur a number of distinctions that are at the foundation of the tax law everywhere in the world. Alvin Warren described the problem in great detail more than two decades ago (“Financial contract innovation and income tax policy.” Harvard Law Review, 107 (1993): 460). More importantly, Warren’s paper also showed that none of the obvious solutions to the problem would work.
We have similar problems in India as well. Mutual funds that invest at least 65% in equities produce income that is practically tax exempt for the investor, while debt mutual funds involve substantially higher tax incidence. A very popular product in India is the “Arbitrage Mutual Fund” which invests at least 65% in equities, but also hedges the equity risk using futures contracts. The result is “synthetic debt” that has the favourable tax treatment of equities.
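The mechanics, as a rough sketch (ignoring margining, rollover costs and tracking error): buy the stock portfolio at $S_0$ and sell index futures at $F_0$; at expiry the combined position is worth

$$\underbrace{S_T}_{\text{stock}} \;+\; \underbrace{(F_0 - S_T)}_{\text{short futures}} \;=\; F_0$$

so the return $(F_0 - S_0)/S_0$ is locked in at inception and is close to a money market return, while the fund still holds more than 65% of its assets in equities.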
In some sense, this is nothing new. In the Middle Ages, usury laws in Europe prohibited interest bearing debt, but allowed equity and insurance contracts. The market response was the infamous “triple contract” (contractus trinus) which used equity and insurance to create synthetic debt.
What modern taxmen are trying to do therefore reminds me of Einstein’s definition of insanity as doing the same thing over and over again and expecting different results.
Posted at 4:03 pm IST on Sat, 9 Aug 2014 permanent link
Categories: derivatives, taxation
Betting Against Beta in the Indian Market
My colleagues, Prof. Sobhesh Kumar Agarwalla, Prof. Joshy Jacob, Mr. Ellapulli Vasudevan and I have written a working paper on “Betting Against Beta in the Indian Market” (also available at SSRN).
Recent empirical evidence from different markets suggests that the security market line is flatter than posited by CAPM and a market neutral portfolio long in low-beta assets and short in high-beta assets earns positive returns. Frazzini and Pedersen (2014) conceptualize a Betting against Beta (BAB) factor that tracks such a portfolio. They find that the BAB factor earns significant returns using data from 20 international equity markets, treasury bond markets, credit markets, and futures markets. We find that a similar BAB factor earns significant positive returns in the Indian equity market. The returns on the BAB factor dominate the returns on the size, value and momentum factors. We also find that stocks with higher volatility earn relatively lower returns. These findings are consistent with the Frazzini and Pedersen model in which many investors do not have access to leverage and therefore overweight the high-beta assets to achieve their target return.
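For reference, the BAB factor as Frazzini and Pedersen define it is the return on a self-financing portfolio that is long low-beta stocks and short high-beta stocks, with each leg levered or delevered to a beta of one at portfolio formation:

$$r^{BAB}_{t+1} \;=\; \frac{1}{\beta^{L}_{t}}\left(r^{L}_{t+1} - r_{f}\right) \;-\; \frac{1}{\beta^{H}_{t}}\left(r^{H}_{t+1} - r_{f}\right)$$

If the security market line were exactly as steep as the CAPM requires, this portfolio would earn zero on average; a flatter security market line shows up as a positive BAB return.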
Like our earlier work on the Fama-French and momentum factor returns in India (see this blog post), this study also contributes to an understanding of the cross section of equity returns in India. Incidentally, the long promised update of the Fama-French and momentum factor returns is coming soon. We wanted to put the data update process on a more sound foundation and that has taken time. While the update has been delayed, we expect it to be more reliable as a result.
Posted at 1:30 pm IST on Mon, 14 Jul 2014 permanent link
Categories: CAPM, factor investing
Making margin models less procyclical
Last month, the Bank of England (BOE) published a Financial Stability Paper entitled “An investigation into the procyclicality of risk-based initial margin models”. After the Global Financial Crisis, there has been growing concern that procyclical margin requirements (margins are higher in times of market stress and lower in calm markets) induce complacency in good times and panic in bad times. There is therefore a desire to reduce procyclicality, but this is difficult to do without sacrificing the risk sensitivity of the margin system.
The BOE paper uses historical and simulated data to compare various margin models on their risk sensitivity and their procyclicality. Though they do not state this as a conclusion, their comparison does show that the exponentially weighted moving average (EWMA) model with a floor (minimum margin) is one of the better performing models on both risk sensitivity and procyclicality. This is gratifying in that India uses a system of this kind.
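For readers unfamiliar with this class of models, here is a minimal sketch of an EWMA margin with a floor. The decay factor, coverage multiplier and floor below are illustrative assumptions of mine, not the parameters used in the BOE paper or by Indian exchanges.

```python
import numpy as np

def ewma_margin(returns, lam=0.94, k=3.0, floor=0.02):
    """Margin as a multiple of EWMA volatility, subject to a minimum (floor).

    returns : daily returns of the cleared position
    lam     : EWMA decay factor (illustrative, RiskMetrics-style)
    k       : coverage multiplier applied to the volatility estimate
    floor   : minimum margin rate; the floor is what dampens procyclicality
    """
    var = np.var(returns[:20])                 # seed the variance estimate
    margins = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2   # EWMA update of variance
        margins.append(max(k * np.sqrt(var), floor))
    return np.array(margins)

# Margins rise with realized volatility but never fall below the floor
rets = np.random.default_rng(0).normal(0, 0.01, 500)
print(ewma_margin(rets)[-5:])
```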
However, the study leaves me quite dissatisfied. First, procyclicality is measured in terms of elevated realized volatility. Market stress in my view is better measured by implied volatility (for example, the VIX) and by measures of funding liquidity. Second, the four models that the paper compares are all standard pre-crisis models. Even when they use simulated data from a regime switching model, they do not consider margin models based on regime switching. Nor do they consider models based on fat tailed distributions. There are no models that adjust margins slowly to reduce liquidity stresses in the system. Finally, they do little to quantify the tradeoff between risk sensitivity and procyclicality – how much risk sensitivity do we have to give up to achieve a desired reduction in procyclicality?
Posted at 2:16 pm IST on Thu, 19 Jun 2014 permanent link
Categories: exchanges, risk management
How to borrow $10 million against forged shares
James Altucher narrates a fascinating story about how a guy claiming to be related to Middle Eastern royalty almost succeeded in borrowing $10 million from a fund manager against forged shares representing $25 million of restricted stock of a private internet company (h/t Bruce Schneier).
To me the red flag in the story was that the borrower agreed without a murmur to the outrageous terms that the fund manager asked for:
- 15% interest, paid quarterly
- the full loan is due back in two years
- $600,000 fee paid up front.
- 25% of all the upside on the full $25 million in shares for the next ten years
Assuming that the loan is for all practical purposes without recourse to any other assets of the borrower because of the uncertainties of local law, all this can be valued using call and put options on the stock. The upside clause is just 25% of an at-the-money call option on the stock. The default loss is just the value of a put with a strike of $10 million. To discount the interest payments, we need the risk neutral probability of default which I conservatively estimate as the probability of exercise of the two year put option (In fact, the interest is paid quarterly and some interest payments will be received even if the loan ultimately defaults).
For simplicity, I assume the risk free rate to be zero which is realistic for the first two years, but probably undervalues the ten year call. To add to the conservatism, I assume that the volatility of the stock is 100% for the first two years (life of the loan) and drops sharply to 30% for the remaining life of the ten year period of the call option. Taking the square root of the weighted average variance gives the volatility of the call option to be 52%. Since it is an internet stock, one can safely assume that the dividends are zero.
Under these assumptions, the fund manager expects to lose $3 million (put option value) out of the $10 million loan, but expects to make $3.7 million on the call, $1.4 million in interest and $0.6 million upfront fee. That is a net gain of $2.7 million or 27%. If the short term volatility is reduced to 50%, the default loss drops to less than $0.5 million and the net gain rises to 52%. Even if the short term volatility is raised to 160% (without raising the long term volatility), the deal still breaks even.
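These numbers are easy to reproduce with a plain Black-Scholes calculation. Here is a sketch (figures in $ million; the rounding differs slightly from the numbers quoted above):

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S, K, sigma, T, r=0.0):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def bs_put(S, K, sigma, T, r=0.0):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

S = 25.0      # $25 million of (purportedly) restricted stock as collateral
loan = 10.0   # $10 million loan

# Default loss: a two-year put struck at the loan amount, short-term vol 100%
put_value = bs_put(S, loan, sigma=1.0, T=2)

# Upside clause: 25% of a ten-year at-the-money call; blend 100% vol for two
# years with 30% vol for the remaining eight years via the average variance
sigma10 = sqrt((1.0 ** 2 * 2 + 0.3 ** 2 * 8) / 10)   # roughly 52%
call_value = 0.25 * bs_call(S, S, sigma=sigma10, T=10)

# Interest: 15% p.a. for two years, received only if the two-year put
# finishes out of the money (a conservative proxy for no default)
d1 = (log(S / loan) + 0.5 * 1.0 ** 2 * 2) / (1.0 * sqrt(2))
p_default = norm.cdf(-(d1 - 1.0 * sqrt(2)))
interest = 0.15 * loan * 2 * (1 - p_default)

fee = 0.6
net_gain = -put_value + call_value + interest + fee
print(round(put_value, 1), round(call_value, 1),
      round(interest, 1), round(net_gain, 1))
# prints roughly 3.0 (put), 3.7 (call), 1.4 (interest) and 2.7 (net gain)
```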
If a deal looks too good to be true, it usually is. The fund manager should have got suspicious right there.
As an aside, forged shares were a big menace in India in the 1990s, but we have solved that problem by dematerialization. (It is standard while lending against shares in India to ask for the shares to be dematerialized before being pledged.) The Altucher story suggests that the US still has the forged share problem.
Posted at 2:56 pm IST on Wed, 18 Jun 2014 permanent link
Categories: fraud
19th century UK gilts mispricing versus modern on-the-run bond pricing
Andrew Odlyzko has an interesting paper entitled “Economically irrational pricing of 19th century British government bonds” (available on SSRN) which demonstrates that more liquid perpetual bonds (consols) issued by the UK government often traded at prices about 1% higher than less liquid bonds with almost identical cash flows. Given that interest rates in that era were around 3%, these perpetual bonds would have a duration of well over 30 years. So the 1% pricing disparity would correspond to a yield differential of about 3 basis points. That is much less than the yield differential between long maturity on-the-run and off-the-run treasuries in the US in recent decades, let alone the differentials in the Indian gilt market.
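The arithmetic behind that statement, roughly:

$$D_{\text{perpetuity}} \;=\; \frac{1}{y} \;\approx\; \frac{1}{0.03} \;\approx\; 33 \text{ years}, \qquad \Delta y \;\approx\; \frac{\Delta P/P}{D} \;\approx\; \frac{0.01}{33} \;\approx\; 3 \text{ basis points}$$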
In other words, contrary to what Odlyzko seems to imply, the 19th century UK gilt market would appear to have been more efficient than modern government bond markets! Odlyzko provides a solution to this puzzle. Most of the UK consols in the 19th century were held by retail investors and very little was held by financial institutions. As Odlyzko rightly points out, this would substantially depress the premium for liquidity. Odlyzko argues that the liquidity premium should be zero because the stock of the liquid consols was more than adequate to meet any reasonable liquidity demands. I do not agree with this claim. The experience with quantitative easing since the global financial crisis tells us that the demand for safe and liquid assets can be almost insatiable. That might well have been true two centuries ago.
Posted at 10:03 pm IST on Sat, 14 Jun 2014 permanent link
Categories: bond markets, market efficiency
Waiting for a national stock market in India
Today was another reminder that India still does not have a national stock market. The Indian stock markets are closed because Mumbai goes to the polls today. The country as a whole goes to the polls on ten different days spread over more than a month. Either the stock market should be closed on ten days or on none.
It is high time that the regulators required the exchanges to operate out of their disaster recovery location when Mumbai has a holiday and most of the country is working. That would also be a wonderful way of testing whether all those business continuity plans work as nicely on the ground as they do on paper. But something tells me that this is unlikely to happen anytime soon.
Two decades ago, we abolished the physical trading floor in Mumbai. But the trading floor in Mumbai lives on in the minds of key decision makers, and it will take a long time to liberate ourselves from the oppression of this imaginary trading floor.
Posted at 6:13 pm IST on Thu, 24 Apr 2014 permanent link
Categories: exchanges, regulation, technology
The human rights of insider traders
The European Court of Human Rights (ECHR) has an interesting judgement (h/t June Rhee) upholding the human rights of those guilty of insider trading (The judgement itself is available only in French but the Press Release is available in English).
Though the fines and penalties imposed by the Italian Companies and Stock Exchange Commission (Consob) were formally defined as administrative in nature under Italian law, the ECHR ruled that “the severity of the fines imposed on the applicants meant that they were criminal in nature.” As such, the ECHR found fault with the procedures followed by Consob. For example, the accused had not had an opportunity to question any individuals who could have been interviewed by Consob. Moreover, the functions of investigation and judgement were within the same institution reporting to the same president. The only thing that helped Consob was that the accused could and did challenge the Consob ruling in the Italian courts.
The ECHR ruling that the Consob fines were a criminal penalty brought into play the important principle that a person cannot be tried for the same offence twice. Under Italian law (based on the EC Market Abuse Directive), a criminal prosecution had taken place in addition to the Consob fines. ECHR ruled that this violated the human rights of the accused.
It is important to recognize that the ECHR is not objecting to the substance of the insider trading statutes and the need to penalize the alleged offences. The Court clearly states that the regulations are “intended to guarantee the integrity of the financial markets and to maintain public confidence in the security of transactions, which undeniably amounted to an aim that was in the public interest. ... Accordingly, the fines imposed on the applicants, while severe, did not appear disproportionate in view of the conduct with which they had been charged.” Rather, the Court’s concerns are about due process of law and the protection of the rights to fair trial.
I think the principles of human rights are broadly similar across the free world – US, Europe and India. The judgement therefore raises important issues that go far beyond Italy.
Posted at 11:03 am IST on Tue, 15 Apr 2014 permanent link
Categories: insider trading, law
Heartbleed and the need for air-gapped backups in finance
Heartbleed is perhaps the most catastrophic computer security disaster ever (For those not technically inclined, this xkcd comic is perhaps the most readable explanation of the bug). Bruce Schneier says that “On the scale of 1 to 10, this is an 11.” Since the bug has been around for a few years and the exploit leaves no trace on the server, the assumption has to be that passwords and private keys have been stolen from every server that was ever vulnerable. If you have the private key, you can read everything that is being sent to or received from the server until the private key (SSL Certificate) is changed even if the vulnerability itself has been fixed.
Many popular email, social media and other sites are affected and we need to change our passwords everywhere. Over the next few weeks, I intend to change every single password that I am using on the web – more than a hundred of them.
Thankfully, only a few banking sites globally seem to be affected. When I check now, none of the Indian banking sites that I use regularly are being reported as vulnerable. However, the banks have not said anything officially and I am not sure whether they were never vulnerable or whether they fixed the vulnerability over the last few days after the bug was revealed. Even the RBI has been silent on this; if all Indian banks were safe, they should publicly say so, and if some were affected and have been fixed, they should say so too. Incidentally, many Indian banking sites do not seem to implement Perfect Forward Secrecy and that is not good at all.
More importantly, I think it is only a matter of time before large financial institutions around the world suffer a catastrophic security breach. Even if the mathematics of cryptography is robust (P ≠ NP), all the mathematics is implemented in code that often goes through only flimsy code reviews. I think it is necessary to have offline repositories of critical financial data so that one disastrous hack does not destroy the entire financial system. For example, I think every large depository, bank, mutual fund and insurance company should create a monthly backup of the entire database in a secure air-gapped location. Just connect a huge storage rack to the server (or perhaps the disaster recovery backup server), dump everything (encrypted) on the rack, disconnect and remove the rack, and store the air-gapped rack in a secure facility. A few thousand dollars or even a few tens of thousands of dollars a month is a price that each of these institutions should be willing to pay for partial protection against the tail risk of an irrecoverable security breach.
Posted at 7:04 pm IST on Sat, 12 Apr 2014 permanent link
Categories: technology
Campbell on 2013 Economics Nobel Prizes
While much has been written about the 2013 Economics Nobel Prizes, almost everybody has focused on the disagreements between Fama and Shiller, with Hansen mentioned (if at all) as an afterthought (Asness and Liew is a good example). By contrast, John Campbell has a paper (h/t Justin Fox) on the 2013 Nobels for the Scandinavian Journal of Economics, in which Hansen appears as the chief protagonist, while Fama and Shiller play supporting roles. The very title of the paper (“Empirical Asset Pricing”) indicates the difference in emphasis – market efficiency and irrational exuberance play second fiddle to Hansen’s GMM methodology.
To finance people like me, this comes as a shock; Fama and Shiller are people in “our field” while Hansen is an “outsider” (a mere economist, not even a financial economist). Yet on deeper reflection, it is hard to disagree with Campbell’s unstated but barely concealed assessment: while Fama and Shiller are story tellers par excellence, Hansen stands on a different pedestal when it comes to rigour and mathematical elegance.
And even if you have no interest in personalities, I would still strongly recommend Campbell’s paper – it is by far the best 30-page introduction to Empirical Asset Pricing that I have seen.
Posted at 4:56 pm IST on Sat, 5 Apr 2014 permanent link
Categories: market efficiency, statistics
Diversification, Skewness and Adverse Selection
When I first read about the fascinating ‘Star Wars’ deal between Steven Spielberg and George Lucas, my reaction was that this was a simple diversification story. But then I realized that it is more complex than that; the obstacles in the form of skewness preference, adverse selection and moral hazard are strong enough to make deals like this probably quite rare.
The story itself is very simple and Business Insider tells it well. Back in 1977, George Lucas was making his ‘Star Wars’ film, and Steven Spielberg was making ‘Close Encounters of the Third Kind’. Lucas was worried that his ‘Star Wars’ film might bomb and thought that ‘Close Encounters’ would be a great hit. So he made an offer to his friend Spielberg:
All right, I’ll tell you what. I’ll trade some points with you. You want to trade some points? I’ll give you 2.5% of ‘Star Wars’ if you give me 2.5% of ‘Close Encounters’.
Spielberg’s response was:
Sure, I’ll gamble with that. Great.
Both films ended up as great classics, but ‘Star Wars’ was by far the greater commercial success and Lucas ended up paying millions of dollars to Spielberg.
At the time when neither knew whether either of the films would succeed, the exchange was a simple diversification trade that made both better off. So why are such trades not routine? One reason could be that many films are made by large companies that are already well diversified.
A more important factor is information asymmetry: normally, each director would know very little of the other’s film and then trades become impossible. The Lucas-Spielberg trade was possible because they were friends. It is telling that the trade was made after Lucas had spent a few days watching Spielberg make his film. It takes a lot of due diligence to overcome the information asymmetry.
The other problem is skewness preference. Nobody buys a large number of lottery tickets to “diversify the risk”, because that diversification would also remove the skewness that makes lottery tickets worthwhile. Probably both Lucas and Spielberg thought their films had risk adjusted returns that made them attractive even without the skewness characteristic.
It is also possible that Lucas simply did an irrational trade. Lucas is described as “a nervous wreck ... [who] felt he had just made this little kids’ movie”. Perhaps, Spielberg was simply at the right place at the right time to do a one-sided trade with an emotionally disturbed counterparty. Maybe, we should all be looking out for friends who are sufficiently depressed to offer us a Lucas type trade.
Posted at 1:38 pm IST on Thu, 27 Mar 2014 permanent link
Categories: behavioural finance, CAPM
China and Japan: Risk of Currency War
Over the last few months, the risk of a currency war between China and Japan has increased substantially, as pressing domestic economic problems could tempt both countries down this path.
In Japan, Abe came to power with a promise to revive the economy through drastic means. Though Abenomics has three “arrows”, the only arrow that is at all effective now is the monetary arrow that has worked by depreciating the yen. The risk is that Japan would seek to rely more and more on this arrow and try to push the yen down to 110 or even 120 against the US dollar. It is even possible that such a strategy might finally revive the Japanese economy.
China also faces a similar temptation. House of Debt has a fantastic blog post showing that since 2008, China has been forced to rely more and more on debt to keep its economy growing because its earlier strategy of export led growth is not working any more. The second graph in their blog post drives this point home very forcefully. Unfortunately, the debt led model is increasingly unsustainable. This month, China witnessed the first onshore corporate bond default. Earlier, a default on a popular wealth management product was avoided only by a bailout.
China’s leaders must now be sorely tempted to depreciate the currency to maintain economic growth without further exacerbating the country’s internal debt problem. Many observers believe that after many years of high inflation and gradual appreciation, the Chinese Renminbi is overvalued today. That would be another reason to attempt a weakening of the currency.
The high degree of intra-Asian economic integration means that a depreciation by either Asian giant would drive down many other Asian currencies (for example, the Korean Won) and make it difficult for the other Asian giant to refrain from depreciating its currency. A vicious cycle of competitive devaluations could rapidly become a currency war. And the already strained political relations between the two countries would clearly not help.
The yen and the yuan are in some ways like the yin and yang of Asian currency markets. A “beggar thy neighbour” currency war between Japan and China would of course have a dramatic impact on the whole of Asia.
Posted at 7:53 pm IST on Fri, 14 Mar 2014 permanent link
Categories: international finance
Bitcoin as a retail RTGS without a central bank
Richard Gendal Brown has a very valuable blog post about bank payment systems that ends with a brief discussion about Bitcoin. His conclusion is very interesting:
My take is that the Bitcoin network most closely resembles a Real Time Gross Settlement system. There is no netting, there are (clearly) no correspondent banking relationships and we have settlement, gross, with finality.
I agree with this characterization, but would only add that Bitcoin is an RTGS (Real-Time Gross Settlement) without a central bank. To computer scientists, the core of Bitcoin is an elegant solution to the Byzantine Generals problem. To finance people, perhaps, the core of Bitcoin is an RTGS that (a) is open to all (and not just the privileged banks) and (b) functions without a central bank.
Posted at 6:42 pm IST on Tue, 11 Mar 2014 permanent link
Categories: blockchain and cryptocurrency
Why central banks should not regulate markets
The best reason for keeping central banks out of the regulation of markets is highlighted by the announcement a couple of days back by the Bank of England that it was suspending one of its employees and beginning an independent investigation into whether any of its staff were involved in or aware of any attempted manipulation of the foreign exchange market.
The simple fact of the matter is that the central bank is totally conflicted when it comes to market regulation. It is a big participant in financial markets – in fact its primary mandate is to legally manipulate these markets in the pursuit of the macroeconomic mandates entrusted to it. Monetary policy gives central banks a mandate to manipulate bond markets to fix interest rates at particular levels; in several countries, central banks are also mandated to manipulate foreign exchange markets; and occasionally (for example, Hong Kong and Japan at different points of time), they have even been mandated to manipulate the stock index market.
This completely legal manipulation mandate makes central banks unsuitable for enforcing conduct regulation of financial markets. There is too great a temptation for the central bank to condone or even encourage large banks to indulge in manipulation of markets in the same direction that the central bank desires. After all, this is just another very convenient “transmission mechanism” for the central bank.
In this light, the post crisis decision in the UK to move market regulation into a subsidiary of the central bank is a ghastly mistake.
Posted at 9:52 pm IST on Fri, 7 Mar 2014 permanent link
Categories: manipulation, regulation
Insider trading inside the regulator
Rajgopal and White have a paper euphemistically (or sarcastically) titled “Stock Picking Skills of SEC Employees”. The paper is actually about potential insider trading by the regulator’s employees. The empirical results show that sales (but not purchases) by SEC employees earned abnormal profits (as measured by the standard Fama-French four factor model). There is evidence that some of these sales were based on impending SEC enforcement actions or disclosures made to the SEC that have not yet been made public. This indicates that the measures introduced by the SEC after an earlier insider trading scandal in 2009 (see here, pages 40-43) are not sufficiently effective or are not properly enforced.
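In the standard four factor framework that the authors refer to, “abnormal profit” means a significantly non-zero intercept (alpha) in a time series regression of the form

$$r_{pt} - r_{ft} \;=\; \alpha_p \;+\; \beta_p\,\mathit{MKT}_t \;+\; s_p\,\mathit{SMB}_t \;+\; h_p\,\mathit{HML}_t \;+\; m_p\,\mathit{UMD}_t \;+\; \varepsilon_{pt}$$

where the left hand side is the excess return on a portfolio mimicking the employees’ trades, and MKT, SMB, HML and UMD are the market, size, value and momentum factors. (This is the generic specification; the exact implementation in the paper may differ in its details.)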
If my memory serves me right, back in 2000, when I was in SEBI (the Securities and Exchange Board of India), employees (from the Chairman down to all staff) were forbidden from investing in equities except through mutual funds. This is arguably too draconian, but clearly the SEC rules (and their enforcement) were not tight enough.
Posted at 1:29 pm IST on Sat, 1 Mar 2014 permanent link
Categories: regulation
Edgar for Humans: Where individual effort trumps mighty organizations
Last week, Maris Jensen released her web site SEC Filings for Humans. (There is a nice interview with Maris Jensen at E Pluribus Unum.)
I use the SEC’s Edgar database quite often, but nowadays I never go there without first having identified the exact document that I need through other means. Searching for the document itself on Edgar is not for the faint hearted. I use Yahoo Finance and Google Finance quite extensively and find both quite disappointing. It is therefore truly amazing that one individual using a bunch of open source software (particularly D3.js and SQLAlchemy) can do something that none of these powerful organizations with vast resources have been able to accomplish.
For example, on Edgar, if you look for JPMorgan, you will find two registrants with the same name Jpmorgan Chase & Co. Only by trial and error would you be able to figure out which is the true JPMorgan. At Maris’ site, both registrants are listed, but the correct one is identified by the ticker symbol (JPM). Not rocket science, but saves a few minutes of searching for the wrong documents. Once you select JPM, you can view all its financial information (from the XBRL filings) in tabular form instead of wading through a huge text file. A lot of interesting information is displayed visually – for example, you can find a time series chart of all of the company’s subsidiaries. (For a company like JPM with hundreds of subsidiaries, this chart is quite intimidating, a similar chart for say Apple is more enjoyable). The influence chart of cross ownership is also truly impressive.
It is quite likely that in a few days as more and more users try out her website, it will become unresponsive and possibly even crash. One hopes that a large organization with more bandwidth and hardware takes over the site and keeps it running. But the prospects do not look very good – Maris tried to donate the whole thing to the SEC, but they did not even bother to respond. Meanwhile the SEC spends a lot of money buying back its own Edgar data from commercial vendors.
Finally, will something like this ever become available in India?
Posted at 5:39 pm IST on Tue, 25 Feb 2014 permanent link
Categories: regulation, technology
Looking for smuggled gold in the balance of payments
The World Gold Council (WGC) reported last week that despite import curbs imposed during 2013, Indian gold demand continued to grow with gold smuggling (what the WGC euphemistically calls unofficial gold imports) compensating for the fall in official imports. This is of course in line with a lot of anecdotal evidence.
In principle, gold smuggling should show up in the balance of payments (BOP) data in some form – after all the smuggled gold also has to be paid for in foreign exchange. For example, smugglers could collect foreign currency from migrant workers outside India and remit the money in Indian rupees to their families in India via the “hawala” channels. Corporate “hawala” could take the form of under/over invoicing of trade or inflating outbound foreign direct investment from India.
The Indian balance of payments data is available only up to the July-September 2013 quarter while smuggling is likely to have picked up more in the subsequent quarter. Nevertheless, the data does show some tentative evidence for the financing of gold smuggling. For example, in item 2.2.2.2 (Other capital transfers including migrants’ transfers), the gross inflows fell by nearly $1.0 billion and the net flow fell by $0.8 billion. Similarly, item 3.1.B (Direct Investment by India) rose by $1.2 billion on a gross outflow basis and by $0.6 billion on a net outflow basis. I am grateful to my colleague Prof. Ravindra Dholakia for pointing out to me that the gross flows are possibly more important than the net flows.
The WGC data and the BOP data are consistent with the anecdotal evidence that smuggling is on the rise. Some economists tend to be dismissive of such anecdotal evidence – their standard refrain is that “the plural of anecdote is not data”. In finance, we tend to be much more respectful of anecdotal and suggestive evidence. Our standard reflex is to “buy the rumour and sell the fact”. Financial markets are forward looking and by the time conclusive statistical data becomes available, it is too late to be actionable.
In any case, it is dangerous to let smuggling take root. Smuggling of gold requires setting up a complex and sophisticated supply chain including financing, insurance, transportation, warehousing and distribution. Stringent import curbs create incentives to incur the large fixed costs required to set up such a supply chain. But once the supply chain has been set up, it may continue to operate even after the curbs are relaxed so long as the arbitrage differentials exceed the variable costs of the supply chain. In this sense, there are large hysteresis effects (path dependence) in these kinds of phenomena. More dangerously, the supply chain created to smuggle gold can be easily re-purposed for more nefarious activities. In the long run, the gold import curbs may turn out to be a very costly mistake.
Posted at 1:35 pm IST on Sun, 23 Feb 2014 permanent link
Categories: gold, international finance
High Frequency Manipulation at Futures Expiry
My colleagues, Prof. Sobhesh Kumar Agarwalla and Prof. Joshy Jacob and I have a working paper on “High Frequency Manipulation at Futures Expiry: The Case of Cash Settled Indian Single Stock Futures” (also available at SSRN).
Some extracts from the abstract and the conclusion:
In 2013, the Securities and Exchange Board of India identified a case of alleged manipulation (in September 2012) of the settlement price of cash settled single stock futures based on high frequency circular trading. This alleged manipulation exploited several interesting characteristics of the Indian single stock futures market: (a) the futures contract is cash settled, (b) the settlement price is not based on a call auction or special session, but is the volume weighted average price (VWAP) during the last half an hour of trading in the cash market on the expiry date, and (c) anecdotal evidence suggests that the Indian market is more vulnerable to circular trading in which different entities associated with the same person trade with each other to create a false market.
We demonstrate that the combination of cash settlement with the use of a volume weighted average price (VWAP) to determine the settlement price on expiry day makes the Indian single stock futures market vulnerable to a form of high frequency manipulation that targets price insensitive execution algorithms. This type of manipulation is hard to prevent using mechanisms like position limits, and therefore it is necessary to establish a robust program to detect and deter manipulation.
We develop an econometric technique that uses high frequency data and which can be integrated with the automated surveillance system to identify suspected cases of high frequency manipulation very close to the event. Human judgement then needs to be applied to identify cases which prima facie justify detailed investigation and possible prosecution. Our results suggest that high frequency manipulation of price insensitive execution algorithms may be taking place. However, successful manipulation of the settlement price is relatively rare with only one clear instance (the September 27, 2012 episode) and one (milder) parallel.
Finally, the use of the volume weighted average price (VWAP) to determine the cash settlement price of the futures contract might require reconsideration.
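For readers who have not seen the paper, the settlement price at issue is nothing more complicated than

$$P_{\text{settle}} \;=\; \text{VWAP} \;=\; \frac{\sum_i p_i\,v_i}{\sum_i v_i}$$

computed over all cash market trades in the last half hour of the expiry day. Because the weights are the traded volumes, a manipulator who pushes a large volume of trades through at off-market prices during that window can drag the settlement price towards those prices, which is the vulnerability discussed above.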
Posted at 9:02 pm IST on Sun, 16 Feb 2014 permanent link
Categories: exchanges, manipulation
Does the market close at 4:00:00 pm or at 4:00:01 pm?
A few years ago, somebody asking this question would have been dismissed as a nit picking nerd, but today that question has become extremely important. Last week, the Wall Street Journal’s MoneyBeat blog carried an interesting story about how this difference cost a trader $100,000.
The official market close in the US is 4:00:00 pm, but the computers at Nasdaq keep humming for almost one second longer to reconcile all trades and determine the market closing price. About 150 milliseconds after 4:00 pm on December 5, the earnings announcement of Ulta Salon Cosmetics & Fragrance Inc. hit Business Wire and within 50 milliseconds after that a series of sell orders started hitting the market. When the market closed 700 milliseconds after 4:00 pm, the stock had fallen from $122 to $118.
The problem is that companies that want to release earnings after trading hours assume that trading stops at 4:00:00 pm, while smart traders know that the actual close is nearer to 4:00:01. That creates a profit opportunity for the fastest machine readable news feeds and the fastest trading algorithms. Traders are thinking in terms of milliseconds, but regulators are probably thinking in terms of minutes. Time for the regulators to catch up!
Posted at 5:51 pm IST on Thu, 13 Feb 2014 permanent link
Categories: exchanges, technology
Flaws in EMV (Chip and Pin) Card Security
Steven J. Murdoch and Ross Anderson have a fascinating paper entitled “Security Protocols and Evidence: Where Many Payment Systems Fail” (h/t Bruce Schneier). The paper proposes five principles to guide the design of good security protocols:
Principle 1: Retention and disclosure. Protocols designed for evidence should allow all protocol data and the keys needed to authenticate them to be publicly disclosed, together with full documentation and a chain of custody.
...
Principle 2: Test and debug evidential functionality. When a protocol is designed for use in evidence, the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses.
...
Principle 3: Open description of TCB [trusted computing base]. Systems designed to produce evidence must have an open specification, including a concept of operations, a threat model, a security policy, a reference implementation and protection profiles for the evaluation of other implementations.
...
Principle 4: Failure-evidentness. Transaction systems designed to produce evidence must be failure-evident. Thus they must not be designed so that any defeat of the system entails the defeat of the evidence mechanism.
...
Principle 5: Governance of forensic procedures. The forensic procedures for investigating disputed payments must be repeatable and be reviewed regularly by independent experts appointed by the regulator. They must have access to all security breach notifications and vulnerability disclosures.
EMV cards violate several of these principles and the authors propose several ideas to improve the evidential characteristics of the system. One idea is a cryptographic audit log of all transactions to be maintained by the card. A forward secure Message Authentication Code (MAC) would prevent a forger from inserting fake transactions in the past even with possession of the current audit key. Similarly, committing a hash chain over all past transactions would mean that a forger with knowledge of the audit key (but not the card itself) cannot insert fake transactions without inducing a discrepancy between the bank server log and the audit log on the genuine card. By putting the card into a forensic mode to retrieve the audit log, a customer would thus be able to demonstrate that the card was not present in a disputed transaction – presumably, the merchant and the bank will be left to figure out how to share the loss.
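As a toy illustration of the two ideas in the previous paragraph (a forward secure MAC obtained by ratcheting the key through a one-way hash, and a hash chain committing to all past transactions), here is a minimal sketch. It is purely illustrative and is not the construction proposed in the Murdoch-Anderson paper.

```python
import hashlib
import hmac

class AuditLog:
    """Toy sketch of a forward-secure, hash-chained transaction log.

    The card keeps only the current key; after every entry the key is
    ratcheted forward with a one-way hash, so a forger who extracts
    today's key cannot recreate the MACs on earlier entries, and the
    hash chain means past entries cannot be altered or inserted without
    breaking the chain kept on the genuine card.
    """
    def __init__(self, initial_key: bytes):
        self.key = initial_key
        self.chain = hashlib.sha256(b"genesis").digest()
        self.entries = []

    def append(self, transaction: bytes):
        # Chain the new transaction onto everything recorded so far
        self.chain = hashlib.sha256(self.chain + transaction).digest()
        # MAC the chained value with the current (soon to be forgotten) key
        tag = hmac.new(self.key, self.chain, hashlib.sha256).digest()
        self.entries.append((transaction, self.chain, tag))
        # Ratchet: derive the next key and discard the old one
        self.key = hashlib.sha256(self.key).digest()

log = AuditLog(b"card-personalisation-secret")
log.append(b"2014-02-01 ATM withdrawal 5000")
log.append(b"2014-02-03 POS purchase 1200")
```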
One of the comments (by mike~acke) on Bruce Schneier’s blog points out that in today’s system, the card holder has to trust the merchant completely: “when you use your card: you are NOT authorizing ONE transaction: you are giving the merchant INDEFINITE UNRESTRICTED access to your account.” His solution is a simple though radical idea which removes the merchant from the trusted chain. (mike~acke’s comment below is probably easier to understand if you interpret POST to mean merchant and PCI to mean bank though neither identification is completely correct.)
When the customer presents the card it DOES NOT send the customer’s card number to the POST. Instead, the POST will submit an INVOICE to the customer’s card. On customer approval the customer’s card will encrypt the invoice together with authorization for payment to the PCI (Payment Card Industry Card Service Center) for processing and forward the cipher text to the POST. Neither the POST nor the merchant’s computer can read the authorizing message because it is PGP encrypted for the PCI service. Therefore the merchant’s POST must forward the authorizing message cipher text to the PCI service center. On approval the PCI Service Center will return an approval note to the POST and an EFT from the customer’s account to the merchant’s account. The POST will then print the PAID invoice. The customer picks up the merchandise and the transaction is complete. The merchant never knows who the customer was: the merchant never has ANY of the customer’s PII data.
I like this idea and would like to extend it even to ATM cards. That way, we would never have to worry about inserting a card into a fake or compromised ATM, because our ATM card would not trust the ATM itself – it would talk directly to the bank server in encrypted messages that the ATM cannot understand. At the end of it all, the bank server would simply send a message to the ATM to dispense the cash.
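A toy sketch of that message flow (my own reading of the comment, not a real EMV or PGP implementation): the comment envisages public-key (PGP) encryption, but for brevity the sketch below uses a symmetric key shared only by the card and the bank, which plays the same role of keeping the authorisation opaque to the merchant. All names and amounts are made up.

    from cryptography.fernet import Fernet   # pip install cryptography
    import json

    # Hypothetical key known only to the card and the bank (the "PCI service");
    # the merchant's terminal (the "POST") never holds it.
    card_bank_key = Fernet.generate_key()
    card, bank = Fernet(card_bank_key), Fernet(card_bank_key)

    # 1. The merchant's terminal presents an invoice to the card.
    invoice = {"merchant": "M123", "amount": 1999, "currency": "INR"}

    # 2. The card wraps the invoice and its payment authorisation in a ciphertext
    #    that only the bank can read; the terminal merely relays the opaque blob.
    authorisation = card.encrypt(json.dumps(
        {"invoice": invoice, "approve": True, "account": "customer-42"}).encode())

    # 3. The bank decrypts, checks the invoice, settles, and notifies the terminal.
    message = json.loads(bank.decrypt(authorisation))
    assert message["approve"] and message["invoice"] == invoice
    print("approved: pay", invoice["amount"], "to", invoice["merchant"])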
Updated February 11, 2014 to insert block quotes and ellipses in quote from Murdoch-Anderson paper.
Posted at 10:47 am IST on Tue, 11 Feb 2014 permanent link
Categories: fraud, technology
To short the rupee go to London and Singapore
Rajan Goyal, Rajeev Jain and Soumasree Tewari have an interesting paper in the RBI Working Paper series on the “Non Deliverable Forward and Onshore Indian Rupee Market: A Study on Inter-linkages” (WPS(DEPR):11/2013, December 2013).
They use an error correction model (ECM) to measure the linkages between the onshore and offshore rupee markets. The econometric model tells a very simple story: in normal times, much of the price discovery happens in the onshore market though there is a statistically significant information flow from the offshore market. But during a period of rupee depreciation, the price discovery shifts completely to the offshore market. (While the authors do not explicitly report Hasbrouck information shares or Gonzalo-Granger metrics, it seems pretty likely from the reported coefficients that the change in these measures from one regime to the other would be dramatic).
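For readers who want to see the mechanics, here is a minimal sketch of estimating such a model with statsmodels (simulated data and an arbitrary lag structure, not the authors' dataset or specification):

    # Two-equation vector error correction model for onshore and offshore (NDF)
    # USD/INR rates; the error-correction loadings (alpha) show which market
    # adjusts towards the other, i.e. where price discovery is happening.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM

    rng = np.random.default_rng(0)
    n = 500
    common = np.cumsum(rng.normal(0, 0.3, n))            # shared random-walk "fair value"
    data = pd.DataFrame({
        "onshore": 60 + common + rng.normal(0, 0.05, n),
        "offshore": 60 + common + rng.normal(0, 0.10, n),
    })

    res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
    print(pd.DataFrame(res.alpha, index=data.columns, columns=["loading"]))
    # A large loading on the onshore equation together with a small loading on
    # the offshore equation would mean the onshore price does the adjusting,
    # consistent with price discovery shifting offshore in the depreciation regime.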
My interpretation of this result is that the exchange control system in India makes it very difficult to short the rupee onshore. The short interest emerges in the offshore market and is quickly transmitted to the onshore market via arbitrageurs who have the ability to operate in both markets:
- A hedge fund with a bearish view on the currency might short the rupee in the offshore market depressing the rupee in the offshore market.
- A foreign institutional investor with a relatively neutral view on the currency might buy the rupee (at a slightly lower price) in the offshore market from the bearish hedge fund.
- This foreign institutional investor might then offset its offshore long position with a short position (at a slightly higher price) in the onshore market (clothed as a hedge of its existing Indian assets). This would transmit the price drop from the offshore market to the onshore market with a small lag.
On the other hand, during the stable or appreciation phase, there is no need to short the rupee and divergent views on the rupee can be accommodated in the onshore market in the form of differing hedge propensities of exporters, importers and foreign currency borrowers.
Short sale restrictions in the onshore market have two perverse effects:
- They contribute to the migration of the currency market from onshore to offshore.
- They make currency crashes more likely because they prevent rational bearish investors from contributing to price discovery in the build up to the crash. (This is a standard argument about short sale restrictions: see for example Harrison Hong and Jeremy Stein (2003) “Differences of opinion, short-sales constraints, and market crashes”, Review of Financial Studies, 16(2), 487-525.)
Posted at 5:11 pm IST on Tue, 28 Jan 2014 permanent link
Categories: arbitrage, international finance
Rating Agencies: What changed in the 2000s?
Consider three alternative descriptions of what happened to the big global rating agencies during the early 2000s:
- Kedia, Rajgopal and Zhou wrote a paper last year presenting evidence that the deterioration in the quality of Moody's credit ratings was due to its going public and the consequent pressure for increasing profits.
- Bo Becker and Todd Milbourn wrote a paper three years ago arguing that increased competition from Fitch coincided with lower quality ratings from the incumbents (S&P and Moody's).
- Way back in 2005, Frank Partnoy wrote a highly prescient paper describing the transformation of the rating industry since the 1990s that turned “gate keepers” into “gate openers”. He attributed the very high profitability of the gate openers to three things: (a) the regulatory licences that made ratings valuable even if they were uninformative, (b) the “free speech” immunity from civil and criminal liability for malfeasance and (c) the rapid growth of CDOs and structured finance.
I find Partnoy’s paper the most convincing despite its total lack of econometrics. The sophisticated difference-in-difference econometrics of the other two papers is, in my view, vitiated by reverse causation. When rating becomes “a much more valuable franchise than other financial publishing” as Partnoy showed, there would be greater pressure to do an IPO and also greater willingness to disregard any adverse reputational effects on other publishing businesses of the group. Similarly, the structural changes in the industry would invite greater competition from previously peripheral players like Fitch who happen to hold the same regulatory licence.
Posted at 11:04 am IST on Sun, 19 Jan 2014 permanent link
Categories: bond markets, corporate governance, regulation
Day dreaming about electronic money
Earlier this week, the Reserve Bank of India published the report of the Nachiket Mor Committee on financial inclusion (technically the Committee on Comprehensive Financial Services for Small Businesses and Low Income Households). Its first recommendation was that “By January 1, 2016 each Indian resident, above the age of eighteen years, would have an individual, full-service, safe, and secure electronic bank account.”
The Committee’s mandate was obviously to look at financial inclusion within the context of the current financial architecture and so it could not by any means have recommended a change in the core of that financial architecture itself. But for us sitting outside the Committee, there is no such constraint. We are entitled to day-dream about anything. So I would like to ask the question: if we were designing everything on a completely clean slate, what would we like to do?
Day dreaming begins here.
In my day dream, India would embrace electronic money and give every Indian an eWallet. Instead of linking India’s Unique ID (Aadhaar number) to a bank account, we would link it to an eWallet provided by the central bank. We would simultaneously move to abolish paper money by converting existing currency notes (with their famous “I promise to pay the bearer”) into genuine promissory notes redeemable in eRupees delivered into our eWallets. Financial inclusion would then have three ingredients: a Unique ID (Aadhaar) for everyone which is more or less in place now, the proposed eWallet for everyone, and a mobile phone for everyone. All eminently doable by 2016.
The costs of creating all the computing and communication infrastructure for a billion eWallets would be huge, but could be easily financed by a small cess on all paper money and bank money. The cess would also serve to incentivize a rapid shift to eRupees. (At some stage, we could even decide to make demand deposits illegal just like bearer demand promissory notes are illegal today, but I think that a ban would not be necessary at all.)
The operating costs of eRupees would be easily covered by the seigniorage income on the electronic money. Because of its greater convenience, safety and liquidity, eRupees should become at least as large as M2, and probably would grow to 25-30% of M3, making it about twice as large as paper money. The operating costs of eRupees should be significantly less than that of paper currency, and the seigniorage income much greater. The government would earn a fatter dividend from the Reserve Bank of India after covering all the cost of eRupees.
A huge chunk of the current banking infrastructure is now devoted to the useless paper shuffling activity that constitutes the current payment system. If this infrastructure is re-purposed to perform genuine financial intermediation, this would support much higher levels of economic growth. Divested of a payment system, the banks would be more like non bank finance companies and would pose far less systemic risk as well.
All this would allow India to leapfrog the rest of world and create the most advanced payment system on the planet (something like a Bitcoin backed by an army). In a world that struggles to ensure that systemically important settlement systems like clearing corporations settle in central bank money, we would have a system in which every individual could settle in central bank money. It is even possible that eRupees would find international adoption in the absence of any competition.
Day dreaming ends here.
Posted at 5:08 pm IST on Thu, 9 Jan 2014 permanent link
Categories: blockchain and cryptocurrency, technology
Tapering Talk: Why was India hit so hard?
Barry Eichengreen and Poonam Gupta have written a paper on how the “Tapering Talk” by the US Federal Reserve in mid 2013 impacted emerging markets.
In order to determine which countries were affected more severely, Eichengreen and Gupta construct a “Pressure Index 1” based on changes in the exchange rate and foreign exchange reserves. They also construct a “Pressure Index 2” that additionally includes the impact on the stock market. By both measures, they find that India was the worst affected within a peer group of seven countries. The peer group includes all the countries that Morgan Stanley have called the Fragile Five (Brazil, India, Indonesia, South Africa and Turkey); in addition, it includes China and Russia. The Pressure Index 1 for India was 7.15 compared to a median of 3.46 for the peer group. Since the Indian stock market did not do too badly, the Pressure Index 2 for India was slightly better at 6.57 compared to a median of 4.63 for the peer group.
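The paper's exact construction is not reproduced here, but the sketch below shows one conventional way such an exchange market pressure index is built (in the Eichengreen-Rose-Wyplosz tradition of weighting components by the inverse of their volatility); the weights, window and country numbers are all made up for illustration:

    import numpy as np

    def emp_index(components):
        """Inverse-volatility-weighted average of the components, each measured
        in percent over the taper-talk window (depreciation, reserve loss, and
        optionally the fall in the stock market)."""
        cols = [np.asarray(c, dtype=float) for c in components]
        weights = np.array([1.0 / c.std(ddof=1) for c in cols])
        weights = weights / weights.sum()
        return sum(w * c for w, c in zip(weights, cols))

    depreciation = np.array([13.8, 5.0, 2.4, 6.1, 9.2, 4.0, 1.1])   # hypothetical percent changes
    reserve_loss = np.array([ 1.5, 2.0, 0.8, 3.0, 1.0, 2.5, 0.4])
    stock_fall   = np.array([ 1.2, 6.5, 0.5, 4.0, 3.3, 2.1, 0.9])

    index1 = emp_index([depreciation, reserve_loss])                # Pressure Index 1 style
    index2 = emp_index([depreciation, reserve_loss, stock_fall])    # Pressure Index 2 style
    print(np.round(index1, 2), np.round(index2, 2))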
Turning to why some countries were hit harder than others, the paper finds:
What mattered more was the size of their financial markets; investors seeking to rebalance their portfolios concentrated on emerging markets with relatively large and liquid financial systems; these were the markets where they could most easily sell without incurring losses and where there was the most scope for portfolio rebalancing. The obvious contrast is with so-called frontier markets with smaller and less liquid financial systems. This is a reminder that success at growing the financial sector can be a mixed blessing. Among other things, it can accentuate the impact on an economy of financial shocks emanating from outside
In addition, we find that the largest impact of tapering was felt by countries that allowed exchange rates to run up most dramatically in the earlier period of expectations of continued ease on the part of the Federal Reserve, when large amounts of capital were flowing into emerging markets. Similarly, we find the largest impact in countries that allowed the current account deficit to widen most dramatically in the earlier period when it was easily financed. Countries that used policy and in some cases, perhaps, enjoyed good luck that allowed them to limit the rise in the real exchange rate and the growth of the current account deficit in the boom period suffered the smallest reversals.
Clearly, India’s increasing integration with global financial markets imposes greater market discipline on our policy makers than they have been used to in the past.
Posted at 9:27 pm IST on Wed, 8 Jan 2014 permanent link
Categories: crisis, international finance
Clearing of OTC Derivatives
Dr. David Murphy of the Deus Ex Macchiato blog has published a comprehensive book on clearing of OTC derivatives (OTC Derivatives, Bilateral Trading and Central Clearing, Palgrave Macmillan, 2013). I was surprised that the author information on the book cover flap does not mention the blog at all but gives prominence to his having been head of risk at ISDA. Had I found this book at a book shop, the ISDA connection might have made me less likely to buy the book because of the obvious bias that the position entails. This was a book that I read only because of my respect for the blogger. Many publishers have obviously not received the memo on how the internet changes everything.
The book presents a balanced discussion of most issues while of course leaning towards the ISDA view of things. Many of the arguments in the book against the clearing mandate would be familiar to those who read the Streetwise Professor blog. Yet, I found the book quite informative and enjoyable.
In Figure 10.1 (page 261), Murphy summarizes the winners and losers from the clearing reforms. To summarize that highly interesting summary:
- Leading G14 dealers: Mixed (higher capital but reduced competition)
- Smaller dealers: Lose (they will end up becoming clients of the G14)
- End users: Mostly lose (higher costs)
- Regulators: Mostly win (enhanced role and power)
- CCPs: Win (new business)
Obviously, the clearing mandate has not quite worked out the way its advocates expected. Clearing was originally expected to lead to greater competition and reduce the dominance of the big (G14) dealers. Murphy explains that the big dealers will actually benefit from the mandate as they can more easily cope with the compliance costs.
I am not disturbed to find corporate end users listed as losers. If Too Big to Fail (TBTF) banks were being subsidized by the taxpayer to write complex customized derivatives, these products would clearly have been under priced and over produced. When the subsidy is removed, supply will drop and prices will rise. This is a feature and not a bug.
If the price rises sufficiently, end users may shift to more standardized and simpler products. Of course, this will imply basis risks because the hedge no longer matches the exposure exactly. This matters less than one might think. The Modigliani Miller (MM) argument applied to hedging (which is actually very similar to a capital structure decision) implies that most hedging decisions are irrelevant. The only relevant hedging decisions are the ones that involve risks large enough to threaten bankruptcy or financial distress and therefore invalidate the MM assumptions. Basis risks are small enough to allow the MM arguments to be applied. Inability to hedge them has zero real costs for the corporate end user and for society as a whole.
One could visualize many ways in which the market may evolve:
- The reforms could lead to the futurization of OTC derivatives. That might be the best possible outcome – exchange trading has even more social benefits than clearing in terms of transparency and competition. The increased basis risk is a non issue because of the MM argument.
- Another possible outcome could be a reduction in end user hedging and consequently a smaller derivatives market. Under the MM assumptions, this need not be problematic either.
- The worst possible outcome would be an OTC market that is even more concentrated (G10 or even G5) and that uses clearing services provided by badly managed CCPs. This would be a nightmare scenario with a horrendous tail risk.
Posted at 4:21 pm IST on Wed, 18 Dec 2013 permanent link
Categories: exchanges, regulation, risk management
Electronic Trading
There was a time not so long ago when equities traded on electronic exchanges and everything else traded on OTC markets. We used to hear people argue vehemently that electronic trading would not work outside of the equities world. The belief was that the central order book could not handle large trade sizes. Algorithmic and high frequency trading changed all that. We learned that large trades could be sliced and diced into smaller orders that the central order book could handle easily. With exchanges offering lower and lower latency trading, a big order could be broken into pieces and fully executed faster than a block trade could be worked out upstairs in the old style.
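The slicing idea itself is trivially simple, as the toy sketch below shows (a naive fixed schedule of my own; real execution algorithms adapt the slices to traded volume, spreads and queue position):

    def slice_order(parent_qty, n_slices, max_child_qty):
        """Break a large parent order into equal child orders that the central
        order book can absorb, with the remainder in the final slice."""
        base = parent_qty // n_slices
        children = [base] * n_slices
        children[-1] += parent_qty - base * n_slices
        assert all(q <= max_child_qty for q in children), "slices too large for the book"
        return children

    child_orders = slice_order(parent_qty=100_000, n_slices=40, max_child_qty=5_000)
    print(len(child_orders), sum(child_orders), child_orders[:3])   # 40 100000 [2500, 2500, 2500]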
Slowly the new paradigm is expanding into new asset classes. The 2013 triennial survey shows that electronic trading has virtually taken over spot foreign exchange trading and is dominant in other parts of the foreign exchange market as well (Dagfinn Rime and Andreas Schrimpf, “The anatomy of the global FX market through the lens of the 2013 Triennial Survey”, BIS Quarterly Review, December 2013). The foreign exchange market has ceased to be an inter bank market with hedge funds and other non bank financial entities becoming the biggest players in the market. As Rime and Schrimpf explain:
Technological change has increased the connectivity of participants, bringing down search costs. A new form of “hot potato” trading has emerged where dealers no longer play an exclusive role.
The next battle ground is corporate bonds. Post Dodd Frank, the traditional market makers are less willing to provide liquidity and people are looking for alternatives including the previously maligned electronic trading idea. McKinsey and Greenwich Associates have produced a report on Corporate Bond E-Trading which discusses the emerging trends but is pessimistic about equities style electronic trading. I am not so pessimistic because in my view if you can get hedge funds and HFTs to trade something, then it will do fine on a central order book.
Posted at 6:03 pm IST on Tue, 17 Dec 2013 permanent link
Categories: exchanges, technology
The Pharaoh's funeral plans
Post crisis, there has been a lot of interest in ensuring that large banks prepare a living will or funeral plan describing how they will be resolved if they fail. Though this does look like a good idea, I think there is a catch which is best illustrated with an example.
Some of the most successful funeral plans in history were those of the Egyptian Pharaohs, who began their reigns with the construction of the pyramids in which they were to be entombed. If anything, this would have increased the cost of the funeral. It is very likely that left to their successors, the pyramids would have been less grandiose. Moreover, to a finance person it is obvious that the present value of the cost increased because the pyramids were built earlier than required.
Much the same thing may be true of the banks as well. Funeral plans may make regulators complacent about excessively large and complex banks; as a result, the costs would be higher when they do fail. Banks may also incur a lot of wasteful expenditure to prepare and defend funeral plans that may ultimately prove useless. The existence of these plans may lead to delays in taking prompt corrective action for vulnerable banks. Why shut down a bank now when you believe (perhaps wrongly) that it can be shut down later without great difficulty? In short, the plans may allow mere words to substitute for real action.
Posted at 3:54 pm IST on Fri, 13 Dec 2013 permanent link
Categories: banks, regulation
Revisiting Fischer Black's deathbed paper
In 1995, Fischer Black submitted a paper on “Interest rates as options” when he was terminally ill with cancer. While publishing the paper (Journal of Finance, 1995, 50(5), 1371-1376), the Journal noted:
Note from the Managing Editor: Fischer Black submitted this paper on May 1, 1995. His submission letter stated: “I would like to publish this, though I may not be around to make any changes the referee may suggest. If I’m not, and if it seems roughly acceptable, could you publish it as is with a note explaining the circumstances?” Fischer received a revise and resubmit letter on May 22 with a detailed referee’s report. He worked on the paper during the Summer and had started to think about how to address the comments of the referee. He died on August 31 without completing the revision.
The paper contained an interesting idea to deal with the problem of negative interest rates – assume that the true or ‘shadow short rate’ can be negative, but the rate that we do observe is never negative because currency provides an option to earn a zero interest rate instead. Viewed this way, the interest rate can itself be viewed as an option (with a strike price of zero). What Black found attractive about this idea was that it made modelling easy: one could for example assume that the shadow rate follows a normal (Gaussian) distribution. Whenever the Gaussian distribution produces a negative interest rate, we simply replace it by zero. We do not need to assume a log normal or square root process just to avoid negative interest rates.
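In symbols, the observed rate is simply r = max(s, 0), where s is the Gaussian shadow rate. A tiny simulation sketch (an illustrative Vasicek-type process with made-up parameters, not a calibrated model):

    import numpy as np

    rng = np.random.default_rng(1)
    n, dt = 2500, 1 / 250                    # ten years of daily steps
    kappa, theta, sigma = 0.5, 0.02, 0.02    # mean reversion speed, long-run mean, volatility

    s = np.empty(n)
    s[0] = -0.01                             # the shadow rate can start (and stay) negative
    for t in range(1, n):
        s[t] = s[t-1] + kappa * (theta - s[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()

    r = np.maximum(s, 0.0)                   # observed rate: currency is an option struck at zero
    print("fraction of days pinned at the zero lower bound:", (r == 0).mean())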
While interesting in theory, the model did not prove very popular in practice. But five years of zero interest rates in the US has changed this. Neither the lognormal nor the square root process can easily yield a persistent zero interest rate. Black’s shadow rate achieves this in a very easy and natural manner. More than the finance community, it is the macroeconomics world that has rediscovered Black’s model. For example, Wu and Xia have a paper in which they show that macroeconomic models perform nicely even at the zero lower bound (ZLB) if the actual short rate is replaced by the shadow rate (h/t Econbrowser). The shadow rate has the same correlations with other macroeconomic variables at the ZLB as the actual rate has during normal times.
As I have mentioned previously on this blog, modelling interest rate risk at the ZLB is problematic and different clearing corporations have taken different approaches to the problem. Maybe, they should take Black’s shadow short rate more seriously.
Posted at 11:39 am IST on Fri, 29 Nov 2013 permanent link
Categories: bond markets, derivatives, monetary policy
Rakoff on financial crisis prosecutions
Judge Rakoff, who is best known for rejecting SEC settlements against Bank of America and Citigroup for not going far enough, has come out with a devastating critique of the US failure to prosecute high level executives for frauds related to the financial crisis.
Rakoff points out that the frauds of the 1970s, 1980s and 1990s all resulted in successful prosecutions of even the highest level figures.
In striking contrast with these past prosecutions, not a single high level executive has been successfully prosecuted in connection with the recent financial crisis, and given the fact that most of the relevant criminal provisions are governed by a five-year statute of limitations, it appears very likely that none will be.
First of all, Rakoff dismisses the legal difficulties in prosecuting crisis crimes:
- the doctrine of “willful blindness” or “conscious disregard” ameliorates the difficulty of proving fraudulent intent;
- the fact that the counterparties were sophisticated is irrelevant because in criminal fraud cases, it is never required to prove reliance;
- the “too big to jail” excuse reflects a disregard for equality under the law.
Rakoff thinks that there are three reasons why there have been no prosecutions:
- Prosecutors have other priorities –
- the FBI’s resources were diverted to fighting terrorism;
- the SEC was focused on Ponzi schemes and accounting frauds;
- the Department of Justice was bogged down with the prosecution of insider trading based on the Rajaratnam tapes
- “[T]he Government’s own involvement in the underlying circumstances that led to the financial crisis ... [and] in the aftermath of the financial crisis ... would give a prudent prosecutor pause in deciding whether to indict a C.E.O. who might, with some justice, claim that he was only doing what he fairly believed the Government wanted him to do.”
- “The shift that has occurred over the past 30 years or more from focusing on prosecuting high-level individuals to focusing on prosecuting companies and other institutions.”
Rakoff is known for his strong views on the last point and he lays out the case brilliantly:
If you are a prosecutor attempting to discover the individuals responsible for an apparent financial fraud, you go about your business in much the same way you go after mobsters or drug kingpins: you start at the bottom and, over many months or years, slowly work your way up. Specifically, you start by “flipping” some lower or mid-level participant in the fraud ... With his help, and aided by the substantial prison penalties now available in white collar cases, you go up the ladder. ...
But if your priority is prosecuting the company, a different scenario takes place. Early in the investigation, you invite in counsel to the company and explain to him or her why you suspect fraud. He or she responds by assuring you that the company wants to cooperate and do the right thing, and to that end the company has hired a former Assistant U.S. Attorney, now a partner at a respected law firm, to do an internal investigation. ... Six months later the company’s counsel returns, with a detailed report showing that mistakes were made but that the company is now intent on correcting them. You and the company then agree that the company will enter into a deferred prosecution agreement that couples some immediate fines with the imposition of expensive but internal prophylactic measures. For all practical purposes the case is now over. You are happy ...; the company is happy ...; and perhaps the happiest of all are the executives, or former executives, who actually committed the underlying misconduct, for they are left untouched.
I suggest that this is not the best way to proceed. Although it is supposedly justified in terms of preventing future crimes, I suggest that the future deterrent value of successfully prosecuting individuals far outweighs the prophylactic benefits of imposing internal compliance measures that are often little more than window-dressing. Just going after the company is also both technically and morally suspect. It is technically suspect because, under the law, you should not indict or threaten to indict a company unless you can prove beyond a reasonable doubt that some managerial agent of the company committed the alleged crime; and if you can prove that, why not indict the manager? And from a moral standpoint, punishing a company and its many innocent employees and shareholders for the crimes committed by some unprosecuted individuals seems contrary to elementary notions of moral responsibility.
Rakoff concludes with a scathing criticism:
So you don’t go after the companies, at least not criminally, because they are too big to jail; and you don’t go after the individuals, because that would involve the kind of years-long investigations that you no longer have the experience or the resources to pursue.
After the series of frauds in the late 1990s and early 2000s in the US (Enron, Worldcom, Tyco and Adelphia), Europe (Lernout and Hauspie, Vivendi, ABB and KirchMedia) and India (Tata Finance), I wrote that: “The US has shown that it can prosecute and punish wrong doers far more speedily than most other jurisdictions.” I am not at all sure about this today.
Posted at 9:34 pm IST on Fri, 15 Nov 2013 permanent link
Categories: crisis, law, regulation
Equity markets are different
Equity markets (specifically the market for large capitalization stocks) seem to be very different from other markets in that they are the only markets that are unconditionally liquid. The Basel Committee has officially recognized this – in their classification of 24 markets by liquidity horizons, the large cap equity market is the only market in the most liquid bucket. (Basel Committee on Banking Supervision, Fundamental review of the trading book: A revised market risk framework, Second Consultative Document, October 2013, Table 2, page 16)
There is abundant anecdotal evidence for the greater liquidity of large cap equity markets in stressed conditions – you may not like the price but you would not have any occasion to complain about the volume. For example, in India when the fraud in Satyam was revealed, the price of the stock dropped dramatically, but the market remained very liquid. In fact, the liquidity of the stock on that day was far greater than normal. During the global financial crisis, stock markets remained very liquid while liquidity in many other markets dried up. During the 2008 crisis, Société Générale could unwind Kerviel’s unauthorized equity derivative position of €50 billion in just two days.
There could be many reasons why large cap equity markets are indeed different:
- Equities trade in exchanges with pre and post trade transparency unlike many other asset classes that trade in opaque OTC markets.
- Equity markets are populated with a diverse pool of participants unlike many other markets where a single class of participants (for example, banks) dominates.
- Many participants in equity market are unregulated – individual investors and many lightly regulated investment pools.
- Equity markets are highly volatile and therefore suffer less from price stickiness. The equity market responds to a shock with a price adjustment. Most other markets have sticky prices and respond with a quantity adjustment (or quantity rationing).
At least some of these features can be replicated in other markets, and such replication should perhaps be a design goal.
Posted at 9:45 pm IST on Fri, 8 Nov 2013 permanent link
Categories: equity markets, exchanges
Gorton defends opacity of the plutocrats
Gary Gorton has published a paper on “The development of opacity in U.S. banking” (NBER working paper 19540, October 2013). He writes that before the US Civil War:
... bank note markets functioned as “efficient” markets; the discounts were informative about bank risk. Banks at the same location competed, and the note market enforced common fundamental risk at these banks.
Then bank notes were replaced by checking accounts, and the banks were taken over by rich men who kept the price per share high enough to put it out of reach of most investors, thereby effectively closing down the market for their stocks. Simultaneously, the clearing houses brought about a culture of secrecy so that depositors also knew little about the health of individual banks.
Gorton thinks that this shutdown of informative and efficient markets was a great thing for economic efficiency – a claim that I find difficult to believe.
On the other hand, the endogenous opacity that Gorton describes is completely analogous to the conclusion of another recent paper (“Shining a Light on the Mysteries of State: The Origins of Fiscal Transparency in Western Europe” by Timothy C. Irwin, IMF Working Paper, WP/13/219, October 2013) on the opacity of sovereign finances:
When power has been tightly held by a financially self-sufficient king, much information about government, including government finances, has remained secret. When power has been shared, either in democracies or sufficiently broad oligarchies, information on government finances has tended to become public.
Posted at 3:20 pm IST on Wed, 6 Nov 2013 permanent link
Categories: banks, market efficiency
Greenspan: successful policy will always create a bubble
In an interview with Gillian Tett in the Financial Times of October 25, 2013 (behind paywall), Alan Greenspan says:
Beware of success in policy. A stable, moderately growing, non-inflationary environment will create a bubble 100 per cent of the time.
The first objection to this argument is that a bubble is by definition unstable and so the term “stable” should be changed to “apparently stable”. That apart, Greenspan seems to be making inferences from just one event – the Great Moderation. From a sample size of one, inferences can be drawn in many directions, and many permutations and combinations are possible. Some possible variants are:
- Creating a credit bubble is the only way to bring about a (apparently) stable, moderately growing, non-inflationary environment. A bubble is very pleasant and nice as long as it lasts; it is the hangover which is unpleasant.
- Financial stability is inherently destabilizing (the Minsky critique).
- If you flood the world with liquidity, the only way to make that non inflationary is to ensure that the “too much money” that you are printing is channelled into chasing “too few assets” instead of chasing “too few goods”. In other words, asset price inflation is the only way to avoid goods price inflation when you run the printing presses at high speed.
- The only way to generate non-inflationary growth in an ageing fiscally challenged nation is to inflate a bubble.
Finally, not many would agree with Alan Greenspan’s self-serving claim that bubble blowing can be regarded as a successful policy.
Posted at 2:28 pm IST on Sun, 27 Oct 2013 permanent link
Categories: bubbles, crisis, monetary policy
SEC order explains Knight Capital systems failure
More than a year ago, Knight Capital suffered a loss of nearly half a billion dollars and needed to sell itself after defective software resulted in nearly $7 billion of wrong trades. A few days back, the US SEC issued an order against Knight Capital that described exactly what happened:
- Knight used a software called SMARS which broke up incoming “parent” orders into smaller “child” orders that were transmitted to various exchanges or trading venues for execution. (para 12)
- SMARS used to have a functionality called “Power Peg”. Knight stopped using this functionality in 2003, but the code was neither deleted nor deactivated. A decade later, the code was still sitting in the servers waiting to spring into action if a particular flag was set to “yes”. (para 13 and 14)
- “... [A]s child orders were executed, a cumulative quantity function counted the number of shares of the parent order that had been executed ... [and] instructed the code to stop routing child orders after the parent order had been filled completely. ... In 2005, Knight moved the tracking of cumulative shares function in the Power Peg code to an earlier point in the SMARS code sequence. Knight did not retest the Power Peg code after moving the cumulative quantity function to determine whether Power Peg would still function correctly if called.” (para 14)
- In July 2012, the New York Stock Exchange announced that it would launch its new Retail Liquidity Program (RLP) on August 1, 2012. The RLP would enable retail customers to get price improvement for their orders. Knight Capital therefore added new code to SMARS to allow its customers to participate in the RLP. (para 12)
- Knight decided that it would now delete the decade old Power Peg code and replace it with the new RLP code. The flag that was earlier used to activate the Power Peg code would be repurposed to now call the RLP code. (para 13)
- “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15)
- “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16)
- “While processing 212 small retail orders that Knight had received from its customers, SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Ultimately, Knight lost over $460 million from these unwanted positions. ” (para 1)
It appears to me that there were three failures:
- It could be argued that the first failure occurred in 2003 when Knight chose to let executable code lie dormant in the system after it was no longer needed. I would like such code to be commented out or disabled (through a conditional compilation flag) in the source code itself; a stylised sketch of this hazard appears after this list.
- I think the biggest failure was in 2005. While making changes to the cumulative order routine, Knight did not subject the Power Peg code to the full panoply of regression tests. Testing should be mandatory for any code that is left in the system even if it is in disuse.
- The third and perhaps least egregious failure was in 2012 when Knight did not have a second technician review the deployment of the RLP code. Furthermore, Knight did not have written procedures that required such a review.
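Here is the stylised sketch promised above (my own illustration in Python, not Knight's actual code; all names are invented). The point is that functionality left dormant behind a runtime flag is a booby trap once the flag is repurposed, whereas code that is removed or hard-disabled fails fast instead of silently trading:

    # A runtime flag that once selected the retired "Power Peg" path is later
    # repurposed to mean "use the new RLP code". Any server still carrying the
    # old code now executes the retired logic when the flag is set.
    ROUTE_FLAG = True

    def retired_power_peg(parent_qty):
        # retired years ago but never removed; keeps emitting child orders
        # because the cumulative-fill check was moved elsewhere
        while True:
            yield min(parent_qty, 100)

    def rlp_routing(parent_qty):
        sent = 0
        while sent < parent_qty:
            child = min(parent_qty - sent, 100)
            sent += child
            yield child

    def route(parent_qty, deployed_new_code):
        handler = rlp_routing if deployed_new_code else retired_power_peg
        return handler(parent_qty) if ROUTE_FLAG else iter(())

    # A correctly deployed server stops once the parent order is filled;
    # a stale server still running retired_power_peg would loop forever.
    print(sum(route(250, deployed_new_code=True)))   # 250

    # Safer convention: delete retired code, or replace its body with
    # raise RuntimeError("Power Peg retired"), so a stale deployment fails fast.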
I am thus in complete agreement with the SEC’s observation that:
Knight also violated the requirements of Rule 15c3-5(b) because Knight did not have technology governance controls and supervisory procedures sufficient to ensure the orderly deployment of new code or to prevent the activation of code no longer intended for use in Knight’s current operations but left on its servers that were accessing the market; and Knight did not have controls and supervisory procedures reasonably designed to guide employees’ responses to significant technological and compliance incidents; (para 9 D)
However, the SEC adopted Rule 15c3-5 only in November 2010. The two biggest failures occurred before this rule existed. Perhaps, the SEC found it awkward to levy a $12 million fine for the failure of a technician to copy a file correctly to one out of eight servers. The SEC tries to get around this problem by providing a long litany of other alleged risk management failures at Knight, many of which do not stand up under serious scrutiny.
For example, the SEC says: “Knight had a number of controls in place prior to the point that orders reached SMARS ... However, Knight did not have adequate controls in SMARS to prevent the entry of erroneous orders.” In well designed code, it is good practice to have a number of “asserts” that ensure that inputs are not logically inconsistent (for example, that price and quantity are not negative or that an order date is not in the future). But a piece of code that is called only from other code would not normally implement control checks.
For example, an authentication routine might verify a customer’s password (and another token in the case of two-factor authentication). Is every routine in the code required to check the password again before it does its work? This is surely absurd.
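A small sketch of the principle (illustrative only, not drawn from Knight's systems): validate where orders enter the system, and let purely internal routines trust their already-validated callers, just as an internal routine does not re-verify the customer's password.

    from datetime import date

    def accept_order(price, qty, order_date):
        """Boundary routine: the only place erroneous input can enter,
        so it carries the sanity checks."""
        assert price > 0, "price must be positive"
        assert qty > 0, "quantity must be positive"
        assert order_date <= date.today(), "order date cannot be in the future"
        return route_to_book(price, qty)

    def route_to_book(price, qty):
        """Internal routine: called only from trusted code,
        so it does not repeat the boundary checks."""
        return {"price": price, "qty": qty, "status": "routed"}

    print(accept_order(101.5, 200, date.today()))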
Posted at 9:45 pm IST on Sun, 20 Oct 2013 permanent link
Categories: failure, risk management, technology
When solvent sovereigns default (aka technical default)
As the US approaches the deadline for resolving its debt ceiling stalemate, there has been much talk about the consequences of a “technical default”. Across the Curve has an acerbic comment about the utter inappropriateness of this terminology:
I guess a technical default is one in which you personally do not own any maturing debt or hold a coupon due an interest payment. If you hold one of those instruments it is a real default!
It is more appropriate to talk about defaults by a solvent sovereign where the ability of the sovereign to repay remains high even after the promise of timely repayment has been broken. This kind of default used to be pretty common in the past (till about a century ago). In the old days, defaults of this kind arose due to liquidity problems or due to some kind of fiscal dysfunction. However strange the US situation may look to us on the basis of our experience in recent decades, it is not at all unusual in the broad sweep of history.
Philip II of Spain defaulted four times during his reign. Spain was a superpower when it defaulted and it remained a superpower after its initial couple of defaults. In a fascinating paper, Drelichman and Voth explain:
The king’s repeated bankruptcies were not signs of insolvency ... future primary surpluses were sufficient to repay Philip II’s debts ... In addition, lending was profitable ... (Drelichman and Voth (2011), “Lending to the Borrower from Hell: Debt and Default in the Age of Philip II”, The Economic Journal, 121, 1205-1227)
As long as Spain owned the largest silver mines in the world, its ability to repay debts was not seriously in question (even when the debts reached 60% of GDP under Philip II). One can see a close parallel with the very high ability of the US to repay its debts if its politicians choose to do so.
In England, the default of Charles II (the notorious stop of the exchequer) was a result of fiscal dysfunction rather than any inability of England to repay its modest debts. Charles was not on the best of terms with his parliament and therefore could not levy new taxes to finance his expenses. The same parliament was of course willing to levy far greater taxes and support far greater debts to finance the wars of a monarch more to its liking (William of Orange) after the bloodless revolution. This episode also seems to have much in common with modern day US politics.
Another interesting phenomenon which appears counter intuitive to many people is that sovereign default often happens under very strong and competent rulers. If we look at England, Edward III, Henry VIII and Charles II were among its greatest kings. (In the case of Henry, I am counting the great debasement as a default. In the case of Charles, the chartering of the Royal Society cements his place as one of that country’s greatest monarchs in my view.). Turning to the US, one of its outstanding presidents (Franklin Roosevelt) presided over that country’s only default so far (the repudiation of the gold clause). Perhaps, only a strong ruler is confident enough to risk all the consequences of default. Lesser rulers prefer to muddle along rather than force the issue.
On another note, it may be that we are entering a new age where in at least some rich countries, sovereign default will no longer be as much of a taboo as it is today. Default may indeed be the least unpleasant of all choices that await a rich, over indebted and ageing society, but only truly heroic leaders may be willing to take the plunge.
Posted at 3:47 pm IST on Wed, 16 Oct 2013 permanent link
Categories: sovereign risk
Fama French and Momentum Factors: Data Library for Indian Market
My colleagues, Prof. Sobhesh K. Agarwalla, Prof. Joshy Jacob and I have created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market using data from CMIE Prowess. We plan to keep updating the data on a regular basis. Because of data limitations, currently the data library starts in January 1993, but we are trying to extend it backward.
We differ from the previous studies in several significant ways. First, we cover a greater number of firms relative to the existing studies. Second, we exclude illiquid firms to ensure that the portfolios are investable. Third, we have classified firms into small and big using a more appropriate cut-off that takes into account the distribution of firm size. Fourth, as there are several instances of public companies vanishing in India, we have computed the returns with a correction for survivorship bias.
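For readers unfamiliar with the mechanics, the sketch below shows the generic 2x3 construction of the size (SMB) and value (HML) factors from a single monthly cross-section; the breakpoints shown are the textbook Fama-French choices applied to simulated data, not necessarily the cut-offs used in our paper:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    stocks = pd.DataFrame({
        "mcap": rng.lognormal(10, 1, 500),      # market capitalisation
        "btm": rng.lognormal(0, 0.5, 500),      # book-to-market ratio
        "ret": rng.normal(0.01, 0.08, 500),     # next-month return
    })

    # 2x3 sort: small/big on the size median, low/medium/high on the 30th and
    # 70th percentiles of book-to-market
    stocks["size"] = np.where(stocks.mcap <= stocks.mcap.median(), "S", "B")
    lo, hi = stocks.btm.quantile([0.3, 0.7])
    stocks["val"] = np.where(stocks.btm <= lo, "L", np.where(stocks.btm >= hi, "H", "M"))

    # value-weighted return of each of the six size/value portfolios
    port = {key: np.average(g.ret, weights=g.mcap)
            for key, g in stocks.groupby(["size", "val"])}

    smb = np.mean([port[("S", v)] for v in "LMH"]) - np.mean([port[("B", v)] for v in "LMH"])
    hml = (port[("S", "H")] + port[("B", "H")]) / 2 - (port[("S", "L")] + port[("B", "L")]) / 2
    print(round(smb, 4), round(hml, 4))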
The methodology is described in more detail in our Working Paper (also available at SSRN): Sobhesh K. Agarwalla, Joshy Jacob & Jayanth R. Varma (2013) “Four factor model in Indian equities market”, W.P. No. 2013-09-05, Indian Institute of Management, Ahmedabad.
Posted at 6:05 pm IST on Sun, 6 Oct 2013 permanent link
Categories: factor investing
33 ways to control algorithmic trading
Earlier this month, the US Commodity Futures Trading Commission (CFTC) published a Concept Release on Risk Controls and System Safeguards for Automated Trading Environments. It seeks comments on a laundry list of 33 measures that could be adopted to control algorithmic and high frequency trading (I arrived at this count from the list on pages 116-132, counting sub-items in the first column as well).
The proposals on this list range from the sensible to the problematic, and there does not seem to be much of an effort to analyse the economic consequences of these measures. The idea of the concept release appears to be to outsource this analysis to those who choose to submit comments. There is nothing wrong with that. But with the current CFTC Chairman, Gary Gensler, set to step down soon, nothing much might come out of the concept release.
Posted at 12:32 pm IST on Sun, 29 Sep 2013 permanent link
Categories: exchanges
Systemic effects of the Merton model
David Merkel has posted on his Aleph Blog a note that he wrote in 2004 about how widespread use of the Merton model to evaluate credit risk influences the corporate bond market itself. The Merton model regards risky debt as a combination of risk free debt and a short put option on the assets of the issuer. Credit risk assessment is then a question of valuing this put option – a process that relies largely on stock prices and implied volatilities. Merkel writes:
Over the last seven years, more and more managers of corporate credit risk use contingent claims models. Some use them exclusively, others use them in tandem with traditional models. They have a big enough influence on the corporate bond market that they often drive the level of spreads. Because of this, the decline in implied volatility for the indices and individual companies has been a major factor in the spread compression that has gone on. I would say that the decline in implied volatility, and deleveraging, has had a larger impact than improving profitability on spreads.
The Merton model is probably under-utilized in India and so I have not encountered this problem. But Merkel is saying that in some countries, it is overused, and over-reliance on it can be a problem. The global financial crisis highlighted the dangers of outsourcing credit evaluation to the rating agencies. The Merton model in some ways amounts to outsourcing credit evaluation to the equity markets, and this too could end badly. I have wondered for some time now why advanced country central banks act as if they have adopted equity price targeting. If the Merton model is so influential, then the primary channel of monetary transmission to the credit markets would lie via equity markets, and targeting equity prices suddenly makes a lot of sense to the central banks themselves.
But those who buy poor credit risks on the basis of Merton model credit assessments that have been flattered by QE inflated stock prices (and QE dampened volatilities) might be in for a rude surprise if and when the central banks decide to let equity markets find their natural level and volatility.
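For concreteness, here is a bare-bones sketch of the Merton machinery (illustrative parameters of my own choosing; real implementations back the unobservable asset value and asset volatility out of the equity price and equity volatility, which is precisely where the dependence on stock prices and implied volatilities enters):

    from math import log, sqrt, exp
    from statistics import NormalDist

    N = NormalDist().cdf

    def merton_spread(V, F, sigma, r, T):
        """V: asset value, F: face value of debt, sigma: asset volatility,
        r: risk-free rate, T: debt maturity in years."""
        d1 = (log(V / F) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        put = F * exp(-r * T) * N(-d2) - V * N(-d1)    # value of the default put
        risky_debt = F * exp(-r * T) - put             # risk-free debt minus the put
        spread = -log(risky_debt / F) / T - r
        return spread, N(-d2)                          # credit spread, risk-neutral default probability

    spread, pd_ = merton_spread(V=120, F=100, sigma=0.25, r=0.05, T=1.0)
    print(f"spread = {spread:.4%}, risk-neutral default probability = {pd_:.2%}")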
Posted at 1:41 pm IST on Wed, 25 Sep 2013 permanent link
Categories: bond markets, derivatives
Snowden disclosures and the cryptographic foundations of modern finance
I have always believed that the greatest tail risk in finance is a threat to its cryptographic foundations. Everything in modern finance is an electronic book entry that could suddenly evaporate if the cryptography protecting it could be subverted. Such a cryptographic catastrophe would make the Lehman bankruptcy five years ago look like a picnic.
Global finance should therefore be alarmed by the Snowden disclosures earlier this month that the large technology companies have been collaborating with the US government to actively subvert internet encryption. It is claimed that backdoors have been built into many commercial encryption products and that even the standards relating to encryption have been compromised.
I do not think this is about the US at all. It is very likely that large technology companies are extending similar cooperation to other governments that control large markets. A decade ago, Microsoft publicly announced that it had provided the Chinese government access to the Windows source code. Blackberry’s long resistance to the Indian government’s desire for access to its encryption suggests that the Indian market is not large enough to induce quick cooperation, but I would be surprised if the US and China were the only countries that are able to bend the large technology companies to their ends. Countries like Russia and Israel with proven cyber warfare capabilities would also have achieved some measure of success.
In this situation, financial firms around the world should consider themselves as potential targets of cyber warfare. Alternatively, they could just become collateral damage in the struggle between two or more cyber superpowers. In my view, this is an existential threat to the modern financial system.
The saving grace is that there is nothing to suggest that the mathematics of encryption has become less reliable. The problems are all in the implementation – commercial routers, commercial operating systems, commercial browsers and commercial encryption software may have been compromised but not the mathematics of encryption, at least not yet.
Perhaps, finance can still escape a cryptographic meltdown if it embraces open source software for all cryptography critical applications. As computer security expert Bruce Schneier explains: “Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it.”
Posted at 4:45 pm IST on Sun, 15 Sep 2013 permanent link
Categories: mathematics, technology
Krugman on Asian Crisis as success story
Paul Krugman says so partly tongue in cheek, but still it is remarkable to read this from the world’s foremost authority on the Asian Crisis:
I will say, 15 years ago it would never have occurred to me that we would be looking back at Asia’s crisis as a success story.
My last blog post on the good that came out of the Asian Crisis looks a little less outrageous now. Also while I gave Malaysia of 1997-98 as an example of a bad response to a crisis, Krugman points to peripheral Europe. Now that is a truly atrocious response to a crisis – one in which the creditors are still in charge and are still thinking like creditors.
Posted at 11:03 am IST on Sun, 1 Sep 2013 permanent link
Categories: crisis, financial history
Why India's crisis could be a good thing
I recall telling some Indian policy makers in the late 1990s that it was unfortunate that India had not fallen victim to the Asian Crisis. I need hardly add that the rest of the conversation was not very pleasant. However, one of the great privileges of living in a democracy is that one can get away with saying such things – policy makers do not have firing squads at their disposal (at least not yet).
Now we seem to be getting a crisis of the kind which I have been expecting for several months now (see my blog posts here, here and here). This is a good time to reflect on the aftermath of the Asian Crisis to understand how (under the right conditions) a lot of good can come out of our crisis.
Like in East Asia of 1997, the Indian corporate sector has come to be dominated by a rent seeking kleptocracy that resembles the Russian oligarchs. Unlike the businesses that came to prominence in the first decade after the 1991 reforms, many of the business groups that have emerged in the last decade have been tainted by all kinds of unsavoury conduct. For the country to reestablish itself on the path of high growth and economic transformation, many of these unproductive businesses have to be swept away. After the 1997 crisis, the bankruptcy of the Daewoo Group was important in reforming the Chaebol and getting Korea back on track again. We need to see something similar happen in India. A useful analogy is that of a forest fire that clears all the deadwood and allows fresh shoots to grow and rejuvenate the forest.
One of the wonderful things about a financial crisis is that the capital allocation function shifts decisively from those who think like short term lenders to those who think like owners. In a debt restructuring for example, erstwhile lenders are forced to think like equity holders, and they end up allocating capital much better than they did when they were just chasing yields while floating on the high tide of liquidity. They have to stop worrying about sunk costs and focus more on future prospects.
A very good example is what the Asian Crisis did to Samsung. At the time of the crisis, Samsung was an also-ran Chaebol whose head was obsessed with building a car business like Daewoo or Hyundai. In the consumer electronics business, it was well behind Sony. The crisis forced Samsung to abandon its car making dreams under enormous pressure from the financial markets. As it focused on what it knew better, Samsung has created a world beating business, while Sony, ensconced in its cosy world in a country which largely escaped the Asian Crisis, has simply gone downhill.
Even at the level of countries, one can see how a country like Malaysia that changed least in response to the crisis has been in relative decline as compared to its peers. I cannot help speculating that in the emerging crisis, China’s large reserves will allow that country the luxury of behaving like the Malaysia of 1997. If by chance, India responds like the Korea of 1997, Asia’s economic landscape in the next decade will be very interesting.
Another interesting parallel is that in 1997/1998, several of the crisis affected countries faced elections at the height of the crisis or had a change of government by other means (Indonesia). Far from leading to political confusion, these elections helped to legitimize decisive action at the political level. Nothing concentrates a politician’s mind more than a bankrupt treasury. We saw that in 1991 (another case of an election at the time of crisis). We could see that once again in 2014.
Of course, nothing is preordained. We can blow our chances. But to those who think that 1991 was the best thing that ever happened to this country, there is at last reason to hope that we will get another 1991. In these bleak times, all that one can do is to be optimistic in a pragmatic way.
Posted at 9:49 pm IST on Thu, 29 Aug 2013 permanent link
Categories: crisis, financial history
Casualties of credit
I just finished reading Carl Wennerlind’s book Casualties of Credit about the English financial revolution in the late seventeenth century. Much has been written about this period including of course the seminal paper by Douglass North and Barry Weingast on “Constitutions and commitment” (Journal of Economic History, 1989). Yet, I found a lot of material in the book new and highly illuminating.
Especially interesting was the description of the crisis of 1710 – which I think was the first instance in history of the bond market trying to arm twist the government to change its policies. I was also fascinated by the discussion about how Isaac Newton used his vast talents to hunt down coin clippers and counterfeiters, and then ruthlessly sent them to the gallows. I knew that apart from inventing calculus and much of physics, Newton had time to dabble in alchemy, but I had thought that his position as Master of the Mint was a sinecure. Well Newton chose not to treat it as a sinecure.
Posted at 3:41 pm IST on Wed, 21 Aug 2013 permanent link
Categories: bond markets, financial history, interesting books
Quadrillion mantle passes from Italy to Japan after a decade
In the 1990s, we used to joke that the word quadrillion was invented to measure the Italian public debt. The introduction of the euro put an end to this joke. Italy’s public debt is currently “only” around two trillion euros, but it would be around four quadrillion lire at 1999 exchange rates. After a gap of more than a decade, the mantle has passed to Japan whose public debt crossed the quadrillion yen mark recently.
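The lira arithmetic uses the irrevocable conversion rate of 1,936.27 lire per euro fixed in 1999:

    debt_eur = 2.0e12               # roughly two trillion euros
    print(debt_eur * 1936.27)       # about 3.9e15, i.e. roughly four quadrillion lire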
The only other important monetary amount that I am aware of that could be in the quadrillion range is the total outstanding notional value of all financial derivatives in the world. The BIS estimate (which is perhaps conservative) for this is only around $600 trillion, but some other estimates (which are perhaps exaggerated) put it in the range of $1,200 – $1,500 trillion.
Posted at 1:38 pm IST on Sun, 18 Aug 2013 permanent link
Categories: miscellaneous, sovereign risk
Do regulators understand countervailing power in markets?
Practitioners understand the importance of countervailing power in keeping markets clean. The biggest obstacle that a would-be manipulator faces is a big player on the opposite side with the incentives and ability to block the attempted manipulation. Without that countervailing power, the regulator would be stretched very thin trying to combat the myriad games that are being played out in the market at any point of time. But regulators seem to be completely oblivious to this, and often step in to curb the countervailing power without realizing that they are allowing people on the other side a free run.
This was highlighted yet again by a recent order of the UK Financial Conduct Authority (FCA), the successor to the Financial Services Authority (FSA). The FCA fined Michael Coscia for a trading strategy that made money at the cost of high frequency traders (HFTs).
HFTs often try to trade in front of other people. When the HFT suspects that a large trader is trying to buy (sell), the HFT tries to buy (sell) immediately before the price has gone up (down), and then tries to turn around to sell to (buy from) the large trader at an inflated (depressed) price. Michael Coscia created a trading strategy designed to give the HFTs a taste of their own medicine in the crude oil market. He placed a set of large orders designed to fool the HFTs into thinking that he was trying to sell a big block. When the HFTs began front running his purported large sell order, Coscia turned around and bought some crude from them at below market prices. He then performed the whole operation in reverse, fooling the HFTs into thinking that there was a large buy order in the market. When they tried to front run that buy order, Coscia sold the crude (that he had bought in the previous cycle) back to the HFTs.
The FCA thinks that Coscia violated the exchange rules which provided that “it shall be an offence for a trader or Member to engage in disorderly trading whether by high or low ticking, aggressive bidding or offering or otherwise.” From a legal point of view, the FCA is probably quite correct. But the net effect of the action is to neutralize the kind of trading strategies that would have held the HFTs in check. The FCA of course thinks that it is acting against HFTs because Coscia’s trading strategy also involved high frequency trading.
Posted at 2:29 pm IST on Sun, 4 Aug 2013 permanent link
Categories: commodities, exchanges, manipulation