FCA Clifford Chance Report Part II: The Menace of Selective Briefing
Yesterday, I blogged about the Clifford Chance report on the UK FCA (Financial Conduct Authority) from the viewpoint of regulatory capture. Today, I turn to the issue of the selective pre-briefing provided by the FCA to journalists and industry bodies. Of course, the FCA is not alone in doing this: government agencies around the world indulge in this anachronistic practice.
In the pre-internet era, government agencies had to rely on the mass media to disseminate their policies and decisions. It was therefore necessary for them to cultivate the mass media to ensure that their messages got the desired degree of coverage. One way of doing this was to provide privileged access to select journalists in return for enhanced coverage.
This practice is now completely anachronistic. The internet has transformed the entire paradigm of mass communication. In the old days, we had a push channel in which the big media outlets pushed their content out to consumers. The internet is a pull channel in which consumers pull whatever content they want. For example, I subscribe to the RSS/Atom feeds of several regulators around the world. I also subscribe to the feeds of several blogs which comment on regulatory developments worldwide. My feed reader pulls all this content to my computer and mobile devices and provides me instant access to these messages without the intermediation of any big media gatekeepers.
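To make the pull model concrete, here is a minimal sketch of how a reader script can fetch a regulator's feed directly, with no gatekeeper in between; the feed URL is a placeholder, not any real regulator's address, and the Python feedparser library is used purely for illustration:

```python
# Minimal sketch of the "pull" model: fetch a regulator's RSS/Atom feed directly.
# The URL below is a placeholder; substitute any regulator's published feed.
import feedparser  # pip install feedparser

feed = feedparser.parse("https://www.example-regulator.gov/press-releases.rss")
for entry in feed.entries[:5]:
    # The summary/description field is what lets a casual reader decide
    # whether the full announcement is worth opening.
    print(entry.title)
    print(entry.get("summary", "(no summary provided)"))
    print(entry.link)
    print()
```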
In this context, the entire practice of pre-briefing is anachronistic. Worse, it is inimical to the modern democratic ideals of equal and fair access for all. The question, then, is why it survives at all. I am convinced that what might have had some legitimate function decades ago has now been corrupted into something more nefarious. Regulators now use privileged access to suborn the mass media and to get favourable coverage of their decisions. Journalists have to think twice before they write something critical about a regulator who may simply cut off their privileged access.
It is high time we put an end to this diabolical practice. What I would like to see is the following:
1. A regulator could meet a journalist one-on-one, but the entire transcript of the interview must then be published on the regulator’s website and the interview must be embargoed until such publication.
2. A regulator could hold press conferences or grant live interviews to the visual media, but such events must be web cast live on the regulator’s website and transcripts must be published soon after.
3. The regulators should not differentiate between (a) journalists from the mainstream media and (b) representatives of alternative media (including bloggers).
4. Regulator websites and feeds must be more friendly to the general public. For example, the item description field in an RSS feed or the item content field in an Atom feed should contain enough information for a casual reader to decide whether it is worth reading in full. Regulatory announcements must provide enough background to enable the general public to understand them.
Any breach of (1) or (2) above should be regarded as a selective disclosure that attracts the same penalties as selective disclosure by an officer of a listed company.
What I also find very disturbing is the practice of the regulator holding briefing sessions with a select group of regulated entities or their associations or lobby groups. In my view, while the regulator does need to hold confidential discussions with regulated entities on a one-on-one basis, any meeting attended by more than one entity cannot by definition be about confidential supervisory concerns. The requirement of publication of transcripts or live web casts should apply in these cases as well. In the FCA case, it seems to be taken for granted by all (including the Clifford Chance report) that the FCA needs to have confidential discussions with the Association of British Insurers (ABI). I think this view is mistaken, particularly when it is not considered necessary to hold a similar discussion with the affected policy holders.
Posted at 5:06 pm IST on Sun, 21 Dec 2014 permanent link
Categories: law, regulation, technology
Regulatory capture is a bigger issue than botched communications
I just finished reading the 226-page report that the non-executive directors of the UK FCA (Financial Conduct Authority) commissioned from the law firm Clifford Chance on the FCA’s botched communications regarding its proposed review of how insurance companies treat customers trapped in legacy pension plans. The report, published earlier this month, deals with the selective disclosure of market-moving, price-sensitive information by the FCA itself to one journalist, and with the failure of the FCA to issue corrective statements in a timely manner after large price movements in the affected insurance companies on March 28, 2014.
I will have a separate blog post on this whole issue of selective disclosure to journalists and to industry lobby groups. But in this post, I want to write about what I think is the bigger issue in the whole episode: what appears to me to be a regulatory capture of the Board of the FCA and of HM Treasury. It appears to me that the commissioning of the Clifford Chance review serves to divert attention from this vital issue and allows the regulatory capture to pass unnoticed.
The rest of this blog post is based on reading between the lines in the Clifford Chance report and is thus largely speculative. The evidence of regulatory capture is quite stark, but most of the rest of the picture that I present could be totally wrong.
The sense that I get is that there were two schools of thought within the FCA. One group of people thought that the FCA needed to do something about the 30 million policy holders who were trapped in exploitative pension plans that they could not exit because of huge exit fees. Since the plans were contracted prior to 2000 (in some cases they dated back to the 1970s), they did not enjoy the consumer protections of the current regulatory regime. This group within the FCA wanted to use the regulator’s powers to prevent these policy holders from being treated unfairly. The simplest solution of course was to abolish the exit fees, and let these 30 million policy holders choose new policies.
The other group within the FCA wanted to conduct a cosmetic review so that the FCA would be seen to be doing something, but did not want to do anything that would really hurt the insurance companies who made tons of money off these bad policies. Much of the confusion and lack of coordination between different officials of the FCA brought out in the Clifford Chance report appears to me to be only a manifestation of the tension between these two views within the FCA. It was critical for the second group’s strategy to work that the cosmetic review receive wide publicity that would fool the public into thinking that something was being done. Hence the idea of doing a selective pre-briefing to a journalist known to be sympathetic to the plight of the poor policy holders. The telephonic briefing with this journalist was not recorded, and was probably ambiguous enough to maintain plausible deniability.
The journalist drew the reasonable inference that the first group in the FCA had won and that the FCA was serious about giving a fair deal to the legacy policy holders, and reported accordingly. What was intended to fool only the general public ended up fooling the investors as well, and the stock prices of the affected insurance companies crashed after the news report came out. The big insurance companies were now scared that the review might be a serious affair after all and pulled out all the stops to protect their profits. They reached out to the highest levels of the FCA and HM Treasury and ensured that their voice was heard. Regulatory capture is evident in the way in which the FCA abandoned even the pretence of serious action, and became content with cosmetic measures. Before the end of the day, a corrective statement came out of the FCA which made all the right noises about fairness, but made it clear that exit fees would not be touched.
The journalist in question (Dan Hyde of the Telegraph) nailed this contradiction in an email quoted in the Clifford Chance report (para 16.8):
But might I suggest that by any standard an exit fee that prevents a customer from getting a fairer deal later in life is in itself an unfair term on a policy.
On March 28, 2014, the top brass of the FCA and HM Treasury could see the billions of pounds wiped out on the stock exchange from the market value of the insurance companies, and they could of course hear the complaints from the chairmen of those powerful insurance companies. There was no stock exchange showing the corresponding improvement in the net worth of millions of policy holders savouring the prospect of escape from unfair policies, and their voice was not being heard at all. Out of sight, out of mind.
Posted at 6:25 pm IST on Sat, 20 Dec 2014 permanent link
Categories: insurance, regulation
Unwarranted complacency about regulated financial entities
Two days back, the Securities and Exchange Board of India (SEBI) issued a public Caution to Investors about entities that make false promises and assure high returns. This is quite sensible and also well intentioned. But the first paragraph of the press release is completely wrong in asking investors to focus on whether the investment is being offered by a regulated or by an unregulated entity:
It has come to the notice of Securities and Exchange Board of India (SEBI) that certain companies / entities unauthorisedly, without obtaining registration and illegally are collecting / mobilising money from the general investors by making false promises, assuring high return, etc. Investors are advised to be careful if the returns offered by the person/ entity is very much higher than the return offered by the regulated entities like banks, deposits accepted by Companies, registered NBFCs, mutual funds etc.
This is all wrong because the most important red flag is the very high return itself, and not the absence of registration and regulation. That is the key lesson from the Efficient Markets Hypothesis:
If something appears too good to be true, it is not true.
For the purposes of this proposition, it does not matter whether the entity is regulated. To take just one example, Bernard L. Madoff Investment Securities LLC was regulated by the US SEC as a broker dealer and as an investment advisor. Fairfield Greenwich Advisors LLC (through whose Sentry Fund, many investors invested in Madoff’s Ponzi scheme) was also an SEC regulated investment advisor.
Regulated entities are always very keen to advertise their regulated status as a sign of safety and soundness. (Most financial entities usually prefer light touch regulation to no regulation at all.) But regulators are usually at pains to avoid giving the impression that regulation amounts to a seal of approval. For example, every public issue prospectus in India contains the disclaimer:
The Equity Shares offered in the Issue have not been recommended or approved by the Securities and Exchange Board of India
In this week’s press release, however, SEBI seems to have inadvertently lowered its guard, and has come dangerously close to implying that regulation is a seal of approval and respectability. Many investors would misinterpret the press release as saying that it is quite safe to put money in a bank deposit or in a mutual fund. No, that is not true at all: the bank could fail, and market risks could produce large losses in a mutual fund.
Posted at 3:44 pm IST on Sat, 13 Dec 2014 permanent link
Categories: regulation
Why no two factor authentication for advance tax payments in India?
I made an advance tax payment online today and it struck me that the bank never asks for two factor authentication for advance tax payments. It seems scandalous to me that payments of several hundreds of thousands of rupees are allowed without two factor authentication at a time when the online taxi companies are not allowed to bypass two factor authentication for payments of a few hundred rupees.
I can think of a couple of arguments why advance tax is different, but none are convincing:
- The advance tax will be refunded if it is excessive. This argument fails because the refund could take a year if one is talking about the first instalment of advance tax. Moreover, the taxi companies will also promise to make a refund (and much faster than a year).
- The hacker would gain nothing financially out of making an advance tax payment. This argument forgets the fact that a lot of hacking is of the “denial of service” kind. A businessman could hire a hacker to drain money out of his rival’s bank account and prevent the rival from bidding in an auction. That would give a clear financial benefit from hacking.
The point is that the rule of law demands that the same requirements apply to one and all. The “King can do no wrong” argument is inconsistent with the rule of law in a modern democracy. I believe that all payments above some threshold should require two factor authentication.
Posted at 4:41 pm IST on Mon, 8 Dec 2014 permanent link
Categories: taxation, technology
On fickle foreign direct investment and patient foreign portfolio capital
No, that is not a typo; I am asserting the opposite of the conventional wisdom that foreign portfolio investment is fickle while foreign direct investment is more reliable. The conventional wisdom was on display today in news reports about the parliament’s apparent willingness to allow foreign direct investment in the insurance sector, but not foreign portfolio investment.
The conventional wisdom is propagated by macroeconomists who look at the volatility of aggregate capital flows – it is abundantly clear that portfolio flows stop and reverse during crisis periods (“sudden stops”) while FDI flows are more stable. Things look very different at the enterprise level, but economists working in microeconomics and corporate finance who can see a different world often do not bother to discuss policy issues.
Let me therefore give an example from the Indian banking industry to illustrate what I mean. In the late 1990s, after the Asian Crisis, one of the largest banks in the world decided that Asia was a dangerous place to do banking, sold a significant part of its banking operations in India and went home. That is what I mean by fickle FDI. At the same time, foreign portfolio investors were providing tons of patient capital to Indian private banks like HDFC, ICICI and Axis to grow their business in India. In the mid-1990s, many people thought that liberalization would allow foreign banks to thrive; in reality, they lost market share (partly due to the fickleness and short-termism of their parents), and it is the Indian banks funded by patient foreign portfolio capital that gained a large market share.
In 2007, as the Great Moderation was about to end, but markets were still booming, ICICI Bank tapped the markets to raise $5 billion of equity capital (mainly from foreign portfolio investors) in accordance with the old adage of raising equity when it is available and not when it is needed. The bank therefore entered the global financial crisis with a large buffer of capital originally intended to finance its growth a couple of years ahead. During the crisis, even this buffer was perceived to be inadequate and the bank needed to downsize the balance sheet to ensure its survival. But without that capital buffer raised in good times, its position would have been a lot worse; it might even have needed a government bailout.
Now imagine that instead of being funded by portfolio capital, ICICI had been owned by say Citi. Foreign parents do not like to fund their subsidiaries ahead of need; they prefer to drip feed the subsidiary with capital as and when needed. In fact, if the need is temporary, the parent usually provides a loan instead of equity so that it can be called back when it is no longer needed. So the Indian subsidiary would have entered the crisis without that large capital buffer. During the crisis, the ability of the embattled parent to provide a large capital injection into its Indian operations would have been highly questionable. Very likely, the Indian subsidiary would have ended up as a ward of the state.
Macro patterns hide these interesting micro realities. The conventional wisdom ignores the fact that enterprise level risk management works to counter the vagaries of the external funding environment. It ignores the standard insight from the markets versus hierarchies literature that funding which relies on a large number of alternative providers of capital is far more resilient than funding which relies on just one provider of capital. In short, it is time to overturn the conventional wisdom.
Posted at 4:48 pm IST on Thu, 4 Dec 2014 permanent link
Categories: international finance, regulation
Should governments hedge oil price risk?
I had an extended email conversation last month with a respected economist (who wishes to remain anonymous) about whether governments of oil importing countries should hedge oil price risk. While there is a decent literature on oil price hedging by oil exporters (for example, this IMF Working Paper of 2001), there does not seem to be much on oil importers. So we ended up more or less debating this from first principles. The conversation helped clarify my thinking, and this blog post summarizes my current views on this issue.
I think that hedging oil price risk does not make much sense for the government of an oil importer for several reasons:
- Oil imports are usually not a very large fraction of GDP; by contrast oil exports are often a major chunk of GDP for a large exporter. For most countries, oil price risk is just one among many different macroeconomic shocks that can hit the country. Just as for a company, equity capital is the best hedge against general business risks, for a country, external reserves and fiscal capacity are the best hedges against general macroeconomic shocks.
- For a country, the really important strategic risk relating to oil is a supply disruption (embargo for example) and this can be hedged only with physical stocks (like the US strategic oil reserve).
- A country is an amorphous entity. Probably, it is the government that will do the hedge, and private players that would consume the oil. Who pays for the hedge and who benefits from it? Does the government want the private players to get the correct price signal? Does it want to subsidize the private sector? If it is the private players who are consuming oil, why don’t we let them hedge the risk themselves?
- Futures markets may not provide sufficient depth, flexibility and liquidity to absorb a large importer’s hedging needs. The total open interest in ICE Brent futures is roughly equal to India’s annual crude import.
Frankly, I think it makes sense for the government to hedge oil price risk only if it is running an administered price regime. In this case, we can analyse its hedging like a corporate hedging program. The administered price regime makes the government short oil (it is contracted to sell oil to the private sector at the administered price), and then it makes sense to hedge the fiscal cost by buying oil futures to offset its short position.
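To restate the corporate-hedging analogy in symbols (my notation): if the government supplies a quantity $Q$ at the administered price $P_a$ while the market price turns out to be $P_T$, the unhedged fiscal cost is $Q(P_T - P_a)$. A long futures position entered at $F_0$ pays off $Q(P_T - F_0)$, so the hedged cost is

$$Q(P_T - P_a) - Q(P_T - F_0) = Q(F_0 - P_a),$$

which is locked in when the hedge is put on.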
But an administered price regime is not a good idea. Even if, for the moment, one accepts the dubious proposition that rapid industrialization requires strategic underpricing of key inputs (labour, capital or energy), we only get an argument for energy price subsidies, not for energy price stabilization. The political pressure for short term price stabilization comes from the presence of a large number of vocal consumers (think single-truck owners for example) who have large exposures to crude price risk but do not have access to hedging markets. If we accept that the elasticity of demand for crude is near zero in the short term (though it may be pretty high in the long term), then unhedged entities with large crude exposures will find it difficult to tide over the short term during which they cannot reduce demand. They can be expected to be very vocal about their difficulties. The solution is to make futures markets more accessible to small and mid-size companies, unincorporated businesses and even self-employed individuals who need such hedges. This is what India has done by opening up futures markets to all, including individuals. Most individuals might not need these markets (financial savings are the best hedge against most risks for individuals who are not in business). But it is easier to open up the markets to all than to impose complex documentation requirements that restrict access. Easy hedging eliminates the political need for administered energy prices.
With free energy pricing in place, the most sensible hedge for governments is a huge stack of foreign exchange reserves and a large pool of oil under the ground in a strategic reserve.
Posted at 9:29 pm IST on Thu, 27 Nov 2014 permanent link
Categories: commodities, derivatives, risk management
Ethnic diversity and price bubbles
The socializing finance blog points to a PNAS paper showing that ethnic diversity drastically reduces the incidence of price bubbles in experimental markets. This is a conclusion that I am inclined to believe on theoretical grounds and the paper itself presents the theoretical arguments very persuasively. However, the experimental evidence leaves me unimpressed.
The biggest problem is that in both the locales (Southeast Asia and North America) in which they carried out the experiments:
In the homogeneous markets, all participants were drawn from the dominant ethnicity in the locale; in the diverse markets, at least one of the participants was an ethnic minority.
This means that the experimental design conflates the presence of ethnic diversity with that of ethnic minorities. This is all the more important because for the experiments, they recruited skilled participants, trained in business or finance. There could therefore be a significant self selection bias here in that ethnic minority members who chose to train in business or finance might have been those with exceptional talent or aptitude.
This fear is further aggravated by the result in Figure 2 showing that the Southeast Asian markets performed far better than the North American markets. In fact, the homogeneous Southeast Asian markets did better than the diverse North American markets! The diverse Southeast Asian market demonstrated near perfect pricing accuracy. This suggests that the ethnic fixed effects (particularly the gap between the dominant North American ethnic group and the minority Southeast Asian ethnic group) are very large. A proper experimental design would have had homogeneous markets made out of minority ethnic members as well so that the ethnic fixed effects could be estimated and removed.
Another reason that I am not persuaded by the experimental evidence is that the experimental design prevented participants from seeing each other or communicating directly while trading. As the authors state, “So, direct social influence was curtailed, but herding was possible.” With a major channel of diversity induced improvement blocked off by the design itself, one’s prior on the size of the diversity effect is lower than it would otherwise be.
Posted at 5:41 pm IST on Mon, 24 Nov 2014 permanent link
Categories: bubbles, market efficiency
CPMI fixation on two hour resumption time is reckless
The Basel-based Committee on Payments and Market Infrastructures (CPMI, previously known as CPSS) has issued a document about Cyber resilience in financial market infrastructures insisting that payment and settlement systems should be able to resume operations within two hours of a cyber attack and should be able to complete the settlement by end of day. The Committee is treating a cyber attack as a business continuity issue and is applying Principle 17 of its Principles for financial market infrastructures. Key Consideration 6 of Principle 17 requires that the business continuity plan “should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events” and that the plan “should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in the case of extreme circumstances”.
I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with.
I believe that if there were to be a successful cyber attack against a well run payment and settlement system, the attack would most likely be carried out by a nation-state. Such an attack would therefore be backed by resources and expertise far exceeding what any payment and settlement system would possess. Neutralizing such a threat would require assistance from the national security agencies of the FMI’s own country. It is silly to assume that such a cyber war between two nation-states would be resolved within two hours just because a committee in Basel mandates so.
The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. I think the CPMI is being reckless and irresponsible in encouraging such behaviour.
This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities. I think that Indian regulators should tell their payment and settlement systems that Principle 16 prevails over Principle 17 in the case of any conflict between the two principles. With this clarification, the CPMI guidance on cyber attacks would be effectively defanged.
Posted at 6:00 pm IST on Mon, 17 Nov 2014 permanent link
Categories: risk management, technology
Liquidity support for central counterparties
The UK seems to be going in the opposite direction to the US in terms of providing liquidity support to clearing corporations or central counterparties (CCPs). In the US, the Dodd-Frank Act amendments made it extremely difficult for the central bank to provide liquidity assistance to any non-bank. On the other hand, the Bank of England on Wednesday extended its discount window not only to all CCPs but also to systemically important broker-dealers (h/t OTC Space). The Bank of England interprets its liquidity provision function very widely:
As the supplier of the economy’s most liquid asset, central bank money, the Bank is able to be a ‘back-stop’ provider of liquidity, and can therefore provide liquidity insurance to the financial system.
My own view has always been that CCPs should have access to the discount window but only to borrow against the best quality paper (typically, government bonds). If there is a large shortfall in the pay-in, a CCP has to mobilize liquidity in probably less than an hour (before pay-out) and the only entity able to provide large amounts of liquidity at such short notice is the central bank. But if a CCP does not have enough top quality collateral on hand, it should be allowed to fail. A quarter century ago, Ben Bernanke argued that it makes sense for the central bank to stand behind even a failing CCP (Ben S. Bernanke, “Clearing and Settlement during the Crash”, The Review of Financial Studies, Vol. 3, No. 1, pp. 133-151). But I would not go that far. Most jurisdictions today are designing resolution mechanisms to deal with failed CCPs, so this should work even in a crisis situation.
Posted at 11:57 am IST on Sun, 9 Nov 2014 permanent link
Categories: exchanges, risk management
Economics of counterfeit notes
If you are trying to sell $200 million of nearly flawless counterfeit $20 currency notes, there is only one real buyer – the US government itself. That seems to be the moral of a story in GQ Magazine about Frank Bourassa.
The story is based largely on Bourassa’s version of events and is possibly distorted in many details. However, the story makes it pretty clear that the main challenge in counterfeiting is not in the manufacture, but in the distribution. Yes, there is a minimum scale in the production process – Bourassa claims that a high end printing press costing only $300,000 was able to achieve high quality fakes. The challenge that he faced was in buying the correct quality of paper. The story does not say why he did not think of vertical integration by buying a mini paper mill, but I guess that is because it is difficult to operate a paper mill secretly, unlike a printing press, which can be run in a garage without anybody knowing about it. Bourassa was able to proceed because some paper mill somewhere in the world was willing to sell him the paper that he needed.
The whole point of anti counterfeiting technology is to increase the fixed cost of producing a note without increasing the variable cost too much. So high quality counterfeiting is not viable unless it is done at scale. But the distribution of fake notes suffers from huge diseconomies of scale – while it is pretty easy to pass off a few fake notes (especially small denomination notes), Bourassa found that it was difficult to sell a large number of notes even at a 70% discount to face value. He ended up selling his stockpile to the US government itself. The price was his own freedom.
To prevent counterfeiting, the government needs to ensure that at every possible scale of operations, the combined cost of production and distribution exceeds the face value of the note. At low scale, the high fixed production cost makes counterfeiting uneconomical, while at large scale, the high distribution cost is the counterfeiter’s undoing. That is why the only truly successful counterfeiters have been other sovereigns who have two decisive advantages: first for them the fixed costs are actually sunk costs, and second, they have access to distribution networks that ordinary counterfeiters cannot dream of.
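A stylized numerical sketch of this condition is below; every parameter is invented (only the $300,000 press cost echoes the story), and the point is simply that the policy problem is to choose security features and enforcement so that the profit line stays negative at every scale:

```python
# Stylized illustration of the counterfeiter's cost structure: high fixed cost of
# production, low variable cost, and distribution costs that rise with scale.
# All parameters are invented for illustration.
FACE = 20.0            # face value per note
FIXED = 300_000.0      # press, plates, trial runs
VARIABLE = 0.50        # paper and ink per note
DISCOUNT_BASE = 0.30   # distribution cost starts at 30% of face value per note...
DISCOUNT_SLOPE = 5e-8  # ...and rises with the number of notes to be sold

def profit(n_notes: int) -> float:
    production = FIXED + VARIABLE * n_notes
    distribution = (DISCOUNT_BASE + DISCOUNT_SLOPE * n_notes) * FACE * n_notes
    return FACE * n_notes - production - distribution

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>12,d} notes: profit = {profit(n):>14,.0f}")
# Anti-counterfeiting technology raises FIXED; policing of distribution raises the
# discount schedule; the goal is a profit that is negative at every scale.
```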
Posted at 9:09 pm IST on Sat, 1 Nov 2014 permanent link
Categories: fraud, monetary policy, technology
Why is the IMF afraid of negative interest rates?
A few days back, the IMF made a change in its rule for setting interest rates on SDRs (Special Drawing Rights) and set a floor of 5 basis points (0.05%) on this rate. The usual zero lower bound on interest rates does not apply to the SDR as there are no SDR currency notes floating around. The SDR is only a unit of account and to some extent a book entry currency. There is no technical problem with setting the interest rate on the SDR to a substantially negative number like -20%.
In finance theory, there is no conceptual problem with a large negative interest rate. Though we often describe the interest rate (r) as a price, actually it is 1+r and not r itself that is a price. The price of one unit of money today, in terms of money a year later, is 1+r. Prices have to be non-negative, but this only requires that r cannot drop below -100%. With bearer currency in circulation, a zero lower bound (ZLB) comes about because savers have the choice of saving in the form of currency and earning a zero interest rate. Actually the return on cash is slightly negative (probably close to -0.5%) because of storage (and insurance) costs. As such, the ZLB is not at zero, but somewhere between -0.25% and -0.50%.
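Restating the argument in symbols: prices must be non-negative, so

$$1 + r \ge 0 \quad\Longleftrightarrow\quad r \ge -100\%,$$

and if holding bearer currency costs $s$ per year in storage and insurance, the net return on cash is $-s$, so arbitrage against cash only enforces $r \gtrsim -s$, an effective lower bound of roughly $-0.25\%$ to $-0.50\%$ rather than zero.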
It has long been understood that a book entry (or mere unit of account) currency like the SDR is not subject to the ZLB at all. Buiter, for example, proposed the use of a parallel electronic currency as a way around the ZLB.
In this context, it is unfortunate that the IMF has succumbed to the fetishism of positive interest rates. At the very least, it has surrendered its potential for thought leadership. At worst, the IMF has shown that it is run by creditor nations seeking to earn a positive return on their savings when the fundamentals do not justify such a return.
Posted at 9:08 pm IST on Wed, 29 Oct 2014 permanent link
Categories: bond markets, bubbles, monetary policy
Who should interpolate Libor: submitter or administrator?
ICE Benchmark Administration (IBA), the new administrator of Libor, has published a position paper on the future evolution of Libor. The core of the paper is a shift to “a more transaction-based approach for determining LIBOR submissions” and a “more prescriptive calculation methodology”. In this post, I discuss the following IBA proposals regarding interpolation and extrapolation:
Interpolation and extrapolation techniques are currently used where appropriate by benchmark submitters according to formulas they have adopted individually.
We propose that inter/extrapolation should be used:
- When a benchmark submitter has no available transactions on which to base its submission for a particular tenor but it does have transaction-derived anchor points for other tenors of that currency, and
- If the submitter’s aggregate volume of eligible transactions is less than a minimum level specified by IBA.
To ensure consistency, IBA will issue interpolation formula guidelines
Para 5.7.8
In my view, it does not make sense for the submitter to perform interpolations in situations that are sufficiently standardized for the administrator to provide interpolation formulas. It is econometrically much more efficient for the administrator to perform the interpolation. For example, the administrator can compute a weighted average with lower weights on interpolated submissions – ideally the weights would be a declining function of the width of the interpolation interval. Thus where many non-interpolated submissions are available, the data from other tenors would be virtually ignored (because of low weights). But where there are no non-interpolated submissions, the data from other tenors would drive the computed value. The administrator can also use non-linear (spline) interpolation across the full range of tenors. If submitters are allowed to interpolate, perverse outcomes are possible. For example, where the yield curve has a strong curvature but only a few submitters provide data for the correct tenor, their submissions will differ sharply from the incorrect (interpolated) submissions of the majority of the submitters. The standard procedure of ignoring extreme submissions would then discard all the correct data and average all the incorrect submissions!
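A minimal sketch of the kind of administrator-side weighting I have in mind (my own illustration with hypothetical numbers, not anything proposed by IBA):

```python
# Weighted average in which interpolated submissions get lower weights, the
# weight declining with the width of the tenor gap used for the interpolation.
# All numbers are hypothetical.

def benchmark(submissions):
    """submissions: list of (rate, gap_width) pairs.

    gap_width = 0 for a submission anchored in an actual transaction at the
    target tenor; otherwise the width (in months) of the interval across which
    the submission was interpolated."""
    weights = [1.0 / (1.0 + width) for _, width in submissions]
    total = sum(weights)
    return sum(rate * w for (rate, _), w in zip(submissions, weights)) / total

# Three direct submissions at the target tenor and two interpolated across a
# five-month gap: the direct quotes dominate, but if they were absent the
# interpolated ones would drive the estimate.
subs = [(2.10, 0), (2.12, 0), (2.08, 0), (2.40, 5), (2.35, 5)]
print(f"weighted benchmark: {benchmark(subs):.3f}%")
```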
Many people tend to forget that even the computation of an average is an econometric problem that can benefit from the full panoply of econometric techniques. For example, an econometrician might suggest interpolating across submission dates using a Kalman filter. Similarly, covered interest parity considerations would suggest that submissions for Libor in other currencies should be allowed to influence the estimation of Libor in each currency (simultaneous equation rather than single equation estimation). So long as the entire estimation process is defined in open source computer code, I do not see why Libor estimates should not be based on a complex econometric procedure – a Bayesian Vector Auto Regression (VAR) with GARCH errors, for example.
Posted at 12:08 pm IST on Thu, 23 Oct 2014 permanent link
Categories: benchmarks, statistics
Online finance and SIM-card security risks
For quite some time now, I have been concerned that the SIM card in the mobile phone is becoming the most vulnerable single point of failure in online security. The threat model that I worry about is that somebody steals your mobile, transfers the SIM card to another phone, and goes about quickly resetting the passwords to your email accounts and other sites where you have provided your mobile number as your recovery option. Using these email accounts, the thief then proceeds to reset passwords on various other accounts. This threat model cannot be blocked by having a strong PIN or pattern lock on the phone or by remotely wiping the device. That is because the thief is using your SIM and not your phone.
If the thief knows enough of your personal details (name, date of birth and other identifying information), then with a little bit of social engineering, he could do a lot of damage during the couple of hours that it would take to block the SIM card. Remember that during this period, he can send text messages and WhatsApp messages in your name to facilitate his social engineering. The security issues are made worse by the fact that telecom companies simply do not have the incentives and expertise to perform the authentication that financial entities would do. There have been reports of smart thieves getting duplicate SIM cards issued on the basis of fake police reports and forged identity documents (see my blog post of three years ago).
Modern mobile phones are more secure than the SIM cards that we put inside them. They can be secured not only with PIN and pattern locks but also with fingerprint scanners and face recognition software. Moreover, they support encryption and remote wiping. It is true that SIM cards can be locked with a PIN which has to be entered whenever the phone is switched off and on or the SIM is put into a different mobile. But I am not sure how useful this would be if telecom companies are not very careful while providing the PUK code which allows the PIN to be reset.
If we assume that the modern mobile phone can be made reasonably secure, then it should be possible to make SIM cards more secure without the inconvenience of entering a SIM card PIN. In the computer world, for example, it is pretty common (in fact recommended) to do remote (SSH) login using only authentication keys without any user-entered passwords. This works with a key pair – the public key sits on the target machine and the private key on the source machine. A similar system should be possible with SIM cards as well, with the private key sitting on the mobile and backed up on other devices. Moving the SIM to another phone would not work unless the thief can also transfer the private key. Moreover, you would be required to use the backed up private key to make a request for a SIM replacement. This would keep SIM security completely in your hands and not in the hands of a telecom company that has no incentive to protect your SIM.
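As an illustration of the SSH-style idea (a sketch only, not any actual SIM or telecom protocol, using the Python cryptography package purely for illustration): the operator stores only the public key, the phone holds the private key, and a SIM activation or replacement request is honoured only if the requester can sign a random challenge.

```python
# Sketch of key-based SIM authentication: enrol a public key with the operator,
# keep the private key on the device (and its backups), and answer challenges.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: the key pair is generated on the phone; only the public key
# is registered with the telecom operator.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Later: the operator sends a random challenge to whoever claims the number.
challenge = os.urandom(32)
signature = private_key.sign(challenge)  # only the legitimate key holder can produce this

try:
    public_key.verify(signature, challenge)
    print("challenge answered: activate SIM / issue replacement")
except InvalidSignature:
    print("reject the request")
```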
This system could be too complex for many users who use a phone only for voice and non-critical communications. It could therefore be an opt-in system for those who use online banking and other services a lot and require a higher degree of security. Financial services firms should also insist on the higher degree of security for high value transactions.
I am convinced that encryption is our best friend: it protects us against thieves who are adept at social engineering, against greedy corporations who are too careless about our security, and against overreaching governments. The only thing that you are counting on is that hopefully P ≠ NP.
Posted at 4:04 pm IST on Mon, 20 Oct 2014 permanent link
Categories: fraud, technology
What are banks for?
Much has been written since the Global Financial Crisis about how the modern banking system has become less and less about financing productive investments and more and more about shuffling pieces of paper in speculative trading. Last month, Jordà, Schularick and Taylor wrote an NBER Working Paper “The Great Mortgaging: Housing Finance, Crises, and Business Cycles” describing an even more fundamental change in banking during the 20th century. They construct a database of bank credit in advanced economies from 1870 to 2011 and document “an explosion of mortgage lending to households in the last quarter of the 20th century”. They conclude that:
To a large extent the core business model of banks in advanced economies today resembles that of real estate funds: banks are borrowing (short) from the public and capital markets to invest (long) into assets linked to real estate.
Of course, it can be argued that mortgage lending is an economically useful activity to the extent that it allows people early in their career to buy houses. But it is also possible that much of this lending only boosts house prices and does not improve the affordability of houses to any significant extent.
The more important question is why banks have become less important in lending to businesses. One possible answer is that, in this traditional function, they have been disintermediated by capital markets. On the mortgage side, however, banks are perhaps dominant only because, with their Too-Big-To-Fail (TBTF) subsidies, they can afford to take the tail risks that capital markets refuse to take.
I think the Jordà, Schularick and Taylor paper raises the fundamental question of whether advanced economies need banks at all. If regulators impose the kind of massive capital requirements that Admati and her coauthors have been advocating, and banks were forced to contract, capital markets might well step in to fill the void in the advanced economies. The situation might well be different in emerging economies.
Posted at 5:39 pm IST on Mon, 13 Oct 2014 permanent link
Categories: banks
Why does SEC think that 333,251 minus 5 times 66,421 is not 1,146?
The CME futures contracts on the S&P 500 index come in two flavours – the big or full-size (SP) contract is five times the size of the E-mini (ES) contract. For clearing purposes, SP and ES contracts are fungible in a five to one ratio. The daily settlement price of both contracts is obtained by taking a volume-weighted average price of the two contracts taken together, weighted in the same ratio.
Yet, according to a recent SEC order against Latour Trading LLC and Nicolas Niquet, a broker-dealer is required to maintain net capital on the two contracts separately. In Para 28 of its order, the SEC says that in February 2010, Latour held 333,251 long ES contracts and 66,421 short SP contracts, and it netted these out to a long position of 1,146 ES contracts requiring net capital of $14,325. According to the SEC, these should not have been netted out and Latour should have held net capital of $8.32 million ($4.17 million for the ES and $4.15 million for the SP). This is surely absurd.
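A back-of-the-envelope check of the arithmetic (the per-contract charges below are my approximations, inferred from the totals quoted in the order):

```python
# Netted versus gross capital on the ES/SP position in para 28 of the SEC order.
es_long = 333_251        # long E-mini (ES) contracts
sp_short = 66_421        # short full-size (SP) contracts; 1 SP = 5 ES

net_es_equivalent = es_long - 5 * sp_short         # = 1,146 ES contracts
charge_per_es = 12.50                              # approx., inferred from $4.17m / 333,251
charge_per_sp = 5 * charge_per_es                  # approx., inferred from $4.15m / 66,421

netted_capital = net_es_equivalent * charge_per_es
gross_capital = es_long * charge_per_es + sp_short * charge_per_sp

print(f"net position: {net_es_equivalent:,d} ES contracts")
print(f"netted capital: ${netted_capital:,.0f}")       # about $14,325
print(f"un-netted capital: ${gross_capital:,.0f}")     # about $8.3 million
```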
It is not as if the SEC does not allow netting anywhere. It allows index products to be offset by qualified stock baskets (para 10). In other words, an approximate hedge (index versus an approximate basket) can be netted but an exact hedge (ES versus SP) cannot be netted.
PS: I am not defending Latour at all. The rest of the order makes clear that there was a great deal of incompetence and deliberate under-estimation of net capital going on. It is only on the ES/SP netting claim that I think the SEC regulations are unreasonable.
Posted at 9:43 pm IST on Sun, 28 Sep 2014 permanent link
Categories: derivatives, exchanges
Outsourcing financial repression to China and insourcing it back
It is well known that financial repression more or less disappeared in advanced economies during the 1980s and 1990s, but has been making a comeback recently. Is it possible that financial repression did not actually disappear, but was simply outsourced to China? And the comeback that we are seeing after the Global Financial Crisis is simply a case of insourcing the repression back?
This thought occurred to me after reading an IMF Working Paper on “Sovereign Debt Composition in Advanced Economies: A Historical Perspective”. What this paper shows is that many of the nice things that happened to sovereign debt in advanced economies prior to the Global Financial Crisis were facilitated by the robust demand for this debt from foreign central banks. In fact, the authors refer to this period not as the Great Moderation, but as the Great Accumulation. Though they do not mention China specifically, it is clear that the Great Accumulation is driven to a great extent by China. It is also clear that much of the Chinese reserve accumulation is made possible by the enormous financial repression within that country.
This leads me to my hypothesis that just as the advanced economies outsourced their manufacturing to more efficient manufacturers in China, they outsourced their financial repression to the most efficient manufacturer of financial repression – China. Now that China is becoming a less efficient and less willing provider of financial repression, advanced economies are insourcing this job back to their own central banks.
In this view of things, we overestimated the global reduction of financial repression in the 1990s and are overestimating the rise in financial repression since the crisis.
Posted at 11:59 am IST on Mon, 22 Sep 2014 permanent link
Categories: bond markets, international finance, monetary policy
Fama-French and Momentum Factors: Updated Data Library for the Indian Market
A year ago, my colleagues, Prof. Sobhesh K. Agarwalla, Prof. Joshy Jacob and I created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market, and promised to keep updating the data on a regular basis. It has taken a while to deliver on that promise, but we have now updated the data library. More importantly, we believe that we have now set up a process to do this on a sustainable basis by working together with the Centre for Monitoring Indian Economy (CMIE), who were the source of the data anyway. CMIE agreed to implement our algorithms on their servers and give us the data files every month. That ensures more comprehensive coverage of the data and faster updates.
Posted at 9:10 pm IST on Sat, 13 Sep 2014 permanent link
Categories: factor investing
A benchmark is to price what a credit rating agency is to quality
Andrew Verstein has an interesting paper on the Law and Economics of Benchmark Manipulation. One of the gems in that paper is the title of this blog post: “A benchmark is to price what a credit rating agency is to quality.” Verstein is saying that just as credit rating agencies became destructive when their ratings were hardwired into various legal requirements, benchmarks also become dangerous when they are hardwired into various legal documents.
Just as in the case of rating agencies, in the case of price benchmarks also, regulators have encouraged reliance on benchmarks. Even in the equity world where exchange trading eliminates the need for many kinds of benchmarks, the closing price is an important benchmark which derives its importance mainly from its regulatory use. Verstein points out that “Indeed, it is hard to find an example of stock price manipulation that does not target the closing (or opening) price.” So we have taken a liquid and transparent market and conjured an opaque and vulnerable benchmark out of it. Regulators surely take some of the blame for this unfortunate outcome.
Another of Verstein’s points is that governments use benchmarks even when they know they are broken: “the United States Treasury used Libor to make TARP loans during the financial crisis, despite being on notice that Libor was a manipulated benchmark.” In this case, Libor was not only manipulated but had become completely dysfunctional – I remember that the popular definition of Libor at that time was that it was the rate at which banks do not lend to each other in London. That was well before Libor became Lie-bor. The US government could easily have taken a reference rate from the US Treasury market or repo markets and then set a fat enough spread over that reference rate (say 1000 basis points) to cover the TED spread, the CDS spread, and a Bagehotian penal spread. By choosing not to do so, they lent legitimacy to what they knew very well was an illegitimate benchmark.
Posted at 7:57 pm IST on Sun, 7 Sep 2014 permanent link
Categories: benchmarks, manipulation
Regulatory overreach: SEBI definition of research analyst
Yesterday, the Securities and Exchange Board of India (SEBI) issued regulations requiring all Research Analysts to be registered with SEBI. The problem is that the regulations use a very expansive definition of research analyst. This reminds me of my note of dissent to the report of the Financial Sector Legislative Reforms Commission (FSLRC) on the issue of definition of financial service. I wrote in that dissent that:
Many activities carried out by accountants, lawyers, actuaries, academics and other professionals as part of their normal profession could attract the registration requirement because these activities could be construed as provision of a financial service ... All this creates scope for needless harassment of innocent people without providing any worthwhile benefits.
Much the same could be said about the definition of research analyst. Consider for example this blog post by Prof. Aswath Damodaran of the Stern School of Business at New York University on the valuation of Twitter during its IPO. It clearly meets the definition of a research report in Regulation 2(w):
any written or electronic communication that includes research analysis or research recommendation or an opinion concerning securities or public offer, providing a basis for investment decision
Regulation 2(w) has a long list of exclusions, but Damodaran’s post does not fall under any of them. Therefore, clearly Damodaran would be a research analyst under Regulation 2(u) under several of its prongs:
a person who is primarily responsible for:
- preparation or publication of the content of the research report; or
- providing research report; or
- offering an opinion concerning public offer,
with respect to securities that are listed or to be listed in a stock exchange
Under Regulation 3(1), Prof. Damodaran would need a certificate of registration from SEBI if he were to write a similar blog post about an Indian company. Or, under Regulation 4, he would have to tie up with a research entity registered in India.
Regulations of this kind are a form of regulatory overreach that must be prevented by narrowly circumscribing the powers of regulators in the statute itself. To quote another sentence that I wrote in the FSLRC dissent note: “regulatory self restraint ... is often a scarce commodity”.
Posted at 7:09 pm IST on Tue, 2 Sep 2014 permanent link
Categories: regulation
IPO as call option for insiders
A couple of weeks ago, Matt Levine at Bloomberg View described a curious incident of a company that was a public company for only six days before cancelling its public issue:
- On July 30, 2014, an Israeli company, Vascular Biogenics Ltd. (VBL) announced that it had priced its initial public offering (IPO) at $12 per share and that the shares would begin trading on Nasdaq the next day. The registration statement relating to these securities was filed with and was declared effective by the US Securities and Exchange Commission (SEC) on the same day.
- On August 8, VBL announced that it had cancelled its IPO.
What happened in between was that on July 31, the shares opened at $11.00 and sank further to close at $10.25 (a 15% discount to the IPO price) on a large volume of 1.5 million shares as compared to the total issue size of 5.4 million shares excluding the Greenshoe option (the source for price and volume data is Google Finance). This price drop was bad news for one of the large shareholders who had agreed to purchase almost 45% of the shares in the IPO. This insider was unwilling or unable to pay for the shares that he had agreed to buy. Technically, the underwriters were now on the hook, and the default could have triggered a spate of lawsuits. Instead, the company cancelled the IPO and the underwriting agreement. Nasdaq instituted a trading halt but the company appears to be still technically listed on Nasdaq.
Matt Levine does a fabulous job of dissecting the underwriting agreement to understand the legal issues involved. I am however more concerned about the relationship between the insider and the company. The VBL episode seems to suggest that if you are an insider in a company, a US IPO is a free call option. If the stock price goes up on listing, the insider pays the IPO price and buys the stock. If the price goes down, the insider refuses to pay and the company cancels the IPO.
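In payoff terms (my stylization of the episode): an insider who has agreed to buy $N$ shares at the IPO price but can walk away if the listing goes badly holds something like

$$\text{payoff} \approx N \times \max\big(P_{\text{list}} - P_{\text{IPO}},\, 0\big),$$

which is the payoff of a call option struck at the IPO price, with the premium being whatever legal and reputational cost a refusal to pay actually entails.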
Posted at 6:50 pm IST on Sat, 30 Aug 2014 permanent link
Categories: corporate governance, equity markets, insider trading
Mutual fund liquidity, valuation and gates
Last month, the US Securities and Exchange Commission (SEC) adopted rules allowing money market funds (MMFs) to restrict (or “gate”) redemptions when there is a liquidity problem. These proposals have been severely criticized on the ground that they could lead to pre-emptive runs as investors rush to the exit before the gates are imposed.
I think the criticism is valid though I was among those who recommended the imposition of gates in Indian mutual funds during the crisis of 2008. The difference is that I see gates as a solution not to a liquidity problem, but to a valuation problem. The purpose of the gate in my view is to protect remaining investors from the risk that redeeming investors exit the fund at a valuation greater than the true value of the assets. An even better solution to this valuation problem is the minimum balance at risk proposal that I blogged about two years ago.
Posted at 3:12 pm IST on Sat, 23 Aug 2014 permanent link
Categories: bond markets, mutual funds
Carry trades and the forward premium puzzle
Tarek Hassan and Rui Mano have an interesting NBER conference paper (h/t Econbrowser (Menzie Chinn)) that comes pretty close to saying that there is really no forward premium puzzle at all. Their paper itself tends to obscure the message using phrases like cross-currency, between-time-and-currency, and cross-time components of uncovered interest parity violations. So what follows is my take on their paper.
Uncovered interest parity says that, ignoring risk aversion, currencies with high interest rates should be expected to depreciate so as to neutralise the interest differential. If not, risk-neutral investors from the rest of the world would move all their money into the high-yielding currency and earn higher returns. Similarly, currencies with low interest rates should be expected to appreciate to compensate for the interest differential, so that risk-neutral investors do not stampede out of the currency.
Violations of uncovered interest parity therefore have a potentially simple explanation in terms of risk premia. The problem is that the empirical relationship between interest differentials and currency appreciation is in the opposite direction to that predicted by uncovered interest parity. In a pooled time-series cross-sectional regression, currencies with high interest rates appreciate instead of depreciating. A whole investment strategy called the carry trade has been built on this observation. A risk based explanation of this phenomenon would seem to require implausible time varying risk premia. For example, if we interpret the pooled result in terms of a single exchange rate (say dollar-euro), the risk premium would have to keep changing sign depending on whether the dollar interest rate was higher or lower than the euro interest rate.
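For concreteness, the pooled regression in question is the standard one (my notation, not Hassan and Mano's):

$$\Delta s_{i,t+1} = \alpha + \beta\,(i_{i,t} - i^{*}_{t}) + \varepsilon_{i,t+1},$$

where $\Delta s_{i,t+1}$ is the depreciation of currency $i$ against the base currency and $i_{i,t} - i^{*}_{t}$ is the interest differential. Uncovered interest parity (with risk neutrality) predicts $\beta = 1$; pooled estimates typically find $\beta \le 0$, which is the wrong sign.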
This is where Hassan and Mano come in with a decomposition of the pooled regression result. They argue that in a pooled sample, the result could be driven by currency fixed effects. For example, over their sample period, the New Zealand interest rate was consistently higher than the Japanese rate and an investor who was consistently short the yen and long the New Zealand dollar would have made money. The crucial point here is that a risk based explanation of this outcome would not require time varying risk premia – over the whole sample, the risk premium would be in one direction. What Hassan and Mano do not say is that a large risk premium would be highly plausible in this context. Japan is a net creditor nation and Japanese investors would require a higher expected return on the New Zealand dollar to take the currency risk of investing outside their country. At the same time, New Zealand is a net debtor country and borrowers there would pay a higher interest rate to borrow in their own currency than take the currency risk of borrowing in Japanese yen. It would be left to hedge funds and other players with substantial risk appetite to try and arbitrage this interest differential and earn the large risk premium on offer. Since the aggregate capital of these investors is quite small, the return differential is not fully arbitraged away.
Hassan and Mano show that empirically only the currency fixed effect is statistically significant. The time varying component of the uncovered interest parity violation within a fixed currency pair is not statistically significant. Nor is there a statistically significant time fixed effect related to the time varying interest differential between the US dollar and a basket of other currencies. To my mind, if there is no time varying risk premium to be explained, the forward premium puzzle disappears.
The paper goes on to show that the carry trade as an investment strategy is primarily about currency fixed effects. Hassan and Mano consider “a version of the carry trade in which we never update our portfolio. We weight currencies once, based on our expectation of the currencies’ future mean level of interest rates, and never change the portfolio thereafter.” This “static carry trade” strategy accounts for 70% of the profits of the dynamic carry trade that rebalances the portfolio each period to go long the highest yielding currencies at that time and go short the lowest yielding currencies at that time. More importantly, in the carry trade portfolio, the higher yielding currencies do depreciate against the low yielding currencies. It is just that the depreciation is less than the interest differential, and so the strategy makes money. So uncovered interest parity gets the sign right, and only the magnitude of the effect is lower because of the risk premium. There is a large literature showing that the carry trade loses money at times of global financial stress when investors can least afford to lose money, and therefore a large risk premium is intuitively plausible.
Posted at 12:26 pm IST on Sat, 16 Aug 2014 permanent link
Categories: arbitrage, international finance
Tax avoidance with derivatives
Last month, the Permanent Subcommittee on Investigations of the United States Senate published a Staff Report on how hedge funds were using basket options to reduce their tax liability. The hedge fund’s underlying trading strategy involved 100,000 to 150,000 trades per day, and many of those positions lasted only a few minutes. Yet, because of the use of basket options, the trading profits ended up being taxed at the long term capital gains rate of 15-20% instead of the short term capital gains rate of 35%. The hedge fund saved $6.8 billion in taxes during the period 2000-2013. Perhaps more importantly, the hedge fund was also able to circumvent leverage restrictions.
The problem is that derivatives blur a number of distinctions that are at the foundation of the tax law everywhere in the world. Alvin Warren described the problem in great detail more than two decades ago (“Financial contract innovation and income tax policy.” Harvard Law Review, 107 (1993): 460). More importantly, Warren’s paper also showed that none of the obvious solutions to the problem would work.
We have similar problems in India as well. Mutual funds that invest at least 65% in equities produce income that is practically tax exempt for the investor, while debt mutual funds involve substantially higher tax incidence. A very popular product in India is the “Arbitrage Mutual Fund” which invests at least 65% in equities, but also hedges the equity risk using futures contracts. The result is “synthetic debt” that has the favourable tax treatment of equities.
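A stylized cash-and-carry example (hypothetical numbers, not any actual fund's position) shows how the futures hedge turns an equity portfolio into synthetic debt:

```python
# Buy the stock in the cash market and sell the single stock futures against it.
# The payoff at expiry is the futures basis, whatever the stock does -- a
# money market return wearing equity clothing.  Numbers are purely illustrative.

spot = 100.0      # purchase price of the stock
futures = 101.5   # price at which the one-month futures is sold

for expiry_price in (80.0, 100.0, 120.0):
    stock_pnl = expiry_price - spot        # long stock leg
    futures_pnl = futures - expiry_price   # short futures leg
    print(expiry_price, stock_pnl + futures_pnl)   # always 1.5

# Ignoring dividends and transaction costs, the fund locks in about 1.5% for
# the month regardless of the stock price, while remaining "65% in equities"
# for tax purposes.
```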
In some sense, this is nothing new. In the Middle Ages, usury laws in Europe prohibited interest bearing debt, but allowed equity and insurance contracts. The market response was the infamous “triple contract” (contractus trinus) which used equity and insurance to create synthetic debt.
What modern taxmen are trying to do therefore reminds me of Einstein’s definition of insanity as doing the same thing over and over again and expecting different results.
Posted at 4:03 pm IST on Sat, 9 Aug 2014 permanent link
Categories: derivatives, taxation
Betting Against Beta in the Indian Market
My colleagues, Prof. Sobhesh Kumar Agarwalla, Prof. Joshy Jacob, Mr. Ellapulli Vasudevan and I have written a working paper on “Betting Against Beta in the Indian Market” (also available at SSRN).
Recent empirical evidence from different markets suggests that the security market line is flatter than posited by CAPM and a market neutral portfolio long in low-beta assets and short in high-beta assets earns positive returns. Frazzini and Pedersen (2014) conceptualize a Betting against Beta (BAB) factor that tracks such a portfolio. They find that the BAB factor earns significant returns using data from 20 international equity markets, treasury bond markets, credit markets, and futures markets. We find that a similar BAB factor earns significant positive returns in the Indian equity market. The returns on the BAB factor dominate the returns on the size, value and momentum factors. We also find that stocks with higher volatility earn relatively lower returns. These findings are consistent with the Frazzini and Pedersen model in which many investors do not have access to leverage and therefore overweight the high-beta assets to achieve their target return.
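A highly simplified sketch of a BAB-style construction appears below (the actual construction uses rank-based weights and shrunk betas; this shows only the skeleton of beta-levered long and short legs):

```python
import pandas as pd

def bab_return(betas: pd.Series, excess_returns: pd.Series) -> float:
    """One-period BAB-style factor return -- a bare-bones sketch.

    Split stocks at the median beta, equal weight each half (the original
    construction uses rank-based weights), lever the low-beta leg and
    de-lever the high-beta leg to an ex-ante beta of one, then go long the
    former and short the latter.
    """
    low = betas <= betas.median()
    beta_low, beta_high = betas[low].mean(), betas[~low].mean()
    r_low, r_high = excess_returns[low].mean(), excess_returns[~low].mean()
    return r_low / beta_low - r_high / beta_high

# six hypothetical stocks
betas = pd.Series([0.5, 0.7, 0.9, 1.1, 1.4, 1.8])
excess = pd.Series([0.010, 0.012, 0.011, 0.009, 0.008, 0.005])
print(bab_return(betas, excess))   # positive if the security market line is flat
```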
Like our earlier work on the Fama-French and momentum factor returns in India (see this blog post), this study also contributes to an understanding of the cross section of equity returns in India. Incidentally, the long promised update of the Fama-French and momentum factor returns is coming soon. We wanted to put the data update process on a more sound foundation and that has taken time. While the update has been delayed, we expect it to be more reliable as a result.
Posted at 1:30 pm IST on Mon, 14 Jul 2014 permanent link
Categories: CAPM, factor investing
Making margin models less procyclical
Last month, the Bank of England (BOE) published a Financial Stability Paper entitled “An investigation into the procyclicality of risk-based initial margin models”. After the Global Financial Crisis, there has been growing concern that procyclical margin requirements (margins are higher in times of market stress and lower in calm markets) induce complacency in good times and panic in bad times. There is therefore a desire to reduce procyclicality, but this is difficult to do without sacrificing the risk sensitivity of the margin system.
The BOE paper uses historical and simulated data to compare various margin models on their risk sensitivity and their procyclicality. Though they do not state this as a conclusion, their comparison does show that the exponentially weighted moving average (EWMA) model with a floor (minimum margin) is one of the better performing models on both risk sensitivity and procyclicality. This is gratifying in that India uses a system of this kind.
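For concreteness, a minimal sketch of an EWMA margin with a floor is given below (illustrative parameters, not the actual Indian calibration):

```python
import numpy as np

def ewma_margin(returns, lam=0.94, floor=0.02, coverage=3.0):
    """EWMA volatility based margin rate with a minimum floor.

    lam is the EWMA decay factor, floor the minimum margin rate and coverage
    the multiple of daily volatility the margin is meant to cover.  All
    parameters are illustrative.
    """
    var = returns[0] ** 2
    margins = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2   # EWMA variance update
        margins.append(max(coverage * np.sqrt(var), floor))
    return np.array(margins)

# calm market followed by a stress episode (simulated daily returns)
rng = np.random.default_rng(0)
rets = np.concatenate([rng.normal(0, 0.005, 250), rng.normal(0, 0.03, 50)])
margins = ewma_margin(rets)
print(margins[:250].mean(), margins[250:].mean())
# The floor binds in calm times (damping procyclicality) while the EWMA term
# lets the margin rise when volatility spikes (preserving risk sensitivity).
```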
However, the study leaves me quite dissatisfied. First, procyclicality is measured in terms of elevated realized volatility. Market stress in my view is better measured by implied volatility (for example, the VIX) and by measures of funding liquidity. Second, the four models that the paper compares are all standard pre-crisis models. Even when they use simulated data from a regime switching model, they do not consider a margin model based on regime switching. Nor do they consider models based on fat tailed distributions. There are no models that adjust margins slowly to reduce liquidity stresses in the system. Finally, they do little to quantify the tradeoff between risk sensitivity and procyclicality – how much risk sensitivity do we have to give up to achieve a desired reduction in procyclicality?
Posted at 2:16 pm IST on Thu, 19 Jun 2014 permanent link
Categories: exchanges, risk management
How to borrow $10 million against forged shares
James Altucher narrates a fascinating story about how a guy claiming to be related to Middle Eastern royalty almost succeeded in borrowing $10 million from a fund manager against forged shares representing $25 million of restricted stock of a private internet company (h/t Bruce Schneier).
To me the red flag in the story was that the borrower agreed without a murmur to the outrageous terms that the fund manager asked for:
- 15% interest, paid quarterly
- the full loan is due back in two years
- $600,000 fee paid up front.
- 25% of all the upside on the full $25 million in shares for the next ten years
Assuming that the loan is for all practical purposes without recourse to any other assets of the borrower because of the uncertainties of local law, all this can be valued using call and put options on the stock. The upside clause is just 25% of an at-the-money call option on the stock. The default loss is just the value of a put with a strike of $10 million. To discount the interest payments, we need the risk neutral probability of default, which I conservatively estimate as the probability of exercise of the two year put option. (In fact, the interest is paid quarterly, and some interest payments will be received even if the loan ultimately defaults.)
For simplicity, I assume the risk free rate to be zero, which is realistic for the first two years, but probably undervalues the ten year call. To add to the conservatism, I assume that the volatility of the stock is 100% for the first two years (the life of the loan) and drops sharply to 30% for the remaining life of the ten year call option. Taking the square root of the weighted average variance gives a volatility of about 52% for the ten year call. Since it is an internet stock, one can safely assume that the dividends are zero.
Under these assumptions, the fund manager expects to lose $3 million (put option value) out of the $10 million loan, but expects to make $3.7 million on the call, $1.4 million in interest and $0.6 million upfront fee. That is a net gain of $2.7 million or 27%. If the short term volatility is reduced to 50%, the default loss drops to less than $0.5 million and the net gain rises to 52%. Even if the short term volatility is raised to 160% (without raising the long term volatility), the deal still breaks even.
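The arithmetic can be reproduced with a few lines of Black-Scholes code (a sketch under the assumptions stated above; all figures are in millions of dollars):

```python
from math import log, sqrt
from scipy.stats import norm

def bs_d1_d2(S, K, sigma, T):
    d1 = (log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))   # risk free rate = 0
    return d1, d1 - sigma * sqrt(T)

def bs_call(S, K, sigma, T):
    d1, d2 = bs_d1_d2(S, K, sigma, T)
    return S * norm.cdf(d1) - K * norm.cdf(d2)

def bs_put(S, K, sigma, T):
    d1, d2 = bs_d1_d2(S, K, sigma, T)
    return K * norm.cdf(-d2) - S * norm.cdf(-d1)

S, loan = 25.0, 10.0
sigma_2y = 1.00                                            # 100% for the loan's life
sigma_10y = sqrt((sigma_2y ** 2 * 2 + 0.30 ** 2 * 8) / 10) # blended ~52%

put_value = bs_put(S, loan, sigma_2y, 2)            # expected default loss, ~3.0
call_value = 0.25 * bs_call(S, S, sigma_10y, 10)    # 25% of ten year ATM call, ~3.7
_, d2 = bs_d1_d2(S, loan, sigma_2y, 2)
p_default = norm.cdf(-d2)                           # risk neutral P(shares < $10m), ~0.52
interest = 0.15 * loan * 2 * (1 - p_default)        # ~1.4
fee = 0.6

print(round(-put_value + call_value + interest + fee, 1))   # net gain, ~2.7
```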
If a deal looks too good to be true, it usually is. The fund manager should have got suspicious right there.
As an aside, forged shares were a big menace in India in the 1990s, but we have solved that problem by dematerialization. (It is standard while lending against shares in India to ask for the shares to be dematerialized before being pledged.) The Altucher story suggests that the US still has the forged share problem.
Posted at 2:56 pm IST on Wed, 18 Jun 2014 permanent link
Categories: fraud
19th century UK gilts mispricing versus modern on-the-run bond pricing
Andrew Odlyzko has an interesting paper entitled “Economically irrational pricing of 19th century British government bonds” (available on SSRN) which demonstrates that more liquid perpetual bonds (consols) issued by the UK government often traded at prices about 1% higher than less liquid bonds with almost identical cash flows. Given that interest rates in that era were around 3%, these perpetual bonds would have a duration of well over 30 years. So the 1% pricing disparity would correspond to a yield differential of about 3 basis points. That is much less than the yield differential between long maturity on-the-run and off-the-run treasuries in the US in recent decades, let alone the differentials in the Indian gilt market.
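The back-of-the-envelope behind the 3 basis points (my arithmetic, not Odlyzko's) is just the duration approximation for a perpetuity:

$$ \frac{\Delta P}{P} \approx -D\,\Delta y, \qquad D_{\text{perpetuity}} = \frac{1}{y} = \frac{1}{0.03} \approx 33, \qquad \Delta y \approx \frac{1\%}{33} \approx 3 \text{ basis points}. $$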
In other words, contrary to what Odlyzko seems to imply, the 19th century UK gilt market would appear to have been more efficient than modern government bond markets! Odlyzko provides a solution to this puzzle. Most of the UK consols in the 19th century were held by retail investors, and very little was held by financial institutions. As Odlyzko rightly points out, this would substantially depress the premium for liquidity. Odlyzko argues that the liquidity premium should be zero because the stock of the liquid consols was more than adequate to meet any reasonable liquidity demands. I do not agree with this claim. The experience with quantitative easing since the global financial crisis tells us that the demand for safe and liquid assets can be almost insatiable. That might well have been true two centuries ago.
Posted at 10:03 pm IST on Sat, 14 Jun 2014 permanent link
Categories: bond markets, market efficiency
Waiting for a national stock market in India
Today was another reminder that India still does not have a national stock market. The Indian stock markets are closed because Mumbai goes to the polls today. The country as a whole goes to the polls on ten different days spread over more than a month. Either the stock market should be closed on all ten days or on none.
It is high time that the regulators required the exchanges to operate out of their disaster recovery location when Mumbai has a holiday and most of the country is working. That would also be a wonderful way of testing whether all those business continuity plans work as nicely on the ground as they do on paper. But something tells me that this is unlikely to happen anytime soon.
Two decades ago, we abolished the physical trading floor in Mumbai. But the trading floor in Mumbai lives on in the minds of key decision makers, and it will take a long time to liberate ourselves from the oppression of this imaginary trading floor.
Posted at 6:13 pm IST on Thu, 24 Apr 2014 permanent link
Categories: exchanges, regulation, technology
The human rights of insider traders
The European Court of Human Rights (ECHR) has an interesting judgement (h/t June Rhee) upholding the human rights of those guilty of insider trading (The judgement itself is available only in French but the Press Release is available in English).
Though the fines and penalties imposed by the Italian Companies and Stock Exchange Commission (Consob) were formally defined as administrative in nature under Italian law, the ECHR ruled that “the severity of the fines imposed on the applicants meant that they were criminal in nature”. As such, the ECHR found fault with the procedures followed by Consob. For example, the accused had not had an opportunity to question any individuals who could have been interviewed by Consob. Moreover, the functions of investigation and judgement were within the same institution reporting to the same president. The only thing that helped Consob was that the accused could and did challenge the Consob ruling in the Italian courts.
The ECHR ruling that the Consob fines were a criminal penalty brought into play the important principle that a person cannot be tried for the same offence twice. Under Italian law (based on the EC Market Abuse Directive), a criminal prosecution had taken place in addition to the Consob fines. ECHR ruled that this violated the human rights of the accused.
It is important to recognize that the ECHR is not objecting to the substance of the insider trading statutes and the need to penalize the alleged offences. The Court clearly states that the regulations are “intended to guarantee the integrity of the financial markets and to maintain public confidence in the security of transactions, which undeniably amounted to an aim that was in the public interest. ... Accordingly, the fines imposed on the applicants, while severe, did not appear disproportionate in view of the conduct with which they had been charged.” Rather, the Court’s concerns are about due process of law and the protection of the rights to fair trial.
I think the principles of human rights are broadly similar across the free world – US, Europe and India. The judgement therefore raises important issues that go far beyond Italy.
Posted at 11:03 am IST on Tue, 15 Apr 2014 permanent link
Categories: insider trading, law
Heartbleed and the need for air-gapped backups in finance
Heartbleed is perhaps the most catastrophic computer security disaster ever (For those not technically inclined, this xkcd comic is perhaps the most readable explanation of the bug). Bruce Schneier says that “On the scale of 1 to 10, this is an 11.” Since the bug has been around for a few years and the exploit leaves no trace on the server, the assumption has to be that passwords and private keys have been stolen from every server that was ever vulnerable. If you have the private key, you can read everything that is being sent to or received from the server until the private key (SSL Certificate) is changed even if the vulnerability itself has been fixed.
Many popular email, social media and other sites are affected, and we need to change our passwords everywhere. Over the next few weeks, I intend to change every single password that I am using on the web – more than a hundred of them.
Thankfully, only a few banking sites globally seem to be affected. When I check now, none of the Indian banking sites that I use regularly are being reported as vulnerable. However, the banks have not said anything officially, and I am not sure whether they were never vulnerable or whether they fixed the vulnerability over the last few days after the bug was revealed. Even the RBI has been silent on this; if all Indian banks were safe, they should publicly say so, and if some were affected and have been fixed, they should say so too. Incidentally, many Indian banking sites do not seem to implement Perfect Forward Secrecy, and that is not good at all.
More importantly, I think it is only a matter of time before large financial institutions around the world suffer a catastrophic security breach. Even if the mathematics of cryptography is robust (P ≠ NP), all the mathematics is implemented in code that often goes through only flimsy code reviews. I think it is necessary to have offline repositories of critical financial data so that one disastrous hack does not destroy the entire financial system. For example, I think every large depository, bank, mutual fund and insurance company should create a monthly backup of its entire database in a secure air-gapped location. Just connect a huge storage rack to the server (or perhaps the disaster recovery backup server), dump everything (encrypted) on the rack, disconnect and remove the rack, and store the air-gapped rack in a secure facility. A few thousand dollars or even a few tens of thousands of dollars a month is a price that each of these institutions should be willing to pay for partial protection against the tail risk of an irrecoverable security breach.
Posted at 7:04 pm IST on Sat, 12 Apr 2014 permanent link
Categories: technology
Campbell on 2013 Economics Nobel Prizes
While much has been written about the 2013 Economics Nobel Prizes, almost everybody has focused on the disagreements between Fama and Shiller, with Hansen mentioned (if at all) as an afterthought (Asness and Liew is a good example). By contrast, John Campbell has a paper (h/t Justin Fox) on the 2013 Nobels for the Scandinavian Journal of Economics, in which Hansen appears as the chief protagonist, while Fama and Shiller play supporting roles. The very title of the paper (“Empirical Asset Pricing”) indicates the difference in emphasis – market efficiency and irrational exuberance play second fiddle to Hansen’s GMM methodology.
To finance people like me, this comes as a shock; Fama and Shiller are people in “our field” while Hansen is an “outsider” (a mere economist, not even a financial economist). Yet on deeper reflection, it is hard to disagree with Campbell’s unstated but barely concealed assessment: while Fama and Shiller are story tellers par excellence, Hansen stands on a different pedestal when it comes to rigour and mathematical elegance.
And even if you have no interest in personalities, I would still strongly recommend Campbell’s paper – it is by far the best 30-page introduction to Empirical Asset Pricing that I have seen.
Posted at 4:56 pm IST on Sat, 5 Apr 2014 permanent link
Categories: market efficiency, statistics
Diversification, Skewness and Adverse Selection
When I first read about the fascinating ‘Star Wars’ deal between Steven Spielberg and George Lucas, my reaction was that this was a simple diversification story. But then I realized that it is more complex than that; the obstacles in the form of skewness preference, adverse selection and moral hazard are strong enough to make deals like this probably quite rare.
The story itself is very simple and Business Insider tells it well. Back in 1977, George Lucas was making his ‘Star Wars’ film, and Steven Spielberg was making ‘Close Encounters of the Third Kind’. Lucas was worried that his ‘Star Wars’ film might bomb and thought that ‘Close Encounters’ would be a great hit. So he made an offer to his friend Spielberg:
All right, I’ll tell you what. I’ll trade some points with you. You want to trade some points? I’ll give you 2.5% of ‘Star Wars’ if you give me 2.5% of ‘Close Encounters’.
Spielberg’s response was:
Sure, I’ll gamble with that. Great.
Both films ended up as great classics, but ‘Star Wars’ was by far the greater commercial success and Lucas ended up paying millions of dollars to Spielberg.
At the time when neither knew whether either of the films would succeed, the exchange was a simple diversification trade that made both better off. So why are such trades not routine? One reason could be that many films are made by large companies that are already well diversified.
A more important factor is information asymmetry: normally, each director would know very little about the other’s film, and then such trades become impossible. The Lucas-Spielberg trade was possible because they were friends. It is telling that the trade was made after Lucas had spent a few days watching Spielberg make his film. It takes a lot of due diligence to overcome the information asymmetry.
The other problem is skewness preference. Nobody buys a large number of lottery tickets to “diversify the risk”, because that diversification would also remove the skewness that makes lottery tickets worthwhile. Probably both Lucas and Spielberg thought their films had risk adjusted returns that made them attractive even without the skewness characteristic.
It is also possible that Lucas simply did an irrational trade. Lucas is described as “a nervous wreck ... [who] felt he had just made this little kids’ movie”. Perhaps Spielberg was simply at the right place at the right time to do a one-sided trade with an emotionally disturbed counterparty. Maybe we should all be looking out for friends who are sufficiently depressed to offer us a Lucas type trade.
Posted at 1:38 pm IST on Thu, 27 Mar 2014 permanent link
Categories: behavioural finance, CAPM
China and Japan: Risk of Currency War
Over the last few months, the risk of a currency war between China and Japan has increased substantially, as pressing domestic economic problems in both countries could tempt them down this path.
In Japan, Abe came to power with a promise to revive the economy through drastic means. Though Abenomics has three “arrows”, the only arrow that is at all effective now is the monetary arrow that has worked by depreciating the yen. The risk is that Japan would seek to rely more and more on this arrow and try to push the yen down to 110 or even 120 against the US dollar. It is even possible that such a strategy might finally revive the Japanese economy.
China also faces a similar temptation. House of Debt has a fantastic blog post showing that since 2008, China has been forced to rely more and more on debt to keep its economy growing because its earlier strategy of export led growth is not working any more. The second graph in their blog post drives this point home very forcefully. Unfortunately, the debt led model is increasingly unsustainable. This month, China witnessed the first onshore corporate bond default. Earlier, a default on a popular wealth management product was avoided only by a bailout.
China’s leaders must now be sorely tempted to depreciate the currency to maintain economic growth without further exacerbating the country’s internal debt problem. Many observers believe that after many years of high inflation and gradual appreciation, the Chinese Renminbi is overvalued today. That would be another reason to attempt a weakening of the currency.
The high degree of intra-Asian economic integration means that a depreciation by either Asian giant would drive down many other Asian currencies (for example, the Korean Won) and make it difficult for the other Asian giant to refrain from depreciating its currency. A vicious cycle of competitive devaluations could rapidly become a currency war. And the already strained political relations between the two countries would clearly not help.
The yen and the yuan are in some ways like the yin and yang of Asian currency markets. A “beggar thy neighbour” currency war between Japan and China would of course have a dramatic impact on the whole of Asia.
Posted at 7:53 pm IST on Fri, 14 Mar 2014 permanent link
Categories: international finance
Bitcoin as a retail RTGS without a central bank
Richard Gendal Brown has a very valuable blog post about bank payment systems that ends with a brief discussion about Bitcoin. His conclusion is very interesting:
My take is that the Bitcoin network most closely resembles a Real Time Gross Settlement system. There is no netting, there are (clearly) no correspondent banking relationships and we have settlement, gross, with finality.
I agree with this characterization, but would only add that Bitcoin is an RTGS (Real-Time Gross Settlement) without a central bank. To computer scientists, the core of Bitcoin is an elegant solution to the Byzantine Generals problem. To finance people, perhaps, the core of Bitcoin is an RTGS that (a) is open to all (and not just the privileged banks) and (b) functions without a central bank.
Posted at 6:42 pm IST on Tue, 11 Mar 2014 permanent link
Categories: blockchain and cryptocurrency
Why central banks should not regulate markets
The best reason for keeping central banks out of the regulation of markets is highlighted by the announcement a couple of days back by the Bank of England that it was suspending one of its employees and beginning an independent investigation into whether any of its staff were involved in or aware of any attempted manipulation of the foreign exchange market.
The simple fact of the matter is that the central bank is totally conflicted when it comes to market regulation. It is a big participant in financial markets – in fact its primary mandate is to legally manipulate these markets in the pursuit of the macroeconomic mandates entrusted to it. Monetary policy gives central banks a mandate to manipulate bond markets to fix interest rates at particular levels; in several countries, central banks are also mandated to manipulate foreign exchange markets; and occasionally (for example, Hong Kong and Japan at different points of time), they have even been mandated to manipulate the stock index market.
This completely legal manipulation mandate makes central banks unsuitable for enforcing conduct regulation of financial markets. There is too great a temptation for the central bank to condone or even encourage large banks to indulge in manipulation of markets in the same direction that the central bank desires. After all, this is just another very convenient “transmission mechanism” for the central bank.
In this light, the post crisis decision in the UK to move market regulation into a subsidiary of the central bank is a ghastly mistake.
Posted at 9:52 pm IST on Fri, 7 Mar 2014 permanent link
Categories: manipulation, regulation
Insider trading inside the regulator
Rajgopal and White have a paper euphemistically (or sarcastically) titled “Stock Picking Skills of SEC Employees”. The paper is actually about potential insider trading by the regulator’s employees. The empirical results show that sales (but not purchases) by SEC employees earned abnormal profits (as measured by the standard Fama-French four factor model). There is evidence that some of these sales were based on impending SEC enforcement actions or disclosures made to the SEC that have not yet been made public. This indicates that the measures introduced by the SEC after an earlier insider trading scandal in 2009 (see here, pages 40-43) are not sufficiently effective or are not properly enforced.
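For reference, the abnormal return in such studies is the intercept in the four factor regression (the three Fama-French factors plus momentum, which is how the four factor model is usually specified):

$$ r_{i,t} - r_{f,t} = \alpha_i + \beta_i\,\mathit{MKT}_t + s_i\,\mathit{SMB}_t + h_i\,\mathit{HML}_t + m_i\,\mathit{UMD}_t + \varepsilon_{i,t} $$

A non-zero $\alpha_i$ on the portfolio mimicking the employees' trades is the measure of abnormal performance.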
If my memory serves me right, back in 2000, when I was in SEBI (the Securities and Exchange Board of India), employees (from the Chairman down to all staff) were forbidden from investing in equities except through mutual funds. This is arguably too draconian, but clearly the SEC rules (and their enforcement) were not tight enough.
Posted at 1:29 pm IST on Sat, 1 Mar 2014 permanent link
Categories: regulation
Edgar for Humans: Where individual effort trumps mighty organizations
Last week, Maris Jensen released her web site SEC Filings for Humans. (There is a nice interview with Maris Jensen at E Pluribus Unum.)
I use the SEC’s Edgar database quite often, but nowadays I never go there without first having identified the exact document that I need through other means. Searching for the document itself on Edgar is not for the faint-hearted. I use Yahoo Finance and Google Finance quite extensively and find both quite disappointing. It is therefore truly amazing that one individual using a bunch of open source software (particularly D3.js and SQLAlchemy) can do something that none of these powerful organizations with vast resources have been able to accomplish.
For example, on Edgar, if you look for JPMorgan, you will find two registrants with the same name Jpmorgan Chase & Co. Only by trial and error would you be able to figure out which is the true JPMorgan. At Maris’ site, both registrants are listed, but the correct one is identified by the ticker symbol (JPM). Not rocket science, but saves a few minutes of searching for the wrong documents. Once you select JPM, you can view all its financial information (from the XBRL filings) in tabular form instead of wading through a huge text file. A lot of interesting information is displayed visually – for example, you can find a time series chart of all of the company’s subsidiaries. (For a company like JPM with hundreds of subsidiaries, this chart is quite intimidating, a similar chart for say Apple is more enjoyable). The influence chart of cross ownership is also truly impressive.
It is quite likely that in a few days as more and more users try out her website, it will become unresponsive and possibly even crash. One hopes that a large organization with more bandwidth and hardware takes over the site and keeps it running. But the prospects do not look very good – Maris tried to donate the whole thing to the SEC, but they did not even bother to respond. Meanwhile the SEC spends a lot of money buying back its own Edgar data from commercial vendors.
Finally, will something like this ever become available in India?
Posted at 5:39 pm IST on Tue, 25 Feb 2014 permanent link
Categories: regulation, technology
Looking for smuggled gold in the balance of payments
The World Gold Council (WGC) reported last week that despite import curbs imposed during 2013, Indian gold demand continued to grow with gold smuggling (what the WGC euphemistically calls unofficial gold imports) compensating for the fall in official imports. This is of course in line with a lot of anecdotal evidence.
In principle, gold smuggling should show up in the balance of payments (BOP) data in some form – after all the smuggled gold also has to be paid for in foreign exchange. For example, smugglers could collect foreign currency from migrant workers outside India and remit the money in Indian rupees to their families in India via the “hawala” channels. Corporate “hawala” could take the form of under/over invoicing of trade or inflating outbound foreign direct investment from India.
The Indian balance of payments data is available only for July-September 2013 while smuggling is likely to have picked up more in the subsequent quarter. Nevertheless, the data does show some tentative evidence for the financing of gold smuggling. For example, in item 2.2.2.2 (Other capital transfers including migrants transfers), the gross inflows fell by nearly $1.0 billion and the net flow fell by $0.8 billion. Similarly, item 3.1.B (Direct Investment by India) rose by $1.2 billion on gross outflow basis and by $0.6 billion on a net outflow basis. I am grateful to my colleague Prof. Ravindra Dholakia for pointing out to me that the gross flows are possibly more important than the net flows.
The WGC data and the BOP data are consistent with the anecdotal evidence that smuggling is on the rise. Some economists tend to be dismissive of such anecdotal evidence – their standard refrain is that “the plural of anecdote is not data”. In finance, we tend to be much more respectful of anecdotal and suggestive evidence. Our standard reflex is to “buy the rumour and sell the fact”. Financial markets are forward looking and by the time conclusive statistical data becomes available, it is too late to be actionable.
In any case, it is dangerous to let smuggling take root. Smuggling of gold requires setting up a complex and sophisticated supply chain including financing, insurance, transportation, warehousing and distribution. Stringent import curbs create incentives to incur the large fixed costs required to set up such a supply chain. But once the supply chain has been set up, it may continue to operate even after the curbs are relaxed so long as the arbitrage differentials exceed the variable costs of the supply chain. In this sense, there are large hysteresis effects (path dependence) in these kinds of phenomena. More dangerously, the supply chain created to smuggle gold can be easily re-purposed for more nefarious activities. In the long run, the gold import curbs may turn out to be a very costly mistake.
Posted at 1:35 pm IST on Sun, 23 Feb 2014 permanent link
Categories: gold, international finance
High Frequency Manipulation at Futures Expiry
My colleagues, Prof. Sobhesh Kumar Agarwalla and Prof. Joshy Jacob, and I have a working paper on “High Frequency Manipulation at Futures Expiry: The Case of Cash Settled Indian Single Stock Futures” (also available at SSRN).
Some extracts from the abstract and the conclusion:
In 2013, the Securities and Exchange Board of India identified a case of alleged manipulation (in September 2012) of the settlement price of cash settled single stock futures based on high frequency circular trading. This alleged manipulation exploited several interesting characteristics of the Indian single stock futures market: (a) the futures contract is cash settled, (b) the settlement price is not based on a call auction or special session, but is the volume weighted average price (VWAP) during the last half an hour of trading in the cash market on the expiry date, and (c) anecdotal evidence suggests that the Indian market is more vulnerable to circular trading in which different entities associated with the same person trade with each other to create a false market.
We demonstrate that the combination of cash settlement with the use of a volume weighted average price (VWAP) to determine the settlement price on expiry day makes the Indian single stock futures market vulnerable to a form of high frequency manipulation that targets price insensitive execution algorithms. This type of manipulation is hard to prevent using mechanisms like position limits, and therefore it is necessary to establish a robust program to detect and deter manipulation.
We develop an econometric technique that uses high frequency data and which can be integrated with the automated surveillance system to identify suspected cases of high frequency manipulation very close to the event. Human judgement then needs to be applied to identify cases which prima facie justify detailed investigation and possible prosecution. Our results suggest that high frequency manipulation of price insensitive execution algorithms may be taking place. However, successful manipulation of the settlement price is relatively rare with only one clear instance (the September 27, 2012 episode) and one (milder) parallel.
Finally, the use of the volume weighted average price (VWAP) to determine the cash settlement price of the futures contract might require reconsideration.
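As a purely illustrative toy calculation (not from the paper), consider how far a short burst of high-volume trades can move a VWAP based settlement price:

```python
# Toy illustration of why a VWAP settlement price is manipulable: a burst of
# high-volume trades at an off-market price during the closing half hour can
# drag the settlement price well away from the prevailing market price.
# All numbers are hypothetical.

def vwap(trades):
    """trades: iterable of (price, quantity) pairs."""
    value = sum(price * qty for price, qty in trades)
    volume = sum(qty for _, qty in trades)
    return value / volume

normal = [(100.0, 10_000)] * 30           # routine trading at 100 in the window
print(vwap(normal))                       # 100.0

burst = normal + [(103.0, 200_000)]       # one large circular-trading burst at 103
print(vwap(burst))                        # about 101.2 -- the settlement moves 1.2%
```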
Posted at 9:02 pm IST on Sun, 16 Feb 2014 permanent link
Categories: exchanges, manipulation
Does the market close at 4:00:00 pm or at 4:00:01 pm?
A few years ago, somebody asking this question would have been dismissed as a nit picking nerd, but today that question has become extremely important. Last week, the Wall Street Journal’s MoneyBeat blog carried an interesting story about how this difference cost a trader $100,000.
The official market close in the US is 4:00:00 pm, but the computers at Nasdaq keep humming for almost one second longer to reconcile all trades and determine the market closing price. About 150 milliseconds after 4:00 pm on December 5, the earnings announcement of Ulta Salon Cosmetics & Fragrance Inc. hit Business Wire, and within 50 milliseconds after that, a series of sell orders started hitting the market. When the market closed 700 milliseconds after 4:00 pm, the stock had fallen from $122 to $118.
The problem is that companies that want to release earnings after trading hours assume that trading stops at 4:00:00 pm, while smart traders know that the actual close is nearer to 4:00:01. That creates a profit opportunity for the fastest machine readable news feeds and the fastest trading algorithms. Traders are thinking in terms of milliseconds, but regulators are probably thinking in terms of minutes. Time for the regulators to catch up!
Posted at 5:51 pm IST on Thu, 13 Feb 2014 permanent link
Categories: exchanges, technology
Flaws in EMV (Chip and Pin) Card Security
Steven J. Murdoch and Ross Anderson have a fascinating paper entitled “Security Protocols and Evidence: Where Many Payment Systems Fail” (h/t Bruce Schneier). The paper proposes five principles to guide the design of good security protocols:
Principle 1: Retention and disclosure. Protocols designed for evidence should allow all protocol data and the keys needed to authenticate them to be publicly disclosed, together with full documentation and a chain of custody.
...
Principle 2: Test and debug evidential functionality. When a protocol is designed for use in evidence, the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses.
...
Principle 3: Open description of TCB [trusted computing base]. Systems designed to produce evidence must have an open specification, including a concept of operations, a threat model, a security policy, a reference implementation and protection profiles for the evaluation of other implementations.
...
Principle 4: Failure-evidentness. Transaction systems designed to produce evidence must be failure-evident. Thus they must not be designed so that any defeat of the system entails the defeat of the evidence mechanism.
...
Principle 5: Governance of forensic procedures. The forensic procedures for investigating disputed payments must be repeatable and be reviewed regularly by independent experts appointed by the regulator. They must have access to all security breach notifications and vulnerability disclosures.
EMV cards violate several of these principles and the authors propose several ideas to improve the evidential characteristics of the system. One idea is a cryptographic audit log of all transactions to be maintained by the card. A forward secure Message Authentication Code (MAC) would prevent a forger from inserting fake transactions in the past even with possession of the current audit key. Similarly, committing a hash chain over all past transactions would mean that a forger with knowledge of the audit key (but not the card itself) cannot insert fake transactions without inducing a discrepancy between the bank server log and the audit log on the genuine card. By putting the card into a forensic mode to retrieve the audit log, a customer would thus be able to demonstrate that the card was not present in a disputed transaction – presumably, the merchant and the bank will be left to figure out how to share the loss.
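A toy sketch of these two ideas (my own illustration, not the paper's design) might look like this:

```python
import hashlib
import hmac

def ratchet(key: bytes) -> bytes:
    """Derive the next audit key; the old key cannot be recovered from the new one."""
    return hashlib.sha256(b"ratchet" + key).digest()

def append_entry(chain_head: bytes, key: bytes, txn: bytes):
    """Extend the hash chain over all past transactions and MAC the new head."""
    new_head = hashlib.sha256(chain_head + txn).digest()
    tag = hmac.new(key, new_head, hashlib.sha256).digest()
    return new_head, tag, ratchet(key)

head, key = b"\x00" * 32, b"initial-card-audit-key"   # hypothetical initial state
audit_log = []
for txn in [b"POS 1200 INR merchant A", b"ATM 5000 INR", b"POS 300 INR merchant B"]:
    head, tag, key = append_entry(head, key, txn)
    audit_log.append((txn, head, tag))

# A forger who later obtains the *current* key cannot produce valid tags for
# earlier entries (those keys have been ratcheted away), and inserting a fake
# past transaction changes every subsequent chain head, creating a visible
# discrepancy between the genuine card's log and the bank server's log.
```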
One of the comments (by mike~acke) on Bruce Schneier’s blog points out that in today’s system, the card holder has to trust the merchant completely: “when you use your card: you are NOT authorizing ONE transaction: you are giving the merchant INDEFINITE UNRESTRICTED access to your account.” His solution is a simple though radical idea that removes the merchant from the trusted chain. (mike~acke’s comment below is probably easier to understand if you interpret POST to mean merchant and PCI to mean bank, though neither identification is completely correct.)
When the customer presents the card it DOES NOT send the customer’s card number to the POST. Instead, the POST will submit an INVOICE to the customer’s card. On customer approval the customer’s card will encrypt the invoice together with authorization for payment to the PCI (Payment Card Industry Card Service Center) for processing and forward the cipher text to the POST. Neither the POST nor the merchant’s computer can read the authorizing message because it is PGP encrypted for the PCI service. Therefore the merchant’s POST must forward the authorizing message cipher text to the PCI service center. On approval the PCI Service Center will return an approval note to the POST and an EFT from the customer’s account to the merchant’s account. The POST will then print the PAID invoice. The customer picks up the merchandise and the transaction is complete. The merchant never knows who the customer was: the merchant never has ANY of the customer’s PII data.
I like this idea and would like to extend it even to ATM cards. That way, we would never have to worry about inserting a card into a fake or compromised ATM, because our ATM card would not trust the ATM itself – it would talk directly to the bank server in encrypted messages that the ATM cannot understand. At the end of it all, the bank server would simply send a message to the ATM to dispense the cash.
Updated February 11, 2014 to insert block quotes and ellipses in quote from Murdoch-Anderson paper.
Posted at 10:47 am IST on Tue, 11 Feb 2014 permanent link
Categories: fraud, technology
To short the rupee go to London and Singapore
Rajan Goyal, Rajeev Jain and Soumasree Tewari have an interesting paper in the RBI Working Paper series on the “Non Deliverable Forward and Onshore Indian Rupee Market: A Study on Inter-linkages” (WPS(DEPR):11/2013, December 2013).
They use an error correction model (ECM) to measure the linkages between the onshore and offshore rupee markets. The econometric model tells a very simple story: in normal times, much of the price discovery happens in the onshore market, though there is a statistically significant information flow from the offshore market. But during a period of rupee depreciation, the price discovery shifts completely to the offshore market. (While the authors do not explicitly report Hasbrouck information shares or Gonzalo-Granger metrics, it seems pretty likely from the reported coefficients that the change in these measures from one regime to the other would be dramatic.)
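For readers who want the mechanics, a minimal two-step (Engle-Granger style) error correction sketch is below; the file name, variable names and specification are placeholders, not the authors' exact model:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily data with onshore and NDF (offshore) log exchange rates
df = pd.read_csv("rupee_rates.csv", parse_dates=["date"], index_col="date")
onshore, ndf = df["onshore"], df["ndf"]

# Step 1: long run (cointegrating) relationship between the two markets
long_run = sm.OLS(onshore, sm.add_constant(ndf)).fit()
ect = long_run.resid                     # error correction term

# Step 2: short run dynamics of the onshore rate
X = pd.DataFrame({
    "ect_lag": ect.shift(1),             # pull towards the offshore price
    "d_ndf_lag": ndf.diff().shift(1),    # information flowing in from offshore
    "d_onshore_lag": onshore.diff().shift(1),
}).dropna()
y = onshore.diff().loc[X.index]
ecm = sm.OLS(y, sm.add_constant(X)).fit()
print(ecm.summary())

# Large, significant coefficients on ect_lag and d_ndf_lag in the depreciation
# sub-sample would be the signature of price discovery shifting offshore.
```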
My interpretation of this result is that the exchange control system in India makes it very difficult to short the rupee onshore. The short interest emerges in the offshore market and is quickly transmitted to the onshore market via arbitrageurs who have the ability to operate in both markets:
- A hedge fund with a bearish view on the currency might short the rupee in the offshore market depressing the rupee in the offshore market.
- A foreign institutional investor with a relatively neutral view on the currency might buy the rupee (at a slightly lower price) in the offshore market from the bearish hedge fund.
- This foreign institutional investor might then offset its offshore long position with a short position (at a slightly higher price) in the onshore market (clothed as a hedge of its existing Indian assets). This would transmit the price drop from the offshore market to the onshore market with a small lag.
On the other hand, during the stable or appreciation phase, there is no need to short the rupee and divergent views on the rupee can be accommodated in the onshore market in the form of differing hedge propensities of exporters, importers and foreign currency borrowers.
Short sale restrictions in the onshore market have two perverse effects:
- They contribute to the migration of the currency market from onshore to offshore.
- They make currency crashes more likely because they prevent rational bearish investors from contributing to price discovery in the build up to the crash. (This is a standard argument about short sale restrictions: see for example Harrison Hong and Jeremy Stein (2003), “Differences of opinion, short-sales constraints, and market crashes”, Review of Financial Studies, 16(2), 487-525.)
Posted at 5:11 pm IST on Tue, 28 Jan 2014 permanent link
Categories: arbitrage, international finance
Rating Agencies: What changed in 2000s?
Consider three alternative descriptions of what happened to the big global rating agencies during the early 2000s:
- Kedia, Rajgopal and Zhou wrote a paper last year presenting evidence that the deterioration in the quality of Moody's credit ratings was due to its going public and the consequent pressure for increasing profits.
- Bo Becker and Todd Milbourn wrote a paper three years ago arguing that increased competition from Fitch coincides with lower quality ratings from the incumbents (S&P and Moody's).
- Way back in 2005, Frank Partnoy wrote a highly prescient paper describing the transformation of the rating industry since the 1990s that turned “gate keepers” into “gate openers”. He attributed the very high profitability of the gate openers to three things: (a) the regulatory licences that made ratings valuable even if they were uninformative, (b) the “free speech” immunity from civil and criminal liability for malfeasance and (c) the rapid growth of CDOs and structured finance.
I find Partnoy’s paper the most convincing despite its total lack of econometrics. The sophisticated difference-in-difference econometrics of the other two papers is, in my view, vitiated by reverse causation. When rating becomes “a much more valuable franchise than other financial publishing” as Partnoy showed, there would be greater pressure to do an IPO and also greater willingness to disregard any adverse reputational effects on other publishing businesses of the group. Similarly, the structural changes in the industry would invite greater competition from previously peripheral players like Fitch who happen to hold the same regulatory licence.
Posted at 11:04 am IST on Sun, 19 Jan 2014 permanent link
Categories: bond markets, corporate governance, regulation
Day dreaming about electronic money
Earlier this week, the Reserve Bank of India published the report of the Nachiket Mor Committee on financial inclusion (technically the Committee on Comprehensive Financial Services for Small Businesses and Low Income Households). Its first recommendation was that “By January 1, 2016 each Indian resident, above the age of eighteen years, would have an individual, full-service, safe, and secure electronic bank account.”
The Committee’s mandate was obviously to look at financial inclusion within the context of the current financial architecture and so it could not by any means have recommended a change in the core of that financial architecture itself. But for us sitting outside the Committee, there is no such constraint. We are entitled to day-dream about anything. So I would like to ask the question: if we were designing everything on a completely clean slate, what would we like to do?
Day dreaming begins here.
In my day dream, India would embrace electronic money and give every Indian an eWallet. Instead of linking India’s Unique ID (Aadhaar number) to a bank account, we would link it to an eWallet provided by the central bank. We would simultaneously move to abolish paper money by converting existing currency notes (with their famous “I promise to pay the bearer”) into genuine promissory notes redeemable in eRupees delivered into our eWallets. Financial inclusion would then have three ingredients: a Unique ID (Aadhaar) for everyone which is more or less in place now, the proposed eWallet for everyone, and a mobile phone for everyone. All eminently doable by 2016.
The costs of creating all the computing and communication infrastructure for a billion eWallets would be huge, but could be easily financed by a small cess on all paper money and bank money. The cess would also serve to incentivize a rapid shift to eRupees. (At some stage, we could even decide to make demand deposits illegal just like bearer demand promissory notes are illegal today, but I think that a ban would not be necessary at all.)
The operating costs of eRupees would be easily covered by the seigniorage income on the electronic money. Because of its greater convenience, safety and liquidity, eRupees should become at least as large as M2, and probably would grow to 25-30% of M3, making it about twice as large as paper money. The operating costs of eRupees should be significantly less than that of paper currency, and the seigniorage income much greater. The government would earn a fatter dividend from the Reserve Bank of India after covering all the cost of eRupees.
A huge chunk of the current banking infrastructure is now devoted to the useless paper shuffling activity that constitutes the current payment system. If this infrastructure is re-purposed to perform genuine financial intermediation, this would support much higher levels of economic growth. Divested of a payment system, the banks would be more like non bank finance companies and would pose far less systemic risk as well.
All this would allow India to leapfrog the rest of world and create the most advanced payment system on the planet (something like a Bitcoin backed by an army). In a world that struggles to ensure that systemically important settlement systems like clearing corporations settle in central bank money, we would have a system in which every individual could settle in central bank money. It is even possible that eRupees would find international adoption in the absence of any competition.
Day dreaming ends here.
Posted at 5:08 pm IST on Thu, 9 Jan 2014 permanent link
Categories: blockchain and cryptocurrency, technology
Tapering Talk: Why was India hit so hard?
Barry Eichengreen and Poonam Gupta have written a paper on how the “Tapering Talk” by the US Federal Reserve in mid 2013 impacted emerging markets.
In order to determine which countries were affected more severely, Eichengreen and Gupta construct a “Pressure Index” based on changes in the exchange rate and foreign exchange reserves. They also construct a Pressure Index 2 that also includes the impact on the stock market. By both measures, they find that India was the worst affected within a peer group of seven countries. The peer group includes all the countries that Morgan Stanley have called the Fragile Five (Brazil, India, Indonesia, South Africa and Turkey); in addition, it includes China and Russia. The Pressure Index 1 for India was 7.15 compared to a median of 3.46 for the peer group. Since the Indian stock market did not do too badly, the Pressure Index 2 for India was slightly better at 6.57 compared to a median of 4.63 for the peer group.
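As a rough illustration of how such an index works (my simplification with hypothetical numbers, not the authors' exact formula or data), the idea is to add up the pressure showing through the exchange rate and through reserve losses:

```python
import pandas as pd

# Stylized exchange market pressure index: depreciation of the currency plus
# the percentage fall in foreign exchange reserves over the taper-talk window.
# The weighting scheme and the numbers below are hypothetical, for illustration only.
data = pd.DataFrame({
    "depreciation_pct": {"Country A": 10.0, "Country B": 6.0, "Country C": 3.0},
    "reserve_loss_pct": {"Country A": 3.0,  "Country B": 1.0, "Country C": 0.5},
})
data["pressure_index"] = data["depreciation_pct"] + data["reserve_loss_pct"]
print(data.sort_values("pressure_index", ascending=False))
```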
Turning to why some countries were hit harder than others, the paper finds:
What mattered more was the size of their financial markets; investors seeking to rebalance their portfolios concentrated on emerging markets with relatively large and liquid financial systems; these were the markets where they could most easily sell without incurring losses and where there was the most scope for portfolio rebalancing. The obvious contrast is with so-called frontier markets with smaller and less liquid financial systems. This is a reminder that success at growing the financial sector can be a mixed blessing. Among other things, it can accentuate the impact on an economy of financial shocks emanating from outside
In addition, we find that the largest impact of tapering was felt by countries that allowed exchange rates to run up most dramatically in the earlier period of expectations of continued ease on the part of the Federal Reserve, when large amounts of capital were flowing into emerging markets. Similarly, we find the largest impact in countries that allowed the current account deficit to widen most dramatically in the earlier period when it was easily financed. Countries that used policy and in some cases, perhaps, enjoyed good luck that allowed them to limit the rise in the real exchange rate and the growth of the current account deficit in the boom period suffered the smallest reversals.
Clearly, India’s increasing integration with global financial markets imposes greater market discipline on our policy makers than they have been used to in the past.
Posted at 9:27 pm IST on Wed, 8 Jan 2014 permanent link
Categories: crisis, international finance