Prof. Jayanth R. Varma's Financial Markets Blog


Data access controls within banks

An order last month by the UK Financial Conduct Authority (FCA) against Barclays Bank highlights the problems faced by banks and other financial services firms in controlling the access that their employees have to customer data. I have long heard complaints about this: for example, some bank employees keep telling me that as soon as their bonus is paid to them, other employees with access to the core banking software can find out the exact quantum of this bonus.

Now we have confirmation that when one of the largest banks in the world wants to limit who can see the information about a customer, the best they can do is to go back to paper hard copies stored in a vault.

The FCA order refers to a £1.88 billion transaction that Barclays was doing for a group of ultra-high net worth Politically Exposed Persons (PEPs) who wanted a very high degree of confidentiality:

Prior to Barclays arranging the Transaction, Barclays agreed to enter into the Confidentiality Agreement which sought to keep knowledge of the Clients’ identity restricted to a very limited number of people within Barclays and its advisers. In the event that Barclays breached these confidentiality obligations, it would be required to indemnify the Clients up to £37.7 million. The terms of the Confidentiality Agreement were onerous and were considered by Barclays to be an unprecedented concession for clients who wished to preserve their confidentiality. (Para 4.11)

In view of these confidentiality requirements, Barclays determined that details of the Clients and the Transaction should not be kept on its computer systems. (Para 4.12)

Barclays decided to omit the names of the Clients from its internal electronic systems in order to comply with the terms of the Confidentiality Agreement. As a result, automated checks that would typically have been carried out against the Clients’ names were not undertaken. Such checks would have included regular overnight screenings of client names against sanctions and court order lists. If, for example, the Clients had become the subjects of law enforcement proceedings in any jurisdiction, Barclays could have been unaware of such a development. No adequate alternative manual process for carrying out such checks was established by Barclays. (Para 4.49)

Some documents relating to the Business Relationship were held by Barclays in hard copy in a safe purchased specifically for storing information relating to the Business Relationship. This was Barclays’ alternative to storing the records electronically. While there is nothing inherently wrong with keeping documents in hard copy, they must be easily identifiable and retrievable. However, few people within Barclays knew of the existence and location of the safe. (Para 4.52)

I am sure that the 130,000 clients of HSBC Private Bank in Switzerland (now accused of evading taxes in their home countries) wish that their data too had been kept in paper form in a vault beyond the reach of Falciani’s hacking skills.

More seriously, banks need to rethink the way they maintain customer confidentiality. With anywhere banking, far too many employees have access to the complete data of every customer. A lot of progress can be made with some very simple access control principles:

  1. Every access to customer information must be logged to provide a detailed audit trail of who, when, what and why. Ideally, the customer should have access to a suitably anonymized form of these logs.

  2. Every access must require justification in terms of a specific task falling within the accessor's job profile.

  3. Every access request should only result in the minimal information required to complete the task for which the access is requested.

For example, a customer comes to a branch (assuming such archaic things still exist) for a cash withdrawal. The cashier requests access by providing details of the requested withdrawal; and the system accepts the request because it is part of the cashier's job to process these withdrawals (Principle #2). The system responds with only a yes or a no: either the customer has sufficient balance to allow this withdrawal or not. The actual balance is not provided to the cashier (Principle #3). It should be emphasized that without Principles #1 and #2, the cashier could make repeated queries with different hypothetical withdrawal amounts and guess the true balance within a relatively small range using what computer scientists would recognize as binary search.
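To see how quickly such guessing converges, here is a minimal sketch in Python (all amounts and limits are hypothetical, and the functions are my own illustration, not any banking system's API):

```python
# How a cashier could infer a balance from yes/no answers alone if
# accesses were neither logged (Principle #1) nor tied to a real task
# (Principle #2). All numbers are hypothetical.

SECRET_BALANCE = 736_452          # known to the system, not the cashier

def can_withdraw(amount):
    # The system reveals only yes or no (Principle #3).
    return amount <= SECRET_BALANCE

def guess_balance(lo=0, hi=10_000_000):
    queries = 0
    while hi - lo > 1:            # invariant: lo <= balance < hi
        mid = (lo + hi) // 2
        queries += 1
        if can_withdraw(mid):
            lo = mid              # balance is at least mid
        else:
            hi = mid              # balance is below mid
    return lo, queries

print(guess_balance())            # finds 736452 in about 24 queries
```

Two dozen unlogged, unjustified queries are enough to recover the exact balance, which is why the audit trail and the task justification are not optional extras.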

In my view, access controls are easy to implement if banks decide to prioritize (or regulators decide to enforce) customer confidentiality. However, access controls have their limits, and cryptographic tools are indispensable for achieving more complex objectives. Banks need to promote further research into these tools in order to make them usable for their needs:

I think the time has come for consumers and regulators to start demanding that banks pay greater attention to customer confidentiality. Actually, there is a similar problem in regulatory and self-regulatory organizations. For example, the surveillance staff in a stock exchange (and in the capital market regulator) have access to too much information and there is immense scope for abuse of this information. Mathematics (in the form of cryptography) gives us the tools required to solve many of these problems; we just need the will to use these tools.

Posted at 5:04 pm IST on Sun, 20 Dec 2015         permanent link

Categories: banks, technology


HBOS: An old fashioned bank failure

Most of the bank failures of the Global Financial Crisis involved complex products or an excessive reliance on markets rather than good old banking relationships. The HBOS failure as described in last month's 400-page report by the UK regulators (PRA and FCA) is quite different. One could almost say that this was a German- or Japanese-style relationship bank.

The report describes the approach of the Corporate Division where most of the losses arose:

The often-quoted approach of the division was to be a relationship bank that would ‘lend through the cycle’. Elsewhere the division’s approach had been called ‘counter-cyclical’. This was described as standing by and supporting existing customers through difficult times, while continuing to lend to those good opportunities that could be found. The division claimed it had a deep knowledge of the customers and markets in which it operated, which would enable it to pursue this approach with minimal threat to the Group. It was an approach that was felt to have served BoS well in the early 1990s downturn. (Para 274)

What could go wrong with such old fashioned banking? The answer is very simple:

Taking into account renting, hotels and construction, the firm’s overall exposure to property and related assets increases to £68 billion or 56% of the portfolio. (Para 285)

And in some ways, relationship banking made things worse:

The top 30 exposures included a number of individual high-profile businessmen. Many of these had been customers of the division for many years, some going back to the BoS pre-merger. True to the division’s banking philosophy, it had supported these customers as they grew and expanded their businesses. However, business growth and expansion sometimes meant a change in business model to become significant property investors; not necessarily the original core business and expertise of the borrower. In the crisis, a number of these businessmen, though not all, incurred losses on their property investments. (Para 318)

When you as a bank lend a big chunk of your balance sheet into a bubble, it does not matter whether you are a transaction bank or a relationship bank: you are well on your way to failure. (If you do not want to jump to conclusions based on one bank, a recent BIS Working Paper on US commercial banks studies all bank failures in the US during the Great Recession and comes to a very similar conclusion).

Posted at 10:04 pm IST on Sat, 12 Dec 2015         permanent link

Categories: banks, failure, investigation


In the sister blog and on Twitter during October and November 2015

The following posts appeared on the sister blog (on Computing) during the last two months.

Tweets during the last two months (other than blog post tweets):

Posted at 1:41 pm IST on Tue, 1 Dec 2015         permanent link

Categories: technology


Potential self-trades are worse than actual self-trades

Update: While linking to Ajay Shah's blog for a summary of global regulatory regimes on self trades, I failed to mention that the particular post that I was referring to was authored not by Ajay Shah, but by Nidhi Aggarwal, Chirag Anand, Shefali Malhotra, and Bhargavi Zaveri.

Imagine that you are bidding at an auction and after a few rounds, most bidders have dropped out and you are left bidding against one competing bidder who pushes you to a very high winning bid before giving up. Much later you find that the competing bidder who forced you to pay close to your reservation price was an accomplice of the seller. You would certainly regard that as fraudulent; and many well-run auction houses have regulations preventing it. Observe that the seller did not actually sell to himself; in fact there would have been no fraud (and no profit to the seller) if he actually did so. The seller defrauded you not by an actual (disguised) self-trade but by a (disguised) potential self-trade that did not actually happen. In fact, the best of auction houses do not prohibit actual self-trades: when the auction does not achieve the seller’s (undisclosed) reserve price, they allow the item to be “bought in” (the seller effectively buys the item from himself). So the lesson from well-run auction houses is that potential self-trades (which do not happen) are much more dangerous than actual self-trades.

In the financial markets, we have lost sight of this basic intuition and focused on preventing actual self-trades instead of limiting potential self-trades. India goes overboard on this by regarding all self-trades as per se abusive. Most other countries also frown on self-trades but do not penalize bona fide self-trades; they take action only against self-trades that are manipulative in nature. However, they too regard frequent self-trades as suggestive of manipulative intent (see Ajay Shah for a nice summary of these regulatory regimes). Many exchanges and commercial software packages around the world therefore now provide automated methods of preventing self-trades: when an incoming order by an entity would execute against a pre-existing order on the opposite side by the same entity, these automated procedures cancel either the incoming order or the resting order or both.

A little reflection on the auction example would show that the whole idea of automated self-trade prevention is an utterly misguided response to an even more misguided regulatory regime. Manipulation does not happen when the trade is executed: it happens when the order is entered into the system. The first sign that regulators are beginning to understand this truth is the complaint that the US Commodity Futures Trading Commission (CFTC) filed against Oystacher and others last month. Para 53 of the complaint states:

Oystacher and 3 Red manually traded these futures markets, using a commercially available trading platform, which included a function called “avoid orders that cross.” The purpose of this function is to prevent a trader’s own orders from matching with one another. Defendants exploited this functionality to place orders which automatically and almost simultaneously canceled existing orders on the opposite side of the market (that would have matched with the new orders) and thereby effectuated their manipulative and deceptive spoofing scheme ...
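The mechanics are easy to sketch. The following toy model is my own illustration (not any platform's actual logic, and matching between different traders is omitted): it implements self-trade prevention in its "cancel resting orders" flavour, and shows how one genuine order can wipe out a wall of the same trader's fake orders, exactly the pattern the complaint describes.

```python
# Toy self-trade prevention (STP) in "cancel resting orders" mode: an
# incoming order cancels the same trader's resting orders on the
# opposite side instead of trading with them. Illustration only.

class Book:
    def __init__(self):
        self.resting = []                        # (trader, side, qty, price)

    def submit(self, trader, side, qty, price):
        opposite = 'sell' if side == 'buy' else 'buy'
        kept = []
        for (t, s, q, p) in self.resting:
            would_match = (s == opposite and
                           ((side == 'buy' and p <= price) or
                            (side == 'sell' and p >= price)))
            if would_match and t == trader:
                # STP: cancel own resting order rather than self-trade.
                print(f"STP cancels {t}'s resting {s} {q}@{p}")
                continue
            kept.append((t, s, q, p))
        kept.append((trader, side, qty, price))
        self.resting = kept

book = Book()
for _ in range(3):                               # wall of fake bids
    book.submit('spoofer', 'buy', 1000, 99)
# One opposite order cancels all the fake bids "automatically and
# almost simultaneously", leaving a genuine sell order in their place.
book.submit('spoofer', 'sell', 3000, 99)
```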

Far from preventing manipulation, automated self-trade prevention software is actually facilitating market manipulation. This might appear counterintuitive to many regulators, but it is not at all surprising when one thinks through the auction example.

Posted at 1:47 am IST on Mon, 30 Nov 2015         permanent link

Categories: exchanges, regulation


Creditor versus Creditor and Creditor versus Debtor

In India, for far too long, bankruptcy has been a battle between creditor and debtor with the dice loaded against the creditor. In its report submitted earlier this month, the Bankruptcy Law Reforms Committee (BLRC) proposes to change all this with a fast track process that puts creditors in charge. It appears to me however that the BLRC ignores the fact that in well functioning bankruptcy regimes, the fight is almost entirely creditor versus creditor: it is very much like the familiar scene in the Savannah where cheetahs, lions, hyenas and vultures can be seen fighting over the carcass, which has no say in the matter.

The BLRC ignores this inter-creditor conflict completely and treats unsecured financial creditors as a homogeneous group; it believes that everything can be decided by a 75% vote of the Creditors Committee. In practice, this is not the case. Unsecured financial creditors can be senior or junior, and multiple levels of subordination are possible. Moreover, the bankruptcy of any large corporate entity involves several levels of holding companies and subsidiary companies, which also creates an implicit subordination among different creditors, made more complex by inter-company guarantees.

Consider for example, the recommendation of the BLRC that:

The evaluation of these proposals come under matters of business. The selection of the best proposal is therefore left to the creditors committee which form the board of the erstwhile entity in liquidation. (p 100)

If the creditors are homogeneous, this makes eminent sense. The creditors are the players with skin in the game and they should take the business decisions. The situation is much more complex and messy with heterogeneous creditors. Suppose for example that a company has 60 of senior debt and 40 of junior debt and that the business is likely to be sold for something in the range of 40-50. In this situation, the junior creditors should not have any vote at all: like the equity shareholders, they too are part of the carcass in the Savannah which others are fighting over. On the other hand, if the expected sale proceeds are 70-80, then the senior creditors should not have a vote at all. The senior creditors have no skin in the game because it makes absolutely no difference to them whether the sale fetches 70 or 80; they get their money in any case. They are like the lion that has had its fill and leaves it to lesser mortals to fight over what is left of the carcass.

The situation is made more complex by the fact that in practice the value of the proposals is not certain, and the variance matters as much as the expected value. A junior creditor’s position is often similar to that of the holder of an out-of-the-money option – it tends to prefer proposals that are highly risky. Much of the upside of a risky sale plan may flow to the junior creditor, while most of the downside may be to the detriment of the senior creditor.
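A back-of-the-envelope calculation makes the divergence concrete. The 60/40 split comes from the example above; the two sale plans and their probabilities are purely illustrative:

```python
# Waterfall payoffs with 60 of senior debt and 40 of junior debt.
import statistics

def payoffs(sale_value, senior=60, junior=40):
    to_senior = min(sale_value, senior)
    to_junior = min(max(sale_value - senior, 0), junior)
    return to_senior, to_junior

plans = {
    'safe sale':  [70],          # worth 70 for certain
    'risky sale': [40, 100],     # 50/50 outcomes, expected value also 70
}
for name, outcomes in plans.items():
    s = statistics.mean(payoffs(v)[0] for v in outcomes)
    j = statistics.mean(payoffs(v)[1] for v in outcomes)
    print(f"{name}: senior expects {s}, junior expects {j}")

# safe sale:  senior 60, junior 10
# risky sale: senior 50, junior 20
# Same expected sale value, but the junior creditor doubles its
# expected recovery by voting for risk, at the senior's expense.
```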

Another recommendation of the BLRC that I am uneasy about is the stipulation that operational creditors should be excluded from the decision making:

The Committee concluded that, for the process to be rapid and efficient, the Code will provide that the creditors committee should be restricted to only the financial creditors. (p 84)

Suppose for example that Volkswagen’s liabilities to its cheated customers were so large as to push it into bankruptcy. Would it make sense not to give these “operational creditors” a seat at the table? What about the bankruptcy of an electric utility whose nuclear reactor has suffered a core meltdown?

Posted at 5:59 pm IST on Mon, 16 Nov 2015         permanent link

Categories: bankruptcy


Distrust and cross-check

I have a piece in today’s Mint arguing that the Volkswagen emission scandal is a wake-up call for all financial regulators worldwide:


The implications of big firms such as Volkswagen using software to cheat their customers go far beyond a few million diesel cars

The Volkswagen emissions scandal challenges us to move beyond Ronald Reagan’s favourite Russian proverb “trust but verify” to a more sceptical attitude: “distrust and cross-check”.

A modern car is reported to contain a hundred million lines of code to deliver optimised performance. But we learned last month that all this software can also be used to cheat. Volkswagen had cheating software in its diesel cars so that the cars appeared to meet emission standards in the lab while switching off the emission controls on the road to deliver fuel economy.

The shocking thing about Volkswagen is that (unlike, say, Enron), it is not perceived to be a significantly more unethical company than its peers. Perhaps, the interposition of software makes the cheating impersonal, and allows managers to psychologically distance themselves from the crime. Individuals who might hesitate to cheat personally might have fewer compunctions about authorizing the creation of software that cheats.

The implications of big corporations using software to cheat their customers go far beyond a few million diesel cars. We are forced to ask whether, after Volkswagen, any corporate software can be trusted. In this article, I explore the implications of distrusting the software used by big corporations in the financial sector:

Can you trust your bank’s software to calculate the interest on your checking account correctly? Or might the software be programmed to check your Facebook and LinkedIn profiles to deduce that you are not the kind of person who checks bank statements meticulously, and then switch on a module that computes the interest due to you at a lower rate?

Can you be sure that the stock exchange is implementing price-time priority rules correctly or might the software in the order matching engine be programmed to favour particular clients?

Can you trust your mutual funds’ software to calculate Net Asset Value (NAV) correctly? Or might the software be programmed to understate the NAV on days when there are large redemptions (and the mutual fund is paying out the NAV) while overstating the NAV on days of large inflows when the mutual fund is receiving the NAV?

Can you be sure that your credit card issuer has not programmed the software to deliberately add surcharges to your purchases? Perhaps, if you complain, the surcharges will be promptly reversed, but the issuer makes a profit from those who do not complain.

Can you trust the financials of a large corporation? Or could the accounting software be smart enough to figure out that it is the auditor who has logged in, and accordingly display a set of numbers different from what the management sees?

After Volkswagen, these fears can no longer be dismissed as mere paranoia. The question today is how can we, as individuals, protect ourselves against software-enabled corporate cheating? The answer lies in open source software and open data. Computing is cheap, and these days each of us walks around with a computer in our pocket (though, we choose to call it a smartphone instead of a computer). Each individual can, therefore, well afford to cross-check every computation if (a) the requisite data is accessible in machine-readable form, and (b) the applicable rules of computation are available in the form of open source software.

Financial sector regulations today require both the data and the rules to be disclosed to the consumers. What the rules do not do is require the disclosures to be computer-friendly. I often receive PDF files from which it is very hard to extract data for further processing. Even where a bank allows me to download data as a text or CSV (comma-separated value) file, the column order and format change often, and the processing code needs to be modified every time this happens. This must change. It must be mandatory to provide data in a standard format or in an extensible format like XML. Since the data anyway comes from a computer database, the bank or financial firm can provide machine-readable data to the consumer at negligible cost.

When it comes to rules, disclosure is in the form of several pages of fine-print legalese. Since the financial firm anyway has to implement the rules in computer code, there is little cost to requiring that the computer code be made freely available to the consumer. It could be Python code, as the US SEC proposed five years ago in the context of mortgage-backed securities (http://www.sec.gov/rules/proposed/2010/33-9117.pdf), or it could be in any other open source language that does not require the consumer to buy an expensive compiler to run the code.
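To make this concrete, here is the sort of cross-check I have in mind, with the rule published as runnable code and the statement in a stable machine-readable layout. The file name, column names and the 4% rate are all hypothetical, a sketch rather than any bank's actual format:

```python
import csv

# Hypothetical published rule: savings interest at 4% p.a., accrued on
# each day's closing balance. If the bank publishes this code, any
# consumer can re-run it on their own statement.
DAILY_RATE = 0.04 / 365

def interest_due(closing_balances):
    return sum(b * DAILY_RATE for b in closing_balances)

# Hypothetical standard statement layout: date,closing_balance
with open('statement.csv') as f:
    balances = [float(row['closing_balance']) for row in csv.DictReader(f)]

print(f"interest I compute: {interest_due(balances):.2f}")
# Any mismatch with the interest actually credited is a question for
# the bank (or the regulator).
```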

In the battle between the consumer and the corporation, the computer is the consumer’s best friend. Of course, the big corporation has far more powerful computers than you and I do, but it needs to process data of millions of consumers in real time. You and I need to process only one person’s data and that too at some leisure and so the scales are roughly balanced if only the regulators mandate that corporate computers start talking to consumers’ computers.

Volkswagen is a wake-up call for all financial regulators worldwide. I hope they heed the call.

Posted at 11:38 am IST on Mon, 19 Oct 2015         permanent link

Categories: regulation


Twitter or Newswires: Are regulators behind the curve?

Last week, I read two stories that made me wonder how far behind the curve regulators are when it comes to new media.

First, Business Insider reported that after the newswire hacking scandal (which I blogged about last month), Goldman Sachs was considering announcing its earnings on Twitter instead of on the newswires. Of course, such reports are often speculative and nothing may come of it, but it indicates that at least some organizations are taking the new media seriously.

Second was an amendment to the New York Stock Exchange (NYSE) rules on how companies should release news to the public (h/t CLS Blue Sky Blog):

Currently, section 202.06(C) ... on the best way to release material news ... is outdated as it refers to, among other things, the release of news by telephone, facsimile or hand delivery. Instead, the Exchange proposes ... that listed companies releasing material news should either (i) include the news in a Form 8-K or other Commission filing, or (ii) issue the news in a press release to the major news wire services.

The regulators have finally decided to shift from obsolete media to the old media; the new media is not even on the horizon.

Posted at 9:41 pm IST on Tue, 13 Oct 2015         permanent link

Categories: regulation, technology


Interview in Bloomberg TV

Bloomberg TV carried an interview with me last week. The video is available at the channel’s website. Among several other things, the interview also covered the Amtek Auto episode that I have blogged about in the past. I argued that Amtek Auto is unlikely to be the last episode of distressed corporate bonds in mutual fund portfolios, and we need to be more proactive in future.

Posted at 6:09 pm IST on Mon, 12 Oct 2015         permanent link

Categories: bond markets, mutual funds, regulation


Are large fund managers problematic?

Last month, I read four seemingly unrelated papers which all point towards problems posed by large fund managers.

  1. Ben-David, Franzoni, Moussawi and Sedunov (The Granular Nature of Large Institutional Investors) show that the stocks owned by large institutions exhibit stronger price inefficiency and are also more volatile. They also study the impact of Blackrock’s acquisition of Barclays Global Investors (which the authors for some strange reason choose to identify only as “a mega-merger between two large institutional investors that took place at the end of 2009”). Post merger, the ownership of stocks which was spread across two fund managers became concentrated in one fund manager. The interaction term in their regression results shows that this concentration increased the volatility of the stocks concerned. On the mispricing front, they show that the autocorrelation of returns is higher for stocks that are held by large institutional investors; and that stocks with common ownership by large institutions display abnormal co-movement. They also show that negative news about the fund manager (an increase in its CDS spread) leads to an increase in the volatility of stocks owned by that fund manager.

  2. Israeli, Lee and Sridharan (Is There a Dark Side to Exchange Traded Funds (ETFs)? An Information Perspective) find that stocks that are owned by Exchange Traded Funds (ETFs) suffer a decline in pricing efficiency: higher trading costs (measured as bid-ask spreads and price impact of trades); higher co-movement with general market and industry returns; a decline in the predictive power of current returns for future earnings; and a decline in the number of analysts covering the firm. They hypothesize that ETF ownership reduces the supply of securities available for trade, as well as the number of uninformed traders willing to trade these securities. Much the same factors may be behind the results found by Ben-David, Franzoni, Moussawi and Sedunov.

  3. Clare, Nitzsche and Motson (Are Investors Better Off with Small Hedge Funds in Times of Crisis?) argue that on average investors were better off investing with a small hedge fund instead of a large one in times of crisis (the dot com bust and the global financial crisis). They speculate that bigger hedge funds might attract more hot money (fund of funds) which might lead to large redemptions during crises. Smaller hedge funds might have less flighty investors and more stringent gating arrangements. Smaller hedge funds might also have lower beta portfolios.

  4. Elhauge (Horizontal Shareholding as an Antitrust Violation) focuses on problems in the real economy rather than in the financial markets. The argument is that when a common set of large institutions own significant shares in firms that are horizontal competitors in a concentrated product market, these firms are likely to behave anticompetitively. Elhauge discusses the DuPont-Monsanto situation to illustrate his argument. The top four shareholders of DuPont are also four of the top five shareholders in Monsanto, and they own nearly 20% of both companies. The fifth largest shareholder of DuPont, the Trian Fund, which did not own significant shares in Monsanto, launched a proxy contest criticizing DuPont management for failing to maximize DuPont profits. In particular, Trian complained that DuPont entered into a reverse payment patent settlement with Monsanto whereby, instead of competing, DuPont paid Monsanto for a license to use Monsanto’s patent. Trian’s proxy contest failed because it was not supported by the four top shareholders of DuPont who stood to gain from maximizing the joint profits of DuPont and Monsanto. I thought it might be useful for the author to compare this situation with the cartelization promoted by the big investment banks in the 19th-century US or by the big banks in early 20th-century Germany or Japan.

Posted at 4:50 pm IST on Thu, 8 Oct 2015         permanent link

Categories: mutual funds


Negative interest rates wreak havoc with finance textbooks

By assuming non-negative interest rates, finance textbooks arrive at many results that are false in a world of negative rates. Finance theory does not rule out negative rates – theory only requires bond prices to be non-negative, which merely prevents interest rates from dropping below −100%. In practice too, early 2015 saw interest rates go negative in many countries. The BIS 2015 Annual Report (Graph II.6, page 32) shows negative ten-year yields in Switzerland, and negative five-year yields in Germany, France, Denmark and Sweden in April 2015.
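The −100% bound is just the requirement that the discount factor stay positive. A quick illustration with a zero-coupon bond (my own numbers):

```python
# Price of a zero-coupon bond with face value 100 maturing in T years:
# positive for any yield r > -100%; a negative yield simply prices the
# bond above par.
def zcb_price(r, T, face=100):
    return face / (1 + r) ** T

print(zcb_price(0.02, 5))     # ~90.57: the familiar textbook case
print(zcb_price(-0.005, 5))   # ~102.54: Swiss-style negative yield
print(zcb_price(-0.99, 1))    # 10000.0: absurd yield, still a valid price
```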

Let us take a look at how many textbook results are no longer valid in this world:

Posted at 1:20 pm IST on Sun, 4 Oct 2015         permanent link

Categories: derivatives, monetary policy


US corporate disclosure delays

Corporate disclosure rules in the US still permit long delays more appropriate to a bygone age before technology speeded up everything from stock trading to instant messaging. Cohen, Jackson and Mitts wrote a paper earlier this month arguing that substantial insider trading occurs during the four-business-day window available to companies to disclose material events. The paper studied over forty thousand trades by insiders that occurred on or after the event date and before the filing date; the analysis demonstrates that these trades (which may be quite legal) were highly profitable.

Cohen, Jackson and Mitts also document that companies do usually disclose information much earlier than the legal deadline: about half of the disclosures are made on the same day; and large firms are even more prompt in their filing. But nearly 15% of all filings use the full four-day delay that is available. In the early 2000s, after the Enron scandal, the US SEC tried to reduce the window to two days, but gave up in the face of intense opposition. I think the SEC should require each company to monitor the median delay between the event and the filing, and provide an explanation if this median delay exceeds one day. Since there are on average about four filings per company per year, it should be feasible to monitor the timeliness over a rolling three-year period.
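The monitoring itself would be trivial to implement. A sketch (the data and column names are hypothetical):

```python
import pandas as pd

# One row per material event: when it occurred, when the 8-K was filed.
filings = pd.DataFrame({
    'company':     ['A', 'A', 'A', 'B', 'B', 'B'],
    'event_date':  pd.to_datetime(['2015-01-05', '2015-04-02', '2015-07-01',
                                   '2015-01-09', '2015-04-08', '2015-07-06']),
    'filing_date': pd.to_datetime(['2015-01-08', '2015-04-06', '2015-07-03',
                                   '2015-01-09', '2015-04-09', '2015-07-06']),
})
filings['delay_days'] = (filings['filing_date'] - filings['event_date']).dt.days

# Median delay per company (over a rolling three-year window in my
# proposal; all rows here for brevity): explain if it exceeds one day.
median_delay = filings.groupby('company')['delay_days'].median()
print(median_delay[median_delay > 1])   # companies owing an explanation
```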

Another troubling thing about the US system is the use of press releases as the primary means of disclosure. Last month, the SEC filed a complaint against a group of traders and hackers who stole corporate press releases from the web site of the newswire agencies before their public release. What I found most disturbing about this case was that the SEC went out of its way to emphasize that the newswire agencies were not at fault; in fact, the SEC redacted the names of the agencies (though it was not at all hard for the media to identify them). Companies disclose material events to a newswire several hours before the scheduled time of public release of this information by the newswire; the newswire agencies are not regulated by the SEC; they are not required to encrypt market sensitive data during this interregnum; there are no standards on the computer security measures that the newswires are required to take during this period; a group of relatively unsophisticated hackers had no difficulty hacking the newswire websites repeatedly over a period of five years. And the SEC thinks that no changes are required in this anachronistic system.

Posted at 4:07 pm IST on Sun, 27 Sep 2015         permanent link

Categories: regulation


Could payments banks eat the private banks' CASA lunch?

The Reserve Bank of India (RBI) has granted “in principle” approval to eleven new payment banks and has also promised to license more in future. Many of the licensees could prove to be fierce competitors because of their deep pockets and strong distribution networks. For the incumbent banks, the most intense competition from the new entrants will probably be for the highly profitable Current and Savings Accounts (CASA) deposits which are primarily meant for payments. And it is here that the complacency of the incumbents could provide an opening to the new payment banks.

In the late 1990s and early 2000s, new generation private banks innovated on technology and customer service and gained significant market share from the public sector banks. However, in recent years, some complacency seems to have set in; customer service has arguably deteriorated even as fees have escalated. Public sector banks have caught up with them on ATM and online channels; and in any case these channels are rapidly being overtaken by mobile and other platforms. In fact, India may not need any more ATMs at all.

In this competitive landscape, payment banks could gain significant market share if they are sufficiently innovative and provide better customer service than the incumbents. Unlike mainstream banks which have to worry about investments and advances and lots of other things, payment banks can be totally focused on serving retail customers. Since their survival would depend on this sharp focus, there is every likelihood that they would turn out to be more nimble and innovative in this segment.

The ₹100,000 limit on balances at the payment banks means that initially it would be the rural CASA that would be at risk. But if payment banks do a good job, the limit may be raised to a much larger level (maybe ₹500,000) over a few years. At that point, urban CASA will also be at risk of migration. It will be easy for RBI to raise the limit because the balances have to be invested in Government Securities and so customer money is subject only to operational risk.

eWallets could prove to be another competitive weapon in attacking the urban CASA segment. Large segments of the Indian population are uncomfortable with online credit card usage and with netbanking. A few years ago, eCommerce firms in India used Cash on Delivery (COD) to gain acceptance. However, COD is not scalable and it is breaking down for various reasons. In the last year or so, eWallets have begun to replace COD, and these too could pose a threat to traditional payment services. All banks are trying to launch eWallets and mobile banking apps, but I am not sure that traditional banks have a competitive advantage here. In fact, a customer who is worried about online security might well prefer to have an eWallet with a small balance for online transactions instead of exposing his or her main bank account to the internet. In this context, the payment banks may find that the ₹100,000 limit does not pose a competitive disadvantage at all.

All this is of course good news for the customer.

Posted at 6:56 pm IST on Wed, 9 Sep 2015         permanent link

Categories: banks, regulation


Amtek Auto and Mutual Funds: Use Side Pockets, not Gates

JPMorgan Mutual Fund has gated (restricted redemptions from) two of its debt funds that have a large exposure to Amtek Auto, which is in distress. A gate is better than nothing, but it is inferior to a side pocket. I would like to quote from a proposal that I made in a blog post that I wrote in October 2008 when the NAVs of many debt-oriented mutual funds were not very credible:

At the very least what is required today is a partial redemption freeze to ensure that nobody is able to redeem units of mutual funds at above the true NAV of the fund. Anybody who wants to redeem should be paid 70% or 80% of the published NAV under the assumption that the true NAV would not be below this. The balance should be paid only after the true NAV is credibly determined through asset sales.

Unlike the generalized distress of 2008, what the JPMorgan funds are facing today is distress limited to a single large exposure. According to the July portfolio statement, Amtek Auto was about 15% of the NAV of the Short Term Income Fund. Even if this is valued at zero, the fund can pay out 85% of the NAV to everybody. (For the India Treasury Fund, Amtek is only 5% of NAV, so the fund can pay out 95%). Essentially, my proposal is what is known in the hedge fund world as a side pocket: the holding in Amtek Auto should go into a separate side pocket until it is liquidated and the value is realized. The rest of the money would remain in the normal mutual fund which would be open for unrestricted redemption (as well as for fresh investment).
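The arithmetic of the split is simple. A sketch using the 15% figure from the July statement (the per-unit NAV of 20 is purely illustrative):

```python
# Split each unit's NAV into a freely redeemable part and a side
# pocket holding the distressed Amtek Auto exposure.
def side_pocket_split(nav_per_unit, distressed_fraction):
    side = nav_per_unit * distressed_fraction
    return nav_per_unit - side, side

free, side = side_pocket_split(20.0, 0.15)
print(f"redeemable now: {free:.2f} per unit, side pocket: {side:.2f}")
# -> redeemable now: 17.00 per unit, side pocket: 3.00
# Redemptions (and fresh purchases) continue at the free NAV; the side
# pocket pays out to all unitholders when the Amtek holding is realized.
```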

The gate has two big disadvantages:

  1. The gate is not total: redemptions are not stopped, they are only restricted to 1%. This means that some redemptions are taking place at a wrong value. The money that is being paid out to this 1% is partly stolen from the remaining investors.

  2. The gate rewards the mutual fund for its own incompetence. A fund which has made a bad investment choice would be punished in the marketplace by a wave of redemptions. That is the competitive dynamic that encourages mutual funds to perform due diligence on their investments. A gate stops the redemptions and shields the fund from this punishment.

It is possible that the mutual fund offer document might not contain a provision for a side pocket. But the Securities and Exchange Board of India (SEBI) as the regulator certainly has the power to issue directions to the fund to use this method. Let us see whether it acts and acts quickly.

Posted at 9:59 pm IST on Sun, 6 Sep 2015         permanent link

Categories: bond markets, mutual funds, regulation


In the sister blog and on Twitter during August 2015

The following posts appeared on the sister blog (on Computing) last month.

Tweets during the last month (other than blog post tweets):

Posted at 12:35 pm IST on Sat, 5 Sep 2015         permanent link

Categories: technology


SMS does not provide true two-factor authentication

I am a strong supporter of two-factor authentication (2FA), and I welcomed the idea of a one-time password sent by SMS when it was introduced in India a few years ago. But gradually I have become disillusioned because SMS is not true 2FA.

Authentication is a problem that humanity has faced for centuries; and long before computers were invented, several authentication methods were developed and adopted. Two widely used methods are nicely illustrated by two different stories in the centuries-old collection Arabian Nights. The first method is to authenticate with something that you know, like Open Sesame in Ali Baba and the Forty Thieves. The Ali Baba story describes how the secret password is easily stolen during the process of authentication itself. What is worse is that while we would quickly detect the theft of a physical object, the theft of a secret password is not detected unless the thief does something stupid like Ali Baba’s brother did in the story.

The second method is to authenticate with something that you have, and its problems are eloquently portrayed in the story about Aladdin’s Wonderful Lamp. In the Aladdin story, the lamp changes hands involuntarily at least four times; physical keys or hardware tokens can also be stolen. The problem is that while you can carry “what you know” with you all the time (if you have committed it to memory), you cannot carry “what you have” with you all the time. When you leave it behind, you may (like Aladdin) find on your return that it is gone.

Clearly, the two methods – “what you know” and “what you have” – are complementary in that one is strong where the other is weak. Naturally, centuries ago, people came up with the idea of combining the two methods. This is the core idea of 2FA – you authenticate with something that you have and with something that you know. An interesting example of 2FA can be found in the Indian epic, the Ramayana. There is an episode in this epic where Rama sends a messenger (Hanuman) to his wife Sita. Since Hanuman was previously unknown to Sita, there was clearly a problem of authentication to be solved. Rama gives some personal ornaments to Hanuman which he could show to Sita for the “what you have” part of 2FA. But Rama does not rely on this alone. He also narrates some incidents known only to Rama and Sita to provide the “what you know” part of 2FA. The Ramayana records that the authentication was successful in a hostile environment where Sita regarded everything with suspicion (because her captors were adept in various forms of sorcery).

In the digital world, 2FA relies on a password for the “what you know” part and some piece of hardware for the “what you have” part. In high value applications, a hardware token – a kind of electronic key – is common. While it is vulnerable to man-in-the-middle (MitM) attacks, I like to think of this as reasonably secure (maybe I am just deluded). The kind of person who can steal your password is probably sitting in Nigeria or Ukraine, while the person who can steal your hardware must be living relatively close by. The skill sets required for the two thefts are quite different and it is unlikely that the same person would have both skill sets. The few people like Richard Feynman who are equally good at picking locks and cracking the secrets of the universe hopefully have better things to do in life than hack into your bank account.

The SMS-based OTP has emerged as the poor man’s substitute for a hardware token. The bank sends you a text message with a one-time password which you type in on the web site as the second factor in the authentication. Intuitively, your mobile phone becomes the “what you have” part of 2FA.

Unfortunately, this intuition is all wrong – horribly wrong. The SMS which the bank sends is sent to your mobile number and not to your mobile phone. This might appear to be an exercise in hair splitting, but it is very important. The problem is that while my mobile phone is something that I have, my SIM card and mobile connection are both in the telecom operator’s hands and not in mine.

There have been cases around the world where somebody claiming to be you convinces the telecom operator that you have lost your mobile and need a new SIM card with the old number. The operator simply deactivates your SIM and gives the fake you a new SIM which has been assigned the old number. If you think this is a figment of my paranoid imagination, take a look at this 2013 story from India and this 2011 story from Malaysia. If you want something from the developed world, look at this 2011 story from Australia about how the crook simply went to another telecom operator and asked for the number to be “ported” from the original operator. (h/t Bruce Schneier, via whom I came across all these stories, directly or indirectly, at different points of time). I have blogged about this problem in the past as well (see here and here).

My final illustration of why the SMS OTP that is sent to you is totally divorced from your mobile phone is provided by my own experience last week in Gujarat. In the wake of rioting in parts of the state, the government asked the telecom operators to shut down SMS services and mobile data throughout the state. I needed to book an air ticket urgently one night for a visiting relative who had to rush back because of an emergency at home. Using a wired internet connection, I could login to the bank site using my password (the “what I know” part of 2FA). The mobile phone (the “what I have” part of 2FA) was securely in my hand. All to no avail, because the telecom operator would not send me the SMS containing the OTP. I had to call somebody from outside the state to make the payment.

This also set me thinking that someday a criminal gang would (a) steal credit cards, (b) engineer some disorder to get SMS services shut down, and (c) use this “cover of darkness” to steal money using those cards. They would know that the victims would not receive the SMS messages that would otherwise alert them to the fraud.

I think we need to rethink the SMS OTP model. Perhaps we need to protect the SIM with something like a Trusted Platform Module (TPM). The operator may be able to give away your SIM to a thief, but it cannot do anything about your TPM – it would truly be “something that you have”. Or maybe the OTP must come via a secure channel different from normal SMS.
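One such channel already exists: app-based time-based OTPs (TOTP, RFC 6238), where the shared secret is provisioned once into the user's device (ideally inside something like a TPM) and the telecom operator drops out of the loop entirely. A minimal sketch of the algorithm; the secret shown is a standard example value, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, step=30, at=None):
    # RFC 6238: HMAC-SHA1 over the count of 30-second steps since the
    # Unix epoch, then dynamic truncation to a short decimal code.
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The bank provisions the secret into the device at enrolment; after
# that, both sides compute the same code with no SMS in between.
print(totp('JBSWY3DPEHPK3PXP'))
```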

Posted at 9:59 pm IST on Mon, 31 Aug 2015         permanent link

Categories: fraud, technology


How greedy tax laws become a gift to other countries

Before coming to India and Mauritius, let me talk about the US and the Netherlands Antilles in the early 1980s. It took the US two decades to change its tax laws and stop the free gift it was giving to the Antilles. If we assume India acts with similar speed, it is about time we changed our tax laws, because our generosity to Mauritius has been going on since the mid 1990s.

There is a vast literature about the US and the Netherlands Antilles. The description below is based on an old paper by Marilyn Doskey Franson (“Repeal of the Thirty Percent Withholding Tax on Portfolio Interest Paid to Foreign Investors”, Northwestern Journal of International Law & Business, Fall 1984, 930-978). Since this paper was written immediately after the change in US tax laws, it provides a good account of the different kinds of pulls and pressures that led to this outcome. Prior to 1984, passive income such as interest and dividends earned by foreigners from investments in United States assets was generally subject to a flat thirty percent tax which was withheld at the source of payment. Franson describes the Netherlands Antilles solution that was adopted by US companies to avoid this tax while borrowing in foreign markets:

In an effort to reduce the interest rates they were paying on debt, corporations began as early as the 1960s to access an alternative supply of investment funds by offering their debentures to foreign investors in the Eurobond market. The imposition of the thirty percent withholding tax on interest paid to these investors, however, initially made this an unattractive mode of financing. Since foreign investors could invest in the debt obligations of governments and businesses of other countries without the payment of such taxes, a United States offeror would have had to increase the yield of its obligation by forty-three percent in order to compensate the investor for the thirty percent United States withholding tax and to compete with other issuers. This prospect was totally unacceptable to most United States issuers.

In an effort to overcome these barriers, corporations began to issue their obligations to foreign investors through foreign “finance subsidiaries” located in a country with which the United States had a treaty exempting interest payments. Corporations generally chose the Netherlands Antilles as the site for incorporation of the finance subsidiary because of the favorable terms of the United States – Kingdom of the Netherlands Income Tax Convention ... The Antillean finance subsidiary would issue its own obligations in the Eurobond market, with the United States parent guaranteeing the bonds. Proceeds of the offering were then reloaned to the United States parent on the same terms as the Eurobond issue, but at one percent over the rate to be paid on the Eurobonds. Payments of interest and principal could, through the use of the U.S.-N.A. treaty, pass tax-free from the United States parent to the Antillean finance subsidiary; interest and principal paid to the foreign investor were also tax-free. The Antillean finance subsidiary would realize net income for the one percent interest differential, on which the Antillean government imposed a tax of about thirty percent. However, the United States parent was allowed an offsetting credit on its corporate income tax return for these taxes paid to the Antillean government. Indirectly, this credit resulted in a transfer of tax revenues from the United States Treasury to that of the Antillean government. (emphasis added)
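(As a cross-check on the forty-three percent figure quoted above: to deliver a given yield net of a thirty percent withholding tax, the issuer must pay the net yield divided by 1 − 0.30 = 0.70, a gross-up factor of 1/0.70 ≈ 1.43.)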

The use of the Antillean route was so extensive that in the early 1980s, almost one-third of the total portfolio interest paid by US residents was paid through the Netherlands Antilles. (Franson, page 937, footnote 30). There was a lot of pressure on the US government to renegotiate the Antillean tax treaty to close this “loophole”. However, this was unattractive because of the adverse consequences of all existing Eurobonds being redeemed. This is very similar to the difficulties that India has in closing the Mauritius loophole. Just as in India, the tax department in the US too kept on questioning the validity of the Antillean solution on the ground “that while the Eurobond obligations were, in form, those of the finance subsidiary, that in substance, they were obligations of the domestic parent and, thus, subject to the thirty percent withholding tax.” (Franson, page 939).

Matters came to a head in 1984 when the US Congress began discussing amendments to the tax laws “that would have eliminated the foreign tax credit taken by the United States parent for taxes paid by the finance subsidiary to the Netherlands Antilles.” (Franson, page 939). The US Treasury was worried about the implications of closing down the Eurobond funding mechanism and proposed a complete repeal of the 30% withholding tax on portfolio interest. This repeal was enacted in 1984. Since then, portfolio investors have not been taxed on their US interest income at all. Similar benefits apply to portfolio investors in US equities as well. This tax regime has not only stopped the gift that the US government was giving to the Antilles, but it has also contributed to a vibrant capital market in the US.

It is interesting to note a parallel with the Participatory Note controversy in India: “The Eurobond market is largely composed of bearer obligations because of foreigners’ demand for anonymity. Throughout the congressional hearings on the repeal legislation, concerns were voiced over the possibility of increased tax evasion by United States citizens through the use of such bearer obligations.” (Franson, page 949).

It is perhaps not too much to hope that two decades after opening up the Indian market to foreign portfolio investors in the mid 1990s, India too could adopt a sensible tax regime for them. The whole world has moved to a model of zero or near zero withholding taxes on portfolio investors. Since capital is mobile, it is impossible to tax foreign portfolio investors without either driving them away or increasing the cost of capital to Indian companies prohibitively. It is thus impossible to close the Mauritius loophole just as it was impossible for the US to close the Antilles loophole without first removing the taxation of portfolio investors. The Mauritius loophole is a gift to that country because of the jobs and incomes that are created in that country solely to make an investment in India. Every shell company in Mauritius provides jobs to accountants, lawyers, nominee directors and the like. As the tax laws are tightened to require a genuine business establishment in Mauritius, even more income is generated in Mauritius through rental income and new jobs. All this is a free gift to Mauritius provided by greedy tax laws in India. It can be eliminated if we exempt portfolio income from taxation.

On the other hand, non portfolio investment is intimately linked to a business in India and must necessarily be subject to normal Indian taxes. In the US, the portfolio income exemption does not apply to a foreigner who owns 10% or more of the company which paid the interest or dividend, and India should also do something similar. The Mauritius loophole currently benefits non portfolio investors as well, and this is clearly unacceptable. Making portfolio investment tax free will enable renegotiation of the Mauritius tax treaty to plug this loophole.

Posted at 5:35 pm IST on Thu, 27 Aug 2015         permanent link

Categories: international finance, law, taxation


Hayekian Rational Turbulence: 15-Oct-2014 US Treasury versus 24-Aug-2015 US Stocks

On October 15, 2014, after an early morning release of weak US retail sales data, the benchmark 10-year US Treasury yield experienced a 16-basis-point drop and then rebounded to return to its previous level between 9:33 and 9:45 a.m. ET. The major US regulators were sufficiently disturbed by this event to prepare a Joint Staff Report about this episode. I blogged about this report last month arguing that there was nothing irrational about what happened in that market on that day.

Now compare that with what happened to the S&P 500 stock market index on August 24 and 25, 2015 in response to bad news from China. On the 24th, the market experienced the following before ending the day down about 4%:

The market was a little less erratic the next day, rising 2.5% before falling 4% and ending about 1.4% down.

I see similar phenomena at work in both episodes (15-Oct-2014 US Treasury and 24-Aug-2015 US Stocks): the market was trying to aggregate information from diverse participants in response to fundamental news which was hard to evaluate completely. In Hayek’s memorable phrase, prices arise from “the interactions of people each of whom possesses only partial knowledge” (F. A. Hayek, “The Use of Knowledge in Society”, The American Economic Review, 35(4), 1945, p 530).

Sometimes, the news that comes to the market is such that it requires the “interactions of people” whose beliefs or knowledge are somewhat removed from the average, and these interactions can be achieved only when prices move at least temporarily to levels which induce them to enter the market. The presence of a large value buyer is revealed only when the price moves to that latent buyer’s reservation price. A temporary undershooting of prices which reveals the knowledge possessed by that buyer is thus an essential part of the process of price discovery in the market when fundamental uncertainty is quite high. To quote Hayek again, “the ‘data’ from which the economic calculus starts are never for the whole society ‘given’ to a single mind which could work out the implications, and can never be so given.” (p 519).

Hayek’s insights are timeless in some sense, but today, seventy years later, I venture to think that if he were still alive, he would replace “people” by “people and their algorithms”. Algorithms can learn faster than people, and so sometimes when the algorithms are in charge, the overshooting of prices needs to last only a few minutes to serve its price discovery function. That is conceivably what happened in US Treasuries on October 15, 2014. Sometimes, when the evaluation and judgement required is beyond the capability of the algorithms, human learning takes over and the overshooting often lasts for hours or days to allow aggregation of knowledge from people whose latency is relatively long.

Posted at 1:29 pm IST on Wed, 26 Aug 2015         permanent link

Categories: market efficiency


Moldova bank fraud challenges regulatory assumptions

There are many important and surprising lessons to be learned from the findings in the Kroll report on the bank fraud in Moldova. I believe that these have implications for regulators worldwide.

The report is about the collapse in November 2014 of three of the largest banks of Moldova (Unibank, Banca Sociala, and Banca de Economii) which together accounted for 30% of the country’s banking sector. The missing money of more than $1 billion is over 10% of Moldova's GDP.

There are three elements in the story:

  1. A surreptitious takeover of three of the largest Moldovan banks in 2012.

  2. Use of interbank markets and other wholesale sources by these banks to borrow large amounts of money so that they could lend more.

  3. Surreptitious lending of very large amounts of money to one borrower.

The crucial take away for me from the report is that it is possible to evade all the rules and regulations that banking regulators have created to prevent such actions.

For example, as in many other countries, acquisition of a stake of more than 5% in any bank requires formal approval from the National Bank of Moldova. However, shares in the banks were acquired by a large number of apparently unrelated Moldovan, Russian and Ukrainian entities none of which crossed the 5% threshold. All the entities had different addresses and do not seem to have common directors or shareholders. The Kroll report presents some circumstantial evidence that they are related based largely on the fact that they followed similar strategies around the same time and that some of the directors of these entities appear to be nominee directors. I do not believe that this could have been detected in real time. More importantly, I seriously doubt that an attempt to block the purchase of shares at that time on highly speculative grounds would have stood up in a court of law. I conclude that in a modern open economy, ownership restrictions are largely meaningless and unenforceable. They are mere theatre.

Turning to change of control, this too is not easy to establish even in retrospect. The weakest element in the Kroll report in my opinion is that it provides too little evidence that there was a major change in the management and control of the banks. In some of the banks, the management appears to have been largely unchanged. In some cases, where new senior management personnel were inducted, they came from senior positions at other large banks. It is difficult to see how the banking regulator could have objected to these minor management changes.

Finally, the fact that these banks lent such large amounts of money to a single business group (the Shor group) has become apparent only after extensive investigation. The analysis included things like checking the IP addresses from which online banking facilities were accessed by these entities. Media reports suggest that people in Moldova were taken by surprise when the Kroll report identified the Shor group as the beneficiary of massive lending by the failed banks. I am not at all convinced that regulators could have identified all these linkages in real time.
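
To make the linkage idea concrete, here is a minimal sketch (all borrower names and IP addresses invented) of the kind of analysis involved: group borrowers by the IP addresses from which they accessed online banking, and flag any address shared by supposedly unrelated entities.

    from collections import defaultdict

    # Hypothetical login records: (borrower, IP address). The names are
    # invented and documentation IP ranges are used.
    logins = [
        ("borrower_A", "192.0.2.10"),
        ("borrower_B", "192.0.2.10"),
        ("borrower_C", "198.51.100.7"),
        ("borrower_D", "192.0.2.10"),
    ]

    by_ip = defaultdict(set)
    for borrower, ip in logins:
        by_ip[ip].add(borrower)

    # Any IP shared by supposedly unrelated borrowers is a linkage candidate
    for ip, group in sorted(by_ip.items()):
        if len(group) > 1:
            print(f"shared IP {ip}: {sorted(group)}")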

It must also be kept in mind that the whole fraud was accomplished in a little over two years. Supervisory processes work far too slowly to detect and prevent this before the money is gone. I would not be surprised if much of the money left Moldova long ago, and the Shor group was just a front for mafia groups outside the country.

This example has made me even more sympathetic than before to the view that larger capital requirements and size restrictions are the way to go to make banking safer.


As an aside, the “strictly confidential” Kroll report was published in an unusual way. The report was available to only a very limited number of people in the Moldovan government because of the stipulation that:

Any communication, publication, disclosure, dissemination or reproduction of this report or any portion of its contents to third parties without the advance written consent of Kroll is not authorized.

The Speaker of the Moldova Parliament, Mr. Andrian Candu, published it on his personal blog with the following statement (Google Translated from the original Romanian):

I decided to publish the Kroll report and I take responsibility for this action. I do it openly, without hiding behind anonymous sources. ... I understand Kroll’s arguments against publication, but the situation in Moldova and our responsibility to be transparent with the citizens require us to adapt to the realities of the moment ... I think it is important that every citizen should have access to this report.

Every page of the published report contains the footer:

Private and Confidential: Copy 33 of 33 – Mr. Andrian Candu, the Speaker of the Parliament of the Republic of Moldova

This is about as transparent as one can get. Yet many sections of the media have described the publication of the report as a leak. I think the use of the derogatory word leak in this context is quite inappropriate. In fact, I wish more people in high positions displayed the same courage of their convictions that Mr. Candu has demonstrated. The world would be a better place if they did.

Posted at 4:43 pm IST on Wed, 5 Aug 2015         permanent link

Categories: crisis, fraud

Comments

In the sister blog and on Twitter during July 2015

The following posts appeared on the sister blog (on Computing) last month.

Tweets during the last month (other than blog post tweets):

Posted at 12:32 pm IST on Sat, 1 Aug 2015         permanent link

Categories: technology

Comments

There are no irrelevant alternatives

To a Bayesian, almost everything is informative and therefore relevant. This means that the Independence of Irrelevant Alternatives axiom is rarely applicable.

A good illustration is provided by the Joint Staff Report on “The U.S. Treasury Market on October 15, 2014”. On that day, in the narrow window between 9:33 and 9:45 a.m. ET, the benchmark 10-year US Treasury yield dropped 16 basis points and then rebounded to its previous level. The impact of apparently irrelevant alternatives is described in the Staff Report as follows:

Around 9:39 ET, the sudden visibility of certain sell limit orders in the futures market seemed to have coincided with the reversal in prices. Recall that only 10 levels of order prices above and below the best bid and ask price are visible to futures market participants. Around 9:39 ET, with prices still moving higher, a number of previously posted large sell orders suddenly became visible in the order book above the current 30-year futures price (as well as in smaller size in 10-year futures). The sudden visibility of these sell orders significantly shifted the visible order imbalance in that contract, and it coincided with the beginning of the reversal of its price (the top of the price spike). Most of these limit orders were not executed, as the price did not rise to their levels.

In other words, traders (and trading algorithms) saw some sell orders which were apparently irrelevant (nobody bought from these sellers at those prices), but this irrelevant alternative caused the traders to change their choice between two other alternatives. Consider a purely illustrative example: just before 9:39 am, traders faced the choice between buying a modest quantity at a price of, say, 130.05 and selling a modest quantity at a price of 129.95. They were choosing to buy at 130.05. At 9:39, they found that there was a new alternative: they could buy a larger quantity at a price of, say, 130.25. They did not choose this new alternative, but they changed their earlier choice from buying at 130.05 to selling at 129.95. This is the behaviour that is ruled out by the axiom of the Independence of Irrelevant Alternatives.

But if one thinks about the matter carefully, there is nothing irrational about this behaviour at all. At 8:30 am, the market had seen the release of somewhat weaker-than-expected US retail sales data. Many traders interpreted this as a memo that the US economy was weak and needed low interest rates for a longer period. Since low interest rates imply higher bond prices, traders started buying bonds. At 9:39, they see large sell orders for the first time. They realize that many large investors did not receive this memo, or maybe received a different memo. They think that their interpretation of the retail sales data might have been wrong and that they had possibly overreacted. They reverse the buying that they had done in the last few minutes.
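
A purely illustrative sketch of this updating, with made-up numbers and a simple normal-normal Bayesian model standing in for whatever the traders and algorithms actually do: the newly visible sell orders act as a noisy signal of fair value, and the posterior mean falling below the bid flips the trader from buying to selling.

    # Normal-normal Bayesian update: prior belief after the retail sales
    # data, and the newly visible sell orders as a noisy signal of value.
    prior_mean, prior_var = 130.20, 0.10 ** 2
    signal_mean, signal_var = 129.50, 0.10 ** 2   # value implied by the sell orders

    post_var = 1 / (1 / prior_var + 1 / signal_var)
    post_mean = post_var * (prior_mean / prior_var + signal_mean / signal_var)

    bid, ask = 129.95, 130.05
    if post_mean > ask:
        action = "buy at the ask"
    elif post_mean < bid:
        action = "sell at the bid"
    else:
        action = "wait"
    print(f"posterior fair value = {post_mean:.2f} -> {action}")
    # posterior fair value = 129.85 -> sell at the bid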

In fact, the behaviour of the US Treasury markets on October 15 appears to me to have been reasonably rational. Much of the action in those critical minutes was driven by algorithms which appear to have behaved rationally. With no adrenalin and testosterone flowing through their silicon brains, they could evaluate the new information in a rational Bayesian manner and quickly reverse course. The Staff Report says that human market makers stopped making markets, but the algorithms continued to provide liquidity and maintained an orderly market.

I expected the Staff Report to recommend that in the futures markets, the entire order book (and not just the best 10 levels) should be visible to all participants at all times. Given current computing power and communication bandwidth, there is no justification for sticking to this anachronistic practice of providing only limited information to the market. Surprisingly, the US authorities do not make this sensible recommendation because they fail to see the highly rational market response to newly visible orders. Perhaps their minds have been so conditioned by the Independence of Irrelevant Alternatives axiom that they are blind to any other interpretation of the data. Axioms of rationality are very powerful even when they are wrong.

Posted at 5:55 pm IST on Sat, 25 Jul 2015         permanent link

Categories: market efficiency

Comments

Regulating Equity Crowdfunding Redux

In response to my blog post of a few days back on regulating crowd funding, my colleague Prof. Joshy Jacob writes in the comments:

I agree broadly with all the arguments in the blog post. I would like to add the following.

  1. If tapping the crowd wisdom on the product potential is the essence of crowdfunding, substituting that substantially with equity crowdfunding may not be a very good idea. While the donation based crowdfunding generates a sense of the product potential by way of the backings, the equity crowdfunding by financiers would not give the same, as their judgments still need to be based on the crowd wisdom. Is it possible to create a sequential structure involving donation based crowdfunding and equity based crowdfunding?

  2. Unlike most other forms of financing, the judgement in crowdfunding is often done sitting far away, without meeting the founders, devoid of financial numbers, and therefore almost entirely based on the campaign material posted. This intimately links the central role of the campaign success to the nature of the promotional material and endorsements by influential individuals. Evolving a role model for the multimedia campaigns would be appropriate, given the ample evidences on behavioral biases in retail investor decision making.

Both these are valid points that the regulator should take into account. However, I would worry a bit about people gaming the system. For example, if the regulator says that a successful donation-based crowdfunding campaign is a prerequisite for equity crowdfunding, there is a risk that entrepreneurs will get their friends and relatives to back the project in the donation campaign. It is true that angels and venture capitalists rely on crowdfunding campaign success as a metric of project viability, but I presume that they would have a slightly greater ability to detect such gaming than the crowd.

Posted at 12:59 pm IST on Sun, 12 Jul 2015         permanent link

Categories: exchanges, investment

Comments

Regulating Equity Crowdfunding

Many jurisdictions are struggling with the problem of regulating crowd funding. In India also, the Securities and Exchange Board of India issued a consultation paper on the subject a year ago.

I believe that there are two key differences between crowd funding and other forms of capital raising that call for quite novel regulatory approaches.

  1. Crowd funding is for the crowd and not for the Wall Street establishment. There is a danger that if the regulators listen too much to the Wall Street establishment, they will produce something like a second tier stock market with somewhat diluted versions of a normal public issue. The purpose of crowd funding is different – it is to tap the wisdom of crowds. Crowd funding should attract people who have a passion for (and possibly expertise in) the product. Any attempt to attract those with expertise in finance instead of the product market would make a mockery of crowd funding.

  2. The biggest danger that the crowd funding investor faces is not exploitation by the promoter today, but exploitation by the Series A venture capitalist tomorrow. Most genuine entrepreneurs believe in doing well for their crowd fund backers. After all, they share the same passion. Everything changes when the venture capitalist steps in. We have plenty of experience with venture capitalists squeezing out even relatively sophisticated angel investors. The typical crowd funding investor is a sitting duck by comparison.

What do these two differences imply for the regulator?

In the spirit of crowd sourcing, I would like to hear in the comments what a good equity crowdfunding market should look like and how it should be regulated. Interesting comments may be hoisted from the comments into a subsequent blog post.

Posted at 5:27 pm IST on Mon, 6 Jul 2015         permanent link

Categories: exchanges, investment

Comments

In the sister blog during June 2015

The following posts appeared on the sister blog (on Computing) last month.

Head in the clouds, feet on the ground: Part II (Feed Reader)

Head in the cloud, feet on the ground: Part I (email)

Posted at 2:14 pm IST on Thu, 2 Jul 2015         permanent link

Categories: technology

Comments

We must not mandate retention of all digital communications

After careful thought, I now think that it is a bad idea to mandate that regulated entities should store and retain records of all digital communications by their employees. Juicy emails and instant messages have been the most interesting element in many prosecutions including those relating to the Libor scandal and to foreign exchange rigging. Surely, one might argue, it is a good thing to force companies to retain these records for the convenience of prosecutors.

The problem is that today we use things like instant messaging where we would earlier have had an oral conversation. And there was no requirement to record these oral conversations (unless they took place inside specified locations like the trading room). The power of digital communications is that they transcend geographical boundaries. The great benefit of these technologies is that an employee sitting in India is able (in a virtual sense) to take part in a conversation happening around a coffee machine in the New York or London office.

Electronic communications can potentially be a great leveller that equalizes opportunities for employees in the centre and in the periphery. In the past, many jobs had to be in London or New York so that the employees could be tuned in to the office gossip and absorb the soft information that did not flow through formal channels. If we allowed a virtual chat room that spanned the whole world, then the jobs too could be spread around the world. This potential is destroyed by the requirement that conversations in virtual chat rooms should be stored and archived while conversations in physical chat rooms can remain ephemeral and unrecorded. Real gossip will remain in the physical chat rooms and the jobs will also remain within earshot of these rooms.

India as a member of the G20 now has a voice in global regulatory organizations like IOSCO and BIS. Perhaps it should raise its voice in these fora to provide regulatory space for ephemeral digital communications that securely destroy themselves periodically.

Posted at 10:03 pm IST on Wed, 1 Jul 2015         permanent link

Categories: regulation, technology

Comments

Revolving door and favouring future employers

Canayaz, Martinez and Ozsoylev have a nice paper showing that the pernicious effect of the revolving door (at least in the US) is largely about government employees favouring their future private sector employers. It is not so much about government employees favouring their past private sector employers or about former government employees influencing their former colleagues in the government to favour their current private sector employers.

Their methodology relies largely on measuring the stock market performance of the private sector companies whose employees have gone through the revolving door (in either direction) and comparing these returns with a control group of companies which have not used the revolving door. The abnormal returns are computed using the Fama-French-Carhart four factor model.
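
For concreteness, here is a minimal sketch of how such abnormal returns are typically estimated (my illustration, not the authors’ code; the factor column names are assumptions): regress each firm’s excess returns on the four Carhart factors, and the intercept is the abnormal return (alpha).

    import statsmodels.api as sm

    def carhart_alpha(excess_returns, factors):
        """Four-factor alpha: regress firm excess returns on the factors.

        excess_returns: pandas Series of firm returns minus the risk-free rate
        factors: pandas DataFrame with columns 'Mkt-RF', 'SMB', 'HML', 'UMD'
        """
        X = sm.add_constant(factors[["Mkt-RF", "SMB", "HML", "UMD"]])
        fit = sm.OLS(excess_returns, X, missing="drop").fit()
        return fit.params["const"]  # the abnormal return (alpha)

    # Compare the average alpha of revolver firms with that of the controls.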

The advantage of the methodology is that it avoids subjective judgements about whether, for example, US Treasury Secretary Hank Paulson favoured his former employer, Goldman Sachs, during the financial crisis of 2008. It also avoids having to identify the specific favours that were done. The sample size also appears to be reasonably large – they have 23 years of data (1990-2012) and an average of 62 revolvers working in publicly traded firms each year.

The negative findings in the paper are especially interesting, and if true could make it easy to police the revolving door. All that is required is a rule that when a (former) government employee joins the private sector, a special audit would be carried out of all decisions by the government employee during the past couple of years that might have provided favours to the prospective private sector employer. In particular, the resistance in India to hiring private sector professionals to important government positions (because they might favour their former employer) would appear to be misplaced.

One weakness in the methodology is that companies which anticipate financial distress in the immediate future might hire former government employees to help them lobby for some form of bailout. This might ensure that though their stock price declines due to the distress, it does not decline as much as it would otherwise have done. The excess return methodology would not, however, show any gain from hiring the revolver because the Fama-French excess returns would be negative rather than positive. Similarly, companies which anticipate financial distress might take steps (for example, campaign contributions) that make it more likely that their employees are recruited into key government positions. Again, the excess return methodology would not pick up the resulting benefit.

Just in case you are wondering what all this has to do with a finance blog, the paper says that “[t]he financial industry, ... is a substantial employer of revolvers, giving jobs to twice as many revolvers as any other industry.” (Incidentally, Table A1 in their paper shows that including or excluding financial industry in the sample makes no difference to their key findings). And of course, the methodology is pure finance, and shows how much information can be gleaned from a rigorous examination of asset prices.

Posted at 3:43 pm IST on Wed, 24 Jun 2015         permanent link

Categories: corporate governance, regulation

Comments

On may versus must and suits versus geeks

On Monday, the Basel Committee on Banking Supervision published its Regulatory Consistency Assessment Programme (RCAP) Assessment of India’s implementation of Basel III risk-based capital regulations. While the RCAP Assessment Team assessed India as compliant with the minimum Basel capital standards, they had a problem with the Indian use of the word “may” where the rest of the world uses “must”:

The team identified an overarching issue regarding the use of the word “may” in India’s regulatory documents for implementing binding minimum requirements. The team considers linguistic clarity of overarching importance, and would recommend the Indian authorities to use the word “must” in line with international practice. More generally, authorities should seek to ensure that local regulatory documents can be unambiguously understood even in an international context, in particular where these apply to internationally active banks. The issue has been listed for further reflection by the Basel Committee. As implementation of Basel standards progresses, increased attention to linguistic clarity seems imperative for a consistent and harmonised transposition of Basel standards across the member jurisdiction.

Section 2.7 lists over a dozen instances of such usage of the word “may”. For example:

Basel III paragraph 149 states that banks “must” ensure that their CCCB requirements are calculated and publicly disclosed with at least the same frequency as their minimum capital requirements. The RBI guidelines state that CCCB requirements “may” be disclosed at table DF-11 of Annex 18 as indicated in the Basel III Master Circular.

Ultimately, the RCAP Assessment Team adopted a pragmatic approach of reporting this issue as an observation rather than a finding. They were no doubt swayed by the fact that:

Senior representatives of several Indian banks unequivocally confirmed to the team during the on-site discussions that there is no doubt that the intended meaning of “may” in Indian banking regulations is “shall” or “must” (except where qualified by the phrase “may, at the discretion of” or similar terms).

The Indian response to the RCAP Assessment argues that “may” is perfectly appropriate in the Indian context.

RBI strongly believes that communication, including regulatory communications, in order to be effective, must necessarily follow the linguistics and social characteristics of the language used in the region (Indian English in this case), which is rooted in the traditions and customs of the jurisdiction concerned. What therefore matters is how the regulatory communications have been understood and interpreted by the regulated entities. Specific to India, the use of word “may” in regulations is understood contextually and construed as binding where there is no qualifying text to convey optionality. We are happy that the Assessment Team has appreciated this point.

I tend to look at this whole linguistic analysis in terms of the suits versus geeks divide. It is true that in Indian banking, most of the suits would agree that when RBI says “may” it means “must”. But increasingly in modern finance, the suits do not matter as much as the geeks. In fact, humans matter less than the computers and the algorithms that they execute. I like to joke that in modern finance the humans get to decide the interesting things like when to have a tea break, while the computers decide the important things like when to buy and sell.

For any geek worth her salt, the bible on the subject of “may” and “must” is RFC 2119, which states that “must” means that the item is an absolute requirement; “should” means that there may exist valid reasons in particular circumstances to ignore a particular item; and “may” means that an item is truly optional.
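
As a geek’s footnote, here is a toy scan (my own illustration, nothing to do with the RCAP exercise) that flags RFC 2119 keywords in regulatory text, so that every unqualified “may” can be reviewed for whether optionality is really intended:

    import re

    # RFC 2119 requirement keywords (a simplified subset)
    KEYWORDS = re.compile(r"\b(?:must(?: not)?|shall|should(?: not)?|may)\b",
                          re.IGNORECASE)

    text = ("CCCB requirements may be disclosed at table DF-11 of Annex 18 "
            "as indicated in the Basel III Master Circular.")

    for match in KEYWORDS.finditer(text):
        print(f"{match.group(0)!r} at offset {match.start()}")
    # 'may' at offset 18

I will let Arnold Kling have the last word: “Suits with low geek quotients are dangerous”.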

Posted at 2:54 pm IST on Thu, 18 Jun 2015         permanent link

Categories: regulation

Comments

Back from vacation

My long vacation provided the ideal opportunity to reflect on the large number of comments that I received on my last blog post about the tenth anniversary of my blog. These comments convinced me that I should not only keep my blog going but also try to engage more effectively with my readers. Over the next few weeks and months, I intend to implement many of the excellent suggestions that you have given me.

First of all, I have set up a Facebook page for this blog. This post and all future blog posts will appear on that page so that readers can follow the blog from there as well. My blog posts have been on Twitter for over six years now, and this will continue.

Second, I have started a new blog on computing with its own Facebook page which will over a period of time be backed up by a GitHub presence. I did not want to dilute the focus of this blog on financial markets and therefore decided that a separate blog was the best route to take. At the end of every month, I intend to post on each blog a list of posts on the sister blog, but otherwise this blog will not be contaminated by my meanderings in fields removed from financial markets.

Third, I will be experimenting with different kinds of posts that I have not done so far. This will be a slow process of learning and you might not observe any difference for many months.

Posted at 1:31 pm IST on Wed, 17 Jun 2015         permanent link

Categories: miscellaneous

Comments

Reflections on tenth anniversary

My blog reaches its tenth anniversary tomorrow: over ten years, I have published 572 blog posts at a frequency of approximately once a week.

My first genuine blog post (not counting a test post and a “coming soon” post) on March 29, 2005 was about an Argentine creditor (NML Capital) trying to persuade a US federal judge (Thomas Griesa) to attach some bonds issued by Argentina. The idea that a debtor’s liabilities (rather than its assets) could be attached struck me as funny. Ten years on, NML and Argentina are still battling it out before Judge Griesa, but things have moved from the comic to the tragic (at least from the Argentine point of view).

The most fruitful period for my blog (as for many other blogs) was the global financial crisis and its aftermath. The blog posts and the many insightful comments that my readers posted on the blog were the principal vehicle through which I tried to understand the crisis and to formulate my own views about it. During the last year or so, things have become less exciting. The blogosphere has also become a lot more crowded than it was when I began. Many times, I find myself abandoning a potential blog post because so many others have already blogged about it.

When I look back at the best bloggers that I followed in the mid and late 2000s, some have quit blogging because they found that they no longer had enough interesting things to say; a few have sold out to commercial organizations that turned their blogs into clickbait; at least one blogger has died; some blogs have gradually declined in relevance and quality; and only a tiny fraction have remained worthwhile blogs to read.

The tenth anniversary is therefore less an occasion for celebration, and more a reminder of senescence and impending mortality for a blog. I am convinced that I must either reinvent my blog or quit blogging. April and May are the months during which I take a long vacation (both from my day job and from my blogging). That gives me enough time to think about it and decide.

If you have some thoughts and suggestions on what I should do with my blog, please use the comments page to let me know.

Posted at 3:50 pm IST on Sat, 28 Mar 2015         permanent link

Categories: miscellaneous

Comments

How does a bank say that its employees are a big security risk?

Very simple. Describe them as your greatest resource!

In my last blog post, I pointed out that the Carbanak/Anunak hack was mainly due to the recklessness of the banks’ own employees and system administrators. Now that they are aware of this, banks have to disclose this as another risk factor in their regulatory filings. Here is how one well-known US bank made this disclosure in its Form 10-K (page 39) last week (h/t the ever diligent Footnoted.com):

We are regularly the target of attempted cyber attacks, including denial-of-service attacks, and must continuously monitor and develop our systems to protect our technology infrastructure and data from misappropriation or corruption.

...

Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.

Posted at 8:12 pm IST on Sat, 28 Feb 2015         permanent link

Categories: risk management, technology

Comments

Carbanak/Anunak: Patient Bank Hacking

There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.

The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection, while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims, while Kaspersky reveals that some US-based banks were also hacked. But by and large the two reports tell a similar story.

The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customer ATM cards or PIN numbers. Nor did they tamper with the ATM itself. Instead, they hacked into the personal computers of bank staff, including system administrators, and used these hacked machines to send instructions to the ATMs using the banks’ ATM infrastructure management software. For example, an ATM uses Windows registry keys to determine which tray of cash contains 100 ruble notes and which contains 5000 ruble notes: the CASH_DISPENSER registry key might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can change these settings to tell the ATM that the cash has been loaded into different trays by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs), so that an ATM which thought it was dispensing 1000 rubles in the form of ten 100 ruble notes would actually dispense 50,000 rubles (ten 5000 ruble notes).
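
To make the arithmetic of the swap concrete, here is a toy simulation (plain Python, not actual ATM vendor software; only the registry key names come from the reports):

    # The registry tells the ATM what each tray holds; `physical` is what
    # the cash handlers actually loaded into the trays.
    physical = {"VALUE_1": 5000, "VALUE_4": 100}

    def dispense(registry, amount=1000):
        # The ATM pays out of the tray that the registry says holds
        # 100-ruble notes: ten notes for 1,000 rubles.
        tray = next(k for k, v in registry.items() if v == 100)
        notes = amount // 100
        return notes * physical[tray]  # value that physically comes out

    print(dispense({"VALUE_1": 5000, "VALUE_4": 100}))  # 1000: as intended
    print(dispense({"VALUE_1": 100, "VALUE_4": 5000}))  # 50000: ten 5000-ruble notes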

Similarly, an ATM has debug functionality that allows a technician to test the functioning of the ATM. With the ATM vault door open, a technician could issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because with the vault door open, the technician anyway has access to the whole cash without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All that they needed to do was to have a mole stand in front of the ATM when they issued a command to the ATM to dispense a large amount of cash.

Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators’ accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of credit cards which had been prepared in advance.

It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees until they succeeded in compromising a system administrator’s machine. Then they patiently collected data about the banks’ internal systems using screenshots and videos sent from the administrators’ machines by the hackers’ malware. Once they understood the internal systems well, they could use the systems to steal money.

The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.

Posted at 4:24 pm IST on Sun, 22 Feb 2015         permanent link

Categories: banks, technology

Comments

Loss aversion and negative interest rates

Loss aversion is a basic tenet of behavioural finance, particularly prospect theory. It says that people are averse to losses and become risk seeking when confronted with certain losses. There is a huge amount of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work in prospect theory.

What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, they were negative even at the ten-year maturity). France, Denmark, Belgium and the Netherlands also have negative yields out to at least three years.

A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return on risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five-year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss-averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
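
The 41% figure is easy to verify under the stated normal approximation; a minimal check (treating the cumulative five-year return as normal with mean 5μ and standard deviation σ√5):

    from scipy.stats import norm

    mu, sigma, horizon = -0.015, 0.15, 5
    mean_5y = mu * horizon            # -7.5% cumulative expected return
    sd_5y = sigma * horizon ** 0.5    # about 33.5% cumulative volatility
    p_positive = 1 - norm.cdf(0, loc=mean_5y, scale=sd_5y)
    print(f"P(positive 5-year stock return) = {p_positive:.0%}")  # ~41%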

The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss-averse investors. These highly loss-averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss-averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds, overcoming their (relatively low) loss aversion. During the regime of positive interest rates and a positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.

This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.

Posted at 5:07 pm IST on Thu, 19 Feb 2015         permanent link

Categories: behavioural finance, bond markets, bubbles, monetary policy

Comments

Bank deposits without those exotic swaptions

Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).

Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if the facility came free. But in a free market, it would be priced. The facility of premature withdrawal is an embedded American-style swaption, and a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants that bundle or not. You pay for the swaption whether you need it or not.

Most depositors would not exercise that swaption optimally for the simple reason that optimal exercise is a difficult optimization problem to solve. Fifteen years ago, Longstaff, Santa-Clara and Schwartz wrote a paper showing that Wall Street firms were losing billions of dollars because they were using oversimplified (single-factor) models to exercise American-style swaptions (“Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market”, Journal of Financial Economics 62.1 (2001): 39-66). Even those simplified (single-factor) models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors behave suboptimally in exercising their premature withdrawal option.

In a competitive market, the callable deposits would be priced using a behavioural exercise model and not an optimal exercise strategy. Still, the problem remains: some retail depositors would exercise their swaptions better than others. A significant fraction might just ignore the swaption unless they have a liquidity need to withdraw the deposits. These ignorant depositors would subsidize the smarter depositors who exercise the option frequently (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.
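
A toy simulation (all parameters invented) of this cross-subsidy: both depositors face the same simulated paths of deposit rates, but only the active one withdraws and redeposits when rates have risen by more than the penalty. Since a competitive bank would price the average cost of such exercise into the rate offered to everyone, the gap between the two average outcomes is what the passive depositor pays for.

    import random

    random.seed(42)
    T, rate0, penalty, n_paths = 5, 0.08, 0.01, 10_000

    def terminal_value(path, active):
        value, rate = 1.0, rate0
        for market in path:                      # prevailing rate for new deposits
            if active and market > rate + penalty:
                value *= 1 - penalty             # withdraw, pay the penalty
                rate = market                    # redeposit at the higher rate
            value *= 1 + rate
        return value

    paths = [[rate0 + random.gauss(0, 0.02) for _ in range(T)]
             for _ in range(n_paths)]
    for active in (False, True):
        avg = sum(terminal_value(p, active) for p in paths) / n_paths
        print(f"{'active' if active else 'passive'} depositor: {avg:.4f}")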

Since the global financial crisis, there has been a push towards plain vanilla products. The non-callable deposit is a plain vanilla product. The current callable version is a toxic/exotic derivative.

Posted at 9:37 pm IST on Wed, 4 Feb 2015         permanent link

Categories: banks, derivatives

Comments

The politics of SEC enforcement or is it data mining?

Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement” which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses particularly in an election year and particularly in politically important states. The results show that:

All the econometrics appear convincing:

But then, I realized that there is one very big problem with the paper – the definition of labour intensity:

I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.

Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)

I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.

Posted at 2:39 pm IST on Tue, 27 Jan 2015         permanent link

Categories: law, regulation

Comments

Why did the Swiss franc take half a million milliseconds to hit one euro?

Updated

In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough time to watch two average online content videos.

So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the time line of how the franc moved:

It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense to simply hit every euro bid above 1.00 and then wait for the point of maximum panic to buy the euros back.

Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?

Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.

Posted at 7:26 am IST on Thu, 22 Jan 2015         permanent link

Categories: international finance, market efficiency

Comments

RBI is also concerned about two hour resumption time for payment systems

Two months back, I wrote a blog post on how the Basel Committee on Payments and Market Infrastructures was reckless in insisting on a two hour recovery time even from severe cyber attacks.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with. ... The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. ... This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities.

I am glad that last month, the Reserve Bank of India (RBI) addressed this issue in its Financial Stability Report. Of course, as a regulator, the RBI uses far more polite words than a blogger like me, but it raises almost the same concerns (para 3.58):

One of the clauses under PFMIs requires that an FMI operator’s business continuity plans must ‘be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events’ and that there can be ‘complete settlement’ of transactions ‘by the end of the day of the disruption, even in the case of extreme circumstances’. However, a rush to comply with this requirement may compromise the quality and completeness of the analysis of causes and far-reaching effects of any disruption. Restoring all the critical elements of the system may not be practically feasible in the event of a large-scale ‘cyber attack’ of a serious nature on a country’s financial and other types of information network infrastructures. This may also be in conflict with Principle 16 of PFMIs which requires an FMI to safeguard the assets of its participants and minimise the risk of loss, as in the event of a cyber attack priority may need to be given to avoid loss, theft or fraudulent transfer of data related to financial assets and transactions.

Posted at 1:53 pm IST on Tue, 13 Jan 2015         permanent link

Categories: risk management, technology

Comments

Heterogeneous investors and multi-factor models

I read two papers last week that introduced heterogeneous investors into multi-factor asset pricing models. The papers help produce a better understanding of momentum and value, but they seem to raise as many questions as they answer. The easier paper is A Tug of War: Overnight Versus Intraday Expected Returns by Dong Lou, Christopher Polk, and Spyros Skouras. They show that:

100% of the abnormal returns on momentum strategies occur overnight; in stark contrast, the average intraday component of momentum profits is economically and statistically insignificant. ... In stark contrast, the profits on size and value ... occur entirely intraday; on average, the overnight components of the profits on these two strategies are economically and statistically insignificant.

The paper also presents some evidence that “is consistent with the notion that institutions tend to trade intraday while individuals are more likely to trade overnight.” In my view, their evidence is suggestive but by no means compelling. The authors also claim that individuals trade with momentum while institutions trade against it. If momentum is not a risk factor but a free lunch, then this would imply that individuals are smart investors.
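
The decomposition underlying the paper is mechanical and easy to replicate; a sketch with assumed column names (not the authors’ code): the close-to-close return factors exactly into an overnight (close-to-open) and an intraday (open-to-close) component.

    import pandas as pd

    def decompose(prices: pd.DataFrame) -> pd.DataFrame:
        """Split close-to-close returns; expects 'open' and 'close' columns."""
        out = pd.DataFrame(index=prices.index)
        out["overnight"] = prices["open"] / prices["close"].shift(1) - 1
        out["intraday"] = prices["close"] / prices["open"] - 1
        # (1 + overnight) * (1 + intraday) - 1 equals the close-to-close return
        out["close_to_close"] = (1 + out["overnight"]) * (1 + out["intraday"]) - 1
        return out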

The NBER working paper (Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing) by Martin Lettau, Sydney C. Ludvigson and Sai Ma presents a more complex story. They claim that rich investors (those in the highest deciles of the wealth distribution) invest disproportionately in value stocks, while those in lower wealth deciles invest more in momentum stocks. They then examine what happens to the two classes of investors when there is a shift in the share of income in the economy going to capital as opposed to labour. Richer investors derive most of their income from capital and an increase in the capital share benefits them. On the other hand, investors from lower deciles of wealth derive most of their income from labour and an increase in the capital share hurts them.

Finally, the authors show very strong empirical evidence that the value factor is positively correlated with the capital share while momentum is negatively correlated. This would produce a risk-based explanation of both factors. Value stocks lose money when the capital share is moving against the rich investors who invest in value, and therefore these stocks must earn a risk premium. Similarly, momentum stocks lose money when the capital share is moving against the poor investors who invest in momentum, and therefore these stocks must also earn a risk premium.

The different portfolio choices of the rich and the poor are plausible but not backed by any firm data. The direction of causality may well run in the opposite direction: Warren Buffett became rich by buying value stocks; he did not invest in value because he was rich.

But the more serious problem with their story is that it implies that both rich and poor investors are irrational in opposite ways. If their story is correct, then the rich must invest in momentum stocks to hedge capital share risk. For the same reason, the poor should invest in value stocks. In an efficient market, investors should not earn a risk premium for stupid portfolio choices. (Even in a world of homogeneous investors, it is well known that a combination of value and momentum has a better risk-return profile than either by itself: see for example, Asness, C. S., Moskowitz, T. J. and Pedersen, L. H. (2013), Value and Momentum Everywhere. The Journal of Finance, 68: 929-985)

Posted at 5:16 pm IST on Sat, 3 Jan 2015         permanent link

Categories: factor investing

Comments

FCA Clifford Chance Report Part II: The Menace of Selective Briefing

Yesterday, I blogged about the Clifford Chance report on the UK FCA (Financial Conduct Authority) from the viewpoint of regulatory capture. Today, I turn to the issue of the selective pre-briefing provided by the FCA to journalists and industry bodies. Of course, the FCA is not alone in doing this: government agencies around the world indulge in this anachronistic practice.

In the pre-internet era, government agencies had to rely on the mass media to disseminate their policies and decisions. It was therefore necessary for them to cultivate the mass media to ensure that their messages got the desired degree of coverage. One of the ways of doing this was to provide privileged access to select journalists in return for enhanced coverage.

This practice is now completely anachronistic. The internet has transformed the entire paradigm of mass communication. In the old days, we had a push channel in which the big media outlets pushed their content out to consumers. The internet is a pull channel in which consumers pull whatever content they want. For example, I subscribe to the RSS/Atom feeds of several regulators around the world. I also subscribe to the feeds of several blogs which comment on regulatory developments worldwide. My feed reader pulls all this content to my computer and mobile devices and provides me instant access to these messages without the intermediation of any big media gatekeepers.

In this context, the entire practice of pre-briefing is anachronistic. Worse, it is inimical to the modern democratic ideals of equal and fair access to all. The question then is why it survives at all. I am convinced that what might have had some legitimate function decades ago has now been corrupted into something more nefarious. Regulators now use privileged access to suborn the mass media and to get favourable coverage of their decisions. Journalists have to think twice before they write something critical about a regulator who may simply cut off their privileged access.

It is high time we put an end to this diabolical practice. What I would like to see is the following:

  1. A regulator could meet a journalist one-on-one, but the entire transcript of the interview must then be published on the regulator’s website and the interview must be embargoed until such publication.
  2. A regulator could hold press conferences or grant live interviews to the visual media, but such events must be web cast live on the regulator’s website and transcripts must be published soon after.
  3. The regulators should not differentiate between (a) journalists from the mainstream media and (b) representatives of alternate media (including bloggers).
  4. Regulator websites and feeds must be friendlier to the general public. For example, the item description field in an RSS feed or the item content field in an Atom feed should contain enough information for a casual reader to decide whether the item is worth reading in full. Regulatory announcements must provide enough background to enable the general public to understand them.

Any breach of (1) or (2) above should be regarded as a selective disclosure that attracts the same penalties as selective disclosure by an officer of a listed company.

What I also find very disturbing is the practice of the regulator holding briefing sessions with a select group of regulated entities or their associations or lobby groups. In my view, while the regulator does need to hold confidential discussions with regulated entities on a one-on-one basis, any meeting attended by more than one entity cannot by definition be about confidential supervisory concerns. The requirement of publication of transcripts or live web casts should apply in these cases as well. In the FCA case, it seems to be taken for granted by all (including the Clifford Chance report) that the FCA needs to have confidential discussions with the Association of British Insurers (ABI). I think this view is mistaken, particularly when it is not considered necessary to hold a similar discussion with the affected policy holders.

Posted at 5:06 pm IST on Sun, 21 Dec 2014         permanent link

Categories: law, regulation, technology

Comments

Regulatory capture is a bigger issue than botched communications

I just finished reading the 226-page report that the non-independent directors of the UK FCA (Financial Conduct Authority) commissioned from the law firm Clifford Chance on the FCA’s botched communications regarding its proposed review of how insurance companies treat customers trapped in legacy pension plans. The report, published earlier this month, deals with the selective disclosure of market-moving price-sensitive information by the FCA itself to one journalist, and the failure of the FCA to issue corrective statements in a timely manner after large price movements in the affected insurance companies on March 28, 2014.

I will have a separate blog post on this whole issue of selective disclosure to journalists and to industry lobby groups. But in this post, I want to write about what I think is the bigger issue in the whole episode: what appears to me to be a regulatory capture of the Board of the FCA and of HM Treasury. It appears to me that the commissioning of the Clifford Chance review serves to divert attention from this vital issue and allows the regulatory capture to pass unnoticed.

The rest of this blog post is based on reading between the lines in the Clifford Chance report and is thus largely speculative. The evidence of regulatory capture is quite stark, but most of the rest of the picture that I present could be totally wrong.

The sense that I get is that there were two schools of thought within the FCA. One group of people thought that the FCA needed to do something about the 30 million policy holders who were trapped in exploitative pension plans that they could not exit because of huge exit fees. Since the plans were contracted prior to 2000 (in some cases they dated back to the 1970s), they did not enjoy the consumer protections of the current regulatory regime. This group within the FCA wanted to use the regulator’s powers to prevent these policy holders from being treated unfairly. The simplest solution of course was to abolish the exit fees, and let these 30 million policy holders choose new policies.

The other group within the FCA wanted to conduct a cosmetic review so that the FCA would be seen to be doing something, but did not want to do anything that would really hurt the insurance companies who made tons of money off these bad policies. Much of the confusion and lack of coordination between different officials of the FCA brought out in the Clifford Chance report appears to me to be only a manifestation of the tension between these two views within the FCA. It was critical for the second group’s strategy to work that the cosmetic review receive wide publicity that would fool the public into thinking that something was being done. Hence the idea of doing a selective pre-briefing to a journalist known to be sympathetic to the plight of the poor policy holders. The telephonic briefing with this journalist was not recorded, and was probably ambiguous enough to maintain plausible deniability.

The journalist drew the reasonable inference that the first group in the FCA had won and that the FCA was serious about giving a fair deal to the legacy policy holders and reported accordingly. What was intended to fool only the general public ended up fooling the investors as well, and the stock prices of the affected insurance companies crashed after the news report came out. The big insurance companies were now scared that the review might be a serious affair after all and pulled out all their resources to protect their profits. They reached out to the highest levels of the FCA and HM Treasury and ensured that their voice was heard. Regulatory capture is evident in the way in which the FCA abandoned even the pretence of serious action, and became content with cosmetic measures. Before the end of the day, a corrective statement came out of the FCA which made all the right noises about fairness, but made it clear that exit fees would not be touched.

The journalist in question (Dan Hyde of the Telegraph) nailed this contradiction in an email quoted in the Clifford Chance report (para 16.8):

But might I suggest that by any standard an exit fee that prevents a customer from getting a fairer deal later in life is in itself an unfair term on a policy.

On March 28, 2014, the top brass of the FCA and HM Treasury could see the billions of pounds wiped out on the stock exchange from the market value of the insurance companies, and they could of course hear the complaints from the chairmen of those powerful insurance companies. There was no stock exchange showing the corresponding improvement in the net worth of millions of policy holders savouring the prospect of escape from unfair policies, and their voice was not being heard at all. Out of sight, out of mind.

Posted at 6:25 pm IST on Sat, 20 Dec 2014         permanent link

Categories: insurance, regulation

Comments

Unwarranted complacency about regulated financial entities

Two days back, the Securities and Exchange Board of India (SEBI) issued a public Caution to Investors about entities that make false promises and assure high returns. This is quite sensible and also well intentioned. But the first paragraph of the press release is completely wrong in asking investors to focus on whether the investment is being offered by a regulated or by an unregulated entity:

It has come to the notice of Securities and Exchange Board of India (SEBI) that certain companies / entities unauthorisedly, without obtaining registration and illegally are collecting / mobilising money from the general investors by making false promises, assuring high return, etc. Investors are advised to be careful if the returns offered by the person/ entity is very much higher than the return offered by the regulated entities like banks, deposits accepted by Companies, registered NBFCs, mutual funds etc.

This is all wrong because the most important red flag is the very high return itself, and not the absence of registration and regulation. That is the key lesson from the Efficient Markets Hypothesis:

If something appears too good to be true, it is not true.

For the purposes of this proposition, it does not matter whether the entity is regulated. To take just one example, Bernard L. Madoff Investment Securities LLC was regulated by the US SEC as a broker-dealer and as an investment advisor. Fairfield Greenwich Advisors LLC (through whose Sentry Fund many investors invested in Madoff’s Ponzi scheme) was also an SEC-regulated investment advisor.

Regulated entities are always very keen to advertise their regulated status as a sign of safety and soundness. (Most financial entities usually prefer light touch regulation to no regulation at all.) But regulators are usually at pains to avoid giving the impression that regulation amounts to a seal of approval. For example, every public issue prospectus in India contains the disclaimer:

The Equity Shares offered in the Issue have not been recommended or approved by the Securities and Exchange Board of India

In this week’s press release, however, SEBI seems to have inadvertently lowered its guard, and has come dangerously close to implying that regulation is a seal of approval and respectability. Many investors would misinterpret the press release as saying that it is quite safe to put money in a bank deposit or in a mutual fund. No, that is not true at all: the bank could fail, and market risks could produce large losses in a mutual fund.

Posted at 3:44 pm IST on Sat, 13 Dec 2014         permanent link

Categories: regulation

Comments

Why no two factor authentication for advance tax payments in India?

I made an advance tax payment online today and it struck me that the bank never asks for two factor authentication for advance tax payments. It seems scandalous to me that payments of several hundred thousand rupees are allowed without two factor authentication at a time when the online taxi companies are not allowed to bypass two factor authentication for payments of a few hundred rupees.

I can think of a couple of arguments why advance tax is different, but none of them is convincing.

The point is that the rule of law demands that the same requirements apply to one and all. The “King can do no wrong” argument is inconsistent with the rule of law in a modern democracy. I believe that all payments above some threshold should require two factor authentication.

Posted at 4:41 pm IST on Mon, 8 Dec 2014         permanent link

Categories: taxation, technology

Comments

On fickle foreign direct investment and patient foreign portfolio capital

No, that is not a typo; I am asserting the opposite of the conventional wisdom that foreign portfolio investment is fickle while foreign direct investment is more reliable. The conventional wisdom was on display today in news reports about the parliament’s apparent willingness to allow foreign direct investment in the insurance sector, but not foreign portfolio investment.

The conventional wisdom is propagated by macroeconomists who look at the volatility of aggregate capital flows – it is abundantly clear that portfolio flows stop and reverse during crisis periods (“sudden stops”) while FDI flows are more stable. Things look very different at the enterprise level, but economists working in microeconomics and corporate finance, who can see this different world, often do not bother to discuss policy issues.

Let me therefore give an example from the Indian banking industry to illustrate what I mean. In the late 1990s, after the Asian Crisis, one of the largest banks in the world decided that Asia was a dangerous place to do banking, sold a significant part of its banking operations in India, and went home. That is what I mean by fickle FDI. At the same time, foreign portfolio investors were providing tons of patient capital to Indian private banks like HDFC, ICICI and Axis to grow their business in India. In the mid-1990s, many people thought that liberalization would allow foreign banks to thrive; in reality, they lost market share (partly due to the fickleness and short-termism of their parents), and it is the Indian banks funded by patient foreign portfolio capital that gained a large market share.

In 2007, as the Great Moderation was about to end, but markets were still booming, ICICI Bank tapped the markets to raise $5 billion of equity capital (mainly from foreign portfolio investors) in accordance with the old adage of raising equity when it is available and not when it is needed. The bank therefore entered the global financial crisis with a large buffer of capital originally intended to finance its growth a couple of years ahead. During the crisis, even this buffer was perceived to be inadequate and the bank needed to downsize the balance sheet to ensure its survival. But without that capital buffer raised in good times, its position would have been a lot worse; it might even have needed a government bailout.

Now imagine that instead of being funded by portfolio capital, ICICI had been owned by, say, Citi. Foreign parents do not like to fund their subsidiaries ahead of need; they prefer to drip-feed the subsidiary with capital as and when needed. In fact, if the need is temporary, the parent usually provides a loan instead of equity so that it can be called back when it is no longer needed. So the Indian subsidiary would have entered the crisis without that large capital buffer. During the crisis, the ability of the embattled parent to provide a large capital injection into its Indian operations would have been highly questionable. Very likely, the Indian subsidiary would have ended up as a ward of the state.

Macro patterns hide these interesting micro realities. The conventional wisdom ignores the fact that enterprise-level risk management works to counter the vagaries of the external funding environment. It ignores the standard insight from the markets versus hierarchies literature that funding that relies on a large number of alternative providers of capital is far more resilient than funding that relies on just one provider. In short, it is time to overturn the conventional wisdom.

Posted at 4:48 pm IST on Thu, 4 Dec 2014         permanent link

Categories: international finance, regulation

Comments

Should governments hedge oil price risk?

I had an extended email conversation last month with a respected economist (who wishes to remain anonymous) about whether governments of oil importing countries should hedge oil price risk. While there is a decent literature on oil price hedging by oil exporters (for example, this IMF Working Paper of 2001), there does not seem to be much on oil importers. So we ended up more or less debating this from first principles. The conversation helped clarify my thinking, and this blog post summarizes my current views on this issue.

I think that hedging oil price risk does not make much sense for the government of an oil importer for several reasons:

  1. Oil imports are usually not a very large fraction of GDP; by contrast, oil exports are often a major chunk of GDP for a large exporter. For most countries, oil price risk is just one among many different macroeconomic shocks that can hit the country. Just as equity capital is the best hedge against general business risks for a company, external reserves and fiscal capacity are the best hedges against general macroeconomic shocks for a country.
  2. For a country, the really important strategic risk relating to oil is a supply disruption (an embargo, for example), and this can be hedged only with physical stocks (like the US Strategic Petroleum Reserve).
  3. A country is an amorphous entity. Probably, it is the government that will do the hedging, and private players that will consume the oil. Who pays for the hedge and who benefits from it? Does the government want the private players to get the correct price signal? Does it want to subsidize the private sector? If it is the private players who are consuming oil, why don’t we let them hedge the risk themselves?
  4. Futures markets may not provide sufficient depth, flexibility and liquidity to absorb a large importer’s hedging needs. The total open interest in ICE Brent futures is roughly equal to India’s annual crude imports.

Frankly, I think it makes sense for the government to hedge oil price risk only if it is running an administered price regime. In this case, we can analyse its hedging like a corporate hedging program. The administered price regime makes the government short oil (it is contracted to sell oil to the private sector at the administered price), and then it makes sense to hedge the fiscal cost by buying oil futures to offset its short position.
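To make the mechanics concrete, here is a toy calculation in Python of the fiscal cost under an administered price regime, with and without a long futures hedge. All the numbers (administered price, futures entry price, volumes) are invented for illustration; nothing here is drawn from any actual programme.

    # Toy model: fiscal cost of an administered price regime, with and
    # without a long futures hedge. All numbers are hypothetical.

    def fiscal_cost(spot, administered_price, volume):
        """Subsidy paid by the government, which in effect buys at spot
        and sells to the private sector at the administered price."""
        return max(spot - administered_price, 0.0) * volume

    def hedged_cost(spot, administered_price, volume, futures_entry, hedge_volume):
        """Net cost after a long futures hedge: gains of (spot - entry)
        per barrel offset the subsidy when spot rises."""
        futures_pnl = (spot - futures_entry) * hedge_volume
        return fiscal_cost(spot, administered_price, volume) - futures_pnl

    ADMIN_PRICE = 80.0      # administered domestic price ($/bbl)
    FUT_ENTRY = 85.0        # price at which futures were bought ($/bbl)
    VOLUME = 1_000_000      # barrels subsidised

    for spot in (70.0, 85.0, 100.0, 120.0):
        print(f"spot={spot:6.1f}  "
              f"unhedged={fiscal_cost(spot, ADMIN_PRICE, VOLUME):13,.0f}  "
              f"hedged={hedged_cost(spot, ADMIN_PRICE, VOLUME, FUT_ENTRY, VOLUME):13,.0f}")

Because the subsidy is one-sided while the futures position is linear, the hedge caps the fiscal cost when crude rises but produces a cash outflow on the futures when crude falls; even in this simple setting the hedge is not a free lunch.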

But an administered price regime is not a good idea. Even if, for the moment, one accepts the dubious proposition that rapid industrialization requires strategic underpricing of key inputs (labour, capital or energy), we only get an argument for energy price subsidies, not for energy price stabilization. The political pressure for short-term price stabilization comes from the presence of a large number of vocal consumers (think single-truck owners, for example) who have large exposures to crude price risk but do not have access to hedging markets. If we accept that the elasticity of demand for crude is near zero in the short term (though it may be pretty high in the long term), then unhedged entities with large crude exposures will find it difficult to tide over the short term during which they cannot reduce demand. They can be expected to be very vocal about their difficulties. The solution is to make futures markets more accessible to small and mid-size companies, unincorporated businesses and even self-employed individuals who need such hedges. This is what India has done by opening up futures markets to all, including individuals. Most individuals might not need these markets (financial savings are the best hedge against most risks for individuals who are not in business). But it is easier to open up the markets to all than to impose complex documentation requirements that restrict access. Easy hedging eliminates the political need for administered energy prices.

With free energy pricing in place, the most sensible hedge for governments is a huge stack of foreign exchange reserves and a large pool of oil under the ground in a strategic reserve.

Posted at 9:29 pm IST on Thu, 27 Nov 2014         permanent link

Categories: commodities, derivatives, risk management

Comments

Ethnic diversity and price bubbles

The socializing finance blog points to a PNAS paper showing that ethnic diversity drastically reduces the incidence of price bubbles in experimental markets. This is a conclusion that I am inclined to believe on theoretical grounds and the paper itself presents the theoretical arguments very persuasively. However, the experimental evidence leaves me unimpressed.

The biggest problem is that in both the locales (Southeast Asia and North America) in which they carried out the experiments:

In the homogeneous markets, all participants were drawn from the dominant ethnicity in the locale; in the diverse markets, at least one of the participants was an ethnic minority.

This means that the experimental design conflates the presence of ethnic diversity with the presence of ethnic minorities. This is all the more important because, for the experiments, they recruited skilled participants trained in business or finance. There could therefore be a significant self-selection bias here, in that ethnic minority members who chose to train in business or finance might have been those with exceptional talent or aptitude.

This fear is further aggravated by the result in Figure 2 showing that the Southeast Asian markets performed far better than the North American markets. In fact, the homogeneous Southeast Asian markets did better than the diverse North American markets! The diverse Southeast Asian market demonstrated near perfect pricing accuracy. This suggests that the ethnic fixed effects (particularly the gap between the dominant North American ethnic group and the minority Southeast Asian ethnic group) are very large. A proper experimental design would have had homogeneous markets made out of minority ethnic members as well so that the ethnic fixed effects could be estimated and removed.
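As a sketch of what estimating and removing the fixed effects would involve, here is a small simulation in Python. The effect sizes are entirely made up; the point is only that with the paper's two-cell design the regression conflates the minority fixed effect with the diversity effect, while a design that also includes homogeneous minority markets identifies both.

    # Why homogeneous minority markets are needed: a made-up simulation.
    import numpy as np

    rng = np.random.default_rng(0)

    MU = 10.0      # baseline mispricing in homogeneous majority markets
    ALPHA = -6.0   # hypothetical fixed effect of minority participants
    DELTA = -3.0   # true effect of diversity itself
    SHARE = 1 / 6  # minority share in a diverse market (e.g. one of six)

    def simulate(minority_share, diverse, n=200):
        """Mispricing in n markets of the given composition."""
        return (MU + ALPHA * minority_share + DELTA * diverse
                + rng.normal(0, 1, n))

    # Three cells: homogeneous majority, homogeneous minority, diverse.
    cells = [(0.0, 0), (1.0, 0), (SHARE, 1)]
    X, y = [], []
    for share, diverse in cells:
        obs = simulate(share, diverse)
        y.append(obs)
        X.append(np.column_stack([np.ones_like(obs),
                                  np.full_like(obs, share),
                                  np.full_like(obs, diverse)]))
    X, y = np.vstack(X), np.concatenate(y)

    mu_hat, alpha_hat, delta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"estimated minority fixed effect: {alpha_hat:.2f} (true value {ALPHA})")
    print(f"estimated diversity effect: {delta_hat:.2f} (true value {DELTA})")

    # With only the first and third cells (the published design), the mean
    # difference recovers DELTA + ALPHA * SHARE: the two effects cannot
    # be separated.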

Another reason that I am not persuaded by the experimental evidence is that the experimental design prevented participants from seeing each other or communicating directly while trading. As the authors state, “So, direct social influence was curtailed, but herding was possible.” With a major channel of diversity-induced improvement blocked off by the design itself, one’s prior on the size of the diversity effect is lower than it would otherwise be.

Posted at 5:41 pm IST on Mon, 24 Nov 2014         permanent link

Categories: bubbles, market efficiency

Comments

CPMI fixation on two hour resumption time is reckless

The Basel-based Committee on Payments and Market Infrastructures (CPMI, previously known as CPSS) has issued a document about Cyber resilience in financial market infrastructures insisting that payment and settlement systems should be able to resume operations within two hours of a cyber attack and should be able to complete the settlement by end of day. The Committee is treating a cyber attack as a business continuity issue and is applying Principle 17 of its Principles for financial market infrastructures. Key Consideration 6 of Principle 17 requires that the business continuity plan “should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events” and that the plan “should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in the case of extreme circumstances”.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16, which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with.

I believe that if there were to be a successful cyber attack against a well-run payment and settlement system, the attack would most likely be carried out by a nation-state. Such an attack would therefore be backed by resources and expertise far exceeding what any payment and settlement system would possess. Neutralizing such a threat would require assistance from the national security agencies of the system’s own nation. It is silly to assume that such a cyber war between two nation-states would be resolved within two hours just because a Committee in Basel mandates so.

The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. I think the CPMI is being reckless and irresponsible in encouraging such behaviour.

This issue is all the more important for countries like India, whose enemies and rivals include some powerful nation-states with proven cyber capabilities. I think that Indian regulators should tell their payment and settlement systems that Principle 16 prevails over Principle 17 in the case of any conflict between the two principles. With this clarification, the CPMI guidance on cyber attacks would be effectively defanged.

Posted at 6:00 pm IST on Mon, 17 Nov 2014         permanent link

Categories: risk management, technology

Comments

Liquidity support for central counterparties

The UK seems to be going in the opposite direction to the US in terms of providing liquidity support to clearing corporations or central counterparties (CCPs). In the US, the amendments made by the Dodd-Frank Act make it extremely difficult for the central bank to provide liquidity assistance to any non-bank. On the other hand, the Bank of England on Wednesday extended its discount window not only to all CCPs but also to systemically important broker-dealers (h/t OTC Space). The Bank of England interprets its liquidity provision function very widely:

As the supplier of the economy’s most liquid asset, central bank money, the Bank is able to be a ‘back-stop’ provider of liquidity, and can therefore provide liquidity insurance to the financial system.

My own view has always been that CCPs should have access to the discount window, but only to borrow against the best quality paper (typically, government bonds). If there is a large shortfall in the pay-in, a CCP has to mobilize liquidity in probably less than an hour (before pay-out), and the only entity able to provide large amounts of liquidity at such short notice is the central bank. But if a CCP does not have enough top quality collateral on hand, it should be allowed to fail. A quarter century ago, Ben Bernanke argued that it makes sense for the central bank to stand behind even a failing CCP (Ben S. Bernanke, “Clearing and Settlement during the Crash”, The Review of Financial Studies, Vol. 3, No. 1, pp. 133-151). But I would not go that far. Most jurisdictions today are designing resolution mechanisms to deal with failed CCPs, so this should work even in a crisis situation.

Posted at 11:57 am IST on Sun, 9 Nov 2014         permanent link

Categories: exchanges, risk management

Comments

Economics of counterfeit notes

If you are trying to sell $200 million of nearly flawless counterfeit $20 currency notes, there is only one real buyer – the US government itself. That seems to be the moral of a story in GQ Magazine about Frank Bourassa.

The story is based largely on Bourassa’s version of events and is possibly distorted in many details. However, the story makes it pretty clear that the main challenge in counterfeiting is not in the manufacture, but in the distribution. Yes, there is a minimum scale in the production process – Bourassa claims that a high-end printing press costing only $300,000 was able to achieve high-quality fakes. The challenge that he faced was in buying the correct quality of paper. The story does not say why he did not think of vertical integration by buying a mini paper mill, but I guess that is because it is difficult to operate a paper mill secretly, unlike a printing press, which can be run in a garage without anybody knowing about it. Bourassa was able to proceed because some paper mill somewhere in the world was willing to sell him the paper that he needed.

The whole point of anti-counterfeiting technology is to increase the fixed cost of producing a note without increasing the variable cost too much. So high-quality counterfeiting is not viable unless it is done at scale. But the distribution of fake notes suffers from huge diseconomies of scale – while it is pretty easy to pass off a few fake notes (especially small denomination notes), Bourassa found it difficult to sell a large number of notes even at a 70% discount to face value. He ended up selling his stockpile to the US government itself. The price was his own freedom.

To prevent counterfeiting, the government needs to ensure that at every possible scale of operations, the combined cost of production and distribution exceeds the face value of the note. At low scale, the high fixed production cost makes counterfeiting uneconomical, while at large scale, the high distribution cost is the counterfeiter’s undoing. That is why the only truly successful counterfeiters have been other sovereigns, who have two decisive advantages: first, for them the fixed costs are actually sunk costs, and second, they have access to distribution networks that ordinary counterfeiters cannot dream of.
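A toy calculation makes the point. All parameters below are invented; the code simply checks the deterrence condition that, at every scale, production cost plus distribution cost per note exceeds the face value.

    # Toy model of counterfeiting economics. All numbers are hypothetical.

    FACE = 20.0          # face value per note ($)
    FIXED = 300_000.0    # press, plates and setup
    VARIABLE = 1.50      # security paper and ink per note

    def distribution_cost_per_note(n):
        """Diseconomies of scale in distribution: a few notes can be
        passed off cheaply, bulk volumes need fences who take deep
        discounts (modelled here as a cost per note)."""
        if n <= 1_000:
            return 2.0    # pass them off one by one
        if n <= 100_000:
            return 16.0   # middlemen demand ~80% of face value
        return 19.0       # bulk buyers pay ~5 cents on the dollar

    def profit_per_note(n):
        return FACE - (FIXED / n + VARIABLE + distribution_cost_per_note(n))

    for n in (1_000, 100_000, 10_000_000):
        print(f"n={n:>10,}: profit per note = {profit_per_note(n):8.2f}")

    # A sovereign counterfeiter escapes both constraints: the fixed cost
    # is already sunk, and a state distribution network keeps the
    # distribution cost per note low, turning the profit positive.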

Posted at 9:09 pm IST on Sat, 1 Nov 2014         permanent link

Categories: fraud, monetary policy, technology

Comments

Why is the IMF afraid of negative interest rates?

A few days back, the IMF made a change in its rule for setting interest rates on SDRs (Special Drawing Rights) and set a floor of 5 basis points (0.05%) on this rate. The usual zero lower bound on interest rates does not apply to the SDR as there are no SDR currency notes floating around. The SDR is only a unit of account and to some extent a book entry currency. There is no technical problem with setting the interest rate on the SDR to a substantially negative number like -20%.

In finance theory, there is no conceptual problem with a large negative interest rate. Though we often describe the interest rate (r) as a price, it is actually 1+r and not r itself that is a price: the price of one unit of money today, in terms of money a year later, is 1+r. Prices have to be non-negative, but this only requires that r cannot drop below -100%. With bearer currency in circulation, a zero lower bound (ZLB) comes about because savers have the choice of saving in the form of currency and earning a zero interest rate. Actually, the return on cash is slightly negative (probably close to -0.5%) because of storage (and insurance) costs. As such, the ZLB is not at zero, but somewhere between -0.25% and -0.50%.
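To put the argument of the preceding paragraph in symbols (a sketch, with s denoting the assumed storage and insurance cost of holding currency):

    \[
    1 + r \;\ge\; 0 \quad\Longleftrightarrow\quad r \;\ge\; -100\%,
    \]

    \[
    r \;\ge\; -s \;\approx\; -0.5\% \quad \text{when bearer currency returning } -s \text{ is available.}
    \]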

It has long been understood that a book entry (or mere unit of account) currency like the SDR is not subject to the ZLB at all. Buiter, for example, proposed the use of a parallel electronic currency as a way around the ZLB.

In this context, it is unfortunate that the IMF has succumbed to the fetishism of positive interest rates. At the very least, it has surrendered its potential for thought leadership. At worst, the IMF has shown that it is run by creditor nations seeking to earn a positive return on their savings when the fundamentals do not justify such a return.

Posted at 9:08 pm IST on Wed, 29 Oct 2014         permanent link

Categories: bond markets, bubbles, monetary policy

Comments

Who should interpolate Libor: submitter or administrator?

ICE Benchmark Administration (IBA), the new administrator of Libor, has published a position paper on the future evolution of Libor. The core of the paper is a shift to “a more transaction-based approach for determining LIBOR submissions” and a “more prescriptive calculation methodology”. In this post, I discuss the following IBA proposals regarding interpolation and extrapolation:

Interpolation and extrapolation techniques are currently used where appropriate by benchmark submitters according to formulas they have adopted individually.

We propose that inter/extrapolation should be used:

  1. When a benchmark submitter has no available transactions on which to base its submission for a particular tenor but it does have transaction-derived anchor points for other tenors of that currency, and
  2. If the submitter’s aggregate volume of eligible transactions is less than a minimum level specified by IBA.

To ensure consistency, IBA will issue interpolation formula guidelines

(Para 5.7.8)

In my view, it does not make sense for the submitter to perform interpolations in situations that are sufficiently standardized for the administrator to provide interpolation formulas. It is econometrically much more efficient for the administrator to perform the interpolation. For example, the administrator can compute a weighted average with lower weights on interpolated submissions – ideally, the weights would be a declining function of the width of the interpolation interval. Thus, where many non-interpolated submissions are available, the data from other tenors would be virtually ignored (because of low weights). But where there are no non-interpolated submissions, the data from other tenors would drive the computed value. The administrator can also use non-linear (spline) interpolation across the full range of tenors. If submitters are allowed to interpolate, perverse outcomes are possible. For example, where the yield curve has a strong curvature but only a few submitters provide data for the correct tenor, their submissions will differ sharply from the incorrect (interpolated) submissions of the majority. The standard procedure of ignoring extreme submissions would then discard all the correct data and average all the incorrect submissions!
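Here is a sketch in Python of one such administrator-side scheme, in which a submission interpolated over a wide interval gets a small weight. The functional form and the 30-day decay constant are illustrative assumptions, not IBA's actual methodology.

    # Sketch: weighted averaging with weights that decline in the width
    # of the interpolation interval. Scheme and constants are illustrative.

    def weight(interval_width_days):
        """1.0 for a direct submission; decaying as the interpolation
        interval widens."""
        if interval_width_days == 0:
            return 1.0
        return 1.0 / (1.0 + interval_width_days / 30.0)

    def aggregate(submissions):
        """submissions: list of (rate, interval_width_days) pairs."""
        total = sum(weight(w) for _, w in submissions)
        return sum(r * weight(w) for r, w in submissions) / total

    # Many direct submissions: the interpolated outlier barely matters.
    print(aggregate([(2.00, 0), (2.02, 0), (1.98, 0), (2.50, 180)]))
    # No direct submissions: interpolated data drives the estimate.
    print(aggregate([(2.48, 90), (2.52, 180)]))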

Many people tend to forget that even the computation of an average is an econometric problem that can benefit from the full panoply of econometric techniques. For example, an econometrician might suggest interpolating across submission dates using a Kalman filter, as sketched below. Similarly, covered interest parity considerations would suggest that submissions for Libor in other currencies should be allowed to influence the estimation of Libor in each currency (simultaneous equation rather than single equation estimation). So long as the entire estimation process is defined in open source computer code, I do not see why Libor estimates should not be based on a complex econometric procedure – a Bayesian Vector Auto Regression (VAR) with Garch errors, for example.
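A minimal local-level Kalman filter of this kind (the noise variances are arbitrary illustrative choices):

    # A local-level Kalman filter that carries the benchmark estimate
    # across days with no submissions. Variances q and r are illustrative.
    import numpy as np

    def local_level_filter(y, q=0.01, r=0.04):
        """y: daily raw fixings, with np.nan where no data; q: state
        noise variance; r: observation noise variance."""
        level, var = y[~np.isnan(y)][0], 1.0   # start at first observation
        out = []
        for obs in y:
            var += q                           # predict: uncertainty grows
            if not np.isnan(obs):              # update only when data exists
                gain = var / (var + r)
                level += gain * (obs - level)
                var *= (1.0 - gain)
            out.append(level)
        return np.array(out)

    raw = np.array([2.00, 2.02, np.nan, np.nan, 2.10, 2.08])
    print(local_level_filter(raw).round(3))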

Posted at 12:08 pm IST on Thu, 23 Oct 2014         permanent link

Categories: benchmarks, statistics

Comments

Online finance and SIM-card security risks

For quite some time now, I have been concerned that the SIM card in the mobile phone is becoming the most vulnerable single point of failure in online security. The threat model that I worry about is that somebody steals your mobile, transfers the SIM card to another phone, and goes about quickly resetting the passwords to your email accounts and other sites where you have provided your mobile number as your recovery option. Using these email accounts, the thief then proceeds to reset passwords on various other accounts. This threat model cannot be blocked by having a strong PIN or pattern lock on the phone or by remotely wiping the device. That is because the thief is using your SIM and not your phone.

If the thief knows enough of your personal details (name, date of birth and other identifying information), then with a little bit of social engineering, he could do a lot of damage during the couple of hours that it would take to block the SIM card. Remember that during this period, he can send text messages and WhatsApp messages in your name to facilitate his social engineering. The security issues are made worse by the fact that telecom companies simply do not have the incentives and expertise to perform the authentication that financial entities would do. There have been reports of smart thieves getting duplicate SIM cards issued on the basis of fake police reports and forged identity documents (see my blog post of three years ago).

Modern mobile phones are more secure than the SIM cards that we put inside them. They can be secured not only with PIN and pattern locks but also with fingerprint scanners and face recognition software. Moreover, they support encryption and remote wiping. It is true that SIM cards can be locked with a PIN which has to be entered whenever the phone is switched off and on or the SIM is put into a different mobile. But I am not sure how useful this would be if telecom companies are not very careful while providing the PUK code, which allows the PIN to be reset.

If we assume that the modern mobile phone can be made reasonably secure, then it should be possible to make SIM cards more secure without the inconvenience of entering a SIM card PIN. In the computer world, for example, it is pretty common (in fact recommended) to do remote (SSH) login using only authentication keys without any user-entered passwords. This works with a pair of cryptographic keys – the public key sits in the target machine and the private key in the source machine. A similar system should be possible with SIM cards as well, with the private key sitting on the mobile and backed up on other devices. Moving the SIM to another phone would not work unless the thief could also transfer the private key. Moreover, you would be required to use the backed-up private key to make a request for a SIM replacement. This would keep SIM security completely in your hands and not in the hands of a telecom company that has no incentive to protect your SIM.
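As a sketch of this SSH-style challenge-response (hypothetical as applied to SIMs; the primitives are the Ed25519 functions of the Python cryptography package):

    # Key-pair authentication of a SIM, analogous to SSH public-key
    # login. Hypothetical for telecom use; primitives from the
    # 'cryptography' package.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Enrolment: the private key stays on the handset (and in the user's
    # backup); only the public key is registered with the network.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Authentication: the network sends a random challenge, the handset
    # signs it, and the network verifies against the registered key.
    challenge = os.urandom(32)
    signature = private_key.sign(challenge)

    try:
        public_key.verify(signature, challenge)
        print("accepted: device holds the private key")
    except InvalidSignature:
        print("rejected: the SIM alone proves nothing")

A thief who moves the SIM to another phone fails the challenge, and a replacement SIM would be issued only against a request signed with the backed-up private key.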

This system could be too complex for many users who use a phone only for voice and non-critical communications. It could therefore be an opt-in system for those who use online banking and other services a lot and require a higher degree of security. Financial services firms should also insist on this higher degree of security for high-value transactions.

I am convinced that encryption is our best friend: it protects us against thieves who are adept at social engineering, against greedy corporations who are too careless about our security, and against overreaching governments. The only thing that you are counting on is that hopefully P ≠ NP.

Posted at 4:04 pm IST on Mon, 20 Oct 2014         permanent link

Categories: fraud, technology

Comments

What are banks for?

Much has been written since the Global Financial Crisis about how the modern banking system has become less and less about financing productive investments and more and more about shuffling pieces of paper in speculative trading. Last month, Jordà, Schularick and Taylor wrote an NBER Working Paper, “The Great Mortgaging: Housing Finance, Crises, and Business Cycles”, describing an even more fundamental change in banking during the 20th century. They construct a database of bank credit in advanced economies from 1870 to 2011 and document “an explosion of mortgage lending to households in the last quarter of the 20th century”. They conclude that:

To a large extent the core business model of banks in advanced economies today resembles that of real estate funds: banks are borrowing (short) from the public and capital markets to invest (long) into assets linked to real estate.

Of course, it can be argued that mortgage lending is an economically useful activity to the extent that it allows people early in their career to buy houses. But it is also possible that much of this lending only boosts house prices and does not improve the affordability of houses to any significant extent.

The more important question is why banks have become less important in lending to businesses. One possible answer is that, in this traditional function, they have been disintermediated by capital markets. On the mortgage side, however, banks are perhaps dominant only because, with their Too-Big-To-Fail (TBTF) subsidies, they can afford to take the tail risks that capital markets refuse to take.

I think the Jordà, Schularick and Taylor paper raises the fundamental question of whether advanced economies need banks at all. If regulators imposed the kind of massive capital requirements that Admati and her coauthors have been advocating, and banks were forced to contract, capital markets might well step in to fill the void in the advanced economies. The situation could well be different in emerging economies.

Posted at 5:39 pm IST on Mon, 13 Oct 2014         permanent link

Categories: banks

Comments