Prof. Jayanth R. Varma's Financial Markets Blog

About me       Latest Posts       Posts by Year       Posts by Categories

Why waste taxpayer money to enforce stupid exchange rules?

Early this month, the US SEC passed an order against Behruz and Kenny describing how they fraudulently obtained liquidity rebates from the option exchanges on which they traded. When I read this order, my first reaction was to laugh out loud at the stupidity of the alleged victims: some of the largest option exchanges in the US were running pretty silly liquidity rebate schemes. I can understand that regulators might wish to step in to protect small retail investors against their own stupidity, but if somebody like the CBOE chooses to run a scheme that is basically an open invitation to be gamed, my inclination would be to let them suffer the consequences. For the regulator to go after the alleged offender is to my mind a waste of taxpayers’ money. I do take Stigler’s classic paper on the optimum enforcement of laws quite seriously.

The first charge against Behruz and Kenny is that they earned $2 million of liquidity rebates (and exchange fees avoided) from the option exchanges by misrepresenting “customer” status for their trading accounts. If you are not a broker-dealer, your orders are treated as “customer” orders unless your trading goes above the threshold of 390 orders per day. To reach the 390-order threshold, you would have to enter an order every minute from market open to market close. “Customer” orders do not incur any transaction fees and receive higher liquidity rebates from the exchanges. In practice, trading activity was reviewed quarterly to determine the “customer” status. If the trading was below 390 orders per day during one quarter, then the trading account received “customer” status in the next quarter. To see how silly this is, note that if you did not trade at all in one quarter, you would have “customer” status in the next quarter even if you were pumping thousands of orders a day in that quarter. Why somebody would think up such a stupid implementation of the rule in this day and age is beyond me.
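The silliness of the quarterly review can be sketched in a few lines of Python (a stylized illustration; the function name is mine, and the threshold arithmetic is simply one order per minute of a 6.5-hour trading day):

```python
# Stylized sketch of the quarterly "customer" status determination described
# above. 390 orders/day is one order per minute of a 6.5-hour trading day.
THRESHOLD = 6.5 * 60  # = 390 orders per day

def customer_status_next_quarter(avg_daily_orders_this_quarter: float) -> bool:
    """Next quarter's status depends ONLY on this quarter's activity."""
    return avg_daily_orders_this_quarter < THRESHOLD

# The flaw: a dormant quarter buys "customer" status for the next quarter,
# no matter how heavily you then trade in that next quarter.
assert customer_status_next_quarter(0)         # dormant -> customer next quarter
assert not customer_status_next_quarter(5000)  # heavy -> loses status only later
```

A rule that classified the current quarter's trading on the current quarter's activity would close the loophole; the backward-looking review cannot.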

Behruz and Kenny could have traded thousands of orders a day for six months in the year, and spent their time at the beach for the remaining six months without falling afoul of the SEC. But they were greedier and wanted to trade with “customer” status round the year. So they created two accounts and switched between them each quarter – when they were trading thousands of orders a day in one account, they kept the other account almost dormant so that it would have “customer” status in the next quarter when the first account lost that status. The rules did however require that accounts with the same beneficial ownership be aggregated for determining “customer” status, and Behruz and Kenny misrepresented the beneficial ownership to avoid this aggregation. One way of looking at the SEC action is that it brought offenders to book; the other way of looking at it is that the SEC is encouraging large and sophisticated players to create silly rules and implement them in silly ways, confident that the SEC will clean up after them.

The second charge is that Behruz and Kenny used spoofing orders to earn liquidity rebates from the (Nasdaq OMX) PHLX options exchange. The typical scheme was to enter a series of large hidden All-or-None (AON) orders to buy options at a price that was a penny more than the option’s current best bid. Because they are hidden, these AON orders do not change the best bid. Behruz and Kenny then placed smaller (typically one lot), non-bona fide sell orders at the same price as the AON. These orders were too small to execute against the AON order, but (since they were not hidden) they lowered the option’s best offer by one penny. The idea was to induce genuine sellers to send sell orders at the new best offer. When enough such sell orders arrived to make up the quantity of the AON order, they all executed against the AON. The PHLX in its infinite wisdom regarded the AON orders (that nobody could see) as having provided liquidity to the market. Since the AON buy order was sitting in the order book before the sell orders arrived, the AON was deemed to have provided liquidity while the sell orders were deemed to have taken liquidity. The PHLX gave a liquidity rebate to Behruz and Kenny, and charged a liquidity take fee to the sellers. Behruz and Kenny then turned around to execute the same strategy on the opposite side to dispose of the options that they had just bought – a large hidden AON sell order and a small displayed buy order.
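The economics can be stylized as follows (the per-contract rebate and fee figures are invented for illustration; the only point is that whichever order rested in the book, hidden or not, collects the maker rebate):

```python
# Stylized maker-taker attribution as described above: the resting order is
# deemed the liquidity "maker", even if it was a hidden AON that nobody could
# see. Per-contract amounts are hypothetical.
MAKER_REBATE = 0.25
TAKER_FEE = 0.30

def attribute(resting_qty: int, incoming_qty: int):
    """Return (rebate paid to the resting order, fee charged to the takers)."""
    filled = min(resting_qty, incoming_qty)
    return filled * MAKER_REBATE, filled * TAKER_FEE

# A hidden 500-lot AON buy that waits for 500 lots of induced sell orders
# collects the rebate on the full quantity, while the genuine sellers pay.
rebate, fee = attribute(resting_qty=500, incoming_qty=500)
```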

One can have a debate on whether liquidity rebates and the maker-taker model make sense at all. But there is no debate about the silliness of what PHLX is doing. The idea that a hidden AON buy order that did not even move the best bid offered liquidity to the market is laughable. In a rational market, exchanges that do stupid things should lose money or business or both – the survival of the smartest. The regulators should not be trying to protect the silly and impede this market dynamic.

A recent blog post by the Streetwise Professor makes an even broader but similar argument about spoofing in general. He says that sophisticated and knowledgeable players have the incentive to detect spoofing and take defensive measures that would reduce the frequency and scale of spoofing activity. Therefore regulators need not bother much about it. I tend to agree. Harris’ classic book on market microstructure for practitioners (Trading and Exchanges, OUP, 2002) has a whole chapter on “bluffers” and within that there is a section in particular on how bluffers discipline liquidity providers. We might have invented a more exotic name (spoofing) for what has been known for centuries as bluffing, but the basic principles remain the same – spoofers discipline the HFTs.

Posted at 4:21 pm IST on Thu, 31 Dec 2015         permanent link

Categories: exchanges, regulation, technology

Comments

Operational versus financial creditors redux

A month back when I blogged about Creditor versus Creditor and Creditor versus Debtor, I talked about the potential for conflicts between operational and financial creditors, but did not have any good examples of such battles. I am able to remedy that gap now thanks to the fading fortunes of shale oil producers in the United States. A couple of days ago, Reuters carried a story about three instances where operational creditors had initiated involuntary bankruptcy proceedings against large energy producers to avoid being outmanoeuvred by financial creditors:

Involuntary bankruptcy gives vendors some say over how an energy producer’s dwindling funds are managed, and vendors can use it to try to stop a company from cutting deals that favor lenders or investors.

Such cases also allow creditors to choose the court, and all three of the recent cases have been filed outside the busy bankruptcy court in Wilmington, Delaware. Bankruptcy lawyers in Texas said that may suggest suppliers are worried the court is too eager to approve quick sales of businesses, which tend to favor secured creditors.

A lawyer for the creditors ... said the involuntary bankruptcy prevented the Gulf of Mexico producer from being stripped of all of its value in favor of the company’s owners.

If the facts stated in the story are correct, then standard theory (governance rights vest with residual rights) would imply that the operational creditors should indeed be in charge of the bankruptcy process.

Posted at 7:46 pm IST on Fri, 25 Dec 2015         permanent link

Categories: bankruptcy, law

Comments

Have Indian banks gone berserk on FATCA?

Under the US FATCA Act and the related Inter-Governmental Agreement between India and the US, banks and other financial institutions in India are required to report information about accounts held with them by US persons or entities controlled by US persons. All the documents that I have read are clear that this should not affect Indian citizens who are tax resident in India. Yet I find Indian banks and financial institutions sending out notices to Indian citizens resident in India, demanding complex information and threatening to close their accounts.

I am not a lawyer, but both Rule 114H(3) and the RBI Guidance Notes are very clear that banks should seek information from the account holder only if any of the indicia of foreign citizenship or foreign tax residence are present. The indicia include a foreign place of birth, a foreign mailing or residence address, a foreign telephone number, standing instructions to transfer funds to a foreign account, and a power of attorney granted to a person with a foreign address.

In the cases that I am referring to, the account is fully KYC compliant, the Indian address and identity documents are on record with the bank, and none of the other indicia are present, and still the FATCA notice is being sent. In one case, where the Indian citizen and Indian resident account holder was threatened with closure of account, I spent several minutes struggling to understand the complex form in which information was sought before realizing that the form that had been sent to an individual account holder was the form relevant for legal entities! Surely, a bank should know whether its customer is an individual or a corporate entity. But this elementary confusion had caused the bank to apply the $250,000 threshold applicable to legal entities for identifying “high value” accounts instead of the $1 million threshold applicable to individuals. It is another matter that even if it was classified as a “high value” account, the FATCA notice should not have been sent because the bank knew that none of the indicia were present.

I think tax terrorism by governments in both hemispheres of the world has become so severe that banks would rather harass their customers needlessly and go berserk with enforcing non-existent compliance requirements than risk being held guilty of any shortfall in compliance. Perhaps some customers should sue the banks for sending baseless threatening letters so that banks would start doing what is required by law – neither more nor less.

Posted at 12:23 pm IST on Wed, 23 Dec 2015         permanent link

Categories: banks, international finance, regulation, taxation

Comments

Data access controls within banks

An order last month by the UK Financial Conduct Authority (FCA) against Barclays Bank highlights the problems faced by banks and other financial services firms in controlling the access that their employees have to customer data. I have long heard complaints about this: for example, some bank employees keep telling me that as soon as their bonus is paid to them, other employees with access to the core banking software can find out the exact quantum of this bonus.

Now we have confirmation that when one of the largest banks in the world wants to limit who can see the information about a customer, the best they can do is to go back to paper hard copies stored in a vault.

The FCA order refers to a £1.88 billion transaction that Barclays was doing for a group of ultra-high net worth Politically Exposed Persons (PEPs) who wanted a very high degree of confidentiality:

Prior to Barclays arranging the Transaction, Barclays agreed to enter into the Confidentiality Agreement which sought to keep knowledge of the Clients’ identity restricted to a very limited number of people within Barclays and its advisers. In the event that Barclays breached these confidentiality obligations, it would be required to indemnify the Clients up to £37.7 million. The terms of the Confidentiality Agreement were onerous and were considered by Barclays to be an unprecedented concession for clients who wished to preserve their confidentiality. (Para 4.11)

In view of these confidentiality requirements, Barclays determined that details of the Clients and the Transaction should not be kept on its computer systems. (Para 4.12)

Barclays decided to omit the names of the Clients from its internal electronic systems in order to comply with the terms of the Confidentiality Agreement. As a result, automated checks that would typically have been carried out against the Clients’ names were not undertaken. Such checks would have included regular overnight screenings of client names against sanctions and court order lists. If, for example, the Clients had become the subjects of law enforcement proceedings in any jurisdiction, Barclays could have been unaware of such a development. No adequate alternative manual process for carrying out such checks was established by Barclays. (Para 4.49)

Some documents relating to the Business Relationship were held by Barclays in hard copy in a safe purchased specifically for storing information relating to the Business Relationship. This was Barclays’ alternative to storing the records electronically. While there is nothing inherently wrong with keeping documents in hard copy, they must be easily identifiable and retrievable. However, few people within Barclays knew of the existence and location of the safe. (Para 4.52)

I am sure that the 130,000 clients of HSBC Private Bank in Switzerland (now accused of evading taxes in their home countries) wish that their data too had been kept in paper form in a vault beyond the reach of Falciani’s hacking skills.

More seriously, banks need to rethink the way they maintain customer confidentiality. With anywhere banking, far too many employees have access to the complete data of every customer. A lot of progress can be made with some very simple access control principles:

  1. Every access to customer information must be logged to provide a detailed audit trail of who, when, what and why. Ideally, the customer should have access to a suitably anonymized form of these logs.

  2. Every access must require justification in terms of a specific task falling within the accessor's job profile.

  3. Every access request should only result in the minimal information required to complete the task for which the access is requested.

For example, a customer comes to a branch (assuming such archaic things still exist) for a cash withdrawal. The cashier requests access by providing details of the requested withdrawal; and the system accepts the request because it is part of the cashier's job to process these withdrawals (Principle #2). The system responds with only a yes or a no: either the customer has sufficient balance to allow this withdrawal or not. The actual balance is not provided to the cashier (Principle #3). It should be emphasized that without Principles #1 and #2, the cashier could make repeated queries with different hypothetical withdrawal amounts and guess the true balance within a relatively small range using what computer scientists would recognize as a binary search method.
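The binary search attack can be made concrete with a short sketch (hypothetical code; `can_withdraw` stands in for the yes/no answer the system gives the cashier):

```python
def guess_balance(can_withdraw, lo=0, hi=10_000_000):
    """Recover the exact balance from yes/no answers alone via a textbook
    binary search: about log2(hi - lo) queries, i.e. ~24 queries for a
    10-million range."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if can_withdraw(mid):  # "yes" means balance >= mid
            lo = mid
        else:                  # "no" means balance < mid
            hi = mid - 1
    return lo

# Simulated core-banking yes/no response for a hypothetical account.
true_balance = 137_452
queries = []
def oracle(amount):
    queries.append(amount)
    return amount <= true_balance

assert guess_balance(oracle) == true_balance
assert len(queries) <= 24  # without audit logging, nobody ever notices
```

This is exactly why Principle #3 alone is not enough: the logging and justification requirements of Principles #1 and #2 are what make such repeated probing visible and stoppable.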

In my view, access controls are easy to implement if banks decide to prioritize (or regulators decide to enforce) customer confidentiality. However, access controls have their limits, and cryptographic tools are indispensable for more complex objectives. Banks need to promote further research into these tools in order to make them usable for their needs.

I think the time has come for consumers and regulators to start demanding that banks pay greater attention to customer confidentiality. Actually, there is a similar problem in regulatory and self-regulatory organizations. For example, the surveillance staff in a stock exchange (and in the capital market regulator) have access to too much information and there is immense scope for abuse of this information. Mathematics (in the form of cryptography) gives us the tools required to solve many of these problems; we just need the will to use these tools.

Posted at 5:04 pm IST on Sun, 20 Dec 2015         permanent link

Categories: banks, technology

Comments

HBOS: An old fashioned bank failure

Most of the bank failures of the Global Financial Crisis involved complex products or an excessive reliance on markets rather than good old banking relationships. The HBOS failure as described in last month's 400-page report by the UK regulators (PRA and FCA) is quite different. One could almost say that this was a German- or Japanese-style relationship bank.

The report describes the approach of the Corporate Division where most of the losses arose:

The often-quoted approach of the division was to be a relationship bank that would ‘lend through the cycle’. Elsewhere the division’s approach had been called ‘counter-cyclical’. This was described as standing by and supporting existing customers through difficult times, while continuing to lend to those good opportunities that could be found. The division claimed it had a deep knowledge of the customers and markets in which it operated, which would enable it to pursue this approach with minimal threat to the Group. It was an approach that was felt to have served BoS well in the early 1990s downturn. (Para 274)

What could go wrong with such old fashioned banking? The answer is very simple:

Taking into account renting, hotels and construction, the firm’s overall exposure to property and related assets increases to £68 billion or 56% of the portfolio. (Para 285)

And in some ways, relationship banking made things worse:

The top 30 exposures included a number of individual high-profile businessmen. Many of these had been customers of the division for many years, some going back to the BoS pre-merger. True to the division’s banking philosophy, it had supported these customers as they grew and expanded their businesses. However, business growth and expansion sometimes meant a change in business model to become significant property investors; not necessarily the original core business and expertise of the borrower. In the crisis, a number of these businessmen, though not all, incurred losses on their property investments. (Para 318)

When you as a bank lend a big chunk of your balance sheet into a bubble, it does not matter whether you are a transaction bank or a relationship bank: you are well on your way to failure. (If you do not want to jump to conclusions based on one bank, a recent BIS Working Paper on US commercial banks studies all bank failures in the US during the Great Recession and comes to a very similar conclusion).

Posted at 10:04 pm IST on Sat, 12 Dec 2015         permanent link

Categories: banks, failure, investigation

Comments

In the sister blog and on Twitter during October and November 2015

The following posts appeared on the sister blog (on Computing) during the last two months.

Tweets during the last two months (other than blog post tweets):

Posted at 1:41 pm IST on Tue, 1 Dec 2015         permanent link

Categories: technology

Comments

Potential self-trades are worse than actual self-trades

Update: While linking to Ajay Shah's blog for a summary of global regulatory regimes on self trades, I failed to mention that the particular post that I was referring to was authored not by Ajay Shah, but by Nidhi Aggarwal, Chirag Anand, Shefali Malhotra, and Bhargavi Zaveri.

Imagine that you are bidding at an auction and after a few rounds, most bidders have dropped out and you are left bidding against one competing bidder who pushes you to a very high winning bid before giving up. Much later you find that the competing bidder who forced you to pay close to your reservation price was an accomplice of the seller. You would certainly regard that as fraudulent; and many well-run auction houses have regulations preventing it. Observe that the seller did not actually sell to himself; in fact there would have been no fraud (and no profit to the seller) if he actually did so. The seller defrauded you not by an actual (disguised) self-trade but by a (disguised) potential self-trade that did not actually happen. In fact, the best of auction houses do not prohibit actual self-trades: when the auction does not achieve the seller’s (undisclosed) reserve price, they allow the item to be “bought in” (the seller effectively buys the item from himself). So the lesson from well-run auction houses is that potential self-trades (which do not happen) are much more dangerous than actual self-trades.

In the financial markets, we have lost sight of this basic intuition and focused on preventing actual self-trades instead of limiting potential self-trades. India goes overboard on this by regarding all self-trades as per se abusive. Most other countries also frown on self-trades but do not penalize bona fide self-trades; they take action only against self-trades that are manipulative in nature. However, they too regard frequent self-trades as suggestive of manipulative intent (see Ajay Shah for a nice summary of these regulatory regimes). Many exchanges and commercial software around the world therefore now provide automated methods of preventing self-trades: when an incoming order by an entity would execute against a pre-existing order on the opposite side by the same entity, these automated procedures cancel either the incoming order or the resting order or both.
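The prevention logic these systems implement can be sketched minimally (the class and function names are mine, and the "cancel resting" policy shown is only one of the variants; exchanges also offer "cancel incoming" and "cancel both"):

```python
from dataclasses import dataclass

@dataclass
class Order:
    entity: str
    side: str      # "buy" or "sell"
    price: float

def stp_cancellations(incoming: Order, book: list) -> list:
    """Return the same-entity resting orders that the incoming order would
    execute against -- the orders a self-trade prevention engine cancels
    under a 'cancel resting' policy."""
    opposite = "sell" if incoming.side == "buy" else "buy"
    def crosses(resting):
        return (incoming.price >= resting.price) if incoming.side == "buy" \
               else (incoming.price <= resting.price)
    return [r for r in book
            if r.side == opposite and r.entity == incoming.entity and crosses(r)]

book = [Order("TraderA", "sell", 10.00), Order("Other", "sell", 10.00)]
cancelled = stp_cancellations(Order("TraderA", "buy", 10.00), book)
# Only the trader's own resting order is cancelled -- and it is precisely
# this automatic, near-simultaneous cancellation that can be exploited.
```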

A little reflection on the auction example would show that the whole idea of automated self-trade prevention is an utterly misguided response to an even more misguided regulatory regime. Manipulation does not happen when the trade is executed: it happens when the order is entered into the system. The first sign that the regulators are beginning to understand this is the complaint that the US Commodity Futures Trading Commission (CFTC) filed against Oystacher and others last month. Para 53 of the complaint states:

Oystacher and 3 Red manually traded these futures markets, using a commercially available trading platform, which included a function called “avoid orders that cross.” The purpose of this function is to prevent a trader’s own orders from matching with one another. Defendants exploited this functionality to place orders which automatically and almost simultaneously canceled existing orders on the opposite side of the market (that would have matched with the new orders) and thereby effectuated their manipulative and deceptive spoofing scheme ...

Far from preventing manipulation, automated self-trade prevention software is actually facilitating market manipulation. This might appear counter intuitive to many regulators, but is not at all surprising when one thinks through the auction example.

Posted at 1:47 am IST on Mon, 30 Nov 2015         permanent link

Categories: exchanges, regulation

Comments

Creditor versus Creditor and Creditor versus Debtor

In India, for far too long, bankruptcy has been a battle between creditor and debtor with the dice loaded against the creditor. In its report submitted earlier this month, the Bankruptcy Law Reforms Committee (BLRC) proposes to change all this with a fast track process that puts creditors in charge. It appears to me however that the BLRC ignores the fact that in well-functioning bankruptcy regimes, the fight is almost entirely creditor versus creditor: it is very much like the familiar scene in the Savannah where cheetahs, lions, hyenas and vultures can be seen fighting over the carcass, which has no say in the matter.

The BLRC ignores this inter-creditor conflict completely and treats unsecured financial creditors as a homogeneous group; it believes that everything can be decided by a 75% vote of the Creditors Committee. In practice, this is not the case. Unsecured financial creditors can be senior or junior, and multiple levels of subordination are possible. Moreover, the bankruptcy of any large corporate entity involves several levels of holding companies and subsidiary companies, which also creates an implicit subordination among different creditors, made more complex by inter-company guarantees.

Consider, for example, the recommendation of the BLRC that:

The evaluation of these proposals come under matters of business. The selection of the best proposal is therefore left to the creditors committee which form the board of the erstwhile entity in liquidation. (p 100)

If the creditors are homogeneous, this makes eminent sense. The creditors are the players with skin in the game and they should take the business decisions. The situation is much more complex and messy with heterogeneous creditors. Suppose for example that a company has 60 of senior debt and 40 of junior debt and that the business is likely to be sold for something in the range of 40-50. In this situation, the junior creditors should not have any vote at all: like the equity shareholders, they too are part of the carcass in the Savannah which others are fighting over. On the other hand, if the expected sale proceeds are 70-80, then the senior creditors should not have a vote at all. The senior creditors have no skin in the game because it makes absolutely no difference to them whether the sale fetches 70 or 80; they get their money in any case. They are like the lion that has had its fill and leaves it to lesser mortals to fight over what is left of the carcass.
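The 60/40 example can be made concrete with a strict-priority waterfall (a sketch using the numbers in the text):

```python
def waterfall(proceeds, senior=60, junior=40):
    """Strict-priority payout: senior debt first, then junior, equity last."""
    to_senior = min(proceeds, senior)
    to_junior = min(proceeds - to_senior, junior)
    return to_senior, to_junior

# Expected proceeds of 40-50: the junior tranche gets nothing either way,
# so it has no skin in the game over which proposal is chosen.
assert waterfall(40) == (40, 0)
assert waterfall(50) == (50, 0)
# Expected proceeds of 70-80: the senior tranche is paid in full either way,
# so now it is the seniors who have no skin in the game.
assert waterfall(70) == (60, 10)
assert waterfall(80) == (60, 20)
```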

The situation is made more complex by the fact that in practice the value of the proposals is not certain, and the variance matters as much as the expected value. A junior creditor’s position is often similar to that of the holder of an out of the money option – it tends to prefer proposals that are highly risky. Much of the upside of a risky sale plan may flow to the junior creditor, while most of the downside may be to the detriment of the senior creditor.

Another recommendation of the BLRC that I am uneasy about is the stipulation that operational creditors should be excluded from the decision making:

The Committee concluded that, for the process to be rapid and efficient, the Code will provide that the creditors committee should be restricted to only the financial creditors. (p 84)

Suppose for example that Volkswagen’s liabilities to its cheated customers were so large as to push it into bankruptcy. Would it make sense not to give these “operational creditors” a seat at the table? What about the bankruptcy of an electric utility whose nuclear reactor has suffered a core meltdown?

Posted at 5:59 pm IST on Mon, 16 Nov 2015         permanent link

Categories: bankruptcy

Comments

Distrust and cross-check

I have piece in today’s Mint arguing that the Volkswagen emission scandal is a wake-up call for all financial regulators worldwide:


The implications of big firms such as Volkswagen using software to cheat their customers go far beyond a few million diesel cars

The Volkswagen emissions scandal challenges us to move beyond Ronald Reagan’s favourite Russian proverb “trust but verify” to a more sceptical attitude: “distrust and cross-check”.

A modern car is reported to contain a hundred million lines of code to deliver optimised performance. But we learned last month that all this software can also be used to cheat. Volkswagen had cheating software in its diesel cars so that the car appeared to meet emission standards in the lab while switching off the emission controls to deliver fuel economy on the road.

The shocking thing about Volkswagen is that (unlike, say, Enron) it is not perceived to be a significantly more unethical company than its peers. Perhaps the interposition of software makes the cheating impersonal, and allows managers to psychologically distance themselves from the crime. Individuals who might hesitate to cheat personally might have fewer compunctions in authorizing the creation of software that cheats.

The implications of big corporations using software to cheat their customers go far beyond a few million diesel cars. We are forced to ask whether, after Volkswagen, any corporate software can be trusted. In this article, I explore the implications of distrusting the software used by big corporations in the financial sector:

Can you trust your bank’s software to calculate the interest on your checking account correctly? Or might the software be programmed to check your Facebook and LinkedIn profiles to deduce that you are not the kind of person who checks bank statements meticulously, and then switch on a module that computes the interest due to you at a lower rate?

Can you be sure that the stock exchange is implementing price-time priority rules correctly or might the software in the order matching engine be programmed to favour particular clients?

Can you trust your mutual funds’ software to calculate Net Asset Value (NAV) correctly? Or might the software be programmed to understate the NAV on days when there are large redemptions (and the mutual fund is paying out the NAV) while overstating the NAV on days of large inflows when the mutual fund is receiving the NAV?

Can you be sure that your credit card issuer has not programmed the software to deliberately add surcharges to your purchases? Perhaps, if you complain, the surcharges will be promptly reversed, but the issuer makes a profit from those who do not complain.

Can you trust the financials of a large corporation? Or could the accounting software be smart enough to figure out that it is the auditor who has logged in, and accordingly display a set of numbers different from what the management sees?

After Volkswagen, these fears can no longer be dismissed as mere paranoia. The question today is how can we, as individuals, protect ourselves against software-enabled corporate cheating? The answer lies in open source software and open data. Computing is cheap, and these days each of us walks around with a computer in our pocket (though, we choose to call it a smartphone instead of a computer). Each individual can, therefore, well afford to cross-check every computation if (a) the requisite data is accessible in machine-readable form, and (b) the applicable rules of computation are available in the form of open source software.

Financial sector regulations today require both the data and the rules to be disclosed to the consumers. What they do not require is that the disclosures be computer-friendly. I often receive PDF files from which it is very hard to extract data for further processing. Even where a bank allows me to download data as a text or CSV (comma-separated value) file, the column order and format change often, and the processing code needs to be modified every time this happens. This must change. It must be mandatory to provide data in a standard format or in an extensible format like XML. Since the data anyway comes from a computer database, the bank or financial firm can provide machine-readable data to the consumer at negligible cost.
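As a toy illustration of what a stable machine-readable format makes possible (the column names and figures here are made up), a consumer could recompute the interest credited on each statement line and compare it with the bank's figure:

```python
import csv
import io

def recompute_interest(csv_text: str) -> list:
    """Recompute simple interest per statement row: balance * rate * days/365."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [round(float(r["balance"]) * float(r["rate_pct"]) / 100
                  * int(r["days"]) / 365, 2)
            for r in rows]

# A hypothetical statement in a stable CSV layout; because the column names
# never change, this cross-checking code never needs to be rewritten.
statement = ("date,balance,days,rate_pct\n"
             "2015-01-01,100000,90,4.0\n"
             "2015-04-01,100986,91,4.0\n")
computed = recompute_interest(statement)
# Compare `computed` against the interest lines the bank actually credited.
```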

When it comes to rules, disclosure is in the form of several pages of fine print legalese. Since the financial firm anyway has to implement rules in computer code, there is little cost to requiring that computer code be freely made available to the consumer. It could be Python code as the US SEC proposed five years ago in the context of mortgage-backed securities (http://www.sec.gov/rules/proposed/2010/33-9117.pdf), or it could be in any other open source language that does not require the consumer to buy an expensive compiler to run the code.

In the battle between the consumer and the corporation, the computer is the consumer’s best friend. Of course, the big corporation has far more powerful computers than you and I do, but it needs to process data of millions of consumers in real time. You and I need to process only one person’s data and that too at some leisure and so the scales are roughly balanced if only the regulators mandate that corporate computers start talking to consumers’ computers.

Volkswagen is a wake-up call for all financial regulators worldwide. I hope they heed the call.

Posted at 11:38 am IST on Mon, 19 Oct 2015         permanent link

Categories: regulation

Comments

Twitter or Newswires: Are regulators behind the curve?

Last week, I read two stories that made me wonder how regulators are far behind the curve when it comes to new media.

First, Business Insider reported that after the newswire hacking scandal (which I blogged about last month), Goldman Sachs was considering announcing its earnings on Twitter instead of on the newswires. Of course, such reports are often speculative and nothing may come of it, but it indicates that at least some organizations are taking the new media seriously.

Second, was an amendment to the New York Stock Exchange (NYSE) rules on how companies should release news to the public (h/t CLS Blue Sky Blog):

Currently, section 202.06(C) ... on the best way to release material news ... is outdated as it refers to, among other things, the release of news by telephone, facsimile or hand delivery. Instead, the Exchange proposes ... that listed companies releasing material news should either (i) include the news in a Form 8-K or other Commission filing, or (ii) issue the news in a press release to the major news wire services.

The regulators have finally decided to shift from obsolete media to the old media; the new media is not even on the horizon.

Posted at 9:41 pm IST on Tue, 13 Oct 2015         permanent link

Categories: regulation, technology

Comments

Interview on Bloomberg TV

Bloomberg TV carried an interview with me last week. The video is available at the channel’s website. Among several other things, the interview also covered the Amtek Auto episode that I have blogged about in the past. I argued that Amtek Auto is unlikely to be the last episode of distressed corporate bonds in mutual fund portfolios, and we need to be more proactive in future.

Posted at 6:09 pm IST on Mon, 12 Oct 2015         permanent link

Categories: bond markets, mutual funds, regulation

Comments

Are large fund managers problematic?

Last month, I read four seemingly unrelated papers which all point towards problems posed by large fund managers.

  1. Ben-David, Franzoni, Moussawi and Sedunov (The Granular Nature of Large Institutional Investors) show that the stocks owned by large institutions exhibit stronger price inefficiency and are also more volatile. They also study the impact of Blackrock’s acquisition of Barclays Global Investors (which the authors, for some strange reason, choose to identify only as “a mega-merger between two large institutional investors that took place at the end of 2009”). Post merger, the ownership of stocks that had been spread across two fund managers became concentrated in one fund manager. The interaction term in their regression results shows that this concentration increased the volatility of the stocks concerned. On the mispricing front, they show that the autocorrelation of returns is higher for stocks that are held by large institutional investors, and that stocks with common ownership by large institutions display abnormal co-movement. They also show that negative news about the fund manager (an increase in its CDS spread) leads to an increase in the volatility of stocks owned by that fund.

  2. Israeli, Lee and Sridharan (Is There a Dark Side to Exchange Traded Funds (ETFs)? An Information Perspective) find that stocks that are owned by Exchange Traded Funds (ETFs) suffer a decline in pricing efficiency: higher trading costs (measured as bid-ask spreads and price impact of trades); higher co-movement with general market and industry returns; a decline in the predictive power of current returns for future earnings; and a decline in the number of analysts covering the firm. They hypothesize that ETF ownership reduces the supply of securities available for trade, as well as the number of uninformed traders willing to trade these securities. Much the same factors may be behind the results found by Ben-David, Franzoni, Moussawi and Sedunov.

  3. Clare, Nitzsche and Motson (Are Investors Better Off with Small Hedge Funds in Times of Crisis?) argue that on average investors were better off investing with a small hedge fund instead of a large one in times of crisis (the dot com bust and the global financial crisis). They speculate that bigger hedge funds might attract more hot money (fund of funds) which might lead to large redemptions during crises. Smaller hedge funds might have less flighty investors and more stringent gating arrangements. Smaller hedge funds might also have lower beta portfolios.

  4. Elhauge (Horizontal Shareholding as an Antitrust Violation) focuses on problems in the real economy rather than in the financial markets. The argument is that when a common set of large institutions own significant shares in firms that are horizontal competitors in a concentrated product market, these firms are likely to behave anticompetitively. Elhauge discusses the DuPont-Monsanta situation to illustrate his argument. The top four shareholders of DuPont are also four of the top five shareholders in Monsanto, and they own nearly 20% of both companies. The fifth largest shareholder of DuPont, the Trian Fund, which did not own significant shares in Monsanto, launched a proxy contest criticizing DuPont management for failing to maximize DuPont profits. In particular, Trian complained that DuPont entered into a reverse payment patent settlement with Monsanto whereby, instead of competing, DuPont paid Monsanto for a license to use Monsanto’s patent. Trian’s proxy contest failed because it was not supported by the four top shareholders of DuPont who stood to gain from maximizing the joint profits of DuPont and Monsanto. I thought it might be useful for the author to compare this situation with the cartelization promoted by the big investment banks in 19th century US or by the big banks in early 20th century Germany or Japan.

Posted at 4:50 pm IST on Thu, 8 Oct 2015         permanent link

Categories: mutual funds

Comments

Negative interest rates wreak havoc with finance textbooks

By assuming non-negative interest rates, finance textbooks arrive at many results that are false in a negative-rates world. Finance theory does not rule out negative rates – theory requires only that bond prices be non-negative, and this only prevents interest rates from dropping below −100%. In practice too, early 2015 saw interest rates go negative in many countries. The BIS 2015 Annual Report (Graph II.6, page 32) shows negative ten-year yields in Switzerland, and negative five-year yields in Germany, France, Denmark and Sweden in April 2015.
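
The −100% floor follows directly from the standard zero-coupon pricing formula P = F/(1+r)^n: the price stays positive for any yield above −1, and a negative yield simply prices the bond above par. A quick sketch (the numbers are illustrative, loosely in the range of the Swiss yields mentioned above):

```python
def zero_price(face: float, ytm: float, years: int) -> float:
    """Standard zero-coupon discounting; well defined for any ytm > -1."""
    return face / (1 + ytm) ** years

# A negative yield produces a price above par -- perfectly legal maths.
price_negative = zero_price(100.0, -0.005, 5)   # ytm of -0.5%
price_positive = zero_price(100.0, 0.05, 5)     # ytm of +5%, below par
```

Nothing in the formula breaks until the yield hits −100%, at which point the price would diverge; it is the textbook results built on r ≥ 0, not the pricing itself, that fail.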

Let us take a look at how many textbook results are no longer valid in this world:

Posted at 1:20 pm IST on Sun, 4 Oct 2015         permanent link

Categories: derivatives, monetary policy

Comments

US corporate disclosure delays

Corporate disclosure rules in the US still permit long delays more appropriate to a bygone age, before technology sped up everything from stock trading to instant messaging. Cohen, Jackson and Mitts wrote a paper earlier this month arguing that substantial insider trading occurs during the four-business-day window available to companies to disclose material events. The paper studied over forty thousand trades by insiders that occurred on or after the event date and before the filing date; the analysis demonstrates that these trades (which may be quite legal) were highly profitable.

Cohen, Jackson and Mitts also document that companies do usually disclose information much earlier than the legal deadline: about half of the disclosures are made on the same day, and large firms are even more prompt in their filing. But nearly 15% of all filings use the full four-day delay that is available. In the early 2000s, after the Enron scandal, the US SEC tried to reduce the window to two days, but gave up in the face of intense opposition. I think the SEC should require each company to monitor the median delay between the event and the filing, and to provide an explanation if this median delay exceeds one day. Since there are on average about four filings per company per year, it should be feasible to monitor timeliness over a rolling three-year period.
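
The suggested monitoring rule is simple enough to state in a few lines of code. A sketch with invented filing delays (days from event date to filing) over a rolling three-year window of roughly four filings per year:

```python
import statistics

# Invented filing delays, in days, for one hypothetical company over a
# rolling three-year window (~4 material-event filings per year).
delays = [0, 0, 1, 0, 2, 0, 0, 4, 1, 0, 0, 3]

median_delay = statistics.median(delays)

# The suggested trigger: an explanation is required only if the rolling
# median exceeds one day, so occasional slow filings are tolerated.
needs_explanation = median_delay > 1
```

The median (rather than the mean) is the natural statistic here: a company that files promptly most of the time is not penalized for a few unavoidable stragglers, while a habitual late filer is caught.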

Another troubling thing about the US system is the use of press releases as the primary means of disclosure. Last month, the SEC filed a complaint against a group of traders and hackers who stole corporate press releases from the web site of the newswire agencies before their public release. What I found most disturbing about this case was that the SEC went out of its way to emphasize that the newswire agencies were not at fault; in fact, the SEC redacted the names of the agencies (though it was not at all hard for the media to identify them). Companies disclose material events to a newswire several hours before the scheduled time of public release of this information by the newswire; the newswire agencies are not regulated by the SEC; they are not required to encrypt market sensitive data during this interregnum; there are no standards on the computer security measures that the newswires are required to take during this period; a group of relatively unsophisticated hackers had no difficulty hacking the newswire websites repeatedly over a period of five years. And the SEC thinks that no changes are required in this anachronistic system.

Posted at 4:07 pm IST on Sun, 27 Sep 2015         permanent link

Categories: regulation

Comments

Could payments banks eat the private banks' CASA lunch?

The Reserve Bank of India (RBI) has granted “in principle” approval to eleven new payment banks and has also promised to license more in future. Many of the licensees could prove to be fierce competitors because of their deep pockets and strong distribution networks. For the incumbent banks, the most intense competition from the new entrants will probably be for the highly profitable Current and Savings Accounts (CASA) deposits which are primarily meant for payments. And it is here that the complacency of incumbent banks could provide an opening to the new payment banks.

In the late 1990s and early 2000s, new generation private banks innovated on technology and customer service and gained significant market share from the public sector banks. However, in recent years, some complacency seems to have set in; customer service has arguably deteriorated even as fees have escalated. Public sector banks have caught up with them on ATM and online channels; and in any case these channels are rapidly being overtaken by mobile and other platforms. In fact, India may not need any more ATMs at all.

In this competitive landscape, payment banks could gain significant market share if they are sufficiently innovative and provide better customer service than the incumbents. Unlike mainstream banks which have to worry about investments and advances and lots of other things, payment banks can be totally focused on serving retail customers. Since their survival would depend on this sharp focus, there is every likelihood that they would turn out to be more nimble and innovative in this segment.

The ₹100,000 limit on balances at the payment banks means that initially it would be the rural CASA that would be at risk. But if payment banks do a good job, the limit may be raised to a much larger level (maybe ₹500,000) over a few years. At that point, urban CASA will also be at risk of migration. It will be easy for RBI to raise the limit because the balances have to be invested in Government Securities and so customer money is subject only to operational risk.

eWallets could prove to be another competitive weapon in attacking the urban CASA segment. Large segments of the Indian population are uncomfortable with online credit card usage and with netbanking. A few years ago, eCommerce firms in India used Cash on Delivery (COD) to gain acceptance. However, COD is not scalable and it is breaking down for various reasons. In the last year or so, eWallets have begun to replace COD, and these too could pose a threat to traditional payment services. All banks are trying to launch eWallets and mobile banking apps, but I am not sure that traditional banks have a competitive advantage here. In fact, a customer who is worried about online security might well prefer to have an eWallet with a small balance for online transactions instead of exposing his or her main bank account to the internet. In this context, the payment banks may find that the ₹100,000 limit does not pose a competitive disadvantage at all.

All this is of course good news for the customer.

Posted at 6:56 pm IST on Wed, 9 Sep 2015         permanent link

Categories: banks, regulation

Comments

Amtek Auto and Mutual Funds: Use Side Pockets, not Gates

JPMorgan Mutual Fund has gated (restricted redemptions from) two of its debt funds which have large exposure to Amtek Auto, which is in distress. A gate is better than nothing, but it is inferior to a side pocket. I would like to quote from a proposal that I made in a blog post in October 2008, when the NAVs of many debt-oriented mutual funds were not very credible:

At the very least what is required today is a partial redemption freeze to ensure that nobody is able to redeem units of mutual funds at above the true NAV of the fund. Anybody who wants to redeem should be paid 70% or 80% of the published NAV under the assumption that the true NAV would not be below this. The balance should be paid only after the true NAV is credibly determined through asset sales.

Unlike the generalized distress of 2008, what JPMorgan funds are facing today is distress limited to a single large exposure. According to the July portfolio statement, Amtek Auto was about 15% of the NAV of the Short Term Income Fund. Even if this is valued at zero, the fund can pay out 85% of the NAV to everybody. (For the India Treasury Fund, Amtek is only 5% of NAV, so the fund can pay out 95%). Essentially, my proposal is what is known in the hedge fund world as a side pocket: the holding in Amtek Auto should go into a separate side pocket until it is liquidated and the value is realized. The rest of the money would remain in the normal mutual fund which would be open for unrestricted redemption (as well as for fresh investment).
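
The arithmetic of the side-pocket proposal is simple; a minimal sketch (treating the impaired holding conservatively at zero until it is actually liquidated, as the post suggests):

```python
def side_pocket_split(nav_per_unit: float, impaired_fraction: float):
    """Split a unit's NAV into a freely redeemable part and a side pocket.

    The impaired exposure is carried at zero until liquidation; the
    remainder stays in the open fund at a credible NAV.
    """
    liquid = nav_per_unit * (1 - impaired_fraction)
    pocket = nav_per_unit * impaired_fraction
    return liquid, pocket

# Short Term Income Fund: ~15% of NAV in Amtek -> 85% redeemable now.
liquid_sti, pocket_sti = side_pocket_split(100.0, 0.15)

# India Treasury Fund: ~5% of NAV in Amtek -> 95% redeemable now.
liquid_itf, pocket_itf = side_pocket_split(100.0, 0.05)
```

Whatever the Amtek paper eventually fetches is then distributed pro rata from the side pocket, so no redeeming investor is paid out of the remaining investors' money.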

The gate has two big disadvantages:

  1. The gate is not total: redemptions are not stopped, they are only restricted to 1%. This means that some redemptions are taking place at a wrong value. The money being paid out to this 1% is partly money taken from the remaining investors.

  2. The gate rewards the mutual fund for its own incompetence. A fund which has made a bad investment choice would be punished in the market place by a wave of redemptions. That is the competitive dynamic that encourages mutual funds to perform due diligence for their investment. A gate stops the redemption and shields the fund from this punishment.

It is possible that the mutual fund offer document might not contain a provision for a side pocket. But the Securities and Exchange Board of India (SEBI) as the regulator certainly has the power to issue directions to the fund to use this method. Let us see whether it acts and acts quickly.

Posted at 9:59 pm IST on Sun, 6 Sep 2015         permanent link

Categories: bond markets, mutual funds, regulation

Comments

In the sister blog and on Twitter during August 2015

The following posts appeared on the sister blog (on Computing) last month.

Tweets during the last month (other than blog post tweets):

Posted at 12:35 pm IST on Sat, 5 Sep 2015         permanent link

Categories: technology

Comments

SMS does not provide true two factor authentication

I am a strong supporter of two factor authentication (2FA), and I welcomed the idea of a one time password sent by SMS when it was introduced in India a few years ago. But gradually I have become disillusioned because SMS is not true 2FA.

Authentication is a problem that humanity has faced for centuries; and long before computers were invented, several authentication methods were developed and adopted. Two widely used methods are nicely illustrated by two different stories in the centuries-old collection Arabian Nights. The first method is to authenticate with something that you know, like Open Sesame in Ali Baba and the Forty Thieves. The Ali Baba story describes how the secret password is easily stolen during the process of authentication itself. What is worse is that while we would quickly detect the theft of a physical object, the theft of a secret password goes undetected unless the thief does something stupid, like Ali Baba’s brother did in the story.

The second method is to authenticate with something that you have, and its problems are eloquently portrayed in the story about Aladdin’s Wonderful Lamp. In the Aladdin story, the lamp changes hands involuntarily at least four times; physical keys or hardware tokens can also be stolen. The problem is that while you can carry “what you know” with you all the time (if you have committed it to memory), you cannot carry “what you have” with you all the time. When you leave it behind, you may (like Aladdin) find on your return that it is gone.

Clearly, the two methods – “what you know” and “what you have” – are complementary in that one is strong where the other is weak. Naturally, centuries ago, people came up with the idea of combining the two methods. This is the core idea of 2FA – you authenticate with something that you have and with something that you know. An interesting example of 2FA can be found in the Indian epic, the Ramayana. There is an episode in this epic where Rama sends a messenger (Hanuman) to his wife Sita. Since Hanuman was previously unknown to Sita, there was clearly a problem of authentication to be solved. Rama gives some personal ornaments to Hanuman which he could show to Sita for the “what you have” part of 2FA. But Rama does not rely on this alone. He also narrates some incidents known only to Rama and Sita to provide the “what you know” part of 2FA. The Ramayana records that the authentication was successful in a hostile environment where Sita regarded everything with suspicion (because her captors were adept in various forms of sorcery).

In the digital world, 2FA relies on a password for the “what you know” part and some piece of hardware for the “what you have” part. In high value applications, a hardware token – a kind of electronic key – is common. While it is vulnerable to man-in-the-middle (MitM) attacks, I like to think of this as reasonably secure (maybe I am just deluded). The kind of person who can steal your password is probably sitting in Nigeria or Ukraine, while the person who can steal your hardware must be living relatively close by. The skill sets required for the two thefts are quite different, and it is unlikely that the same person would have both. The few people like Richard Feynman who are equally good at picking locks and cracking the secrets of the universe hopefully have better things to do in life than hack into your bank account.

The SMS based OTP has emerged as the poor man’s substitute for a hardware token. The bank sends you a text message with a one time password which you type in on the web site as the second factor in the authentication. Intuitively, your mobile phone becomes the “what you have” part of 2FA.

Unfortunately, this intuition is all wrong – horribly wrong. The SMS which the bank sends is sent to your mobile number and not to your mobile phone. This might appear to be an exercise in hair splitting, but it is very important. The problem is that while my mobile phone is something that I have, my SIM card and mobile connection are both in the telecom operator’s hands and not in mine.

There have been cases around the world where somebody claiming to be you convinces the telecom operator that you have lost your mobile and need a new SIM card with the old number. The operator simply deactivates your SIM and gives the fake you a new SIM which has been assigned the old number. If you think this is a figment of my paranoid imagination, take a look at this 2013 story from India and this 2011 story from Malaysia. If you want something from the developed world, look at this 2011 story from Australia about how the crook simply went to another telecom operator and asked for the number to be “ported” from the original operator. (h/t: I came across all these stories, directly or indirectly, via Bruce Schneier at various points in time). I have blogged about this problem in the past as well (see here and here).

My final illustration of why the SMS OTP that is sent to you is totally divorced from your mobile phone is provided by my own experience last week in Gujarat. In the wake of rioting in parts of the state, the government asked the telecom operators to shut down SMS services and mobile data throughout the state. I needed to book an air ticket urgently one night for a visiting relative who had to rush back because of an emergency at home. Using a wired internet connection, I could login to the bank site using my password (the “what I know” part of 2FA). The mobile phone (the “what I have” part of 2FA) was securely in my hand. All to no avail, because the telecom operator would not send me the SMS containing the OTP. I had to call somebody from outside the state to make the payment.

This also set me thinking that someday a criminal gang would (a) steal credit cards, (b) engineer some disorder to get SMS services shut down, and (c) use this “cover of darkness” to steal money using those cards. They would know that the victims would not receive the SMS messages that would otherwise alert them to the fraud.

I think we need to rethink the SMS OTP model. Perhaps we need to protect the SIM with something like a Trusted Platform Module (TPM). The operator may be able to give away your SIM to a thief, but it cannot do anything about your TPM – it would truly be “something that you have”. Or maybe the OTP must come via a secure channel different from normal SMS.
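
One existing direction the fix could take – generating the OTP on the device itself rather than delivering it over the operator's network – is what app-based TOTP (RFC 6238) does. A minimal sketch of the standard algorithm (HMAC-SHA1 over a 30-second time counter, sharing only a secret key at enrolment):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    if t is None:
        t = int(time.time())
    counter = struct.pack(">Q", t // step)       # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is computed on the phone from a locally stored secret, there is nothing for the operator to misdeliver: a SIM-swap victim's attacker gets the phone number but not the secret, and an SMS shutdown of the kind I experienced in Gujarat would not block authentication at all.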

Posted at 9:59 pm IST on Mon, 31 Aug 2015         permanent link

Categories: fraud, technology

Comments

How greedy tax laws become a gift to other countries

Before coming to India and Mauritius, let me talk about the US and the Dutch Antilles in the early 1980s. It took the US two decades to change its tax laws and stop the free gift it was giving to the Antilles. If we assume India acts with similar speed, it is about time we changed our tax laws, because our generosity to Mauritius has been going on since the mid 1990s.

There is a vast literature about the US and the Netherlands Antilles. The description below is based on an old paper by Marilyn Doskey Franson (“Repeal of the Thirty Percent Withholding Tax on Portfolio Interest Paid to Foreign Investors”, Northwestern Journal of International Law & Business, Fall 1984, 930-978). Since this paper was written immediately after the change in US tax laws, it provides a good account of the different kinds of pulls and pressures that led to this outcome. Prior to 1984, passive income from investments in United States assets such as interest and dividends earned by foreigners was generally subject to a flat thirty percent tax which was withheld at the source of payment. Franson describes the Netherlands Antilles solution that was adopted by US companies to avoid this tax while borrowing in foreign markets:

In an effort to reduce the interest rates they were paying on debt, corporations began as early as the 1960s to access an alternative supply of investment funds by offering their debentures to foreign investors in the Eurobond market. The imposition of the thirty percent withholding tax on interest paid to these investors, however, initially made this an unattractive mode of financing. Since foreign investors could invest in the debt obligations of governments and businesses of other countries without the payment of such taxes, a United States offeror would have had to increase the yield of its obligation by forty-three percent in order to compensate the investor for the thirty percent United States withholding tax and to compete with other issuers. This prospect was totally unacceptable to most United States issuers.

In an effort to overcome these barriers, corporations began to issue their obligations to foreign investors through foreign “finance subsidiaries” located in a country with which the United States had a treaty exempting interest payments. Corporations generally chose the Netherlands Antilles as the site for incorporation of the finance subsidiary because of the favorable terms of the United States – Kingdom of the Netherlands Income Tax Convention ... The Antillean finance subsidiary would issue its own obligations in the Eurobond market, with the United States parent guaranteeing the bonds. Proceeds of the offering were then reloaned to the United States parent on the same terms as the Eurobond issue, but at one percent over the rate to be paid on the Eurobonds. Payments of interest and principal could, through the use of the U.S.-N.A. treaty, pass tax-free from the United States parent to the Antillean finance subsidiary; interest and principal paid to the foreign investor were also tax-free. The Antillean finance subsidiary would realize net income for the one percent interest differential, on which the Antillean government imposed a tax of about thirty percent. However, the United States parent was allowed an offsetting credit on its corporate income tax return for these taxes paid to the Antillean government. Indirectly, this credit resulted in a transfer of tax revenues from the United States Treasury to that of the Antillean government. (emphasis added)
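
The forty-three percent figure Franson cites is just the gross-up of the thirty percent withholding: for the investor to net the same after-tax interest, the pre-tax yield must be multiplied by 1/(1 − 0.30) ≈ 1.43. The arithmetic (with an illustrative 7% target yield of my own choosing):

```python
withholding = 0.30

# Multiplier on the coupon needed so the investor nets the same
# after-tax interest: 1 / (1 - 0.30) - 1, i.e. about a 43% increase.
gross_up = 1 / (1 - withholding) - 1

# Illustration: if competing issuers offer 7% tax-free, a US issuer
# subject to the withholding tax would have had to offer about 10%.
required_coupon = 0.07 / (1 - withholding)
```

Seen this way, it is unsurprising that US issuers found the tax "totally unacceptable" and routed their borrowing through the Antilles instead.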

The use of the Antillean route was so extensive that in the early 1980s, almost one-third of the total portfolio interest paid by US residents was paid through the Netherlands Antilles. (Franson, page 937, footnote 30). There was a lot of pressure on the US government to renegotiate the Antillean tax treaty to close this “loophole”. However, this was unattractive because of the adverse consequences of all existing Eurobonds being redeemed. This is very similar to the difficulties that India has in closing the Mauritius loophole. Just as in India, the tax department in the US too kept on questioning the validity of the Antillean solution on the ground “that while the Eurobond obligations were, in form, those of the finance subsidiary, that in substance, they were obligations of the domestic parent and, thus, subject to the thirty percent withholding tax.” (Franson, page 939).

Matters came to a head in 1984 when the US Congress began discussing amendments to the tax laws “that would have eliminated the foreign tax credit taken by the United States parent for taxes paid by the finance subsidiary to the Netherlands Antilles.” (Franson, page 939). The US Treasury was worried about the implications of closing down the Eurobond funding mechanism and proposed a complete repeal of the 30% withholding tax on portfolio interest. This repeal was enacted in 1984. Since then, portfolio investors have not been taxed on their US interest income at all. Similar benefits apply to portfolio investors in US equities as well. This tax regime has not only stopped the gift that the US government was giving to the Antilles, but has also contributed to a vibrant capital market in the US.

It is interesting to note a parallel with the Participatory Note controversy in India: “The Eurobond market is largely composed of bearer obligations because of foreigners’ demand for anonymity. Throughout the congressional hearings on the repeal legislation, concerns were voiced over the possibility of increased tax evasion by United States citizens through the use of such bearer obligations.” (Franson, page 949).

It is perhaps not too much to hope that two decades after opening up the Indian market to foreign portfolio investors in the mid 1990s, India too could adopt a sensible tax regime for them. The whole world has moved to a model of zero or near zero withholding taxes on portfolio investors. Since capital is mobile, it is impossible to tax foreign portfolio investors without either driving them away or increasing the cost of capital to Indian companies prohibitively. It is thus impossible to close the Mauritius loophole just as it was impossible for the US to close the Antilles loophole without first removing the taxation of portfolio investors. The Mauritius loophole is a gift to that country because of the jobs and incomes that are created in that country solely to make an investment in India. Every shell company in Mauritius provides jobs to accountants, lawyers, nominee directors and the like. As the tax laws are tightened to require a genuine business establishment in Mauritius, even more income is generated in Mauritius through rental income and new jobs. All this is a free gift to Mauritius provided by greedy tax laws in India. It can be eliminated if we exempt portfolio income from taxation.

On the other hand, non portfolio investment is intimately linked to a business in India and must necessarily be subject to normal Indian taxes. In the US, the portfolio income exemption does not apply to a foreigner who owns 10% or more of the company which paid the interest or dividend, and India should also do something similar. The Mauritius loophole currently benefits non portfolio investors as well, and this is clearly unacceptable. Making portfolio investment tax free will enable renegotiation of the Mauritius tax treaty to plug this loophole.

Posted at 5:35 pm IST on Thu, 27 Aug 2015         permanent link

Categories: international finance, law, taxation

Comments

Hayekian Rational Turbulence: 15-Oct-2014 US Treasury versus 24-Aug-2015 US Stocks

On October 15, 2014, after an early morning release of weak US retail sales data, the benchmark 10-year US Treasury yield experienced a 16-basis-point drop and then rebounded to return to its previous level between 9:33 and 9:45 a.m. ET. The major US regulators were sufficiently disturbed by this event to prepare a Joint Staff Report about this episode. I blogged about this report last month arguing that there was nothing irrational about what happened in that market on that day.

Now compare that with what happened to the S&P 500 stock market index on August 24 and 25, 2015 in response to bad news from China. On the 24th, the market experienced the following before ending the day down about 4%:

The market was a little less erratic the next day, rising 2.5% before falling 4% and ending about 1.4% down.

I see similar phenomena at work in both episodes (15-Oct-2014 US Treasury and 24-Aug-2015 US Stocks): the market was trying to aggregate information from diverse participants in response to fundamental news that was hard to evaluate completely. In Hayek’s memorable phrase, prices arise from “the interactions of people each of whom possesses only partial knowledge” (F. A. Hayek, “The Use of Knowledge in Society”, The American Economic Review, 35(4), 1945, p 530).

Sometimes, the news that comes to the market is such that it requires the “interactions of people” whose beliefs or knowledge are somewhat removed from the average, and these interactions can be achieved only when prices move at least temporarily to levels which induce them to enter the market. The presence of a large value buyer is revealed only when the price moves to that latent buyer’s reservation price. A temporary undershooting of prices which reveals the knowledge possessed by that buyer is thus an essential part of the process of price discovery in the market when fundamental uncertainty is quite high. To quote Hayek again, “the ‘data’ from which the economic calculus starts are never for the whole society ‘given’ to a single mind which could work out the implications, and can never be so given.” (p 519).

Hayek’s insights are timeless in some sense, but today, seventy years later, I venture to think that if he were still alive, he would replace “people” by “people and their algorithms”. Algorithms can learn faster than people, and so sometimes, when the algorithms are in charge, the overshooting of prices needs to last only a few minutes to serve its price discovery function. That is conceivably what happened in US Treasuries on October 15, 2014. Sometimes, when the evaluation and judgement required are beyond the capability of the algorithms, human learning takes over, and the overshooting often lasts for hours and days to allow aggregation of knowledge from people whose latency is relatively long.

Posted at 1:29 pm IST on Wed, 26 Aug 2015         permanent link

Categories: market efficiency

Comments

Moldova bank fraud challenges regulatory assumptions

There are many important and surprising lessons to be learned from the findings in the Kroll report on the bank fraud in Moldova. I believe that these have implications for regulators worldwide.

The report is about the collapse in November 2014 of three of the largest banks of Moldova (Unibank, Banca Sociala, and Banca de Economii), which together accounted for 30% of the country’s banking sector. The missing money, more than $1 billion, amounts to over 10% of Moldova's GDP.

There are three elements in the story:

  1. A surreptitious takeover of three of the largest Moldovan banks in 2012.

  2. Use of interbank markets and other wholesale sources by these banks to borrow large amounts of money so that they could lend more.

  3. Surreptitious lending of very large amounts of money to one borrower.

The crucial takeaway for me from the report is that it is possible to evade all the rules and regulations that banking regulators have created to prevent such actions.

For example, as in many other countries, acquisition of a stake of more than 5% in any bank requires formal approval from the National Bank of Moldova. However, shares in the banks were acquired by a large number of apparently unrelated Moldovan, Russian and Ukrainian entities, none of which crossed the 5% threshold. All the entities had different addresses and did not appear to have common directors or shareholders. The Kroll report presents some circumstantial evidence that they were related, based largely on the fact that they followed similar strategies around the same time and that some of the directors of these entities appear to be nominee directors. I do not believe that this could have been detected in real time. More importantly, I seriously doubt that an attempt to block the purchase of shares at that time on highly speculative grounds would have stood up in a court of law. I conclude that in a modern open economy, ownership restrictions are largely meaningless and unenforceable. They are mere theatre.

Turning to change of control, this too is not easy to establish even in retrospect. The weakest element in the Kroll report in my opinion is that it provides too little evidence that there was a major change in the management and control of the banks. In some of the banks, the management appears to have been largely unchanged. In some cases, where new senior management personnel were inducted, they came from senior positions at other large banks. It is difficult to see how the banking regulator could have objected to these minor management changes.

Finally, the fact that these banks lent such large amounts of money to a single business group (the Shor group) has become apparent only after extensive investigation. The analysis included things like checking the IP addresses from which online banking facilities were accessed by these entities. Media reports suggest that people in Moldova were taken by surprise when the Kroll report identified the Shor group as the beneficiary of massive lending by the failed banks. I am not at all convinced that regulators could have identified all these linkages in real time.

It must also be kept in mind that the whole fraud was accomplished in a little over two years. Supervisory processes work far too slowly to detect and prevent this before the money is gone. I would not be surprised if much of the money left Moldova long ago, and the Shor group was just a front for mafia groups outside the country.

This example has made me even more sympathetic than before to the view that larger capital requirements and size restrictions are the way to go to make banking safer.


As an aside, the “strictly confidential” Kroll report was published in an unusual way. The report was available to only a very limited number of people in the Moldovan government because of the stipulation that:

Any communication, publication, disclosure, dissemination or reproduction of this report or any portion of its contents to third parties without the advance written consent of Kroll is not authorized.

The Speaker of the Moldova Parliament, Mr. Andrian Candu, published it on his personal blog with the following statement (Google Translated from the original Romanian):

I decided to publish the report Kroll and I take responsibility for this action. I do it openly, without hiding behind anonymous sources. ... I understand the arguments of Kroll not to accept publication, but the situation in Moldova and our responsibility to be transparent with the citizens requires us to adapt to the realities of the moment ... I think it is important that every citizen should have access to that report.

Every page of the published report contains the footer:

Private and Confidential: Copy 33 of 33 – Mr. Andrian Candu, the Speaker of the Parliament of the Republic of Moldova

This is about as transparent as one can get. Yet many sections of the media have described the publication of the report as a leak. I think the use of the derogatory word leak in this context is quite inappropriate. In fact, I wish more people in high positions displayed the same courage of their convictions that Mr. Candu has demonstrated. The world would be a better place if they did.

Posted at 4:43 pm IST on Wed, 5 Aug 2015         permanent link

Categories: crisis, fraud

Comments

In the sister blog and on Twitter during July 2015

The following posts appeared on the sister blog (on Computing) last month.

Tweets during the last month (other than blog post tweets):

Posted at 12:32 pm IST on Sat, 1 Aug 2015         permanent link

Categories: technology

Comments

There are no irrelevant alternatives

To a Bayesian, almost everything is informative and therefore relevant. This means that the Independence of Irrelevant Alternatives axiom is rarely applicable.

A good illustration is provided by the Joint Staff Report on “The U.S. Treasury Market on October 15, 2014”. On that day, in the narrow window between 9:33 and 9:45 a.m. ET, the benchmark 10-year US Treasury yield dropped 16 basis points and then rebounded to its previous level. The impact of apparently irrelevant alternatives is described in the Staff Report as follows:

Around 9:39 ET, the sudden visibility of certain sell limit orders in the futures market seemed to have coincided with the reversal in prices. Recall that only 10 levels of order prices above and below the best bid and ask price are visible to futures market participants. Around 9:39 ET, with prices still moving higher, a number of previously posted large sell orders suddenly became visible in the order book above the current 30-year futures price (as well as in smaller size in 10-year futures). The sudden visibility of these sell orders significantly shifted the visible order imbalance in that contract, and it coincided with the beginning of the reversal of its price (the top of the price spike). Most of these limit orders were not executed, as the price did not rise to their levels.

In other words, traders (and trading algorithms) saw some sell orders which were apparently irrelevant (nobody bought from these sellers at those prices), but this irrelevant alternative caused the traders to change their choice between two other alternatives. Consider a purely illustrative example: just before 9:39 am, traders faced the choice between buying a modest quantity at a price of say 130.05 and selling a modest quantity at a price of 129.95. They were choosing to buy at 130.05. At 9:39, they find that there is a new alternative: they can buy a larger quantity at a price of say 130.25. They do not choose this new alternative, but they change their earlier choice from buying at 130.05 to selling at 129.95. This is the behaviour that is ruled out by the axiom of the Independence of Irrelevant Alternatives.

But if one thinks about the matter carefully, there is nothing irrational about this behaviour at all. At 8:30 am, the market had seen the release of somewhat weaker-than-expected US retail sales data. Many traders interpreted this as a memo that the US economy was weak and needed low interest rates for a longer period. Since low interest rates imply higher bond prices, traders started buying bonds. At 9:39, they see large sell orders for the first time. They realize that many large investors did not receive this memo, or maybe received a different memo. They think that their interpretation of the retail sales data might have been wrong and that they had possibly overreacted. They reverse the buying that they had done in the last few minutes.
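The update described in the preceding paragraph is just Bayes' rule at work. A minimal sketch, with all probabilities invented purely for illustration (the Staff Report contains no such numbers):

```python
# Purely illustrative Bayesian update showing why an "irrelevant"
# (unexecuted) sell order can rationally flip a trader's choice.
# H = "economy weak enough to keep rates low" (bullish for bonds).

prior = 0.70                 # belief in H after the weak retail-sales data
# Probability of suddenly seeing large latent sell orders ...
p_orders_given_H = 0.10      # ... if H were true (big sellers unlikely)
p_orders_given_not_H = 0.60  # ... if H were false

evidence = prior * p_orders_given_H + (1 - prior) * p_orders_given_not_H
posterior = prior * p_orders_given_H / evidence

print(f"P(H | sell orders visible) = {posterior:.2f}")  # prints 0.28
# The trader now thinks H is more likely false, and rationally switches
# from buying at 130.05 to selling at 129.95 -- no axiom violated.
```

The "irrelevant" alternative changes the choice not by being chosen, but by being informative, which is precisely why the Independence of Irrelevant Alternatives axiom fails for a Bayesian.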

In fact, the behaviour of the US Treasury markets on October 15 appears to me to be an instance of reasonably rational behaviour. Much of the action in those critical minutes was driven by algorithms which appear to have behaved rationally. With no adrenalin and testosterone flowing through their silicon brains, they could evaluate the new information in a rational Bayesian manner and quickly reverse course. The Staff Report says that human market makers stopped making markets, but the algorithms continued to provide liquidity and maintained an orderly market.

I expected the Staff Report to recommend that in the futures markets, the entire order book (and not just the best 10 levels) should be visible to all participants at all times. Given current computing power and communication bandwidth, there is no justification for sticking to this anachronistic practice of providing only limited information to the market. Surprisingly, the US authorities do not make this sensible recommendation because they fail to see the highly rational market response to newly visible orders. Perhaps their minds have been so conditioned by the Independence of Irrelevant Alternatives axiom, that they are blind to any other interpretation of the data. Axioms of rationality are very powerful even when they are wrong.

Posted at 5:55 pm IST on Sat, 25 Jul 2015         permanent link

Categories: market efficiency

Comments

Regulating Equity Crowdfunding Redux

In response to my blog post of a few days back on regulating crowd funding, my colleague Prof. Joshy Jacob writes in the comments:

I agree broadly with all the arguments in the blog post. I would like to add the following.

  1. If tapping the crowd wisdom on the product potential is the essence of crowdfunding, substituting that substantially with equity crowdfunding may not be a very good idea. While the donation based crowdfunding generates a sense of the product potential by way of the backings, the equity crowdfunding by financiers would not give the same, as their judgments still need to be based on the crowd wisdom. Is it possible to create a sequential structure involving donation based crowdfunding and equity based crowdfunding?

  2. Unlike most other forms of financing, the judgement in crowdfunding is often done sitting far away, without meeting the founders, devoid of financial numbers, and therefore almost entirely based on the campaign material posted. This intimately links the central role of the campaign success to the nature of the promotional material and endorsements by influential individuals. Evolving a role model for the multimedia campaigns would be appropriate, given the ample evidences on behavioral biases in retail investor decision making.

Both these are valid points that the regulator should take into account. However, I would worry a bit about people gaming the system. For example, if the regulator says that a successful donation crowdfunding is a prerequisite for equity crowdfunding, there is a risk that entrepreneurs will get their friends and relatives to back the project in a donation campaign. It is true that angels and venture capitalists rely on crowdfunding campaign success as a metric of project viability, but I presume that they would have a slightly greater ability to detect such gaming than the crowd.

Posted at 12:59 pm IST on Sun, 12 Jul 2015         permanent link

Categories: exchanges, investment

Comments

Regulating Equity Crowdfunding

Many jurisdictions are struggling with the problem of regulating crowd funding. In India also, the Securities and Exchange Board of India issued a consultation paper on the subject a year ago.

I believe that there are two key differences between crowd funding and other forms of capital raising that call for quite novel regulatory approaches.

  1. Crowd funding is for the crowd and not for the Wall Street establishment. There is a danger that if the regulators listen too much to the Wall Street establishment, they will produce something like a second tier stock market with somewhat diluted versions of a normal public issue. The purpose of crowd funding is different – it is to tap the wisdom of crowds. Crowd funding should attract people who have a passion for (and possibly expertise in) the product. Any attempt to attract those with expertise in finance instead of the product market would make a mockery of crowd funding.

  2. The biggest danger that the crowd funding investor faces is not exploitation by the promoter today, but exploitation by the Series A venture capitalist tomorrow. Most genuine entrepreneurs believe in doing well for their crowd fund backers. After all, they share the same passion. Everything changes when the venture capitalist steps in. We have plenty of experience with venture capitalists squeezing out even relatively sophisticated angel investors. The typical crowd funding investor is a sitting duck by comparison.

What do these two differences imply for the regulator?

In the spirit of crowd sourcing, I would like to hear in the comments on what a good equity crowd funding market should look like and how it should be regulated. Interesting comments may be hoisted out of the comments into a subsequent blog post.

Posted at 5:27 pm IST on Mon, 6 Jul 2015         permanent link

Categories: exchanges, investment

Comments

In the sister blog during June 2015

The following posts appeared on the sister blog (on Computing) last month.

Head in the clouds, feet on the ground: Part II (Feed Reader)

Head in the cloud, feet on the ground: Part I (email)

Posted at 2:14 pm IST on Thu, 2 Jul 2015         permanent link

Categories: technology

Comments

We must not mandate retention of all digital communications

After careful thought, I now think that it is a bad idea to mandate that regulated entities store and retain records of all digital communications by their employees. Juicy emails and instant messages have been the most interesting element in many prosecutions, including those relating to the Libor scandal and to foreign exchange rigging. Surely, the argument goes, it is a good thing to force companies to retain these records for the convenience of prosecutors.

The problem is that today we use things like instant messaging where we would earlier have had an oral conversation. And there was no requirement to record these oral conversations (unless they took place inside specified locations like the trading room). The power of digital communications is that they transcend geographical boundaries. The great benefit of these technologies is that an employee sitting in India is able (in a virtual sense) to take part in a conversation happening around a coffee machine in the New York or London office.

Electronic communications can potentially be a great leveller that equalizes opportunities for employees in the centre and in the periphery. In the past, many jobs had to be in London or New York so that the employees could be tuned in to the office gossip and absorb the soft information that did not flow through formal channels. If we allowed a virtual chat room that spans the whole world, then the jobs too could be spread around the world. This potential is destroyed by the requirement that conversations in virtual chat rooms should be stored and archived while conversations in physical chat rooms can remain ephemeral and unrecorded. Real gossip will remain in the physical chat rooms and the jobs will also remain within earshot of these rooms.

India as a member of the G20 now has a voice in global regulatory organizations like IOSCO and BIS. Perhaps it should raise its voice in these fora to provide regulatory space for ephemeral digital communications that securely destroy themselves periodically.

Posted at 10:03 pm IST on Wed, 1 Jul 2015         permanent link

Categories: regulation, technology

Comments

Revolving door and favouring future employers

Canayaz, Martinez and Ozsoylev have a nice paper showing that the pernicious effect of the revolving door (at least in the US) is largely about government employees favouring their future private sector employers. It is not so much about government employees favouring their past private sector employers or about former government employees influencing their former colleagues in the government to favour their current private sector employers.

Their methodology relies largely on measuring the stock market performance of the private sector companies whose employees have gone through the revolving door (in either direction) and comparing these returns with a control group of companies which have not used the revolving door. The abnormal returns are computed using the Fama-French-Carhart four factor model.

The advantage of the methodology is that it avoids subjective judgements about whether, for example, US Treasury Secretary Hank Paulson favoured his former employer, Goldman Sachs, during the financial crisis of 2008. It also avoids having to identify the specific favours that were done. The sample size also appears to be reasonably large: they have 23 years of data (1990-2012), with an average of 62 revolvers working in publicly traded firms each year.
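The four-factor abnormal-return calculation can be sketched in a few lines: regress the firm's excess return on the market, size, value and momentum factors, and read the abnormal return off the intercept (alpha). The sketch below uses synthetic data standing in for the paper's actual factor series and returns:

```python
import numpy as np

# Hedged sketch of a Fama-French-Carhart four-factor regression.
# All data here are simulated; in practice one would use actual daily
# factor returns (MKT-RF, SMB, HML, MOM) and the stock's excess return.

rng = np.random.default_rng(0)
T = 252                                      # one year of daily observations
factors = rng.normal(0, 0.01, size=(T, 4))   # simulated MKT-RF, SMB, HML, MOM
true_betas = np.array([1.1, 0.4, -0.2, 0.1]) # assumed factor loadings
true_alpha = 0.0005                          # 5 bp/day "abnormal" return
r_excess = true_alpha + factors @ true_betas + rng.normal(0, 0.005, T)

# OLS with an intercept column; the intercept estimate is the abnormal return.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
alpha_hat, betas_hat = coef[0], coef[1:]

print(f"estimated alpha: {alpha_hat:.5f}")   # close to the true 0.0005
```

The paper's comparison of revolver firms against a control group amounts to asking whether the estimated alphas differ systematically between the two samples.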

The negative findings in the paper are especially interesting, and if true could make it easy to police the revolving door. All that is required is a rule that when a (former) government employee joins the private sector, a special audit would be carried out of all decisions by the government employee during the past couple of years that might have provided favours to the prospective private sector employer. In particular, the resistance in India to hiring private sector professionals to important government positions (because they might favour their former employer) would appear to be misplaced.

One weakness in the methodology is that companies which anticipate financial distress in the immediate future might hire former government employees to help them lobby for some form of bail out. This might ensure that though their stock price declines due to the distress, it does not decline as much as it would otherwise have done. The excess return methodology would not, however, show any gain from hiring the revolver because the Fama-French excess returns would be negative rather than positive. Similarly, companies which anticipate financial distress might take steps (for example, campaign contributions) that make it more likely that their employees are recruited into key government positions. Again, the excess return methodology would not pick up the resulting benefit.

Just in case you are wondering what all this has to do with a finance blog, the paper says that “[t]he financial industry, ... is a substantial employer of revolvers, giving jobs to twice as many revolvers as any other industry.” (Incidentally, Table A1 in their paper shows that including or excluding financial industry in the sample makes no difference to their key findings). And of course, the methodology is pure finance, and shows how much information can be gleaned from a rigorous examination of asset prices.

Posted at 3:43 pm IST on Wed, 24 Jun 2015         permanent link

Categories: corporate governance, regulation

Comments

On may versus must and suits versus geeks

On Monday, the Basel Committee on Banking Supervision published its Regulatory Consistency Assessment Programme (RCAP) Assessment of India’s implementation of Basel III risk-based capital regulations. While the RCAP Assessment Team assessed India as compliant with the minimum Basel capital standards, they had a problem with the Indian use of the word “may” where the rest of the world uses “must”:

The team identified an overarching issue regarding the use of the word “may” in India’s regulatory documents for implementing binding minimum requirements. The team considers linguistic clarity of overarching importance, and would recommend the Indian authorities to use the word “must” in line with international practice. More generally, authorities should seek to ensure that local regulatory documents can be unambiguously understood even in an international context, in particular where these apply to internationally active banks. The issue has been listed for further reflection by the Basel Committee. As implementation of Basel standards progresses, increased attention to linguistic clarity seems imperative for a consistent and harmonised transposition of Basel standards across the member jurisdiction.

Section 2.7 lists over a dozen instances of such usage of the word “may”. For example:

Basel III paragraph 149 states that banks “must” ensure that their CCCB requirements are calculated and publicly disclosed with at least the same frequency as their minimum capital requirements. The RBI guidelines state that CCCB requirements “may” be disclosed at table DF-11 of Annex 18 as indicated in the Basel III Master Circular.

Ultimately, the RCAP Assessment Team adopted a pragmatic approach of reporting this issue as an observation rather than a finding. They were no doubt swayed by the fact that:

Senior representatives of several Indian banks unequivocally confirmed to the team during the on-site discussions that there is no doubt that the intended meaning of “may” in Indian banking regulations is “shall” or “must” (except where qualified by the phrase “may, at the discretion of” or similar terms).

The Indian response to the RCAP Assessment argues that “may” is perfectly appropriate in the Indian context.

RBI strongly believes that communication, including regulatory communications, in order to be effective, must necessarily follow the linguistics and social characteristics of the language used in the region (Indian English in this case), which is rooted in the traditions and customs of the jurisdiction concerned. What therefore matters is how the regulatory communications have been understood and interpreted by the regulated entities. Specific to India, the use of word “may” in regulations is understood contextually and construed as binding where there is no qualifying text to convey optionality. We are happy that the Assessment Team has appreciated this point.

I tend to look at this whole linguistic analysis in terms of the suits versus geeks divide. It is true that in Indian banking, most of the suits would agree that when RBI says “may” it means “must”. But increasingly in modern finance, the suits do not matter as much as the geeks. In fact, humans matter less than the computers and the algorithms that they execute. I like to joke that in modern finance the humans get to decide the interesting things like when to have a tea break, while the computers decide the important things like when to buy and sell.

For any geek worth her salt, the bible on the subject of “may” and “must” is RFC 2119 which states that “must” means that the item is an absolute requirement; “should” means that there may exist valid reasons in particular circumstances to ignore a particular item; “may” means that an item is truly optional. I will let Arnold Kling have the last word: “Suits with low geek quotients are dangerous”.

Posted at 2:54 pm IST on Thu, 18 Jun 2015         permanent link

Categories: regulation

Comments

Back from vacation

My long vacation provided the ideal opportunity to reflect on the large number of comments that I received on my last blog post about the tenth anniversary of my blog. These comments convinced me that I should not only keep my blog going but also try to engage more effectively with my readers. Over the next few weeks and months, I intend to implement many of the excellent suggestions that you have given me.

First of all, I have set up a Facebook page for this blog. This post and all future blog posts will appear on that page so that readers can follow the blog from there as well. My blog posts have been on twitter for over six years now and this will continue.

Second, I have started a new blog on computing with its own Facebook page which will over a period of time be backed up by a GitHub presence. I did not want to dilute the focus of this blog on financial markets and therefore decided that a separate blog was the best route to take. At the end of every month, I intend to post on each blog a list of posts on the sister blog, but otherwise this blog will not be contaminated by my meanderings in fields removed from financial markets.

Third, I will be experimenting with different kinds of posts that I have not done so far. This will be a slow process of learning and you might not observe any difference for many months.

Posted at 1:31 pm IST on Wed, 17 Jun 2015         permanent link

Categories: miscellaneous

Comments

Reflections on tenth anniversary

My blog reaches its tenth anniversary tomorrow: over ten years, I have published 572 blog posts at a frequency of approximately once a week.

My first genuine blog post (not counting a test post and a “coming soon” post) on March 29, 2005 was about a creditor of Argentina (NML Capital) trying to persuade a US federal judge (Thomas Griesa) to attach some bonds issued by Argentina. The idea that a debtor’s liabilities (rather than its assets) could be attached struck me as funny. Ten years on, NML and Argentina are still battling it out before Judge Griesa, but things have moved from the comic to the tragic (at least from the Argentine point of view).

The most fruitful period for my blog (as for many other blogs) was the global financial crisis and its aftermath. The blog posts and the many insightful comments that my readers posted on the blog were the principal vehicle through which I tried to understand the crisis and to formulate my own views about it. During the last year or so, things have become less exciting. The blogosphere has also become a lot more crowded than it was when I began. Many times, I find myself abandoning a potential blog post because so many others have already blogged about it.

When I look back at the best bloggers that I followed in the mid and late 2000s, some have quit blogging because they found that they no longer had enough interesting things to say; a few have sold out to commercial organizations that turned these blogs into clickbait; at least one blogger has died; some blogs have gradually declined in relevance and quality; and only a tiny fraction have remained worthwhile blogs to read.

The tenth anniversary is therefore less an occasion for celebration, and more a reminder of senescence and impending mortality for a blog. I am convinced that I must either reinvent my blog or quit blogging. April and May are the months during which I take a long vacation (both from my day job and from my blogging). That gives me enough time to think about it and decide.

If you have some thoughts and suggestions on what I should do with my blog, please use the comments page to let me know.

Posted at 3:50 pm IST on Sat, 28 Mar 2015         permanent link

Categories: miscellaneous

Comments

How does a bank say that its employees are a big security risk?

Very simple. Describe them as your greatest resource!

In my last blog post, I pointed out that the Carbanak/Anunak hack was mainly due to the recklessness of the banks’ own employees and system administrators. Now that they are aware of this, banks have to disclose this as another risk factor in their regulatory filings. Here is how one well-known US bank made this disclosure in its Form 10-K (page 39) last week (h/t the ever diligent Footnoted.com):

We are regularly the target of attempted cyber attacks, including denial-of-service attacks, and must continuously monitor and develop our systems to protect our technology infrastructure and data from misappropriation or corruption.

...

Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.

Posted at 8:12 pm IST on Sat, 28 Feb 2015         permanent link

Categories: risk management, technology

Comments

Carbanak/Anunak: Patient Bank Hacking

There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.

The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims while Kaspersky reveals that some US based banks were also hacked. But by and large the two reports tell a similar story.

The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customer ATM cards or PIN numbers. Nor did they tamper with the ATM itself. Instead they hacked into the personal computers of bank staff, including system administrators, and used these hacked machines to send instructions to the ATM using the banks’ ATM infrastructure management software. For example, an ATM uses Windows registry keys to determine which tray of cash contains 100 ruble notes and which contains 5000 ruble notes: the CASH_DISPENSER registry key might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can change these settings to tell the ATM that the cash has been loaded into different trays, by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs), so that an ATM which thinks it is dispensing 1000 rubles as ten 100 ruble notes would actually dispense 50,000 rubles (ten 5000 ruble notes).
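The effect of the registry swap can be seen in a purely illustrative simulation. The key and value names follow the reports' example, but the code itself is hypothetical and is not the actual ATM software:

```python
# Illustrative simulation of the denomination-swap attack. The registry
# key/value names (CASH_DISPENSER, VALUE_1, VALUE_4) follow the post's
# example; the dispensing logic is a hypothetical sketch.

def dispense(amount_requested, registry, trays):
    """Plan the payout using the denominations the registry *claims*
    each tray holds, then pay out from the trays' *actual* contents."""
    believed_small = min(registry, key=registry.get)  # tray ATM thinks holds 100s
    notes = amount_requested // registry[believed_small]
    return notes * trays[believed_small]              # actual cash paid out

# Honest configuration: registry matches the physical loading.
registry = {"VALUE_1": 5000, "VALUE_4": 100}  # what the ATM believes
trays    = {"VALUE_1": 5000, "VALUE_4": 100}  # what is physically loaded

print(dispense(1000, registry, trays))  # prints 1000: ten 100-ruble notes

# Attacker (with admin access) swaps only the registry values; the
# physical cash stays where it was.
registry = {"VALUE_1": 100, "VALUE_4": 5000}

print(dispense(1000, registry, trays))  # prints 50000: ten 5000-ruble notes
```

Nothing about the ATM hardware is touched: the machine faithfully dispenses ten notes, but from the wrong tray, which is why the attack left so little trace at the machine itself.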

Similarly, an ATM has a debug functionality to allow a technician to test the functioning of the ATM. With the ATM vault door open, a technician could issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because with the vault door open, the technician anyway has access to the whole cash without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All that they needed to do was to have a mole stand in front of the ATM when they issued a command to the ATM to dispense a large amount of cash.

Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators' accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of credit cards that had been prepared in advance.

It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees, until they succeeded in compromising a system administrator’s machine. They then patiently collected data about the banks’ internal systems from the screenshots and videos that their malware sent from the administrators’ machines. Once they understood the internal systems well, they could use those systems to steal money.

The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.

Posted at 4:24 pm IST on Sun, 22 Feb 2015         permanent link

Categories: banks, technology

Comments

Loss aversion and negative interest rates

Loss aversion is a basic tenet of behavioural finance, particularly prospect theory: people are averse to losses and become risk-seeking when confronted with certain losses. There is a huge body of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work on prospect theory.

What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, they were negative even at the ten-year maturity). France, Denmark, Belgium and the Netherlands also have negative yields out to at least three years.

A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return in risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
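The 41% figure can be verified with the normal-distribution arithmetic the example assumes (the expected return, volatility and horizon are the hypothetical numbers above):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu, sigma, years = -0.015, 0.15, 5    # hypothetical stock market from the example
mean = mu * years                     # -7.5% expected cumulative 5-year return
sd = sigma * sqrt(years)              # ~33.5% five-year standard deviation

# Probability that the cumulative 5-year return exceeds zero.
p_positive = 1.0 - norm_cdf((0.0 - mean) / sd)
print(round(p_positive * 100))        # 41 (% chance of a positive 5-year return)
```

This treats the cumulative five-year return as normally distributed, which is the simplifying assumption made in the text.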

The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss averse investors. These highly loss averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds overcoming their (relatively low) loss aversion. During the regime of positive interest rates and positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.

This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.

Posted at 5:07 pm IST on Thu, 19 Feb 2015         permanent link

Categories: behavioural finance, bond markets, bubbles, monetary policy

Comments

Bank deposits without those exotic swaptions

Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).

Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if it came free, but in a free market it is priced. The facility of premature withdrawal is an embedded American-style swaption: a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants the bundle or not. You pay for the swaption whether you need it or not.
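A toy example shows why the premature-withdrawal option has value when rates rise. All the numbers here (rates, tenors, and the withdrawal penalty) are assumptions for the sketch, not any bank's actual terms:

```python
# Hypothetical illustration: break a fixed deposit and redeposit at a
# higher rate, versus holding to maturity. All numbers are assumed.

deposit_rate = 0.07   # contracted rate on a 3-year deposit
new_rate = 0.09       # 2-year deposit rate available after rates rise in year 1
penalty = 0.01        # assumed penalty: 1% off the rate for the period held

# Strategy 1: hold the original deposit for the full 3 years.
hold_value = (1 + deposit_rate) ** 3

# Strategy 2: withdraw after one year (earning deposit_rate - penalty for
# that year) and redeposit for two years at the higher rate.
break_value = (1 + deposit_rate - penalty) * (1 + new_rate) ** 2

# With these numbers, breaking the deposit wins despite the penalty.
print(round(hold_value, 4), round(break_value, 4))
```

Knowing when rates have moved enough to make exercise worthwhile is precisely the hard optimization problem discussed next.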

Most depositors would not exercise that swaption optimally, for the simple reason that optimal exercise is a difficult optimization problem. Fifteen years ago, Longstaff, Santa-Clara and Schwartz showed that Wall Street firms were losing billions of dollars because they were using oversimplified (single-factor) models to exercise American-style swaptions (“Throwing away a billion dollars: the cost of suboptimal exercise strategies in the swaptions market”, Journal of Financial Economics 62.1 (2001): 39-66). Even those simplified single-factor models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors exercise their premature withdrawal option suboptimally.

In a competitive market, callable deposits would be priced using a behavioural exercise model rather than an optimal exercise strategy. Still, a problem remains: some retail depositors would exercise their swaptions better than others. A significant fraction might ignore the swaption entirely unless a liquidity need forces them to withdraw. These ignorant depositors would subsidize the smarter depositors who exercise it frequently (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.

Since the global financial crisis, there has been a push towards plain vanilla products. The non-callable deposit is a plain vanilla product; the current callable version is a toxic, exotic derivative.

Posted at 9:37 pm IST on Wed, 4 Feb 2015         permanent link

Categories: banks, derivatives

Comments

The politics of SEC enforcement or is it data mining?

Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement” which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses, particularly in an election year and particularly in politically important states. The results show that:

All the econometrics appear convincing:

But then, I realized that there is one very big problem with the paper – the definition of labour intensity:

I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.

Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)
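The two hypothetical firms in the example above make the point numerically (the firms and the measure are constructed for illustration, following the paper's definition of labour intensity):

```python
# Labour intensity as defined in the paper: employees per $1 million of
# total assets. Both firms are hypothetical examples from the text.

def labour_intensity(employees, assets_musd):
    """Employees per $1 million of total assets."""
    return employees / assets_musd

big_firm = labour_intensity(10_000, 1_000)   # $1 billion firm, 10,000 employees
small_firm = labour_intensity(1_000, 10)     # $10 million firm, 1,000 employees

# By this measure, the politically scarier target (10,000 jobs at stake)
# looks ten times *less* labour intensive than the small firm.
print(big_firm, small_firm)  # 10.0 100.0
```

The ratio ranks firms by capital structure, not by the number of jobs a politician would actually worry about.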

I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.

Posted at 2:39 pm IST on Tue, 27 Jan 2015         permanent link

Categories: law, regulation

Comments

Why did the Swiss franc take half a million milliseconds to hit one euro?

Updated

In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough to watch two average-length online videos.
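The back-of-the-envelope arithmetic is straightforward (the five billion quote count is the figure above; the implied per-second rate is simply what that figure works out to):

```python
# Nine minutes in milliseconds, and the per-second quote rate implied by
# the figure of five billion quotes arriving in that window.

minutes = 9
ms = minutes * 60 * 1000              # 540,000 ms (~half a million)
quotes = 5_000_000_000

implied_rate = quotes / (ms / 1000)   # quotes per second

print(ms)             # 540000
print(implied_rate)   # ~9.26 million quotes per second
```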

So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the time line of how the franc moved:

It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense simply to lift every euro bid above 1.00 and wait for the point of maximum panic to buy the euros back.

Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?

Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.

Posted at 7:26 am IST on Thu, 22 Jan 2015         permanent link

Categories: international finance, market efficiency

Comments

RBI is also concerned about two hour resumption time for payment systems

Two months back, I wrote a blog post on how the Basel Committee on Payments and Market Infrastructures was reckless in insisting on a two hour recovery time even from severe cyber attacks.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with. ... The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. ... This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities.

I am glad that last month, the Reserve Bank of India (RBI) addressed this issue in its Financial Stability Report. Of course, as a regulator, the RBI uses far more polite words than a blogger like me, but it raises almost the same concerns (para 3.58):

One of the clauses under PFMIs requires that an FMI operator’s business continuity plans must ‘be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events’ and that there can be ‘complete settlement’ of transactions ‘by the end of the day of the disruption, even in the case of extreme circumstances’. However, a rush to comply with this requirement may compromise the quality and completeness of the analysis of causes and far-reaching effects of any disruption. Restoring all the critical elements of the system may not be practically feasible in the event of a large-scale ‘cyber attack’ of a serious nature on a country’s financial and other types of information network infrastructures. This may also be in conflict with Principle 16 of PFMIs which requires an FMI to safeguard the assets of its participants and minimise the risk of loss, as in the event of a cyber attack priority may need to be given to avoid loss, theft or fraudulent transfer of data related to financial assets and transactions.

Posted at 1:53 pm IST on Tue, 13 Jan 2015         permanent link

Categories: risk management, technology

Comments

Heterogeneous investors and multi factor models

I read two papers last week that introduced heterogeneous investors into multi factor asset pricing models. The papers help produce a better understanding of momentum and value but they seem to raise as many questions as they answer. The easier paper is A Tug of War: Overnight Versus Intraday Expected Returns by Dong Lou, Christopher Polk, and Spyros Skouras. They show that:

100% of the abnormal returns on momentum strategies occur overnight; in stark contrast, the average intraday component of momentum profits is economically and statistically insignificant. ... In stark contrast, the profits on size and value ... occur entirely intraday; on average, the overnight components of the profits on these two strategies are economically and statistically insignificant.

The paper also presents some evidence that “is consistent with the notion that institutions tend to trade intraday while individuals are more likely to trade overnight.” In my view, their evidence is suggestive but by no means compelling. The authors also claim that individuals trade with momentum while institutions trade against it. If momentum is not a risk factor but a free lunch, then this would imply that individuals are smart investors.

The NBER working paper (Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing) by Martin Lettau, Sydney C. Ludvigson and Sai Ma presents a more complex story. They claim that rich investors (those in the highest deciles of the wealth distribution) invest disproportionately in value stocks, while those in lower wealth deciles invest more in momentum stocks. They then examine what happens to the two classes of investors when there is a shift in the share of income in the economy going to capital as opposed to labour. Richer investors derive most of their income from capital and an increase in the capital share benefits them. On the other hand, investors from lower deciles of wealth derive most of their income from labour and an increase in the capital share hurts them.

Finally, the authors show very strong empirical evidence that the value factor is positively correlated with the capital share while momentum is negatively correlated. This would produce a risk based explanation of both factors. Value stocks lose money when the capital share is moving against the rich investors who invest in value and therefore these stocks must earn a risk premium. Similarly, momentum stocks lose money when the capital share is moving against the poor investors who invest in momentum and therefore these stocks must also earn a risk premium.

The different portfolio choices of the rich and the poor are plausible but not backed by any firm data. The direction of causality may well run the opposite way: Warren Buffett became rich by buying value stocks; he did not buy value stocks because he was rich.

But the more serious problem with their story is that it implies that both rich and poor investors are irrational in opposite ways. If their story is correct, then the rich must invest in momentum stocks to hedge capital share risk. For the same reason, the poor should invest in value stocks. In an efficient market, investors should not earn a risk premium for stupid portfolio choices. (Even in a world of homogeneous investors, it is well known that a combination of value and momentum has a better risk-return profile than either by itself: see for example, Asness, C. S., Moskowitz, T. J. and Pedersen, L. H. (2013), Value and Momentum Everywhere. The Journal of Finance, 68: 929-985)

Posted at 5:16 pm IST on Sat, 3 Jan 2015         permanent link

Categories: factor investing

Comments