Prof. Jayanth R. Varma's Financial Markets Blog


Accountable algorithms

Ed Felten argues that with modern cryptography it is possible to make randomized algorithms accountable (h/t Bruce Schneier). This means that the public can verify that the algorithm was executed correctly in a particular case even though the algorithm used random numbers to make it unpredictable.

Felten’s idea is to use one random number to achieve unpredictability and another random number to achieve randomness. The authority running the algorithm chooses the first random number (R) secretly and then commits to it (the cryptographic equivalent of putting it in a tamper-proof sealed envelope which will be opened later). Then, it chooses the second random number (Q) publicly (for example, by rolling the dice in public). The two random numbers are added and the sum (R + Q) is the input to the algorithm. Note that the public cannot verify that R was chosen randomly, but this does not matter because even if R is non-random, R + Q is still random.
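Here is a minimal Python sketch of this commit-and-reveal idea; the hash commitment, the salt and the numbers are my own illustrative choices rather than Felten’s exact construction:

```python
import hashlib
import secrets

def commit(value: int, salt: bytes) -> str:
    # The published digest binds the authority to R without revealing it
    # (the cryptographic 'sealed envelope'); the salt prevents R from being
    # guessed by brute force when it is drawn from a small range.
    return hashlib.sha256(salt + str(value).encode()).hexdigest()

# The authority picks R (and a salt) secretly and publishes only the commitment.
salt = secrets.token_bytes(16)
R = secrets.randbelow(10**9)
published_commitment = commit(R, salt)

# Q is then chosen in public, for example by rolling dice.
Q = 271828  # stands in for the publicly observed value

# The selection algorithm is run with seed R + Q: unpredictable beforehand,
# and random as long as Q is random, even if R is not.
seed = R + Q

# Afterwards the authority reveals R and the salt, and anyone can check the
# commitment and re-run the selection with the same seed.
assert commit(R, salt) == published_commitment
```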

Felten’s examples are not from finance, but I find the finance applications quite fascinating. For example, the income tax department selects some individuals randomly for detailed scrutiny. Using Felten’s ideas, it is possible for the individual who is selected for scrutiny to verify that this scrutiny is the result of a genuine random selection and not of the assessing officer’s bias. It is possible to do this without making the selection predictable.

As a second example, suppose a stock exchange wants to look at prices at random times because if fixed times are chosen, there is greater risk of the prices being manipulated. The random time must be unpredictable to participants. But after the fact, we want to be able to verify that the time was chosen randomly and that some exchange official did not deliberately choose a specific time “after the fact” with knowledge of the actual prices. Felten’s ideas can be used to solve this problem as well.

In a comment on his second post, Felten introduces even more interesting ideas. For a financial example, consider an organization which requires certain employees to take prior approval before trading stocks on personal account. Suppose the compliance officer disallows a trade on the ground that the particular stock is on a negative list of stocks that cannot be traded. How does the employee verify that the compliance officer is not lying if the list itself is secret? Felten’s method can be used to deal with this problem. The compliance officer should publicly announce the root hash of a Merkle tree containing the restricted list of stocks. This root hash by itself reveals nothing. Now the compliance officer can reveal a single path in the Merkle tree which allows the employee to verify that the stock in question is on the list. But this would not reveal anything about what else is on the list.
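Here is a minimal Python sketch of such a Merkle-tree inclusion proof; the hashing choices and the stock names are purely illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(items):
    # Leaf hashes first, root last; an odd node is paired with a copy of itself.
    level = [h(x.encode()) for x in items]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    # The single path of sibling hashes that the compliance officer reveals.
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, is my node the right child?)
        index //= 2
    return path

def verify(item, path, root):
    node = h(item.encode())
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

restricted = ["ACME", "GLOBEX", "INITECH", "UMBRELLA"]   # hypothetical restricted list
levels = build_levels(restricted)
root = levels[-1][0]                         # only this root hash is published
path = inclusion_proof(levels, restricted.index("GLOBEX"))
print(verify("GLOBEX", path, root))          # True: the stock is on the list
```

The employee learns that the queried stock hashes into the published root, while the other leaves remain hidden behind their hashes.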

A lot of regulations are written assuming that the people implementing the regulation are honest. This assumption is clearly inappropriate. The Right to Information Act ensures only transparency; it does not guarantee accountability in the presence of randomization. We should require that all algorithms used in the implementation and enforcement of regulations be accountable in Felten’s sense.

Posted at 9:52 pm IST on Sat, 29 Sep 2012         permanent link

Categories: law, regulation, technology

Comments

More on Abolishing IPOs

I received many comments on my blog post regarding Pritchard’s proposal on abolishing IPOs. Several comments suggested that while this might reduce losses by retail investors on hot IPOs, it would not eliminate them. I agree completely. Another set of comments asked whether the proposal would deny investors the opportunity to earn high rates of return in IPOs. This is a more subtle issue because the average rate of return on IPOs is nothing great (for the buy-and-hold investor, it is in fact a below-average rate of return). However, many IPOs are “lottery stocks” – though the expected return is low, there is a small probability of a very high return (similar to that of a lottery ticket).

If one assumes that retail investors are leverage constrained, these lottery stocks might be attractive to some categories of investors. I recall reading long ago that when Thai telecom tycoon Thaksin Shinawatra wanted to take his company public, he chose to launch the IPO just before the launch of the telecom satellite that was crucial for his business. By doing so, he offered investors an opportunity to gamble on the successful launch of the satellite. Both the IPO and the satellite launch were successful, and years later, he went on to become the Prime Minister of his country.

If one thinks about it carefully, Pritchard’s proposal would not rule out such IPOs, because the only requirement is a seasoning period of continuing disclosures. He does not propose that an IPO should happen only after the business model stabilizes.

Posted at 5:27 pm IST on Sun, 23 Sep 2012         permanent link

Categories: equity markets

Comments

Abolishing IPOs

Adam Pritchard has a provocative paper arguing that Initial Public Offerings (IPOs) must simply be abolished (“Revisiting ‘Truth in securities revisited’: Abolishing IPOs and harnessing markets in the public good”). He suggests that “companies bec[o]me public, with required periodic disclosures to a secondary market, before they [a]re allowed to make public offerings”.

Pritchard writes:

No one believes that IPOs reflect an efficient capital market. In fact the evidence is fairly strong that IPOs are inefficient. IPOs are bad deals.

IPOs are bad for companies, bad for insiders, and bad for investors. The only parties that clearly benefit from these deals are the individuals who service them: accountants, lawyers, and underwriters.

Despite the provocative language, what Pritchard is referring to is simply the robust empirical result of short-term underpricing (which makes IPOs bad for companies and insiders) and long-term underperformance (which makes IPOs bad for investors). Pritchard correctly attributes these problems to information asymmetry between issuers and investors.

His solution is to create separate primary and secondary markets for private and public companies, and make the transition between them depend on (a) minimum size requirements and (b) acceptance of enhanced disclosure obligations. The primary and secondary markets for private companies would exclude retail investors. Retail investors would be restricted to public companies; moreover, public companies would have been seasoned in the private market before becoming public. During the seasoning period, would-be public companies would file annual reports and quarterly reports on the same lines as public companies. Price discovery would happen in the private secondary market (markets like SecondMarket and SharesPost) on the basis of these public disclosures.

After the seasoning period is over, the company trades in public markets open to retail investors. Pritchard believes that the primary market for these companies should simply be the secondary market itself – so-called “At the Market” offerings.

Overall, I like these ideas as they have the potential to make the equity markets more efficient. The only thing that I do not like is Pritchard’s idea that the private markets can be opened not only to Qualified Institutional Buyers (QIBs) but also to Accredited Investors. I have been reading Jennifer Johnson’s paper describing the accredited investor idea as a Ponzi scheme run by regulators (“Fleecing grandma: a regulatory Ponzi scheme”).

Posted at 12:52 pm IST on Sat, 8 Sep 2012         permanent link

Categories: equity markets

Comments

Resolving Central Counterparties (CCPs) by selective tear-ups

In July 2012, the CPSS (Committee on Payment and Settlement Systems of the Bank for International Settlements) and IOSCO (International Organization of Securities Commissions) put out for consultation a report on resolution of CCPs (Recovery and resolution of financial market infrastructures: Consultative report).

Buried deep inside the report is a proposal that would permit orderly failure of even systemically important CCPs. The idea is that the CCP could simply tear up some of its settlement guarantees and wash its hands of positions that it is unable to honour. The CPSS-IOSCO document says:

... contracts could be given a final value based on the price at which the most recent variation margin payment obligations from and to participants had been calculated. To the extent that defaulting participants with out-of-the-money positions had been unable to pay variation margin to the CCP, the CCP’s obligations and variation margin payments to all in-the-money participants could be haircut pro rata to the size of their variation margin claims. This would have the effect of allocating in full the losses that had been suffered, and limiting exposure to future losses by eliminating unmatched positions or the possibility of further obligations arising on these unmatched positions. All other contracts – probably the vast majority of the contracts cleared – could remain in force. (para 3.13)
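A small numerical sketch (all numbers invented) of the pro-rata haircut described in the quoted paragraph:

```python
# Variation margin the defaulters failed to pay, the CCP's remaining
# pre-funded resources, and the VM claims of in-the-money participants.
unpaid_vm_from_defaulters = 900.0
ccp_resources = 400.0
vm_claims = {"A": 600.0, "B": 300.0, "C": 100.0}

shortfall = max(unpaid_vm_from_defaulters - ccp_resources, 0.0)
haircut_fraction = min(shortfall / sum(vm_claims.values()), 1.0)

for name, claim in vm_claims.items():
    paid = claim * (1 - haircut_fraction)
    print(f"{name}: claim {claim:.0f}, paid {paid:.0f}, haircut {claim - paid:.0f}")
```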

The idea seems to be that if huge price swings and defaults in some particular segment of the CCP’s activities inflict life-threatening losses on the CCP, then the resolution mechanism steps in, cuts this segment loose and allows it to die. The remaining segments of the CCP can continue to function unimpeded.

Another way of looking at this is that all settlement guarantees provided by the CCP are loss-limited by deep out-of-the-money options that kick in when the CCP enters resolution. If I buy a future at 500, I would normally expect the CCP to honour this contract however much the asset price rises in value. Selective tear-up means that if the asset price shoots up to say 5,000 and so many sellers default that the default losses overwhelm the capital of the CCP, it (the CCP or the resolution authority) may simply haircut me and forcibly close out my position at 4,000. It is as if, along with buying the future at 500, I also sold a call to the CCP at 4,000.
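The payoff comparison, using the invented numbers above (remembering that the 4,000 tear-up level is not known in advance):

```python
def long_future(price, entry=500):
    return price - entry

def future_plus_short_call(price, entry=500, strike=4000):
    # Long future plus the call implicitly sold to the CCP at the tear-up level.
    return (price - entry) - max(price - strike, 0)

for price in (400, 2000, 4000, 5000):
    print(price, long_future(price), future_plus_short_call(price))
# At 5,000 the guaranteed payoff is capped at 3,500, as if closed out at 4,000.
```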

The big difference is that ex ante, I do not know the strike price of this call. If I had a choice of executing my buy trade at different exchanges (with different CCPs), I would clearly choose the CCP with the highest expected strike price for the call that it would wrest from me in resolution. That gives me an incentive to choose the CCP that risk manages this contract well – high margins, aggressive intra-day margin calls, and intense scrutiny of concentrated positions. Volumes in each asset class would drift to the exchange or CCP that imposes strict risk management in that asset class. Instead of a race to the bottom, there would be a race to the top. Exchanges and CCPs would try to compete on the basis of the most exacting margin requirements. Healthy competition among CCPs would be possible.

Absent any segregation of business segments, a large CCP which clears many different products has a huge incumbency advantage. It can enter a new product segment with low margins and grab market share. People would still trade there relying on the total resources of the CCP (across all segments) even if they know that on a standalone basis, this segment of the CCP is not a reliable guarantor of trades because of the inadequate margins. In effect, the established segments of the CCP would subsidize the new segment and allow it to drive new entrants out of business. The threat of selective tear-up by a resolution authority has the potential to limit such cross subsidies and make the market for CCP services more contestable and competitive.

Incidentally, the use of haircuts to provide partial insulation of different segments of a CCP from losses in other segments is nothing new. For example, LCH.Clearnet runs a SwapClear service for interest rate swaps which is structured in such a manner that other segments of LCH are partially insulated from losses in this segment. LCH.Clearnet default fund rules (especially SwapClear Default Fund Supplement rules S8-S11) provide for haircuts if the resources available in the SwapClear segment are inadequate to meet the obligations of the CCP. My memory is that when the SwapClear service was first started, the old members of LCH were worried about potential large losses in this segment being allocated to them, and this separation of segments was worked out to allay their concerns.

The advantage of building selective tear-up into the resolution process is that this allows a carve-out of segments to happen ex post after life threatening losses have materialized. This makes a resolution (without bailout) of a CCP more credible, palatable and feasible. While the large global CCPs came out of the 2008 crisis unscathed, I fear that the next crisis will not be so kind to them. I consider it highly likely that within the next decade a prominent CCP in a G7 country would need to be resolved.

Posted at 3:38 pm IST on Tue, 4 Sep 2012         permanent link

Categories: bankruptcy, exchanges

Comments

Structured by cows or by foxes?

An instant message dialogue in which a rating agency employee claimed that “it could be structured by cows and we would rate it” has been repeatedly quoted as evidence of the failures of the rating agencies in rating complex structured products in the build-up to the global financial crisis. It also finds mention in a ruling earlier this month by the New York District Court allowing a case against a rating agency to go to trial (h/t FT Alphaville).

I find this puzzling because the least dangerous structured products to rate are those designed by incompetent simpletons. These would more likely correspond to the random samples to which statistical modelling is easiest to apply. The hardest instruments to rate are those put together by cunning foxes rather than by dumb cows. The cunning foxes are likely to design instruments with an intent to game the rating agency models. Model errors that may be harmless in the context of randomly designed pools could be disastrous when the pool is designed to include the worst securities that could scrape through the rating agency’s models.

That the rating agency employees did not realize this is strange. However, the fact that after years of being exposed to Abacus and Magnetar, many commentators do not seem to realize this is even more puzzling.

Posted at 7:57 pm IST on Sun, 26 Aug 2012         permanent link

Categories: behavioural finance, bond markets

Comments

Corporate Hedging and Distorted Benchmarks

I wrote the following short piece on the subject of “Corporate Hedging and Distorted Benchmarks” for the magazine CFO Connect.

Background

The ongoing investigations into the manipulation of Libor fixing have highlighted the possibility that important benchmarks underlying corporate hedges may be manipulated by large players with the possible tacit acquiescence of the global regulators. Similarly, recent developments in key crude oil benchmarks (Brent and WTI) have demonstrated that these benchmarks can be distorted by factors that could not have been anticipated a few years ago. How does this affect corporate risk management? What can corporate risk managers do to make their hedging programmes more resilient?

There are some corporate hedges that are completely unaffected by the distortion or manipulation of benchmarks. This comfortable situation arises when the hedge is designed in such a way that the benchmark in question is completely eliminated from the all-in hedged cost. For example, consider the following:

  • A company borrows under a floating rate instrument where it is required to pay Libor + 3%
  • It enters into an interest rate swap under which it pays 5% fixed and receives Libor.
  • Clearly, the hedged cost is Libor + 3% + 5% - Libor = 8%. Whatever happens to Libor, the company’s borrowing cost is guaranteed to be 8% fixed and the company does not care whether Libor is manipulated upward or downward.

The crucial feature of the above example is that the hedge has no “basis risk” at all because the hedging instrument exactly matches the risk exposure and the risk is neatly cancelled out (Libor - Libor = 0). Not all real-life hedges are so neat and simple – “basis risk” is quite common.
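A one-line calculation makes the cancellation explicit; the 3% spread and 5% swap rate are simply the numbers from the example above:

```python
def all_in_cost(libor, spread=0.03, swap_fixed=0.05):
    loan_cost = libor + spread          # pay Libor + 3% on the borrowing
    swap_cost = swap_fixed - libor      # pay 5% fixed, receive Libor on the swap
    return loan_cost + swap_cost

for libor in (0.01, 0.03, 0.08):
    print(f"Libor {libor:.0%} -> hedged cost {all_in_cost(libor):.0%}")  # always 8%
```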

Consider another example which illustrates the problem (a small numerical sketch follows the list):

  • A company (for example, an oil refinery) is exposed to crude oil price risk.
  • It hedges this risk using Brent crude futures
  • In reality, it sources its crude from say Saudi Arabia and not from the Brent oil fields.
  • Now the crude oil price does not cancel out completely. Instead, there is a “basis risk” where: basis = Saudi crude price - Brent crude price.
  • The company might believe that the “basis risk” is negligible or at least much lower than the original crude oil price risk. While crude prices can move from $50 to $150 within a few months, the price basis between Saudi crude and Brent crude might be expected to change by only $3 or $5.
  • Now the company has to worry about whether somebody is manipulating the Brent price or whether other distortions are creeping in which were not anticipated when the hedge was set up.
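Here is the sketch, with invented prices; the hedged cost is insensitive to the level of crude prices but fully exposed to the basis:

```python
def hedged_cost(saudi_spot, brent_spot, futures_entry=100.0):
    futures_pnl = brent_spot - futures_entry      # gain/loss on long Brent futures
    return saudi_spot - futures_pnl               # = futures_entry + basis

print(hedged_cost(saudi_spot=102.0, brent_spot=100.0))   # basis +2: cost 102
print(hedged_cost(saudi_spot=152.0, brent_spot=150.0))   # prices soar, cost still 102
print(hedged_cost(saudi_spot=108.0, brent_spot=96.0))    # benchmark distorted: cost 112
```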

Libor manipulation

Libor fixing methodology

Libor stands for the London Interbank Offered Rate – the rate at which the large banks are able to borrow on an unsecured basis for short maturities. The official definition is that Libor is: “The rate at which an individual contributor panel bank could borrow funds, were it to do so by asking for and then accepting interbank offers in reasonable market size, just prior to 11.00am London time.”

There are two problems with this benchmark. First, as the definition makes very clear, Libor is not the rate at which a bank has actually borrowed – it is the rate at which it could borrow funds. Second, unlike say a stock market, where all trades take place in public and transaction prices are known to all, interbank borrowing is a bilateral market about which there is very little transparency. Therefore, if a bank says that it could borrow at 2.45%, it is not easy for anybody to verify whether this is a reasonable estimate at all.

The Libor computation tries to ameliorate this problem by polling several banks, dropping the bottom 25% and the top 25% of all quotes and averaging the central 50%. If one rogue bank submits an unreasonable quote, it is likely to fall in the top or bottom 25% which are dropped and therefore the final average may not be contaminated by this rogue quote.
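A minimal sketch of this trimmed mean, assuming a panel of 16 banks (actual panel sizes varied by currency):

```python
def libor_fix(quotes):
    quotes = sorted(quotes)
    k = len(quotes) // 4                    # quotes trimmed at each end (25%)
    middle = quotes[k:len(quotes) - k]
    return sum(middle) / len(middle)

submissions = [2.40, 2.42, 2.43, 2.45, 2.45, 2.46, 2.48, 2.50,
               2.51, 2.52, 2.55, 2.58, 2.60, 2.62, 2.65, 3.10]  # one rogue quote
print(round(libor_fix(submissions), 4))     # the 3.10 outlier is simply dropped
```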

The Libor computation has one final problem, which people did not worry about in the good old days but which is probably very important in retrospect. The entire Libor computation methodology and process are managed by the British Bankers’ Association and not by an official body independent of the banks being polled.

Pre-crisis manipulation

Evidence that has become available in recent weeks indicates that prior to the global financial crisis, some banks were trying to manipulate Libor to suit their own exposures. Banks are large players in interest rate swaps and other derivative markets based on Libor. The British Bankers’ Association was of course aware of this possibility and stipulated that “The rates must be submitted by members of staff at a bank with primary responsibility for management of a bank’s cash, rather than a bank’s derivative book.”

When regulators in the US and the UK examined all the internal emails as part of their investigation, they found that the derivative traders were routinely requesting the “submitters” to submit false quotes designed to suit the positions of those traders. The submitters were routinely complying with these requests.

In a few cases, requests were coming from traders at other banks and these were also being accommodated. Such collusion between banks would of course imply that the simple expedient of dropping the top and bottom quartiles of quotes would no longer be sufficient to prevent the average itself from being manipulated.

Manipulation during the crisis

The other evidence that has become available is about the period during the global financial crisis, when people were scared about the solvency of banks and were unwilling to lend to weak banks. During this period, it appears that most banks were systematically under-reporting the rates at which they could borrow.

Comparison with the Credit Default Swap (CDS) market, which measures the creditworthiness of banks, suggests that Libor might have been understated by several percentage points.

It also appears that the Bank of England and the Federal Reserve Board in the US were aware of this and it is suggested that they tacitly approved of this practice. Regulators apparently feared that if the true borrowing cost of banks were widely known, that could add to the panic in the markets.

Example of Failed Libor Hedges

The case of US municipalities provides a very interesting example of how hedges based on Libor can go very badly wrong when the underlying benchmark is distorted or manipulated.

US municipalities traditionally borrowed using auction rate securities. The interest rate was a floating rate, but instead of being set as a spread over Libor, it was determined by periodic auctions. Historically, these auction rates (adjusted for the tax free status of municipal bonds) tended to be very close to Libor. It was common for them to hedge their interest rate risk by using interest rate swaps based on Libor. In normal times, this hedge worked quite well.

During the crisis however, the municipalities faced very steep borrowing rates in their auction rate securities (some auctions actually failed). This was partly driven by the general lack of liquidity in the market and partly by perceived risk of insolvency of some municipalities. Of course, the banks also faced similar perceived risk of insolvency, but because of the manipulation, the reported Libor did not reflect the same degree of stress.

As a result, the floating rate (Libor) that the municipalities received on their swaps was much lower than the floating rate (auction rates) that they paid on their borrowings. The actual borrowing cost turned out to be far in excess of the fixed rate that they thought they had locked in through the hedges.
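With some invented numbers, the arithmetic of the failed hedge looks like this:

```python
# The issuer pays the auction rate on its bonds, pays fixed on the swap and
# receives Libor, so the all-in cost is auction_rate + fixed - libor.
def all_in_cost(auction_rate, libor, swap_fixed=4.0):
    return auction_rate + swap_fixed - libor

print(all_in_cost(auction_rate=3.5, libor=3.4))   # normal times: about 4.1%
print(all_in_cost(auction_rate=8.0, libor=2.5))   # crisis: the basis blows out to 9.5%
```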

This is a dramatic example of how a “basis risk” which was regarded as modest and manageable during normal times can become life threatening when the underlying benchmarks are distorted.

Crude oil benchmark distortion

Almost all crude oil trading in the world (both the physical market and the derivative markets) is based on a handful of benchmarks, of which the two most prominent are Brent and WTI (West Texas Intermediate). As is to be expected, WTI used to be the dominant benchmark in the US while Brent was the benchmark of choice elsewhere in the world. Based on the quality of the crude (for example, the sulphur content), the crude from a specific oilfield might trade at a premium or discount to Brent or WTI. A long-term supply contract between an oil producer and an importer might therefore specify the price as simply Brent + $3 or Brent - $1.

In recent years, the emergence of shale oil has drastically altered the supply-demand balance within the US. The Cushing region, on which the WTI benchmark is based, has become an oil-surplus region, and WTI prices have become artificially depressed. This situation is expected to be resolved as new pipelines are built and existing pipelines are modified to run in the opposite direction. In the meantime, retail gasoline prices in the US appear to have completely decoupled from WTI prices and seem to be much more closely aligned to Brent prices.

At the same time, Brent has been affected by declining production in the North Sea oilfields on which this benchmark is based. The short supply of Brent has led to a rise in prices to the extent that Brent now trades at a premium to WTI, though historically WTI was more expensive because of its superior quality.

A lot of oil price hedgers have struggled to cope with the unexpected blow-out of “basis risk” due to the historically unprecedented distortion of the two principal benchmarks. To make matters worse, the methodology underlying crude oil benchmarks suffers from the same infirmities as Libor, possibly on a larger scale. The markets depend on prices reported by private agencies like Platts which are completely unregulated.

Broader lessons for corporate hedging

Some of us are fond of joking that much of what passes for hedging is actually speculating on the “basis”. Like all good jokes, this joke too has some grain of truth in it. For example, US municipalities were to some extent taking a speculative position that their borrowing cost would not materially exceed Libor on a tax-adjusted basis. Their only real cause for complaint is that Libor was manipulated and did not reflect free market outcomes.

In reality, however, “basis risk” is impossible to eliminate completely. Liquid derivative markets must perforce be based on liquid benchmarks, and a specific company’s costs are unlikely to exactly mirror these benchmarks. Moreover, a hedge with significant “basis risk” is likely to be much less risky than a completely unhedged position. Thus even an imperfect hedge is risk reducing and cannot fairly be described as speculative risk taking.

The exception is when hedging is used to justify high levels of leverage. Many of the problems that banks have faced during and after the crisis were due to this. A mistaken belief that “basis risk” is negligible leads to the assumption that the hedged position is practically risk free and can be supported by astronomical levels of leverage. Even modest movements in the “basis” can then wipe out the capital and expose the bank to risk of insolvency.

Outside of finance, some manufacturing companies might be making similar mistakes. By underestimating the basis risk, they may be emboldened to adopt risky financial and operating policies that they might not have chosen if they were fully aware of the “basis risk”. These are the companies that can be truly described as speculating on the “basis”.

Posted at 8:12 pm IST on Sun, 19 Aug 2012         permanent link

Categories: benchmarks, derivatives, risk management

Comments

Anchoring bias as a regulatory tool

The anchoring bias is a well-known phenomenon in behavioural finance. As Tversky and Kahneman described it long ago (Amos Tversky and Daniel Kahneman (1974), “Judgment under Uncertainty: Heuristics and Biases”, Science, New Series, 185(4157), pp. 1124-1131):

In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. ... adjustments are typically insufficient. That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.

Milind Kulkarni from FinIQ, a leading structured products solution provider, gave me some information on an interesting regulatory measure by the Central Bank of Taiwan that exploits this behavioural bias to protect retail investors. Though he could not find an official English language text of the regulation, his colleague was able to provide a translation of the Chinese text:

When a bank buys an option from the client (to create yield enhancement) which is collateralized by the client’s deposit which happens to be the call currency with matching notional amount, in the event of the option exercise by the bank the client’s deposit (in call currency) will be retained (bought) by the bank and the alternate (put) currency will be repaid to the client at the strike rate, leading to potential capital loss to the client; such loss should not be more than 30% of the capital at any cost.

This means that the bank must sell a 70% out-of-the-money call option back to the client to create such an airbag-type protection against extreme capital loss. In short, the client sells a regular near-ATM call option to the bank and buys back a deep OTM call option from the bank.
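A stylized payoff sketch of the airbag, treating the adverse currency move as a fraction of the client’s capital (the convention and numbers here are mine, not the regulation’s):

```python
def capital_loss(move, airbag_strike=0.30):
    # move: adverse move beyond the near-ATM strike, as a fraction of capital
    short_call = max(move, 0)                    # loss on the option the client sold
    long_call = max(move - airbag_strike, 0)     # protection bought back from the bank
    return short_call - long_call

for move in (0.05, 0.20, 0.30, 0.50, 0.80):
    print(f"move {move:.0%} -> capital loss {capital_loss(move):.0%}")  # capped at 30%
```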

The interesting part of this regulation is that it does not rule out short-term toxic products in which the retail investor’s annualized rate of return is hugely negative. If you lose 30% every month, you can lose practically everything pretty quickly. Even products that have a maturity of several months or even a year do not need to produce potential losses of 30% to create meaningful yield enhancement. At the other end of the scale, some of the most toxic products do not produce capital losses at all. Consider, for example, some of the highly toxic principal-protected Power Reverse Dual Currency (PRDC) notes that suddenly became 30-year near-zero-coupon bonds when the yen moved sharply during the global financial crisis. The present value loss can be huge even if the principal is fully protected.

The practical effect of the Taiwanese regulation is therefore not so much economic as behavioural. When the product is structured with a 30% airbag, the structure draws the investor’s attention to the potential loss of 30%. Of course, investors know that the 30% loss is extremely unlikely, but 30% is now the anchor from which an adjustment is made to estimate the likely loss. This probably leads to an overestimate of the likely loss. In the absence of the airbag, the bank probably tries to deflect the investor’s attention away from possible losses. The smart investor would of course take the bank’s sales pitch with a pinch of salt. But now the anchor is zero loss and insufficient adjustment from this anchor leads to an underestimate of potential losses.

Posted at 9:22 pm IST on Wed, 15 Aug 2012         permanent link

Categories: behavioural finance, derivatives

Comments

Minimum balance at risk for all safe assets

Last month, the Federal Reserve Bank of New York published a staff report with a very interesting proposal to reduce the systemic risk of runs on money market mutual funds (Patrick E. McCabe, Marco Cipriani, Michael Holscher and Antoine Martin, “The Minimum Balance at Risk: A Proposal to Mitigate the Systemic Risks Posed by Money Market Funds”, Federal Reserve Bank of New York, Staff Report No. 564, July 2012).

I found the proposal very innovative and my only quibble with the proposal is that I see no need at all to limit the idea to just money market mutual funds. I think that the same idea can be applied to bank deposits, liquid mutual funds and many other pools that offer high levels of liquidity.

The proposal is that when an investor redeems his or her investment, a small percentage (say 3-5%) of the investment is held back for a short period (say 30 days). If losses are detected at the fund during this period, the balance held back from the redeeming investor is available to absorb the losses. McCabe and his co-authors show that it is possible to design the loss allocation mechanism in such a way that runs on the fund are discouraged without eliminating market discipline. A fund that pursues risky investment strategies would see redemptions from rational investors who anticipate losses in the long term (beyond 30 days). But investors who have not redeemed before the losses are revealed gain nothing by redeeming at the last minute. This eliminates panic runs and allows orderly liquidation.
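A stylized sketch of the mechanism, assuming a 5% holdback subordinated for 30 days:

```python
HOLDBACK = 0.05

def redeem(amount):
    # The investor gets most of the money immediately; the rest stays at risk.
    return amount * (1 - HOLDBACK), amount * HOLDBACK

def settle_holdback(held_back, loss_allocated):
    # After 30 days the held-back balance absorbs any losses allocated to it.
    return max(held_back - loss_allocated, 0.0)

paid_now, held_back = redeem(1_000_000)
print(paid_now, held_back)                  # 950,000 now, 50,000 at risk
print(settle_holdback(held_back, 20_000))   # 30,000 returned if losses materialize
print(settle_holdback(held_back, 0))        # full 50,000 returned if no losses
```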

I think this idea could be extended to bank deposits and many other savings vehicles. All “safe assets” or, to use Gorton’s phrase, “informationally insensitive assets” could be subject to this rule to prevent disorderly runs without requiring taxpayer bailouts.

The authors themselves suggest that small balances could be exempted from some of the subordination requirements, and clearly insured deposits do not need to be subject to the minimum balance at risk requirement. The major impact of the proposal would be on large investors, and I do not believe that large investors have any God-given right to safe and liquid assets. In fact, society can make such assets available to them only by imposing losses on the taxpayer.

Pozsar and Singh have pointed out that:

Asset managers do not just invest long-term, but also have a large demand for money (or more precisely, money-market instruments). ... The money demand aspect of the asset management complex ... involves massive volumes of reverse maturity transformation, whereby significant portions of long-term savings are transformed into short-term savings. It is due to portfolio allocation decisions, the peculiarities of modern portfolio management and the routine lending of securities for use as collateral. This reverse maturity transformation occurs in spite of the long-term investment horizon of the households whose funds are being managed. This reverse maturity transformation is the dominant source of marginal demand for money-type instruments in the financial system.

If the minimum balance at risk leads to a re-engineering of the asset management industry to reduce the demand for safe and liquid assets, I think that would be a good thing.

Posted at 5:01 pm IST on Fri, 10 Aug 2012         permanent link

Categories: mutual funds, regulation

Comments

Statistics for finance in a post crisis world

I made a presentation on “Statistics for finance in a post crisis world” at the Sixth Statistics Day Conference organized by the Reserve Bank of India on July 17, 2012. Bullet points from my slides are given below.

  • Big Data
  • Example: US Flash Crash
  • Big Data in Finance
  • Hidden Data
  • Shadow Banking and Hidden Credit
  • Hidden credit: the data challenge
  • Hidden Debt
  • Hidden Risks
  • Hidden Foreign Exchange Risks
  • Hidden Interest Rate Risk: Household Sector
  • Broken Data
  • Only Traded Prices are Real
  • Enronic Accounting
  • Forensic statistics
  • Broken Models
  • The Gaussian Distribution
  • Non Gaussian Copulas and Marginals
  • Gaussian Copulas and CDOs
  • The Way Forward
  • Simpler finance, maybe. Complex statistics, surely.

Posted at 9:53 pm IST on Thu, 19 Jul 2012         permanent link

Categories: post crisis finance, statistics

Comments

What is a price?

As I keep thinking about Libor fixing (see my post last week on this), I have realized that the word price is used in many ways to mean many things, not all of which deserve to be called a price:

An actual traded price
This is the simplest and perhaps most unambiguous definition of a price. The only problem with this notion is due to illiquidity. If the asset is highly illiquid, there may be no recent traded price. More commonly and more importantly, the traded price is subject to the bid-ask bounce – a trade initiated by the seller executes at the bid price while a buyer-initiated trade executes at the ask price. If the stock is traded frequently enough and the bid-ask spread is small in relation to the desired level of accuracy, the traded price is a clean and transparent definition of price.
The mid price
Even if the stock is modestly illiquid, there is often an ask price and a bid price in the order book and the average of these is a reasonable approximation to the true price. It is probably better, however, to use the entire bid-ask interval instead of just the mid price to communicate the range of uncertainty about the true price. Moreover, the bid and ask are valid only for small transaction sizes. It may be better to use the full information in the order book to do an impact cost calculation and present the bid and ask for a more reasonable order size (a small sketch of this follows the list).
An average of traded prices
Quite often closing prices on an exchange are determined as averages of prices during the last few minutes of trading – though in some cases, “few” gets stretched to quite a long period. This averages out the bid-ask bounce and is a tolerable approximation if the volatility of the “true” price during the averaging period is small in relation to the impact cost of a reasonable trade. Sometimes, the averaging is designed to deal with attempts to manipulate the closing price and then it may be reasonable in the above comparison to use the impact cost of the expected trade size of a potential market manipulator which may be significantly larger than typical trade sizes. An alternative to averaging is to use a call auction to determine the closing price.
Polled or indicative prices
Libor and the well-known US Constant Maturity Treasury (CMT) fall in this category. The attempt here is to average over market participants’ quotes about what they believe is the true price. The difference between polled prices and traded prices is like the difference between an opinion poll and an actual election. I think it is a mistake to base large derivative markets on “opinion polls”.
Model prices
In the absence of traded prices, it is common to use a pricing model to estimate prices. Of course, there are several shades of grey here: accountants talk about Level One, Level Two and Level Three assets to capture some of the greyness. Outside of finance, hedonic estimates of the price of real goods are also model prices. For an even more extreme case, one could consider a surveyor’s real estate valuation opinion as a model price where the model is less precisely articulated. At the opposite end in terms of formalization of models, the equilibrium prices derived out of general equilibrium models are also model prices with the added twist that in many of these models, the no trade theorem is actually in force and the model price is an estimate of the price at which nobody wishes to trade. My own view on this is that model prices are valuation opinions and not prices.
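Here is the small order-book sketch promised above; the book is invented, and the “impact price” is simply the volume-weighted price of walking the book for a given order size:

```python
bids = [(99.8, 500), (99.7, 1200), (99.5, 3000)]    # (price, quantity), best first
asks = [(100.2, 400), (100.4, 1500), (100.7, 2500)]

def mid_price(bids, asks):
    return (bids[0][0] + asks[0][0]) / 2

def impact_price(book, size):
    # Average price obtained when an order of 'size' walks the book.
    filled, cost = 0, 0.0
    for price, qty in book:
        take = min(qty, size - filled)
        cost += take * price
        filled += take
        if filled == size:
            return cost / size
    raise ValueError("order size exceeds visible depth")

print(mid_price(bids, asks))          # 100.0
print(impact_price(asks, 1000))       # buying 1,000: 100.32, worse than the best ask
print(impact_price(bids, 1000))       # selling 1,000: 99.75, worse than the best bid
```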

Where does that leave us? I think that for liquid assets, actual traded prices (perhaps determined by a call auction) are the best way to define the price. For illiquid assets, it is best to recognize that there is no unique price and to use a price interval as the best way to communicate the range of uncertainty involved. I do not understand why physicists are quite happy to say that the gravitational constant in appropriate SI units is 6.67384 ± 0.00080, but in finance and economics we are unwilling to say that the price of an asset is 103.23 ± 0.65.

Posted at 2:27 pm IST on Mon, 9 Jul 2012         permanent link

Categories: benchmarks, manipulation

Comments

Libor, the Gaussian Copula and the Sociology of Finance

I have blogged about the sociology of finance several times (for example in 2010, and in 2011). Two pieces that I read (or in one case re-read) recently have reinforced my view that this literature is important for understanding modern finance.

When the penalties imposed on Barclays by the UK FSA and the US CFTC brought Libor back into the limelight, I found myself re-reading MacKenzie’s fascinating description of the Libor fixing (Donald MacKenzie, “What’s in a Number?”, London Review of Books, 30(18), 25 September 2008, pages 11-12) based on his ethnographic study carried out prior to the financial crisis.

None of the finance textbooks describe the actual mechanics of the Libor fixing as well as this piece. Every source on Libor recites the standard definition that Libor is “The rate at which an individual contributor panel bank could borrow funds, were it to do so by asking for and then accepting interbank offers in reasonable market size, just prior to 11.00am London time.” But one has to read MacKenzie to understand how this hypothetical condition (“were it to do so”) is actually operationalized. Similarly, MacKenzie tells us very casually that a mere $50 million or so may fall short of reasonable market size, which for the major currencies would be of the order of several hundred million.

The second paper that I have been reading, also co-authored by MacKenzie, is weightier and more recent (Donald MacKenzie and Taylor Spears, “‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling”, June 2012). This paper discusses the well-known (and by now notorious) Gaussian copula model for pricing CDOs.

The crucial claim in this paper is that Gaussian copula models were and are crucial to intra- and inter-organizational co-ordination, while simultaneously being ‘othered’ by the modellers themselves. The word ‘other’ might be a simple word, but it has a complex meaning. What is being argued is that the modellers steeped in the culture of no-arbitrage modelling never ‘naturalized’ the Gaussian copula and did not even regard it as a proper model. The dissonance between actuarial models and no-arbitrage models is also brought out very well. I found myself thinking that the battle between CreditMetrics and CreditRisk+ more than a decade ago was also one between actuarial models and no-arbitrage models.

As an aside, the authors also bring up the issue of counterperformativity (models being invalidated by their widespread adoption): “models used for governance are undermined by being gamed; models used to hedge derivatives are undermined by the effects of that hedging on the market for the underlying asset”. They also speculate on the possibility of ‘deliberate counterperformativity’: “the employment of a model that one knows overestimates the probability of ‘bad’ events, with a view to reducing the likelihood of those events.”

Posted at 12:32 pm IST on Thu, 5 Jul 2012         permanent link

Categories: behavioural finance, derivatives, mathematics, risk management

Comments

Questioning the benefits of 1930s US securities reforms

Cheffins, Bank and Wells posted an interesting paper earlier this month on SSRN (“Questioning ‘Law and Finance’: US Stock Market Development, 1930-70”) arguing that the creation of the US SEC and the associated legislation did not energize the development of the US securities markets.

After a long period of stagnation and decline in the decades following these reforms, the US stock markets began to recover and grow in the late 1950s and early 1960s. Cheffins et al. argue that the SEC cannot claim any credit for this: Seligman’s influential history describes the SEC during the 1950s as having “reached its nadir” when “its enforcement and policy-making capabilities were less effective than at any other period in its history.” (Seligman, Joel. The Transformation of Wall Street: A History of the Securities and Exchange Commission and Modern Corporate Finance. Boston: Houghton, Mifflin, 1982, page 265).

One counter-argument could be that the decline in stock market development in the two decades after the formation of the SEC was due to the Great Depression and the Second World War and not to the reforms themselves. Unfortunately for this view, the under-regulated “over the counter” (OTC) market grew from 16% to 61% of total national stock exchange sales between 1935 and 1961.

Cheffins et al. add to the sceptical literature going back to Stigler and Benston about the contribution of the SEC and the 1930s reforms for the securities markets (Stigler, George J. (1964) “Public Regulation of the Securities Markets”, The Journal of Business, 37(2), 117-142 and Benston, George J. (1973) “Required Disclosure and the Stock Market: An Evaluation of the Securities Exchange Act of 1934”, The American Economic Review, 63(1), 132-155).

One could also argue that broader macroeconomic governance reforms have played a bigger role than micro regulatory reforms. Sylla, for example, gives credit to the Hamiltonian reforms of the 1790s for the remarkable growth of securities markets in the US. Sylla points out that by the early nineteenth century, “the United States led the world in the proportion of financial assets held in the form of corporate stock.” and that “By the third and fourth decades of the nineteenth century, there was probably no place in the world as ‘well banked’ and ‘security marketed’ as the northeastern United States.” (Sylla, Richard (1998) “U.S. Securities Markets and the Banking System, 1790-1840 ”, Federal Reserve Bank of St. Louis Review, May/June 1998, 83-98).


If I may now indulge in some self-promotion, MastersinAccounting.info has put my blog in its list of Top 10 Finance Professor Blogs. Their review says: “... Varma writes a comprehensive series of posts on his subject of choice and does so with real insight and obvious passion for the topic. A professor at the Indian Institute of Management, Varma studies financial markets and the regulation of markets throughout the world and uses his knowledge and experience to give readers a perspective on current issues as well as the history of world markets.”

Posted at 5:42 pm IST on Mon, 18 Jun 2012         permanent link

Categories: law, market efficiency, regulation

Comments

Regulation as a response to state failure

Normally, one thinks of regulation as a response to market failure, but Luigi Zingales had a piece (behind a paywall) in the Financial Times earlier this week (“Why I was won over by Glass-Steagall”) in which the principal argument seems to be that a regulatory separation between commercial banking and investment banking is required to deal with a state failure. Zingales argues that:

Under the old regime, commercial banks, investment banks and insurance companies had different agendas, so their lobbying efforts tended to offset one another. But after the restrictions ended, the interests of all the major players were aligned. This gave the industry disproportionate power in shaping the political agenda.

The result, according to Zingales, was “a demise of public equity markets and an explosion of opaque over-the-counter ones”:

With the repeal of Glass-Steagall, investment banks exploded in size and so did their market power. As a result, the new financial instruments (such as credit default swaps) developed in an opaque over-the-counter market populated by a few powerful dealers, rather than in a well regulated and transparent public market.

Adam Levitin at Credit Slips makes the same point in greater detail with some good examples:

Glass-Steagal also split the financial services industry politically and enabled the different parts of the industry to be played against each other. Commercial banks, investment banks, and insurance companies fought each other for turf for decades. This mattered in terms of regulation because regulation is a political game.

Because of Glass-Steagal, the financial services industry did not present a monolith in terms of lobbying, and a Congressman could afford to take a stand against one part of the industry because there would be campaign contributions forthcoming from the other parts of the industry. This is how William O. Douglas got the Trust Indenture Act of 1939 passed – he made concessions to the commercial banks in order to get their support for legislation that kept the investment banks out of the indenture trustee business. In the agencies, each part of the industry had its pet group of regulators who would push back against other regulators when they thought that there was an encroachment on their turf, which is the basic nature of deregulation – allowing greater activities than previously allowed. And it even mattered in the courts, as the insurance and investment banking industries financed major litigation challenges to commercial bank deregulation.

... Sarbanes-Oxley passed in part because of a split between the Business Roundtable and the US Chamber of Commerce. And in the financial institutions space, the Durbin Interchange Amendment passed because it posed banks against another heavy duty group, retailers.

One can dispute several elements of this narrative. Zingales’ CDS example is perhaps the easiest to refute. Insurance companies should, according to the Zingales-Levitin theory, have strenuously argued that CDS is an insurance contract and therefore should be their exclusive preserve. Their lobbying and litigation should have prevented the commercial and investment banks from walking away with the CDS market. Despite the repeal of Glass-Steagall, most of the big insurers were independent of the leading players in the CDS market, yet they made no serious attempt to block the growth of CDS prior to the crisis. Even after the crisis, much of the movement for regulating CDS as insurance has come from academics and regulators and not from insurance companies.

Yet, I think the idea that market fragmentation guards against state failure is a very interesting perspective on how one should go about designing a regulatory architecture. After all, the life cycle of financial market bubbles is much shorter than that of political bubbles (to borrow an elegant phrase from the George Soros speech about Europe earlier this month). Market failures can be very ghastly, but perhaps they correct faster than state failures.

Posted at 1:03 pm IST on Wed, 13 Jun 2012         permanent link

Categories: regulation

Comments

Elected Regulators

Rose and LeBlanc posted a paper last month (Rose, Amanda M. and LeBlanc, Larry J., “Policing Public Companies: An Empirical Examination of the Enforcement Landscape and the Role Played by State Securities Regulators” (May 23, 2012), available at SSRN: http://ssrn.com/abstract=2065378) showing that in the US, elected regulators (typically attorneys general) at the state level were far more aggressive in pursuing securities-related enforcement than non-elected regulators:

states with elected enforcers brought matters at more than four times the rate of other states, and states with an elected Democrat serving as the securities regulator brought matters at nearly seven times the rate of other states.

Of course, this is completely consistent with the incentive structures facing elected and appointed regulators. Appointed regulators do not gain much from pursuing complex matters; as many of the reports about the SEC failures during recent years have shown, SEC enforcement staff are incentivized to pursue a numbers game – pursuing a large number of easy, low-risk and low-cost matters was the best way to make the internal appraisal reports look good. On the other hand, elected regulators have incentives to pursue high-risk, high-stakes actions. Success could help the elected regulator move on to a higher political position – Spitzer became Governor of New York after a very controversial stint as attorney general.

The difficulty with this model (as with any other high power incentives) is the possibility of harassment of innocent people to gain political mileage. The solution is obviously an appellate process that limits the ability of the regulator to unilaterally destroy legitimate businesses and people. The US has got this reasonably right, but regulators can still put pressure on regulatees to pay fines and settle cases that lack merit to avoid expensive litigation.

The big issue with the paper is whether the empirical results are driven entirely by New York. Tables T.7 and T.8 (page 23) show that the effect remains very strong even if New York is excluded. But the statistical regressions reported in the paper do not use a New York dummy.

Posted at 7:00 pm IST on Sun, 10 Jun 2012         permanent link

Categories: regulation

Comments

Sovereign default and international law

The ongoing sovereign debt crisis in Europe and elsewhere has made it necessary for finance professionals to understand the legal niceties of sovereign defaults. I have been reading Michael Waibel’s book Sovereign Defaults Before International Courts and Tribunals (Cambridge University Press, 2011), which covers sovereign defaults and international law over the last two centuries.

Chapter 4 of the book dealing with “Monetary reform and sovereign default” is particularly interesting in the context of the current difficulties in the euro zone. From my point of view, the problem with this chapter (and the book in general) is that it is too narrowly focused on the law – the book discusses the legal arguments and outcomes of a legal dispute extensively, but it has very little discussion about the underlying financial transaction or the movement of exchange rates. This makes it difficult to understand the economic significance of many of the disputes.

One of the interesting things that I learnt from this book is that defaulting on a debt does not violate any international law at all. Mere non-payment of debt when it falls due is only a breach of contract; there is a violation of international law only if the sovereign repudiates the debt. Waibel quotes Feilchenfeld’s very elegant phrasing of this distinction: “... international law will guarantee to the creditor the existence of debt and of a debtor, but not the existence of a good debt or a rich debtor”. (page 299)

The book devotes a whole chapter to the doctrine of “financial necessity” as an excuse for non-performance of an obligation. It quotes the judgement of the tribunal in the Russian Indemnity Case (Russia v Turkey) that the state’s “first duty was to itself. Its own preservation was paramount” (page 97). Similarly, another tribunal held that “the duty of a government to ensure the proper functioning of its essential public services outweighs that of paying its debts” (page 98).

As a practical matter, this principle is of help to a sovereign only if its debt is governed by its own ‘municipal law’. (International law uses the term ‘municipal law’ to denote everything except international law – it includes national, provincial and local laws). For example, until the restructuring earlier this year, most of the Greek debt was governed by Greek law; but post restructuring, most of the debt is now governed by English law.

When sovereign debt is governed by foreign law, the sovereign usually is bound by the jurisdiction of a foreign court and the dispute is then resolved according to the ‘municipal law’ of that financial centre – usually London or New York. One of the developments in international law during the last century has been the progressive erosion of sovereign immunity in international law when it comes to sovereign debt. This trend is very nicely discussed by Panizza, Sturzenegger, and Zettelmeyer in a recent paper in the Journal of Economic Literature (“The Economics and Law of Sovereign Debt and Default”, Journal of Economic Literature, 2009, 47:3, 1-47).

In short, the only relevant law appears to be the ‘municipal law’ under which the debt was issued – international law is by and large irrelevant. For any financial institution that has bought a lot of local law sovereign debt of almost any sovereign in the world, this is not very good news.

Posted at 4:02 pm IST on Thu, 7 Jun 2012         permanent link

Categories: credit rating, law, sovereign risk

Comments

Does finance need 128 bit integers?

A blog post yesterday at Marginal Revolution raised the interesting question of whether stock prices should quote in increments of one-hundredth of a cent instead of one cent. That is a very complex question in market microstructure that I do not wish to get into now. But the comment thread on this blog post got into a much more interesting question of how to represent money in a computer.

It is well known that using a floating point number to represent money is a very bad idea – every computer scientist knows that we must use integers for this purpose even if fractions are involved. If we do not have to deal with fractions of a cent, then internally everything should be stored in cents so that a million dollars is represented by 100 million which is well within the range of a 32 bit integer (for example, the long int in ISO C++). If fractions of a cent (say milli-cents) are possible, then everything should be stored in milli-cents and a million dollars is represented by 100 billion which is beyond the range of a 32 bit integer but is well within the range of the more modern 64 bit integer (for example, the long integer in Java).
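A tiny Python illustration of the point (Python integers are unbounded, so the 32 bit and 64 bit comparisons below merely check magnitudes):

```python
# Floating point cannot represent most decimal fractions exactly, so sums drift.
print(0.1 + 0.2 == 0.3)                            # False

# Storing amounts as integers in the smallest unit (here milli-cents) avoids this.
MILLICENTS_PER_DOLLAR = 100_000
ten_cents, twenty_cents, thirty_cents = 10_000, 20_000, 30_000
print(ten_cents + twenty_cents == thirty_cents)    # True

# A million dollars in milli-cents is 100 billion units: beyond a 32 bit
# integer but well within a 64 bit one, as noted above.
one_million_dollars = 1_000_000 * MILLICENTS_PER_DOLLAR
print(one_million_dollars < 2**31 - 1, one_million_dollars < 2**63 - 1)  # False True
```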

Dick King commenting at the Marginal Revolution blog post points out that if fractions of one-tenth of a micro-cent are permitted, then a 64 bit integer would allow us to represent up to around a trillion dollars. As he says: “If AAPL ever reaches a market cap of $1 trillion and you decide you want to buy it all you will not be able to place an ordinary order on an ordinary exchange ... sorry about that.” If you really want to be precise about these things, the 64 bit integer takes us only up to a little over $922 billion if you want negative numbers also; if only unsigned quantities are required, then it could take us to $1.8 trillion. But as a rough order of magnitude, we can take one trillion as the upper limit on monetary quantities that can be represented to an accuracy of one micro-cent using 64 bit integers.

I would think that there might be situations where much more than a trillion – perhaps even a quadrillion – might be needed. In the days before the euro, we used to joke that the word quadrillion was invented to count the Italian public debt (in lire of course). But more seriously, the total open interest in all the global derivative markets is not far short of a quadrillion dollars. Equally, there may be situations where micro-cents or nano-cents make sense for micro-payments in internet transactions. For example, the digital currency Bitcoin (which is valued at approximately one US dollar) allows subdivision up to 8 decimal places or one hundredth of one millionth of a Bitcoin. If we need a single representation for all monetary quantities from 10^15 (one quadrillion) to 10^-8 (one hundredth of one millionth), the 64 bit integer is simply insufficient. Perhaps finance will at some point need a 128 bit (16 byte) long integer.

Of course, some people do argue that data structures like Java’s BigInteger that allow integers of arbitrary size should be used. But this arbitrary size comes at a very heavy price. It appears that a Java BigInteger takes about 80 bytes, which is five times more than a 128 bit (16 byte) long integer. The performance penalties would also be substantial. While a 128 bit integer would not be sufficient to count the number of protons in the universe, it should be adequate for the full range of monetary quantities that we are likely to encounter for a long time – it will take us from 10^21 to 10^-15 with a couple of digits to spare.
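
Since arbitrary precision integers make the range arithmetic easy to check exactly, here is a quick verification of two of the claims above (a Python sketch; the resolutions are the ones discussed in the post):

    # One quadrillion dollars at micro-cent resolution (10**-8 of a dollar per unit)
    UINT64_MAX = 2**64 - 1
    units_needed = 10**15 * 10**8                 # 10**23 units
    print(units_needed > UINT64_MAX)              # True: beyond even an unsigned 64 bit integer

    # The 128 bit claim: 10**21 dollars down to a resolution of 10**-15 of a dollar
    INT128_MAX = 2**127 - 1
    units_needed = 10**21 * 10**15                # 10**36 units
    print(INT128_MAX // units_needed)             # 170: roughly two decimal digits to spare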

Posted at 2:04 pm IST on Fri, 25 May 2012         permanent link

Categories: technology

Comments

Hedging at negative cost?

I have been reading the transcript of the conference call in which JPMorgan Chase reported a $2 billion loss on a position that was intended to hedge tail risk (h/t for the transcript to Deus Ex Macchiato). Much has been written about the hedge that JPM Chairman, Jamie Dimon, himself described as “a bad strategy ... badly executed ... poorly monitored.” I want to focus instead on another interesting statement that he made about the hedge:

It was there to deliver a positive result in a quite stressed environment and we feel we can do that and make some net income

Note the tense of that verb “feel”: he does not say “felt”, he says “feel” – after that $2 billion loss, he still thinks that you can set up a hedge which makes money! The Chairman of one of the largest banks in the world – a bank which is still well respected for highly sophisticated risk management – thinks that a tail risk hedge need not cost money, but can actually make money. In other words, there are negative cost hedges out there that can protect you against tail risk.

If you believe in the Efficient Markets Hypothesis (EMH), you know that this is not possible – there is no free lunch. Sure, you can hedge against tail risk, but that will cost you money, and in turbulent markets, it will cost you a good deal of money. The global financial crisis was in a sense the revenge of the Efficient Markets Hypothesis. Those who ignored the “no free lunch” principle and chased illusory excess returns were ruined (or would have been ruined but for their successfully persuading the state to bail them out). The biggest moral hazard of the egregious bail outs of 2008 is that the financial sector has still not internalized the “no free lunch” principle of the Efficient Markets Hypothesis. That is a tragedy for which surely the taxpayer will one day have to pay once again.

In fact, the term hedge seems to have a very different meaning in the financial sector than in the corporate sector (or perhaps I should say the old fashioned non-financialized part of the corporate sector). If you are an airline that hedges oil price risk, chances are that you are more prudent (more risk averse) than the airline that does not hedge its risk. This is because all airlines face somewhat similar oil price risks and the one that hedges is probably less risky. At least that would be the case if the airline does not use oil price hedging to justify an excessively high level of debt in its capital structure (which is why I began by confining my remarks to the old fashioned non-financialized corporate sector).

In the financial sector (and in highly financialized industrial companies as well), things are very different. The bank that puts on a hedge does not necessarily keep its portfolio unchanged. On the contrary, it uses the hedge to take on more risks on the underlying portfolio. The total hedged portfolio is not necessarily less risky than the original unhedged portfolio. Chances are that the hedged portfolio is riskier – much riskier.

At a theoretical level, this was established more than three decades ago in a very interesting and highly readable paper by Hayne E. Leland (“Who Should Buy Portfolio Insurance?”, The Journal of Finance, 1980, 35(2), pp. 581-594). Leland started with a very simple observation: since derivatives are zero sum games, for every buyer of portfolio insurance, there must be a seller. He then asked the obvious question – which investors would buy insurance and which would sell it.

If one were naive, one might be tempted to answer that the buyers of insurance must be either bearish on stocks or highly risk averse while the sellers must be bullish on stocks or highly risk tolerant. Leland’s answer was totally different. He showed that the bears should be selling insurance and the bulls should be buying it. The reason is that the bulls would load themselves up so heavily on stocks (possibly borrowing to buy stocks) that they need downside protection to maintain the position at all. On the other hand, if you are so bearish on stocks that you have put all your money in bonds, clearly you are not going to be buying portfolio insurance!

The situation regarding risk aversion is more complex. Everything depends on how risk tolerance increases with wealth and it will take too long to describe that argument here. The interested reader should read Leland’s original paper.

Anyway, the key point is that the hedge permits the underlying portfolio to become riskier and more toxic. It is like the old adage that the brakes make the car go faster. So when the banks argue that they need complex derivative products to hedge their risks, what they really mean is that they need these derivatives to create very risky asset portfolios while managing the downside risk up to the point where it can be palmed off to the taxpayer.

To quote another adage (this time from the world of financial trading itself), hedging in the financial world is nothing but speculation on the “basis”; it has little to do with risk reduction.

Posted at 10:21 pm IST on Sun, 20 May 2012         permanent link

Categories: derivatives, risk management

Comments

Automating financial advice

Two months back, Abnormal Returns wrote a post entitled “You are not all that unique an investor” which linked to a short survey of the online money management space at World Beta. There are a number of websites in the US that provide personalized financial advice based on software. When one probes further, however, it is clear that this field is still evolving and has a long way to go. One website does not provide advice on asset allocation; it only compares the funds that you are already holding with other funds in the same category and recommends cheaper or better performing ones. Another site emphasises its ability to give personalized advice, but it is only in the legal fine print that I could find a disclosure that the advice is based on software tools.

But today I was reading an NBER working paper by Mullainathan, Noeth and Schoar entitled “The market for financial advice: an audit study” and I realized that software does not have to be particularly good to be competitive with traditional advisors. The bar for that is so low that existing software is probably good enough and of course the software will get better. On the other hand, five years after the financial crisis, there is no evidence whatsoever that traditional financial advisors are becoming any less conflicted.

Mullainathan, Noeth and Schoar used an audit methodology where they hired trained auditors to meet with financial advisers with different types of portfolios and submit a detailed report of their interaction with the adviser (for a total of 284 client visits). They find that “advisers not only fail to de-bias their clients but they often reinforce biases that are in the interests of the advisors. Advisers encourage returns-chasing behavior and push for actively managed funds that have higher fees, even if the client starts with a well-diversified, low-fee portfolio.”

It does not even appear that the traditional adviser personalizes the advice adequately: “advisers are less likely to ask younger or female auditors some basic question about their financial situation, and it also leads to worse advice since the adviser does not have full information.”

I think that financial advice is an industry ripe for disruptive transformation through the internet and software.

Posted at 9:01 pm IST on Sun, 13 May 2012         permanent link

Categories: technology

Comments

Government cash management and liquidity squeezes

India witnesses predictable periodic liquidity squeezes due to large outflows of money from the banking system around the dates on which advance tax instalments are due to the government. The central bank does take some offsetting action to pump liquidity into the banking system, but these actions are often not quite adequate. Sometimes, the liquidity situation is fully restored only as the government starts spending out of the tax receipts. In India, we have gotten used to this as if it were the natural and unavoidable state of affairs.

It was therefore interesting to read a nice paper from the New York Federal Reserve describing how the US has solved this problem completely. The paper by Paul J. Santoro is about the evolution of treasury cash management during the financial crisis, but it is the description of the pre-crisis system that is of interest for the advance tax problem. The US Treasury’s cash balance is also “highly volatile: between January 1, 2006, and December 31, 2010, it varied from as little as $3.1 billion to as much as $188.6 billion”. But this volatility does not create any problem either for the banking system or the central bank.

The Treasury divides its cash balance between two types of accounts: a Treasury General Account (TGA) at the Federal Reserve and Treasury Tax and Loan Note accounts (TT&L accounts) at private depository institutions.

If, in the pre-crisis regime, the Treasury had deposited all of its receipts in the TGA as soon as they came in, and if it had held the funds in the TGA until they were disbursed, the supply of reserves available to the banking system – and hence the overnight federal funds rate – would have exhibited undesirable volatility. To dampen the volatility, the Fed would have had to conduct frequent and large-scale open market operations, draining reserves when TGA balances were declining and adding reserves when TGA balances were rising. A more efficient strategy, and the one used by the Treasury in its Tax and Loan program, was to seek to maintain a stable TGA balance.

Each morning Treasury cash managers and analysts at the Federal Reserve Bank of New York estimated the current day’s receipts and disbursements. During a telephone conference call at 9 a.m., they combined the estimates with the previous day’s closing TGA balance, scheduled payments of principal and interest, scheduled proceeds from sales of new securities, and other similar items to produce an estimate of the current day’s closing balance. If the estimated closing balance exceeded the target, the Treasury would invest the excess at investor institutions that had sufficient free collateral and room under their balance limits to accept additional funds. If the estimated balance was below target, the Treasury would call for funds from retainer and investor institutions to make up the shortfall.
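
Schematically, the daily decision rule in the quoted passage reduces to a threshold test. The sketch below uses invented balances and a hypothetical target, and leaves out the collateral and balance-limit checks mentioned in the quote:

    def tga_end_of_day_action(estimated_closing_balance, target):
        # compare the estimated closing TGA balance against the target balance
        gap = estimated_closing_balance - target
        if gap > 0:
            return f"invest {gap:,.0f} with TT&L investor institutions"
        if gap < 0:
            return f"call {-gap:,.0f} from retainer and investor institutions"
        return "no action needed"

    # purely illustrative figures (the real target was set by Treasury cash managers)
    print(tga_end_of_day_action(12_500_000_000, target=5_000_000_000))   # invest 7,500,000,000 ...
    print(tga_end_of_day_action(3_200_000_000, target=5_000_000_000))    # call 1,800,000,000 ...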

The key role in this system is played by retainer and investor institutions with whom the Treasury maintains its TT&L balances. The Santoro paper describes their role as follows:

A retainer institution also accepted tax payments but, subject to a limit specified by the institution and pledge of sufficient collateral, retained the payments in an interest-bearing “Main Account” until called for by the Treasury. If a Main Account balance exceeded the institution’s limit, or if it exceeded the collateral value of the assets pledged by the institution, the excess was transferred promptly to the TGA.

An investor institution did everything a retainer institution did and, as described below, also accepted direct investments from the Treasury. The investments were credited to the institution’s Main Account and had to be collateralized.

During the crisis, as the Fed expanded its balance sheet and banks ended up holding vast excess reserves, the pre-crisis policy of stabilizing the TGA balance ceased to be relevant. Moreover, with the Fed paying interest on excess reserves, depositing money in TT&L accounts would have been an additional subsidy to the banks. Therefore, the Treasury moved to a policy of keeping almost all its cash in the TGA (allowing it to become volatile). As and when monetary policy normalizes, the pre-crisis system will probably come back:

Nevertheless, a significant decline in excess reserves resulting from a shift in monetary policy may once again make it necessary to target a more stable TGA, so that TGA volatility does not cause undesirable federal funds rate volatility and interfere with the implementation of monetary policy.

In short, the advance tax related liquidity squeezes in India are simply the outcome of faulty government cash management practices. Other countries solved this problem long ago (the late 1970s in the case of the US) and the solution is simple and effective. All that is lacking in India is the willingness to do the sensible thing.

Posted at 5:45 pm IST on Mon, 7 May 2012         permanent link

Categories: monetary policy, taxation

Comments

Disclosure of risk factors

I have long felt that the risk factors that are disclosed in most offer documents are next to useless in assessing the risk of a security. In utter frustration, I have often wondered whether it would be better to replace all that legalese with a simple empirical fact embellished with a nice skull and crossbones symbol:

☠   Numerous studies covering many different countries have shown that over the long term, initial public offerings tend to underperform the rest of the stock market. Subscribing to these offerings can therefore be injurious to your wealth.

Of course, the same studies also document a large positive initial return to investors who sell immediately after listing, but that is not a risk factor!

Tom C. W. Lin has a different idea in his paper, “A Behavioral Framework for Securities Risk” (34 Seattle University Law Review 325 (2011)).

In order to better capture the advantages of disclosure-based risk regulations given the behavioral tendencies of investors, this Article proposes a behavioral framework for Risk Factors built on (1) the relative likelihood of the risks and (2) the relative impact of dynamic risks. This framework makes risk disclosures more accessible and meaningful to investors and would serve as the new default for public firms. An important feature of the new default is that firms will be able to opt out of the new framework if they believe that the existing Risk Factors requirements are more appropriate. But these firms would need to explain to investors why they opted out. This new default framework would be spatially, optically, and substantively superior to the current framework for investors.

Tom Lin phrases the entire proposal in terms of behavioural finance, but nothing in the proposal depends on behavioural finance. Classifying risks on the basis of likelihood (or frequency) and impact is perfectly rational, and is in fact standard practice in risk management. Thanks to the Basel regulations for operational risks, at least the financial sector has plenty of experience doing this. So it cannot be claimed that it is not feasible.
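
As a rough illustration of what such a framework might look like in practice (the risk factors, likelihoods and impact scores below are entirely hypothetical, and ranking by the product of the two is just one possible choice):

    # rank (hypothetical) risk factors by a simple likelihood x impact score,
    # instead of presenting them as an undifferentiated wall of legalese
    risk_factors = [
        {"risk": "key customer concentration", "likelihood": 0.30, "impact": 0.6},
        {"risk": "long-term IPO underperformance", "likelihood": 0.70, "impact": 0.4},
        {"risk": "loss of a single supplier", "likelihood": 0.10, "impact": 0.2},
    ]
    for rf in sorted(risk_factors, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        print(f'{rf["risk"]}: likelihood {rf["likelihood"]:.0%}, likelihood x impact {rf["likelihood"] * rf["impact"]:.2f}')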

I think this is definitely worth trying out, and if it works, we may not need the skull and crossbones after all.

Posted at 1:42 pm IST on Wed, 2 May 2012         permanent link

Categories: equity markets, regulation

Comments

US Department of Labour degrades BLS data releases

When I argued in my blog post a few days back that monopoly providers of official data are unlikely to innovate, I still did not imagine that lack of accountability could lead to their actually degrading their data releases. But that is exactly what the US Department of Labour is proposing to do as regards the BLS (Bureau of Labor Statistics) non-farm payroll data release, which is perhaps the most powerful market moving data release on the planet today. The proposed draft rules and a transcript of a conference call on the subject are available at the website of the Department of Labour (hat tip for the links and for the whole story to FT Alphaville).

I remember reading a couple of papers by Ederington and Lee two decades ago on the BLS data releases and marvelling both at the ingenuity of the data release system and the speed with which markets process the information. The two papers by Louis H. Ederington and Jae Ha Lee are “How Markets Process Information: News Releases and Volatility”, The Journal of Finance, 48(4),1993, 1161-1191 and “The Short-Run Dynamics of the Price Adjustment to New Information”, The Journal of Financial and Quantitative Analysis, 30(1), 1995, 117-134. The JFQA paper describes the release system as follows:

The release is distributed to reporters with a “need for timely access” about 30 minutes prior to the scheduled release time. While the reporters may type their reports, they cannot leave the room or use the phone. Approximately one minute before the scheduled release time, the reporters are allowed to plug in their modems or pick up the phones but the lines are dead until the scheduled time.

The same paper also describes the speed with which the eurodollar futures market processes the information as follows:

Using 10-second returns and tick-by-tick data, we find that prices adjust in a series of numerous small, but rapid, price changes that begin within 10 seconds of the news release and are basically completed within 40 seconds of the release.

We must remember that this was two decades ago, when modern high frequency trading was practically absent and eurodollar futures were still traded in the pit.

The Department of Labour now plans to degrade the data release system. In the conference call, it described the proposed changes in the system as follows:

Currently, organizations that participate in our lock-ups use their own computer and phone equipment, which is installed in our facilities. The telephone and data lines they use belong to and have been maintained by them. That, too, is changing.

... all currently participating organizations should plan to have their equipment removed.

... The department’s main lock-up facility will be ... reconfigured with new computer equipment, and telephone and data lines. The Labor Department will own and maintain that equipment and those lines.

... Each work station will offer a telephone, monitor, mouse and keyboard. The server and network gear will be located in the lock-up room within a locked cage but separate from the workspace area. Users will be able to log onto their desktops at assigned work spaces. Those who want time to prepare notes or drafts can take advantage of the extra half hour to use Microsoft Word, which will be loaded on the computers.

... There will be a new rule that personal effects must be placed in lockers outside the lock-up facilities before entering the rooms. However, carrying in paper research notes and other paper materials will be allowed. Carrying in pens and pencils will not be permitted. The department will provide writing instruments as well as plain paper for notetaking inside the lock-up rooms.

... You can’t bring in discs or thumb drives or any type of electronic devices.

The Department of Labour will thus control what historical or background data is available to the reporters (they cannot bring anything in electronic form), and it will also control what software is available to them (only Microsoft Word).

To understand how this degrades the information processing, let us go back to the market reacting to the release within 10 seconds in the age of human trading in the pit two decades ago. The only way this can happen is if a lot of analysis takes place prior to the data release and contingent trading strategies are worked out and well rehearsed in advance. The market thus waits not for the raw data, but for an interpreted news report that places the raw data in the context of all the consensus estimates, past trends and other background information and allows a pre-rehearsed trading strategy to be invoked. By reducing the quality of this analysis (by disallowing the tools required to do this), the Department of Labour is degrading the quality of its data release.

What is more, the Department of Labour has no real reasons for making these changes. Look at this exchange in the transcript:

Daniel Moss: Bloomberg News. I’m just wondering, why is the Labor Department choosing to do this now? What is the problem that you believe you are trying to fix given the master switch is already in place working effectively?

Carl Fillichio: [Department of Labour] It’s been, as I mentioned, 10 years since we took a holistic view of the lock- up, and times have certainly changed. ...

Daniel Moss: What is the problem that you imagine you’re trying to fix given there is an effective master switch there already that controls access out of the room for the information?

Carl Fillichio: There’s nothing we necessarily expect. I think we’re doing prudent business management of reviewing our systems and looking at the changes in technology and the way that the news is delivered and have decided that now is the correct time to institute these changes. ...

Daniel Moss: Do I interpret your response, Carl, as meaning there's no current problem?

Carl Fillichio: What I’m trying to do is prevent a problem, Daniel.

Daniel Moss: What is the problem you think, you imagine that this will prevent?

Carl Fillichio: I think we’re going to move on. Operator, we’ll take the next question.

What we see clearly in this exchange is the total lack of accountability. I am reminded of the famous lines in Shakespeare’s Julius Caesar (Act II, Scene II, 72-76):

DECIUS BRUTUS:
   Most mighty Caesar, let me know some cause,
   Lest I be laugh'd at when I tell them so.
CAESAR:
    The cause is in my will: I will not come;
    That is enough to satisfy the Senate.

As for the governmental monopolist’s preference for a private sector monopoly (Microsoft Office), that is probably the subject of another post.

Posted at 9:54 pm IST on Thu, 19 Apr 2012         permanent link

Categories: regulation, technology

Comments

Exit policy for financial institutions

I wrote a piece in the Mint newspaper yesterday arguing that instead of worrying about granting licences, financial regulators need to focus on exit policies:

Rogues thrive most in regulatory regimes designed to keep them out. This paradox arises because regulations that try to keep them out also unintentionally keep out the good entrants. The few rogues who do get in are able to thrive because they are shielded from competitors who might otherwise have driven them out of the market. Therefore, an open entry policy coupled with a ruthless exit policy might be the best way to keep the system clean.

In India there are two areas where regulators are struggling to arrive at the right entry policy—stock exchanges and banks. It is a fact that it is very hard to get a licence for a new stock exchange or a new bank, but it is also very hard to cancel the licence of an existing stock exchange or an existing bank. We have many existing licensed stock exchanges and many existing banks that are so poorly run that they would be unlikely to get a licence today if they were applying for the first time. Yet, it is not easy to kick them out.

If anything, there is a case for a complete reversal of this policy regime. It should be very easy to start a new stock exchange or a new bank, but it should be much harder to retain the licence. The rationale for this approach is that it is very difficult for any regulator to figure out whether a particular set of promoters will be able to run a proposed stock exchange or bank well enough. This question is completely hypothetical and speculative at the entry stage, and the regulators are forced to extrapolate from the promoters’ experience in other sectors or environments to make an assessment of whether the given licensees will be able to do well in the new venture. In contrast, it is much easier to assess whether an existing bank or stock exchange is well run or not. It is easier because we are now dealing with a question of fact and not speculating about hypothetical future possibilities.

It is easier for rogues to get past stringent entry barriers because they can spend enough money to acquire all the trappings of respectability. They can easily meet minimum capital requirements. They do often succeed in hiring distinguished personalities to serve on their boards (or in senior management positions) and lobby on their behalf. They can engage expensive consultants to prepare impressive business plans. Of course, once they have got the licence, they can discard these ostensible plans, sideline the “distinguished” personalities, and get on with their real business plans.

For these reasons, tough entry barriers do not really keep out the rogues. For example, of the 10 new banks licensed in India in the first phase in the 1990s, the majority were failures in the broadest sense. There were serious and honest promoters who failed because of environmental changes or genuine management mistakes, but that cannot be said of all the banks that failed. A tiny number of those who did not deserve banking licences did manage to obtain them despite very tough entry standards set by a regulatory process that was widely regarded as free of corruption. At the same time, a large number of serious professionals with valuable ideas would have been denied a licence because of the tight entry norms.

The principal objection to the idea of relying on a strong exit policy to keep the rogues in check is the problem of “too big to fail”. This objection has, I think, lost its force after the global financial crisis. It is now accepted that a financial intermediary that is too big to fail is simply too big to exist. Current regulatory thinking is that big institutions should prepare “living wills” or “funeral plans” that ensure that they can die gracefully.

The idea of “funeral plans” applies with equal force to stock exchanges and clearing corporations as well. There is a high probability that a large clearing corporation in a Group of Twenty (G-20) country (or for that matter even in a G-7 country) will fail over the next five years or so. Rather than pretend that central counterparties can never fail, we should be working hard to ensure that they could fail without dragging the whole system down with them. This means multiple central counterparties, which in turn means higher margins and collateral requirements. In a post crisis world, a lower level of leverage is probably not a bad thing.

It is often argued that stock exchanges are a natural monopoly because liquidity begets more liquidity and trading gravitates towards the most liquid trading venue. In reality, however, liquidity has many dimensions. A dark pool can be the best source of liquidity for some investors, while being a terrible trading venue for others. Moreover smart order routing can aggregate liquidity across multiple venues.

A financial system with numerous small banks, many exchanges and multiple central counterparties would be more robust. It would also have fewer rogues, or at least fewer big rogues who can do great damage.

Posted at 5:21 pm IST on Tue, 17 Apr 2012         permanent link

Categories: bankruptcy, regulation

Comments

Crowd sourcing official statistics

Yesterday, the Indian government admitted a huge error in the Index of Industrial Production (IIP) data for January 2012 and corrected the growth rate from a healthy 6.8% to a dismal 1.1%:

... during the compilation of IIP for January, 2012, the sugar production was wrongly taken as 134.08 lakh tonnes in place of actual figure of 58.09 lakh tonnes. ... Immediately after detection of the error, the revised IIP numbers and growth rates for the month of January, 2012 have been compiled. ... the IIP for January 2012 has been revised from 187.9 to 177.9 and, therefore, growth rate over the corresponding period of previous year has been revised from 6.8% to 1.1%.

In my view, the fact that the government has a monopoly in the production of official statistics leads to poor quality, low accountability and lack of innovation. Perhaps, these problems are worse in an emerging economy and the costs of the public sector monopoly are less severe in developed countries. But the problem is not confined to emerging markets.

Even in the US, the seasonal adjustments used for various official statistics have been called into question (see, for example, here and here). The whole process of seasonal adjustment is ripe for disruptive innovation. First of all, the reliance on a Gregorian calendar for seasonal adjustment is increasingly inappropriate in a world where some of the fastest growing economies with large populations base their principal holidays on a lunar calendar (China, India and the entire Islamic world). China’s influence on commodity prices is so great that the commodity price component of seasonally adjusted prices even in the developed world is probably distorted by the incorrect use of Gregorian seasonality adjustment. Via inventories and collateralized commodity financing, this might be an issue for some financial data series as well. Who knows, some large global central banks may be getting their monetary policy wrong because of Gregorian seasonal adjustments!

Secondly, I would argue that the whole idea of seasonality adjustment is an abdication of responsibility by the econometrician. Wherever we use time as an independent variable, it is a proxy for omitted variables that are more fundamental. A time trend, for example, proxies for variables like population growth, technological progress, inflation and productivity improvements. A seasonality adjustment is also a proxy for more fundamental physical and economic variables like temperature, rainfall, holidays, advance tax payment due dates, government bond issuance calendars and the like. It is far better to model these variables directly so that the economic model is more robust and meaningful. The belief that economic variables have a different behaviour in different months solely because of the position of the sun in the zodiac is astrology and not economics. Seasonality adjustments need to move from the age of astrology to the age of econometrics.

Such radical changes are unlikely to happen so long as official data is provided by a monopolist (whether in the public or in the private sector). The time has come in my view to crowd source the creation of official statistics. The government should simply make the digitized raw data publicly available and should not publish anything else. There would be no official Index of Industrial Production, but the government website would have the raw production statistics submitted by various businesses. Yes, not the aggregate sugar production, but the sugar production of each sugar mill in the country. Every user would be free to choose what outlier tests to run, what aggregation algorithm (for example, mean, median or trimmed mean) to apply on this raw data, which base year and which base year weights to adopt, and which index computation methodology (Laspeyres or Paasche, arithmetic mean or geometric mean) to use in computing indices at whatever level of aggregation or disaggregation he or she wishes.
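
As a toy illustration of what this could look like (every mill name, figure and weight below is invented; the trimmed mean and the Laspeyres-style weighting are just two of the user choices listed above):

    import statistics

    # raw, mill-level sugar output as it might appear on the government website (hypothetical)
    raw_sugar_output = {"mill_A": 12.1, "mill_B": 15.4, "mill_C": 14.8, "mill_D": 15.8}

    def trimmed_mean(values, trim=1):
        # one user's aggregation choice: drop the extremes to guard against
        # data-entry errors like the 134.08 vs 58.09 lakh tonne mistake above
        vals = sorted(values)
        return statistics.mean(vals[trim:-trim] if len(vals) > 2 * trim else vals)

    sugar_now = trimmed_mean(raw_sugar_output.values())

    # a Laspeyres-style index as a weighted average of item relatives,
    # with user-chosen base-period levels and weights (all hypothetical)
    base = {"sugar": 14.0, "cement": 20.0}
    current = {"sugar": sugar_now, "cement": 21.5}
    weights = {"sugar": 0.4, "cement": 0.6}

    index = 100 * sum(weights[k] * current[k] / base[k] for k in weights)
    print(round(index, 1))      # one user's index; another user's choices would give another number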

The lack of an authoritative index may also reduce systemic risk in the economy because different indices computed by different agencies would give somewhat different pictures of the economy. We would probably have less herding and more muted boom-bust cycles. Like the story of the six blind men and the elephant, each of the competing privately produced indices would be a partial and therefore incomplete view. That, however, is far superior to one blind man and the elephant – because the one blind man does not even know that his understanding is incomplete.

Posted at 11:48 am IST on Fri, 13 Apr 2012         permanent link

Categories: market efficiency, regulation

Comments

The social utility of hedging

I have been engaged in a stimulating email conversation with Vivek Oberoi on the social utility of hedging. The hedger is clearly better off by hedging and reducing risk, but Vivek’s question was whether society as a whole could be worse off. I found the discussion quite interesting and thought it worthwhile to widen the conversation by sharing it on this blog. Moreover, it is heartening to see people in the financial industry introspect about the social utility of their industry. Perhaps this will encourage others in the industry to look at their own work more critically.

My position is that hedging has powerful redistributive effects but is socially useful so long as (a) the hedging is carried out in liquid derivative markets and (b) hedgers do not suffer too much from the Endowment Effect. Vivek, of course, is not convinced. Anyway, here is the conversation so far:

Vivek Oberoi writes:

If I buy tickets to travel for my vacation in December today, I am indifferent to any subsequent change in the price of oil. To be able to sell me the ticket forward, a risk averse airline will hedge their fuel for the sale. It too is now indifferent to the price of oil. The airline and I have made allocational choices based on forward price of oil today. If oil price on the day of the flight is different from what it is today, there will be an allocational loss. Both the airline and I may be better off (we are risk averse). But there will be dampening of the price signal. That will lead to an allocational loss (negative externality?) to society.

My response:

One key question is whether the forward contracts can be sold to a third party or can be unwound with the original party at market related prices. The ticket cannot, but the oil hedge can. Illiquid derivatives can be harmful for allocative efficiency. For example:

  1. You may be willing to accept $500 in return for postponing your vacation by a couple of days
  2. Somebody who needs to take that flight due to a personal emergency may be willing to pay $1000 premium to get on that flight.

The airline and the government would step in and say that you cannot do the trade. The wrong person gets on the plane and the outcome is inefficient.

But if you were allowed to do the trade then the Coase Theorem implies that allocative efficiency is achieved regardless of the initial allocation of property rights. In other words, it does not matter whether you owned the ticket or the other person owned the ticket in the beginning; after the bargaining and trading, the right person will get on the plane. The initial ownership will only determine who is richer/poorer at the end of the trade. The Coase Theorem requires low transaction costs which would be the case in liquid futures markets and in liquid OTC markets, but not in highly customized and illiquid bilateral forward contracts.

Behavioural finance will of course have a different take on this. The Endowment Effect could imply a loss of allocative efficiency due to derivative contracts. In the corporate context, you need some takeover threats from asset strippers (who would monetize the fuel hedges and then shut down the airline) to prevent the loss of allocative efficiency caused by managers suffering from the Endowment Effect.

Vivek Oberoi continues:

Imagine a risk-averse consumer of oil. He needs 1 unit of oil to drive to office. The price of oil is USD 100/unit. The consumer has USD 100. If the price of oil goes up to, say 150, he will have to take public transport. To keep things simple, assume the public transport ticket will cost 100. If the price goes down to 50 he will use the money saved to see a movie. The consumer is risk-averse. He dislikes the thought of using public transport more than the enjoyment of seeing a movie.

At time t0 the consumer gets into a fixed price contract for the purchase of 1 unit of oil at time t1. Assume oil prices spike to USD 150/bbl. Now the customer has a choice. He can either use the oil to drive to work. Or he can sell the oil for USD 150/unit. Use USD 100 of that for public transport and the remaining USD 50 for the movie. The essence of risk-aversion is that the combination of using public transport and seeing the movie will not be as good as driving to work.

My response:

I would interpret the situation a little differently. First of all, we can avoid expected values and risk aversion by just focusing on the case where the price of oil is 150. We must still take into account the non linearity (concavity) of the utility function which is what leads to risk aversion, but we can avoid probabilities and expectations.

In the $150 price scenario, the choices of the hedger are

  1. Spending $100 to drive to work and
  2. Spending $50 (net) to take the train leaving $50 surplus for the movie ticket.

The person who did not hedge also has two choices:

  1. Spending $150 to drive to work
  2. Spending $100 to take the train.

What is the difference? It is as if the hedger won a lottery ticket with a prize of $50. That is all. For the hedger, the car and the train are both cheaper by $50, but the relative cost of car versus train is the same for both hedger and non hedger (150-100=100-50=50).

You are right in saying that at the higher level of wealth induced by winning the lottery ticket, the consumer may be willing to pay $150 to drive to work and so he will not sell the forward contract while at the lower level of wealth, he would take the train. That is the result of the concavity of the utility function (risk aversion). Yet, allocative efficiency is achieved in both cases. The hedger taking the car is not allocative inefficiency – it is simply the redistributive effect of the lottery ticket. Exactly as the Coase theorem would say, the initial allocation of property rights (whether or not there is a forward contract) gives rise to windfall gains and losses (lottery prizes), but there is no loss of allocative efficiency.
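
The comparison above in a few lines of arithmetic (the numbers are those of the example):

    oil_spot, oil_forward = 150, 100       # spiked spot price vs the price locked in by the hedger
    hedge_gain = oil_spot - oil_forward    # 50: the "lottery prize" from having hedged

    car_hedger, train_hedger = oil_forward, 100 - hedge_gain     # 100 to drive, 50 net to take the train
    car_unhedged, train_unhedged = oil_spot, 100                 # 150 to drive, 100 to take the train

    # the relative cost of car versus train is the same 50 for both
    print(car_hedger - train_hedger, car_unhedged - train_unhedged)    # 50 50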

Vivek Oberoi continues:

A similar case can be built for a risk-averse producer. Imagine a USD 50/unit drop in oil price will lead to a shutdown of his fields. He hedges to avoid that eventuality. An outcome in which he gets USD 50 in cash and shuts down his field is worse than him producing 1 unit.

My response:

Absolutely correct. The consumer wants to buy a lottery ticket that gives a $50 prize when oil is at $150. The producer wants to buy a lottery ticket that gives a $50 prize when oil is at $50. Each is willing to sell the lottery ticket that the other wants in order to pay for the lottery ticket that he wants. Risk aversion (concave utility functions) is what makes this trade possible. More generally, one party is a put buyer and the other is a call buyer. The combination of these two lotteries (options) is a forward contract. Again the Coase theorem says that the trade that they agree to do is allocatively efficient.
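
A quick payoff check of that decomposition (the strike of 100 is illustrative): buying the “high price” lottery (a call) and selling the “low price” lottery (a put), both struck at the same price, reproduces a forward purchase.

    K = 100.0                                # common strike / forward price of oil
    for spot in (50.0, 100.0, 150.0):
        call = max(spot - K, 0.0)            # the consumer's lottery: pays off when oil is expensive
        put = max(K - spot, 0.0)             # the producer's lottery: pays off when oil is cheap
        forward = spot - K                   # payoff of a forward purchase at K
        assert call - put == forward         # long call + short put = long forward
        print(spot, call, put, forward)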

Your arguments do of course bring out some counterintuitive aspects of derivative markets:

  • Not all hedgers who cancel their hedges opportunistically are evil. In fact, this behaviour is sometimes necessary to preserve allocative efficiency.
  • Not all asset stripping corporate raiders are evil. Sometimes they are necessary to overcome the Endowment Effect and restore allocative efficiency by monetizing derivatives and real options.

Vivek Oberoi responds to my response:

  1. As you say, the redistributive effect of the lottery and the concavity of utility function ensures that the consumer of oil drives to work. That decision is allocationally inefficient. The consumer has no incentive to reduce his consumption of oil by taking public transport when the price of oil goes up to USD 150/bbl. Similarly the producer has no incentive (and no surplus funds with which) to increase his production of oil. The price signal is being damped.
  2. This transaction results in a net welfare gain for both the consumer and the producer. They are risk averse after all. But for society (i.e everyone besides the two principals) as a whole there is a welfare loss (negative externality). The price at which the derivative contract was struck and the ultimate price of oil are contractually unarbitragable. This reverses the gains from trade. The economic effect of the transaction *on society* is identical to that of price fixing by a government.

My response to his response to my response:

I do not agree that there is any inefficiency for the following reasons:

  • Because the demand of the hedgers has become inelastic, the price of oil will rise more than it otherwise would. For example, with no hedging, oil may go to, say, 120 to force everybody to cut consumption by 10%. With hedging, suppose half the consumers do not cut consumption at all. Then oil may rise to 150 to force the remaining half of the consumers to cut consumption by 20% so that demand still equals supply. The cost of adjusting to a shortage of oil has been shifted from one set of consumers to another. This is redistribution and not inefficiency.
  • Probably the oil field got developed only because of hedging. If there is a risk that oil may go to $50, the oil company may be reluctant to invest in the wells and pipelines and so on.
  • This illustrates why hedging is needed: decisions today depend on the oil price five years from now. Futures markets lead to efficient outcomes in this case. A similar issue arises with the consumer's decision to buy a car – the risk that oil may go to $150 may make him reluctant to buy the car in the first place.
  • You are focusing on the price signal from the spot markets. The more powerful price signals come from the futures market. This signal drives oil exploration, purchase of cars, power plants and so on. The long run supply and demand responses are much stronger because of the futures markets.

I think you are underestimating the power of the Coase theorem when markets are deep and liquid.

Posted at 6:09 pm IST on Wed, 11 Apr 2012         permanent link

Categories: derivatives, risk management

Comments

Pricing of liquidity

Prior to the crisis, liquidity risk was underpriced and even ignored. Now, the pendulum has swung to the other extreme, but the result may once again be that liquidity is mispriced.

The Financial Stability Institute set up by the Bank for International Settlements and the Basel Committee on Banking Supervision has published a paper “Liquidity transfer pricing: a guide to better practice” by Joel Grant of the Australian Prudential Regulation Authority. The paper argues that a matched maturity transfer pricing method based on the swap yield curve does not price liquidity at all:

These banks came to view funding liquidity as essentially free, and funding liquidity risk as essentially zero. ... If we assume that interest rate risk is properly accounted for using the swap curve, then a zero spread above the swap curve implies a zero charge for the cost of funding liquidity.

I find myself in total disagreement with this assertion. The standard liquidity preference theory of the term structure says that the long term interest rate is equal to the expected average short term interest rate plus a liquidity premium. So matched maturity transfer pricing does price liquidity. If you accept the market liquidity premium as correct, then one can go further and say that the swap based approach prices liquidity perfectly; but I do not wish to push the argument that far. I would only say that Grant’s argument would hold true only under the pure expectations theory of the term structure, and in this case, the entire market is, by definition, placing a zero price on liquidity.
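
A stylized numerical restatement of that argument (the expected short rates and the premium below are invented numbers):

    # Under the liquidity preference theory, a term rate read off the (swap) curve is the
    # average expected short rate plus a liquidity/term premium; matched maturity transfer
    # pricing at that rate therefore does charge for liquidity, unless the premium is zero
    # (which is precisely the pure expectations case).
    expected_short_rates = [0.040, 0.042, 0.044, 0.046, 0.048]    # expected one-year rates, years 1 to 5
    liquidity_premium = 0.004                                      # five-year liquidity/term premium

    pure_expectations_rate = sum(expected_short_rates) / len(expected_short_rates)
    liquidity_preference_rate = pure_expectations_rate + liquidity_premium

    print(f"5-year rate, pure expectations:     {pure_expectations_rate:.2%}")
    print(f"5-year rate, liquidity preference:  {liquidity_preference_rate:.2%}")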

The paper argues that matched maturity transfer pricing must be based on the bank’s borrowing yield curve – the bank’s fixed rate borrowing cost is converted into a floating rate cost (using an “internal swap”) and the spread of this floating rate borrowing cost over the swap yield curve is treated as a liquidity premium. I believe that the error in this prescription is that it conflates credit and liquidity risk. The spread above the swap curve reflects the term structure of the bank’s default risk. Grant seems to recognize this, but then he ignores the problem:

This ... reflects both idiosyncratic credit risks and market access premiums and is considered to be a much better measure of the cost of liquidity.

I believe that there is a very big problem in including the bank’s default risk premium in pricing the assets that the bank is holding. The problem is that the bank’s default risk depends on the asset quality of the bank. Transfer pricing based on this yield curve can thus set up a vicious circle that turns a healthy bank into a toxic bank. A high transfer price of funds means that the bank is priced out of the market for low risk assets and the bank ends up with higher risk assets. The higher risk profile of the bank increases its borrowing cost and therefore its transfer price. This pushes the bank into even more risky assets and the vicious circle continues until the bank fails or is bailed out.

This problem is well known even in corporate finance where a firm is engaged in many different lines of business. There the solution is to use a divisional cost of capital which ignores the risk of the company as a whole and focuses on the risk of the division in question. The use of a corporate cost of capital in diversified companies leads to the lower risk businesses being starved of funds while the high risk businesses are allowed to grow. Ultimately, the corporate cost of capital also rises. Divisional cost of capital solves this problem.

It would be very odd if a regulatory guide to best practice ignores all this learning and pushes banks in the wrong direction. We should not lose sight of the simple principle that assets must be priced based on the characteristics of the asset and not the characteristics of the owner of the asset.

Posted at 3:34 pm IST on Thu, 29 Mar 2012         permanent link

Categories: regulation

Comments

Globalized Finance

It is interesting to find a well known G-SIFI (Global Systemically Important Financial Institution) being described as:

a London based hedge fund, headed by a rajestani, masquerading as a German bank

In all fairness, the description is perhaps partly facetious and in any case, I doubt whether this G-SIFI is either as globalized or as important as the Rothschilds (another Anglo-German combination) were in their heyday.

If you are keen to verify your guess of the identity of the G-SIFI in question, go to this dealbreaker.com story, scroll down to the comments, and read the comment of Edmond Dantes, from which the above quote is taken.

Posted at 8:37 pm IST on Fri, 23 Mar 2012         permanent link

Categories: banks

Comments

Reviving structural models: Pirrong tackles commodity price dynamics

The last quarter century has seen the slow death of structural models in finance and the relentless rise of reduced form models. I have argued that this leads to models that are “over-calibrated to markets and under-grounded in fundamentals”, and was therefore quite happy to see Craig Pirrong revive structural models with his recent book on Commodity Price Dynamics.

Ironically, it was a paper based on a structural model that made it possible to jettison structural models. The 1985 paper by Cox, Ingersoll and Ross (“An Intertemporal General Equilibrium Model of Asset Prices”, Econometrica, 1985, 53(2), 363-384) took a structural model of a very simple economy and showed that asset prices must equal discounted values of the asset payoff after making a risk adjustment in the drift term of the dynamics of the state variables. This was a huge advance because it became possible for modellers to simply assume a set of relevant state variables, calibrate the drift adjustments (risk premia) to other market prices, and value derivatives without any direct reference to fundamentals at all.

Over time, reduced form models swept through the whole of finance. Structural (Merton) models of credit risk were replaced by reduced form models. Structural models of the yield curve (based on the mean reversion and other dynamics of the short rate) were replaced by the Libor Market Model (LMM). In commodity price modelling, fundamentals were swept aside, and replaced by an unobservable quantity called the convenience yield.

All this was useful and perhaps necessary because the reduced form models were eminently tractable and could be made to fit market prices quite closely. By contrast, structural models were either intractable or too oversimplified to fit market prices well enough. Yet, there is reason to worry that the use of reduced form models has gone beyond the point of diminishing returns. It is worth trying to reconnect the models to fundamentals.

This is what Pirrong is trying to do in the context of commodity prices. What he has done is to abandon the idea of closed form solutions and rely on computing power to solve the structural models numerically. I believe this is a very promising idea, though Pirrong’s approach stretches computing feasibility to its limits.

Pirrong regards the spot commodity price as a function of one state variable (inventory, denoted x) and two fundamentals (denoted by y and z, representing demand shocks with different degrees of persistence, or a supply shock and a demand shock). As long as inventory is non-zero, the spot price must equal the discounted forward price, where the forward price in turn satisfies a differential equation of the Black-Scholes type. The level of inventory is the result of an inter-temporal optimization problem.

Pirrong solves all these problems numerically using a discrete grid of values for x, y and z. Moreover, to use numerical methods, time (t) must also be discretized – Pirrong uses a time interval of one day and the forward prices are for one-day maturity. After discretization, the optimization becomes a stochastic dynamic programming problem. For each day on the grid, a series of problems has to be solved to get the spot price and forward price functions. For each value of inventory in the x grid, a two-dimensional partial differential equation has to be solved numerically to get the grid of forward prices associated with that level of inventory. Then for each point in the x-y-z grid, a fixed-point (or root-finding) problem has to be solved to determine the closing inventory at that date. Once opening and closing inventories are known, the spot price is determined by equating supply and demand. All this has to be repeated for each date: the dynamic programming problem has to be solved recursively starting from the terminal date.

In this process, the computation of forward prices assumes a spot price function, and the spot price function assumes a forward price function. The solution of the stochastic dynamic programming problem consists essentially of iterating this process until the process converges (the new value of the spot price function is sufficiently close to the previous value).
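
To make the logic concrete, here is a deliberately stripped-down sketch of this kind of fixed-point iteration. It is emphatically not Pirrong’s model: there is a single i.i.d. supply shock instead of two persistent fundamentals, storers are risk neutral, there is no forward-curve PDE, and every parameter is purely illustrative.

    import numpy as np

    np.random.seed(0)
    beta, decay = 0.95, 0.98                      # one-period discount factor, 1 - storage loss rate
    shocks = np.random.normal(100.0, 10.0, 500)   # Monte Carlo draws of the new-supply shock
    avail = np.linspace(50.0, 250.0, 201)         # grid of "availability" = surviving inventory + new supply
    carry = np.linspace(0.0, 150.0, 151)          # grid of possible carry-out (inventory) levels

    def inverse_demand(q):
        # illustrative linear inverse demand: the spot price at which quantity q gets consumed
        return np.maximum(200.0 - q, 0.0)

    price = inverse_demand(avail)                 # initial guess for the price function: no storage at all

    for iteration in range(2000):
        # expected discounted resale value of a unit carried out, under the current price-function guess
        next_avail = decay * carry[:, None] + shocks[None, :]
        resale = beta * decay * np.interp(next_avail, avail, price).mean(axis=1)

        new_price = np.empty_like(price)
        for i, a in enumerate(avail):
            feasible = carry[carry <= a]
            spot = inverse_demand(a - feasible)   # spot price rises as more is carried out of the market
            # storage arbitrage: inventory is carried up to the point where the spot price
            # first reaches the expected discounted resale value
            j = int(np.argmax(spot >= resale[:feasible.size]))
            new_price[i] = spot[j]

        if np.max(np.abs(new_price - price)) < 1e-6:   # stop when the price function stops changing
            break
        price = new_price

    print("iterations run:", iteration + 1)
    print("spot price at availability 100:", round(float(np.interp(100.0, avail, price)), 2))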

Pirrong reports that the solution of the stochastic dynamic programming problem takes six hours on a 1.2GHz computer. To calibrate the volatility, persistence and correlation of the fundamentals to observed data, it is necessary to run an extended Kalman Filter and the stochastic dynamic programming problem has to be solved for each value of these parameters. All in all, the computational process is close to the limits of what is possible without massive distributed computing. Pirrong reports that when he tried to add one more state variable, the computations did not converge despite running for 20 days on a fast desktop computer.

Though the numerical solution used only one-day forward prices, it is possible to obtain longer maturity (one-year and two-year) forward prices as well as option prices by solving the Black-Scholes type partial differential equation numerically. Pirrong shows that models of this type are able to explain several empirical phenomena.

Perhaps, it should be possible to use models of this kind elsewhere in finance. Term structure models are one obvious problem with similarities to the storage problem.

Posted at 2:41 pm IST on Thu, 15 Mar 2012         permanent link

Categories: commodities, derivatives, interesting books

Comments

Glimcher: Foundations of Neuroeconomic Analysis

Over the last several weeks, I have been slowly assimilating Paul Glimcher’s Foundations of Neuroeconomic Analysis. Most of the neuroeconomics that I had read previously was written by economists (particularly behavioural economists) who have ventured into neuroscience. Glimcher is a neuroscientist who has ventured into psychology and economics. It appears to me that this makes a very profound difference.

First of all, neuroscientists (and biologists in general) treat the human brain (and more generally the animal brain) with an enormous amount of respect. The biologists’ view is that an organ that has evolved over hundreds of millions of years must be pretty close to perfection. For example, Glimcher points out that the ability of a rod cell in the human eye to detect a single photon of light “places human vision at the physical limits of light sensitivity imposed by quantum physics” (page 145). Similarly, the detection of image features in the visual cortex uses Gabor functions which also have well known optimality properties (page 237).

This view needs to be reconciled with the findings of psychologists and behavioural economists that the human brain makes the most egregious mistakes on very simple verbal problems. Glimcher provides one answer – evolution performs a constrained optimization in which greater accuracy has to be constantly balanced against greater computational costs (the brain consumes a disproportionate amount of energy despite its small size). Once again, this trade-off is carried out in a near perfect manner (pages 276-278). I would think that Gigerenzer’s Rationality for Mortals is another way of looking at this puzzle – many of these verbal problems are totally different from the problems that the brain has encountered during millions of years of evolution.

The second profound difference is that biologists do not put human behaviour on a totally different pedestal from animal behaviour. They tend to believe that the neural processes of a rhesus monkey are very similar to that of human beings. After all, they are separated by a mere 25 million years of evolution (page 169). Economists and psychologists probably have a much more anthropocentric view of the world. On this, I am with the biologists; in the whole of human history, anthropocentrism has at almost all times and in almost all contexts been a delusion.

This leads to a third big difference in neuroeconomics itself. Much of Glimcher’s book is based on studies of single neurons or multiple neurons and is therefore extremely precise and detailed. Highly intrusive single neuron studies are obviously much easier to do on animals than on human beings. Much of the neuroeconomics written by economists is therefore based on functional magnetic resonance imaging (functional MRI or fMRI) which provides only a very coarse grained picture of what is going on inside the brain but is easy to do on human beings. The problem is that if one reads only the fMRI based neuroeconomics, one gets the feeling that neuroscience is highly speculative and imprecise.

Glimcher’s book also leads to a view of economics in which economic constructs like utility and maximization are reified in the form of physical representations inside the brain. I am tempted to call this Platonic economics (drawing an analogy with Platonic realism in philosophy), but Glimcher refers to this as “because models” instead of “as if models” – individuals do not act as if they maximize expected utility; they actually compute expected utility and maximize it. There are neural processes that actually encode expected utility and there are neural processes that actually compute the argmax of a function.

One of the interesting aspects of this process of reification is the detailed discussion of the neural mechanisms behind the “reference point” of prospect theory. Glimcher argues that “all sensory encoding is reference dependent: nowhere in the nervous system are the objective values of consumable rewards encoded.” Glimcher raises the tantalising possibility that temporal difference learning models could allow the reference point to be unambiguously identified (page 321 et seq).

Another important observation is that directly experienced probabilities and verbally communicated probabilities are totally different things. When random events are directly experienced, there are neural mechanisms that compute the expected utility directly without probabilities and utilities being separately available for subsequent processing. As predicted by learning theory, these probabilities reflect an underweighting of low-probability events (because of a high learning rate). Symbolically communicated probabilities are a different thing altogether, where we find the standard Kahneman-Tversky phenomenon of overweighting of low-probability events.

Expected subjective values constructed from highly symbolic information are an evolutionarily new event, although they are also hugely important features of our human economies, and it may be the novelty of this kind of expectation that is problematic. ... If [symbolically communicated probabilities] is a phenomenon that lies outside the range of human maximization behavior, then we may need to rethink key elements of the neoclassical program. (page 373)

This too is probably related to Gigerenzer’s finding that frequencies work much better than probabilities in symbolically communicated problems and that single event probabilities are handled very badly.
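
To make the description/experience contrast concrete, here is a small sketch using the standard Tversky-Kahneman (1992) weighting function for described probabilities and a crude small-sample simulation for experienced ones. The numbers are illustrative and are not taken from Glimcher's book.

    import numpy as np

    rng = np.random.default_rng(0)

    def tk_weight(p, gamma=0.61):
        # Tversky-Kahneman (1992) probability weighting function for gains
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    p = 0.01   # a rare event whose probability is stated symbolically
    print("stated p = 0.01, decision weight =", round(tk_weight(p), 3))   # about 0.055: overweighted

    # The same probability learned from experience: 10,000 agents each observe
    # only 20 draws. Most never see the event at all, so rare events tend to be
    # underweighted when probabilities are learned rather than described.
    draws = rng.random((10_000, 20)) < p
    est = draws.mean(axis=1)
    print("share of agents who never saw the event:", round((est == 0).mean(), 3))
    print("median experienced estimate of p:", np.median(est))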

Posted at 8:09 pm IST on Wed, 14 Mar 2012         permanent link

Categories: behavioural finance, interesting books

Comments

Decumulation phase of retirement savings

During the last couple of decades, pension reforms have focused much attention on the accumulation phase in which individuals build up their retirement savings. Well designed defined contribution schemes have incorporated insights from neo-classical finance and behavioural finance to create low cost well diversified savings vehicles with simple default options. Much less attention has been paid to the decumulation phase after retirement where the savings are drawn down.

A report last month from the National Association of Pension Funds (NAPF) and the Pensions Institute in the UK argues that investor ignorance combined with lack of transparency and undesirable industry practices leads to large losses for investors. According to the report:

Each annual cohort of pensioners loses in total around £500m-£1bn in lifetime income. This could treble as schemes mature and auto-enrolment brings 5-8m more employees into the system.

This represents 5-10% of the annual amount consumers spend on annuities.

The report makes a number of excellent suggestions including creating a default option for annuitization. I would argue that a more radical approach would ultimately be needed.

In the accumulation phase, the key advance was the distinction between systematic/market risk and diversifiable/idiosyncratic risk. By restricting choice to well diversified portfolios, the investor’s decision is dramatically simplified – the only choice required is the desired exposure to market risk (proxied by the percentage allocation to equities).

The corresponding distinction in the decumulation phase is between aggregate mortality risk (what I like to call macro-mortality risk) and individual specific mortality risk (micro-mortality). Given a large pool of investors in any defined contribution scheme and some degree of compulsory annuitization, it can be assumed that micro-mortality risk is largely diversified away. Compulsory annuitization eliminates adverse selection to a great extent and large pools provide diversification.
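
A back-of-the-envelope simulation (with purely illustrative numbers) shows why pooling takes care of micro-mortality but leaves macro-mortality untouched:

    import numpy as np

    rng = np.random.default_rng(42)

    # Each annuitant dies during the year with probability q. Idiosyncratic
    # (micro) mortality risk diversifies away as the pool grows...
    q = 0.02
    for pool in (100, 10_000, 1_000_000):
        rates = rng.binomial(pool, q, size=5_000) / pool   # realized death rates across scenarios
        print(f"pool = {pool:>9,}: std of realized death rate = {rates.std():.4%}")

    # ...but a shift in population-wide life expectancy (say q falls by 10%)
    # hits every member of the pool at once, however large the pool is.
    print("macro shock: q falls from", q, "to", round(0.9 * q, 3), "for the entire pool")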

What is left is therefore the risk of a change in population-wide life expectancy or macro-mortality risk. It is not at all self-evident that insurance companies are well equipped to manage this risk. Perhaps, capital markets can deal with this risk better by spreading the risk across large pools of investment capital. In fact, it would make sense for many individuals in the accumulation phase to bear life-expectancy risk (as it increases the period during which their savings can accumulate). At least since the days of the Damsels of Geneva more than two centuries ago, pools of investment capital have been quite willing to speculate on diversified mortality risk. Shiller’s proposal regarding macro futures is another way of implementing this idea.

If we separate out macro mortality risk, then the decumulation phase of retirement savings can be commoditized in exactly the same way that indexation allowed the commoditization of the accumulation phase.

Posted at 10:08 pm IST on Thu, 8 Mar 2012         permanent link

Categories: pension

Comments

Stock market watching

From an interview with the President of the European Central Bank, Mario Draghi in the Wall Street Journal (WSJ) yesterday:

WSJ: What’s the first statistic you look at in the morning?

Draghi: Stock markets.

WSJ: Do you look at the euro exchange rate?

Draghi: Not in the early morning.

I am surprised that he did not mention the TED Spread or some other interest rate spread. And even within the stock market, it would appear that he is looking at market levels and not something like VIX. Are inflation targeting central banks actually closet asset price targeters?

It is interesting to compare the Draghi quote with a controversial statement in the Indian parliament by the former central banker, then finance minister, and future prime minister, Manmohan Singh in 1992:

But that does not mean that I should lose my sleep simply because stock market goes up one day and falls next day.

This provoked a retort from a parliamentary committee a year later:

It is good to have a Finance Minister who does not lose his sleep easily, but one would wish that when such cataclysmic changes take place all around, some alarm would ring to disturb his slumber

Posted at 2:44 pm IST on Fri, 24 Feb 2012         permanent link

Categories: equity markets, monetary policy

Comments

Intra day exposures once again

No, I am not talking about my obsession with intra-day risks (see here and here), but about the New York Fed’s uncharacteristically blunt criticism of the big clearing banks on this issue:

... the amount of intraday credit provided by clearing banks has not yet been meaningfully reduced, and therefore, the systemic risk associated with this market remains unchanged.

These structural weaknesses are unacceptable and must be eliminated.

... the Task Force [of the clearing banks] ... has not proved to be an effective mechanism for managing individual firms’ implementation of process changes

The Fed’s response is to step in directly to ensure that practices change:

... the New York Fed will intensify its direct oversight of the infrastructure changes

Ideas that have surfaced and could be considered include restrictions on the types of collateral that can be financed in tri-party repo and the development of an industry-financed facility to foster the orderly liquidation of collateral in the event of a dealer's default.

Nor is the criticism restricted to the banks; the Fed is equally critical of the non bank participants in this market:

Ending tri-party repo market participants’ reliance on intraday credit from the tri-party clearing banks remains a critical financial stability policy goal.

The Federal Reserve and other regulators will be monitoring the actions of market participants to ensure that timely action is being taken to reduce sources of instability in this market.

The background to all this is the strange way in which the tri-party repo market operates in the US. Leveraged investors in various securities finance their positions using overnight repos, which can be regarded as a form of secured borrowing. The securities in question are not, however, pledged directly to the lenders; they are held with the clearing bank (hence the name tri-party). The next morning, the lender gets its money back, but it is not the borrower who repays it – the clearing bank lends the money intra-day. Over the course of the day, the borrower drums up a new set of lenders to lend against its securities that night, and the bank again ends the day without any exposure to the borrower.
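
A stylized sketch of this daily cycle (grossly simplified; real tri-party operations involve many lenders, collateral valuation and haircuts) makes the point that the exposure to the dealer never disappears; it merely shuttles between the overnight lenders and the clearing bank:

    # Stylized tri-party repo daily cycle (simplified illustration).
    exposure = 100.0   # illustrative: $100 of dealer securities being financed

    timeline = [
        ("overnight",      "cash lenders",  "repos outstanding against the dealer's securities"),
        ("morning unwind", "clearing bank", "bank repays the lenders and lends to the dealer intra-day"),
        ("during the day", "clearing bank", "dealer looks for lenders willing to roll the repo tonight"),
        ("evening rewind", "cash lenders",  "new overnight repos repay the bank's intra-day loan"),
    ]

    for moment, holder, note in timeline:
        print(f"{moment:<15} exposure of {exposure:.0f} sits with {holder:<13} - {note}")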

The weakness here is that the repo lenders are relying on the clearing bank to get their money back in the morning; the bank is relying on the repo lenders to get its money back in the evening; and both could become complacent about the risks involved. It is a little like two people passing a hot potato back and forth to each other, and pretending that there is no longer any hot potato to worry about. (It is actually worse than that because while the potato would cool in a few minutes, the securities underlying the repo may take years to mature – assuming that they do not default in between.) Of course, everybody wakes up at times of stress, and then the clearing bank is in the position of having to decide each day whether to throw the borrower into bankruptcy by refusing to clear its repos. This is hardly the way to organize such a large and systemically important market.

We should all be happy that, for once, the Federal Reserve seems to be taking things seriously instead of succumbing to regulatory capture.

Posted at 6:27 pm IST on Thu, 16 Feb 2012         permanent link

Categories: risk management

Comments

Intra-day laxity: MF Global edition

I blogged two years back about the tendency in finance to be prudent at night but reckless during the day in the context of the Lehman bankruptcy. A related phenomenon (compliant at night but transgressing during the day) is seen in the MF Global bankruptcy according to the preliminary trustee report released earlier this week:

The investigation to date has found that transactions regularly moved between accounts and that funds believed to be in excess of segregation requirements in the commodities segregated accounts were used to fund other daily activities of MF Global ... apparently with the assumption that funds would be restored by the end of the day. By Wednesday, October 26th, as the result of increasing demands for funds or collateral throughout MF Global, funds did not return as anticipated. As these withdrawals occurred, a lack of intraday accounting visibility existed, caused in part by the volume of transactions being executed ... (Paragraph 7, emphasis added)

Of course, I am not a lawyer, but it appears to me that such intra-day laxity is not consistent with the Commodity Exchange Act or the CFTC Regulations:

... all money, securities, and property received by such ... [futures commission merchant] to margin, guarantee, or secure the trades or contracts of any customer ... shall be separately accounted for and shall not be commingled with the funds of such commission merchant ... (Section 4d of the Commodity Exchange Act)

Each futures commission merchant shall treat and deal with the customer funds of a commodity customer or of an option customer as belonging to such commodity or option customer. All customer funds shall be separately accounted for, and shall not be commingled with the money, securities or property of a futures commission merchant or of any other person ... (CFTC Regulation 1.20)

... futures commission merchant ... [may add] to such segregated customer funds such amount or amounts of money, from its own funds or unencumbered securities from its own inventory, of the type set forth in §1.25, as it may deem necessary to ensure any and all commodity or option customers’ accounts from becoming undersegregated at any time. The books and records of a futures commission merchant shall at all times accurately reflect its interest in the segregated funds. (CFTC Regulation 1.23, emphasis added)

It would appear to me that the words “at any time” and “at all times” prohibit intra-day withdrawal “with the assumption that funds would be restored by the end of the day” as well as “lack of intraday accounting visibility”.

As an aside, it is interesting to note that after looking at 800 computer drives and 100 terabytes of data, the trustees still do not know where the money has gone:

For three months the Trustee’s investigative team has worked to understand what happened during the final days of MF Global when cash and related securities movements were not always accurately and promptly recorded due to the chaotic situation and the complexity of the transactions. With these preliminary investigative conclusions in hand, the Trustee’s investigative team will analyze where the property wired out of bank accounts established to hold segregated and secured property ultimately ended up. (Paragraph 6)

The Trustee’s investigators, including the legal and forensic accounting teams, have conducted over 50 witness interviews, preserved secure access to thousands of boxes of hard copy documents, imaged over 800 computer drives, and are maintaining over 100 terabytes of data. (Paragraph 12)

Posted at 4:05 pm IST on Thu, 9 Feb 2012         permanent link

Categories: risk management

Comments

Market microstructure: Limit orders and order flow

One of my favourite post crisis themes has been the idea that market microstructure has macro consequences. I touch upon this in my paper on post crisis finance (mentioned in my blog posts here and here). Two papers that I read last month are related to this theme.

Psy-Fi blog pointed me towards a paper by Linnainmaa showing that the underperformance of individual investors’ portfolios can be attributed largely to their use of limit orders. When one looks at trades, it may appear that these investors were stupidly selling when the smart money was buying in response to good news. Linnainmaa’s point is that quite often, this apparent “selling” is not really active selling; their limit orders were simply being hit by the smart money. Looking at the totality of individual investors’ orders may show that these orders had no sell bias. What happens is that when the smart money is buying, the individual investors’ buy limit orders do not execute while their sell orders do.
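
A toy simulation (my own stylized illustration, not Linnainmaa's methodology) shows how an investor with a perfectly symmetric book of limit orders nevertheless shows up in the trade data as a seller on good-news days:

    import numpy as np

    rng = np.random.default_rng(1)

    # Every day the uninformed investor places a buy limit order 1% below and a
    # sell limit order 1% above the opening price. Informed flow moves the price
    # by 2% in the direction of the news, so only one side ever executes.
    days = 10_000
    news = rng.choice([+1, -1], size=days)
    move = 0.02 * news

    for label, mask in (("good-news days", news > 0), ("bad-news days ", news < 0)):
        buys = (move[mask] <= -0.01).sum()    # buy limit at -1% is hit
        sells = (move[mask] >= 0.01).sum()    # sell limit at +1% is hit
        print(f"{label}: executed buys = {buys:>5}, executed sells = {sells:>5}")

    # The order book is unbiased, but the executed trades make the investor look
    # like a seller precisely when the smart money is buying.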

If this is true, then one implication of this use of limit orders by uninformed traders is that the smart money is able to buy shares too cheap. The price does not move as much as it should have. This phenomenon may also be contributing to the well known momentum effect. This leads straight on to the second paper that I read recently (I do not recall to whom I owe a hat tip for this paper).

Beber, Brandt and Kavajecz published their paper “What Does Equity Sector Orderflow Tell Us About the Economy?” recently (the working paper version is available here). They show not only that order flow into defensive sectors of the stock market forecasts recessions, but also that order flow does a better job than prices or returns. One possible explanation of why order flow is more informative than prices is of course the phenomenon described by Linnainmaa – since uninformed limit orders absorb some of the impact of informed buying, the full information content of these orders is not impounded in prices.

This is of course totally different from genuine equilibrium models with representative agents, where prices are not only fully informative but also move with zero trading volume, implying that order flows and trading volumes are totally uninformative (or rather totally irrelevant).

Posted at 9:49 pm IST on Sun, 5 Feb 2012         permanent link

Categories: exchanges

Comments

Safe (or informationally insensitive) assets

Gary Gorton and his co-authors have produced a large literature on what they call safe assets (assets whose prices are informationally insensitive). They published two new papers this month on collateral crises and on the constant share of safe assets through the last half century. Their earlier papers on being slapped by the invisible hand and the run on repo are quite well known. The basic argument of this literature is that:

  1. Safe assets serve an important social function.
  2. Safe assets are in short supply – the demand for these assets exceeds the stock of government securities and other obvious safe assets.
  3. The shadow banking system is an important source of supply of safe assets.
  4. The shadow banking system and the safe assets that it creates must be protected from “runs” in the same way that bank deposits are protected.

A more radical version of this idea can be found in a paper by Morgan Ricks which argues that only licensed money-claim issuers should be permitted to issue short term debt and that all this debt should then be explicitly insured by the government.

Much of what we know about the demand for safe assets comes from the work of IMF economist Manmohan Singh (not to be confused with the Indian Prime Minister!). In a series of papers on the use of collateral in OTC derivatives, counterparty risk and central counterparties, collateral velocity, rehypothecation, and the reverse maturity transformation by asset managers, Singh and his co-authors have documented the need for safe assets in derivative markets and asset management.

What emerges from this discussion is that much of the demand for safe assets comes from sophisticated financial institutions and sovereign reserve managers. To my mind, this completely weakens the case for any form of subsidy for the creation of safe assets. The literature on participation in equity markets (which can be regarded as a proxy for risk taking in financial markets) demonstrates that participation is determined to a great extent by intelligence (Grinblatt et al), cognitive ability (Christelis et al), education (Cole and Shastry) and financial literacy (Rooij et al).

Most of the demanders of safe assets are big institutions (according to Manmohan Singh’s work), and one would expect them to possess a sufficient pool of intelligence, cognitive ability, education and financial literacy to be able to invest in risky assets. In some cases, portfolio risk may actually be lower if safe assets are replaced by equities. For example, Manmohan Singh explains how the securities lending activities of asset managers create a reverse maturity transformation – they convert the long term investment portfolio of households into a demand for short term assets (collateral). To the extent that equities are correlated with each other, it is plausible that collateral in the form of stocks similar to those that are lent out might reduce risk. To the extent that the borrower of the stocks is engaged in a “pair trade”, a natural supply of such collateral might exist.

I suspect that the demand for safe assets is better explained by a rational tradeoff between the costs and benefits of risk assessment (in a manner that bears some similarities to the rational inattention model of Sims). I therefore look at the huge demand for safe assets as a consequence of the moral hazard engendered by repeated bail outs of the financial sector. Even sophisticated investors may find it optimal not to make a serious risk assessment of any asset which has little idiosyncratic risk and is exposed only to systemic risk, if the probability of such an asset (or rather its investors) being bailed out is quite high.

When one reads Gorton carefully, it becomes apparent that the safe (or informationally insensitive) assets are not risk free – they are only free of idiosyncratic risk. Systemic risk is less subject to information asymmetry and therefore does not pose the problems that Gorton attributes to risky assets in general. But then the ability of the state to insure against systemic risk is highly suspect, because if such insurance is attempted on a sufficiently large scale, the result is likely to be a sovereign debt crisis when the systemic risk event materializes. Capitalism to my mind is about accepting and dealing with failure, while the path that Gorton and Ricks are proposing is the path of socialism.

I see a similarity between the desire of the rentier class for safe assets and the desire of the working class for defined benefit pension plans. In both cases, the desire is to shift the risks to the taxpayers and thereby avoid the cognitive burden of making informed choices. In the case of the working class, society has over the last few decades rejected the demand for “informationally insensitive” pensions (defined benefit plans) despite the fact that lower levels of financial education might make the cognitive burden quite high for many of these people. I see no reason why the rentier class should receive a more favourable treatment.

Posted at 9:39 pm IST on Thu, 26 Jan 2012         permanent link

Categories: bond markets

Comments

Finance teaching and research after the global financial crisis revisited

Almost a year ago, I wrote a paper on finance teaching and research after the global financial crisis (see this blog post). A revised version of this paper has been published in the latest issue of Vikalpa. The only significant change in the published version is that the portion dealing with learning from related disciplines has been expanded and rewritten. Most of the other changes were only to improve readability and clarity. As always, comments and suggestions are welcome.

Posted at 4:54 pm IST on Wed, 25 Jan 2012         permanent link

Categories: Bayesian probability, post crisis finance

Comments

The many different kinds of fixed exchange rate regimes

In recent decades, economists have increasingly focused on the de facto exchange rate regime, using the ideas developed by Reinhart and Rogoff (2004) and by Frankel and Wei (1994). This approach of looking at the actual data is of course a huge advance over the naive approach of relying on official pronouncements. Intermediate approaches are also possible, as exemplified in the IMF’s De Facto Classification of Exchange Rate Regimes and Monetary Policy Frameworks.
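
For readers unfamiliar with the Frankel-Wei approach, the idea is to regress the returns of the currency under study (measured against an outside numeraire such as the Swiss franc or the SDR) on the returns of candidate anchor currencies and to read the coefficients as implicit basket weights, with the R-squared indicating how tightly the currency tracks the basket. Here is a minimal sketch on simulated data; actual studies obviously use real exchange rate series.

    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated daily log returns, all measured against an outside numeraire.
    # The "home" currency is constructed to track a 70/30 USD/EUR basket.
    n = 1_000
    usd = rng.normal(0, 0.005, n)
    eur = rng.normal(0, 0.005, n)
    home = 0.7 * usd + 0.3 * eur + rng.normal(0, 0.001, n)

    X = np.column_stack([np.ones(n), usd, eur])           # intercept plus candidate anchors
    beta, *_ = np.linalg.lstsq(X, home, rcond=None)
    resid = home - X @ beta
    r2 = 1 - (resid ** 2).sum() / ((home - home.mean()) ** 2).sum()

    print("estimated basket weights (USD, EUR):", beta[1:].round(3))
    print("R-squared:", round(r2, 3))                     # close to 1 suggests a tight de facto basket or peg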

Obsessive contemplation of currency breakups (see my blog post last month) has made me more sensitive to the legal nuances of a fixed exchange rate regime, and I am beginning to think that looking only at the statistical properties of the exchange rate time series is not sufficient.

I have been thinking of three small but rich and highly successful jurisdictions which have today adopted a fixed exchange rate regime – Switzerland, Hong Kong and Luxembourg. The statistical properties of recent exchange rate behaviour in these three countries might be very similar, but the legal and institutional underpinnings are very different. A de-pegging event would play out very differently in these three cases.

  1. Switzerland has temporarily pegged its currency (the Swiss franc) to the euro through an executive decision of its central bank. There is no statutory basis for this peg. Technically, the Swiss have put a floor (and not a peg) on the EUR/CHF exchange rate (Swiss francs per euro); but given the massive upward pressure on the franc, the floor is a de facto peg.

    Exiting this peg would be very easy through another executive decision of the central bank. The only real costs would be (i) the exchange losses on the euros bought by the Swiss central bank, and (ii) probably a modest loss of credibility of the central bank. I would imagine that a significant uptick in the inflation rate in Switzerland would be sufficient to cause the central bank to drop the peg and accept these costs.

  2. Hong Kong’s peg to the US dollar is much stronger and of much longer standing. It has lasted a whole generation and is enshrined in a formal currency board system. Having survived the Asian crisis, the peg is regarded as highly credible. Yet, it would be very easy to change the peg or even to remove the peg completely. In fact, my reading of the statutes is that this could happen through an executive decision of the government without any changes in the law.

    Indeed, there is a significant probability that over the course of the next decade, the HK dollar would be unpegged from the US dollar and repegged to the Chinese renminbi. This change could happen quite painlessly and without any legal complications.

  3. Luxembourg has adopted the euro as its currency. This means that leaving the euro and recreating its own currency would be a legal nightmare. The doctrine of lex monetae asserts that each country exercises sovereign power over its own currency, and that it is the law of that country which determines what happens when a currency is changed. This might appear to give enough leeway to the Luxembourg government to do whatever it wants.

    However, in a cross border contract, the other party would argue that the term “euro” in the contract did not refer to the currency of Luxembourg at all, but to the currency of the euro area as governed by various EU treaties. This argument may not help if the contract is governed by Luxembourg law because the local courts are likely to interpret lex monetae very broadly. But if the contract were governed by English law (as is quite common in international contracts), it is quite likely that the English courts would take the EU interpretation. Assuming that the UK remains a member of the EU, its courts might not have any other choice.

I am beginning to think that we tend to focus too much on the role of money as a medium of exchange or as a store of value. If we do this, it appears that all the three countries have surrendered their monetary sovereignty to an equal extent. But the role of money as a unit of account is extremely important. Of the three countries described above, only Luxembourg has (arguably) surrendered its sovereignty on the unit of account. This loss of sovereignty is the most damaging of all.

An alternate way of constructing the euro way back in 1999 might have been for Luxembourg to adopt its own new currency (say the Luxembourg euro) of which no notes would be printed, peg this currency to the euro issued by the ECB (the ECB euro) at 1:1, and declare the ECB euro to be the only legal tender in the country. From a medium of exchange or store of value point of view, this arrangement would be identical to what exists today because only ECB notes would circulate. But in Luxembourg law, under this alternate approach the ECB notes would just happen to be the legal tender for the Luxembourg euro which would just happen to be equal to the ECB euro. The Luxembourg euro would then be capable of being unpegged from the ECB euro at any time under the doctrine of lex monetae.

The problem as I see it is that technocrats always have a temptation to try and build something that cannot fail. The technocrats who created the euro therefore set out to create something irreversible and permanent. I think it is better to approach the matter with greater humility, and endeavour to build something that would fail gracefully rather than not fail at all.

Finally, there is a fourth small, rich and highly successful country – Singapore – which is also an important financial centre like the other three and has gotten by quite well without pegged exchange rates.

Posted at 1:22 pm IST on Thu, 19 Jan 2012         permanent link

Categories: international finance

Comments

The risks in Indian dependence on foreign risk capital

Last month, the McKinsey Global Institute (MGI) published a 95 page report (The emerging equity gap: Growth and stability in the new investor landscape) arguing that over the next decade, there is likely to be a shortage of equity investors globally. This is based on two arguments:

  1. Demographics and the regulatory aftermath of the financial crisis are reducing the demand for equities from investors in developed markets.
  2. Global wealth is shifting to emerging market investors who have historically had less appetite for equity investment.

The second prong of this argument is clearly debatable. Had there been a think tank examining such questions in the nineteenth century, it too would have worried about the shifting of wealth from the UK (which was then the dominant source of risk capital for the world) to newer rivals. We do know with hindsight that with increasing wealth, the rising powers of the nineteenth century went on to become major sources of risk capital to the rest of the world. That could well happen again, but it will not happen unless today’s emerging markets create the preconditions for a vibrant equity market.

MGI is therefore on much stronger ground when it discusses the policy steps that emerging markets should take to develop their equity markets – strengthen the legal and regulatory foundations of equity markets; expand channels for households to access equity markets; and enable the growth of institutional investors. (pages 55-56).

All of this is of great relevance to India which has thrived during the last two decades on foreign risk capital. India has a large domestic savings pool and could perhaps at a crunch get by on only these savings. But two-thirds of household financial savings go into risk free assets like currency, deposits and small savings. Most of the remaining third goes into insurance and retirement funds which in turn invest a very large part of their resources in government bonds and other safe assets. Only around 1% of household savings go into equities. India may have nearly enough aggregate savings, but there is an acute shortage of risk capital.

Foreign portfolio capital has bridged this gap during the last two decades. Since these capital inflows exceed the aggregate savings shortfall, a part of the capital flows ends up as foreign exchange reserves which finance profligate governments in the developed world (and post 2008, this lending is far from being risk free).

Without foreign risk capital, it would have been impossible for the Indian private sector to come anywhere near the growth rates that it has achieved in the last two decades. But we must recognize that the reliance on foreign risk capital is a short term fix to the shortage of domestic risk capital. As we saw in 1998 and again in 2008, this dependence creates serious vulnerabilities. When foreign portfolio flows reverse, risk capital disappears and weak balance sheets cannot raise money at all. (Strong balance sheets can perhaps raise debt locally). Secondly, capital inflows can ignite asset price bubbles and outflows can prick the bubbles. Asset prices in India often depend on global risk aversion even more than on domestic sentiment.

It is true that Indian equity markets have been one of the great success stories of financial sector reforms (the contrast with the dismal state of the corporate bond market is particularly glaring). But we must not forget that even this success consists principally in the fact that foreign equity risk capital is largely intermediated through Indian markets (by contrast, the Indian corporate debt market moved offshore because of poor regulatory choices).

Creating a pool of domestic risk capital will take a long time and that is all the more reason why we must start soon. We will need a lot of things to get there – well developed and liquid markets, institutional support to facilitate easy access, sound regulatory regimes to provide investor protection and confidence, and finally investor education and awareness.

India would need to do all this in its own interest. What MGI is saying is that India (and other emerging markets) might be forced to do this even faster because the foreign pool of risk capital may be about to dry up.

Posted at 2:39 pm IST on Mon, 9 Jan 2012         permanent link

Categories: equity markets

Comments

RBS and ABN Amro Due Diligence

Earlier this month, the UK Financial Services Authority bowed to public pressure and published a massive (452 page) report on the failure of the Royal Bank of Scotland. This report has been much commented upon in the press and the blogosphere (yes, I am rather late to this party!), but I do wish to comment on what the report says about the due diligence involved in the ABN Amro acquisition:

Many readers of the Report will be startled to read that the information made available to RBS by ABN AMRO in April 2007 amounted to ‘two lever arch folders and a CD’; and that RBS was largely unsuccessful in its attempts to obtain further non-publicly available information. (Chairman’s Foreword, page 9)

The RBS Board was unanimous in its support for the acquisition. The RBS Board’s decision to launch a bid of this scale on the basis of due diligence which was insufficient in scope and depth for the major risks involved entailed a degree of risk-taking that can reasonably be criticised as a gamble. The Review Team reached this conclusion in the knowledge that had a fully adequate due diligence process been possible, the RBS Board might still have been satisfied with the outcome and decided to proceed. (Para 415)

In contested takeovers only very limited due diligence is possible. Management and boards have to decide whether the potential benefits of proceeding on the basis of limited due diligence outweigh the risks involved. Institutional investors are well aware of the limited nature of the due diligence possible in these circumstances, and have the ability to vote against approval of the acquisition if they consider the risks are too great. If the acquisition turns out to be unsuccessful, they can dismiss the board and management. (Para 441)

In most sectors of the economy, this market discipline approach remains appropriate because the downside risks affect only the equity shareholders. Banks, however, are different because, if a major takeover goes wrong, it can have wider financial stability and macroeconomic effects. The potential downside is social, not just private. (Para 442)

As a result, further public policy responses to the lessons of the ABN AMRO acquisition need to be considered. ... Establishing within this formal approval regime a strong presumption that major contested takeovers would not be approved, or would only be approved if supported by exceptionally strong capital backing, given that specific risks are created by an inability to conduct adequate due diligence. (Para 443)

I think this whole idea is misguided. Absent outright fraud, there is in fact not much that can be gained from invasive due diligence of a large public company. The FSA report itself admits that all the major risks of the acquisition were crystal clear without any due diligence. RBS made a conscious strategic decision to buy the ABN Amro business. They thought that the assets were of great strategic value, when in fact they were toxic. The problem was not one of lack of information; it was simply a wrong macro view of this business. A million lever arch folders and CDs would not have cured this problem.

If the FSA thinks that an investment decision based on ‘two lever arch folders and a CD’ worth of due diligence is a gamble, then they must also argue that Warren Buffett’s bail outs of General Electric and Goldman Sachs in 2008 (and of Bank of America this year) were gambles. I think this is wrong. Absent outright fraud, buying a large listed company after analysing only the public filings is perfectly prudent and legitimate. And, if the FSA thinks that banking is a sector where there is a preponderance of outright frauds, then that is an admission of total and complete regulatory failure – the regulators surely have access to all the lever arch files and CDs in the bank.

Only a check box ticking regulatory mindset can lead somebody to the silly idea that the quality of decision making can be measured by the volume of data that was processed. I am reminded of the great chess player Jose R. Capablanca who, when asked how many moves he analysed before making his move, replied “I see only one move ahead, but it is always the correct one.” When RBS looked at ABN Amro, they were fixated on one big move and that was a horribly wrong one. For that, they do deserve all the blame in the world, but let us not get unduly fixated on the ‘two lever arch folders and a CD’.

Posted at 2:54 pm IST on Fri, 30 Dec 2011         permanent link

Categories: banks, regulation, risk management

Comments

Examples of Currency Breakup

Since the prophets of gloom and doom are now talking openly of a possible breakup of the euro zone, I thought it would be useful to look back at some instances of breakup of currencies to see what really happens. I have chosen some examples based on my familiarity with them and describe them below in reverse chronological order. I think the examples are fascinating in their own right regardless of what one thinks about the prospects of the euro zone.

Argentina 2001-02

Many analysts have drawn parallels between the current Greek crisis and the Argentine crisis of 2001. Therefore, my first example is the Argentine pesification of 2002. The process began in December 2001 with the corralito which froze all bank accounts for 12 months while allowing withdrawals of $250 a week for essential expenses. This led to riots that forced the resignation of the president. During the next two weeks, Argentina went through three interim presidents while also defaulting on its debt. In January 2002, interim president Duhalde announced an asymmetric pesification in which dollar denominated bank deposits were converted to pesos at 1.40 peso to the dollar while dollar denominated loans given by the banks were converted at 1.00 peso to the dollar. The government issued compensation bonds to the banks for the differential of 0.40 pesos, but at that time, the government was widely regarded as insolvent. The free market exchange rate was approximately 4 pesos to the dollar. The Argentine Supreme Court declared the corralito and the pesification unconstitutional. The government responded by impeaching two judges and forcing the resignation of two others. In October 2004, the Supreme Court ruled that pesification was legal. A good chronology of most of these developments can be found in Gutierrez and Montes-Negret (“Argentina’s Banking System: Restoring Financial Viability”, produced by the World Bank Office for Argentina, Chile, Paraguay and Uruguay, 2004).

Ruble zone early 1990s

Another example with similarities to the euro zone is the breakup of the ruble zone in the early 1990s after the collapse of the Soviet Union. While the overthrow of Gorbachev and the fall of the Soviet Union were political in nature, the breakup of the ruble zone was primarily due to economic reasons. After the collapse of the USSR, no change in monetary arrangements was made – the newly formed Central Bank of Russia (CBR) took over the old Soviet central bank (Gosbank) in Russia while Gosbank branches in the other countries became 14 independent central banks. However, all the printing presses were in Russia and so only the CBR printed rubles. The other countries relied on ruble notes and coins shipped from Russia by the CBR.

The old soviet system was based on a dual monetary circuit: enterprises could convert rubles in the bank (beznalichnye or non-cash rubles) into cash (nalichnye) only for specified purposes – chiefly the payment of wages, which were paid in cash. All inter-enterprise transactions were required to be in non-cash (beznalichnye) rubles to facilitate central planning and control (see for example, William Tompson, 1997, “Old Habits Die Hard: Fiscal Imperatives, State Regulation and the Role of Russia’s Banks”, Europe-Asia Studies, 49(7), 1159-1185). This dual circuit continued in the post-soviet ruble zone as well. The implication was that while the CBR had a monopoly on cash rubles (nalichnye), other central banks could and did create non-cash (beznalichnye) rubles.

Initially, the CBR continued the old soviet practice of accepting beznalichnye rubles of other ruble zone countries as payment for exports from Russia to these countries. So the central bank of Ukraine could lend beznalichnye rubles to a local bank which could lend them to a local factory which could use these to buy inputs from Russia. Effectively, Ukraine was paying for this stuff with rubles created by itself. This has striking similarities to how Germany has been lending to the rest of the euro zone through the ECB’s Target2 system.

At some point, the CBR decided that it would not accept beznalichnye rubles of other central banks. It also began printing new Russian rubles for use within Russia while printing old soviet rubles for shipping to other ruble zone countries. Finally, in 1993, the CBR unilaterally demonetized soviet era ruble notes and exchanged them for Russian rubles. The ruble zone was effectively terminated and the remaining 9 ruble zone countries (some countries had left even earlier) were forced to adopt their own currencies. Ultimately, the ruble zone broke up because Russia (or more precisely CBR) was not prepared to pay the economic price required for its continuation. A good discussion of the collapse of the ruble zone can be found in Abdelal’s paper (“Contested currency: Russia’s rouble in domestic and international politics”, Journal of Communist Studies and Transition Politics, 2003.)

Pakistan/Bangladesh 1971

My next two examples are closer home from the Indian subcontinent. In early 1971, Bangladesh declared independence from Pakistan, but the government-in-exile could return to the country and start functioning only nine months later. During this war of independence, Bangladesh continued to use the Pakistani currency without any change. Many people dealt with this incongruity by rubber stamping “Bangladesh” or “Joy Bangla” on these notes in English or Bengali. Images of these notes can be seen here and here.

Pakistan however took the stance that the war of independence was a civil war and that the notes circulating in Bangladesh were looted from the branches of the Pakistan central bank (State Bank of Pakistan) in East Pakistan (Bangladesh). It then declared that all notes carrying the inscription “Bangladesh” or “Joy Bangla” or “Dacca” in any language would not be legal tender in Pakistan. It also proceeded to issue new currency notes in different colours and withdraw the old notes from circulation. (These events are described at the web site of the State Bank of Pakistan). This demonetization resulted in the paradoxical situation where the old Pakistan currency notes now circulated only in Bangladesh which was at war with Pakistan!

Even after winning the war of independence, Bangladesh retained the old currency for several months. The statute setting up the Bank of Bangladesh stated that “all Bank Notes, Coins and Currency Notes ... which were in circulation in Bangladesh [on December 16, 1971] shall continue to be legal tender”. Subsequently, Bangladesh printed new currency and exchanged the old notes.

India/Pakistan 1947-48

The other example from the subcontinent was the partition of undivided British India into India and Pakistan in August 1947. The two countries agreed that the Reserve Bank of India (RBI) would act as the central bank of Pakistan also for over a year (till September 1948). During this period, the government of India agreed to take two nominees of the Pakistan Government on the central board of the RBI. During this transition period, Indian notes were to remain legal tender in Pakistan, and the RBI was to issue notes overprinted with the inscription ‘Government of Pakistan’ in English and Urdu. During the transition period, these overprinted notes were to be the liability of the RBI, but not of the Government of India.

At the end of the transition period, the Government of Pakistan was to exchange the (non overprinted) Indian notes circulating in Pakistan at par and return them to India. The overprinted notes would become the liabilities of Pakistan. The division of assets of the Issue Department of RBI was to take place after the transition period. The division was to be based on the ratio of notes circulating in the two countries at the end of the transition period.

When the Kashmir dispute erupted later, the financial settlement between India and Pakistan broke down, and the RBI’s role as the central bank of Pakistan was terminated three months ahead of time. An excellent account of all these events can be found in Chapter 18 of Volume 1 of the RBI History. Images of Indian rupees overprinted with ‘Government of Pakistan’ in English and Urdu can be found here.

Austro-Hungarian Empire 1919

I now move back from the Indian subcontinent to Europe for my final example – the breakup of the Austro-Hungarian Empire in 1919. Richard Roberts (“A stable currency in search of a stable Empire? The Austro-Hungarian experience of monetary union”, History and Policy Paper 127, October 2011) provides an excellent discussion of this episode and its relevance for the euro zone. After the defeat of the Austrian Habsburg Empire at the hands of Prussia in 1866, Hungary threatened secession from the empire. The Compromise of 1867 was a constitutional treaty that recognised the sovereign autonomy of Austria and Hungary under a single monarch – the Austro-Hungarian Dual Monarchy. The two parts of the empire had separate parliaments and separate national debt, but there was a monetary union under the Austro-Hungarian Bank (AHB). Like the European Central Bank (ECB) today, the AHB established a strong reputation for a policy of sound money. For a long period, the AHB was also able to rein in the fiscal profligacy which had been the hallmark of the Austrian Habsburg Empire.

Everything changed with the First World War. After its defeat in this war, the Austro-Hungarian Empire collapsed into five successor states – Czechoslovakia, Romania, Yugoslavia, Austria and Hungary. The peace treaties specified that the successor states should stamp Austro-Hungarian Bank notes circulating in their areas and then introduce their own notes. Successor state claims on the reserves and other assets of the Austro-Hungarian Bank were in proportion to the notes circulating in their territories. Stamping was done by affixing adhesive stamps or by rubber or metal stamps. Images of these stamped notes can be seen in the delightful paper by Keller and Sandrock (“The Significance of Stamps Used on Bank Notes”) and also at Wikipedia.

Stamping of notes to turn them into legal tender was a form of taxation, and in some countries the tax rate was excessive. This created incentives for people to forge the stamps, especially when the stamping was lacking in security features. The value of the stamped currency depended on the monetary policy followed in the various countries. This created incentives for unstamped AHB notes to be smuggled out of profligate countries for stamping in countries with more sound money. Reducing these substantial illicit cross-border flows required customs check points and deployment of army patrols.

The interesting thing about this episode was that out of the five successor states of the empire, only one (Czechoslovakia) was able to create a central bank with anything resembling the sound money attributes of the old AHB. Hungary for example went on to have one of the worst hyperinflations in world history.

Posted at 6:30 pm IST on Thu, 15 Dec 2011         permanent link

Categories: currency, international finance

Comments

Book on SEBI Act

Sumit Agrawal and Robin Joseph Baby sent me a copy of their book on the SEBI Act. I am not a lawyer, but I found the book well written and useful. To my knowledge, this is the first book giving detailed commentary on each section of the Act including judgements of the Securities Appellate Tribunal and the Courts. While the bare SEBI Act is only 33 pages, the commentary comes to 576 pages which is a measure of the extent of judicial precedents that have come up around the SEBI Act in the last two decades. (The length is not due to coverage of the regulations that SEBI has framed under the Act. The book hardly covers these regulations and therefore hopefully would not become obsolete too quickly.)

I wish they or others would write a companion volume covering the Securities Contracts (Regulation) Act, the Depositories Act and the relevant sections of the Companies Act that define securities law in India.

My only quibble with the book is that as serving SEBI Officers, they tend to uncritically endorse the official SEBI stance regarding most of the disputed legal issues. This is however a minor matter because it is easy to take this with a pinch of salt, and even while taking sides, the authors do present both sides of the case.

Posted at 5:21 pm IST on Sat, 10 Dec 2011         permanent link

Categories: miscellaneous

Comments

Mobile phones as Achilles heel of internet banking

I am increasingly worried that mobile phones are emerging as the Achilles heel of internet banking.

The most frightening news is the keylogging software installed by the telecom companies on millions of smartphones (hat tip Bruce Schneier). Every keystroke and every received text message is recorded by the Carrier IQ spyware, which logs even what is entered into https web pages that use the secure sockets layer (SSL).

The point is that our mobile is not ours in the same sense that our computer is ours. Our mobile belongs first and foremost to our telecom operator and only secondarily to us. This is true even if the mobile runs an open source operating system – the Carrier IQ spyware runs on Android smartphones. On the other hand, when I use a personal computer on which I have installed (say) Ubuntu Linux and I am careful about what software I install on it, the computer is mine in a very real sense.

Unfortunately, this mobile which is not truly ours is increasingly our passport in the cyberworld. When banks were forced to adopt two factor authentication, they chose the mobile phone as the second authentication tool. Most internet banking transactions today require an additional one time password sent to the registered mobile. This is a problem because nobody else regards the mobile as an important element of a person’s identity.

Consider for example this story from Malaysia (hat tip again to Bruce Schneier). The crooks installed spyware on an online banking kiosk at a bank and retrieved usernames, passwords and even the transaction authorisation code (TAC) which is sent out by the bank to the registered handphones of online banking users. Then, using fake MyKad cards, police reports or authorisation letters from the target customers, the crooks reported the customers’ handphones lost and applied for new SIM cards from the unsuspecting telecommunications companies. The only saving grace is that it took six crooks about nine months to steal about $75,000; the fraud is simply not scalable.

But then there are other methods of scaling this up. Professional call centres are emerging whose business is to extract from individuals the sensitive information needed for bank fraud and identity theft.

Posted at 8:19 pm IST on Tue, 6 Dec 2011         permanent link

Categories: fraud, technology

Comments

European Banks as America's Shadow Banks

Hyun Song Shin delivered the Mundell-Fleming lecture at the IMF Annual Research Conference earlier this month. This very interesting lecture argues that European banks essentially constitute the US shadow banking system.

While there has been much discussion of how the US has been relying on capital flows from Asia, there is little mention of Europe as a financing source. This is because Europe is not a significant source of net capital flow for the US – after all, Europe has a roughly balanced current account, and is not therefore a source of capital. The picture changes when one looks at gross capital flows instead of net capital flows. This is because European banks borrow dollars in the US and lend the dollars back in the US. This too is well known because it was a major source of distortions in the dollar Libor market and in the currency swap market (see for example, my blog post from April 2008 on this issue).

What makes Shin’s paper important is his demonstration that the sheer scale of this gross flow is much bigger than most people imagined. At least, it is an order of magnitude larger than what I thought it was. In fact, he shows that for a brief period in 2007 and early 2008, the total dollar assets of non US (largely European) banks exceeded the total assets of US commercial banks.

As a result, European banks while not being important sources of net capital, were hugely important sources of liquidity and credit transformation in the US financial system. They created liquid and apparently safe assets out of illiquid and risky loans to US borrowers, and they did this mostly through the shadow banking system (securitization and repos). As Shin points out, this is hugely important in the context of the European crisis. The ongoing deleveraging by European banks could be painful for the US financial system even if none of the big European banks fail.

I am tempted to think of the US as a giant CDO (collateralized debt obligation). China (and the rest of Asia) own the super-senior and senior pieces (Treasury and Agency paper) while Europe holds the equity piece. Much of the complacency about the US financial position is based on the idea that Asia cannot find another home for its money and so the super-senior piece will continue to find buyers. When all else fails, the US Federal Reserve has also provided buying support for this piece through its QE (quantitative easing) programmes. However, the real challenge in selling the CDO is in selling the equity piece because this piece has no natural buyer, and the only buyer in town might be delevering itself out of existence.

Posted at 4:18 pm IST on Wed, 23 Nov 2011         permanent link

Categories: banks, derivatives, international finance

Comments

Revisiting CME and LCH handling of Lehman default

A year and a half ago, I had a blog post comparing how CME and LCH.Clearnet coped with the Lehman default. I raised a number of questions and concluded by saying that:

In the context of the ongoing debate about better counterparty risk management (including clearing) of OTC derivatives, I think the regulators should release much more detailed information about what happened. Unfortunately, in the aftermath of the crisis, it is only the courts that have been inclined to release information – regulators and governments like to regard all information as state secrets.

While regulators have still not been too forthcoming, considerable new information has become public since then. Somehow I did not get around to revisiting this issue until I received a comment on my blog post a few days ago from Risk Dude saying:

LCH utilized excess margins from other products to auction the IRS book under margin. So it’s a bad comparison.

This comment appears to be quite correct. The best material that I have read on the subject is the book by Peter Norman entitled The Risk Controllers: Central Counterparty Clearing in Globalised Financial Markets (Wiley, 2011). Chapter 2 of the book deals exclusively with the Lehman bankruptcy, and Norman quotes a personal conversation with the LCH.Clearnet Chief Executive, Liddell, in which Liddell says:

We always thought that a common default fund would be the main benefit from being a multi-asset CCP. ... In fact, the big and far more valuable discovery during Lehman was that the initial margin in each market was completely fungible. ... There were inverse correlations with prices moving one way in some markets, another way in others. As we managed to liquidate some of the portfolios more quickly than others, it meant that the margin that was left after some had been liquidated was available to cover risk somewhere else. That was a massive, massive benefit. ... it meant we had a much bigger cushion all the time. ... We didn’t have the same sort of urgent need to get rid of everything straight away.

Norman also explains that over the same weekend that Lehman failed, the energy futures exchange, ICE, was to move all its positions from LCH.Clearnet to ICE Clear Europe. On Sunday evening around 7 pm, the FSA, LCH.Clearnet and ICE Clear agreed to defer this move. This meant that during the liquidation of the Lehman positions, the ICE positions (and more importantly, the associated margins) were also available to LCH.Clearnet, and this was a big benefit.
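
Liddell’s point about fungibility can be illustrated with a toy calculation. The numbers below are entirely hypothetical and have nothing to do with the actual Lehman close-out; the sketch merely shows why a pooled margin cushion is bigger than one ring-fenced by asset class.

```python
# Stylized illustration (hypothetical numbers) of why fungible initial margin
# across asset classes helps a multi-asset CCP during a default liquidation.
# This is not LCH.Clearnet's actual model or data.

portfolios = {
    # asset class: (initial margin held, close-out loss incurred)
    "interest_rate_swaps": (2_000, 1_500),
    "repos":               (1_200,   400),
    "equities":            (  600,   900),   # loss exceeds this silo's own margin
}

total_margin = sum(m for m, _ in portfolios.values())
total_loss = sum(l for _, l in portfolios.values())

# If margin were ring-fenced per asset class, excess margin in one silo could
# not absorb a shortfall in another.
siloed_shortfall = sum(max(l - m, 0) for m, l in portfolios.values())

# With fungible margin, only the aggregate matters.
pooled_shortfall = max(total_loss - total_margin, 0)

print(f"Total margin {total_margin}, total close-out loss {total_loss}")
print(f"Shortfall if margin is ring-fenced per market: {siloed_shortfall}")
print(f"Shortfall if margin is fungible across markets: {pooled_shortfall}")
```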

Once again, I hope that regulators will disclose more details about what happened during those dark days.

Posted at 2:20 pm IST on Tue, 15 Nov 2011         permanent link

Categories: bankruptcy, derivatives, risk management

Comments

Unifying initial and maintenance margins

The bankruptcy of MF Global prompted CME Clearing to temporarily unify maintenance and initial margins (h/t Kid Dynamite). I would argue for a permanent abolition of the distinction between these two margins.

Initial margin is the margin that market participants must pay when they initiate a position, while the maintenance margin is the level at which the margin must be maintained subsequently. I believe that this distinction is quite silly. The whole purpose of daily mark to market is to ensure that every day begins on a clean slate. There is absolutely no difference between a position initiated yesterday and a position initiated today because yesterday’s losses and gains have already been settled in cash. To pretend otherwise is a delusion. CME Clearing stated in its notice that:

Maintenance margins are set to provide appropriate risk management coverage. Initial margins are set to provide an additional buffer against future losses in the account.

In these troubled times, no one can argue against additional buffers, but the idea of applying buffers selectively to some positions is absurd. The situation becomes even more ludicrous when we consider a large portfolio of positions where new positions do not necessarily correspond to new risks.
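
A toy example makes the point concrete. The sketch below uses hypothetical prices and margin numbers to show that once variation margin has been settled, a position opened yesterday and one opened today carry identical exposure to tomorrow’s price move, so charging them different margins has no risk management rationale.

```python
# Toy illustration (hypothetical contract, prices and margin rates) of why daily
# mark to market puts an old position and a new position on the same footing.

contract_size = 100        # units per contract
initial_margin = 1_000     # margin demanded on a newly opened position
maintenance_margin = 750   # margin demanded on a position opened earlier

settle_yesterday = 50.0    # yesterday's settlement price
settle_today = 48.0        # today's settlement price

# Trader A bought one contract yesterday at 50. The loss down to today's
# settlement has already been paid away as variation margin in cash.
variation_paid_by_A = (settle_yesterday - settle_today) * contract_size
print(f"Variation margin already settled by A: {variation_paid_by_A:.0f}")

# Trader B buys one contract today at the settlement price of 48. From here on,
# both traders gain or lose exactly the same amount for any future price move.
for price_tomorrow in (45.0, 48.0, 52.0):
    pnl_A = (price_tomorrow - settle_today) * contract_size
    pnl_B = (price_tomorrow - settle_today) * contract_size
    assert pnl_A == pnl_B
    print(f"Settlement at {price_tomorrow}: A's P&L {pnl_A:+.0f}, B's P&L {pnl_B:+.0f}")

# Yet A is asked to hold only the maintenance margin while B must post the
# higher initial margin against an identical forward-looking risk.
print(f"Margin demanded of A: {maintenance_margin}, of B: {initial_margin}")
```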

In the days when fund transfer was slow and painful, it made sense for customers to deposit extra margins to avoid the hassle of managing daily cash inflows and outflows. This is no longer relevant with today’s electronic payment systems. In any case, that should be a matter for the individual customer to decide. There is little merit in the clearing corporation micro-managing the cash management policies of its customers.

CME Clearing is absolutely right in saying that “Maintenance margins are set to provide appropriate risk management coverage.” It should stop with risk management and leave cash management to market forces.

I am quite disturbed by the statement from CME Clearing that “This is a short term accommodation to maintain market integrity and provide temporary relief to customers whose accounts have been disrupted by this event.” One expects risk managers at clearing corporations to be ruthless and uncompromising on protecting the clearing corporation from risks. Words like accommodation and relief should not be part of their vocabulary at all. Instead CME Clearing should have said that since it is illogical for customers to pay a higher margin merely because their positions have been transferred from MF Global to another clearing member, they are permanently unifying their margins.

Posted at 7:28 pm IST on Mon, 7 Nov 2011         permanent link

Categories: derivatives, exchanges, risk management

Comments

Pricing of cheques and electronic payments

India now seems to be moving to a system where it costs the originator more to make an electronic payment than to issue a cheque. This is dysfunctional because the cheque imposes costs both on the receiver and on the payment system (the paying bank, the collecting bank and the clearing house). Of course, in a free market, banks are free to levy charges as they deem fit and charges do vary significantly across banks in India. What is interesting is (a) that the resulting market equilibrium is so perverse, and (b) that this perversity is a recent phenomenon.

In the past, many banks were not charging for electronic payments through the National Electronic Fund Transfer (NEFT) system operated by the Reserve Bank of India. Recently, more and more banks seem to be introducing charges at the maximum permitted rate of Rs 5 per outbound electronic transfer for small transactions. By contrast, most banks charge only about Rs 2 per cheque leaf, and the first 20-30 cheques per year are typically free.

This means that it may now be advantageous for many retail consumers to issue cheques instead of using NEFT for inter-bank transfers within the same city. This privately optimal decision does however have a huge negative externality. The cost of processing this cheque might be about Rs 50 each for the paying bank and the collecting bank, and there are costs elsewhere in the system (for the payee who has to deposit the cheque and for the clearing house which has to process it).
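
The back-of-the-envelope arithmetic, using the rough numbers above (with an assumed processing cost for NEFT, since I do not have a firm figure), shows how far the private incentive diverges from the resource cost:

```python
# Back-of-the-envelope comparison of private and social costs of the two payment
# modes, using the rough numbers cited above; the NEFT processing cost is an
# assumption made purely for illustration.

neft_fee = 5            # Rs charged to the payer per outbound NEFT transfer (small value)
cheque_leaf_fee = 2     # Rs charged to the payer per cheque leaf
bank_processing = 50    # rough Rs resource cost to EACH of the paying and collecting banks per cheque
neft_resource_cost = 5  # assumed rough resource cost of processing one electronic transfer

private_cost = {"cheque": cheque_leaf_fee, "NEFT": neft_fee}
resource_cost = {
    # the payee's and the clearing house's costs are ignored, so this is a lower bound
    "cheque": 2 * bank_processing,
    "NEFT": neft_resource_cost,
}

for mode in ("cheque", "NEFT"):
    print(f"{mode}: payer pays Rs {private_cost[mode]}, "
          f"system resource cost at least Rs {resource_cost[mode]}")
```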

What I would like to know is whether this pricing decision is optimal for the individual banks. Perverse as the end result is, the pricing can be rational for individual banks if customers sophisticated enough to use NEFT are assumed to be relatively price insensitive. In that case, the customers continue to use NEFT and the bank simply pockets another source of revenue. The alternative hypothesis is that the costing and transfer pricing system in many banks is badly broken. In that case, the cost of processing paper cheques is not fully reflected in the pricing decision, while the cost of the electronic transfer is a transparent out of pocket cost (the fee charged by NEFT). A naive cost-plus-pricing system does the rest of the mischief. It would be interesting to figure out which hypothesis is closer to the truth.

In either case, the problem can be solved by a Pigovian tax on cheques.

Posted at 8:29 pm IST on Sun, 6 Nov 2011         permanent link

Categories: technology

Comments

Does it make sense to hedge DVA?

This blog post is not about whether CVA/DVA accounting makes sense or not; it is only about whether it makes sense to hedge the DVA. Modern accounting standards require derivatives and many other financial assets and liabilities to be stated at fair value. Fair value must take into account all characteristics of the instrument including the risk of non performance (default). CVA and DVA arise out of this fair value accounting.

CVA or Credit Value Adjustment accounts for the potential loss that the reporting entity would incur to replace the existing derivative contract in the event of the counterparty’s default (less any recovery received from the defaulting counterparty). It obviously depends on the probability of the counterparty defaulting and on the recovery in the event of default. More importantly, it also depends on the expected positive value of the derivative at the point of default – if the entity owes money to the counterparty (instead of the other way around), the counterparty’s default does not cause any loss.

DVA or Debit Value Adjustment is the other side of the same coin. It accounts for the possibility that the reporting entity itself could default. One could think of it as the CVA that the entity’s counterparty would need to make to account for the default of the reporting entity. It accounts for the potential loss that the counterparty would incur to replace the existing derivative contract in the event of a default by the reporting entity (less any recovery received from the reporting entity). It can also be thought of as the notional gain to the reporting entity from not paying off its liability in full. The DVA depends on the probability of the reporting entity defaulting, the recovery in the event of default, and the expected negative value of the derivative at the point of default.
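
These definitions translate into a standard textbook-style approximation: CVA discounts the expected positive exposure weighted by the counterparty’s marginal default probabilities, while DVA does the same with the expected negative exposure and the reporting entity’s own default probabilities. The sketch below uses made-up exposure profiles and flat hazard rates purely for illustration; it is not any firm’s actual methodology.

```python
# Stylized CVA/DVA approximation with made-up inputs; not any firm's actual model.
#   CVA ~ (1 - R_cpty) * sum_i EPE(t_i) * PD_cpty(t_{i-1}, t_i) * DF(t_i)
#   DVA ~ (1 - R_own)  * sum_i ENE(t_i) * PD_own(t_{i-1}, t_i)  * DF(t_i)

import math

tenors = [1.0, 2.0, 3.0, 4.0, 5.0]   # years
epe = [4.0, 6.0, 7.0, 5.0, 3.0]      # expected positive exposure (entity's claim), in millions
ene = [3.0, 5.0, 6.0, 4.0, 2.0]      # expected negative exposure (entity's obligation), in millions
r = 0.03                             # flat discount rate

def marginal_pd(hazard, t0, t1):
    """Probability of default between t0 and t1 under a flat hazard rate."""
    return math.exp(-hazard * t0) - math.exp(-hazard * t1)

def adjustment(exposure, hazard, recovery):
    total, prev_t = 0.0, 0.0
    for t, e in zip(tenors, exposure):
        total += (1 - recovery) * e * marginal_pd(hazard, prev_t, t) * math.exp(-r * t)
        prev_t = t
    return total

cva = adjustment(epe, hazard=0.02, recovery=0.4)   # counterparty hazard 2%, recovery 40%
dva = adjustment(ene, hazard=0.03, recovery=0.4)   # own hazard 3%, recovery 40%
print(f"CVA of about {cva:.3f} million, DVA of about {dva:.3f} million")
print(f"Bilateral adjustment (CVA - DVA) of about {cva - dva:.3f} million")
```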

DVA can also be applied to any liabilities of the reporting entity that are accounted at fair value and not merely to derivatives, but the logic is the same.

The application of CVA and DVA in valuing assets and liabilities on the balance sheet is perhaps the only logical way to apply fair value accounting when non performance risk is material. But the accounting standard setters took another fateful and controversial decision when they mandated that changes in CVA and DVA be included in the income statement instead of letting them go straight to the balance sheet as a part of Other Comprehensive Income. As I said at the beginning, this blog post is not about the merits of this accounting treatment; I mention the accounting rules only because these rules create the motivation for the management of financial firms to try and hedge the CVA and DVA.

Hedging the CVA is relatively less problematic as it only increases the resilience of the firm under conditions of systemic financial stress. Counterparty defaults are somewhat less threatening to the solvency of the entity when there are hedges in place even if there could be some doubts about whether the hedges themselves would pay off when the financial world is collapsing. What I find difficult to understand is the hedging of the DVA.

The DVA itself is a form of natural hedge in that it produces profits in bad times. It is when things are going wrong and the world is worried about the solvency of the reporting entity that the DVA changes produce profits. One could argue that the profits are notional, but there is no question that the profits arise at the point in time when they are most useful. Hedging the DVA would imply that during these bad times, the (possibly notional) DVA profits would be offset by real cash losses on the hedges. A position that produces losses in bad times is not a good idea. Such positions have to be tolerated when they are intrinsic to the business model of the entity. What baffles me is why anybody would willingly create such wrong way risks purely to hedge an accounting adjustment.

The Modigliani Miller argument in capital structure theory (home made leverage) can be extended to hedging decisions (home made hedging) to say that hedging is irrelevant except when it solves a capital market imperfection. Bankruptcy costs are a major capital market imperfection that can make it advantageous to undertake hedging activities that reduce the chance of bankruptcy. In this framework, the only hedges that make sense are the ones that hedge large solvency threatening risks. The DVA hedge is the exact opposite. It produces large cash losses precisely at the point of maximum distress. For example, this Wall Street Journal story says that Goldman Sachs implements a DVA hedge by selling credit default swaps on a range of financial firms. The trouble with this is that these hedges will produce large cash losses when many other financial firms are all in trouble, and this is likely to coincide with troubles at Goldman Sachs itself. Far from mitigating bankruptcy risks, the hedges would exacerbate them.

The only way this makes sense is if investment banks think that losses during systemic crises can be pushed on to the taxpayer. If this assumption is correct, then DVA hedges work wonderfully to socialize losses and privatize gains!

Posted at 9:55 pm IST on Fri, 28 Oct 2011         permanent link

Categories: derivatives

Comments

St. Petersburg, Menger and slippery infinities

In the twentieth century, St. Petersburg became Petrograd, then Leningrad and finally went back to being St. Petersburg. The St. Petersburg paradox named after this city also seems to have been running around in circles during the last three centuries. The latest round in this long standing paradox has been initiated by a mathematics professor who is coincidentally named Peters. Way back in 1934, Menger proved that a generalized version of the St. Petersburg paradox invalidates all unbounded utility functions. Prominent economists like Arrow and Samuelson have accepted this conclusion. In a paper entitled Menger 1934 revisited, Peters argues that Menger made an error that has remained undiscovered during the last 77 years.

The original version of the St. Petersburg game involved a fair coin being tossed until the first time a head appears. If this happens at the n-th toss of the coin, the payoff of the game is 2^(n−1). The probability of this event is 2^(−n), and therefore this event contributes 2^(n−1) · 2^(−n) = 1/2 to the expected payoff of the game. Summing over all n yields 1/2 + 1/2 + ... and the expected payoff from the game is therefore infinite.

Bernoulli’s 1738 paper, from which the paradox obtained its name, argued that nobody would pay an infinite price for the privilege of playing this game. He proposed that instead of expected monetary value, one must use expected utility. If the utility of wealth is logarithmic, then the expected utility from playing the game is not only finite, but is also quite small.
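
A quick check of Bernoulli’s claim: applying log utility to the payoff alone, the expected utility is the sum over n of 2^(−n) · (n−1) · ln 2 = ln 2, so the certainty equivalent of the game is just 2 ducats. A minimal numerical sketch (my own illustration):

```python
# Minimal numerical check of Bernoulli's resolution of the St. Petersburg game.
# Payoff is 2^(n-1) if the first head appears on toss n (probability 2^(-n)).
import math

N = 200  # truncation; the tail contributes negligibly to the utility sum

expected_payoff = sum(2.0 ** -n * 2.0 ** (n - 1) for n in range(1, N + 1))       # = N/2, diverges with N
expected_log_utility = sum(2.0 ** -n * (n - 1) * math.log(2) for n in range(1, N + 1))
certainty_equivalent = math.exp(expected_log_utility)

print(f"Truncated expected payoff after {N} terms: {expected_payoff:.1f} (grows without bound)")
print(f"Expected log utility: {expected_log_utility:.6f}  (= ln 2 = {math.log(2):.6f})")
print(f"Certainty equivalent under log utility: {certainty_equivalent:.4f} ducats")
```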

Menger’s contribution was to consider a Super St. Petersburg game in which the payoff was not 2^(n−1) but exp(2^(n−1)). Essentially, taking logarithms of this payoff to compute utility yields something similar to the payoff of the original St. Petersburg game, and the offending infinity reappears. Menger’s solution to this generalized paradox was to require that utility functions must be bounded. In this case, there is no monetary payoff that yields very high utilities like 2^(n−1) for sufficiently large n.

Peters argues that there is an error in Menger’s argument. The logarithmic function diverges at both ends: for large x, ln(x) goes to infinity, but for small x (approaching zero), ln(x) goes to minus infinity. Suppose a player pays a large price (close to his current wealth) for playing the Super St. Petersburg game. Now if heads comes quickly, the player’s wealth will be nearly zero and the utility would approach minus infinity. The crux of Peters’ paper is the assertion: “Menger’s game produces a case of competing infinities. ... the diverging expectation value of the utility change resulting from the payout is dominated by the negatively diverging utility change from the purchase of the ticket.” Therefore, the ticket price that a person would pay for being allowed to play this game is finite.

I agree with Peters that even for the Super St. Petersburg game, a person would pay only a finite ticket price if the utility function is logarithmic or is of any other type that has a subsistence threshold below which there is infinite disutility. It appears to me, however, that a slight reformulation reintroduces the paradox. If we do not ask what ticket price a person would pay, but what sure reward a person would forgo in order to play this game, the infinite disutility of the ticket price is kept out of the picture, and the infinite utility of the payoff remains. In other words, the certainty equivalent of the Super St. Petersburg game is infinite. Peters is right that a person with logarithmic utility would not pay a trillion dollars to play the game, but Menger is right that such a person would prefer playing the Super St. Petersburg game to receiving a sure reward of a trillion dollars. Peters’ contribution is to make us recognize that these are two very different questions when there is a “competing infinity” at the other end to contend with. But Menger is right that if you really want to exorcise this paradox, you must rule out the diverging positive infinity by insisting that utility functions should be bounded.

Peters also makes a very different argument by bringing the time dimension into play. He argues that the way to deal with the paradox is to use the Kelly criterion which brings us back to logarithmic functions. Peters relates this to the distinction between time averages and ensemble averages in physics. I think this argument goes nowhere. We can collapse the time dimension completely by changing the probability mechanism from repeated coin tossing to the choice of a single random number between zero and one. The first head in the coin toss can be replaced by the first one in the binary representation of the random number from the unit interval. Choosing one random number is a single event and there is no time to average over. The coin tossing mechanism is a red herring because it is only one way to generate the required sample space.
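
The equivalence is easy to check: the position of the first 1 in the binary expansion of a single uniform draw from the unit interval has exactly the same geometric(½) distribution as the toss number of the first head. A small simulation sketch (my own illustration of the point, not Peters’ construction):

```python
# Check that a single uniform draw reproduces the St. Petersburg probability
# mechanism: the position of the first 1-bit in U ~ Uniform(0,1) has the same
# geometric(1/2) distribution as the toss number of the first head.
import math
import random
from collections import Counter

random.seed(42)
trials = 200_000
counts = Counter()

for _ in range(trials):
    u = 1.0 - random.random()                # one draw from (0, 1], avoids log(0)
    n = math.floor(-math.log2(u)) + 1        # index of the first 1 in u's binary expansion
    counts[n] += 1

for n in range(1, 7):
    print(f"P(first 1-bit at position {n}): simulated {counts[n] / trials:.4f}, "
          f"theoretical {2 ** -n:.4f}")
```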

Of course, there are other solutions to the paradox. You can throw utility functions into the trash can and embrace prospect theory. You can correct for counterparty risk (Credit Value Adjustment or CVA in modern Wall Street jargon). You can argue that such games do not and cannot exist in a market, and financial economics need not price non existent instruments.

I am quite confident that three hundred years from today, people will still be debating the St. Petersburg paradox and gaining new insights from this simple game.

Posted at 2:27 pm IST on Sun, 16 Oct 2011         permanent link

Categories: behavioural finance, mathematics, statistics

Comments

Is there a two tier inter bank market in India?

Update October 13, 2011:

After I posted this yesterday, the RBI published the results of yesterday’s Reverse Repo auction, which showed that no money was parked with the RBI. Possibly, the top tier banks are also now cash deficit in the aggregate, and they do not have any surplus to deposit with the RBI. Or perhaps, the two tier market is de-tiering. I do not know.

Original post (October 12, 2011):

In a well functioning inter bank market, cash surplus banks lend to cash deficit banks and only the aggregate cash surplus or deficit of the banking system is absorbed by the central bank’s liquidity operations (repo or reverse repo). In a two tier market, there is a top tier of healthy banks that lend to and borrow from each other, but this tier refuses to lend to the second tier of banks whose financial health is suspect. In such a market, if the top tier banks in the aggregate have a cash surplus, they would not lend it to the second tier banks, and would instead park the surplus with the central bank. If the second tier banks have a cash deficit, they would be borrowing from the central bank because they are unable to borrow from anybody else. The central bank would thus be partially supplanting the inter bank market. A two tier market is of course better than a complete seizure in which there is no inter bank market at all and all cash surpluses are parked with the central bank, which on-lends them to the deficit banks. After 2008, this progression from a normal inter bank market to a non existent one is well known and understood.

What I am worried about is whether there is a two tier inter bank market in India today. Since the end of last month, we have been seeing the odd situation of some banks parking cash with the RBI at 7.25% while other banks are borrowing from the RBI at 8.25%. If there is no tiering of the banking system, this does not make sense. The surplus bank could lend to the deficit bank at 7.75% and both banks would be better off. The surplus bank would earn ½% more than what the RBI pays, while the deficit bank would reduce its borrowing cost by ½%. That this is not happening suggests that the surplus bank does not have confidence in the solvency of the deficit banks and prefers a safe deposit with RBI. Put differently, there are some banks who are able to borrow only from the central bank as other banks are unwilling to lend to them.
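
The corridor arithmetic is trivial but worth writing down: any bilateral rate strictly between the reverse repo and repo rates leaves both banks better off than dealing only with the RBI, so the failure to trade at such a rate is informative.

```python
# Corridor arithmetic using the rates quoted above: a bilateral loan at any rate
# strictly inside the corridor beats dealing only with the RBI for both sides.
reverse_repo = 7.25   # % paid by the RBI on surplus parked with it
repo = 8.25           # % charged by the RBI on borrowing from it
bilateral = 7.75      # an illustrative mid-corridor rate

surplus_bank_gain = bilateral - reverse_repo   # extra yield over parking with the RBI
deficit_bank_gain = repo - bilateral           # funding cost saved versus borrowing from the RBI
print(f"Surplus bank earns {surplus_bank_gain:.2f}% more; "
      f"deficit bank saves {deficit_bank_gain:.2f}%")
```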

When I started observing this phenomenon at the end of September, my first reaction was that it was due to the distortions caused by the half yearly closing on September 30. When it lasted beyond that, I thought that this was just the effect of the holiday season (Durga Puja and Dussehra). But all that is now over and still the phenomenon persists. Are some bankers worried about the solvency of their fellow bankers?

Posted at 10:41 am IST on Thu, 13 Oct 2011         permanent link

Categories: banks, bond markets

Comments

Basel III: The German (or rather Sinn) Finish

I have blogged about the Swiss Finish and the British Finish that add (or threaten to add) large layers of capital requirements for banks on top of the Basel III minimum. Now, one of Germany’s most influential economists, Hans-Werner Sinn, has come out with proposals that are equally far reaching. My impression is that the German political establishment has been opposed to higher capital requirements, but this could change if the peripheral sovereign crisis necessitates a large bail out of German banks. So Sinn’s proposals are interesting:

After the Basel III system for bank regulation, a Basel IV system is needed in which the risk weights for sovereign debt are to be raised from zero to the level for mid-sized companies.

Common equity (core capital plus balance-sheet ratio) is to be increased by 50% with respect to Basel III.

Sinn does not elaborate on these points which come at the fag end of a long list of (highly controversial) recommendations on how to rescue the euro. There is therefore some ambiguity about what exactly he means. Basel III demands common equity of 4.5% plus a capital conservation buffer of 2.5% plus an extra capital requirement of up to 2.5% for Global Systemically Important Banks (G-SIBs) plus a counter cyclical buffer of up to 2.5%. This leaves us with a range of 7% to 12%. If we take the mid point of 9.5% (for example, a big G-SIB at a point in the business cycle where the counter cyclical buffer is zero) and apply a 50% increase to this, we end up at 14.25%. Since Basel III also requires non equity capital of 3.5%, the total capital requirement would be 17.75%. This is a little below the Swiss and British finish in the aggregate, but it has more of higher quality (equity) capital.

Sinn’s proposal for increasing risk weights is also effectively an increase in bank capital requirements. I am quite in agreement with the idea that we should not distinguish between sovereign exposures and corporate exposures when it comes to risk weights. Other classes of assets with low risk weights (for example, exposures to central counter parties) also need to be revisited. Sinn’s proposal attacks the risk weight problem in another way by applying a 50% increase to the balance sheet leverage ratio which essentially measures the ratio of capital to unweighted assets. Basel III requires a minimum leverage ratio of 3% (assets can be 33 times capital); if this ratio is pushed up to 4.5%, assets will be limited to 22 times capital. For the leverage ratio, Basel III uses tier one capital; it is not clear whether Sinn wants this to be entirely in the form of equity capital.
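
The arithmetic in the two preceding paragraphs can be reproduced in a few lines; the numbers are exactly those quoted above, subject to the interpretational ambiguity already noted.

```python
# Reproducing the back-of-the-envelope arithmetic in the two paragraphs above.

# Basel III common equity stack (as percentages of risk weighted assets)
minimum_cet1 = 4.5
conservation_buffer = 2.5
gsib_surcharge_max = 2.5
countercyclical_max = 2.5

low = minimum_cet1 + conservation_buffer                   # 7.0%
high = low + gsib_surcharge_max + countercyclical_max      # 12.0%
midpoint = (low + high) / 2                                # 9.5%

sinn_equity = midpoint * 1.5                               # 50% increase -> 14.25%
non_equity_capital = 3.5
sinn_total = sinn_equity + non_equity_capital              # 17.75%
print(f"Basel III equity range {low}%-{high}%, midpoint {midpoint}%")
print(f"Sinn finish: equity {sinn_equity}%, total capital {sinn_total}%")

# Leverage ratio: a 50% increase shrinks the permissible balance sheet multiple.
for ratio in (3.0, 4.5):
    print(f"Leverage ratio {ratio}% -> assets up to {100 / ratio:.0f} times capital")
```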

Basel III was in some ways a victory for the big global banks (though they are still trying to water it down to whatever extent they can), but it appears to me that the real battle lies beyond Basel III. And perhaps, the banks are gradually losing this battle. So many different groups of people coming at it from different perspectives are ending up with very similar banker-unfriendly numbers on minimum bank capital.

Posted at 2:17 pm IST on Wed, 5 Oct 2011         permanent link

Categories: banks, leverage, regulation

Comments

Siemens and the ECB

There have been a number of press reports about the German engineering giant Siemens parking € 4-6 billion of cash with the European Central Bank (ECB) in the form of one week deposits instead of leaving it with commercial banks (see, for example, here, here and here). Much of the commentary has emphasized the flight to safety motive for this move, but the reports also point out that the ECB pays a slightly higher interest rate on one week deposits than what the banks offer on longer term deposits. Assuming some tolerance for interest rate risk (the ECB rate could fall in future!), the move could also be justified purely as a pursuit of returns.

I would however like to ask the ultimate tail risk question (to which I have no answers) – is it reasonable to assume that there is no risk in depositing money with the ECB? The best analysis of the ECB’s solvency that I have seen is a piece by Willem Buiter written a couple of years ago (at his maverecon blog at the Financial Times, several months before he joined Citigroup as its Chief Economist). Much of the data in this is a little dated, but the analysis is illuminating all the same.

In his piece (entitled “Does the ECB/Eurosystem have enough capital?”), Buiter pointed out that the ECB has a leverage of 70:1 and even the consolidated balance sheet of the ECB and the Eurozone national central banks showed a leverage of 25:1. Buiter also noted that the asset side of the ECB balance sheet “includes a lot of rubbish”. And that was before it had started buying peripheral sovereign debt in a big way. Yet, Buiter concluded that all this “would not endanger the solvency of the Eurosystem, which has the present discounted value of current and future seigniorage income (the interest earned (or saved) by being able to borrow at a zero rate of interest through the issuance of currency and through mandatory reserve requirements).” Those who want a more elaborate theoretical treatment of seigniorage would find it useful to read his 2007 academic paper on the subject.

Buiter estimated that the capitalized value of the current and future stream of seigniorage would be 20 percent of Euro Area annual GDP. This capitalized value of seigniorage was, according to Buiter, sufficient to mark all the assets on the ECB’s entire balance sheet all the way down to zero and still leave it economically solvent.
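
For intuition only, capitalized seigniorage can be sketched as a growing perpetuity of the interest saved on the non-interest-bearing monetary base. The parameters below are invented and chosen merely to land in the same ballpark as Buiter’s 20 percent figure; this is emphatically not Buiter’s calculation, which is far more careful.

```python
# Stylized growing-perpetuity sketch of capitalized seigniorage (invented
# parameters; NOT Buiter's actual calculation).
monetary_base_to_gdp = 0.10   # non-interest-bearing base money as a share of GDP (assumed)
nominal_rate = 0.04           # nominal interest saved on that base (assumed)
discount_rate = 0.05          # discount rate for future seigniorage (assumed)
nominal_growth = 0.03         # growth of the base in line with nominal GDP (assumed)

annual_seigniorage = monetary_base_to_gdp * nominal_rate              # share of GDP per year
capitalized = annual_seigniorage / (discount_rate - nominal_growth)   # Gordon-style perpetuity

print(f"Annual seigniorage of roughly {annual_seigniorage:.1%} of GDP")
print(f"Capitalized value of roughly {capitalized:.0%} of GDP")
```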

One cannot ask for a more conclusive affirmation of solvency than this. But my question was about tail risk, and a tail risk scenario would include a possible euro zone break up. In that scenario, would the seigniorage income flow to the ECB or to the national central banks? And would those national central banks stand behind the ECB?

Posted at 6:17 pm IST on Wed, 21 Sep 2011         permanent link

Categories: banks, risk management

Comments