Wednesday, April 26, 2006

A Brief History of Derivatives

The history of derivatives is quite colorful and surprisingly much longer than most people think. A few years ago I compiled a list of the events that I thought shaped the history of derivatives. That list is published in its entirety in the Winter 1995 issue of Derivatives Quarterly. What follows here is a snapshot of the major events that I think form the evolution of derivatives.

I would like to first note that some of these stories are controversial. Do they really involve derivatives? Or do the minds of people like myself and others see derivatives everywhere?

To start we need to go back to the Bible. In Genesis Chapter 29, believed to be about the year 1700 B.C., Jacob purchased an option costing him seven years of labor that granted him the right to marry Laban's daughter Rachel. His prospective father-in-law, however, reneged, perhaps making this not only the first derivative but the first default on a derivative. Laban required Jacob to marry his older daughter Leah. Jacob married Leah, but because he preferred Rachel, he purchased another option, requiring seven more years of labor, and finally married Rachel, bigamy being allowed in those days. Jacob ended up with two wives, twelve sons, who became the patriarchs of the twelve tribes of Israel, and a lot of domestic friction, which is not surprising. Some argue that Jacob really had forward contracts, which obligated him to the marriages but that does not matter. Jacob did derivatives, one way or the other. Around 580 B.C., Thales the Milesian purchased options on olive presses and made a fortune off of a bumper crop in olives. So derivatives were around before the time of Christ.

The first exchange for trading derivatives appears to have been the Royal Exchange in London, which permitted forward contracting. The celebrated Dutch tulip bulb mania, which you can read about in Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay, published in 1841 but still in print, was characterized by forward contracting on tulip bulbs around 1637. The first "futures" contracts are generally traced to the Yodoya rice market in Osaka, Japan around 1650. These were evidently standardized contracts, which made them much like today's futures, although it is not known if the contracts were marked to market daily and/or had credit guarantees.

Probably the next major event, and the most significant in the history of U.S. futures markets, was the creation of the Chicago Board of Trade in 1848. Due to its prime location on Lake Michigan, Chicago was developing as a major center for the storage, sale, and distribution of Midwestern grain. Due to the seasonality of grain, however, Chicago's storage facilities were unable to accommodate the enormous increase in supply that occurred following the harvest. Similarly, its facilities were underutilized in the spring. Chicago spot prices rose and fell drastically. A group of grain traders created the "to-arrive" contract, which permitted farmers to lock in the price and deliver the grain later. This allowed the farmer to store the grain either on the farm or at a storage facility nearby and deliver it to Chicago months later. These to-arrive contracts proved useful as a device for hedging and speculating on price changes. Farmers and traders soon realized that the sale and delivery of the grain itself was not nearly as important as the ability to transfer the price risk associated with the grain. The grain could always be sold and delivered anywhere else at any time. These contracts were eventually standardized around 1865, and in 1925 the first futures clearinghouse was formed. From that point on, futures contracts were pretty much of the form we know today.

In the mid-1800s, famed New York financier Russell Sage began creating synthetic loans using the principle of put-call parity. Sage would buy the stock and a put from his customer and sell the customer a call. By setting the put and call premiums and the strike price appropriately, Sage was creating a synthetic loan with an interest rate significantly higher than usury laws allowed.
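
To see how this works, here is a small sketch with hypothetical prices (not Sage's actual terms): buying the stock and a put while selling a call at the same strike locks in a payoff equal to the strike at expiry, so the net amount advanced today behaves like a loan whose implied rate need not appear anywhere as a stated interest rate.

```python
# Illustrative sketch of a put-call-parity "synthetic loan" (hypothetical prices).
# The lender buys the stock (S) and a put (P) and sells a call (C), all struck at K.
# At expiry the position is worth exactly K regardless of the stock price, so the
# lender has effectively advanced (S + P - C) today and is repaid K at expiry.

S, P, C, K, T = 100.0, 2.0, 12.0, 100.0, 1.0   # stock, put, call, strike, years

amount_advanced = S + P - C                     # 90.0 paid out today
implied_rate = (K / amount_advanced) ** (1 / T) - 1

print(f"Advanced today: {amount_advanced:.2f}")
print(f"Repaid at expiry: {K:.2f}")
print(f"Implied annual rate: {implied_rate:.2%}")  # ~11.1%, with no stated interest rate anywhere
```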

One of the first examples of financial engineering was by none other than the beleaguered government of the Confederate States of America, which issued a dual-currency optionable bond. This permitted the Confederate States to borrow money in sterling with an option to pay back in French francs. The holder of the bond had the option to convert the claim into cotton, the South's primary cash crop.

Interestingly, futures/options/derivatives trading was banned numerous times in Europe and Japan and even in the United States in the state of Illinois in 1867 though the law was quickly repealed. In 1874 the Chicago Mercantile Exchange's predecessor, the Chicago Produce Exchange, was formed. It became the modern day Merc in 1919. Other exchanges had been popping up around the country and continued to do so.

The early twentieth century was a dark period for derivatives trading as bucket shops were rampant. Bucket shops were small operators in options and securities that typically lured customers into transactions and then fled with the money, setting up shop elsewhere.

In 1922 the federal government made its first effort to regulate the futures market with the Grain Futures Act. In 1936 options on futures were banned in the United States. All the while options, futures and various derivatives continued to be banned from time to time in other countries.

The 1950s marked the era of two significant events in the futures markets. In 1955 the Supreme Court ruled in the case of Corn Products Refining Company that profits from hedging are treated as ordinary income. This ruling stood until it was challenged by the 1988 ruling in the Arkansas Best case. The Best decision denied the deductibility of capital losses against ordinary income and effectively gave hedging a tax disadvantage. Fortunately, this interpretation was overturned in 1993.

Another significant event of the 1950s was the ban on onion futures. Onion futures do not seem particularly important, though that is probably because they were banned, and we do not hear much about them. But the significance is that a group of Michigan onion farmers, reportedly enlisting the aid of their congressman, a young Gerald Ford, succeeded in banning a specific commodity from futures trading. To this day, the law in effect says, "you can create futures contracts on anything but onions."

In 1972 the Chicago Mercantile Exchange, responding to the now-freely floating international currencies, created the International Monetary Market, which allowed trading in currency futures. These were the first futures contracts that were not on physical commodities. In 1975 the Chicago Board of Trade created the first interest rate futures contract, one based on Ginnie Mae (GNMA) mortgages. While the contract met with initial success, it eventually died. The CBOT resuscitated it several times, changing its structure, but it never became viable. In 1975 the Merc responded with the Treasury bill futures contract. This contract was the first successful pure interest rate futures contract. It was held up as an example, either good or bad depending on your perspective, of the enormous leverage in futures. For only about $1,000, and now less than that, you controlled $1 million of T-bills. In 1977, the CBOT created the T-bond futures contract, which went on to be the highest-volume contract. In 1982 the CME created the Eurodollar contract, which has now surpassed the T-bond contract to become the most actively traded of all futures contracts. In 1982, the Kansas City Board of Trade launched the first stock index futures, a contract on the Value Line Index. The Chicago Mercantile Exchange quickly followed with its highly successful contract on the S&P 500 index.

1973 marked both the creation of the Chicago Board Options Exchange and the publication of perhaps the most famous formula in finance, the option pricing model of Fischer Black and Myron Scholes. These events revolutionized the investment world in ways no one could imagine at that time. The Black-Scholes model, as it came to be known, set up a mathematical framework that formed the basis for an explosive revolution in the use of derivatives. In 1983, the Chicago Board Options Exchange decided to create an option on an index of stocks. Though originally known as the CBOE 100 Index, it was soon turned over to Standard and Poor's and became known as the S&P 100, which remains the most actively traded exchange-listed option.

The 1980s marked the beginning of the era of swaps and other over-the-counter derivatives. Although over-the-counter options and forwards had previously existed, the generation of corporate financial managers of that decade was the first to come out of business schools with exposure to derivatives. Soon virtually every large corporation, and even some that were not so large, were using derivatives to hedge, and in some cases, speculate on interest rate, exchange rate and commodity risk. New products were rapidly created to hedge the now-recognized wide varieties of risks. As the problems became more complex, Wall Street turned increasingly to the talents of mathematicians and physicists, offering them new and quite different career paths and unheard-of money. The instruments became more complex and were sometimes even referred to as "exotic."

In 1994 the derivatives world was hit with a series of large losses on derivatives trading announced by some well-known and highly experienced firms, such as Procter and Gamble and Metallgesellschaft. One of America's wealthiest localities, Orange County, California, declared bankruptcy, allegedly due to derivatives trading, but more accurately, due to the use of leverage in a portfolio of short-term Treasury securities. England's venerable Barings Bank declared bankruptcy due to speculative trading in futures contracts by a 28-year-old clerk in its Singapore office. These and other large losses led to a huge outcry, sometimes against the instruments and sometimes against the firms that sold them. While some minor changes occurred in the way in which derivatives were sold, most firms simply instituted tighter controls and continued to use derivatives.

These stories hit the high points in the history of derivatives. Even my aforementioned "Chronology" cannot do full justice to its long and colorful history. The future promises to bring new and exciting developments.

Don Chance is a professor of finance at Louisiana State University. He can be reached at dchance@fenews.com.

For More Reading

Black, Fischer, and Myron Scholes. "The Pricing of Options and Corporate Liabilities." The Journal of Political Economy 81 (1973), 637-654.

Chance, Don M. "A Chronology of Derivatives." Derivatives Quarterly 2 (Winter, 1995), 53-60.

Mackay, Charles. Extraordinary Popular Delusions and the Madness of Crowds. New York: Harmony Books (1841; current version 1980).

This column is excerpted from “Essays in Derivatives” by Don Chance (John Wiley & Sons, 1998) under an agreement with the publisher by Financial Engineering News.

Back to Basics: Which Duration is Best?

Teri Geske
Senior Vice President, Product Development

Note: This Back-to-Basics column on Duration was first published in 1997. Based on a number of recent inquiries on this subject, we are republishing the article, which we've revised and updated for this issue.

Fixed income professionals have come to rely on Duration as the primary measure of interest rate risk for individual securities and portfolios. Yet this widely accepted measure is still subject to misinterpretation and misuse, partly because there are various forms of Duration one might encounter (some of them being far more informative than others). In this Back-to-Basics article, we explain the differences among these duration measures and the implications of relying on the wrong one when evaluating a bond or managing a portfolio's exposure to interest rate risk. We also discuss whether or not Duration can be interpreted as a measure of time, and how Duration relates to Average Life.

First, we review three types of Duration that may be calculated for a bond and/or for a portfolio [1], namely Macaulay's (also known as Modified Duration), Effective Duration (also known as Option-Adjusted Duration), and Duration-to-Worst. These are defined as follows [2], with a small numeric sketch after the list:

-Macaulay’s (Modified) Duration – the approximate percentage change in a bond’s price given a 1% change in its yield-to-maturity . The Macaulay’s duration formula is based on a pre-determined set of principal and interest cash flows computed to the bond’s final maturity date and does not recognize that those cash flows could be affected by changes in interest rates, including the exercise of one or more embedded options (calls, puts, optional prepayments, floating rate coupons, including any reset caps or floors, etc.).

- Duration-to-Worst – the approximate percentage change in a bond's price given a 1% change in its yield-to-maturity or its yield-to-call, whichever is lower. Duration-to-Worst is the same as Macaulay's duration except the pre-determined set of principal and interest cash flows are based on either the final maturity date, or a call date within the bond's call schedule, whichever would result in the lowest yield to the investor – i.e., the Yield-to-Worst. (Note that for puttable bonds, one would use a "duration-to-best" computed from cash flows to the maturity date or to the put date, whichever results in the highest yield to the investor.)

- Effective Duration – the average percentage change in a bond's price, based on upward and downward parallel shifts in the underlying term structure of interest rates (typically the Treasury spot curve). By determining what the bond's price would be, given higher/lower interest rate environments, the effective duration measure reflects the increasing or decreasing likelihood of any option exercise, including calls, puts, changes in prepayment speeds for mortgage-backed securities, and the higher probability of encountering any rate caps/floors for securities with adjustable coupons.
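
To make the definitions concrete, here is a minimal numeric sketch, assuming a flat yield curve, annual coupons, and an option-free bond (the pricing function, coupon and yield are hypothetical, not BondEdge output). For bonds with embedded options, the bumped prices used for effective duration would come from an option model, which is exactly where the measures start to disagree.

```python
# Minimal sketch: modified vs. effective duration for an option-free bond on a
# flat curve (hypothetical 7.5% 10-year bond at a 6.5% yield; annual coupons).

def price(coupon, maturity, y, face=100.0):
    """Present value of fixed annual cash flows discounted at yield y."""
    cash_flows = [coupon * face] * (maturity - 1) + [coupon * face + face]
    return sum(cf / (1 + y) ** t for t, cf in enumerate(cash_flows, start=1))

coupon, maturity, y = 0.075, 10, 0.065
p0 = price(coupon, maturity, y)

# Macaulay's duration: PV-weighted average time of the fixed cash flows;
# modified duration divides by (1 + y).
cash_flows = [coupon * 100] * (maturity - 1) + [coupon * 100 + 100]
macaulay = sum(t * cf / (1 + y) ** t for t, cf in enumerate(cash_flows, start=1)) / p0
modified = macaulay / (1 + y)

# Effective duration: average percentage price change for +/- 25bp parallel shifts.
dy = 0.0025
p_up, p_down = price(coupon, maturity, y + dy), price(coupon, maturity, y - dy)
effective = (p_down - p_up) / (2 * p0 * dy)

print(f"Price {p0:.2f}  modified duration {modified:.2f}  effective duration {effective:.2f}")
# With no embedded options the two durations are essentially equal; calls, puts,
# and prepayments make the cash flows rate-dependent and drive them apart.
```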

Given that the primary objective of duration is to explain a bond's or portfolio's price sensitivity to changes in interest rates, we can see that neither Macaulay's (Modified) Duration nor Duration-to-Worst can be used for this purpose, because neither one reflects the fact that a bond's cash flows can be affected by a change in interest rates. Macaulay's Duration assumes a bond will always survive to the stated maturity date, regardless of any call or put options, or in the case of a mortgage-backed security, that prepayments will be constant, regardless of a change in interest rates. Consider a mortgage pass-through forecasted to prepay at a CPR of 18% for the remainder of the mortgage pool's life, and suppose these cash flows produce a Macaulay's duration of 3.20. Can we reasonably estimate the impact of a 50bp change in interest rates on the pass-through using the approximation: price change ≈ –Duration × change in rates = –3.20 × 0.50% = –1.60% (or +1.60% if rates fall by 50bp)? No, because the duration of 3.20 ignores the fact that if interest rates fall, prepayments are likely to increase, and vice versa. A similar error occurs with callable and puttable bonds, where Macaulay's duration fails to recognize the increasing value of the call option as rates fall (or the rise in the put option's value as rates rise). For bonds with adjustable rate coupons, Macaulay's duration doesn't reflect the fact that as interest rates change, the coupon rate on the bond changes; in essence it treats all bonds as fixed rate instruments. If Macaulay's Duration is used to compare a portfolio's interest rate sensitivity relative to a benchmark and the portfolio (or the benchmark) contains securities with any type of embedded options, a significant tracking error is likely to occur.

What about Duration-to-Worst? Even though Duration-to-Worst seems to recognize the presence of an embedded call option, it does not reflect the fact that the value of the option, i.e., the likelihood the option will be exercised, fluctuates as interest rates change. Duration-to-Worst is like an on/off switch – it either assumes the bond is definitely going to be called, or is definitely not going to be called, without allowing for uncertainty. Therefore, Duration-to-Worst either under- or overestimates a bond's interest rate sensitivity by assuming that a call will or will not be exercised regardless of the future interest rate environment, and it can be a highly unstable and misleading measure.

Consider a bond with a 7.50% coupon, maturing in 10 years, callable a year from now at a price of 103, currently priced at 103.45, with the following measures: Yield-to-Maturity – 6.526%; Yield-to-Call – 6.355%; Macaulay's Modified Duration – 6.99; Duration-to-Worst – 1.02; Effective Duration – 3.00. Since the yield to the first call date (which is the worst possible call date in this example) is lower than the yield-to-maturity of the bond, the bond is "trading to call". The Macaulay's Modified Duration, which ignores the presence of the call option entirely, predicts the bond's price will increase by approximately 6.99% (from 103.45 to 110.70) if interest rates decline by 1%. However, we know the price cannot rise that far since the bond is callable at 103 in a year, so the Macaulay's duration is not a useful approximation of price sensitivity.
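
As a back-of-the-envelope check on those figures (a sketch using only the numbers quoted above, not output from any pricing system), the three duration measures imply very different price estimates for a 1% drop in rates:

```python
# Back-of-the-envelope arithmetic for the callable bond above, using only the
# figures quoted in the text (not output from any pricing system).

price = 103.45
durations = {"Modified": 6.99, "Duration-to-Worst": 1.02, "Effective": 3.00}
dy = -0.01   # rates fall by 1%

for name, dur in durations.items():
    estimated = price * (1 - dur * dy)   # linear duration approximation
    print(f"{name:18s} {dur:4.2f} -> estimated price {estimated:6.2f}")

# Modified duration projects ~110.7, which the 103 call price effectively rules out;
# duration-to-worst projects ~104.5 as if the call were certain; the effective
# duration estimate (~106.5) lies between, reflecting the option's changing value.
```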

Duration-to-Worst suffers from a related flaw – it assumes that the bond's status (i.e., trading to call or trading to maturity) will never change until it actually does. If the bond is currently trading to call, the Duration-to-Worst assumes the bond will definitely be called, regardless of any future change in interest rates; if the bond is trading to maturity, the Duration-to-Worst assumes the bond will never be called. So, Duration-to-Worst can "jump" back and forth, from either a fairly short duration based on the call date, out to the duration based on the maturity date as the bond "crosses over" from trading to call to trading to maturity. Let's say that a 20bp increase in rates would cause this bond to trade to maturity, rather than to the call date. If we use Duration-to-Worst, that 20bp rise in rates would cause us to restate the bond's duration from 1.02 to 6.99, an unrealistically large jump in price sensitivity for a small change in interest rates [3]. Of course, neither Duration-to-Worst nor Modified Duration provides a good indication of the actual change the bond's price would experience given a shift in the yield curve; for this, we must use Effective Duration, which reflects the impact of the value of embedded options on the bond's price sensitivity.

The Effective Duration of a callable bond will always be less than the Macaulay’s duration, for the following reason: As interest rates fall, the call becomes more important to the behavior of the security and the increase in price that a decline in rates would otherwise cause is restricted by the presence of the call. On a percentage basis, this means the price of a callable bond increases by a smaller amount than the price of an otherwise identical but non-callable bond for a given decline in rates. Conversely, as interest rates rise the value of the embedded call option declines and therefore has less and less impact on the price of the bond. On a percentage basis, the price of a callable bond begins to decline by almost as much as that of a non-callable bond. When we remember that Duration is used to estimate a percentage change in price, we can see that the Effective Duration value must be smaller than the Macaulay’s Duration, which ignores the impact of the call feature on the bond’s price. Similar logic holds true for mortgage-backed securities, where prepayments can be viewed as “partial calls” (that are exercised somewhat inefficiently).

Effective Duration should not be viewed as a measure of time, although it is often spoken of in terms of "years". For securities with no embedded options (where the Macaulay's Modified Duration and Effective Duration will be equal), duration can be viewed as the weighted-average time until cash flows are received, where the weights are the present values of the cash flows themselves. However, since securities with embedded options have uncertain cash flows (with respect to amount and/or timing), it is not appropriate to view duration in terms of time. In fact, some securities, most notably CMO Interest-Only (IO) tranches, have an Effective Duration that is negative, which certainly cannot be viewed as a time increment (leaving the theory of relativity aside!). Effective Duration can be longer than the Average Life of a bond if the Average Life is computed to a call date; otherwise, Effective Duration will be shorter than Average Life [4].

Effective Duration is the only one of the duration measures discussed here that reflects the impact of embedded options on a bond's interest rate sensitivity. We devote a great deal of effort and resources to provide our clients with robust effective durations (and the various models required to derive them) for all types of fixed income securities, portfolios and benchmark indices. BondEdge provides all three durations discussed in this article, i.e., Modified, Effective and "To-Worst"; we hope this review has helped you make an informed decision about how to use them.

[1] The duration of a portfolio is the market-value-weighted average of the durations of the bonds in the portfolio.
[2] Note that each of these measures describes the percentage change in a bond's (or portfolio's) value for a given change in rates, not the dollar price change. For bonds priced at par, the percentage change and the dollar price change are the same; for bonds priced away from par, a so-called "dollar duration" may be computed that describes the bond's dollar price change given a change in rates. However, unless otherwise noted, the term "duration" refers to "percentage change in price".

[3] Although Duration-to-Worst is not an accurate measure of interest rate risk for securities and portfolios that contain embedded options, it is often used in the municipal market. This may be due to the fact that municipal portfolios have traditionally been managed to maximize reported yield, rather than on a total return basis. In the mid-1980s to early 1990s, years in which interest rates declined, the average tax-exempt bond mutual fund consistently underperformed muni market benchmarks. In an earlier On-the-Edge article, we proposed the hypothesis that relying on Duration-to-Worst caused a widespread mis-estimation of the interest rate sensitivity of these funds, leading to this pervasive underperformance.

[4] With the possible exception of certain CMO tranches with extreme extension or contraction risk.

Back to Basics: Volatility and Option Valuation

Teri Geske
Senior Vice President, Product Development

BondEdge for Windows allows investment managers to evaluate the impact of a change in volatility rates for different market sectors across a diversified portfolio. Since volatility is a critical component of option valuation, we thought it would be appropriate to review why volatility estimates are important in fixed income portfolio analysis and how you can measure the sensitivity of your portfolios to a change in volatility.

First, a brief reminder of why volatility is so important. Option theory reveals that, all other things being equal, an increase in volatility causes the value of an option to increase. If I have an option to buy avocados for $2 each for the next 12 months, and the price of avocados has been unchanged at $1.50 for the past 20 years, my option is worthless. However, if avocado prices have ranged from $0.90 to $3.25 over the past few seasons, my option is quite valuable. In fixed income portfolio analysis, volatility affects the value of callable (and puttable) bonds, mortgage-backed securities subject to prepayments (the right to prepay a mortgage is an option), adjustable rate securities with embedded caps and any other securities whose cashflows are potentially sensitive to changes in the level of interest rates. Since most of these instruments represent a short position in the option (with the notable exception of bonds with put options), an increase in volatility would cause the price of the security to decline. Or, if we hold price constant we can see that an increase in volatility causes a decline in a security's option-adjusted spread (OAS).
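
The avocado intuition can be made concrete with a standard Black-Scholes calculation for a call option (hypothetical inputs chosen to mirror the example; this is an illustration of the volatility effect, not BondEdge's valuation models):

```python
# Black-Scholes value of a call option at several volatility levels
# (hypothetical inputs mirroring the avocado example; illustration only).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r = 1.50, 2.00, 1.0, 0.05   # out-of-the-money option to buy at $2
for sigma in (0.01, 0.10, 0.30, 0.60):
    print(f"volatility {sigma:4.0%} -> option value {bs_call(S, K, T, r, sigma):.4f}")

# Near-zero volatility makes the option essentially worthless; higher volatility
# makes the same contract valuable. A holder of a short option position (a
# callable bond, a prepayable mortgage) therefore loses value as volatility rises.
```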

Although everyone agrees that volatility is an important variable in option valuation, the proper technique to use when estimating volatility is a topic of debate. In general, volatility is measured using historical data, or is implied from observed market prices (or some combination of the two). In BondEdge, the default volatility parameters (expressed as annual percentages) are based on historical observations because total return managers typically focus on returns over a fairly long period of time, e.g. 3 to 6 months. Some market participants (such as traders) with a shorter time horizon prefer to use implied volatilities, and the Volatility Appraisal report allows the portfolio manager to evaluate the impact of using different volatility assumptions on the duration and convexity of a portfolio. In fact, volatility estimates are often the primary cause of variations when comparing effective duration, convexity and OAS values from different sources.
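
As a simple illustration of the historical approach (hypothetical data; real estimates would use a much longer series and each vendor's own methodology), an annualized volatility can be computed from daily observations as follows:

```python
# Simple historical volatility estimate: annualized standard deviation of daily
# log changes (hypothetical daily yield series; ~252 trading days per year).
from math import log, sqrt

daily_yields = [0.0650, 0.0652, 0.0648, 0.0655, 0.0651, 0.0649, 0.0653, 0.0657]

log_changes = [log(b / a) for a, b in zip(daily_yields, daily_yields[1:])]
mean = sum(log_changes) / len(log_changes)
variance = sum((c - mean) ** 2 for c in log_changes) / (len(log_changes) - 1)
annualized_vol = sqrt(variance) * sqrt(252)

print(f"Annualized (relative) yield volatility: {annualized_vol:.1%}")
# An implied volatility would instead be backed out of observed option prices.
# Different volatility inputs are a common reason effective duration, convexity
# and OAS figures differ across analytics providers.
```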

While we typically use the term "volatility" in its singular form, to be more precise we should use the plural "volatilities", because the level of volatility differs along the term structure. Short term interest rates are generally more volatile than long term rates, and the analytical models in BondEdge take this into account by using different volatility rates along the term structure. The Volatility Appraisal report (under Portfolio-Simulation) allows you to specify the long and short rate volatilities for different segments of the market. Holding price constant, the effective duration, convexity and OAS of each security and of the portfolio are re-computed using the revised volatility estimates. The volatility parameters may also be modified in both Parallel and Specified Scenario portfolio simulations, where BondEdge calculates the total return, effective duration, convexity and other characteristics for the selected portfolio using the new volatility inputs. These features offer a portfolio-level analysis to complement the Security Valuation tool which allows you to analyze changes in volatility estimates (and other model parameters) for a single security.

Another way of measuring the impact of volatility on security valuation is described by the concept of Vega. Vega is defined as the price sensitivity to changes in volatility; securities (or portfolios) with a high degree of optionality have relatively high Vegas, whereas a security with no embedded options has a Vega of 0.00. Vega is one of the Risk Measures which may be computed in the Valuation screen or at the portfolio level using the Risk Measures report under the Simulation menu. We encourage you to use Vega together with the Volatility Appraisal and Simulation reports to understand how changes in volatility affect a diversified portfolio of securities with embedded options. As always, we welcome your feedback on this issue and invite you to suggest other topics for discussion.
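
Conceptually, Vega can be computed as a finite difference: revalue the position with volatility bumped up and down and compare. The sketch below does this for a hypothetical Black-Scholes call; it illustrates the idea rather than BondEdge's actual Risk Measures calculation.

```python
# Vega as a finite difference: bump volatility up and down, revalue, and compare
# (hypothetical Black-Scholes call; a position with no optionality has Vega 0.00).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.15
d_sigma = 0.01   # one volatility point

vega = (bs_call(S, K, T, r, sigma + d_sigma) -
        bs_call(S, K, T, r, sigma - d_sigma)) / (2 * d_sigma)
print(f"Vega ~ {vega:.2f} per unit of volatility "
      f"(~{vega * d_sigma:.3f} price change per 1-point volatility move)")
```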

Back to Basics: Value at Risk (VaR)

Teri Geske
Senior Vice President, Product Development



Over the past few years a tremendous amount of work has been done in the area of "Value at Risk" (referred to as "VaR" or "V-A-R"). There have been countless seminars and conferences on VaR, many books and articles have been written on the subject, and there is no shortage of vendors touting their VaR systems as the sine qua non of risk management. VaR was originally designed for banks with significant trading operations covering several markets (fixed income, foreign exchange, derivatives, etc.) to quantify the institution's risk in a systematic way. VaR is now used not only as an internal management tool; it has also been adopted by international bank regulators in determining whether or not an institution is adequately capitalized. Although VaR has been embraced by most large banks, other members of the financial community (insurance companies, investment managers and plan sponsors) are still determining how, if at all, VaR fits into their business. Nonetheless, even though your firm may not yet be using VaR, it is a concept that is most likely here to stay. Therefore, we thought it might be useful to review the basics of VaR, including some of the strengths and weaknesses of this approach to risk management.

VaR is defined as the expected loss in value, given a statistical level of confidence, due to adverse movements in underlying risk factors. VaR allows us to state that "over the next x days, the portfolio is expected to lose no more than $y (or y%) in value with z% confidence," where z% is typically 95% or 99% (the Bank for International Settlements (BIS) standards use VaR in terms of a 10-day horizon with a 99% confidence interval). Now, the statement that "99% of the time losses will not exceed $y" may sound rather comforting, but this also means that 1% of the time (one out of a hundred observations) we expect that losses will exceed the dollar value resulting from the VaR analysis. If we are using a 1-day VaR and a 99% confidence level, given that there are roughly 250 trading days in a year, we are saying that losses will be more severe than the VaR amount about two or three times a year. Furthermore, VaR says nothing about how bad the loss might be that 1% of the time – this is why risk managers realize that VaR should be combined with stress testing to determine what might happen under extreme conditions.

There are three approaches used to compute VaR, referred to as the "variance/covariance", "historical simulation" and "Monte Carlo simulation" methods. We will briefly summarize them here, mentioning some strengths and weaknesses of each. The variance/covariance approach assigns (or "maps") each asset to one or more equivalent risk positions based on the factor(s) that affect the asset's value. For example, a portfolio consisting of a 5-year bond and futures contracts on the S&P 500 would be represented as exposures to movements in the five-year U.S. interest rate and the S&P 500. The VaR of the portfolio is computed based on the variance and covariance of the individual risk factors over the VaR time horizon. So, if the daily change in the 5-year U.S. Treasury rate has a standard deviation of 4bps, the interest rate risk component of the portfolio's one-day VaR with a 99% confidence interval would be based on a 2.33 × 4bp ≈ 9bp move. To compute VaR (in dollars), the change in each risk factor associated with the chosen confidence level is multiplied by the "delta equivalent" value of the position – for fixed income securities, this is the dollar duration (i.e., the change in dollar value given a small change in interest rates). If there is some negative correlation among the risk factors (e.g. if the S&P 500 tends to move up when Treasury prices go down and vice versa), the covariance between these risk factors would make the VaR of the portfolio something less than the sum of the VaRs of two separate portfolios, one holding the S&P 500 futures and the other a 5-year Treasury.
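
A minimal sketch of the variance/covariance calculation for the two-factor example above (the exposures, volatilities and correlation are hypothetical, chosen only to show the mechanics):

```python
# Minimal variance/covariance VaR for the two-factor example in the text:
# a 5-year bond (5-year rate exposure) plus S&P 500 futures.
from math import sqrt

z = 2.33                        # one-tailed 99% confidence multiplier

# Delta-equivalent dollar sensitivities:
#   dollar change per 1bp move in the 5-year rate, and per 1% move in the S&P 500
deltas = [-4500.0, 20000.0]

# One-day standard deviations of the factors: 4bp for the rate, 1.1% for the index
sigmas = [4.0, 1.1]
corr = 0.25                     # hypothetical: equities tend to rise when rates rise
                                # (i.e., when Treasury prices fall)

cov = [[sigmas[0] ** 2,               corr * sigmas[0] * sigmas[1]],
       [corr * sigmas[0] * sigmas[1], sigmas[1] ** 2]]

# Portfolio variance = delta' * Cov * delta; VaR = z * portfolio standard deviation
portfolio_var = sum(deltas[i] * cov[i][j] * deltas[j] for i in range(2) for j in range(2))
one_day_var_99 = z * sqrt(portfolio_var)

stand_alone = [z * abs(d) * s for d, s in zip(deltas, sigmas)]
print(f"Combined one-day 99% VaR ~ ${one_day_var_99:,.0f} "
      f"vs. sum of stand-alone VaRs ${sum(stand_alone):,.0f}")
# Because the bond loses when rates rise while the equity exposure gains, the
# combined VaR is well below the simple sum of the two stand-alone VaRs.
```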

The primary advantage of the variance/covariance approach is that it is fairly easy to compute. There are a number of sources of variances and covariance "matrices" for key risk factors (exchange rates, interest rates, commodity prices, etc.) that can be downloaded into spreadsheet programs designed to compute VaR using this method. However, there are a number of drawbacks to this approach; the most important for fixed income portfolios is that the price sensitivity of options, or of bonds with embedded options (callable bonds, mortgage-backed securities, etc.) cannot be adequately described by the variance/covariance method. This method implicitly assumes that prices change at a constant rate with respect to a change in a market risk factor, but this assumption is not valid for options. The price behavior of options is "non-linear" – in other words, not constant. For example, as interest rates rise a mortgage-backed security’s duration (its sensitivity to interest rate risk) can increase considerably due to a change in the value of the embedded prepayment option. The variance/covariance approach does not capture this and can significantly underestimate the true VaR for a portfolio containing options.

The Historical Simulation VaR method observes the actual levels of market risk factors (such as yield curves, exchange rates, commodity prices, etc.) over a period of time and revalues each asset in the portfolio under each observed set of risk factors. For example, if a portfolio consisted solely of 30-year zero coupon bonds, we would observe the 30-year Treasury (spot) rate over each of the past 100 days and would revalue the portfolio 100 times, given these different interest rate levels. A one-day VaR with 95% confidence would be computed as the 5th largest decline in the portfolio's value of the 100 daily observations. One advantage of this approach over the variance/covariance method is that it does reflect the non-linear price behavior of options. A disadvantage of the historical simulation method is that the VaR number is highly sensitive to the time period used to observe the market risk factors. For example, the VaR of a corporate bond portfolio calculated as the 5th worst loss using credit spread changes observed over the two years from June 1996 to June 1998 would be markedly different from the 5th worst loss observed over the two years from January 1997 to January 1999.
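
The mechanics of the historical simulation approach can be sketched for the single-factor zero-coupon example above (the rate history below is simulated stand-in data, not actual Treasury observations):

```python
# Historical-simulation VaR sketch for a portfolio of 30-year zero-coupon bonds,
# revalued under each of 100 observed one-day changes in the 30-year spot rate.
import random

random.seed(7)
face, maturity, r0 = 10_000_000, 30, 0.06

def zero_price(face, maturity, rate):
    return face / (1 + rate) ** maturity

base_value = zero_price(face, maturity, r0)
observed_changes = [random.gauss(0.0, 0.0004) for _ in range(100)]   # ~4bp daily std dev

losses = sorted((base_value - zero_price(face, maturity, r0 + dr) for dr in observed_changes),
                reverse=True)
one_day_var_95 = losses[4]   # 5th largest loss out of 100 observations

print(f"Portfolio value ${base_value:,.0f}; one-day 95% VaR ~ ${one_day_var_95:,.0f}")
# Because the bond is actually repriced at each rate level, the calculation picks
# up the non-linear price behavior that the variance/covariance method misses.
```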

The Monte Carlo VaR approach overcomes the historical method’s dependence on a particular time period by generating a (random) distribution of changes in each key market risk factor based on parameters specified for each factor. The portfolio is revalued under each set of market conditions generated by the Monte Carlo simulation and, as with the Historical method, the changes in portfolio value are ordered so that the VaR is observed as the loss in value corresponding to the desired confidence level, e.g., the 5th worst loss out of one hundred observations for a 95% confidence level. The Monte Carlo approach is quite robust but requires the most sophisticated analytical systems and the greatest data collection effort.
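
A sketch of the Monte Carlo approach for the same hypothetical zero-coupon portfolio, with the rate changes drawn from an assumed distribution rather than a historical window:

```python
# Monte Carlo VaR sketch: identical revaluation step to the historical method,
# but rate changes are generated from a parameterized distribution
# (all parameters hypothetical).
import random

random.seed(11)
face, maturity, r0 = 10_000_000, 30, 0.06
one_day_rate_vol = 0.0005          # assumed std dev of one-day rate changes
n_scenarios, confidence = 10_000, 0.99

def zero_price(face, maturity, rate):
    return face / (1 + rate) ** maturity

base_value = zero_price(face, maturity, r0)
losses = sorted((base_value - zero_price(face, maturity, r0 + random.gauss(0.0, one_day_rate_vol))
                 for _ in range(n_scenarios)), reverse=True)

one_day_var_99 = losses[int(n_scenarios * (1 - confidence)) - 1]   # 100th worst of 10,000
print(f"One-day 99% Monte Carlo VaR ~ ${one_day_var_99:,.0f}")
# The result no longer hinges on one historical window, but it is only as good as
# the distributional assumptions, and it is the most computationally demanding approach.
```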

Some problems with VaR: In addition to the drawbacks of each method cited above, VaR suffers from a number of shortcomings. First, the three calculation methods can produce radically different results – this makes it difficult (if not impossible) to compare VaR numbers reported by different institutions. There is the issue raised earlier that even a 99% confidence level says nothing about how severe the loss might be at the "tail" of the distribution. Two firms (or two portfolios) might have the same VaR at a 95% confidence interval, but at the 96th percentile one’s loss might be twice as large as the other’s. If a firm relies exclusively on VaR for risk management, the potential for catastrophic loss due to extreme changes in risk factors could grow unchecked over time. That possibility leads us back to the importance of stress testing as mentioned earlier. There are many other practical as well as theoretical issues to address in deciding which (if any) VaR analysis to use, and there is no shortage of financial literature devoted to the topic.

Finally, we should ask ourselves, is VaR appropriate for investment management? Traditionally, the time horizon for VaR analyses has been measured in terms of days (one day, three days, a week) and in terms of dollar value. Since investment managers typically measure performance monthly, and usually relative to a benchmark, a one-month "relative" VaR might be more appropriate. Using this approach, a VaR analysis using a 99% confidence level would state that in one month out of 100 the portfolio is expected to underperform by more than y% relative to a benchmark. Frankly, it becomes increasingly difficult to compute, back-test and interpret VaR numbers for these longer time horizons because the necessary data is difficult to collect. In this example, back testing would require us to collect 8+ years of monthly observations before we could determine whether or not our actual loss exceeded our computed VaR more than 1% of the time. Over that lengthy period, the parameters used to compute the original VaR number may no longer reflect actual market conditions and the portfolio’s exposure to different risk factors would most likely change. So, perhaps it is better to stay with short VaR time horizons despite the longer-term perspective of most investment managers.

Despite these drawbacks, VaR can be a useful tool. It promotes risk awareness, can be used to evaluate a firm’s risk profile over time or to compare asset managers across different sectors and so on.

Tuesday, April 25, 2006

ERM is the Next Big Thing for Quants

http://www.fenews.com/fen46/front-sr/wang/wang.html

Financial engineers' achievement in helping to revolutionize Wall Street and the global capital markets has been a phenomenal one. Now a new opportunity of equal – or even bigger – scale has arrived for quants: to revolutionize the way that firms manage their risks.

Enterprise Risk Management (ERM) has grabbed the attention of the business community. Unlike established strains of risk management, ERM is not just about compliance and control; it is more about strategic risk-taking and building an effective organization. Right now, corporate strategy and shareholder value initiatives need exactly the tools that are emerging in the ERM discipline.

ERM as a Natural Evolution of Risk Management

For the banking industry, ERM follows more or less naturally from the risk management revolution that is being driven by the new Basel Accord. The initial Accord focused predominantly on market risks and credit risks. In recent years, the new Accord has put operational risks on the front burner. When you enter the operational risk arena, you are dealing with a broad spectrum of risks and their interactions – exposures like mis-selling and IT systems failures, which have dragged risk managers out of their comfort zone. Enterprise-wide risk management requires a deep understanding of risk dynamics and business processes, incentive alignment and cost-benefit analysis.

In the broader economic sector, the recent Sarbanes-Oxley Act drove home the role that ERM can play in underpinning enhanced corporate governance and financial transparency. Rating agencies and consumers are also pressuring firms to embrace ERM to address financial, strategic, operational and reputation risks.

To implement ERM, firms are developing risk measurement systems and economic capital frameworks that can reflect both the inherent risks and the level of risk control in place. One symbol of this sea-change has been the appointment at many firms of chief risk officers to join chief financial officers and chief information officers at the top table of management.

ERM is an Emerging Discipline

  • Most current textbook theories are derived from one set of assumptions and follow one-dimensional logical thinking. ERM by nature must be multidisciplinary, reflecting the different perspectives and competing interests of multiple stakeholders. ERM promises to elevate the science of risk-taking to the next level. Before you try to quantify risks, you first need to understand which risks to focus on.
  • ERM requires research breakthroughs and new paradigms if it is to deliver on its promise: how to quantify the impact of big hedge fund money flows; how to apply agent-based risk modeling; how to extract the most relevant risk information from a wealth of financial data; how to reconcile the different perspectives and interests of multiple stakeholders; and so on. Here I would use a mathematical analogy: traditional silo-based risk modeling is like working in a linear space, while ERM risk modeling is like working on manifolds, with changing "views" between global and local perspectives.
  • The traditional take on corporate finance needs to evolve into a more advanced analytical corporate finance, which treats asset risks and liability risks in a holistic fashion. For instance, based on the risk appetite of the firm, analytical corporate finance should aim to design the optimal risk management technique, whether through hedging, raising additional capital or contingent capital contracts.

Why Quants Can Play a Key Role in ERM

The financial services industry is witnessing the increased proliferation of complex financial products and accelerating globalization. Technology development has enabled business transactions to take place at an ever faster speed. These trends will continue to require computation-intensive data analysis and risk modeling. Without some appreciation of quantitative models, nowadays it is almost impossible to lead an institution's ERM efforts.

Many ERM issues call for solutions that look like those encountered in financial engineering; for instance, how to design a capital allocation model that properly accounts for the effectiveness of internal controls and the hedging program.

Given the multiple-perspective nature of ERM issues, endless debates can only create more heat, while an objective quantitative approach can help shed some light. Many quants by nature are solution-oriented and can formulate complex issues in an objective quantitative framework. This gives quants a leg up in dealing with ERM issues.

What Other Skill Sets Do Quants Need to Practice in ERM?

Some might be inclined to view ERM as nothing but a huge risk-aggregation machine. I think this is only a partial understanding of ERM. In order to make a greater impact, quants must go broader and deeper. At the end of the day, all businesses are conducted by people. Human behavior is the fabric of ERM.

The most important skill above all is to leave one-dimensional thinking behind. Quants need to develop a good appreciation of economic, accounting and legal considerations.

ERM needs an army of quantitative risk professionals bound by enforceable standards of professional conduct. Currently no single profession can fill the vast ERM space, whether financial engineers, actuaries or accountants/auditors. The field of ERM invites all who are open-minded, technically solid, intellectually curious and action-oriented. Financial engineering solutions can help revolutionize the broader risk management field. I envision that through collaborative ERM educational and research efforts all of the above risk professionals can play a part in practicing ERM.

Fixed Income Meets the Black Box

Oct 24, 2005
URL: http://www.wallstreetandtech.com/showArticle.jhtml?articleID=172900005

The universe of fixed income securities - with more than 3 million names in the U.S. alone - dwarfs the global equities market, which only has about 15,000 stocks from which to choose. As the dot-com bust has consolidated more than 100 bond-trading platforms to just a few entities with meaningful liquidity, the fixed income market would seem to be ripe for algorithmic trading, and in the fast-moving, deeply liquid interdealer market in government bonds, this certainly is the case. But it may be quite some time before algorithmic trading becomes commonplace for institutional asset managers and mutual fund managers, due to structural issues with the dealer-to-customer marketplace that result in a lack of transparency and navigability for automated trading patterns.

In the interdealer market, an active arbitrage business has developed between the two giants of electronic U.S. Treasury trading - eSpeed and Icap - and between these two platforms and the futures market. Opportunistic traders attempt to gain small profits by purchasing bonds on one platform and immediately selling them on the other, which requires lightning-fast connectivity and firm quotes - and it certainly doesn't hurt to have a computer with built-in parameters doing the work. According to David Rutter, CEO of electronic brokerage at London-based Icap, the vast majority of this trading is conducted by "less than 10" quantitative trading firms, most of which have been spun off of Chicago-based futures commission merchants (FCMs) to lay off risk against bond-based futures contracts traded on the Chicago Board of Trade (CBOT).

"Algorithmic trading is the fastest-growing customer segment that we have, and there has been a dramatic change in the last year - more than 50 percent of our bids and offers are now black-box-oriented," Rutter says. "[Black-box traders] have become a very important source of liquidity, which was traditionally the domain of dealers."

Rutter notes that relatively few transactions result from black-box postings, but they have helped reduce price discrepancies in the Treasury market. He adds that Icap now is experimenting with creating a volume weighted average price (VWAP) benchmark for its most active securities - a hallmark of the equities market and a harbinger of more algorithmic trading to come.

New York-based eSpeed also is considering creating a VWAP for "on the run" (i.e., most recently issued) Treasuries, officials say, and the broker is making preparations for more-automated trading. Over the past year and a half, the platform has been increasing its message-rate capacity so as to pave the way for active black-box customers.

"We have a group of people really dedicated to helping our clients operate programmatically with us," says Matt Claus, eSpeed's CTO. "Program trading, in our environment, is customers who have developed software that interacts with our API [application programming interface] without human intervention." Though he cannot provide specific figures, Claus estimates that algorithmic trading on eSpeed will grow tenfold in the next five years.

The attitude toward algorithmic trading, however, is much more conservative at dealer-to-customer venues such as TradeWeb, the predominant trading platform for government securities, and MarketAxess, the predominant corporate-debt trading platform. According to a spokesperson, TradeWeb is not actively pursuing algorithmic trading capacity.

For its part, MarketAxess is making improvements to its API to support faster trading, according to John Dean, the company's head of connectivity. But, he says, the algorithm bug probably will require 12 months to 18 months to take form in the fixed income dealer-to-customer marketplace, starting with Treasuries and moving to the recently launched credit default swap (CDS) index market, and finally on to the most-active corporate securities.

Critical Difference

The critical difference between the interdealer market and the dealer-to-customer market comes down to equal parts immediacy, liquidity and transparency. In the interdealer space, quotes for common bonds, such as the 10-year Treasury note, are anonymous and available for instant execution, similar to the Nasdaq stock market. In the dealer-to-customer space, however, dealers provide quotes when requested by customers, and trading is not anonymous - customers risk showing their hand when they request a quote on the open market.

Although the dealers have automated pricing engines for this purpose, they retain the capability to refuse a trade request or change a quote before offering it again to a customer - which makes authoring an algorithm more challenging. Since most algorithms rely on a constant stream of current market data, the challenge increases the further one gets away from Treasuries.

In the corporate and municipal debt markets, trades are far less frequent and current pricing information often is not available. Also, the more obscure the security, the more likely it is that a single dealer may hold all the available liquidity. Although the National Association of Securities Dealers (NASD) has made improvements to the Trade Reporting and Compliance Engine (TRACE), corporate-bond trade information is delayed by 15 minutes, and even Treasury price information is not necessarily immediate and accurate, notes Harrell Smith, head of the securities and investments practice at Celent Communications.

"The limit-order algorithm works great in a highly transparent world, but it does not fit as cleanly into the fixed income world," Dean says. "In fixed income, you have no clue about where that bond is - at least not to the nth degree - so it is better to do a quote request and subject that to your pricing models. It is a technical hurdle, but it can be [overcome]."

There also are fewer reasons to use algorithms in fixed income trading. Whereas equities traders can benefit greatly from splitting up orders across multiple venues to avoid detection - one of the main drivers for algorithmic trading in the equities marketplace - there is little benefit to doing so in fixed income, where a trader does not necessarily pay more to conduct transactions of high value, and there are not many competing electronic venues offering the same security, according to Travis Bagley, head of fixed income transition at Russell Investment Group, which manages $34.2 billion in global fixed income securities.

"If you are holding a $1 million or $2 million piece of a bond that is tradable, you wouldn't incur as much impact as you would trying to trade that much value in equities," explains Bagley. "At this point, we see algorithmic trading as more theoretical. We expect some firms are out there using it, but it is not a large segment of fixed income trading, and we don't know what the tipping point would be to make it so."

Eric Goldberg, CEO of Portware, a software company that has developed algorithms for buy-side and sell-side firms in the equities and futures markets, believes there is a tipping point, but he is skeptical that it will come to pass. "Algorithmic trading is a natural extension once there is electronic trading with streaming prices on an open platform accessible through an API," Goldberg asserts. "But we don't have that yet in fixed income. A lot of the single-dealer and dealer-to-customer platforms are built for single order entry, and no one really wants it to be an open book."

Unless the market structure changes so that it resembles an exchange with an open book and live orders, with the possible exception of the most adventurous hedge funds and quantitative trading shops, it seems that the use of algorithms in the dealer-to-customer market will be limited to internal algorithms that try to find optimal price points against common benchmarks, such as the yield curve, according to Gavin Little-Gill, senior analyst at TowerGroup. "You will also see people looking at betting across markets," Little-Gill says. "They will play fixed income versus equity markets. They may build these complex models that culminate in some fairly rudimentary 'if-then' statements that trigger transactions."

On The Net

eSpeed
Icap
Chicago Board of Trade (CBOT)
Nasdaq
TradeWeb
MarketAxess
NASD
Celent Communications
Russell Investment Group
TowerGroup

The Continued Revolution in Buy-Side Trading

A recent survey by Cutter Associates indicates that market-leading buy-side firms are transforming their trading practices and technology to enhance performance, lower costs, reduce risk and meet regulatory requirements. Survey findings indicate that the number one IT priority for many firms is to provide traders with enhanced trading and analytic tools, connectivity to liquidity sources, and integration capabilities.


The Impact of Regulations

The four major securities industry regulatory bodies – the Ontario Securities Commission in Canada, the Financial Services Authority in the UK, the European Union, and the Securities and Exchange Commission in the US – have all issued or are expected to issue regulations that will have a profound effect on the buy-side trading desk. While the methods of regulation may differ – for example, either direct prohibition of soft dollar payments for certain services or disclosure to clients of all soft dollar payments – the goals of each regulatory body are the same: managers must provide their clients with best execution, and commissions can be used only for execution and services that are for the benefit of the client, not the manager.

These new directives will place the burden of proof for compliance on the manager, unlike the traditional approach in which the regulator was responsible for proving that the manager was non-compliant.

While most of the new regulations are aimed primarily at the equity markets, we expect increased regulatory scrutiny of the fixed income and derivative markets as well.

Traders will bear the brunt of the impact of the regulations. Those traders who have not already done so will have to morph from "order takers" into executors who ensure best execution and compliance with regulations. The job of head trader will entail less trading, more regulation-oriented administration, and more management.

Buy-Side Power, Sell-Side Decline

Buy-side firms have taken responsibility for execution from the sell side and now execute 70% of trades away from full-service brokers. Technology-savvy agency brokers and DMA providers, such as ITG and Wave Securities, will continue to expand their rosters of services to offer buy-side traders more analytic and execution venue choices. The buy-side will still look to the sell-side for commitment of capital, research, market color, and execution of special orders, but declines in commission rates will put pressure on full-service brokers to change existing business models and explore new ways to generate profits.

Because the buy side has taken increased responsibility for trading, traders will need increasingly sophisticated technologies to do their jobs. Market-leading firms are rethinking and reworking trading practices and deploying new systems to accommodate changes.

The Trading Desk Will Be Buoyed (or Swamped) By a Flood of Technology

Firms will be adding a host of new systems throughout the order workflow to support the trading desk’s evolution into a consistent alpha contributor.

· Order generation will include portfolio manufacturing-like decision support tools to keep portfolios in compliance.

· Trade analytic tools will provide for real-time transaction cost estimation, real-time risk management, and systems to suggest appropriate trading strategies.

· Hand-off-to-trading capabilities will include selection of orders that can be sent directly to an exchange, systems to display order and market data tailored to the specific order and market conditions, and enhanced communications capabilities between trader and portfolio manager concerning the goals of the order and current markets.

· Order and execution management will include systems to identify potential basket trades, simulators to assist in selection of trading algorithms, real-time "shadowing" of the chosen algorithm using other algorithms, decision support systems for selecting a trading venue and method, systems for real-time monitoring of the status of all orders and automated exception reporting, real-time compliance and risk management systems, and visualization tools that allow the trader to absorb a great deal of information in a short period of time.

· Post-execution processing will provide real-time settlement exception processing and next-day transaction cost analysis.

The Challenge

The OMS vendors may be able to provide some of the functions, but most investment firms will have to rely on other systems providers and their own development teams. Advanced trading technologies create daunting integration and development challenges for IT because of the sheer number of systems involved, the computing power required (real-time risk management, compliance, and TCA), and the need for true, real-time interfaces. Despite the complexity and difficulty, those firms that can successfully deploy systems that fully automate trading and provide the full range of analytics will achieve an enormous competitive advantage.

Hedge Funds and Credit Ratings

Hedge Funds and Credit Ratings – Hedge Funds' Next Wrinkle: Ratings. Hedge funds have had a rough enough time lately, and now the next hurdle has appeared: credit ratings. I have a very smart friend, Aaron, who is very good at making money and is doing very well at a hedge fund; more than once in casual conversation he has worried that the good times for hedge funds may not last. (That...
對沖基金與信評 Hedge Funds' Next Wrinkle: Ratings 對沖基金最近已經夠倒楣了, 現在又出現了下一個難關 --- 信評. 我有一個很聰明又很會賺錢的朋友Aaron在對沖基金作的很好, 他曾不只一次在閒談中擔心對沖基金的好景不再. (那...