Crisis in the Rearview Mirror: Monitoring Mutual Funds in the Five Year Window

May 27, 2014

March marked the first month that the impact of extreme down markets of the Financial Crisis faded out of 5 year fund performance histories. With the darkest days of 2008 and Q1 2009 removed from 5 year track records and associated risk measures, significant shake-ups are being felt in many DC fund “scorecards”, investment policy reviews and other fund rating systems. Wealth managers and investors should be aware of these changes as they review funds that may rank very differently than they did only a few months ago.

The post-Crisis period of 2009-2013 is a unique one in financial markets. It witnessed unprecedented global government intervention along with compressed rates, spreads, volatility and dispersion within asset classes. Since the market's bottom in March 2009, however, most asset prices have rebounded strongly.

In selecting funds, most investment professionals consider a broad spectrum of factors, both qualitative and quantitative. One of the most concise and widely used ways to consider multiple factors is a fund rating system or screen. Fund rating systems – whether proprietary or widely published by third parties, such as Morningstar or Lipper – are employed by both institutional and retail investors to assist in making investment decisions. The influence such ratings have on flows of capital is considerable; research suggests that highly rated funds draw assets disproportionately to their underlying performance.

The past 5 years is a key window in many rating systems. Morningstar, for example, calculates 3, 5 and 10 year ratings, each of which is published individually. Its overall rating, however, is a combination of all three, with heavier weights on the longer periods. Lipper also calculates 3, 5 and 10 year ratings and combines them in equal weights for a final value. Because the recent periods are components of the longer ones, these shorter timeframes heavily influence the overall ratings. If a fund has been in existence for more than 5 years but fewer than 10, the 5 year record becomes the key figure in such rating systems.
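
To make the arithmetic concrete, below is a minimal sketch of how such horizon weightings combine. The 50/30/20 and 60/40 weights reflect our understanding of Morningstar's published methodology; the equal-weight variant mirrors Lipper's approach described above. The function names and sample ratings are ours.

```python
# Illustrative sketch of combining multi-horizon ratings into an
# overall figure. Weights follow our reading of Morningstar's
# published methodology; treat them as an approximation.

def overall_rating(r3, r5=None, r10=None):
    """Combine 3, 5 and 10 year ratings with horizon-dependent weights."""
    if r10 is not None:          # 10+ years of history
        return 0.20 * r3 + 0.30 * r5 + 0.50 * r10
    if r5 is not None:           # 5-10 years: the 5 year rating dominates
        return 0.40 * r3 + 0.60 * r5
    return r3                    # under 5 years: 3 year rating only

def lipper_style(ratings):
    """Equal-weight combination of whatever horizons are available."""
    return sum(ratings) / len(ratings)

# A 7-year-old fund: the 5 year figure carries 60% of the overall rating.
print(overall_rating(r3=4, r5=2))   # 2.8
print(lipper_style([4, 2]))         # 3.0
```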

Given the influence rating systems and DC plan scorecards may have on investment decision-making, and the vital role of the 5 year window, MPI performed analysis to understand how the Financial Crisis rolling out of that window has affected fund rankings.

Below, we calculate monthly ranks for three simplified 5 year (60 month) rating systems[1] on a universe of large cap mutual funds[2]. Each rating system ranks funds on a single risk-adjusted return measure: the Sortino Ratio, the Calmar Ratio, or Morningstar Risk-Adjusted Return (MRAR)[3]. While results are not shown, a comparison of the three systems shows their rankings to be consistently positively correlated, although of course not identical.[4]
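
For readers who want to reproduce the three measures, the sketch below computes each from a 60 month return series. The MRAR form follows Morningstar's published formula with gamma = 2; the Sortino annualization uses one common convention; all function and variable names are ours.

```python
import numpy as np

def sortino(r, mar=0.0):
    """Annualized Sortino ratio from monthly returns: mean excess return
    over the MAR divided by downside (semi-standard) deviation."""
    downside = np.minimum(np.asarray(r) - mar, 0.0)
    dd = np.sqrt(np.mean(downside ** 2))
    return (np.mean(r) - mar) / dd * np.sqrt(12)

def calmar(r):
    """Calmar ratio: annualized compound return over maximum drawdown."""
    wealth = np.cumprod(1 + np.asarray(r))
    ann_ret = wealth[-1] ** (12 / len(r)) - 1
    max_dd = np.max(1 - wealth / np.maximum.accumulate(wealth))
    return ann_ret / max_dd

def mrar(r, rf, gamma=2.0):
    """Morningstar risk-adjusted return (annualized), following
    Morningstar's published formula with risk-aversion gamma = 2."""
    er = (1 + np.asarray(r)) / (1 + np.asarray(rf)) - 1  # geometric excess return
    return np.mean((1 + er) ** -gamma) ** (-12 / gamma) - 1
```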

First, we consider whether any significant change has in fact occurred with respect to 5 year rankings. The chart below displays the proportion of funds that experienced a significant change in MRAR rank, month over month for the past 5 years. Note that during most of the period, ranks have been fairly consistent, showing few extreme variations from month to month. Over the past six months, however, as many as 40% of funds have shifted their ranks by at least a decile, and a large proportion of those by over two deciles. This represents a significant upheaval in fund rankings. Results are similar, and somewhat more pronounced, for the other two systems using the Sortino Ratio and the Calmar Ratio.
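
The chart's statistic can be approximated along the following lines, assuming a pandas DataFrame of monthly fund returns and treating a decile shift as a move of at least 10 percentage points in percentile rank. The helper mrar() refers to the sketch above; the data layout and names are hypothetical.

```python
import pandas as pd

# returns: DataFrame of monthly fund returns (rows: months, columns: funds)
# rf:      Series of monthly risk-free returns on the same index

def rolling_ranks(returns, rf, window=60):
    """Each month, percentile-rank every fund on its trailing-window MRAR."""
    out = {}
    for end in range(window, len(returns) + 1):
        sub = returns.iloc[end - window:end]
        rfw = rf.iloc[end - window:end].values
        scores = sub.apply(lambda col: mrar(col.values, rfw))
        out[returns.index[end - 1]] = scores.rank(pct=True)
    return pd.DataFrame(out).T

ranks = rolling_ranks(returns, rf)
shifted = ranks.diff().abs() >= 0.10   # moved at least one decile
pct_shifted = shifted.mean(axis=1)     # share of funds shifting, by month
```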

[Figure: Proportion of funds with a significant month-over-month change in 5 year MRAR rank, trailing 5 years]

Naturally, a fund will shift in rank when a return entering or leaving its measurement window is particularly high or low relative to its peers. This will be the case with all risk and performance measures derived from historical returns.
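
A toy illustration of the window mechanics, with hypothetical numbers: dropping a single Crisis-scale month from a 60 month window cuts the measured downside deviation sharply, even though nothing about the fund's recent behavior has changed.

```python
import numpy as np

rng = np.random.default_rng(0)
recent = rng.normal(0.01, 0.03, 59)     # 59 calm months (hypothetical)

with_crisis = np.append(-0.20, recent)  # window still holds a -20% month
without = np.append(recent, 0.01)       # same window one month later

def semidev(r):
    """Semi-standard deviation of monthly returns below zero."""
    d = np.minimum(r, 0.0)
    return np.sqrt(np.mean(d ** 2))

print(semidev(with_crisis), semidev(without))  # downside risk drops sharply
```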

Who are the beneficiaries and victims of this shift? Let’s look at a few examples of the most extreme changes. The following six funds exhibit similarly significant decile shifts under all three ranking systems:

[Table: Six funds exhibiting similarly significant decile shifts under all three ranking systems]

It should be no surprise that the funds with a recent increase in rank share some characteristics. All three (ClearBridge Global Growth C, Upright Growth and Fidelity Growth & Income Portfolio) experienced significantly deeper drawdowns than their peers during the Financial Crisis, as well as higher semi-standard deviations. During the current 5 year window, however, these values are much closer to the universe average for ClearBridge Global Growth and Fidelity Growth & Income, while Upright Growth still appears among the riskiest but is also a significant outperformer.

Conversely, the funds with a recent decrease in rank (Invesco Charter A, American Independent Stock I and Provident Trust Strategy) were conservative relative to their peers during the Crisis, experiencing shallower drawdowns and lower semi-standard deviations. Post-Crisis, their values are again much closer to the universe average; all underperform to some degree while still appearing less risky in varying degrees.

In these select cases (and numerous others), it appears that the changes in 5 year rankings are due to the removal of the 2007-2009 losses rather than to any more recent events. With the elimination of the Crisis, potentially more aggressive funds are being rewarded in 5 year rank for the higher returns their strategies are generating in the post-Crisis period, without fully accounting for the risk they may have assumed in the process.[5]

This supposition is supported by the chart below, which shows the dispersion in 5 year semi-standard deviation between top ranking (95%) and bottom ranking (5%) funds. This dispersion in downside deviation narrows significantly in recent months, making it more difficult to distinguish the riskier funds from their more conservative counterparts in the absence of any large downside events.
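
A rough sketch of how the chart's spread might be computed: each month, compare the trailing 60 month semi-standard deviation of funds ranked near the 95th percentile with those near the 5th. It reuses the hypothetical rolling_ranks() and semidev() helpers from the earlier sketches; the tolerance band for "near" is our choice.

```python
import pandas as pd

def semidev_spread(returns, ranks, window=60, hi=0.95, lo=0.05, tol=0.05):
    """Monthly gap in trailing-window semi-standard deviation between
    funds ranked near the 5th percentile and those near the 95th."""
    spreads = {}
    for date, r in ranks.iterrows():
        end = returns.index.get_loc(date) + 1
        sub = returns.iloc[end - window:end]
        sd = sub.apply(lambda col: semidev(col.values))
        top = sd[(r - hi).abs() <= tol].mean()     # funds ranked near 95%
        bottom = sd[(r - lo).abs() <= tol].mean()  # funds ranked near 5%
        spreads[date] = bottom - top               # a narrowing gap blurs risk
    return pd.Series(spreads)
```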

[Figure: Dispersion in 5 year semi-standard deviation between top ranking (95%) and bottom ranking (5%) funds]

So what does the post-Crisis environment mean for fund ratings? The illustrations above lead to some takeaways:

  • The time horizon of the rating matters. Investors should be aware of the environment in which a fund is being evaluated. In particular, ratings that rely heavily on a particular risk measure should ensure that events in which that measure is significant are included in the evaluation period. Analysts may want to perform additional analysis before watch-listing or terminating a fund following ratings volatility over a period without many significant downside events. Absent such adjustments, strong down market performance can be punished after prolonged bull markets, and investors may be hurt in the future by a lack of conservative managers in their lineups. One option is to maintain a minimum allocation to managers with superior down market performance.
  • Homogeneity matters. Because most fund scorecards or ratings operate within broad asset classes, like large cap value, where conservative funds are evaluated in the same universe as aggressive funds, volatility in ratings is likely. Creating more distinct peer groups than the broad categories used by popular fund rating systems may help reduce ratings volatility.
  • Scorecards, screens and rating systems should be evaluated on an ongoing basis to ensure they meet the needs of investors. One should be aware of time periods where one input outweighs another (e.g. when risk measures contribute less to a fund's ranking than its relative return). This may involve back-testing for stability, as well as understanding a system's sensitivities to its various inputs; a simple sensitivity check is sketched after this list. It may also mean that investment professionals alter the weightings and/or screens of their systems to better reflect their priorities in changing market conditions.
  • Well-constructed manager rating systems can be exceedingly helpful to analysts and investors in the pursuit of successful systematic decision-making. However, all quantitative and/or fundamental rating systems will have their limitations. Investors should still seek to understand other aspects of a given fund using thorough qualitative analysis.
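
As referenced in the back-testing bullet above, one simple sensitivity check is to perturb a composite score's weights and measure how much the resulting ranking reorders. The sketch below does this with Spearman rank correlation on hypothetical data; the weights, universe size and score construction are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

def composite(return_rank, risk_rank, w_ret):
    """Hypothetical two-factor score: weighted blend of a return rank
    and a risk rank (both in [0, 1])."""
    return w_ret * return_rank + (1 - w_ret) * risk_rank

rng = np.random.default_rng(1)
ret_rank = rng.random(500)                          # hypothetical universe
risk_rank = 0.5 * ret_rank + 0.5 * rng.random(500)  # partially related factor

base = composite(ret_rank, risk_rank, w_ret=0.6)
for w in (0.4, 0.5, 0.7, 0.8):
    rho, _ = spearmanr(base, composite(ret_rank, risk_rank, w))
    print(f"w_ret={w:.1f}: rank correlation with base ranking = {rho:.3f}")
```

A rating whose ordering collapses under small weight changes is telling you that its verdicts are artifacts of the weighting rather than robust differences between funds.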

The recent volatility in rating systems as the Financial Crisis fades into the rearview mirror and out of 5 year windows underscores the importance of care when choosing and/or constructing fund rating systems. Additionally, these recent changes highlight the importance of monitoring rating systems on a regular basis to account for changing market dynamics.

Many investors find that custom rating systems and scorecards are the optimal choice for fund analysis, monitoring and selection. Custom rating systems can better accommodate the nuance necessary to capture the optimal range of investment opportunities and dynamically evolve to incorporate unique historical data sets. A custom approach is not without its challenges, requiring truly flexible tools, analytics and experience across multiple market cycles and investing fads.

As a provider of custom fund rating system design and review, MPI is committed to continued research in this area. If you or your organization would like to review your rating system or current approach, please contact info@markovprocesses.com.

Footnotes

  • [1] While the specific statistics used differ, most quantitative ratings consist primarily of a return component and a risk component. For example, Morningstar ranks its MRAR statistic, or Morningstar risk-adjusted return. Lipper provides 5 distinct ranks, one of which is Consistent Return, which uses a combination of the Hurst exponent and an “Effective Return” measure.
  • [2] The universe consists of 545 funds from the Morningstar database in the Large Blend, Large Growth and Large Value categories with a minimum of 180 months’ data available.
  • [3] Note that while we follow Morningstar’s calculation of the statistic, we make no adjustment to returns for loads, nor do we take into account separate share classes of the same fund.
  • [4] To review results, please contact info@markovprocesses.com with the subject line “5 Year Data”.
  • [5] By no means do we suggest that this is the case for all funds, or that this is the only reason for changes in fund ranks. There are many reasons beyond this for a fund’s change in ranking, including manager changes, changes in investment approach or risk management practices and, of course, changes in the behavior of the other funds in the universe.