Performance Perspectives Blog

The 10 “Not So Good” Things About GIPS

Dec 20, 2011

Over the past several months, I’ve had conversations with other PMPs (Performance Measurement Professionals) about the GIPS(R) Standards (Global Investment Performance Standards). While we agree that the standards are exceptional, necessary, highly successful, and something we all support, we also agree that there are some shortcomings (as one might expect with such a complex document). And, of course, there are many things about these standards we like a lot. And so, I thought I’d put two lists together, representing both sides.

I first thought of referring to this first list as the “10 worst things” about GIPS, because I will identify the “10 best things” about the Standards tomorrow, but thanks to feedback from colleagues I decided that the word “worst” might engender some ill feelings among some folks (which is not my intent or desire), and perhaps sounds really negative. So what to call the list, I wondered? Someone suggested making it like Santa’s “naughty and nice” list, but I thought that didn’t fit well. How about the “10 could be better” list? That wasn’t very crisp either. After much thought, I settled on this euphemistic title: “The 10 ‘Not So Good’ Things About GIPS.”

And so, here is my list of the 10 “not so good” things about the standards, and in David Letterman style: 

————————–

#10 Performance Examinations: The GIPS draft didn’t include examinations, but somehow they were introduced when the first edition was published. They have virtually nothing to do with the standards, and cost managers tens (and in some cases, hundreds) of thousands of dollars a year. TSG is happy to do them for our clients, as we are a for-profit company, but we discourage our clients from taking on the expense (compliance is an investment; verification is an investment; examinations are usually an expense). Most of the clients we’ve won from other verifiers, who previously had examinations done, have since ceased having them; and none of the clients for whom we were the first verifier have them done.

Are examinations necessarily a “bad” thing? No, perhaps not; perhaps they fulfill a need that some folks have, to undergo an independent review of the underlying data, to enhance the credibility of their presentation. But I believe they’re overdone, and that too many firms have ALL of their composites examined, or ALL of their “marketed” composites examined, when this isn’t really necessary.

#9 Model fees: GIPS 2010 introduced a term: “model fees.” But, there is no definition for it! I’ve asked for one, but am still waiting. You would think that if a new term is introduced, it would have a definition, but this did not occur. Maybe we’ll see one in the 2015 edition.

#8 The recommendation that compliant firms annually give their existing clients a copy of the composite(s) their account is in. This idea, when introduced in the 2010 Exposure Draft, was met with strong opposition. But rather than listen to those opinions, the EC decided to make this part of the standards. And since recommendations are, by definition, “best practice,” someone evidently thinks that all firms should do it.

I am curious to know how many members of the EC, who work for asset managers, send these reports to all of their clients annually; I suspect few, if any. Reporting to existing clients is arguably “out of scope” for GIPS. This was a bad idea, but unfortunately it’s with us.

#7 Introducing a rule disallowing intra-month additions of accounts in a Q&A: There is no doubt that the recent addition of this rule was in response to my vocal opposition to the aggregate method, as several of my examples pointed out how the aggregate method could yield nonsensical results when accounts were added during the month. However, after further reflection, and the realization that the aggregate method, by definition, isn’t supposed to look like the asset-weighted methods, since it’s measuring something completely different, it turns out that intra-month additions for the aggregate method are perfectly fine; they shouldn’t be allowed for asset-weighted methods. But more importantly, introducing a rule in a Q&A, with no basis for it, no opportunity for public comment, and no effective date? One might suggest (as a colleague did to me) that the concept is good, but it’s lacking (a) a clear definition of the concept of a “performance measurement period,” and (b) a basis for the Q&A in the existing provisions and guidance, which would justify the requirement. I recommend retracting the Q&A and treating this rule change as any other would be treated: by putting it out for public comment, with sufficient supporting language to make its application clear.
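To see the kind of distortion that was at issue, consider a hypothetical month (the numbers below are purely illustrative, not from any GIPS example). One common way to implement the aggregate method is to treat the composite as a single portfolio and apply Modified Dietz, so an account that joins mid-month enters as an external cash flow:

```python
def modified_dietz(bmv, emv, flows):
    """Modified Dietz return. `flows` is a list of (amount, weight) pairs,
    where weight is the fraction of the period the flow was invested."""
    net_flows = sum(amount for amount, _ in flows)
    weighted = sum(amount * w for amount, w in flows)
    return (emv - bmv - net_flows) / (bmv + weighted)

# Account A is in the composite all month: 100 -> 105, a 5% return.
# Account B (value 1,000) joins halfway through the month and is flat
# thereafter. Aggregate method: B's entry value is a cash flow
# invested for half the period.
agg = modified_dietz(bmv=100, emv=105 + 1000, flows=[(1000, 0.5)])
print(f"{agg:.4%}")  # ~0.83%, even though the only full-period account made 5%
```

The incoming account’s size swamps the denominator, diluting the composite’s reported return far below what any account actually earned for the full period.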

#6 No GIPS Handbook. We still do not have a GIPS Handbook, which is odd/unfortunate, because GIPS 2010 has been in effect for a full year, and so many rely on this important document. Having worked on the current handbook, when I was a member of the Investment Performance Council and Interpretations Subcommittee, I know how challenging revisions can be, especially following major changes to the Standards. But I also know how important the document is. We have had several inquiries from our clients about it, and its absence is a problem. Hopefully it will be here soon.

#5 Dropping after-tax. The 2005 edition of GIPS included after-tax provisions for the U.S. and Italy. It was decided to drop them completely from the 2010 edition, rather than (a) allow them to stay, (b) encourage other countries to offer rules, or (c) come up with generic rules. This has resulted in a great deal of confusion, at least here in the U.S. In addition, it has removed an important element: after-tax performance. 

Perhaps the EC could have, and perhaps should have, made more of an effort to come up with something here. And perhaps a USA-bias was the problem, and a feeling that a global standard would need to be as specific as the existing U.S. guidance, causing people to think any global tax provisions would be bogged down with accounting details. But, the right answer may have been to become more principles-based (with the necessary disclosures) rather than rules-based, with respect to after-tax provisions and guidance. Perhaps something to take up with the 2015 version. My colleague, John Simpson, CIPM, has worked on these rules for many years, and would be a great candidate to lead such an effort. And, I will toss in Douglas Rogers’ name, too, as he, like John, is a recognized expert on this subject, and could contribute greatly to the assembly of such a document.

#4 The Error Correction Guidance Statement: It would be a bit unfair (and not my intention) to label the entire GS as a bad idea, especially since I crafted the initial working draft several years ago (which was based on an article Stefan Illmer and I wrote). But the version that went into effect in January 2010 was poorly handled: it introduced several changes to the earlier draft (which had been approved by the IPC), including requirements, which arguably should have been put to the public for comment. Once the EC saw how the ideas were received when they were made part of the standards (in the 2010 exposure draft), they pulled them from the standards. But, since they were in the GS, they would still be required!

The most recent GS edition is much improved, save for the third level of the recommended error correction hierarchy, which is a requirement to disclose immaterial errors, for an indefinite period! There’s a reason the public is asked to comment! Oh, and to clarify: the concept of the guidance is both great and necessary, and we fully support it (heck, it may have been my idea, come to think about it!); the problem is how it was introduced.

#3 The aggregate method for composite returns. I have commented on this before, but it’s clearly in my top (I mean, bottom) list. The aggregate method tells us how the composite did: who cares? Who manages the composite? Why wouldn’t we want to know how the average account did? Okay, while I’d prefer the equal-weighted average, I’ll take the asset-weighted average any day over the composite’s return, which I believe is useless information.

But perhaps more importantly, the GIPS EC needs to define what “composite definition” means, as I have suggested in the past, as the two broad calculation areas (asset-weighted methods vs. aggregate) produce two very different numbers, representing two very different things.
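To make the “two very different things” concrete, here is a sketch with hypothetical numbers (Modified Dietz used as the flow-adjusted return, as it commonly is), comparing the asset-weighted composite return with the aggregate method for the same month:

```python
def modified_dietz(bmv, emv, flow, flow_weight):
    """Modified Dietz return with a single external flow."""
    return (emv - bmv - flow) / (bmv + flow * flow_weight)

# Two accounts; B receives a contribution of 100 halfway through the month.
a = modified_dietz(100, 110, 0, 0.5)    # 10.00%
b = modified_dietz(100, 210, 100, 0.5)  # ~6.67%

# Asset-weighted method: weight each account's return by beginning value.
asset_weighted = (100 * a + 100 * b) / 200  # ~8.33%

# Aggregate method: combine the accounts into one "portfolio" first.
aggregate = modified_dietz(100 + 100, 110 + 210, 100, 0.5)  # 8.00%

print(f"{asset_weighted:.2%} vs {aggregate:.2%}")
```

Same accounts, same month, yet the average-account view and the composite-as-one-portfolio view disagree; the difference grows with the size and timing of the flows.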

#2 The use of asset-weighting to measure performance. Back when the Association for Investment Management & Research (CFA Institute’s former moniker) was crafting the AIMR-PPS(R), two groups (the ICAA (now the IAA) and IMCA) opposed the use of asset-weighting of portfolio returns, as it could skew the results in favor of the larger account(s). But AIMR refused to budge, and asset-weighting remains. Over the past year or so I’ve learned that they (the ICAA and IMCA) were right: equal-weighting is a much better way to assemble the returns. For example, a very large mutual fund’s return can dwarf the returns of a few smaller accounts, such that the composite’s return is essentially that of the mutual fund.

Perhaps the best solution is to require both asset- and equal-weighted composite returns. One could argue that prospects should have some sense of returns for larger versus smaller portfolios in the strategy. 
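A quick sketch of the skew, using hypothetical numbers: a $1 billion mutual fund alongside three $1 million accounts in the same strategy.

```python
# Beginning values and monthly returns for four accounts in one strategy:
# a 1B mutual fund that returned 2%, and three 1M accounts that returned 10%.
values  = [1_000_000_000, 1_000_000, 1_000_000, 1_000_000]
returns = [0.02, 0.10, 0.10, 0.10]

asset_weighted = sum(v * r for v, r in zip(values, returns)) / sum(values)
equal_weighted = sum(returns) / len(returns)

print(f"asset-weighted: {asset_weighted:.2%}")  # ~2.02% -- essentially the big fund
print(f"equal-weighted: {equal_weighted:.2%}")  # 8.00%
```

The asset-weighted figure is, for all practical purposes, the mutual fund’s return; the equal-weighted figure actually tells a prospect something about the typical account.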

#1 The decision to eliminate the opportunity for firms to allocate cash to carve-outs. Back when the earlier (2005) version of GIPS was being crafted, the rule change was already planned to occur in 2005. But the IPC (Investment Performance Council; the predecessor group to the GIPS EC (Executive Committee)) listened to the overwhelming opposition to the change, and agreed to push it back five years. But when the 2010 edition was being considered, the EC refused to open this point up for comment again; how unfortunate. As a result, most firms that had been using carve-outs have had to stop, which was an unnecessary inconvenience; this isn’t a desirable outcome.

————————–

Please do not get me wrong: there is SO much more good about the standards than bad (sorry, I mean not so good); and, with any such document, there is bound to be disagreement. And, being the somewhat opinionated person that I am, it’s not surprising that I could craft such a list. To balance them out, I will post my “10 best things” tomorrow!

Oh, and if you (a) have your OWN list or (b) wish to comment on mine, please let us know!
