I participated in a panel discussion last week for the New York Society of Security Analysts (NYSSA). Questions arose regarding the use of standard deviation with GIPS(R) (Global Investment Performance Standards). I used my standard graphic, which distinguishes between this statistic being used as a risk measure (a longitudinal, or across-time, view, looking at 36 months of composite returns) and as a measure of dispersion (for a single period, where we look at the returns of the accounts within the composite to see how disparate they are).
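To make the distinction concrete, here is a minimal sketch (mine, not from the panel or the Standards themselves), using hypothetical returns and assuming an equal-weighted, population standard deviation:

```python
import statistics

# Risk measure: longitudinal view, the variability of 36 monthly composite returns
monthly_composite_returns = [0.012, -0.008, 0.015, 0.004] * 9   # 36 hypothetical months
risk_std_dev = statistics.pstdev(monthly_composite_returns)

# Dispersion measure: cross-sectional view, the spread of account returns for one period
annual_account_returns = [0.082, 0.075, 0.091, 0.068, 0.080]    # hypothetical account returns
dispersion_std_dev = statistics.pstdev(annual_account_returns)

print(f"Risk (across time):           {risk_std_dev:.4%}")
print(f"Dispersion (across accounts): {dispersion_std_dev:.4%}")
```

It is the same formula in both cases; what changes is the series it is applied to.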
One individual mentioned that, as a dispersion measure, it measures the account returns relative to the composite’s return. While that would, I believe, be the ideal, since one should want to know how returns vary relative to the composite, in reality most firms measure dispersion relative to the average of the accounts that were present for the full period, and that average can be quite a different number.
Consider this: we have a composite that begins with 30 accounts; during the year, 10 disappear and 10 more are added, meaning 20 are present for the full year. The composite’s return is derived on a monthly basis from the accounts present each month; these monthly returns are then linked to produce the composite’s return for the year. If one simply runs standard deviation across the 20 accounts that were present all year, it won’t consider the composite’s return whatsoever; to bring that return into the mix, one must calculate the standard deviation manually (i.e., step by step), using the composite’s return as the mean against which each account’s return is measured, as in the sketch below.
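Here is a short sketch, with hypothetical numbers, showing the two calculations side by side; it assumes an equal-weighted, population standard deviation, and shrinks the 20 full-period accounts down to five for brevity:

```python
import math

account_returns = [0.082, 0.075, 0.091, 0.068, 0.080]  # full-period account returns (hypothetical)
composite_return = 0.095  # annual composite return, linked from ALL accounts present each month (hypothetical)

n = len(account_returns)

# Common approach: deviations measured from the average of the full-period accounts
account_average = sum(account_returns) / n
dispersion_vs_account_average = math.sqrt(
    sum((r - account_average) ** 2 for r in account_returns) / n
)

# "Manual" approach: deviations measured from the composite's own return
dispersion_vs_composite = math.sqrt(
    sum((r - composite_return) ** 2 for r in account_returns) / n
)

print(f"Around the account average:   {dispersion_vs_account_average:.4%}")
print(f"Around the composite return:  {dispersion_vs_composite:.4%}")
```

Because the composite’s return also reflects the accounts that entered or left during the year, it can sit some distance from the full-period accounts’ average, and the second figure will be at least as large as the first, and larger whenever the two reference points differ.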
My suspicion is that few firms employ this more accurate approach. Is there much of a difference? Probably not. However, I think it unfortunate that we weren’t clearer as to how this measure is to be derived. Perhaps we will be in the future. I’ll address this in greater detail in our February newsletter.