Performance Perspectives Blog

Sampling … what does it mean?

Nov 11, 2009

The GIPS® standards allow verifiers to use sampling to conduct their reviews. This makes perfect sense … otherwise, the costs might be prohibitive if every account, for every time period, for every composite had to be checked. Also, sampling has long been an acceptable method to test hypotheses, evaluate opinions, and conduct research.

As Pedhazur & Schmelkin point out, “Sampling permeates nearly every facet of our lives…decisions, impressions, opinions, beliefs, and the like are based on partial information…limited observations, bits and pieces of information, are generally resorted to when forming impressions and drawing conclusions about people, groups, objects, events, and other aspects of our environment.” They reference Samuel Johnson who said, “You don’t have to eat the whole ox to know that the meat is tough.” They also wrote that “Formal sampling is a process aimed at obtaining a representative portion of some whole, thereby affording valid inferences and generalizations to it.”

But what DOES sampling mean in the world of GIPS verifications? Presumably, it’s the selection of an adequate number of observations to yield enough information about the firm to allow the verifier to draw a reasonable conclusion regarding the firm’s composite construction process. But what percentage is adequate? The standards offer no guidance.

Perhaps this is like the word “obscenity” and the famous remark of former U.S. Supreme Court Justice Potter Stewart (a statement that my friend and associate Herb Chain often cited when we taught GIPS courses together): that he didn’t know how to define it, but knew it when he saw it. The same can be said of the word “materiality,” which is also difficult to pin down in much detail. But after a (very) brief review of opinions from other firms, we quickly realized that there is some disparity regarding sampling and verification.

We are speaking with a client that has roughly 1,000 composites … a very large number by anyone’s scale, yes? What size would constitute a relevant sample? We did a “mini survey” and, perhaps not surprisingly, got a mix of responses: at the low end roughly 2%, and at the high end 10-15%. We tend to lean more towards the 10-15% figure, with the expectation that we would look at the composites the firm markets, plus additional composites selected on a mix of “random” and “non-random” bases. In other words, each year we won’t be looking at the same composites, but will vary many of them.

What if the verifier only looks at the firm’s “marketed” composites? Some might think this makes sense, since it focuses on those composites that will most likely be presented to prospects. But if the firm knows that only these composites will be reviewed, what motivation is there to bother with the others? Such a selection is “biased,” and hardly a fair way to evaluate a firm’s compliance. A more appropriate approach would be to include the “marketed” composites, but also a random selection of “non-marketed” ones, in order to conduct a better, more conclusive, and objective test.
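To make the approach concrete, here is a minimal sketch of the selection logic described above: take every marketed composite, then fill the rest of the sample with a random draw of non-marketed ones until a target fraction (say, the 10% figure from our mini survey) is reached. The function name, the 1,000-composite firm, and the count of marketed composites are all hypothetical, for illustration only — nothing in the GIPS standards prescribes this procedure.

```python
import random

def select_verification_sample(composites, marketed, fraction=0.10, seed=None):
    """Illustrative sampling sketch: all marketed composites, plus a
    random draw of non-marketed ones, up to `fraction` of the total."""
    rng = random.Random(seed)
    non_marketed = [c for c in composites if c not in set(marketed)]
    target = max(1, round(fraction * len(composites)))
    extra = max(0, target - len(marketed))  # slots left for non-marketed
    return list(marketed) + rng.sample(non_marketed, min(extra, len(non_marketed)))

# Hypothetical firm: 1,000 composites, 30 of which are marketed
composites = [f"composite-{i}" for i in range(1000)]
marketed = composites[:30]
sample = select_verification_sample(composites, marketed, fraction=0.10, seed=42)
print(len(sample))  # 100: all 30 marketed + 70 randomly chosen non-marketed
```

Because the non-marketed portion is drawn at random (and the seed would differ year to year), the composites examined vary from one verification to the next — which is exactly the point of avoiding a predictable, biased selection.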

It’s important to remember that the GIPS standards do not make a distinction between “marketed” and “non-marketed” composites. In fact, the terms don’t exist within the standards. Unfortunately, certain verifiers have, over the years, promoted the notion that firms need only be concerned with the “marketed” ones, in spite of being corrected on this multiple times. Such a posture only results in confusion and, unfortunately, many firms believing they’re compliant when, in fact, they aren’t: compliance is at the “firm” level, not at the “composite” or “marketed composite” levels.

Pedhazur, Elazar J. & Liora Pedhazur Schmelkin. Measurement, Design and Analysis. Psychology Press. 1991.
