Performance Perspectives Blog

Math mistakes matter

Aug 23, 2011

In this past weekend’s Wall Street Journal, Carl Bialik’s article, “Technology Can’t Save Us From Math Mishaps,” brought home the problems we face daily in trying to ensure a relatively high degree of accuracy in the information we report.

He cited S&P’s $2 trillion error, which didn’t dissuade the firm from lowering the U.S. government’s coveted AAA rating. It is interesting how this “error” was characterized: “The Treasury Department claimed that S&P originally used the wrong number – projected gross domestic product growth instead of projected inflation – to calculate what U.S. government spending and total U.S. debt would be for the next 10 years. S&P spokesman John Piecuch says that the original number ‘was not a mathematical error at all,’ but instead based on a different assumption about spending…John Bellows, Treasury’s acting assistant secretary for economic policy, called it a ‘basic math error.’”

Bialik further pointed out that a common cause of many wrong numbers is “insufficient safeguards to catch errors before the numbers are released.” Amen. But how does one come up with these “safeguards”? That’s the challenge. Mathematician Thomas R. Nicely is quoted as saying, “The best method of preventing [errors] is to have two or more independent procedures for determining [or checking] the same result.” If we were to apply this to the world of returns, would it mean using two formulas (Modified Dietz and a daily method, for example)? That, to me, wouldn’t work on its own, since the underlying data would be the same, and the data is usually the cause of the errors, is it not?
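Still, to make the idea concrete, here’s a minimal sketch (in Python, with invented data) of what such a cross-check might look like: the same period’s return computed once with Modified Dietz and once by geometrically linking daily returns, with any disagreement beyond a tolerance flagged for review. The 5-basis-point tolerance and the start-of-day treatment of cash flows are assumptions, not prescriptions; and, per the caveat above, both calculations still draw on the same underlying data.

    def modified_dietz(bmv, emv, flows, days_in_period):
        """Modified Dietz: (EMV - BMV - CF) / (BMV + day-weighted flows).
        flows is a list of (day_received, amount) pairs."""
        net_flow = sum(amount for _, amount in flows)
        weighted = sum(amount * (days_in_period - day) / days_in_period
                       for day, amount in flows)
        return (emv - bmv - net_flow) / (bmv + weighted)

    def linked_daily(market_values, daily_flows):
        """Geometrically link daily returns. market_values[d] is the value
        at the end of day d (index 0 = start of period); daily_flows maps
        a day to its net flow, assumed to arrive at the start of that day."""
        growth = 1.0
        for d in range(1, len(market_values)):
            start_value = market_values[d - 1] + daily_flows.get(d, 0.0)
            growth *= market_values[d] / start_value
        return growth - 1.0

    TOLERANCE = 0.0005  # 5 basis points; an arbitrary choice

    # Ten-day period, $10 contributed at the start of day 5 (invented data).
    flow_day, flow_amt = 5, 10.0
    mv = [100.0, 101.0, 102.5, 101.8, 103.0, 113.5,
          114.0, 115.2, 114.8, 116.0, 117.1]

    r_dietz = modified_dietz(mv[0], mv[-1], [(flow_day, flow_amt)],
                             days_in_period=10)
    r_daily = linked_daily(mv, {flow_day: flow_amt})
    status = "OK" if abs(r_dietz - r_daily) <= TOLERANCE else "FLAG FOR REVIEW"
    print(f"Dietz {r_dietz:.4%}  daily-linked {r_daily:.4%}  {status}")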

One of our consulting clients compared their client returns to the S&P 500 and flagged any that were off by a certain amount for review. On the surface this might appear reasonable, but their clients are primarily retail, non-discretionary accounts that (a) aren’t investing relative to the S&P 500 and (b) may hold only one or two securities, so the check isn’t likely to be very helpful. If the manager (or investor) is managing relative to a particular index, however, such a check is probably quite reasonable.
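For managers who do run against an index, a sketch of that kind of benchmark flag might look like the following; the 10-percentage-point band and the account returns are invented for illustration.

    def flag_vs_benchmark(account_return, benchmark_return, band=0.10):
        """Flag accounts straying more than `band` (10 points here) from
        the index; sensible for index-relative mandates, noisy for a
        one- or two-stock retail account."""
        return abs(account_return - benchmark_return) > band

    sp500 = 0.055                                  # invented period return
    accounts = {"A123": 0.042, "B456": -0.310, "C789": 0.065}
    for acct, ret in accounts.items():
        if flag_vs_benchmark(ret, sp500):
            print(f"{acct}: {ret:.1%} vs S&P 500 {sp500:.1%}; review")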

It’s relatively easy to establish wide ranges to use as tests (e.g., flag any return below -50% or above 50% for a given period). But what about the errors that aren’t quite that large, and aren’t quite so obvious?
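A sketch of such a band test, with (arbitrary) bands that widen as the measurement period lengthens:

    BANDS = {"monthly": 0.15, "quarterly": 0.25, "annual": 0.50}  # arbitrary

    def out_of_band(ret, period):
        """True if the return falls outside the +/- band for its period."""
        return abs(ret) > BANDS[period]

    for ret in (0.012, -0.620, 0.034, 0.580):
        if out_of_band(ret, "annual"):
            print(f"{ret:.1%} breaches the +/-50% annual band; review")

As the prose above suggests, tests like these catch only the grossest errors; the subtler ones sail straight through.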

Getting the soundness of the formulas down shouldn’t be a problem; it’s the data, right? And this is where it gets difficult: processing corporate actions properly, accounting for trades, capturing the right prices, getting the exchange rates correct, and so on. Data isn’t a particularly “sexy” topic, is it (as opposed to returns, attribution, risk, and GIPS(R), that is)? But it’s critical. And sadly, it’s one of those areas where, when everything goes well, the folks in charge probably don’t hear very much; when errors arise, they do.
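One implication is that the checks worth having operate on the inputs, not just the finished returns. Here’s a speculative sketch of two such screens: a stale-price test, and a one-day price-jump test that might betray an unbooked split or a bad tick. The thresholds and the data layout are assumptions.

    from datetime import date

    def stale_price(price_history, as_of, max_age_days=5):
        """Flag a security whose most recent price is too old."""
        last_priced = max(d for d, _ in price_history)
        return (as_of - last_priced).days > max_age_days

    def price_jump(price_history, threshold=0.5):
        """Flag a one-day move beyond `threshold`; often a bad tick or an
        unprocessed split rather than a genuine move of that size."""
        prices = [p for _, p in sorted(price_history)]
        return any(abs(curr / prev - 1) > threshold
                   for prev, curr in zip(prices, prices[1:]))

    history = [(date(2011, 8, 18), 40.00),
               (date(2011, 8, 19), 19.90)]   # a 2:1 split, or an error?
    if price_jump(history):
        print("price roughly halved overnight; check for an unbooked split")
    if stale_price(history, as_of=date(2011, 8, 31)):
        print("no price since Aug 19; stale")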

Best practices in data management are a topic worthy of further discussion. If you have ideas you’d like to share, please pass them along. Thanks!
