I just got a note from a colleague in response to this month’s newsletter (/images/stories/PDF/newsletters/jun09nl.pdf) suggesting that my use of the term “volatility” for standard deviation is incorrect, and that “variability” is more appropriate.
I’m not sure whether this is a semantic issue or not. Bill Sharpe referred to standard deviation as variability in his 1966 paper, where his famed risk-adjusted-return measure was introduced: he called it “reward-to-variability,” and its risk measure is standard deviation. He went further and referred to Jack Treynor’s measure as “reward-to-volatility,” where the risk measure is beta. Did he make a judgment call, feeling that one term fit one risk measure better than the other? Or was he simply looking for a way to distinguish the two measures and assigned a synonym to each (without resorting to the eponymous approach, which history has since taken care of for us)?
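To make the distinction concrete, here is a minimal sketch of the two measures Sharpe named, using hypothetical excess-return series (the data and variable names are illustrative, and returns are assumed to be already net of the risk-free rate):

```python
import statistics

# Hypothetical monthly excess returns for a portfolio and its benchmark
portfolio = [0.021, -0.013, 0.034, 0.008, -0.005, 0.019]
market = [0.015, -0.010, 0.025, 0.010, -0.008, 0.012]

mean_p = statistics.mean(portfolio)
stdev_p = statistics.stdev(portfolio)  # standard deviation: Sharpe's "variability"

# Beta = covariance with the market / variance of the market: Treynor's "volatility"
mean_m = statistics.mean(market)
n = len(portfolio)
cov_pm = sum((p - mean_p) * (m - mean_m)
             for p, m in zip(portfolio, market)) / (n - 1)
beta_p = cov_pm / statistics.variance(market)

sharpe = mean_p / stdev_p    # reward-to-variability
treynor = mean_p / beta_p    # reward-to-volatility

print(f"std dev: {stdev_p:.4f}  beta: {beta_p:.4f}")
print(f"Sharpe (reward-to-variability): {sharpe:.4f}")
print(f"Treynor (reward-to-volatility): {treynor:.4f}")
```

The point of the sketch is simply that the numerator (mean excess return) is the same in both ratios; only the risk measure in the denominator differs, which is exactly where the variability-versus-volatility naming question arises.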
I would argue that most people in our industry see standard deviation as a measure of volatility – to show how VOLATILE the market is, for example, not how values vary from day to day. Is there really much of a difference? It’s unclear to me.