No. of Recommendations: 21
Several posters have mentioned the difficulty of getting better volatility estimates for annual screens. I thought it would be interesting to use the 1986-1998 daily data provided by Peter Kuperman in June to calculate annualized GSD. Then compare it to GSD measures from other methods.
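
For anyone who wants to redo the arithmetic, the calculation is along these lines (a Python sketch; the ~252-trading-day year, the GSD = 100*(exp(sd*sqrt(N)) - 1) convention, and the file/column names are assumptions of mine for illustration, not necessarily the board's exact formula):

import numpy as np
import pandas as pd

def annualized_gsd(ratios, periods_per_year):
    """Annualized GSD from per-period return ratios (e.g. 1.02 for +2%).
    Convention assumed here: GSD = 100 * (exp(sd(ln R) * sqrt(N)) - 1)."""
    log_r = np.log(np.asarray(ratios, dtype=float))
    return 100.0 * (np.exp(log_r.std(ddof=1) * np.sqrt(periods_per_year)) - 1.0)

# Hypothetical file and column names, for illustration only:
# a date-indexed series of daily screen values.
daily = pd.read_csv("screen_daily.csv", parse_dates=["date"], index_col="date")["value"]

gsd_d = annualized_gsd((daily / daily.shift(1)).dropna(), 252)   # ~252 trading days/year

monthly = daily.groupby([daily.index.year, daily.index.month]).last()
gsd_m = annualized_gsd((monthly / monthly.shift(1)).dropna(), 12)

print(f"GSD(D) = {gsd_d:.0f}   GSD(M) = {gsd_m:.0f}")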

Recall that Peter's foundation uses the following screens:

Key100 1-4, annual
Spark 1-5, annual
PEG 1-4, semiannual
RSO 1-4, monthly
PEGO 1-4, monthly (built by overlapping all 10 stocks from each of PEG13 and PEGRSW)

Peter announced the availability of daily data for these screens in post #72493:

http://boards.fool.com/Message.asp?id=1030013013107000

                        Key100      Spark       PEG         RSO         PEGO        Blend
                        CAGR  GSD   CAGR  GSD   CAGR  GSD   CAGR  GSD   CAGR  GSD   CAGR  GSD
Using daily data          36   33     36   30     47   38     53   39     63   37     49   28
Backtester, Jan start     37   19     37   30     48   26     53   24     65   29     50   17
Backtester, GSD avg            36          27          29         n/a         n/a          23
  over all start months
Backtester, GSD(M) for                                              40          37         n/a
  monthly screens

Observations:

o The Backtester Jan start GSDs are generally much lower than the GSD(D) (the daily analog of GSD(M); GSD calculated from daily data).

o The GSD(M)s are very close to the daily GSDs for the monthly screens.

o The Backtester GSDs averaged across all start months appear to improve on a single-start-month estimate, but are not necessarily close to the daily GSD (see PEG).

o There are some small differences in CAGR between the backtester and Peter's daily data. This was addressed earlier and is due to Friday purchases/sales (at closing prices) in Peter's daily data versus Monday purchases/sales in the backtester.


The backtester link for the blended screen is:

http://gritton.org/ws/blend/?v869801ST12ss15U20nST12kc14U20nST06ps14U20nOV01rqrs0125l4U20nOV01pqpw3110l4U20n


I recognize that the volatility experienced in a portfolio's return over many years (much more than 14), as measured from annual returns, may be less than that indicated by the daily-data GSD. Emintz and others have suggested that there may be some level of autocorrelation that smooths out that daily volatility when measured annually, which may be especially true as you blend different screens and hold more stocks. However, I think the daily-data GSD may give a better indication of the mid-year "nerve-wrackiness" of a screen as we check our portfolios daily.
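
One way to see whether that kind of autocorrelation is actually present (just a sketch of the diagnostic; I have not run it on Peter's data) is a simple variance ratio: compare the variance of non-overlapping q-day log returns to q times the variance of 1-day log returns. A ratio near 1 supports square-root-of-time scaling; a ratio below 1 would mean the daily volatility really does get smoothed out at longer horizons.

import numpy as np

def variance_ratio(log_returns, q):
    """Variance of non-overlapping q-period log returns divided by
    q * variance of 1-period log returns.  VR ~ 1 supports sqrt-of-time
    scaling; VR < 1 suggests negative autocorrelation (smoothing)."""
    r = np.asarray(log_returns, dtype=float)
    n = (len(r) // q) * q                           # trim to whole blocks of q days
    q_returns = r[:n].reshape(-1, q).sum(axis=1)    # non-overlapping q-day log returns
    return q_returns.var(ddof=1) / (q * r.var(ddof=1))

# e.g., with a hypothetical array of daily log returns r:
#   variance_ratio(r, 21)    # monthly horizon
#   variance_ratio(r, 252)   # annual horizon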

Thanks again to Peter for the initial daily data. We look forward to the future batch of daily data for a broader set of screens.

Regards,

Tim
No. of Recommendations: 2
>>The GSD(M)s are very close to the daily GSDs for the monthly screens.<<

These are what I would call annualized GSDs based on monthly and daily data, respectively. This result suggests to me that monthly data may be sufficient to capture most of the volatility of screens with holding periods longer than a month.

I wonder if the backtester can be modified to calculate annualized GSD and Sharpe Ratio based on monthly data. The data is already available, and the formulae are in messages 73096 and 76573 (as amended by 76580 and 76584), with m = 1 for monthly returns.
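
For what it's worth, a sketch of what that calculation might look like follows; the conventions (and the assumed constant risk-free rate) are mine for illustration and may not match the exact formulae in those messages.

import numpy as np

def monthly_gsd_and_sharpe(ratios, annual_rf=0.05):
    """Annualized GSD and Sharpe Ratio from monthly return ratios (e.g. 1.02 for +2%).

    Conventions assumed here, not necessarily identical to msgs 73096/76573:
      GSD    = 100 * (exp(sd(ln R) * sqrt(12)) - 1)
      Sharpe = sqrt(12) * mean(R - Rf) / sd(R - Rf), with Rf a constant
               monthly risk-free ratio derived from annual_rf.
    """
    r = np.asarray(ratios, dtype=float)
    rf = (1.0 + annual_rf) ** (1.0 / 12.0)          # assumed constant risk-free rate
    gsd = 100.0 * (np.exp(np.log(r).std(ddof=1) * np.sqrt(12)) - 1.0)
    excess = r - rf
    sharpe = np.sqrt(12) * excess.mean() / excess.std(ddof=1)
    return gsd, sharpe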

Ratio
No. of Recommendations: 0
BarryDTO:

I don't know if you are familiar with the following easy-to-read note:

http://wrdsenet.wharton.upenn.edu/fic/wfic/papers/97/9734.pdf

There are pros and cons associated with scaling daily GSDs to annualized measures. From my experience, I'm not sure that you want to disaggregate in time any finer than monthly data.

Datasnooper.
No. of Recommendations: 3

>>http://wrdsenet.wharton.upenn.edu/fic/wfic/papers/97/9734.pdf

There are pros and cons associated with scaling daily GSDs to annualized measures. From my experience, I'm not sure that you want to disaggregate in time any finer than monthly data.<<


Datasnooper:

First, let me say thanks for the link to the article; it was an interesting read. I've enjoyed your various posts that remind us there is a lot of research out there, either when you have articulated the background yourself or pointed us to a relevant research paper.

Second, I agree with you that monthly-measured volatility is probably fine for our purposes. RatioFool also made this comment. Several folks had been asking for some validation (or identification of problems) of our volatility measures, especially for the annual screens, so I think the daily-based GSDs served that purpose well: the monthly-based GSDs matched well with the daily-based GSDs, but the annual-based GSDs did not.

The rest of this post may not be of interest to many readers who think this is all too academic (so please feel free to skip!). But I bring the issues up because they may affect how we choose to calculate volatility for LorenCobb's exponential growth model, or future variations. I have suggested using daily data if available rather than the weekly data used so far. Also, I've been building and looking at portfolio allocation tools that use the correlation among stocks (or screens) to build an efficient frontier to choose "best" mixes from. The question may arise: is it appropriate to use stock daily returns, or should some lower frequency be used?
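
As a concrete version of that question, the check I have in mind looks something like this (the screen file layout and column names are hypothetical): estimate the screen correlation matrix from daily returns and again from monthly returns, and see how different the covariance inputs to the efficient frontier turn out to be.

import numpy as np
import pandas as pd

# Hypothetical layout: one date-indexed column of daily values per screen.
values = pd.read_csv("screens_daily.csv", parse_dates=["date"], index_col="date")

daily_lr = np.log(values / values.shift(1)).dropna()
monthly_vals = values.groupby([values.index.year, values.index.month]).last()
monthly_lr = np.log(monthly_vals / monthly_vals.shift(1)).dropna()

# Correlation matrices at the two frequencies; any differences feed directly
# into different covariance inputs for the efficient-frontier calculation.
print(daily_lr.corr().round(2))
print(monthly_lr.corr().round(2))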

<Detailed comments follow!>

After reading the paper, I think it doesn't really change how we should use daily or monthly data for our primary purposes here. The point of the paper, I believe, was to say that the "square root of time" volatility scalar approach is not appropriate for a particular situation that doesn't really match ours. I'll try to make the point in three ways, and let me know if (where?) you disagree with my logic.

1. The focus of the article seems to be toward those managing risk of current investments on an ongoing basis by measuring and assessing historical volatility to build updated estimates of near-term future volatility. Their contention, which seems well-founded, is that measuring 1-day volatility and then multiplying by SQRT(n) to estimate n-day volatility is incorrect and misleading.

Our focus is more to understand, in a relatively straightforward way, the volatility that occurred during the past 14 (or more) years. We are not generally worried about the small change in volatility estimates we make as each new month of data gets added to the backtester, and we aren't creating new updates of future volatility estimates as that data comes in. We simply want to know which screen or set of screens was more or less volatile, relative to each other, over our backtest history. (We're hoping this is an indication of relative future volatility, just as we hope relative historical CAGRs will be repeated.)

So, I think the purposes are slightly different. The first might be characterized as "dynamic and absolute", whereas our purpose is more "static (historical) and relative".

2. The paper says that the time-scaling approach " ... produces volatilities that are correct on average ... ", but tends to magnify the volatility fluctuations of the longer time periods over time.

For our purposes, we are really just using that average volatility for the fixed backtest period. So, it seems that the simple approach will still be appropriate.

3. Empirically, item 2 seems to be borne out in two ways:

a. The comparison of our monthly-based GSDs and the daily-based GSDs for the screens selected by Peter seems to support the SQRT(n) scalar approach for our purposes.

b. I built a spreadsheet to match the 10,000-day GARCH(1,1) process that the authors of the paper used as a base for demonstrating their point. As they did, I ignored the "start-up period" of 1,000 days, and assumed the remaining 9,000 days of returns were presented to me to measure volatility from.

I calculated the volatility for the entire period on a daily basis, obtaining a measure very close to the theoretical value I started with. Pressing F9 (recalculate) several times generated new series of random numbers and results, but the measured volatility stayed scattered around the expected value.

I then calculated the 5-day and 21-day volatility in two ways:

(1). 1-day volatility times the square root of 5 or 21, respectively.

(2). Obtaining sequential n-day returns (n=5 or 21), starting at the first of the 9,000 days, then calculating the volatility of those returns.

The results for (1) and (2) were remarkably similar, varying with each iteration of course.
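
For anyone who would rather not build the spreadsheet, the same experiment looks roughly like this in Python. The GARCH(1,1) parameters below are placeholders of my own choosing for illustration; the paper's actual parameters would go in their place.

import numpy as np

rng = np.random.default_rng(0)

# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# Parameter values are illustrative placeholders, not the paper's.
omega, alpha, beta = 1e-6, 0.05, 0.90
n_days, burn_in = 10_000, 1_000

sigma2 = omega / (1.0 - alpha - beta)        # start at the unconditional variance
r = np.empty(n_days)
for t in range(n_days):
    r[t] = rng.normal(0.0, np.sqrt(sigma2))
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

r = r[burn_in:]                              # drop the 1,000-day start-up period

daily_vol = r.std(ddof=1)
for q in (5, 21):
    scaled = daily_vol * np.sqrt(q)                          # method (1): sqrt-of-time
    n = (len(r) // q) * q
    direct = r[:n].reshape(-1, q).sum(axis=1).std(ddof=1)    # method (2): q-day returns
    print(f"q={q:2d}  scaled={scaled:.4f}  direct={direct:.4f}")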

My conclusion from this simple simulation is that it would be OK to use daily-based or monthly-based volatility measures when we are addressing 14 (or more) years and are simply looking backward (static). Over shorter periods (e.g., 6 months) it may be useful to use the daily-based approach to ensure we don't run into the same problem we see with the 14-year annual-based approach.

<End of detailed comments!>

Thanks again for pointing us to some useful and educational stuff!

Regards,

Tim
No. of Recommendations: 0
BarryDTO:

I didn't see your reply to my post until now, and it's quite a random event that I did, actually. I don't read this board regularly, but I guess there's no excuse for not checking up on my own posts.

Let me first say that I'm impressed with the research you performed in responding to my message. Second, I agree that in a backtest of an ordinary screen covering many years it doesn't matter much whether you are using GARCH or unconditional volatilities. It's only when you forecast volatility ahead over a short horizon (say, up to a year) that knowledge of the current conditional volatility matters much.

Therefore, as you point out, the precision of the estimated volatility increases as you sample more frequently over long horizons.

Datasnooper.