No. of Recommendations: 4
Author: PVCliff Date: 12/15/00 10:48 PM Number: 39113

Of course your premise that the market is a non-stationary process is well taken. I'm assuming you have been working in signal processing, which should make you very well versed in handling non-stationary stochastic processes. I would agree that the market is non-stationary over short time spans.

That the market is non-stationary is the equivalent of Ann's saying that "Value stocks are out of style right now." Yes, the market seems to have a mind (or lack thereof) of its own. I submit, however, that when one has fifty years of data with which to work, the process would look stationary to an analyst. Any process can look non-stationary with a small enough sample, but there have been hundreds of brilliant folks poking at the market for many years, and their collective pronouncement seems to be that the market is random, and at least mostly efficient. I suspect that the micro caps may offer a window of non-efficiency, but that's hard to tell.

First, thanks for a well-written post. You have hit on several key points.

OK. I will get into a little of it, but the limitations of a regular keyboard make mathematical discussions painful and hard to understand.

You are partially correct about my personal experience with probabilistic issues. However, almost all the stochastic processes we use in control and communication engineering are constructed to have some sort of stationarity. For others who may be reading this, stationarity can be pictured intuitively as the absence of any drift in the ensemble of member functions as a whole. More to the point, the past history of a stationary stochastic process can be used to predict the future of the process in a probabilistic sense. This speaks to the very heart of this stock market discussion.

There are actually three main flavors of stationarity. Assume a stochastic process

{X(.,t); t in Gamma}

It is called first-order stationary if:

Fx(z;t1) = Fx(z;t2) for all (t1,t2) and all real numbers z.
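In case a concrete picture helps, here is a rough numerical sketch of that first-order condition. It uses an assumed white-noise process as the stationary case and a random walk as the non-stationary one (both purely illustrative stand-ins, not market models), estimating Fx(z;t) across a simulated ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of member functions: 10,000 realizations, 100 time steps each.
# White noise is first-order stationary; a random walk (cumulative sum) is not.
noise = rng.normal(size=(10_000, 100))
walk = np.cumsum(noise, axis=1)

def empirical_cdf(ensemble, t, z):
    """Estimate Fx(z;t) = P[X(.,t) <= z] across the ensemble."""
    return float(np.mean(ensemble[:, t] <= z))

z, t1, t2 = 0.5, 10, 90

# Stationary case: the two estimates agree to within sampling error.
print(empirical_cdf(noise, t1, z), empirical_cdf(noise, t2, z))

# Non-stationary case: the variance grows with t, so Fx(0.5;t) drifts
# toward 0.5 and the two estimates disagree.
print(empirical_cdf(walk, t1, z), empirical_cdf(walk, t2, z))
```

For the white noise the two numbers match to within sampling error; for the random walk the marginal distribution keeps spreading out, so they don't.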

Similarly, wide sense or second order stationarity (also sometimes called covariance stationary) is described:

Fx(z1,z2;t1,t2) = Fx(z1,z2;t1+tau,t2+tau)

for all (z1,z2) and all allowable (t1,t2,tau)
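One way to picture the covariance side of this condition: for a covariance-stationary process, Cov[X(t), X(t+tau)] depends on the lag tau but not on where t sits. A small Python sketch, using an assumed AR(1) process with illustrative parameters, started in its stationary distribution so the condition holds by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative process: an AR(1), x[t] = phi*x[t-1] + noise,
# started in its stationary distribution, so it is covariance stationary.
phi, n_paths, n_steps = 0.8, 20_000, 60
x = np.empty((n_paths, n_steps))
x[:, 0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2), size=n_paths)
for t in range(1, n_steps):
    x[:, t] = phi * x[:, t - 1] + rng.normal(size=n_paths)

def ensemble_cov(x, t, tau):
    """Estimate Cov[X(.,t), X(.,t+tau)] across the ensemble."""
    a, b = x[:, t], x[:, t + tau]
    return float(np.mean((a - a.mean()) * (b - b.mean())))

# For a fixed lag tau, the estimate should not depend on t
# (theory gives phi**tau / (1 - phi**2), about 1.42 here).
tau = 3
print(ensemble_cov(x, 5, tau), ensemble_cov(x, 40, tau))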

Lastly, the one that I believe is most germane to us in analyzing the stock market is strictly stationary, as follows:

Fx(z1,...,zn;t1,...,tn) = Fx(z1,...,zn;t1+tau,...,tn+tau)

for all n, all allowable (t1,...,tn,tau), and all sets of real numbers (z1,z2,...,zn)

(This is very difficult to do, and I don't think this is a very good place to be discussing details of probability theory, so I think I'll stop here.)
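For what it's worth, the "very difficult" part can be illustrated: strict stationarity quantifies over every finite-dimensional distribution and every shift, so no finite amount of data can verify it; one can only probe particular cases. A Python sketch probing just the n = 2 case, for an assumed i.i.d. noise process that is strictly stationary by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed i.i.d. Gaussian noise: strictly stationary by construction.
x = rng.normal(size=(50_000, 40))

def joint_cdf(x, t1, t2, z1, z2):
    """Estimate Fx(z1,z2;t1,t2) = P[X(t1) <= z1 and X(t2) <= z2]."""
    return float(np.mean((x[:, t1] <= z1) & (x[:, t2] <= z2)))

# Probe ONE two-dimensional case: shift the pair (t1, t2) by tau and
# compare. Agreement here is necessary, never sufficient, for strict
# stationarity -- the definition quantifies over every n and every shift.
tau = 7
print(joint_cdf(x, 3, 10, 0.0, 1.0))
print(joint_cdf(x, 3 + tau, 10 + tau, 0.0, 1.0))
```

The two estimates agree here, but that only rules strict stationarity in for this single pair of time points, which is exactly why testing it on market data is so hard.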

It is my contention that unless the market can be shown to be stochastic and strictly stationary, many tools used by the statistician are of questionable value.

However, it may be that the market is close enough in many situations that these tools would give a reasonable result.
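A toy illustration (in Python, with made-up regime numbers, not real market data) of how "close enough" can quietly fail: pool returns across a regime change, as a stationarity-assuming tool would, and the pooled estimate describes neither regime.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up regime numbers, purely for illustration: "returns" with a mean
# shift halfway through the sample (value in style, then out of style).
in_style = rng.normal(loc=0.08, scale=0.20, size=500)
out_of_style = rng.normal(loc=-0.05, scale=0.20, size=500)
returns = np.concatenate([in_style, out_of_style])

# A stationarity-assuming analysis pools the whole history into one mean.
pooled_mean = returns.mean()

# The pooled figure lands between the regimes and describes neither one.
print(pooled_mean, in_style.mean(), out_of_style.mean())
```

Whether the pooled answer is still "reasonable" depends entirely on how big the drift is relative to the effect being measured, which is the crux of the disagreement here.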

Assuming that you are absolutely correct that the non-stationarity of the market somehow makes the FF more viable, how can one exploit this?

Please don't misunderstand my main issue here. I am not saying that the stationarity aspect, per se, is any indication of whether the F4 is a valid strategy or not. My only point was that many of the analyses performed by Snoop and the others depend on stochastic processes with strict stationarity, and that since this is suspect, it introduces an element of uncertainty into their results. I'm not saying that they are wrong. I'm only saying that they cannot make definitive and absolute conclusions based on their statistical analyses.

I guess my bigger problem is that I don't believe any historical market evidence can be dependable in predicting the future of any strategy, regardless of whether the data has been collected in sample, out of sample, or under sample, for that matter. I just don't think the data-mining artifacts are as big a problem as they have been made out to be.

If value stocks are out of style right now, one would lose money buying the FF. If value stocks become "in style" then one could make money. How will you know in advance which kind of market you have? Will your non-stationary statistics help?

I know very little about non-stationary analysis, since we usually construct some amount of stationarity before making our analyses. Some of our 'brains' know a lot more about this stuff than I. It is very deep and involves lots of smoke and mirrors.

Will back testing predict the future? Every gimmick proposed, from chart patterns to the RP4, has been tried in the furnace of experience and found wanting. (There was a time when I really wanted to believe in charting, but I got over it after a few drubbings.)

This is really the bottom line. I have nearly reached the conclusion that statistical analysis of historical stock market data may be totally useless in predicting the future. I have played around with several of the MI techniques here on TMF, and just can't be comfortable that any of them are based on truly sound principles.

I still don't see your point in saying that the work of Snoop, Soui, Jimmei, and others here is invalid, but TMF's datamining (oops, I mean multi-back testing) somehow has validity. I just don't get it. Maybe you could explain.

As you probably know, much of the work that they have done is mathematically correct. Again, it is the properties of the data set they are analyzing that are suspect in my mind.

My biggest objection all along is that the non-stationary (and possibly non-stochastic as well, but that is a different discussion) nature of the market could introduce errors far larger than the effects of the so-called datamining artifacts. This throws the whole statistical analysis into question.

I also need to clarify that I find no logical or statistical basis for the F4 (RP calculation). My interest really lies in the effects that dividends may have on the value of a stock (DDA strategies). I still think there has been enough evidence compiled over the last 50 years by enough independent economists to show that there is some correlation between dividend yield and appreciation. I also believe that the stochastic non-linear nature of the market may be causing erroneous results in some of the studies done by our resident statisticians using traditional statistical methods. They are blaming some of these on the use of multiple hypotheses. I think this may not be correct.
