Author: solasis | Number: of 297
Subject: fat tails - thin ice | Date: 4/21/2002 5:49 PM
Recommendations: 2
http://datalab.morningstar.com/Midas/Login/ArticlesResearch/DL_Article_FatTails.asp
Author: TWA40 | Number: 200 of 297
Subject: Re: fat tails - thin ice | Date: 4/23/2002 1:18 PM
Recommendations: 1
Twenty years ago everything I learned about statistics involved normal distributions: means, standard deviations, F-statistics and their associated probabilities. And I've always regarded parametric statistics as amazingly robust to minor, or even major, violations of assumptions about normality, equality of variances, etc.

And 20 years of practical experience analyzing data has convinced me that parametric statistics are indeed amazingly robust--provided you're making inferences about central tendency and the data are unimodal (i.e., they don't have to follow a perfect bell curve, but they do have to have a single hump somewhere in the middle). But making inferences out in the tails? Egads, don't go there. We reject null hypotheses out in the tails, but P = 0.0002 doesn't really mean 2 in 10,000; it just means far enough below 5% that we can turf it.

And now the new craze is information theory, using Akaike's Information Criterion (AIC) instead of arbitrary P values. And when you do multiple resampling from your data to verify your assumptions about data dispersion (i.e. bootstrapping), lo and behold you find that your data are inevitably over-dispersed, often by 80 to 300%. At least that's been my experience with every real-world biological data set I've worked with. But I'd be willing to bet a 6-pack of good beer that's also the case in financial data.
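
(As a rough illustration of that kind of bootstrap check, here's a Python sketch; the negative-binomial counts are made-up stand-ins for a real data set, and the Poisson "variance equals mean" baseline is just the usual yardstick for dispersion:)

import numpy as np

rng = np.random.default_rng(0)

# Made-up count data; a real analysis would use the observed data set.
counts = rng.negative_binomial(n=2, p=0.2, size=500)   # deliberately over-dispersed

# Under a Poisson assumption the variance should roughly equal the mean.
# Bootstrap the variance-to-mean ratio to see how far the data depart from that.
n_boot = 2000
ratios = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(counts, size=counts.size, replace=True)
    ratios[i] = resample.var(ddof=1) / resample.mean()

print("variance/mean: %.2f (95%% bootstrap interval %.2f to %.2f)"
      % (np.median(ratios), np.percentile(ratios, 2.5), np.percentile(ratios, 97.5)))
# A ratio well above 1 is the sort of over-dispersion described above.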

And the information-theoretic approach is to calculate your variance inflation factor, divide it into the residual deviance, and sweep all the annoying variance under the rug. And then proceed with estimating means and main effects, full speed ahead.
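
(Sketched in Python with placeholder numbers--none of them from a real model--the recipe looks roughly like this:)

import math

# Hypothetical outputs from some fitted model.
residual_deviance = 412.0   # residual deviance (placeholder)
residual_df = 180           # residual degrees of freedom (placeholder)

c_hat = residual_deviance / residual_df      # variance inflation factor; > 1 means over-dispersion
se_naive = 0.15                              # a naive standard error (placeholder)
se_adjusted = se_naive * math.sqrt(c_hat)    # inflate the standard error

log_lik, k = -200.0, 6                       # placeholder log-likelihood and parameter count
qaic = -2 * log_lik / c_hat + 2 * k          # QAIC, the AIC analogue under over-dispersion

print("c-hat = %.2f, adjusted SE = %.3f, QAIC = %.1f" % (c_hat, se_adjusted, qaic))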

I guess what I'm trying to say is that humans, scientists even, seem to be psychologically predisposed to sweep annoyingly infrequent events under the rug, and ill-equipped to think meaningfully about risk. It doesn't bode well for progress.

Todd

Author: jkm929 | Number: 204 of 297
Subject: Re: fat tails - thin ice | Date: 4/24/2002 1:52 PM
Recommendations: 0
Maybe the problem isn't so much fat tails as it is that one tail is fatter than the other. But if that were the case we wouldn't have a normal distribution and financial markets wouldn't be random walks.

jkm929

Author: TWA40 | Number: 205 of 297
Subject: Re: fat tails - thin ice | Date: 4/24/2002 5:08 PM
Recommendations: 3
Maybe the problem isn't so much fat tails as it is that one tail is fatter than the other. But if that were the case we wouldn't have a normal distribution and financial markets wouldn't be random walks.

jkm929,

I suspect that this is intended as a tongue-in-cheek jab at "A Random Walk Down Wall Street", but just in case it's not, I'll clarify that something can be normally distributed, but not random. Or random, but not normally distributed. They're different concepts.

Random implies a selection process whereby every individual in the population is equally likely to experience a particular fate (e.g., being selected for a portfolio, going up by 2%, whatever). Roll a fair die and the result is random, but the frequency distribution isn't normal, it's uniform: 1, 2, 3, 4, 5, and 6 each occur with a probability of 0.167.
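
(A quick simulation makes the point; the 60,000 rolls are an arbitrary choice:)

import numpy as np

rng = np.random.default_rng(2)
rolls = rng.integers(1, 7, size=60_000)       # fair six-sided die, faces 1 through 6
print(np.bincount(rolls)[1:] / rolls.size)    # each face near 0.167: random, but uniform, not normal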

Normal means that the data approximate a normal probability density function, which wouldn't translate very well in ASCII format. But the formula includes constants like Pi and e, as well as 2 inputs derived from the sample population: the mean, and the standard deviation from the mean. In part, statisticians like to assume that data are normal because it makes everything so easy to work with--an entire frequency distribution can be generated using only 2 inputs.
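
(For reference, in LaTeX notation rather than ASCII, the density being described is

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x-\mu)^2 / (2\sigma^2)}

where \mu is the mean and \sigma is the standard deviation--the two inputs mentioned above.)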

You know you're working with data that are approximately normal if they have a classic bell-curve shape that's fairly symmetrical about the middle. The "middle" can be defined by the mean, median, or mode, and in a normal distribution they should all be roughly the same. In addition, the mean minus 2 standard deviations should exclude about 2.5% of all observations and the mean plus 2 SD should also exclude about 2.5%.
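
(A quick check of that 2.5% figure, assuming scipy is available:)

from scipy.stats import norm
print(norm.sf(2))    # P(Z > +2 SD) = 0.0228, i.e. roughly 2.5% beyond each 2-SD boundary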

Skewness is the problem you mention where one tail is fatter than the other. Most real-world data sets are skewed to varying degrees. The average American household has net assets of $150,000, but Buffett and Gates have north of $30 billion. That's positive skew. Risk-arb returns cluster around 4-8% annualized, but the deals that blow up result in -40 to -80%. That's negative skew (both of these are hypothetical examples). A lot of times skewness can be reduced with an appropriate transformation, but usually it's just a work-around rather than an actual correction.
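
(To illustrate, here's a Python sketch using a made-up lognormal "net worth" sample; the parameters are arbitrary:)

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
wealth = rng.lognormal(mean=11, sigma=1.5, size=10_000)   # heavily right-skewed, like household wealth

print("skewness, raw dollars:  %.1f" % skew(wealth))
print("skewness, log(dollars): %.2f" % skew(np.log(wealth)))
# The log transform tames the positive skew, but the lopsidedness
# of the raw dollars hasn't actually gone anywhere.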

But it's almost impossible to have enough data to accurately understand what's happening out in the tails. Two-sigma events occur only about 1 time in 22, so to observe 200 of them we need to sample roughly 4,400 independent events. To witness that many 3-sigma events, we need to sample about 74,000 independent events. For 4-sigma we'd need around 3.2 million events, and for 5-sigma, close to 350 million. There simply aren't enough independent data, on anything, to understand what's happening way out in the tails. That word "independent" is critically important too. Stock market data are notoriously autocorrelated, so the return from Company A isn't independent of Company B, at least in the short run. And in the really long run, we don't get very many periods of non-overlapping data. And even if enough data were accumulated, there'd be no guarantee that they would mean anything going forward ("it's different this time" sometimes actually applies).
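
(The arithmetic behind those sample sizes, assuming two-sided tails of a standard normal distribution:)

from scipy.stats import norm

target = 200   # how many tail events we want to observe
for k in (2, 3, 4, 5):
    p = 2 * norm.sf(k)   # P(|Z| > k sigma), both tails
    print("%d-sigma: p = %.2e, about 1 in %.0f, need about %.0f observations"
          % (k, p, 1 / p, target / p))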



Author: jkm929 | Number: 206 of 297
Subject: Re: fat tails - thin ice | Date: 4/24/2002 9:48 PM
Recommendations: 1
You know you're working with data that are approximately normal if they have a classic bell-curve shape that's fairly symmetrical about the middle.

That's where I was mistaken. I thought a bell curve had to be perfectly symmetrical.

I posted something on the Berkshire board a while back about there being more stock market crashes than buying panics, and Peter L. Bernstein's conclusion, in his book Against the Gods: The Remarkable Story of Risk, that this means "At the extremes, the market is not a random walk." http://boards.fool.com/Message.asp?mid=16273617

But it's almost impossible to have enough data to accurately understand what's happening out in the tails. Two-sigma events occur only about 1 time in 22, so to observe 200 of them we need to sample roughly 4,400 independent events. To witness that many 3-sigma events, we need to sample about 74,000 independent events. For 4-sigma we'd need around 3.2 million events, and for 5-sigma, close to 350 million. There simply aren't enough independent data, on anything, to understand what's happening way out in the tails. That word "independent" is critically important too. Stock market data are notoriously autocorrelated,

The Wall Street Journal had a Stock Market Quarterly Review section in its April 1st edition. It showed the twenty biggest one-day percentage gains and twenty biggest one-day percentage losses of all time for the Dow Jones Industrial Average. 1929 had four of the biggest declines, but they clearly were not independent of each other. They took place on Oct. 28, Oct. 29, Nov. 6, and Nov. 11. There were two drops back to back in 1933, on July 20 and 21. The last pair was the Oct. 19 and Oct. 26 declines of 1987.

The other twelve big declines are not near each other.

An interesting fact about the twenty biggest gains is that fifteen of them, or 75%, took place in the three-year span 1931-1933.

jkm929


