Author: dbau
Subject: Latency vs. Moore's divergence
Date: 3/2/2000 10:59 AM
Recommendations: 33
More musings on Latency, Storewidth, etc.

As bandwidth explodes, storage is becoming more important. But what kind of storage?

Gilder has floated two theories about why storage is important for filling the gap opened up by ballooning bandwidth. On one hand, he points out that latency is never going to drop below a roughly 40ms continental round trip, so storage-near-the-customer is needed to attack latency. On the other hand, he points out that exponential increases in bandwidth are far outrunning the slower exponential increases in CPU power, and storage can be used to fill the gap between them.
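(For anyone who wants to see where a number like 40ms comes from, here is a back-of-the-envelope sketch in Python. The figures are my own assumptions, not Gilder's: roughly 4,000 km of fiber coast to coast, and light in glass traveling at about two-thirds the speed of light.)

    # Back-of-the-envelope latency floor; the numbers below are assumptions for illustration
    coast_to_coast_km = 4000          # assumed one-way fiber path across the continent
    light_in_fiber_km_per_s = 200000  # light in glass moves at roughly 2/3 the speed of light
    round_trip_s = 2 * coast_to_coast_km / light_in_fiber_km_per_s
    print(round_trip_s * 1000)        # -> 40.0 ms, before routers and servers add anything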

So which is going to drive storage needs?

(1) the limitations of light-speed and latency, or
(2) the limitations of Moore's law of computation?

Maybe the answer is "both." But it doesn't look that way to me: it looks like the Moore's law-Metcalfe's law gap is the more important driver. Between a 4^n exponential growth in bandwidth and a 1.6^n exponential growth in CPU speed lies an exponentially growing gulf, one even more dramatic than Moore's law itself. And I just don't buy the proposition of using storage to solve the latency problem.
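(To get a feel for how fast that gulf opens up, here is a quick Python sketch using nothing but the 4x-per-year bandwidth and 1.6x-per-year CPU growth rates cited above.)

    # The bandwidth-vs-CPU gap, using the growth rates cited above
    for years in range(0, 11, 2):
        bandwidth = 4.0 ** years              # bandwidth growing ~4x per year
        cpu = 1.6 ** years                    # CPU speed growing ~1.6x per year
        print(years, round(bandwidth / cpu))  # the gap itself compounds at ~2.5x per year

After ten years the gap is nearly 10,000x, which is the whole point: the divergence itself grows exponentially.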

If you do think latency to the customer is the big problem, you would follow Gilder in investing in caching companies like XLA/Mirror Image, Akamai, and Inktomi. A lot of people have been jumping on the bandwagon. However, in the long run this may not be the right way to go. This kind of caching only works on static, predictable data. In many ways, although static data is the bulk of the Internet today, it is also the least interesting part of Internet content. For example, this message board has high value because it is made of dynamic data that could change at any time. And the value of a piece of new content drops off quickly as it ages: old Gilder reports in Forbes are worth pennies on the dollar compared to new Gilder reports in GTR. By locating storage a continent away from the data center, the laws of latency guarantee that Mirror Image's storage is being used for old, low-value data. (Does @Home care if its data cache loses or misses a piece of static data? No way. It's low-value storage.) As a result, there doesn't appear to be a need for a Christensenian "run upmarket": Akamai, Inktomi, and Mirror Image will eventually need to "beat downmarket" and compete to supply a low-value, low-reliability, low-cost commodity.

On the other hand, if you think the big problem is the increasing slowness of CPUs compared to bandwidth, then it makes sense to invest in storage-in-support-of-CPUs, i.e., the companies that make the storage boxes in data centers.

The abstract mathematical justification for companies to spend money here is that any finite computation can in theory be replaced by a big lookup table if you have enough storage. How do search engines like http://www.google.com search the web for your keywords so quickly? Surely they throw a lot of computers at the problem, but to get the kind of speed they want, they cannot be doing much computation on each query. The answer is: they get their speed by depending on enormous precomputed lookup tables. The cleverness behind each search engine technology is in how these tables are built. The way to reduce huge CPU load is to precompute and store as many high-quality computational results as possible, and to engineer your storage strategy correctly. In today's world of exploding bandwidth, the CPU shouldn't be forced to think too hard when dealing with incoming requests.
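(To make the precompute-and-store idea concrete, here is a toy Python sketch. It is purely my own illustration, not a description of how any real search engine works: a keyword query answered from a precomputed inverted index, so query time is a couple of dictionary lookups instead of a scan over every document.)

    # Toy sketch: trade CPU at query time for storage built ahead of time
    documents = {
        1: "storage is displacing computation",
        2: "bandwidth keeps exploding",
        3: "storage and bandwidth both matter",
    }

    # Precompute an inverted index once; this is the expensive, storage-hungry step
    index = {}
    for doc_id, text in documents.items():
        for word in text.split():
            index.setdefault(word, set()).add(doc_id)

    # At query time the CPU barely works: one dictionary lookup per keyword
    def search(*keywords):
        hits = [index.get(word, set()) for word in keywords]
        return set.intersection(*hits) if hits else set()

    print(search("storage", "bandwidth"))   # -> {3}

All the cleverness lives in building the index ahead of time; serving the query is nearly free, which is exactly the trade described above.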

So as CPUs crack under the strain of load from the incoming bandwidth tidal wave, more and more CPU cycles will be replaced by big storage. The cheapest fast form of big storage is disks, so the trend will be to alleviate the expensive CPU bottleneck with a wide, fat array of disks.

Because of the law of latency, disks that are used to reduce computation need to be near the CPU, not near the customer. Unlike the disks that are near the customer, disks in the data center can be used for valuable, live, fresh data. Companies running data centers will be willing to pay a premium for high performance, high reliability, and huge capacity on these disks, because the new, fresh data in the data center is the most valuable data on the Internet. (Does Schwab care if it loses storage backing its customer accounts? You bet it does!) This all seems to mean that there is plenty of room for a "run upmarket" for storage companies, and the right place to be investing in storage is still in the network-appliance companies, the NAS makers, and so on. NTAP's $29 billion market cap seems more justified than AKAM's $25 billion. (Although both seem pretty high, no?)

At a practical level for investors, it seems that there are good opportunities in storage appliances. While NTAP and EMC are the current stars, they don't have much running room to go upmarket: their products are already complicated, high-capacity beasts. The place to look for inspirational ideas is at the low end of the business. Quantum/DSS-Meridian's Snap servers and Maxtor-CDS's MaxAttach are the two premier examples. By jettisoning customer serviceability inside the box, these low-end products focus on simplicity rather than expandability. You use them to provide enormous storage not by opening up the case and plugging in more disks, but by (1) buying more boxes, and more importantly (2) waiting for Moore's law of storage to increase the size of the disks inside each box.

A digression for computer science techies. An old nerdy joke from graduate school - a proof that "P=NP", by proving that exponential computations are linear. If the size of the problem is "n", and the number of computer operations it takes to compute the solution using brute force is "2^n", the algorithm to solve the problem in linear time is: (1) write a program that completes in "2^n" time on today's computers; (2) wait n times 18 months, then (3) buy another computer. Since Moore's law dictates that the new computer is 2^n times faster than today's computer, when you run the program on the new computer, it finishes in constant time. (The computation itself is constant; it's the n-times-18-months of waiting that makes the whole "algorithm" merely linear.) Q.E.D.
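(The arithmetic of the joke, spelled out in a few lines of Python; n = 40 is just an arbitrary example size.)

    # The joke in numbers: the waiting is linear in n, the computing is constant
    n = 40                              # problem size (arbitrary example)
    ops_needed = 2 ** n                 # brute-force cost on today's machine
    wait_months = 18 * n                # wait through n Moore's-law doublings
    speedup = 2 ** (wait_months // 18)  # each 18 months doubles the speed, so 2^n overall
    print(wait_months, ops_needed / speedup)  # -> 720 months of waiting, 1.0 unit of computing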

The lesson behind the joke is that the most fertile place to be hunting for new technologies is on the "exponential technology curves" at the heart of the bandwidth explosion, the CPU explosion, and the storage explosion. It may sound ridiculous, but the fastest way to solve a big computational problem today may be to build an algorithm that uses few CPU cycles and eats enormous amounts of storage, then wait a few months for the underlying disks to grow to a size that lets your stupid solution solve a big problem.

Most telecosmic companies demand extremely high valuations today, but it's just absolutely fascinating that Maxtor and Quantum have such incredibly low valuations by comparison. The feeling on the street about these old companies seems to be that they provide low-value, low-cost commodity solutions. In Maxtor's case, the problem is that their disk drives have in the past been targeted for use in desktop PCs - and I agree, the disk in the PC ("stale data near the customer") is a low-value product where it's difficult to distinguish yourself from your competitors. In Quantum's case, the problem is that their tape business has been falling short of expectations ("even older data"), and is under technology attack both from above (with IBM's new tape standard) and below (from companies like EXBT). But in both cases, I believe that the street is completely missing the importance and the structure of the storage business in the future.

Storage is displacing computation in data centers as the engine that handles increasing bandwidth demands. The Moore's law of storage dictates that companies at the low end of the business will prevail. The investment ideas that come out of this line of reasoning seem clear to me.

A disclaimer.... I am long both MXTR and DSS now, but I don't believe in my own logic enough to be *really* long. I have small positions in each. Should I be buying more?