www.satn.org

SATN: Project MAC at MIT, where we met; the Software Arts building, where we worked together; and the attic where VisiCalc was written.
Comments from Frankston, Reed, and Friends

Wednesday, September 11, 2002

DPR at 4:33 AM [url]:

Intel - another 432?

Intel's just announced its LaGrande Technology. The idea is to create a "vault" in the processor and the chips supporting it, so that protected content can never be touched by unauthorized code, even when it's running in "your" personal computer. It's a response to the raucous cries from Hollywood and the record companies for hardware "rights management" in our systems.

Intel's LaGrande Technology reminds me of the Intel 432. Some of us with grey hair will remember the 432, Intel's attempt to create an "object oriented" processor that would embed all the great ideas of object oriented (OO) computing in a revolutionary new architecture.

What was wrong with this idea? It's not that it was too early, but instead that it was a caricature of the point of object-oriented computing.

OO computing is fundamentally about "late binding" - which, like my own* design principle, the "end-to-end argument," means avoiding putting too much function in the least plastic parts of the system. Late binding enables a system to evolve rapidly and flexibly. That's what software is good at, and why OO is inherently a software idea. Putting the evolving ideas of OO into hardware is the design equivalent of an oxymoron. Wasting all that specialized and frozen silicon on one specific version of OO burdened any design with the cost and risk that the future would not play out as planned.
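To make the distinction concrete, here's a minimal sketch in Python - the names are purely illustrative and have nothing to do with Intel's silicon. Early binding freezes every case into one routine; late binding looks the behavior up at run time, so new cases slot in without touching existing code:

    # Early binding: every supported case is frozen into one routine.
    # Adding a new case means rewriting it (in silicon: re-fabricating).
    def render_fixed(shape_name):
        if shape_name == "circle":
            return "O"
        elif shape_name == "square":
            return "[]"
        raise ValueError("unsupported shape: " + shape_name)

    # Late binding: the call site names an operation; the concrete
    # behavior is looked up at run time through the object.
    class Shape:
        def render(self):
            raise NotImplementedError

    class Circle(Shape):
        def render(self):
            return "O"

    class Square(Shape):
        def render(self):
            return "[]"

    # A new case plugs in without changing anything above - the kind
    # of flexibility that belongs in software, not frozen silicon.
    class Triangle(Shape):
        def render(self):
            return "/\\"

    print([s.render() for s in (Circle(), Square(), Triangle())])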

What killed the 432 was the RISC idea. Not the purist RISC machines with their ultimately minimal instruction sets, but the idea underlying RISC: that you don't specialize the machine with lots of cool gimmicks for specific applications - instead you build a machine that binds as little knowledge about its applications into hardware as possible, leaving complete flexibility to software. RISC machines did OO really well and efficiently, just as they did other (non-OO) things really efficiently. They didn't try to predict the future and optimize for a particular winning scenario. What saved Intel was quickly dropping the 432 and moving aggressively to make its so-so processor (the 8086) more RISC-like - it learned the lesson of keeping your options open.

What will kill LaGrande is the same problem. Building some specialized notion of content protection into the processor and its buses is "early binding" in an extreme form. It makes the whole architecture brittle, and unable to compete for new opportunities, new applications, etc.

Why is LaGrande's design early binding? Because we don't know, we really don't know, what sorts of protection make sense in the emerging digital, networked marketplaces. Despite 35 years of computer security research, we have not yet advanced our understanding of what needs to be protected beyond a simplified, very unworkable notion of military document security - now joined by a simplified, very unclear notion of what Hollywood might really need (as defined by its lawyers and lobbyists, not its most technically savvy designers).

I'm reminded of this every day as I use the collaborative shared WWW environment for most of my information. The kinds of permissions and expectations about information sharing just don't fit the military security model. And they don't fit the copyright model either. When I post on a weblog, I don't negotiate a very detailed contract with all of my readers about what they can and can't do with this content. Yet I have the expectation that people won't intentionally misquote me, that they will give me (at least some) credit, etc. Those expectations/rules are not executable in a processor. We can experiment with them in software. But putting them in hardware freezes them in time.
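To see why such rules belong in software, consider a hypothetical sketch (the policy names and record fields below are invented for illustration): the sharing rule is just a replaceable function, so it can be revised as norms evolve - something no hardware "vault" allows:

    # Hypothetical sketch: a sharing "rule" kept as replaceable software.
    # All names and fields here are invented for illustration.

    def military_policy(post, reader):
        # A rigid clearance-style rule - the kind that proved unworkable.
        return post.get("classification", "public") == "public"

    def weblog_policy(post, reader):
        # A looser, norm-based rule: reuse is fine if credit is given.
        return post.get("credit_given", False)

    def can_reuse(post, reader, policy):
        # The policy is a parameter - late-bound, swappable, revisable
        # next week when we learn our current rule was wrong.
        return policy(post, reader)

    post = {"author": "dpr", "text": "...", "credit_given": True}
    print(can_reuse(post, "some_reader", weblog_policy))    # True
    print(can_reuse(post, "some_reader", military_policy))  # True, but it's answering the wrong question

Swapping policies is a one-line change here; in a LaGrande-style design, it's a new chip.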

Those of us who have been in the computer security business saw what happened when the military security model was built into the processor and operating system. The system became unusable. Why? Because the real, day-to-day information handling in the military did not follow the rules we were told to apply! And we'd just finished building them into the lowest levels of the system. Soon thereafter, with the personal computer and network revolutions, the whole concept of where information was and how it was communicated changed. All of those clever operating systems and hardware designs became largely irrelevant - though some of the learning was still relevant - because a network of PCs moved the information outside the glass walls of the air-conditioned computer center. The information was now held on machines owned and managed by their users, who were happy to get out from under the burden of corporate IT's power over their information.

The proper rules about information are evolving as the technology evolves, and there is no reason to believe that they are well understood. But Intel's design is to be fixed in hardware - at great cost to Intel and the entire PC industry - in motherboard designs, and perhaps in the dominant O/S on Intel hardware, Windows.

Worse, its design is fixed in a PC hardware concept, just as IT users are beginning to migrate again. This time they will move from a PC-centric model to a peer-to-peer networking model, where data sharing is the norm - not a special case, but the dominant case. When you are working with someone else on a document, a model, a simulation, or a computer game - who should "own" which part of the resulting thing is just not clear. The rules will evolve as we learn to live in these collaborative systems, which are not PCs, but "computing environments" composed of many PCs.

What happens when the rules change, either at the application level or at the systems-structure level? This is what "late binding" is all about: avoiding the temptation to presume that you can completely solve an evolving problem and fix the answer in hardware or any other part of the low-level architecture of the system. When Intel got interested in accelerating graphics, it followed this principle - embedding a general-purpose floating-point vector processing unit rather than building the graphics pipeline into the main processor, and later pushing the AGP memory architecture. These things made a big difference, while not betting the farm on one or another future direction.
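The same contrast fits in a few lines of (again purely illustrative) Python: a general primitive serves applications its designers never anticipated, where a baked-in graphics pipeline would serve exactly one:

    # Toy illustration: one general vector primitive, many applications.
    # No relation to any actual Intel instruction set.
    def vmadd(a, b, c):
        # Generic elementwise multiply-add: a*b + c.
        return [x * y + z for x, y, z in zip(a, b, c)]

    # Use 1: a 2-D scale-and-translate step from a graphics pipeline.
    point = vmadd([3.0, 4.0], [2.0, 2.0], [1.0, -1.0])     # [7.0, 7.0]

    # Use 2: audio gain with an offset - same primitive, a use the
    # hardware designers never had to anticipate or bless.
    samples = vmadd([0.1, -0.2, 0.3], [0.8] * 3, [0.0] * 3)
    print(point, samples)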

In contrast, look at what happened with Motorola's massive and technically brilliant Iridium project. "Early binding" is what killed it - Motorola assumed that a crappy 2.4 Kb/s voice system that only worked outdoors was precisely what the market would want, and left no flexibility at all in the system design to do anything else. So there are 60-odd satellites flying around the world, with no customers and precious little flexibility to be applied to other uses in today's market.

I've been a fan of Intel for decades now - because its management really learned from mistakes like the 432 and its initial anti-consumer response to the original Pentium floating point bug.

But now, with Andy Grove, Gordon Moore, and other heroes on the sidelines, no longer running the shop, I'm worried that there's a new round of big blunders in store for the company. This may be the first.

* The end-to-end argument is a great example of why simple models of "ownership" don't work. It's my own - but it's also Jerry Saltzer's own, and Dave Clark's own. The three of us co-authored the paper that named it. But if you read the paper, you'd realize that what we really did was pull together design choices that were being argued in a wide variety of contexts - for example, I made a number of them in my Ph.D. thesis work, and we made a number of them in the TCP design process, in which I and subsequently Dave Clark worked. And the "we" included lots of other people. At best, my "sense of ownership" of the end-to-end argument has led me to push it forward in my own contexts. But it's not ownership in the sense that copyright would enforce.



Monday, September 09, 2002

DanB at 1:22 PM [url]:

Essay about CD sales, downloading, and burning

I just posted a new essay, inspired by the Forrester study I mentioned here a while back. My bottom line: "Given the slight dip in CD sales despite so many reasons for there to be a much larger drop, it seems that the effect of downloading, burning, and sharing is one of the few bright lights helping the music industry with their most loyal customers. Perhaps the real reason for some of the drop in sales was the shutdown of Napster and other crackdowns by the music industry."

Read: "The Recording Industry is Trying to Kill the Goose That Lays the Golden Egg".

It would be helpful if other people looked carefully at the numbers, did research, read all of the RIAA's statements, and came up with models that explain how recorded music actually gets bought. According to Josh Bernoff of Forrester, the RIAA doesn't want to do this (or at least won't tell us what it knows), and there are many contradictions in its public statements. I'm afraid the recording industry is doing itself a disservice at the same time it is trying to hobble the computing industry.



For more, see the Archive.

© Copyright 2002-2008 by Daniel Bricklin, Bob Frankston, and David P. Reed
All Rights Reserved.

Comments to: webmaster at satn.org, danb at satn.org, bobf at satn.org, or dpreed at satn.org.

The weblog part of this web site is authored with Blogger.