Thursday, March 03, 2005

More crazy programming ideas .... A P2P OS
I have written before that I think administering a PC is too much for the average user, and that relying on one PC creates a central point of failure and an immense amount of frustration.
I like the idea of a remote desktop viewer (thin-client computing).

It could be that your ISP (or an "application provider") would run clusters of machines that they furiously tend to keep working well. While this is possible, I want to imagine what would happen if you took some of the ideas from popular distributed systems such as BitTorrent and applied them to an OS.

How would it work? Well, every machine would contribute resources (CPU power, disk space). Each task you do on your computer would be run on another machine, or even better on multiple other machines. So if I am typing into a word processor I am actually typing into two or more word processors running on other people's machines. If I lose power I do not lose any work. Equally, if any of the machines hosting the word processor crashes then there is no problem.
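To make that concrete, here is a toy Python sketch of the fan-out idea: every keystroke goes to several replicas hosted on other people's machines, so losing any one of them loses no work. The Replica and ReplicatedEditor names (and the in-process "peers") are invented for illustration; a real system would be talking to remote machines over the network.

```python
class Replica:
    """Stands in for a word-processor instance hosted on someone else's PC."""
    def __init__(self, host):
        self.host = host
        self.text = ""
        self.alive = True

    def apply(self, keystroke):
        if self.alive:
            self.text += keystroke


class ReplicatedEditor:
    """Fans each keystroke out to every replica we can still reach."""
    def __init__(self, replicas):
        self.replicas = replicas

    def type(self, keystroke):
        for r in self.replicas:
            r.apply(keystroke)

    def current_text(self):
        # Any surviving replica can serve the document back to us.
        for r in self.replicas:
            if r.alive:
                return r.text
        raise RuntimeError("all replicas lost")


if __name__ == "__main__":
    peers = [Replica("peer-a"), Replica("peer-b"), Replica("peer-c")]
    editor = ReplicatedEditor(peers)
    for ch in "hello world":
        editor.type(ch)
    peers[0].alive = False          # one hosting machine crashes...
    print(editor.current_text())    # ...but the document is still intact
```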

The capabilities of the individual PC become unimportant (well, apart from the specifications of the screen and peripherals). This does assume that bandwidth is always on, cheap and plentiful, but I don't think that is too hard to imagine. [One twist might be that the telcos see the need to offer their own branded services, crippling the network for other uses. See Robert X. Cringely's thoughts]

One advantage might be that a machine specialises, e.g. it only decodes MP3s or only compiles C code. This should make the jobs more efficient as more of the needed code should be resident in cache at any one time.
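As a rough illustration of what that routing might look like (the node names and job kinds here are entirely invented), a dispatcher could keep a pool of specialists per job type and hand each job to the next one:

```python
import itertools

# Nodes that only ever run one kind of work, so that code stays hot in cache.
SPECIALISTS = {
    "mp3_decode": ["node-17", "node-42"],   # nodes that only decode MP3s
    "c_compile":  ["node-03", "node-08"],   # nodes that only compile C
}

# Round-robin cursor over the specialists for each job kind.
_cursors = {kind: itertools.cycle(nodes) for kind, nodes in SPECIALISTS.items()}

def dispatch(job_kind, payload):
    """Pick the next specialist node for this kind of job."""
    node = next(_cursors[job_kind])
    print(f"sending {job_kind} job ({payload}) to {node}")
    return node

if __name__ == "__main__":
    dispatch("mp3_decode", "track01.mp3")
    dispatch("c_compile", "kernel.c")
    dispatch("mp3_decode", "track02.mp3")
```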

There would be many problems to solve. What versions of programs are run? How is data kept secure? How do you deal with the churn of people joining and leaving the network?

I guess that LiveCD distributions, and using web interfaces as much as possible, might be the closest approximation to thin-client computing for the home at this time. Google GMail and Google Maps are starting to redefine the expectations of a web interface.

I guess this lacks the simplicity that made BitTorrent such a popular and scalable idea. And by that measure I know this idea is doomed. Oh well ....
This evening I went to an IEE lecture, Computing Without Clocks: From Lab to Exploitation, given by Prof Steve Furber of Manchester University. He famously developed a range of ARM-like asynchronous processors, the Amulet series.

The talk was rather low on technical detail, but it was interesting that they think the future lies in asynchronous networks between IP blocks on a chip. Adapters are placed on all synchronous logic blocks and a packet-switched interconnect is auto-generated to a required specification. I know from my digital design experience that at least 2 synchronising flops are needed to cross into a different clock domain. So for blockA <-> async bus <-> blockB these penalties can be costly. I am guessing that they can do better than this, either by the design of the adapters or by sometimes taking fewer cycles instead of always paying the penalty.
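To put rough numbers on that, here is a back-of-envelope sketch (my own assumptions, not anything presented in the talk) of what a conventional 2-flop synchroniser costs on a blockA <-> async bus <-> blockB request and reply:

```python
# Back-of-envelope accounting of synchroniser latency on a request/reply
# between two synchronous blocks over an asynchronous interconnect.
# All numbers here are assumptions for illustration, not from the talk.

SYNC_FLOPS = 2  # the usual minimum number of synchronising flops per crossing

def crossing_penalty_cycles(flops=SYNC_FLOPS):
    """Worst-case cycles lost entering a synchronous clock domain."""
    # The incoming signal can just miss a clock edge (up to one cycle of
    # waiting) and then takes `flops` further edges to emerge resynchronised.
    return flops + 1

def request_reply_penalty_ns(a_period_ns, b_period_ns):
    """Extra delay for A -> bus -> B and the reply B -> bus -> A."""
    # Pay the penalty once entering B's domain on the way out,
    # and once more entering A's domain on the way back.
    return (crossing_penalty_cycles() * b_period_ns +
            crossing_penalty_cycles() * a_period_ns)

if __name__ == "__main__":
    # Two 200 MHz blocks (5 ns clock period) talking over the async fabric:
    # roughly 30 ns of pure synchronisation overhead per round trip.
    print(request_reply_penalty_ns(a_period_ns=5.0, b_period_ns=5.0))
```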

The biggest advantage (IMHO) was that, unlike the effort needed in a clocked system to implement voltage and frequency scaling, in an un-clocked circuit there is no clock to manage and the voltage can be varied at any time, with the caveat that the rate at which results are produced might change.