Monday, December 31, 2007

Fireside Problem

Over the Christmas period I have been grappling with a problem that has been hanging around me for some time.
Now, it is possible to generate highly disordered sequences of 1s and 0s using an algorithm. An algorithm is also expressible as a sequence of 1s and 0s. However, whereas a disordered sequence may be long, say of length Lr, the algorithm sequence generating it, of length Lp, may be short. That is, Lp is less than Lr. The number of possible algorithm sequences will be 2^Lp. The number of possible disordered sequences will be nearly equal to 2^Lr. So, because 2^Lp is much less than 2^Lr, it is clear that short algorithms can only generate a very limited number of all the possible disordered sequences. For me, this result has a counter-intuitive consequence: it suggests that there is a small class of disordered sequences that have a special status, namely the class of disordered sequences that can be generated algorithmically. But why should a particular class of disordered sequences be so mathematically favoured when, in one sense, every disordered sequence is like every other disordered sequence in that they all have the same statistical profile? My intuitions suggest that all disordered sequences are in some sense mathematically equal, and yet it seems that algorithms confer a special status on a small class of these sequences.
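
To make the counting concrete, here is a minimal Python sketch of the argument. The particular lengths Lp and Lr are illustrative values of my own choosing, not tied to any real machine:

```python
# Toy counting argument: compare the number of candidate "programs" of
# length Lp with the number of binary sequences of length Lr, where Lp < Lr.
# The lengths below are illustrative assumptions.

Lp = 20   # length of the generating algorithm, in bits (assumed)
Lr = 100  # length of the disordered sequence, in bits (assumed)

programs = 2 ** Lp   # at most this many distinct programs of length Lp
sequences = 2 ** Lr  # this many binary sequences of length Lr

# Even if every program produced a distinct disordered sequence, the
# fraction of length-Lr sequences reachable from length-Lp programs is tiny:
fraction = programs / sequences
print(f"programs:  2^{Lp} = {programs}")
print(f"sequences: 2^{Lr} = {sequences}")
print(f"reachable fraction <= 2^-{Lr - Lp} = {fraction:.3e}")
```

Even granting every program a distinct output, at most one sequence in 2^80 is reachable here; the vast majority of disordered sequences have no short generating algorithm.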
I think I now know where the answer to this intuitive contradiction lies. To answer it we have to go back to the network view of algorithmic change. If we take a computer like a Turing machine, then it seems that it wires the class of all binary sequences into a network in a particular way, and it is the bias introduced by the network wiring that leads to certain disordered configurations being apparently favoured. It is possible to wire the network together in other ways that would favour another class of disordered sequences. In short, the determining variable isn’t the algorithm but the network wiring, which is a function of the computing model being used. The computing model inherent in the network wiring has at least as many degrees of freedom as a sequence of length Lr. Thus, in as much as any disordered sequence can occupy an algorithmically favoured position depending on the network wiring used by the computing model, in that sense no disordered sequence is absolutely favoured over any other.
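
To illustrate what I mean by the wiring being the determining variable, here is a toy Python sketch. It is emphatically not a real Turing machine: the function expand stands in for 'running an algorithm', and a fixed XOR mask plays the role of the network wiring; the names and constants are my own assumptions. The point is only that, by choosing the wiring, any given disordered sequence can be made the output of the shortest possible program:

```python
import secrets

# Two toy "computing models" that differ only in how they wire programs
# to output sequences. Model A expands a program by a fixed deterministic
# rule; Model B is Model A followed by XOR with a fixed mask. The mask
# stands in for the "network wiring".

Lr = 64  # output sequence length in bits (assumed)

def expand(program: int) -> int:
    # Stand-in for "run the algorithm": deterministically spread the
    # program's bits over Lr output bits. (Not a real universal machine.)
    x = program
    out = 0
    for _ in range(Lr):
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
        out = (out << 1) | (x >> 63)
    return out

target = secrets.randbits(Lr)  # an arbitrary disordered sequence
mask = expand(0) ^ target      # wire the network so that program 0 hits it

def model_a(program: int) -> int:
    return expand(program)

def model_b(program: int) -> int:
    return expand(program) ^ mask  # same machine, different wiring

# Under Model B the disordered 'target' is generated by the minimal program:
assert model_b(0) == target
print(f"target     = {target:0{Lr}b}")
print("model_b(0) reproduces it exactly")
```

Model B is no less a lawful computing model than Model A; it simply favours a different corner of sequence space, which is the sense in which no disordered sequence is absolutely favoured.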

Well, that’ll have to do for now; I suppose I had better get back to the Intelligent Design Contention.

Wednesday, December 12, 2007

The Intelligent Design Contention: Part 1

The Contention
For some months now I have been reading Sandwalk, the blog of atheist Larry Moran, or, as he dubs himself, ‘A Skeptical Biochemist’. (He can’t be that skeptical, otherwise he would apply his skepticism to atheism and graduate from ‘skeptic’ to ‘doubter’.) One reason for going to his blog was to help me get a handle on the evolution versus Intelligent Design (ID) debate. The ID case hinges on a central issue that I describe in this post.

The ID contention is, in the abstract, this: if we take any complex organized system consisting of a set of mutually harmonized parts that by virtue of that harmony form a stable system, it seems (and I stress seems) that any small change in the system severely compromises the stability of that system. If these small changes lead to a breakdown in stability, how could the system have evolved, given that evolution is a process requiring that such systems be arrived at by a series of incremental changes?

Complex organized systems of mutually harmonized components are termed ‘Irreducibly Complex’ by ID theorists. The term ‘irreducible’ in this context refers, I assume, to the fact that apparently any attempt to make the system incrementally simpler, by say removing or changing a component, results in severe malfunction, which in turn jeopardizes the stability of the system. If the apparent import of this is followed through, it follows that there are no ‘stable exit routes’ by which the system can be simplified without compromising stability. If there are no ‘stable exit routes’ then there are no ‘stable entry routes’ by which an evolutionary process can ‘enter’.

Mathematically expressed:

Stable incremental routes out = stable incremental routes in = ZERO.
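
The following Python sketch makes this concrete with a deliberately contrived toy, not a model of any real biological system. The ‘stability’ test and the number of parts are assumptions of mine, rigged so that every single-part change breaks the system:

```python
# Toy model of 'irreducible complexity': a system is a tuple of parts, and
# a made-up stability test passes only when all parts are mutually matched.
# We then count the stable one-step neighbours - the 'stable incremental
# routes' into or out of the working configuration.

PARTS = 8  # number of interlocking components (assumed)

def stable(config: tuple) -> bool:
    # Hypothetical harmony test: every adjacent pair of parts must mesh.
    return all(config[i] == config[(i + 1) % PARTS] for i in range(PARTS))

working = tuple([1] * PARTS)  # the mutually harmonized configuration
assert stable(working)

# Count single-component changes that leave the system stable:
routes = 0
for i in range(PARTS):
    neighbour = list(working)
    neighbour[i] ^= 1  # change a single part
    if stable(tuple(neighbour)):
        routes += 1

print(f"stable incremental routes out of the working system: {routes}")
# Prints 0: in this rigged toy, every single-part change breaks stability.
```

In this contrived toy the count of stable one-step routes is zero, which is precisely the situation ID theorists claim holds for certain biological structures.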

In the ID view many biological structures stand in unreachable isolation, surrounded by a barrier of evolutionary 'non-computability'. Believing they have got to this point, ID theorists are then inclined to make a threefold leap in their reasoning: 1) They reason that at least some aspects of complex stable systems of mutually harmonized parts have to be contrived all of a piece; that is, in one fell swoop as a fait accompli. 2) That the only agency capable of creating these designs in such a manner must possess intelligence as a ‘given’. 3) That this intelligence is to be identified with God.

I get a bad feeling about all this. Once again I suspect that evangelicalism is urinating up the wrong lamp post. Although the spiritual attitudes of the ID theorists look a lot better than those of some of the redneck Young Earth Creationists, I still feel very uneasy. So much of what one is supposed to accept under the umbrella of evangelicalism is administered with subtle, and sometimes not so subtle, hints that one is engaged in spiritual compromise if one doesn’t accept what is being offered. I hope that ID theory at least will not become bound up with those who apply spiritual duress to doubters.