Saturday, June 13, 2015

Algorithms, Searches, Dualism and Declarative Computation. Part 4

If conventional evolutionary theory and OOL (origin of life) are to work, then configuration space must look something like this, with thin strands of survivability/stability/self-perpetuation permeating configuration space: forget fitness surfaces!

This is the last post in the series in which I have been looking at a post by Joe Felsenstein and Tom English (FE). In their post FE critique a paper by Intelligent Design gurus Dembski, Ewert and Marks (DEM). The other parts of this series can be seen here, here and here.

In their critique of DEM’s paper FE construct a simple toy model of evolution, which they call the “Greedy Uphill Climbing Bug” (GUCB) – basically it is an algorithm which always moves in the direction of greatest “fitness”, that is, in the direction of greatest slope of the “fitness surface” or “fitness field”, a field which is conjectured to pervade configuration space. The point they make from this model is simple and valid: it shows that a smooth fitness field (a smoothness which is a likely consequence of the laws of physics, as Joe Felsenstein points out), when combined with a GUCB algorithm, entails a lot of “active information”. In fact FE find that even a white-noise fitness field implies a lot of “active information”. “Active information” is a term used by DEM to refer to the inherent up-front informational bias of a search, a bias which means that the search does better than random chance.
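To make the GUCB idea concrete, here is a minimal sketch of a greedy uphill climber in Python. It is not FE’s actual code: the binary genome, its length and the smooth “count matches to a target” fitness function are all my own illustrative choices.

```python
import random

# Toy "genome": a string of L binary sites. FE's model uses 3000 DNA bases,
# but a small binary genome keeps the sketch readable.
L = 40

def fitness(genome):
    # A deliberately smooth, made-up fitness field: fitness counts matches to
    # an arbitrary target pattern, so neighbouring genomes have nearby fitness.
    target = [i % 2 for i in range(L)]
    return sum(1 for g, t in zip(genome, target) if g == t)

def greedy_uphill_climb(genome):
    """Always step to the single-site mutant with the greatest fitness."""
    while True:
        current = fitness(genome)
        # Put out a 'feeler' in every direction: every one-site mutation.
        neighbours = []
        for i in range(L):
            mutant = genome.copy()
            mutant[i] = 1 - mutant[i]
            neighbours.append((fitness(mutant), mutant))
        best_fitness, best_mutant = max(neighbours, key=lambda p: p[0])
        if best_fitness <= current:
            return genome, current   # no uphill direction left: stop
        genome = best_mutant

start = [random.randint(0, 1) for _ in range(L)]
end, f = greedy_uphill_climb(start)
print("start fitness:", fitness(start), "-> final fitness:", f)
```

The essential behaviour is that at every step the bug evaluates every one-site mutant and jumps to the best of them, which is exactly the “always move in the direction of greatest slope” rule.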

Admittedly FE’s model is not very realistic; but as FE state, it needn’t be realistic to make their point: it simply shows that fairly basic mathematical conditions can supply a considerable level of active information. However, in spite of FE’s work I have to confess that I have doubts about the efficacy of the current model of evolution as a generator of life. To see why, let’s take a closer look at the GUCB algorithm.

FE put their GUCB algorithm in a configuration space implied by a genome of 3000 bases. At any given starting position the GUCB senses the direction of greatest slope (i.e. greatest fitness) and then moves in that direction. Now, how is this apparently simple algorithm realised in terms of real biology? Well, firstly, in order for the GUCB to determine which direction to move it must first put out feelers in no fewer than 3000 directions. In real biology it is claimed that this is not done by systematically working through the 3000 cases, but by the algorithmically inefficient trial-and-error steps of random mutation. Once a mutation enters a population it must then be tested by the complex selection processes which determine structural viability, environmental fitness and competition. If it survives this test – which may take several generations of progeny – then the organism can move on; if it fails, the process must start all over again with a new randomly selected mutation. So it is clear that successfully moving in any direction at all in configuration space entails, in real biology, a very large number of computational operations and trials.
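The computational-cost point can be illustrated with the same toy genome. The sketch below is not population genetics; it simply counts how many random trial mutations a blind accept-only-improvements walk burns through per accepted step, under my own made-up fitness function.

```python
import random

L = 40

def fitness(genome):
    target = [i % 2 for i in range(L)]
    return sum(1 for g, t in zip(genome, target) if g == t)

def random_mutation_walk(genome):
    """Blind trial and error: mutate one random site, keep it only if fitness
    improves; count how many trials each accepted step costs."""
    trials, steps = 0, 0
    while fitness(genome) < L:           # until the toy optimum is reached
        i = random.randrange(L)
        mutant = genome.copy()
        mutant[i] = 1 - mutant[i]
        trials += 1                       # every mutation must be 'tested'
        if fitness(mutant) > fitness(genome):
            genome = mutant
            steps += 1
    return trials, steps

genome = [random.randint(0, 1) for _ in range(L)]
trials, steps = random_mutation_walk(genome)
print(f"{steps} accepted steps required {trials} trial mutations")
```

Typically the number of trial mutations is many times the number of accepted steps, and in real biology each “trial” is itself a costly, multi-generation affair; that is the sense in which every move in configuration space is computationally expensive.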

Now, I’m not going to contradict the contention that, given large tracts of geological time, real biology is perhaps capable of moving forward in this laborious random-walk trial-and-error process: I’m not a mathematical biologist so I won’t make any claims on that score. But one thing is very clear to me, and it should be clear to anyone else: viz., because the stepping process consumes so much time, the structure that is doing the stepping must be stable and endowed with sufficient persistence and self-perpetuation in the first place. That is, although the structure may not be of the “fittest” quality, it must nevertheless be fit enough to make the next step in configuration space. If the structure wasn’t at least this fit it wouldn’t survive long enough to move anywhere in configuration space. So, implicit in the GUCB model is the assumption that the points in configuration space are all fit – that is, capable of surviving long enough to step through configuration space.

It is the property of stability/survivability rather than fitness that raises the big 64-million-dollar question. Namely, is configuration space populated with enough stable self-perpetuating structures to provide the pathways on which the GUCB can move? For the GUCB to move around in configuration space the class of stable structures must be a continuously connected set. That is, for standard evolution to work the self-perpetuating structures in configuration space must form a reducibly complex set*; that is, working, stable functionality cannot be isolated into concentrated islands. If the set of self-perpetuating structures does exist then I envisage it to look something like a thin spongy structure that stretches across configuration space (see picture at the head of this post).
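The connectedness requirement can be illustrated with a crude stand-in for configuration space: a two-dimensional grid in which each cell is declared “stable” with some probability p. This is my own toy, not a model of real sequence space; it simply shows how, below a critical density, the stable cells fragment into isolated islands rather than forming one sprawling sponge.

```python
import random
from collections import deque

# Toy 'configuration space': an N x N grid of structures. A cell is 'stable'
# (self-perpetuating) with probability p; we then ask whether the stable cells
# form one connected 'sponge' or many isolated islands.
N = 60

def stable_grid(p):
    return [[random.random() < p for _ in range(N)] for _ in range(N)]

def component_sizes(grid):
    """Sizes of connected components of stable cells (4-neighbour adjacency)."""
    seen = [[False] * N for _ in range(N)]
    sizes = []
    for sx in range(N):
        for sy in range(N):
            if grid[sx][sy] and not seen[sx][sy]:
                size, queue = 0, deque([(sx, sy)])
                seen[sx][sy] = True
                while queue:
                    x, y = queue.popleft()
                    size += 1
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if 0 <= nx < N and 0 <= ny < N and grid[nx][ny] and not seen[nx][ny]:
                            seen[nx][ny] = True
                            queue.append((nx, ny))
                sizes.append(size)
    return sorted(sizes, reverse=True)

for p in (0.3, 0.45, 0.6):
    sizes = component_sizes(stable_grid(p))
    print(f"p={p}: largest island {sizes[0] if sizes else 0} cells, "
          f"{len(sizes)} islands in total")
```

For small p the largest island is a tiny fraction of the grid, while for larger p a single component dominates; whether real configuration space sits on the connected or the fragmented side of such a threshold is precisely the open question.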

It was the question of whether this spongy structure actually exists in configuration space which I posed in my configuration space series, a series whose last post can be picked up here. In the second part of this series I wrote:

Axiom 2 tells us that the set of living structures is tiny compared to the set of all possible non-self-perpetuating structures. This fact is an outcome of axiom 1 and the nature of disorder: If living structures occupy the mid regions between high order and high disorder then the logarithmic nature of the vertical axis on the LogZ-S graph will imply that disordered configurations are overwhelmingly more numerous. This raises the question of whether there are simply too few self-perpetuating structures to populate configuration space even with a very thin spongy structure; in fact the spongy structure may be so thin that although mathematically speaking we will have an in-principle reducible complexity, in terms of practical probabilities the structure is so tenuous that it may as well not exist!
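As a back-of-the-envelope illustration of the counting point in that quoted passage, take binary strings of length n and use the number of 1s as a crude proxy for disorder (strings with k near 0 are highly ordered, k near n/2 maximally disordered). This proxy and the numbers are my own illustration, not the LogZ-S construction itself.

```python
from math import comb, log2

# Count how many length-n binary configurations have exactly k ones.
n = 200
for k in (0, 10, 50, 100):
    count = comb(n, k)
    print(f"k={k:3d}: log2(#configurations) = {log2(count):.1f} bits")
```

Configurations at maximum disorder (k near n/2) outnumber the more ordered ones by many tens of orders of magnitude, which is the sense in which any mid-region class of structures is swamped by the disordered bulk.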

Of course I have no analytical demonstration addressing this question, and I doubt anyone else has: how can we count the class of stable structures? For a start it very much depends on the complex environment those structures are in; in fact those structures are themselves part of that environment, and so we have a convoluted non-linear feedback relationship linking structure to environment. I suspect, therefore, that we are dealing with something here that is computationally irreducible.

But be all that as it may, my intuition is that my question is answered in the negative: that is, the number of stable self-perpetuating structures is far too small to populate configuration space sufficiently to connect itself into a spongy structure that sprawls extensively across that space. If this conjecture is correct then it would make conventional evolution and OOL impossible. It was this feeling that resulted in my terminating the configuration space series and returning to earlier ideas that I started expressing publicly in my Melencolia I series, a series I continue to pursue.


Appendix
In this appendix I’ll assume that the class of stable self-perpetuating structures is grouped into a sponge-like structure that stretches extensively across configuration space. If this is the case then we can see that evolution/OOL is less about differentials in fitness than it is about survivability and stability. In the sponge model of configuration space, evolution and OOL are envisaged as traversing the sponge with a kind of random-walk-based diffusional motion. However, it is possible that in some places the random walk is directionally biased, implying that certain parts of the sponge represent “fitter” structures than other parts. In effect, then, a “fitness field” would pervade the structure of the sponge; this would mean that some areas of the sponge act like potential wells attracting the diffusing motion (a toy sketch of this biased-versus-neutral diffusion is given below).

However, it is equally possible that in other places there is no bias and the evolutionary/OOL diffusion is neutral; this is not to say that evolution is then without direction: as I have made clear before, the sponge structure permeating configuration space acts as tramlines directing evolutionary diffusion. This latter point seems to be something that atheists don’t always understand. See the comments section of this post where I took up this question with atheist Chris Nedin. He appears to have absolutely no inkling of just how profoundly directional his concept of evolution is required to be; frankly, he seems to be pulling the wool over his own eyes. See also: http://quantumnonlinearity.blogspot.co.uk/2015/01/the-road-to-somewhere.html
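Here is the promised toy sketch of biased versus neutral diffusion. The “sponge” is simply a narrow corridor of permitted cells that I have invented for illustration; the point is that even the unbiased walk is confined to the corridor’s shape, while a bias makes some regions act like attracting wells.

```python
import random

# A thin 'sponge': a 2-cell-wide strip of permitted cells in a 50 x 50 space.
N = 50
sponge = {(x, y) for x in range(N) for y in range(2)}

def diffuse(steps, bias=0.0):
    """Random walk confined to the sponge; bias > 0 favours moves that
    increase x (i.e. moves toward 'fitter' regions)."""
    pos = (0, 0)
    for _ in range(steps):
        moves = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (pos[0] + dx, pos[1] + dy) in sponge]
        if not moves:
            break
        weights = [1 + bias * (m[0] - pos[0]) for m in moves]
        pos = random.choices(moves, weights=weights)[0]
    return pos

print("neutral walk ends at", diffuse(2000, bias=0.0))
print("biased  walk ends at", diffuse(2000, bias=0.5))
```

The neutral walk wanders along the strip with no preferred end, yet it can only go where the strip permits; the biased walk is additionally drawn toward high-x cells, which play the role of the potential wells mentioned above.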


Footnote: Why I repudiate the de-facto IDists
* It follows, then, that an irreducibly complex set of structures is a highly disconnected set. This definition of irreducible complexity is different from the definition used by the de-facto Intelligent Design community, who once again seem to have screwed up: see the link below about why their definition doesn’t work:
Let me get the following complaint off my chest: when I first came to this debate I had high hopes of the de-facto ID community. After all, they were in the main reasonable and moderate evangelicals like William Dembski. Moreover, I’ve always agreed with their premise that the information burden of our cosmos isn’t trivial and that this presents an enigma. However, I have become increasingly disillusioned with them: they have screwed up on various technical issues like irreducible complexity and the 2nd law of thermodynamics. Their so-called explanatory filter encourages dualistic God-of-the-gaps theology. They are too uncritical of fundamentalism and in some cases harbour unreasonable religious fanatics in their midst. They also have a default well-right-of-centre political stance which I can’t support. They are right-wing and fundamentalist sympathisers, and this seems to be bound up with their disdain for government-funded academics, a group that the fundamentalists, for obvious reasons, also hate.
