Tuesday, April 09, 2013

More on Self Organisation and Richard Johns.

Self Organising Ants in a Sugar Jar.

Organic cells and their machinery act as a kind of recipe or algorithm from which full-sized multicellular organisms “unwind”. There is, therefore, an effective map from a single-cell configuration of atoms to the large-scale multicellular configurations it generates.

Organic cells may be small, but they are nevertheless large atomic configurations of many billions of atoms. Even so, in terms of atomic count they are a lot smaller than the multicellular organisms they define. Since multicellular organisms “unwind” from single cells, the potential in single cells amounts to a kind of data-compressed way of describing the organisms they generate.*1
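
To illustrate the “unwinding” idea, here is a minimal sketch in Python. The seed and rewrite rules are my own invented stand-ins, loosely in the style of the L-systems used to model plant growth, not a claim about real cellular machinery: a one-symbol “seed” plus two short rules expands, under repeated application, into a structure thousands of symbols long.

# A toy "recipe": a one-symbol seed plus two rewrite rules (hypothetical, L-system style).
rules = {"A": "AB", "B": "A"}
state = "A"                       # the compact "seed"

for generation in range(20):      # repeatedly apply the rules, i.e. let the recipe "unwind"
    state = "".join(rules.get(symbol, symbol) for symbol in state)

print(len(state))                 # 17711 symbols have unwound from a 1-symbol seed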

A computation that randomly searched for a configuration of atoms constituting a viable multicellular organism would be an impractically long job, because the class of viable organisms is obviously going to be an extremely small fraction of all possible configurations of atoms.

However, a computation to find a viable organic form could be shortened considerably by randomly searching for a single cell that mapped to a viable organism; after all, a single cell contains a lot fewer atoms. Even so, it would still, of course, be an impractically long job. But the principle is there: searching for the single-cell “algorithms” that generate living forms is a lot less computationally complex than directly searching for the full-grown organism.*2
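
To make the difference in search effort concrete, here is a toy random search in Python. The bit lengths are deliberately tiny, invented numbers (realistic ones would never finish running); the point is only that the expected number of blind guesses grows as 2 to the power of the description length, so a shorter description means a vastly shorter search.

import random

def random_search_trials(target: str) -> int:
    # Count the blind guesses needed to hit one target bit string by pure chance.
    n = len(target)
    trials = 0
    while True:
        trials += 1
        guess = "".join(random.choice("01") for _ in range(n))
        if guess == target:
            return trials

# Tiny stand-ins for an "organism-sized" versus a "cell-sized" description.
for n_bits in (8, 12, 16):
    print(n_bits, "bits:", random_search_trials("1" * n_bits),
          "guesses (expected about", 2 ** n_bits, ")")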

Since cells effectively constitute algorithmic encodings of the multicellular organisms they generate, they can, as I have already said, be thought of as “data-compressed” versions of the much more extensive multicellular configurations. The question naturally arises, then, as to whether further data compression can take place. That is, can organic forms be encoded in even more succinct “algorithms” than single cells? I am, of course, thinking of our physical regime as it is encoded in the laws of physics, and the question of whether, via “self-organisation” (= OOL and evolution), living structures are implied with a realistic probability given the age of the cosmos. Measured in the bit lengths needed to define them, these laws are going to be expressible in something quite a bit shorter than what is needed to directly encode a multicellular form or even a single cell.

Let’s just say for the sake of argument that the laws of physics can be expressed in a few thousand bits of information. That’s definitely going to be quite a bit shorter than the trillions of trillions of bits needed to describe organic forms directly. So, if a suite of physical laws exists which implies a universe capable of generating life in reasonable time, then randomly searching through a “mere” few thousand bits is a lot quicker than randomly searching through the number of bits needed to describe a working organic form, whether unicellular or multicellular.*3
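
Putting rough numbers on that comparison (the figures below are my own order-of-magnitude guesses, purely for illustration, not measured quantities):

# Purely illustrative bit budgets: order-of-magnitude guesses, not measured quantities.
bits_organism = 10 ** 24      # "trillions of trillions" of bits for a direct description
bits_cell     = 10 ** 10      # a single cell: far smaller, still enormous
bits_laws     = 3 * 10 ** 3   # "a few thousand bits" for a suite of physical laws

# Blind-search effort scales as 2**N, so the exponents alone tell the story.
for label, n in (("organism", bits_organism), ("cell", bits_cell), ("laws", bits_laws)):
    print(f"{label}: search space of about 2**{n} configurations")

print(f"speed-up of searching laws rather than organisms: about 2**{bits_organism - bits_laws}")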

***

When I suggested to IDist Richard Johns that it is still possible (given the state of our knowledge) that our particular physical regime may have given rise to organic forms via self-organisation, he said:

But even in that case, self-organisation theories of evolution will be in a difficult position. For they will then be committed to the claim that living organisms are algorithmically (and dynamically) simple. In other words, living organisms are like Pi, merely *appearing* to be complex, while in fact being generated by a very short program. (Vastly shorter than their genomes, for example.)

My reply was:

One more thing: Imagine that you were given the problem of Pi in reverse; that is, you were given the pattern of digits and yet had no clue as to what, if any, simple algorithm generated it. The hard problem then is to guess the algorithm – generating Pi after you have found the algorithm is the easy problem. So to me life remains algorithmically complex even if it’s a product of Self-Organisation.
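
The asymmetry my reply trades on can be seen in code. Once an algorithm for Pi is in hand (here Machin’s arctangent formula, one of many), generating the digits is quick; recovering a short generating program from the digits alone has, in general, no better guarantee than sifting through on the order of 2^k candidate programs of length k bits. The sketch below shows only the easy, forward direction.

# Forward direction: given the algorithm, the digits of Pi come out easily.
# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239), in integer arithmetic.

def arctan_inv(x: int, unity: int) -> int:
    # arctan(1/x) scaled up by `unity`, via the alternating Gregory series.
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits: int) -> str:
    unity = 10 ** (digits + 10)                                   # ten guard digits
    pi_scaled = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi_scaled // 10 ** 10)                             # "31415926535..."

print(pi_digits(50))
# Reverse direction: guessing which k-bit program produced a given digit string is,
# in the worst case, a search through roughly 2**k candidate programs.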

And that latter statement of my reply becomes clearer in the scenario I have been developing above: for if a suite of laws exists capable of generating organic forms with reasonable probability within reasonable time, then we still have to search through all the combinations available to thousands of bits, and in terms of the age of the cosmos that is still an impractically long job! So even if life can be generated by the laws of physics, it can hardly be classified as algorithmically simple! Yes, life would be algorithmically simpler, but far from simple in an absolute sense!
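
A back-of-the-envelope check on that claim, using standard round figures for the age of the universe and the Planck time:

import math

age_of_universe_s = 13.8e9 * 3.156e7             # ~13.8 billion years, in seconds
planck_time_s = 5.39e-44                         # Planck time, in seconds
max_trials = age_of_universe_s / planck_time_s   # one trial per Planck time: absurdly generous

print(f"trial budget since the Big Bang: about {max_trials:.1e}")
print(f"that budget is exhausted by a blind search over only ~{math.log2(max_trials):.0f} bits")
# So a blind search over even a few *thousand* bits of candidate law-suites is far beyond
# anything the cosmos could have carried out by chance alone.

Roughly two hundred bits of blind search already uses up the entire budget, which is why “algorithmically simpler” is still a very long way from “algorithmically simple”.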


Footnotes:

*1 Strictly, I am not talking here about data compression as it is understood in practical data processing, where an algorithm must create a one-to-one map between a compressible object and its compressed version and allow a reversible transformation from one to the other. In the more general idea I am putting forward here the transformation may be neither reversible nor one-to-one (as is in fact the case with “lossy” JPEG compression). Moreover, physical algorithms don’t necessarily have the halting properties that effectively say “we have arrived at the required configuration!”. In the “compression” I’m talking about here, all I’m looking for is a general enlargement of the size of the data field via the application of the physical algorithms: the map here is a much fuzzier concept whereby the law-and-disorder suite is simply required to generate viable organisms with a reasonable probability within reasonable time.


*2 Clearly a random search is certainly not the most efficient way to search an exponential tree! Systematic searching may arrive at a solution much faster (although in worst-case scenarios systematic searching can take even longer than a random search). However, random search, in having no particular systematic bias (i.e. no initial information), can be used as a kind of standard measure of computational complexity; it’s a bit like defining the distance to the stars using the time a snail would take to crawl there. Using random search as the fundamental unit of complexity measure only has the effect of introducing a very large constant of proportionality into the exponential expression for computation time. Random search, as it were, gives us a likely upper limit on the search time, measured from a kind of informational absolute zero; that is, from the absolute zero of a “know nothing, learn nothing” perspective.
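
For what it’s worth, the “know nothing, learn nothing” baseline is just the mean of a geometric distribution: if guesses are drawn uniformly at random from an N-bit space containing m acceptable targets, the expected number of guesses is 2^N / m, and any smarter search only changes the prefactor in front of the exponential. A minimal expression of that baseline:

def expected_random_search_trials(n_bits: int, n_targets: int = 1) -> float:
    # Mean number of uniform random guesses needed to hit one of `n_targets`
    # acceptable strings in a space of 2**n_bits possibilities.
    return 2 ** n_bits / n_targets

print(expected_random_search_trials(300))   # ~2e90 guesses for one target in a 300-bit space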

*3 Here I haven’t mentioned that the computation required to determine whether we have reached a life-generating universe includes a (polynomial-time) verification that it is capable of generating life; the only sure-fire way of doing this is to run the scenario through to see if it does produce life, which of course would require a universe lifetime! This simply changes the constant of proportionality in the time-complexity expression.
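
In symbols, this footnote just says that the total time is (number of candidate law-suites tried) multiplied by (verification time per candidate), with the verification time folded into the constant. A hedged sketch, taking the verification time to be on the order of a universe lifetime:

import math

def log10_total_search_time_s(n_bits: int, t_verify_s: float) -> float:
    # log10 of: (2**n_bits candidate law-suites) * (verification time per candidate).
    return n_bits * math.log10(2) + math.log10(t_verify_s)

# A few thousand bits of law, each candidate "verified" by running it for ~4.4e17 seconds:
print(log10_total_search_time_s(3000, 4.4e17))   # about 921, i.e. ~10**921 seconds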
