Saturday, May 30, 2015

Gang-Ham style logic

Logic, Ken Ham style: No cat has eight tails. A cat has one tail more than no cat. Therefore, a cat has nine tails.

As a bit of light relief between posts I thought I would feature the following piece of thinking by fundamentalist theme park manager Ken Ham (see "NASA Scientists on God, Creation, and Evolution", dated 30 May). The "logic" here has a close parallel with the logic I showcased in this post: http://quantumnonlinearity.blogspot.co.uk/2013/12/of-comet-tails-and-cat-tails.html

But there are also many scientific problems with the idea of an old universe. For example, the universe is full of brightly burning blue stars. These stars burn so brightly that they would burn out in just a few million years, and yet they appear all over the universe no matter how far away we look. Scientists posit that they must be forming spontaneously even in modern times in order for them to remain so populous all throughout the universe. Yet scientists have never observed even one forming!* (see footnote) This confirms a young universe, not a billions-of-years-old one.

Now, if Ken could provide evidence that blue stars last for less than 10 thousand years (as opposed to millions of years) I might concede that there is a strong case for a universe of only 6000 years in age! But no, all Ken can do is appeal to the unknowns of blue star formation. Ken links to an AiG article by one of his tame academics, an article that produces no strong evidence for a 6000 year old cosmos but simply exploits the tendency among fundamentalists to automatically interpret problems and unknowns in current cosmology as strong evidence for a 6000 year old cosmos, as does Ham himself; viz: "This confirms a young universe, not a billions-of-years-old one." And while I'm here I'll mention that Russ Humphreys' geocentric gravitational model, a model which attempts to solve the YEC star light problem, posits an old cosmos beyond the immediate cosmic vicinity of the Earth! Thus, presumably, Humphreys' model has a blue star problem!

Here's another clunker by Ham:

Now, an old universe is not the only idea in secular thought that has major scientific problems. Evolution also has huge problems. For example, evolution requires the addition of huge amounts of brand-new information into the DNA of a creature in order for new features to arise. But there is no known process that adds brand-new information into the genome of a creature. But without new information you absolutely cannot turn an amoeba into an astronaut no matter how much time you have! Evolution just cannot happen.

This man is a complete embarrassment to the faith! He's utterly unaware of the information debate: see here: http://quantumnonlinearity.blogspot.co.uk/2015/05/algorithms-searches-dualism-and_12.htm

Footnote:
* This statement "never observed even one forming" parallels the fundamentalist claim that speciation has never been observed. It is a statement that will appeal to the largely scientifically illiterate fundamentalist audience whom Ham is targeting. The reference to "observation" trades on the illusion that we can directly observe some parts of the world, and that when we do this gives special authority to those things we think we observe directly. No; observations are samples taken from presumed background theoretical structures; if the background theoretical structure is highly organised and the observations are consistent with it, the probability of the theory being right is enhanced: See here: https://drive.google.com/file/d/0BzLwnl6qE_yeSU5kX2lPa3Z6dlU/edit?pli=1

Of course, the likelihood of us actually seeing a blue star formation event depends on how long it takes: If blue star ignition only takes place over a few months the likelihood of seeing it happen in human time scales is very small.
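To put some purely illustrative numbers on that: if ignition were visible for about one year out of a blue star lifetime of roughly 10 million years, then the fraction of blue stars caught in the act at any one moment would be about

1/10,000,000 = 10^-7

…so even a survey of a million blue stars would most likely catch none of them mid-ignition.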

Tuesday, May 19, 2015

Soft Core Science

The following passage by Intelligent Design guru V J Torley appeared on the ID apologetics site Uncommon Descent on 15th May. It is a passage about the nature of the intelligent agent who from time to time is thought by these de facto IDists to "intervene" in the "natural world". It provides ideal evidence of just how soft the soft science of ID is:

When I speak of the agent’s “goals,” I don’t mean the agent’s personal motives for doing something, which we have no way of inferring from the products they design; rather, I simply mean the task that the agent was attempting to perform, or the problem that they were trying to solve. Beyond that, there is nothing more that we could possibly infer about the agent, unless we were acquainted with them or with other members of their species. For instance, we cannot infer that the designer of an artifact was a sentient being (since the ability to design doesn't imply the ability to feel), or a material being (whatever that vague term means), or a physical entity (since there’s no reason why a designer needs to exhibit law-governed behavior), or even a complex or composite entity. To be sure, all the agents that we are familiar with possess these characteristics, but we cannot infer them from the products designed by an agent. Finally, the fact that an agent is capable of performing a variety of functions does not necessarily imply that the agent is composed of multiple detachable parts. We simply don’t know that. In short: the scientific inferences we can make about non-human designers are extremely modest.

This tells us how unknown and unknowable is de facto ID's version of the intelligent agent. It is this all-but-unknowable unknown that is purported to make good the design gaps that these ID people believe to be at the heart of the natural world. Of course, everyone knows the subtext; the designer the ID community have in mind is none other than God himself who, it is envisaged, is acting behind the Turing screen as if he were an alien engineer tinkering with otherwise "natural processes". No doubt some gaps are here to stay, but my view is that divine proactiveness is not seen in these gaps alone. Moreover, God's presence in "natural forces" is more than just a sustaining power. In fact the idea that I continue to probe is that these so-called "natural forces" are God's intelligence in action, action which therefore becomes the subject of empirical investigation.

Torley (and his ID community) see nature and God's intelligence as two very distinct categories, one with cognitive powers and the other without. Torley's model of divine intervention is almost "archaeological" in the sense that the artifacts are living things, a product of Divine intelligent agency some time in the distant past. Using the epistemic of the explanatory filter he sees a stark choice between natural forces and God's intelligence – for Torley those natural forces are utterly insentient, with the ability to achieve little more than high order, chaos and randomness. What Torley is telling us above is that his postulated intervening intelligence is all but inscrutable. Consequently, once he has concluded that a material configuration is the result of an intervening intelligence there is little more he can say because, Torley concedes, so little is known about that intelligence: "…the scientific inferences we can make about non-human designers are extremely modest". It follows that the predictive efficacy of Torley's version of ID is concomitantly reduced; since we know so little about the intelligence concerned, it is going to be difficult to come to any expectations in advance about what form the work of that intelligence will take. And yet the ID community claims to be able to make predictions such as economy of design and absence of junk DNA. I suggest that they cannot make these predictions unless they are actually making implicit assumptions about the nature of the intelligence they are dealing with; there is therefore an inconsistency in Torley's thought: he can't claim to know so little about the nature of the intelligent agent and yet at the same time try to pass on predictions that contain implicit assumptions about that intelligence. After all, motives – that is, emotions – are a huge part of any practical intelligence, and we need some inkling of those motives to make predictions. But when we do hazard postulating something about the nature of the intelligence involved, the resultant science is far from exact; in fact it is a science that is a lot softer than archaeology (see link below).

Two other points about Torley's post:

1)      I have a feeling that Torley's remark about the agent not necessarily being "composed of multiple detachable parts" is a nod by Torley to Thomas Aquinas' philosophy. Aquinas believed that God is simple, without composition of parts, i.e. indivisible. By modern standards Aquinas knew little about any analytical breakdown of intelligence. To me it seems inconceivable that something as complex as God doesn't have multiple parts in the analytic sense, although, of course, those parts would not be literally detachable. I suspect that here Torley has been seduced by Aquinas' philosophy of "substance".

2)     Stupidly, to my mind, Torley even claims that an intelligent agent might be insentient: to date this seems highly implausible. A designing intelligence would have to be highly motivated; our current understanding is that all highly motivated, goal-seeking complex systems are the seat of a motivating sentience. For example, it is a form of irrational solipsism to suggest that the higher mammals are anything other than conscious/sentient. Notably, Roger Penrose's book "Shadows of the Mind" is based on the premise that real intelligence and consciousness go together.

To finish let me just say I have no illusions that evangelical atheists will be anything other than vociferously opposed to theism in any form and may argue against it from a basis which caricatures science. But there is little to be gained from the unnecessarily dichotomising models of the de facto ID community, models which place much about the evolution of life well beyond conventional science and into the realm of unknown powers that in Torley's own oxymoronic terms may not even be sentient!


Relevant Link:

Saturday, May 16, 2015

Brian Cox and The Fallacies of Hope.

Part 12 of Kenneth Clark's Civilisation series

Professor Brian Cox is a scientific poster-boy for our time. This poster-boy role is no doubt helped by Cox being the very opposite of the stereotypical semi-autistic scientific nerd, inaccessible to the average human being. In fact I don't think many people would disagree that Cox comes over as a thoroughly likeable sort of bloke. He's the trendy, baby-faced, electric-guitar-strumming boy next door, apparently easy to connect with and the last person you'd expect to be a professor of particle physics. This apparently ordinary lad from ordinary old Oldham does not look like one of those academics who would take refuge in an ivory tower. In short, Cox is academic science's much-needed human face. Without doubt Cox's style is a gift to the media.

Before I go any further let me make it clear that as a Christian I abhor the practice of the Christian right-wing fundamentalists who would use Romans 1 to accuse an affable atheist like Cox of "suppressing the truth by their wickedness". These fundamentalists know who they are so I won't mention them by name here. In any case Romans 1 is not about atheists – it's about idolatry – that is, the misrepresentation of God – in Roman society: if God doesn't figure much in your thinking it's difficult to be an idolater; except of course in the eyes of the fundamentalists who take contradiction of their opinions to be an affront to God himself.

I felt this blog post coming on because I have recently read Cox's book "Human Universe". This book is a window into the thought life of a genuine atheist and cuts across the fundamentalist opinion that most if not all atheists are anti-God conspirators. For a variety of apparently good reasons Cox just can't find it in his heart to believe, and for him it simply doesn't make sense that anything out there should have a scintilla of feeling for humanity. To him the cosmos is a dispassionate desolation, manifestly governed by impersonal forces. It is a place where humanity is a rare natural anomaly of no greater importance than ants. Human significance only comes by way of humanity's own estimation of itself. Cox's book exposes the heart of someone whom I would classify as an almost reluctant atheist, and it is a slanderous injustice to accuse Cox of suppressing the truth by his wickedness; such an accusation only makes sense to religious sectarians who see the world through the fundamentalist paranoia that is the precursor of conspiracy theorism. But I'd better not start on that subject here!

I liked Cox’s observation on the way science has stumbled on big things by starting out in a very small humble way:

The purpose of recounting the story of Galileo is not to attack the easy target of the inquisition (which nobody expects). Rather, it is to highlight the fact that the smallest and most modest of scientific observations can lead to great philosophical and theological shifts that in turn can have a tremendous impact on society. Galileo by looking through a telescope, doing some drawings and thinking about what he saw, helped undermine centuries of autocratic idiocy and woolly thinking. In doing so he got himself locked up, but also bridged the gap between Copernicus and Kepler, and paved the way for Isaac Newton and ultimately Albert Einstein to construct a complete description of the universe and our place in it. (p43)

Why do I like [science] so much?  The reason is that it is modest – almost humble in its simplicity – and this, in my opinion, is the key to the success of science. Science isn’t a grandiose practice; there are no great ambitions to understand why we are here or how the whole universe works or our place within it, or even how the universe began. Just have a look at something – the smallest trivial thing – and enjoy trying to figure out how it works. That is science. (p40)

The remarkable thing about science, however, is that it has ended up addressing some of the big philosophical questions about the origin and fate of the universe and the meaning of existence without actually setting out to do so, and this is no accident. You won't discover anything meaningful about the world by sitting on a pillar for decades and contemplating the cosmos, although you might become a saint. No, a truly deep and profound understanding of the natural world has emerged more often than not from the consideration of much less lofty and profound questions, and there are two reasons for this. Firstly, simple questions can be answered systematically by applying the scientific method……whereas complex questions and badly posed questions such as "why are we here?" cannot. But more importantly, and rather more profoundly, it turns out that the answers to simple questions can overturn centuries of philosophical and theological pontificating quite by accident. (p40)

There is much truth in all that. In fact some of the prosaic observations that have led to profound thoughts don't even need Galileo's simple telescope: Olbers' paradox, which swings on the observation that the night sky is dark, set the cat amongst the pigeons in cosmology. I'm also reminded of my posts here and here about the work and technology of the millers, artisans whose workaday concepts were to prove of universal and profound significance in an increasingly techno-scientific society. The idea of the humble troubling the counsels of the high and mighty has great romantic and popular appeal.

But we can't deny that we humans like big thoughts, and I suspect that one of the reasons why we like science – and this probably includes Cox himself – is precisely because its non-greedy approach of not aiming for the big time nevertheless often comes up trumps by some convoluted and unexpected route. There is, in fact, still an implicit role here for those big "badly posed" questions: we keep those questions at the back of our minds and in due time they can be used as the measure of the success of science. But true; greedy methods that go straight for the gold don't always work (although sometimes they may!).

However, let's get back to Cox's personal take on the cosmos. As regards the question "What does it all mean?", Cox has this to say:

Building on these ideas, my view is that we humans represent an island of meaning in a meaningless universe, and I should immediately clarify what I mean by meaningless. I see no reason for the universe in a teleological sense; there is surely no final cause or purpose. Rather I think meaning is an emergent property; it appeared on Earth when the brains of our ancestors became large enough to allow primitive culture…. (p9)

That is, you need sentience before meaning, and presumably purpose, becomes meaningful – I'd agree with that, but some of the meanings that have emerged in human culture have been rather horrific: need I name the Inquisition, the Nazis, Stalin, Pol Pot, Jonestown, Mao Tse Tung, Islamic State…?

The reason for Cox's bleak view of the wider cosmos appears to be entirely down to the well-known effect of the Copernican revolution in its generalised form: in stages humanity has not only become aware of the sheer size of the cosmos and its own apparent insignificance, but it has also lost any Ptolemaic sense of human centrality. It is with this context in mind that Cox talks about the "dizzying physical relegation" (p8) of humankind and the history of cosmology which has been a "journey into insignificance" (p32) resulting in "our demotion" (p33) and our "magnificent relegation" (p59). On the penultimate page of his book we read:

It is surely true that there is no absolute meaning or value to our existence when set against the limitless stars. We are allowed to exist by the laws of nature and in that sense we have no more value than the stars themselves.  (p269)

In the section of his book about the golden records onboard the Voyager spacecraft Cox says of these very long shots at interstellar communication:

It is a desire to reach out to others, to attempt contact even when the chances are vanishingly small; a wish not to be alone. The golden disks are futile and yet filled with hope; the hope that one day we may know the boundaries of our loneliness and lay to rest the unsettling noise that accompanies the enduring silence. (p81)

To Cox human existence is no more than a chance anomaly, significant only because of its rarity:

....our outrageously fortunate existence and our indescribable significance as an island of meaning in a sea of infinite stars (p269)

Our existence is a ridiculous affront to common sense beyond any reasonable expectation of the possible based on the simplicity of the laws of nature, and our civilization is the combination of seven billion individual affronts (p271)

Yes, an affront to common sense notions of randomness and also an affront to more subtle mathematical concepts of randomness; that's why some of us try to make sense of this datum dot provided by our outraged intuitions using theism. But on his own admission Cox has to believe and have faith in something: I have often remarked that atheism teeters on the brink of nihilism and/or postmodern anti-foundationalism, but with Cox his hope against nihilism is invested in humanity:

I am a believer in the innate rationality of human beings given the right education, the right information and the right tuition in how to think about problems. I believe that people will make a rational choice. …I have to believe that, otherwise this book is a futile gesture. (p265)

Looking at some of those humanly emerging meanings I have alluded to, it seems to me that Cox's hope in human rationality may not altogether be rational: the Inquisition, the Nazis, Stalin, Pol Pot, Jonestown, Mao Tse Tung and Islamic State have all taken in educated people, but what did education do for them? However, I do appreciate Cox's point – we have to believe in something; it's what helps us get out of bed each day.

Cox's humanism may keep him going, but his cosmic vision is ripe for an existential crisis. Below I quote from a section of a recent piece of writing of mine:

***


At the start of the 12th episode of his Civilisation series we find Sir Kenneth Clark in the clean, rational and regular neoclassical interior of Osterley Park in England. As he looks upon this epitome of rational control he says:

A finite reasonable world, symmetrical, consistent and ….enclosed. Well, symmetry is a human concept because with all our oddities we are more or less symmetrical, and the balance of a mantelpiece by Adam or a phrase by Mozart reflects our satisfaction with two eyes, two arms, two legs and so forth. And "consistency"… again and again in this series I've used that word as a term of praise. But "enclosed", that's the trouble. An enclosed world becomes a prison of the spirit, one longs to get out, one longs to move. One realises that symmetry and consistency, whatever their merits, are the enemies of movement……and what is that I hear, that note of urgency, of indignation, of spiritual hunger? Yes, it's Beethoven, it's the sound of European man reaching for something beyond his grasp. We must leave this trim finite room and go to confront the infinite. We've a long rough voyage ahead of us and I can't say how it will end because it isn't over yet. We are still the offspring of the Romantic Movement and still victims of the fallacies of hope.

The romantics of the late 18th and 19th centuries rebelled against the deconsecration of the cosmos through the symmetries and regularities of Enlightenment thinking and yearned for the infinite. They attempted to return to a much more intuitive apprehension of the natural world. As Clark says, the journey isn't over yet, and even today our romantic intuitions and aspirations continue to do battle with our reason. I would suggest that two words are missing from Clark's last sentence: …victims of the fallacies of hope… in man! … I want to look at the question of why science has left us high and dry…..







Tuesday, May 12, 2015

Algorithms, Searches, Dualism and (Possibly) Declarative Computation. Part 2

Here I continue with my analysis and comments in relation to this post by Joe Felsenstein and Tom English (FE for short) on Panda's Thumb. Part 1 of this series can be seen here.

A priori sophisticated targeting methods and a distant object: is this all-American paradigm for hitting targets the right one for evolution?

In their paper here, North American Intelligent Design (ID) theorists Dembski, Ewert and Marks (DEM for short) use the idea of an algorithmic search looking for targets. The target configurations at the back of their minds are, no doubt, the configurations of life. However, according to FE, DEM use a model where the actual target sought by the algorithm is immaterial:

DEM have a "target" for which the search is searching. Except that they don't actually require that the "search" actually search for something that makes sense. The target can be any set of points. If each point is a genotype and each of them has a fitness, the target can be genotypes with unusually high fitnesses, with unusually low fitnesses, mediocre fitnesses, or any mixture of them. They do not have to be points that are "specified" by fitness or by any other criterion. DEM do not require that the "search" even consider the fitnesses. They calculate the fraction of all points that are in the target. If |T| is the size of the target, and we divide that by the number of points in the space, N, we get p = |T|/N. This of course is also the probability that a random point drawn uniformly from the space hits the target.

What I guess DEM are saying, then, is that their result is very general: whatever the target, whether it be the configurations of life or something else, in terms of computational complexity the conclusion is the same for small targets (i.e. relatively small |T|); namely, that they are extremely hard to find, whatever they may be. Jumping ahead a bit, we might conclude therefore that if an algorithm is to find a specified target in reasonable time and with a reasonable probability, that algorithm must be suitably provisioned with the right up-front information to do so. Right? Well, that's one solution to the problem. (More about that question another day.)
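To make this concrete, here is a minimal sketch (my own toy Python with invented numbers, not anything from DEM's paper) of how badly a blind uniform search fares against a small target:

import random

# Toy illustration (invented numbers): a blind uniform search for a
# small target in a large configuration space.
N = 10**6                                   # size of the space
target = set(random.sample(range(N), 10))   # a target of |T| = 10 points
p = len(target) / N                         # chance one draw hits: 1e-05

draws = 100_000
hits = sum(random.randrange(N) in target for _ in range(draws))
print(f"theoretical p = {p:.0e}, observed hit rate = {hits / draws:.0e}")

Even a hundred thousand blind draws are expected to hit this modest target only once; shrink the target or grow the space and the blind search becomes hopeless on realistic time scales.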

One of DEM's important intermediate conclusions is expressed by FE as follows (my emphasis):

DEM assume that there is a baseline “search” that does not favor any particular “target”. For our space of genotypes, the baseline search generates all outcomes with equal probability. DEM in fact note that on average over all possible searches, the probability of success is the same as if we simply drew randomly (uniformly) from the space of genotypes.

That is, with respect to the class of all possible searches the average search will do no better than random chance. This, it seems, is a consequence of the search concept DEM are using. FE describe this concept as follows:

Searches as distributions on the space of points
DEM consider the probability distribution of all outcomes of a search. Different instances of the search can find different results, either because they choose different starting points, or because of random processes later during the search. They assume very little about the machinery of the search – they simply identify the search with the distribution of results that it gets. Suppose that two searches lead to the same distribution of outcomes, say a probability 0.6 of coming up with point x1, probability 0.4 of coming up with x12, and probability 0 of everything else. They consider these two processes to be the same identical search. They don't consider what intermediate steps the searches go through. Correspondingly, two searches that lead to different probability distributions of outcomes are considered to be different searches. All distributions that you can make can apparently be found by one or another of DEM's search processes. From this point on they talk about the set of possible distributions, which to them represent the set of possible searches.

Note that this means that they are including “searches” that might either fail to be influenced by the fitnesses of the genotypes, and even ones that deliberately move away from highly fit genotypes, and seek out worse ones. Anything that gets results is a “search”, no matter how badly it performs.


On DEM's definition, then, a search is identified by the probability distribution it returns over the total possible outcomes of the search. If we take the ith outcome, then for a particular search distribution the probability of the ith outcome, P_i, will have a value between 0 and 1. Given that the ensemble of distributions is in fact the set of all possible distributions, then selecting a distribution at random will effectively make P_i a random variable. But this random variable is subject to the constraint:

Σ P_i = 1

Given this constraint it follows that if DEM-type searches are selected at random then the varying values of P_i must average out to a value a_i where we require:

Σ a_i = N a_i = 1

….and where N is the total number of possible outcomes. From this relation it follows that a_i, the average value of the distribution for the ith outcome, is 1/N. This value 1/N is in fact identical to the probability of selecting an outcome at random. Hence, provided there is no particular weighting or bias placed on the way the searches are selected, it follows, as FE state, that on average over all possible searches, the probability of success is the same as if we simply drew randomly (uniformly) from the space of genotypes. So this result of DEM looks to be intuitively agreeable.
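As a numerical sanity check on this averaging argument, here is a small sketch of my own (it randomizes over distributions using a flat Dirichlet distribution, which is just one convenient way of picking a distribution 'at random'; DEM's formal setup is more abstract):

import numpy as np

# Draw many probability distributions over N outcomes "at random" and
# check that the probability assigned to any fixed outcome averages 1/N.
N = 50
rng = np.random.default_rng(0)
dists = rng.dirichlet(np.ones(N), size=20_000)   # each row sums to 1

print("average P_i for outcome 0:", dists[:, 0].mean())
print("uniform baseline 1/N:     ", 1 / N)

The two printed numbers agree closely: the average randomly chosen 'search' does no better than a uniform random draw from the space, just as FE report of DEM's result.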

Although, as FE tell us, DEM make few assumptions about the mechanics of the search, typically algorithmic searches walk step by step through configuration space from one configuration to the next, where adjacent configurations in the walk are separated by small differences. This kind of configuration space can be modelled as a network of connected nodes where the walk metric between configurations is the smallest possible change between two configurations. FE approach this walk concept of search by way of a concrete example, namely a network formed from the possible configurations of a genotype of 1000 bases. Using this model, a particular genotype has as its immediate neighbors all those configurations that can be formed simply by changing any one base on the genotype to any of the three other bases available from the set of four possible bases: A, G, C and T. It follows that each genotype is directly connected to 3000 near neighbors.
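For concreteness, a short sketch of my own showing that neighbor count (the genotype and helper function are purely illustrative):

# Each of the 1000 bases can mutate to one of the 3 other bases, giving
# 3 x 1000 = 3000 immediate neighbors in the configuration network.
BASES = "AGCT"

def neighbors(genotype):
    # Yield every genotype differing from `genotype` at exactly one base.
    for i, base in enumerate(genotype):
        for other in BASES:
            if other != base:
                yield genotype[:i] + other + genotype[i + 1:]

genotype = "A" * 1000                         # an arbitrary 1000-base genotype
print(sum(1 for _ in neighbors(genotype)))    # prints 3000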

With this concrete example in mind, a search will consist of a starting genotype plus the subsequent walk through configuration space. If we take a starting genotype and then select a walk at random, then the likelihood is that the subsequent walk will conform to random-walk statistics and generate a Gaussian probability envelope across the configuration space. It then follows, of course, that targets which are nearest the starting configuration will likely be found first. However, when searching for a target we must take into account the chances of starting at a configuration in the near vicinity of the target; if the starting configurations are being selected at random, it follows that starting points near the target are less likely than wider misses. It turns out that the increased chance of finding a target from a nearby starting configuration is exactly offset by the reduced chance of selecting a starting configuration in the near vicinity of the target.
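One way to see this offsetting effect numerically is with a toy model (my own sketch on a simple ring of nodes, not the genotype network itself): if the starting node is drawn uniformly and the walk is symmetric, the endpoint remains uniformly distributed, so the chance of finishing in the target stays at |T|/N however long the walk.

import random

# Toy model: symmetric random walk on a ring of N nodes. With a uniform
# starting point the endpoint is also uniform, so the probability of
# ending in the target stays at |T|/N = 0.05 for any walk length: the
# advantage of starting near the target is exactly offset by the
# unlikelihood of such a start.
N, steps, trials = 100, 30, 200_000
target = set(range(5))                    # |T| = 5

hits = 0
for _ in range(trials):
    pos = random.randrange(N)             # uniform random start
    for _ in range(steps):
        pos = (pos + random.choice((-1, 1))) % N
    hits += pos in target

print(hits / trials)                      # close to 0.05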

This tradeoff between the chance of finding a search and the chance of the search finding a target is the subject of a general theorem by DEM. This theorem can be expressed as follows:

Probability of finding a search = p/q   (p < q)

....where p is the probability of finding a target completely at random and q is the probability of finding a target given the search method in question. It can be seen from this relationship that if we attempt to increase the chance of finding a target by increasing q, this simply has the effect of decreasing the probability of finding the search algorithm. I give my 'dime store' proof of this theorem in the first of my Dembski and Felsenstein posts – see here.
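For what it's worth, DEM's bound can also be seen as an instance of Markov's inequality (a sketch in my own words, not DEM's derivation): let Q be the probability mass that a randomly chosen search places on the target. Since, as derived above, each outcome's probability averages 1/N, the mass on a target of |T| outcomes averages |T|/N = p. Q is non-negative, so for any performance level q:

Fraction of searches with Q ≥ q  ≤  (average of Q)/q  =  p/q

In other words, searches that perform at least as well as q make up at most a fraction p/q of the ensemble; demanding better performance q proportionately shrinks the supply of searches that deliver it.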

***

FE don't attack DEM's mathematical model; I don't think anyone is doing that. What is at stake is the significance of DEM's work and how they use this work to support their philosophy. In fact the mere content of DEM's theorem is intuitively compelling: systems where organization and constraints are at a minimum – that is, where there is no a priori bias – will not seek out small targets (such as life) in realistic times. It follows then that if a system is capable of generating life in realistic times it must be provisioned to do so. One (and I repeat, one) of those ways of provisioning the system is by 'front loading' it – that is, by giving the system what DEM call 'active information' from the outset. In fact, I think you will find that FE would agree with this conclusion, although they are unlikely to favor the term 'active information'. But it is when the argument passes on to the realm of biased systems that FE come into their own, as we shall see. However, we will also see that the battleground is philosophical dualism. As good dualists they all see it as a choice between natural forces and God. It is that debate which I will be considering next. In due time I also hope to show that 'active information' isn't the only way forward.

Saturday, May 09, 2015

Algorithms, Searches, Dualism and (Possibly) Declarative Computation. Part I

Western Dualism just doesn't make sense...

In this post on Panda's Thumb, Joe Felsenstein and Tom English continue the work of challenging the de facto Intelligent Design movement. This latest series of mine is going to be all but a rerun of my previous posts on Felsenstein and Dembski, which can be read here and here. However, in this latest post on Panda's Thumb, Felsenstein and English (FE for short) address a more recent paper by Intelligent Design theorists William Dembski, Winston Ewert, and Robert J. Marks II (DEM for short). FE have done a lot of hard work in not only understanding and explaining DEM's results but also helping their readers appreciate the status of that work and its limits. I trust FE's expertise enough to proceed without reading DEM. This saves me a lot of time and brain strain, so many thanks to FE for their very competent efforts.

Let me say this to start with: as I have said before, I have great respect for someone like William Dembski, who seems to be one of the leaders of the de facto ID community. He is a moderate evangelical Christian who by no means stands in the tradition of the nasty heretic-hunting fundamentalists who so often grab center stage in the evolution/creation debate and discredit not only themselves but Christians in general. Nevertheless, the polarization that afflicts this subject means that even evangelical moderates like Dembski are positioned very much on the opposite side to the likes of FE. But even though I'm a Christian myself, I have to say that I'm not at all sure I could side with the dualistic God-of-the-Gaps philosophy which seems to be the habitual mode of thought behind the de facto ID community; in fact, my observations suggest that this community is bound to interpret DEM in terms of God-of-the-Gaps ID. In contrast, FE are atheists, but nevertheless as Westerners they will inevitably share a Western dualist conception of God. That is, they will conceive 'God' and 'natural forces' as two very distinct categories. Of course, FE as atheists have done away with the uneasy and tense relationship between God and 'natural forces' by denying that there is any such thing as 'God'. Consequently they obviate the problem of two classical categories that don't sit well together. This is not to say, of course, that their atheistic monism harmonizes with our deepest religious impulses.

Although I would ultimately take issue with the philosophy underlying the de facto ID movement, I nevertheless believe that not only are DEM's results mathematically sound, but that they also present an enigma. Viz: why should the world behave in such an ordered fashion as to have the capability of generating life in so short a time? DEM's work suggests that unconstrained/unbiased/random 'searches' simply can't get the required results quickly enough. So the debate really revolves round the question of the enigmatic organization of a cosmos that must, it seems, be very evolution friendly (I'm using the word 'evolution' here in the very general sense of life generating). But on this question I probably have more in common with FE's monism than with the de facto ID community's residual Western dualism; for like FE I find myself committed to the scientific endeavor of finding out just what it is about 'matter' which gives it that providential fruitfulness referred to by Sir John Polkinghorne, a fruitfulness which means that the cosmos has generated the complex organized structures we call life. In effect I reject the 'explanatory filter' epistemic of the de facto ID movement when it is used theologically; for this epistemic superimposes on the debate a sharp distinction between sentience and matter, thereby bringing about a false dichotomy between God and 'natural forces'. This binary category has the effect of curtailing investigations into the proactive fruitfulness of God's world of matter by attributing living structures to the inscrutable "interventions" of an unknown intelligence, unknown in power, motive and method. The upshot is that de facto ID is not unlike the science of archaeology, a science which assesses material artifacts and tries to guess the motives and methods of human ancestors. But as archaeologist Francis Pryor has said, archaeology is not an exact science. And so it is with de facto ID, except that ID is likely to be an even softer science than archaeology; for the sentience envisaged to "intervene" in the natural world is far more alien than human!

…to be continued.
