Get with the Program

Programming • by Brad Nelson • 10/16/14
The more one reads about DNA and the various systems of control and manufacturing as carried out by proteins inside the cell, the more it looks like complex programming. In fact, it is programming. In order to build the structures in a cell, you have to have a specific program.

And anyone who has ever tried to create even a mildly complex computer program has quickly learned how difficult that is even when using all the powers of logic and foresight at one’s disposal.

I’ve done a bit of dabbling with programming here and there, including a small-footprint shareware calendar for Mac OS X. And let me tell you, even with the best intelligence at hand (or the average, as I have in this regard), it’s darn difficult to construct something in the real world via programming. There is no incrementing your way to a solution, other than the process of trial-and-error and debugging. You very much have to have in mind something you want to construct, and then you necessarily use the constrained, arbitrary, and obtuse methods of a particular language to do so. That is, the medium (computer bytes, circuit boards, proteins, whatever) is not the thing. There’s nothing in the medium that is “natural” to the end result (which is why, in practice, one is usually fighting with the tools). Thus you won’t explain programming via physics and chemistry any more than you will explain the solitaire game you play on your computer via electricity.

And most programs that do anything worthwhile have various modules. These modules don’t each do anything particularly clever, but each is necessary and is part of the whole. For my calendar, for instance, there would be a module for handling clicks on the calendar dates, another for then displaying those dates, another for recording any info typed into the text field, etc.
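
To make the point concrete, here is a minimal, hypothetical sketch (in Python, which is not the language the calendar was actually written in; the names are made up for illustration) of what such modules might look like:

    calendar_notes = {}  # maps a date string like "2014-10-16" to its note text

    def handle_date_click(date):
        """Module 1: respond to a click on a calendar date."""
        display_date(date)
        return calendar_notes.get(date, "")

    def display_date(date):
        """Module 2: show the selected date (here, just print it)."""
        print("Selected date: " + date)

    def record_note(date, text):
        """Module 3: store whatever was typed into the text field."""
        calendar_notes[date] = text

    # Only when wired together do the pieces form a working (if tiny) calendar.
    record_note("2014-10-16", "Write article on DNA and programming")
    print(handle_date_click("2014-10-16"))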

But the design and construction process in no way represents any kind of evolution in regards to these modules. It’s not as if you start with a small one and then gradually build up to something more complex. Instead, one consciously breaks a problem up into discrete bits (or modules) with the aim (when all is said and done) of having all these modules integrate to achieve a designed purpose.

And that end goal is always in mind. But the modules themselves accomplish nothing in the sense of a useful function by themselves. In terms of natural selection, these modules are not complete unto themselves and cannot function except in relationship to the other modules, all of which are specifically written and integrated so as to achieve some structure and purpose. (Modules may be used in other programs, but even then they must be designed with this purpose in mind…typically designed so that a certain type of pre-designated data is fed into them, and a pre-designated type of data is output by them.)

Let’s just assume for the moment that I, as a programmer, am more intelligent than random chance. It’s difficult to imagine how you could construct any biological unit or subunit via a slowly evolving program. It’s devilishly difficult to write modules and algorithms, even with a goal in mind. Yes, software itself does evolve, but each feature added to a version 2.0 program is added by the programmer and with the same constraints, methods, and tools that were used to create the version 1.0 program (and probably with an eye toward making enough changes to charge users for an upgrade).

It’s completely uncontroversial even among atheists/materialists to call it “the genetic code.” And the more we learn about DNA and the machinery of the cell, the more we see that it is a vast program, the DNA itself not merely storing raw data for the sequencing of proteins, but functioning also as an operating system and including (along with the cellular machinery) various programs. After all, it takes a complex program of some type to translate the stored data and construct proteins from it, let alone putting together body parts and such.

Every attempt thus far by origin-of-life scientists to simulate on a computer any type of evolution of self-replicating life has required the programmers (often unbeknownst to themselves) to insert their own intelligent design, especially in regards to having an end goal in mind. It’s thus difficult to see how any kind of simple program, let alone an enormously complex one that could form an immune system, could evolve by chance.

Normally it is chance events that ruin the ability of a program to do anything useful. A scratch on a CD, a miswrite on a hard disk (perhaps due to a power outage or surge), or even a mistyped keystroke typically breaks things. Programs are fragile, and thus it’s no surprise that many of the diseases that people suffer from are due to one protein or another (one could think of them as modules, or components of a module) having gone missing for some reason (the equivalent of a bug in the program)…perhaps because of a chance mutation.

I’m trying to imagine how I, or anyone else, would “mutate” one’s way to even the simplest program. Just for the heck of it, take a look (granted, this is somewhat of an archaic language, but a powerful one in its own way) at how one can construct a game of Tic-Tac-Toe using the Logo language. Scroll down to the very bottom of this page where it says “Program Listing.”

Although other languages will necessarily promote solutions that look different, you can get some idea of the modularity of even the simplest of programs, of the many parts that must come together to function as one (forget that it all looks like gibberish…so does the content of DNA). Each part doesn’t do anything that would allow any kind of animal (or program) to survive and function. But together they can. Each module therefore doesn’t make much logical sense as an end-goal for evolution and natural selection. There would be no reason for natural selection to gravitate toward (assuming that it could) some module.
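
For readers who don’t want to chase the links, here is a rough, hypothetical sketch in Python (not the Logo or Small Basic listings themselves) showing the same kind of modularity:

    WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def new_board():
        return [" "] * 9                      # module: set up the playing field

    def show(board):                          # module: display the board
        for row in range(3):
            print("|".join(board[3*row:3*row+3]))

    def winner(board):                        # module: detect a completed line
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def move(board, square, mark):            # module: record a move
        if board[square] != " ":
            raise ValueError("square already taken")
        board[square] = mark

    # A scripted demo game, wiring the modules together:
    b = new_board()
    for sq, mark in [(0, "X"), (4, "O"), (1, "X"), (8, "O"), (2, "X")]:
        move(b, sq, mark)
    show(b)
    print("Winner:", winner(b))               # X wins along the top row

No single function above plays Tic-Tac-Toe; only the integrated whole does.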

And for what it’s worth, here’s an online version of Tic-Tac-Toe written in Microsoft Small Basic. You can also see the program listing, which is far more complex (or at least longer) than the one written in Logo.

I know that we have other programmers who frequent this site (professional or otherwise). It will be interesting to get their take on this question. And, to be fair, the nuts-and-bolts of the DNA operating system have barely been scratched. But they have been scratched, including this flow chart of the endomesoderm network. This is what a program looks like when expressed in physical objects instead of text symbols (which is what a circuit board basically is…programming for the use of electrons and such). This chart traces the sort of “circuit” made by a complex network of proteins, and this is for just one aspect (the overall control and direction) of creating a specific type of tissue in a sea urchin.

A program is necessarily a different thing than throwing random darts at a dart board. And it shows.


Brad is editor and chief disorganizer of StubbornThings.


About Brad Nelson

I like books, nature, politics, old movies, Ronald Reagan (you get sort of a three-fer with that one), and the founding ideals of this country. We are the Shining City on the Hill — or ought to be. However, our land has been poisoned by Utopian aspirations and feel-good bromides. Both have replaced wisdom and facts.

21 Responses to Get with the Program

  1. Timothy Lane says:

    As a professional programmer for about 25 years, I can say that even the simplest program (such as locating perfect numbers using the Mersenne primes technique) takes time to debug. And the longer the program is, the harder it is to debug (partly due to the extra complexity of different modules accessing each other). But when you have a program executed vast numbers of times each day, even 99.9% reliability is grossly insufficient (and believe me, we found this out the hard way).
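
    For illustration, a minimal Python sketch of the Mersenne-primes route to perfect numbers (a reconstruction of the general technique, not the original program): if 2^p - 1 is prime, then 2^(p-1) * (2^p - 1) is a perfect number.

        def is_prime(n):
            if n < 2:
                return False
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return True

        def perfect_numbers(max_p):
            # If 2**p - 1 is prime (a Mersenne prime), then 2**(p-1) * (2**p - 1) is perfect.
            for p in range(2, max_p + 1):
                mersenne = 2**p - 1
                if is_prime(mersenne):
                    yield 2**(p - 1) * mersenne

        print(list(perfect_numbers(13)))  # [6, 28, 496, 8128, 33550336]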

    • Brad Nelson Brad Nelson says:

      But when you have a program executed vast numbers of times each day, even 99.9% reliability is grossly insufficient (and believe me, we found this out the hard way).

      Indeed. And imagine the complexity of a program that could correct its own errors. Well, to some extent, that’s already done with things such as RAM chips, which have parity bits or some other scheme (there’s a toy sketch of the parity idea after the quote below). The machinery of the cell does that as well. Here’s info from John Lennox’s “God’s Undertaker”:

      The incredibly precise duplication of DNA is not accomplished by the DNA alone: it depends on the presence of the living cell. In its normal surroundings in the cell the DNA replicates with roughly one error in 3 billion nucleotides (remember the human genome is about 3 billion nucleotides long). However, on its own in a test tube the error rate rises dramatically to about 1 in 100. When, still in a test tube, appropriate protein enzymes are added, the error rate sinks to about 1 in 10 million. The final low error rate depends on the addition of yet more proteins in the form of ‘repair’ enzymes that detect and correct errors.15

      The process of nucleic acid replication therefore depends on the presence of such protein enzymes, and not simply on the DNA itself. An interesting comment on the repair system is made by James Shapiro, who writes: ‘It has been a surprise to learn how thoroughly cells protect themselves against precisely the kinds of accidental genetic change that, according to conventional theory, are the sources of evolutionary variability. By virtue of their proofreading and repair systems, living cells are not the passive victims of the random forces of chemistry and physics. They devote large resources to suppressing random genetic variation and have the capacity to set the level of background localized mutability by adjusting the activity of their repair systems.’16

      One very important implication of the existence of alternative splicing and error repair mechanisms is that DNA would appear to depend on life for its existence, rather than life on DNA, thus calling in question the common notion that life originated in an RNA to DNA to life sequence (the RNA-world scenario). Commoner puts it bluntly: ‘DNA did not create life; life created DNA.’ Miller and Levine expand on this: ‘The largest stumbling block in bridging the gap between non-living and living still remains. All living cells are controlled by information stored in DNA, which is transcribed in RNA and then made into protein. This is a very complicated system and each of these three molecules requires the other two – either to put it together or to help it work. DNA, for example, carries information but cannot put that information to use, or even copy itself without the help of RNA and protein.’17
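
      To give a toy picture of the parity-bit idea mentioned above (a sketch only; real ECC memory, and certainly the cell’s proofreading enzymes, are far more elaborate):

          def add_parity(bits):
              """Append an even-parity bit so the total number of 1s is even."""
              return bits + [sum(bits) % 2]

          def check_parity(word):
              """Return True if the stored word still has even parity."""
              return sum(word) % 2 == 0

          word = add_parity([1, 0, 1, 1, 0, 0, 1, 0])
          print(check_parity(word))   # True: stored correctly

          word[3] ^= 1                # simulate a single-bit "mutation"
          print(check_parity(word))   # False: the error is detected (though not located or fixed)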

      Another interesting aspect that Stephen Meyer mentions in “Signature in the Cell” is that the programming in the cell is capable of the equivalent of an if-then mechanism:

      In 2005, University of Chicago bacterial geneticist James Shapiro (not an advocate of intelligent design) published a paper describing a regulatory system in the cell called the lac operon system.2 He showed that the system functions in accord with a clear functional logic that can be readily and accurately represented as an algorithm involving a series of if/then commands. Since algorithms and algorithmic logic are, in our experience, the products of intelligent agency, the theory of intelligent design might expect to find such logic evident in the operation of cellular regulatory and control systems. It also, therefore, expects that as other regulatory and control systems are discovered and elucidated in the cell, many of these also will manifest a logic that can be expressed in algorithmic form.
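
      The if/then logic usually used to summarize the lac operon looks something like this (a simplified sketch; the real system works through repressor and activator proteins, not literal code):

          def lac_operon_expression(lactose_present, glucose_present):
              if not lactose_present:
                  return "off"     # repressor stays bound; no reason to make the enzymes
              if glucose_present:
                  return "low"     # the preferred sugar is available; only weak expression
              return "high"        # lactose present, glucose absent: full expression

          for lactose in (False, True):
              for glucose in (False, True):
                  print(lactose, glucose, "->", lac_operon_expression(lactose, glucose))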

      Meyer also writes in the epilogue of “Signature in the Cell” about proteins that act as a switch but can do more than just signify on or off; they can give some gradations in between.

    • Brad Nelson Brad Nelson says:

      I’ve been meaning to ask you, Timothy, which language or languages did you mainly have to deal in?

      • Timothy Lane says:

        Mostly assembly language, though I also did a good bit of COBOL in my younger days (incidentally, in one scene in Terminator, the robot mind is looking at COBOL code) and BASIC later on. I did a good bit of FORTRAN during my education, though it’s interesting that on a key program for one of my courses I chose COMPASS (the CDC 6500 assembly language). It foreshadowed so much of my future.

  2. Kung Fu Zu Kung Fu Zu says:

    proteins that act as a switch but can do more than just signify on or off; they can give some gradations in between

    The fact that the human genome contained fewer genes than expected, which seems to mean that the many, many proteins and their interactions are more important than originally thought, may well be part of the reason it has proven to be more difficult to cure some of the genetic diseases that many thought would be cured once the genome was mapped.

    • Brad Nelson Brad Nelson says:

      the many, many proteins and their interactions are more important than originally thought, may well be part of the reason it has proven to be more difficult to cure some of the genetic diseases that many thought would be cured once the genome was mapped.

      Yes, I think that’s true, Mr. Kung. John Lennox writes in “God’s Undertaker”:

      We need to pause here since, in talking about the complexity of information-rich biomolecules like DNA and the genetic code, it is easy to give the impression that the genes tell us everything about what it means to be human. Indeed for many years molecular biologists have regarded it as a ‘central dogma’, as Francis Crick called it, that the genome accounts completely for an organism’s inherited characteristics. This inevitably fuelled the kind of biodeterminism that held individual genes responsible for a whole variety not only of human diseases but also of all manner of characteristics from predisposition to violence or obesity to mathematical ability.

      However, evidence is rapidly mounting that this is very unlikely to be the case. For the human genome turns out to contain only 30,000 to 40,000 genes. This came as a great surprise to many people – after all human cellular machinery produces somewhere around 100,000 different proteins so one might have expected at least that number of genes to encode them. There are simply too few genes to account for the incredible complexity of our inherited characteristics, let alone for the great differences between, say, plants and humans. For this reason geneticist Steve Jones sounds a strong cautionary note: ‘A chimp may share 98 per cent of its DNA with ourselves but it is not 98 per cent human: it is not human at all – it is a chimp. And does the fact that we have genes in common with a mouse, or a banana say anything about human nature? Some claim that genes will tell us what we really are. The idea is absurd.’11

      Take, for example, the fact that genes can be switched on or off – and that at certain stages in the development of an organism. The control of such switching is mainly undertaken by sequences called ‘promoters’, which are usually to be found near to the start of the gene. Let us now imagine an organism with n genes, each of which can be in one of two states, on or off, expressed or unexpressed, in genetic terminology. Then there are clearly 2^n possible expression states. Suppose now we have organisms A and B with 32,000 and 30,000 genes, respectively. Then the number of expression states for A is 2^32,000 and for B is 2^30,000. Hence A has 2^2,000 times as many expression states as B – and 2^2,000 is a very large number, larger in fact by far than the number of elementary particles there are estimated to be in the universe (about 10^80).

      Thus a relatively small difference in the number of genes could account for the very large differences in the phenotype (observable characteristics) of the organism. But that is only a beginning since the base assumption in our last calculation that genes are either on or off is too simplistic by far, especially if we are thinking of the more complex organisms. The genes of such organisms tend to be ‘smarter’ in the sense that they have a much wider range of molecular machines they can build and control. For instance, they may be partly expressed, that is, neither completely on nor completely off. Such control mechanisms are capable of responding to the cellular environment in determining to what extent a gene should be switched on. Thus they are like miniature control computers in their own right. And, since the degree to which they are on or off varies, the above calculations must be drastically revised upwards. The effect of proteins working on proteins means that we are now entering a hierarchy of sharply increasing levels of complexity even the lowest level of which is difficult to grasp.
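
      As a quick sanity check of Lennox’s arithmetic above (a sketch using Python’s arbitrary-precision integers):

          import math

          states_A = 2**32000                       # organism A: 32,000 on/off genes
          states_B = 2**30000                       # organism B: 30,000 on/off genes

          print(states_A // states_B == 2**2000)    # True: A has 2**2000 times as many states
          print(math.log10(2) * 2000)               # about 602, i.e. 2**2000 is roughly 10**602,
                                                    # dwarfing the ~10**80 particles in the universe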

      Things are far more complicated than once supposed. This is basically the theme of Behe’s “Darwin’s Black Box.” Until we take a look under the hood and actually see what’s going on, the “dogma” of Darwinism (or anyone else’s) merits a great deal of skepticism. Many pronouncements were made when people had absolutely no idea as to how the cell functioned. And there is a heck of a lot yet to figure out. I think they’ve only scratched the surface.

  3. Brad Nelson Brad Nelson says:

    One of the aspects worth mentioning regarding programming is that if you look at the Logo listing and the Microsoft Small Basic listing (both linked in the article), you’ll see that the one (the latter) is obviously longer than the other.

    But which is easier to read and understand, even if you knew both of the languages? Certainly the Logo one is, by far, although this UCB Logo (ouch! I said “Berkeley”!) is not as powerful (its graphical capability is primitive, for example). (And Logo’s dependence on recursion can make it very obtuse.)

    Even so, having read Behe’s “Darwin’s Black Box,” it might be more accurate to say that interfacing with the Logo interpreter underneath is easier than interfacing with the BASIC interpreter lurking underneath the Microsoft product. The language you use – the characters typed in at the keyboard – is an abstraction layer above the internal program (which is itself an abstraction layer above several layers of the operating system, itself an abstraction layer above the microcode, peripheral drivers, and hardware). (It’s almost rabbits all the way down.)

    Many of the problems with origins-of-life computer simulations have to do with the data and logic that is hidden underneath. And what is “hidden underneath” is certainly of enormous interest if DNA and the cell machinery are indeed various programs inside an overall operating system. I would think it’s still true that there are more than a few black boxes remaining in terms of understanding what is going on and how the various parts work in conjunction with each other.

    Certainly I would think the complexity, and the obstructions to comprehensibility because of this complexity, are a strong vote for design (you should see how convoluted my programming gets) rather than blind, undirected chance. Most, if not all, of the natural processes (fields, gravity, etc.) that have been interpreted into mathematics have resulted in amazingly concise formulas (E = mc^2). As Behe or Lennox notes, the actual energies or particles are enormously complex. But the mathematics that describe their regularities are often quite simple.

    This is not going to be the case for the cell and all the other bits and pieces of life. As either Behe, Lennox, or Meyer notes, an explanation of an algorithm cannot have less information in it than the original algorithm. I’m not sure about the latest word on what kind of algorithm-like things, if any, exist in the cell. But if it contains various algorithm-like things, you’re not going to reduce that down to a law…including a simple one like natural selection.

    I’ve rambled. And, yes, the programs I write tend to be similar.

  4. Kung Fu Zu Kung Fu Zu says:

    Does one have to be obsessive to be a software programmer or is it optional?

    • Brad Nelson Brad Nelson says:

      I think obsessive-compulsive is exactly the job description of a programmer. And that’s not to underrate the enormous creative abilities one needs to have as well. Writing software combines the needs of straight logic/mathematics with the ability to write poetry…without blowing everything up (unlike in, say, many other areas of life, there are no forgiving flights of fancy, no matter how pleasing they are to hold inside one’s head).

      Or, at least it seems to me, it’s the ability to have tunnel vision and an obsessive attention to detail, with a creative affinity for language that belies the heavy-rimmed glasses and pocket protectors that say “nerd.”

      And just raw intelligence.

      The interesting thing about intelligent design, and considering DNA and its complementary cellular machinery as programming, is that it takes that esoteric, all-perfect, all-loving, all-knowing, always-a-300-bowler idealist vision of God out of the sky and gives you a glimpse of something more tangible (and relatable, if you ask me).

      A program (I’m sure Timothy can tell you) is a programmer’s signature. There is not just one way to do things. A person’s style, creativity, experience, and sheer intelligence will shape much of the programming.

      And given the truly humungous hardware resources available to today’s programmers, there is very little need to optimize (relatively speaking), and little need to find the perfect solution. A program that is extensible, well-documented, and easy to read probably, in many or most environments, beats the perfect or optimal solution (other than regarding device drivers and firmware, which sounds like Timothy’s area of expertise).

      So if you’re God, with brain power to match, and with nature herself to deal with, what programs do you write? What are they going to look like? What are the basic design parameters and constraints which must be overcome? What language, if any, do you use? Is it compiled? Is it interpreted? Is it object-oriented? Is it something else? And what is the program’s purpose and extent? That is, did an intelligent designer put all the programming into the basic cell with the ability to evolve from there? Or did this designer intervene several times throughout biological history? Especially if the latter, did the designer exert minute control of species (or some other higher level of order, such as family)?

      It’s surely a given that some amount of evolution and adaptation would be part of the program, particularly if it was sort of a “set it and forget it” type of program where you just let it run for tens of millions or hundreds of millions of years without any intervention or adjustments. Think of how robust and forward-looking such a program would have to be to account for so many contingencies.

      Even so, the fossil record is littered with extinct species. It can therefore be assumed that there was a great deal of “set it and forget it” involved. And a great deal of wastage. These programs are not just programs, for they code for and build life forms that themselves have a certain amount of autonomy to them. Such “wastage” isn’t necessarily a knock on the designer. It rather shows the scope of the designer. It would conceivably be very easy to create a life form that could survive for very long lengths of time and change not at all or very little (I think the crocodile fits that description, as probably do other species). But then would you really have the kind of dynamic systems we have on earth…so much so that crank theories such as “Gaia” actually have some merit?

      And it’s all well and good to say that God has no constraints because he’s perfect, omnipotent, etc. But when you come out of the clouds and attempt to do something more tangible, there might well be constraints (setting aside the fact that one has, with the creation of all of nature, built in those constraints). If we one day decode the program and operating system of DNA, will we see the types of programming techniques that programmers use today? According to Stephen Meyer, there are biologists who are already discovering just that.

      Meyer also notes that the way genes are stored on the DNA molecules shows the kind of hierarchical “folders within folders” that we are familiar with on our computer desktops. Similar sets of genes are together, and similar super-groups of genes are together, and there may be even higher levels of organization. And as Meyer notes, this data (like on a computer hard drive) seems to be organized for optimal retrieval speed and efficiency. This info came out of the further study of the supposed “junk” DNA.

  5. Deana says:

    Loved your line, “And that end goal is always in mind.” That is what lies at the base of all this. Purpose. The need to prove a lack of purpose is what drives the atheist Darwinian, and the absolute existence of purpose will be their undoing. It just can’t be ignored any more.

    • Brad Nelson Brad Nelson says:

      Indeed, Deana, purpose is the main dividing line. The split we see (as noted by Behe/Meyer/Lennox) is not between science and religion. It’s between atheism and theism, between the idea of purposelessness and purpose.

      Continuing the analogy (or reality): As Stephen Meyer (and perhaps Behe and Lennox) have said, the case for intelligent design does not rest entirely on the specified information content of DNA. It’s also based on the apparent fine-tuning of the universe (fine-tuned for the existence of life) as well as the Big Bang which would seem to be iron-clad proof of a beginning rather than the convenient (for naturalists and atheists) eternally-existing universe. A beginning suggests a creator, a first cause.

      And one thing you’ll note (at least I’m sure you geeks have noted) is that in even those very simple Tic-Tac-Toe games, the programmer had to declare some constants. In order to play any game (and life or the existence of the universe could be seen as a game or a constructed setting), you need to establish the playing field and some other parameters. And those various constants in nature seem to be as arbitrary as the constants that one declares for a program (and surely any structure-producing program such as DNA is going to have a lot of constants as well).
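
      Something like this, in any language (the names here are hypothetical; the point is that the values are simply stipulated, not derived from anything inside the program):

          BOARD_SIZE = 3                        # a 3x3 grid; nothing forces this choice
          PLAYER_MARKS = ("X", "O")
          EMPTY = " "
          MAX_MOVES = BOARD_SIZE * BOARD_SIZE   # 9; derived, but only from stipulated values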

      This is important logically because it strongly suggests that the universe didn’t have to be the way it is. As noted by others, nothing that has been discovered in or about the laws of physics suggests that they must necessarily be the way they are. The holy grail of materialism/atheism is to find a logical principle about the universe that requires it to be the way it is, that requires the laws to be the way they are (instead of having to plug in constants such as the speed of light or the charge of the electron).

      Failing to have done that (and especially with the evidence pointing toward the opposite), what the atheists/materialists have then done is create the idea of the multiverse. The only purpose for the multiverse is to avoid the obvious idea of a creator and a beginning. With the idea of the multiverse, our universe then becomes “necessary” because it is one of 10^500 universes that exist, each with different starting parameters. Given so many universes, then ours “just had to be.”

      It’s a disingenuous evasion, at best. Surely people as smart as Hawking see the obvious flaws, such as: What mechanism then is responsible for setting the random variables and creating the 10^500 universes? What power has the ability to create? For, as Lennox notes, “laws of nature” describe; they are not agents themselves.

      The problem of a first cause simply becomes larger in the attempt to blur the issue. As I recently told a friend, I think this is a self-conscious bit of propaganda, an attempt to deceive and shape opinion. The Left is constantly (no pun intended) forwarding its cause by presenting to the low-information voters a narrative that at least sounds plausible so that they have an excuse to reject the opposing argument.

      And given the fraud from top to bottom in the global warming hoax, we can see just how much that Leftism (not Christianity) has corrupted the scientific process. Metaphysicians, heal thyself.

      • Timothy Lane says:

        The fine-tuning of the universe is one of the major reasons that I moved back from agnosticism to deism. As for the multiverse, it’s an interesting concept for fiction, and could always be true — but it also cannot be tested scientifically, and thus is equivalent in that respect to young-Earth creationism. But it “sounds” scientific rather than religious, which is all that matters to the secularists who masquerade as genuine scientists.

        • Brad Nelson Brad Nelson says:

          Welcome to deism. 😀 As for the young-earthers, I have to admit I don’t give them much respect. The separation of the mid-ocean ridges works like a clock and is solid proof of a regular geological force working for several million years. There’s no excuse to be that clueless. Faith should not require discounting reality. Surely God does not desire or require brain-dead zombies.

          But unlike the multiverse, young-Earth creationism is testable. The earth isn’t that young. Proven. Over. Outtahere.

        • Brad Nelson Brad Nelson says:

          The fine-tuning of the universe is one of the major reasons that I moved back from agnosticism to deism.

          Timothy, one of the reasons I find intelligent design exciting is that it’s a way to believe in God without all the mythology stuff. I don’t hold a grudge against those who believe literally in the Bible. Who knows? It might even be true. But much of it doesn’t appeal to me. Much of it doesn’t speak in my language.

          But DNA does, as does Meyer’s very carefully articulated theory regarding intelligent design.

          But I won’t say something as douche-chillingly inane as “Intelligent design has made it possible to be an intellectually fulfilled deist,” to borrow a phrase from Dawkins (who said “Darwin made it possible to be an intellectually fulfilled atheist”).

          All atheists are dumb-asses to some extent. It is self-evident that theism or deism is the default proposition rather than atheism, due to historic reasons and current scientific reasons. If someone has a beef with the idea of God, then say so. Don’t use the weaselly proposition of atheism to say that you’re ticked off by the idea of a benevolent creator like a child who says he won’t take another breath and will turn blue unless he gets a second helping of ice cream.

          I’m often ticked off by the idea of a benevolent creator. It’s a sometimes monstrous idea given the nature of reality. Darwin was not totally unjustified when he said, “What a book a devil’s chaplain might write on the clumsy, wasteful, blundering, low, and horridly cruel works of nature!” Yep. That’s what much of nature looks like. It is a system whereby survival itself depends on killing and consuming other creatures. This is not the world one supposes that Mr. Rogers would have created. It is full of misfortune, injustice, and just shit luck.

          But an adult learns to deal with it. And it’s arguable that those who not only deal with it but find a great benevolence behind it aren’t just delusional. It’s too bad the Catholic Church is devolving into such a mire of Marxism because it has so many good stories to tell, including the very many saints who, despite their suffering, were full of a love for Creation. These are good stories to learn about. These stories are the exact opposite of the grievance stories of the Left (and of atheists, for atheism is a religion based not on a metaphysical proposition but on grievance).

          We all feel aggrieved at times. But to make a religion out of it is for kooks and (as I said) dumb-asses.

          I don’t find deism “intellectually fulfilling.” I just find it to be the best logical case one can make for the evidence of the world. And if intelligent design is true, then we can revise that word to something a little more active…short of Baby Jesus theism but more than just deism. (I’ll leave the coining of that word to you, by the way.)

  6. David Ray says:

    Nice article, Lord Nelson.

    In addition, I highly recommend the DVD “Unlocking the Mysteries of Life”. I’ve given a copy to the local library. (Until then, it was wall-to-wall Darwin crap.)

  7. Rosalys says:

    Great article and great discussion going on here.

  8. Jerry Richardson says:

    And that end goal is always in mind. —Brad Nelson

    Evolution evangelists, such as Richard Dawkins, are shameless in their insistence that evolution is a totally random-driven process, while in practice they often smuggle in some aspect that is definitely NOT part of randomness.

    In his book The Blind Watchmaker, Dawkins publishes a computing fraud that even a novice programmer would not be taken in by.

    Dawkins doesn’t just smuggle in, he brazenly includes from the get-go an end-goal for the program. And then he wants to claim that the program randomly arrived at “specified complexity.”

    William Dembski (one of the leaders in Intelligent Design theory), in his book No Free Lunch, does an excellent job of exposing Dawkins’s brazen attempt at con artistry.

    …consider a well-known example by Richard Dawkins in which he purports to show how an evolutionary algorithm can generate specified complexity. He starts with the following target sequence, a putative instance of specified complexity: METHINKS.IT.IS.LIKE.A.WEASEL
    —-
    But consider next Dawkins’s reframing of the problem. In place of pure chance, he considers the following evolutionary algorithm: (1) Start out with a randomly selected sequence of 28 capital Roman letters and spaces, such as WDL.MNLT.DTJBKWIRZREZLMQCOIP;
    (2) randomly alter all the letters and spaces in the current sequence that do not agree with the target sequence; and (3) whenever an alteration happens to match a corresponding letter in the target sequence, leave it and randomly alter only those remaining letters that still differ from the target sequence. In very short order this algorithm converges to Dawkins’s target sequence. In The Blind Watchmaker, Dawkins provides the following computer simulation of this algorithm:
    (1) WDL.MNLT.DTJBKWIRZREZLMQCO.P
    (2) WDLTMNLT.DTJBSWIRZREZLMQCO.P
    (10) MDLDMNLS.ITJISWHRZREZSMECS.P
    (20) MELDINLS.IT.ISWPRKE.Z.WECSEL
    (30) METHINGS.IT.ISWLIKE.B.WECSEL
    (40) METHINKS.IT.IS.LIKE.I.WEASEL
    (43) METHINKS.IT.IS.LIKE.A.WEASEL

    Thus, Dawkins’s simulation converges to the target sequence in 43 steps. In place of 10^40 tries on average for pure chance to generate the target sequence, it now takes on average only 40 tries to generate it via an evolutionary algorithm.

    Although Dawkins and fellow Darwinists use this example to illustrate the power of evolutionary algorithms, in fact it raises more problems than it solves. For one thing, choosing a prespecified target sequence as Dawkins does here is deeply teleological (the target here is set prior to running the evolutionary algorithm and the evolutionary algorithm here is explicitly programmed to end up at the target). This is a problem because evolutionary algorithms are supposed to be capable of solving complex problems without invoking teleology (indeed, most evolutionary algorithms in the literature are programmed to search a space of possible solutions to a problem until they find an answer-not, as Dawkins does here, by explicitly programming the answer into them in advance).
    —William A. Dembski. No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (Kindle Locations 2803-2821). Kindle Edition.
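
    For anyone who wants to see the trick laid bare, here is a minimal sketch of the procedure as Dembski describes it (a Python reconstruction, not Dawkins’s original code). Note that the target phrase is written into the program before the “evolution” ever starts:

        import random

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

        def weasel(seed=None):
            rng = random.Random(seed)
            current = [rng.choice(ALPHABET) for _ in range(len(TARGET))]
            generation = 0
            while "".join(current) != TARGET:
                generation += 1
                for i, letter in enumerate(TARGET):
                    if current[i] != letter:
                        # only positions that still miss the target get re-randomized;
                        # matched letters stay locked in, which is the teleology Dembski flags
                        current[i] = rng.choice(ALPHABET)
            return generation

        print(weasel())  # typically converges in around a hundred generations, versus the
                         # roughly 27**28 (about 10**40) tries pure chance would need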

    Can’t seem to get there without “an end goal.”

    • Timothy Lane says:

      Yes, we’ve discussed that here. I remember noticing this at the time: Dawkins in essence proved gradual evolution could work provided there was a specific end result it was working toward — which he denies is the case.

    • Brad Nelson Brad Nelson says:

      That’s good. Lennox also does a good job at taking apart the computer simulations of Dawkins and others. Meyer notes this as well (but Lennox does it with a little bit more smart-witted punch).

      One of the fellows posits a theory about this that I also find reasonable. No doubt there is some fraudulence and religious zealotry fully or partially behind some of these computer simulations. But it’s also possible that they are so entwined with the materialist mindset that they don’t comprehend the information content that they are putting in as any kind of a cheat. Thinking in completely physicalist terms, it’s plausible that they are of the mindset that “once you get matter started in the right way, it will produce life.” It’s all physical for them. In their minds, they may just be thinking of creating the right starting conditions.

      And, indeed, those “starting conditions” are everything. And as Behe and others note, although the machinery of the cell is entirely physical, and can be explained by the laws of physics and chemistry, the influence that sets all this machinery in motion in such a choreographed way cannot be. It is information.
