Tuesday, June 7, 2011

Chapter 11 - Monkeys Typing Shakespeare

Prebiotic Evolution is the process by which the first living cells are alleged to have formed. Prebiotic Evolution is also known by other names, such as Chemical Evolution and Abiogenesis.[1] Whatever name is used, Prebiotic Evolution amounts to the following claim:
  • A simple life form capable of self-replication was created by an unspecified natural process a long time ago.
The theory behind Prebiotic Evolution is that given enough time and enough atoms, the atoms in a primordial soup would randomly combine to make the chemical components of the first living cells. This amounts to the random formation of a living entity out of non-living matter. The rhetoric for defending this random formation runs parallel to this argument:
If you have enough monkeys blindly typing for enough time, eventually one of the monkeys will be able to replicate the Complete Works of Shakespeare.[2]
There is no doubt that this is a logically true statement. But it ignores the following real-world issues:
  • How many monkeys are enough?
  • How much time is required?
Duane T. Gish of the Institute for Creation Research has produced a set of calculations that demonstrate the importance of these questions.[3] Because Gish’s calculations deal with large numbers, he uses exponents to make the math easier to follow. You don’t have to be a math expert to understand these calculations; you only have to understand the basic concept behind exponents.
For a power of 10, the exponent describes how many zeroes come after the leading ‘1’. Using exponents is similar to describing the U.S. federal debt in trillions of dollars. Even people who are not very good at math can easily recognize that $9.99 is a lot less than $1 trillion. If you can understand the meaning behind the word trillion, then you can grasp the simple meaning behind exponents.
The word trillion implies a very large number, because it means that a lot of zeroes will follow the ‘1’. For example:
  • 1 trillion = 1,000,000,000,000 (10^12 in exponent notation)
If you count the number of zeroes in 1 trillion, there are 12. The 12 in 10^12 represents the number of zeroes coming after the first digit – i.e. a ‘1’ followed by 12 zeroes is equal to 1 trillion. The 12 is the exponent of the number. In order to compare large numbers, the exponent is the most important thing to look at. Small differences in exponents can make a huge difference in the size of a number.
For example, at first glance, the number 10^13 seems like it should be very similar in size to 10^12, because 13 is only one more than 12. However, it is actually 10 times larger. If one trillion is a huge number, then ten trillion is a very huge number. The basic lesson of exponents is that the number of trailing zeroes in a large number is the most important thing. Once you know that, understanding exponents is pretty easy.
The basics behind exponential arithmetic are also pretty simple. To multiply power-of-10 numbers with exponents, you simply add the exponents. For example:
  • 100 x 100 = 10,000. Using exponents, the arithmetic looks like this: 10^2 x 10^2 = 10^4.
To divide power-of-10 numbers with exponents, you simply subtract the exponents. For example:
  • 10,000 / 100 = 100. Using exponents, the arithmetic looks like this: 10^4 / 10^2 = 10^2.
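For readers who want to verify this arithmetic, here is a minimal sketch in Python (my own illustrative choice; any calculator with exponent support would do) showing that multiplying powers of 10 adds the exponents and dividing subtracts them:

```python
# Multiplying powers of 10 adds the exponents; dividing subtracts them.
print(10**2 * 10**2 == 10**(2 + 2))    # True: 100 x 100 = 10,000 = 10^4
print(10**4 // 10**2 == 10**(4 - 2))   # True: 10,000 / 100 = 100 = 10^2
```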
Simple mathematical calculations like Gish’s suggest the virtual impossibility of random combinations ever producing a specific data pattern. This has led many skeptics to doubt that Prebiotic Evolution has ever occurred. Similar calculations apply to both English sentences and to chemical molecules. If you can understand the basic concepts of exponential arithmetic, then you should be able to understand Gish’s calculations.
Gish’s hypothetical scenario assumes that a large set of monkeys is trying to type a single 100-letter sentence from an alphabet of 20 letters. Because there is only one sentence to be typed, this is a much easier goal than having the monkeys try to replicate the Complete Works of Shakespeare. In order to calculate how likely the monkeys are to reach their goal, Gish makes the following assumptions:
  • Each monkey can type one character per second.
  • 10^24 monkeys are available to type (10^24 = 1,000,000,000,000,000,000,000,000).
  • The monkeys type for 5 billion years (roughly 10^17 seconds = 100,000,000,000,000,000 seconds).
This is a huge number of monkeys and a huge amount of time, so very many characters will be typed. Using basic exponential arithmetic, the total characters typed amounts to:
  • 10^17 times 10^24 = 10^41 (to multiply, add the exponents: 17 + 24 = 41).
  • 10^41 = 100,000,000,000,000,000,000,000,000,000,000,000,000,000.
After typing that many characters, one might assume that one of the monkeys would certainly have hit the target string of 100 characters. However, the mathematical laws of probability indicate that this is still an unlikely event, because the number of possible 100-letter sentences is much larger than 10^41. To keep the numbers manageable, Gish’s calculation once again takes advantage of exponential math.
Using basic combinatorics, Gish’s calculation states that there are 20^100 possible 100-letter sentences that can be formed from an alphabet of 20 different letters. If you are unfamiliar with exponential notation, 20^100 means 20 raised to the power of 100, i.e. 20 multiplied by itself 100 times. Furthermore, 20^100 is roughly equivalent to 10^130 (the standard Microsoft Windows calculator confirms this).
Even though 10^41 is an extraordinarily large number of characters, the number of unique possibilities (10^130) is much larger. This gives the monkeys only about one chance in 10^89 of hitting the exact 100-letter sentence (hint: subtract the exponents, 130 - 41 = 89). If you express this probability as a percentage, all those monkeys typing for all that time have only this small chance of typing the 100-letter sentence correctly:
  • 0.0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000001%
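The arithmetic above can be reproduced in a few lines of Python. This is only a sketch of Gish’s reasoning using the exponents quoted in the text; the log10 calls simply recover the exponents so they can be subtracted:

```python
import math

monkeys = 10**24              # monkeys available to type
seconds = 10**17              # roughly 5 billion years, in seconds
typed = monkeys * seconds     # total characters typed

sentences = 20**100           # possible 100-letter sentences from a 20-letter alphabet

print(math.log10(typed))      # about 41.0, i.e. 10^41 characters typed
print(math.log10(sentences))  # about 130.1, i.e. roughly 10^130 possibilities

# Subtract the exponents: 130 - 41 = 89, so about 1 chance in 10^89.
print(math.log10(sentences) - math.log10(typed))   # about 89.1
```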
Therefore, it doesn’t make much sense to believe that a finite set of Typing Monkeys is likely to replicate the Complete Works of Shakespeare. What seemed logical in rhetoric is in fact very illogical based on simple mathematical calculations. The implication is very clear. Random combinations do not have a very good chance of generating long strings of ordered information.
The underlying concept of Prebiotic Evolution is similar to the problem of Monkeys Typing Shakespeare. It is based on the concept that random combinations of chemical letters over long periods of time can lead to the formation of complex proteins. Both the “Typing Monkeys” and “Prebiotic Evolution” problems have similar mathematical structures, provided one makes the following substitutions:
  • The “number of monkeys” equates to the “total number of atoms in the universe.” Scientists estimate the universe has about 10^80 atoms.[4]
  • The “amount of time” equates to 30 billion years (roughly 10^18 seconds). This is about twice the amount of time that scientists estimate for the age of the universe.[5]
  • The “number of characters typed per second” equates to 10^12. This allows each atom to have 1 trillion chemical reactions per second – a very fast reaction rate. For comparison, Bruce Alberts has described a far slower reaction rate of about 500,000 random molecular collisions per second.[6]
  • The “100-letter sentence” equates to a relatively simple protein molecule that is formed from a string of “100 Amino Acid letters.”
This scenario is abstracted from an article written by John Baumgardner.[7] The 20 amino acids used in the construction of proteins represent a chemical alphabet of 20 letters. This matches the 20-letter alphabet used by Gish’s monkeys. This means that there are 10^130 possible ways of stringing together a protein built with 100 amino acid letters – the same as in Gish’s monkey example.
Baumgardner’s calculation indicates that if all the atoms in the universe reacted as fast as possible for 30 billion years, the total number of molecular combinations they could produce is only 10^110. The exponent ‘110’ is derived by adding the exponents of the following numbers:
  • 10^80 (the estimate for the number of atoms in the universe)
  • 10^18 (the number of seconds in 30 billion years)
  • 10^12 (the estimate for the maximum number of chemical reactions per second)
Baumgardner’s calculation indicates that random atomic combinations have only a very small chance of forming a specific protein molecule of 100 amino acids: 10^130 possible combinations divided by 10^110 random attempts gives odds of 1 in 10^20. If this probability is expressed as a percentage, random combinations have only this small chance of forming a specific protein molecule of 100 amino acids:
  • 0.000000000000000001 %
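Baumgardner’s figures can be checked the same way. The sketch below simply multiplies the three factors listed above and compares the result with the 10^130 possible combinations; all of the input values are the estimates quoted in the text, not new measurements:

```python
atoms = 10**80       # estimated atoms in the universe
seconds = 10**18     # roughly 30 billion years, in seconds
rate = 10**12        # assumed chemical reactions per atom per second

total_reactions = atoms * seconds * rate
print(total_reactions == 10**110)     # True: add the exponents, 80 + 18 + 12 = 110

combinations = 10**130                # possible 100-amino-acid sequences (20^100, rounded)
odds = combinations // total_reactions
print(odds == 10**20)                 # True: 1 chance in 10^20

print(100 / odds)                     # 1e-18, i.e. 0.000000000000000001 %
```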
But the chances of random formation of vital cellular proteins are actually much worse. Alberts, a former President of the National Academy of Sciences, has written that assemblies of 10 or more proteins are common at the cellular level.[8] If only two proteins of 100 amino acid length were needed, the chance of randomly generating both proteins is 1 in 10^40. But if 10 proteins are needed, the chances drop to 1 in 10^200.
The impact of this huge number is emphasized if it is expressed in a mathematically equivalent way. The probability that random chemical reactions of all the atoms in the universe for the entire life of the universe would not have generated 10 specific proteins, with each protein having a length of 100 amino acids, is:
  • 99.9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 9999999999 99999999%
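The compounding step is the same exponent arithmetic: independent probabilities multiply, so their exponents add. The short sketch below, which uses Python’s exact Decimal arithmetic purely as an illustration, reproduces the 1 in 10^200 figure and counts the nines in the complementary percentage:

```python
from decimal import Decimal, getcontext

getcontext().prec = 250            # enough precision to hold the result exactly

one_protein = Decimal(10) ** -20   # 1 in 10^20 per protein, the figure used above
ten_proteins = one_protein ** 10   # exponents add: (10^-20)^10 = 10^-200
print(ten_proteins)                # 1E-200

# Percentage chance that the 10 proteins are NOT generated: 100 * (1 - 10^-200)
pct_not_generated = (1 - ten_proteins) * 100
print(str(pct_not_generated).split('.')[1].count('9'))   # 198 nines after the decimal point
```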
Clearly, the odds don’t look too good for randomly generating a typical molecular machine built from 10 specific proteins. If the full amount of cellular complexity is taken into account, the odds for random formation get even worse. For example, in an article published in Cell, Alberts has described cells as being analogous to a factory filled with many complex protein machines:
… the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines.[9]
The microscopic molecular machines that fill these cellular factories derive their power from burning chemical compounds. For example, the motion of animals is driven by a set of well-controlled chemical fires that happen at the right place and right time to control motion at a macroscopic level. The proteins that allow these chemical fires to burn in controlled fashion are called enzymes. Such enzymes are vital to life.
In Evolution from Space, Hoyle and Wickramasinghe estimate that the probability of generating a typical enzyme through random combinations of amino acids is about 1 in 10^20.[10] To arrive at this figure, they break down each enzyme into a highly critical active site and a less critical backbone. Their estimate is conservative because they assume many amino acid positions do not require exact matches.
Hoyle and Wickramasinghe assume that an enzyme’s active site is composed of 10 to 20 amino acid sites, and that the larger backbone is composed of 100 or more amino acid sites. Using these assumptions, Hoyle and Wickramasinghe estimate a probability of 1 in 10^5 for randomly forming an enzyme’s active site and a probability of 1 in 10^15 for randomly forming an enzyme’s backbone.
Since each amino acid site is chosen from 20 different amino acids, the probability of getting a sequence of ‘n’ amino acids exactly right is 1 in 20^n. Thus, Hoyle and Wickramasinghe have effectively assumed that an enzyme’s active site only requires about 4 exact site matches (20^4 ~= 1.6 x 10^5), and that an enzyme’s backbone only requires about 12 exact site matches (20^12 ~= 4 x 10^15). This makes their estimate very conservative.
Multiplying their probability of randomly forming an enzyme’s active site (1 in 10^5) together with their probability of randomly forming an enzyme’s backbone (1 in 10^15) yields the 1 in 10^20 chance of randomly forming an enzyme.[11] While this probability might be reasonable by itself, Hoyle and Wickramasinghe have pointed out that roughly 2000 vital enzymes have a very similar structure across all of biological life.
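Hoyle and Wickramasinghe’s figures can be back-translated into exact-match counts with a few lines of arithmetic. The sketch below is my own illustration of the reasoning above, not code from their book; it shows that 1 in 10^5 and 1 in 10^15 correspond to only about 4 and 12 exactly specified amino acid sites, and that multiplying the two probabilities gives the 1 in 10^20 figure:

```python
import math
from fractions import Fraction

print(20**4)    # 160000, about 1.6 x 10^5 (roughly 4 exact sites for the active site)
print(20**12)   # 4096000000000000, about 4 x 10^15 (roughly 12 exact sites for the backbone)

# How many exactly specified sites do the 1-in-10^5 and 1-in-10^15 probabilities imply?
print(math.log(10**5) / math.log(20))    # about 3.8 sites
print(math.log(10**15) / math.log(20))   # about 11.5 sites

# Multiplying the two probabilities: the exponents add, 5 + 15 = 20.
print(Fraction(1, 10**5) * Fraction(1, 10**15))   # 1/100000000000000000000, i.e. 1 in 10^20
```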
Consequently, Hoyle and Wickramasinghe combine their probability for randomly forming a typical enzyme with the observation of the common nature of enzymes to reach the following conclusion:
… enzymes are a large class of molecule that for the most part runs across the whole of biology, without there being any hint of their mode of origin.[12]
The trouble is that there are about two thousand enzymes, and the chance of obtaining them all in a random trial is only one part in (10^20)^2000 = 10^40,000, an outrageously small probability that could not be faced even if the whole universe consisted of organic soup.[13]
Evolutionists argue that the minor differences in enzyme structure between different organisms can be explained by genetic mutations. However, the similarity of enzymes in so many organisms leaves a very puzzling question: How did the first vital enzymes originate? Because the basic structure of enzymes is common to so many organisms, the logical conclusion of Evolutionary theory is that an ancient ancestor had the same set.
However, as the calculations of Hoyle (and Gish/Baumgardner) demonstrate, the random origin of a set of enzymes is highly unlikely. Because of the staggering odds against the random assembly of vital enzymes from amino acid components, Hoyle concluded that Prebiotic Evolution is about as likely as a tornado sweeping through a junkyard and assembling a Boeing 747.[14]
Hoyle’s viewpoint did not make him popular among evolutionists, who have labeled calculations like his creationist nonsense. But name-calling doesn’t refute the logic behind the improbability of Prebiotic Evolution. The math behind the probability calculations is not controversial. Therefore, the only significant grounds for dispute can be the assumptions implicit in calculations of this type.
Unless there are more atoms in the universe than estimated or the universe is vastly older than estimated, the 10^110 computed by Baumgardner is a valid upper bound for the total number of chemical reactions in the entire history of the universe. And when compared with the 10^130 possible combinations for a specific protein of 100 amino acids, even this huge number falls far short.
Similarly, Baumgardner’s hypothetical protein length of 200 amino acid letters is a conservative assumption. For example, Molecular Biology of the Cell (Alberts et al.) states that an average protein length is 400 amino acids.[15] Baumgardner addressed the issue of permissible mismatches by citing theoretical studies that indicate about one-half of the amino acid sites for a specific protein may need to match exactly.[16]
For example, Baumgardner assumes a hypothetical protein with mismatches in one-half of its 200 amino acid sites. This yields the same probability calculation as a hypothetical 100-site protein that requires exact matches. Although there is no precise way to estimate the exact number of matching sites that a hypothetical protein must have, the calculations of both Hoyle and Baumgardner allow for many mismatches.
Even a change in a single amino acid site can wreck a protein’s normal functionality. For example, changing a single amino acid site in the hemoglobin protein drastically alters its shape so that it clogs blood vessels.[17] This devastating single-site mutation is described in a Harvard University article about sickle cell anemia:
The sickle cell mutation reflects a single change in the amino acid building blocks of the oxygen-transport protein, hemoglobin. This protein, which is the component that gives red cells their color, has two subunits. The alpha subunit is normal in people with sickle cell disease. The beta subunit has the amino acid valine at position 6 instead of the glutamic acid that is normally present. ... The other amino acids in sickle and normal hemoglobin are identical.[18]
A technical article about predicting protein shapes (Cheng et al.) clearly states the potential effects of any amino acid mismatch: “Single amino acid mutations can significantly change the stability of a protein structure.”[19] Thus, Baumgardner’s assumption about exact matches for one-half of the amino acids is on the conservative side. Hoyle and Wickramasinghe’s assumptions are even more conservative.
Nevertheless, Evolutionists dispute the validity of calculations that indicate the improbability of Prebiotic Evolution. For example, consider biomedical researcher Ian Musgrave.[20] On the Talk Origins website, Musgrave posted an article entitled Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations, in which he makes the following argument:
Here is an experiment you can do yourself: take a coin, flip it four times, write down the results, and then do it again. How many times would you think you had to repeat this procedure (trial) before you get 4 heads in a row?
Now the probability of 4 heads in a row is (1/2)^4 or 1 chance in 16: Do we have to do 16 trials to get 4 heads (HHHH)? No, in successive experiments I got 11, 10, 6, 16, 1, 5, and 3 trials before HHHH turned up. The figure 1 in 16 (or 1 in a million or 1 in 10^40) gives the likelihood of an event in a given trial, but doesn't say where it will occur in a series. You can flip HHHH on your very first trial (I did).[21]
Nobody doubts that a first event can be lucky. But this misses the point. The defining characteristic of an unlikely event is that it is unlikely. Every lottery ticket you buy may be a winner. However, buying a losing lottery ticket is much more likely. Many people buy lottery tickets for years and never win. The whole idea behind probability calculations is to compute the chances of winning.
If there is only one universe available to produce the set of vital proteins required for a hypothetical first cell, the first lottery ticket had better be a lucky one! What Musgrave’s coin-flipping example actually demonstrates is that for a relatively simple target of four heads in a row, he needed an average of seven trials to reach his goal. But if there aren’t six unlucky universes to play with, does a seventh lucky universe mean anything?
What Baumgardner’s calculation demonstrates is that one would probably hit a lot of unlucky universes before finding a lucky one. According to Baumgardner’s calculations, one would need to sort through an average of about 10^20 universes before finding one that was lucky enough to produce even one typical protein of 100 amino acids in length. So hitting a lucky universe on the first try would be highly unlikely.
This raises an important question: How much good fortune is science entitled to classify as an absolute certainty? In The Blind Watchmaker, Richard Dawkins considered the question of how much “luck” can be assumed in forming a theory for the origin of life on Earth.[22] Dawkins argued that odds as low as 1 in 10^20 (a 1 followed by 20 zeros) would be acceptable.[23]
How did Dawkins compute the 1 in 10^20 figure? First, Dawkins assumed that life spontaneously arose in at least one place in the universe – here on Earth.[24] However, this is an assumption rather than a proof based on empirical evidence. If you are trying to compute the probability of life spontaneously appearing on a planet, you can’t assume that it has already appeared spontaneously. That would amount to circular reasoning.
Next, Dawkins suggests that the universe contains about 10^20 planets suitable for sustaining life.[25] Because he assumes that life had at least one random origin (here on Earth), he then calculates that each planet has at least a 1 in 10^20 chance of hosting life.[26] Dawkins then suggests that if life had a random origin on more than one planet, the odds for each planet originating life become even better.
However, even if one ignores Dawkins’ assumption of a random generation of life on Earth, his calculation is not based upon the laws of probability. To understand why, take another look at the simple coin-flipping experiment of Ian Musgrave. In six of Musgrave’s seven attempts, the number of trials needed was less than 16. But the laws of probability indicate that the chances of getting four heads in a row are still one in 16.
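Musgrave’s coin experiment is easy to repeat in software with many more than seven runs. The simulation below is a sketch I wrote to illustrate the point being made here: individual runs can get lucky, but the long-run average number of four-flip trials needed to see HHHH settles near 16, exactly as the 1-in-16 probability predicts:

```python
import random

def trials_until_hhhh():
    """Count how many 4-flip trials it takes to see four heads in a row."""
    trials = 0
    while True:
        trials += 1
        flips = [random.choice('HT') for _ in range(4)]
        if flips == ['H', 'H', 'H', 'H']:
            return trials

runs = 100_000
average = sum(trials_until_hhhh() for _ in range(runs)) / runs
print(average)   # close to 16; a few lucky short runs do not change the underlying odds
```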
Dawkins’ probability calculation commits the same logical fallacy as Musgrave’s. Every good gambler knows that sometimes you are simply lucky. If you have been dealt a royal flush in a poker game of 5-card stud, this doesn’t make the odds of getting a royal flush 1 in 6 (assuming 6 players in the game). The odds of being dealt a royal flush in 5-card stud are always about 1 in 650,000, even if you are lucky enough to get one.[27]
The only way to compute an accurate probability for a random chance event is to divide the number of possible winning combinations by the total number of possible combinations. This is exactly what Baumgardner’s calculation does. However, many Evolutionists don’t want to accept this. They believe that life must have spontaneously appeared, regardless of how unlikely the math suggests this event was.
Dawkins freely admits that chemists do not know how long we would have to wait for random atomic combination to form a self-replicating molecule.[28] Dawkins also points out that mainstream scientific opinions about the fossil record suggest that the period of time available for a random origin is less than a billion years.[29] So Dawkins assumes that life on Earth had a random origin in less than a billion years.
But again, this is an assumption that life on Earth had a random origin, rather than a proof of it. Since the time of Pasteur’s famous experiment, no scientist has claimed to have observed spontaneous generation happening.[30] Nevertheless, Dawkins assumes at least one act of spontaneous generation occurred in the distant past of Earth’s history. Without this questionable assumption, Evolution has no starting point.
This presents a clear chicken-or-egg dilemma for which Evolutionists have no factual answer. The problem is real, but the answer is unknown. In The Blind Watchmaker, Dawkins clearly states that at least one chance event was needed to jumpstart the vital process of cumulative natural selection:
Cumulative selection is the key but it had to get started, and we cannot escape the need to postulate a single-step chance event in the origin of cumulative selection itself.[31]
Because straightforward probability calculations like Baumgardner’s and Hoyle’s indicate the unlikely nature of this chance event, Evolutionists seek to frame a different problem with a more likely solution. The typical approach for doing that is to postulate a simple self-replicator whose random origin is much more likely than a typical cellular protein. Musgrave suggests a hypothetical self-replicator made from 32 amino acids.
Switching the debate from a typical protein size of 100 amino acids (chance of random origin is 1 in 10^130) to a hypothetical self-replicator of 32 amino acids (chance of random origin is 1 in 4.29 x 10^40) improves the odds considerably. Anybody who understands that flipping ten heads in a row is much less likely than flipping two heads in a row will have no problem understanding why this is so.
Nevertheless, odds of 1 in 4.29 x 10^40 are still far worse than Dawkins’ “luck” allocation of 1 in 10^20. To try to cover up this immense difference, Musgrave offers another analogy, this time about the benefits of simultaneous trials:
1 chance in 4.29 x 10^40 is still orgulously, gobsmackingly unlikely; it's hard to cope with this number. Even with the argument above (you could get it on your very first trial) most people would say "surely it would still take more time than the Earth existed to make this replicator by random methods". Not really; in the above examples we were examining sequential trials, as if there was only one protein/DNA/proto-replicator being assembled per trial. In fact there would be billions of simultaneous trials as the billions of building block molecules interacted in the oceans, or on the thousands of kilometers of shorelines that could provide catalytic surfaces or templates.
Let's go back to our example with the coins. Say it takes a minute to toss the coins 4 times; to generate HHHH would take on average 8 minutes. Now get 16 friends, each with a coin, to all flip the coin simultaneously 4 times; the average time to generate HHHH is now 1 minute. Now try to flip 6 heads in a row; this has a probability of (1/2)^6 or 1 in 64. This would take half an hour on average, but go out and recruit 64 people, and you can flip it in a minute. If you want to flip a sequence with a chance of 1 in a billion, just recruit the population of China to flip coins for you, you will have that sequence in no time flat.[32]
In this analogy, Musgrave attempts to make something that is unlikely appear as if it is a sure thing. However, random combinations of letters are highly unlikely to hit a specific sequence. This is true even if good fortune (hitting a target on the first trial) can never be ruled out as a theoretical possibility. Musgrave distorts this possibility with his analogy of flipping four heads in a row, which is a fairly likely event (1 in 16 odds).
If one alters Musgrave’s analogy to a billion people pulling letters out of a hat in order to generate the 32-character long peptide sequence he cited, one can see how misleading his analogy is. The following simple calculation demonstrates that a billion people pulling letters as fast as possible are unlikely to ever generate his specific sequence (RMKQLEEKVYELLSKVACLEYEVARLKKVGE) – approximately a 1 in 10^40 chance.[33]
First, assume that each of a billion people pulled one letter each second. In one year, they would make about 3.1536 x 10^16 letter draws (60 seconds/minute * 60 minutes/hour * 24 hours/day * 365 days/year * 1 billion people). This means they would have to draw letters for roughly 10^23 years before they could expect to hit the correct combination for Musgrave’s hypothetical self-replicator (10^23 ~= 10^40 / 10^17).
When the probability for a billion people producing Musgrave’s self-replicator is used, the results don’t look very favorable. 10^23 years is trillions of times longer than the estimated age of the universe. The point is that the odds that random chance could ever generate a set of molecules vital to the origin of life are staggeringly small because the number of useless molecular combinations is exceedingly huge.
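The letters-from-a-hat estimate reduces to two divisions. The sketch below uses the same rounded inputs quoted above (a 1 in 10^40 target and one draw per person per second) and is only meant to show the order of magnitude:

```python
people = 10**9                                # a billion people
seconds_per_year = 60 * 60 * 24 * 365         # 31,536,000
draws_per_year = people * seconds_per_year    # about 3.15 x 10^16

target_odds = 10**40                          # roughly the odds quoted for the 32-letter sequence

years_needed = target_odds / draws_per_year
print(f"{years_needed:.2e}")                  # about 3.2e+23 years

age_of_universe = 1.4e10                      # about 14 billion years
print(f"{years_needed / age_of_universe:.2e}")   # about 2.3e+13, i.e. trillions of times older
```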
Furthermore, Musgrave’s argument about sequential versus parallel test streams is simply not relevant to Baumgardner’s calculation. Unless you believe in additional universes supplying additional atoms and additional seconds, the maximum number of molecules that could have been generated in the entire history of the universe can be no more than 10^110 (the figure reached by Baumgardner).
Musgrave also raises the issue that multiple proteins may have similar functionality.[34] He is correct about this. However, Baumgardner only required 100 exact matches in his hypothetical 200-slot protein. Thus, his calculation implicitly allows for about 10^130 equivalent target sequences, since each of the remaining 100 sites can hold any of the 20 amino acids (20^100 ~= 10^130).[35] Therefore, Baumgardner’s calculation accounts for a large number of multiple targets – proteins with similar functionality – in each trial.
The number of equivalent targets assumed by Baumgardner can also be compared with Musgrave’s example of Cytochrome c (a common protein that is about 100 amino acids in length).[36] Musgrave cites a calculation by Hubert Yockey that indicates about 3.8 x 10^61 possible variants of Cytochrome c exist.[37] Although 3.8 x 10^61 is a very large number, it is still much smaller than the 10^130 assumed by Baumgardner.
One can also use Baumgardner’s methodology to estimate the number of equivalent targets for a Cytochrome c size protein. For the case of a 100 amino acid protein, Baumgardner’s methodology would assume that 50 amino acid sites must match exactly, leaving the other 50 sites free. Since each free site can hold any of the 20 amino acids, there are 20^50 (~= 10^65) sequences that qualify as matches. Thus, Baumgardner’s methodology would allow about 10^65 equivalent targets.
This is over 2000 times more than the figure cited by Musgrave (10^65 / 3.8 x 10^61). This means Baumgardner’s assumption is actually more conservative. But in the realm of vast numbers, both these numbers are in the same ballpark. There is a good reason for the close correlation between the two estimates. Baumgardner also cites Yockey’s research as the basis for his estimates of the number of equivalent protein targets.[38]
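The comparison between the two equivalent-target estimates is again just exponent arithmetic. This sketch reproduces the numbers quoted above; Yockey’s 3.8 x 10^61 figure is taken from the text, not recomputed:

```python
import math

targets_200_site = 20**100   # 200-site protein, 100 exact matches, 100 free sites
targets_100_site = 20**50    # 100-site protein, 50 exact matches, 50 free sites

print(math.log10(targets_200_site))   # about 130.1, i.e. roughly 10^130 equivalent targets
print(math.log10(targets_100_site))   # about 65, i.e. roughly 10^65 equivalent targets

yockey_cytochrome_c = 3.8e61          # variants of cytochrome c cited from Yockey
print(targets_100_site / yockey_cytochrome_c)   # about 2960, i.e. over 2000 times more
```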
The higher likelihood of forming a smaller protein like Cytochrome c doesn’t negate the hypothetical 200-site protein used by Baumgardner. The more amino acids a protein has, the less likely it is to form randomly. Many vital proteins have far more amino acids than Cytochrome c does. For example, hemoglobin is actually an assembly of four protein subunits, each having a little over 140 amino acids.[39]
Many vital cellular functions require the interaction of multiple proteins. Thus, the value of any single protein in isolation is very questionable. For example, the smallest known self-sustaining life form is Mycoplasma genitalium, which has the genes to produce 468 different proteins.[40] Theoretical estimates for a smaller genome range down to about 250 different genes, but there is no empirical evidence that such life forms exist.[41]
Random generation of such a large number of proteins is a major issue. Even if Hoyle’s conservative estimate of a 1 in 10^20 chance for a randomly generated protein is used, the chance that 250 proteins could have been generated by random processes is a staggering 1 in 10^5000.[42] Forming this many proteins by random generation goes well beyond what could have been achieved in our universe.
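Compounding Hoyle’s per-protein estimate across 250 proteins is one more exponent addition. A short sketch, using exact integer arithmetic because 10^5000 overflows ordinary floating point:

```python
per_protein_exponent = 20     # 1 in 10^20 per protein (Hoyle's estimate)
proteins_needed = 250         # lower-bound gene count discussed above

combined_exponent = per_protein_exponent * proteins_needed
print(combined_exponent)      # 5000, i.e. odds of 1 in 10^5000

# Exact integer check: 10^5000 really is a 1 followed by 5000 zeros.
odds = 10**combined_exponent
print(len(str(odds)) - 1)     # 5000
```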
In Baumgardner’s calculation, each time the 10^80 atoms in the universe have a reaction, there can be up to 10^80 target molecules produced. But the large number of simultaneous targets produced does not change the probability for forming any specific protein. Multiple protein targets of identical chain lengths will have identical probabilities for random formation, no matter how many parallel targets are being searched for.
A real world analysis has to deal with the implications of this mathematical truth. Nobody doubts that a billion people flipping coins simultaneously will probably hit a 1 in a billion sequence very quickly. However, a billion people flipping coins for the entire history of the universe would have virtually no chance of hitting a 1 in 10^5000 sequence. Thus, Musgrave’s example of billions of coin flippers doesn’t solve the problem of abiogenesis.
In order to get results with a reasonable probability, Musgrave’s examples constantly lower the bar for what is required. For example, one of Musgrave’s examples used a single peptide molecule with 32 amino acids rather than a set of average length proteins. Clearly, a much smaller molecule has a better chance of being randomly generated, just as flipping 2 heads in a row is much more likely than flipping 10 heads in a row.
While both peptide molecules and proteins are built from sequences of amino acids, peptides have far fewer amino acids than proteins. Consequently, they lack the chemical complexity and biochemical functionality of proteins.[43] The simplest cells we observe today rely on large groups of protein molecules and not a single peptide. Thus, any probability calculation based on the real world has to use more than a single peptide.
But even if Musgrave is permitted to alter proteins to peptides, he is left with an important question: What covers the gap between a single peptide molecule and the collection of complex proteins that we observe in the simplest cells? Musgrave suggests that before natural selection guided living organisms to greater and greater complexity, a similar competitive process was at work in his simple self-replicating peptide:
… in modern abiogenesis theories the first "living things" would be … one or more simple molecules probably not more than 30-40 subunits long. These simple molecules then slowly evolved into more cooperative self-replicating systems, then finally into simple organisms.[44]
However, how factual is this hypothetical scenario? Assuming that good science must separate fact from speculation, this is certainly an important issue. Musgrave describes the hypothetical nature of the molecules used in this alleged evolutionary sequence.[45] His use of the word hypothetical implies something that can’t be observed in a scientific study because any connection to a specific real world example is missing.
In Musgrave’s scenario, the alleged explanation for the origin of modern life forms has a large section that is completely invisible. This invisible section allegedly transformed a hypothetical self-replicating molecule into the complex set of interacting proteins that we observe today. However, if scientific facts require observable evidence, how could such an unobservable transformation ever be labeled a scientific fact?
In the Wizard of Oz movie, the dog Toto pulls back a curtain to reveal that the Wizard of Oz is an ordinary man with no magical power. In seeking to keep his lack of magical power a secret, the man responds: “Pay no attention to the man behind the curtain.”[46] However, once the curtain was pulled, the huff and puff of the great and powerful Wizard was gone. Perhaps it is the same way with the magical powers attributed to Abiogenesis.


Acknowledgements
Endnotes are contained in the following section. The following shorthand notation connects the numbered endnotes to permission statements:
N(x, y, z, …) indicates endnotes numbered ‘x’, ‘y’, ‘z’.
I gratefully acknowledge permission to reproduce quotes from the following copyrighted material:
N(3): Duane T. Gish, “The Origin of Life: Theories on the Origin of Biological Order", Institute for Creation Research, http://www.icr.org/article/83/. The ICR Guidelines for Fair Use permit 100 words of quotation and/or a paraphrase/summary of an ICR article provided a proper reference to their website is provided: http://www.icr.org/home/copyright/.
N(7, 16, 38): John Baumgardner, “Science and origins – Testimony #18” from: In Six Days: Why 50 Scientists Choose to Believe in Creation, ed. John Ashton (Green Forest, AR: Master Books, 2003). Used with permission from the publisher – Master Books, Green Forest, AR; copyright 2003. Used with permission of Answers in Genesis – www.answersingenesis.org.
N(20, 21, 32, 33, 34, 37, 44, 45): Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998. Used with permission of Ian Musgrave. Ian wanted to make clear that much has changed in the field of Abiogenesis since he wrote this article. But the quotes I am using are still relevant to discussing whether Abiogenesis (of whatever form) is a likely event.
Notes and References
[4]. See http://en.wikipedia.org/wiki/Observable_universe for background information.
[5]. See http://en.wikipedia.org/wiki/Age_of_the_universe for background information.
[6]. Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell 92(3):291-4, 6 February 1998, p. 291, as referenced from this website: Science Direct, http://www.sciencedirect.com/science/article/B6WSN-419K592-1/2/fc6ab6ca1e175d970b76c6a10ad6e81a.
[7]. John R. Baumgardner, “Science and origins – Testimony #24” from the book: In Six Days, ed. John Ashton (Green Forest, AR: Master Books, 2003), pp. 223-40, http://www.answersingenesis.org/home/area/ISD/baumgardner.asp.
[8]. Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell 92(3):291-4, 6 February 1998, p. 291, as referenced from this website: http://www.sciencedirect.com/science/article/B6WSN-419K592-1/2/fc6ab6ca1e175d970b76c6a10ad6e81a.
[9]. Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell 92(3):291-4, 6 February 1998, p. 291, as referenced from this website: http://www.sciencedirect.com/science/article/B6WSN-419K592-1/2/fc6ab6ca1e175d970b76c6a10ad6e81a.
[10]. Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York: Simon and Schuster, 1981), p. 24.
[11]. Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York: Simon and Schuster, 1981), p. 24. A quote of the calculation done by Hoyle and Wickramasinghe is also available from this website: Steven Jones, “Re: Fred Hoyle about the 747, the tornado and the junkyard,” 17 October 2008, http://creationevolutiondesign.blogspot.com/2008/10/re-fred-hoyle-about-747-tornado-and.html.
[12]. Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York: Simon and Schuster, 1981), p. 23.
[13]. Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York: Simon and Schuster, 1981), p. 24.
[14]. Fred Hoyle, “Hoyle on Evolution,” Nature 294, 12 November 1981, p. 105, as referenced from this website: “12 Quotes from Leading Evolutionists,” http://www.creationism.org/articles/quotes.htm.
[15]. Bruce Alberts, Alexander Johnson, Julian Lewis, Martin Raff, Keith Roberts, and Peter Walter, Molecular Biology of the Cell, 4th ed. (New York: Garland Science, 2002), Chapter 8, “RNA Synthesis and RNA Processing,” http://www.ncbi.nlm.nih.gov/bookshelf/br.fcgi?book=cell&part=A1682. An extensive list of average/median protein sizes is available in Table 2 of this article: Luciano Brocchieri and Samuel Karlin, “Protein length in eukaryotic and prokaryotic proteomes,” Nucleic Acids Research 33(10): 3390–3400, 10 June 2005, published by University of Oxford Press, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1150220/pdf/gki615.pdf.
[16]. John R. Baumgardner, “Science and origins – Testimony #24” from the book: In Six Days, ed. John Ashton (Green Forest, AR: Master Books, 2003), p. 223, http://www.answersingenesis.org/home/area/ISD/baumgardner.asp.
[17]. “What is Sickle Cell Anemia?” National Institutes of Health, October 2010, http://www.nhlbi.nih.gov/health/dci/Diseases/Sca/SCA_WhatIs.html.
[18]. “How Does Sickle Cell Cause Disease?” Harvard University, 11 April 2002, http://sickle.bwh.harvard.edu/scd_background.html.
[19]. Jianlin Cheng, Arlo Randall, and Pierre Baldi, “Prediction of Protein Stability Changes for Single-Site Mutations Using Support Vector Machines,” PROTEINS: Structure, Function, and Bioinformatics 62:1125–1132 (2006), p. 1125, http://mupro.proteomics.ics.uci.edu/extra/mupro.pdf.
[20]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html.
[21]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html. See the section entitled, “Coin tossing for beginners and macromolecular assembly.”
[22]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), pp. 197-205; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), pp. 142-6 from Chapter 6 “Origins and Miracles.”
[23]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 205; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 146 from Chapter 6 “Origins and Miracles.”
[24]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 202; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 142 from Chapter 6 “Origins and Miracles.”
[25]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 202; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 142 from Chapter 6 “Origins and Miracles.”
[26]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 205; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 144 from Chapter 6 “Origins and Miracles.”
[27]. See http://en.wikipedia.org/wiki/Poker_probability for background information.
[28]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 205; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 144 from Chapter 6 “Origins and Miracles.”
[29]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 205; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), pp. 144-5 from Chapter 6 “Origins and Miracles.”
[30]. Russell Levine and Chris Evers, “The Slow Death of Spontaneous Generation (1668-1859),” Access Excellence @ the National Health Museum, http://www.accessexcellence.org/RC/AB/BC/Spontaneous_Generation.php.
[31]. Richard Dawkins, The Blind Watchmaker, 2006 Edition (New York: W.W. Norton, 2006), p. 198; Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1986), p. 140 from Chapter 6 “Origins and Miracles.”
[32]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html. See the section entitled, “Coin tossing for beginners and macromolecular assembly.”
[33]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html.
[34]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html. See the section entitled, “Search spaces, or how many needles in the haystack?”
[35]. This calculation (and the following exponential calculations) can be performed on the standard Microsoft Windows Calculator. See http://en.wikipedia.org/wiki/Calculator_(Windows) for background information.
[36]. “Cytochrome c Comparison Lab,” Indiana University, http://www.indiana.edu/~ensiweb/lessons/molb.ws.pdf.
[37]. Hubert P. Yockey, “On the information content of cytochrome c,” Journal of Theoretical Biology 67:345-76 (1977), cited on the webpage: Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html.
[38]. Hubert P. Yockey, “A Calculation of the Probability of Spontaneous Biogenesis by Information Theory,” Journal of Theoretical Biology 67:377–398 (1978); Hubert P. Yockey, Information Theory and Molecular Biology (Cambridge, UK: Cambridge University Press, 1992), cited in: John R. Baumgardner, “Science and origins – Testimony #24” from the book: In Six Days, ed. John Ashton (Green Forest, AR: Master Books, 2003), pp. 225, 239, http://www.answersingenesis.org/home/area/ISD/baumgardner.asp.
[39]. “Hemoglobin,” Department of Biology, Davidson College, 2005, http://www.bio.davidson.edu/Courses/Molbio/MolStudents/spring2005/Heiner/hemoglobin.html.
[40]. William Wells, “Taking life to bits,” New Scientist, 16 August 1997, http://www.newscientist.com/article/mg15520954.900-taking-life-to-bits.html.
[42]. For a discussion of probability calculations, see this website: “Probability,” MicrobiologyBytes, 28 January 2007, http://www.microbiologybytes.com/maths/1011-19.html. According to this document: “The probability of several distinct events occurring successively or jointly is the product of their individual probabilities, provided that the events are independent (i.e. the outcome of one event must have no influence on the others, e.g. tossing a coin).” In the case of 250 proteins, each having a probability of 1 in 10^20, the resulting probability is 1 in 10^5000.
[44]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html. See the section entitled, “A primordial protoplasmic globule.”
[45]. Ian Musgrave, “Lies, Damn Lies, Statistics, and the Probabilities of Abiogenesis Calculations,” Talk Origins, 21 December 1998, http://www.talkorigins.org/faqs/abioprob/abioprob.html.
[46]. “The Wizard of Oz – Movie Script,” Copyright © 1939 by Metro-Goldwyn Mayer, http://www.wendyswizardofoz.com/printablescript.htm.
