1. If the initial explosion of the big bang had differed in strength by as little as one part in 10\60, the universe would have either quickly collapsed back on itself, or expanded [too] rapidly for stars to form. In either case, life would be impossible.
2. (An accuracy of one part in 10 to the 60th power can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)
The claim seems a bit strong. Let x be a measurement, in some units, of the strength of “the initial explosion of the big bang.” Reppert seems to be saying that if x were increased or decreased by x / (10^60), then the universe would have either collapsed immediately, or it would have expanded without forming stars, so that life would have been impossible.
It’s possible that someone could make a good argument for that claim. But the most natural argument for it would go something like this: “We know that x had to fall between y and z in order to produce stars, and y and z are so close together that if we increased or decreased x by one part in 10^60, it would fall outside y and z.” But this will not work unless x is already known to fall between y and z. And this implies that we have measured x to a precision of 60 digits.
I suspect that no one, ever, has measured any physical thing to a precision of 60 digits, using any units or any form of measurement. This suggests that something about Reppert’s claim is a bit off.
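To get a feel for the scale involved, here is a small Python sketch (my own illustration, not drawn from any of the sources under discussion). A perturbation of one part in 10^60 is invisible even to double-precision floating point, which carries only about 16 significant digits; you have to ask an arbitrary-precision library for more than 60 digits before such a change is representable at all:

```python
from decimal import Decimal, getcontext

# Double-precision floats carry roughly 16 significant digits, so a
# relative change of one part in 10^60 vanishes entirely at that precision.
x = 1.0
assert x * (1 + 1e-60) == x  # the perturbation rounds away to nothing

# Arbitrary-precision Decimal must be told to keep more than 60 digits
# before the perturbed value is even distinguishable from the original.
getcontext().prec = 70  # far more digits than any physical measurement
y = Decimal(1) + Decimal(10) ** -60
assert y != Decimal(1)
```

The point is only about scale: 60 significant digits is far beyond the precision of any physical measurement ever made.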
In any case, the fact that 10^60 is expressed by “10\60”, and the fact that Reppert omits the word “too” mean that we can trace his claim fairly precisely. Searching Google for the exact sentence, we get this page as the first result, from November 2011. John Piippo says there:
1. If the initial explosion of the big bang had differed in strength by as little as one part in 10\60, the universe would have either quickly collapsed back on itself, or expanded [too] rapidly for stars to form. In either case, life would be impossible. (An accuracy of one part in 10 to the 60th power can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)
Reppert seems to have accidentally or deliberately divided this into two separate points; number 2 in his list does not make sense except as an observation on the first, as it is found here. Piippo likewise omits the word “too,” strongly suggesting that Piippo is the direct source for Reppert, although it is also possible that both borrowed from a third source.
We find an earlier form of the claim here, made by Robin Collins. It appears to date from around 1998, given the statement, “This work was made possible in part by a Discovery Institute grant for the fiscal year 1997-1998.” Here the claim stands thus:
1. If the initial explosion of the big bang had differed in strength by as little as 1 part in 10^60, the universe would have either quickly collapsed back on itself, or expanded too rapidly for stars to form. In either case, life would be impossible. [See Davies, 1982, pp. 90-91. (As John Jefferson Davis points out (p. 140), an accuracy of one part in 10^60 can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)]
Here we still have the number “1.”, and the text is obviously the source for the later claims, but the word “too” is present in this version, and the claims are sourced. He refers to The Accidental Universe by Paul Davies. Davies says on page 88:
It follows from (4.13) that if ρ > ρ_crit then k > 0, the universe is spatially closed, and will eventually contract. The additional gravity of the extra-dense matter will drag the galaxies back on themselves. For ρ < ρ_crit, the gravity of the cosmic matter is weaker and the universe ‘escapes’, expanding unchecked in much the same way as a rapidly receding projectile. The geometry of the universe, and its ultimate fate, thus depends on the density of matter or, equivalently, on the total number of particles in the universe, N. We are now able to grasp the full significance of the coincidence (4.12). It states precisely that nature has chosen N to have a value very close to that required to yield a spatially flat universe, with k = 0 and ρ = ρ_crit.
Then, at the end of page 89, he says this:
At the Planck time – the earliest epoch at which we can have any confidence in the theory – the ratio was at most an almost infinitesimal 10^-60. If one regards the Planck time as the initial moment when the subsequent cosmic dynamics were determined, it is necessary to suppose that nature chose ρ to differ from ρ_crit by no more than one part in 10^60.
Here we have our source. “The ratio” here refers to (ρ − ρ_crit) / ρ_crit. In order for the ratio to be this small, ρ has to be almost equal to ρ_crit. In fact, Davies says that this ratio is proportional to time. If we set time = 0, then we would get a ratio of exactly 0, so that ρ = ρ_crit. Davies rightly states that the physical theories in question cannot work this way: under the theory of the Big Bang, we cannot discuss the state of the universe at t = 0 and expect to get sensible results. Nonetheless, this suggests that something is wrong with the idea that anything has been calibrated to one part in 10^60. Rather, two values started out essentially equal and have grown apart over time, so that if you choose an extremely small value of time, you get an extremely small difference between the two values.
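The point can be made concrete with a toy calculation (my own sketch, not Davies’s actual derivation). Suppose, as a rough approximation, that the flatness ratio grows linearly with time and is of order one today; the Planck time and the age of the universe below are standard round figures, and the linear scaling is an assumption made only to exhibit the behavior. Evaluating the ratio at the Planck time then yields a number of order 10^-61, simply because the Planck time is so early:

```python
# Toy model, assuming the ratio (rho - rho_crit) / rho_crit grows
# linearly with time and is of order 1 today.  Both assumptions are
# rough; the point is only the scaling, not the exact exponent.
T_PLANCK = 5.4e-44   # Planck time, in seconds (standard round value)
T_NOW = 4.3e17       # rough present age of the universe, in seconds

def flatness_ratio(t, ratio_now=1.0):
    """Ratio (rho - rho_crit) / rho_crit at time t, under linear growth."""
    return ratio_now * (t / T_NOW)

# At the Planck time the ratio comes out around 10^-61: tiny not
# because anything was tuned to 60 digits, but because t was tiny.
print(flatness_ratio(T_PLANCK))
```

Davies’s “one part in 10^60” is a number of this kind: a time-dependent ratio evaluated at an extremely early time, not a measurement made to 60 digits.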
This also verifies my original suspicion. Nothing has been measured to a precision of 60 digits, and no determination has been made that the measured number could not vary by one iota. Instead, Davies has simply taken a ratio that is proportional to time and calculated its value at a very small value of time.
There is a real issue here, and it is the question, “Why is the universe basically flat?” But whatever the answer to this question may be, the question, and presumably its answer, are quite different from the claim that physics contains constants that are constrained to the level of “one part in 10^60.” To put this another way: if you answer the question, “Why is the universe flat?” with a response of the form, “Because x = 1892592714.2256399288581158185662151865333331859591, and if it had been the slightest amount more or less than this, the universe would not have been flat,” then your answer is very likely wrong. There is likely to be a simpler and more general answer to the question.
Reppert in fact agrees, and that is the whole point of his argument. For him, the simpler and more general answer is that God planned it that way. That may be, but it should be evident that there is nothing that demands either this answer or an answer of the above form. There could be any number of potential answers.
Playing the telephone game and expecting to get a sensible result is a bad idea. If you take a statement from someone else and restate it without a source, and your source itself has no source, it is quite possible that your statement is wrong and that the original claim was quite different. Even apart from this, however, Reppert is engaging in a basically mistaken enterprise. In essence, he is making a philosophical argument, but attempting to give the appearance of supporting it with physics and mathematics. This is presumably because these topics are less remote from the senses. If Reppert can convince you that his argument is supported by physics and mathematics, you will be likely to think that reasonable disagreement with his position is impossible. You will be less likely to be persuaded if you recognize that his argument remains a philosophical one.
There are philosophical arguments for the existence of God, and this blog has discussed such arguments. But these arguments belong to philosophy, not to science.