On 1 December 2016, at the CosPA2016 conference in Sydney, I publicly conceded a bet concerning the cosmological constant that I had made with T. Padmanabhan on 15 December 2006. Back in 2006 TP had remarked, during a lecture at a major conference in Melbourne, that he had "never met a physicist prepared to make a bet against Lambda". Since I was finishing my first calculations for the timescape cosmology at the time, I took up TP's challenge and made a wager with him.
After I conceded the bet in my CosPA2016 lecture, Padmanabhan made a press release, and stories immediately appeared in the Indian media reporting only half the story. To set the record straight: I conceded the bet only because it carried a 10 year time limit, requiring Lambda to be the "observationally verified model of cosmology" by December 2016. Had the bet run longer, in particular until such a time as future satellite missions can test the validity of the Friedmann equation to a particular precision, then I am still willing to bet that in the end I would not have had to concede.
The Friedmann equation, on which the inference of dark energy rests, is a simplification of the Einstein equations: it assumes that the Universe expands rigidly, keeping the curvature of space the same everywhere. This convenient assumption makes models easy to write down, but it is not demanded by Einstein's theory, in which the curvature of space and matter are dynamically coupled.
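For reference, the assumption can be stated explicitly. In standard textbook form the Friedmann equation reads

```latex
H^2 \equiv \left(\frac{\dot a}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3},
```

where a(t) is a single global scale factor and the spatial curvature parameter k is a constant. Every region of space is thereby assumed to carry the same curvature at all times; this is precisely the rigidity referred to above.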
Since science is based on falsifiable tests, any scientist who genuinely believes in both dark energy and precision cosmology should accept the terms of this wager. It is based on falsifiability by a particular test, rather than a model being "observationally verified" by some arbitrary date.
As of December 2016, the standard Lambda CDM cosmology has been tested against many independent data sets. However, it faces some outright anomalies, such as the primordial lithium abundance and the large-angle anomalies in the Cosmic Microwave Background spectrum, as well as various other tensions, as we discussed in a recent review.
In cosmology unfortunately we cannot control the conditions of the lab. Given the problems of complex astrophysics, systematic uncertainties, statistics and selection biases, there is a saying that "if your model fits all the data at any time, then it is wrong, as some of the data will change". At any one time one must take the balance of evidence; at present Lambda CDM does very well.
Since the standard cosmology is so successful, any genuine competitor will give predictions which are very close to it. This is the case for the timescape cosmology, which also fits all the key tests including supernovae, the angular scale of the cosmic microwave background and the baryon acoustic oscillation scale. These results are discussed in my CosPA2016 lecture.
Will the Euclid satellite see non-Euclidean spatial geometry?
To really test things, higher precision is required. This is why the Clarkson-Bassett-Lu (CBL) test, for which both Lambda CDM and the timescape cosmology make distinct, definitive predictions, is important. Two years ago Sapone, Majerotto and Nesseris estimated the precision with which the test will be possible using the Euclid satellite mission, due to be launched in the 2020s. Current data cannot distinguish the Lambda CDM and timescape models (see below). However, their figure 10 (see below) shows the projections for the Lambda CDM, timescape and tardis cosmologies, which each make different predictions. Based on the timescape projection, I predict that the standard cosmology will fail the CBL test beyond the 2 sigma level at redshifts z < 1. The Euclid satellite can see the effects of non-Euclidean geometry dynamically evolving over billions of years.
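To illustrate what the CBL test actually measures, here is a minimal numerical sketch, not taken from the post, of the standard form of the diagnostic, Omega_k(z) = [H^2(z) D'(z)^2 - 1] / [H0^2 D^2(z)] (units c = H0 = 1), evaluated for an assumed flat Lambda CDM model with illustrative parameters Omega_m = 0.3, Omega_Lambda = 0.7. In any exactly FLRW model this quantity is constant in redshift; for the flat case it should come out numerically indistinguishable from zero, which is the horizontal line that Euclid-era data could falsify.

```python
import math

# Illustrative flat Lambda-CDM parameters (assumed, not fitted values)
OMEGA_M = 0.3
OMEGA_L = 0.7

def hubble(z):
    """Dimensionless Hubble rate H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z, n=20000):
    """Comoving distance D(z) in units of c/H0, via trapezoidal
    integration of dz'/H(z') from 0 to z."""
    dz = z / n
    f = [1.0 / hubble(i * dz) for i in range(n + 1)]
    return dz * (sum(f) - 0.5 * (f[0] + f[-1]))

def omega_k_cbl(z, eps=1e-4):
    """Clarkson-Bassett-Lu curvature diagnostic
    Omega_k(z) = (H^2 D'^2 - 1) / D^2   (units c = H0 = 1).
    Constant in z for any FLRW model; ~0 for this flat model."""
    d_prime = (comoving_distance(z + eps)
               - comoving_distance(z - eps)) / (2.0 * eps)
    d = comoving_distance(z)
    return (hubble(z) ** 2 * d_prime ** 2 - 1.0) / d ** 2

if __name__ == "__main__":
    for z in (0.5, 1.0, 1.5):
        print(f"z = {z}: Omega_k(z) = {omega_k_cbl(z):+.2e}")
```

The point of the test is that it needs only distances and expansion rates as functions of redshift, with no model assumed: a measured Omega_k(z) that varies with z would falsify the whole FLRW class at once, which is what the timescape prediction of a non-constant curve exploits.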
The new wager that I propose is therefore based on the Clarkson-Bassett-Lu test. It has yet to be accepted by TP or any other physicist. Do those who swear by the standard cosmology really have so little faith in the Friedmann equation after all?
Sapone, Majerotto and Nesseris, Phys. Rev. D 90 (2014) 023012, Fig. 8: best fit curve (blue) of current data to the Clarkson-Bassett-Lu test, with its uncertainty (orange). The standard cosmology predicts a horizontal line at Omega_k(z)=0, which lies within the uncertainty.
The timescape prediction for the same quantity (blue curve) on the same scale, including 1 sigma uncertainties (red curves), based on a fit of geometric quantities in the Planck satellite CMB anisotropy data [Class. Quantum Grav. 30 (2013) 175006].
Sapone, Majerotto and Nesseris, Phys. Rev. D 90 (2014) 023012, Fig. 10, right panel: projected uncertainties for the FLRW cosmology about Omega_k(z)=0, along with the predictions of the timescape cosmology (green curve), the tardis cosmology (brown curve), and an (unlikely) non-Copernican large void model (blue curve).
Furthermore, some observational tests, such as the fine details of the CMB anisotropy spectrum, cannot yet be performed. Our recent analysis [Phys. Rev. D 91 (2015) 063519] shows that there are systematic uncertainties of 8-13% which arise from our still having to use the standard cosmology to describe the early universe.
To do better, we have to rewrite the whole of cosmological perturbation theory using a new relativistic Lagrangian approach [cf., e.g., Buchert et al., Phys. Rev. D 87 (2013) 123503] which is adapted to fluid elements rather than to global spatial hypersurfaces. However, just as precision measurements with complex satellites take decades to perform, rewriting the whole of cosmology with very few resources and little funding also takes a long time. This situation could change if more researchers rose to the challenge.
Even if a model such as the timescape is not correct, it would be healthy for cosmology to have at least a second viable model to test. At present most observationalists are testing only one model, and are often fitting the data to the model rather than the reverse. Such opinions are echoed in a recent popular article, arXiv:1608.01731, by Avi Loeb, Chair of Astronomy at Harvard University. Loeb concludes:
"When science funding is tight, a special effort should be made to advance not only the mainstream dogma but also its alternatives. To avoid stagnation and nurture a vibrant scientific culture, a research frontier should always maintain at least two ways of interpreting data so that new experiments will aim to select the correct one. A healthy dialogue between different points of view should be fostered through conferences that discuss conceptual issues and not just experimental results and phenomenology, as often is the case currently."