31 January 2013 [Update to Timescape cosmology FAQ]

    COMMENTS IN RESPONSE TO EDDIE CURRENT'S BLOG POST: "Too Big to Fail"

    Dear Eddie Current,

    Thanks for your email bringing my attention to this discussion, and for your clearly written piece. While I do not have a lot of time to discuss every point, I am happy to make some comments. I will do this under the following headings:

    1. Distinction of timescape from non-Copernican "void" models
    2. New physics in the timescape cosmology
    3. Observational tests, anomalies and tensions
    4. Sociology of science in challenging a standard model

    1. DISTINCTION OF STATISTICAL AVERAGING FROM NON-COPERNICAN VOID MODELS

    Your correspondent Elfmotat has confused my model with non-Copernican large void models criticized by Moss, Zibin and Scott, Phys. Rev. D83 (2011) 103515 = arXiv:1007.3725. The timescape model is not such a model, is not subject to the criticisms made in this paper, and is not ruled out by any other observations.

    I am certainly not the first person to suggest that inhomogeneities, rather than dark energy, may be at the root of what appears as apparent cosmic acceleration (though the exact mechanism by which I propose this is achieved is very different from anyone else's - see the next section). Elfmotat mentions Rocky Kolb, who with collaborators generated publicity and controversy in 2005 with an argument based on perturbation theory about a homogeneous cosmology. But in fact people like Marie-Noelle Celerier and Kenji Tomita had already suggested inhomogeneities as an alternative to dark energy in 2000, using non-Copernican void models.

    The large void models are the most well studied in this field, because there is a simple exact solution of the Einstein equations for any spherically symmetric pressureless dust source, which goes back to Lemaitre and Tolman in the 1930s. The Lemaitre-Tolman (or Lemaitre-Tolman-Bondi, LTB) models are of course unlikely philosophically, as well as observationally, since they require us to be near the centre of a universe with a very peculiar spherically inhomogeneous density profile - generally with us near the centre of a large void (though not necessarily, see arXiv:0906.0905) - thereby violating the Copernican principle. Furthermore, the void (or other spherical inhomogeneity) has to be much larger than the typical structures we observe.

    What we actually observe is not a single large void but a complex cosmic web of structures, with particular characteristic scales to the inhomogeneity which are much smaller than the toy model large LTB voids that Moss, Zibin and Scott discredit. Surveys show that about 40% of the volume of the late epoch universe is in voids of diameter about 30/h Megaparsecs (where H0 = 100h km/s/Mpc is the Hubble constant). A similar fraction is in smaller voids, while clusters of galaxies are contained in filaments that thread the voids and in walls that surround them in a cosmic web. The universe is homogeneous in some statistical sense when one averages on scales larger than about 100/h Mpc. These are observed facts which almost everybody agrees on.
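
    (To put some illustrative numbers on these scales: taking, say, h = 0.7, which is close to measured values, 30/h Mpc is about 43 Mpc - roughly 140 million light years - and the statistical homogeneity scale of 100/h Mpc is about 143 Mpc.)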

    The actual inhomogeneity is far too complex to be solved in Einstein's equations, even on a computer. In the standard model one assumes that the universe evolves as if it were a completely homogeneous uniform fluid, with Newtonian gravity perturbations added on top. Since the 30/h Mpc voids are in the "nonlinear regime" (smaller than the statistical homogeneity scale) as far as perturbation theory is concerned, the only way to understand them in the standard model is to run computer simulations with Newtonian gravity + dark matter + dark energy + the uniformly expanding background. Full general relativity is not used; that problem is in the too-hard basket. It's not just a question of computing power - there are fundamental ambiguities about splitting space and time in general relativity which impact on the numerical problem. People have recently learned how to do the two body problem (e.g. two black holes) in numerical relativity; the cosmological problem is far more complex.

    What I do is fundamentally different, in that I begin from an approach pioneered by Thomas Buchert, in which one considers the statistical properties of a truly inhomogeneous matter distribution with regions of varying density and varying expansion rates in the fully nonlinear regime. So I do not solve Einstein's field equations directly, but rather Buchert's equations, which describe the average evolution of a statistical ensemble of void and wall regions. Rather than postulating a hypothetical single large void in order to have a simple exact solution of Einstein's equations, I am dealing with an approach which accounts for the actual observed inhomogeneity, in a statistical sense, within general relativity.
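
    For readers who want to see what the averaged equations look like, here is the schematic form of Buchert's equations for irrotational dust (I am quoting the generic form in Buchert's notation here, not the additional timescape ingredients, which come in the next section):

        3 \left( \frac{\dot{a}_D}{a_D} \right)^2 = 8\pi G \langle \rho \rangle_D - \frac{1}{2} \langle R \rangle_D - \frac{1}{2} Q_D ,

        3 \, \frac{\ddot{a}_D}{a_D} = - 4\pi G \langle \rho \rangle_D + Q_D ,

        Q_D = \frac{2}{3} \left( \langle \theta^2 \rangle_D - \langle \theta \rangle_D^2 \right) - 2 \langle \sigma^2 \rangle_D .

    Here a_D is the scale factor defined from the volume of the averaging domain D, the angle brackets denote volume averages, \langle R \rangle_D is the average spatial curvature, \theta the local expansion rate and \sigma the shear. The "kinematical backreaction" Q_D encodes the variance of the expansion rate between regions such as voids and walls; when Q_D vanishes and the average curvature evolves as in a constant-curvature space, one recovers the Friedmann equations.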


    2. NEW PHYSICS IN THE TIMESCAPE COSMOLOGY

    Going from a statistical description of cosmology to one which refers to our own observations requires additional input. It's like having the ideal gas law on one hand, and then having to use that law to explain physics from the point of view of a molecule of gas. So I certainly add new physics; the crucial physics relates to the relative calibration of rulers and clocks of observers in regions of different density which have decelerated by different amounts.

    General relativity is actually not a complete theory. On account of the equivalence principle, gravitational energy is not localizable, and a conserved energy cannot be defined in general. The best one can do is "quasilocal energy"; but the nature of quasilocal energy expressions is something that Einstein struggled with, and many mathematical relativists since, with no general consensus, as there are no unique conservation laws. Since the universe is so remarkably simple despite the complex cosmic web of different densities, I think it likely that there are nonetheless simple principles to be found which shed light on this fundamental, but unsolved, part of general relativity. I proceed on that basis.

    I do not have time/space to describe my from-first-principles reasoning here. I wrote an FQXi essay a while ago which describes the ideas qualitatively, also available as arXiv:0912.4563. In particular, I treat the relative deceleration of regions of different density - which impacts on the relative kinetic energy of the expansion of space - as a physical degree of freedom in the calibration of clocks. So while the idea that "clocks run slower in denser regions" is familiar in solar system physics or near black holes, what I am dealing with is not the familiar relative position of observers in a static potential but something intrinsically different. It's a new effect.

    Since I am proposing a new effect, it's not a case of this being something obvious in general relativity that we just forgot to account for. If that were the case, it would be much more readily accepted by theoretical physicists. Rather, I am extending the basic physical ideas of relativity to a new regime - the complex many body problem of cosmological averages - where we know there are unsolved problems.

    The gravitational energy gradient / clock effect is essential for my solution to the problem of dark energy; when you put the numbers in, it would not work otherwise. That is the reason why, after toying with different names, I chose to call it the "timescape cosmology": to highlight the essential new physics and to distinguish my work from other approaches to inhomogeneity, such as large void models.


    3. OBSERVATIONAL TESTS, ANOMALIES AND TENSIONS

    "Anonymous" very correctly says that alternatives such as mine are taken seriously, and what matters are observational tests. I have detailed several observational tests in Phys. Rev. D90 (2009) 123512 = arXiv:0909.0749. Some of these involve things which are already tensions for the standard model.

    "Elfmotat" mentions that I have pointed out a tension between my dressed Hubble constant, and the current value of Riess et al; I can live with something in the range up to 68 km/s/Mpc (or down to 58 km/s/Mpc) but 72 km/s/Mpc would be too high. I should point out, however, that there are also many observations which provide tensions, or indeed outright anomalies, for the standard Lambda CDM cosmology.

    In cosmology it is sometimes said that any model which fits all of the data at any one time is almost certainly wrong, because some of the data is going to change. For example, the standard model has the primordial lithium abundance anomaly, which has existed since the first release of WMAP data in 2003 and which has not gone away. This one is actually very interesting for me, since when I put the numbers in it is very likely that the timescape model can deal with the lithium abundance anomaly. (This happens because the ratio of baryons to dark matter is naturally increased for a given baryon-to-photon ratio.) For the standard model the lithium abundance anomaly is statistically much more significant than the Hubble constant tension is in my case - which is the difference between a "tension" and an "anomaly".

    The standard model is accepted, despite some outright anomalies and various other tensions, simply because it fits so many independent pieces of data. Of course, the standard model has been developed over decades with thousands of people working on it, and so it can be subject to many more tests than any other cosmological model. To claim to do better than the standard model on the primordial lithium abundance anomaly, for example, I have to be able to fit all of the Doppler peaks in the CMB anisotropy spectrum to more tightly constrain the baryon-to-photon ratio.

    Thus far with the CMB I have only fit the overall angular scale of the Doppler peaks (which itself is a major test). To do the rest is hard because one cannot simply use existing numerical codes written for the standard model, developed over a decade by many people. One has to revisit the whole problem from first principles. That sort of thing takes time. Similarly, techniques used to analyse galaxy clustering statistics are very much based on the standard model; so again to do the tests rigorously requires revisiting the whole observational methodology.

    The interesting thing about the timescape cosmology is that quantities such as the luminosity distance are so close to those of the standard model that the differences are at the level of systematic errors. Supernova distances are the test we have studied most, as they are the simplest. Even in this simple case, the fact that supernovae are not purely standard candles but standardizable candles means that the systematics have to be sorted out before one can say whether Lambda CDM or timescape is better (see arXiv:1009.5855). There are two main methods of data reduction, SALT/SALT-II and MLCS2k2: if you use the first method you would say Lambda CDM fits better, and if you use the other you would say timescape fits better.
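
    To indicate where the fitter dependence enters (this is just the generic form of the light-curve standardization, not something specific to my papers): SALT/SALT-II estimates each supernova's distance modulus from its fitted peak magnitude m_B, light-curve stretch x_1 and colour c as

        \mu = m_B - M_B + \alpha x_1 - \beta c ,

    with the global parameters M_B, \alpha and \beta fitted simultaneously with the cosmological parameters, whereas MLCS2k2 instead fits an explicit host-galaxy dust extinction with an assumed reddening law. Because colour and extinction are treated so differently, the two reductions can pull a model comparison in different directions.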

    It boils down to empirical questions such as: is the reddening by dust in other galaxies similar to that of the Milky Way? I find that it should be similar in order for timescape to come out better, and indeed independent measurements in other galaxies confirm that the Milky Way reddening parameter R_V = 3.1 is within a standard deviation of the mean value R_V = 2.8 found in other galaxies. Yet some supernova fits to the standard cosmology give R_V = 1.7, which is way off. It is these sorts of nitty-gritty issues that crop up again and again when you try to do any observational cosmology. Astrophysics is basically a dirty empirical subject where you cannot control the conditions of the lab. To distinguish two competing models on many independent tests therefore requires years of work. Some of the most definitive tests I discuss in arXiv:0909.0749, such as the redshift-time drift test, will actually take decades to carry out.
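
    For those unfamiliar with the parameter: R_V is the standard ratio of total to selective extinction,

        R_V = \frac{A_V}{E(B-V)} ,

    so a smaller R_V means that a given amount of reddening E(B-V) is taken to imply less dimming A_V. A fitted value as low as R_V = 1.7 therefore suggests that the colour corrections are absorbing something other than ordinary Milky-Way-like dust.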

    The tension in the value of the dressed Hubble constant is actually interesting because in the timescape model one expects a natural variance of up to 17-22% in the nearby Hubble flow below the scale of statistical homogeneity. This may also impact on the assumptions made in normalizing the distance ladder, which is crucial to determining H0. So there are important systematic astrophysical issues to be resolved. The most interesting tests are in fact nearby, below the scale of statistical homogeneity, where the universe is most inhomogeneous. We have begun looking at this.

    Indeed in arXiv:1201.5371 we do a model-independent test, with a finding that is very much at odds with the assumptions in the standard model. The spherically averaged Hubble flow appears to be significantly more uniform in the rest frame of the local group, rather than the assumed rest frame of the CMB. The only way to understand this as far as I can see is that there appears to be a significant non-kinematic component to the CMB dipole due to foreground density gradients from the differential expansion of space; indeed we find a dipole whose strength is correlated with structures at the same scales that generate the spherical (monopole) variation in the Local Group frame. Such a finding supports the timescape model but does not prove it. On the other hand, some astronomers are beginning to collect data for far more ambitious tests of the timescape model: see arXiv:1211.1926.
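
    For the curious, here is a minimal sketch of the kind of test involved. This is not our analysis pipeline from arXiv:1201.5371; it is an illustration under simplified assumptions (a linear Hubble law, a simple least-squares shell estimator, a purely kinematic boost between frames), and the function names and numbers here are mine:

        import numpy as np

        def shell_hubble(cz, r, n_hat, v_frame, shell_edges):
            # Least-squares Hubble "constant" H_s = sum(cz*r)/sum(r^2) in radial
            # shells, after converting the redshifts to a chosen rest frame with
            # the usual linear correction cz' = cz + v_frame . n_hat, where
            # v_frame is the observer's velocity with respect to that frame
            # (as in the standard heliocentric-to-CMB correction).
            cz_f = cz + n_hat @ v_frame
            H = []
            for r_lo, r_hi in zip(shell_edges[:-1], shell_edges[1:]):
                s = (r >= r_lo) & (r < r_hi)
                H.append(np.sum(cz_f[s] * r[s]) / np.sum(r[s] ** 2))
            return np.array(H)

        # Synthetic illustration: a perfectly uniform 65 km/s/Mpc flow, observed
        # by someone moving at 600 km/s along +z, with part of the sky cut away
        # (as in real surveys), so that an uncorrected dipole leaks into the
        # spherical averages.
        rng = np.random.default_rng(0)
        n_hat = rng.normal(size=(20000, 3))
        n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)
        n_hat = n_hat[n_hat[:, 2] > -0.5]          # crude incomplete-sky cut
        r = rng.uniform(2.0, 150.0, len(n_hat))    # distances in Mpc
        v_obs = np.array([0.0, 0.0, 600.0])        # observer's velocity (km/s)
        cz_obs = 65.0 * r - n_hat @ v_obs          # redshifts the observer records

        edges = np.array([2.0, 25.0, 50.0, 75.0, 100.0, 150.0])
        print(shell_hubble(cz_obs, r, n_hat, np.zeros(3), edges))  # drifts with radius
        print(shell_hubble(cz_obs, r, n_hat, v_obs, edges))        # ~65 in every shell

    The point of the toy example is only that, once sky coverage is incomplete, the choice of rest frame in which the redshifts are expressed changes the radial profile of the spherically averaged Hubble flow - which is why asking in which frame the flow is most uniform is a meaningful, model-independent question.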

    In short, observational tests are being undertaken. It just takes time.


    4. SOCIOLOGY OF SCIENCE IN CHALLENGING A STANDARD MODEL

    "Anonymous" is very correct to point out that alternatives to dark energy are seriously considered in the community, and that your portrayal - based largely on the popular account - is simplistic. However, the issue of acceptance of new ideas by the scientific community is a complex one, so the possibility of something being "too big to fail" (sociological inertia) deserves further discussion. The "too big to fail" standard here is not dark energy itself, but the Friedmann-Lemaitre models which have been around since the 1920s and upon which a huge superstructure has been built absorbing many decades of many people's careers. That is a lot to challenge; and not something you do lightly.

    The timescape model is testable, and it actually fits the current data so well that the differences between timescape and Lambda CDM will require years of careful analysis to resolve. But the pace of doing this work is not only limited by the time it takes to do observations. There is also the fact that I and my students are the only ones working on developing the theoretical ideas and analytic tools needed to better test the timescape cosmology. My work is certainly cited, but just because people find it interesting, not because they are working on it. The attitude of my colleagues is most often: "that's an interesting idea, we'll wait and see". Of course, there are also those who simply do not believe me.

    Most scientists generally only read in depth those papers which are going to help them write their next paper. Our careers are built on producing papers to pass peer review. To do something totally new requires a big investment of time, and most people want to be convinced that something is going to work before they make that investment.

    In theoretical physics, researchers will quickly take to a new topic if they can adapt their existing tools and produce papers quickly. The reason that large voids and other LTB models are the most well-studied inhomogeneous models - with many, many papers written despite their being extremely unlikely as physical models of the whole universe - is that they are exact solutions of Einstein's field equations to which existing techniques can be applied. It requires no hard conceptual thinking to produce new papers; just an ability to solve differential equations and a familiarity with the tools of general relativity.

    Anything which involves truly new physics is hard to sell, since there are always a lot of ideas on the market. In my case I am going to the fundamentals of hard problems that troubled Einstein and many mathematical relativists since. I am thinking conceptually and using observations as a guide to new physical principles. In reality, the "shut up and calculate" school still dominates. A lot of theoretical physicists shy away from conceptual thought, and like to solve hard mathematical problems within the confines of an accepted theoretical framework. I have given an invited lecture at a big conference and had a famous relativist come up to me afterward with the comment "congratulations, this is the true spirit of relativity", or on another occasion, "this is the way that physics should be done". But very few people actually dare to do physics this way; it's too much of a gamble.

    I am taking the gamble because I am convinced that the mystery of dark energy (and perhaps even dark matter too) does demand really new physics which was not in the toolbox of tricks that we had invented by the end of the 20th century. And as a relativist I know Einstein never finished the job. I am abandoning the idea that spacetime is a single geometry with a prescribed matter field. Instead I think cosmological spacetime describes a statistical ensemble of geometries, within which the notions of gravitational energy and entropy have to be better defined. It involves new symmetry principles (which I have tried to encapsulate with a "cosmological equivalence principle"). Such symmetry is no doubt related to the initial conditions of the universe and all those thorny questions about quantum gravity. But rather than trying to tackle those problems head on with purely mathematical ingenuity, I think we have to be guided by observations - which often means sorting out very prosaic astrophysical issues to get to the bottom of things.

    Physicists who talk about "modified gravity" have generally proceeded from a point of view in which all modifications come about from changing the action, while keeping the geometry simple. Indeed, a lot of present day thinking in cosmology is really still Newtonian (actions and forces), with a simple Euclidean space lurking in the background. (From my point of view, distances on cosmological scales - the 30/h or 100/h Mpc - are a convention adapted to one particular geometry. In a statistical description, where spatial curvature varies greatly and there is no single fixed metric, there are different equivalent metric descriptions. It is all about the relative calibration of rulers and clocks.) So while the program I am pursuing is not "modified gravity" in the way it has traditionally been done - i.e., it does not change general relativity on the scale of bound systems on which it is tested - I would be happy to call it "modified geometry".

    This is new and uncharted territory. But if "dark energy" is the fundamental mystery that everyone says it is then any real lasting solution is going to be new uncharted territory at some point. You cannot have it both ways. It is entirely reasonable of my colleagues to "wait and see", since I might be wrong. But I am of the view that being a theoretical physicist is not about necessarily being right or wrong, but rather about having the courage to go right back to the conceptual foundations when the observations demand it, and to rigorously ask questions that have never been asked before.


    [Update to Timescape cosmology FAQ] - David Wiltshire