First of all, your model relies on the Friedmann equation being invalid. Certainly I was always taught that the Universe is homogeneous and isotropic, but I've seen surveys find larger and larger structures in the Universe over the years.
COMMENT: The Friedmann equation can and will be tested by the Euclid satellite in the period 2020-2026, via the Clarkson-Bassett-Lu test. In terms of observational tests (your last question), the most important tests are not the ones dreamed up in the context of the standard cosmology, but tests which can falsify the assumptions of the Friedmann equation. The timescape model gives a prediction of the level of precision at which the Friedmann equation should be found to fail. I have put figures relating to this at http://www2.phys.canterbury.ac.nz/~dlw24/universe/wager.html#euclid.
UNITS: I will give distances in h^{-1} Mpc, where the Hubble constant is normalized so that H_0 = 100 h km/s/Mpc; meaning that if h=0.67 (Planck LCDM value of H_0) then 100/h Mpc = 149 Mpc = 487 million ly.
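As a quick numerical check of those conversions (a minimal sketch; the conversion factor is standard, the variable names are just mine):

```python
# Convert the h^{-1} Mpc convention into physical distances for a chosen h.
MPC_IN_MILLION_LY = 3.2616      # 1 Mpc is about 3.2616 million light years

h = 0.67                        # Planck LCDM value of the dimensionless Hubble parameter
H0 = 100.0 * h                  # Hubble constant in km/s/Mpc
scale_mpc = 100.0 / h           # a scale quoted as 100/h Mpc, in Mpc
scale_million_ly = scale_mpc * MPC_IN_MILLION_LY

print(f"H_0 = {H0:.0f} km/s/Mpc")
print(f"100/h Mpc = {scale_mpc:.0f} Mpc = {scale_million_ly:.0f} million ly")
# prints: H_0 = 67 km/s/Mpc, then 100/h Mpc = 149 Mpc = 487 million ly
```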
"SHORT" ANSWER 1: There is a scale around 100/h Mpc, close to the BAO scale, below which things are very lumpy, and on larger scales less so. In fact, everyone agrees on this much. It is true that there are larger and larger structures, but then there is a question of how typical are those structures? Everyone agrees that on scales smaller than 100/h Mpc the universe is very typically very lumpy. On larger scales it may be lumpy but the question of how typical that it is depends on how you smooth things out in taking averages. I have a long version answer about how one arrives at these scales observationally that I will put at the bottom as an appendix.
In determining expansion via Einstein's equations the question is: on what scales are matter and geometry coupled via Einstein's equations? My answer: small scales take precedence, because Einstein's equations are causal. Geometry here depends on matter here.
While the speed of light presents an absolute speed limit, in fact the total energy density in photons and gravitational waves at late cosmic epochs is negligible compared to the local density of particles which do not travel relative to each other at relativistic speeds. So as far as I am concerned it is the very small scales relating to non-relativistic particles [something George Ellis calls the matter horizon; MNRAS 398 (2009) 1527] that take precedence.
Everyone agrees that there is only evidence for an average cosmological expansion law on scales > 100/h Mpc. But there is no theoretical reason for this average expansion law to be given by the Friedmann equation (other than that it works, at the expense of adding dark matter and dark energy). An average of the small-scale Einstein equations which obeys fundamental physical principles consistent with the principles of general relativity (GR), such as causality, does not have to give the Einstein equations on large scales. In fact, it does not. The Buchert equations are a generic average of the Einstein equations that are not exactly the same form as the Einstein equations.
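For reference (this is standard material, not something specific to the timescape model), the Friedmann equation for homogeneous dust of density \rho, in units with c = 1, is

  (\dot{a}/a)^2 = (8\pi G/3) \rho - k/a^2 + \Lambda/3,

whereas Buchert's averaged equations for irrotational dust on a spatial domain D take the schematic form

  3 (\dot{a}_D/a_D)^2 = 8\pi G <\rho>_D - (1/2) <R>_D - (1/2) Q_D,
  3 \ddot{a}_D/a_D = -4\pi G <\rho>_D + Q_D,
  Q_D = (2/3) ( <\theta^2>_D - <\theta>_D^2 ) - 2 <\sigma^2>_D,

where a_D is the volume scale factor of the domain, \theta the expansion, \sigma the shear, <R>_D the average spatial curvature and Q_D the kinematical backreaction. The averaged curvature and backreaction terms are what prevent the average from taking the exact Friedmann form.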
We use Einstein's equations for historical reasons. When Einstein wrote down cosmology in 1917 the distance to galaxies was not known. It was the era of "the Great Debate". It was reasonable then to assume that the universe had the density of the inside of our galaxy, with stars as particles of dust forming a statistically homogeneous distribution. Later people assumed that galaxies were the particles of dust. But galaxies are not homogeneously distributed, which is what the dust approximation implicitly assumes. Instead clusters of galaxies - the largest bound structures - form filaments and sheets that thread and surround voids. The largest typical nonlinear structures are very empty voids of diameter 30/h Mpc. Voids of just this one typical size occupy 40% of the present universe, and if you add those of other sizes the universe is void dominated at present.
For Einstein's equations with "dust" on the right hand side to actually be justified, there should be some largest objects moving as ideal dust particles in response to Einstein's equations that are also randomly distributed with a spatially homogeneous average. But there are no such objects. Clusters of galaxies - the largest bound structures - are not randomly distributed. So we use the "dust" approximation; but strictly speaking it is only valid in voids where structures never formed, and where we know what the dust is - thinly dispersed protons and helium nuclei. Coarse-graining the bound structures where the mass is concentrated as "particles" in a manner consistent with GR is unsolved.
Furthermore, there is a basic physical issue relating to gravitational energy which is non-local on account of the equivalence principle. If we coarse grain to replace all the particles in the Earth by a single particle of mass M, then we are dealing essentially with the coarse-graining of non-gravitational forces that are much stronger than gravity. But when we coarse-grain stars into galaxies and then galaxies into clusters and then clusters into the expanding universe we are coarse graining gravitational degrees of freedom. We are coarse-graining geometry, and we have a very different problem. In fact, it is an unresolved problem which gets to the heart of conceptually difficult pieces of Einstein's theory which he never resolved. Unfortunately, given its successes many physicists naively assume that GR is a completed theory. It is not. As far as I am concerned, "dark energy" is the observational challenge that we sort out those basic unsolved questions in general relativity.
ANSWER: The clock of an observer in a galaxy who sees a close to isotropic CMB will very gradually accumulate a difference relative to the clock of an observer in a void not gravitationally bound to any structures (who also sees an as-close-as-possible to isotropic CMB). The effect is cumulative, resulting from the relative overall volume deceleration of expanding regions of different density, whose expansion decelerates by different amounts.
Putting numbers in, the galaxy clock could be 35-38% slower (not faster; you have the sense reversed) than the average void one by the present day after billions of years of cosmic history.
Apart from effects relating to the local anisotropies of peculiar structures below the scale of statistical homogeneity, the expansion itself would not look different to either observer in terms of the overall degree of anisotropy on large scales. However, in the void the temperature of the CMB would be cooler, the overall angular position of the acoustic peaks would be shifted, and the Universe would be a few billion years older. Effectively, small regional "universes" of different ages coexist inside the observable universe.
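To make the "few billion years" concrete with a deliberately crude illustration (the linear growth of the rate difference and the void-observer age here are my simplifying assumptions, not the timescape calculation itself):

```python
# Crude illustration: if the fractional clock-rate difference between a galaxy
# observer and a void observer grew roughly linearly (in void time) from zero
# to its present value, the accumulated age gap is about half the present rate
# difference times the elapsed time.
present_rate_difference = 0.35   # galaxy clocks ~35% slower today (from the text)
void_age_gyr = 17.0              # assumed age measured by the void observer (illustrative)

age_gap_gyr = 0.5 * present_rate_difference * void_age_gyr
print(f"accumulated age difference ~ {age_gap_gyr:.1f} Gyr")   # ~3 Gyr: "a few billion years"
```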
Within clusters of galaxies, structures more or less "virialize"; that is, they become stable, effectively isolated bound structures, on account of the growth of angular momentum perturbations as structures collapse. To the extent that such structures are unchanging, within bound structures we are generally talking about close-to-static potentials, and therefore the known standard effects in GR. In so far as we are talking about those effects, in the timescape model the time dilations in superclusters as compared to galaxies in thin filaments like ours would not be any different to conventional estimates.
COMMENTS: I do want to emphasize that the time dilation effect in the timescape model is not the same as a local acceleration in a static potential, which is the image your words "gravitational potential of the cluster" conjure up.
In relativity, there is no "unique time" at any point. Time is only defined by observers who carry clocks. At any point in space there are an infinite number of arbitrarily differing time dilations of observers who are travelling with respect to each other at local relative speeds that get closer and closer to that of light. In special relativity, only observers at rest with respect to each other can have clocks that tick at the same rate.
What GR introduces that is different is that for two observers who are *not moving* with respect to each other, there will still be a time dilation by an amount depending on the observers' relative position in the static potential. If one simply talks about time going "faster" or "slower", it is conceptually misleading as it conjures up some sort of Newtonian idea of an external time that is independent of the observers and fixed to a point in space, when actually it is *specific observers* one is talking about (the ones at a fixed distance from each other in the static potential case).
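For comparison, the textbook static-potential result being referred to here: two observers held at rest at different positions in a weak static potential \Phi have clock rates related by

  d\tau_1/d\tau_2 \simeq \sqrt{ (1 + 2\Phi_1/c^2) / (1 + 2\Phi_2/c^2) } \simeq 1 + (\Phi_1 - \Phi_2)/c^2,

so the dilation is fixed entirely by the observers' relative positions in the static potential. The timescape effect discussed below is not of this form.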
When we go to the timescape model, we are not talking about static potentials because no distances are fixed in cosmology. In talking about a "potential" it is always a potential with respect to a background. But what background? GR in general is *background free*; matter creates its own background! So the question gets back to "on what scales are matter and geometry coupled by Einstein's equations"? (In reality the Einstein equations are only tested on 2-body systems like the solar system, binary pulsars and merging black holes.)
In the Friedmann model, one has the symmetries of the background to define ideal hypersurfaces of homogeneity, with respect to which potentials are defined. Distances between observers in these hypersurfaces increase uniformly, so they can be considered to have synchronized clocks. The fact that one has the background at one's disposal means that one can avoid dealing with all the fundamental problems that there are in GR in all its generality.
In the bound systems in which GR is tested, time dilations are defined relative to the "clock at infinity" assuming isolated systems in an otherwise empty universe. But no system is isolated, and this notion of infinity is an idealization that is not actually realized absolutely. The time dilation effect that I am discussing is effectively an instantaneously very small drift of the "clock at infinity" in a universe in which nothing is static. It is a different effect to any time dilation effect you will find in a GR textbook, because it has been overlooked.
What I do is isolate the degree of freedom of non-static systems which relates to the regionally closest to homogeneous and isotropic density. The Universe starts out expanding close to uniformly, but denser regions decelerate more. It is the relative regional volume deceleration, and its effect on normalizing the effective "clock at infinity", which I claim is a type of time dilation that has been neglected in GR for as long as we have spent our time thinking only about exact solutions of Einstein's equations and exact symmetries, rather than the general case.
I am talking about re-examining the first principles of GR in dealing with open questions in its foundations.
QUESTION 3. In the press release that accompanied the publication of your paper, you mention that the supernova data fit the timescape model a little better than the Friedmann model. How much better was this fit, and is there anything that can be done to make the answer more definitive?
ANSWER: On Bayesian comparison it is "not worth more than a bare mention". Nothing definitive; in fact I did not want to say which model fit better, given that was the case, but the RAS said the press release had to say so because people would ask. The fact that it is not definitive is because the distance-redshift relations of the two models are closer than the magnitude of current systematic uncertainties. As long as neither model is drastically wrong (as, say, a non-accelerating Einstein-de Sitter model is) then one cannot hope to do anything definitive while the systematic uncertainties remain as large as they are; these have to be sorted out first.
As far as I am concerned there are far more interesting results in our paper, testing for new systematic uncertainties in the supernova data.
Any supernova analysis relies on empirical models to standardize the candles. In the SALT2 method, which is by now perhaps the most widely used, empirical light curve parameters are fit together with cosmological parameters for the whole sample. Furthermore, in many analyses, such as those of the JLA sample, supernovae at distances closer to us than the 100/h Mpc statistical homogeneity scale are used. Since it is known that there are significant inhomogeneities on such small scales, in the standard analysis people model the effect of these inhomogeneities as "peculiar velocities". This is determined using Newtonian gravity on top of the uniform expansion, as one would have in the FLRW model. In any inhomogeneous cosmology, whether the timescape or otherwise, one generally has differential cosmic expansion which is not the same as the Friedmann model plus local boosts. Therefore one should not use data below the statistical homogeneity scale in determining cosmological distances.
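For reference, the SALT2 standardization just mentioned constructs a distance modulus for each supernova from its fitted peak magnitude m_B^*, light-curve stretch x_1 and colour c as

  \mu = m_B^* - M_B + \alpha x_1 - \beta c,

where M_B, \alpha and \beta are global parameters fit jointly with the cosmology for the whole sample. In this notation the "stretch" and "colour" parameters referred to in the next paragraph are x_1 and c (or, depending on the analysis, the associated global coefficients and distribution parameters).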
But with our analysis we could test whether this assumption made any difference by cutting out all data below some distance "D" and repeating the analysis (starting with raw data and not trying to use any peculiar velocity model). What we found, independently of the cosmological model, was that particular empirical parameters which are supposed to be constant changed around a scale which turns out to be precisely the expected statistical homogeneity scale. There are further systematic issues in the data, because the "stretch parameter" continues to change gradually as data is removed. However, the "colour parameter" dropped significantly around the statistical homogeneity scale and then stabilized. You do not need to assume a value for the statistical homogeneity scale: the empirical analysis picks it out.
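A minimal sketch of the cut-and-refit procedure described above (the data columns and the fitting routine are hypothetical placeholders, not our actual pipeline):

```python
def cut_and_refit(data, fit_global_parameters, z_cuts):
    """For each lower redshift cut, drop all supernovae closer than the cut and
    redo the global fit, so one can watch how the fitted empirical parameters
    (e.g. stretch and colour coefficients) drift with the cut."""
    results = []
    for z_min in z_cuts:
        kept = [sn for sn in data if sn["z"] > z_min]   # keep only distant supernovae
        params = fit_global_parameters(kept)            # placeholder for the likelihood fit
        results.append((z_min, len(kept), params))
    return results

# Empirically, parameters that should be constant stop drifting once the cut
# passes the statistical homogeneity scale; no value of that scale is assumed.
```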
This shows that issues relating to statistical homogeneity are important and have to be sorted out in order to achieve greater precision. In fact, we are also still limited by the fact that the "raw" data is not quite raw, but includes modelling of the Malmquist bias - assuming a FLRW cosmology. Accounting for selection biases is absolutely necessary in cosmology. Unfortunately you need a model of the selection biases. At some level it may not make much empirical difference. But to get to high precision, it is important to understand such things. This is a level of nitty-gritty we have not gotten into, as we are not astronomers and have not reduced the light curves ourselves. We are just using the reduced distances and redshifts (but without the additional peculiar velocity modelling or other tweaks that the standard model people apply).
QUESTION 4. Finally, how does the timescape model explain other measures of dark energy, such as the growth of galaxy clusters, or the measurement of standard rulers such as baryonic acoustic oscillations?
ANSWER: (i) The growth of galaxy clusters is formulated in a manner that is based on a growth factor relative to a FLRW model. You cannot easily formulate the timescape model that way, as there is no FLRW background. What you do have, however, is a void volume fraction. Apparent acceleration starts when the void fraction reaches 59%. In principle, you could test the void fraction as a function of redshift. However, defining the void fraction empirically from observations is difficult and will not lead to precise measurements.
(ii) The BAO is a challenge because the standard techniques usually involve a complex statistical analysis, based in Fourier space, assuming a standard FLRW cosmology. It is sufficiently difficult that we did not attempt it until last month. A student of mine is working on a different method at present. This work is in progress, the data is noisy, and the conclusion depends on the priors in the Bayesian comparison. Choosing wide priors, one cannot say which model fits better. Choosing narrow priors is possible in LCDM, but not yet in timescape until we understand the CMB in the timescape a lot better (next item), as few-percent variations in the redshift of the baryon drag epoch are going to be important if you wish to get narrow priors for a Bayesian comparison.
(iii) The CMB anisotropies are one of the most important tests in cosmology. A few years ago I did a big investigation of this with my student Ahsan Nazer in Phys. Rev. D91 (2015) 063519. The upshot is we have to consider backreaction in the primordial plasma in order to say anything with the precision that is claimed in the standard FLRW analysis. We can fit the angular diameter distance of the sound horizon easily, but if we look at parameters that relate to the shape of the acoustic peaks (the interplay of the baryon to photon ratio and the spectral index) then uncertainties of order 10^{-5} in energy density at last scattering lead to uncertainties of order 8-13% in these parameters at the present day.
The point is that since no one has ever considered backreaction in the primordial plasma, we are still using the FLRW model at that point, while matching it as smoothly as possible. However, evolving things with the curvature of space fixed to be the same everywhere and small actually turns out to be different from evolving things with the curvature small but *not the same everywhere*, and not governed by a global Friedmann equation. While globally things are close to homogeneous and isotropic in the early universe in the timescape model, the mathematical nature of the way that things differ is important in considering the growth of perturbations, and that is important in the way one parameterizes the things that determine the shape of the acoustic peaks.
This is a very hard problem, as it means revisiting the way that early universe perturbation theory is done from first principles. We are trying to make a start, but I doubt anyone else will want to unless the Friedmann equation is shown to fail. That is the reason I rate the Clarkson-Bassett-Lu test, mentioned at the start, as the important one. If the Friedmann equation can be clearly shown to fail - and a decade from now we should know the answer - then it will be game on.
(iv) The timescape model has very specific predictions that relate the average expansion rate (Hubble parameter) to the maximum variations of expansion that one sees below the scale of statistical homogeneity. Numerically, these are consistent - e.g., if we consider only the thin filament that joins us to the Virgo cluster, one gets of order 50 km/s/Mpc. Similarly the maximum rate across voids, as measured by us, is of order 75 km/s/Mpc. Since the Universe is void dominated, one will always see a higher local value when doing a spherical average until the size of the spheres averaged over becomes a few times larger than the largest typical nonlinear structures (voids of 30/h Mpc in diameter).
The tension in the Hubble constant is of course an interesting thing, because it is roughly what one expects. However, the details relate to how one takes the average. We have done some work on this (not yet published, and technical, so I will not comment further).
APPENDIX: Your question has different answers depending on the definition of average statistical homogeneity, as described in section 2.2 of our review article, Buchert et al, Int. J. Mod. Phys. D25 (2016) 1630007, http://arxiv.org/pdf/1512.03313.pdf [based on the textbook definition, as in A. Gabrielli, F. Sylos Labini, M. Joyce and L. Pietronero, "Statistical physics for cosmic structures" (Berlin: Springer, 2005)].
This standard definition of average homogeneity, implicit in assuming Friedmann's equation to be valid, is that the time-evolution of the universe splits into spatial hypersurfaces on which, at a fixed time, the difference between the density measured in a random box, \rho, and some constant average density, \rho_0, becomes negligibly small as the box is made ever larger. I.e., there is homogeneity (a constant density at a fixed time) on the very largest scales, which one can approach arbitrarily closely in a big enough box.
Mathematically, one can then define a homogeneity scale and relate it observationally to galaxy-galaxy correlation functions. The N-point correlation function, for a given scale, r, is the probability in excess of random of finding N galaxies within a sphere of radius r (or an equivalent definition).
If one had the textbook definition of homogeneity, then all N-point galaxy correlation functions would yield a result consistent with random on scales r > R, for some fixed R; the "statistical homogeneity scale".
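In symbols (these are the standard definitions, not anything special to this discussion): the 2-point correlation function \xi(r) is defined by the excess probability over random of finding a pair of galaxies in volume elements dV_1 and dV_2 separated by r,

  dP_{12} = \bar{n}^2 [ 1 + \xi(r) ] dV_1 dV_2,

where \bar{n} is the mean number density, and the textbook homogeneity criterion above amounts to requiring that the variance of the density contrast in boxes (or spheres) of size r,

  \sigma^2(r) = < ( (\rho - \rho_0)/\rho_0 )^2 >_r,

goes to zero as r grows, with analogous statements for all the higher order correlations beyond some fixed scale R.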
If we only consider the 2-point galaxy correlation function, for pairs of galaxies, then a "transition to homogeneity" is seen for R somewhere in the range 70/h < R < 120/h Mpc. *But* the same is not true for the higher order N-point correlation functions, on any scale up to the largest the surveys can measure, with 3 sigma deviations from LCDM on scales up to 500/h Mpc and 2 sigma deviations on scales up to 700/h Mpc [Wiegand, Buchert and Ostermann, MNRAS 443 (2014) 241].
This is a quantitative version of your statement about "larger and larger structures in the Universe". But when one looks at larger and larger structures, one has to worry about which structures are typical as opposed to rare.
Another way to look at this is the remark that the observations which see a "transition to homogeneity" in measurements of the 2-point galaxy correlation function also see a variation of 7-8% in the density of spheres on scales as large as possible given the survey volumes [Hogg et al, ApJ 624 (2005) 54; Sylos Labini et al, A&A 505 (2009) 981].
There is a notion of "statistical homogeneity scale" which the 2-point galaxy correlation function observations pick out, but it does not correspond to the textbook definition above. In particular, the variation in density is not going to become arbitrarily small on ever larger scales; it will remain *bounded* at the 7-8% level that is observed on the currently largest measured scales. A spatially flat Friedmann geometry does not apply, and observations designed to test the textbook definition (which mathematically build in those geometrical assumptions) fail.
Why is there a statistical homogeneity scale? At the surface of last scattering, we infer from the temperature fluctuations in the CMB radiation that the density was very close to homogeneous. But there are fluctuations of order 10^{-5} in baryons, and of order 10^{-4} in dark matter (by the standard assumptions), *on all spatial scales*. Suppose the dominant density fluctuation of this order has amplitude A. The BAO scale, R, then sets a physical demarcation. For fluctuations on scales r < R, A is amplified by acoustic oscillations. On scales r > R it is not. In fact, as is observed - and consistent with inflation - the fluctuations are almost scale-free. So we should expect variations on large scales.
It is a back-of-the-envelope calculation to estimate: if I have small independent spatial regions, each governed by the Friedmann equation, which differed in density by 10^{-4} at last scattering, then by how much will they differ today? Answer: 6%. In other words, if the Friedmann equation or something close to it applies at small scales - but we make no assumptions about geometry on larger scales - then the 7-8% large scale density variation that is observed is totally consistent with cosmic variance and the idea that the initial fluctuations were scale-free. It is only the idea of a fixed global FLRW geometry that raises a problem.
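A rough version of that estimate (the growth law and the late-time suppression factor are my illustrative simplifications, assuming each region behaves as a matter-dominated Friedmann model in which density contrasts grow in proportion to the scale factor):

```python
# Linear density contrasts in a matter-dominated Friedmann region grow roughly
# as the scale factor, i.e. by a factor ~(1 + z_dec) between decoupling and now,
# with some suppression once matter no longer dominates the expansion.
delta_last_scattering = 1e-4   # density contrast at last scattering (from the text)
z_dec = 1090                   # redshift of decoupling
late_time_suppression = 0.6    # illustrative allowance for slower late-time growth

delta_today = delta_last_scattering * (1 + z_dec) * late_time_suppression
print(f"large-scale density variation today ~ {delta_today:.0%}")
# ~7%, the same order as the 6% estimate and the observed 7-8% variation
```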
Furthermore, the BAO scale is naturally the scale at which a demarcation is seen, since fluctuations below this scale are potentially amplified. Well, we have to be careful. The 2nd acoustic peak is a compression inside a rarefaction, or vice versa, which undoes the amplification. But the 3rd acoustic peak represents a compression inside a compression (or a rarefaction inside a rarefaction), and then we get extra amplification. And so it is there that typical structures are seeded.
As far as I am concerned it is no accident that 40% of the volume of the present universe is in voids of 30/h Mpc - 1/3 of the BAO scale - it is the echo of the 3rd acoustic peak. This is still a scale at which we have cosmic expansion. On smaller and smaller scales, especially when things collapse (clusters and superclusters) so that angular momentum comes into play, everything gets mixed up to the extent that one no longer sees an echo of the seed perturbations.