Question 1: Other recent papers (e.g. Guzzo et al) say that dark energy fits better than alternatives. Does not their claim contradict yours?
Answer: Generally there could be three main reasons for the observations that we interpret as evidence of "dark energy":

1. a genuine "dark energy" fluid, such as a cosmological constant or quintessence, added to the matter sources of Einstein's equations;

2. a modification of the law of gravity on cosmological scales;

3. a misinterpretation of the observations, because the average evolution of the inhomogeneous universe, and the position of observers like us in bound systems, are not adequately described by a single homogeneous isotropic model.
Most theorists and observationalists only consider the first two options, and simply do not consider option 3, which is the one I pursue. As long as people pursuing option 3 are a minority, one must of course be careful to check what is behind any media statement. Guzzo et al [Nature 451 (2008) 451], like most people, simply did not consider option 3, as their press release clearly shows.
Guzzo et al in fact compare some standard cosmologies, including the standard Lambda Cold Dark Matter (LCDM) model, with the predictions of one particular modified gravity model, the Dvali-Gabadadze-Porrati (DGP) braneworld. Their test looks at the growth of cosmic structure at a particular redshift. They conclude that the LCDM model fits better than this particular DGP model, although their present data do not allow a statistically definitive statement.
Since my model gives results for type Ia supernova distances that are statistically indistinguishable from those of the standard LCDM model using the Riess06 Gold Data Set, one would expect it to be closer to the LCDM model than to many modified gravity models on any test designed to probe the "dark energy equation of state" (in the terminology of option 1). Such tests will show differences from LCDM, but generally one would need much more and better data than is required to distinguish LCDM from various modified gravity models, such as the DGP model. For my model, a "Hubble flow tomography" test on scales of up to about 100 Mpc would be the most decisive, as the expected predictions for such a test would not be reproduced in other scenarios.
For the record, I have worked on both options 1 and 2 - the ones most theorists consider - for much of my past career. Indeed, I first looked at brane worlds as a PhD student in the mid-1980s, some 13 years before they became fashionable. I was a minor author on the 1987 paper "Super p-branes" in which the word "brane" was invented (Paul Townsend's idea), and the major author on a very early "warped compactification" braneworld paper written with my PhD supervisor. (Technically, I invented the first example of a "fluxbrane".) However, observations have convinced me that neither option 1 nor option 2 is realistic. I have changed my mind about these being worthwhile avenues of research in cosmology, at least as far as late epochs are concerned.
The fact is that there are many observations which are simply not explained by either option 1 (a dark energy fluid) or option 2 (modified gravity). These include: primordial lithium abundances, the expansion age of the universe relative to the structures seen, the Hubble bubble feature, large angle anomalies in the CMB anisotropies, and a number of other anomalies detailed in Table 2 of arXiv:0712.3984. Since many of these anomalies do not fit either of the major paradigms (options 1 and 2), theorists tend either to ignore them, or to concentrate on one that they hope to solve while forgetting about the others. Some particle physicists invent new particles simply to explain the primordial lithium abundance problem.
With so many anomalies, the question for me was: what is the weakest link? Clearly it had to be the over-simplifying assumptions of the standard cosmology. Einstein tells us that we should model spacetime geometry by the observed matter distribution. We do not do that, because the problem is too hard to solve analytically. So if you are not doing what your theory says you should do just because it's too hard - that's the obvious place to start! The question of how we average the actual lumpy universe to get an average close to the standard cosmology is then the vital question. People like Thomas Buchert, Syksy Räsänen and Roustam Zalaletdinov have been thinking about those questions for a long time. My contribution has been to think about the observers - another important thing Einstein taught us about his theory. We only ever measure local geometry, not some average geometry at a distant point. The relation between local and average geometry, recognising that we live in bound systems and not in expanding space, is my starting point.
Question 2: Could you explain how supernovae type Ia appear to be further away than expected?
Answer: When we measure a supernova to make cosmological inferences, we measure two things that matter cosmologically: its redshift and its apparent magnitude.
Three effects contribute to the redshift, but the dominant one is how much the universe has expanded between the time the light was emitted in a distant galaxy and the time it is received by us.
Supernovae of a certain type (type Ia) are considered to be "standard candles", that is, to have the same intrinsic luminosity (after an empirical correction known as the Phillips relation). Since the received flux decreases as the inverse square of the distance, measuring the apparent magnitude of standard candles tells us the distance the light has travelled. Because the universe is expanding, the distance the light has actually travelled will always be greater than the distance between us and the source galaxy at the time the light was emitted, but less than the equivalent distance at the time we receive the light today, independent of the model. However, different model universes have different expansion histories, so the way in which the measured apparent magnitude averages the actual distance at the time of emission with the actual distance at the time of observation depends on the cosmological model you assume.
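As a concrete illustration of the ordering of distances just described, here is a minimal sketch in Python. The flat LCDM parameters below are illustrative only (this is not the timescape model itself):

    # Minimal sketch (assumptions: flat FLRW, illustrative parameters
    # Omega_m = 0.27, H0 = 70 km/s/Mpc; not the timescape model).
    from scipy.integrate import quad

    c = 299792.458        # speed of light, km/s
    H0 = 70.0             # Hubble constant, km/s/Mpc
    Om = 0.27             # matter density parameter

    def H(z):             # Hubble parameter in a flat LCDM model
        return H0 * (Om * (1 + z)**3 + (1 - Om))**0.5

    def comoving_distance(z):          # Mpc; also the proper distance today
        return c * quad(lambda zp: 1.0 / H(zp), 0.0, z)[0]

    def light_travel_distance(z):      # c times the look-back time, in Mpc
        return c * quad(lambda zp: 1.0 / ((1 + zp) * H(zp)), 0.0, z)[0]

    z = 0.5
    d_now = comoving_distance(z)       # proper distance when light is received
    d_emit = d_now / (1 + z)           # proper distance when light was emitted
    d_path = light_travel_distance(z)  # distance the light actually travelled
    print(d_emit < d_path < d_now)     # True: the ordering described above

The ordering holds for any expansion history with H(z) > 0; what changes from model to model is where between the two endpoints the travelled distance falls.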
There are two key things involved in the interpretation of a cosmological model. The first is the calibration of the observer's clock, because to work out an expansion rate - the Hubble parameter - you have to take a time derivative. Also, to deduce that the expansion is "accelerating" you have to take a second time derivative of the luminosity distance. Nobody measures "acceleration" directly; it is a model dependent inference which depends crucially on the calibration of the clock assumed.
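To see where the derivatives enter, recall the standard low-redshift expansion of the luminosity distance in any FLRW model (a textbook relation, not specific to my model):

    d_L(z) = \frac{cz}{H_0}\left[1 + \tfrac{1}{2}(1 - q_0)\,z + O(z^2)\right],
    \qquad q_0 \equiv -\left.\frac{\ddot{a}\,a}{\dot{a}^2}\right|_{t_0}

H_0 involves one time derivative of the scale factor a(t), taken with respect to the observer's clock, and the deceleration parameter q_0 involves a second; change the calibration of the clock and you change the inferred q_0.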
The calibration of the observer's rulers is equally important. Rulers tell you how to estimate the volume of space given a certain radial distance. In flat Euclidean space, we were taught in school that the volume of a sphere of radius R is (4/3) pi R^3; but if space is negatively curved, for example, then for a given proper radius R the volume of space is larger than (4/3) pi R^3 by an amount that depends on the curvature. How does this affect cosmology? Well, a cosmological model gives the expansion history, i.e., how much space has expanded between the times of emitting and receiving the supernova light, which you need if you want a theoretical luminosity distance to compare to the observed one.
Since gravity is attractive, the greater the density of matter, the more the expansion is slowed down, or decelerated. But density is mass divided by volume. So if two different observers calibrate radius to volume differently, because each thinks the curvature of space is everywhere the same as at their own location, they will infer different densities for the same amount of matter, different expansion histories, and so different theoretical luminosity distances.
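To make the volume point quantitative, here is a minimal sketch in Python (my own illustration, with made-up numbers; the closed form follows from integrating the metric of a space of constant negative curvature):

    # Minimal sketch: volume of a sphere of proper radius R in a space of
    # constant negative curvature (curvature scale a), versus Euclidean.
    import math

    def volume_hyperbolic(R, a):
        # V = pi a^3 (sinh(2R/a) - 2R/a); reduces to (4/3) pi R^3
        # in the limit a -> infinity (zero curvature).
        x = R / a
        return math.pi * a**3 * (math.sinh(2 * x) - 2 * x)

    def volume_euclidean(R):
        return 4.0 / 3.0 * math.pi * R**3

    R, a = 50.0, 100.0        # illustrative numbers (e.g. in Mpc)
    print(volume_hyperbolic(R, a) / volume_euclidean(R))  # > 1, about 1.05 here

Since density is mass divided by volume, two observers who assign the same proper radius but different curvatures to the same region will infer densities differing by exactly this ratio.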
The fact that volume effects can greatly affect the interpretation of cosmological parameters in a lumpy universe was noted by Thomas Buchert and Mauro Carfora in 2002 and 2003, in work including the paper "Cosmological Parameters are Dressed". Their analysis was qualitative, however, not quantitative. I claim that the clock rate differences between different ideal observers are so large that these must also be accounted for when interpreting the expansion history, and that I have a quantitatively viable model. Without the clock rate effects it would not be quantitatively viable. With my two former PhD students Ben Leith and Cindy Ng, I have shown in Astrophysical Journal 672 (2008) L91 that the fit of what I call the "fractal bubble model" or "timescape cosmology" is statistically indistinguishable from the standard dark energy Lambda CDM model, using the Riess 2007 gold data set and Bayesian statistical model comparison.
Question 3: Does the equivalence principle insist gravitational energy cannot be localized? Gravity appears localized on the scale of our solar system, and probably the Milky Way.
Answer: Your question confuses the idea of the gravitational force (or spacetime curvature in GR) with gravitational energy, which you have not defined. There are gravitational energy differences over all scales on which gravity acts, including within the solar system - it is just that they are small. By the strong equivalence principle the effects of gravity can always be removed at a point, and so any definition of gravitational energy must involve separated points, which is all we mean by it not being localised. (This should not be confused with non-locality in quantum mechanics. Classical general relativity is a local theory, but we are implicitly dealing with different points in a large scale structure; the connection of general relativity tells one mathematically how to relate locally measured physical quantities at one point to those at a distant point when spacetime is curved.)
To spell things out, the three versions of the equivalence principle are:
(i) the Weak Equivalence Principle (WEP), which concerns the equivalence of passive gravitational mass and inertial mass, and goes back to Galileo's experiments;

(ii) the 1907 Einstein equivalence principle, which concerns the equivalence of a static homogeneous gravitational field and a uniformly accelerated frame in empty space; and

(iii) the Strong Equivalence Principle (SEP), which is embodied in the full theory of general relativity from 1915. The SEP states that in a freely falling frame, in a sufficiently small region of spacetime, the laws of physics reduce to those of special relativity.
By the definition of the SEP, gravitational energy refers to those differences of energy which apply over scales on which one cannot use special relativity to describe physics to some desired accuracy.
Yes, gravity does act over the scale of the solar system, and gravitational energy differences within the solar system are routinely measured: all those things that give rise to gravitational time dilation between observers at different locations (hence "non-local") are manifestations of gravitational energy differences. Near the Earth's surface this is small but measurable (and routinely accounted for in the GPS system). It is larger (but still relatively small) between the surface of the sun and us. Within our galaxy gravitational energy gradients only become very large near black holes.
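For concreteness, the GPS numbers follow from the standard weak-field formulae. The following sketch (textbook physics with rounded constants, not anything specific to my model) reproduces the well-known net offset of roughly +38 microseconds per day:

    # Minimal sketch: net daily clock-rate offset of a GPS satellite clock
    # relative to a clock on the ground, from standard weak-field formulae.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24         # mass of the Earth, kg
    c = 2.998e8          # speed of light, m/s
    r_ground = 6.371e6   # radius of the Earth, m
    r_orbit = 2.656e7    # GPS orbital radius, m

    # Gravitational term: higher clocks run faster.
    grav = G * M / c**2 * (1 / r_ground - 1 / r_orbit)
    # Special-relativistic term: moving clocks run slower.
    v2 = G * M / r_orbit                 # circular orbital speed squared
    kinematic = v2 / (2 * c**2)

    seconds_per_day = 86400.0
    net = (grav - kinematic) * seconds_per_day
    print(f"{net * 1e6:.1f} microseconds per day")   # about +38 us/day

Left uncorrected, an offset of this size would degrade GPS positions by kilometres within a day, which is why the correction is built into the system.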
The question of what can be considered a "local region" in terms of the SEP depends on the accuracy of measurement one requires when comparing the rates at which two clocks tick. For many purposes the small variations of gravitational energy and clock rates within the solar system can be neglected, and the whole thing considered as a "local inertial frame". Indeed, the same is true of galactic scales (provided you keep your clocks away from black holes), and beyond to galaxy clusters etc. Astronomers implicitly have to assume this if they use Newtonian gravity at such scales. One important question I raise is: "what is the largest region which might be considered a local inertial frame from the point of view of the strong equivalence principle when we average over the gravitational fields of many objects embedded in an expanding space?"
Since gravitational energy differences give rise to differences in clock rates between different spatial points, a fiducial reference point is required for a strict definition of gravitational energy. [Actually, to isolate gravitational energy from other forms of energy, the clocks at the two spatial points should ideally both be in free fall, and we have to talk only about those differences which cannot be undone by a local Lorentz boost at either point.] In ideal exact solutions of Einstein's equations which describe unchanging objects such as stars and black holes, the natural fiducial reference point is "spatial infinity". However, that is only an approximation in an expanding universe, and has to be replaced by something else: "finite infinity", as suggested by George Ellis in 1984. I provide a definition of finite infinity as my starting point.
Question 4: Do you violate the Copernican Principle?
Answer: No. I maintain the Copernican Principle.
Some people who have not properly read my work assume that, to deal with the observed inhomogeneity, I must be violating the Copernican Principle, since other approaches, such as those based on the Lemaitre-Tolman-Bondi solutions, do violate it. My procedure, based on Buchert averaging of the Einstein equations, is fundamentally different from such approaches to inhomogeneity, which begin with exact but unlikely symmetries in order to simplify the Einstein equations.
I explicitly maintain the Copernican principle, but recognise that we live in a galaxy, and that apart from the cosmic microwave background our observations are also made of galaxies and other bound systems. Most of the volume of the universe is in voids, so the average position in expanding space is actually a very different location from the mass-biased points at which our observations are made.
It is not a violation of the Copernican principle to recognise that there is likely to be physics in the difference between these two locations. I claim that our measurements differ little from measurements in other galaxies. I relate the average measurements made in galaxies to those that would be made at the average position in expanding space in our observable universe, which differ in a systematic way.
The universe is inhomogeneous, or we would not be here. Clearly we have to deal with the universe we observe, which is inhomogeneous at some level. Thus the Copernican Principle must be reconciled with the observed inhomogeneity. I explicitly identify the relevant scales over which gradients in spatial curvature and gravitational energy can be expected. The dominant voids, as measured by Hoyle and Vogeley, are 30/h Mpc in diameter, or 48 Mpc with my estimate of h=0.617. Statistically, the scale of apparent homogeneity must be reached by the Baryon Acoustic Oscillation scale of 104/h Mpc, i.e. 168 Mpc for h=0.617. At this level of averaging, over a 168 Mpc cube, the Copernican principle applies. It is just that when we look at different points within the cell we have to account for the gradients in spatial curvature and gravitational energy within it. The mass-average and the volume-average locations do not coincide. We can still have a Copernican Principle, while recognising that there are systematic differences depending on where you are within an average cube. This average cube is also just slightly larger than the scale of the observed Hubble bubble - below this scale we detect a larger average Hubble parameter, with a relative peak at the statistically dominant 48 Mpc void scale.
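The unit conversions quoted here are simple arithmetic, e.g. in Python:

    # Converting scales quoted in units of 1/h Mpc to Mpc, for h = 0.617.
    h = 0.617
    print(30 / h)    # dominant void diameter: ~48.6 Mpc
    print(104 / h)   # baryon acoustic oscillation scale: ~168.6 Mpc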
I would go so far as to say that the empirical observation of the Hubble bubble violates the Copernican principle if one assumes the standard synchronisation of clocks. The only way to explain the Hubble bubble in the standard cosmology is to assume that we live close to the centre of a very large underdense region expanding faster than the surrounding universe. Such a scenario flies in the face of standard models of structure formation. Moreover, it marks out our position as special, at the centre of the universe. I claim to restore the Copernican principle, because the Hubble bubble, like cosmic acceleration, is a purely apparent effect resulting from clock rate variance between galaxies and local voids on scales of 30-50 Mpc. Observers in galaxies elsewhere will also detect Hubble bubbles centred on their own locations.
Before Copernicus, people observed the sun going around the earth and deduced that we lived at the centre of the universe, because they did not understand that we happened to be living on an average planet, and average planets rotate. The fact that we live in a Hubble bubble is less well known, but by now an indisputable observation. I conclude that, once again, the interpretation that this marks our location as special, at the centre of the universe, is wrong: a modern day misinterpretation of observations. We have not woken up to the fact that we actually live in an average galaxy, and galaxies, being bound systems which broke away from the expansion of the universe over ten billion years ago, can have clocks which by today run a lot slower than clocks in freely expanding space. A variance of 38% in clock rates accumulated by the present epoch is very counter-intuitive; but the fact that the Earth could rotate once in a day without us falling off was also counter-intuitive once upon a time. Both of these counter-intuitive things relate to the understanding of one physical principle - inertia; in the case of the universe, inertia in the context of dynamically evolving, non-static gravitational energy gradients. The dynamical nature of energy in general relativity is just not yet so well understood.
Question 5: Other people have looked at this and say the effects of inhomogeneities are too small to give cosmic acceleration; so why are they wrong?
Answer: Critics who have argued against inhomogeneous backreaction have failed to recognise that there can be systematic differences between observers in bound systems and ideal comoving observers in freely expanding space. Since proponents of inhomogeneous backreaction have also failed to realise this, I claim the debate has proceeded on the wrong grounds.
I agree with the proponents of backreaction that it is real and must be accounted for, as the background universe does not evolve according to the Friedmann equation. Kolb, Matarrese and Riotto [New J. Phys. 8 (2006) 322] demonstrated, using the standard approach of perturbation theory on an initially smooth background with the perturbations one expects from inflation, that an instability occurs in the perturbative expansion involving sub-Hubble modes. Because they use perturbation theory, their result has been controversial. If you demonstrate an effect at second order, then what about third order? Might the third order terms not undo the second order ones?
All one can safely say is that there is an instability which is changing the background, so the issue cannot be resolved in perturbation theory, which relies on a stable background. Such an instability means the Friedmann equation simply gives the wrong cosmic evolution, and the wrong background. Even if individual slices of the actual universe look on average homogeneous and isotropic, with the Friedmann equation we stitch them together wrongly. Although the Friedmann equation has been demonstrated to be flawed, this point is ignored by many cosmologists, undoubtedly because it undermines most work still being done in cosmology.
Kolb et al have clearly demonstrated that there is a problem, but their techniques give no way of constructing a viable cosmology - one needs a different background. This is why I have looked to nonlinear averaging schemes, which have been shown to give the average evolution of general classes of inhomogeneous models. In the scheme of Thomas Buchert, these result in Friedmann-like equations with backreaction corrections. Solving these equations for a distribution of matter consistent with observations will give a more realistic cosmic evolution. I claim to have done this for a realistic first approximation in Phys. Rev. Letters 99 (2007) 251101.
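For reference, Buchert's averaged equations for irrotational dust take the following form (the standard published scheme, in units with c=1; angle brackets denote the volume average over a spatial domain D with volume scale factor a_D):

    3\left(\frac{\dot{a}_D}{a_D}\right)^2 = 8\pi G\,\langle\rho\rangle_D
        - \frac{1}{2}\langle\mathcal{R}\rangle_D - \frac{1}{2}\mathcal{Q}_D,
    \qquad
    3\,\frac{\ddot{a}_D}{a_D} = -4\pi G\,\langle\rho\rangle_D + \mathcal{Q}_D,

    \mathcal{Q}_D = \frac{2}{3}\left(\langle\theta^2\rangle_D
        - \langle\theta\rangle_D^2\right) - 2\langle\sigma^2\rangle_D

Here theta is the expansion scalar and sigma the shear; the kinematical backreaction Q_D is built from the variance of the local expansion rate, and vanishes for an exactly homogeneous model.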
Where critics of inhomogeneous backreaction in schemes like Buchert's do have a point is on the question of the size of the backreaction term. Because this term looks like a cosmological constant in one equation - but not in another - some naive arguments were put forward that it will act like dark energy and give apparent cosmic acceleration. The critics, such as Ishibashi and Wald, argue that for any realistic distribution of matter this term is too small to register as cosmic acceleration. I agree with this criticism. In a recent eprint, arXiv:0801.2692, Räsänen, a backreaction proponent, has also agreed that for realistic matter distributions and realistic assumptions about averaging volumes one cannot get cosmic acceleration. Thus even though backreaction is real, its effects are indeed too small to register as cosmic acceleration at a volume-average comoving position. BUT what is small for one observer can be large for another observer!
The whole point - which was missed by those on both sides of the debate before I first made my case clearly in "Cosmic clocks, cosmic variance and cosmic averages" - is that we are observers in bound systems, as are all the galaxies we observe containing the supernovae to which we measure luminosity distances. Observers in galaxies view the universe from a mass-biased perspective, in regions where space is not expanding.
You cannot make a physical argument based on differential equations unless you consider the physical interpretation of the symbols in the equations. Einstein's theory has a clear interpretation that rulers and clocks are to be related to invariants of the local metric - and we are not dealing with the local metric in these averaged equations, except maybe for one particular set of observers who have to be defined.
I claim a resolution to the controversy. The critics say that inhomogeneities are too small to register as cosmic acceleration - and indeed this is true from the point of view of an ideal observer at an average position by volume, which is in freely expanding space. But such observers are not us! The critics do not go as far as thinking about observers - and that's the whole problem. General relativity is a theory about comparing rulers and clocks which differ on large scales - and as long as we ignore the question of who the observer is, which is the way the standard cosmology and the debate based on the standard cosmology is phrased, then we miss the point.
The critics are perfectly correct for the volume-average comoving observer, I claim; but they have missed the point, because we are not the volume-average observer. One average set of rulers and clocks will not be calibrated exactly as another's when there are significant gradients in spatial curvature and gravitational energy. It is precisely the accounting for differences between the local conditions of actual observers and those of the ideal observers in expanding space for whom our cosmological models are traditionally written which makes the timescape model observationally viable.
There is no actual cosmic acceleration. It is just a naive deduction based on trying to fit a model of the universe which assumes that the curvature of space is the same everywhere, and that the clocks of all ideal "isotropic" observers are synchronised, when the actual conditions are more complicated. An observer in freely expanding space who makes the naive assumption that the curvature of space is everywhere negative and that everyone's clock is synchronised to hers will correctly deduce no cosmic acceleration. Observers like us in galaxies, who have slower clocks and live in a spatially flat environment, infer cosmic "acceleration" by naively assuming that our local measurements of these quantities are universal. "Acceleration" is really just the result of trying to fit an oversimplified cosmological model when there is significant variance in spatial curvature and clock rates over scales of 30-50 Mpc.
Critics are correct that as seen from the perspective of a volume-average observer, the Buchert corrections to the Friedmann equations are instantaneously small. However, over the 11 billion years plus that our galaxy has existed as a bound system decoupled from the expansion of the universe, what is important is not the instantaneous magnitude of a term in a differential equation, but the cumulative integrated variance in the rate of a clock between our bound system and that of the ideal volume average observer. One has to begin with the question: in a dynamical expanding space with large non-static gradients in density and curvature, how do you keep clocks synchronised? That's what I set out to do.
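The cumulative point can be conveyed by a deliberately crude toy (the growth law for the clock-rate ratio below is invented purely for illustration; the real calculation is in the papers): a fractional clock-rate difference that is tiny at early times but grows as voids open up still integrates to a large accumulated offset over the age of the universe.

    # Toy sketch: integrate a slowly growing clock-rate difference over
    # cosmic history. The growth law for gamma is invented for illustration;
    # only the qualitative point (small rates accumulate) is being made.
    t_final = 14.0e9      # illustrative cosmic age, years
    n_steps = 100000
    dt = t_final / n_steps

    tau_wall = 0.0        # time accumulated by a clock in a bound system
    for i in range(n_steps):
        t = (i + 0.5) * dt
        gamma = 1.0 + 0.38 * (t / t_final) ** 2   # toy lapse, ~1.38 today
        tau_wall += dt / gamma                     # wall clock runs slower

    print(tau_wall / t_final)   # ~0.9: noticeably less than 1 by today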
"Dark energy" is a misidentification of gradients in gravitational energy, which grow as voids open up. Apparent acceleration correlates directly to the growth of voids - for us it appears to start when the void volume fraction reaches 59% at a redshift of 0.9; this explains a cosmic coincidence that has no explanation in the standard cosmology. I agree with Ishibashi and Wald that "it is the burden of an alternative model to account for the observed properties of our universe". As I demonstrated with two of my former students in Astrophysical Journal 672 (2008) L91, we can already quantitatively account for a number of observed properties of the universe including some observations which are not accounted for in the standard model. Of course, doing every test takes a lot of work; and work on more tests is in progress.
Question 6: Are you suggesting that dark energy is all due to differences in binding energy?
Answer: No; if you look carefully at my papers you will see I try hard to dispel this notion. I realise there is a problem of jargon when I talk about quasilocal gravitational energy. We are used to thinking about quasilocal energy only when we have asymptotic flatness, defining an energy scale with respect to an ideal spatial infinity in those situations.
My essential point is that because the universe is expanding, spatial infinity has to be replaced by something else: finite infinity, as proposed qualitatively by George Ellis in 1984. In the quasilocal approach this should define the reference scale that replaces spatial infinity. It is just that as well as defining the reference scale for binding energy, it also defines the reference scale for "unbinding energy" for those regions that lie beyond finite infinity. From elementary considerations (such as Bondi's original discussion in 1947), in an expanding universe there is kinetic gravitational energy associated with expansion and positive gravitational energy associated with negative spatial curvature, neither of which applies to virialised bound systems within almost asymptotically flat regions (i.e., us, and the local vicinity of all the stuff we observe "out there"). These are still aspects of gravitational energy which cannot be localised, on account of the equivalence principle.
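Bondi's elementary point can be seen already in the Newtonian derivation of the Friedmann equation (standard textbook material, not specific to my model): for a comoving sphere of radius r = a(t) x enclosing mass M, conservation of energy per unit mass gives

    \frac{1}{2}\dot{r}^2 - \frac{GM}{r} = E
    \quad\Longrightarrow\quad
    \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2},
    \qquad E = -\frac{1}{2}\,k c^2 x^2

so negative spatial curvature (k < 0) corresponds to positive total energy of expansion - "unbinding energy" for the freely expanding regions - just as E < 0 is binding energy for a virialised system.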
The fact that binding energy is small does not mean that unbinding energy is also small. I claim that in writing down the FLRW model and saying this is the time on our clocks and the length of our ruler, we have been making a naive assumption which has no basis in the principles of general relativity, nor in observations. Because we have done this implicitly ever since Einstein wrote down the first cosmological metric 90 years ago, I know it's hard to get your head around at first. It is clear from Einstein's paper with Straus on the Swiss cheese model that he worried about the problem a lot. I believe Einstein and Straus gave the correct answer to the wrong question, and, as I state in section 10, should have asked instead: how does expanding space affect local determinations of cosmological parameters within expanding regions in a way which would distinguish them from local determinations of cosmological parameters made within bound systems? If the cosmological constant is Einstein's biggest mistake - on which point I now agree with him - then the way in which he phrased the starting question of the Swiss cheese model may well be his "greatest missed opportunity".
Question 7: Did the dressed parameters for your model get Omega_matter from ~0.23-0.27 to ~1, thus achieving flatness without Dark Energy?
Answer: No. In this model one cannot say anything about "flatness" without specifying the scale you are talking about, due to its intrinsic inhomogeneity. There are three different choices of Omega parameters discussed in the paper, and the dressed ones are not related to flatness at the present epoch. The three choices are
Let me stress that the positions of the angular peaks in the CMB that we observe do not force any dressed Omega_total to be 1, because they are not measured at the volume average. This is a point I struggled hard for a long time to understand; the standard conclusion is based on there being uniform Gaussian curvature everywhere. By relating the parameters of the volume average geometry to our own local "wall geometry", via (40), it turns out - as I show in sec 7.2 - that the angular anisotropy scale in the CMB is a measure of local spatial curvature (that of the 24% of the volume in walls/filaments), not of average spatial curvature. One simply has to get out of the FLRW mindset, and realise that the geometry of the Universe is much deeper and richer than the simple geometries that graced the pages of Nature when Boomerang measured the angular position of the first peak in 2000.
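The standard relation underlying this discussion is that the angular scale of the acoustic peaks measures the ratio of the sound horizon at decoupling to the angular diameter distance of the last scattering surface:

    \theta_* \simeq \frac{r_s(z_{\rm dec})}{d_A(z_{\rm dec})}

It is the distance calibration d_A, a path integral along the line of sight, that depends on the spatial curvature traversed; if curvature is not uniform, the peak positions need not constrain a single global Omega_total.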
Question 8: Your residual Hubble plot curves (Fig. 3, NJP 9 (2007) 377; Fig. 1, ApJ 672 (2008) L91) look significantly offset from LambdaCDM; how did you get just as good a Chi^2? (Because of the very large scatter in the SNe data...?)
Answer: The SneIa data are in fact very well fit by an empty coasting Milne universe, which is a late time attractor here. In the first work on this two years ago, in gr-qc/0503099, I made a rough approximation which displayed the clock effect only, with the data analysed in astro-ph/0504192. As discussed there, that approximation too is almost Milne, but with residuals which are always negative. In fact, if one uses the Gold06 data and the no-apparent-acceleration approximation of gr-qc/0503099, the chi square is 0.96 per degree of freedom, which is still a good fit. So you might say it is due to the large scatter in the data. The data have not been shown in Fig. 3, since the uncertainties make a big mess, much like that shown in Fig. 8 or Fig. 9 of Tonry et al, ApJ 594, 1 [astro-ph/0305008].
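For readers unfamiliar with the statistic quoted, "chi square per degree of freedom" is computed as in the following generic sketch (the arrays are placeholders for illustration, not the actual Gold06 data):

    # Generic reduced chi-square for a supernova distance-modulus fit.
    # mu_obs, sigma, mu_model are placeholders for real data and a real model.
    import numpy as np

    mu_obs = np.array([35.1, 38.3, 40.9, 42.7])     # observed distance moduli
    sigma = np.array([0.20, 0.18, 0.25, 0.22])      # their uncertainties
    mu_model = np.array([35.0, 38.4, 41.0, 42.5])   # model predictions

    n_params = 2                                     # fitted model parameters
    chi2 = np.sum(((mu_obs - mu_model) / sigma) ** 2)
    dof = len(mu_obs) - n_params                     # degrees of freedom
    print(chi2 / dof)    # ~1 indicates a fit consistent with the errors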
In ApJ 672 (2008) L91, we showed by Bayesian model comparison that the Timescape Model is actually a marginally better fit to Riess06, though not by enough to be statistically significant on the current data. This should be improved by future analyses.
Another way of looking at it is that in the present model there is apparent acceleration, but because it is smaller, the positive residuals in Fig. 3 of "Cosmic clocks.." will always be less than in the LambdaCDM model. In the data the evidence for positive residuals is marginal, being statistically significant only in the range z=0.3 to 0.6 where most of the data is.
One nice point is that this model is sufficiently different from the LambdaCDM model that with enough data one might be able to distinguish the two. I hope that the astronomers will sit up and take notice, because this really gives a competing model to test against: a real raison d'etre for projects like ESSENCE, SNLS and SNAP. It is just parameterised in a completely different way, and there is a bit of a learning curve to go through: we are not looking at a parameter, w, relating to the internal energy of a weird fluid. The intrinsic underlying inhomogeneities also demand some care about the assumptions made in data reduction. As I am a theoretical physicist, not an astronomer, I have not thought about all of that, but I know that at some level it must be important. I have not used SNLS in Fig. 2 because it does not go to high z, and also I am not sure that I understand to what extent the assumptions in their data reduction are independent of FLRW evolution.
Question 9: If the effect you talk about mostly happens just when all of the light rays enter our gravitationally bound local volume (is that true?), then how do you achieve an acceleration-like turnover in the residual Hubble curve, which actually mimics a transition happening at mid-z?
Answer: It is wrong to think that the effect I am talking about happens mostly in our gravitationally bound local volume. It also happens between every other void and filament/wall region along the way. The luminosity and angular diameter distances are path integrals over both wall and void regions. Your actual location, whether in a galaxy or a void, influences your local clocks and rulers, and thus how you interpret that path integral. The difference between these two locations was negligible before structure formed, but becomes significant at late epochs. At the present epoch about 24% of the volume of the universe is in walls and filaments, and 76% in voids. Earlier on it was different, as Fig. 4 of "Cosmic Clocks..." shows. As I discuss in the text, the first cosmological milestone is when the void fraction reaches 50%, which is at about z=1.5. The positive residuals depend on the second derivative of gamma, on account of eq. (59). I am sure I can quantify that more precisely, but have not yet done so, other than in numerical examples.
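The flavour of the two-phase picture can be conveyed by a deliberately crude sketch (my own toy with invented numbers, not the equations of the paper): the volume-average expansion rate is a fraction-weighted combination of faster-expanding voids and slower-expanding walls, with the void fraction growing in time.

    # Toy sketch of a two-phase average: voids expand faster than walls,
    # and the void volume fraction f_v grows with time, so the average
    # expansion rate is epoch-dependent. All numbers are illustrative.
    def average_H(f_v, H_void, H_wall):
        """Volume-weighted average expansion rate of a wall/void mixture."""
        return f_v * H_void + (1.0 - f_v) * H_wall

    H_void, H_wall = 75.0, 60.0        # illustrative, km/s/Mpc
    for f_v in (0.10, 0.50, 0.76):     # early epoch, z ~ 1.5, present epoch
        print(f_v, average_H(f_v, H_void, H_wall))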
The only thing that can be said about our local gravitationally bound volume is that there should be a large gradient between us and the centre of the nearest dominant void, of diameter ~48 Mpc. Those large gradients below the scale of homogeneity give the Hubble bubble feature. The ultimate test here is to measure tens of thousands of supernovae on scales less than 200 Mpc, correlate them with the filamentary structure, and test the variance that follows from eq. (42). I suspect this may take some decades of data collection.