The reason for this complexity is the way that space and time are curved and stretched in Einstein's theory. There is no obvious way to decide how you should split space and time in general; different choices lead to different results. Additional assumptions are needed to simplify Einstein's equations.
In the years 1917-1927 the earliest cosmologists, Albert Einstein, Alexander
Friedmann and Georges Lemaître, made a big simplifying assumption to make
it possible to solve Einstein's equations for the universe. They assumed
that all the matter in the universe was a smooth featureless fluid, with a
uniform expansion, and a split of space and time such that all ideal
observers have synchronised clocks and the curvature of the space is the
same everywhere. Space expands everywhere, so there is only one class of
observers - those in freely expanding space. For this to correspond
to reality, each galaxy must be considered a particle of "dust" whose
substructure is irrelevant, and each galaxy must be moving uniformly
away from every other galaxy. That was a reasonable first approximation;
it effectively simplifies the six dynamically independent Einstein
equations down to one: the Friedmann equation.
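In standard textbook notation (the essay does not write the equation out; the form below is the usual one), the Friedmann equation for the scale factor a(t) of a homogeneous, isotropic universe reads:

```latex
% Friedmann equation: expansion rate in terms of matter density \rho,
% spatial curvature k and cosmological constant \Lambda
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}
```

Friedmann's 1922 solutions set Lambda to zero; Lemaître's 1927 models keep all three terms.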
DARK ENERGY
In 1917 Einstein first introduced "dark energy" in the form of a
cosmological constant. Since gravity is attractive it makes the universe
want to collapse. By writing down a model in which the mysterious
cosmological constant exactly counter-balanced the attractive force
of gravity given a certain density of "dust", he was able to obtain
a static universe. In 1917 Einstein did not imagine that the
universe could be changing. But the Einstein static universe model
is unstable; if you give that cosmological constant any value other
than a particular "fine-tuned" one the universe will still want to change.
Generally, because of the "repulsive force" of the cosmological term,
a universe containing such a term will tend to expand at an ever faster
rate, i.e. to accelerate.
"Dark energy" is different from "dark matter" because it cannot form lumps of stuff. We believe there is dark matter which is not the ordinary sort of matter we see on Earth, because it interacts with ordinary matter by gravity and no other forces. But such stuff will still form lumps on some scale, even if the "lumps" are diffusely spread on the scale of galaxies. "Dark energy" won't do that. Einstein's simple version of dark energy, the cosmological constant, is not the only possibility one could have - but any "dark energy" can be thought of as some smooth "stuff" smeared out in the vacuum of space, which does not clump.
Friedmann solved his simplified Einstein equations without a cosmological
term in 1922 to give the general expanding and contracting universe
models, commonly called the open, closed and spatially flat models.
In 1927 Lemaître wrote down the general models including dust, radiation and
a cosmological constant, whose value is not finely tuned like Einstein's
1917 model. This 80-year-old model of Lemaître is still the one used in
the standard Lambda Cold Dark Matter model today, even though
the distribution of matter we observe in the present day universe is very
different from the one assumed in the model.
AVERAGE HOMOGENEITY
When the universe was 300,000 years old the assumptions of the standard
cosmology were certainly correct, given the evidence of the cosmic
microwave background (CMB) radiation that has travelled to us since then.
The uniformity of the mean CMB temperature, 2.73 K from any direction on
the sky - technically its "isotropy" - apart from very tiny fluctuations,
is evidence of that. Structure had not formed. The universe was filled
with a uniform fluid of dust. But today galaxies are not uniformly distributed.
Clusters of galaxies are spread in bubble walls around huge voids, and some
in tiny filaments that thread the voids. It is clearly inhomogeneous.
Why do we still assume the 80-90 year old models are OK despite the inhomogeneity we see? One reason is that we do not know how to solve for the general inhomogeneous case except by assuming simplifying symmetries which are unlikely - such as the assumption (in the Lemaître-Tolman-Bondi model) that we are at the centre of the universe, surrounded by spherical shells of varying density. That would violate the Copernican principle by putting us at a special place, rather than an average place, in the universe.
But there is also a more compelling argument for the standard interpretation, which I learnt when I was first taught cosmology by Martin Rees in Cambridge in 1984, and which I have now also taught students for 15-odd years. It goes as follows. Apart from our small local motion (which we can account for) we see an isotropic CMB. By the Copernican principle, we are average observers, so other observers should also see an isotropic CMB. Since the universe is homogeneous on average, it therefore follows (by a mathematical theorem) that when looked at on the average scales on which it is homogeneous it is well approximated by a homogeneous isotropic geometry. Now that argument is true - but from it we go on to make flawed inferences which do not follow logically.
In particular, there are two consequences of the universe at present being only homogeneous on average, and not absolutely homogeneous:
The problem with the 80-90 year old standard models is that space expands everywhere within them. They only admit one set of observers: those in freely expanding space. But actual observers - us and the galaxies we see, whether in a bubble wall or a filament - are in bound systems. So in general we have to fit our observations to those of volume-average observers. Even if Buchert's "back-reaction" corrections are small, when written from the viewpoint of the volume-average observer in freely expanding space for whom the standard cosmology is written, we still have to fit the geometry of that notional observer to the geometry here if we wish to use something close to the standard model. This problem is too difficult to solve in general, on account of the complexity of Einstein's equations.
This general fundamental problem is called the "fitting problem", and its ramifications were first really well discussed by George Ellis in 1984, though of course the problem had already worried Einstein, who knew it was crucially important for the interpretation of observations. Einstein and Straus came up with the Swiss cheese model in 1945, which is a crude attempt at a solution to the fitting problem. Swiss cheese guesses the answer by cutting and pasting two assumed answers together, and checking that you can consistently glue them together using Einstein's equations. But just because you can do that does not mean it has anything to do with the actual universe; it is only as good as your guess! In all Swiss cheese solutions I know about, the background evolves by the Friedmann equation. That means the background universe always has the same density outside the observable universe as inside it.
But for a universe in which the original tiny perturbations in density
that went on to form galaxies and voids are embedded inside other density
perturbations, like Russian dolls, such evolution is not the actual evolution.
So I claim this type of Swiss cheese answer cannot be relevant. Perturbations
embedded in perturbations is the actual "scale-invariant" structure predicted
by primordial "inflation", and the structure consistent with the spectrum of
CMB fluctuations we observe. These concepts did not exist in Einstein's
lifetime; if they did he might have thought harder about the problem. My work
represents a solution to the "fitting problem", consistent with primordial
inflation and the observed CMB, but which is very different to the usual
Swiss cheese solutions. In Swiss cheese the clock difference between
a galaxy observer and a volume-average observer is negligible. In my
solution it is considerable and must be accounted for.
FINITE INFINITY
As far as observers in bound systems are concerned, a crucial question in
the fitting problem is: how do we normalise clocks relative to ideal
observers in expanding space? In bound systems we use solutions to
Einstein's equations - the Schwarzschild and Kerr geometries - in which
space is not expanding at infinity, very far from some localised mass.
Our own clocks are related observationally to solutions for orbits in
the Schwarzschild geometry defined by the greatest local mass concentration,
the sun.
Because there is no expansion of space infinitely far from the mass concentrations in these ideal solutions - a place we call "spatial infinity" - there is a well-defined sense of time there, and we can use clocks there as a reference point for time. We know that mass warps space and slows down time. We measure such gravitational time dilation every day on the Earth - clocks run a tiny bit slower on the Earth as compared to GPS satellites. If you go close to the sun the difference in clock rates is a little larger but still small.
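A back-of-envelope sketch of that everyday GPS effect, using the standard weak-field time-dilation formula; all numerical values below are standard published figures used for illustration, not numbers from this essay:

```python
# Weak-field gravitational time dilation between the Earth's surface and
# GPS orbit, using dtau/dt ~ 1 - GM/(r c^2). All numbers are standard
# published values, used here for illustration (none appear in the essay).
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8            # speed of light, m/s
r_surface = 6.371e6    # mean Earth radius, m
r_gps = 2.6571e7       # GPS orbital radius (~20,200 km altitude), m

# Fraction by which a surface clock runs slower than a GPS-altitude clock:
frac = GM / c**2 * (1.0 / r_surface - 1.0 / r_gps)
microsec_per_day = frac * 86400 * 1e6
print(f"fractional slowdown: {frac:.2e}")        # ~5.3e-10
print(f"per day: {microsec_per_day:.1f} microseconds")
```

The full GPS correction also includes a special-relativistic term of opposite sign from the satellite's orbital speed, not computed here; the point is only that the everyday effect is a tiny fraction of a second per day.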
If you go close to a black hole the difference in clock rates becomes arbitrarily large. If you watch someone fall into a black hole sending out radio pulses at regular intervals, the time between the pulses becomes longer and longer. In fact they would appear to be hovering at the edge of the black hole for all eternity, fading away, the wavelength of their radio transmissions becoming stretched longer and longer.
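A minimal sketch of how those pulse intervals stretch (my illustration, not from the essay; for simplicity it treats the emitter as momentarily static at each radius):

```python
import math

# Gravitational redshift factor 1/sqrt(1 - rs/r) for a static emitter at
# radius r outside a Schwarzschild black hole with horizon radius rs.
# The interval between received pulses is stretched by this factor,
# growing without bound as r approaches rs.
rs = 1.0  # Schwarzschild radius, in arbitrary units
for r in [2.0, 1.1, 1.01, 1.001]:
    z_factor = 1.0 / math.sqrt(1.0 - rs / r)
    print(f"r = {r:5.3f} rs -> pulse interval stretched {z_factor:7.2f}x")
```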
But this is all for compact objects within an ideal universe where space is not expanding at "spatial infinity". Since the actual universe is expanding, this ideal "spatial infinity" does not exist; it is just an approximation. In reality we have to ask "where is infinity?" if we want to ask how we normalise our clocks in the actual universe. In practice this means we have to replace ideal "spatial infinity" by a "finite infinity", as first suggested qualitatively by George Ellis when he discussed the "fitting problem" in 1984.
My solution of the fitting problem involves attempting to give a
physical definition of finite infinity.
SANDAGE-DE VAUCOULEURS PARADOX
My solution is based on a hypothesis that solves an observational puzzle.
In the standard way of thinking about cosmological averages, if you take a
box of the size of average homogeneity, then you should expect galaxies
to have large peculiar velocities if you average on scales much smaller
than the homogeneous box, which I mentioned before was of order 170 Mpc.
In particular, if you look at very small scales the
statistical scatter of peculiar velocities should be so great that
no linear Hubble law between redshift and distance can be extracted. Yet
Hubble discovered his law on nearby scales of 20 Mpc,
10% of the scale of homogeneity. By standard thinking this does not
make sense.
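To see the size of the puzzle numerically (my own toy illustration; the essay quotes only the 20 Mpc and 170 Mpc scales, so the H0 and velocity-dispersion values below are round assumptions of mine):

```python
# Compare the Hubble-flow velocity at 20 Mpc with a typical peculiar-
# velocity scatter expected below the homogeneity scale.
H0 = 70.0           # km/s/Mpc, a round consensus-style value (assumption)
sigma_pec = 600.0   # km/s, an often-quoted dispersion figure (assumption)

d = 20.0                  # Mpc, the scale on which Hubble found his law
v_hubble = H0 * d         # expected Hubble-flow velocity, km/s
ratio = sigma_pec / v_hubble
print(f"Hubble flow at {d:.0f} Mpc: {v_hubble:.0f} km/s")
print(f"scatter/signal: {ratio:.2f}")  # a large fraction of the signal
```

With scatter this large a fraction of the signal, a clean linear redshift-distance law on such scales is surprising under the standard picture.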
This paradox was raised in the 1970s by Alan Sandage in criticism of the hierarchical cosmology of Gerard de Vaucouleurs, who was one of the earliest astronomers to appreciate the nature of voids and large structures. Sandage and de Vaucouleurs are the two big names that have figured on opposite sides of a debate about the value of the "Hubble constant", H_0: the present-day value of the Hubble parameter which is a measure of the expansion rate of the universe. Sandage favoured lower values of H_0 and de Vaucouleurs higher values. de Vaucouleurs died in 1995; Sandage - once a student of Hubble - is still alive. Much of the debate, which is still not finished, concerns systematic issues about calibrating astrophysics of standard candles, but I believe there is also an intrinsic element related to the scale of averaging.
Interestingly, Sandage often selected measurements in "nearby" environments, such as to the Virgo cluster 25 Mpc away, which is the closest thing to measuring a distance within the bubble wall or filament in which our galaxy is located. Sandage got lower values; de Vaucouleurs, doing other observations, got higher values.
Curvature of space of a certain "volume-expanding" type has a gravitational energy cost associated with it. Space in the voids can be opening up faster by volume than in the wall regions, while the clocks there also go faster. This means that the underlying expansion of space in expanding regions, measured anywhere by local rulers and clocks, can be uniform even while the curvature of space and the clock rates both vary. This explains the Sandage-de Vaucouleurs paradox: there is a notion of homogeneity of the expansion of space deep inside a 170 Mpc box, on scales on which the distribution of matter is inhomogeneous. Roughly, once the voids open up, clocks go slower where the mass is located than in the empty parts where there is almost nothing.
In the bubble walls space is almost flat, but in the voids it is negatively curved. There is a gradient in clock rates associated with the gradient in curvature. It leads to a different split of space and time from the standard one. It means we have to be very careful in calibrating rulers and clocks between galaxies and voids in the solution to the fitting problem.
As far as the argument of Sandage and de Vaucouleurs is concerned: if we measure the Hubble constant in an ideal "bubble wall", where the average clock rate is close to ours, we will get a low value of the Hubble constant, 48 km/s/Mpc. If we measure it to the other side of a void of the dominant size, 48 Mpc across, we will get a higher value, 76 km/s/Mpc - because space appears to be expanding faster there by our clocks, which go slower than the clocks in the voids. Once we average on the scale of apparent homogeneity, so that our average includes bubble walls and voids in the same proportions as the observable universe, we converge to a "global average" Hubble constant between the two extremes, of 62 km/s/Mpc.
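The arithmetic of that averaging argument can be sketched using only the numbers quoted in the essay; the 50/50 volume mix below is an illustrative assumption of mine, not a figure from the text:

```python
# H ~ 48 km/s/Mpc measured within a bubble wall, H ~ 76 km/s/Mpc across
# a dominant void, mixed in a toy volume-weighted average.
H_wall, H_void = 48.0, 76.0  # km/s/Mpc, as quoted in the text
f_wall = 0.5                 # toy volume fraction of walls (assumption)
H_avg = f_wall * H_wall + (1.0 - f_wall) * H_void
print(f"global average H0: {H_avg:.0f} km/s/Mpc")  # 62 km/s/Mpc
```

An even split is only illustrative, but it reproduces the quoted global average of 62 km/s/Mpc.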
This value, which we find fits three independent cosmological tests, is also the controversial value claimed in 2006 by the Hubble Key team of Sandage et al, and differs from that of the other Hubble Key team of Freedman et al, and from the consensus that the Hubble constant is about 72 km/s/Mpc. Since most of space is in voids, if we take nearby measurements - other than very special ones within our bubble wall - we will get higher values when we average "locally" below the scale of homogeneity. Many first steps on the cosmological distance ladder are made on these scales. So, unsurprisingly, it affects our calibrations, and the consensus is for the higher value. This explains key elements of the 40-year debate about the Hubble constant. It also explains why there is small statistical scatter on any one particular averaging scale, even though the possible Hubble constants vary hugely by scale. The underlying "quasilocally measured" Hubble expansion is uniform!
A gradual drop-off in the value of the average Hubble constant, from a scale of 48 Mpc - corresponding to the dominant void size - up to the scale of homogeneity at 170 Mpc, is seen observationally. It is known as the "Hubble bubble". It is a problem for the standard cosmology, as it should not be there. For any cosmological model that assumes clock rates are the same everywhere, it represents a statistical fluke one cannot explain. We appear to be in the middle of a bubble expanding faster than the rest of the observable universe; this violates the Copernican principle.
For my model it is a feature. It is an apparent effect only, like apparent
acceleration. If you go to a distant point in an average galaxy elsewhere
in some other average box, you will also interpret things in terms of
a "Hubble bubble" centred on your galaxy, because of the variance of the
clocks between the galaxies and the voids within your average box. Because
there is literally almost nothing in voids, we do not see clocks there.
We only see clocks in other galaxies, bound systems within finite infinity,
keeping time more or less synchronised to our own. That's why this
notion is not obvious; literally you have to think "outside of the box"
in terms of what you see. A galaxy in a wispy filament in a void
is still a bound system. Space and time are relative; it is the reference
point that is crucial. Finite infinity provides one reference point,
and the volume-average observer in freely expanding space - the
reference point of the standard cosmology - provides the other.
WHY MY EXPLANATION IS COUNTER-INTUITIVE FOR MANY PHYSICISTS
Physicists are used to Newtonian mechanics, where there is only one time
and space is flat everywhere, and even in general relativity we
are used to a Newtonian limit when the density of matter is small, and
fields are weak. We are used to large time dilation effects only at very
high relative speeds in particle accelerators, or for extreme density
contrasts where you have very compact objects such as neutron stars
and black holes. The time-dilation I am claiming between us and the
volume-average observer of 38% at the present day is small compared to the
effectively infinite differences that are possible in those two other
circumstances we are most familiar with. But many people think 38% is
intuitively huge and "impossible" when one is dealing with "weak fields".
Cosmologists who are used to calculating the usual gravitational time dilation effects for Newtonian potentials about the standard FLRW background, and who assume that is all that is going on, will assume this has to be wrong. However, I claim it is the standard FLRW background which is wrong, and the type of gravitational time dilation involved is not the sort we are used to, but a new effect.
The problem with deriving a Newtonian limit, in the case of weak fields, is that there is no notion of expanding space in Newtonian gravity. In Newtonian gravity there is a well-defined "spatial infinity". The question of "what is beyond finite infinity" never arises. The time dilation I am talking about is not the time dilation due to an instantaneous sharp density contrast such as that of a black hole in a region where the static approximation at infinity is good. It is an issue about energy gradients in the fabric of a dynamically expanding space, about cumulative clock differences that build up over billions of years. It is a "new" effect.
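A back-of-envelope illustration of how such a cumulative difference adds up (my own sketch; the 38% figure is from this essay, but the age and the constant-rate assumption are illustrative simplifications of mine):

```python
# If volume-average clocks ran 38% faster than ours and that rate
# difference had held for ~14 billion years of our time, the accumulated
# difference would be billions of years. In reality the difference
# builds up gradually as structure forms, so this overstates it.
wall_age_gyr = 14.0   # assumed age in wall-observer (our) time, Gyr
rate_ratio = 1.38     # volume-average clocks ~38% faster (essay's figure)
volume_avg_age_gyr = wall_age_gyr * rate_ratio
print(f"{volume_avg_age_gyr:.1f} Gyr of volume-average time")  # 19.3 Gyr
```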
In new work (arXiv:0809.1183 September 2008) I have now calculated the relative deceleration of the background of wall regions and voids, by appeal to the Equivalence Principle, to show that it is indeed a small weak field effect. Over most of the life of the universe the relative deceleration is about a few x 10^(-10) m/s², decreasing to 6.7 (+2.4/-3.4) x 10^(-11) m/s² at the present epoch. This is indeed small and for recent epochs the value of the relative deceleration scale in fact coincides with the empirical acceleration scale of the modified Newtonian dynamics (MOND) scenario normalised to my value of the Hubble constant. [At this stage, I am treating this as a coincidence. In principle, MOND may be related but my work does not yet apply to galactic scales directly. Furthermore, I still predict that the dominant form of matter in the universe is non-baryonic dark matter, even if there is somewhat less of it than in the LCDM model. Thus it is the phenomenology of MOND, rather than an alternative to dark matter that one might hope to explain, if the effects are at all related.]
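An order-of-magnitude check of that coincidence (my sketch: the deceleration value and H0 are quoted in this essay; the MOND scale a0 is a standard empirical figure, not from the essay):

```python
# Compare the quoted present-epoch relative deceleration with the
# natural cosmological acceleration scale c*H0 (for H0 = 62 km/s/Mpc)
# and the empirical MOND acceleration scale.
c = 2.998e8             # speed of light, m/s
Mpc = 3.0857e22         # metres per megaparsec
H0 = 62.0 * 1000 / Mpc  # essay's Hubble constant, converted to 1/s
cH0 = c * H0            # ~6e-10 m/s^2, natural cosmological scale
a_decel = 6.7e-11       # m/s^2, quoted present-epoch relative deceleration
a_mond = 1.2e-10        # m/s^2, standard empirical MOND scale (assumption)
print(f"c*H0 = {cH0:.1e} m/s^2, a_decel/a_mond = {a_decel / a_mond:.2f}")
```

All three accelerations sit within an order of magnitude of one another, which is the coincidence the essay flags.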
Since most physicists are not used to thinking about cumulative effects, the debate from critics of back-reaction proposals has centred on looking at the magnitude of a particular term in a differential equation. Such arguments completely miss the point I claim. I agree with the critics that the instantaneous back-reaction term is small (of the order a few percent) from the point of view of the observer for whom these equations are defined. But the observer for whom these equations are defined is not us! A term which is instantaneously small can lead to cumulative differences, relative to other observers (those in bound systems), when integrated over the lifetime of the universe. That is precisely what I argue is going on. A volume-average observer in freely expanding space detects no apparent cosmic acceleration; but we do.
This involves an arcane area of general relativity called "quasilocal gravitational energy", which is there because of the equivalence principle. Curvature of space has energy associated with it. Colleagues will agree that the density contrasts on the scale of the dominant void size are "of order one", and that this is therefore the scale over which general relativity is "non-linear" and effects of the type I describe are logically to be expected. However, because we have always applied Newtonian intuition to the problem, no one ever expected such large effects. I am talking about cumulative effects of distributed, dynamical, non-static mass gradients on large scales. I claim people have not thought that one through. As long as critics will only debate the question of small perturbations about a smooth fluid (an FLRW model) - which I claim is the wrong background for the present universe - rather than accepting the evidence of their telescopes that the universe is not a smooth fluid, they completely miss the point.
This is what makes my proposal "radically" conservative. I am claiming the dark energy puzzle can be solved in general relativity; but it involves the arcane difficult areas of general relativity that troubled Einstein and many a good mathematical relativist since. The amazing thing I claim is that a huge area of general relativity is still largely unexplored, and this area is the crucial one for understanding "dark energy" in cosmology. Einstein never quite finished his theory. He was right that the cosmological constant was his biggest mistake. And the foundational questions which troubled him are the crucial ones for understanding observations in our universe.