
Cosmology with the Square Kilometre Array

A large fraction of my time over the last 18 months has been spent working out parts of the cosmology science case for the Square Kilometre Array, a gigantic new radio telescope that will be built (mostly) across South Africa and Australia over the coming decade. It’s been in the works since the early ’90s and – after surviving the compulsory planning, political wrangling, and cost-cutting phases that all Big Science projects are subjected to – will soon be moving to the stage where metal is actually put into the ground. (Well, soon-ish – the first phase of construction is due for completion in 2023.)

Infographic: SKA will have 8x the sensitivity of LOFAR.

A detailed science case for the SKA was developed around a decade ago, but of course a lot has changed since then. There was a conference in Sicily around this time last year where preliminary updates on all sorts of scientific possibilities were presented, which were then fleshed out into more detailed chapters for the conference proceedings. While a lot of the chapters were put on arXiv in January, it’s good to see that all of them have now been published (online, for free). This is, effectively, the new SKA science book, and it’s interesting to see how it’s grown since its first incarnation.

My contribution has mostly been the stuff on using surveys of neutral hydrogen (HI) to constrain cosmological parameters. I think it’s fair to say that most cosmologists haven’t paid too much attention to the SKA in recent years, apart from those working on the Epoch of Reionisation. This is presumably because it all seemed a bit futuristic; the headline “billion galaxy” spectroscopic redshift survey – one of the original motivations for the SKA – requires Phase 2 of the array, which isn’t due to enter operation until closer to 2030. Other (smaller) large-scale structure experiments will return interesting data long before this.

Artist's impression of the SKA1-MID dish array.

We’ve recently realised that we can do a lot of competitive cosmology with Phase 1 though, using a couple of different survey methods. One option is to perform a continuum survey [pdf], which can be used to detect extremely large numbers of galaxies, albeit without the ability to measure their redshifts. HI spectroscopic galaxy surveys rely on detecting the redshifted 21cm line in the frequency spectrum of a galaxy, which requires narrow frequency channels (and thus high sensitivity/long integration times). This is time consuming, and Phase 1 of the SKA simply isn’t sensitive enough to detect a large enough number of galaxies in this way in a reasonable amount of time.

Radio galaxy spectra also exhibit a broad, relatively smooth continuum, however, which can be integrated over a wide frequency range, thus enabling the array to see many more (and fainter) galaxies for a given survey time. Redshift information can’t be extracted, as there are no features in the spectra whose shift can be measured, meaning that one essentially sees a 2D map of the galaxies, instead of the full 3D distribution. This loss of information is felt acutely for some purposes – precise constraints on the equation of state of dark energy, w(z), can’t be achieved, for example. But other questions – like whether the matter distribution violates statistical isotropy [pdf], or whether the initial conditions of the Universe were non-Gaussian – can be answered using this technique. The performance of SKA1 in these domains will be highly competitive.

Another option is to perform an intensity mapping survey. This gets around the sensitivity issue by detecting the integrated HI emission from many galaxies over a comparatively large patch of the sky. Redshift information is retained – the redshifted 21cm line is still the cause of the emission – but angular resolution is sacrificed, so that individual galaxies cannot be detected. The resulting maps are of the large-scale matter distribution as traced by the HI distribution. Since the large-scale information is what cosmologists are usually looking for (for example, the baryon acoustic scale, which is used to measure cosmological distances, is something like 10,000 times the size of an individual galaxy), the loss of small angular scales is not so severe, and so this technique can be used to precisely measure quantities like w(z). We explored the relative performance of intensity mapping surveys in a paper last year, and found that, while not quite as good as its spectroscopic galaxy survey contemporaries like Euclid, SKA1 will still be able to put strong (and useful!) constraints on dark energy and other cosmological parameters. This is contingent on solving a number of sticky problems to do with foreground contamination and instrumental effects, however.
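The factor of ~10,000 quoted above is easy to sanity-check. A quick back-of-the-envelope calculation, using rough representative values that I've assumed for illustration (not from any survey spec):

```python
# Representative scales (assumed values, for illustration only):
bao_scale_mpc = 150.0    # comoving BAO scale, roughly 150 Mpc
galaxy_size_mpc = 0.015  # typical galaxy diameter, roughly 15 kpc

ratio = bao_scale_mpc / galaxy_size_mpc
print(f"BAO scale / galaxy size ~ {ratio:,.0f}")  # → BAO scale / galaxy size ~ 10,000
```

So degrading the angular resolution by even a large factor still leaves the acoustic scale comfortably resolved.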

The comoving volumes and redshift ranges covered by various future surveys.

The thing I’m probably most excited about is the possibility of measuring the matter distribution on extremely large scales, though. This will let us study perturbation modes of order the cosmological horizon at relatively late times (redshifts below ~3), where a bunch of neat relativistic effects kick in. These can be used to test fundamental physics in exciting new ways – we can get new handles on inflation, dark energy, and the nature of gravity using them. With collaborators, I recently put out two papers on this topic – one more general forecast paper, where we look at the detectability of these effects with various survey techniques, and another where we tried to figure out how these effects would change if the theory of gravity was something other than General Relativity. To see these modes, you need an extremely large survey, over a wide redshift range and survey area – and this is just what the SKA will be able to provide, in Phase 1 as well as Phase 2. While it turns out that a photometric galaxy survey with LSST (also a prospect for ~2030) will give the best constraints on the parameters we considered, an intensity mapping survey with SKA1 isn’t far behind, and can happen much sooner.

Cool stuff, no?


Acceleration paper published

Hooray! The acceleration paper was published in Phys. Rev. D a couple of days ago.

I was quite pleased with how this one turned out – it’s a nice clarification, I think. Tim had the idea of using the blueshift in collapsing regions to mimic acceleration in the Hubble diagram, which is pretty cool in itself. It was also good to find a concrete example of the link between acceleration of the average and acceleration in the Hubble diagram that Syksy Rasanen has discussed in a couple of papers (see our discussion for references).

Of course, we’re not claiming that “dark energy is backreaction” or anything nearly as strong as that, but I think it does extend the backreaction debate a little. The papers by Ishibashi & Wald and by Green & Wald, which seem to show that inhomogeneities on small scales don’t affect the background evolution much, suggest that backreaction effects can’t have any bearing on dark energy. I suppose our paper responds to theirs by saying “yes, perhaps they can’t dynamically, but what about non-linear optical effects?”


Philip Bull, Timothy Clifton (2012). Local and non-local measures of acceleration in cosmology. Phys. Rev. D 85, 103512. DOI: 10.1103/PhysRevD.85.103512


Lambda is an unnatural tuning

Here’s an interesting comment from John Barrow, who’s currently speaking at the Arrow of Time mini-series here in Oxford. With many theories, if you need to add an extra parameter, you typically make the theory more complicated (and in a sense, less symmetrical). With an FLRW model, however, you find that your model is less symmetrical when you take away a certain parameter – the cosmological constant. FLRW with Lambda asymptotes to de Sitter space far into the future, a spacetime which is maximally symmetric. Removing Lambda removes this asymptotic behaviour, and so you have effectively made the theory less symmetric. Weird, huh?
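A quick sketch of why this is (standard FLRW notation; nothing beyond the Friedmann equation is needed):

```latex
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
    = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}
```

As $a \to \infty$, the matter term ($\rho \propto a^{-3}$) and the curvature term both die away, leaving $H \to \sqrt{\Lambda/3}$ and hence $a(t) \propto e^{\sqrt{\Lambda/3}\, t}$ – de Sitter space, which is maximally symmetric (it has the full 10 isometries in 4D). Set $\Lambda = 0$ and this maximally symmetric attractor disappears.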


Code release: LTB in Python, spherical collapse, and Buchert averaging

The release of our next paper is imminent (yay!), and so it’s time for another code release. I try to make all of my code, or at least a substantial fraction of it, publicly available. This enables other people to reproduce and check my work if they want to. It also allows them to build off my code and do cool new things, rather than having to spend months solving problems that, well, have already been solved. That’s the theory, anyway – I only know of a couple of people who’ve actually poked around in the code, or tried to use it for something. But hey, you’ve got to start somewhere. For posterity, I’ve posted the closest thing I have to release notes below.



Buchert deceleration parameter doesn’t care about the sign of the expansion rate

This week’s task: debugging some code that calculates the Buchert spatial average in LTB models. It’s a Python code, using my homebrew LTB background solver (also in Python). I’m using the results reported in a few papers to help debug my code, but I’ve run into problems with reproducing one model in particular (model 8 from Bolejko and Andersson 2008, an overdensity surrounded by vacuum). Hmmm.

I’ll spare you the gory details, but one potential problem was that I might have used the wrong sign for the transverse Hubble rate. The model, as specified in the paper, gives no clue as to the sign of the Hubble rates (i.e. whether the overdense region is in a collapsing or expanding phase), only specifying a density and spatial curvature profile. In the process of constructing the model, you need to take the square root of the LTB “Friedmann” equation, and of course there is a freedom in which sign of the root you take. Out of force of habit with LTB models, I was choosing the positive sign. So would choosing the negative sign resolve the discrepancy I was seeing between my code and the Bolejko and Andersson paper?
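To make the sign freedom concrete, here’s a minimal sketch (the function name and toy values are mine, assuming the standard \Lambda = 0 LTB equation \dot{R}^2 = 2M(r)/R + 2E(r) in geometric units):

```python
import numpy as np

def transverse_hubble(R, M, E, sign=+1):
    """Transverse Hubble rate H_perp = Rdot/R from the LTB 'Friedmann'
    equation Rdot^2 = 2M(r)/R + 2E(r) (geometric units, Lambda = 0).
    The equation only fixes Rdot^2, so `sign` must be supplied by hand:
    +1 for an expanding phase, -1 for a collapsing one.
    """
    return sign * np.sqrt(2.0 * M / R + 2.0 * E) / R

# Toy values at a single (t, r) point - purely illustrative:
R, M, E = 1.0, 0.1, 0.05
H_expanding = transverse_hubble(R, M, E, sign=+1)
H_collapsing = transverse_hubble(R, M, E, sign=-1)
assert H_collapsing == -H_expanding  # same model data, opposite phase
```

Both signs are consistent with the density and curvature profiles, which is exactly why the paper's model specification leaves the choice ambiguous.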

As it turns out: No. I’ll have to keep trying. But it did lead to what I thought was an interesting little result: the Buchert averaged hypersurface deceleration parameter, usually written q_\mathcal{D}, is invariant under \Theta \mapsto - \Theta, where \Theta is the expansion scalar for the dust congruence. This means that it doesn’t care whether your structures are collapsing or expanding, as long as the density profile and variance of the Hubble rates are the same. This is pretty trivial by inspection of the general form of the expression for q_\mathcal{D}, but it hadn’t crossed my mind before.
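The invariance is easy to verify numerically using the usual Buchert forms of the averaged quantities (the toy random samples below stand in for a real spatial average over a domain; they're mine, not from the paper):

```python
import numpy as np

def q_D(theta, rho, sigma2, G=1.0):
    """Buchert deceleration parameter on a toy averaging domain, using
    q_D = (4 pi G <rho> - Q_D) / (3 H_D^2), with H_D = <theta>/3 and
    Q_D = (2/3)(<theta^2> - <theta>^2) - 2 <sigma^2>.
    theta:  samples of the expansion scalar over the domain
    rho:    samples of the dust density
    sigma2: samples of the shear scalar (quadratic in the shear, so
            already insensitive to theta -> -theta)
    """
    H_D = np.mean(theta) / 3.0
    Q_D = (2.0 / 3.0) * (np.mean(theta**2) - np.mean(theta)**2) \
          - 2.0 * np.mean(sigma2)
    return (4.0 * np.pi * G * np.mean(rho) - Q_D) / (3.0 * H_D**2)

rng = np.random.default_rng(0)
theta = rng.normal(1.0, 0.3, 1000)     # toy expansion samples
rho = rng.uniform(0.5, 1.5, 1000)      # toy density samples
sigma2 = rng.uniform(0.0, 0.1, 1000)   # toy shear-scalar samples

# Flipping the sign of the expansion scalar leaves q_D unchanged:
assert np.isclose(q_D(theta, rho, sigma2), q_D(-theta, rho, sigma2))
```

Every appearance of \Theta in the expression is quadratic (\langle\Theta\rangle^2, \langle\Theta^2\rangle, and \sigma^2), which is the whole content of the observation.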