Neutral hydrogen

In an attempt to blog more often, I’ve decided to try writing brief, weekly-ish research updates. Let’s see how long this lasts…

Beyond BAO with autocorrelation intensity mapping experiments

This week, I’ve been at the Cosmology with Neutral Hydrogen workshop in Berkeley, where I gave a talk about autocorrelation (“single-dish”) 21cm intensity mapping experiments, of the kind we’re planning for SKA. As far as the US community is concerned, low-redshift (z < 6) intensity mapping is synonymous with (a) interferometric experiments, like CHIME and HIRAX, and (b) BAO surveys. My argument is that there are distinct advantages to exploring non-BAO science with IM experiments, and that autocorrelation experiments have significant benefits when it comes to those other science cases.

A more provocative statement is that “BAO is [ultimately] a dead end for 21cm IM”, which led to a rather passionate discussion at the end of the first day. I think this is a fair statement – while detecting the BAO will be an excellent, and probably necessary, validation of the 21cm IM technique (you either see the BAO bump feature at the right scale in the correlation function or you don’t), contemporary spectroscopic galaxy surveys will cover a big chunk of the interesting redshift range (0 < z < 3) over a similar timeframe, and people will probably trust their results more. That is, counting galaxies is simpler than subtracting IM foregrounds. Perhaps something more can be gained from the ability of IM surveys to reach larger volumes and higher redshifts in tractable amounts of survey time (spectroscopic galaxy surveys are slow and expensive!), but I doubt this will lead to much more than mild improvements on parameter constraints.
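To illustrate what “seeing the bump at the right scale” means, here is a toy sketch of my own (not real survey data, and all the numbers are made up for illustration): a featureless broadband correlation function plus a Gaussian BAO-like bump near ~105 Mpc/h. The validation check amounts to subtracting a smooth model and asking whether the residual peaks at that scale.

```python
# Toy illustration of the BAO "bump at the right scale" check.
# All amplitudes and scales below are invented for demonstration.
import numpy as np

r = np.linspace(40.0, 160.0, 241)                       # separation, Mpc/h
smooth = (r / 10.0) ** -1.8                             # featureless broadband shape
bump = 2e-3 * np.exp(-0.5 * ((r - 105.0) / 8.0) ** 2)   # BAO-like feature at ~105 Mpc/h
xi = smooth + bump                                      # toy correlation function

residual = xi - smooth          # in practice: fit and subtract a smooth model
r_peak = r[np.argmax(residual)]
print(r_peak)                   # peak of the residual, ~105 Mpc/h
```

In a real analysis the smooth model is fitted rather than known, and the significance of the bump is assessed against the covariance of the measurement, but the logic is the same.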

Once IM has shown that it can detect the BAO, and is therefore a viable method, where do we go from there? I advocated targeting science for which the IM technique has definitive advantages over other methods. In particular, I suggested IM as being particularly promising for constraining extremely large-scale clustering (e.g. to detect new general relativistic effects and scale-dependent bias from primordial non-Gaussianity), and putting integral constraints on faint emission (i.e. sources deep into the faint end of the luminosity function). Galaxy surveys can’t do the latter unless they’re incredibly deep, and can’t do the former without excessive amounts of survey time. Autocorrelation IM is a better fit for these techniques than interferometric IM because (a) autocorrelation sees all scales in the survey area larger than the beam size, while interferometers filter out large scales unless you have a high density of very short baselines, and (b) there is no “missing flux” due to missing baselines (and therefore missing Fourier modes), which would screw up integral constraints on total emission. That said, interferometers are probably a safer way to get an initial IM BAO detection, owing to the relative difficulty of calibrating autocorrelation experiments. My money is still on CHIME to get the initial 21cm BAO detection.
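The baseline argument in (a) can be made quantitative with a back-of-the-envelope sketch (my own illustration, not from the talk; the instrument numbers are just representative): an interferometer with minimum baseline b_min is only sensitive to angular scales up to roughly λ/b_min, whereas a single dish of diameter D sees everything from the survey size down to its beam, roughly λ/D.

```python
# Rough scaling of accessible angular scales for interferometers vs single dishes.
# Diffraction-limit estimates only; instrument parameters are illustrative.
import math

def obs_wavelength(z, rest_m=0.21):
    """Observed wavelength of redshifted 21cm emission, in metres."""
    return rest_m * (1.0 + z)

def interferometer_max_scale_deg(z, b_min_m):
    """Largest angular scale (deg) probed by an interferometer with minimum baseline b_min."""
    return math.degrees(obs_wavelength(z) / b_min_m)

def dish_beam_deg(z, dish_m):
    """Approximate beam size (deg) of a single dish: its smallest accessible scale."""
    return math.degrees(obs_wavelength(z) / dish_m)

z = 1.0
print(interferometer_max_scale_deg(z, b_min_m=20.0))  # ~1.2 deg: everything larger is filtered out
print(dish_beam_deg(z, dish_m=13.5))                  # a 13.5 m dish resolves scales down to ~1.8 deg
```

The point of the comparison: at z = 1 an interferometer with 20 m minimum baselines loses everything above a degree or so, while an autocorrelation survey retains all modes up to the survey size, which is exactly where the ultra-large-scale signals live.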

There are a few autocorrelation IM experiments on the slate right now, including BINGO (a purpose-built IM experiment that will start operations in the ~2018 timeframe), MeerKAT (for which a 4,000 hour, 4,000 sq. deg. IM survey called MeerKLASS, matched to the DES footprint, has been proposed), and SKA1-MID (which I’ve spent a lot of time working on; it’s due to switch on around 2020), in addition to existing surveys with GBT and Parkes. If the various hard data analysis challenges can be solved for these experiments (which I think they can be), this will open up several exciting scientific possibilities that are almost unique to IM, like measuring ultra-large scales. And I think this should be recognised as a more promising niche for the technique – BAO detections are a medium-term validation strategy that will likely provide interesting (but not Earth-shattering) science, but ultimately they’re not its raison d’être.

Validating 21cm results – how can you trust the auto-spectrum?

Another thing that provoked much hand-wringing was the difficulty of definitively verifying 21cm auto-spectrum detections. The GBT experiment has been trying to do this, but it’s hard. Perhaps the power spectrum they’ve detected is just systematics? Or maybe they’ve over-subtracted the foregrounds and thus taken some signal with them? They claim to be able to combine upper and lower bounds from their auto- and (WiggleZ) cross-spectra respectively to measure parameters like the HI bias and HI fraction, but I have my reservations. As I said, it’s hard.
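For readers unfamiliar with the combination being claimed, here is the logic in toy form (my own sketch, with invented numbers; not the GBT analysis itself): the auto-spectrum amplitude scales as (Ω_HI·b_HI)², while the cross-spectrum with a galaxy survey scales as Ω_HI·b_HI·r, where the cross-correlation coefficient satisfies r ≤ 1. An upper bound on the auto-spectrum and a detection in the cross therefore bracket Ω_HI·b_HI.

```python
# Toy bracketing of Omega_HI * b_HI from auto- and cross-spectrum amplitudes.
# Scaling relations only; all numerical values are illustrative, not measured.
import math

def omega_b_from_auto(auto_amp):
    """Omega_HI * b_HI implied by an auto-spectrum amplitude ~ (Omega_HI*b_HI)^2."""
    return math.sqrt(auto_amp)

def omega_b_lower_from_cross(cross_amp):
    """Lower limit from a cross amplitude ~ Omega_HI*b_HI*r, since r <= 1."""
    return cross_amp

auto_upper = 1.0e-6   # hypothetical upper bound on the auto-spectrum amplitude
cross_meas = 6.0e-4   # hypothetical measured cross-spectrum amplitude
lo = omega_b_lower_from_cross(cross_meas)
hi = omega_b_from_auto(auto_upper)
print(lo, "<= Omega_HI * b_HI <=", hi)
```

My reservation is visible even in the toy version: the upper limit inherits any systematic in the auto-spectrum, and the lower limit is only as good as the assumption r ≤ 1, so the bracket can silently exclude the truth if either side is off.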

In my opinion, we need to wait for experiments with greater data volumes (wider survey areas, higher SNR). Then, we gain the ability to perform a much wider array of null tests than are currently possible with the GBT data. This is what people do to validate any other precision experiment, like Planck. It’s not a silver bullet, sure, but it’ll be a good, informative way to build confidence in any claimed auto-power detection.
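One of the simplest null tests of this kind can be sketched in a few lines (my own construction, not any experiment’s actual pipeline): split the data into two halves with independent noise, difference them, and check that the difference is consistent with pure noise. The sky signal is common to both halves and cancels, so a chi-square per degree of freedom far from 1 flags residual systematics.

```python
# Minimal half-split null test: the difference of two data halves should
# be signal-free, leaving only noise. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(42)
signal = rng.normal(size=1000)                  # common sky signal in both halves
half1 = signal + 0.5 * rng.normal(size=1000)    # independent noise, sigma = 0.5
half2 = signal + 0.5 * rng.normal(size=1000)

diff = (half1 - half2) / np.sqrt(2.0)           # signal cancels; noise sigma stays 0.5
chi2_per_dof = np.mean(diff**2) / 0.5**2        # ~1 if the difference is pure noise
print(round(chi2_per_dof, 2))
```

Real pipelines run many such splits (by time, scan direction, polarisation, frequency band) and with a large enough data volume each split retains enough sensitivity for the test to be meaningful, which is exactly why more data helps.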

So, why worry? Just wait for more data, then do the statistical tests.

SDSS DR9 data release is out

The much-anticipated (by me, anyway) SDSS-III DR9 data release is out! Like many other people, I love mucking around with big datasets like this, and the SDSS team have done a really nice job of trying to make the data as accessible as possible. An integral part of any serious public data release is documentation, and the SDSS crew have excelled at that too, from what I’ve seen so far: check out the various interesting tutorials and basic code snippets that will help you to get up and running.