Monthly Archives: January 2017

Gamma-ray background and CO galaxy lensing

A productive week, mostly spent kicking off a few new projects.

At the beginning of the week, Stefano Camera was visiting from Manchester. He gave a great talk about putting constraints on particle dark matter by looking for annihilation signatures in the gamma-ray background (as observed by Fermi). Various other processes can contribute to the background, so ideally one would apply some filter to extract only the dark matter contribution (which currently has unknown amplitude). One answer is to cross-correlate the Fermi map with a reconstructed map of the weak lensing potential, which is probably one of the purest tracers of the dark matter distribution you can get. This should get pretty good with future weak lensing datasets, and future Fermi data. A really nice idea.
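
For intuition, here's a minimal sketch of the cross-correlation trick (toy HEALPix maps and healpy, not Stefano's actual analysis): a component shared between the gamma-ray map and the lensing map survives in the cross-spectrum, while the uncorrelated contaminants only add scatter.

```python
import numpy as np
import healpy as hp

# Toy angular power spectrum for the shared dark-matter component (illustrative shape only).
nside, lmax = 128, 256
ell = np.arange(lmax + 1)
cl_dm = np.zeros(lmax + 1)
cl_dm[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))

# Two "observed" maps sharing a dark-matter component, plus independent contaminants.
dm = hp.synfast(cl_dm, nside)
gamma_map = 0.1 * dm + hp.synfast(cl_dm, nside)   # gamma-ray background: mostly other emission
kappa_map = dm + 0.1 * hp.synfast(cl_dm, nside)   # lensing convergence: mostly dark matter

# The cross-spectrum isolates the correlated (dark-matter) part; the uncorrelated
# pieces add scatter but no bias.
cl_cross = hp.anafast(gamma_map, kappa_map, lmax=lmax)
cl_kappa = hp.anafast(kappa_map, lmax=lmax)
print("recovered DM amplitude in the gamma-ray map:",
      np.mean(cl_cross[50:200] / cl_kappa[50:200]))   # should come out close to the input 0.1
```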

I was also asked to give an overview of the cosmology landscape in 2030, for the benefit of the ngVLA (next-generation VLA) cosmology working group. They’re in the process of building a science case for ngVLA, which looks to have many similarities to SKA in terms of size (~200 dishes) and approach (they want a big “facility”-style observatory), but would operate over the 5–100 GHz band instead (the SKA1-MID dish array should effectively cover 350 MHz–15 GHz). The question is how this could be interesting for cosmology.

Speaking from experience with the SKA, it can be difficult to carve out a really compelling cosmology science case, mostly because there’s just so much competition from other surveys and observational methods. Others can often do much of what you want to do sooner, or with better-understood (though not necessarily superior) methods. The question is whether your experiment can do something really novel and interesting in the space that’s left.

One suggestion is to do CO intensity mapping with ngVLA, which I think would be neat. Except – its field of view will be small, leading to low survey speeds, so it won’t be able to measure the large volumes that are most useful for cosmology. It’d be handy for constraining the star-formation history of the Universe though, as CO is thought to be a very good tracer of that. There’s also going to be competition from smaller, cheaper (and sooner) CO-IM experiments like COMAP (which I was surprised to learn already has a prototype running, and is hoping to be fully commissioned around the end of 2017).
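
To put a rough number on the field-of-view penalty mentioned above (back-of-the-envelope only; I'm assuming ~18 m dishes for ngVLA and taking SKA1-MID's 15 m dishes for comparison): the primary-beam solid angle scales as (λ/D)², so the instantaneous field of view, and hence the survey speed at fixed depth, collapses at tens of GHz.

```python
import numpy as np

C_LIGHT = 299792458.0  # m/s

def fov_sr(freq_hz, dish_m):
    """Very rough primary-beam solid angle, ~ (lambda / D)^2 steradians."""
    return (C_LIGHT / freq_hz / dish_m) ** 2

# Illustrative comparison: a ~30 GHz CO survey with 18 m dishes (assumed ngVLA-like)
# vs. a ~1 GHz survey with 15 m dishes (SKA1-MID-like).
ratio = fov_sr(1e9, 15.0) / fov_sr(30e9, 18.0)
print("instantaneous FOV is ~%.0f times smaller at 30 GHz" % ratio)
# Survey speed at fixed depth scales roughly with the FOV, so mapping cosmological
# volumes (thousands of square degrees) gets expensive very quickly at high frequency.
```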

My proposal was to consider doing a weak lensing survey with ngVLA, perhaps using the CO line. The Manchester group have done a lot of work on radio weak lensing recently, mostly targeting the SKA. They plan to perform a continuum survey at ~1 GHz over a few thousand square degrees, yielding an acceptably high source number density and sufficient angular resolution to measure galaxy ellipticities. Redshift information for continuum sources is very scant, however, so there’s a significant loss of information from effectively averaging over all the radial Fourier modes; even so, an SKA1 survey should have performance comparable to DES. In any case, the real power of this approach lies in cross-correlating the radio and optical lensing data, which nulls many systematics that would be extremely difficult to identify and remove with sufficient precision in any single survey. Radio and optical lensing systematics are expected to look quite different; even the atmosphere affects the two very differently.
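
A toy illustration of why the cross-correlation is so clean (a sketch of the principle only, not the Manchester group's analysis): additive systematics that are uncorrelated between the radio and optical shear maps bias each auto-correlation, but average out of the cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # toy "pixels" (illustrative)

shear = rng.normal(0.0, 0.02, n)        # common cosmological shear signal
sys_radio = rng.normal(0.0, 0.02, n)    # radio-only additive systematic (e.g. beam/calibration)
sys_optical = rng.normal(0.0, 0.02, n)  # optical-only systematic (e.g. PSF/atmosphere residuals)

radio = shear + sys_radio
optical = shear + sys_optical

print("true signal variance:       %.2e" % np.var(shear))
print("radio auto (biased):        %.2e" % np.mean(radio * radio))
print("optical auto (biased):      %.2e" % np.mean(optical * optical))
print("radio x optical (unbiased): %.2e" % np.mean(radio * optical))
```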

While I haven’t done the full calculation yet (in progress!), my suspicion is that ngVLA could be even better at weak lensing than SKA1, if it has sufficient sensitivity. By targeting the CO line, one gets precise redshift information for the detected galaxies, which should allow much more information to be recovered from the lensing signal than in a continuum survey. By virtue of working at higher frequency, the ngVLA should also have higher angular resolution, presumably making shape measurement easier too. Most of the other advantages of radio weak lensing are retained, so this could be a nice dataset to cross-correlate with (e.g.) LSST, and thereby convincingly validate their lensing analysis. The real question, though, is whether ngVLA would have the sensitivity (and survey speed) for this to be practical. Stay tuned.
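
For reference, the skeleton of that kind of forecast looks something like the snippet below: a single-amplitude Fisher calculation for a convergence power spectrum, with entirely made-up survey numbers (f_sky, source density, shape noise) and a toy C_ell, not an actual ngVLA prediction.

```python
import numpy as np

# Skeleton of a single-bin lensing amplitude forecast (all numbers illustrative).
fsky = 0.1                                 # hypothetical survey area / full sky
ngal = 2.0 * (180.0 * 60.0 / np.pi) ** 2   # 2 sources per arcmin^2, converted to per steradian
sigma_e = 0.3                              # intrinsic ellipticity dispersion
ell = np.arange(100, 3000)

cl = 1e-9 * (ell / 1000.0) ** -1.2         # stand-in convergence power spectrum C_ell
nl = sigma_e ** 2 / (2.0 * ngal)           # shape-noise power spectrum
# Fisher information on an overall amplitude A (C_ell -> A * C_ell), evaluated at A = 1:
F_AA = np.sum(0.5 * (2 * ell + 1) * fsky * (cl / (cl + nl)) ** 2)
print("forecast fractional error on the lensing amplitude: %.4f" % (1.0 / np.sqrt(F_AA)))
```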


Neutral hydrogen

In an attempt to blog more often, I’ve decided to try writing brief, weekly-ish research updates. Let’s see how long this lasts…

Beyond BAO with autocorrelation intensity mapping experiments

This week, I’ve been at the Cosmology with Neutral Hydrogen workshop in Berkeley, where I gave a talk about autocorrelation (“single-dish”) 21cm intensity mapping experiments, of the kind we’re planning for SKA. As far as the US community is concerned, low-redshift (z < 6) intensity mapping is synonymous with (a) interferometric experiments, like CHIME and HIRAX, and (b) BAO surveys. My argument is that there are distinct advantages to exploring non-BAO science with IM experiments, and that autocorrelation experiments have significant benefits when it comes to those other science cases.

A more provocative statement is that “BAO is [ultimately] a dead end for 21cm IM”, which led to a rather passionate discussion at the end of the first day. I think it’s a fair statement. Detecting the BAO will be an excellent, and probably necessary, validation of the 21cm IM technique: you either see the bump feature at the right scale in the correlation function or you don’t. But contemporary spectroscopic galaxy surveys will cover a big chunk of the interesting redshift range (0 < z < 3) over a similar timeframe, and people will probably trust their results more – counting galaxies is simpler than subtracting IM foregrounds. Perhaps something more can be gained from the ability of IM surveys to reach larger volumes and higher redshifts in tractable amounts of survey time (spectroscopic galaxy surveys are slow and expensive!), but I doubt this will yield much more than mild improvements on parameter constraints.
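
For what it's worth, "seeing the bump at the right scale" is about as blunt a criterion as it sounds; here's a toy sketch (made-up correlation function, bump template and error bars, not a real BAO analysis) of what that detection statistic amounts to.

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(60.0, 150.0, 30)                           # separation bins [Mpc/h]
xi_smooth = 0.02 * (r / 100.0) ** -2.0                     # featureless broadband correlation function
bump = 0.004 * np.exp(-0.5 * ((r - 105.0) / 10.0) ** 2)    # BAO bump template at ~105 Mpc/h

sigma = 0.002                                              # per-bin error bar (made up)
data = xi_smooth + bump + rng.normal(0.0, sigma, r.size)

chi2_no_bump = np.sum((data - xi_smooth) ** 2) / sigma ** 2
chi2_bump = np.sum((data - xi_smooth - bump) ** 2) / sigma ** 2
print("delta chi^2 favouring a bump at the BAO scale: %.1f" % (chi2_no_bump - chi2_bump))
```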

Once IM has shown that it can detect the BAO, and is therefore a viable method, where do we go from there? I advocated targeting science for which the IM technique has definitive advantages over other methods. In particular, I suggested that IM is particularly promising for constraining extremely large-scale clustering (e.g. to detect new general relativistic effects and scale-dependent bias from primordial non-Gaussianity), and putting integral constraints on faint emission (i.e. sources deep into the faint end of the luminosity function). Galaxy surveys can’t do the latter unless they’re incredibly deep, and can’t do the former without excessive amounts of survey time. Autocorrelation IM is a better fit for these techniques than interferometric IM because (a) autocorrelation sees all scales in the survey area larger than the beam size, while interferometers filter out large scales unless you have a high density of very short baselines, and (b) there is no “missing flux” due to missing baselines (and therefore missing Fourier modes), which would screw up integral constraints on total emission. That said, interferometers are probably a safer way to get an initial IM BAO detection, owing to the relative difficulty of calibrating autocorrelation experiments. My money is still on CHIME to get the initial 21cm BAO detection.
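
As a reminder of why those ultra-large scales are the interesting regime for primordial non-Gaussianity: the standard scale-dependent bias correction grows as 1/k². A minimal sketch with illustrative parameter values (and with the transfer function and growth factor crudely set to one):

```python
import numpy as np

# Scale-dependent bias from local primordial non-Gaussianity (schematic):
#   delta_b(k) = 3 f_NL (b - 1) delta_c Omega_m H0^2 / (c^2 k^2 T(k) D(z))
# with T(k) and D(z) set to 1 here, which is a crude simplification.
c = 299792.458          # km/s
H0 = 70.0               # km/s/Mpc (illustrative)
Om, delta_c = 0.3, 1.686
b_HI, f_NL = 1.5, 1.0   # illustrative HI bias and a "target" f_NL

def delta_b(k):
    """Non-Gaussian bias correction for k in 1/Mpc."""
    return 3.0 * f_NL * (b_HI - 1.0) * delta_c * Om * H0 ** 2 / (c ** 2 * k ** 2)

for k in (0.1, 0.01, 0.001):
    print("k = %.3f /Mpc: delta_b / b = %.1e" % (k, delta_b(k) / b_HI))
# Per unit f_NL the correction is utterly negligible at k ~ 0.1/Mpc and only becomes
# appreciable at k ~ 0.001/Mpc, i.e. on scales that demand enormous survey volumes.
```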

There are a few autocorrelation IM experiments on the slate right now, including BINGO (a purpose-built IM experiment that will start operations in the ~2018 timeframe), MeerKAT (for which a 4,000 hour, 4,000 sq. deg. IM survey called MeerKLASS, matched to the DES footprint, has been proposed), and SKA1-MID (which I’ve spent a lot of time working on; it’s due to switch on around 2020), in addition to existing surveys with GBT and Parkes. If the various hard data analysis challenges can be solved for these experiments (which I think they can be), this will open up several exciting scientific possibilities that are almost unique to IM, like measuring ultra-large scales. And I think this should be recognised as a more promising niche for the technique – BAO detections are a medium-term validation strategy that will likely provide interesting (but not Earth-shattering) science, but ultimately they’re not its raison d’être.

Validating 21cm results – how can you trust the auto-spectrum?

Another thing that provoked much hand-wringing was the difficulty of definitively verifying 21cm auto-spectrum detections. The GBT experiment has been trying to do this, but it’s hard. Perhaps the power spectrum they’ve detected is just systematics? Or maybe they’ve over-subtracted the foregrounds and thus taken some signal with them? They claim to be able to combine upper and lower bounds from their auto- and (WiggleZ) cross-spectra respectively to measure parameters like the HI bias and HI fraction, but I have my reservations. As I said, it’s hard.
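
For context, the logic of that auto + cross combination is roughly as follows; the sketch below uses made-up amplitudes (not the actual GBT/WiggleZ numbers) just to show how the two bounds bracket Ω_HI b_HI.

```python
import numpy as np

# Schematic of the auto + cross combination (made-up amplitudes, not the GBT numbers).
# Auto-power  ~ (Omega_HI * b_HI)^2 * P_m   (residual systematics can only add power)
# Cross-power ~  Omega_HI * b_HI * r * P_m  (with the stochasticity |r| <= 1)
P_m = 1.0             # normalise out the matter power spectrum for this sketch
A_auto = 1.0e-6       # "measured" auto amplitude  -> upper limit on the signal
A_cross = 5.0e-4      # "measured" cross amplitude -> lower bound on Omega_HI * b_HI

upper = np.sqrt(A_auto / P_m)   # Omega_HI * b_HI <= sqrt(A_auto / P_m)
lower = A_cross / P_m           # Omega_HI * b_HI >= A_cross / P_m, since r <= 1 and
                                # foreground over-subtraction only removes signal
print("Omega_HI * b_HI bracketed between %.1e and %.1e" % (lower, upper))
```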

In my opinion, we need to wait for experiments with greater data volumes (wider survey areas, higher SNR). Then, we gain the ability to perform a much wider array of null tests than are currently possible with the GBT data. This is what people do to validate any other precision experiment, like Planck. It’s not a silver bullet, sure, but it’ll be a good, informative way to build confidence in any claimed auto-power detection.
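
The workhorse here is the data-split null test: divide the observations into two halves (by time, scan direction, polarisation, etc.), difference the resulting maps so the common sky signal cancels, and check that what's left is consistent with the noise model. A toy sketch of the idea (1-D arrays standing in for maps, not a real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000
signal = rng.normal(0.0, 1.0, npix)       # common sky signal (toy)
noise_a = rng.normal(0.0, 0.5, npix)      # independent noise in each data split
noise_b = rng.normal(0.0, 0.5, npix)

map_a = signal + noise_a                  # map from one half of the data (e.g. odd scans)
map_b = signal + noise_b                  # map from the other half (e.g. even scans)

null = 0.5 * (map_a - map_b)              # sky cancels; only noise and unshared systematics remain
var_expected = (0.5 ** 2 + 0.5 ** 2) / 4.0     # expected null-map variance from the noise model
chi2_per_dof = np.mean(null ** 2) / var_expected
print("null-test chi^2/dof = %.3f (close to 1 -> no evidence of unshared systematics)" % chi2_per_dof)
```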

So, why worry? Just wait for more data, then do the statistical tests.