Gamma-ray background and CO galaxy lensing

A productive week, mostly spent kicking off a few new projects.

At the beginning of the week, Stefano Camera was visiting from Manchester. He gave a great talk about putting constraints on particle dark matter by looking for annihilation signatures in the gamma-ray background (as observed by Fermi). Various other processes can contribute to the background, so ideally one would apply some filter to extract only the dark matter contribution (whose amplitude is currently unknown). One answer is to cross-correlate the Fermi map with a reconstructed map of the weak lensing potential, which is probably one of the purest tracers of the dark matter distribution you can get. This should get pretty good with future weak lensing datasets and future Fermi data. A really nice idea.
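The basic logic of that cross-correlation can be seen in a toy numpy sketch (my own illustration, with made-up numbers, not the analysis from the talk): the gamma-ray map is a mix of a dark-matter term with unknown amplitude and an uncorrelated astrophysical foreground, while the lensing map traces the same dark-matter field plus noise. The foreground averages away in the cross-correlation, leaving an estimate of the amplitude.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 200_000

delta = rng.normal(size=npix)        # underlying dark-matter field
A_true = 0.3                         # unknown amplitude we want to recover

# Fermi-like map: dark-matter term + large uncorrelated foreground
gamma = A_true * delta + rng.normal(scale=2.0, size=npix)
# Lensing map: (nearly) pure tracer of the same field, plus noise
kappa = delta + rng.normal(scale=0.5, size=npix)

# <gamma * kappa> ~ A * <delta^2>, since the foreground and the lensing
# noise are uncorrelated; divide out the lensing auto-power (minus its
# known noise level) to estimate A.
A_hat = np.mean(gamma * kappa) / (np.mean(kappa**2) - 0.5**2)
print(f"recovered amplitude: {A_hat:.2f}")  # close to the input 0.3
```

The key point is that the foreground, however large, only adds scatter to the cross-correlation rather than biasing it.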

I was also asked to give an overview of the cosmology landscape in 2030, for the benefit of the ngVLA (next-generation VLA) cosmology working group. They’re in the process of building a science case for ngVLA, which looks to have many similarities to SKA in terms of size (~200 dishes) and approach (they want a big “facility”-style observatory), but over the 5-100 GHz band instead (the SKA1-MID dish array should effectively cover 350 MHz – 15 GHz). The question is how this could be interesting for cosmology.

Speaking from experience with the SKA, it can be difficult to carve out a really compelling cosmology science case, mostly because there's just so much competition from other surveys and observational methods. Other experiments can often do much of what you want to do, but sooner, or with better-understood (though not necessarily superior) methods. The question is whether your experiment can do something really novel and interesting in the space that's left.

One suggestion is to do CO intensity mapping with ngVLA, which I think would be neat. Except that its field of view will be small, leading to low survey speeds, so it won't be able to measure the large volumes that are most useful for cosmology. It'd be handy for constraining the star-formation history of the Universe though, as CO is thought to be a very good tracer of that. There's also going to be competition from smaller, cheaper (and sooner) CO-IM experiments like COMAP (which I was surprised to learn already has a prototype running, and is hoping to be fully commissioned around the end of 2017).
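The field-of-view problem is just diffraction: a single dish's primary beam solid angle scales roughly as (wavelength / dish diameter)^2, and survey speed scales with the field of view at fixed sensitivity. A quick back-of-the-envelope comparison (the dish sizes and observing frequencies below are my rough assumptions, not official specs):

```python
C = 299_792_458.0  # speed of light [m/s]

def fov_sr(freq_hz, dish_m):
    """Approximate primary-beam solid angle ~ (lambda / D)^2 [sr]."""
    lam = C / freq_hz
    return (lam / dish_m) ** 2

fov_ngvla = fov_sr(30e9, 18.0)  # ngVLA-like: 30 GHz, 18 m dishes (assumed)
fov_ska = fov_sr(1e9, 15.0)     # SKA1-MID-like: 1 GHz, 15 m dishes (assumed)

print(f"FOV ratio (ngVLA-like / SKA-like): {fov_ngvla / fov_ska:.1e}")
# Over a thousand times smaller per pointing, so survey speed drops by
# roughly the same factor at fixed sensitivity.
```

Working at tens of GHz instead of ~1 GHz costs you orders of magnitude in sky coverage per pointing, which is why large-volume cosmological surveys get hard.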

My proposal was to consider doing a weak lensing survey with ngVLA, perhaps using the CO line. The Manchester group have done a lot of work on radio weak lensing recently, mostly targeting the SKA. They plan to perform a continuum survey at ~1 GHz over a few thousand square degrees, yielding an acceptably high source number density and sufficient angular resolution to measure galaxy ellipticities. Redshift information for continuum sources is very scant, however, so there's a significant loss of information due to effectively averaging over all the radial Fourier modes; an SKA1 survey should still have comparable performance to DES though. In any case, the real power of this approach is in cross-correlating the radio and optical lensing data, which would remove many systematics that could be extremely difficult to identify and subtract with sufficient precision in a single survey. Radio and optical lensing systematics are expected to look quite different; even the atmosphere has a very different effect between the two.
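Why the cross-correlation helps can be shown with another toy numpy sketch (assumptions mine): the radio and optical shear maps share the same true lensing signal, but each carries its own additive systematic. Independent systematics bias each auto-correlation, but average away in the cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 200_000

signal = rng.normal(scale=1.0, size=npix)      # true shear field
sys_radio = rng.normal(scale=0.7, size=npix)   # e.g. beam/ionosphere residuals
sys_optical = rng.normal(scale=0.7, size=npix) # e.g. PSF residuals

radio = signal + sys_radio       # radio shear map
optical = signal + sys_optical   # optical shear map

auto_radio = np.mean(radio**2)    # ~ 1.0 + 0.49: biased by the systematic
cross = np.mean(radio * optical)  # ~ 1.0: systematics drop out on average
print(f"radio auto: {auto_radio:.2f}, cross: {cross:.2f}")
```

In a real analysis the systematics are correlated across the sky rather than white noise, but the same logic applies mode by mode: only contamination common to both surveys survives in the cross-spectrum.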

While I haven’t done the full calculation yet (in progress!), my suspicion is that ngVLA could be even better at weak lensing than SKA1, if it has sufficient sensitivity. By targeting the CO line, one gets precise redshift information for the detected galaxies, which should allow much more information to be recovered from the lensing signal than in a continuum survey. By virtue of working at higher frequency, the ngVLA should also have higher angular resolution, presumably making shape measurement easier too. Most of the other advantages of radio weak lensing are retained, so this could be a nice dataset to cross-correlate with (e.g.) LSST, and thereby convincingly validate their lensing analysis. The real question, though, is whether ngVLA would have the sensitivity (and survey speed) for this to be practical. Stay tuned.

About Phil Bull

I'm a theoretical cosmologist, currently working as a NASA NPP fellow at JPL/Caltech in Pasadena, CA. My research focuses on the effects of inhomogeneities on the evolution of the Universe and how we measure it. I'm also keen on stochastic processes, scientific computing, the philosophy of science, and open source stuff.
