Author Archives: Phil Bull

About Phil Bull

I'm a theoretical cosmologist, currently working as a NASA NPP fellow at JPL/Caltech in Pasadena, CA. My research focuses on the effects of inhomogeneities on the evolution of the Universe and how we measure it. I'm also keen on stochastic processes, scientific computing, the philosophy of science, and open source stuff.

Prioritising modified gravity models

Just a brief note this week. One thing looming on the horizon is the LSST Dark Energy Science Collaboration meeting in Stanford, where we’ll spend a couple of sessions discussing which models beyond the standard LambdaCDM model we should prioritise to be tested when the first data arrive.

In an ideal world, we’d test everything that seemed interesting, for a very loose and inclusive definition of the word interesting. Wouldn’t want to miss out that crazy theory that might just be the answer to all the big problems in physics now, would we? The problem is, testing models takes a lot of time and effort, with the effort required becoming increasingly prohibitive as we begin to push to sub-1% precision on cosmological parameters. Modern survey data are incredibly complex, and so it takes a lot to ensure that the analysis you’re doing is robust – a lot of computing time, a lot of validation, a lot of simulations, a lot of model complexity… It’s just hard.

So, we need to prioritise. I think the best bet for now would be to whittle down the vast array of possible models into a very short but diverse list. This could cover some neat examples of very different physics, but without any attempt at being comprehensive. The diversity will give us a handful of “example” implementations of testable models that can be used as templates for future, more comprehensive, analyses. Sticking to a short list is crucial for now, however, as it will allow us to focus our development and simulation effort without overreaching.


Gamma ray background and CO galaxy lensing

A productive week, mostly spent kicking off a few new projects.

At the beginning of the week, Stefano Camera was visiting from Manchester. He gave a great talk about putting constraints on particle dark matter by looking for annihilation signatures in the gamma-ray background (as observed by Fermi). Various other processes can contribute to the background, so ideally one would apply some filter to extract only the dark matter contribution (which currently has unknown amplitude). One answer is to cross-correlate the Fermi map with a reconstructed map of the weak lensing potential, which is probably one of the purest tracers of the dark matter distribution you can get. This should get pretty good with future weak lensing datasets, and future Fermi data. A really nice idea.
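The reason the cross-correlation acts as a filter is easy to see in a toy calculation. Here's a minimal numpy sketch (entirely my own illustration, with invented amplitudes and per-pixel noise, not real Fermi or lensing data): because the astrophysical contaminants are uncorrelated with the lensing map, they average away in the cross-correlation, leaving an estimate of the dark matter contribution's amplitude.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 100_000

# Toy fields; the dark matter field has unit variance, all amplitudes made up.
dm = rng.standard_normal(npix)                    # "true" dark matter field
A_true = 0.3                                      # unknown DM annihilation amplitude
gamma = A_true * dm + rng.standard_normal(npix)   # gamma-ray map: DM + other sources
kappa = dm + 0.5 * rng.standard_normal(npix)      # lensing map: DM tracer + noise

# The contaminants and noise are uncorrelated with the lensing map, so the
# cross-correlation isolates the DM piece (dm has unit variance in this toy).
A_est = np.mean(gamma * kappa)
```

With 10^5 pixels, `A_est` lands close to the input amplitude of 0.3 even though the "other sources" term dominates the gamma-ray map pixel by pixel.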

I was also asked to give an overview of the cosmology landscape in 2030, for the benefit of the ngVLA (next-generation VLA) cosmology working group. They’re in the process of building a science case for ngVLA, which looks to have many similarities to SKA in terms of size (~200 dishes) and approach (they want a big “facility”-style observatory), but over the 5-100 GHz band instead (the SKA1-MID dish array should effectively cover 350 MHz – 15 GHz). The question is how this could be interesting for cosmology.

Speaking from experience with the SKA, it can be difficult to carve out a really compelling cosmology science case, mostly because there’s just so much competition from other surveys and observational methods. People can often do much of what you want to do, but sooner, or with better-understood (though not necessarily superior) methods. The question is whether your experiment can do something really novel and interesting in the space that’s left.

One suggestion is to do CO intensity mapping with ngVLA, which I think would be neat. Except – its field of view will be small, leading to low survey speeds, so it won’t be able to measure the large volumes that are most useful for cosmology. It’d be handy for constraining the star-formation history of the Universe though, as CO is thought to be a very good tracer of that. There’s also going to be competition from smaller, cheaper (and sooner) CO-IM experiments like COMAP (which I was surprised to learn already has a prototype running, and is hoping to be fully commissioned around the end of 2017).

My proposal was to consider doing a weak lensing survey with ngVLA, perhaps using the CO line. The Manchester group have done a lot of work on radio weak lensing recently, mostly targeting the SKA. They plan to perform a continuum survey at ~1 GHz over a few thousand square degrees, yielding an acceptably high source number density and sufficient angular resolution to measure galaxy ellipticities. Redshift information for continuum sources is very scant however, so there’s a significant loss of information due to effectively averaging over all the radial Fourier modes; an SKA1 survey should still have comparable performance to DES though. In any case, the real power of this approach is in cross-correlating the radio and optical lensing data, which would remove many systematics that could be extremely difficult to identify and subtract with sufficient precision in a single survey. Radio and optical lensing systematics are expected to look quite different; even the atmosphere has a very different effect between the two.

While I haven’t done the full calculation yet (in progress!), my suspicion is that ngVLA could be even better at weak lensing than SKA1, if it has sufficient sensitivity. By targeting the CO line, one gets precise redshift information about the detected galaxies, which should allow much more information to be recovered from the lensing signal than in a continuum survey. By virtue of working at higher frequency, the ngVLA should also have a higher angular resolution, presumably making shape measurement easier too. Most of the other advantages of radio weak lensing are retained, and so this could be a nice dataset to cross-correlate with (e.g.) LSST, and thereby convincingly validate their lensing analysis. The question really is whether ngVLA would have the sensitivity (and survey speed) for this to be practical, however. Stay tuned.


Neutral hydrogen

In an attempt to blog more often, I’ve decided to try writing brief, weekly-ish research updates. Let’s see how long this lasts…

Beyond BAO with autocorrelation intensity mapping experiments

This week, I’ve been at the Cosmology with Neutral Hydrogen workshop in Berkeley, where I gave a talk about autocorrelation (“single-dish”) 21cm intensity mapping experiments, of the kind we’re planning for SKA. As far as the US community is concerned, low-redshift (z < 6) intensity mapping is synonymous with (a) interferometric experiments, like CHIME and HIRAX, and (b) BAO surveys. My argument is that there are distinct advantages to exploring non-BAO science with IM experiments, and that autocorrelation experiments have significant benefits when it comes to those other science cases.

A more provocative statement is that “BAO is [ultimately] a dead end for 21cm IM”, which led to a rather passionate discussion at the end of the first day. I think this is a fair statement – while detecting the BAO will be an excellent, and probably necessary, validation of the 21cm IM technique (you either see the BAO bump feature at the right scale in the correlation function or don’t), contemporary spectroscopic galaxy surveys will cover a big chunk of the interesting redshift range (0 < z < 3) over a similar timeframe, and people will probably trust their results more. That is, counting galaxies is simpler than subtracting IM foregrounds. Perhaps something more can be gained by the ability of IM surveys to reach larger volumes and higher redshifts with tractable amounts of survey time (spectroscopic galaxy surveys are slow and expensive!), but I doubt this will lead to much more than mild improvements on parameter constraints.
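As a schematic of what “seeing the bump at the right scale” means, here's a toy sketch (my own illustration; the broadband shape, bump scale, width, and amplitude below are invented, not fits to any data): divide a smooth broadband model out of the correlation function and look for the residual acoustic peak.

```python
import numpy as np

# Toy correlation function: a smooth "no-wiggle" broadband shape plus a
# Gaussian BAO bump at r ~ 105 Mpc/h.  All numbers are illustrative.
r = np.linspace(40.0, 160.0, 241)                     # separation in Mpc/h
smooth = (r / 100.0) ** -2.0                          # featureless broadband
bump = 0.2 * np.exp(-0.5 * ((r - 105.0) / 8.0) ** 2)  # the acoustic feature
xi = smooth * (1.0 + bump)

# "Seeing the bump": divide out a smooth fit and locate the residual peak.
# (Here the fit is to the known no-wiggle part; a real analysis would fit a
# broadband model to the measured correlation function itself.)
coeffs = np.polyfit(np.log(r), np.log(smooth), deg=3)
broadband = np.exp(np.polyval(coeffs, np.log(r)))
residual = xi / broadband - 1.0
r_peak = r[np.argmax(residual)]                       # ~105 Mpc/h
```

The validation test is then whether `r_peak` comes out at the expected acoustic scale; if the foreground subtraction had destroyed or shifted the feature, it wouldn't.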

Once IM has shown that it can detect the BAO, and is therefore a viable method, where do we go from there? I advocated targeting science for which the IM technique has definitive advantages over other methods. In particular, I suggested IM as being particularly promising for constraining extremely large-scale clustering (e.g. to detect new general relativistic effects and scale-dependent bias from primordial non-Gaussianity), and putting integral constraints on faint emission (i.e. sources deep into the faint end of the luminosity function). Galaxy surveys can’t do the latter unless they’re incredibly deep, and can’t do the former without excessive amounts of survey time. Autocorrelation IM is a better fit for these techniques than interferometric IM because (a) autocorrelation sees all scales in the survey area larger than the beam size, while interferometers filter out large scales unless you have a high density of very short baselines, and (b) there is no “missing flux” due to missing baselines (and therefore missing Fourier modes), which would screw up integral constraints on total emission. That said, interferometers are probably a safer way to get an initial IM BAO detection, owing to the relative difficulty of calibrating autocorrelation experiments. My money is still on CHIME to get the initial 21cm BAO detection.

There are a few autocorrelation IM experiments on the slate right now, including BINGO (a purpose-built IM experiment that will start operations in the ~2018 timeframe), MeerKAT (for which a 4,000 hour, 4,000 sq. deg. IM survey called MeerKLASS, matched to the DES footprint, has been proposed), and SKA1-MID (which I’ve spent a lot of time working on; it’s due to switch on around 2020), in addition to existing surveys with GBT and Parkes. If the various hard data analysis challenges can be solved for these experiments (which I think they can be), this will open up several exciting scientific possibilities that are almost unique to IM, like measuring ultra-large scales. And I think this should be recognised as a more promising niche for the technique – BAO detections are a medium-term validation strategy that will likely provide interesting (but not Earth-shattering) science, but ultimately they’re not its raison d’être.

Validating 21cm results – how can you trust the auto-spectrum?

Another thing that provoked much hand-wringing was the difficulty of definitively verifying 21cm auto-spectrum detections. The GBT experiment has been trying to do this, but it’s hard. Perhaps the power spectrum they’ve detected is just systematics? Or maybe they’ve over-subtracted the foregrounds and thus taken some signal with them? They claim to be able to combine upper and lower bounds from their auto- and (WiggleZ) cross-spectra respectively to measure parameters like the HI bias and HI fraction, but I have my reservations. As I said, it’s hard.

In my opinion, we need to wait for experiments with greater data volumes (wider survey areas, higher SNR). Then, we gain the ability to perform a much wider array of null tests than are currently possible with the GBT data. This is what people do to validate any other precision experiment, like Planck. It’s not a silver bullet, sure, but it’ll be a good, informative way to build confidence in any claimed auto-power detection.

So, why worry? Just wait for more data, then do the statistical tests.


Book Review: How Software Works

Another month, another book kindly sent for review from No Starch. This one’s of the more conceptual variety, and sets out to explain – in layman’s terms – the algorithms that are responsible for much of the “magic” of modern technology.


When I was younger, and just getting into computers, I used to spend hours reading second-hand software and hardware manuals. I think I must have read the manual for my old computer’s motherboard 50 times. A kindly network engineer from the PC Pro forums (ah, those were the days) sent me an old 400-page networking manual that I inhaled too. Manuals were the best. Manuals showed me what computers were capable of.

It wasn’t until later that I became aware of where the real action is. Algorithms. Manuals are a fine thing, but they’re typically written at a high level. They tell you what’s happening – and which buttons to press – but, for expediency, often skip over the “how” – the way the mysterious feats that first got me fired up about computers are actually achieved.

This book, How Software Works (by V. Anton Spraul), is all about the “how”. You won’t find any practical manual-type information in here at all, so don’t expect to come out the other side of this book with a finely-honed knowledge of printer troubleshooting or anything like that. No, this is a very pure book that explains, in uncompromisingly non-technical terms, how computers achieve their magic.

Each chapter covers a broad but real-world relevant topic, such as web security, movie CGI, or mapping. After some background on each topic, Spraul sketches out the most important pieces of the algorithmic puzzle needed to produce the “everyday” results we now take for granted in movies, on the web, and in our smartphone apps. This might include a walkthrough of the logic behind a trapdoor function, of the sort that makes public key encryption possible (which in turn makes internet shopping practical). Or perhaps the step-by-step process by which a rendering program builds up a realistic virtual scene in a movie, through ray tracing.
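To give a flavour of the kind of logic involved, the one-way asymmetry at the heart of a trapdoor function can be sketched in a few lines (my own toy illustration, not an example from the book, and with numbers far too small to be secure): the forward direction, modular exponentiation, is fast, while inverting it – the discrete logarithm – requires brute force over the exponent space.

```python
# Toy trapdoor-style asymmetry: computing g^secret mod p is cheap, but
# recovering `secret` from the result needs a search over possible exponents.
p = 2_147_483_647          # a prime modulus (2^31 - 1)
g = 5                      # base
secret = 1_234_567         # the "trapdoor" knowledge

public = pow(g, secret, p) # easy direction: fast modular exponentiation

def discrete_log(base, target, modulus, limit):
    """Brute-force inversion; hopeless when the exponent space is large."""
    value = 1
    for exponent in range(limit):
        if value == target:
            return exponent
        value = (value * base) % modulus
    return None
```

Recovering an exponent this way costs as many multiplications as the exponent itself; with realistic key sizes, the loop would outlast the universe, which is exactly the asymmetry that makes internet shopping safe.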

The writing is very clear and non-technical, almost without exception, and assumes very little prior knowledge. You do not need a technical background to understand this book, but you’ll want to spend some time to follow the examples and ruminate on them a little to really get everything. The examples themselves are plentiful, and include step-by-step illustrations of simplified situations that, when linked together, demonstrate how each algorithm works as a whole.

Given that this is a book on software, it’s slightly disappointing that the presentation is completely “dead-tree traditional”. By this, I mean that there’s no supplementary material in the form of working code snippets that one could play with, or interactive demonstrations. This feels like a missed opportunity, at least for those of us who learn best by tinkering (cf. the excellent W3Schools “Try It Yourself” tutorials). It’d also turn the book into a more direct educational tool, perhaps something that a class could be based on – and there are enough simple web-based programming systems out there to remove much of the burden of having to “teach” programming in the first place. This is more of a wishlist item than a crucially missing piece, however.

Another minor criticism is the length of the book. It would have been nice to see a few more topics covered, or perhaps a little more detail in the final chapters. The material on searching could go into more detail in explaining how web search works, for example, including things like how robots/crawlers and ranking algorithms (e.g. PageRank) actually do their thing. As it is, it feels like the author ran out of steam before getting to the real crux of this topic.

All in all, it’s a very nice book, and I learned a lot about some interesting, highly-relevant techniques that I was only dimly aware existed. The material on encryption in particular outlines a clever and essentially mathematical topic that will speak to those of you who enjoy logic puzzles, for example. I’m not quite sure who the intended audience is for the book as a whole, but it’s definitely something to keep in mind for an aspiring techie – a teenager who’s still reading the manuals, perhaps, and is ready to have their horizons broadened. The mechanically-minded, those with a fundamental curiosity about how things work, will also enjoy it.


Book Review: How Linux Works 2

I received a review copy of How Linux Works 2, by Brian Ward, from the lovely folks at No Starch Press (my publisher) late last year. Inexcusably, it’s taken me until now to put together a proper review; here it is, with profuse apologies for the delay!


How Linux Works 2 is a very nice technical read. I’ve been a user and administrator of Linux systems for over a decade now, and can safely say I learned a lot of new stuff from both angles. Newer users will probably get even more from it – although absolute beginners with less of a technical bent might be better off looking elsewhere.

The book fills something of a niche; it’s not a standard manual-type offering, nor is it a technical system reference. It’s more impressionistic than either of those, written as a sort of overview of the organisation and concepts that go into a generic Linux system, although with specific details scattered throughout that really get into the nuts and bolts of things. If you’re looking for “how-to”-type instructions, you’re unlikely to find everything you need here, and it isn’t a comprehensive reference guide either. But if you’re technically-minded and want to understand the essentials of how most Linux distros work in considerable (but not absolute) depth, with a bit of getting your hands dirty, then it’s a great book to have on your shelf.

Various technical concepts are covered ably and concisely, and I was left with a much better feeling for more mysterious Linux components – like the networking subsystem – than I had before. There are practical details here as well though, and you’ll find brief, high-level overviews of a number of useful commands and utilities that are sufficient to give a flavour for what they’re good for without getting too caught up in the (often idiosyncratic) specifics of their usage.

That said, the author does sometimes slip into “how-to” mode, giving more details about how to use certain tools. While this is fine in moderation, the choice of digression is sometimes unusual – for example, file sharing with Samba is awarded a whole six pages (and ten subsections) of usage-specifics, while the arguably more fundamental CUPS printing subsystem has to make do with less than two pages. The discussion of SSH is also quite limited, despite the importance of this tool from both the user’s and administrator’s perspective, and desktop environments probably could have done with a bit more than a brief single-chapter overview. Still, this book really isn’t intended as a manual, and the author has done well not to stray too far in this direction.

A common difficulty for Linux books is the great deal of variation between distros. Authors often struggle with where to draw the line between complete (but superficial) distro-agnostic generality and more useful, but audience-limiting, distro specifics. How Linux Works succeeds admirably in walking this tightrope, providing sufficient detail to be useful to users of more or less any Linux system without repeatedly dropping into tiresome list-like “distro by distro” discussions. This isn’t always successful – the preponderance of init systems in modern distros has necessitated a long and somewhat dull enumeration of three of the most common options, for example – but HLW2 does much better at handling this than most books I’ve seen. The upshot is that the writing is fluid and interesting for the most part, without too many of the “painful but necessary” digressions that plague technical writing.

Overall, this book is an enjoyable and informative read for anyone interested in, well, how Linux works! You’ll get an essential understanding of what’s going on under the hood without getting bogged down in minutiae – making this a very refreshing (and wholly recommended) addition to the Linux literature.

You can find a sample chapter and table of contents/index on the No Starch website.


NAM 2015 Radio Surveys session

I co-chaired a series of parallel sessions on radio surveys at the 2015 UK National Astronomy Meeting in Llandudno earlier this month. It was a fun session, with lots of nice talks. We’ve now made the talk slides available online – take a look!


Cosmology with the Square Kilometre Array

A large fraction of my time over the last 18 months has been spent working out parts of the cosmology science case for the Square Kilometre Array, a gigantic new radio telescope that will be built (mostly) across South Africa and Australia over the coming decade. It’s been in the works since the early ’90s and – after surviving the compulsory planning, political wrangling, and cost-cutting phases that all Big Science projects are subjected to – will soon be moving to the stage where metal is actually put into the ground. (Well, soon-ish – the first phase of construction is due for completion in 2023.)

Infographic: SKA will have 8x the sensitivity of LOFAR.

A detailed science case for the SKA was developed around a decade ago, but of course a lot has changed since then. There was a conference in Sicily around this time last year where preliminary updates on all sorts of scientific possibilities were presented, which were then fleshed out into more detailed chapters for the conference proceedings. While a lot of the chapters were put on arXiv in January, it’s good to see that all of them have now been published (online, for free). This is, effectively, the new SKA science book, and it’s interesting to see how it’s grown since its first incarnation.

My contribution has mostly been the stuff on using surveys of neutral hydrogen (HI) to constrain cosmological parameters. I think it’s fair to say that most cosmologists haven’t paid too much attention to the SKA in recent years, apart from those working on the Epoch of Reionisation. This is presumably because it all seemed a bit futuristic; the headline “billion galaxy” spectroscopic redshift survey – one of the original motivations for the SKA – requires Phase 2 of the array, which isn’t due to enter operation until closer to 2030. Other (smaller) large-scale structure experiments will return interesting data long before this.

Artist's impression of the SKA1-MID dish array.

We’ve recently realised that we can do a lot of competitive cosmology with Phase 1 though, using a couple of different survey methods. One option is to perform a continuum survey [pdf], which can be used to detect extremely large numbers of galaxies, albeit without the ability to measure their redshifts. HI spectroscopic galaxy surveys rely on detecting the redshifted 21cm line in the frequency spectrum of a galaxy, which requires narrow frequency channels (and thus high sensitivity/long integration times). This is time consuming, and Phase 1 of the SKA simply isn’t sensitive enough to detect a large enough number of galaxies in this way in a reasonable amount of time.

Radio galaxy spectra also exhibit a broad, relatively smooth continuum, however, which can be integrated over a wide frequency range, thus enabling the array to see many more (and fainter) galaxies for a given survey time. Redshift information can’t be extracted, as there are no features in the spectra whose shift can be measured, meaning that one essentially sees a 2D map of the galaxies, instead of the full 3D distribution. This loss of information is felt acutely for some purposes – precise constraints on the equation of state of dark energy, w(z), can’t be achieved, for example. But other questions – like whether the matter distribution violates statistical isotropy [pdf], or whether the initial conditions of the Universe were non-Gaussian – can be answered using this technique. The performance of SKA1 in these domains will be highly competitive.

Another option is to perform an intensity mapping survey. This gets around the sensitivity issue by detecting the integrated HI emission from many galaxies over a comparatively large patch of the sky. Redshift information is retained – the redshifted 21cm line is still the cause of the emission – but angular resolution is sacrificed, so that individual galaxies cannot be detected. The resulting maps are of the large-scale matter distribution as traced by the HI distribution. Since the large-scale information is what cosmologists are usually looking for (for example, the baryon acoustic scale, which is used to measure cosmological distances, is something like 10,000 times the size of an individual galaxy), the loss of small angular scales is not so severe, and so this technique can be used to precisely measure quantities like w(z). We explored the relative performance of intensity mapping surveys in a paper last year, and found that, while not quite as good as its spectroscopic galaxy survey contemporaries like Euclid, SKA1 will still be able to put strong (and useful!) constraints on dark energy and other cosmological parameters. This is contingent on solving a number of sticky problems to do with foreground contamination and instrumental effects, however.
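The resolution trade-off is easy to see in a one-dimensional toy (my own sketch, with arbitrary scales chosen purely for illustration): smoothing a field with a broad beam leaves the long-wavelength modes that cosmologists care about nearly untouched, while erasing the short-wavelength, galaxy-scale structure.

```python
import numpy as np

# Toy 1D "sky": one large-scale mode (wavelength 1024 pixels) and one
# galaxy-scale mode (wavelength 8 pixels), both of unit amplitude.
n = 4096
x = np.arange(n)
field = np.sin(2 * np.pi * x / 1024) + np.sin(2 * np.pi * x / 8)

# Smooth with a Gaussian beam of width ~30 pixels, applied in Fourier space.
sigma = 30.0
k = np.fft.rfftfreq(n)                       # frequencies in cycles/pixel
beam = np.exp(-2 * (np.pi * k * sigma) ** 2) # Fourier transform of the beam
smoothed = np.fft.irfft(np.fft.rfft(field) * beam, n)

# Mode amplitudes after smoothing (normalised so a unit sine gives 1):
spec = np.abs(np.fft.rfft(smoothed)) / (n / 2)
large_amp = spec[n // 1024]  # large-scale mode: survives almost intact
small_amp = spec[n // 8]     # galaxy-scale mode: essentially erased
```

The large-scale mode comes through at nearly full amplitude while the small-scale mode is wiped out, which is why an intensity map that can't resolve individual galaxies can still measure quantities like the baryon acoustic scale.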

The comoving volumes and redshift ranges covered by various future surveys.

The thing I’m probably most excited about is the possibility of measuring the matter distribution on extremely large-scales, though. This will let us study perturbation modes of order the cosmological horizon at relatively late times (redshifts below ~3), where a bunch of neat relativistic effects kick in. These can be used to test fundamental physics in exciting new ways – we can get new handles on inflation, dark energy, and the nature of gravity using them. With collaborators, I recently put out two papers on this topic – one more general forecast paper, where we look at the detectability of these effects with various survey techniques, and another where we tried to figure out how these effects would change if the theory of gravity was something other than General Relativity. To see these modes, you need an extremely large survey, over a wide redshift range and survey area – and this is just what the SKA will be able to provide, in Phase 1 as well as Phase 2. While it turns out that a photometric galaxy survey with LSST (also a prospect for ~2030) will give the best constraints on the parameters we considered, an intensity mapping survey with SKA1 isn’t far behind, and can happen much sooner.

Cool stuff, no?