
NumPy Tip: The index of the array element nearest to some value

Courtesy of Stack Overflow, here’s a little one-liner that I use surprisingly frequently. Suppose you’ve been using an ordered array of numbers arr to do some interpolation and, now that you have the interpolated value v, you want to return only the slice of the array that has arr[i] < v. To do this, you need to find the index of the array element whose value is closest to v, like so:

idx = (np.abs(arr - v)).argmin()

Simple. This takes the deviation between each array element and the value, takes its absolute value, and then returns the index of the smallest element of the resulting array. That element is, of course, the one closest to v. You can then slice arr using idx, as you would with any other list or array.
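
Here’s a quick self-contained demonstration (the array and value are made up for illustration):

import numpy as np

arr = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # ordered array
v = 1.2                                     # some interpolated value

idx = (np.abs(arr - v)).argmin()
print(idx, arr[idx])     # -> 2 1.0

# Slice off everything up to (and including) the nearest element.
# If you strictly need arr[i] < v, check whether arr[idx] < v first.
print(arr[:idx + 1])     # -> [ 0.   0.5  1. ]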


NumPy tip: Turn on error reporting when debugging

It’s been a relatively varied week so far. We got back (and discussed) the referee report for our recent paper on proving homogeneity with a generalised kinematic Sunyaev-Zel’dovich effect, I sent off a couple of comments on a nice draft paper by some collaborators of mine, and we’ve had a busy couple of schools outreach evenings. I’m also now a trained STEM Ambassador. But most of my time this week has been sunk into getting a couple of NumPy/SciPy-based codes working properly.

I’ll keep it short and sweet: If you’re writing a numerical code in Python (integrating an ODE, or solving some equation with a root finder, or whatever), you might like to turn on floating point error reporting while you debug. Normally, NumPy has the good grace to suppress floating point errors – who wants to write an error handler for every time you have 0/0 in some plotting code? – but there are plenty of applications where you want to catch NaNs and infs as soon as they happen rather than waiting until the end of the calculation to inspect the result. Yeah, I’m looking at you, scipy.integrate.odeint(). Well, it turns out that NumPy has a nice and easy convenience function to control floating point error reporting: numpy.seterr(). Call it at the very top of your code, like this:

numpy.seterr(all='raise')

That’s all you need. Now, whenever a NumPy operation hits a floating point error condition (a divide-by-zero, overflow, underflow, or an invalid operation like 0/0), it’ll raise a FloatingPointError immediately, and you can have fun debugging the problem. Don’t forget to remove it when you’ve fixed everything up.
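
To see what that buys you, here’s a minimal sketch:

import numpy as np

np.seterr(all='raise')  # raise FloatingPointError instead of quietly returning nan/inf

a = np.array([1.0, 0.0])
try:
    print(a / a)  # the second element is 0.0/0.0, an invalid operation
except FloatingPointError as err:
    print('Caught:', err)

If you only want the strict checking around one suspect block of code, the np.errstate context manager does the same job and cleans up after itself: with np.errstate(all='raise'): ...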

As usual, I can’t believe how long it took me to find this.


Buchert deceleration parameter doesn’t care about the sign of the expansion rate

This week’s task: debugging some code that calculates the Buchert spatial average in LTB models. It’s a Python code, using my homebrew LTB background solver (also in Python). I’m using the results reported in a few papers to help debug my code, but I’ve run into problems with reproducing one model in particular (model 8 from Bolejko and Andersson 2008, an overdensity surrounded by vacuum). Hmmm.

I’ll spare you the gory details, but one potential problem was that I might have used the wrong sign for the transverse Hubble rate. The model, as specified in the paper, gives no clue as to the sign of the Hubble rates (i.e. whether the overdense region is in a collapsing or expanding phase), only specifying a density and spatial curvature profile. In the process of constructing the model, you need to take the square root of the LTB “Friedmann” equation, and of course there is a freedom in which sign of the root you take. Out of habit with LTB models, I was choosing the positive sign. So would choosing the negative sign resolve the discrepancy I was seeing between my code and the Bolejko and Andersson paper?
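
For concreteness, here’s the sign freedom written out (standard LTB notation with \Lambda = 0; conventions differ between papers):

\dot{R}^2(t, r) = \frac{2M(r)}{R} + 2E(r) \quad \Longrightarrow \quad \dot{R} = \pm \sqrt{\frac{2M(r)}{R} + 2E(r)}, \qquad H_\perp \equiv \frac{\dot{R}}{R}

Either root is a valid solution of the field equations; the choice fixes whether the shell at radius r is expanding or collapsing.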

As it turns out: No. I’ll have to keep trying. But it did lead to what I thought was an interesting little result: the Buchert-averaged hypersurface deceleration parameter, usually written q_\mathcal{D}, is invariant under \Theta \mapsto - \Theta, where \Theta is the expansion scalar of the dust congruence. This means that it doesn’t care whether your structures are collapsing or expanding, as long as the density profile and the variance of the Hubble rates are the same. This is pretty trivial by inspection of the general form of the expression for q_\mathcal{D} (written out below), but it hadn’t crossed my mind before.
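
To make “by inspection” concrete, here’s the standard form for irrotational dust with \Lambda = 0 (following Buchert 2000), with \langle \cdot \rangle_\mathcal{D} denoting the spatial average over the domain \mathcal{D}:

q_\mathcal{D} = -\frac{1}{H_\mathcal{D}^2} \frac{\ddot{a}_\mathcal{D}}{a_\mathcal{D}} = \frac{1}{H_\mathcal{D}^2} \left( \frac{4 \pi G}{3} \langle \rho \rangle_\mathcal{D} - \frac{\mathcal{Q}_\mathcal{D}}{3} \right), \qquad \mathcal{Q}_\mathcal{D} = \frac{2}{3} \left( \langle \Theta^2 \rangle_\mathcal{D} - \langle \Theta \rangle_\mathcal{D}^2 \right) - 2 \langle \sigma^2 \rangle_\mathcal{D}, \qquad H_\mathcal{D} = \tfrac{1}{3} \langle \Theta \rangle_\mathcal{D}

Every appearance of \Theta is quadratic (the shear \sigma_{ab} also flips sign with the congruence, but only enters through \sigma^2), so \Theta \mapsto -\Theta leaves H_\mathcal{D}^2, \mathcal{Q}_\mathcal{D}, and therefore q_\mathcal{D} unchanged.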