
Optimisation problems: Choosing a method

In this morning’s Code Coffee meeting, we discussed optimisation methods (for doing things like least-squares minimisation, not for optimising code). This is an extremely common problem in astrophysics – you have some data, and you want to find the model that best fits it. In fact, this is a very common problem in general, and so there are lots of methods out there for handling optimisation. Of course, depending on the type of problem you have, some methods will be better than others. So how do you choose between them? And how do you implement the methods? I’ve summarised some of the discussion we had on these topics below. There’s also a great website with a detailed optimisation guide here.

N.B. I should note that most problems that astrophysicists are interested in involve the optimisation of real functions given data (so-called continuous optimisation). There are vast branches of computer science research concerned with combinatorial optimisation, which I’m not so interested in here.

Choosing which method to use

As I’ve already mentioned, optimisation is a really common problem, so there are very many different optimisation algorithms, most of them with a number of different implementations. Over the course of the meeting, we came up with a basic step-by-step guide to help you choose which method to use:

  1. Is the problem linear? Use a linear method, since these are guaranteed to converge to the right answer (within some tolerance). The Conjugate Gradient method is a good first port of call for general linear problems. If you have a particularly simple problem, you could even directly invert the matrix for your linear operator.
  2. Can the problem be rewritten to be linear? If you can change variables or otherwise re-cast the problem in such a way that it becomes linear, do that and use a linear solver!
  3. Is simple brute-force good enough? Some of the simpler (or at least better-known) optimisation methods are very robust, and some are guaranteed to converge, even for pretty general non-linear problems. If you have a small number of parameters, or not too many data points, it’s often quicker and easier to use one of these. Ryan Houghton recommended four, in order of increasing complexity: Amoeba (the Nelder-Mead downhill simplex method); Levenberg-Marquardt; NNLS (non-negative least-squares); and BVLS (bounded-variable least-squares). The first two are discussed in Numerical Recipes, and the latter two have pretty widely-used implementations.
  4. Can you analytically calculate gradients? If you can analytically/cheaply calculate the Jacobian and/or Hessian matrix of your problem (i.e. the derivatives of the function you’re minimising with respect to the model parameters), you can use a gradient-based method. Providing this extra information can result in substantial increases in the speed and convergence properties of the optimiser, so it’s worth doing, especially for larger problems (see the sketch after this list).
  5. Is your problem computationally intensive/complex? Some problems are just too large or too messy to use any of the methods advocated so far, so you’ll have to look at more sophisticated methods with more complex implementations. Fortunately, there are some nice, general libraries that implement lots of fancy algorithms (e.g. NLopt, or some of the codes here).
  6. Do you have access to a supercomputer? If more sophisticated algorithms are still unsuitable, you might have to bite the bullet and just use a brute-force method and a lot of computer time. This can be nasty, especially if you don’t know how long it will take for the optimiser to converge. In this case, you might want to spend some more time thinking about clever ways to simplify your problem.
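
To make steps 1 and 4 concrete, here’s a minimal Python sketch using NumPy and SciPy (the straight-line and exponential-decay models, and all variable names, are just made up for illustration). scipy.optimize.leastsq wraps MINPACK’s Levenberg-Marquardt routine, and its Dfun argument is how you supply an analytic Jacobian:

import numpy as np
from scipy import optimize

# Step 1: a linear model, y = m*t + c, solved directly by least-squares
t = np.linspace(0.0, 10.0, 50)
y = 2.0*t + 1.0 + np.random.normal(scale=0.5, size=t.size)
A = np.vstack([t, np.ones_like(t)]).T  # design matrix
(m_fit, c_fit), _, _, _ = np.linalg.lstsq(A, y, rcond=None)

# Step 4: a non-linear model, y = a*exp(-b*t), fitted with
# Levenberg-Marquardt, supplying the analytic Jacobian of the residuals
def resid(p, t, y):
    a, b = p
    return y - a * np.exp(-b*t)

def jac(p, t, y):
    a, b = p
    return np.vstack([-np.exp(-b*t), a * t * np.exp(-b*t)]).T

y2 = 3.0*np.exp(-0.5*t) + np.random.normal(scale=0.05, size=t.size)
p_fit, ier = optimize.leastsq(resid, x0=[1.0, 1.0], args=(t, y2), Dfun=jac)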

Interesting implementations

Over the course of the discussion, quite a few nice-looking optimisation libraries were mentioned. I’ve listed a few of them below.

  • NLopt has interfaces for lots of programming languages, and implements a wide range of optimisation algorithms. You can switch between algorithms by simply changing a single line of code.
  • L-BFGS is a limited-memory code for large-scale optimisation problems, with a bound-constrained variant (L-BFGS-B) alongside the basic unconstrained version. Consider using this if your problem has lots of parameters, or you have quite a lot of data. There’s also KNITRO from the same author, which looks more advanced.
  • levmar is a nice code for doing Levenberg-Marquardt optimisation, with support for a number of programming languages.
  • NNLS is a non-negative least-squares implementation, and is available for a number of programming languages commonly used in astrophysics. It’s supposed to have pretty good convergence properties (see the snippet after this list).
  • More algorithms are listed here, and particular implementations of them are either listed here, or are only a Google search away.
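
As a quick illustration of the NNLS entry above, SciPy ships an NNLS routine (a wrapper around the classic Lawson-Hanson Fortran code), so solving a small non-negative least-squares problem from Python is a one-liner; the matrix and data below are made up:

import numpy as np
from scipy import optimize

A = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, -0.5, 0.2])

x, rnorm = optimize.nnls(A, b)  # every element of x is constrained >= 0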

Properties of optimisation algorithms

  • Convergence: Some algorithms are guaranteed to converge, but that doesn’t mean that they will converge to the correct answer!
  • Constrained optimisation: Sometimes it’s necessary to put constraints into the optimisation problem (e.g. you don’t want the mass of your black hole to go negative). You might be able to avoid doing this if you have a good guess for the starting point of the optimisation, so it’s less likely to wander off into a disallowed region of parameter space. If not, there are a number of methods for performing constrained optimisations, but they tend to be reliable only for linear constraints. Word on the street is that codes capable of handling more complicated non-linear constraints are jealously-guarded Wall Street secrets…
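
One common trick for dodging simple positivity constraints is to re-parameterise the problem: fit the log of the parameter instead, so the optimiser can roam freely over the real line while the physical value stays positive. A minimal sketch (the “black hole mass” objective below is entirely made up):

import numpy as np
from scipy import optimize

def chi2(p):
    mass = np.exp(p[0])  # always > 0, whatever the optimiser tries
    return (mass - 3.2e6)**2 / 1e12  # toy objective, minimum at 3.2e6

p_best = optimize.fmin(chi2, x0=[np.log(1e6)])  # Nelder-Mead ("Amoeba")
best_mass = np.exp(p_best[0])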

Thanks go to Mark Walker, Joe Zuntz, Min-Su Shin, Sigurd Naess and Ryan Houghton for contributing greatly to this discussion.


Code Coffee: One-line guides to getting started in astro computing

I was away on my travels last week, so I missed the first Code Coffee of the new term here in Oxford. Joe Zuntz was present to officiate, though, and led a discussion (allegedly) for the benefit of our new doctoral students, who’d just arrived at that point. The result was a collection of “one-line guides” on getting started with astrophysics-style computing and programming, which I’ve copied below. (Be warned: It’s a little Oxford-specific in parts.)

Getting help

Answering Questions

Training

Don’t code maths!

Numerical Methods

General coding advice

Files

  • Don’t invent your own file formats.
  • FITS tables are not a great format. Consider using HDF instead.
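
For instance, writing a NumPy array to an HDF5 file takes only a couple of lines with h5py (one of several Python HDF5 bindings; the file and dataset names here are arbitrary):

import numpy as np
import h5py

data = np.random.random((100, 3))

# Write a dataset to an HDF5 file, then read it back
with h5py.File("catalogue.h5", "w") as f:
    f.create_dataset("positions", data=data)

with h5py.File("catalogue.h5", "r") as f:
    positions = f["positions"][:]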

Compilers

  • GNU: gcc, g++, gfortran (freely-available, cross-platform)
  • Intel: icc, ifort (tend to be faster for some applications)
  • Avoid mixing Intel and GNU compilers, or different versions of the same compiler, when using Fortran: the compiled module (.mod) files they produce are generally incompatible with one another.

Best Practices


Drawing random numbers from a multivariate Gaussian distribution

I like software libraries. A software library means that someone has already done some of the heavy lifting for me. A software library means that I won’t have to reinvent the wheel. It means that I might not have to spend my afternoon fixing bugs and stringently validating my own code*.

It also means that I don’t necessarily have a good idea of what the computer is doing “behind the scenes”; that I’m potentially unaware of the clever (or not) algorithm that is being implemented, and what its limitations (or bugs) are.

For a long time now, I’ve been blithely accepting that numpy.random.multivariate_normal applies some sensible, but mysterious, procedure in order to generate random variables from a multivariate Gaussian (normal) distribution. And it does – I just never tried to unravel said mystery. So, prompted by Dag Sverre, and courtesy of Wikipedia, I present a Python snippet that draws multivariate Gaussians given (a) some way of generating a set of standard univariate Gaussian random numbers, and (b) a Cholesky decomposition routine. In all, it’s about 10% faster than the NumPy routine, which applies a bunch of validity tests that I don’t.

import numpy as np

# Define mean, covariance matrix, and no. of samples required
mean = ... # some array of length N
cov = ... # some positive-definite NxN matrix
Ndraws = 1000

# Do factorisation, cov = L L^T (can store L and use it again later)
L = np.linalg.cholesky(cov)

# Get N*Ndraws Gaussian random variables (mean=0, variance=1),
# one column per draw
N = len(mean)
norm = np.random.normal(size=(N, Ndraws))

# Construct final set of random numbers (with correct mean);
# each column of rand is one sample from N(mean, cov)
rand = mean[:, None] + np.dot(L, norm)
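
As a quick sanity check (once the placeholders above have been filled in with a real mean and covariance), the sample statistics should approach the inputs as Ndraws grows:

# Agreement improves as Ndraws increases
print(np.mean(rand, axis=1))  # should be close to mean
print(np.cov(rand))           # should be close to cov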

Of course, this now introduces two new mysteries: How do you efficiently implement Cholesky decomposition, and how do you generate univariate Gaussian random numbers? Those are exercises for another day, I think.

* It depends on how sloppy I’m feeling; of course, you should validate everything that matters, regardless of who wrote it.


Weird gfortran compiler error of the week

When faced with the following compiler error from gfortran, the correct response is apparently to run make clean and then make again:

Error: More actual than formal arguments in procedure call at (1)

God knows what was actually going wrong; stale compiled module (.mod) files from an earlier build seem the likeliest culprit.


Getting codes in different languages to interact

In a Code Coffee meeting last month (9th July), we discussed ways of getting codes in different languages to talk to one another. This is quite a common task; for example, if you need your modern data analysis pipeline to interact with some legacy code (normally Fortran!), or if you have a mostly Python code that has a computationally-intensive bit that would be faster written in C. Our focus was on Python, C, and Fortran, since they seem to be the most commonly used languages in the group.

People brought a number of different solutions with them. Some were very easy to implement, requiring just a simple few-line block in your Python code to speed up a loop. Others were quite a bit more complicated, requiring extra work on building modules and getting your hands dirty with memory management, but having the virtue of being much more flexible. The methods we discussed are summarised below, in increasing order of complexity. If you skip to the bottom, there’s also a link to a tarball with some examples.

(Edit: Be sure to check out some of the interesting comments below, too. They mostly concern Cython.)

Weave

Weave is a way of writing C code “inline” – directly into a Python script.  Python variables are automatically made available to the C code (you sometimes need to do a bit of extra work to cast them to a specific type, but not much), which makes it nice and easy to use, with little boilerplate required in the C code. Essentially, you can just stick a fast, few-line C loop in the middle of a Python function, then call Weave and it’ll handle compilation and shuffling data between Python and C for you automagically, without requiring any complicated wrapper code or makefile magic. There’s also a “blitz” mode, which takes a Python/NumPy expression and converts it into C++ in the background, which can also help speed things up (although it’s generally slower than the inline mode).

Unfortunately, the documentation isn’t great, but there are plenty of examples out there (SageMath have a couple of very clear ones). It should definitely be your first port of call if all you want to do is speed up part of your Python code. It’s not so good if you want to interface with legacy code, or have something more complicated that you need to do in C (e.g. something that’s split into different functions). See also Cython, which seems to be a bit more flexible, but not too much more difficult to use. There’s a nice speed comparison with NumPy, Weave blitz and inline, and MATLAB here, and a Cython/Weave example (with benchmarks) here. There’s also an interesting blog post on getting GSL to work with Weave/Cython.
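
Here’s a minimal inline example (note that Weave lived in scipy.weave, is Python 2 only, and has since been removed from SciPy). For a NumPy array a, the generated C code automatically gets a data pointer a and a dimensions array Na:

import numpy as np
from scipy import weave  # Python 2 only; removed from modern SciPy

a = np.arange(10, dtype=np.float64)
total = np.zeros(1)

# Plain C, compiled and cached automatically on the first call;
# Na[0] holds the length of the array 'a'
code = """
for (int i = 0; i < Na[0]; i++) {
    total[0] += a[i] * a[i];
}
"""
weave.inline(code, ['a', 'total'])
print(total[0])  # sum of squares, computed in C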

f2py

f2py is a way of automatically generating Python modules from Fortran code. It seems pretty easy to use – all you need to do is run it over your Fortran source code, and it will produce a compiled module that you can simply import into your Python code and use as you would any other. The f2py program scans the Fortran code and figures out how to deal with function arguments and convert between types itself, which is very handy. It even generates function documentation (docstrings) for the Python module, telling you what format the function arguments need to be in. One problem with f2py, pointed out by Ryan, is that it tends to fail silently, making it difficult to debug! It’s apparently quite robust though, so debugging probably won’t be necessary that often. The documentation looks pretty good, and you can find other resources here. (N.B. They ask you to cite their paper if you use f2py in research code.)
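
The workflow looks something like this (a deliberately trivial sketch, with arbitrary file and module names). Given a Fortran file mult.f90 containing:

! mult.f90 -- a trivial example subroutine
subroutine mult(a, b, c)
    double precision, intent(in) :: a, b
    double precision, intent(out) :: c
    c = a * b
end subroutine mult

running f2py -c -m mult mult.f90 builds a shared library that imports like any other Python module. Note that f2py turns intent(out) arguments into return values:

import mult
print(mult.mult(2.0, 3.0))  # --> 6.0
print(mult.mult.__doc__)    # the auto-generated docstring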

Python ctypes

ctypes is a Python module for handling calls to shared libraries (whether they are written in C/C++ or Fortran). The syntax for loading libraries and calling functions is exceptionally simple – little different from importing a native Python module and calling functions in that. However, there’s some added complexity that comes as a result of having to convert between C and Python data types in function arguments. For the most part, all you need to do is call a simple ctypes helper function to make the conversion; e.g. c_double(42.3) will give you something that can be passed to a function that expects a C double as an argument. In more recent versions of NumPy, ndarrays also have ctypes conversion methods (see the ndarray.ctypes.data_as() method; you probably want to get a pointer to the array, for which you should use ctypes.POINTER), although there are also some subtleties to be aware of in terms of how Python/NumPy and C/C++ or Fortran store multidimensional arrays in memory (see the order property of NumPy arrays, for example). The Sage website has a good tutorial on working with ndarrays in ctypes. A slightly nicer way of handling type conversions is to write a small Python wrapper for the C function (or whatever), and use the argtypes attribute to specify what arguments the function accepts. A comprehensive tutorial can be found here, which explains most of the workings of ctypes, and which has good examples.
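
A minimal sketch, calling cos() from the system C maths library and grabbing a C pointer to a NumPy array as described above (library name resolution is platform-dependent, hence find_library):

import ctypes
from ctypes.util import find_library
import numpy as np

# Load the C maths library and declare the signature of cos()
libm = ctypes.CDLL(find_library("m"))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # --> 1.0

# Get a C pointer to a NumPy array's data, e.g. for a function
# expecting a double*
arr = np.arange(5, dtype=np.float64)
ptr = arr.ctypes.data_as(ctypes.POINTER(ctypes.c_double))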

The tutorial on the Python website mostly concerns calling existing system libraries, but of course you can write your own libraries too. For C/C++ code compiled with GCC, this is pretty simple – all you need to do is add a couple of compiler flags (see Section 3.4 of the TLDP Library Program tutorial). Of course, your C code should be set out like a library first – just compiling something that’s meant to be a standalone program as a library won’t work too well, and you’ll need to write header files etc. That’s easy to do, and existing “non-library” code can often be refactored into a working library with very little effort. There’s a very nice, simple tutorial on writing and compiling libraries here. Note that the libraries don’t need any “special treatment” to interface with Python using ctypes – they’re just standard C/Fortran libraries, and can happily be called by other C/Fortran programs too.

Python C API

Big chunks of Python are written in C, so it should come as no surprise that there is a C library which provides access to Python data types and functions. Access it by including Python.h in your C code. I’ll keep the discussion of this one at a deliberately high level, because it’s significantly more complicated than the others, and doesn’t seem to be fantastically well documented (although see the official Python docs for it here and here). This method allows you to write new Python modules directly in C; the end result will be something that looks and behaves exactly like a standard Python module (of the sort that you may have written before, in Python), which requires no fiddling about with data types or what have you in the Python code that uses it. That’s because all of the fiddling around is done in the C code! I find this quite a bit trickier to write – your C files need some boilerplate to get themselves (and their constituent functions) recognised by the Python interpreter, and there’s a bit of makefile magic required to actually build and install the module too. Plus, you have to figure out how to handle type conversions between native C types and Python types (mostly PyObjects). It’s not super-difficult, but it is fiddly in places. Take a look at this nice tutorial by Dan Foreman-Mackey (and another one from JPL which looks more specifically at handling NumPy arrays).

Some of the more confusing issues that you’re likely to run into are to do with reference counting. This is a reasonably technical computer science concept, which has to do with how Python stores and labels variables. (Recall that, in Python, the same data can be referred to by multiple variable names.) It’s important to make sure your C code properly tracks references to Python objects as it manipulates them; otherwise, you’re likely to run into a host of problems, from memory leaks to weird crashes. If you’re going to use the Python C API, I strongly recommend that you invest a good amount of time in understanding how this works.

All in all, I think the C API is only the way forward if you’re specifically setting out to write a new Python module for use elsewhere; if all you have is some existing C code that you want to quickly plug in to some Python code, it’s going to be a lot of hassle. Still, once you’ve set up a first project and used the C API a little, it’s a lot quicker to get up and running with the next one.

Summary

All in all, ctypes is probably the best route to go for most people, unless you have something particularly trivial you want to speed up. It’s fast (native C function calls), easy enough to set up, and conceptually simple to understand. It’s not quite as “automatic” as Weave/Cython or f2py, but you’ll probably end up with cleaner, more flexible, and more robust code for your troubles. And it’s much easier to work with than the Python C API.

The Performance Python page has a nice comparison of some of the methods mentioned above. A tarball containing example code for most of the above methods can be downloaded from here; thanks to Ryan Houghton, Neale Gibson and Joe Zuntz for providing these.


Book review: The Linux Command Line, by William E. Shotts

No Starch Press kindly sent me a copy of The Linux Command Line to review a little while back. I already posted a review on Amazon, but figured that the book might be of interest to people here, too. Read on for a fuller, more detailed review.

[Cover image: The Linux Command Line]

Although point-and-click graphical interfaces may initially seem more intuitive, many tasks are much better suited to command line applications. I first started using the command line mostly out of necessity, back in the days when desktop Linux was still rather unfriendly to new users. After a little while, I had gained enough familiarity with the system to start cooking up useful combinations of commands for myself, without having to constantly copy them from books or webpages. I soon found myself preferring to use the command line for many tasks – it became quicker and easier than using a GUI. I even gained a rudimentary understanding of how the command line itself works, and started using a few of its more advanced features. And that’s the stage I’m at now – I know how to use a range of useful commands, and can achieve a lot using the command line in an efficient and satisfying manner. But anything beyond the basics of powerful concepts like regular expressions, commands like sed and find, and the ins and outs of Bash scripting is still a bit of a mystery to me. If I need to do anything fancy, I fall back on doing it manually, or else write a Python script, simply because I’m more comfortable doing things that way. “The Linux Command Line” is a very useful book, in that it has helped me to take my command line skills to the next level, making it much less of a necessity to develop my own tools.

Something for every eventuality

As with most books from No Starch*, the book is beautifully presented and well written. All of the common command-line tasks are covered, many of them in detail, and there are a number of nice, friendly introductory chapters (pdf) for those who have never used a command line before. But the real value, for me at least, came from the treasure trove of more obscure commands that the book contains. UNIX is a very mature system, and practically every computer-based task you can think of can be performed using existing command line tools. Indeed, on first opening the book to a random page, I immediately found a tool to automatically format text files in a rather specific way, a task that I had been wasting quite a bit of time doing manually (and hadn’t yet got around to automating). Even old hands will be able to find something new in here, such is the diversity of the topics covered.

Other highlights include a guide to programming in Bash, which cuts through the quirks of the scripting language in a rather transparent manner, and a number of chapters on the common system admin tasks that occupy most people’s command line time. The latter part is slightly hampered by being rather distro-specific in places, choosing only to cover Debian (Ubuntu) and Red Hat (Fedora) where non-generic commands must be used. A couple of other distros could have been added to the list without too much fuss, allowing the book to serve as a useful cross-distro reference.

Covering all the bases

And that brings us to one of the difficulties with books like this: completeness. The cover’s promise of a complete introduction is a bold one, and Shotts can be forgiven for not providing a comprehensive guide to every last command; there are just so many of them, and they can get quite complex. Nevertheless, there are some interesting omissions here. Terminal-based text editors are extremely advanced, offering real productivity gains for programmers and other would-be command line users, but they go largely unmentioned. While a comprehensive introduction to emacs is certainly outside the scope of this book, it could at least have been mentioned! Only a brief introduction to vim is included, which suffices for basic text editing tasks. Additionally, important networking tools like SSH, which are vital for remote working and system administration, are covered only very briefly. In an ideal world, I would have liked to have seen a whole chapter with detailed information on SSH, tunnelling, and the like. This lack of total completeness, understandable though it may be, will leave some readers a little disappointed. Still, the majority of topics are covered in suitable depth to satisfy most readers.

It’s probably best to treat this book as an advanced introduction – reading it will introduce you to a whole range of useful tools, which you can then go on to learn more about using specialised documentation, or more focused technical manuals. The Linux Command Line will get you started, and quite a lot more, but you’ll eventually need to move to something more specific if you really need to delve into the murkier recesses of a certain command.

Verdict

If you would like to start using the command line, improve your existing skills, or simply want to discover tools that you were never even aware existed, this book has everything you need, and I wholly recommend it. If you want to learn about the specifics of some particular command line tool, you’ll at least get a good introduction here, but you’ll eventually have to read its manual all the same.

* Including my own; I’ve authored two titles with them, and love the house style to pieces.


Portability issues with floating point exception handling

Well, this is a fun one for a balmy Friday afternoon – it turns out that there are somewhat subtle differences between fenv.h on Linux (with glibc) and on Mac OS X. A number of functions defined in fenv.h control the handling of floating point errors, and many of them are specified by the C99 standard. However, glibc defines some useful extra functions, like feenableexcept(), which are non-standard.

So, I’m looking at someone else’s code, developed on a Linux machine, which needs to enable/disable floating point exceptions at various junctures, but which I need to compile on a Mac that I’m using as a test rig (it has 8 cores, which is handy). As you can see here, fenv.h on Mac OS X is missing feenableexcept(), which is rather irksome. Hmm.

StackOverflow to the rescue: There’s a thread with a few bits of advice on how one can replace these functions on Mac OS X. I’ll try the xmmintrin.h one first; hardly ideal, but I’m hoping it will work so I can go to the pub…

Update: It turns out that there’s quite a nice implementation of glibc’s floating point exception handling functions that’s portable to Mac OS X, written by David N. Williams. See this update for more information.