This is a bilingual blog of the members of the ADAMIS team at Laboratoire APC and invited guests. We comment on selected papers and events that we have found interesting and that explore, or are relevant to, the interface between physics, cosmology, applied math, statistics, and numerical algorithms.

The opinions expressed in this blog are those of their authors and reflect neither the position of the ADAMIS group as a whole nor that of Laboratoire APC.

Sunday, September 5, 2010

Adapt or get obsolete [arXiv:1008.4623, arXiv:1007.1660, arXiv:1004.2503, & others]

Life in science is clearly dangerous, with "publish or perish" and "adapt or get obsolete" prominently featured as guiding principles. And a change may be upon us once more, potentially a big one, set to test the adaptation skills of the scientific community yet again. Just when we had gotten comfortable with MPI, OpenMP, threads, latencies, etc. ... (sigh).

For years, technological development has been behind much of the progress in science. Not only at the experimental or observational level, but also in theoretical investigations or, closest to our ADAMIS hearts, in data analysis. More powerful computers and better, more sophisticated software have both changed the way science is done. And they keep on doing so.

But the direction of technological progress is driven by many forces, and only weakly by the needs of science itself - or at least by its 'fundamental' part. Science then has to be opportunistic, and for any new technology looming on the horizon the question is to what extent, if at all, it will have an important impact on science. A more drastic case can be imagined without much difficulty: a technology generally acknowledged to be of great service to science gets superseded by another simply because, say, the market winds blow the other way. Then scientists may indeed face the "adapt or get obsolete" rule in its grimmest rendition.

And this may well be the case right now.

Given the preeminent role of computers in present-day science, any paradigm change in computing is likely to have a great impact on science itself. And such a change already seems to be in the works, commonly referred to as the multi-core future (or problem).

GPUs, Graphics Processing Units, are part of it. They are being heralded as one of the next big things in scientific computing. Quick and cheap, though not very smart, they have been developed on the shoulders (and from the pockets) of ever faithful generations of computer game aficionados, and now seem predestined to end up as part of the next generation of supercomputers. On paper their specs are simply overwhelming: a single chip of the latest nVIDIA GPU, the so-called Fermi, boasts over a teraflop of peak performance! But then the question is how much of that can realistically be used for our benefit, and what the price to pay would be. (And becoming a game developer is not a price many of us are ready to accept ;-))
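For a taste of what that price looks like in practice, here is a minimal sketch of a CUDA kernel (assuming an nVIDIA card and the CUDA toolkit; the kernel name and sizes are purely illustrative): thousands of threads each apply one simple operation to one element of a large array, the regime GPUs are built for.

```c
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Each thread scales a single array element: simple work, abundant data. */
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)
        x[i] *= a;
}

int main(void)
{
    const int n = 1 << 20;                          /* ~10^6 elements */
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    /* Launch ~4096 blocks of 256 threads each. */
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("x[0] = %f\n", h[0]);                    /* expect 2.0 */

    cudaFree(d);
    free(h);
    return 0;
}
```

Not quite writing a video game, then, but a different way of thinking all the same: the programmer decomposes the problem into thousands of tiny, independent pieces of work rather than one sequential recipe.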

As usual, the best way to answer that question is to give it a try. Heterogeneous computing platforms, as they are commonly called, are indeed at the heart of many projects undertaken by computer scientists, and computational scientists, including astrophysicists, are joining their ranks as well. As a sign of the trend, I have spotted at least three papers on the arXiv this summer, and know of a few more on-going investigations, including the ones ADAMIS is involved in.

The papers I am referring to (arXiv:1008.4623, arXiv:1007.1660, and arXiv:1004.2503) are worth browsing from the above-mentioned perspective, even though the only answer at this stage is the one one would naively expect: it all depends. On the specific problem at hand, on the algorithms employed, and on the programmer's skills. In some cases, in particular those where the data are abundant and the operations simple, a GPU can provide a significant performance gain (in the user's wall-clock time) over a standard processing unit (CPU). In fact, thanks to GPUs, Aubert and Teyssier cut the runtime of their specific application by a factor of nearly 100. For other cases, however, the answer is more complex. In particular, some of the standard signal processing techniques so ubiquitous in astrophysical data analyses, say Fast Fourier Transforms, may look doomed from the outset to perform no better than on standard processors. The first attempts indeed seem to confirm that. But then again, maybe just more work is needed ...
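To see where the trouble hides, here is a rough sketch of offloading a single FFT to the GPU with nVIDIA's cuFFT library (error checking omitted; the function name is just for illustration). The transform itself is fast, but note the two memory copies bracketing it: for all but the largest transforms, shuttling data across the PCI bus can eat up the gain.

```c
#include <cuda_runtime.h>
#include <cufft.h>

/* Forward 1-D complex-to-complex FFT of n points, computed on the GPU. */
void fft_on_gpu(cufftComplex *h_data, int n)
{
    cufftComplex *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(cufftComplex));
    cudaMemcpy(d_data, h_data, n * sizeof(cufftComplex),
               cudaMemcpyHostToDevice);                 /* host -> device */

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);                /* 1-D C2C plan, batch of 1 */
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);  /* in-place forward FFT */

    cudaMemcpy(h_data, d_data, n * sizeof(cufftComplex),
               cudaMemcpyDeviceToHost);                 /* device -> host */

    cufftDestroy(plan);
    cudaFree(d_data);
}
```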

The jury is thus still out on how beneficial GPU/CPU platforms can be for our research. No doubt, though, that life will get a bit more complicated for us; but then all the worries and whining may be completely useless, as this may well be a case of the "adapt or get obsolete" rule.
