% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its execution
is discussed here in more detail. This should provide enough background
information to fully understand what \duchamp is doing and what all the
output information is. For those interested in the programming side of
things, \duchamp is written in C/C++ and makes use of the \textsc{cfitsio},
\textsc{wcslib} and \textsc{pgplot} libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored as an
array in a special C++ class. This class keeps track of the list of detected
objects, as well as any reconstructed arrays that are made (see
\S\ref{sec-recon}). The World Coordinate System (WCS)\footnote{This is the
information necessary for translating the pixel locations to quantities such
as position on the sky, frequency, velocity, and so on.} information for the
cube is also obtained from the FITS header by \textsc{wcslib} functions
\citep{greisen02, calabretta02}, and this information, in the form of a
\texttt{wcsprm} structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection with the
\texttt{subsection} parameter and setting \texttt{flagSubsection = true} --
this can be a good idea if the cube has very noisy edges, which may produce
many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The first is
the generalised form \texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by
the \textsc{cfitsio} library. This has one set of colon-separated numbers for
each axis in the FITS file. In this manner, the x-coordinates run from
\texttt{x1} to \texttt{x2} (inclusive), with steps of \texttt{dx}. The step
value can be omitted, so a subsection of the form
\texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp does not make
use of any step value present in the subsection string, and any that are
present are removed before the file is opened. If the entire range of a
coordinate is required, one can replace the range with a single asterisk,
\eg \texttt{[2:50,2:50,*]}. Thus, the subsection string \texttt{[*,*,*]} is
simply the entire cube.
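To give a feel for how such a subsection string behaves, the minimal sketch
below (illustrative only, and not code taken from the \duchamp source) opens
just the requested pixels of a cube by appending the section to the filename,
using the standard \textsc{cfitsio} ``extended filename'' syntax; the
filename and the section shown are arbitrary examples.
\begin{verbatim}
#include <fitsio.h>
#include <stdio.h>

int main(void)
{
  fitsfile *fptr;
  int status = 0;

  /* Open only pixels [2:50,2:50,*] of the cube -- cfitsio treats the
     bracketed section as part of its extended filename syntax. */
  if (fits_open_file(&fptr, "image.fits[2:50,2:50,*]",
                     READONLY, &status)) {
    fits_report_error(stderr, status);
    return status;
  }

  /* ... read the (sub-)image as usual, e.g. with fits_read_pix() ... */

  fits_close_file(fptr, &status);
  return status;
}
\end{verbatim}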
Note that the pixel ranges for each axis start at 1, so the full pixel range
of a 100-pixel axis would be expressed as 1:100. A complete description of
this subsection syntax can be found at the \textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.

Making full use of the subsection requires knowledge of the size of each of
the dimensions. If one wants to, for instance, trim a certain number of
pixels off the edges of the cube, without examining the cube to obtain the
actual size, one can use the second form of the subsection string. This just
gives a number for each axis, \eg \texttt{[5,5,5]} (which would trim 5 pixels
from the start \emph{and} end of each axis). The two types of subsections can
be combined, \eg \texttt{[5,2:98,*]}.

Typically, the units of pixel brightness are given by the FITS file's BUNIT
keyword. However, these units may be unwieldy (for instance, the units may be
Jy/beam while the values are only a few mJy/beam). It is therefore possible
to nominate new units, to which the pixel values will be converted, by using
the \texttt{newFluxUnits} input parameter. The units must be directly
translatable from the existing ones -- for instance, if BUNIT is Jy/beam, you
cannot specify mJy; it must be mJy/beam. If an incompatible unit is given,
the BUNIT value is used instead.

\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the execution and
efficiency of \duchamp (their use is optional, governed by the relevant flags
in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels are used
to pad it out to a rectangular shape. The value of these pixels is given by
the FITS header keywords BLANK, BSCALE and BZERO. While these pixels make the
image a nice shape, they take up unnecessary space in memory, and so to
potentially speed up the processing we can trim them from the edges. This is
done when the parameter \texttt{flagTrim = true}. If the above keywords are
not present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by using
the subsection option).

The amount of trimming is recorded, and these pixels are added back in once
the source-detection is completed (so that quoted pixel positions are
applicable to the original cube). Rows and columns are trimmed one at a time
until the first non-BLANK pixel is reached, so that the image remains
rectangular. In practice, this means that there will be some BLANK pixels
left in the trimmed image (if the non-BLANK region is non-rectangular).
However, these are ignored in all further calculations done on the cube.

\secC{Baseline removal}

Second, the user may request the removal of baselines from the spectra, via
the parameter \texttt{flagBaseline}. This may be necessary if there is a
strong baseline ripple present, which can result in spurious detections at
the high points of the ripple. The baseline is calculated from a wavelet
reconstruction procedure (see \S\ref{sec-recon}) that keeps only the two
largest scales. This is done separately for each spatial pixel (\ie for each
spectrum in the cube), and the baselines are stored and added back in before
any output is done.
In this way the quoted fluxes and displayed spectra are as one would see from
the input cube itself -- even though the detection (and reconstruction, if
applicable) is done on the baseline-removed cube.

The presence of very strong signals (for instance, masers at several hundred
Jy) could affect the determination of the baseline, and would lead to a large
dip centred on the signal in the baseline-subtracted spectrum. To prevent
this, the signal is trimmed prior to the reconstruction process at some
standard threshold (at $8\sigma$ above the mean). The baseline determined
should thus be representative of the true, signal-free baseline. Note that
this trimming is only a temporary measure which does not affect the
source-detection.

\secC{Ignoring bright Milky Way emission}
\label{sec-MW}

Finally, a single set of contiguous channels can be ignored -- these may
exhibit very strong emission, such as that from the Milky Way as seen in
extragalactic \hi cubes (hence the references to ``Milky Way'' in relation to
this task -- apologies to Galactic astronomers!). Such dominant channels will
produce many detections that are unnecessary, uninteresting (if one is
interested in extragalactic \hi) and large (in size and hence in memory
usage), and so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW} parameter, and
the exact channels concerned can be set by the user (using \texttt{maxMW} and
\texttt{minMW} -- these give an inclusive range of channels). When employed,
these channels are ignored for the searching, and the scaling of the spectral
output (see Fig.~\ref{fig-spect}) will not take them into account. They will
be present in the reconstructed array, however, and so will be included in
the saved FITS file (see \S\ref{sec-reconIO}). When the final spectra are
plotted, the range of channels covered by these parameters is indicated by a
green hashed box. Note that these channels refer to channel numbers in the
full cube, before any subsection is applied.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the \atrous
wavelet procedure. A good description of the procedure can be found in
\citet{starck02:book}. The reconstruction is an effective way of removing a
lot of the noise in the image, allowing one to search reliably to fainter
levels, and reducing the number of spurious detections. This is an optional
step, but one that greatly enhances the source-detection process; the
trade-off is that it can be relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows (a simplified code
sketch of this loop is given after the list):
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for details
  on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the requested
  threshold (given by \texttt{snrRecon} as the number of $\sigma$ above the
  mean and adjusted to the current scale -- see Appendix~\ref{app-scaling}),
  add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note that
  this step provides the name of the procedure\footnote{\atrous means ``with
  holes'' in French.}, as gaps or holes are created in the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array as
  the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the reconstructed
  array. This provides the ``DC offset'', as each of the wavelet coefficient
  arrays will have zero mean.
\end{enumerate}
The range of scales at which the selection of wavelet coefficients is made is
governed by the \texttt{scaleMin} and \texttt{scaleMax} parameters. The
minimum scale used is given by \texttt{scaleMin}, where the default value is
1 (the first scale). This parameter is useful if you want to ignore the
highest-frequency features (\eg high-frequency noise that might be present).
Normally the maximum scale is calculated from the size of the input array,
but it can be specified by using \texttt{scaleMax}. A value $\le0$ will
result in the use of the calculated value, as will a value of
\texttt{scaleMax} greater than the calculated value. Use of these two
parameters can allow searching for features of a particular scale size, for
instance narrow absorption features.

The reconstruction has at least two iterations. The first iteration makes a
first pass at the wavelet reconstruction (the process outlined in the eight
steps above), but the residual array will likely have some structure still in
it, so the wavelet filtering is done on the residual, and any significant
wavelet terms are added to the final reconstruction. This step is repeated
until the change in the measured standard deviation of the background (see
note below on the evaluation of this quantity) is less than some fiducial
amount.

It is important to note that the \atrous decomposition is an example of a
``redundant'' transformation. If no thresholding is performed, the sum of all
the wavelet coefficient arrays and the final smoothed array is identical to
the input array. The thresholding thus removes only the unwanted structure in
the array.

Note that any BLANK pixels that are still in the cube will not be altered by
the reconstruction -- they will be left as BLANK so that the shape of the
valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good estimators of
the underlying mean and standard deviation (or rms) of the background noise
distribution. The methods used to estimate these quantities are detailed in
\S\ref{sec-stats} -- the default behaviour is to use robust estimators, to
avoid biasing due to bright pixels.
When thresholding the different wavelet scales, the value of the rms as
measured from the wavelet array needs to be scaled to account for the
increased amount of correlation between neighbouring pixels (due to the
convolution). See Appendix~\ref{app-scaling} for details on this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set using the
\texttt{snrRecon} parameter, and is given as a multiple of the rms (estimated
by the MADFM) above the mean (which for the wavelet arrays should be
approximately zero). Several other parameters can also be altered to affect
the outcome of the reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be altered,
however, using the parameter \texttt{reconDim}. If set to 1, the cube is
reconstructed by considering each spectrum separately, whereas
\texttt{reconDim=2} means the cube is reconstructed by treating each channel
map separately. The merits of these choices are discussed in
\S\ref{sec-notes}, but it should be noted that a 2-dimensional reconstruction
can be susceptible to edge effects if the spatial shape of the pixel array is
not rectangular.

The user can also select the minimum scale to be used in the reconstruction.
The first scale exhibits the highest frequency variations, and so ignoring
this one can sometimes be beneficial in removing excess noise. The default is
to use all scales (\texttt{scaleMin = 1}).

Finally, the filter that is used for the convolution can be selected by using
\texttt{filterCode} and the relevant code number -- the choices are listed in
Appendix~\ref{app-param}. A larger filter will give a better reconstruction,
but will take longer and use more memory when executing. When
multi-dimensional reconstruction is selected, this filter is used to
construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the cube.
This technique can be useful in reducing the noise level slightly (at the
cost of making neighbouring pixels correlated and blurring any signal
present), and is particularly well suited to the case where a particular
signal size (\ie a certain channel width or spatial size) is believed to be
present in the data.

There are two alternative methods that can be used: spectral smoothing, using
the Hanning filter; or spatial smoothing, using a 2D Gaussian kernel. These
alternatives are outlined below. To utilise the smoothing option, set the
parameter \texttt{flagSmooth=true} and set \texttt{smoothType} to either
\texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed only in
the spectral domain. Each spectrum is independently smoothed by a Hanning
filter; the spectra are then put back together to form the smoothed cube,
which is then used by the searching algorithm (see below). Note that in the
case of both the reconstruction and the smoothing options being requested,
the reconstruction will take precedence and the smoothing will \emph{not} be
done.

There is only one parameter necessary to define the degree of smoothing --
the Hanning width $a$ (given by the user parameter \texttt{hanningWidth}).
The coefficients $c(x)$ of the Hanning filter are defined by
\[
c(x) =
  \begin{cases}
    \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| < (a+1)/2\\
    0 & |x| \geq (a+1)/2
  \end{cases}
\qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an odd integer (if the parameter
provided is even, it is incremented by one).
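As an illustration, the sketch below applies such a filter to a single
spectrum. It is not the \duchamp code itself: the function name is arbitrary,
the coefficients are normalised to unit sum so that the overall flux scale of
the spectrum is preserved, and the filter is simply truncated (and
re-normalised) at the ends of the spectrum -- all assumptions of this sketch.
\begin{verbatim}
#include <cmath>
#include <cstddef>
#include <vector>

// Hanning-smooth a spectrum with width a (an odd integer, as above).
// The coefficients c(x) follow the definition in the text; the unit-sum
// normalisation is an assumption of this sketch.
std::vector<double> hanningSmooth(const std::vector<double> &spectrum, int a)
{
  if (a % 2 == 0) a++;                  // the width must be odd
  const int halfWidth = (a - 1) / 2;    // c(x) is non-zero for |x| <= halfWidth
  const double pi = 4. * std::atan(1.);

  // Build the filter coefficients c(x) = 0.5*(1 + cos(pi*x/a)).
  std::vector<double> coeff(2 * halfWidth + 1);
  double sum = 0.;
  for (int x = -halfWidth; x <= halfWidth; x++) {
    coeff[x + halfWidth] = 0.5 * (1. + std::cos(pi * x / double(a)));
    sum += coeff[x + halfWidth];
  }
  for (size_t i = 0; i < coeff.size(); i++) coeff[i] /= sum;

  // Convolve, truncating and re-normalising the filter at the edges.
  const long n = long(spectrum.size());
  std::vector<double> smoothed(n, 0.);
  for (long i = 0; i < n; i++) {
    double value = 0., weight = 0.;
    for (int x = -halfWidth; x <= halfWidth; x++) {
      long j = i + x;
      if (j < 0 || j >= n) continue;
      value  += coeff[x + halfWidth] * spectrum[j];
      weight += coeff[x + halfWidth];
    }
    smoothed[i] = value / weight;
  }
  return smoothed;
}
\end{verbatim}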
\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed only in
the spatial domain. Each channel map is independently smoothed by a
two-dimensional Gaussian kernel; the maps are then put back together to form
the smoothed cube, which is used in the searching algorithm (see below).
Again, reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it, governed by
the elliptical cross-sectional shape of the Gaussian function: the FWHM
(full-width at half-maximum) of the major and minor axes, and the position
angle of the major axis. These are given by the user parameters
\texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a circular Gaussian is
required, the user need only provide the \texttt{kernMaj} parameter. The
\texttt{kernMin} parameter will then be set to the same value, and
\texttt{kernPA} to zero.

If we define these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2} \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&   Y& = x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&   \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
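The sketch below shows one way these relations translate into a kernel array.
It is illustrative only: the function name, the assumption that $a$ and $b$
are given in pixels and $\theta$ in radians, the kernel half-width, and the
normalisation to unit sum are all choices made for this example rather than a
description of the \duchamp code.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <vector>

// Build a 2D elliptical Gaussian kernel from the parameters described in
// the text: a and b are the major and minor FWHMs (here taken in pixels),
// theta the position angle (here taken in radians).
std::vector<std::vector<double> > gaussianKernel(double a, double b,
                                                 double theta)
{
  const double sigmaX2 = (a / 2.) * (a / 2.) / (2. * std::log(2.));
  const double sigmaY2 = (b / 2.) * (b / 2.) / (2. * std::log(2.));

  // Make the kernel large enough to hold the bulk of the Gaussian.
  const int half =
      int(std::ceil(2. * std::sqrt(std::max(sigmaX2, sigmaY2))));
  std::vector<std::vector<double> > kern(
      2 * half + 1, std::vector<double>(2 * half + 1, 0.));

  double sum = 0.;
  for (int y = -half; y <= half; y++) {
    for (int x = -half; x <= half; x++) {
      const double X = x * std::sin(theta) - y * std::cos(theta);
      const double Y = x * std::cos(theta) + y * std::sin(theta);
      kern[y + half][x + half] =
          std::exp(-0.5 * (X * X / sigmaX2 + Y * Y / sigmaY2));
      sum += kern[y + half][x + half];
    }
  }

  // Normalise to unit sum so the smoothing preserves the total flux.
  for (int y = 0; y < 2 * half + 1; y++)
    for (int x = 0; x < 2 * half + 1; x++)
      kern[y][x] /= sum;

  return kern;
}
\end{verbatim}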
\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively time-consuming,
particularly for large cubes and reconstructions in 3-D (or even spatial
smoothing). To get around this, \duchamp provides a shortcut to allow users
to perform multiple searches (\eg with different thresholds) on the same
reconstruction/smoothing setup without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same directory
as the input image, so the user needs to have write permissions for that
directory. The name of the file can be given by the \texttt{fileOutputRecon}
parameter, but this can be ignored and \duchamp will choose a name based on
the reconstruction parameters. The filename will be derived from the input
filename, with extra information detailing the reconstruction that has been
done. For example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$ using
all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters relevant
for the \atrous reconstruction as listed in Appendix~\ref{app-param}). The
new FITS file will also have these parameters as header keywords. If a
subsection of the input image has been used (see \S\ref{sec-input}), the
format of the output filename will be \texttt{image.sub.RECON-3-2-4-1.fits},
and the subsection that has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the input and
reconstructed arrays, can also be saved in the same manner by setting
\texttt{flagOutputResid = true}. Its filename will be the same as above, with
\texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used instead
of redoing the reconstruction. To do so, the user should set the parameter
\texttt{flagReconExists = true}. The user can indicate the name of the
reconstructed FITS file using the \texttt{reconFile} parameter, or, if this
is not specified, \duchamp searches for the file with the name as defined
above. If the file is not found, the reconstruction is performed as normal.
Note that to do this, the user needs to set \texttt{flagAtrous = true}
(obviously, if this is \texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. As for the
reconstructed/residual arrays, the name of the file can be given by the
\texttt{fileOutputSmooth} parameter, but this can be ignored and \duchamp
will choose a name based on the method of smoothing used. It will be either
\texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is replaced by the Hanning
width used, or \texttt{image.SMOOTH-2D-a-b-c.fits}, where \texttt{a},
\texttt{b} and \texttt{c} are the Gaussian kernel parameters. Similarly to
the reconstruction case, a saved file can be read in by setting
\texttt{flagSmoothExists = true} and either specifying a file to be read with
the \texttt{smoothFile} parameter or relying on \duchamp to find the file
with the name as given above.

\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of contiguous
voxels that lie above some threshold. No size or shape requirement is imposed
upon the detections -- that is, \duchamp does not fit \eg a Gaussian profile
to each source. All it does is find connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation of
signal-to-noise ratios for each source (see \S\ref{sec-output} for details).
The user can manually specify a value (using the parameter
\texttt{threshold}) for the threshold, which will override the calculated
value. Note that this option overrides any settings of \texttt{snrCut} or the
FDR options (see below).

The cube can be searched in one of two ways, governed by the input parameter
\texttt{searchType}. If \texttt{searchType=spatial}, the cube is searched one
channel map at a time, using the 2-dimensional raster-scanning algorithm of
\citet{lutz80} that connects groups of neighbouring pixels. Such an algorithm
cannot be applied directly to a 3-dimensional case, as it requires that
objects be completely nested within a row (when scanning along a row, if one
object finishes and another starts, the scan does not return to the first
until the second is completely finished for that row). Three-dimensional data
do not have this property, hence the need to treat the data on a
2-dimensional basis at most.

Alternatively, if \texttt{searchType=spectral}, the searching is done in one
dimension on each individual spatial pixel's spectrum. This is a simpler
search, but there are potentially many more of them.

Although there are parameters that govern the minimum number of pixels in the
spatial, spectral and total senses that an object must have (\texttt{minPix},
\texttt{minChannels} and \texttt{minVoxels} respectively), these criteria are
not applied at this point -- see \S\ref{sec-reject} for details.

Finally, the search only looks for positive features. If one is interested
instead in negative features (such as absorption lines), set the parameter
\texttt{flagNegative = true}. This will invert the cube (\ie multiply all
pixels by $-1$) prior to the search, and then re-invert the cube (and the
fluxes of any detections) after searching is complete. All outputs are done
in the same manner as normal, so that fluxes of detections will be negative.
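To make the notion of a connected group of bright voxels concrete, the short
sketch below (illustrative only; the function name is arbitrary and this is
not the \duchamp implementation) performs the one-dimensional version of the
search on a single spectrum, returning the first and last channels of each
run of pixels above the threshold. The two-dimensional raster-scanning
algorithm of \citet{lutz80} used for the spatial search, and the rejection
criteria of \S\ref{sec-reject}, are not shown.
\begin{verbatim}
#include <cstddef>
#include <utility>
#include <vector>

// Find runs of contiguous channels whose flux exceeds the threshold.
// Returns (first,last) channel pairs, inclusive.
std::vector<std::pair<size_t, size_t> >
searchSpectrum(const std::vector<float> &spectrum, float threshold)
{
  std::vector<std::pair<size_t, size_t> > objects;
  bool inObject = false;
  size_t start = 0;

  for (size_t z = 0; z < spectrum.size(); z++) {
    if (spectrum[z] > threshold) {
      if (!inObject) { start = z; inObject = true; }   // a new object begins
    } else if (inObject) {
      objects.push_back(std::make_pair(start, z - 1)); // the object has ended
      inObject = false;
    }
  }
  if (inObject)                       // object runs to the last channel
    objects.push_back(std::make_pair(start, spectrum.size() - 1));

  return objects;
}
\end{verbatim}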
\secC{Calculating statistics}
\label{sec-stats}

A crucial part of the detection process (as well as the wavelet
reconstruction: \S\ref{sec-recon}) is estimating the statistics that define
the detection threshold. To determine a threshold, we need to estimate from
the data two parameters: the middle of the noise distribution (the ``noise
level''), and the width of the distribution (the ``noise spread''). The noise
level is estimated by either the mean or the median, and the noise spread by
the rms (or the standard deviation) or the median absolute deviation from the
median (MADFM). The median and MADFM are robust statistics, in that they are
not biased by the presence of a few pixels much brighter than the noise.

All four statistics are calculated automatically, but the choice of which
will be used is governed by the input parameter \texttt{flagRobustStats}.
This has the default value \texttt{true}, meaning the underlying mean of the
noise distribution is estimated by the median, and the underlying standard
deviation is estimated by the MADFM. In the latter case, the value is
corrected, under the assumption that the underlying distribution is Normal
(Gaussian), by dividing by 0.6744888 -- see Appendix~\ref{app-madfm} for
details. If \texttt{flagRobustStats=false}, the mean and rms are used
instead.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined in the sense of
original $-$ reconstruction) are used to estimate the noise spread of the
cube, since the reconstruction should pick out all significant structure. The
noise level (the middle of the distribution) is taken from the original
array.

If smoothing of the cube has been done instead, all noise parameters are
measured from the smoothed array, and detections are made with these
parameters. When the signal-to-noise level is quoted for each detection (see
\S\ref{sec-output}), the noise parameters of the original array are used,
since the smoothing process correlates neighbouring pixels, reducing the
noise level.

If neither reconstruction nor smoothing has been done, then the statistics
are calculated from the original, input array.

The parameters that are estimated should be representative of the noise in
the cube. For the case of small objects embedded in many noise pixels (\eg
the case of \hi surveys), using the full cube will provide good estimators.
It is possible, however, to use only a subsection of the cube by setting the
parameter \texttt{flagStatSec = true} and providing the desired subsection to
the \texttt{StatSec} parameter. This subsection works in exactly the same way
as the pixel subsection discussed in \S\ref{sec-input}. Note that this
subsection applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the wavelet
reconstruction. Note also that pixels flagged as BLANK or as part of the
``Milky Way'' range of channels are ignored in the statistics calculations.
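As a concrete illustration of these robust estimators, the sketch below (an
illustrative implementation with arbitrary function names, not necessarily
that used by \duchamp) estimates the noise level from the median and the
noise spread from the MADFM, applying the Gaussian correction factor of
0.6744888 discussed above.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Median of a vector (copied, so the caller's data is untouched).
// For an even number of points this takes the upper of the two middle
// values -- adequate for a sketch. Assumes a non-empty array.
static float median(std::vector<float> data)
{
  const size_t mid = data.size() / 2;
  std::nth_element(data.begin(), data.begin() + mid, data.end());
  return data[mid];
}

// Robust estimates of the noise level and noise spread:
//   level  = median of the data,
//   spread = MADFM / 0.6744888 (the correction assumes Gaussian noise).
std::pair<float, float> robustStats(const std::vector<float> &data)
{
  const float level = median(data);

  std::vector<float> deviations(data.size());
  for (size_t i = 0; i < data.size(); i++)
    deviations[i] = std::fabs(data[i] - level);

  const float madfm  = median(deviations);
  const float spread = madfm / 0.6744888f;

  return std::make_pair(level, spread);
}
\end{verbatim}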
\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined in one
of two ways.

The first way is a simple sigma-clipping, where a threshold is set at a fixed
number $n$ of standard deviations above the mean, and pixels above this
threshold are flagged as detected. The value of $n$ is set with the parameter
\texttt{snrCut}. The ``mean'' and ``standard deviation'' here are estimated
according to \texttt{flagRobustStats}, as discussed in \S\ref{sec-stats}. In
this first case only, if the user specifies a threshold, using the
\texttt{threshold} parameter, the sigma-clipped value is ignored.

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The false
discovery rate (given by the number of false detections divided by the total
number of detections) is fixed at a certain value $\alpha$ (\eg
$\alpha=0.05$ implies 5\% of detections are false positives). In practice, an
$\alpha$ value is chosen, and the ensemble average FDR (\ie $\langle FDR
\rangle$) when the method is used will be less than $\alpha$. One calculates
$p$ -- the probability, assuming the null hypothesis is true, of obtaining a
test statistic as extreme as the pixel value (the observed test statistic) --
for each pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or equal to
$P_d$. (So a $P_i