\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide enough
background information to understand fully what \duchamp is doing and
what all the output information means. For those interested in the
programming side of things, \duchamp is written in C/C++ and makes use
of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection=true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. A
complete description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.

Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.

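To illustrate how the two forms of the subsection string behave, here
is a minimal Python sketch. This is not \duchamp's actual C++ parser
-- the function name and the return convention of inclusive, 1-based
\texttt{(start, end)} pairs are invented for illustration -- but it
follows the rules described above, including the dropping of any step
values.

```python
def parse_subsection(spec, axis_lengths):
    """Parse a subsection string such as '[2:50,2:50,*]' into inclusive,
    1-based (start, end) ranges, one pair per axis.  Step values, if
    present, are dropped (mirroring the behaviour described above); a
    bare number n trims n pixels from both ends of that axis."""
    ranges = []
    axes = spec.strip().strip('[]').split(',')
    for axis_spec, length in zip(axes, axis_lengths):
        if axis_spec == '*':                 # full range of this axis
            ranges.append((1, length))
        elif ':' in axis_spec:
            parts = axis_spec.split(':')     # x1:x2 or x1:x2:dx
            ranges.append((int(parts[0]), int(parts[1])))  # dx ignored
        else:                                # single number: trim both ends
            n = int(axis_spec)
            ranges.append((1 + n, length - n))
    return ranges
```

For example, on a $100\times100\times1024$ cube, the string
\texttt{[5,2:98,*]} trims 5 pixels from each end of the first axis,
keeps pixels 2--98 of the second, and keeps the full third axis.
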
\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will
unnecessarily interfere with the processing (as well as taking up
needless memory). The first step, then, is to trim them from the
edge. This is done when the parameter \texttt{flagBlankPix=true}. If
the above keywords are not present, the user can specify the BLANK
value via the parameter \texttt{blankPixValue}.

Removing BLANK pixels is particularly important for the reconstruction
step, as lots of BLANK pixels on the edges will smooth out features in
the wavelet calculation stage. The trimming will also reduce the size
of the cube's array, speeding up the execution. The amount of trimming
is recorded, and these pixels are added back in once the
source-detection is completed (so that quoted pixel positions are
applicable to the original cube).

Rows and columns are trimmed one at a time until the first non-BLANK
pixel is reached, so that the image remains rectangular. In practice,
this means that there will be some BLANK pixels left in the trimmed
image (if the non-BLANK region is non-rectangular). However, these are
ignored in all further calculations done on the cube.

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction, if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at a standard threshold ($8\sigma$ above the
mean). The baseline determined should thus be representative of the
true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the user
(using \texttt{maxMW} and \texttt{minMW} -- these give an inclusive
range of channels). When employed, these channels are ignored for the
searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, with the trade-off that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
function. This is determined from the parameter file via the
\texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
requested threshold (given by \texttt{snrRecon} as the number of
$\sigma$ above the mean and adjusted to the current scale -- see
Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation of the filter coefficients is doubled. (Note that
this step provides the name of the procedure\footnote{\atrous means
``with holes'' in French.}, as gaps or holes are created in the
filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
reconstructed array. This provides the ``DC offset'', as each of the
wavelet coefficient arrays will have zero mean.
\end{enumerate}

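The steps above can be sketched in one dimension as follows. This is
an illustrative Python translation, not \duchamp's C++ implementation:
it assumes the common $B_3$-spline filter coefficients, uses periodic
boundaries for brevity, and omits the per-scale threshold rescaling of
Appendix~\ref{app-scaling}.

```python
import numpy as np

def atrous_recon_1d(signal, n_scales=4, snr=3.0,
                    base=(1/16, 1/4, 3/8, 1/4, 1/16)):
    """1-D sketch of the a trous steps above: convolve, take the wavelet
    coefficients as the difference, keep those above threshold, double
    the filter spacing, repeat; finally add the smooth array."""
    work = np.asarray(signal, dtype=float)
    recon = np.zeros_like(work)                      # step 1
    spacing = 1
    for _ in range(n_scales):
        smooth = np.zeros_like(work)
        for i, c in enumerate(base):                 # step 2: convolve, with
            shift = (i - len(base) // 2) * spacing   # 'holes' of size spacing
            smooth += c * np.roll(work, shift)       # periodic boundary
        wavelet = work - smooth                      # step 3
        med = np.median(wavelet)
        sigma = np.median(np.abs(wavelet - med)) / 0.6744888
        keep = np.abs(wavelet - med) > snr * sigma   # step 4: threshold and
        recon[keep] += wavelet[keep]                 # keep significant terms
        spacing *= 2                                 # step 5
        work = smooth                                # step 6: repeat on smooth
    return recon + work                              # step 8: the DC offset
```

With the threshold disabled, the wavelet terms telescope and the
output reproduces the input exactly, which demonstrates the redundancy
property discussed below.
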
The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely still have
some structure in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation of the
background noise distribution. These statistics are estimated using
robust methods, to avoid corruption by strong outlying points. The
mean of the distribution is actually estimated by the median, while
the median absolute deviation from the median (MADFM) is calculated
and corrected assuming Gaussianity to estimate the underlying standard
deviation $\sigma$. The Gaussianity (or Normality) assumption is
critical, as the MADFM does not give the same value as the usual rms
or standard deviation -- for a Normal distribution $N(\mu,\sigma)$ we
find MADFM$\,=0.6744888\sigma$, but this ratio will change for
different distributions. Since this ratio is corrected for, the user
need only think in the usual multiples of the rms when setting
\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
this value.

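This robust estimation can be written down in a few lines (a Python
sketch; the function name is invented, and the constant is the
Normal-distribution ratio quoted above):

```python
import numpy as np

def robust_stats(data):
    """Estimate the noise level and spread as described above: the
    median for the mean, and the MADFM corrected by 0.6744888 for the
    standard deviation sigma (valid under the Gaussianity assumption)."""
    data = np.asarray(data, dtype=float)
    level = np.median(data)
    madfm = np.median(np.abs(data - level))
    return level, madfm / 0.6744888
```

The virtue of these estimators is their robustness: a handful of very
bright pixels shifts the mean and rms substantially, but barely moves
the median and MADFM.
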
When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). Several other parameters can
also be altered to affect the outcome of the reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
the cube is reconstructed by considering each spectrum separately,
whereas \texttt{reconDim=2} means the cube is reconstructed by doing
each channel map separately. The merits of these choices are discussed
in \S\ref{sec-notes}, but it should be noted that a 2-dimensional
reconstruction can be susceptible to edge effects if the spatial shape
of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{minscale = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but will take longer and use more
memory when executing. When multi-dimensional reconstruction is
selected, this filter is used to construct a 2- or 3-dimensional
equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube. This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType=spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
\begin{cases}
\frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| \leq (a+1)/2\\
0 & |x| > (a+1)/2
\end{cases}
\qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an odd integer (if the parameter
provided is even, it is incremented by one).

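The coefficients can be evaluated directly from this definition (a
Python sketch; the function name is invented, and any normalisation of
the coefficients is left aside):

```python
import numpy as np

def hanning_coeffs(a):
    """Return integer offsets x and the coefficients c(x) of the Hanning
    filter defined above.  An even width is incremented to the next odd
    integer, as the text describes."""
    if a % 2 == 0:
        a += 1
    half = (a + 1) // 2              # filter is non-zero for |x| <= (a+1)/2
    x = np.arange(-half, half + 1)
    return x, 0.5 * (1.0 + np.cos(np.pi * x / a))
```

For \texttt{hanningWidth}~$=3$, for instance, this gives the symmetric
coefficients $c(0)=1$, $c(\pm1)=0.75$, $c(\pm2)=0.25$.
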
\secC{Spatial smoothing}

When \texttt{smoothType=spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, then put back together to form
the smoothed cube, which is used in the searching algorithm (see
below). Again, reconstruction is always done by preference if both
techniques are requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj}, \texttt{kernMin} \&
\texttt{kernPA}. If we define these parameters as $a,b,\theta$
respectively, we can define the kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
    \frac{Y^2}{\sigma_Y^2} \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
\sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}

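As a concrete check of these formulae, the kernel can be evaluated on
a grid of pixel offsets (a Python sketch; the function name and the
grid size are invented, and $\theta$ is taken in radians):

```python
import numpy as np

def gaussian_kernel(a, b, theta, size=11):
    """Evaluate the elliptical Gaussian kernel defined above on a
    size x size grid of pixel offsets.  a and b are the FWHMs of the
    major and minor axes (kernMaj, kernMin); theta is the position
    angle of the major axis, in radians."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.sin(theta) - y * np.cos(theta)
    Y = x * np.cos(theta) + y * np.sin(theta)
    sX2 = (a / 2.0) ** 2 / (2.0 * np.log(2.0))   # sigma_X^2 from the FWHM
    sY2 = (b / 2.0) ** 2 / (2.0 * np.log(2.0))   # sigma_Y^2 from the FWHM
    return np.exp(-0.5 * (X ** 2 / sX2 + Y ** 2 / sY2))
```

By construction the kernel peaks at 1 in the central pixel and falls
to exactly $0.5$ at an offset of half the FWHM along either axis.
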
\secB{Input/Output of reconstructed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is
replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are \texttt{a,b,c}. Similarly to the reconstruction case, a
saved file can be read in by setting \texttt{flagSmoothExists = true}
and either specifying a file to be read with the \texttt{smoothFile}
parameter or relying on \duchamp to find the file with the name as
given above.

\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection is to locate sets of contiguous voxels
that lie above some threshold. One threshold is calculated for the
entire cube, enabling calculation of signal-to-noise ratios for each
source (see Section~\ref{sec-output} for details). The user can
manually specify a value for the threshold (using the parameter
\texttt{threshold}), which will override the calculated value. Note
that this only applies for the first of the two cases discussed below
-- the FDR case ignores any manually-set threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row: when scanning along a row, if one
object finishes and another starts, you will not get back to the
first until the second is completely finished for that
row. Three-dimensional data do not have this property, hence the need
to treat the data on a 2-dimensional basis.

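The grouping of pixels within a single channel map can be illustrated
with a simple flood fill. Note that this is \emph{not} Lutz's
single-pass raster algorithm (which is more efficient), but it
produces the same sets of 8-connected pixels above a threshold; the
function name is invented for illustration.

```python
import numpy as np

def label_channel_map(image, threshold):
    """Group contiguous pixels above threshold in one channel map,
    using a stack-based flood fill over 8-connected neighbours.
    Returns a list of objects, each a list of (y, x) pixel positions."""
    mask = np.asarray(image) > threshold
    seen = np.zeros_like(mask, dtype=bool)
    ny, nx = mask.shape
    objects = []
    for y0 in range(ny):
        for x0 in range(nx):
            if mask[y0, x0] and not seen[y0, x0]:
                stack, pixels = [(y0, x0)], []
                seen[y0, x0] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy in (-1, 0, 1):            # 8-connected neighbours
                        for dx in (-1, 0, 1):
                            yy, xx = y + dy, x + dx
                            if (0 <= yy < ny and 0 <= xx < nx
                                    and mask[yy, xx] and not seen[yy, xx]):
                                seen[yy, xx] = True
                                stack.append((yy, xx))
                objects.append(pixels)
    return objects
```
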
Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}

A crucial part of the detection process is estimating the statistics
that define the detection threshold. To determine a threshold, we need
to estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). In both cases we again use robust methods,
namely the median and the MADFM.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined in the
sense of original $-$ reconstruction) are used to estimate the noise
spread of the cube, since the reconstruction should pick out all
significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
  true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. As before, the
value of the standard deviation is estimated by the MADFM, and
corrected by the ratio derived in Appendix~\ref{app-madfm}.

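As an illustration, the sigma-clipping threshold could be computed as
in the following C++ sketch. This is not \duchamp's actual code: the
function names are invented for this example, and the MADFM-to-sigma
correction ratio is taken to be the Gaussian value ($\approx 0.6745$,
as derived in Appendix~\ref{app-madfm}).

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Median of a vector (a copy is taken, so the input is left untouched).
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    std::size_t n = v.size();
    return (n % 2 == 1) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Sigma-clipping threshold from the MADFM: the median absolute
// deviation from the median is converted to an equivalent Gaussian
// standard deviation by dividing by ~0.6745 (the Gaussian correction
// ratio), and the threshold is then median + snrCut * sigma.
double sigmaClipThreshold(const std::vector<double> &pix, double snrCut) {
    double med = median(pix);
    std::vector<double> dev;
    dev.reserve(pix.size());
    for (double p : pix) dev.push_back(std::fabs(p - med));
    double madfm = median(dev);
    double sigma = madfm / 0.6744888;
    return med + snrCut * sigma;
}
```

Pixels whose value exceeds this threshold would then be flagged as
detected.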
The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$. One calculates $p$ -- the probability, assuming
the null hypothesis is true, of obtaining a test statistic as extreme
as the pixel value (the observed test statistic) -- for each pixel,
and sorts them in increasing order. One then calculates $d$ where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i < P_d$ will be rejected even if $P_i \geq
j\alpha/c_N N$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

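The selection of $d$ can be sketched as follows, assuming the
$p$-values have already been computed and sorted into increasing
order (the function name \texttt{fdrIndex} is invented for this
example and is not part of \duchamp's code):

```cpp
#include <cstddef>
#include <vector>

// Given sorted (increasing) p-values P_1..P_N, return the largest
// 1-based index d such that P_d < d * alpha / (c_N * N).  All pixels
// with p-values <= P_d are then "rejected", i.e. accepted as object
// pixels.  A return value of 0 means no pixel passes.
std::size_t fdrIndex(const std::vector<double> &sortedP,
                     double alpha, double cN) {
    std::size_t N = sortedP.size(), d = 0;
    for (std::size_t j = 1; j <= N; ++j)
        if (sortedP[j - 1] < j * alpha / (cN * N)) d = j;
    return d;
}
```

Because $d$ is the \emph{maximum} such $j$, every $p$-value up to
$P_d$ is rejected, even those that individually fail the
$j\alpha/c_N N$ comparison.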
The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated.

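The normalisation constant is just a partial harmonic sum, as the
following sketch shows (again, the function name is illustrative
only):

```cpp
// Normalisation constant c_N for N mutually correlated tests:
// c_N = sum_{i=1}^{N} 1/i.  For uncorrelated pixels c_1 = 1; in the
// usage described above one would call this with N = 2B, where B is
// the beam size in pixels.
double cNCorrelated(int N) {
    double sum = 0.0;
    for (int i = 1; i <= N; ++i) sum += 1.0 / i;
    return sum;
}
```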
The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will have
many repeated detections of a given object -- for instance, spectral
detections in adjacent pixels of the same object and/or spatial
detections in neighbouring channels. These are then combined in an
algorithm that matches all objects judged to be ``close'', according
to one of two criteria.

---|
One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

---|
Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi
surveys). This method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as the first
option.

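The pairwise closeness test behind both criteria can be sketched as
follows. This is a simplified illustration, not \duchamp's code: the
\texttt{Voxel} type and function name are invented here, and the
spatial separation is tested per axis, which is one plausible reading
of the criterion. Two objects merge if \emph{any} pair of their pixels
passes this test.

```cpp
#include <cstdlib>

// A detected pixel: spatial position (x, y) and spectral channel z.
struct Voxel { int x, y, z; };

// Closeness test for a pair of pixels.  Without flagAdjacent, the
// pixels must lie within threshSpatial of each other in each spatial
// direction; with flagAdjacent, they must be spatially adjacent
// (offsets of at most one pixel).  In both cases the channel offset
// must be within threshVelocity.
bool arePixelsClose(const Voxel &a, const Voxel &b,
                    float threshSpatial, float threshVelocity,
                    bool flagAdjacent) {
    int dx = std::abs(a.x - b.x), dy = std::abs(a.y - b.y);
    int dz = std::abs(a.z - b.z);
    bool spatialOK = flagAdjacent
                         ? (dx <= 1 && dy <= 1)
                         : (dx <= threshSpatial && dy <= threshSpatial);
    return spatialOK && (dz <= threshVelocity);
}
```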
---|
Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is a possible input parameter (\texttt{growthCut}), with a
default value of $1.5\sigma$.

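The secondary threshold test itself is simple; a sketch (the function
name and the use of the median as the noise middle are assumptions for
this example):

```cpp
// Can a pixel be added to a nearby detection during growing?  The
// pixel must exceed the secondary threshold: growthCut sigma (default
// 1.5) above the noise middle, where the middle and sigma come from
// the statistics step.
bool canGrowPixel(double flux, double noiseMiddle, double noiseSigma,
                  double growthCut = 1.5) {
    return flux > noiseMiddle + growthCut * noiseSigma;
}
```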
---|
The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

---|
Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (to remove any spurious single-channel
spikes that may be present), and a minimum number of spatial
pixels. These numbers, as for the original detection step, are set
with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.
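The consecutive-channel requirement can be sketched as a scan for the
longest run of adjacent channels (the function name is invented for
this illustration):

```cpp
#include <cstddef>
#include <vector>

// Acceptance test on the channel extent of a detection: the sorted,
// unique list of channels the object occupies must contain at least
// one run of minChannels consecutive channels.
bool hasEnoughChannels(const std::vector<int> &channels, int minChannels) {
    int run = 1;
    int best = channels.empty() ? 0 : 1;
    for (std::size_t i = 1; i < channels.size(); ++i) {
        run = (channels[i] == channels[i - 1] + 1) ? run + 1 : 1;
        if (run > best) best = run;
    }
    return best >= minChannels;
}
```

For example, an object occupying channels $\{3,4,5,9\}$ passes with
\texttt{minChannels = 3}, while one occupying $\{3,5,7\}$ fails even
with \texttt{minChannels = 2}, since no two of its channels are
consecutive.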