Last change on this file was 1367, checked in by MatthewWhiting, 10 years ago

Updates to the figures, as well as some minor changes to the text, indicating .fits.gz compliance, plus the solid spectral boundary lines.

% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to understand fully what \duchamp is
doing and what all the output information means. For those interested
in the programming side of things, \duchamp is written in C/C++ and
makes use of the \textsc{cfitsio}, \textsc{wcslib} and
\textsc{pgplot} libraries.

\secB{Image input}
\label{sec-input}

The image cube to be searched is provided to \duchamp in FITS
format. The \textsc{cfitsio} library will read \texttt{.fits} files,
as well as compressed \texttt{.fits.gz} files, directly.

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
  pixel locations to quantities such as position on the sky,
  frequency, velocity, and so on.} information for the cube is also
obtained from the FITS header by \textsc{wcslib} functions
\citep{greisen02, calabretta02, greisen06}, and this information, in
the form of a \texttt{wcsprm} structure, is also stored in the same
class. See \S\ref{sec-wcs} for more details.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.

Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain its actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsection can be combined, \eg \texttt{[5,2:98,*]}.

Typically, the units of pixel brightness are given by the FITS file's
BUNIT keyword. However, this may often be unwieldy (for instance, the
units are Jy/beam, but the values are around a few mJy/beam). It is
therefore possible to nominate new units, to which the pixel values
will be converted, by using the \texttt{newFluxUnits} input
parameter. The new units must be directly translatable from the
existing ones -- for instance, if BUNIT is Jy/beam, you cannot specify
mJy; it must be mJy/beam. If an incompatible unit is given, the BUNIT
value is used instead.
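
The compatibility rule can be illustrated with a deliberately simplified
sketch that only handles metric prefixes on an otherwise identical unit
string. Duchamp itself uses a proper unit-conversion library; all names
here are hypothetical:

```python
# Toy illustration of the newFluxUnits check: the new unit must differ
# from BUNIT only by a metric prefix (e.g. Jy/beam -> mJy/beam).
# NOT Duchamp's actual implementation.

PREFIXES = {"": 1.0, "m": 1e-3, "u": 1e-6, "k": 1e3, "M": 1e6}

def split_prefix(unit):
    """Split a unit like 'mJy/beam' into (prefix_factor, base_unit)."""
    for p in PREFIXES:
        rest = unit[len(p):]
        if p and unit.startswith(p) and rest and rest[0].isupper():
            return PREFIXES[p], rest
    return 1.0, unit

def conversion_factor(bunit, new_units):
    """Factor to multiply pixel values by, or None if incompatible
    (in which case the BUNIT value is used instead)."""
    f_old, base_old = split_prefix(bunit)
    f_new, base_new = split_prefix(new_units)
    if base_old != base_new:
        return None          # e.g. Jy/beam vs mJy: not directly translatable
    return f_old / f_new

print(conversion_factor("Jy/beam", "mJy/beam"))  # 1000.0
print(conversion_factor("Jy/beam", "mJy"))       # None
```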

\secB{World Coordinate System}
\label{sec-wcs}

\duchamp uses the \textsc{wcslib} package to handle the conversions
between pixel and world coordinates. This package uses the
transformations described in the WCS papers
\citep{greisen02,calabretta02,greisen06}. The same package handles the
WCS axes in the spatial plots. The conversions used are governed by
the information in the FITS header -- this is parsed by
\textsc{wcslib} to create the appropriate transformations.

For the spectral axis, however, \duchamp provides the ability to
change the type of transformation used, so that different spectral
quantities can be calculated. By using the parameter
\texttt{spectralType}, the user can change from the type given in the
FITS header. This should be done in line with the conventions outlined
in \citet{greisen06}. The spectral type can be either a full
8-character string (\eg \texttt{VELO-F2V}), or simply the 4-character
``S-type'' (\eg \texttt{VELO}), in which case \textsc{wcslib} will
handle the conversion.

The rest frequency can be provided as well. This may be necessary if
the FITS header does not specify one and you wish to transform to
velocity. Alternatively, you may want to make your measurements based
on a different spectral line (\eg OH1665 instead of
H\textsc{i}-21cm). The input parameter \texttt{restFrequency} is used,
and this will override the FITS header value.

Finally, the user may also request different spectral units from those
in the FITS file, or from the defaults arising from the
\textsc{wcslib} transformation. The input parameter
\texttt{spectralUnits} should be used, and \citet{greisen02} should be
consulted to ensure the syntax is appropriate.

\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they take up
unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edges. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.
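
The trimming step can be sketched on a single spatial plane as follows.
This is an illustrative re-implementation (names are my own), which keeps
the trimmed image rectangular and records the offsets needed to map pixel
positions back to the original cube:

```python
# Sketch of BLANK-trimming: strip whole edge rows/columns that are
# entirely BLANK, remembering the offsets of the trimmed region.
# Interior BLANK pixels survive, as described in the text.
import numpy as np

def trim_blanks(image, blank_value):
    """Return (trimmed, x0, y0): the trimmed array and the offsets of
    its (0,0) pixel in the original image."""
    valid = image != blank_value
    rows = np.where(valid.any(axis=1))[0]   # rows with any non-BLANK pixel
    cols = np.where(valid.any(axis=0))[0]
    y0, y1 = rows[0], rows[-1]
    x0, x1 = cols[0], cols[-1]
    return image[y0:y1 + 1, x0:x1 + 1], x0, y0

BLANK = -999.0
im = np.full((6, 6), BLANK)
im[2:5, 1:4] = 1.0            # a non-rectangular valid region...
im[2, 3] = BLANK              # ...with a BLANK corner that will remain
trimmed, x0, y0 = trim_blanks(im, BLANK)
print(trimmed.shape, x0, y0)  # (3, 3) 1 2
```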

\secC{Negative-feature detection}
\label{sec-negative}

\duchamp searches strictly for positive sources -- that is, pixels
that are \textbf{above} the detection threshold. If one instead wishes
to search for negative features (such as absorption lines), set the
parameter \texttt{flagNegative = true}.

This inverts the cube (\ie multiplies all pixels by $-1$) prior to
searching; once searching is complete, the cube is re-inverted, as are
the fluxes of any detections (so that the detections will have, for
instance, a negative peak flux). Any reconstructed or smoothed array
that has been read from disk is also inverted.

This works fine for the simple case of inverting the cube. If,
however, a spectral baseline needs to be removed (see the next
section, \S\ref{sec-baseline}), then special care needs to be
taken. This is described in more detail in \S\ref{sec-absorptionline}.

\secC{Baseline removal}
\label{sec-baseline}

The user may request the removal of baselines from the spectra, via
the parameter \texttt{flagBaseline}. This may be necessary if there is
a strong baseline ripple present, which can result in spurious
detections at the high points of the ripple, or if there is continuum
emission present in the cube that is not interesting for the search.

There are two ways in which the baseline can be calculated. The first
makes use of the \atrous wavelet reconstruction procedure (see
\S\ref{sec-recon}), where only the two largest scales are kept.

The second method uses a median filter combined with a sliding box of
some specified width. For each pixel, the baseline is determined to be
the median value of all pixels in a region of that width centred on
the pixel of interest. The box is truncated at the edges of the
spectrum (so that fewer pixels are included), but there will always be
at least half the width of the box present.

The choice between these methods is made using the
\texttt{baselineType} parameter -- only values of \texttt{atrous} (the
default) or \texttt{median} are accepted. If the \texttt{median}
method is being used, the full width of the box is specified with
\texttt{baselineBoxWidth}.
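
The \texttt{median} method described above can be sketched as follows.
This is an illustrative re-implementation, not Duchamp's actual code:

```python
# Sketch of the 'median' baseline: for each channel, take the median over
# a box of width baselineBoxWidth centred on that channel, truncating the
# box at the spectrum's edges (so at least half the box is always used).
import numpy as np

def median_baseline(spectrum, box_width):
    half = box_width // 2
    n = len(spectrum)
    baseline = np.empty(n)
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)     # truncated at the edges
        baseline[i] = np.median(spectrum[lo:hi])
    return baseline

chans = np.arange(100)
spec = 0.01 * chans                   # a slowly varying "baseline"
spec[50:55] += 10.0                   # a narrow source sitting on it
resid = spec - median_baseline(spec, 21)
# The narrow source survives subtraction; the ramp is removed.
```

A median (rather than a mean) is what makes the box robust to the narrow
source itself: as long as the source spans less than half the box, it does
not drag the baseline estimate upwards.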

The baseline calculation is done separately for each spatial pixel
(\ie for each spectrum in the cube), and the baselines are stored and
added back in before any output is done. In this way the quoted fluxes
and displayed spectra are as one would see from the input cube itself
-- even though the detection (and reconstruction, if applicable) is
done on the baseline-removed cube.

When the \texttt{atrous} method is used, the presence of very strong
signals (for instance, masers at several hundred Jy) could affect the
determination of the baseline, and would lead to a large dip centred
on the signal in the baseline-subtracted spectrum. To prevent this,
the signal is trimmed prior to the reconstruction process at some
standard threshold (at $5\sigma$ above the mean). The baseline
determined should thus be representative of the true, signal-free
baseline. Note that this trimming is only a temporary measure which
does not affect the source-detection.

The baseline values can be saved to a FITS file for later
examination. See \S\ref{sec-baselineOut} for details.

\secC{Combining negative detections with baseline removal}
\label{sec-absorptionline}

When both \texttt{flagNegative=true} and \texttt{flagBaseline=true},
care needs to be taken when parameterising the sources. They are
strictly negative on the baseline-subtracted spectrum, and so
parameterisation is done on this spectrum, prior to adding the
baseline back.

This ensures that the peak flux and location correspond to the maximum
absorption (the peak flux will be quoted as a negative value), and the
spectral plots (see \S\ref{sec-spectralplots}) will properly show the
peak spectrum (when used in \texttt{spectralMethod=peak} mode).

This method is typical of the approach that would be used for
detecting absorption lines against a continuum source. For now, the
parameterisation is the same as for other detections, but a future
version of \duchamp will have parameters more relevant for absorption
lines, such as optical depth and equivalent width.

\secC{Flagging channels}
\label{sec-flagging}

Finally, it is possible to flag particular channels so that they are
not included in the search. This is an extension of the old ``Milky
Way'' channel range. That allowed the specification of a single
contiguous block of channels, and was aimed at excluding Galactic
emission in extragalactic \hi cubes.

The new flagging approach allows the specification of a series of
channels and channel ranges. This allows the user to block the
detection of known regions of RFI, or known but uninteresting emission
(\eg Galactic \hi emission if you are searching for extragalactic
sources).

Flagged channels are specified using the \texttt{flaggedChannels}
parameter, and can be given by a comma-separated list of single values
or ranges, for instance \texttt{flaggedChannels 5,6,12-20,87}. These
channels refer to channel numbers in \textbf{the full cube}, before
any subsection is applied. Also note that \textbf{the channel
  numbering starts at zero}; that is, channel 0 is the first channel
of the cube.
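
Parsing such a specification is straightforward; the following is an
illustrative sketch (not Duchamp's actual code) using the example value
from the text:

```python
# Sketch of parsing the flaggedChannels specification: a comma-separated
# list of single zero-based channels and inclusive ranges.

def parse_flagged_channels(spec):
    flagged = set()
    for item in spec.split(","):
        item = item.strip()
        if "-" in item:
            lo, hi = item.split("-")
            flagged.update(range(int(lo), int(hi) + 1))   # inclusive range
        else:
            flagged.add(int(item))
    return flagged

print(sorted(parse_flagged_channels("5,6,12-20,87")))
# [5, 6, 12, 13, 14, 15, 16, 17, 18, 19, 20, 87]
```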

The effect is to ignore detections that lie within these channels. If
a spatial search is being conducted (\ie one channel map at a time),
these channels are simply not searched. If a spectral search is being
conducted, those channels will be flagged so that no detection is made
within them. The spectral output (see Fig.~\ref{fig-spect}) will
ignore them as far as scaling the plot goes, and the channel range
will be indicated by a green hatched box.

Note that these channels will be included in any smoothing or
reconstruction that is done on the array, and so will be included in
any saved FITS file (see \S\ref{sec-reconIO}).

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
multi-resolution \atrous wavelet algorithm. A good description of the
procedure can be found in \citet{starck02a}. The reconstruction is an
effective way of removing a lot of the noise in the image, allowing
one to search reliably to fainter levels, and reducing the number of
spurious detections. This is an optional step, but one that greatly
enhances the reliability of the resulting catalogue, at the cost of
additional CPU and memory usage (see \S\ref{sec-notes} for
discussion).

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available. Edges are dealt with by assuming
  reflection at the boundary.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested reconstruction threshold (given by \texttt{snrRecon} as
  the number of $\sigma$ above the mean and adjusted to the current
  scale -- see Appendix~\ref{app-scaling}), add these to the
  reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
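
The steps above can be sketched in one dimension as follows. This is a
minimal illustration, not Duchamp's code: it assumes the commonly used
B3-spline filter (the actual filter is chosen via \texttt{filterCode}) and
replaces the scale-adjusted \texttt{snrRecon} threshold with a plain
absolute threshold:

```python
# 1D sketch of the a trous algorithm. With threshold=0 the transform is
# redundant: the wavelet arrays plus the final smoothed array sum back
# to the input exactly.
import numpy as np

FILTER = np.array([1, 4, 6, 4, 1]) / 16.0     # B3-spline coefficients

def reflect(i, n):
    """Reflect an out-of-range index back into [0, n)."""
    while i < 0 or i >= n:
        i = -i if i < 0 else 2 * (n - 1) - i
    return i

def atrous_reconstruct(data, n_scales, threshold=0.0):
    n = len(data)
    recon = np.zeros(n)                    # step 1
    current = data.astype(float)
    for scale in range(1, n_scales + 1):
        spacing = 2 ** (scale - 1)         # doubled each scale (step 5)
        smooth = np.zeros(n)
        for i in range(n):                 # step 2: convolve, reflecting edges
            for k, c in enumerate(FILTER):
                j = reflect(i + (k - 2) * spacing, n)
                smooth[i] += c * current[j]
        w = current - smooth               # step 3: wavelet coefficients
        w[np.abs(w) < threshold] = 0.0     # step 4: keep significant terms
        recon += w
        current = smooth                   # steps 6-7: iterate on smoothed array
    return recon + current                 # step 8: add the "DC offset"

rng = np.random.default_rng(42)
signal = rng.normal(size=64)
# Redundancy check: with no thresholding, the input is recovered exactly.
assert np.allclose(atrous_reconstruct(signal, 4), signal)
```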

The range of scales at which the selection of wavelet coefficients is
made is governed by the \texttt{scaleMin} and \texttt{scaleMax}
parameters. The minimum scale used is given by \texttt{scaleMin},
where the default value is 1 (the first scale). This parameter is
useful if you want to ignore the highest-frequency features (\eg
high-frequency noise that might be present). Normally the maximum
scale is calculated from the size of the input array, but it can be
specified by using \texttt{scaleMax}. A value $\le0$ will result in
the use of the calculated value, as will a value of \texttt{scaleMax}
greater than the calculated value. Use of these two parameters can
allow searching for features of a particular scale size, for instance
searching for narrow absorption features.

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the eight steps above), but the residual array will likely have
some structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the relative change in the
measured standard deviation of the residual (see the note below on the
evaluation of this quantity) is less than some value, given by the
\texttt{reconConvergence} parameter.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation (or rms) of
the background noise distribution. The methods used to estimate these
quantities are detailed in \S\ref{sec-stats} -- the default behaviour
is to use robust estimators, to avoid biasing due to bright pixels.

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can be altered as well that affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
three-dimensional filter and three-dimensional convolution. This can
be altered, however, using the parameter \texttt{reconDim}. If set to
1, this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum and maximum scales to be used in
the reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{scaleMin = 1}).

The convergence of the \atrous iterations is governed by the
\texttt{reconConvergence} parameter, which is the fractional decrease
in the standard deviation of the residuals from one iteration to the
next. \duchamp will do at least two iterations, and then continue
until the decrease is less than the value of this parameter.

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube. This technique can be useful in reducing the noise level (at
the cost of making neighbouring pixels correlated and blurring any
signal present), and is particularly well suited to the case where a
signal of a particular size (\ie a certain channel width or spatial
size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that if both the reconstruction and the smoothing options are
requested, the reconstruction will take precedence and the smoothing
will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos(\frac{\pi x}{a})\right) &|x| < (a+1)/2\\
   0                                               &|x| \geq (a+1)/2,
  \end{cases}
\qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an odd integer (if the
parameter provided is even, it is incremented by one).
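
The coefficients above can be evaluated directly. The following sketch
builds them and applies them to a spectrum; note the normalisation by the
coefficient sum is a choice of this sketch (to preserve the flux scale),
and the function names are my own:

```python
# Sketch of Hanning spectral smoothing with width a (forced to be odd).
import numpy as np

def hanning_coeffs(a):
    if a % 2 == 0:
        a += 1                      # width must be an odd integer
    half = (a - 1) // 2             # |x| < (a+1)/2  =>  x in [-half, half]
    x = np.arange(-half, half + 1)
    return 0.5 * (1.0 + np.cos(np.pi * x / a))

def hanning_smooth(spectrum, a):
    c = hanning_coeffs(a)
    # Normalised so a constant spectrum is unchanged (sketch's choice).
    return np.convolve(spectrum, c / c.sum(), mode="same")

c = hanning_coeffs(5)
# c(0) = 1, c(+-1) ~ 0.9045, c(+-2) ~ 0.6545, zero beyond |x| >= 3.
```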

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y} \exp\left[-0.5
  \left(\frac{X^2}{\sigma_X^2} + \frac{Y^2}{\sigma_Y^2} \right)
\right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}

The size of the kernel is determined both by these FWHM parameters and
by the cutoff parameter \texttt{spatialSmoothCutoff}. The width is
determined by
\[
W = \sigma_\text{maj} \sqrt{-2\ln(C)}
\]
where $C$ is the cutoff value (since $C<1$, the argument of the square
root is positive). The value of $W$ is rounded up to get the
half-width of the kernel in pixels. The default value of
\texttt{spatialSmoothCutoff} is $10^{-10}$, which gives full kernel
widths of 31 pix for \texttt{kernMaj=5} and 19 pix for
\texttt{kernMaj=3}.
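
The quoted default widths can be reproduced numerically if the width
formula is read as $W = \sigma_\text{maj}\sqrt{-2\ln C}$ (natural
logarithm; $C<1$ so the argument is positive), with
$\sigma = (\mathrm{FWHM}/2)/\sqrt{2\ln 2}$ as in the kernel definition
above. A quick check (illustrative code, names my own):

```python
# Verify the kernel-size arithmetic: half-width = ceil(sigma_maj *
# sqrt(-2 ln C)), full width = 2*half-width + 1.
import math

def kernel_full_width(fwhm_maj, cutoff=1e-10):
    sigma = (fwhm_maj / 2.0) / math.sqrt(2.0 * math.log(2.0))
    half_width = math.ceil(sigma * math.sqrt(-2.0 * math.log(cutoff)))
    return 2 * half_width + 1

print(kernel_full_width(5))   # 31, as quoted for kernMaj=5
print(kernel_full_width(3))   # 19, as quoted for kernMaj=3
```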

The algorithm provides different ways, controlled by the input
parameter \texttt{smoothEdgeMethod}, to handle those pixels that lie
within a kernel half-width of the edge of the image and so cannot
have the full kernel placed on top of them. The default behaviour,
with \texttt{smoothEdgeMethod=equal}, simply treats these pixels in
the same way, adding up the product of the kernel pixels with the
image pixels for all cases that lie within the image boundary. A more
drastic alternative, \texttt{smoothEdgeMethod=truncate}, is to simply
not evaluate the convolution at these pixels -- they will be set to
some null value that will not contribute to any detection. Thirdly,
setting \texttt{smoothEdgeMethod=scale} will evaluate the convolution
as for the \texttt{equal} case, but scale down the edge pixels by
summing over only those kernel pixels contributing.
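
The three edge behaviours can be contrasted with a one-dimensional
analogue of the convolution. This sketch is illustrative only (NaN stands
in for the ``null value''; names are my own):

```python
# 1D sketch of the smoothEdgeMethod options: 'equal' sums kernel*image
# over in-bounds pixels only; 'truncate' nulls edge pixels outright;
# 'scale' renormalises by the sum of the contributing kernel weights.
import numpy as np

def convolve_edges(image, kernel, method="equal"):
    half = len(kernel) // 2
    n = len(image)
    out = np.full(n, np.nan)          # NaN as the "null value" for truncate
    for i in range(n):
        total, weight = 0.0, 0.0
        for k, c in enumerate(kernel):
            j = i + k - half
            if 0 <= j < n:            # only pixels inside the boundary
                total += c * image[j]
                weight += c
        near_edge = i < half or i >= n - half
        if method == "truncate" and near_edge:
            continue                  # leave the null value in place
        if method == "scale":
            total /= weight           # renormalise by contributing weights
        out[i] = total
    return out

flat = np.ones(10)
kern = np.array([1, 2, 1]) / 4.0
print(convolve_edges(flat, kern, "scale"))   # flat input stays flat
```

On a constant image, \texttt{equal} depresses the edge pixels (only part of
the kernel contributes), \texttt{truncate} nulls them, and \texttt{scale}
leaves them unchanged.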

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The name of the file can be given by the \texttt{fileOutputRecon}
parameter, but this can be ignored and \duchamp will present a name
based on the reconstruction parameters. The filename will be derived
from the input filename, with extra information detailing the
reconstruction that has been done. For example, suppose
\texttt{image.fits} has been reconstructed using a 3-dimensional
reconstruction with filter \#2, thresholded at $4\sigma$ using all
scales from 1 to 5, with a convergence criterion of 0.005. The output
filename will then be \texttt{image.RECON-3-2-4-1-5-0.005.fits} (\ie
it uses the six parameters relevant for the \atrous reconstruction as
listed in Appendix~\ref{app-param}). The new FITS file will also have
these parameters as header keywords. If a subsection of the input
image has been used (see \S\ref{sec-input}), the format of the output
filename will be \texttt{image.sub.RECON-3-2-4-1-5-0.005.fits}, and
the subsection that has been used is also stored in the FITS header.
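
The filename construction can be sketched directly from the example above
(illustrative code, not Duchamp's; the function name is my own):

```python
# Assemble the reconstruction output filename from the six atrous
# parameters: reconDim, filterCode, snrRecon, scaleMin, scaleMax,
# reconConvergence.

def recon_filename(infile, recon_dim, filter_code, snr_recon,
                   scale_min, scale_max, recon_convergence,
                   subsection=False):
    base = infile[:-len(".fits")]
    if subsection:
        base += ".sub"              # a subsection of the input was used
    params = (recon_dim, filter_code, snr_recon,
              scale_min, scale_max, recon_convergence)
    return base + ".RECON-" + "-".join(str(p) for p in params) + ".fits"

print(recon_filename("image.fits", 3, 2, 4, 1, 5, 0.005))
# image.RECON-3-2-4-1-5-0.005.fits
```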

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. As
for the reconstructed/residual arrays, the name of the file can be
given by the parameter \texttt{fileOutputSmooth}, but this can be
ignored and \duchamp will present a name that indicates both the type
and the details of the smoothing method used. It will be either
\texttt{image.SMOOTH-1D-a.fits}, where $a$ is replaced by the Hanning
width used, or \texttt{image.SMOOTH-2D-a-b-c-X-f.fits}, where $a,b,c$
are the Gaussian kernel parameters, $X$ represents the edge method
(E=equal, T=truncate, S=scale), and $f$ is
$-\log_{10}(\text{cutoff})$. Similarly to the reconstruction case, a
saved file can be read in by setting \texttt{flagSmoothExists = true}
and either specifying a file to be read with the \texttt{smoothFile}
parameter or relying on \duchamp to find the file with the name as
given above.


\secB{Searching the image}
\label{sec-detection}

\secC{Representation of detected objects}
\label{sec-scan}

\begin{figure}[t]
\includegraphics[width=\textwidth]{exampleObject}
\caption{An example of the run-length encoding method of storing
pixel information. The scans used to encode the image are listed
alongside the relevant row. The pixels are colour-coded by
nominal pixel values, but note that the pixel values themselves
do not form part of the encoding and are not kept as part of the
object class.}
\label{fig-objExample}
\end{figure}

The principal aim of \duchamp is to provide a catalogue of sources
located in the image. While running, \duchamp needs to maintain for
each source several data structures that will contribute to the memory
footprint: a record of which pixels contribute to the source; a set of
measured parameters that will go into the catalogue; and a separate
two-dimensional map showing the spatial location of detected pixels
(carrying this around makes the computation of detection maps easier
-- see \S\ref{sec-spatialmaps}).

To keep track of the set of detected pixels, \duchamp employs
specialised techniques that keep the memory usage manageable. A naive
method would be to store each pixel individually, but this results in
a lot of redundant information being stored in memory.

To reduce the storage requirements, the run-length encoding method is
used for storing the spatial information. In this fashion, an object
in 2D is stored as a series of ``runs'', encoded by a row number (the
$y$-value), the starting column (the minimum $x$-value) and the run
length ($\ell_x$: the number of contiguous pixels in that row
connected to the starting pixel). A single set of $(y,x,\ell_x)$
values is called a ``scan''. A two-dimensional image is therefore made
up of a set of scans. An example can be seen in
Fig.~\ref{fig-objExample}. Note that the object shown has fourteen
pixels, and so would require 28 integers to record the positions of
all pixels. The run-length encoding uses just 18 integers to record
the same information. The longer the runs are in each scan, the
greater the saving of storage over the naive method.
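The encoding itself is straightforward. The following Python sketch (illustrative only -- \duchamp is written in C++, and the function name \texttt{encode\_scans} is invented here) turns a 2D boolean mask into a list of $(y,x,\ell_x)$ scans:

```python
def encode_scans(mask):
    """Run-length encode a 2D boolean mask into (y, x, length) scans:
    row number, starting column, and number of contiguous pixels."""
    scans = []
    for y, row in enumerate(mask):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                scans.append((y, start, x - start))  # one scan per run
            else:
                x += 1
    return scans

mask = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0]]
print(encode_scans(mask))   # [(0, 1, 2), (1, 0, 4), (2, 2, 1)]
```

Here seven pixels (14 integers stored naively) are recorded with three scans (9 integers), and the saving grows with the length of each run.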

A 3D object is stored as a set of channel maps, with a channel map
being a 2D plane with constant $z$-value. Each channel map is itself a
set of scans showing the $(x,y)$ position of the pixels. The
additional detection map is stored as a separate channel map, also
made up of scans.

Note that these pixel map representations do not carry the flux
information with them. They store just the pixel locations and need to
be combined with an array of flux values to provide parameters such as
integrated flux. The advantage of this approach is that the pixel
locations can easily be applied to different flux arrays as required
(for instance, defining them using the reconstructed array, yet
evaluating parameters on the original array).

This scan-based run-length encoding is how the individual detections
are stored in the binary catalogue described in \S\ref{sec-bincat}.

\secC{Technique}
\label{sec-searchTechnique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections, and no fitting (for
instance, fitting Gaussian profiles) is done on the sources. All
\duchamp does is find connected groups of bright voxels and report
their locations and basic parameters.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see \S\ref{sec-output} for
details). The user can manually specify a value for the threshold
(using the parameter \texttt{threshold}), which will override the
calculated value. Note that this option overrides any settings of
\texttt{snrCut} or FDR options (see below).

The cube can be searched in one of two ways, governed by the input
parameter \texttt{searchType}. If \texttt{searchType=spatial}, the
cube is searched one channel map at a time, using the 2-dimensional
raster-scanning algorithm of \citet{lutz80} that connects groups of
neighbouring pixels. Such an algorithm cannot be applied directly to
the 3-dimensional case, as it requires that objects be completely
nested within a row: when scanning along a row, if one object finishes
and another starts, the scan cannot return to the first until the
second is completely finished for that row. Three-dimensional data do
not have this property, hence the need to treat the data on at most a
2-dimensional basis.
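The end result of the spatial search -- connected groups of above-threshold pixels in a channel map -- can be sketched as follows. Note this is a simple breadth-first flood fill for illustration, \emph{not} the single-pass raster scan of \citet{lutz80} that \duchamp actually uses; the two produce the same groupings.

```python
from collections import deque

def label_components(image, threshold):
    """Group pixels above threshold into connected objects
    (8-connectivity), one channel map at a time."""
    ny, nx = len(image), len(image[0])
    seen = [[False] * nx for _ in range(ny)]
    objects = []
    for y in range(ny):
        for x in range(nx):
            if image[y][x] > threshold and not seen[y][x]:
                obj, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    obj.append((cy, cx))
                    for dy in (-1, 0, 1):      # examine all 8 neighbours
                        for dx in (-1, 0, 1):
                            py, px = cy + dy, cx + dx
                            if (0 <= py < ny and 0 <= px < nx
                                    and image[py][px] > threshold
                                    and not seen[py][px]):
                                seen[py][px] = True
                                queue.append((py, px))
                objects.append(sorted(obj))
    return objects

chan = [[0, 5, 0, 0],
        [0, 6, 0, 7],
        [0, 0, 0, 8]]
print(label_components(chan, 3))   # two separate objects
```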

Alternatively, if \texttt{searchType=spectral}, the searching is done
in one dimension on each individual spatial pixel's spectrum. This is
a simpler search, but there are potentially many more of them.

Although there are parameters that govern the minimum number of pixels
in a spatial, spectral and total sense that an object must have
(\texttt{minPix}, \texttt{minChannels} and \texttt{minVoxels}
respectively), these criteria are not applied at this point -- see
\S\ref{sec-reject} for details.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. If the reconstructed or smoothed array has been read in
from disk, this will also be inverted at the same time. All outputs
are done in the same manner as normal, so that fluxes of detections
will be negative.

\secC{Calculating statistics}
\label{sec-stats}

A crucial part of the detection process (as well as the wavelet
reconstruction: \S\ref{sec-recon}) is estimating the statistics that
define the detection threshold. To determine a threshold, we need to
estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). The noise level is estimated by either the
mean or the median, and the noise spread by the rms (or the standard
deviation) or the median absolute deviation from the median
(MADFM). The median and MADFM are robust statistics, in that they are
not biased by the presence of a few pixels much brighter than the
noise.

All four statistics are calculated automatically, but the choice of
which will be used is governed by the input parameter
\texttt{flagRobustStats}. This has the default value \texttt{true},
meaning the underlying mean of the noise distribution is estimated by
the median, and the underlying standard deviation is estimated by the
MADFM. In the latter case, the value is corrected, under the
assumption that the underlying distribution is Normal (Gaussian), by
dividing by 0.6744888 -- see Appendix~\ref{app-madfm} for details. If
\texttt{flagRobustStats=false}, the mean and rms are used instead.
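The robust estimators can be sketched in a few lines of Python (illustrative only; the helper name \texttt{robust\_stats} is invented here). The key point is that a single bright source pixel leaves both estimates unchanged:

```python
def robust_stats(values):
    """Estimate the noise level (median) and noise spread (MADFM,
    corrected to an equivalent Gaussian sigma by dividing by 0.6744888)."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    devs = sorted(abs(v - median) for v in values)
    madfm = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    return median, madfm / 0.6744888   # correction assumes Gaussian noise

# The bright outlier (100.0) does not bias either estimate:
level, spread = robust_stats([1.0, 2.0, 3.0, 4.0, 100.0])
print(level, spread)   # 3.0 and 1/0.6744888 ~ 1.48
```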
717
718The choice of pixels to be used depend on the analysis method. If the
719wavelet reconstruction has been done, the residuals (defined
720in the sense of original $-$ reconstruction) are used to estimate the
721noise spread of the cube, since the reconstruction should pick out
722all significant structure. The noise level (the middle of the
723distribution) is taken from the original array.
724
725If smoothing of the cube has been done instead, all noise parameters
726are measured from the smoothed array, and detections are made with
727these parameters. When the signal-to-noise level is quoted for each
728detection (see \S\ref{sec-output}), the noise parameters of the
729original array are used, since the smoothing process correlates
730neighbouring pixels, reducing the noise level.
731
732If neither reconstruction nor smoothing has been done, then the
733statistics are calculated from the original, input array.
734
735The parameters that are estimated should be representative of the
736noise in the cube. For the case of small objects embedded in many
737noise pixels (\eg the case of \hi surveys), using the full cube will
738provide good estimators. It is possible, however, to use only a
739subsection of the cube by setting the parameter \texttt{flagStatSec =
740  true} and providing the desired subsection to the \texttt{StatSec}
741parameter. This subsection works in exactly the same way as the pixel
742subsection discussed in \S\ref{sec-input}. The \texttt{StatSec} will
743be trimmed if necessary so that it lies wholly within the image
744subsection being used (\ie that given by the \texttt{subsection}
745parameter - this governs what pixels are read in and so are able to be
746used in the calculations).
747
748Note that \texttt{StatSec} applies only to the statistics used to
749determine the threshold. It does not affect the calculation of
750statistics in the case of the wavelet reconstruction. Note also that
751pixels identified as BLANK or as flagged via the
752\texttt{flaggedChannels} parameter are ignored in the statistics
753calculations.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. The ``mean''
and ``standard deviation'' here are estimated according to
\texttt{flagRobustStats}, as discussed in \S\ref{sec-stats}. In this
first case only, if the user specifies a threshold using the
\texttt{threshold} parameter, the sigma-clipped value is ignored.
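The sigma-clipping rule amounts to a one-line calculation. A minimal Python sketch (the function name is hypothetical, and the \texttt{threshold} keyword argument stands in for the user-supplied parameter of the same name):

```python
def detection_threshold(noise_level, noise_spread, snr_cut, threshold=None):
    """Sigma-clipped detection threshold: level + n * spread,
    unless an explicit threshold is supplied (which takes precedence)."""
    if threshold is not None:     # mimics the 'threshold' input parameter
        return threshold
    return noise_level + snr_cut * noise_spread

print(detection_threshold(0.0, 1.5, 4.0))              # 6.0
print(detection_threshold(0.0, 1.5, 4.0, threshold=10.0))
```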

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i<P_d$ will be rejected even if $P_i \geq
i\alpha/c_N N$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N = B \times C$, where $B$ is the beam area in pixels, calculated
from the FITS header (if the correct keywords -- BMAJ, BMIN -- are not
present, the size of the beam is taken from the input parameters --
see discussion in \S\ref{sec-results} -- and if these parameters are
not given, $B=1$), and $C$ is the number of neighbouring channels that
can be considered to be correlated.

The use of the FDR method is governed by the \texttt{flagFDR} flag,
which is \texttt{false} by default. To set the relevant parameters,
use \texttt{alphaFDR} to set the $\alpha$ value, and
\texttt{FDRnumCorChan} to set the $C$ value discussed above. These
have default values of 0.01 and 2 respectively.
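The FDR selection rule above can be sketched directly from the definitions of $d$, $N$ and $c_N$. This is an illustrative Python rendering only (the function name is invented, and the $p$-values are assumed to have been computed already); it returns the $p$-value cutoff $P_d$ below or at which pixels are accepted as object pixels:

```python
def fdr_threshold(pvalues, alpha, beam_pixels, num_cor_chan):
    """Find P_d, the largest sorted p-value with P_j < j*alpha/(c_N*N),
    where N = B*C and c_N = sum_{i=1}^{N} 1/i. Pixels with p <= P_d
    are accepted. Returns None if no pixel qualifies."""
    N = beam_pixels * num_cor_chan
    c_N = sum(1.0 / i for i in range(1, N + 1))
    cutoff = None
    for j, p in enumerate(sorted(pvalues), start=1):
        if p < j * alpha / (c_N * N):
            cutoff = p          # keeps the largest qualifying P_j
    return cutoff

# Four pixels, B = 2, C = 2, alpha = 0.05:
print(fdr_threshold([0.5, 0.001, 0.02, 0.004], 0.05, 2, 2))   # 0.004
```

Note that $P_1 = 0.001$ and $P_2 = 0.004$ both qualify, so the cutoff is $P_2$; pixels with larger $p$-values are left as background.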

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging, growing and rejecting detected objects}
\label{sec-merger}

\secC{Merging}

The searches described above are either 1- or 2-dimensional only. They
do not know anything about the third dimension that is likely to be
present. To build up 3D sources, merging of detections must be
done. This is done via an algorithm that matches objects judged to be
``close'', according to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi
surveys). This method can be selected by setting the parameter
\texttt{flagAdjacent=true} in the parameter file. The velocity
thresholding is always done with the \texttt{threshVelocity} test.
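The two closeness criteria can be sketched as a single test over voxel pairs. This is an illustrative Python rendering, not \duchamp's implementation: the function name is invented, and the use of Euclidean spatial distance is an assumption about the distance metric.

```python
def are_close(pix1, pix2, thresh_spatial, thresh_velocity, adjacent=False):
    """Decide whether two detections (lists of (x, y, z) voxels) should
    merge: some pair of voxels must be within the spatial and velocity
    thresholds, or spatially adjacent if adjacent=True.
    (Euclidean spatial distance is assumed here for illustration.)"""
    for x1, y1, z1 in pix1:
        for x2, y2, z2 in pix2:
            if adjacent:
                near_space = abs(x1 - x2) <= 1 and abs(y1 - y2) <= 1
            else:
                near_space = ((x1 - x2) ** 2 + (y1 - y2) ** 2
                              <= thresh_spatial ** 2)
            if near_space and abs(z1 - z2) <= thresh_velocity:
                return True
    return False

a = [(10, 10, 5)]
b = [(12, 10, 6)]
print(are_close(a, b, 3.0, 2))                 # within 3 pixels, 2 channels
print(are_close(a, b, 3.0, 2, adjacent=True))  # 2 pixels apart: not adjacent
```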


\secC{Stages of merging}

This merging can be done in two stages. The default behaviour is for
each new detection to be compared with those sources already detected,
and for it to be merged with the first one judged to be close. No
other examination of the list is done at this point.

This step can be turned off by setting
\texttt{flagTwoStageMerging=false}, so that new detections are simply
added to the end of the list, leaving all merging to be done in the
second stage.

The second, main stage of merging is more thorough. Once the searching
is completed, the list is iterated through, looking at each pair of
objects and merging appropriately. The merged objects are then
included in the examination, to see if a merged pair is suitably close
to a third.

\secC{Growing}

Once the detections have been merged, they may be ``grown'' (this is
essentially the process known elsewhere as ``floodfill''). This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold and not already part of a detected object. This threshold
should be lower than the one used for the initial detection, but above
the noise level, so that faint pixels are only detected when they are
close to a bright pixel. This threshold is specified via one of two
input parameters. It can be given in terms of the noise statistics via
\texttt{growthCut} (which has a default value of $3\sigma$), or it can
be directly given via \texttt{growthThreshold}. Note that if you have
given the detection threshold with the \texttt{threshold} parameter,
the growth threshold \textbf{must} be given with
\texttt{growthThreshold}. If \texttt{growthThreshold} is not provided
in this situation, the growing will not be done.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that should
be merged at the new lower threshold (\ie they have grown into each
other).
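The flood-fill nature of growing can be sketched as follows. This is an illustrative Python version (the function name is invented, and simple $\pm1$ adjacency stands in for the \texttt{threshSpatial}/\texttt{threshVelocity} neighbourhood): voxels above the lower growth threshold are absorbed only if they connect back to the existing object.

```python
def grow_object(obj, image, grow_thresh, claimed):
    """Grow a detection (a set of (z, y, x) voxels) by repeatedly
    absorbing neighbouring voxels above the secondary growth threshold
    -- a flood fill. 'claimed' holds voxels already in other objects."""
    stack = list(obj)
    grown = set(obj)
    nz, ny, nx = len(image), len(image[0]), len(image[0][0])
    while stack:
        z, y, x = stack.pop()
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = (z + dz, y + dy, x + dx)
                    if (0 <= p[0] < nz and 0 <= p[1] < ny and 0 <= p[2] < nx
                            and p not in grown and p not in claimed
                            and image[p[0]][p[1]][p[2]] > grow_thresh):
                        grown.add(p)
                        stack.append(p)
    return grown

# One channel, one row: only the 5.0 exceeds a detection threshold of 4,
# but growing at 1.5 picks up the two flanking 2.0 voxels:
image = [[[0.5, 2.0, 5.0, 2.0, 0.2]]]
print(sorted(grow_object({(0, 0, 2)}, image, 1.5, set())))
```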

\secC{Rejecting}
\label{sec-reject}

Finally, to be accepted, the detections must satisfy minimum (and,
optionally, maximum) size criteria, relating to the number of
channels, spatial pixels and voxels occupied by the object. These
criteria are set using the \texttt{minChannels}, \texttt{minPix} and
\texttt{minVoxels} parameters respectively. The channel requirement
means a source must have at least one set of \texttt{minChannels}
consecutive channels to be accepted. The spatial pixels
(\texttt{minPix}) requirement refers to distinct spatial pixels (which
are possibly in different channels), while the voxels requirement
refers to the total number of voxels detected. If the
\texttt{minVoxels} parameter is not provided, it defaults to
\texttt{minPix}$+$\texttt{minChannels}$-1$.

It is possible to also place upper limits on the size of detections in
a similar fashion. The corresponding parameters are \texttt{maxPix},
\texttt{maxChannels} and \texttt{maxVoxels}. These all default to a
value of $-1$ -- if they are negative, the check is \textbf{not}
made. If they are provided by the user, an object will be rejected if
one of the metrics exceeds the limit (in each case, the value is the
highest allowed value for an object to be accepted). The channel
requirement differs slightly from the minimum check -- here the limit
applies to the total number of channels in the object.
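The combined size tests can be sketched in Python (illustrative only; the function and argument names mirror the input parameters but are invented here). Note the asymmetry between the checks: the minimum channel test uses the longest run of \emph{consecutive} channels, while the maximum test uses the \emph{total} channel count.

```python
def accept(obj, min_pix=2, min_chan=3, min_vox=None,
           max_pix=-1, max_chan=-1, max_vox=-1):
    """Apply the size criteria to a detection given as a set of
    (x, y, z) voxels. Negative maxima disable the corresponding check."""
    if min_vox is None:
        min_vox = min_pix + min_chan - 1       # default for minVoxels
    spatial = {(x, y) for x, y, z in obj}      # distinct spatial pixels
    chans = sorted({z for x, y, z in obj})
    longest, run = 1, 1                        # longest consecutive run
    for c0, c1 in zip(chans, chans[1:]):
        run = run + 1 if c1 == c0 + 1 else 1
        longest = max(longest, run)
    if len(spatial) < min_pix or longest < min_chan or len(obj) < min_vox:
        return False
    if (0 <= max_pix < len(spatial) or 0 <= max_chan < len(chans)
            or 0 <= max_vox < len(obj)):
        return False
    return True

obj = {(5, 5, 1), (5, 5, 2), (5, 5, 3), (6, 5, 2)}
print(accept(obj))                 # 2 pixels, 3 consecutive channels, 4 voxels
print(accept({(5, 5, 1), (6, 5, 1)}))   # only one channel: rejected
```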

It is possible to do this rejection stage before the main merging and
growing stage. This could be done to remove narrow (hopefully
spurious) sources from the list before growing them, to reduce the
number of false positives in the final list. This mode can be selected
by setting the input parameter \texttt{flagRejectBeforeMerge=true} --
caution is urged if you use this in conjunction with
\texttt{flagTwoStageMerging=false}, as you can throw away parts of
objects that you may otherwise wish to keep.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "Guide"
%%% End: