% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information means. For those interested
in the programming side of things, \duchamp is written in C/C++ and
makes use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.
There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.


Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.

Typically, the units of pixel brightness are given by the FITS file's
BUNIT keyword. However, this may often be unwieldy (for instance, the
units are Jy/beam, but the values are around a few mJy/beam). It is
therefore possible to nominate new units, to which the pixel values
will be converted, by using the \texttt{newFluxUnits} input
parameter. The units must be directly translatable from the existing
ones -- for instance, if BUNIT is Jy/beam, you cannot specify mJy, it
must be mJy/beam. If an incompatible unit is given, the BUNIT value is
used instead.

\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edge. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.
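
As an illustration of this edge-trimming, the following sketch finds,
for a single 2D plane, how far each border can be moved inwards before
a row or column containing a non-BLANK pixel is reached. The names and
the restriction to one plane are illustrative assumptions, not the
actual \duchamp routines.
\begin{verbatim}
#include <vector>

struct TrimLimits { int xmin, xmax, ymin, ymax; };

TrimLimits findTrimLimits(const std::vector<std::vector<bool> > &isBlank)
{
    int ny = int(isBlank.size()), nx = int(isBlank[0].size());
    TrimLimits lim = {0, nx - 1, 0, ny - 1};
    // Helpers: is an entire row or column BLANK?
    struct Check {
        static bool row(const std::vector<std::vector<bool> > &b, int y) {
            for (size_t x = 0; x < b[y].size(); x++)
                if (!b[y][x]) return false;
            return true;
        }
        static bool col(const std::vector<std::vector<bool> > &b, int x) {
            for (size_t y = 0; y < b.size(); y++)
                if (!b[y][x]) return false;
            return true;
        }
    };
    // Move each edge inwards until a valid (non-BLANK) pixel is hit.
    while (lim.ymin < lim.ymax && Check::row(isBlank, lim.ymin)) lim.ymin++;
    while (lim.ymax > lim.ymin && Check::row(isBlank, lim.ymax)) lim.ymax--;
    while (lim.xmin < lim.xmax && Check::col(isBlank, lim.xmin)) lim.xmin++;
    while (lim.xmax > lim.xmin && Check::col(isBlank, lim.xmax)) lim.xmax--;
    return lim;
}
\end{verbatim}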

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at a standard threshold of $8\sigma$ above the
mean. The baseline determined should thus be representative of the
true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}
\label{sec-MW}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the user
(using \texttt{maxMW} and \texttt{minMW} -- these give an inclusive
range of channels). When employed, these channels are ignored for the
searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box. Note that these channels refer to
channel numbers in the full cube, before any subsection is applied.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, at the cost of being relatively time-
and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
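
To make the loop structure concrete, here is a minimal one-dimensional
sketch of the above steps. It assumes the B3-spline filter
coefficients, a simple clamping of the convolution at the array edges,
a MADFM-based noise estimate for each wavelet scale, and thresholding
on the absolute value of the coefficients -- these details, and all
names, are illustrative assumptions rather than the actual \duchamp
implementation.
\begin{verbatim}
#include <vector>
#include <cmath>
#include <algorithm>

// Median absolute deviation from the median, converted to an
// equivalent Gaussian rms (see the Appendix on the MADFM).
static double madfmToSigma(std::vector<double> data)
{
    std::size_t mid = data.size() / 2;
    std::nth_element(data.begin(), data.begin() + mid, data.end());
    double median = data[mid];
    for (std::size_t i = 0; i < data.size(); i++)
        data[i] = std::fabs(data[i] - median);
    std::nth_element(data.begin(), data.begin() + mid, data.end());
    return data[mid] / 0.6744888;
}

std::vector<double> atrousReconstruct(std::vector<double> input,
                                      int maxScale, double snrRecon)
{
    const double filter[5] = {1./16., 1./4., 3./8., 1./4., 1./16.};
    const int n = int(input.size());
    std::vector<double> recon(n, 0.0);              // step 1
    int spacing = 1;
    for (int scale = 1; scale <= maxScale; scale++) {
        std::vector<double> smoothed(n, 0.0);
        for (int i = 0; i < n; i++)                 // step 2: convolution
            for (int j = -2; j <= 2; j++) {
                int k = std::max(0, std::min(n - 1, i + j * spacing));
                smoothed[i] += filter[j + 2] * input[k];
            }
        std::vector<double> wavelet(n);
        for (int i = 0; i < n; i++)                 // step 3: coefficients
            wavelet[i] = input[i] - smoothed[i];
        double sigma = madfmToSigma(wavelet);
        for (int i = 0; i < n; i++)                 // step 4: threshold
            if (std::fabs(wavelet[i]) > snrRecon * sigma)
                recon[i] += wavelet[i];
        spacing *= 2;                               // step 5: dilate filter
        input = smoothed;                           // step 6: iterate
    }                                               // step 7: stop at maxScale
    for (int i = 0; i < n; i++)                     // step 8: add DC offset
        recon[i] += input[i];
    return recon;
}
\end{verbatim}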

The range of scales at which the selection of wavelet coefficients is
made is governed by the \texttt{scaleMin} and \texttt{scaleMax}
parameters. The minimum scale used is given by \texttt{scaleMin},
where the default value is 1 (the first scale). This parameter is
useful if you want to ignore the highest-frequency features
(e.g. high-frequency noise that might be present). Normally the
maximum scale is calculated from the size of the input array, but it
can be specified by using \texttt{scaleMax}. A value $\le0$ will
result in the use of the calculated value, as will a value of
\texttt{scaleMax} greater than the calculated value. Use of these two
parameters can allow searching for features of a particular scale
size, for instance searching for narrow absorption features.

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation (or rms) of
the background noise distribution. The methods used to estimate these
quantities are detailed in \S\ref{sec-stats} -- the default behaviour
is to use robust estimators, to avoid biasing due to bright pixels.

%These statistics are estimated using
%robust methods, to avoid corruption by strong outlying points. The
%mean of the distribution is actually estimated by the median, while
%the median absolute deviation from the median (MADFM) is calculated
%and corrected assuming Gaussianity to estimate the underlying standard
%deviation $\sigma$. The Gaussianity (or Normality) assumption is
%critical, as the MADFM does not give the same value as the usual rms
%or standard deviation value -- for a Normal distribution
%$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
%for different distributions. Since this ratio is corrected for, the
%user need only think in the usual multiples of the rms when setting
%\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
%this value.

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can also be altered to affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{scaleMin = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube.  This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| < (a+1)/2\\
   0                                                          & |x| \geq (a+1)/2
  \end{cases}
\qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an odd integer (if the parameter
provided is even, it is incremented by one).
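
The following is a minimal sketch of this spectral smoothing for a
single spectrum, using the coefficients defined above. The edge
treatment (simple truncation of the filter) and the per-pixel
renormalisation of the coefficients are illustrative assumptions, not
necessarily what \duchamp itself does.
\begin{verbatim}
#include <vector>
#include <cmath>

std::vector<double> hanningSmooth(const std::vector<double> &spectrum,
                                  int a)
{
    if (a % 2 == 0) a++;                  // the width must be an odd integer
    const double pi = std::acos(-1.0);
    const int half = (a - 1) / 2;         // c(x) = 0 for |x| >= (a+1)/2
    std::vector<double> coeff(2 * half + 1);
    for (int x = -half; x <= half; x++)
        coeff[x + half] = 0.5 * (1.0 + std::cos(pi * x / a));

    const int n = int(spectrum.size());
    std::vector<double> smoothed(n, 0.0);
    for (int i = 0; i < n; i++) {
        double norm = 0.0;
        for (int x = -half; x <= half; x++) {
            int k = i + x;
            if (k < 0 || k >= n) continue;       // truncate at the edges
            smoothed[i] += coeff[x + half] * spectrum[k];
            norm += coeff[x + half];
        }
        smoothed[i] /= norm;              // normalise the filter to unit sum
    }
    return smoothed;
}
\end{verbatim}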

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
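
A minimal sketch of constructing this kernel from the \texttt{kernMaj},
\texttt{kernMin} and \texttt{kernPA} parameters is given below. The
assumed units of the position angle (degrees), the chosen kernel size
and the lack of normalisation are illustrative assumptions rather than
the choices made inside \duchamp.
\begin{verbatim}
#include <vector>
#include <cmath>

std::vector<std::vector<double> > gaussianKernel(double a, double b,
                                                 double paDegrees)
{
    const double pi = std::acos(-1.0);
    const double theta = paDegrees * pi / 180.0;
    const double sigmaX2 = (a / 2.0) * (a / 2.0) / (2.0 * std::log(2.0));
    const double sigmaY2 = (b / 2.0) * (b / 2.0) / (2.0 * std::log(2.0));
    const int half = int(std::ceil(a));       // assumed kernel half-width
    const int size = 2 * half + 1;
    std::vector<std::vector<double> > kern(
        size, std::vector<double>(size));
    for (int y = -half; y <= half; y++)
        for (int x = -half; x <= half; x++) {
            double X = x * std::sin(theta) - y * std::cos(theta);
            double Y = x * std::cos(theta) + y * std::sin(theta);
            kern[y + half][x + half] =
                std::exp(-0.5 * (X * X / sigmaX2 + Y * Y / sigmaY2));
        }
    return kern;
}
\end{verbatim}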

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The name of the file can be given by the \texttt{fileOutputRecon}
parameter, but this can be ignored and \duchamp will present a name
based on the reconstruction parameters. The filename will be derived
from the input filename, with extra information detailing the
reconstruction that has been done. For example, suppose
\texttt{image.fits} has been reconstructed using a 3-dimensional
reconstruction with filter \#2, thresholded at $4\sigma$ using all
scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.
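
As an illustration of this naming convention, a sketch of assembling
the default output name from the four reconstruction parameters is
shown below; the function and argument names are hypothetical, and it
simply mirrors the convention described above rather than the actual
\duchamp code.
\begin{verbatim}
#include <sstream>
#include <string>

std::string reconFilename(const std::string &inputFile,
                          bool usedSubsection, int reconDim,
                          int filterCode, float snrRecon, int scaleMin)
{
    // Strip a trailing ".fits" from the input name, if present.
    std::string base = inputFile.substr(0, inputFile.rfind(".fits"));
    std::ostringstream name;
    name << base << (usedSubsection ? ".sub" : "")
         << ".RECON-" << reconDim << "-" << filterCode << "-"
         << snrRecon << "-" << scaleMin << ".fits";
    return name.str();
}
// reconFilename("image.fits", false, 3, 2, 4, 1) gives
// "image.RECON-3-2-4-1.fits", as in the example above.
\end{verbatim}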

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. As
for the reconstructed/residual arrays, the name of the file can be
given by the \texttt{fileOutputSmooth} parameter, but this can be
ignored and \duchamp will present a name based on the method of
smoothing used. It will be either \texttt{image.SMOOTH-1D-a.fits},
where a is replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are a,b,c. Similarly to the reconstruction case, a saved
file can be read in by setting \texttt{flagSmoothExists = true} and
either specifying a file to be read with the \texttt{smoothFile}
parameter or relying on \duchamp to find the file with the name as
given above.


\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections -- that is, \duchamp does
not fit \eg a Gaussian profile to each source. All it does is find
connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see
Section~\ref{sec-output} for details). The user can manually specify a
value (using the parameter \texttt{threshold}) for the threshold,
which will override the calculated value. Note that this option
overrides any settings of \texttt{snrCut} or FDR options (see below).

The cube can be searched in one of two ways, governed by the input
parameter \texttt{searchType}. If \texttt{searchType=spatial}, the
cube is searched one channel map at a time, using the 2-dimensional
raster-scanning algorithm of \citet{lutz80} that connects groups of
neighbouring pixels. Such an algorithm cannot be applied directly to a
3-dimensional case, as it requires that objects are completely nested
in a row (when scanning along a row, if an object finishes and another
starts, you won't get back to the first until the second is completely
finished for the row). Three-dimensional data does not have this
property, hence the need to treat the data on a 2-dimensional basis at
most.

Alternatively, if \texttt{searchType=spectral}, the searching is done
in one dimension on each individual spatial pixel's spectrum. This is
a simpler search, but there are potentially many more of them.

Although there are parameters that govern the minimum number of pixels
an object must have in the spatial, spectral and total senses
(\texttt{minPix}, \texttt{minChannels} and \texttt{minVoxels}
respectively), these criteria are not applied at this point -- see
\S\ref{sec-reject} for details.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}
\label{sec-stats}

A crucial part of the detection process (as well as the wavelet
reconstruction: \S\ref{sec-recon}) is estimating the statistics that
define the detection threshold. To determine a threshold, we need to
estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). The noise level is estimated by either the
mean or the median, and the noise spread by the rms (or the standard
deviation) or the median absolute deviation from the median
(MADFM). The median and MADFM are robust statistics, in that they are
not biased by the presence of a few pixels much brighter than the
noise.

All four statistics are calculated automatically, but the choice of
parameters that will be used is governed by the input parameter
\texttt{flagRobustStats}. This has the default value \texttt{true},
meaning the underlying mean of the noise distribution is estimated by
the median, and the underlying standard deviation is estimated by the
MADFM. In the latter case, the value is corrected, under the
assumption that the underlying distribution is Normal (Gaussian), by
dividing by 0.6744888 -- see Appendix~\ref{app-madfm} for details. If
\texttt{flagRobustStats=false}, the mean and rms are used instead.
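
A minimal sketch of these robust estimators is given below: the noise
level is taken as the median and the noise spread as the MADFM divided
by 0.6744888. The names, and the simple (non-interpolated) median for
even-sized arrays, are illustrative assumptions rather than the actual
\duchamp routines.
\begin{verbatim}
#include <vector>
#include <algorithm>
#include <cmath>

static double median(std::vector<double> data)
{
    std::size_t mid = data.size() / 2;
    std::nth_element(data.begin(), data.begin() + mid, data.end());
    return data[mid];
}

void robustStats(const std::vector<double> &pixels,
                 double &noiseLevel, double &noiseSpread)
{
    noiseLevel = median(pixels);
    std::vector<double> dev(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); i++)
        dev[i] = std::fabs(pixels[i] - noiseLevel);
    // MADFM, corrected to a Gaussian-equivalent rms.
    noiseSpread = median(dev) / 0.6744888;
}
\end{verbatim}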

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined in the
sense of original $-$ reconstruction) are used to estimate the noise
spread of the cube, since the reconstruction should pick out all
significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
  true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction. Note also that pixels flagged as BLANK or as
part of the ``Milky Way'' range of channels are ignored in the
statistics calculations.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. The ``mean''
and ``standard deviation'' here are estimated according to
\texttt{flagRobustStats}, as discussed in \S\ref{sec-stats}. In this
first case only, if the user specifies a threshold, using the
\texttt{threshold} parameter, the sigma-clipped value is ignored.

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i<P_d$ will be rejected even if $P_i \geq
i\alpha/c_N N$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N = B \times C$, where $B$ is the beam area in pixels, calculated
from the FITS header (if the correct keywords -- BMAJ, BMIN -- are not
present, the size of the beam is taken from the input parameters --
see discussion in \S\ref{sec-results}, and if these parameters are not
given, $B=1$), and $C$ is the number of neighbouring channels that can
be considered to be correlated.
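
The following is a minimal sketch of this selection of the FDR cut-off.
Given the set of $p$-values (one per pixel), it builds $c_N$ from the
number of correlated tests ($N = B \times C$), finds the largest index
$d$ satisfying the criterion above, and returns $P_d$ as the $p$-value
below which pixels are accepted. The use of the total number of
$p$-values in the denominator of the comparison, and all names, are
assumptions following the standard FDR prescription rather than the
actual \duchamp code.
\begin{verbatim}
#include <vector>
#include <algorithm>

double fdrThreshold(std::vector<double> pvalues, double alpha,
                    double beamAreaPix, int numCorrChannels)
{
    std::sort(pvalues.begin(), pvalues.end());      // P_1 <= P_2 <= ...
    int nCorr = int(beamAreaPix * numCorrChannels); // B x C
    double cN = 0.0;
    for (int i = 1; i <= nCorr; i++) cN += 1.0 / i; // c_N = sum of 1/i
    const std::size_t nTests = pvalues.size();
    std::size_t d = 0;
    for (std::size_t j = 1; j <= nTests; j++)
        if (pvalues[j - 1] < j * alpha / (cN * nTests))
            d = j;                                  // keep the largest such j
    // "Reject the null hypothesis" (accept as object pixels) for p <= P_d.
    return (d > 0) ? pvalues[d - 1] : 0.0;
}
\end{verbatim}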

The use of the FDR method is governed by the \texttt{flagFDR} flag,
which is \texttt{false} by default. To set the relevant parameters,
use \texttt{alphaFDR} to set the $\alpha$ value, and
\texttt{FDRnumCorChan} to set the $C$ value discussed above. These
have default values of 0.01 and 2 respectively.

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging, growing and rejecting detected objects}
\label{sec-merger}

\secC{Merging}

The searches described above are either 1- or 2-dimensional only. They
do not know anything about the third dimension that is likely to be
present. To build up 3D sources, merging of detections must be
done. This is done via an algorithm that matches objects judged to be
``close'', according to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi
surveys). This method can be selected by setting the parameter
\texttt{flagAdjacent=true} in the parameter file. The velocity
thresholding is always done with the \texttt{threshVelocity} test.
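
A minimal sketch of this closeness test for a pair of detections is
given below. Depending on \texttt{flagAdjacent}, the spatial test
requires either adjacency or a separation within
\texttt{threshSpatial}, while the spectral test always uses
\texttt{threshVelocity}. The Euclidean form of the spatial separation,
and all names and data structures, are illustrative assumptions rather
than the actual \duchamp classes.
\begin{verbatim}
#include <vector>
#include <cmath>
#include <cstdlib>

struct Voxel { int x, y, z; };

bool areClose(const std::vector<Voxel> &obj1,
              const std::vector<Voxel> &obj2,
              bool flagAdjacent, float threshSpatial, float threshVelocity)
{
    for (std::size_t i = 0; i < obj1.size(); i++) {
        for (std::size_t j = 0; j < obj2.size(); j++) {
            const Voxel &v1 = obj1[i], &v2 = obj2[j];
            bool spatialMatch;
            if (flagAdjacent)
                spatialMatch = (std::abs(v1.x - v2.x) <= 1) &&
                               (std::abs(v1.y - v2.y) <= 1);
            else {
                float dx = float(v1.x - v2.x);
                float dy = float(v1.y - v2.y);
                spatialMatch =
                    (std::sqrt(dx * dx + dy * dy) <= threshSpatial);
            }
            bool spectralMatch = (std::abs(v1.z - v2.z) <= threshVelocity);
            if (spatialMatch && spectralMatch) return true;
        }
    }
    return false;
}
\end{verbatim}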


\secC{Stages of merging}

This merging can be done in two stages. The default behaviour is for
each new detection to be compared with those sources already detected,
and for it to be merged with the first one judged to be close. No
other examination of the list is done at this point.

This step can be turned off by setting
\texttt{flagTwoStageMerging=false}, so that new detections are simply
added to the end of the list, leaving all merging to be done in the
second stage.

The second, main stage of merging is more thorough. Once the searching
is completed, the list is iterated through, looking at each pair of
objects, and merging appropriately. The merged objects are then
included in the examination, to see if a merged pair is suitably close
to a third.

\secC{Growing}

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold and not already part of a detected object. This threshold
should be lower than the one used for the initial detection, but above
the noise level, so that faint pixels are only detected when they are
close to a bright pixel. This threshold is specified via one of two
input parameters. It can be given in terms of the noise statistics via
\texttt{growthCut} (which has a default value of $3\sigma$), or it can
be directly given via \texttt{growthThreshold}. Note that if you have
given the detection threshold with the \texttt{threshold} parameter,
the growth threshold \textbf{must} be given with
\texttt{growthThreshold}.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that should
be merged at the new lower threshold (\ie they have grown into each
other).
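
A minimal sketch of this growing stage for a single object is given
below. Starting from the object's voxels, neighbouring voxels that lie
above the (lower) growth threshold and are not already claimed by a
detection are added, and the search continues from the newly added
voxels. For brevity the sketch only grows to directly adjacent voxels,
and it assumes the \texttt{inObject} mask already flags the voxels of
all current detections; these choices, and all names, are illustrative
rather than the actual \duchamp implementation.
\begin{verbatim}
#include <vector>
#include <queue>

struct Vox { int x, y, z; };

void growObject(std::vector<Vox> &object,
                const std::vector<std::vector<std::vector<float> > > &cube,
                std::vector<std::vector<std::vector<bool> > > &inObject,
                float growthThreshold)
{
    std::queue<Vox> toCheck;
    for (std::size_t i = 0; i < object.size(); i++)
        toCheck.push(object[i]);
    int nz = int(cube.size());
    int ny = int(cube[0].size());
    int nx = int(cube[0][0].size());
    while (!toCheck.empty()) {
        Vox v = toCheck.front(); toCheck.pop();
        for (int dz = -1; dz <= 1; dz++)
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    Vox n = {v.x + dx, v.y + dy, v.z + dz};
                    if (n.x < 0 || n.x >= nx || n.y < 0 || n.y >= ny ||
                        n.z < 0 || n.z >= nz) continue;
                    if (inObject[n.z][n.y][n.x]) continue; // claimed
                    if (cube[n.z][n.y][n.x] > growthThreshold) {
                        inObject[n.z][n.y][n.x] = true;
                        object.push_back(n);
                        toCheck.push(n);   // grow from here too
                    }
                }
    }
}
\end{verbatim}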

\secC{Rejecting}
\label{sec-reject}

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (enabling the removal of any spurious
single-channel spikes that may be present), and a minimum number of
spatial pixels. These numbers, as for the original detection step, are
set with the \texttt{minChannels}, \texttt{minPix} and
\texttt{minVoxels} parameters. The channel requirement means a source
must have at least one set of \texttt{minChannels} consecutive
channels to be accepted. The spatial pixels (\texttt{minPix})
requirement refers to distinct spatial pixels (which are possibly in
different channels), while the voxels requirement refers to the total
number of voxels detected.
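
A minimal sketch of this acceptance test is given below: an object must
contain at least \texttt{minVoxels} voxels in total, cover at least
\texttt{minPix} distinct spatial pixels, and have a run of at least
\texttt{minChannels} consecutive channels. The names and data
structures are illustrative, not \duchamp's own.
\begin{verbatim}
#include <vector>
#include <set>
#include <utility>
#include <algorithm>

struct Pix { int x, y, z; };

bool acceptObject(const std::vector<Pix> &voxels,
                  int minPix, int minChannels, int minVoxels)
{
    if (int(voxels.size()) < minVoxels) return false;

    std::set<std::pair<int,int> > spatial;   // distinct (x,y) positions
    std::set<int> channels;                  // distinct channel numbers
    for (std::size_t i = 0; i < voxels.size(); i++) {
        spatial.insert(std::make_pair(voxels[i].x, voxels[i].y));
        channels.insert(voxels[i].z);
    }
    if (int(spatial.size()) < minPix) return false;

    // Longest run of consecutive channels occupied by the object
    // (std::set iterates in increasing order).
    int bestRun = 0, run = 0, previous = -10;
    for (std::set<int>::const_iterator z = channels.begin();
         z != channels.end(); ++z) {
        run = (*z == previous + 1) ? run + 1 : 1;
        bestRun = std::max(bestRun, run);
        previous = *z;
    }
    return bestRun >= minChannels;
}
\end{verbatim}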

It is possible to do this rejection stage before the main merging and
growing stage. This could be done to remove narrow (hopefully
spurious) sources from the list before growing them, to reduce the
number of false positives in the final list. This mode can be selected
by setting the input parameter \texttt{flagRejectBeforeMerge=true} --
caution is urged if you use this in conjunction with
\texttt{flagTwoStageMerging=false}.