% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information is. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.


Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube without examining the cube
to obtain its actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.
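The range arithmetic behind these subsection forms can be sketched in
a few lines of C++. This is an illustrative fragment only, not
\duchamp's actual parser; the names \texttt{AxisRange},
\texttt{parseAxisToken} and \texttt{parseSubsection} are invented
here, and any step value (\texttt{:dx}) is simply ignored, as
described above.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One axis range, 1-based inclusive, as in the subsection string.
struct AxisRange { long first, last; };

// Parse a single axis token of a subsection string, given the axis
// length 'n'.  Handles the three forms discussed above:
//   "*"      -> the full range 1:n
//   "x1:x2"  -> explicit range (any ":dx" step is ignored)
//   "t"      -> trim 't' pixels from both ends, i.e. (t+1):(n-t)
AxisRange parseAxisToken(const std::string &tok, long n)
{
    if (tok == "*") return {1, n};
    size_t colon = tok.find(':');
    if (colon == std::string::npos) {
        long t = std::stol(tok);
        return {t + 1, n - t};
    }
    long x1 = std::stol(tok.substr(0, colon));
    long x2 = std::stol(tok.substr(colon + 1)); // stol stops at any 2nd ':'
    return {x1, x2};
}

// Split "[tok,tok,...]" on commas and parse each token.
std::vector<AxisRange> parseSubsection(const std::string &sub,
                                       const std::vector<long> &axisLengths)
{
    std::vector<AxisRange> ranges;
    std::string inner = sub.substr(1, sub.size() - 2);  // strip [ ]
    std::istringstream ss(inner);
    std::string tok;
    size_t axis = 0;
    while (std::getline(ss, tok, ','))
        ranges.push_back(parseAxisToken(tok, axisLengths[axis++]));
    return ranges;
}
```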


\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edges. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.
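The row-by-row stopping rule can be sketched as follows. This is a
simplified two-dimensional illustration, not \duchamp's code;
\texttt{leadingBlankRows} is a hypothetical helper, and the same logic
applied to trailing rows and to columns (and, in \duchamp itself, to
the full cube) gives the trim amounts that are recorded and restored
after searching.

```cpp
#include <cassert>
#include <vector>

// Number of leading rows that consist entirely of the BLANK value.
// Trimming stops at the first row containing a valid (non-BLANK)
// pixel, which is why some BLANK pixels can survive the trim.
int leadingBlankRows(const std::vector<std::vector<float>> &im, float blank)
{
    int count = 0;
    for (const auto &row : im) {
        bool allBlank = true;
        for (float v : row)
            if (v != blank) { allBlank = false; break; }
        if (!allBlank) break;   // stop at the first row with a valid pixel
        ++count;
    }
    return count;
}
```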

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at some standard threshold (at $8\sigma$ above
the mean). The baseline determined should thus be representative of
the true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}
\label{sec-MW}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{maxMW} and \texttt{minMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, with the trade-off that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
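In one dimension, the steps above can be sketched as follows. This is
a simplified illustration, not \duchamp's implementation: the
B3-spline coefficients used here are one of the available filter
choices, edges are handled by simple clamping, and the per-scale
adjustment of the threshold (Appendix~\ref{app-scaling}) is omitted,
so \texttt{thresh} is applied directly to the wavelet coefficients.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// One-dimensional a trous reconstruction sketch (see enumerated steps).
std::vector<double> atrousReconstruct(std::vector<double> input,
                                      int nScales, double thresh)
{
    const double filt[5] = {1/16., 1/4., 3/8., 1/4., 1/16.};  // B3 spline
    const int n = (int)input.size();
    std::vector<double> recon(n, 0.0);          // step 1: zero everywhere
    int spacing = 1;
    for (int scale = 1; scale <= nScales; ++scale) {
        // step 2: convolve with the filter, with holes of size 'spacing'
        std::vector<double> smooth(n, 0.0);
        for (int i = 0; i < n; ++i)
            for (int j = -2; j <= 2; ++j) {
                int k = std::max(0, std::min(n - 1, i + j * spacing));
                smooth[i] += filt[j + 2] * input[k];
            }
        // steps 3-4: wavelet coefficients; keep only significant ones
        for (int i = 0; i < n; ++i) {
            double w = input[i] - smooth[i];
            if (std::fabs(w) > thresh) recon[i] += w;
        }
        input = smooth;      // step 6: convolved array becomes the input
        spacing *= 2;        // step 5: double the filter separation
    }
    for (int i = 0; i < n; ++i) recon[i] += input[i];  // step 8: DC offset
    return recon;
}
```

Setting \texttt{thresh} to zero makes the sum of the wavelet terms and
the final smoothed array telescope back to the input exactly,
illustrating the redundancy of the transform noted below.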

The range of scales at which the selection of wavelet coefficients is
made is governed by the \texttt{scaleMin} and \texttt{scaleMax}
parameters. The minimum scale used is given by \texttt{scaleMin},
where the default value is 1 (the first scale). This parameter is
useful if you want to ignore the highest-frequency features
(\eg high-frequency noise that might be present). Normally the
maximum scale is calculated from the size of the input array, but it
can be specified by using \texttt{scaleMax}. A value $\le0$ will
result in the use of the calculated value, as will a value of
\texttt{scaleMax} greater than the calculated value. Use of these two
parameters allows searching for features of a particular scale
size, for instance narrow absorption features.

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the eight stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation (or rms) of
the background noise distribution. The methods used to estimate these
quantities are detailed in \S\ref{sec-stats} -- the default behaviour
is to use robust estimators, to avoid biasing due to bright pixels.

%These statistics are estimated using
%robust methods, to avoid corruption by strong outlying points. The
%mean of the distribution is actually estimated by the median, while
%the median absolute deviation from the median (MADFM) is calculated
%and corrected assuming Gaussianity to estimate the underlying standard
%deviation $\sigma$. The Gaussianity (or Normality) assumption is
%critical, as the MADFM does not give the same value as the usual rms
%or standard deviation value -- for a Normal distribution
%$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
%for different distributions. Since this ratio is corrected for, the
%user need only think in the usual multiples of the rms when setting
%\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
%this value.

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). Several other
parameters can also be altered to affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} means the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{scaleMin = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube.  This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that if both the reconstruction and the smoothing options
are requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos(\frac{\pi x}{a})\right) &|x| \leq (a+1)/2\\
   0                                               &|x| > (a+1)/2,
  \end{cases} \qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
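The coefficient formula above translates directly into code. The
function name \texttt{hanningCoeff} is invented here for illustration;
it evaluates $c(x)$ for a given (odd) width $a$.

```cpp
#include <cassert>
#include <cmath>

// Hanning filter coefficient c(x) for width a, following the
// definition above: zero outside |x| <= (a+1)/2, and
// (1/2)(1 + cos(pi*x/a)) inside.
double hanningCoeff(int x, int a)
{
    const double pi = std::acos(-1.0);
    if (2 * std::abs(x) > a + 1) return 0.0;   // |x| > (a+1)/2
    return 0.5 * (1.0 + std::cos(pi * x / a));
}
```

For \texttt{hanningWidth = 3}, for example, the non-zero coefficients
are $c(0)=1$, $c(\pm1)=0.75$ and $c(\pm2)=0.25$.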

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full width at half maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
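The kernel expressions above can be evaluated directly; the helper
name \texttt{gaussKernel} is invented here for illustration, with
$\theta$ in radians.

```cpp
#include <cassert>
#include <cmath>

// Value of the elliptical Gaussian smoothing kernel at offset (x,y)
// from its centre, for FWHMs a (major) and b (minor) and position
// angle theta, following the expressions above.
double gaussKernel(double x, double y, double a, double b, double theta)
{
    double X   = x * std::sin(theta) - y * std::cos(theta);
    double Y   = x * std::cos(theta) + y * std::sin(theta);
    double sX2 = (a / 2) * (a / 2) / (2 * std::log(2.0));  // sigma_X^2
    double sY2 = (b / 2) * (b / 2) / (2 * std::log(2.0));  // sigma_Y^2
    return std::exp(-0.5 * (X * X / sX2 + Y * Y / sY2));
}
```

A quick sanity check on the $2\ln2$ factors: for a circular kernel the
value at a radius of half the FWHM is exactly $0.5$, as it should be.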

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.
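The naming scheme just described can be sketched as a small helper.
This is an illustrative fragment, not \duchamp's code;
\texttt{reconFilename} and its signature are invented here, taking the
four \atrous parameters in the order they appear in the filename.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Construct the output name for a saved reconstructed array, following
// the scheme described above: the four atrous parameters are appended
// after "RECON", and ".sub" flags use of an input subsection.
std::string reconFilename(const std::string &input, bool usedSubsection,
                          int reconDim, int filterCode, float snrRecon,
                          int scaleMin)
{
    std::string base = input.substr(0, input.rfind(".fits"));
    std::ostringstream name;
    name << base << (usedSubsection ? ".sub" : "")
         << ".RECON-" << reconDim << "-" << filterCode
         << "-" << snrRecon << "-" << scaleMin << ".fits";
    return name.str();
}
```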

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is
replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are \texttt{a,b,c}. Similarly to the
reconstruction case, a saved file can be read in by setting
\texttt{flagSmoothExists = true} and either specifying a file to be
read with the \texttt{smoothFile} parameter or relying on \duchamp to
find the file with the name as given above.


\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections -- that is, \duchamp does
not fit \eg a Gaussian profile to each source. All it does is find
connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see
Section~\ref{sec-output} for details). The user can manually specify a
value (using the parameter \texttt{threshold}) for the threshold,
which will override the calculated value. Note that this only applies
for the first of the two cases discussed below -- the FDR case ignores
any manually-set threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row: when scanning along a row, if one
object finishes and another starts, you do not return to the first
until the second is completely finished for that
row. Three-dimensional data does not have this property, hence the
need to treat the data on a 2-dimensional basis.
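The grouping a single channel-map search produces can be illustrated
with the following sketch. Note that this is \emph{not} the Lutz
raster-scan algorithm itself, but a simpler breadth-first flood fill
that yields the same connected groups; \texttt{labelObjects} is a
hypothetical helper written for this illustration.

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Label 4-connected groups of pixels lying above 'thresh' in a single
// channel map.  Returns the number of groups; 'label' receives a map
// of group IDs (0 = below threshold).
int labelObjects(const std::vector<std::vector<float>> &chanMap,
                 float thresh, std::vector<std::vector<int>> &label)
{
    int ny = chanMap.size(), nx = chanMap[0].size(), nObj = 0;
    label.assign(ny, std::vector<int>(nx, 0));
    for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x) {
            if (chanMap[y][x] <= thresh || label[y][x] != 0) continue;
            ++nObj;                       // found a new object: flood it
            std::queue<std::pair<int,int>> q;
            q.push({y, x});
            label[y][x] = nObj;
            while (!q.empty()) {
                auto [cy, cx] = q.front(); q.pop();
                const int dy[4] = {1, -1, 0, 0}, dx[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int yy = cy + dy[d], xx = cx + dx[d];
                    if (yy < 0 || yy >= ny || xx < 0 || xx >= nx) continue;
                    if (chanMap[yy][xx] > thresh && label[yy][xx] == 0) {
                        label[yy][xx] = nObj;
                        q.push({yy, xx});
                    }
                }
            }
        }
    return nObj;
}
```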

Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}
\label{sec-stats}

A crucial part of the detection process (as well as the wavelet
reconstruction: \S\ref{sec-recon}) is estimating the statistics that
define the detection threshold. To determine a threshold, we need to
estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). The noise level is estimated by either the
mean or the median, and the noise spread by the rms (or the standard
deviation) or the median absolute deviation from the median
(MADFM). The median and MADFM are robust statistics, in that they are
not biased by the presence of a few pixels much brighter than the
noise.

All four statistics are calculated automatically, but the choice of
parameters that will be used is governed by the input parameter
\texttt{flagRobustStats}. This has the default value \texttt{true},
meaning the underlying mean of the noise distribution is estimated by
the median, and the underlying standard deviation is estimated by the
MADFM. In the latter case, the value is corrected, under the
assumption that the underlying distribution is Normal (Gaussian), by
dividing by 0.6744888 -- see Appendix~\ref{app-madfm} for details. If
\texttt{flagRobustStats=false}, the mean and rms are used instead.
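The robust estimators can be sketched as follows. This is an
illustrative fragment rather than \duchamp's code; \texttt{median} and
\texttt{robustStats} are names invented here.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Median via partial sorting (the vector is taken by value so the
// caller's data is untouched).
double median(std::vector<double> v)
{
    size_t n = v.size();
    std::nth_element(v.begin(), v.begin() + n / 2, v.end());
    double m = v[n / 2];
    if (n % 2 == 0) {
        std::nth_element(v.begin(), v.begin() + n / 2 - 1, v.end());
        m = 0.5 * (m + v[n / 2 - 1]);
    }
    return m;
}

// Robust noise statistics as described above: the median estimates the
// noise level, and the MADFM, divided by 0.6744888, estimates the
// spread (sigma) under the Gaussian assumption.
void robustStats(const std::vector<double> &data,
                 double &level, double &spread)
{
    level = median(data);
    std::vector<double> dev(data.size());
    for (size_t i = 0; i < data.size(); ++i)
        dev[i] = std::fabs(data[i] - level);
    spread = median(dev) / 0.6744888;   // correct MADFM to a Gaussian sigma
}
```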

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
  true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction. Note also that pixels flagged as BLANK or as
part of the ``Milky Way'' range of channels are ignored in the
statistics calculations.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. The ``mean''
and ``standard deviation'' here are estimated according to
\texttt{flagRobustStats}, as discussed in \S\ref{sec-stats}. In this
first case only, if the user specifies a threshold using the
\texttt{threshold} parameter, the sigma-clipped value is ignored.

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So any $P_i<P_d$ will be rejected, even if $P_i \geq
i\alpha/c_N N$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated. In the case of a two-dimensional image, we just have
$N=B$.
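The selection of the FDR cutoff can be sketched as follows. This is an
illustrative fragment, not \duchamp's implementation;
\texttt{fdrCutoff} is a name invented here, and it returns the
$p$-value $P_d$ at or below which pixels are accepted as object
pixels.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Given the p-values of all pixels, return the FDR cutoff P_d:
// sort the p-values, find the largest j with P_j < j*alpha/(cN*Ntests),
// and accept everything at or below that p-value.  'Ncorr' is the
// number of correlated pixels, giving cN = sum_{i=1}^{Ncorr} 1/i.
double fdrCutoff(std::vector<double> pvals, double alpha, int Ncorr)
{
    double cN = 0.0;
    for (int i = 1; i <= Ncorr; ++i) cN += 1.0 / i;
    std::sort(pvals.begin(), pvals.end());
    double cutoff = 0.0;
    for (size_t j = 1; j <= pvals.size(); ++j)
        if (pvals[j - 1] < j * alpha / (cN * pvals.size()))
            cutoff = pvals[j - 1];      // the largest such j wins
    return cutoff;
}
```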

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging and growing detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will have
many repeated detections of a given object -- for instance, spectral
detections in adjacent pixels of the same object and/or spatial
detections in neighbouring channels. These are then combined in an
algorithm that matches all objects judged to be ``close'', according
to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi surveys).
This method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as in the
first option.

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is a possible input parameter (\texttt{growthCut}), with a
default value of $2\sigma$.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (enabling the removal of any spurious
single-channel spikes that may be present), and a minimum number of
spatial pixels. These numbers, as for the original detection step, are
set with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.