% source: trunk/docs/executionFlow.tex @ 383
% Last change on this file since 383 was 364, checked in by MatthewWhiting, 17 years ago:
%   Updated User Guide to account for the recent changes.
% File size: 30.9 KB
% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information is. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{cfitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.


Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.
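
The rules above can be illustrated with a small Python sketch. This is
not \duchamp's actual parser (that lives in its C++ code); it simply
interprets the three token forms -- a range, an asterisk, and a single
trim value -- and discards any step values, as \duchamp itself does:

```python
def parse_subsection(spec):
    """Interpret a cfitsio-style subsection string such as '[x1:x2:dx,y1:y2,*]'.

    Returns one entry per axis: '*' for a full axis, a (start, end) tuple of
    1-based inclusive pixel limits, or ('trim', n) for the single-number form
    that trims n pixels from both ends. Step values are discarded.
    """
    axes = []
    for token in spec.strip().strip('[]').split(','):
        if token == '*':
            axes.append('*')
        elif ':' in token:
            parts = token.split(':')
            # parts[2], the step, is ignored, mirroring Duchamp's behaviour
            axes.append((int(parts[0]), int(parts[1])))
        else:
            axes.append(('trim', int(token)))
    return axes
```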


\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).
\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edge. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.
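
The trimming logic can be sketched as follows, here for a single 2D
plane. This is an illustrative rendering of the behaviour described
above, not \duchamp's own code:

```python
def trim_limits(blank):
    """Given a 2D mask (True = BLANK pixel), return the row and column
    slices that drop entirely-BLANK rows/columns from each edge,
    stopping as soon as a row or column containing a non-BLANK pixel
    is reached -- so the trimmed image stays rectangular, and interior
    BLANKs (from a non-rectangular valid region) survive the trim."""
    nrows, ncols = len(blank), len(blank[0])
    row_valid = [not all(row) for row in blank]
    col_valid = [not all(blank[i][j] for i in range(nrows))
                 for j in range(ncols)]
    r0 = row_valid.index(True)
    r1 = nrows - row_valid[::-1].index(True)
    c0 = col_valid.index(True)
    c1 = ncols - col_valid[::-1].index(True)
    return (r0, r1), (c0, c1)
```

Duchamp records these offsets so that quoted pixel positions can be
restored to the original frame once searching is complete.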

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at some standard threshold (at $8\sigma$ above
the mean). The baseline determined should thus be representative of
the true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.
\secC{Ignoring bright Milky Way emission}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{maxMW} and \texttt{minMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process; the tradeoff is that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
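
The steps above can be sketched in one dimension. The following Python
fragment is illustrative only: it assumes the B3-spline filter (one of
the options offered via \texttt{filterCode}), reflects the array at its
boundaries, takes a caller-supplied absolute threshold in place of the
\texttt{snrRecon}-based scale-adjusted one, and makes a single pass
rather than iterating on the residual:

```python
# B3-spline filter coefficients (assumed here; Duchamp offers several filters)
FILTER = [1/16., 1/4., 3/8., 1/4., 1/16.]

def atrous_1d(data, nscales, threshold):
    recon = [0.0] * len(data)            # step 1: reconstruction set to zero
    work = list(data)
    for scale in range(1, nscales + 1):
        sep = 2 ** (scale - 1)           # separation doubles each scale: the "holes"
        smooth = []
        for i in range(len(work)):
            s = 0.0
            for k, c in enumerate(FILTER):
                j = i + (k - 2) * sep
                j = abs(j)               # reflect at the lower boundary
                if j >= len(work):       # reflect at the upper boundary
                    j = 2 * (len(work) - 1) - j
                s += c * work[j]
            smooth.append(s)
        for i in range(len(work)):       # keep only significant wavelet terms
            w = work[i] - smooth[i]
            if abs(w) > threshold:
                recon[i] += w
        work = smooth                    # convolved array becomes the next input
    # add the final smoothed array: the "DC offset"
    return [r + s for r, s in zip(recon, work)]
```

With no thresholding, the sum of all wavelet arrays plus the final
smooth telescopes back to the input exactly -- the redundancy of the
transform.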

The range of scales at which the selection of wavelet coefficients is
made is governed by the \texttt{scaleMin} and \texttt{scaleMax}
parameters. The minimum scale used is given by \texttt{scaleMin},
where the default value is 1 (the first scale). This parameter is
useful if you want to ignore the highest-frequency features
(e.g. high-frequency noise that might be present). Normally the
maximum scale is calculated from the size of the input array, but it
can be specified by using \texttt{scaleMax}. A value $\le0$ will
result in the use of the calculated value, as will a value of
\texttt{scaleMax} greater than the calculated value. Use of these two
parameters can allow searching for features of a particular scale
size, for instance searching for narrow absorption features.

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.
\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation of the
background noise distribution. These statistics are estimated using
robust methods, to avoid corruption by strong outlying points. The
mean of the distribution is actually estimated by the median, while
the median absolute deviation from the median (MADFM) is calculated
and corrected assuming Gaussianity to estimate the underlying standard
deviation $\sigma$. The Gaussianity (or Normality) assumption is
critical, as the MADFM does not give the same value as the usual rms
or standard deviation value -- for a Normal distribution
$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
for different distributions. Since this ratio is corrected for, the
user need only think in the usual multiples of the rms when setting
\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
this value.
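
A minimal sketch of these two robust estimators (illustrative only;
\duchamp computes them within its own C++ routines):

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def robust_stats(values):
    """Return (noise level, noise spread): the median, and the MADFM
    corrected by the Gaussian ratio MADFM = 0.6744888 * sigma."""
    level = median(values)
    madfm = median([abs(v - level) for v in values])
    return level, madfm / 0.6744888
```

Note how the single outlier in, say, \texttt{[1, 2, 3, 4, 100]} leaves
both estimators essentially untouched, which is the point of using them.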

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can be altered as well that affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{scaleMin = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.
\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube.  This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) &|x| \leq (a+1)/2\\
   0                                               &|x| > (a+1)/2
  \end{cases}
  ,\quad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
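
The filter definition above translates directly into code. In this
sketch the normalisation of the coefficient sum and the truncation at
the spectrum edges are our assumptions (the text does not specify them):

```python
import math

def hanning_coeffs(a):
    """Coefficients c(x) for |x| <= (a+1)/2, per the definition above.
    An even width is incremented by one, as Duchamp does."""
    if a % 2 == 0:
        a += 1
    half = (a + 1) // 2
    return [0.5 * (1 + math.cos(math.pi * x / a))
            for x in range(-half, half + 1)]

def hanning_smooth(spectrum, a):
    """Smooth one spectrum: each output channel is the coefficient-weighted
    sum of its neighbours, normalised (edges truncated -- an assumption)."""
    coeffs = hanning_coeffs(a)
    half = len(coeffs) // 2
    out = []
    for i in range(len(spectrum)):
        s = w = 0.0
        for k, c in enumerate(coeffs):
            j = i + k - half
            if 0 <= j < len(spectrum):
                s += c * spectrum[j]
                w += c
        out.append(s / w)
    return out
```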

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
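
The expressions above can be coded directly. In this sketch $\theta$ is
taken in radians, and omitting $b$ reproduces the circular default
(\texttt{kernMin} set to \texttt{kernMaj}, \texttt{kernPA} to zero):

```python
import math

def gaussian_kernel_value(x, y, a, b=None, theta=0.0):
    """Kernel value k(x, y) for FWHMs a (major) and b (minor) and
    position angle theta, following the expressions above."""
    if b is None:                 # circular Gaussian: only kernMaj supplied
        b, theta = a, 0.0
    X = x * math.sin(theta) - y * math.cos(theta)
    Y = x * math.cos(theta) + y * math.sin(theta)
    sx2 = (a / 2.0) ** 2 / (2.0 * math.log(2))
    sy2 = (b / 2.0) ** 2 / (2.0 * math.log(2))
    return math.exp(-0.5 * (X ** 2 / sx2 + Y ** 2 / sy2))
```

As a sanity check, the kernel falls to exactly one half at an offset of
half the FWHM, which is what the $2\ln2$ factors in $\sigma_X^2$ and
$\sigma_Y^2$ guarantee.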

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is
replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where \texttt{a,b,c} are replaced
by the Gaussian kernel parameters. Similarly to the
reconstruction case, a saved file can be read in by setting
\texttt{flagSmoothExists = true} and either specifying a file to be
read with the \texttt{smoothFile} parameter or relying on \duchamp to
find the file with the name as given above.
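
The \texttt{RECON} naming scheme can be reproduced as follows. This is
an illustrative reconstruction of the convention described above, not
\duchamp's own code, and details such as the formatting of a
non-integer threshold are our assumptions:

```python
def recon_filename(infile, dim, filter_code, snr, minscale, subsection=False):
    """Build the reconstructed-cube filename from the four atrous
    parameters, with the '.sub' marker when a subsection was used."""
    base = infile[:-5] if infile.endswith('.fits') else infile
    sub = '.sub' if subsection else ''
    return '%s%s.RECON-%d-%d-%g-%d.fits' % (base, sub, dim, filter_code,
                                            snr, minscale)
```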


\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections -- that is, \duchamp does
not fit \eg a Gaussian profile to each source. All it does is find
connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see
Section~\ref{sec-output} for details). The user can manually specify a
value (using the parameter \texttt{threshold}) for the threshold,
which will override the calculated value. Note that this only applies
for the first of the two cases discussed below -- the FDR case ignores
any manually-set threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row (when scanning along a row, if an
object finishes and another starts, you won't get back to the first
until the second is completely finished for the
row). Three-dimensional data does not have this property, hence the
need to treat the data on a 2-dimensional basis.
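
\duchamp uses Lutz's single-pass raster-scan algorithm for this step;
as a simple stand-in, the same channel-map labelling can be sketched
with a breadth-first flood fill, which yields the same connected groups
of bright pixels (just less efficiently):

```python
from collections import deque

def find_objects(channel_map, threshold):
    """Return the 8-connected groups of pixels above threshold in one
    channel map, as lists of (x, y) positions. Stand-in for Lutz's
    raster-scan algorithm, not a rendering of it."""
    ny, nx = len(channel_map), len(channel_map[0])
    seen = [[False] * nx for _ in range(ny)]
    objects = []
    for y0 in range(ny):
        for x0 in range(nx):
            if seen[y0][x0] or channel_map[y0][x0] <= threshold:
                continue
            obj, queue = [], deque([(x0, y0)])
            seen[y0][x0] = True
            while queue:
                x, y = queue.popleft()
                obj.append((x, y))
                for dx in (-1, 0, 1):          # 8-connected neighbours
                    for dy in (-1, 0, 1):
                        px, py = x + dx, y + dy
                        if (0 <= px < nx and 0 <= py < ny
                                and not seen[py][px]
                                and channel_map[py][px] > threshold):
                            seen[py][px] = True
                            queue.append((px, py))
            objects.append(obj)
    return objects
```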

Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}

A crucial part of the detection process is estimating the statistics
that define the detection threshold. To determine a threshold, we need
to estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). For both cases, we again use robust methods,
using the median and MADFM.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. As before, the
value of the standard deviation is estimated by the MADFM, and
corrected by the ratio derived in Appendix~\ref{app-madfm}.
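
Putting the sigma-clipping rule together with the robust statistics
described earlier gives, as a sketch:

```python
def detection_threshold(values, snr_cut):
    """Sigma-clipping threshold: median noise level plus snrCut
    corrected-MADFM noise spreads. Illustrative only."""
    s = sorted(values)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    d = sorted(abs(v - med) for v in values)
    madfm = d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])
    return med + snr_cut * madfm / 0.6744888   # MADFM -> Gaussian sigma
```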

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i<P_d$ will be rejected even if $P_i \geq
i\alpha/c_N N$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated. In the case of a two-dimensional image, we just have
$N=B$.
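
The recipe above can be sketched as follows. One reading is assumed
here: $c_N$ is built from the number of correlated tests, while the $N$
in the denominator is taken as the number of p-values supplied; this is
an interpretation for illustration, not a statement of \duchamp's
implementation:

```python
def fdr_threshold(pvalues, alpha, n_corr):
    """Return P_d: sort the p-values, find the largest j with
    P_j < j * alpha / (c_N * N), and accept as object pixels all
    pixels with p-value <= P_d. n_corr is the number of correlated
    tests (2B for a cube); None means nothing is accepted."""
    c_n = sum(1.0 / i for i in range(1, n_corr + 1))
    p = sorted(pvalues)
    d = 0
    for j, pj in enumerate(p, start=1):
        if pj < j * alpha / (c_n * len(p)):
            d = j
    return p[d - 1] if d else None
```

Note that, as in the text, every $P_i \le P_d$ is accepted, even if that
particular $P_i$ fails its own $i\alpha/c_N N$ comparison.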

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging and growing detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will have
many repeated detections of a given object -- for instance, spectral
detections in adjacent pixels of the same object and/or spatial
detections in neighbouring channels. These are then combined in an
algorithm that matches all objects judged to be ``close'', according
to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi surveys).
This method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as the first
option.
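
The two ``closeness'' tests can be sketched for lists of $(x,y,z)$
voxels as below. Treating \texttt{threshSpatial} as a Euclidean pixel
distance is an assumption of this sketch, not something the text
specifies:

```python
def should_merge(obj1, obj2, thresh_spatial, thresh_velocity, flag_adjacent):
    """True if some voxel pair is within thresh_velocity channels and
    either spatially adjacent (flag_adjacent) or within thresh_spatial
    pixels (assumed Euclidean distance)."""
    for x1, y1, z1 in obj1:
        for x2, y2, z2 in obj2:
            if abs(z1 - z2) > thresh_velocity:
                continue                     # channel separation too large
            if flag_adjacent:
                if abs(x1 - x2) <= 1 and abs(y1 - y2) <= 1:
                    return True
            else:
                if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= thresh_spatial:
                    return True
    return False
```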

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is a possible input parameter (\texttt{growthCut}), with a
default value of $2\sigma$.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (enabling the removal of any spurious
single-channel spikes that may be present), and a minimum number of
spatial pixels. These numbers, as for the original detection step, are
set with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.
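
The final acceptance test can be sketched as follows (illustrative
only; \duchamp applies these criteria internally):

```python
def accept(voxels, min_pix, min_channels):
    """Accept an object (a list of (x, y, z) voxels) only if it has at
    least min_pix distinct spatial pixels and at least one run of
    min_channels consecutive channels."""
    spatial = {(x, y) for x, y, z in voxels}
    if len(spatial) < min_pix:
        return False
    channels = sorted({z for x, y, z in voxels})
    run = best = 1
    for prev, cur in zip(channels, channels[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best >= min_channels
```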