% -----------------------------------------------------------------------
% executionFlow.tex: Section detailing each of the main algorithms
%                    used by Duchamp.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information is. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.
\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
  pixel locations to quantities such as position on the sky,
  frequency, velocity, and so on.} information for the cube is also
obtained from the FITS header by \textsc{wcslib} functions
\citep{greisen02, calabretta02,greisen06}, and this information, in
the form of a \texttt{wcsprm} structure, is also stored in the same
class. See Sec.~\ref{sec-wcs} for more details.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{cfitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.


Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.
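
To illustrate how these two forms map onto pixel ranges, the following
C++ fragment sketches the interpretation of a single axis token of the
subsection string. It is illustrative only -- the function and variable
names are invented and it is not the code used by \duchamp (and any
step value in an explicit range is simply ignored here, as described
above).
\begin{verbatim}
// Illustrative only: convert one axis token of a subsection string
// ("*", "x1:x2[:dx]" or a single trim width "n") into a 1-based,
// inclusive pixel range for an axis of length axisLength.
#include <string>
#include <utility>
#include <cstdlib>

std::pair<long,long> axisRange(const std::string &token, long axisLength)
{
  if (token == "*")                      // full range of the axis
    return {1, axisLength};
  std::size_t colon = token.find(':');
  if (colon != std::string::npos) {      // explicit "x1:x2" range
    long x1 = std::atol(token.substr(0, colon).c_str());
    long x2 = std::atol(token.substr(colon + 1).c_str());
    return {x1, x2};                     // any ":dx" step is ignored
  }
  long trim = std::atol(token.c_str());  // "n": trim n pixels from each end
  return {1 + trim, axisLength - trim};
}
\end{verbatim}
For a 100-pixel axis, \texttt{*} gives the range 1--100, \texttt{2:98}
gives 2--98, and \texttt{5} gives 6--95.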

Typically, the units of pixel brightness are given by the FITS file's
BUNIT keyword. However, this may often be unwieldy (for instance, the
units are Jy/beam, but the values are around a few mJy/beam). It is
therefore possible to nominate new units, to which the pixel values
will be converted, by using the \texttt{newFluxUnits} input
parameter. The units must be directly translatable from the existing
ones -- for instance, if BUNIT is Jy/beam, you cannot specify mJy, it
must be mJy/beam. If an incompatible unit is given, the BUNIT value is
used instead.

\secB{World Coordinate System}
\label{sec-wcs}

\duchamp uses the \textsc{wcslib} package to handle the conversions
between pixel and world coordinates. This package uses the
transformations described in the WCS papers
\citep{greisen02,calabretta02,greisen06}. The same package handles the
WCS axes in the spatial plots. The conversions used are governed by
the information in the FITS header -- this is parsed by
\textsc{wcslib} to create the appropriate transformations.

For the spectral axis, however, \duchamp provides the ability to change the
type of transformation used, so that different spectral quantities can
be calculated. By using the parameter \texttt{spectralType}, the user
can change from the type given in the FITS header. This should be done
in line with the conventions outlined in \citet{greisen06}. The
spectral type can be either a full 8-character string (\eg
``VELO-F2V''), or simply the 4-character ``S-type'' (\eg ``VELO''), in
which case \textsc{wcslib} will handle the conversion.

The rest frequency can be provided as well. This may be necessary if
the FITS header does not specify one and you wish to transform to
velocity. Alternatively, you may want to make your measurements based
on a different spectral line (\eg OH1665 instead of
H\textsc{i}-21cm). The input parameter \texttt{restFrequency} is used,
and this will override the FITS header value.

Finally, the user may also request different spectral units from those
in the FITS file, or from the defaults arising from the
\textsc{wcslib} transformation. The input parameter
\texttt{spectralUnits} should be used, and \citet{greisen02} should be
consulted to ensure the syntax is appropriate.

\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edge. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at some standard threshold (at $8\sigma$ above
the mean). The baseline determined should thus be representative of
the true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}
\label{sec-MW}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{maxMW} and \texttt{minMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box. Note that these channels refer to
channel numbers in the full cube, before any subsection is applied.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02a}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, the trade-off being that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows:
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
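
To make steps 2--5 concrete, the following simplified one-dimensional
C++ sketch performs a single scale of the transform. It is illustrative
only, not the actual \duchamp code: the helper names are invented, the
filter shown is the B3-spline set of coefficients
$\{1,4,6,4,1\}/16$ commonly used with the \atrous algorithm, and the
thresholding is applied here to the absolute value of the coefficient.
\begin{verbatim}
// Schematic 1-D sketch of one "a trous" scale (not the Duchamp code).
#include <vector>
#include <cmath>

void atrousScale(const std::vector<double> &input,
                 std::vector<double> &smoothed,
                 std::vector<double> &recon,    // must be sized like input
                 long spacing, double threshold)
{
  // Example filter: the B3-spline coefficients {1,4,6,4,1}/16.
  const double filter[5] = {1./16., 4./16., 6./16., 4./16., 1./16.};
  const long N = static_cast<long>(input.size());
  smoothed.assign(N, 0.);

  for (long i = 0; i < N; i++) {
    // Convolve with the filter, whose taps are "spacing" pixels apart.
    for (int f = -2; f <= 2; f++) {
      long j = i + f * spacing;
      while (j < 0 || j >= N) {        // reflect at the array edges
        if (j < 0)  j = -j;
        if (j >= N) j = 2 * (N - 1) - j;
      }
      smoothed[i] += filter[f + 2] * input[j];
    }
    // Wavelet coefficient = input - convolved value; keep it only if
    // it exceeds the (scale-adjusted) threshold.
    double wavelet = input[i] - smoothed[i];
    if (std::fabs(wavelet) > threshold) recon[i] += wavelet;
  }
}
\end{verbatim}
The calling code would then double \texttt{spacing}, use
\texttt{smoothed} as the next input, and repeat until the maximum
scale is reached, finally adding the last smoothed array to the
reconstruction.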

The range of scales at which the selection of wavelet coefficients is
made is governed by the \texttt{scaleMin} and \texttt{scaleMax}
parameters. The minimum scale used is given by \texttt{scaleMin},
where the default value is 1 (the first scale). This parameter is
useful if you want to ignore the highest-frequency features
(\eg high-frequency noise that might be present). Normally the
maximum scale is calculated from the size of the input array, but it
can be specified by using \texttt{scaleMax}. A value $\le0$ will
result in the use of the calculated value, as will a value of
\texttt{scaleMax} greater than the calculated value. Use of these two
parameters can allow searching for features of a particular scale
size, for instance searching for narrow absorption features.

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation (or rms) of
the background noise distribution. The methods used to estimate these
quantities are detailed in \S\ref{sec-stats} -- the default behaviour
is to use robust estimators, to avoid biasing due to bright pixels.

%These statistics are estimated using
%robust methods, to avoid corruption by strong outlying points. The
%mean of the distribution is actually estimated by the median, while
%the median absolute deviation from the median (MADFM) is calculated
%and corrected assuming Gaussianity to estimate the underlying standard
%deviation $\sigma$. The Gaussianity (or Normality) assumption is
%critical, as the MADFM does not give the same value as the usual rms
%or standard deviation value -- for a Normal distribution
%$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
%for different distributions. Since this ratio is corrected for, the
%user need only think in the usual multiples of the rms when setting
%\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
%this value.

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can be altered as well that affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{scaleMin = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube. This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| < (a+1)/2\\
   0                                                          & |x| \geq (a+1)/2
  \end{cases}
  \qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
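
As an illustration of this operation, the following C++ sketch applies
the above filter to a single spectrum. It is illustrative only, not
the \duchamp implementation; in particular, the division by the sum of
the coefficients is a choice made here so that the flux scale of the
spectrum is preserved.
\begin{verbatim}
// Illustrative sketch: Hanning-smooth one spectrum with width a
// (a must be an odd integer, as described above).
#include <vector>
#include <cmath>

std::vector<float> hanningSmooth(const std::vector<float> &spectrum, int a)
{
  const double pi = 4.0 * std::atan(1.0);
  const int hw = (a - 1) / 2;   // c(x) is non-zero for |x| <= (a-1)/2
  const long N = static_cast<long>(spectrum.size());
  std::vector<float> smoothed(N, 0.);

  for (long z = 0; z < N; z++) {
    double sum = 0., norm = 0.;
    for (int x = -hw; x <= hw; x++) {
      if (z + x < 0 || z + x >= N) continue;  // skip points beyond the ends
      double c = 0.5 * (1. + std::cos(pi * x / a));
      sum  += c * spectrum[z + x];
      norm += c;                // normalise so the filter sums to 1
    }
    smoothed[z] = float(sum / norm);
  }
  return smoothed;
}
\end{verbatim}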

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right],
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}.
\end{align*}
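
A schematic C++ fragment that builds such a kernel from these
equations is shown below. It is illustrative only, not the \duchamp
code: the kernel is normalised to unit sum here, the half-size of the
kernel is left to the caller, and $\theta$ is taken in radians.
\begin{verbatim}
// Illustrative sketch: build a 2-D Gaussian kernel from the FWHM of
// the major and minor axes (a, b, in pixels) and the position angle
// theta (radians), following the equations above.
#include <vector>
#include <cmath>

std::vector<double> gaussianKernel(double a, double b, double theta,
                                   int halfsize)
{
  const int side = 2 * halfsize + 1;   // kernel is side x side pixels
  const double sigmaX2 = (a / 2.) * (a / 2.) / (2. * std::log(2.));
  const double sigmaY2 = (b / 2.) * (b / 2.) / (2. * std::log(2.));
  std::vector<double> kernel(side * side);

  double sum = 0.;
  for (int y = -halfsize; y <= halfsize; y++) {
    for (int x = -halfsize; x <= halfsize; x++) {
      double X = x * std::sin(theta) - y * std::cos(theta);
      double Y = x * std::cos(theta) + y * std::sin(theta);
      double k = std::exp(-0.5 * (X * X / sigmaX2 + Y * Y / sigmaY2));
      kernel[(y + halfsize) * side + (x + halfsize)] = k;
      sum += k;
    }
  }
  for (double &k : kernel) k /= sum;   // normalise to unit sum
  return kernel;
}
\end{verbatim}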

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The name of the file can be given by the \texttt{fileOutputRecon}
parameter, but this can be ignored and \duchamp will present a name
based on the reconstruction parameters. The filename will be derived
from the input filename, with extra information detailing the
reconstruction that has been done. For example, suppose
\texttt{image.fits} has been reconstructed using a 3-dimensional
reconstruction with filter \#2, thresholded at $4\sigma$ using all
scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. As
for the reconstructed/residual arrays, the name of the file can be
given by the \texttt{fileOutputSmooth} parameter, but this can be
ignored and \duchamp will present a name based on the method of
smoothing used. It will be either \texttt{image.SMOOTH-1D-a.fits},
where \texttt{a} is replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are \texttt{a,b,c}. Similarly to the reconstruction case, a
saved file can be read in by setting \texttt{flagSmoothExists = true}
and either specifying a file to be read with the \texttt{smoothFile}
parameter or relying on \duchamp to find the file with the name as
given above.

\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections -- that is, \duchamp does
not fit \eg a Gaussian profile to each source. All it does is find
connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see
Section~\ref{sec-output} for details). The user can manually specify a
value (using the parameter \texttt{threshold}) for the threshold,
which will override the calculated value. Note that this option
overrides any settings of \texttt{snrCut} or FDR options (see below).

The cube can be searched in one of two ways, governed by the input
parameter \texttt{searchType}. If \texttt{searchType=spatial}, the
cube is searched one channel map at a time, using the 2-dimensional
raster-scanning algorithm of \citet{lutz80} that connects groups of
neighbouring pixels. Such an algorithm cannot be applied directly to a
3-dimensional case, as it requires that objects are completely nested
in a row (when scanning along a row, if one object finishes and
another starts, you will not return to the first until the second is
completely finished for that row). Three-dimensional data does not
have this property, hence the need to treat the data on a
2-dimensional basis at most.

Alternatively, if \texttt{searchType=spectral}, the searching is done
in one dimension on each individual spatial pixel's spectrum. This is
a simpler search, but there are potentially many more of them.

Although there are parameters that govern the minimum number of pixels
in a spatial, spectral and total senses that an object must have
(\texttt{minPix}, \texttt{minChannels} and \texttt{minVoxels}
respectively), these criteria are not applied at this point -- see
\S\ref{sec-reject} for details.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}
\label{sec-stats}

A crucial part of the detection process (as well as the wavelet
reconstruction: \S\ref{sec-recon}) is estimating the statistics that
define the detection threshold. To determine a threshold, we need to
estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). The noise level is estimated by either the
mean or the median, and the noise spread by the rms (or the standard
deviation) or the median absolute deviation from the median
(MADFM). The median and MADFM are robust statistics, in that they are
not biased by the presence of a few pixels much brighter than the
noise.

All four statistics are calculated automatically, but the choice of
parameters that will be used is governed by the input parameter
\texttt{flagRobustStats}. This has the default value \texttt{true},
meaning the underlying mean of the noise distribution is estimated by
the median, and the underlying standard deviation is estimated by the
MADFM. In the latter case, the value is corrected, under the
assumption that the underlying distribution is Normal (Gaussian), by
dividing by 0.6744888 -- see Appendix~\ref{app-madfm} for details. If
\texttt{flagRobustStats=false}, the mean and rms are used instead.
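
A minimal C++ sketch of the robust estimators (illustrative only, not
the \duchamp implementation) is:
\begin{verbatim}
// Illustrative sketch: robust estimates of the noise level (median)
// and noise spread (MADFM, scaled to an equivalent rms assuming
// Gaussian noise by dividing by 0.6744888).
#include <vector>
#include <algorithm>
#include <cmath>

double median(std::vector<float> data)   // copy, so the caller's data
{                                        // is left untouched
  std::sort(data.begin(), data.end());
  size_t n = data.size();
  return (n % 2 == 1) ? data[n / 2]
                      : 0.5 * (data[n / 2 - 1] + data[n / 2]);
}

void robustStats(const std::vector<float> &data,
                 double &level, double &spread)
{
  level = median(data);                  // "noise level": the median
  std::vector<float> dev(data.size());
  for (size_t i = 0; i < data.size(); i++)
    dev[i] = std::fabs(data[i] - level); // deviations from the median
  double madfm = median(dev);            // the MADFM
  spread = madfm / 0.6744888;            // "noise spread" as an rms
}
\end{verbatim}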

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
  true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. The \texttt{StatSec} will
be trimmed if necessary so that it lies wholly within the image
subsection being used (\ie that given by the \texttt{subsection}
parameter -- this governs which pixels are read in and so are
available for use in the calculations).

Note that \texttt{StatSec} applies only to the statistics used to
determine the threshold. It does not affect the calculation of
statistics in the case of the wavelet reconstruction. Note also that
pixels flagged as BLANK or as part of the ``Milky Way'' range of
channels are ignored in the statistics calculations.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. The ``mean''
and ``standard deviation'' here are estimated according to
\texttt{flagRobustStats}, as discussed in \S\ref{sec-stats}. In this
first case only, if the user specifies a threshold, using the
\texttt{threshold} parameter, the sigma-clipped value is ignored.
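
In other words, writing the noise level as $\mu$ and the noise spread
as $\sigma$ (both estimated as described in \S\ref{sec-stats}), pixels
are detected when their value exceeds
\[
T = \mu + n\sigma,
\]
where $n$ is the value of \texttt{snrCut}.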

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a pixel with $P_i<P_d$ will be rejected even if
$P_i \geq j\alpha/c_N N$.) Note that ``reject hypothesis'' here means
``accept the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N = B \times C$, where $B$ is the beam area in pixels, calculated
from the FITS header (if the correct keywords -- BMAJ, BMIN -- are not
present, the size of the beam is taken from the input parameters, as
discussed in \S\ref{sec-results}, and if these parameters are not
given, $B=1$), and $C$ is the number of neighbouring channels that can
be considered to be correlated.

The use of the FDR method is governed by the \texttt{flagFDR} flag,
which is \texttt{false} by default. To set the relevant parameters,
use \texttt{alphaFDR} to set the $\alpha$ value, and
\texttt{FDRnumCorChan} to set the $C$ value discussed above. These
have default values of 0.01 and 2 respectively.
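
The following C++ sketch illustrates how the FDR cutoff could be
evaluated from a list of $p$-values. It is illustrative only, not the
\duchamp code: the $p$-values are assumed to have been computed
already, and the total number of $p$-values is used here as the number
of tests in the denominator of the criterion above.
\begin{verbatim}
// Illustrative sketch: given the p-values of all pixels, find the
// largest index d satisfying the FDR criterion and return the
// corresponding p-value cutoff (or -1 if nothing is detected).
#include <vector>
#include <algorithm>

double fdrCutoff(std::vector<double> pvalues, double alpha,
                 long beamArea, long numCorChan)
{
  std::sort(pvalues.begin(), pvalues.end()); // increasing order

  // c_N = sum_{i=1}^{N} 1/i, with N = B x C correlated pixels.
  long N = beamArea * numCorChan;
  double cN = 0.;
  for (long i = 1; i <= N; i++) cN += 1. / double(i);

  // d = max{ j : P_j < j*alpha / (c_N * numTests) }.
  long numTests = static_cast<long>(pvalues.size());
  long d = -1;
  for (long j = 0; j < numTests; j++)
    if (pvalues[j] < (j + 1) * alpha / (cN * numTests)) d = j;

  return (d >= 0) ? pvalues[d] : -1.;
}
\end{verbatim}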

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging, growing and rejecting detected objects}
\label{sec-merger}

\secC{Merging}

The searches described above are either 1- or 2-dimensional only. They
do not know anything about the third dimension that is likely to be
present. To build up 3D sources, merging of detections must be
done. This is done via an algorithm that matches objects judged to be
``close'', according to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi
surveys). This method can be selected by setting the parameter
\texttt{flagAdjacent=true} in the parameter file. The velocity
thresholding is always done with the \texttt{threshVelocity} test.
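
In schematic C++ form, the ``closeness'' test for a pair of voxels
drawn from two different objects might look like the following. This
is an illustrative sketch, not the \duchamp implementation: whether
adjacency includes diagonal neighbours, and the use of a Euclidean
distance for the spatial threshold, are assumptions made here.
\begin{verbatim}
// Illustrative sketch: are two voxels "close" enough for their parent
// objects to be merged? Spatial closeness uses either adjacency or
// the threshSpatial distance; spectral closeness uses threshVelocity.
#include <cmath>
#include <cstdlib>

bool voxelsAreClose(long x1, long y1, long z1,
                    long x2, long y2, long z2,
                    bool flagAdjacent,
                    double threshSpatial, double threshVelocity)
{
  bool spatiallyClose;
  if (flagAdjacent) {   // pixels must touch (diagonals allowed here)
    spatiallyClose = (std::labs(x1 - x2) <= 1) &&
                     (std::labs(y1 - y2) <= 1);
  } else {              // or lie within threshSpatial pixels
    double dx = double(x1 - x2), dy = double(y1 - y2);
    spatiallyClose = (std::sqrt(dx * dx + dy * dy) <= threshSpatial);
  }
  bool spectrallyClose = (std::labs(z1 - z2) <= threshVelocity);
  return spatiallyClose && spectrallyClose;
}
\end{verbatim}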


\secC{Stages of merging}

This merging can be done in two stages. The default behaviour is for
each new detection to be compared with those sources already detected,
and for it to be merged with the first one judged to be close. No
other examination of the list is done at this point.

This step can be turned off by setting
\texttt{flagTwoStageMerging=false}, so that new detections are simply
added to the end of the list, leaving all merging to be done in the
second stage.

The second, main stage of merging is more thorough. Once the searching
is completed, the list is iterated through, looking at each pair of
objects, and merging appropriately. The merged objects are then
included in the examination, to see if a merged pair is suitably close
to a third.

\secC{Growing}

Once the detections have been merged, they may be ``grown'' (this is
essentially the process known elsewhere as ``floodfill''). This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold and not already part of a detected object. This threshold
should be lower than the one used for the initial detection, but above
the noise level, so that faint pixels are only detected when they are
close to a bright pixel. This threshold is specified via one of two
input parameters. It can be given in terms of the noise statistics via
\texttt{growthCut} (which has a default value of $3\sigma$), or it can
be directly given via \texttt{growthThreshold}. Note that if you have
given the detection threshold with the \texttt{threshold} parameter,
the growth threshold \textbf{must} be given with
\texttt{growthThreshold}. If \texttt{growthThreshold} is not provided
in this situation, the growing will not be done.
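
This logic for choosing the growth threshold can be summarised by the
following C++ sketch (illustrative only, with invented function and
variable names; not the \duchamp code).
\begin{verbatim}
// Illustrative sketch: choose the threshold used when growing
// detections. noiseLevel and noiseSpread are the statistics from the
// "Calculating statistics" section; the flags mirror the input
// parameters described above.
bool chooseGrowthThreshold(bool userGaveThreshold,
                           bool userGaveGrowthThreshold,
                           double growthThreshold, double growthCut,
                           double noiseLevel, double noiseSpread,
                           double &result)
{
  if (userGaveGrowthThreshold) {  // an absolute value takes precedence
    result = growthThreshold;
    return true;
  }
  if (userGaveThreshold)          // detection threshold was absolute,
    return false;                 // so growthCut cannot be used: no growing
  result = noiseLevel + growthCut * noiseSpread;  // e.g. 3-sigma
  return true;
}
\end{verbatim}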

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that should
be merged at the new lower threshold (\ie they have grown into each
other).

\secC{Rejecting}
\label{sec-reject}

Finally, to be accepted, the detections must satisfy minimum size
criteria, relating to the number of channels, spatial pixels and
voxels occupied by the object. These criteria are set using the
\texttt{minChannels}, \texttt{minPix} and \texttt{minVoxels}
parameters respectively. The channel requirement means a source must
have at least one set of \texttt{minChannels} consecutive channels to
be accepted. The spatial pixels (\texttt{minPix}) requirement refers
to distinct spatial pixels (which are possibly in different channels),
while the voxels requirement refers to the total number of voxels
detected. If the \texttt{minVoxels} parameter is not provided, it
defaults to \texttt{minPix}$\,+\,$\texttt{minChannels}$\,-\,1$.
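
In schematic C++ form (illustrative only, not the \duchamp
implementation), the acceptance test is:
\begin{verbatim}
// Illustrative sketch: minimum-size acceptance test for a detection.
bool acceptDetection(int numSpatialPixels,       // distinct (x,y) pixels
                     int maxConsecutiveChannels, // longest run of channels
                     int numVoxels,              // total voxels detected
                     int minPix, int minChannels, int minVoxels)
{
  return (numSpatialPixels       >= minPix) &&
         (maxConsecutiveChannels >= minChannels) &&
         (numVoxels              >= minVoxels);
}

// If minVoxels is not given in the parameter file, it would default
// to minPix + minChannels - 1.
\end{verbatim}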

It is possible to do this rejection stage before the main merging and
growing stage. This could be done to remove narrow (hopefully
spurious) sources from the list before growing them, to reduce the
number of false positives in the final list. This mode can be selected
by setting the input parameter \texttt{flagRejectBeforeMerge=true} --
caution is urged if you use this in conjunction with
\texttt{flagTwoStageMerging=false}, as you can throw away parts of
objects that you may otherwise wish to keep.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "Guide"
%%% End: