\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information is. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection=true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. A
complete description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.

Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined \eg \texttt{[5,2:98,*]}.
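
For illustration, a subsection request of this kind might appear in
the input parameter file as follows (this assumes the usual
keyword--value format of \duchamp parameter files, and that the input
cube is given by the \texttt{imageFile} parameter; the file name and
pixel ranges are arbitrary examples):
\begin{verbatim}
imageFile       image.fits
flagSubsection  true
subsection      [5:95,5:95,*]
\end{verbatim}
This would restrict the search to pixels 5--95 in each spatial
direction, while using the full spectral range.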

\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels are
used to pad it out to a rectangular shape. The value of these pixels
is given by the FITS header keywords BLANK, BSCALE and BZERO. While
these pixels make the image a nice shape, they will unnecessarily
interfere with the processing (as well as taking up needless
memory). The first step, then, is to trim them from the edge. This is
done when the parameter \texttt{flagBlankPix=true}. If the above
keywords are not present, the user can specify the BLANK value via the
parameter \texttt{blankPixValue}.

Removing BLANK pixels is particularly important for the reconstruction
step, as lots of BLANK pixels on the edges will smooth out features in
the wavelet calculation stage. The trimming will also reduce the size
of the cube's array, speeding up the execution. The amount of trimming
is recorded, and these pixels are added back in once the
source-detection is completed (so that quoted pixel positions are
applicable to the original cube).

Rows and columns are trimmed one at a time until the first non-BLANK
pixel is reached, so that the image remains rectangular. In practice,
this means that there will be some BLANK pixels left in the trimmed
image (if the non-BLANK region is non-rectangular). However, these are
ignored in all further calculations done on the cube.

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at some standard threshold (at $8\sigma$ above
the mean). The baseline determined should thus be representative of
the true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{maxMW} and \texttt{minMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, with the payoff that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows (a schematic
code sketch of the loop is given after the list):
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation of the filter coefficients is doubled. (Note that
  this step provides the name of the procedure\footnote{\atrous means
  ``with holes'' in French.}, as gaps or holes are created in the
  filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
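
The following fragment is a minimal one-dimensional sketch of this
loop, written in C++ for concreteness. It is not the \duchamp
implementation: the B3-spline filter, the edge clamping and the fixed
threshold are simplifying assumptions, and the real code works on full
cubes with the scale-dependent rescaling of Appendix~\ref{app-scaling}.
\begin{verbatim}
// Minimal 1-D sketch of the "a trous" reconstruction loop (NOT the
// Duchamp code). The B3-spline filter, the edge clamping and the fixed
// threshold stand in for the real filter choice, boundary handling and
// per-scale rescaling.
#include <vector>
#include <cmath>

std::vector<float> atrousReconstruct1D(const std::vector<float> &input,
                                       int maxScale, float snrRecon,
                                       float noiseSigma)
{
    const long N = static_cast<long>(input.size());
    const float filter[5] = {1.f/16.f, 1.f/4.f, 3.f/8.f, 1.f/4.f, 1.f/16.f};

    std::vector<float> smoothed = input;   // current input array
    std::vector<float> recon(N, 0.f);      // step 1: set to zero everywhere
    long spacing = 1;                      // separation of filter coefficients

    for (int scale = 1; scale <= maxScale; ++scale) {
        // Step 2: convolve with the dilated filter (edges simply clamped).
        std::vector<float> conv(N, 0.f);
        for (long i = 0; i < N; ++i) {
            for (int k = -2; k <= 2; ++k) {
                long j = i + k * spacing;
                if (j < 0)  j = 0;
                if (j >= N) j = N - 1;
                conv[i] += filter[k + 2] * smoothed[j];
            }
        }
        // Steps 3 & 4: the wavelet coefficients are the difference of the
        // two arrays; keep only those above the threshold (the real code
        // adjusts this threshold for each scale).
        const float threshold = snrRecon * noiseSigma;
        for (long i = 0; i < N; ++i) {
            float w = smoothed[i] - conv[i];
            if (std::fabs(w) > threshold) recon[i] += w;
        }
        smoothed = conv;   // step 6: the convolved array becomes the input
        spacing *= 2;      // step 5: double the separation ("holes")
    }
    // Step 8: add the final smoothed array (the "DC offset").
    for (long i = 0; i < N; ++i) recon[i] += smoothed[i];
    return recon;
}
\end{verbatim}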

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation of the
background noise distribution. These statistics are estimated using
robust methods, to avoid corruption by strong outlying points. The
mean of the distribution is actually estimated by the median, while
the median absolute deviation from the median (MADFM) is calculated
and corrected assuming Gaussianity to estimate the underlying standard
deviation $\sigma$. The Gaussianity (or Normality) assumption is
critical, as the MADFM does not give the same value as the usual rms
or standard deviation value -- for a Normal distribution
$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
for different distributions. Since this ratio is corrected for, the
user need only think in the usual multiples of the rms when setting
\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
this value.
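
To make the correction concrete, the following C++ fragment sketches
how the noise level and spread could be estimated from the median and
MADFM (an illustration only, not the actual \duchamp routines):
\begin{verbatim}
// Sketch of robust noise estimation via the median and the MADFM, using
// the Gaussian correction factor 0.6744888 discussed above. Illustration
// only; for instance, the even-sized-array case is not averaged here.
#include <vector>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>

static float median(std::vector<float> v)  // takes a copy; nth_element reorders
{
    std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
    return v[v.size()/2];
}

// Returns {noise level, noise spread} = {median, MADFM corrected to sigma}.
std::pair<float,float> robustStats(const std::vector<float> &data)
{
    const float level = median(data);
    std::vector<float> absdev(data.size());
    for (std::size_t i = 0; i < data.size(); ++i)
        absdev[i] = std::fabs(data[i] - level);
    const float madfm = median(absdev);
    return {level, madfm / 0.6744888f};    // correct MADFM to equivalent rms
}
\end{verbatim}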

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can be altered as well that affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{minscale = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.
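
By way of illustration, the reconstruction could be configured with
parameter-file entries along the following lines (the values shown are
arbitrary examples rather than recommendations):
\begin{verbatim}
flagAtrous   true
reconDim     3
snrRecon     4
minscale     1
filterCode   2
\end{verbatim}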

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube. This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType=spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| \leq (a+1)/2\\
   0                                                          & |x| > (a+1)/2
  \end{cases}
\qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
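
As a minimal sketch, the smoothing of a single spectrum could look
like the following C++ fragment. Normalising the coefficients to unit
sum and the treatment of the spectrum ends are assumptions made for
this illustration; it is not the \duchamp source code.
\begin{verbatim}
// Sketch of Hanning smoothing of one spectrum using the coefficient
// formula above. The unit-sum normalisation and the handling of the
// spectrum ends are assumptions of this example.
#include <vector>
#include <cmath>

std::vector<float> hanningSmooth(const std::vector<float> &spectrum,
                                 int hanningWidth)
{
    int a = hanningWidth;
    if (a % 2 == 0) ++a;            // the width must be an odd integer
    const int half = (a + 1) / 2;   // coefficients vanish beyond |x| = half
    const double pi = std::acos(-1.0);

    // c(x) = 0.5*(1 + cos(pi*x/a)) for |x| <= (a+1)/2.
    std::vector<double> coeff(2*half + 1);
    double sum = 0.0;
    for (int x = -half; x <= half; ++x) {
        coeff[x + half] = 0.5 * (1.0 + std::cos(pi * x / a));
        sum += coeff[x + half];
    }

    const int N = static_cast<int>(spectrum.size());
    std::vector<float> smoothed(N, 0.f);
    for (int i = 0; i < N; ++i) {
        double val = 0.0;
        for (int x = -half; x <= half; ++x) {
            const int j = i + x;
            if (j >= 0 && j < N) val += coeff[x + half] * spectrum[j];
        }
        smoothed[i] = static_cast<float>(val / sum);  // assumed normalisation
    }
    return smoothed;
}
\end{verbatim}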

\secC{Spatial smoothing}

When \texttt{smoothType=spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, and then put back together to
form the smoothed cube, and used in the searching algorithm (see
below). Again, reconstruction is always done by preference if both
techniques are requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin \& kernPA}. If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
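
A minimal C++ sketch of building this kernel from the three parameters
is given below. The kernel half-width and the use of radians for
$\theta$ are assumptions of the example, not a statement about the
\duchamp implementation.
\begin{verbatim}
// Sketch of building the elliptical Gaussian kernel defined above from
// the kernMaj, kernMin and kernPA parameters (a and b in pixels, theta
// in radians here -- the unit convention and the kernel size are
// assumptions of this example).
#include <vector>
#include <cmath>

std::vector<std::vector<double> >
gaussianKernel(double a, double b, double theta, int half)
{
    const double sigmaX2 = (a/2.0)*(a/2.0) / (2.0*std::log(2.0));
    const double sigmaY2 = (b/2.0)*(b/2.0) / (2.0*std::log(2.0));

    const int size = 2*half + 1;
    std::vector<std::vector<double> > kernel(size, std::vector<double>(size));
    for (int y = -half; y <= half; ++y) {
        for (int x = -half; x <= half; ++x) {
            const double X = x*std::sin(theta) - y*std::cos(theta);
            const double Y = x*std::cos(theta) + y*std::sin(theta);
            kernel[y + half][x + half] =
                std::exp(-0.5 * (X*X/sigmaX2 + Y*Y/sigmaY2));
        }
    }
    return kernel;
}
\end{verbatim}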

\secB{Input/Output of reconstructed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where $a$ is replaced by
the Hanning width used, or \texttt{image.SMOOTH-2D-a-b-c.fits}, where
the Gaussian kernel parameters are $a$, $b$ and $c$. Similarly to the
reconstruction case, a saved file can be read in by setting
\texttt{flagSmoothExists = true} and either specifying a file to be
read with the \texttt{smoothFile} parameter or relying on \duchamp to
find the file with the name as given above.
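
As an illustrative example, a later run that re-uses a previously
saved reconstruction might contain parameter-file lines such as the
following (the file name is a placeholder):
\begin{verbatim}
flagAtrous       true
flagReconExists  true
reconFile        image.RECON-3-2-4-1.fits
\end{verbatim}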

\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection is to locate sets of contiguous voxels
that lie above some threshold. One threshold is calculated for the
entire cube, enabling calculation of signal-to-noise ratios for each
source (see Section~\ref{sec-output} for details). The user can
manually specify a value (using the parameter
\texttt{threshold}) for the threshold, which will override the
calculated value. Note that this only applies for the first of the two
cases discussed below -- the FDR case ignores any manually-set
threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row (when scanning along a row, if one
object finishes and another starts, you won't get back to the first
until the second is completely finished for the
row). Three-dimensional data does not have this property, hence the
need to treat the data on a 2-dimensional basis.

Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}

A crucial part of the detection process is estimating the statistics
that define the detection threshold. To determine a threshold, we need
to estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). For both cases, we again use robust methods,
using the median and MADFM.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. As before, the
value of the standard deviation is estimated by the MADFM, and
corrected by the ratio derived in Appendix~\ref{app-madfm}.
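
Writing the noise level as $m$ (the median) and the noise spread as
$s$ (the corrected MADFM), this amounts to a detection threshold of
\[
T = m + n\,s, \qquad s = \frac{\textrm{MADFM}}{0.6744888},
\]
with $n$ given by \texttt{snrCut}.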

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a pixel with $P_i<P_d$ will be rejected even if
$P_i \geq i\alpha/c_N N$.) Note that ``reject hypothesis'' here means
``accept the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated.
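
The following C++ fragment sketches the threshold selection step just
described: given the sorted $p$-values, find the largest $j$ satisfying
the criterion and adopt $P_d$ as the cut. It is an illustration rather
than the \duchamp routine; in particular, the reading that the
correlated-pixel count ($N=2B$) enters through $c_N$ while the total
number of tested pixels appears in the denominator is an assumption of
this sketch, and the calculation of the $p$-values themselves is
omitted.
\begin{verbatim}
// Sketch of the FDR threshold selection: with the p-values sorted in
// increasing order, find the largest j with P_j < j*alpha/(c_N * N)
// and adopt P_d as the cut. Illustration only.
#include <vector>
#include <algorithm>
#include <cstddef>

// numCorrelated is the N = 2B of the text (pixels correlated over the
// beam and neighbouring channels); the total number of tested pixels
// enters the denominator. Returns P_d, or a negative value if no pixel
// satisfies the criterion.
double fdrThreshold(std::vector<double> pvalues, double alpha,
                    int numCorrelated)
{
    std::sort(pvalues.begin(), pvalues.end());

    double cN = 0.0;                    // c_N = sum_{i=1}^{N} 1/i
    for (int i = 1; i <= numCorrelated; ++i) cN += 1.0 / i;

    const std::size_t numTests = pvalues.size();
    long d = -1;
    for (std::size_t j = 1; j <= numTests; ++j) {
        if (pvalues[j-1] < (double(j) * alpha) / (cN * double(numTests)))
            d = static_cast<long>(j);   // keep the largest such j
    }
    return (d > 0) ? pvalues[d-1] : -1.0;
}
\end{verbatim}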

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length

\secB{Merging detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will have
many repeated detections of a given object -- for instance, spectral
detections in adjacent pixels of the same object and/or spatial
detections in neighbouring channels. These are then combined in an
algorithm that matches all objects judged to be ``close'', according
to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi surveys).
This method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as the first
option.

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is a possible input parameter (\texttt{growthCut}), with a
default value of $1.5\sigma$.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (to remove any spurious single-channel
spikes that may be present), and a minimum number of spatial
pixels. These numbers, as for the original detection step, are set
with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.
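
To draw these threads together, the merging, growing and rejection
stages could be configured with parameter-file entries along the
following lines (the values are arbitrary examples rather than
recommendations):
\begin{verbatim}
flagAdjacent     false
threshSpatial    3
threshVelocity   7
flagGrowth       true
growthCut        1.5
minPix           2
minChannels      3
\end{verbatim}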