\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information means. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection = true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. Note
that the pixel ranges for each axis start at 1, so the full pixel
range of a 100-pixel axis would be expressed as 1:100. A complete
description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.

Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants, for instance, to trim a certain
number of pixels off the edges of the cube without examining the cube
to obtain its actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined, \eg \texttt{[5,2:98,*]}.
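Such a request might appear in the input parameter file as follows (an
illustrative sketch only, using the usual keyword--value layout of the
\duchamp parameter file):
\begin{verbatim}
flagSubsection  1
subsection      [5,2:98,*]
\end{verbatim}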


\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edge. This is done when the
parameter \texttt{flagTrim = true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.

\secC{Baseline removal}

The user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction, if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at a standard threshold of $8\sigma$ above
the mean. The baseline determined should thus be representative of
the true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{minMW} and \texttt{maxMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, with the trade-off that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows (a schematic
code sketch is given after the list):
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
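
The following is a schematic, one-dimensional C++ sketch of a single
pass of this procedure. It is illustrative only, not the \duchamp
implementation: \duchamp works on 1-, 2- or 3-dimensional arrays (see
below) and adjusts the threshold at each scale using the factors of
Appendix~\ref{app-scaling}, whereas here the noise at each scale is
simply re-estimated from the wavelet coefficients via the MADFM.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <vector>

// Robust sigma estimate: median absolute deviation from the median,
// corrected to a Gaussian standard deviation (see the note on statistics).
static double madfmSigma(std::vector<double> v)
{
  std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
  double med = v[v.size()/2];
  for (double &x : v) x = std::fabs(x - med);
  std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
  return v[v.size()/2] / 0.6744888;
}

std::vector<double> atrousReconstruct(const std::vector<double> &input,
                                      const std::vector<double> &filter,
                                      int maxScale, double snrRecon)
{
  const long n = long(input.size());
  const int half = (int(filter.size()) - 1) / 2;
  std::vector<double> recon(n, 0.0), current = input;
  long spacing = 1;                          // separation of filter coefficients

  for (int scale = 1; scale <= maxScale; ++scale) {
    std::vector<double> smooth(n, 0.0), wavelet(n);
    for (long i = 0; i < n; ++i)             // discrete convolution, with gaps
      for (int j = -half; j <= half; ++j) {
        long k = i + j * spacing;
        while (k < 0 || k >= n)              // reflect at the array edges
          k = (k < 0) ? -k : 2*n - 2 - k;
        smooth[i] += filter[j + half] * current[k];
      }
    for (long i = 0; i < n; ++i)             // wavelet coefficients at this scale
      wavelet[i] = current[i] - smooth[i];

    double sigma = madfmSigma(wavelet);      // robust noise estimate at this scale
    for (long i = 0; i < n; ++i)             // keep only significant coefficients
      if (std::fabs(wavelet[i]) > snrRecon * sigma) recon[i] += wavelet[i];

    current = smooth;                        // smoothed array feeds the next scale
    spacing *= 2;                            // the "holes" double at each scale
  }
  for (long i = 0; i < n; ++i)
    recon[i] += current[i];                  // add back the DC offset
  return recon;
}
\end{verbatim}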

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the 8 stages above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation of the
background noise distribution. These statistics are estimated using
robust methods, to avoid corruption by strong outlying points. The
mean of the distribution is actually estimated by the median, while
the median absolute deviation from the median (MADFM) is calculated
and corrected assuming Gaussianity to estimate the underlying standard
deviation $\sigma$. The Gaussianity (or Normality) assumption is
critical, as the MADFM does not give the same value as the usual rms
or standard deviation value -- for a Normal distribution
$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
for different distributions. Since this ratio is corrected for, the
user need only think in the usual multiples of the rms when setting
\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
this value.
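(For example, a measured MADFM of 3.37 in the cube's flux units
corresponds to an estimated
$\sigma \approx 3.37/0.6744888 \approx 5.0$.)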

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). Several other parameters can
also be altered to affect the outcome of the reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{minscale = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube.  This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType = spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos\left(\frac{\pi x}{a}\right)\right) & |x| \leq (a+1)/2\\
   0                                                          & |x| > (a+1)/2
  \end{cases}
  , \qquad a,x \in \mathbb{Z}.
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
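
As an illustration, a minimal sketch of this smoothing applied to a
single spectrum might look as follows. It follows the coefficient
definition above; normalising the coefficients to unit sum (so that a
flat spectrum is unchanged) and truncating at the spectrum ends are
assumptions of the sketch, not statements about the \duchamp code.
\begin{verbatim}
#include <cmath>
#include <vector>

std::vector<double> hanningSmooth(const std::vector<double> &spectrum, int a)
{
  if (a % 2 == 0) ++a;                 // the width must be an odd integer
  const double pi = std::acos(-1.0);
  const int half = (a + 1) / 2;        // coefficients vanish for |x| > (a+1)/2
  std::vector<double> coeff(2 * half + 1);
  double sum = 0.0;
  for (int x = -half; x <= half; ++x) {
    coeff[x + half] = 0.5 * (1.0 + std::cos(pi * x / double(a)));
    sum += coeff[x + half];
  }
  for (double &c : coeff) c /= sum;    // normalise to unit sum (assumption)

  const long n = long(spectrum.size());
  std::vector<double> smoothed(n, 0.0);
  for (long i = 0; i < n; ++i)
    for (int x = -half; x <= half; ++x) {
      long k = i + x;
      if (k >= 0 && k < n)             // simply truncate at the spectrum ends
        smoothed[i] += coeff[x + half] * spectrum[k];
    }
  return smoothed;
}
\end{verbatim}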

\secC{Spatial smoothing}

When \texttt{smoothType = spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin} \& \texttt{kernPA}. If a
circular Gaussian is required, the user need only provide the
\texttt{kernMaj} parameter. The \texttt{kernMin} parameter will then
be set to the same value, and \texttt{kernPA} to zero.  If we define
these parameters as $a,b,\theta$ respectively, we can define the
kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
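
A minimal sketch of constructing this kernel from the formulae above
is given below. The kernel extent (here $2a$ pixels either side of
the centre), the normalisation to unit sum, and the assumption that
the position angle is supplied in degrees are choices made for the
sketch only.
\begin{verbatim}
#include <cmath>
#include <vector>

std::vector<std::vector<double>> gaussianKernel(double a, double b, double paDeg)
{
  const double pi = std::acos(-1.0);
  const double theta = paDeg * pi / 180.0;
  const double sigXsq = (a / 2) * (a / 2) / (2.0 * std::log(2.0));
  const double sigYsq = (b / 2) * (b / 2) / (2.0 * std::log(2.0));
  const int half = int(std::ceil(2.0 * a));     // kernel half-width (a choice)

  std::vector<std::vector<double>> kern(2*half + 1, std::vector<double>(2*half + 1));
  double sum = 0.0;
  for (int y = -half; y <= half; ++y)
    for (int x = -half; x <= half; ++x) {
      double X = x * std::sin(theta) - y * std::cos(theta);
      double Y = x * std::cos(theta) + y * std::sin(theta);
      double k = std::exp(-0.5 * (X*X/sigXsq + Y*Y/sigYsq));
      kern[y + half][x + half] = k;
      sum += k;
    }
  for (auto &row : kern)
    for (double &k : row) k /= sum;             // normalise to unit sum (a choice)
  return kern;
}
\end{verbatim}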

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).
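
As an illustrative sketch (using the parameters named above, in the
usual keyword--value layout), a parameter file that re-uses a
previously saved reconstruction might contain lines such as:
\begin{verbatim}
flagAtrous       1
flagReconExists  1
reconFile        image.RECON-3-2-4-1.fits
\end{verbatim}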

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is
replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are \texttt{a,b,c}. Similarly to the
reconstruction case, a saved file can be read in by setting
\texttt{flagSmoothExists = true} and either specifying a file to be
read with the \texttt{smoothFile} parameter or relying on \duchamp to
find the file with the name as given above.


\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection in \duchamp is to locate sets of
contiguous voxels that lie above some threshold. No size or shape
requirement is imposed upon the detections -- that is, \duchamp does
not fit \eg a Gaussian profile to each source. All it does is find
connected groups of bright voxels.

One threshold is calculated for the entire cube, enabling calculation
of signal-to-noise ratios for each source (see
Section~\ref{sec-output} for details). The user can manually specify a
value (using the parameter \texttt{threshold}) for the threshold,
which will override the calculated value. Note that this only applies
for the first of the two cases discussed below -- the FDR case ignores
any manually-set threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row: when scanning along a row, if one
object finishes and another starts, the first is not returned to
until the second is completely finished for that
row. Three-dimensional data does not have this property, hence the
need to treat the data on a 2-dimensional basis.
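
The sketch below illustrates the effect of this 2-dimensional search
on a single channel map. It is not Lutz's raster-scanning algorithm
itself, but a simple stack-based search that finds the same connected
groups of above-threshold pixels; treating diagonal neighbours as
connected is an assumption of the sketch.
\begin{verbatim}
#include <utility>
#include <vector>

using Pixel = std::pair<int,int>;                      // (x, y)

std::vector<std::vector<Pixel>>
findObjects2D(const std::vector<std::vector<double>> &map, double threshold)
{
  const int ny = int(map.size()), nx = ny ? int(map[0].size()) : 0;
  std::vector<std::vector<bool>> done(ny, std::vector<bool>(nx, false));
  std::vector<std::vector<Pixel>> objects;

  for (int y0 = 0; y0 < ny; ++y0)
    for (int x0 = 0; x0 < nx; ++x0) {
      if (done[y0][x0] || map[y0][x0] <= threshold) continue;
      std::vector<Pixel> object, stack{{x0, y0}};      // start a new object
      done[y0][x0] = true;
      while (!stack.empty()) {
        Pixel p = stack.back(); stack.pop_back();
        object.push_back(p);
        for (int dy = -1; dy <= 1; ++dy)               // visit the 8 neighbours
          for (int dx = -1; dx <= 1; ++dx) {
            int x = p.first + dx, y = p.second + dy;
            if (x < 0 || x >= nx || y < 0 || y >= ny) continue;
            if (done[y][x] || map[y][x] <= threshold) continue;
            done[y][x] = true;
            stack.push_back({x, y});
          }
      }
      objects.push_back(object);
    }
  return objects;
}
\end{verbatim}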

Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}

A crucial part of the detection process is estimating the statistics
that define the detection threshold. To determine a threshold, we need
to estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). For both cases, we again use robust methods,
using the median and MADFM.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. As before, the
value of the standard deviation is estimated by the MADFM, and
corrected by the ratio derived in Appendix~\ref{app-madfm}.
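In other words, if $m$ is the median and $s$ the MADFM of the relevant
pixels, the threshold used is effectively $T = m + n\,s/0.6744888$.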

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i<P_d$ will be rejected even if $P_i \geq
i\alpha/(c_N N)$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated. In the case of a two-dimensional image, we just have
$N=B$.
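
A minimal sketch of this calculation is given below. The $p$-values
are assumed to have been computed already; following the text, $c_N$
is evaluated from the number of correlated tests ($N=2B$), while the
$N$ in the denominator of the criterion is taken here to be the total
number of pixels tested (an interpretation, not a statement about the
\duchamp code).
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Return the p-value threshold P_d: pixels with p <= P_d are accepted
// as object pixels (their null hypotheses are rejected).
double fdrThreshold(std::vector<double> pvalues, double alpha, int numCorrelated)
{
  double cN = 0.0;                             // c_N = sum_{i=1}^{N} 1/i
  for (int i = 1; i <= numCorrelated; ++i) cN += 1.0 / i;

  std::sort(pvalues.begin(), pvalues.end());   // P_1 <= P_2 <= ... (increasing order)
  const std::size_t M = pvalues.size();
  double Pd = 0.0;                             // nothing rejected if never satisfied
  for (std::size_t j = 1; j <= M; ++j)
    if (pvalues[j-1] < double(j) * alpha / (cN * double(M)))
      Pd = pvalues[j-1];                       // keep the largest qualifying P_j
  return Pd;
}
\end{verbatim}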

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length
\secB{Merging and growing detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will have
many repeated detections of a given object -- for instance, spectral
detections in adjacent pixels of the same object and/or spatial
detections in neighbouring channels. These are then combined in an
algorithm that matches all objects judged to be ``close'', according
to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for
\hi surveys). This
method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as the first
option.
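
A sketch of this ``closeness'' test between two detections is given
below. Here an object is represented simply as a list of voxels (an
assumption for the sketch; the internal storage used by \duchamp is
more compact), and counting diagonal pixels as adjacent is likewise an
assumption.
\begin{verbatim}
#include <cmath>
#include <cstdlib>
#include <vector>

struct Voxel { int x, y, z; };
using Object = std::vector<Voxel>;

bool shouldMerge(const Object &a, const Object &b, float threshSpatial,
                 float threshVelocity, bool flagAdjacent)
{
  for (const Voxel &p : a)
    for (const Voxel &q : b) {
      bool spatialOK;
      if (flagAdjacent)                  // strict adjacency criterion
        spatialOK = std::abs(p.x - q.x) <= 1 && std::abs(p.y - q.y) <= 1;
      else                               // separation within threshSpatial pixels
        spatialOK = std::hypot(double(p.x - q.x), double(p.y - q.y)) <= threshSpatial;
      bool velocityOK = std::abs(p.z - q.z) <= threshVelocity;
      if (spatialOK && velocityOK) return true;   // one close pair is enough
    }
  return false;
}
\end{verbatim}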

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is a possible input parameter (\texttt{growthCut}), with a
default value of $2\sigma$.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (enabling the removal of any spurious
single-channel spikes that may be present) and a minimum number of
spatial pixels. These numbers, as for the original detection step, are
set with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.
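
As a final illustration, this acceptance test might be sketched as
follows, where the channel list is assumed to be the sorted set of
distinct channels spanned by the object.
\begin{verbatim}
#include <cstddef>
#include <vector>

bool isAccepted(int numSpatialPixels, const std::vector<int> &channels,
                int minPix, int minChannels)
{
  if (numSpatialPixels < minPix) return false;
  int run = 1, bestRun = channels.empty() ? 0 : 1;
  for (std::size_t i = 1; i < channels.size(); ++i) {
    run = (channels[i] == channels[i-1] + 1) ? run + 1 : 1;  // consecutive channels
    if (run > bestRun) bestRun = run;
  }
  return bestRun >= minChannels;           // at least one long-enough run
}
\end{verbatim}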