\secA{What \duchamp is doing}
\label{sec-flow}

Each of the steps that \duchamp goes through in the course of its
execution is discussed here in more detail. This should provide
enough background information to fully understand what \duchamp is
doing and what all the output information is. For those interested in
the programming side of things, \duchamp is written in C/C++ and makes
use of the \textsc{cfitsio}, \textsc{wcslib} and \textsc{pgplot}
libraries.

\secB{Image input}
\label{sec-input}

The cube is read in using basic \textsc{cfitsio} commands, and stored
as an array in a special C++ class. This class keeps track of the list
of detected objects, as well as any reconstructed arrays that are made
(see \S\ref{sec-recon}). The World Coordinate System
(WCS)\footnote{This is the information necessary for translating the
pixel locations to quantities such as position on the sky, frequency,
velocity, and so on.} information for the cube is also obtained from
the FITS header by \textsc{wcslib} functions \citep{greisen02,
calabretta02}, and this information, in the form of a \texttt{wcsprm}
structure, is also stored in the same class.

A sub-section of a cube can be requested by defining the subsection
with the \texttt{subsection} parameter and setting
\texttt{flagSubsection=true} -- this can be a good idea if the cube
has very noisy edges, which may produce many spurious detections.

There are two ways of specifying the \texttt{subsection} string. The
first is the generalised form
\texttt{[x1:x2:dx,y1:y2:dy,z1:z2:dz,...]}, as used by the
\textsc{cfitsio} library. This has one set of colon-separated numbers
for each axis in the FITS file. In this manner, the x-coordinates run
from \texttt{x1} to \texttt{x2} (inclusive), with steps of
\texttt{dx}. The step value can be omitted, so a subsection of the
form \texttt{[2:50,2:50,10:1000]} is still valid. In fact, \duchamp
does not make use of any step value present in the subsection string,
and any that are present are removed before the file is opened.

If the entire range of a coordinate is required, one can replace the
range with a single asterisk, \eg \texttt{[2:50,2:50,*]}. Thus, the
subsection string \texttt{[*,*,*]} is simply the entire cube. A
complete description of this section syntax can be found at the
\textsc{fitsio} web site%
\footnote{%
\href%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}%
{http://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c\_user/node91.html}}.


Making full use of the subsection requires knowledge of the size of
each of the dimensions. If one wants to, for instance, trim a certain
number of pixels off the edges of the cube, without examining the cube
to obtain the actual size, one can use the second form of the
subsection string. This just gives a number for each axis, \eg
\texttt{[5,5,5]} (which would trim 5 pixels from the start \emph{and}
end of each axis).

All types of subsections can be combined \eg \texttt{[5,2:98,*]}.
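
By way of illustration, a subsection request might look like the
following in the input parameter file (the file name and pixel ranges
here are arbitrary examples, and the exact set of parameters should be
checked against Appendix~\ref{app-param}):
\begin{verbatim}
imageFile       /path/to/cube.fits
flagSubsection  true
subsection      [30:180,30:180,*]
\end{verbatim}
This would restrict the search to the inner spatial region of the cube
while keeping the full spectral range.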


\secB{Image modification}
\label{sec-modify}

Several modifications to the cube can be made that improve the
execution and efficiency of \duchamp (their use is optional, governed
by the relevant flags in the parameter file).

\secC{BLANK pixel removal}
\label{sec-blank}

If the imaged area of a cube is non-rectangular (see the example in
Fig.~\ref{fig-moment}, a cube from the HIPASS survey), BLANK pixels
are used to pad it out to a rectangular shape. The value of these
pixels is given by the FITS header keywords BLANK, BSCALE and
BZERO. While these pixels make the image a nice shape, they will take
up unnecessary space in memory, and so to potentially speed up the
processing we can trim them from the edge. This is done when the
parameter \texttt{flagTrim=true}. If the above keywords are not
present, the trimming will not be done (in this case, a similar effect
can be accomplished, if one knows where the ``blank'' pixels are, by
using the subsection option).

The amount of trimming is recorded, and these pixels are added back in
once the source-detection is completed (so that quoted pixel positions
are applicable to the original cube). Rows and columns are trimmed one
at a time until the first non-BLANK pixel is reached, so that the
image remains rectangular. In practice, this means that there will be
some BLANK pixels left in the trimmed image (if the non-BLANK region
is non-rectangular). However, these are ignored in all further
calculations done on the cube.

\secC{Baseline removal}

Second, the user may request the removal of baselines from the
spectra, via the parameter \texttt{flagBaseline}. This may be
necessary if there is a strong baseline ripple present, which can
result in spurious detections at the high points of the ripple. The
baseline is calculated from a wavelet reconstruction procedure (see
\S\ref{sec-recon}) that keeps only the two largest scales. This is
done separately for each spatial pixel (\ie for each spectrum in the
cube), and the baselines are stored and added back in before any
output is done. In this way the quoted fluxes and displayed spectra
are as one would see from the input cube itself -- even though the
detection (and reconstruction if applicable) is done on the
baseline-removed cube.

The presence of very strong signals (for instance, masers at several
hundred Jy) could affect the determination of the baseline, and would
lead to a large dip centred on the signal in the baseline-subtracted
spectrum. To prevent this, the signal is trimmed prior to the
reconstruction process at a standard threshold of $8\sigma$ above the
mean. The baseline determined should thus be representative of the
true, signal-free baseline. Note that this trimming is only a
temporary measure which does not affect the source-detection.

\secC{Ignoring bright Milky Way emission}

Finally, a single set of contiguous channels can be ignored -- these
may exhibit very strong emission, such as that from the Milky Way as
seen in extragalactic \hi cubes (hence the references to ``Milky
Way'' in relation to this task -- apologies to Galactic
astronomers!). Such dominant channels will produce many detections
that are unnecessary, uninteresting (if one is interested in
extragalactic \hi) and large (in size and hence in memory usage), and
so will slow the program down and detract from the interesting
detections.

The use of this feature is controlled by the \texttt{flagMW}
parameter, and the exact channels concerned can be set by the
user (using \texttt{maxMW} and \texttt{minMW} -- these give an
inclusive range of channels). When employed, these channels are
ignored for the searching, and the scaling of the spectral output (see
Fig.~\ref{fig-spect}) will not take them into account. They will be
present in the reconstructed array, however, and so will be included
in the saved FITS file (see \S\ref{sec-reconIO}). When the final
spectra are plotted, the range of channels covered by these parameters
is indicated by a green hashed box.

\secB{Image reconstruction}
\label{sec-recon}

The user can direct \duchamp to reconstruct the data cube using the
\atrous wavelet procedure. A good description of the procedure can be
found in \citet{starck02:book}. The reconstruction is an effective way
of removing a lot of the noise in the image, allowing one to search
reliably to fainter levels, and reducing the number of spurious
detections. This is an optional step, but one that greatly enhances
the source-detection process, with the trade-off that it can be
relatively time- and memory-intensive.

\secC{Algorithm}

The steps in the \atrous reconstruction are as follows (a schematic
sketch of the loop is given after the list):
\begin{enumerate}
\item The reconstructed array is set to 0 everywhere.
\item The input array is discretely convolved with a given filter
  function. This is determined from the parameter file via the
  \texttt{filterCode} parameter -- see Appendix~\ref{app-param} for
  details on the filters available.
\item The wavelet coefficients are calculated by taking the difference
  between the convolved array and the input array.
\item If the wavelet coefficients at a given point are above the
  requested threshold (given by \texttt{snrRecon} as the number of
  $\sigma$ above the mean and adjusted to the current scale -- see
  Appendix~\ref{app-scaling}), add these to the reconstructed array.
\item The separation between the filter coefficients is doubled. (Note
  that this step provides the name of the procedure\footnote{\atrous
  means ``with holes'' in French.}, as gaps or holes are created in
  the filter coverage.)
\item The procedure is repeated from step 2, using the convolved array
  as the input array.
\item Continue until the required maximum number of scales is reached.
\item Add the final smoothed (\ie convolved) array to the
  reconstructed array. This provides the ``DC offset'', as each of the
  wavelet coefficient arrays will have zero mean.
\end{enumerate}
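
To make the structure of this loop concrete, the following is a
minimal one-dimensional sketch, written for illustration only: it is
not the \duchamp implementation (which works in up to three dimensions
and uses the robust statistics described below), and the edge
treatment, the example filter and the per-scale noise estimates
(\texttt{scaleSigma}) are simplifying assumptions.
\begin{verbatim}
#include <cmath>
#include <vector>

// One-dimensional a trous reconstruction sketch. 'filter' is the base
// filter (e.g. the B3 spline {1/16, 1/4, 3/8, 1/4, 1/16}), and
// scaleSigma[s-1] is an assumed noise estimate for wavelet scale s.
std::vector<double> atrous1D(const std::vector<double> &input,
                             const std::vector<double> &filter,
                             const std::vector<double> &scaleSigma,
                             int nScales, double snrRecon)
{
  const int n = input.size();
  std::vector<double> recon(n, 0.);       // step 1: reconstruction = 0
  std::vector<double> current = input;
  int spacing = 1;
  for (int scale = 1; scale <= nScales; scale++) {
    // Step 2: convolve with the filter, leaving 'spacing'-1 holes
    // between coefficients. Edges are handled by simple reflection.
    std::vector<double> smoothed(n, 0.);
    int half = filter.size() / 2;
    for (int i = 0; i < n; i++) {
      for (int j = -half; j <= half; j++) {
        int k = i + j * spacing;
        if (k < 0) k = -k;
        if (k >= n) k = 2 * (n - 1) - k;
        smoothed[i] += filter[j + half] * current[k];
      }
    }
    // Steps 3 & 4: keep only the significant wavelet coefficients
    // (here the magnitude of the coefficient is tested).
    for (int i = 0; i < n; i++) {
      double w = current[i] - smoothed[i];
      if (std::fabs(w) > snrRecon * scaleSigma[scale - 1]) recon[i] += w;
    }
    current = smoothed;                   // step 6: recycle the smooth array
    spacing *= 2;                         // step 5: double the separation
  }
  for (int i = 0; i < n; i++) recon[i] += current[i];  // step 8: DC offset
  return recon;
}
\end{verbatim}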

The reconstruction has at least two iterations. The first iteration
makes a first pass at the wavelet reconstruction (the process outlined
in the eight steps above), but the residual array will likely have some
structure still in it, so the wavelet filtering is done on the
residual, and any significant wavelet terms are added to the final
reconstruction. This step is repeated until the change in the measured
standard deviation of the background (see note below on the evaluation
of this quantity) is less than some fiducial amount.

It is important to note that the \atrous decomposition is an example
of a ``redundant'' transformation. If no thresholding is performed,
the sum of all the wavelet coefficient arrays and the final smoothed
array is identical to the input array. The thresholding thus removes
only the unwanted structure in the array.

Note that any BLANK pixels that are still in the cube will not be
altered by the reconstruction -- they will be left as BLANK so that
the shape of the valid part of the cube is preserved.

\secC{Note on Statistics}

The correct calculation of the reconstructed array needs good
estimators of the underlying mean and standard deviation of the
background noise distribution. These statistics are estimated using
robust methods, to avoid corruption by strong outlying points. The
mean of the distribution is actually estimated by the median, while
the median absolute deviation from the median (MADFM) is calculated
and corrected assuming Gaussianity to estimate the underlying standard
deviation $\sigma$. The Gaussianity (or Normality) assumption is
critical, as the MADFM does not give the same value as the usual rms
or standard deviation value -- for a Normal distribution
$N(\mu,\sigma)$ we find MADFM$=0.6744888\sigma$, but this will change
for different distributions. Since this ratio is corrected for, the
user need only think in the usual multiples of the rms when setting
\texttt{snrRecon}. See Appendix~\ref{app-madfm} for a derivation of
this value.
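
A compact sketch of how such robust estimates might be computed is
given below. This is purely illustrative -- the function and variable
names are not those of the \duchamp code -- but it uses the same
median/MADFM approach and the conversion factor quoted above.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Median of a copy of the data (the input vector is left untouched).
double median(std::vector<double> data)
{
  std::sort(data.begin(), data.end());
  std::size_t n = data.size();
  return (n % 2 == 1) ? data[n / 2]
                      : 0.5 * (data[n / 2 - 1] + data[n / 2]);
}

// Robust noise estimates: the "level" is the median, and the "spread"
// is the MADFM converted to an equivalent Gaussian standard deviation.
void robustStats(const std::vector<double> &data,
                 double &noiseLevel, double &noiseSpread)
{
  noiseLevel = median(data);
  std::vector<double> dev(data.size());
  for (std::size_t i = 0; i < data.size(); i++)
    dev[i] = std::fabs(data[i] - noiseLevel);
  noiseSpread = median(dev) / 0.6744888;  // MADFM -> sigma for a Gaussian
}
\end{verbatim}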

When thresholding the different wavelet scales, the value of the rms
as measured from the wavelet array needs to be scaled to account for
the increased amount of correlation between neighbouring pixels (due
to the convolution). See Appendix~\ref{app-scaling} for details on
this scaling.

\secC{User control of reconstruction parameters}

The most important parameter for the user to select in relation to the
reconstruction is the threshold for each wavelet array. This is set
using the \texttt{snrRecon} parameter, and is given as a multiple of
the rms (estimated by the MADFM) above the mean (which for the wavelet
arrays should be approximately zero). There are several other
parameters that can also be altered to affect the outcome of the
reconstruction.

By default, the cube is reconstructed in three dimensions, using a
3-dimensional filter and 3-dimensional convolution. This can be
altered, however, using the parameter \texttt{reconDim}. If set to 1,
this means the cube is reconstructed by considering each spectrum
separately, whereas \texttt{reconDim=2} will mean the cube is
reconstructed by doing each channel map separately. The merits of
these choices are discussed in \S\ref{sec-notes}, but it should be
noted that a 2-dimensional reconstruction can be susceptible to edge
effects if the spatial shape of the pixel array is not rectangular.

The user can also select the minimum scale to be used in the
reconstruction. The first scale exhibits the highest frequency
variations, and so ignoring this one can sometimes be beneficial in
removing excess noise. The default is to use all scales
(\texttt{minscale = 1}).

Finally, the filter that is used for the convolution can be selected
by using \texttt{filterCode} and the relevant code number -- the
choices are listed in Appendix~\ref{app-param}. A larger filter will
give a better reconstruction, but take longer and use more memory when
executing. When multi-dimensional reconstruction is selected, this
filter is used to construct a 2- or 3-dimensional equivalent.

\secB{Smoothing the cube}
\label{sec-smoothing}

An alternative to doing the wavelet reconstruction is to smooth the
cube.  This technique can be useful in reducing the noise level
slightly (at the cost of making neighbouring pixels correlated and
blurring any signal present), and is particularly well suited to the
case where a particular signal size (\ie a certain channel width or
spatial size) is believed to be present in the data.

There are two alternative methods that can be used: spectral
smoothing, using the Hanning filter; or spatial smoothing, using a 2D
Gaussian kernel. These alternatives are outlined below. To utilise the
smoothing option, set the parameter \texttt{flagSmooth=true} and set
\texttt{smoothType} to either \texttt{spectral} or \texttt{spatial}.

\secC{Spectral smoothing}

When \texttt{smoothType=spectral} is selected, the cube is smoothed
only in the spectral domain. Each spectrum is independently smoothed
by a Hanning filter, and then put back together to form the smoothed
cube, which is then used by the searching algorithm (see below). Note
that in the case of both the reconstruction and the smoothing options
being requested, the reconstruction will take precedence and the
smoothing will \emph{not} be done.

There is only one parameter necessary to define the degree of
smoothing -- the Hanning width $a$ (given by the user parameter
\texttt{hanningWidth}). The coefficients $c(x)$ of the Hanning filter
are defined by
\[
c(x) =
  \begin{cases}
   \frac{1}{2}\left(1+\cos(\frac{\pi x}{a})\right) &|x| \leq (a+1)/2\\
   0                                               &|x| > (a+1)/2.
  \end{cases},\ a,x \in \mathbb{Z}
\]
Note that the width specified must be an
odd integer (if the parameter provided is even, it is incremented by
one).
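
As an illustrative sketch (not the \duchamp routine itself), the
coefficients and the smoothing of a single spectrum might be written
as follows; normalisation by the sum of the coefficients actually used
at each pixel is an assumption made here for simplicity.
\begin{verbatim}
#include <cmath>
#include <vector>

// Hanning coefficients for width a, following the definition above:
// c(x) = 0.5*(1 + cos(pi*x/a)) for |x| <= (a+1)/2, zero otherwise.
std::vector<double> hanningCoeffs(int a)
{
  const double pi = 3.141592653589793;
  if (a % 2 == 0) a++;                    // the width must be odd
  int half = (a + 1) / 2;
  std::vector<double> c(2 * half + 1);
  for (int x = -half; x <= half; x++)
    c[x + half] = 0.5 * (1. + std::cos(pi * x / a));
  return c;
}

// Smooth one spectrum; near the ends only the available pixels are
// used, and the result is normalised by the coefficients employed.
std::vector<double> hanningSmooth(const std::vector<double> &spec, int a)
{
  std::vector<double> coeff = hanningCoeffs(a);
  int half = coeff.size() / 2, size = spec.size();
  std::vector<double> out(size, 0.);
  for (int i = 0; i < size; i++) {
    double sum = 0., norm = 0.;
    for (int j = -half; j <= half; j++) {
      int k = i + j;
      if (k < 0 || k >= size) continue;
      sum  += coeff[j + half] * spec[k];
      norm += coeff[j + half];
    }
    out[i] = (norm > 0.) ? sum / norm : spec[i];
  }
  return out;
}
\end{verbatim}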

\secC{Spatial smoothing}

When \texttt{smoothType=spatial} is selected, the cube is smoothed
only in the spatial domain. Each channel map is independently smoothed
by a two-dimensional Gaussian kernel, put back together to form the
smoothed cube, and used in the searching algorithm (see below). Again,
reconstruction is always done by preference if both techniques are
requested.

The two-dimensional Gaussian has three parameters to define it,
governed by the elliptical cross-sectional shape of the Gaussian
function: the FWHM (full-width at half-maximum) of the major and minor
axes, and the position angle of the major axis. These are given by the
user parameters \texttt{kernMaj, kernMin \& kernPA}. If a circular
Gaussian is required, the user need only provide the \texttt{kernMaj}
parameter. The \texttt{kernMin} parameter will then be set to the same
value, and \texttt{kernPA} to zero.  If we define these parameters as
$a,b,\theta$ respectively, we can define the kernel by the function
\[
k(x,y) = \exp\left[-0.5 \left(\frac{X^2}{\sigma_X^2} +
                              \frac{Y^2}{\sigma_Y^2}   \right) \right]
\]
where $(x,y)$ are the offsets from the central pixel of the Gaussian
function, and
\begin{align*}
X& = x\sin\theta - y\cos\theta&
  Y&= x\cos\theta + y\sin\theta\\
\sigma_X^2& = \frac{(a/2)^2}{2\ln2}&
  \sigma_Y^2& = \frac{(b/2)^2}{2\ln2}
\end{align*}
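
A sketch of how such a kernel might be constructed from the three
parameters is given below. It is illustrative only: the extent of the
kernel (\texttt{halfWidth}), the use of radians for the position
angle, and the absence of any normalisation are all simplifying
assumptions, not a description of the actual \duchamp routine.
\begin{verbatim}
#include <cmath>
#include <vector>

// Build a (2*halfWidth+1)^2 Gaussian kernel from the FWHMs of the
// major and minor axes (a, b, in pixels) and the position angle theta
// (in radians), following the expressions for X, Y, sigma_X, sigma_Y
// given above.
std::vector<std::vector<double> > gaussianKernel(double a, double b,
                                                 double theta,
                                                 int halfWidth)
{
  const double ln2 = std::log(2.);
  double sigX2 = (a / 2.) * (a / 2.) / (2. * ln2);
  double sigY2 = (b / 2.) * (b / 2.) / (2. * ln2);
  int width = 2 * halfWidth + 1;
  std::vector<std::vector<double> > kern(width,
                                         std::vector<double>(width));
  for (int y = -halfWidth; y <= halfWidth; y++) {
    for (int x = -halfWidth; x <= halfWidth; x++) {
      double X = x * std::sin(theta) - y * std::cos(theta);
      double Y = x * std::cos(theta) + y * std::sin(theta);
      kern[y + halfWidth][x + halfWidth] =
        std::exp(-0.5 * (X * X / sigX2 + Y * Y / sigY2));
    }
  }
  return kern;
}
\end{verbatim}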

\secB{Input/Output of reconstructed/smoothed arrays}
\label{sec-reconIO}

The smoothing and reconstruction stages can be relatively
time-consuming, particularly for large cubes and reconstructions in
3-D (or even spatial smoothing). To get around this, \duchamp provides
a shortcut to allow users to perform multiple searches (\eg with
different thresholds) on the same reconstruction/smoothing setup
without re-doing the calculations each time.

To save the reconstructed array as a FITS file, set
\texttt{flagOutputRecon = true}. The file will be saved in the same
directory as the input image, so the user needs to have write
permissions for that directory.

The filename will be derived from the input filename, with extra
information detailing the reconstruction that has been done. For
example, suppose \texttt{image.fits} has been reconstructed using a
3-dimensional reconstruction with filter \#2, thresholded at $4\sigma$
using all scales. The output filename will then be
\texttt{image.RECON-3-2-4-1.fits} (\ie it uses the four parameters
relevant for the \atrous reconstruction as listed in
Appendix~\ref{app-param}). The new FITS file will also have these
parameters as header keywords. If a subsection of the input image has
been used (see \S\ref{sec-input}), the format of the output filename
will be \texttt{image.sub.RECON-3-2-4-1.fits}, and the subsection that
has been used is also stored in the FITS header.

Likewise, the residual image, defined as the difference between the
input and reconstructed arrays, can also be saved in the same manner
by setting \texttt{flagOutputResid = true}. Its filename will be the
same as above, with \texttt{RESID} replacing \texttt{RECON}.

If a reconstructed image has been saved, it can be read in and used
instead of redoing the reconstruction. To do so, the user should set
the parameter \texttt{flagReconExists = true}. The user can indicate
the name of the reconstructed FITS file using the \texttt{reconFile}
parameter, or, if this is not specified, \duchamp searches for the
file with the name as defined above. If the file is not found, the
reconstruction is performed as normal. Note that to do this, the user
needs to set \texttt{flagAtrous = true} (obviously, if this is
\texttt{false}, the reconstruction is not needed).

To save the smoothed array, set \texttt{flagOutputSmooth = true}. The
name of the saved file will depend on the method of smoothing used. It
will be either \texttt{image.SMOOTH-1D-a.fits}, where \texttt{a} is
replaced by the Hanning width used, or
\texttt{image.SMOOTH-2D-a-b-c.fits}, where the Gaussian kernel
parameters are \texttt{a,b,c}. Similarly to the reconstruction case, a
saved file can be read in by setting \texttt{flagSmoothExists = true}
and either specifying a file to be read with the \texttt{smoothFile}
parameter or relying on \duchamp to find the file with the name as
given above.
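
For example, a parameter file that re-uses a previously saved
reconstruction might contain lines such as the following (the file
name simply follows the naming convention described above; the full
list of parameters is given in Appendix~\ref{app-param}):
\begin{verbatim}
flagAtrous       true
flagReconExists  true
reconFile        image.RECON-3-2-4-1.fits
\end{verbatim}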


\secB{Searching the image}
\label{sec-detection}

\secC{Technique}

The basic idea behind detection is to locate sets of contiguous voxels
that lie above some threshold. One threshold is calculated for the
entire cube, enabling calculation of signal-to-noise ratios for each
source (see \S\ref{sec-output} for details). The user can
manually specify a value (using the parameter
\texttt{threshold}) for the threshold, which will override the
calculated value. Note that this only applies for the first of the two
cases discussed below -- the FDR case ignores any manually-set
threshold value.

The cube is searched one channel map at a time, using the
2-dimensional raster-scanning algorithm of \citet{lutz80} that
connects groups of neighbouring pixels. Such an algorithm cannot be
applied directly to a 3-dimensional case, as it requires that objects
are completely nested in a row (when scanning along a row, if an
object finishes and another starts, you won't get back to the first
until the second is completely finished for the
row). Three-dimensional data does not have this property, hence the
need to treat the data on a 2-dimensional basis.
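
The essential operation -- grouping neighbouring above-threshold
pixels of a channel map into objects -- can be illustrated with the
simple flood-fill labelling sketch below. This is a stand-in for
illustration only: \duchamp itself uses the more efficient
raster-scanning technique of \citet{lutz80}, and the choice of
4-connectivity here is an assumption of the sketch.
\begin{verbatim}
#include <utility>
#include <vector>

typedef std::vector<std::pair<int,int> > Object2D;  // (x,y) pixel list

// Group above-threshold pixels of one channel map (indexed [y][x])
// into connected objects using a stack-based flood fill.
std::vector<Object2D>
labelChannelMap(const std::vector<std::vector<float> > &chanmap,
                float threshold)
{
  int ny = chanmap.size(), nx = ny ? chanmap[0].size() : 0;
  std::vector<std::vector<bool> > done(ny, std::vector<bool>(nx, false));
  std::vector<Object2D> objects;
  for (int y0 = 0; y0 < ny; y0++) {
    for (int x0 = 0; x0 < nx; x0++) {
      if (done[y0][x0] || chanmap[y0][x0] <= threshold) continue;
      Object2D obj;
      std::vector<std::pair<int,int> > stack(1, std::make_pair(x0, y0));
      done[y0][x0] = true;
      while (!stack.empty()) {
        int x = stack.back().first, y = stack.back().second;
        stack.pop_back();
        obj.push_back(std::make_pair(x, y));
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; k++) {
          int xx = x + dx[k], yy = y + dy[k];
          if (xx < 0 || xx >= nx || yy < 0 || yy >= ny) continue;
          if (done[yy][xx] || chanmap[yy][xx] <= threshold) continue;
          done[yy][xx] = true;
          stack.push_back(std::make_pair(xx, yy));
        }
      }
      objects.push_back(obj);
    }
  }
  return objects;
}
\end{verbatim}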

Although there are parameters that govern the minimum number of pixels
in a spatial and spectral sense that an object must have
(\texttt{minPix} and \texttt{minChannels} respectively), these
criteria are not applied at this point. It is only after the merging
and growing (see \S\ref{sec-merger}) is done that objects are rejected
for not meeting these criteria.

Finally, the search only looks for positive features. If one is
interested instead in negative features (such as absorption lines),
set the parameter \texttt{flagNegative = true}. This will invert the
cube (\ie multiply all pixels by $-1$) prior to the search, and then
re-invert the cube (and the fluxes of any detections) after searching
is complete. All outputs are done in the same manner as normal, so
that fluxes of detections will be negative.

\secC{Calculating statistics}

A crucial part of the detection process is estimating the statistics
that define the detection threshold. To determine a threshold, we need
to estimate from the data two parameters: the middle of the noise
distribution (the ``noise level''), and the width of the distribution
(the ``noise spread''). For both cases, we again use robust methods,
using the median and MADFM.

The choice of pixels to be used depends on the analysis method. If the
wavelet reconstruction has been done, the residuals (defined
in the sense of original $-$ reconstruction) are used to estimate the
noise spread of the cube, since the reconstruction should pick out
all significant structure. The noise level (the middle of the
distribution) is taken from the original array.

If smoothing of the cube has been done instead, all noise parameters
are measured from the smoothed array, and detections are made with
these parameters. When the signal-to-noise level is quoted for each
detection (see \S\ref{sec-output}), the noise parameters of the
original array are used, since the smoothing process correlates
neighbouring pixels, reducing the noise level.

If neither reconstruction nor smoothing has been done, then the
statistics are calculated from the original, input array.

The parameters that are estimated should be representative of the
noise in the cube. For the case of small objects embedded in many
noise pixels (\eg the case of \hi surveys), using the full cube will
provide good estimators. It is possible, however, to use only a
subsection of the cube by setting the parameter \texttt{flagStatSec =
true} and providing the desired subsection to the \texttt{StatSec}
parameter. This subsection works in exactly the same way as the pixel
subsection discussed in \S\ref{sec-input}. Note that this subsection
applies only to the statistics used to determine the threshold. It
does not affect the calculation of statistics in the case of the
wavelet reconstruction.

\secC{Determining the threshold}

Once the statistics have been calculated, the threshold is determined
in one of two ways. The first way is a simple sigma-clipping, where a
threshold is set at a fixed number $n$ of standard deviations above
the mean, and pixels above this threshold are flagged as detected. The
value of $n$ is set with the parameter \texttt{snrCut}. As before, the
value of the standard deviation is estimated by the MADFM, and
corrected by the ratio derived in Appendix~\ref{app-madfm}.
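
In terms of the robust estimators discussed above, this threshold can
be written as
\[
T = \mathrm{median} + \texttt{snrCut}\times\frac{\mathrm{MADFM}}{0.6744888},
\]
so that, for example, \texttt{snrCut}=4 places the threshold four
(corrected) standard deviations above the estimated noise level.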

The second method uses the False Discovery Rate (FDR) technique
\citep{miller01,hopkins02}, whose basis we briefly detail here. The
false discovery rate (given by the number of false detections divided
by the total number of detections) is fixed at a certain value
$\alpha$ (\eg $\alpha=0.05$ implies 5\% of detections are false
positives). In practice, an $\alpha$ value is chosen, and the ensemble
average FDR (\ie $\langle FDR \rangle$) when the method is used will
be less than $\alpha$.  One calculates $p$ -- the probability,
assuming the null hypothesis is true, of obtaining a test statistic as
extreme as the pixel value (the observed test statistic) -- for each
pixel, and sorts them in increasing order. One then calculates $d$
where
\[
d = \max_j \left\{ j : P_j < \frac{j\alpha}{c_N N} \right\},
\]
and then rejects all hypotheses whose $p$-values are less than or
equal to $P_d$. (So a $P_i<P_d$ will be rejected even if $P_i \geq
i\alpha/(c_N N)$.) Note that ``reject hypothesis'' here means ``accept
the pixel as an object pixel'' (\ie we are rejecting the null
hypothesis that the pixel belongs to the background).

The $c_N$ value here is a normalisation constant that depends on the
correlated nature of the pixel values. If all the pixels are
uncorrelated, then $c_N=1$. If $N$ pixels are correlated, then their
tests will be dependent on each other, and so $c_N = \sum_{i=1}^N
i^{-1}$. \citet{hopkins02} consider real radio data, where the pixels
are correlated over the beam. For the calculations done in \duchamp,
$N=2B$, where $B$ is the beam size in pixels, calculated from the FITS
header (if the correct keywords -- BMAJ, BMIN -- are not present, the
size of the beam is taken from the parameter \texttt{beamSize}). The
factor of 2 comes about because we treat neighbouring channels as
correlated. In the case of a two-dimensional image, we just have
$N=B$.
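
A schematic version of this calculation is sketched below. It is not
the \duchamp implementation: the computation of the individual
$p$-values is not shown, and the denominator $N$ is taken here to be
the total number of pixels tested, with $c_N$ evaluated over the
number of correlated pixels as described above -- see
\citet{miller01,hopkins02} for the precise definitions.
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Schematic FDR cut-off: given the p-values of all pixels, the rate
// alpha, and the number of correlated pixels (2B for a cube, B for a
// 2D image), return the p-value P_d at and below which pixels are
// treated as source pixels (negative if nothing qualifies).
double fdrCutoff(std::vector<double> pvalues, double alpha,
                 int numCorrelated)
{
  double cN = 0.;
  for (int i = 1; i <= numCorrelated; i++) cN += 1. / i;

  std::sort(pvalues.begin(), pvalues.end());  // P_1 <= P_2 <= ...
  const double N = double(pvalues.size());

  double Pd = -1.;
  for (std::size_t j = 1; j <= pvalues.size(); j++)
    if (pvalues[j - 1] < j * alpha / (cN * N))
      Pd = pvalues[j - 1];                    // keep the largest valid j
  return Pd;
}
\end{verbatim}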

The theory behind the FDR method implies a direct connection between
the choice of $\alpha$ and the fraction of detections that will be
false positives. These detections, however, are individual pixels,
which undergo a process of merging and rejection (\S\ref{sec-merger}),
and so the fraction of the final list of detected objects that are
false positives will be much smaller than $\alpha$. See the discussion
in \S\ref{sec-notes}.

%\secC{Storage of detected objects in memory}
%
%It is useful to understand how \duchamp stores the detected objects in
%memory while it is running. This makes use of nested C++ classes, so
%that an object is stored as a class that includes the set of detected
%pixels, plus all the various calculated parameters (fluxes, WCS
%coordinates, pixel centres and extrema, flags,...). The set of pixels
%are stored using another class, that stores 3-dimensional objects as a
%set of channel maps, each consisting of a $z$-value and a
%2-dimensional object (a spatial map if you like). This 2-dimensional
%object is recorded using ``run-length'' encoding, where each row (a
%fixed $y$ value) is stored by the starting $x$-value and the length
\secB{Merging detected objects}
\label{sec-merger}

The searching step produces a list of detected objects that will
contain many repeated detections of a given object -- for instance,
spectral detections in adjacent pixels of the same object and/or
spatial detections in neighbouring channels. These are then combined
in an algorithm that matches all objects judged to be ``close'',
according to one of two criteria.

One criterion is to define two thresholds -- one spatial and one in
velocity -- and say that two objects should be merged if there is at
least one pair of pixels that lie within these threshold distances of
each other. These thresholds are specified by the parameters
\texttt{threshSpatial} and \texttt{threshVelocity} (in units of pixels
and channels respectively).

Alternatively, the spatial requirement can be changed to say that
there must be a pair of pixels that are \emph{adjacent} -- a stricter,
but perhaps more realistic requirement, particularly when the spatial
pixels have a large angular size (as is the case for \hi surveys).
This method can be selected by setting the parameter
\texttt{flagAdjacent} to 1 (\ie \texttt{true}) in the parameter
file. The velocity thresholding is done in the same way as the first
option.
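
The two criteria amount to a simple test on a pair of voxels, sketched
below for illustration (the function is not part of \duchamp, and the
parameter names are simply re-used from the description above):
\begin{verbatim}
#include <cmath>
#include <cstdlib>

// Decide whether two voxels (x1,y1,z1) and (x2,y2,z2) are "close" for
// merging purposes. If flagAdjacent is true, the spatial test requires
// adjacency (offsets of at most one pixel); otherwise it uses the
// threshSpatial distance. The spectral test uses threshVelocity.
bool voxelsAreClose(int x1, int y1, int z1, int x2, int y2, int z2,
                    bool flagAdjacent, float threshSpatial,
                    float threshVelocity)
{
  bool spatialOK;
  if (flagAdjacent)
    spatialOK = (std::abs(x1 - x2) <= 1) && (std::abs(y1 - y2) <= 1);
  else {
    float dx = float(x1 - x2), dy = float(y1 - y2);
    spatialOK = (std::sqrt(dx * dx + dy * dy) <= threshSpatial);
  }
  bool spectralOK = (std::abs(z1 - z2) <= threshVelocity);
  return spatialOK && spectralOK;
}
\end{verbatim}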

Once the detections have been merged, they may be ``grown''. This is a
process of increasing the size of the detection by adding nearby
pixels (according to the \texttt{threshSpatial} and
\texttt{threshVelocity} parameters) that are above some secondary
threshold. This threshold is lower than the one used for the initial
detection, but above the noise level, so that faint pixels are only
detected when they are close to a bright pixel. The value of this
threshold is set by the input parameter \texttt{growthCut}, with a
default value of $2\sigma$.

The use of the growth algorithm is controlled by the
\texttt{flagGrowth} parameter -- the default value of which is
\texttt{false}. If the detections are grown, they are sent through the
merging algorithm a second time, to pick up any detections that now
overlap or have grown over each other.

Finally, to be accepted, the detections must span \emph{both} a
minimum number of channels (enabling the removal of any spurious
single-channel spikes that may be present), and a minimum number of
spatial pixels. These numbers, as for the original detection step, are
set with the \texttt{minChannels} and \texttt{minPix} parameters. The
channel requirement means there must be at least one set of
\texttt{minChannels} consecutive channels in the source for it to be
accepted.
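
The channel requirement, for example, amounts to a check like the
following on the list of channels spanned by a detection (an
illustrative sketch only, not the \duchamp code):
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Return true if the (possibly unsorted, possibly repeated) list of
// channel numbers covered by a detection contains at least
// minChannels consecutive channels.
bool spansEnoughChannels(std::vector<int> channels, int minChannels)
{
  if (channels.empty()) return false;
  std::sort(channels.begin(), channels.end());
  channels.erase(std::unique(channels.begin(), channels.end()),
                 channels.end());
  int run = 1, best = 1;
  for (std::size_t i = 1; i < channels.size(); i++) {
    run = (channels[i] == channels[i - 1] + 1) ? run + 1 : 1;
    best = std::max(best, run);
  }
  return best >= minChannels;
}
\end{verbatim}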