% -----------------------------------------------------------------------
% hints.tex: Section giving some tips & hints on how Duchamp is best
%            used.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{Notes and hints on the use of \duchamp}
\label{sec-notes}

In using \duchamp, the user has to make a number of decisions about
the way the program runs. This section is designed to give the user
some guidance in making these choices.

The main choice is whether to alter the cube to try to enhance the
detectability of objects, by either smoothing or reconstructing via
the \atrous method. The main benefits of both methods are the marked
reduction in the noise level, leading to regularly-shaped detections,
and good reliability for faint sources.

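For reference, this choice is made through two flags in the input
parameter file. The following sketch (parameter names should be
checked against the list of available parameters elsewhere in this
guide) selects the \atrous reconstruction; reversing the two values
selects the smoothing option instead:
\begin{verbatim}
flagATrous   true
flagSmooth   false
\end{verbatim}
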
The main drawback of the \atrous method is the long execution time:
to reconstruct a $170\times160\times1024$ (\hipass) cube often
requires three iterations and takes about 20--25 minutes to run
completely. Note that this is for the more complete three-dimensional
reconstruction: using \texttt{reconDim = 1} makes the reconstruction
quicker (the full program then takes less than 5 minutes), but the
reconstruction still accounts for the largest part of the execution
time.

The smoothing procedure is computationally simpler, and thus quicker,
than the reconstruction. The spectral Hanning method adds only a very
small overhead to the execution time, and the spatial Gaussian method,
while taking longer, will be done (for the above example) in less than
2 minutes. Note that these times will depend on the size of the
filter/kernel used: a larger filter means more calculations.

The searching part of the procedure is much quicker: searching an
un-reconstructed cube takes less than a minute. Note also that the
ability to read in a previously-saved reconstructed array makes
running the search more than once on a reconstructed cube a far more
feasible prospect, since the reconstruction itself need only be done
once.

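A sketch of the relevant parameter-file entries might look like the
following (the reconstruction filename here is purely illustrative):
save the reconstruction on the first pass, then read it back in on
subsequent runs rather than recomputing it.
\begin{verbatim}
# first run: reconstruct the cube and save the result to a FITS file
flagATrous        true
flagOutputRecon   true

# subsequent runs: read the saved reconstruction rather than recompute
flagATrous        true
flagReconExists   true
reconFile         myCube.RECON.fits
\end{verbatim}
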
On the positive side, the shape of the detections in a cube that has
been reconstructed or smoothed will be much more regular and smooth --
the ragged edges that objects in the raw cube possess are smoothed by
the removal of most of the noise. This enables better determination of
the shapes and characteristics of objects.

While the time overhead is larger for the reconstruction case, it will
potentially provide a better recovery of real sources than the
smoothing case. This is because it probes the full range of scales
present in the cube (or spectral domain), rather than the specific
scale determined by the Hanning filter or Gaussian kernel used in the
smoothing.

When considering the reconstruction method, note that the 2D
reconstruction (\texttt{reconDim = 2}) can be susceptible to edge
effects. If the valid area in the cube (\ie the part that is not
BLANK) has non-rectangular edges, the convolution can produce
artefacts in the reconstruction that mimic the edges and can lead
(depending on the selection threshold) to some spurious
sources. Caution is advised with such data: the user should check
the reconstructed cube carefully for the presence of such
artefacts. Note, however, that the 1- and 3-dimensional
reconstructions are \emph{not} susceptible in the same way, since the
spectral direction does not generally exhibit these BLANK edges, and
so we recommend the use of either of these.

If one chooses the reconstruction method, a further decision is
required on the signal-to-noise cutoff used in determining acceptable
wavelet coefficients. A larger value will remove more noise from the
cube, at the expense of losing fainter sources, while a smaller value
will include more noise, which may produce spurious detections, but
will be more sensitive to faint sources. Values of less than about
$3\sigma$ tend not to reduce the noise a great deal and can lead to
many spurious sources (although this depends, of course, on the cube
itself).

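A minimal sketch of the reconstruction-related parameters discussed
above (the values shown are illustrative only, and the defaults should
be checked in the parameter list) is:
\begin{verbatim}
flagATrous   true
# dimensionality of the reconstruction; 1 is quickest and avoids the
# edge effects noted above
reconDim     1
# signal-to-noise threshold for accepting wavelet coefficients
snrRecon     3.
\end{verbatim}
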
The smoothing options have fewer parameters to consider: basically
just the size of the smoothing function or kernel. Spectral smoothing
with a Hanning filter of width 3 (the smallest possible) is very
efficient at removing spurious single-channel objects that may result
purely from statistical fluctuations of the noise. Larger filter
widths, or spatial kernels of larger size, can be used to look for
features of a particular scale in the cube.

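The corresponding smoothing parameters might be set as in the
following sketch (again, the values are indicative only):
\begin{verbatim}
# spectral (Hanning) smoothing with the minimum filter width
flagSmooth     true
smoothType     spectral
hanningWidth   3

# or, for spatial (Gaussian) smoothing, use instead:
# smoothType   spatial
# kernMaj      5.
\end{verbatim}
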
When it comes to searching, the FDR method produces more reliable
results than simple sigma-clipping, particularly in the absence of
reconstruction. However, it does not work in exactly the way one
would expect for a given value of \texttt{alpha}. For instance,
setting a fairly liberal value of \texttt{alpha} (say, 0.1) will often
lead to a much smaller fraction of false detections (\ie much less
than 10\%). This is the effect of the merging algorithms, which
combine the sources after the detection stage and reject detections
not meeting the minimum pixel or channel requirements. It is thus
better to aim for larger \texttt{alpha} values than those derived from
a straight conversion of the desired false detection rate.

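In the input file, the FDR search is switched on and controlled via
\texttt{flagFDR} and \texttt{alphaFDR}, as in this sketch (the value
shown is merely indicative, in the liberal spirit suggested above):
\begin{verbatim}
flagFDR    true
alphaFDR   0.1
\end{verbatim}
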
If the FDR method is not used, caution is required when choosing the
S/N cutoff. Typical cubes have very large numbers of pixels, so even
an apparently large cutoff will still result in a non-negligible
number of detections arising simply from random fluctuations of the
noise background. For instance, a $4\sigma$ threshold on a cube of
Gaussian noise of size $100\times100\times1024$ will result in
$\sim340$ single-pixel detections. This is where the minimum channel
and pixel requirements are important in rejecting spurious detections.

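As a rough guide to where such numbers come from (treating each pixel
as independent Gaussian noise, and so ignoring any correlation
introduced by the beam), the expected number of spurious single-pixel
detections above a threshold of $t\sigma$ is
\[
N_{\rm false} \simeq N_{\rm pix} \times
\frac{1}{2}\,{\rm erfc}\left(\frac{t}{\sqrt{2}}\right),
\]
which for $t=4$ gives a tail probability of a little over
$3\times10^{-5}$ and so, for the $\sim10^{7}$ pixels of the cube
quoted above, the few hundred single-pixel detections mentioned.
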
%Finally, as \duchamp is still undergoing development, there are some
%elements that are not fully developed. In particular, it is not as
%clever as I would like at avoiding interference. The ability to place
%requirements on the minimum number of channels and pixels partially
%circumvents this problem, but work is being done to make \duchamp
%smarter at rejecting signals that are clearly (to a human eye at
%least) interference. See the following section for further
%improvements that are planned.