% -----------------------------------------------------------------------
% hints.tex: Section giving some tips & hints on how Duchamp is best
%            used.
% -----------------------------------------------------------------------
% Copyright (C) 2006, Matthew Whiting, ATNF
%
% This program is free software; you can redistribute it and/or modify it
% under the terms of the GNU General Public License as published by the
% Free Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% Duchamp is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
% for more details.
%
% You should have received a copy of the GNU General Public License
% along with Duchamp; if not, write to the Free Software Foundation,
% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
%
% Correspondence concerning Duchamp may be directed to:
%    Internet email: Matthew.Whiting [at] atnf.csiro.au
%    Postal address: Dr. Matthew Whiting
%                    Australia Telescope National Facility, CSIRO
%                    PO Box 76
%                    Epping NSW 1710
%                    AUSTRALIA
% -----------------------------------------------------------------------
\secA{Notes and hints on the use of \duchamp}
\label{sec-notes}

In using \duchamp, the user has to make a number of decisions about
the way the program runs. This section is designed to give some
guidance on making these choices.

\secB{Memory usage}

A lot of attention has been paid to the memory usage in \duchamp,
recognising that data cubes are going to be increasing in size with
new-generation correlators and wider fields of view. However, users
with large cubes should be aware of the likely usage for different
modes of operation and plan their \duchamp execution carefully.

At the start of the program, memory is allocated sufficient for:
\begin{itemize}
\item The entire pixel array (as requested, subject to any
subsection).
\item An array covering the spatial extent, which holds the map of
detected pixels (for output as the detection map).
\item If smoothing or reconstruction has been selected, another array
of the same size as the pixel array. This will hold the
smoothed/reconstructed array (the original needs to be kept to allow
the correct parameterisation of detected sources).
\item If baseline-subtraction has been selected, a further array of
the same size as the pixel array. This holds the baseline values,
which need to be added back in prior to parameterisation.
\end{itemize}
All of these are of type float, except for the detection map, which
is of type short.

There will, of course, be additional allocation during the course of
the program. The detection list will progressively grow, with each
detection having a memory footprint as described in
\S\ref{sec-scan}. But perhaps more important, and with a larger
impact, will be the temporary space allocated for various algorithms.

The largest of these will be the wavelet reconstruction. This will
require an additional allocation of twice the size of the array being
reconstructed, one array for the coefficients and one for the
wavelets -- each scale overwrites the previous one. So, for the 1D
case, this means an additional allocation of twice the spectral
dimension (since we only reconstruct one spectrum at a time), but the
3D case will require an additional allocation of twice the cube
size. This means at least four times the size of the input cube needs
to be available for a 3D reconstruction, plus the additional
overheads of detections and so forth.
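
As an illustrative example (the cube size here is arbitrary),
consider a 3D reconstruction of a $512\times512\times4096$ cube: with
4-byte floats, each full-size array occupies
$512\times512\times4096\times4 \simeq 4.3\times10^{9}$~bytes, or
about 4.3~GB. The input array, the reconstructed array and the two
temporary work arrays together therefore require roughly
$4\times4.3\simeq17$~GB, while the short-integer detection map adds
only $\sim$0.5~MB and the baseline array, if requested, a further
4.3~GB.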

The smoothing has less of an impact, since it only operates on the
lower dimensions, but it will make an additional allocation of twice
the relevant size (the spectral dimension for spectral smoothing, or
the spatial image size for the spatial Gaussian smoothing).

The other large allocation of temporary space will be for calculating
robust statistics. The median-based calculations require at least
partial sorting of the data, and so cannot be done on the original
image cube. They are instead done on a copy of the entire cube, and
so the temporary memory increase can be large.


\secB{Timing considerations}

Another interesting question from a user's perspective is how long
one can expect \duchamp to take. This is a difficult question to
answer in general, as different users will have different sized data
sets, as well as machines with different capabilities (in terms of
the CPU speed and I/O \& memory bandwidths). Additionally, the time
required will depend slightly on the number of sources found and
their size (very large sources can take a while to fully
parameterise).
Having said that, \citet{whiting12} presented a brief analysis of
different modes of execution applied to a single HIPASS cube (\#201)
using a MacBook Pro (2.66~GHz, 8~GB RAM). Two thresholds were used:
either $10^8$~Jy~beam$^{-1}$ (no sources will be found, so that the
time taken is dominated by preprocessing), or 35~mJy~beam$^{-1}$ (or
$\sim2.58\sigma$, chosen so that the time taken will include that
required to process sources). The basic searches, with no
pre-processing done, took less than a second for the high-threshold
search, but between 1 and 3~min for the low-threshold case -- the
numbers of sources detected ranged from 3000 (rejecting sources with
fewer than 3 channels and 2 spatial pixels) to 42000 (keeping all
sources).

When smoothing, the raw time for the spectral smoothing was only a
few seconds, with a small dependence on the width of the smoothing
filter. And because the number of spurious sources is markedly
decreased (the final catalogues ranged from 17 to 174 sources,
depending on the width of the smoothing), searching with the low
threshold did not add much more than a second to the time. The
spatial smoothing was more computationally intensive, taking about
4~min to complete the high-threshold search.

The wavelet reconstruction time primarily depended on the
dimensionality of the reconstruction, with the 1D case taking 20~s,
the 2D case taking 30--40~s and the 3D case taking 2--4~min. The
spread in times for a given dimensionality was caused by different
reconstruction thresholds, with lower thresholds taking longer (since
more pixels lie above the threshold and so need to be added to the
reconstructed array). In all cases the reconstruction time dominated
the total time for the low-threshold search, since the number of
sources found was again smaller than in the basic searches.


\secB{Why do preprocessing?}

The preprocessing options provided by \duchamp, particularly the
ability to smooth or reconstruct via multi-resolution wavelet
decomposition, provide an opportunity to beat down the effects of the
random noise that will be present in the data. This noise will
ultimately limit one's ability to detect objects and form a complete
and reliable catalogue. Two effects are important here. First, the
noise reduces the completeness of the final catalogue by suppressing
the flux of real sources such that they fall below the detection
threshold. Second, the noise produces false positive detections
through noise peaks that fall above the threshold, thereby reducing
the reliability of the catalogue.

\citet{whiting12} examined the effect on completeness and reliability
of the reconstruction and smoothing (1D cases only) when applied to a
simple simulated dataset. Both had the effect of reducing the number
of spurious sources, which means the searches can be done to fainter
thresholds. This led to completeness levels about one flux unit
(equal to one standard deviation of the noise) fainter than searches
without pre-processing, while maintaining $>95\%$ reliability. The
smoothing did slightly better, with the completeness level nearly
half a flux unit fainter than the reconstruction, although this was
helped by the sources in the simulation all having the same spectral
size.

\secB{Reconstruction considerations}

The \atrous wavelet reconstruction approach is designed to remove a
large amount of random noise while preserving as much structure as
possible on the full range of spatial and/or spectral scales present
in the data. While it is more expensive in terms of memory and CPU
usage (see the previous sections), its effect on the reliability of
the final catalogue, in particular, makes it worth investigating.

There are, however, a number of subtleties that need to be considered
by potential users. \citet{whiting12} shows a set of examples of the
reconstruction applied to simulated and real data. The real data, in
this case a HIPASS cube, shows differences in the quality of the
reconstructed spectrum depending on the dimensionality of the
reconstruction. The two-dimensional reconstruction (where the cube is
reconstructed one channel map at a time) shows much larger
channel-to-channel noise, with a number of narrow peaks surviving the
reconstruction process. The problem here is that there are spatial
correlations between pixels due to the beam, which allow beam-sized
noise fluctuations to rise above the threshold more frequently than
in the one-dimensional (spectral) case. The other effect is that, in
contrast to the 1D reconstruction, each channel is reconstructed
independently, and does not depend on its neighbouring channels. This
is also why the 3D reconstruction (which also suffers from the beam
effects) has improved noise properties in the output spectrum, since
the information from neighbouring channels is taken into account.

Caution is also advised when looking at subsections of a cube. Due to
the multi-scale nature of the algorithm, the wavelet coefficients at
a given pixel are influenced by pixels at very large separations,
particularly given that edges are dealt with by assuming reflection
(so the whole array is visible to all pixels). Also, if one decreases
the dimensions of the array being reconstructed, there may be fewer
scales used in the reconstruction. These points mean that the
reconstruction of a subsection of a cube will differ from the same
subsection of the reconstructed cube. The difference may be small
(depending on the relative size difference and the amount of
structure at large scales), but there will be differences at some
level.
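
One practical consequence is that, if repeated searches of the same
data are planned, it can be better to reconstruct the full cube once
and save the result, rather than reconstructing subsections
separately. A minimal parameter-file sketch of this approach follows
(file names and numerical values are illustrative only; see the
descriptions of the individual input parameters elsewhere in this
Guide for their precise behaviour):
\begin{verbatim}
# First run: reconstruct the full cube and save the result
imageFile        cube.fits
flagATrous       true
reconDim         1
snrRecon         4.
flagOutputRecon  true

# Later runs: read the saved reconstruction instead of redoing it
imageFile        cube.fits
flagATrous       true
flagReconExists  true
reconFile        cube.recon.fits
\end{verbatim}
The name given to \texttt{reconFile} should match the file written by
the first run.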

Note also that BLANK pixels are accounted for within the
reconstruction: they remain BLANK in the output, and do not
contribute to the discrete convolution where they otherwise
would. Flagging channels with the \texttt{flaggedChannels} parameter,
however, has no effect on the reconstruction -- these flags are
applied only after the preprocessing, in either the searching or the
rejection stage.

\secB{Smoothing considerations}

The smoothing approach differs from the wavelet reconstruction in
that it has a single scale associated with it. The user has two
choices to make -- which dimension to smooth in (spatially or
spectrally), and what size kernel to smooth with. \citet{whiting12}
shows examples of how different smoothing widths (in one dimension in
that case) can highlight sources of different sizes. If one has some
\textit{a priori} idea of the typical size scale of the objects one
wishes to detect, then choosing a single smoothing scale can be quite
beneficial.
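
As a minimal illustration of these two choices, the following
parameter-file fragments select spectral (Hanning) or spatial
(Gaussian) smoothing respectively. The values are indicative only,
and the kernel parameters are described in detail elsewhere in this
Guide:
\begin{verbatim}
# Spectral smoothing: Hanning filter of width 5 channels
flagSmooth    true
smoothType    spectral
hanningWidth  5

# Spatial smoothing: Gaussian kernel with major axis of 10 pixels
flagSmooth    true
smoothType    spatial
kernMaj       10.
\end{verbatim}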

Note also that beam effects can be important here, when smoothing
spatial data on scales close to that of the beam. This can enhance
beam-sized noise fluctuations and potentially introduce spurious
sources. As always, examining the smoothed array (after saving it via
\texttt{flagOutputSmooth}) is a good idea.


\secB{Threshold method}

When it comes to searching, the FDR method produces more reliable
results than simple sigma-clipping, particularly in the absence of
reconstruction. However, it does not work in exactly the way one
would expect for a given value of \texttt{alpha}. For instance,
setting fairly liberal values of \texttt{alpha} (say, 0.1) will often
lead to a much smaller fraction of false detections (\ie much less
than 10\%). This is the effect of the merging algorithms, which
combine the sources after the detection stage and reject detections
not meeting the minimum pixel or channel requirements. It is thus
better to aim for larger \texttt{alpha} values than those derived
from a straight conversion of the desired false detection rate.
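
As a rough illustration (the values here are indicative only, not
recommendations), an FDR-based search relying on the minimum-size
rejection might be set up as:
\begin{verbatim}
flagFDR      true
alphaFDR     0.1
minPix       2
minChannels  3
\end{verbatim}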

If the FDR method is not used, caution is required when choosing the
S/N cutoff. Typical cubes have very large numbers of pixels, so even
an apparently large cutoff will still result in a not-insignificant
number of detections simply due to random fluctuations of the noise
background. For instance, a $4\sigma$ threshold on a cube of Gaussian
noise of size $100\times100\times1024$ will result in $\sim340$
single-pixel detections. This is where the minimum channel and pixel
requirements are important in rejecting spurious detections.
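
This is essentially the Gaussian tail probability at work: the
fraction of pure-noise values exceeding $4\sigma$ is
$\sim3.2\times10^{-5}$, so
\begin{displaymath}
  N_{\rm false} \approx (100\times100\times1024) \times 3.2\times10^{-5}
  \approx 330,
\end{displaymath}
broadly consistent with the figure quoted above.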


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "Guide"
%%% End: