\secA{Notes and hints on the use of \duchamp}
\label{sec-notes}

In using \duchamp, the user has to make a number of decisions about
the way the program runs. This section is designed to give the user
some guidance in making these choices.

The main choice is whether to alter the cube to try to enhance the
detectability of objects, by either smoothing or reconstructing via
the \atrous method. The main benefits of both methods are the marked
reduction in the noise level, leading to regularly-shaped detections,
and good reliability for faint sources.

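As an illustration, this choice is made via flags in the input
parameter file, along the following lines (a minimal sketch only --
check the parameter list elsewhere in this guide for the exact names
and defaults):
\begin{verbatim}
# Pre-processing choice: enable at most one of these.
flagATrous   true     # reconstruct the cube with the a trous method
flagSmooth   false    # alternatively, set true to smooth the cube
\end{verbatim}
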
The main drawback with the \atrous method is the long execution time:
to reconstruct a $170\times160\times1024$ (\hipass) cube often
requires three iterations and takes about 20--25 minutes to run
completely. Note that this is for the more complete three-dimensional
reconstruction: using \texttt{reconDim=1} makes the reconstruction
quicker (the full program then takes less than 5 minutes), but the
reconstruction still accounts for the largest share of that time.

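If speed is a concern, the one-dimensional reconstruction can be
requested as follows (again a sketch; \texttt{reconDim} is the
parameter discussed above):
\begin{verbatim}
flagATrous   true
reconDim     1      # reconstruct each spectrum separately: quicker,
                    # though still the largest part of the run time
\end{verbatim}
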
The smoothing procedure is computationally simpler, and thus quicker,
than the reconstruction. The spectral Hanning method adds only a very
small overhead on the execution, and the spatial Gaussian method,
while taking longer, will be done (for the above example) in less than
2 minutes. Note that these times will depend on the size of the
filter/kernel used: a larger filter means more calculations.

The searching part of the procedure is much quicker: searching an
un-reconstructed cube leads to execution times of less than a
minute. Alternatively, using the ability to read in previously-saved
reconstructed arrays makes running the reconstruction more than once a
more feasible prospect.

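The save-and-reuse workflow looks something like the following
(illustrative only; the parameter names, and the filename, which is
purely hypothetical here, should be checked against the parameter
list and the description of the output files):
\begin{verbatim}
# First run: perform the reconstruction and write it to disk.
flagATrous        true
flagOutputRecon   true

# Subsequent runs: read the saved array instead of reconstructing.
flagReconExists   true
reconFile         my-cube.recon.fits   # hypothetical filename
\end{verbatim}
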
On the positive side, the shape of the detections in a cube that has
been reconstructed or smoothed will be much more regular and smooth --
the ragged edges that objects in the raw cube possess are smoothed by
the removal of most of the noise. This enables better determination of
the shapes and characteristics of objects.

A further point to consider when using the reconstruction is that if
the two-dimensional reconstruction is chosen (\texttt{reconDim=2}), it
can be susceptible to edge effects. If the valid area in the cube (\ie
the part that is not BLANK) has non-rectangular edges, the convolution
can produce artefacts in the reconstruction that mimic the edges and
can lead (depending on the selection threshold) to some spurious
sources. Caution is needed with such data -- the user is advised to
check the reconstructed cube carefully for the presence of such
artefacts. Note, however, that the 1- and 3-dimensional
reconstructions are \emph{not} susceptible in the same way, since the
spectral direction does not generally exhibit these BLANK edges, and
so we recommend the use of either of these.

If one chooses the reconstruction method, a further decision is
required on the signal-to-noise cutoff used in determining acceptable
wavelet coefficients. A larger value will remove more noise from the
cube, at the expense of losing fainter sources, while a smaller value
will include more noise, which may produce spurious detections, but
will be more sensitive to faint sources. Values of less than about
$3\sigma$ tend not to reduce the noise a great deal and can lead to
many spurious sources (this depends, of course, on the cube itself).

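This cutoff is set in the input file; a hedged example (assuming the
parameter is named \texttt{snrRecon}, as in the parameter list) would
be:
\begin{verbatim}
flagATrous   true
snrRecon     4.    # wavelet coefficients below 4-sigma are discarded;
                   # values much below 3 risk many spurious sources
\end{verbatim}
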
The smoothing options have fewer parameters to consider: basically
just the size of the smoothing function or kernel. Spectrally
smoothing with a Hanning filter of width 3 (the smallest possible) is
very efficient at removing spurious one-channel objects that may
result just from statistical fluctuations of the noise. One may want
to use larger widths or kernels of larger size to look for features of
a particular scale in the cube.

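For instance, the two smoothing modes might be selected like this (a
sketch under the assumption that the parameters are named as in the
parameter list; check there for exact spellings and units):
\begin{verbatim}
flagSmooth    true
smoothType    spectral   # Hanning smoothing along the spectral axis
hanningWidth  3          # the smallest (and cheapest) filter width

# or, for spatial smoothing with a Gaussian kernel:
# smoothType  spatial
# kernMaj     5.         # larger kernels mean more calculations
\end{verbatim}
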
When it comes to searching, the FDR method produces more reliable
results than simple sigma-clipping, particularly in the absence of
reconstruction. However, it does not work in exactly the way one
would expect for a given value of \texttt{alpha}. For instance,
setting fairly liberal values of \texttt{alpha} (say, 0.1) will often
lead to a much smaller fraction of false detections (\ie much less
than 10\%). This is the effect of the merging algorithms, which
combine the sources after the detection stage and reject detections
not meeting the minimum pixel or channel requirements. It is thus
better to aim for larger \texttt{alpha} values than those derived from
a straight conversion of the desired false detection rate.

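A hedged example of such a setup (parameter names as assumed from the
parameter list, with \texttt{alphaFDR} corresponding to the
\texttt{alpha} discussed above, and the minimum-size values purely
illustrative):
\begin{verbatim}
flagFDR      true
alphaFDR     0.1     # aim high: merging and the minimum-size criteria
                     # below cut the false rate well under 10%
minPix       2       # minimum spatial pixels per accepted detection
minChannels  3       # minimum channels per accepted detection
\end{verbatim}
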
Finally, as \duchamp is still undergoing development, there are some
elements that are not fully developed. In particular, it is not as
clever as I would like at avoiding interference. The ability to place
requirements on the minimum number of channels and pixels partially
circumvents this problem, but work is being done to make \duchamp
smarter at rejecting signals that are clearly (to a human eye at
least) interference. See the following section for further
improvements that are planned.