Opened 13 years ago
Last modified 13 years ago
#101 assigned enhancement
Velocity width improvements
Reported by: | MatthewWhiting | Owned by: | MatthewWhiting |
---|---|---|---|
Priority: | low | Milestone: | Release-2.0 |
Component: | Output | Version: | 1.1.11 |
Severity: | normal | Keywords: | |
Cc: |
Description
Need to look again at the calculation of the velocity widths. The values are not consistent, with W50 and W20 often being the same.
A better way to do it might be to take the integrated spectrum and apply a threshold of 50%/20% of the peak and take the extremities of the resulting detection as the width. Considerations:
- May want to extrapolate out from the extreme pixels to the exact threshold to get a better estimate of the width. Caution required though in case the next pixel in the spectrum goes up. Could it though? No, if we apply the threshold on that spectrum, anything outside the extremities must be below the threshold.
- What happens when 50% or 20% of the peak is below a sensible noise level? e.g. a 6-sigma detection will have the 20% level at 1.2 sigma.
- Make integrated spectrum of just the detected pixels? But how to do point 1.? And if both 50% and 20% are below the original detection threshold, then they'll be the same...
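The thresholding scheme proposed above could look something like the following minimal Python sketch. The function name, signature, and interpolation details are illustrative assumptions, not Duchamp's actual implementation; it finds the outermost channels above the threshold and then extrapolates linearly to the exact crossing, per the first consideration above.

```python
import numpy as np

def spectral_width(spectrum, frac):
    """Width of an integrated spectrum at a fraction of its peak.

    Finds the outermost channels at or above frac * peak, then
    linearly interpolates out to the exact threshold crossing on
    each side. Returns the width in channels. (Illustrative sketch,
    not Duchamp's code.)
    """
    spectrum = np.asarray(spectrum, dtype=float)
    thresh = frac * spectrum.max()
    above = np.where(spectrum >= thresh)[0]
    if above.size == 0:
        return 0.0
    lo, hi = above[0], above[-1]
    # Extrapolate from the extreme pixels to the exact threshold.
    # Everything outside [lo, hi] is below thresh by construction,
    # so the next pixel out cannot go back up above it.
    left = float(lo)
    if lo > 0:
        left = lo - (spectrum[lo] - thresh) / (spectrum[lo] - spectrum[lo - 1])
    right = float(hi)
    if hi < spectrum.size - 1:
        right = hi + (spectrum[hi] - thresh) / (spectrum[hi] - spectrum[hi + 1])
    return right - left
```

For a symmetric triangular profile `[0, 1, 2, 3, 4, 3, 2, 1, 0]`, `frac=0.5` gives a width of 4.0 channels, with the crossings landing exactly on channels 2 and 6.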
Attachments (3)
Change History (6)
Changed 13 years ago by
Attachment: | duchamp-1.1.8-results.txt added |
---|
comment:1 Changed 13 years ago by
Priority: | normal → high |
---|---|
Status: | new → assigned |
Version: | 1.1.10 → 1.1.11 |
Bug report from Tobias on this issue:
Hi Matt,

Sorry to bother you with Duchamp again, but I encountered a very serious problem which I thought I should report to you directly. Paolo Serra created a new version of the WSRT WHISP model cube, this time with enough noise data that the noise doesn't repeat every few hundred channels. Before running Duchamp on the actual data cube, I decided to run it on the noise-free model cube first to get a reference catalogue of objects in the cube. For this purpose I installed the latest version of Duchamp (1.1.11) and ran it on the noise-free cube with an absolute flux threshold of 1 µJy. Please find the corresponding Duchamp parameter file attached.

Duchamp successfully ran and detected 100 sources in the cube. However, when inspecting the output catalogue, I noticed that some of the line-width measurements don't make sense. Several of the w_50 values are much larger than the w_20 and w_vel, and in some cases I get w_50 values of several thousand km/s (more than 13,000 km/s in one case), which doesn't make any sense. I first thought that these were either faint sources or sources near the edge of the cube where the algorithm fails to calculate correct line widths, but I quickly found sources that were bright and well inside the cube and yet had excessive line widths of more than 1000 km/s.

Since I did not remember having similar issues in the past, I decided to run an old version of Duchamp (1.1.8) on the same data cube with exactly the same parameter file. To my surprise, the problem did not occur with the old version, and all line widths seemed to be reasonable and in good agreement with one another. Furthermore, I noticed that Duchamp 1.1.8 calculates line widths in MHz, which is the original axis type of the cube, whereas Duchamp 1.1.11 provides all line widths in km/s and must somehow internally convert frequency to velocity.
Here is just one example (source J115900+302726):

Version | x | y | z | w_50 | w_20 | w_vel/freq | Unit |
---|---|---|---|---|---|---|---|
1.1.11 | 257.3 | 346.1 | 1052.9 | 1659.967 | 211.188 | 235.521 | km/s |
1.1.8 | 257.3 | 346.1 | 1052.9 | 0.927 | 1.019 | 1.043 | MHz |

Note the excessive w_50 found by Duchamp 1.1.11. In some cases w_50 is wrong but w_20 looks fine, whereas in other cases both are significantly too large. I attached the two catalogues for 1.1.8 and 1.1.11, so you can take a look at the full output. It seems that if I increase the flux threshold, e.g. to 0.1 mJy, the problem remains, but the values are not quite as extreme and are typically below 1000 km/s. But this is still way too large, and w_50 is still larger than w_20 and w_vel in these cases.

Cheers, Tobias
comment:2 Changed 13 years ago by
Largely fixed - the peak was being searched for over the entire spectral range, rather than just the range of the object in question.
Remaining issue is how to decide where to stop. The current method starts at zmin/zmax, then moves in or out depending on whether the current point is below or above the threshold in question. This can make W20 in particular much bigger than WVEL, which I guess is what you want.
Question is what happens when the threshold is right in the noise (e.g. 20% of a 5-sigma detection...).
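The walk from zmin/zmax described above could be sketched as follows. This is a hypothetical Python illustration (names and edge handling are assumptions, not the Duchamp source): the peak is taken over the object's own channel range only, reflecting the fix above, and each edge moves outward while the neighbouring channel is still above the threshold, or inward while the current channel is below it.

```python
def width_from_extent(spectrum, zmin, zmax, frac):
    """Find channel bounds at frac * peak, starting from the
    object's detected extent [zmin, zmax].

    The peak is searched only within the object's range (the fix
    noted in comment 2). Each edge then walks outward past the
    detected pixels while the next channel stays above the
    threshold, or inward while the current channel is below it.
    Returns (zlow, zhigh). (Illustrative sketch.)
    """
    peak = max(spectrum[zmin:zmax + 1])  # peak within the object only
    thresh = frac * peak

    lo = zmin
    if spectrum[lo] < thresh:
        while lo < zmax and spectrum[lo] < thresh:   # move inward
            lo += 1
    else:
        while lo > 0 and spectrum[lo - 1] >= thresh:  # move outward
            lo -= 1

    hi = zmax
    if spectrum[hi] < thresh:
        while hi > zmin and spectrum[hi] < thresh:    # move inward
            hi -= 1
    else:
        while hi < len(spectrum) - 1 and spectrum[hi + 1] >= thresh:
            hi += 1
    return lo, hi
```

Because the edges may walk outward past zmin/zmax, the resulting W20 can exceed WVEL, as noted above; and because the peak is guaranteed above the threshold, the inward walk always terminates inside the object.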
comment:3 Changed 13 years ago by
Priority: | high → low |
---|---|
Type: | defect → enhancement |
Thinking about this some more - I think I'm happy with the current approach for the time being. Although it slightly breaks the mould of only reporting parameters using the detected pixels, that approach doesn't really make sense for the W50 and W20 parameters, since they could potentially provide no more information than WVEL if the detection threshold is such that the peak/2 or peak/5 is below that threshold.
To make them worthwhile, I think we have to allow them to go below the detection threshold and extend the source out past just the detected pixels.
I might leave this ticket open since we may want to look at alternative measurement schemes, but I'll reduce its priority.
Results from 1.1.8