Dr Paul Leyland

Thank you. I’ve read that paper in the past, and imaged M67, but it was good to read it again. Section 4 (p77) is particularly relevant, especially the comment about the difference between visual and CCD estimates of a 17.4 object when the image goes down to 20 or so. Another apposite comment is on page 79: this magnitude-bridging technique is common in the professional world, as most of the standard stars are too bright for large telescopes.

Please note that for present purposes I am emphatically NOT trying to detect the faintest possible object on the image. I am trying to measure the magnitude of the faintest object which can be measured with an error smaller than a specific limit, say 0.1 magnitude. In that case the SNR is well above the 5-sigma detection limit mentioned in the paper.
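To put a number on that claim, here is a minimal sketch of the standard conversion between magnitude uncertainty and signal-to-noise ratio, using the usual first-order error propagation through the magnitude formula (the specific function name is mine, not from the paper):

```python
import math

def snr_for_mag_error(sigma_mag):
    """SNR required to reach a given magnitude uncertainty.

    From m = -2.5 * log10(F), first-order error propagation gives
    sigma_m ~= (2.5 / ln 10) * (sigma_F / F) ~= 1.0857 / SNR,
    so SNR ~= 1.0857 / sigma_m.
    """
    return (2.5 / math.log(10)) / sigma_mag

# An 0.1-magnitude error budget implies SNR of roughly 11,
# comfortably above a 5-sigma detection threshold.
print(round(snr_for_mag_error(0.1), 1))  # ~ 10.9
```

So a star measured to +/- 0.1 mag is, by construction, detected at more than twice the 5-sigma level.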

As I pointed out earlier, if I used a sequence which ends at 16.9 to measure a variable at, say, 19.5 +/- 0.1, that estimate would be accepted without question. Why should a measurement of the same accuracy for a nearby star of the same brightness be rejected purely because it is not a (known) variable?

Behind all this is my firm belief that one should not throw away data. It should be preserved for later scientists to re-analyse if they wish. For my part I store every image which is not too badly corrupted by focus errors, guidance errors, passing clouds, etc. In only 18 months I already have more than thirty thousand images, together with their metadata in a SQL database. All can be retrieved and re-examined for whatever reason — pre-discovery observations, perhaps, or searching for previously unknown variables.
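As an illustration of the sort of retrieval I have in mind, here is a minimal sketch of a pre-discovery search against an image-metadata table. The schema and column names are entirely hypothetical (the post does not describe the actual database layout), and a real search would use proper spherical geometry rather than a simple coordinate box:

```python
import sqlite3

# Hypothetical schema -- the real database layout is not described above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE images (
    path TEXT, obs_date TEXT, ra_deg REAL, dec_deg REAL, filter TEXT)""")
con.execute("INSERT INTO images VALUES "
            "('img_0001.fits', '2023-01-15', 150.25, 2.10, 'V')")

# Pre-discovery search: all frames within ~0.5 deg of a target position.
# (A crude rectangular cut; a real query would correct RA for cos(dec).)
ra, dec, r = 150.3, 2.0, 0.5
rows = con.execute(
    "SELECT path, obs_date FROM images "
    "WHERE ra_deg BETWEEN ? AND ? AND dec_deg BETWEEN ? AND ?",
    (ra - r, ra + r, dec - r, dec + r)).fetchall()
print(rows)  # the archived frame covering the target position
```

The point is simply that once the metadata are in SQL, questions like "did I image this field before the discovery announcement?" become one-line queries.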

Don't misunderstand me: I will continue to play by the rules as they stand, but it seems to me that the present rule is, to use Arne Henden's phrase, "extremely conservative".