More 30 s captures of U Leo are coming in to add to the hundreds already stored. This is all rather tedious so I have been thinking about data processing and, in particular, about stacking the images. The SNR of each sub varies between about 10 and about 20, depending on sky transparency, air mass, etc. Since stacking N subs should improve the SNR by roughly sqrt(N) (assuming uncorrelated noise), a desirable SNR of 30-50 suggests stacking perhaps 9 of them.
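As a quick sanity check on the arithmetic, here is a small sketch of the sqrt(N) scaling; `subs_needed` is a hypothetical helper, and the assumption is that the noise in the subs is uncorrelated so that SNR grows as the square root of the stack size:

```python
import math

def subs_needed(snr_single, snr_target):
    """Smallest number of subs whose stack reaches snr_target,
    assuming SNR improves as sqrt(N) for uncorrelated noise."""
    return math.ceil((snr_target / snr_single) ** 2)

print(subs_needed(10, 30))  # worst subs (SNR ~10): 9 needed for SNR 30
print(subs_needed(20, 30))  # best subs (SNR ~20): 3 would already suffice
```

So 9 is sized for the worst-case subs; on a transparent night far fewer would reach the same target.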
So far, so simple. I could just divide each night's observations into consecutive, non-overlapping sub-sequences of length 9, stack them, and use the mid-exposure time as the date of each measurement. However, there seems to be nothing particularly magic about any particular selection of 9 consecutive images. If the first image, say, were deleted by accident, the process would yield much the same output, just sampling the light curve at times slightly offset from those that the full set of images would have produced.
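The non-overlapping scheme above might be sketched like this; the function name and the flat `times`/`fluxes` arrays are my own assumptions, not anything from an actual pipeline:

```python
import numpy as np

def block_stack(times, fluxes, n=9):
    """Average consecutive, non-overlapping groups of n subs.

    times  -- mid-exposure times of the individual subs (sorted)
    fluxes -- corresponding flux (or magnitude) measurements
    Returns the mid-time and mean flux of each complete block;
    a trailing partial block is discarded.
    """
    times = np.asarray(times, dtype=float)
    fluxes = np.asarray(fluxes, dtype=float)
    m = (len(times) // n) * n            # drop the incomplete tail
    t = times[:m].reshape(-1, n).mean(axis=1)
    f = fluxes[:m].reshape(-1, n).mean(axis=1)
    return t, f
```

With 18 subs this yields two points; dropping the first sub would yield two points shifted by one cadence, which is exactly the observation that motivates the question below.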
If this is the case, why not create 9 light curves, each with a temporal displacement from its neighbour set by the cadence of the subs?
Each curve would contain no more information than any other, and the sum of them would be just as noisy in intensity values but would sample time more densely. Might that not help subsequent processing extract a smoother light curve from the noisy data? If not, what am I missing? To me it appears related to a simple running-average smoothing process. It's late and I am sleepy, so I could well be overlooking something that should be obvious.
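Interleaving the 9 offset curves is indeed equivalent to a running mean: stacking every window of 9 consecutive subs with a stride of 1 gives the same per-point SNR as a block stack, with 9 times the sampling. A minimal sketch of that equivalence, under the same assumed flat flux array as before:

```python
import numpy as np

def running_stack(fluxes, n=9):
    """Stack every window of n consecutive subs (stride 1).

    This is the union of the n offset block-stacked curves: each
    output point is the mean of n subs, so it has the same SNR as a
    block-stacked point, but neighbouring points share n-1 subs and
    their noise is therefore strongly correlated.
    """
    f = np.asarray(fluxes, dtype=float)
    kernel = np.ones(n) / n
    return np.convolve(f, kernel, mode="valid")
```

The caveat is in the docstring: the extra points add temporal sampling but no new information, and any later fitting or period analysis that assumes independent measurements would need to account for the correlation between adjacent points.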