Thanks. You understand my proposal, and I was indeed thinking specifically of the analysis phase; I should have made that clearer.
Unfortunately, from the analysis point of view the database stores only a magnitude and its uncertainty at a particular JD. The last is an annoyance but nothing more, since conversion to the more useful HJD or BJD is a tedious but straightforward computation. The first two, however, destroy some of the information held by the intensity counts of the VS, the sequence stars, the local sky backgrounds and the latter's variance. Further, the database does not store the data for the other stars in the image, which can be used, with care, to give a tighter estimate of the ZPM and its uncertainty through ensemble photometry. Needless to say, I keep all the raw data around for subsequent re-analysis. Please note that I am not saying that the database structure should be changed!
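For anyone who wants to do the JD-to-HJD step themselves, here is a minimal Python sketch. It is illustrative only, not my actual pipeline: it uses a low-precision Meeus-style solar position, which is good to a few seconds; BJD at full accuracy needs a proper barycentric ephemeris (e.g. via a library such as astropy), not this shortcut.

```python
import math

LIGHT_TIME_1AU_DAYS = 499.00478 / 86400.0  # light travel time across 1 AU, in days

def heliocentric_correction(jd, ra_deg, dec_deg):
    """Approximate HJD - JD (days) for a star at (ra_deg, dec_deg).

    Uses a low-precision solar ephemeris (Meeus-style polynomial terms),
    accurate to a few seconds -- fine for hours-long periods, not for
    millisecond timing work.
    """
    n = jd - 2451545.0  # days since J2000.0
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)        # solar mean anomaly
    lam = math.radians((280.460 + 0.9856474 * n
                        + 1.915 * math.sin(g)
                        + 0.020 * math.sin(2.0 * g)) % 360.0)  # ecliptic longitude
    r_sun = 1.00014 - 0.01671 * math.cos(g) - 0.00014 * math.cos(2.0 * g)  # AU
    eps = math.radians(23.439 - 4.0e-7 * n)                    # obliquity
    ra_sun = math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))
    dec_sun = math.asin(math.sin(eps) * math.sin(lam))
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    # cosine of the angle between the star and the geocentric Sun
    cos_theta = (math.sin(dec) * math.sin(dec_sun)
                 + math.cos(dec) * math.cos(dec_sun) * math.cos(ra - ra_sun))
    # Star opposite the Sun => light reaches Earth ~8 min before the Sun,
    # so the heliocentric time is later (positive correction).
    return -r_sun * LIGHT_TIME_1AU_DAYS * cos_theta

def jd_to_hjd(jd, ra_deg, dec_deg):
    """Convert a geocentric JD to an approximate HJD."""
    return jd + heliocentric_correction(jd, ra_deg, dec_deg)
```

The correction is bounded by about ±8.3 minutes (one AU of light travel), so it matters for a 3.2-hour period but is easy to sanity-check.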
Perhaps I should submit the raw 30 s results with their 0.05 to 0.1 magnitude uncertainties and let others smooth them as they wish for their own analysis. I can pretty much guarantee the 3.2-hour variability will not be visible without such smoothing.