In this case, since it is a particular campaign with specific goals, I would probably look to the PI for guidance.
In general, though, the processing (including combining) of data for submission to databases where the end use is unknown (and in this case may not be known until all of us are long gone) is an ever-present dilemma. I faced this in real life with databases storing vast quantities of process control and quality time-series data from a continuous process (a paper machine), where, as in astronomy, variations over several orders of magnitude in timescale (from milliseconds to years in that case) are potentially relevant.

Ultimately, storing the data from every exposure and letting the final user make the decision would be ideal, aided perhaps by tools in the database that let a casual user view the filtered data. Andy (like our paper mill IT manager at the time) might baulk at every exposure being measured and stored individually, indefinitely, though!

An alternative approach could be to examine the data prior to submission and establish at what point, as measurements are combined, any variation becomes significant compared to the uncertainty, thus preserving the maximum information while storing the minimum of data.
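That last idea could be sketched as a simple greedy pass over the time series: keep merging consecutive measurements into a bin only while the scatter within the bin stays below the typical per-point uncertainty, and close the bin as soon as real variation appears. The function name, the one-pass greedy strategy, and the scatter-versus-mean-uncertainty criterion below are all my own assumptions for illustration, not an established recipe:

```python
from statistics import mean, pstdev

def adaptive_combine(values, sigmas):
    """Merge consecutive measurements while the scatter inside the
    candidate bin stays below the mean per-point uncertainty, so that
    combining never hides variation significant relative to it.
    Returns a list of (bin_mean, n_points_combined) tuples."""
    bins = []
    start, n = 0, len(values)
    while start < n:
        end = start + 1
        # Try to grow the current bin one measurement at a time.
        while end < n:
            chunk = values[start:end + 1]
            # Stop growing once the scatter among the points exceeds
            # the typical measurement uncertainty in this stretch.
            if pstdev(chunk) > mean(sigmas[start:end + 1]):
                break
            end += 1
        bins.append((mean(values[start:end]), end - start))
        start = end
    return bins
```

On a stretch where the signal is flat to within the noise, everything collapses into one stored point; where a genuine change occurs, the bin boundary falls at the change and the detail is kept. A real implementation would want a proper statistical test rather than this crude threshold, but it shows how "maximum information, minimum data" could be decided from the data themselves rather than by a fixed combining rule.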