I don’t see any reason why CMOS should be any different from CCD, provided you are careful not to saturate either the individual subs or the final stack and the images are calibrated accurately. Because of its much lower read noise and much faster readout, CMOS tends to be used with shorter exposures, so the smaller well depth and reduced bit depth (12 or 14 bits rather than 16) are not really an issue. In any case, if the standard deviation of the noise in each image exceeds a few LSBs, then quantization is irrelevant whichever technology you use. One key point if you stack images before measuring them: use the mean rather than the sum, and output the resulting FITS as float rather than integer, to avoid saturation and quantization effects.
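To illustrate the mean-vs-sum point, here is a minimal NumPy sketch with synthetic data (the stack size, 14-bit levels, and noise figure are assumptions for the example, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of 16 short CMOS subs from a 14-bit ADC (0..16383),
# stored as uint16 the way most capture software writes them.
true_flux = 12000.0  # assumed source level in ADU, fairly near full scale
subs = np.clip(
    rng.normal(true_flux, 50.0, size=(16, 64, 64)), 0, 16383
).astype(np.uint16)

# Summing in the integer type wraps around once the running total
# exceeds 65535, silently corrupting every bright pixel:
bad_sum = subs.sum(axis=0, dtype=np.uint16)

# Averaging in float stays within range and preserves the sub-LSB
# precision gained from stacking:
good_mean = subs.mean(axis=0, dtype=np.float64)

print(bad_sum.max())     # wrapped-around value, far below 16 * 12000
print(good_mean.mean())  # close to the true flux of 12000 ADU
```

The same reasoning applies when writing the stack out: a float FITS keeps the fractional ADU values the mean produces, whereas casting back to a 16-bit integer re-quantizes them.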
CMOS and CCD present the same calibration challenges (flats, darks, etc.) and the same need to align wavelength sensitivity with standard systems (filters, etc.), but otherwise they behave much the same. CMOS has many other advantages, so it won’t be long before CCD detectors are only available for very specialist applications.
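The calibration itself is identical for both technologies. A minimal sketch, using synthetic stand-in arrays where real frames would be loaded from FITS files (the dark level, flat variation, and signal level are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic master frames: a uniform dark level and a flat field with
# ±10% pixel-to-pixel response variation (both assumed for illustration).
master_dark = np.full((64, 64), 500.0)         # assumed dark level in ADU
master_flat = rng.uniform(0.9, 1.1, (64, 64))  # pixel response map

# An idealised, noiseless light frame: uniform 2000 ADU of sky signal
# shaped by the flat field, plus the dark level.
light = 2000.0 * master_flat + master_dark

# Standard calibration: subtract the dark, divide by the normalised flat.
calibrated = (light - master_dark) / (master_flat / master_flat.mean())
# With this noiseless synthetic data the result is flat, near 2000 ADU.
```

Whether the frames come from a CMOS or a CCD camera changes nothing in these steps; only the shapes of the master frames differ.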