Bias Frames for CMOS

  • #619817
    Kevin West
    Participant

    I’m clear about flats and darks thanks to posts on here, but confused about bias.
    I have searched online extensively for info on this and found rather mixed messages and opinions.
    Are bias frames necessary, or even possible, with a CMOS camera, given that I don’t have a zero-exposure option?
    Some contributors say to just do the shortest exposure you can.
    Others say there are problems with very short exposures on CMOS cameras, which make bias frames unreliable.
    Many posts are beyond my technical understanding.

    Zero exposure problem aside, I know how to do them, but do I need them?

    Help?
    Kevin

    #619825
    Dr Paul Leyland
    Participant

    Not having a CMOS camera to play with, I throw this suggestion out for those who do to consider.

    Take a series of dark exposures under conditions as equal as possible, especially sensor temperature. Fit a smooth curve to the intensity readings at each pixel. Extrapolate back to zero exposure. That is your bias frame.

    Rather tedious but I don’t see why it shouldn’t work. Whether bias frames are useful to you I couldn’t say.
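    A rough NumPy sketch of this extrapolation idea (all names and numbers here are illustrative, not tested against any particular camera): fit a straight line per pixel to dark signal versus exposure time, and take the intercept at zero exposure as the bias estimate.

```python
import numpy as np

def estimate_bias(darks, exposure_times):
    """Extrapolate per-pixel dark signal back to zero exposure.

    darks: array of shape (n_frames, H, W), matched temperature/gain.
    exposure_times: sequence of length n_frames, in seconds.
    """
    t = np.asarray(exposure_times, dtype=float)
    flat = darks.reshape(len(t), -1)            # (n_frames, H*W)
    # Linear least-squares fit per pixel: signal = slope * t + intercept.
    slope, intercept = np.polyfit(t, flat, deg=1)
    # The intercept is the estimated zero-exposure (bias) level per pixel.
    return intercept.reshape(darks.shape[1:])

# Toy example: a 2x2 sensor with dark current 10 ADU/s and bias 100 ADU.
t = [1.0, 2.0, 4.0]
darks = np.stack([100.0 + 10.0 * ti * np.ones((2, 2)) for ti in t])
bias = estimate_bias(darks, t)
```

    A first-degree fit assumes the dark current grows linearly with exposure; a higher `deg` could be tried if the response is visibly non-linear.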

    #619827
    Mr Ian David Sharp
    Participant

    Hi Kevin,

    The first thing to bear in mind is that the bias signal always needs to be subtracted during calibration. Also remember that *all* images contain the bias signal, including your darks and flats.

    The traditional CCD workflow is to take separate bias, dark, and flat frames, then explicitly subtract the bias from darks, flats, and lights. This works well with CCDs because they are very well behaved in terms of their linearity.

    CMOS cameras, however, have certain non-linearities with short exposures and dark frames (amp glow). (A lot of new CMOS cameras seem to have all but eliminated amp glow.)

    To calibrate CMOS images, we don’t need to take separate bias frames, but rather keep the bias in the darks and flats and the bias subtraction is done when calibrating the flats with dark flats and the lights with their darks. So, it is not that bias is not used, but rather how it is ultimately subtracted.
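    The arithmetic here can be sketched in NumPy (a simplified illustration, not PixInsight's actual implementation; all frames are hypothetical masters): because the bias pedestal is present in both terms of each subtraction, it cancels without ever being measured separately.

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    # The dark matches the light's exposure/temp/gain, so (light - dark)
    # removes dark current *and* the bias pedestal in one step.
    light_cal = light - dark
    # The flat-dark matches the flat's exposure, so the flat is likewise
    # cleaned of bias and dark current before normalisation.
    flat_cal = flat - flat_dark
    flat_norm = flat_cal / np.mean(flat_cal)
    return light_cal / flat_norm

# Toy example with a bias pedestal of 100 ADU and a simple vignette pattern.
vignette = np.array([[1.0, 0.5], [1.0, 0.5]])
dark = np.full((2, 2), 105.0)           # bias 100 + dark current 5
light = dark + 50.0 * vignette          # sky signal attenuated by vignette
flat_dark = np.full((2, 2), 102.0)      # bias 100 + flat-dark current 2
flat = flat_dark + 200.0 * vignette
cal = calibrate(light, dark, flat, flat_dark)  # vignetting removed, flat field
```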

    If you use PixInsight and the excellent WBPP (Weighted Batch Preprocessing) script, then all the settings are nicely set up for you. You just need to feed it with your darks, flats and flat-darks (yes, take dark frames to match the exposure of your flats as well as your lights). The only time that PixInsight needs bias frames is when you have not taken dark frames to match your light frames. In this situation, the bias frames are used to scale your darks to match. So, for example, if you have 300s darks in your library and you have taken some 360s lights, the program will scale the 300s master dark to produce a scaled dark. This is not ideal but works very well.
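    Dark scaling is the one place where the bias genuinely has to be measured separately, and a small sketch shows why (function and names are illustrative, not PixInsight's internals): only the thermal part of a dark scales with exposure time, so the constant bias pedestal must be removed before scaling and restored afterwards.

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    # The thermal (dark-current) component scales with exposure time;
    # the bias pedestal does not, so subtract it, scale, then add it back.
    thermal = master_dark - master_bias
    return master_bias + thermal * (t_light / t_dark)

# Toy example: bias pedestal 100 ADU, dark current 0.2 ADU/s.
bias = np.full((2, 2), 100.0)
dark_300 = bias + 0.2 * 300.0                 # 300 s master dark
dark_360 = scale_dark(dark_300, bias, 300.0, 360.0)  # scaled to 360 s
```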

    In summary, take darks for all the exposures and temperature combinations you are likely to use. Take flats for all of your filters and also take flat-darks to match the flat exposures. I’m assuming here that you have chosen your Gain and Offset values. These must be kept the same for all lights and calibration frames.

    Adam Block explains all in great depth here: https://www.youtube.com/watch?v=WzEpygFGbN0

    Hope that helps!
    Ian.

    #619828
    Gary Eason
    Participant

    [EDIT: the previous reply was written while I was typing this.] I’d also be interested in the answer. I use a Nikon D750 DSLR, which reportedly has a Sony IMX-128-(L)-AQP CMOS sensor. I usually shoot about 30 dark frames at the same exposure as my lights. I also shoot about 30 each of flats and dark flats. I have never bothered with bias frames. I stack with AstroPixelProcessor, which creates masters from the calibration frames. I was just reading a long thread on Astrobin on the whole subject, in which various people announce, with what sounds like great authority, completely contradictory things.

    Surely there must be a thorough scientific explanation somewhere that everyone can agree on?

    #619830
    Grant Privett
    Participant

    The impression I got from colleagues was that CMOS bias frames, even when taken at the same sensor temp and gain setting, were not as reliably repeatable as CCD. To me, that suggested that collecting darks immediately before or after your imagery might be a good idea. Collecting darks on another night – less so.

    After all, it’s worth noting that professional astronomers have been slow to take up CMOS, and I rather doubt that’s because they are all stuck in their ways, as some have suggested in the past.

    I must admit, as a CCDer, I have never used bias frames because I only use 3 or 4 exposure settings and just redo those every few weeks – just to be sure.

    For CCDs, a dark of the same exposure length as the lights works fine. Throw in a decent flat-field image and a defect map and it’s as good as it gets.

    #619832
    Mr Ian David Sharp
    Participant

    There looks to be some fine advice from the manufacturers of SBIG cameras:

    https://diffractionlimited.com/calibrating-cmos-images/

    Cheers
    Ian.

    #619833
    Dr Paul Leyland
    Participant

    Thank you all. I am learning a lot which will doubtless come in useful one day. Although the SX 814 I now use has a CCD chip, it seems inevitable that I will need to use a CMOS device sooner or later.

    Actually, I have already used CMOS sensors in extremely elderly Canon DSLRs. I have not yet attempted to do photometry at anything better than 100 mmag accuracy or precision. 2 mmag is possible with the SX camera.

    Paul

    #619848
    Grant Privett
    Participant

    The paper:

    https://conference.sdo.esoc.esa.int/proceedings/neosst2/paper/7/NEOSST2-paper7.pdf

    is quite an interesting read. Looks like a reasonable number of noisier pixels on quite a popular camera.

    AAVSO also have a report on the QHY600 camera from 2020 and S&T have a review of the QHY600 (De Cicco?) – where the bias issues were mentioned.

    One thing to remember is that the QHY600 comes in two flavours: research grade (best used with their fibre link) and the lower-quality chip (mainly USB3) cameras.

    I’ve been wondering how each pixel having its own read-out noise characteristics works when stacking images using medians or sigma clipping. My gut instinct says not quite as well, but I would recommend testing that in practice. I think it means dithering may have become essential, but the saved readout time over a night may balance out the need for more frames.

    Might also try a defect map.

    #619870
    Dr Paul Leyland
    Participant

    “I think it means dithering may have become essential but the saved readout time over a night may balance out the need for more frames.”

    If the FWHM of the stars is several pixels, poor focus, seeing and minor guiding errors will do much the same as dithering for free.

    Many photometrists defocus slightly for exactly this reason.

    #619871
    Grant Privett
    Participant

    With old and noisy sensors I found the best way to get a decent clean background was via dithering. While using my old Super Polaris, the periodic error helped, though the defective pixels could cause a streaky pattern in the direction of my declination drift.

    Realistically, using a series of darks to find the noisiest pixels and then spatially filtering them might be worthwhile, as the Gaussian random-noise assumption made when applying median stacking may not hold well for CMOS sensors, where each pixel has its own amplifier characteristics.
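    For concreteness, here is a minimal single-pass sigma-clipping stack (illustrative only; real stackers such as PixInsight iterate the rejection): the rejection threshold is derived from the per-pixel spread across the stack, which is exactly where pixel-to-pixel variation in read noise could undermine a global Gaussian assumption.

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0):
    """frames: (n, H, W). Reject per-pixel outliers, then average the rest."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Keep only samples within kappa standard deviations of the pixel mean.
    mask = np.abs(frames - mean) <= kappa * std
    # Average the surviving samples at each pixel.
    return np.where(mask, frames, 0.0).sum(axis=0) / mask.sum(axis=0)

# Toy example: ten clean samples of 10 ADU plus one 1000 ADU outlier
# (e.g. a cosmic-ray hit); the outlier is rejected before averaging.
frames = np.concatenate([np.full((10, 1, 1), 10.0), np.full((1, 1, 1), 1000.0)])
stacked = sigma_clip_stack(frames)
```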

    #619873
    AlanM
    Participant

    The AAVSO Guide to CCD/CMOS Photometry with Monochrome Cameras has some practical advice on bias frames.

    #619881
    David Arditti
    Participant

    I agree with Ian Sharp’s answer. There’s no need for bias frames. A better solution is the combination of dark frames of length and temperature equal to that of the light frames, flat frames specific to the optical setup and filter, and dark flat frames of length equal to the flat frames. I think this is better for all sensor types.

    #619882
    Kevin West
    Participant

    Thanks Ian,
    I think I have that.
    Just to clarify: my flats, taken with a light pad and T-shirt, required exposures of 2.3 s to get the histogram peak
    between 1/3 and 1/2 of the max ADU count.
    That means I need to take a set of darks at the same exposure (all other things, as you say, replicated: temp, gain, etc.).
    Thanks
    Kevin
    PS I thought I posted this already but can’t find it.

    #619884
    David Arditti
    Participant

    Yes, though I am surprised your flat frames need to be exposed for so long. I use a twilight sky and the exposures are less than 0.1s.

    #619886
    Mr Ian David Sharp
    Participant

    Just to clarify
    My flats taken with a light pad and T-shirt required exposures of 2.3 sec to get the histogram max
    between 1/3 and 1/2 of max ADU count.
    That means I need to take a set of darks at the same exposure (all other things as you say replicated, temp gain etc.)

    Hi Kevin,

    Yes, that’s correct – you need darks to match your flats (and darks to match your lights, of course!). My exposures vary from about 2 to 8 seconds with my CCD-based system and LED screen.

    Cheers
    Ian.

    #619891
    Graeme Coates
    Participant

    As Ian says above, taking matching darks, flats and matching flat darks with the same temp, gain and offset is all that’s required (in my experience, taking and trying to apply bias frames makes everything *much* worse – I fell into this trap when first starting with a CMOS camera…).

    From a practical point of view, it’s useful to pick two gain/offset combinations – one for broadband imaging with low gain, and one for narrowband with high gain (it takes a bit of work to determine an offset that ensures you don’t clip the pixel values at the lower end) – then stick with them, along with a few exposure times, which again should be chosen so the sky background noise swamps the read noise. This makes taking calibration frames much less onerous, and with set-point cooling the darks at least can be reused for some time.
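    The low-end clipping check mentioned above can be automated with a quick test on a bias or minimum-exposure frame (a sketch under assumed thresholds, not manufacturer guidance): if any pixel reads exactly zero, or the pedestal does not clear zero by several read-noise sigmas, the offset is too low.

```python
import numpy as np

def offset_ok(bias_frame, margin_sigma=5.0):
    """Return True if the offset looks high enough to avoid low-end clipping."""
    frame = np.asarray(bias_frame, dtype=float)
    if (frame == 0).any():                 # pixels already pinned at zero ADU
        return False
    # Require the mean pedestal to clear zero by several sigmas of read noise.
    return bool(frame.mean() - margin_sigma * frame.std() > 0)

# Synthetic example: a 500 ADU pedestal with 10 ADU read noise is fine;
# shifting it down so the noise distribution clips at zero is not.
rng = np.random.default_rng(0)
good = rng.normal(500.0, 10.0, (64, 64))
bad = np.clip(good - 495.0, 0.0, None)
```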
