Bias Frames for CMOS
Tagged: bias frame, calibration frame, cmos sensor
This topic has 15 replies, 8 voices, and was last updated 1 year, 1 month ago by Graeme Coates.
Kevin West, 29 October 2023 at 9:03 pm
I’m clear about flats and darks thanks to posts on here, but confused about bias.
I have searched online extensively for info on this and found rather mixed messages and opinions.
Are they necessary, or even possible, with a CMOS camera, given that I don’t have a zero-exposure option?
Some contributors say to just do the shortest exposure you can.
Some say there are problems with very short exposures on CMOS cameras and that bias frames are unreliable.
Many posts are beyond my technical understanding. Zero-exposure problem aside, I know how to do them, but do I need them?
Help?
Kevin

Dr Paul Leyland, 30 October 2023 at 9:40 am
Not having a CMOS camera to play with, I throw this suggestion out for those who do to consider.
Take a series of dark exposures under conditions as equal as possible, especially sensor temperature. Fit a smooth curve to the intensity readings at each pixel. Extrapolate back to zero exposure. That is your bias frame.
Rather tedious but I don’t see why it shouldn’t work. Whether bias frames are useful to you I couldn’t say.
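For anyone who wants to try this, here is a minimal sketch of the per-pixel extrapolation in Python with numpy and astropy. The exposure times and file names are placeholders, and the straight-line fit assumes dark current accumulates linearly with exposure, which is exactly the assumption Paul’s suggestion rests on.

```python
# Sketch of extrapolating a stack of darks back to zero exposure.
# Exposure times and file names below are illustrative only.
import numpy as np
from astropy.io import fits

exposures = [10.0, 30.0, 60.0, 120.0]                 # seconds (example values)
paths = [f"dark_{int(t)}s.fits" for t in exposures]   # hypothetical file names

darks = np.stack([fits.getdata(p).astype(np.float64) for p in paths])

# Fit counts = bias + rate * exposure independently at every pixel
# (ordinary least squares, done with array arithmetic).
t = np.asarray(exposures)
n = len(t)
sum_t, sum_tt = t.sum(), (t * t).sum()
sum_d = darks.sum(axis=0)                             # per-pixel sum of counts
sum_td = (t[:, None, None] * darks).sum(axis=0)       # per-pixel sum of t*counts

rate = (n * sum_td - sum_t * sum_d) / (n * sum_tt - sum_t ** 2)
bias = (sum_d - rate * sum_t) / n                     # intercept = zero-exposure frame

fits.writeto("synthetic_bias.fits", bias.astype(np.float32), overwrite=True)
```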
Mr Ian David Sharp, 30 October 2023 at 10:50 am
Hi Kevin,
The first thing to bear in mind is that bias signal always needs to be subtracted during calibration. However, the thing to also remember is that *all* images contain the bias signal and this includes your darks and flats.
The traditional CCD workflow is to take separate bias, dark, and flat frames, then explicitly subtract the bias from darks, flats, and lights. This works well with CCDs because they are very well behaved in terms of their linearity. CMOS cameras, however, have certain non-linearities with short exposures and dark frames (amp glow), although a lot of new CMOS cameras seem to have all but eliminated amp glow.
To calibrate CMOS images we don’t need to take separate bias frames; instead we keep the bias in the darks and flats, and the bias subtraction is done when calibrating the flats with their flat-darks and the lights with their darks. So it is not that the bias is not used, but rather a question of how it is ultimately subtracted.
If you use PixInsight and the excellent WBPP (Weighted Batch Preprocessing) script, then all the settings are nicely set up for you. You just need to feed it with your darks, flats and flat-darks (yes, take dark frames to match the exposure of your flats as well as your lights). The only time PixInsight needs bias frames is when you have not taken dark frames to match your light frames. In that situation the bias frames are used to scale your darks to match. So, for example, if you have 300s darks in your library and you have taken some 360s lights, the program will scale the 300s master dark to produce a scaled dark. This is not ideal but works very well.
In summary, take darks for all the exposure and temperature combinations you are likely to use. Take flats for all of your filters and also take flat-darks to match the flat exposures. I’m assuming here that you have chosen your Gain and Offset values; these must be kept the same for all lights and calibration frames.
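For readers who like to see the arithmetic, here is a minimal sketch of the workflow Ian describes, assuming master calibration frames already exist as FITS files. The file names are placeholders and this is not WBPP’s actual implementation, just the basic dark / flat-dark subtraction it performs.

```python
# Sketch of CMOS calibration without separate bias frames.
# File names are placeholders for master frames made elsewhere.
import numpy as np
from astropy.io import fits

light       = fits.getdata("light_360s.fits").astype(np.float64)
master_dark = fits.getdata("master_dark_360s.fits").astype(np.float64)  # same exposure, temp, gain, offset as lights
master_flat = fits.getdata("master_flat.fits").astype(np.float64)       # flats already had their flat-darks subtracted

# The bias never appears explicitly: it sits inside both the dark and the
# flat-dark, so it cancels out when those frames are subtracted.
flat_norm  = master_flat / np.median(master_flat)   # normalise flat to ~1.0
calibrated = (light - master_dark) / flat_norm

fits.writeto("light_360s_cal.fits", calibrated.astype(np.float32), overwrite=True)
```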
Adam Block explains all in great depth here: https://www.youtube.com/watch?v=WzEpygFGbN0
Hope that helps!
Ian.

Gary Eason, 30 October 2023 at 10:52 am
[EDIT: Previous reply was written while I was typing this.] I’d also be interested in the answer. I use a Nikon D750 DSLR, which reportedly has a Sony IMX-128-(L)-AQP CMOS sensor. I usually shoot about 30 dark frames at the same exposure as my lights. I also shoot about 30 each of flats and dark flats. I have never bothered with bias frames. I stack with AstroPixelProcessor, which creates masters from the calibration frames. I was just reading a long thread on Astrobin on the whole subject, in which various people announce, with what sounds like great authority … completely contradictory things.
Surely there must be a thorough scientific explanation somewhere that everyone can agree on?
Grant Privett, 30 October 2023 at 12:05 pm
The impression I got from colleagues was that CMOS bias frames, even when taken at the same sensor temperature and gain setting, were not as reliably repeatable as those from a CCD. To me, that suggested that collecting darks immediately before or after your imagery might be a good idea. Collecting darks on another night, less so.
After all, it’s worth noting that professional astronomers have been slow to take up CMOS, and I rather doubt that’s because they are all stuck in their ways, as some have suggested in the past.
I must admit, as a CCDer, I have never used bias frames because I only use 3 or 4 exposure settings and just redo those every few weeks, just to be sure.
For CCDs a dark of the same exposure length as the lights works fine. Throw in a decent flat-field image and a defect map and it’s as good as it gets.
Mr Ian David Sharp, 30 October 2023 at 2:34 pm
There looks to be some fine advice from the manufacturers of SBIG cameras:
https://diffractionlimited.com/calibrating-cmos-images/
Cheers
Ian.

Dr Paul Leyland, 30 October 2023 at 5:00 pm
Thank you all. I am learning a lot which will doubtless come in useful one day. Although the SX 814 I now use has a CCD chip, it seems inevitable that I will need to use a CMOS device sooner or later.
Actually, I have already used CMOS sensors in extremely elderly Canon DSLRs. I have not yet attempted to do photometry at anything better than 100 mmag accuracy or precision. 2 mmag is possible with the SX camera.
Paul
Grant Privett, 31 October 2023 at 1:10 am
The paper:
https://conference.sdo.esoc.esa.int/proceedings/neosst2/paper/7/NEOSST2-paper7.pdf
is quite an interesting read. Looks like a reasonable number of noisier pixels on quite a popular camera.
AAVSO also have a report on the QHY600 camera from 2020, and S&T have a review of the QHY600 (De Cicco?) where the bias issues were mentioned.
One thing to remember is that the QHY600 comes in two flavours: the research-grade version (best used with their fibre link) and the lower-quality-chip (mainly USB3) cameras.
I’ve been wondering how each pixel having its own readout noise characteristics works out when stacking images using medians or sigma clipping. My gut instinct says not quite as well, but I would recommend testing that in practice. I think it means dithering may have become essential, but the saved readout time over a night may balance out the need for more frames.
Might also try a defect map.
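A minimal sketch of that comparison, assuming a cube of already-registered frames; the frame count, file names, and the 3-sigma threshold are placeholders, not a recommendation.

```python
# Sketch: median stack versus sigma-clipped mean stack of aligned frames.
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clip

cube = np.stack([fits.getdata(f"registered_{i:03d}.fits").astype(np.float64)
                 for i in range(20)])          # hypothetical 20 aligned frames

median_stack = np.median(cube, axis=0)

# Reject per-pixel outliers (unusually noisy CMOS pixels, cosmic rays)
# before averaging what remains.
clipped = sigma_clip(cube, sigma=3.0, maxiters=5, axis=0)
clipped_stack = np.ma.mean(clipped, axis=0).filled(np.nan)

fits.writeto("stack_median.fits", median_stack.astype(np.float32), overwrite=True)
fits.writeto("stack_sigma_clip.fits", clipped_stack.astype(np.float32), overwrite=True)
```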
Dr Paul Leyland, 31 October 2023 at 10:51 am
“I think it means dithering may have become essential, but the saved readout time over a night may balance out the need for more frames.”
If the FWHM of the stars is several pixels, poor focus, seeing and minor guiding errors will do much the same as dithering for free.
Many photometrists defocus slightly for exactly this reason.
Grant Privett, 31 October 2023 at 4:24 pm
With old and noisy sensors I found the best way to get a decent clean background was via dithering. However, while using my old Super Polaris the periodic error helped, though the defective pixels could cause a streaky pattern in the direction of my declination drift.
Realistically, using a series of darks to find the noisiest pixels and then spatially filtering them might be worthwhile, as the Gaussian random noise assumption made when applying median stacking may not apply well to CMOS sensors, where each pixel has its own amplifier characteristics.
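A minimal sketch of that idea in Python, assuming a set of matched darks is available; the 5-sigma threshold and the 3x3 median filter are arbitrary illustrative choices, not a calibrated recipe.

```python
# Sketch: flag the noisiest pixels from a stack of darks, then replace
# them in a calibrated light frame with a local median.
import numpy as np
from astropy.io import fits
from scipy.ndimage import median_filter

darks = np.stack([fits.getdata(f"dark_{i:03d}.fits").astype(np.float64)
                  for i in range(20)])          # hypothetical matched darks

pixel_std = darks.std(axis=0)                   # per-pixel temporal scatter
threshold = np.median(pixel_std) + 5.0 * np.std(pixel_std)
defect_map = pixel_std > threshold              # True = noisy / defective pixel

light = fits.getdata("light_cal.fits").astype(np.float64)
local_median = median_filter(light, size=3)     # 3x3 neighbourhood median
cleaned = np.where(defect_map, local_median, light)

fits.writeto("light_cleaned.fits", cleaned.astype(np.float32), overwrite=True)
fits.writeto("defect_map.fits", defect_map.astype(np.uint8), overwrite=True)
```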
AlanM, 1 November 2023 at 7:34 am
The AAVSO Guide to CCD/CMOS Photometry with Monochrome Cameras has some practical advice on bias frames.
David Arditti, 1 November 2023 at 4:22 pm
I agree with Ian Sharp’s answer. There’s no need for bias frames. A better solution is the combination of dark frames of length and temperature equal to those of the light frames, flat frames specific to the optical setup and filter, and dark flat frames of length equal to the flat frames. I think this is better for all sensor types.
Kevin West, 1 November 2023 at 9:03 pm
Thanks Ian,
I think I have that.
Just to clarify:
My flats, taken with a light pad and T-shirt, required exposures of 2.3 s to get the histogram peak between 1/3 and 1/2 of the maximum ADU count.
That means I need to take a set of darks at the same exposure (all other things, as you say, replicated: temperature, gain, etc.).
Thanks
Kevin
PS I thought I posted this already but can’t find it.

David Arditti, 1 November 2023 at 10:44 pm
Yes, though I am surprised your flat frames need to be exposed for so long. I use a twilight sky and the exposures are less than 0.1 s.
Mr Ian David Sharp, 2 November 2023 at 9:35 am
“Just to clarify: My flats, taken with a light pad and T-shirt, required exposures of 2.3 s to get the histogram peak between 1/3 and 1/2 of the maximum ADU count. That means I need to take a set of darks at the same exposure (all other things, as you say, replicated: temperature, gain, etc.).”

Hi Kevin,
Yes, that’s correct: you need darks to match your flats (and darks to match your lights, of course!). My exposures vary from about 2 to 8 seconds with my CCD-based system and my LED screen.
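As a quick sanity check on the 1/3 to 1/2 of maximum ADU target discussed above, here is a minimal sketch that assumes a 16-bit camera; adjust max_adu to your sensor’s actual bit depth, and the file name is a placeholder.

```python
# Sketch: check that a flat's median level sits between 1/3 and 1/2 of full scale.
import numpy as np
from astropy.io import fits

flat = fits.getdata("flat_0001.fits").astype(np.float64)
max_adu = 65535.0                               # assumption: 16-bit output

fraction = np.median(flat) / max_adu
print(f"Flat median is {fraction:.2f} of full scale")
if 1/3 <= fraction <= 1/2:
    print("Within the suggested 1/3 to 1/2 range")
else:
    print("Adjust the flat exposure time")
```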
Cheers
Ian.

Graeme Coates, 2 November 2023 at 11:33 am
As Ian says above, taking matching darks, flats and matching flat-darks with the same temperature, gain and offset is all that’s required (in my experience, taking and trying to apply bias frames makes everything *much* worse; I fell into this trap when first starting with a CMOS camera…).
From a practical point of view, it’s useful to pick two gain/offset combinations, one for broadband imaging with low gain and one for narrowband with high gain (it takes a bit of work to determine an offset that ensures you don’t clip pixel values at the lower end), then stick with them, along with a few exposure times, which again should be chosen to ensure the sky background noise swamps the read noise. This makes taking calibration frames much less onerous, and with set-point cooling the darks at least can be reused for some time.
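A rough sketch of the “sky noise swamps read noise” estimate Graeme alludes to; the read noise, sky rate and swamping factor below are assumptions for illustration only, so measure your own values at the chosen gain.

```python
# Sketch: minimum sub-exposure so sky shot noise dominates read noise.
read_noise_e = 3.5        # e- RMS at the chosen gain (assumed)
sky_rate_e   = 1.2        # e- per pixel per second from the sky (assumed)
factor       = 3.0        # want sky shot noise >= 3x read noise

# Sky shot noise after t seconds is sqrt(sky_rate * t); require
# sqrt(sky_rate * t) >= factor * read_noise, hence:
t_min = (factor * read_noise_e) ** 2 / sky_rate_e
print(f"Minimum sub-exposure: {t_min:.0f} s")   # about 92 s for these numbers
```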