Precision loss during stacking when input is in integer FITS format

EHEA
Posts: 2
Joined: 10 Jun 2020, 16:17

Precision loss during stacking when input is in integer FITS format

Post by EHEA » 10 Jun 2020, 17:08

Hi all,

The task I want to perform in AstroArt7 is the following:

I want to stack many FITS images (sometimes a lot of them) and then do photometry on them with external software.

The input:
- Out of the camera: 16-bit integer FITS, light and dark frames
- A single master flat frame, 32-bit floating point.

I have been using the "average" setting for darks and lights, and telling AA7 to retain the master flat and master dark images.

To my surprise, the stacked, averaged image has only integer pixel values. Even after Edit -> Data Format -> Floating point, the values are still integers. It looks like, at the end of stacking, the output image is forced into the integer format of the light frames, losing some of the precision in the photometry you will later do on the resulting image.
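To illustrate the effect (a minimal sketch with made-up pixel values, not AA7's actual code): once the averaged result has been rounded to integers, switching the image to floating point cannot bring the fractional part back.

```python
# Minimal illustration with made-up values: converting an already-rounded
# integer image to float does not recover the fractions lost at rounding time.
import numpy as np

calibrated_average = np.array([10.3, 10.7, 11.2])           # hypothetical true averaged values
stored_as_integer = np.round(calibrated_average).astype(np.int32)
print(stored_as_integer.astype(np.float64))                 # [10. 11. 11.] -- precision is gone
```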

The generated master dark also has only integer pixel values.

The funny thing is this: if you convert all the light and dark frames to floating-point FITS files up front (without changing any of the data in them) and then stack them with exactly the same settings in AA7, the stacked image comes out in floating-point format and keeps the fractional pixel values from the averaging.
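If you want to automate that up-front conversion outside AA7, a minimal sketch with astropy could look like this (the directory name and output naming are just assumptions):

```python
# Convert 16-bit integer FITS frames to 32-bit float without touching the
# pixel values themselves. Directory and file naming are hypothetical.
from pathlib import Path
import numpy as np
from astropy.io import fits

for path in Path("lights").glob("*.fits"):
    with fits.open(path) as hdul:            # astropy applies BZERO/BSCALE on read
        hdul[0].data = hdul[0].data.astype(np.float32)
        hdul[0].writeto(path.with_name(path.stem + "_f32.fits"), overwrite=True)
```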

That is very inconvenient.
When you do photometry by stacking a large number of frames of faint stars with low pixel values, the error from quantization to integer values can really matter.

To partially work around this (short of converting all input frames to floating point up front), the easiest procedure seems to be the following (see the sketch after this list for doing the final division and save outside AA7):
- use "sum" instead of "average"
- look up in the log window how many frames were actually summed (some might have been rejected)
- switch the output image to floating point
- divide the image by that number
- save to a FITS file
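The last two steps can equally be done outside AA7; here is a minimal sketch with astropy, assuming the sum was saved as sum.fits and 120 frames were summed (both are assumptions):

```python
# Divide a summed stack by the number of frames and store the result in
# 32-bit float, so the fractional part of the average is kept.
# File name and frame count below are assumptions.
import numpy as np
from astropy.io import fits

n_summed = 120                                   # taken from AA7's log window
with fits.open("sum.fits") as hdul:
    average = hdul[0].data.astype(np.float32) / n_summed
    fits.PrimaryHDU(data=average, header=hdul[0].header).writeto(
        "average_f32.fits", overwrite=True)
```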

This is still not perfect. Even after the summing step, the true output should not have integer pixel values, because the input frames are first calibrated with the master dark and master flat and then added, which does not generally produce integer values. The pixel values are simply larger, so rounding to integers does less harm.
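A toy example (all numbers made up) of why even a plain sum of calibrated frames is non-integer:

```python
# Dark subtraction and flat-fielding already produce fractional values,
# so a sum of calibrated frames is generally non-integer. Numbers are made up.
import numpy as np

light = np.array([1200.0, 1210.0])       # raw pixel values of one sub
master_dark = np.array([100.4, 100.4])   # averaged master dark (non-integer)
master_flat = np.array([0.98, 1.02])     # normalized master flat
print((light - master_dark) / master_flat)   # fractional, before any stacking
```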

It would be much more convenient to have an option telling AA7 to produce all data products of the stacking in floating point (the combined image and the master calibration frames).

Or am I missing something here? Even for pretty pictures, this forcing to integer values adds quantization noise.

Clear Skies
HBE

fabdev
Posts: 461
Joined: 03 Dec 2018, 21:43

Re: Precision loss during stacking when input is in integer FITS format

Post by fabdev » 10 Jun 2020, 22:17

Hello,
here is some info:

In Preprocessing all the steps are performed in floating point; the rounding to integer is done only at the end, to keep the same format as the initial images. This can be used to obtain the desired result: just open the first image of the sequence, convert it to floating point, then save it. The result of the Preprocessing will then not be rounded. By the way:

If the external software accepts 32-bit floating point, why average at all? Just send it the result of the Sum. This also makes the calculation of the S/N of stars easier, since the number of images is variable.
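As a rough sketch of that S/N calculation (the standard CCD equation; every number below is an illustrative assumption, not AA7's internal method):

```python
# Approximate S/N of a star measured on a summed stack, using the usual
# CCD equation. Every number here is an assumed, illustrative value.
import math

n_frames = 120          # frames actually summed
star_e = 1.8e4          # total star signal in the sum, electrons
sky_e_per_pix = 3.0e3   # total sky per pixel in the sum, electrons
n_pix = 50              # pixels in the photometric aperture
read_noise_e = 3.5      # read noise per frame, electrons RMS

noise = math.sqrt(star_e + n_pix * (sky_e_per_pix + n_frames * read_noise_e**2))
print(f"S/N ~ {star_e / noise:.1f}")
```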

It depends on the situation, but I think that quantization on 16-bit cameras is not an issue with fewer than 500 images. For more images, to keep the same quality, the master dark frame and the master flat field should be the combination of no fewer than 100 images (if you prepare the masters in advance, save them in floating-point format).
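One way to read that threshold, as a back-of-the-envelope sketch with an assumed per-frame noise level: rounding an averaged stack adds roughly 1/sqrt(12) ~ 0.29 ADU of quantization noise, which only starts to matter once the noise remaining in the average has dropped to a similar level.

```python
# Back-of-the-envelope: when does rounding the averaged stack (~0.29 ADU of
# quantization noise) become comparable to the noise left in the average?
# The per-frame noise is an assumed illustrative value.
import math

per_frame_noise_adu = 5.0
quantization_adu = 1.0 / math.sqrt(12.0)   # ~0.29 ADU

for n in (50, 100, 500, 1000):
    avg_noise = per_frame_noise_adu / math.sqrt(n)
    print(f"N={n:4d}: noise in average ~ {avg_noise:.2f} ADU, "
          f"rounding ~ {quantization_adu:.2f} ADU")
```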

Clear skies,
Fabio.

EHEA
Posts: 2
Joined: 10 Jun 2020, 16:17

Re: Precision loss during stacking when input is in integer FITS format

Post by EHEA » 10 Jun 2020, 23:33

Hi Fabio.

Thanks for the swift reply.
If the external software accepts 32-bit floating point, why average at all? Just send it the result of the Sum. This also makes the calculation of the S/N of stars easier, since the number of images is variable.
That's true, but again, even when summing integer input lights, the rounding to integers at the end loses precision, since e.g. the flat-fielding (and ideally also the subtraction of the averaged darks) produces non-integer pixel values.
but I think that quantization on 16-bit cameras is not an issue with fewer than 500 images. For more images, ...
Hmmm... honestly, I do not quite get the logic of this argument. If you are summing images, the relative error in the end result from being off by +/- half an ADU due to rounding should be worse for fewer images, not more. When averaging, the relative error just depends on the actual mean ADU contribution from the star in question. Taking more sub-images (each with the same exposure length) should not make things worse in either case.
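To put numbers on the summing case (a minimal sketch, assuming a faint star contributing about 20 ADU per sub, which is a made-up figure): the final +/-0.5 ADU rounding error is taken relative to N times that contribution, so it shrinks as N grows.

```python
# Worst-case relative error from rounding the final sum to an integer,
# for a faint star contributing an assumed ~20 ADU per sub-frame.
star_adu_per_frame = 20.0

for n_frames in (10, 100, 500):
    total = n_frames * star_adu_per_frame
    rel_err = 0.5 / total           # +/- half an ADU on the rounded sum
    print(f"N={n_frames:3d}: relative rounding error ~ {rel_err:.1e}")
```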

As for 16-bit cameras: there are also a lot of 12-bit CMOS sensors out there now with very low read noise that invite (or force) you to take many, shorter exposures. DSLR sensors are also mostly 12 or 14 bit. Some acquisition software will scale such sensor data to 16 bits, but e.g. SharpCap has an option to use the native ADU, unscaled. I'm not sure how AA7 handles 14-bit values in DSLR RAW images?
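For illustration (a sketch of the two common conventions, not a statement about any particular capture program): "native" 12-bit data keeps values in 0..4095, while scaling to the 16-bit range typically left-shifts by 4 bits.

```python
# Two common conventions for storing 12-bit CMOS data in 16-bit FITS:
# native values stay in 0..4095, scaled values are left-shifted by 4 bits.
import numpy as np

native = np.array([0, 1, 1000, 4095], dtype=np.uint16)   # assumed 12-bit samples
scaled = native << 4                                      # 0, 16, 16000, 65520
print(native, scaled)
```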

Anyway, I would suggest giving the user better control over the format of the produced files, without having to manipulate the input files. Also, if the master dark produced by averaging the "sub-darks" is used as floating-point pixel values internally (I sure hope it is!) but then converted to integer and offered to the user as the master dark, that seems wrong. If instead the averaged master flat is rounded to integers before being applied to the individual sub-frames during stacking, that is even "more wrong" in terms of the error/noise introduced.

Clear Skies,
HBE
