Why stack astrophotos
I have made this image to try to explain why image stacking is often necessary in astrophotography.
It shows two different exposures of the same area (around Alpha Delphini) cropped from the full frame.
The photos are on the left: the upper one was a 2.5-second exposure at ISO 3200 and the lower one was 40 seconds at the same sensitivity.
On the right are graphs of the brightness profile along the horizontal line through the brightest star (Alpha Del). I deliberately angled the photos so there are two fainter stars on the same line.
The graphs have 4 traces: one for each of the colour channels (RGB), plus black for the monochrome version (actually the root sum of squares of the three channels at each pixel).
The camera had 14 bits per channel, which means the (digitised) brightness in each colour at any pixel can only have values from 0 to 16383 (2 to the power 14, minus 1). So there is a limited range that can be represented, and even in the short (2.5 s) exposure the bright star is saturated: its peak would have gone above the maximum, so it is chopped off and set to 16383.
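To see what that clipping does numerically, here is a tiny sketch (the profile values are made up purely for illustration):

```python
import numpy as np

FULL_SCALE = 2**14 - 1  # 16383, the largest value a 14-bit channel can hold

# Hypothetical brightness profile across a bright star, before digitisation:
true_profile = np.array([300, 900, 9000, 25000, 9000, 900, 300])

# The camera cannot record anything above full scale, so the peak is clipped:
recorded = np.minimum(true_profile, FULL_SCALE)
print(recorded)  # [  300   900  9000 16383  9000   900   300]
```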
The lower part of every trace is for the background (sky) pixels, and it is quite clear that even for the short exposure they are not zero. Furthermore, the red trace is always higher than the green and blue ones, which is typical of light pollution from street lamps.
On the longer exposure (lower photo) we can see that the background is very high, leaving little room between it and the maximum. Hence the reddish, fogged photo, and only a smaller range of star brightnesses can be discriminated.
So we have to keep individual exposures short enough to keep the background as near zero as possible and also to keep as many stars as possible from saturating.
When we do that, though, the level of the fainter stars is barely above that of the background, and they tend to be lost in the background's fluctuations (noise).
Stacking helps (if the software does it right) by adding up the pixel values in a memory area that allows a much greater range of brightness values before saturation.
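As a rough illustration of that idea (not GRIP's actual code), frames could be summed into a wider accumulator like this, assuming they have already been aligned:

```python
import numpy as np

def stack(frames):
    """Sum already-aligned exposures into a 32-bit accumulator.

    Each frame is 14-bit data stored as uint16; summing into uint32
    gives enormous headroom before the accumulator itself saturates.
    """
    acc = None
    for frame in frames:
        if acc is None:
            acc = np.zeros(frame.shape, dtype=np.uint32)
        acc += frame  # uint16 values widen safely to uint32
    return acc
```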
When I started trying to do this, around 2001, there was nothing available that could cater for the large images from DSLRs, only for the much smaller images made by CCD cameras. So I started to write my own software, which I call GRIP (GR's Image Processor; I had worked in imaging software in the 1980s and '90s, which helped).
GRIP has an accumulator image in memory with 32 bits per channel for every pixel, so brightnesses up to 2 to the power 32, minus 1, can be represented before saturation would occur. (You would need to add more than 260,000 full-scale 14-bit exposures for any saturation to occur, since 2^32 / 2^14 = 2^18 = 262144, so, yes, it's overkill, but convenient for programming.)
So if we accumulated 16 of the 2.5-second exposures, the result would be similar to a 40-second exposure except that the profiles would not be chopped off at the top. The trick then is to read the accumulator out into a normal image through a look-up curve which takes the minimum background level down to true zero, stretches the contrast of the levels just above the background to make faint stars more visible, and takes the maximum actually occurring brightness (that of the brightest star in the image, if none have saturated) to the maximum of the target image (which will have either 16 or 8 bits per channel).
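A minimal sketch of such a read-out curve might look like this (the gamma parameter and the function name are my own illustration, not GRIP's):

```python
import numpy as np

def read_out(acc, bits=16, gamma=0.5):
    """Map the 32-bit accumulator into an ordinary 8- or 16-bit image.

    The minimum (background) level goes to zero, a power-law stretch
    with gamma < 1 lifts the levels just above the background, and the
    maximum accumulated brightness maps to the target's full scale.
    """
    acc = acc.astype(np.float64)
    lo, hi = acc.min(), acc.max()
    normed = (acc - lo) / (hi - lo)      # background min -> 0, max -> 1
    stretched = normed ** gamma          # gamma < 1 expands the faint end
    full_scale = 2**bits - 1
    dtype = np.uint16 if bits == 16 else np.uint8
    return np.round(stretched * full_scale).astype(dtype)
```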
NB: If intending to do photometry, to measure the magnitudes of stars, the contrast must be kept linear. Also, none of the stars being measured must have saturated.
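In terms of the sketch above, keeping the contrast linear corresponds to reading out with gamma = 1.0, so the mapping from accumulator values to image values stays a straight line.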
(I have adapted this from a page of my own site, where there is more detail. See www.grelf.net/astro_exposure.html.)