Despite its simplicity, both in theory and in practice, I found the temporal median very interesting, since it can remove unwanted objects from a static background. This is especially useful when you want to photograph a crowded landmark or monument, although the process consumes a considerable amount of resources.
First we have to assume that the pictures were taken from the same spot, with the same luminosity and the same size. These are fairly strong constraints, but they can be met by using a tripod and a remote trigger. In my case, I used my smartphone, which has a sound-triggered shutter: a sharp noise (a snap) takes the picture.
The algorithm iterates over every coordinate and, for each pixel, computes the median of each color channel across the different images. The resulting channel values are then combined and written back at that coordinate.
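The per-pixel median described above can be sketched as follows. This is not the code from the Source code section, just a minimal illustration; it assumes NumPy and that the aligned images are already loaded as equally sized H×W×3 `uint8` arrays:

```python
import numpy as np

def temporal_median(frames):
    """frames: list of H x W x 3 uint8 arrays taken from the same spot.

    Returns the per-pixel, per-channel median image."""
    stack = np.stack(frames)  # shape: (N, H, W, 3), time axis first
    # median along the time axis collapses N frames into one image
    return np.median(stack, axis=0).astype(np.uint8)
```

With NumPy the explicit loop over coordinates is replaced by a single vectorized `np.median` call along the time axis, which is also far faster than iterating pixel by pixel in Python.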
Let us say that we have 3 pictures of the same static background, each containing a single black pixel in a different spot. When the iteration reaches coordinate [5,8], it finds a black dot in the first picture but the background color in the other two.
When the median is calculated, the background value outvotes the dot two to one, so the pixel will be:
And therefore, the black pixel disappears.
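Numerically, the outvoting at coordinate [5,8] looks like this (assuming, for illustration, a white background):

```python
import numpy as np

# RGB values of pixel [5, 8] across the three shots
values = np.array([[0,   0,   0],     # black dot in the first shot
                   [255, 255, 255],   # background in the second
                   [255, 255, 255]])  # background in the third

# median per channel along the time axis
print(np.median(values, axis=0))  # [255. 255. 255.] -> the dot is gone
```

As long as the intruding object occupies a given pixel in fewer than half of the shots, the background value wins the vote there.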
Other temporal operations use the mode or the average instead of the median. With the average, the result is a blended picture: moving objects are not removed but show up as semi-transparent ghosts.
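The same single-pixel example shows why the average blends instead of removing. Reusing the hypothetical values from above:

```python
import numpy as np

# RGB values of pixel [5, 8] across the three shots
values = np.array([[0,   0,   0],     # black dot in the first shot
                   [255, 255, 255],   # background in the second
                   [255, 255, 255]])  # background in the third

# the outlier leaks into the mean, leaving a grey ghost
print(np.mean(values, axis=0))  # [170. 170. 170.]
```

Unlike the median, which simply picks the majority value, the mean weights every frame equally, so a transient object always leaves a trace proportional to how many frames it appears in.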
Four images (4208×2368 each) were processed on a below-average laptop, and generating the result took 35 minutes.
The code is provided in the Source code section.