**Fast = not toooo slow…**

For the image restoration tool I had to implement min and max filters (also erosion and dilation—in this case with a square structuring element). Implementing these efficiently is not so easy. The naive approach is to simply check all the pixels in the window, and select the maximum or minimum value. This algorithm’s run time is quadratic in the window width, which can be a bit slow for the bigger windows that I am interested in. There are some very efficient algorithms available, but they are quite complicated to implement properly (some require esoteric data structures, for example monotonic wedges (PDF)), and many are not suitable for floating point images.

So I came up with this approximation scheme. It uses some expensive floating point operations, but its run time is *constant* in the window width.

The crux of the algorithm is the following approximation, which works well for $x_i \ge 0$ and sufficiently large $p$:

$$\max(x_1, \ldots, x_n) \approx \left(\frac{1}{n} \sum_{i=1}^{n} x_i^p\right)^{1/p}$$

I am not going into the mathematical details of why this is a fair approximation.
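As a quick sanity check (my own illustration, not from the original post), the approximation can be tried on a few values in NumPy:

```python
import numpy as np

# Illustrative values in [0, 1]; the true maximum here is 0.9.
x = np.array([0.2, 0.5, 0.9, 0.7])

# p-th root of the mean of p-th powers: approaches max(x) from below
# as p grows.
approximations = {p: np.mean(x ** p) ** (1.0 / p) for p in (4, 16, 64)}
print(approximations)
```

The estimates increase monotonically toward the true maximum as $p$ grows, which is the behaviour the filter relies on.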

To turn this into a max filter is straightforward:

### 1. Calculate the power image

Raise each pixel to the power *p*.

### 2. Calculate a summed area table

The summed area table (or integral image) of an image is a table containing, in each cell $(x, y)$, the sum of all the pixels above and to the left of $(x, y)$; for the power image it is given by:

$$S(x, y) = \sum_{i \le x} \sum_{j \le y} I(i, j)^p$$

**Update:** This can be calculated efficiently using the following recurrence:

$$S(x, y) = I(x, y)^p + S(x - 1, y) + S(x, y - 1) - S(x - 1, y - 1),$$

and $S(x, y) = 0$ if $x < 0$ or $y < 0$.
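In NumPy the recurrence collapses into two cumulative sums (a sketch with a function name of my own choosing):

```python
import numpy as np

def summed_area_table(q):
    """Summed area table of the power image q.

    Equivalent to the recurrence
        S(x, y) = q(x, y) + S(x-1, y) + S(x, y-1) - S(x-1, y-1),
    with S taken to be 0 outside the image.
    """
    # cumulative sum down the columns, then along the rows
    return q.cumsum(axis=0).cumsum(axis=1)
```

Each cell then holds the sum of the rectangle between it and the top-left corner, exactly as the recurrence describes.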

### 3. Calculate the window mean

Use the summed area table to calculate the mean in the window. Here, $r$ is the window radius and *A* stands for average:

$$A(x, y) = \frac{S(x + r, y + r) - S(x - r - 1, y + r) - S(x + r, y - r - 1) + S(x - r - 1, y - r - 1)}{(2r + 1)^2}$$

### 4. Calculate the approximate maximum

$$M(x, y) \approx A(x, y)^{1/p}$$

That’s it!
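Putting the four steps together, a minimal sketch in NumPy (the function name, border handling, and default $p$ are my own choices; the input is assumed to be a float image scaled to [0, 1]):

```python
import numpy as np

def approx_max_filter(image, radius, p=32):
    h, w = image.shape

    # 1. power image
    q = image.astype(np.float64) ** p

    # 2. summed area table, padded with a zero row and column so that
    #    lookups just outside the image read 0
    S = np.zeros((h + 1, w + 1))
    S[1:, 1:] = q.cumsum(axis=0).cumsum(axis=1)

    # window corners for every pixel, clamped to the image border
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    y0 = np.clip(ys - radius, 0, h)
    y1 = np.clip(ys + radius + 1, 0, h)
    x0 = np.clip(xs - radius, 0, w)
    x1 = np.clip(xs + radius + 1, 0, w)

    # 3. window mean from four table lookups per pixel
    window_sum = S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]
    mean = window_sum / ((y1 - y0) * (x1 - x0))

    # 4. approximate maximum
    return mean ** (1.0 / p)
```

The per-pixel cost is the same for every window size: four table lookups and one root.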

The minimum filter is implemented by finding the maximum of the inverted image and then inverting the result. This approach uses two more passes through the image; so far I could not find a direct approximation for the minimum value. (**Update:** It seems that using a negative exponent $-p$ gives an approximation for finding the minimum of a set of values.)
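Both routes can be checked quickly in one dimension (my own illustration; the negative exponent is my reading of the update):

```python
import numpy as np

x = np.array([0.2, 0.5, 0.9, 0.7])  # true minimum is 0.2
p = 64

# route 1: maximum of the inverted values, inverted back
min_via_inversion = 1.0 - np.mean((1.0 - x) ** p) ** (1.0 / p)

# route 2: a negative exponent approximates the minimum directly
min_via_negative_p = np.mean(x ** -p) ** (-1.0 / p)

print(min_via_inversion, min_via_negative_p)
```

Both land close to the true minimum; the negative-exponent route skips the two inversion passes.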

## Notes

- For very large images, you might run into floating point issues when calculating the cumulative sum image. You can reduce this effect somewhat by subtracting the (total image) mean of the power image from each pixel before calculating the image sum.
- For the image sizes I worked on, the value of $p$ I chose was suitable. The higher $p$, the more accurate the results, as long as you have no underflow.
- Raising to a power is a slow operation. By choosing a suitable value of $p$ (for example, a power of two), you can use a faster method, such as repeated squaring, for calculating the power.

**Update:** It is possible to implement this as a two-pass algorithm, first working on columns, then on rows. This requires constructing two tables, which makes the algorithm slower. However, this approach allows bigger images to be processed before overflow in the summed area tables becomes a problem.

Initially I thought that once the maximum can be found, we can also find the second maximum, and so on, so that any statistical order filter can be constructed.

The basic idea was that if $m$ is the maximum of the $n$ values $x_1, \ldots, x_n$ in the window, we could find the second largest value like this:

$$m_2 \approx \left(\frac{\sum_i x_i^p - m^p}{n - 1}\right)^{1/p}$$

The idea is that we remove the maximum value from the sum, so that the result must yield the second largest element. Of course, we do not use the actual maximum, but the approximation—and this is where the formula fails. When we use the approximate maximum, it is easy to show that the formula above will yield the exact same approximate maximum, and not the *second* largest element.
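A one-line check makes the failure concrete (my own restatement, in the notation above). The approximate maximum is $m \approx A^{1/p}$, with $A$ the window mean of the power image, so $m^p \approx A$ and $\sum_i x_i^p = nA$. Removing $m^p$ from the sum then gives

$$\left(\frac{nA - A}{n - 1}\right)^{1/p} = A^{1/p} \approx m,$$

the same approximate maximum again, rather than the second largest value.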

I should have known it seemed too good to be true!

**Update:** While looking for links for this post, I found this article: Robust Local Max-Min Filters by Normalized Power-Weighted Filtering (PDF), which initially looked like it was essentially the same thing. Although it is very similar, it is in fact different. The approximate maximum is given by the value:

$$M = \frac{\sum_i x_i^{p+1}}{\sum_i x_i^p}$$

for large $p$.

Using a negative $p$ gives an approximation to the minimum.

The nice thing about this approximation is that it seems to work well for all real $x_i$, not just for $x_i \ge 0$.
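Assuming the normalized power-weighted form $\sum_i x_i^{p+1} / \sum_i x_i^p$ (my reading of the article, not a quote from it), a quick numerical check:

```python
import numpy as np

def power_weighted(x, p):
    # weighted mean with weights x**p: for large positive p the largest
    # value dominates, for large negative p the smallest
    return np.sum(x ** (p + 1)) / np.sum(x ** p)

x = np.array([0.2, 0.5, 0.9, 0.7])
print(power_weighted(x, 64), power_weighted(x, -64))  # near max and min
```

Note that, unlike the power-mean approximation, this form needs no $p$-th root at the end.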