This post is a collection of notes regarding PixInsight’s Generalized Extreme Studentized Deviate (ESD) Test Rejection Algorithm, gathered from a PixInsight forum thread.

An explanation of the generalized ESD test can be found in the NIST/SEMATECH e-Handbook of Statistical Methods.

## ESD Significance Parameter

The ESD significance parameter does not define a Boolean condition (reject or don’t reject) over the whole stack. It acts as a significance level used to compute critical values (the lambda_i variables), which are compared to test statistics (R_i) in order to find the number of outliers. Increasing the ESD significance parameter rejects more pixels, because the algorithm accepts a higher probability of wrongly rejecting the null hypothesis (that there are no outliers in the stack). It’s a bit convoluted, but that’s how it works.

Juan Conejero

PixInsight Staff
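For reference, the test the quote describes can be sketched in Python. This is a minimal sketch of the *standard* generalized ESD test as documented by NIST, not PixInsight’s modified implementation (which uses trimmed statistics and a relaxation factor, discussed below); for brevity it approximates the Student-t quantile with a normal quantile, which is a reasonable assumption for moderately sized stacks.

```python
import math
from statistics import NormalDist


def generalized_esd(data, max_outliers, alpha=0.05):
    """Estimate the number of outliers via the generalized ESD test.

    Sketch of the NIST formulation. The Student-t quantile t_{p, n-i-1}
    is approximated here by a normal quantile (an assumption for
    brevity); this is NOT PixInsight's modified algorithm.
    """
    x = list(data)
    n = len(x)
    num_outliers = 0
    for i in range(1, max_outliers + 1):
        mean = sum(x) / len(x)
        s = math.sqrt(sum((v - mean) ** 2 for v in x) / (len(x) - 1))
        # Test statistic R_i: the largest absolute deviation, in units of s.
        r_stat, idx = max((abs(v - mean) / s, j) for j, v in enumerate(x))
        # Critical value lambda_i, derived from the significance level alpha.
        p = 1.0 - alpha / (2.0 * (n - i + 1))
        t = NormalDist().inv_cdf(p)  # normal approximation to t_{p, n-i-1}
        lam = (n - i) * t / math.sqrt((n - i - 1 + t * t) * (n - i + 1))
        if r_stat > lam:
            num_outliers = i  # the result is the largest i with R_i > lambda_i
        x.pop(idx)  # remove the most extreme point and iterate
    return num_outliers
```

Raising `alpha` raises the critical values’ significance level, so more test statistics exceed them and more pixels are rejected, exactly as described in the quote.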

## ESD Outlier Parameter

The ESD outliers parameter defines an upper bound on the number of outliers that can be rejected from the sample. For example, for a stack of 10 pixels, an ESD outliers value of 0.3 means that 0, 1, 2, or 3 outliers can be detected. Of course, this also means that if there were a fourth outlier, it would go unnoticed.

Juan Conejero

PixInsight Staff
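The bound itself is simple arithmetic, assuming the implementation takes the floor of the product of stack depth and the parameter (an inference consistent with the 10-pixel example above, not confirmed from PixInsight source):

```python
import math


def max_rejectable(n, esd_outliers):
    """Upper bound on outliers the ESD test may flag in a stack of n pixels.

    Assumes the bound is floor(n * esd_outliers), consistent with the
    example above (10 pixels, 0.3 -> up to 3 outliers).
    """
    return math.floor(n * esd_outliers)
```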

One other thought...

At the default settings, the effect of the ESD outliers parameter seems likely to be far greater through its effect on the trimmed-mean function than through limiting the number of rejected pixels. Essentially, at 0.3 it trims 30% of the high pixels and 20% of the low pixels, computing the mean from the middle 50% of the pixels. However, it seems very unlikely that so many pixels would pass the significance test that the 30% cap on rejection would come into play.

As such, if a user wants to reject fewer pixels, the alpha parameter has much greater leverage, but the tooltips as written might lead them to focus on reducing the outliers parameter. They would have to reduce it very substantially, perhaps down to 5 or 10%, before seeing any effect, and such a reduction would of course vastly reduce the amount of trimming. Again, it makes me wonder why those two functions (the amount of trimming used to find the central value, and the upper bound on the percentage of pixels rejected) are tied to the same parameter.

johnpane

Well-known member

## ESD Relaxation Parameter

Note that our implementation introduces two important changes to the original algorithm that make the ESD rejection method more robust and versatile. For robustness, we don’t use the mean and the standard deviation of the sample, but a trimmed mean and the standard deviation of the trimmed sample at each iteration. In all of our tests the algorithm behaves much more consistently with this variation. For versatility, we introduce a relaxation factor that multiplies the standard deviation (the s variable in the algorithm description above) for pixels with values smaller than the trimmed mean. This allows us to apply a more tolerant rejection for dark pixels.

The answer is c). As I’ve noted above, the relaxation parameter multiplies the standard deviation of the (trimmed) sample at each iteration to compute test statistics for pixels with values smaller than the trimmed mean. Since the low relaxation parameter is >= 1, it makes the algorithm more tolerant when rejecting low pixels, because the algorithm ‘sees’ a higher dispersion for them. In a sense, what we are doing here is telling lies to the ESD test for a subset of pixels where we want less rejection.

Juan Conejero

PixInsight Staff

## Other Notes

Sorry to keep adding more thoughts.

Users not well-versed in statistics may not realize just how much leverage the relaxation parameter has. With the default settings, alpha = 0.05, which means that high pixels are rejected when they are about 2 sd or more from the central value, assuming a stack of 60 pixels. With relaxation at 1.5, low pixels have to be about 3 sd or more from the central value, making alpha ≈ 0.002 for low rejection. If users bump relaxation up to 2, they are effectively setting alpha for low rejection to ≈ 0.00001; if they set it to 3, effective alpha ≈ 0.00000001 (six sigma, but using a t-distribution with 60 df); and if they set it to 5, they are essentially setting alpha to zero for low rejection.
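The effective-alpha arithmetic above can be sketched as follows, using a two-sided normal tail as a rough stand-in for the Student-t tail (so the numbers are approximate, increasingly so at large relaxation values); `sigma_threshold` encodes the assumed ≈2-sigma high-rejection threshold:

```python
from statistics import NormalDist


def effective_low_alpha(relaxation, sigma_threshold=2.0):
    """Rough effective significance level for low-pixel rejection.

    If high pixels are rejected at about `sigma_threshold` standard
    deviations (roughly 2 sd for alpha = 0.05 with a 60-pixel stack),
    low pixels must reach sigma_threshold * relaxation. A two-sided
    normal tail is used here instead of the Student-t tail, so the
    results are approximate. Sketch only, not PixInsight code.
    """
    z = sigma_threshold * relaxation
    return 2.0 * (1.0 - NormalDist().cdf(z))
```

For relaxation 1.5 this gives roughly 0.003 (the ≈ 3-sigma case), and the value collapses rapidly toward zero as relaxation grows, matching the leverage described above.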

I wonder if the interface could display some calculations? Once the input frames have been loaded, n is known. The tool could calculate alpha_high and alpha_low (whose sum will be lower than the ESD significance unless relaxation = 1), the range of pixel rankings that will be used for the trimmed-mean calculation, and maybe more.

For example, assuming n=60, outliers=0.3, and significance=0.05:

relaxation 1 -> alpha_high=0.025, alpha_low=0.025, trimmed mean calculated on pixels 19..42 (sorted low to high)

relaxation 1.5 -> alpha_high=0.025, alpha_low=0.002, trimmed mean calculated on pixels 13..42

relaxation 2 -> alpha_high=0.025, alpha_low=0.00001, trimmed mean calculated on pixels 10..42

relaxation 3 -> alpha_high=0.025, alpha_low=0.00000001, trimmed mean calculated on pixels 7..42

relaxation 5 -> alpha_high=0.025, alpha_low≈0, trimmed mean calculated on pixels 4..42

If I think 3 pixels in my 60-pixel stack seem reasonable to reject, I can get the trimmed mean to exclude them by setting relaxation to 5, but by doing that I virtually ensure they will NOT be rejected, because of the effective alpha.
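The pattern in the ranges listed above appears to be: trim floor(n × outliers) pixels from the high end and floor(n × outliers / relaxation) pixels from the low end. A sketch that reproduces the table under that assumption (an inference from the listed numbers, not PixInsight source):

```python
import math


def trim_range(n, outliers, relaxation):
    """1-based index range (sorted low to high) of pixels kept for the
    trimmed mean. Inferred from the worked examples: floor(n * outliers)
    pixels trimmed from the high end, floor(n * outliers / relaxation)
    from the low end. A guess at the pattern, not PixInsight source.
    """
    high_trim = math.floor(n * outliers)
    low_trim = math.floor(n * outliers / relaxation)
    return (low_trim + 1, n - high_trim)
```

Under this reading, raising relaxation shrinks only the low-side trim, which is why the upper bound of the kept range stays at pixel 42 in every row of the table.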

johnpane

Well-known member
