This post is a collection of notes on PixInsight’s Generalized Extreme Studentized Deviate (ESD) Test rejection algorithm. The material is gathered from the following PixInsight forum post.
An explanation of the algorithm can be found on the NIST website.
ESD Significance Parameter
The ESD significance parameter does not define a Boolean condition (reject or don’t reject) over the whole stack. It is used to compute critical values (the lambda_i variables), which are compared with the test statistics (R_i) in order to find the number of outliers. Increasing the ESD significance parameter rejects more pixels because the algorithm tolerates a higher probability of incorrectly rejecting the null hypothesis (that there are no outliers in the stack). It’s a bit convoluted, but that’s how it works.
The ESD outliers parameter defines an upper bound for the number of outliers that can be rejected in the sample. For example, for a stack of 10 pixels, if ESD outliers is equal to 0.3, this means that 0, 1, 2, or 3 outliers can be detected. Of course, this also means that if there were a fourth outlier, it would pass unnoticed.
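For reference, here is a minimal Python/SciPy sketch (my own, not PixInsight code) of how these two parameters enter the standard generalized ESD test as described by NIST: the significance level sets the percentile used for the critical values lambda_i, and the outliers fraction caps how many test statistics are computed.

from scipy.stats import t

def esd_critical_values(n, significance, outliers_fraction):
    # Critical values lambda_i for the generalized ESD test (NIST formulation).
    # n: stack size, significance: alpha, outliers_fraction: upper bound on rejected pixels.
    r = int(outliers_fraction * n)              # at most r outliers can be detected
    lambdas = []
    for i in range(1, r + 1):
        p = 1 - significance / (2 * (n - i + 1))
        df = n - i - 1
        tq = t.ppf(p, df)                       # t quantile; grows as alpha shrinks
        lambdas.append((n - i) * tq / (((df + tq**2) * (n - i + 1)) ** 0.5))
    return lambdas

# A 60-pixel stack with the defaults: raising the significance lowers every
# lambda_i, so more test statistics R_i exceed their critical value.
print(esd_critical_values(60, 0.05, 0.3)[:3])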
At the default settings, the effect of the ESD outliers parameter seems likely to be far greater through its effect on the trimmed mean function than through limiting the number of rejected pixels. Essentially, at 0.3 it trims the top 30% of pixels and the bottom 20% and computes the mean of the remaining 50%. However, it seems very unlikely that so many pixels would pass the significance test that the 30% cap on rejection would come into play.
As such, if a user wants to reject fewer pixels, the alpha parameter has much greater leverage, but the tooltips as written might lead them to focus on reducing the outliers parameter. They would have to reduce it very substantially, perhaps down to 5 or 10%, before they see any effect, and such a reduction would of course vastly reduce the amount of trimming. Again, it makes me wonder why those two functions (amount of trimming to find the central value, and upper bound on the percentage of pixels rejected) are tied to the same parameter.
Note that our implementation introduces two important changes to the original algorithm that make the ESD rejection method more robust and versatile. For robustness, we don’t use the mean and the standard deviation of the sample, but a trimmed mean and the standard deviation of the trimmed sample at each iteration. In all of our tests the algorithm behaves much more consistently with this variation. For versatility, we introduce a relaxation factor that multiplies the standard deviation (the s variable in the algorithm description above) for pixels with values smaller than the trimmed mean. This allows us to apply a more tolerant rejection for dark pixels.
The answer is c). As I’ve noted above, the relaxation parameter multiplies the standard deviation of the (trimmed) sample at each iteration to compute test statistics for pixels with values smaller than the trimmed mean. Since the low relaxation parameter is >= 1, it causes the algorithm to be more tolerant for rejection of low pixels, since the algorithm ‘sees’ a higher dispersion for these low pixels. In a sense, what we are doing here is telling lies to the ESD test for a subset of pixels where we want less rejection.
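Putting the two modifications together, a rough sketch of a single rejection iteration might look like the following. This is my own illustration in Python/NumPy, not PixInsight source code: the exact trimming rule is an assumption (chosen to be consistent with the example pixel ranges given further down), and the real algorithm removes and re-tests one extreme value per iteration rather than scoring everything in one pass.

import numpy as np

def esd_iteration_statistics(stack, outliers=0.3, low_relaxation=1.5):
    # One iteration of the modified test, as I understand it (illustrative only).
    x = np.sort(np.asarray(stack, dtype=float))
    n = len(x)
    lo = int(outliers * n / low_relaxation)   # assumed: fewer low pixels trimmed when relaxation > 1
    hi = n - int(outliers * n)
    trimmed = x[lo:hi]
    m = trimmed.mean()                        # trimmed mean replaces the sample mean
    s = trimmed.std(ddof=1)                   # dispersion of the trimmed sample
    # Low pixels are compared against an inflated dispersion, so they look less extreme.
    scale = np.where(x < m, low_relaxation * s, s)
    return np.abs(x - m) / scale              # R_i-style statistics; the largest is tested first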
Users not well-versed in statistics may not realize just how much leverage the relaxation parameter has. With the default settings, alpha = .05, which means that high pixels are rejected if they are about 2 sd or more from the central value, assuming a stack of 60 pixels. With relaxation of 1.5, low pixels have to be about 3 sd or more from the central value, making alpha ≈ .002 for low rejection. If users bump relaxation up to 2, they are effectively setting alpha for low rejection to ≈ .00001; if they set it to 3, the effective alpha ≈ 0.00000001 (six sigma, but using a t-distribution with 60 df); and if they set it to 5, they are essentially setting alpha to zero for low rejection.
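As a rough check of those numbers, the effective low-side alpha can be approximated by treating the high-side threshold as a t quantile with n degrees of freedom and scaling it by the relaxation factor. This is a back-of-the-envelope sketch, not the exact ESD critical-value calculation, so the values it prints will differ slightly from the figures above.

from scipy.stats import t

def effective_low_alpha(n, alpha, relaxation):
    # Approximate high-side threshold (~2 sd for n=60, alpha=0.05),
    # then ask how improbable a low pixel must be once s is inflated.
    high_threshold = t.ppf(1 - alpha / 2, df=n)
    return t.sf(relaxation * high_threshold, df=n)   # one-sided tail probability

for r in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(f"relaxation {r}: effective low alpha ~ {effective_low_alpha(60, 0.05, r):.1e}")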
I wonder if the interface could display some calculations? Once the input frames have been loaded, n is known. The tool could calculate alpha_high and alpha_low (the sum of which will be lower than the ESD significance unless relaxation = 1), the range of pixel rankings that will be used for the trimmed mean calculation, and maybe more.
For example, assuming n=60, outliers=0.3, and significance=.05:
relaxation 1 -> alpha_high=0.025, alpha_low=0.025, trimmed mean calculated on pixels 19..42 (sorted low to high)
relaxation 1.5 -> alpha_high=0.025, alpha_low=0.002, trimmed mean calculated on pixels 13..42
relaxation 2 -> alpha_high=0.025, alpha_low=0.00001, trimmed mean calculated on pixels 10..42
relaxation 3 -> alpha_high=0.025, alpha_low=0.00000001, trimmed mean calculated on pixels 7..42
relaxation 5 -> alpha_high=0.025, alpha_low≈0, trimmed mean calculated on pixels 4..42
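The pixel ranges in that table follow a simple pattern: the high-side trim count is outliers x n, and the low-side trim count appears to shrink by the relaxation factor. The short Python sketch below (my own reconstruction, not PixInsight code) reproduces those ranges under that assumption.

def trimmed_range(n=60, outliers=0.3, relaxation=1.5):
    # 1-based pixel ranks (sorted low to high) used for the trimmed mean,
    # assuming the low-side trim count is floor(outliers * n / relaxation).
    low_trim = int(outliers * n / relaxation)
    high_trim = int(outliers * n)
    return low_trim + 1, n - high_trim

for r in (1.0, 1.5, 2.0, 3.0, 5.0):
    first, last = trimmed_range(relaxation=r)
    print(f"relaxation {r}: trimmed mean on pixels {first}..{last}")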
If I think 3 pixels in my 60-pixel stack seem reasonable to reject, I can get the trimmed mean to exclude them by setting relaxation to 5, but by doing that I virtually ensure they will NOT be rejected because of the effective alpha.
IC1396 25-Jul-2021, RedCat 51, Optolong L-Extreme, 36x300sec (3 hours) Under a Full Moon
Equipment
To capture this target I am using the following setup:
Telescope: Williams Optics RedCat 51
Filter(s): Optolong L-Pro 2″ filter
Camera: ASI2600MC Pro
Mount/Star Tracker: Sky-Watcher Star Adventurer Pro
EAA: ASIAir Pro, ASI120MM Mini Guide Camera, f/4 mini guide scope, ZWO EAF
Observation Log
I live in a Bortle Class 7 area and image primarily from my backyard. I will attempt to get as much imaging time as possible with this target. The target coordinates are as follows:
RA: 21h 35m 37s
Dec: 57° 24′ 03″
Due to the hot summer nights, I am aiming only for about.
Notes: I had a lot of issues this night. I spent the majority of my time trying to figure out how to focus the telescope. The issue was that the 3D-printed focus ring did not fully engage the belt; as a result, I could never achieve focus. After those issues were sorted out, I was finally able to polar align and let the system run through the night. The clouds started rolling in late at night, so there were not many usable images from this run.
2021-07-25
Weather: Partly Cloudy (late night)
Imaging: 36x300s (36 rejects), ~3 hours
Notes: For this run, I had updated my ZWO EAF mount to a different design, which worked much better. My focus was within 2″, which was better than I had ever achieved. The guiding was great and my RMS error was under 1″. I was able to image for about 1h30m before I ran into a major issue with my autofocus routine. I had set up the autofocus to run about once every hour or for every 2 °C of temperature change. A little over an hour in, the autofocus routine executed and got extremely confused. I ended up losing about 30 minutes of imaging time before the clouds rolled in.
Processing: Below is my first pass at processing the lights from this night. Overall, the image is much sharper than the previous night’s, but there are some areas I need to work on. Namely, there seems to be an issue with uneven background subtraction, evident from the darker circular area on the middle right side of the image.
2021-07-26
Weather: Expected Partly Cloudy
Imaging:
Notes:
This was my longest imaging run yet, but I ran into a few issues.
The AZ lock on the mount, though tight, does not prevent rotation. This can lead to some polar alignment drift when I reposition the telescope in RA/DEC.
The EAF was flaky, probably due to backlash. I had the EAF run only when the temperature changed by more than 2 °C, but when the routine was triggered it really threw off the focus. I had to refocus manually.
The SWSA mount ran out of batteries and I had to change them. This led to some images with lots of star trails.
I had to do a meridian flip from the west side of the mount to the east side. This will lead to some alignment issues in the stacked frames.
2021-07-28
Weather: Mostly Clear
Tonight’s run was better than any before. I figured out the issue with the AZ lock on my mount: it turned out I just needed to screw in the base a little tighter. I ran autofocus manually at the beginning of the run and did not change focus after that. I expect some loss in clarity due to this, but I did get about 9 hours of imaging time. The batteries lasted all night.
Notes:
2021-07-30
Weather: Clear
Tonight, the weather is amazingly clear. I was surprised because all day it had been cloudy.
For imaging tonight, I decided to switch filters and use my Optolong L-Extreme 2″ on the same target. I am also switching mounts to the iOptron GEM45 with a LiteRoc 1.75″ tripod.
Notes:
I am still figuring out this mount. I am having issues with some star trailing (maybe at the meridian flip?) and potentially some cable snags.
In this post I will show you how to print a temperature tower with PrusaSlicer.
Temperature Calibration Tower
This temperature tower has markings for temperatures in 5 °C increments from 240 °C down to 185 °C. The printing is done in reverse order, hottest floor first, to prevent cold extrusion (which is bad!) if the temperature drops too low. Note that I use this temperature tower for PLA with a 0.5 mm nozzle, which requires a higher melt temperature than normal. You may require a different temperature range for your nozzle, printer, and material. Results may vary, but you can adapt these steps to any model.
Start by downloading the model from the link below.
Load the model in your slicer and do a preliminary slice. Use the preview window to identify the layer_z value immediately above the first temperature marking; call it layer_initial. Next, identify the height between consecutive temperature markings; call it layer_increment. Copy the code snippet below and edit it as necessary, changing the first “layer_z<=” value to layer_initial and then increasing each subsequent threshold by layer_increment. In the example below, layer_initial=9 and layer_increment=7. If necessary, adjust the G-code temperatures to match your temperature tower. In the example below, M104 S240 sets the first tower floor to 240 °C, and each subsequent floor drops by 5 °C down to a final floor of 180 °C. (If you would rather not edit the thresholds by hand, a small helper script is sketched after the G-code below.)
Copy the code and paste it into PrusaSlicer at “Printer Settings->Custom G-Code->Before Layer Change”.
;BEFORE_LAYER_CHANGE
{if layer_z<=9}
; T tower floor 1
M104 S240
{elsif layer_z<=16}
; T tower floor 2
M104 S235
{elsif layer_z<=23}
; T tower floor 3
M104 S230
{elsif layer_z<=30}
; T tower floor 4
M104 S225
{elsif layer_z<=37}
; T tower floor 5
M104 S220
{elsif layer_z<=43}
; T tower floor 6
M104 S215
{elsif layer_z<=50}
; T tower floor 7
M104 S210
{elsif layer_z<=57}
; T tower floor 8
M104 S205
{elsif layer_z<=63}
; T tower floor 9
M104 S200
{elsif layer_z<=70}
; T tower floor 10
M104 S195
{elsif layer_z<=77}
; T tower floor 11
M104 S190
{elsif layer_z<=84}
; T tower floor 12
M104 S185
{else}
; T tower floor 13
M104 S180
{endif}
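If you would rather not edit the thresholds by hand, a small Python script like the one below can generate the block for you. This is my own helper, not part of PrusaSlicer; adjust layer_initial, layer_increment, and the temperature range to match your tower.

def temp_tower_gcode(layer_initial=9, layer_increment=7, start_temp=240, end_temp=180, step=5):
    # Generate the PrusaSlicer "before layer change" conditional block for a temperature tower.
    temps = list(range(start_temp, end_temp - 1, -step))
    lines = [";BEFORE_LAYER_CHANGE"]
    threshold = layer_initial
    for floor, temp in enumerate(temps, start=1):
        keyword = "{if" if floor == 1 else "{elsif"
        if floor < len(temps):
            lines.append(f"{keyword} layer_z<={threshold}}}")
            threshold += layer_increment
        else:
            lines.append("{else}")                 # last temperature is the fallback
        lines.append(f"; T tower floor {floor}")
        lines.append(f"M104 S{temp}")
    lines.append("{endif}")
    return "\n".join(lines)

print(temp_tower_gcode())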
Perform a print with your preferred settings.
Evaluating the Results
Rather than repeating what has been said elsewhere, I will direct you to the following site, which has some great tips on how to evaluate the results.
In my case, I found the following:
Temperatures below 195 °C lost adhesion completely.
Mechanical tests revealed that a temperature of at least 215 °C would be needed: I was able to manually break the bridge and separate layers printed at temperatures below this.
At the other extreme, temperatures above 220 °C showed significant warping on the bridge.
All of the temperatures showed stringing, which is probably a sign that I need to adjust my retraction settings or program a new linear advance value. In this test, I used a linear advance setting of 0, which effectively disables the feature.
In a future post, I discuss calibrating linear advance. I chose this option because PrusaSlicer only allows adjusting retraction settings on a per-machine/extruder basis. I will likely need different settings for each filament; therefore, I will customize the K-factor for each filament instead.