TESS Field Study SCC313: Asteroid Detection Improvements and Matching Solved
This is another investigation of full frame TESS images like the preliminary study I published last month. I think I'm going to call these Field Studies going forward. The SCC designation refers to the TESS Sector, Camera and CCD that I'm looking at (in this case 3,1,3). So with those definitions out of the way, let's get into the investigation. I will remind you to go wide with your (ideally non-mobile) browser to see the detail better. There are more technical details below the image sets.
1. This is an animation of 5 stacked TESS frames from October 10th to 14th 2018 that have been brightness matched. With the preliminary study I chose frames that I could actually see object movement in, but I see nothing moving here. It doesn't even look like an animation aside from the frame counter in the lower right. Still, these observations are close to the ecliptic so we should expect to find a lot of asteroids.
2. This is the static sky template. It's calculated as a pixel-wise trimmed max of 10 frames which include the 5 frames of image set 1.
3. These are the difference frames generated by a pixel-wise subtraction of the static sky template (img 2) from each frame of image set 1. Unlike previous studies, I have not thresholded the difference images. More on this below. Click and hold the image set above for the streaks view.
4. And finally, these are the same difference frames from image set 3 with known asteroid position matches for frame 4 plotted. Said another way, each circle with a dot inside it in frame 4 marks a verified asteroid. There are 172 circled detections matched to asteroids, spanning visual magnitudes 13 through 18. I think I've really nailed the matching process now: you can see each of these matches hitting the dead center of the JPL expected position circle. Click and hold the image set above for the streaks view.
Image set technical details:
Image set 1: Each frame here is a pixel-wise maximum of 3 sequential observations spanning one hour on the date of the observation. The frames are available for download here. My knowledge of CCDs is limited, but I know that the distribution of values coming out of the full frame images has some crazy outliers (even before the max-ing). So, new for this analysis, I have clipped the low and high ends of the images at the 2nd and 98th percentile values respectively. This helps keep the difference image ranges well behaved. These frames are then brightness matched to a pixel-wise median of the 10 original images as usual. Brightness matching continues to be a useful tool for normalizing the frames before creating the template and differencing them. I'm only showing 5 of the 10 frames in this presentation.
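A minimal sketch of this clip-then-match step. The function name, the percentile defaults, and the median-ratio scaling are my assumptions for illustration, not the actual pipeline:

```python
import numpy as np

def preprocess(frame, ref_frame, lo_pct=2, hi_pct=98):
    """Clip extreme pixel values, then brightness-match to a reference.

    Hypothetical sketch: clip at the 2nd/98th percentiles as described,
    then scale so the frame's median matches the reference median.
    """
    # Clip the low and high tails to tame CCD outliers.
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    clipped = np.clip(frame, lo, hi)
    # Brightness match via a simple global median-ratio scaling (my guess
    # at one reasonable normalization, not necessarily the author's).
    scale = np.median(ref_frame) / np.median(clipped)
    return clipped * scale
```

The zero-parameter defaults make it easy to experiment with tighter or looser clips when the difference-image ranges misbehave.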
Image 2: The static sky template is generated as a trimmed maximum of the 10 processed frames mentioned above. The trim removes the highest value at each pixel location in the stack of 10 processed images.
Image set 3: First let me suggest again that you do not miss the streaks view for a composite cumulative representation of the difference images. These frames are just a simple pixel-wise subtraction of image 2 from the image set 1 frames. Because of the processing described in image set 1, I'm getting much better range consistency in the differences. I haven't even thresholded these images like I've always done in the past; you're looking at the raw differences. I calculated the differences for all 10 frames but selected the 5 shown in this analysis by choosing the difference images with the most consistent source counts.
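As a rough sketch of this step: the differencing itself is a one-liner, and I'm guessing the "streaks" composite is a pixel-wise maximum over the difference frames (that interpretation, the zero floor, and the function names are my assumptions):

```python
import numpy as np

def difference_frames(frames, template):
    # Pixel-wise subtraction of the static sky template, clamped at zero
    # so only positive residuals (potential detections) survive.
    return [np.clip(f - template, 0.0, None) for f in frames]

def streaks_view(diffs):
    # Hypothetical reconstruction of the cumulative "streaks" composite:
    # a pixel-wise maximum over all difference frames, so a moving object
    # leaves its whole track visible in a single image.
    return np.max(np.stack(diffs), axis=0)
```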
Image set 4: Potential matches are selected by thresholding the difference images at 1σ. The circles correspond to known asteroid positions on the date and time of the 4th frame. I was able to match 172 out of 572 known objects (30%) brighter than 19th magnitude with a position error limit of 1px. As a sanity check I tried to match the frame 4 objects to frame 3 and frame 5 sources and there were zero matches as expected.
Matching image source detections to real physical objects was dramatically improved for this analysis. In fact I would consider it solved now. I was doing a number of things wrong in the preliminary study including: using a geocentric frame, using apparent coordinates and making incorrect assumptions about the WCS behavior for the TESS border/buffer pixels. Some of the effects of those choices I was aware of but underestimated, others I was just totally ignorant about. I'm now using JPL's ISPY tool to get object positions from a TESS centered frame and am getting average position error deltas of less than 1/10th of a pixel. A big thanks goes out to Jon Giorgini at JPL's Solar System Dynamics group who pointed me away from the SB Ident tool and towards the lesser known ISPY tool. He also updated the TESS orbit for me which I'm sure contributes to the excellent precision I'm getting here.
One interesting challenge I ran into is that I was having trouble matching one of the obviously brighter objects. This is because brighter objects create larger blobs: the centroid of a large blob might not fall within the 1px limit of the expected position, yet the blob as a whole could still contain the expected object position. To handle this I tested contiguous blob pixels for matching as well. As long as the pixels were joined to the centroid of the object, I considered them candidates for matching. That seemed to do the trick.
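Here's a hedged sketch of that extended matching logic using `scipy.ndimage`. The 1σ threshold follows the image set 4 description, but the function shape and the centroid-or-any-blob-pixel test are my reconstruction, not the author's code:

```python
import numpy as np
from scipy import ndimage

def match_blobs(diff, expected_rc, limit_px=1.0):
    """Match expected (row, col) object positions against thresholded blobs."""
    # Threshold at 1 sigma above the mean (my guess at the exact cut).
    mask = diff > diff.mean() + diff.std()
    labels, n = ndimage.label(mask)  # contiguous blobs get integer labels
    matched = []
    for er, ec in expected_rc:
        for lab in range(1, n + 1):
            cr, cc = ndimage.center_of_mass(mask, labels, lab)
            # Accept on centroid distance first...
            hit = np.hypot(cr - er, cc - ec) <= limit_px
            if not hit:
                # ...then fall back to any contiguous pixel of the blob,
                # which rescues large blobs whose centroid sits too far away.
                rs, cs = np.nonzero(labels == lab)
                hit = np.min(np.hypot(rs - er, cs - ec)) <= limit_px
            if hit:
                matched.append((er, ec))
                break
    return matched
```

For example, a 5×5 blob centered at (12, 12) matches an expected position of (10, 10) via the blob-pixel fallback even though its centroid is ~2.8px away.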
| Object ID | Name | Visual Magnitude |
| --- | --- | --- |
| ... | ... | ... |

(Full table of all 172 matched objects.)
Other process notes:
- Max stacking: In processing these images I used the pixel-wise maximum of a stack of 3 images to make each of the processed frames that I start with in image set 1. Taking a max across a stack can be problematic, as it will select outliers in noisy data. For the purposes of this analysis at least, the data does not seem problematically noisy. In fact, in testing the median, average and maximum, the maximum showed the best results. This isn't totally surprising: if a series of observations were noiseless, the maximum of those pixel observations would be the best registration of the source compared to the average or median. The way I think of it, I want to be using the pixel value with the strongest potential signal. I can figure out whether that signal is real by looking for it across multiple frames. More noise will be present, yes, but it is not likely to arrange itself in a streak that coincides with the known orbit of a solar system object.
- Trimmed max template: Related to the above point, my static sky template is also not a median or an average; it's a trimmed maximum. The idea here is that I am building a mask as much as a template. If I have 10 input image pixels at a location and I select the 2nd highest value for the template I will essentially zero out that pixel for 9/10 difference images (I enforce a floor of zero for difference images). The one non-zeroed pixel is my potential detection or it's something close to zero (for a non-detection). This is likely just a decent heuristic-based approach and a more careful characterization of the pixel statistics of the image would probably yield better results.
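The two notes above can be sketched end to end. Everything here (names, the 4×4 frames, the toy transient) is illustrative, not the actual pipeline:

```python
import numpy as np

def max_stack(obs):
    # Pixel-wise maximum of sequential observations: keep the strongest
    # potential signal and let cross-frame consistency reject noise later.
    return np.max(np.stack(obs), axis=0)

def trimmed_max_template(frames):
    # Sort the stack pixel-wise and take the 2nd-highest value, i.e. a
    # maximum with the single highest value trimmed off.
    return np.sort(np.stack(frames), axis=0)[-2]

# Max-stack 3 toy observations into one processed frame.
processed = max_stack([np.full((4, 4), v) for v in (0.8, 1.0, 0.9)])

# Masking effect of the trimmed max: a transient appears at (2, 2) in
# processed frame 3 only, so the template keeps the quiescent value there.
frames = [np.full((4, 4), 1.0) for _ in range(10)]
frames[3][2, 2] = 9.0
template = trimmed_max_template(frames)
diffs = [np.clip(f - template, 0.0, None) for f in frames]
# The transient survives only in frame 3's difference; the template
# zeroes that pixel in the other 9 difference frames.
```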
Ideas & Todos:
- Object recall: I mentioned I matched 172/572 (30%) of the known asteroids in frame 4 brighter than 19th magnitude. I think this is a good metric for comparison going forward. I can use it to see whether my process is improving or not. I matched 45/501 (8.9%) in my preliminary study, so there's certainly been some improvement since then.
- Detect across multiple frames: Related to the above point, I should be able to improve recall by matching detections in more than one frame. Some objects show up in one difference frame but not in others, so looking across all of my difference frames will certainly help.
- Matching without thresholding: I'm not thresholding the difference images for the display above, but I am still thresholding them to get the source locations that I use to try to match. This is definitely filtering out some dim/small potential sources, so I need to either push the threshold level lower or not threshold at all before I try to match. At a certain point I'm worried about having so much noise that I start to "match" noise to real object locations.
- Look for dimmer objects (maybe TNOs): I definitely still want to do this.
- Do some OD: I can probably start thinking about using orbit determination now that I've got a better understanding of the WCS and coordinate system for the TESS observer. As I start to track objects across multiple frames this may be the better way to go.
- Remove fewer frames: I mentioned I was using only 5 of 10 frames for this analysis. I might try rejecting frames with substantially more source counts than known frame objects rather than rejecting frames whose counts differ from the average. For instance, all of these frames had ~250 sources in the difference images, and I rejected frames with as few as 500 sources because that was considerably more than the average. I may be rejecting my best frames for matching.
- Difference imaging for SSBs vs. other phenomena: This is more of a musing, but as I read more about how LSST builds templates and does image differencing, I wonder whether these two things are at all dependent on their application. For instance, LSST is doing far more than just detecting solar system objects. Are the template images best suited to detecting solar system objects the same as templates for detecting variable stars or supernovae? And are bootstrapped templates built from the frames under investigation better or worse for object detection? More reading is required here I guess.
- Look at Comet 46P/Wirtanen: TESS recorded an outburst of this comet. It might be interesting to apply some of my image processing tools to these frames to see if I can observe it too.
- Not deconvolving a PSF yet: This is another thing that will surely improve detection potential when I implement it. It's probably going to take some TESS-specific literature reading to understand how people are modeling the PSF for full frame images.
- The cover of my next album (if I made albums): I generated this image while segmenting the frame for this study and thought it was pretty cool.
Almost as soon as I published the preliminary study last month I was able to significantly improve detection with the techniques I mention above. This study was supposed to be a hunt for TNOs, but after I saw those improvements I decided to publish the results first and focus on getting matching nailed down at the same time. The matching work is not quite as exciting as the image analysis and detection, but correctly and accurately identifying dots in an image is obviously critical for moving forward with a project like this.
I'm not exactly sure what's next yet. I might look for TNOs or work on matching fainter objects in full frames a little more. Either will surely inform the other, so there's probably not a wrong way to proceed. I really like the idea of searching for the fainter objects either way, and I think this new recall metric is going to be a useful way to measure the improvement in the ability of my image processing techniques to see the fainter stuff. I haven't found a 19th magnitude object yet; maybe I can pull one of those out of the data soon.