As mentioned in my previous post (AstroBlog 2019.008.1 – Saturn with a Celestron NexStar 4SE and an iPhone), I went out on July 12 with my son’s Celestron NexStar 4SE, stock alt-az mount and tripod, iPhone XS Max, 25mm Plossl, and the Celestron NexYZ phone adapter.
As with the shots of Saturn, I started with single frames of Jupiter. Settings were f/1.8 (fixed by the iPhone hardware), ISO 24 to avoid blowing out the planet’s details, and a simple 1/10-second exposure:
Cropped, you can see some details:
I roughly doubled the ISO to 50, same exposure:
And cropped again:
There seems to be a bit more detail in the top half, but less detail in the bottom. Then again, on a 4″ scope with my iPhone, I’ll take it! 🙂
For the second part of my processing, I worked with a 3-minute video, ~1900 frames:
Initially, I tried working with PIPP and then AutoStakkert, but it is REALLY hard to process the image when the software wants you to place little alignment markers on it — in fact, it tells you NOT to place them manually for planetary targets. I used the auto option to let it find its own, but it only found 7 (it wants a minimum of 24):
I manually added another 14 to get to 21. In the end, I’m not sure it was worth it:
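As an aside, automatic marker placement essentially amounts to hunting for patches with enough local contrast to track from frame to frame. Here’s a toy sketch of the idea in Python — my own simplification, not AutoStakkert’s actual algorithm:

```python
import numpy as np

def auto_alignment_points(frame, box=8, min_contrast=0.1):
    # Scan the frame in box-sized tiles and keep the centre of each
    # tile whose local contrast (max - min) clears the threshold --
    # a crude stand-in for automatic alignment-point placement.
    points = []
    h, w = frame.shape
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            tile = frame[y:y + box, x:x + box]
            if tile.max() - tile.min() >= min_contrast:
                points.append((y + box // 2, x + box // 2))
    return points
```

On a small, blurry planetary disc, only the tiles straddling the limb have real contrast — which is presumably why the software struggled to find more than 7 usable points on my video.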
I tried again, this time with some quality control built in: keeping only the best 40% of frames, again with 7 alignment markers:
Cropped, still not sure there’s much else there:
I tried it again with the best 40% of frames plus 29 separate alignment markers, and got this:
Cropped, I get this:
More work, but the result looks fainter and less detailed to me.
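For the curious: “keep the best 40%” is the heart of lucky imaging — score every frame for sharpness, sort, and average only the top fraction. A minimal sketch assuming grayscale NumPy frames (the Laplacian-variance score is a common proxy, not necessarily what AutoStakkert uses, and a real stacker also aligns each frame before averaging):

```python
import numpy as np

def sharpness(frame):
    # Variance of a Laplacian response: blur flattens edges,
    # so sharper frames score higher.
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def stack_best(frames, keep=0.40):
    # Rank frames by sharpness and average only the best fraction.
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    n = max(1, int(len(frames) * keep))
    return np.mean([frames[i] for i in order[:n]], axis=0)
```

Averaging the keepers is what knocks down the sensor noise; the quality cut is what stops the blurriest frames from smearing away the detail you’re trying to recover.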
Finally, I tried processing it in Nebulosity, which is almost a sucker’s game for planets. First, it won’t accept videos, so I pre-processed in PIPP (noise filter, stabilization for planetary images, object detection, centring the object, rotating it 90 degrees counterclockwise, quality estimation and reordering by quality, output as individual TIFF files) and then opened the resulting frames in Nebulosity. Second, Nebulosity is NOT designed for planetary work at this resolution: it wants me to identify and click on features that are blurred most of the time, and I’d have to do that manually for every one of the ~1900 frames. I tried auto mode, but it just blurred everything out. On a second attempt I focused on the best 50 or so of the first 400 frames, based on PIPP’s quality estimate, let Nebulosity stack them, cropped the result down to something usable, and finally ended up with this:
While they’re all interesting, I feel like only a couple of them improve at all on the single-frame shot, and barely at that! But I’m learning at least…I think.
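The PIPP steps above (detect the object, centre it, rotate 90° counterclockwise) are simple enough to sketch. Here’s a rough version for a grayscale NumPy frame — my own approximation, not PIPP’s implementation:

```python
import numpy as np

def center_and_rotate(frame, threshold=0.5):
    # Locate the planet via an intensity-weighted centroid of the
    # bright pixels, shift it to the frame centre, then rotate 90
    # degrees counterclockwise -- mimicking PIPP's "centre object"
    # and rotation options.
    mask = frame > threshold * frame.max()
    ys, xs = np.nonzero(mask)
    w = frame[ys, xs]
    cy, cx = np.average(ys, weights=w), np.average(xs, weights=w)
    dy = int(round(frame.shape[0] / 2 - cy))
    dx = int(round(frame.shape[1] / 2 - cx))
    centred = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return np.rot90(centred)  # 90 degrees counterclockwise
```

Centring every frame this way is what makes a later stack line up at all when the planet drifts across the field during a 3-minute video.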
An online friend took a stab at processing my videos and got this:
It looked like one of my early results, too.
