Astrophotography: A beginning.

  • #45465
    ravnostic
    Participant

    One way to do it, chupa, would be to use filters and live-view focus to get the chroma pin-sharp for each of R, G & B. The difficulty factor would be barrel distortion (which would be slightly different at each wavelength) and a steady center of focus so it could be corrected manually. Me? I often just crop down as needed to eliminate the worst of it.

    You can at least get filters that transmit only the correct wavelengths, but that also means longer exposures at the least, and probably more images to stack.

    #45464
    ravnostic
    Participant

    errr, I mean fluffy

    #45468
    chupathingie
    Participant

    Yeah, rav's got the gist of it there.

    The best way to avoid the CA is to shoot prime with a T-adapter through a Newt… that eliminates CA entirely, but trades it for coma in fast systems. Stopping down as rav mentioned helps quite a bit, at the expense of longer exposures; you have to make a trade-off between the accuracy of your guiding and how much CA you can deal with. Avoid attaching filters to your lens; they will produce flares.

    You’ve got me wondering just how low I can drop the R/G/B sensitivity with camera settings; that would eliminate the filter issue if it can be dropped to 0.

    I know there is software to help deal with CA in post, but I’ve not tried any so I can’t make a recommendation.

    As far as camera choice goes, any DSLR can be used… most any of them work surprisingly well. All the noise reduction is done in post and is determined more by technique than by the camera unless you’re shooting way up at the highest ISOs where things wind up being less repeatable/predictable.

    #45469
    fluffybunny
    Participant

    Chupa,

    You’ve got me wondering just how low I can drop the R/G/B sensitivity with camera settings; that would eliminate the filter issue if it can be dropped to 0.

    Veeeeerry interestink. I would like to subscribe to your newsletter, or at least hear back on what you find out.

    I figured the Newt would work that way, and I plan on giving it a try when I get the 16 finished, knowing I might have to invest in a coma corrector.

    You have discussed somewhat your post procedures and software but do you mind elaborating? I seem to remember that you use ImageMagick for some things.

    Rav,

    The crop idea might be the easiest, and I'm all about easy when I can get it. I was going to try Andromeda as a first target, and Ron Wodaski's CCD Calculator says I should be able to get the whole thing in one shot, maybe two with stitching. Cropping would mean more sub-frames, so I was trying to think about novel ways to use the RAW data from our daily shooters before I went that route.

    #45470
    chupathingie
    Participant

    You have discussed somewhat your post procedures and software but do you mind elaborating? I seem to remember that you use ImageMagick for some things.

    Oh I suppose I could do that…this might get a wee bit wordy, just because I’m a big fan of the whys in addition to the hows; plus I’d like for anyone else who stumbles into this thread to have as much information in one place as I can pass on.
    /me grabs a cup of coffee and Bailey’s

    I’m sure everyone here knows what a long exposure (2mins+) looks like from a DSLR…very grainy, noisy and with scattered sprinkles of saturated red, green, blue and white pixels (“hot pixels”). It’s not pretty.

    The basic premise here (and this is likely the only original thought I have on this, all the rest is information I’ve gathered reading forums and the writings of those more learned than I) is to create as good an image as you can of what you don’t want in your final image, and use that to clean up your data. “Data” in this sense is the subject of your composition, and expressed in basic math any image you take looks like this:

    image = data + noise

    So the idea is to gather as much of the data in one pile, and noise in another. In-camera noise reduction does this in a simple fashion; you take your picture and the camera then takes an identical exposure with the shutter closed. Rearranging the above formula gives you:

    data = image - noise

    and this is exactly what in-camera noise reduction does. It subtracts the image taken with the shutter closed (a “darkframe”) from the normal image (a “lightframe”). This eliminates the hot pixels and cuts down on some of the grainy static in the final image.

    It doesn’t eliminate the static, however. Some of the static is what’s called thermal noise, and it’s random, so the darkframe will only share some of that thermal noise with the lightframe. The truly random thermal noise gets dealt with almost accidentally, and I’ll explain that shortly.

    The rest of the static in the image is not completely random, but due rather to imperfections in the camera sensor that makes some pixels more or less prone to thermal noise. The thermal noise is random, but the level of thermal noise at a given pixel may be higher or lower, on average, than its neighboring pixel. This is repeatable at any given temperature, so it’s something you can capture in a dark frame and use to your advantage.

    OK, so here’s a quick run-down on gathering the raw data:

    Shoot your lightframes, all at the same ISO, f-ratio (if applicable) and shutter speed. Shoot as many of these as you can get away with. The combined exposure time for them should be at least an hour, but two hours or more is better. Temperature is a factor, and it’s rarely stable…it will change during the duration of your shoot. In the middle of your shoot, stop and put the lens cap on (don’t forget to cover the viewfinder any time you’re shooting astrophotos, BTW) and shoot at least 20 darkframes at the same settings as your lightframes. When this is done, remove the lens cap and continue with the second half of your lightframes. Doing them this way will help ensure that your darkframes are taken at a temperature that falls somewhere in the middle of the temperature range of your lightframes. Shoot all of your images in RAW…you need the extra color precision with astrophotography, it’s simply a must.

    There is more to noise than just thermal noise and hot pixels…any flaw in the data collection path is considered noise, and includes dust on the sensor, vignetting, sky glow, light pollution, sensor read noise, etc. Everything from sky conditions to the technological limitations of your hardware. If you can isolate it and get an image of it, it can be used to clean up your data. I’m only covering thermal noise here, because I feel like I know enough about it to be useful. I’m not much help with shooting and using “flats” yet. Flats are images taken out-of-focus or of a uniform, evenly-lit field of view and are used to counter some of the other noise sources.

    Post processing:

    The first thing to do here is to convert all of your files to a 48 bit color format (often called 16 bit color, which refers to the 16 bits in each color channel). I convert to TIFF, but any lossless format that can handle the extra color depth works equally well. There are plenty of software titles available to do all of the post processing, but I'm a bit of a minimalist (were I an artist, I'd be the guy mixing his own paints) running an open-source home photo lab, so my choices may seem limited. I use ImageMagick for the major steps, but whatever you use should be capable of working in 48 bit color space. "Normal" 24 bit color is simply inadequate for astrophotographic (as well as advanced "normal" photographic) processing.
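
    If you're doing the RAW conversion from the command line too, here's a minimal sketch using dcraw (assuming it's installed and knows your camera's files; the .CR2 extension is just an example, so double-check the flags against your own version):

        # write a 16-bit TIFF alongside each RAW file
        for f in *.CR2; do
            dcraw -6 -w -T "$f"   # -6 = 16 bits per channel, -w = camera white balance, -T = TIFF output
        done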

    I start by separating my lights and darks into different folders. I then open up a terminal window (or a CMD window in a Microsoft OS) and navigate to the folder where I’ve put my darks. Next, I run an ImageMagick command; convert -average *.tif darkframe.tif . This gives me a final image called darkframe.tif that is composed of a straight-up average of all of the color values for each pixel. What this does is filter out the truly random thermal noise leaving behind the repeatable noise… the hot pixels and those pixels prone to higher and lower levels of thermal noise.
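
    A side note I'll hedge on, since I haven't needed it myself: newer ImageMagick versions spell the same pixel-by-pixel average slightly differently, and it should produce an identical darkframe:

        convert *.tif -evaluate-sequence mean darkframe.tif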

    Next I apply darkframe.tif to all of the lights in that other folder, again using an ImageMagick command; composite -compose minus xxxx.tif darkframe.tif dsxxxx.tif, where “xxxx” is the filename for each lightframe. This is very tedious, especially if you have, say, 75 lightframes (I’ve posted a script with notes here that does all of the post-processing described here with a single command). This creates a new set of lights with “ds” (for “dark subtracted”) added to the start of each filename that have the repeatable noise removed. It is crucial that this darkframe subtraction be done before any other processing is performed on the lightframes. If it is not the first step, you will actually increase the noise in the images.
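
    If you'd rather not grab that script, a bare-bones shell loop along the same lines looks something like this (just a sketch, assuming your darks folder, with darkframe.tif in it, sits next to the lights folder you run it from):

        # subtract the master dark from every lightframe, writing ds<name>.tif copies
        # run it once on a fresh folder; a second pass would also pick up the ds*.tif copies
        for f in *.tif; do
            composite -compose minus "$f" ../darks/darkframe.tif "ds$f"
        done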

    Next I align the all the lightframes so that the images will lie precisely on top of each other for stacking. I use one of the commands included with Hugin’s Panorama Creator and it looks like this in my workflow: align_image_stack -a ais ds*.tif . It takes all of the .tif’s that begin with “ds” and shifts them as needed, creating a new set of files that now begin with “aisds”. If you’re using an auto-guider and are confident that all of your frames are aligned precisely with your target, this step is not needed.

    Here’s where the truly random noise gets dealt with, by using the same command we used to average the darkframes: convert -average aisds*.tif final_lightframe.tif. As stated above, this filters out the truly random noise component, and is the main reason behind shooting a large number of lightframes. Remove “ais” from this command if you’ve skipped the previous step.

    What you’re left with is an unadjusted lightframe that has had the vast majority of static and noise removed which you can then adjust levels on to bring out the wispy details. You can stretch levels and curves on this image much more aggressively before the remaining noise starts to show up.
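
    Purely as an illustration of that last adjustment step (the numbers here are placeholders; the right values depend entirely on your image), a crude stretch in ImageMagick might look like:

        # raise the midtones and pull the white point down to bring up faint detail
        convert final_lightframe.tif -level 0%,40%,1.6 stretched.tif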

    A lot of this applies to regular photos. You won’t get the benefit of stacked lightframes, but images shot in low light or with dark shadows will benefit from simply putting the lens cap on and shooting 1/2 dozen darks right afterwards and subtracting their average from the original…shadows or dim areas will be quite a bit cleaner without the blurring that goes along with traditional noise reduction. I have not done a side-by-side comparison, but I suspect this might be a cleaner method than in-camera noise reduction due to the filtering of the random noise in the darks, and not introducing a second set of random noise data into the processing.
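
    In ImageMagick terms that's just the same two steps applied to a single frame (filenames here are only examples):

        convert -average dark*.tif masterdark.tif                        # average the handful of darks
        composite -compose minus photo.tif masterdark.tif photo_ds.tif   # subtract it from the original shot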

    #45471
    ravnostic
    Participant

    That's an excellent synopsis, Chupa; I'll add my 2 cents (grabs a Budweiser; I'm lowbrow). I use Deep Sky Stacker myself. It does pretty much what you described, in that order, only I didn't have to do any programming; I merely select my parameters. I learned it with nothing close to instructional aids beyond 'the best way to learn feature x is to play with it' (and I did, a lot).

    Stacking has one goal: improve the SNR (Signal to Noise Ratio). It doesn't brighten the image or add any more color; it just reduces the noise. But this means you can stretch the histogram more without degrading the photo, and that brings out color (think along the lines of the grain of dry wood versus wet {or lacquered} wood) and finer detail within very closely spaced tones.

    Chupa mentions shooting as many light frames as you can get away with, and that's correct, to a point. Let's say you shot 1 minute frames. For ease of de mathz, 64 would be great, and 121 would certainly be better. But your SNR from stacking them (darks, flats, and biases aside) would be improved on the order of 8 to 1 for 64 frames, versus 11 to 1 for 121. 225 frames would be 15 to 1, nearly twice as good as 64, but now you've more than tripled your frames (and the processing time). It's a square root deal.
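
    In symbols, the square root deal is just:

        SNR of the stack ≈ √N × SNR of a single frame
        √64 = 8, √121 = 11, √225 = 15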

    There is no easy formula for what’s worth it and what’s not in regard to how many you’re willing to stack. How much time do you have to take the frames? Under what conditions? How long will you spend stacking them? That’s all up to you. You can certainly stack stacks, and improve things, which is good if you’ve done frames on multiple evenings under different temperature conditions. But I get ahead of myself.

    Also, 100 one-minute frames are not the same as ten 10-minute frames. This is a little harder to explain, because in one way they are. Imagine light being rain in a bucket, and one minute equals one inch; either way you've caught 100 inches. But in reality it's a little different. In stacking, you're enhancing (creating depth in) each set of buckets. So with 100 one-inch buckets you'll have a finely detailed 'map', but the depth, while detailed, will only vary in thickness by 1 inch. With ten 10-inch buckets you'll have much greater depth, but it will be more coarse (more grainy, which in this analogy equates to the SNR). Another way to describe it: in the one-inch bucket frames you miss the deeper details; they may be there in some frames (splashes, if you will), but when you average all the buckets, most of which don't have them in the same spot, they disappear. In the 10-inch buckets you'll get the steady deeper 'drops' (detail), but then you have fewer images over which to average them out; the result looks more like a Lego ™ structure than a finely curved clay sculpture.

    Basically only the details that are captured in most of the frames will be in the finalized image, so depending upon what you shoot, you have to balance exposure time and number of frames against increased SNR that comes with longer exposures.

    Back to combining stacks: the square root deal still applies. Combine 4 stacks (all things being equal between the stacks: exposure, frames, etc.) and you'll double your SNR; 9 will treble it. But if all things aren't equal (and even if they are), the worst image among them will be the limiting factor in the result. The same, incidentally, is true within any single stack. So review your frames before stacking. Get rid of the ones with a little camera shake (or, as in Deep Sky Stacker, set things to only use the best x percent of frames).

    The rest (dark, flat, and bias frames) are calibration frames. Darks average out the sensor noise and subtract it. Flats average out the vignetting and subtract it. Biases average out electronic noise and subtract it (note: I've never dealt with biases, to this point). The idea is to take the light coming in and fix all the things wrong with it, giving an image without calibration flaws, much like we all want calibrated monitors that translate to our printers so our images print out looking like they do on the computer screen.

    The same square root deal applies. A dark isn't supposed to remove graininess from the image; that's what the multiple lights do. A dark frame is there to remove noise inherent to the camera sensor, which depends on temperature, ISO, etc., and thus darks should always be taken at exactly the same settings, under (ideally) the exact same conditions. But you can actually add noise to your lights by having too few darks. Using 4 instead of 1 will cut that added noise in half; 9 instead of 1 will cut it to a third. Etc. I like using at least 10 darks myself, though I haven't always had the time or patience or mind to do so, especially early on.

    Flats, on the other hand, serve to remove flaws in the optical system. Vignetting is the biggie here (but sensor dust, or dust anywhere in the mirror/lens assembly, plays a role). Here's where the light bucket analogy kinda fails. All the light we see comes from (in essence) one direction, right? Yes and no. It does, but with either a mirror or a lens we're bending it: less in the middle, more at the fringe. Besides CA (which we don't have with first-surface mirrors, of course), this leads to less light from the edges of the frame contributing to the final image.

    A flat, then, is an image of (preferably) a white or greyscale field taken with the identical optical setup. Temperature and ISO don't matter here; optics do. Change a lens, add a focal reducer, and there go your flats. I took my flats with a foamcore whiteboard in the early evening. There's a complicated formula for what an ideal flat should look like, but I'll admit: I guessed and got a pretty good set. Take 10 at each of several exposure times and play with them. Once you have a set that removes vignetting (and lens/sensor dust), you're good till you get new lens/sensor dust. Again, the more the better; you're averaging out the flaws.

    Bias frames, I read, relate to a digital camera's pixel-to-pixel readout variations. However, since dark frames contain the same bias as light frames, a dark frame takes care of them. Assuming the dark matches the light in exposure, etc., you don't need to worry about them; if it doesn't, you could use the bias to equalize things.

    However, I've recently learned of the importance of dark flat frames. (Egads, there's always a 'however'!) And I've never used them, even though I've used flats. But here's the principle. When you subtract the darks, you subtract the bias from the lights, since both contain it. But to calibrate fully (because no lens system is perfect), you need to apply the flats as well. Now, the lights contain the bias, the darks subtract that bias, and the flats of course contain the bias too, so in using both you've subtracted two biases when only one was there! In other words, you re-introduce an inverted bias. So by taking dark flat frames (which, of course, have to be taken at the same exposure, ISO, focus, etc. as the flats, just with the lens cap on), you eliminate the bias from the flats, and you come out with two positives (light and flat) and two negatives (dark and dark flat), and you wind up with true calibration.
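
    Written out the same way as chupa's image = data + noise formula, the full calibration (as I understand it) boils down to:

        calibrated light = (light - dark) / (flat - dark flat)

    (or divide by (flat - bias) if you shot bias frames instead of dark flats).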

    Phew!

    If you use bias frames, however, they substitute for dark flats, so you need one or the other, but NOT BOTH.

    The program I use, Deep Sky Stacker (as mentioned), handles all of this. (Also, I agree with chup regarding converting to TIFF; it works much better than the actual RAW, though RAW is what I shoot in, for the same reason.) First it determines the offsets (which include the natural bias), then creates a master dark to apply, subtracting from that the dark flats (if available) and then the flats themselves (cross-canceling the bias issue). All that gets applied to the images, which then results in a calibrated stack.

    //Damn I need to take my scope out, even if my 2Ti is still out for repairs. I bet I can betta’ myself now.

    I think my 2 cents has grown to a dime, or perhaps a quarter. I’ll leave you be now.

    A lot of this, but not all, I got from the deepskystacker.free.fr/english/index.html site. But it misses as much as it covers, so I relied on a few other sources as well.

    #45472
    fluffybunny
    Participant

    Many thanks to the two of you. Your tutorials complement each other quite well, supporting and bringing together the details that have been floating around in my head for some time. Excellent! Now I just need some time (this always kills me), clear skies (currently the wrong season) and a few technical details (modifying my hand controller to accept guiding input from GPUSB, mounting the guide scope and so on).

    On a note related to that last parenthetic point, do either of you have experience with auto guiding? I am wondering how good of a polar alignment I need to make if I have an auto guider. My normal alignment is "good enough" for visual work and I hope I don't have to spend a lot of time doing drift alignments during what I already know to be an otherwise time-intensive activity.

    #45473
    ravnostic
    Participant

    Simply put, in an equatorial mount system, the more accurate the polar alignment, the better. I use the CPC 1100 from Celestron (this one: http://www.celestron.com/astronomy/celestron-cpc-1100-gps-xlt.html ) coupled with a focal reducer (making it f/6.3 at 1728mm equivalent, bumped back up to about 2800mm equivalent by my APS-C sized camera sensor, but retaining the f/6.3). Basic polar alignment is easy enough, and drift aligning really isn't THAT difficult or time consuming. That being said, I've taken exposures as long as 7 minutes with very little blur, but that comes with a caveat.

    There's an oft-talked-about rule (the 600 rule) for taking star photos without guidance: divide 600 by the lens's focal length in mm to get your maximum exposure time, in seconds, without star trailing. (This is effective mm, so on my APS-C body with a 50mm lens I have to treat it like it were 85mm.) I can tell you, from my standpoint, it's B.S. When I zoom in on shots taken by that rule, all the stars are ovals. I go with half the recommended maximum and then I'm happy. Obviously using higher ISOs records more stars.
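
    Worked through with my numbers above:

        600 / 85mm ≈ 7 seconds by the rule
        half of that, ≈ 3.5 seconds, is where I'm actually happy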

    With guidance, though, you're stretching that window. Polar (equatorial) mounting is a must, or you'll have what's called 'field rotation'; that's where the stars appear to rotate around the center of the image. You'll find this happens with alt/az mounts (i.e., fork mounted with one axis parallel to the ground). You can take a longer exposure on alt/az, sure, by maybe a factor of 2, possibly 3. How much you can stretch that window when polar mounted is a function of how well aligned you are, because field rotation is eliminated. I find it's pretty easy to get within about 1/4 to 1/5 of a degree of 'true' alignment, and this allows me to get good shots of about 2 minutes. But I've had times when by luck or perseverance I've been much closer, and I've taken those 7 minute exposures then.

    But the caveat: a guidance system is only as good as its gears, and any gear system has its quirks that, at some point, will show up in a long enough exposure. A gear system will 'slip' or 'tick' or hit a seam not dealt with in manufacture, or otherwise cause either a shift or a bump in the exposure. With my scope as currently configured, that puts about a 3 minute limit on my exposures; I WILL have some type of movement/shift in the starfield on longer ones. If I went with the CCD system, where I'd be at f/2 with a much wider FOV, this might not be noticeable. But where I'm at now, limited to about 1/2 a degree FOV, it is.

    Frankly, it’s one of the reasons I’m no longer happy with my telescope; the other being the limited FOV that would cost me about $800-2000 to address and would not be an easy thing to switch away from if I actually wanted to look through the telescope (I’d be stuck with it, and forced to use a laptop to see what the telescope sees, always–and I’d be on a CCD system instead of a DSLR system to boot.)

    I’d have to go back to find where you posted what rig you’re building, but keep that in mind for larger aperture mirrors. That, and every little shake (wind, footsteps, etc.) will mess up your shot–and it really doesn’t take much at all to do so.

    #45474
    fluffybunny
    Participant

    I have two working telescopes currently, the third being the 16″ Newt that I’m building.

    Working telescope #1 is an equatorial fork C8 Ultima, f/10 with a 6.3 focal reducer (about halfway down this page):
    http://www.telescopebluebook.com/sct/celestron.htm

    Working telescope #2 is a GEM mounted 120mm f/5 achromat.
    OTA: http://www.cloudynights.com/classifieds/showproduct.php?product=61717&sort=&cat=all&page=1
    Mount: http://www.astro-baby.com/heq5-rebuild/heq5-m1.htm

    I plan on having an auto guiding subsystem for one or both of them.

    So the way I understand it, there are two basic types of tracking system errors: the quick, sharp "catch" type motion from footsteps, wind, or bearing/gear defects, and the slow error of less-than-perfect polar alignment. The auto guider can compensate for the latter but not the former.

    I appreciate you relating the field experience you've had; it gives me at least a vague ballpark of what I can expect in terms of maximum exposure length. Your earlier explanation using bucket depth was also useful in reaffirming a suspicion I've had but never followed up on: you can use many short exposures, but there is an optimum length, and below it you start to miss details.

    #45475
    ravnostic
    Participant

    You’re quite welcome; learned some things myself in the process. One more note, though, and that’s dynamic range. Let’s say you’re shooting Orion’s nebula. If you want the detail in the nebula, you’ll blow out the Trapezium. If you want the Trap, you’ll miss the nebula. It really does take HDR to capture it all, so to speak.

    //Currently restacking the Eagle Nebula, till I’m satisfied with improved results. Using 2x drizzle for more detail/structure; working on finding the right parameters. Will post when I’m happy. If I get happy.

    #45476
    chupathingie
    Participant

    Add a 3rd error to the tracking train (rav touched on this above): periodic error. It’s due to manufacturing imperfections in the worm or circular gears in the drive. Most good quality drive systems these days come with Periodic Error Correction (sometimes they just say PEC in the description), which “learns” where the irregularities are and compensates accordingly.

    And I hear ya about weather… I often get clear skies (dry desert air/4000ft elevation), but I’m on the edge of the caprock here and the wind rarely slows down enough to shoot off subs.

    Oh… something to keep in mind regarding drives: http://www.explorescientific.com/telescopedrivemaster/
    I’m waiting for the prices to come down on these… which shouldn’t be hard, since it’s likely to be a while before I ever put my money where my mouth is and build that scope I’ve been wanting.

    #45477
    ravnostic
    Participant

    So I reworked that Eagle Nebula (nest) shot. I 2x drizzled to make it bigger (cropping down to it, of course). I went for true color and as much detail as I could eke out of it, sacrificing a good many shots so that what's left (only 14 lights now) is the best of the best. Stacked it against 12 flats and 7 darks. A little noisy, yes, but far more detailed; you can make out several Bok globules. I will need to revisit this again, and soon. I think next weekend (well, Sunday) I may go out with the scope, or perhaps the weekend after (when I hope to have my 2Ti again, with far better resolution/less noise). But in the meanwhile, I still like this better than the one posted earlier, for the added detail and trueness of tone/color.

    https://plus.google.com/photos/107857888121727893520/albums/5799203425257274321/5799203430624633938

    #45478
    chupathingie
    Participant

    For only 14 subs, that still looks remarkably good.

    #45479
    chupathingie
    Participant

    Oh! Something that’s been bugging me about the whole number of exposures vs S/N for a while that I finally articulated upstairs.

    When dealing with targets with a high dynamic range, a larger number of exposures carries the day… try shooting the Horsehead with fewer, longer exposures. You'll find that Zeta Orionis completely blows out its little corner of the sky. We're still bound by the rules of regular photography: don't blow out your highlights. So you either choose multiple exposure durations and combine a la HDR, or go with a mad number of shorter exposures and brute-force the noise into the dirt. Going the HDR route costs you detail on the low end of the histogram, while doing the insane number of shorter subs costs you time but gives you a seriously smooth final to stretch levels on without the glare from brighter stars.

    #45480
    ravnostic
    Participant

    What, no pic, chupa? Share!

    //It's (mis)fortunate that I wouldn't have that problem, as Zeta Orionis is far enough outside my FOV when centered on the Horse that I wouldn't capture it.
