SYSTEMS, ISSUES, AND PROCEDURES

1. The CCD Camera and Its Attachment to the Telescope

The number and quality of CCD cameras available to the amateur astronomer grow daily. Whether you are willing to spend thousands of dollars or have only a few hundred and a willingness to build a camera yourself, there is a CCD system to match both your pocketbook and your imaging needs.

CCD cameras have inherent characteristics that should be briefly noted. Each CCD chip and its attendant electronics have varying degrees of noise. The less noise, the better the CCD will be at imaging faint objects. The most prevalent sources of noise are readout noise and dark current (thermal) noise. (See the discussion of CCD signal and noise in Section 4.f, Achieving Good SNR.)

Dark current refers to the buildup of signal on the CCD chip -- even when no light is present -- due to the simple presence of heat. The noise component of dark current is significant relative to the faintness of most astronomical objects. The amount of dark current drops off quickly as temperature falls. By cooling a CCD chip, the dark current can be reduced to an acceptable level, and its effects can be further mitigated in the calibration process. Good astronomical CCDs have cooling systems that can lower chip temperatures by 35 degrees C or more.
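A common rule of thumb is that dark current roughly halves for every 5-7 degrees C of cooling. The sketch below uses that rule to show why a 35-degree drop matters; the doubling interval and the ambient rate are illustrative assumptions, not measured values for any particular chip.

```python
# Rule-of-thumb sketch: dark current roughly halves for every ~6 C of cooling.
# The halving interval and the ambient-temperature rate are assumptions for
# illustration, not specs of any real camera.

def dark_current(rate_at_ambient, delta_t, halving_interval=6.0):
    """Dark current (e-/pixel/s) after cooling delta_t degrees C below ambient."""
    return rate_at_ambient * 0.5 ** (delta_t / halving_interval)

# A chip generating 100 e-/pixel/s at ambient, cooled by 36 C:
print(dark_current(100.0, 36.0))  # 100 * 0.5**6 = 1.5625 e-/pixel/s
```

Six halvings reduce the dark current by a factor of 64, which is why even modest thermoelectric cooling makes faint-object imaging practical.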

Readout noise exists because CCD camera electronics cannot determine and portray with perfect accuracy how many electrons are stored in each pixel from impinging photons. This sampling error introduces noise into the system. Some cameras use "correlated double sampling," which in effect samples each pixel twice to measure as accurately as possible how many electrons are resident in it. Once the analog voltage from each pixel is determined it can be digitized, but low-level random variations in the digital output further contribute to readout noise.
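The idea behind correlated double sampling can be illustrated with a toy simulation: each pixel read takes two samples -- the reset (baseline) level and the signal level -- and reports their difference, so any offset common to both samples cancels. All the numbers below are illustrative, not taken from any real camera.

```python
import random

# Toy sketch of correlated double sampling (CDS). The reset offset and noise
# figures are invented for illustration only.

def read_pixel_cds(signal_electrons, reset_offset, noise_sigma, rng):
    baseline = reset_offset + rng.gauss(0, noise_sigma)                  # sample 1
    level = reset_offset + signal_electrons + rng.gauss(0, noise_sigma)  # sample 2
    return level - baseline  # the shared offset cancels in the difference

rng = random.Random(42)
reads = [read_pixel_cds(1000.0, 250.0, 5.0, rng) for _ in range(10000)]
mean = sum(reads) / len(reads)
print(round(mean))  # close to the true 1000 e-, despite the 250 e- offset
```

Note that in this simple model the subtraction removes the fixed offset but not the random component; real CDS is timed so that the same reset-noise instance appears in both samples and cancels as well.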

The analog voltage in each pixel is digitized through the camera's analog-to-digital (A/D) conversion electronics. CCD cameras convert the analog signal into digital increments so that the image can be represented by pixel grayscale values. The number of increments your CCD divides the analog signal into determines the precision of its representation of the imaged object. Good cameras have at least 12-bit A/D conversion (4,096 levels), with most having 16-bit A/D conversion (65,536 levels).
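The relationship between bit depth and grayscale precision is simple arithmetic, sketched below. The full-well capacity used in the example is an assumed figure, not a spec of any particular camera.

```python
# Sketch of how A/D bit depth sets grayscale precision. The 100,000 e-
# full-well capacity is an illustrative assumption.

def adu_levels(bits):
    """Number of distinct grayscale values an n-bit converter can represent."""
    return 2 ** bits

def digitize(electrons, full_well=100_000, bits=16):
    """Map a pixel's electron count onto the converter's output range."""
    levels = adu_levels(bits)
    adu = int(electrons / full_well * (levels - 1))
    return min(max(adu, 0), levels - 1)  # clamp to the valid output range

print(adu_levels(12), adu_levels(16))  # 4096 65536
print(digitize(50_000))                # half-full pixel -> 32767
```

With only 12 bits, the same half-full pixel would land at 2047 of 4,095 steps; the finer 16-bit scale preserves subtler brightness differences in the image.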

Many thousands of words could be written here on system trade-off parameters: cost, readout noise, quantum efficiency across the imaging spectrum, system gain and dynamic range, cooling method and efficiency, shuttering, and so on. But once the imager has chosen a camera and is ready to put it to use, those parameters are locked in, and only a few items remain important to consider when planning an imaging session. These are:

a. Pixel size and chip size. Taken together with the focal length of the optical system, these measurements are key to establishing the FOV that will be imaged and the resolution (i.e., level of fine detail) that the image will provide. As stated in the Glossary, the approximate width of the FOV in arcseconds can be calculated by dividing the width of the CCD chip in microns by the focal length of the optical system in millimeters and multiplying the result by 206. Similarly, the FOV, or angular resolution, of each pixel can be determined using the same equation, substituting the width of a single pixel for the width of the entire chip. To achieve images wherein no information is lost due to insufficient sampling of the data available from the optical system, each pixel should be no larger than 50% of the size of the expected image point spread function (PSF -- see the Glossary). The imager should try to match focal length to pixel size and chip size to plan for the appropriate system resolution and appropriate FOV for the object(s) being imaged. Sky seeing conditions and other factors will often frustrate the imager's best plans, but establishing appropriate imaging system parameters for your camera will lead to the best long-term results. See http://www.nto.org/whatusee_en.html for a Windows-based program to help calculate the FOV for an imaging system.
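The FOV arithmetic above can be sketched in a few lines. The chip width, pixel size, focal length, and seeing figures below are assumed example values, not recommendations.

```python
# Sketch of the FOV calculation described above. The 206.265 factor is
# 206,265 arcseconds per radian divided by 1,000 microns per millimeter.

def fov_arcsec(width_microns, focal_length_mm):
    """Angular size covered by a detector (or a single pixel) of given width."""
    return width_microns / focal_length_mm * 206.265

# Assumed example: 9-micron pixels and a 6.9 mm-wide chip at 2000 mm focal length.
pixel_scale = fov_arcsec(9, 2000)    # ~0.93 arcsec per pixel
chip_fov = fov_arcsec(6900, 2000)    # ~712 arcsec, about 11.9 arcmin

# Sampling check from the text: each pixel should be <= 50% of the PSF.
seeing_psf = 3.0                         # assumed seeing-limited PSF, arcsec
print(pixel_scale <= 0.5 * seeing_psf)   # True -> adequately sampled
```

Shortening the focal length (or binning pixels) widens the FOV at the cost of angular resolution, which is the trade-off the text asks the imager to plan around.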

b. Chip location at the focal plane. When first using a camera, the imager may find that reaching focus is difficult due to uncertainty about the location of the optical system's focal plane or the recessed location of the chip in the camera. The chip must be positioned at the focal plane to bring the camera into focus. One easy way to determine the location of the focal plane is to mark the location of an eyepiece's focal plane (the same as its field stop) on the outside of the eyepiece and then bring that eyepiece into focus on a star. The mark on the outside of the eyepiece will then match the position of the telescope's focal plane. Coupling this knowledge with knowledge of the distance that the chip is recessed inside the camera housing should allow the imager to determine whether the CCD system can be brought into focus.

c. Vignetting. It is very important to minimize vignetting of the CCD chip. In order for all portions of the chip to be illuminated by the full diameter of the telescope's primary optic, the secondary mirror and other optical elements must be of sufficient size and adequate design, and any apertures created by the focuser mechanism or system baffles must be wide enough not to restrict the optical path. Any vignetting will result in noticeable degradation of signal values in the outer regions of the chip. The darkening effect on sky background signal in peripheral regions can be reversed by flat-fielding, but lost SNR cannot be restored. Perhaps the easiest way to inspect for potential vignetting is to cut an aperture in a piece of paper the size and shape of the chip, then place this aperture at the focal plane during the daytime and look through it with the eye directly behind the aperture. Look at the primary optic to see if its entire surface can be seen through all portions of the aperture. If it can, there should be no vignetting; if it is partly obscured, then it should be readily apparent which part of the interceding optical or mechanical structure needs to be changed. See http://home.att.net/~dale.keller/atm/newtonians/newtsoft/newtsoft.htm for a Windows-based program to help determine vignetting of a Newtonian optical system.
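For one common culprit, the drawtube, the paper-aperture test can also be approximated on paper. The sketch below models only a single circular aperture a fixed distance inside the focal plane and an unobstructed converging cone; real systems also involve secondary sizing and baffles, which this ignores, and all dimensions are assumed example values.

```python
# Geometric sketch of drawtube vignetting under simplifying assumptions:
# a single circular aperture (the drawtube opening) at a known distance
# inside the focal plane, and a light cone set by the focal ratio.

def drawtube_clear(tube_diameter_mm, tube_to_focus_mm,
                   focal_ratio, chip_half_diagonal_mm):
    """True if the drawtube passes the full light cone for the chip's corner."""
    # On-axis cone radius at the tube: (distance / focal ratio) / 2.
    cone_radius = tube_to_focus_mm / focal_ratio / 2.0
    # An off-axis image point shifts the whole cone sideways by its offset.
    needed_radius = cone_radius + chip_half_diagonal_mm
    return tube_diameter_mm / 2.0 >= needed_radius

# Assumed example: 32 mm drawtube, 60 mm inside focus, f/10, 6 mm half-diagonal.
print(drawtube_clear(32.0, 60.0, 10.0, 6.0))  # 16 mm >= 3 + 6 mm -> True
```

The same check with a 14 mm opening fails, showing how a narrow drawtube close to focus clips the cone for the chip corners even when the center of the field is fully illuminated.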

d. Using a turret or slide or flip-mirror to center objects visually. One of the more difficult imaging tasks can be locating and accurately centering an object in the CCD's FOV. Unless you are fortunate enough to have a perfectly aligned system with extremely accurate automatic pointing, objects must be located within the FOV by manual means (i.e., star hopping and use of low-power finders). Even amateurs accustomed to manual visual work will be challenged and often frustrated by this requirement unless some means is adopted for centering objects visually within the main scope's eyepiece FOV before replacing the eyepiece with the CCD camera. This is where a two-position turret or slide mechanism or a flip-mirror assembly to divert the optical path to a fixed eyepiece comes in handy. Having to exchange an eyepiece and a camera in the same focuser tube requires constant rechecking of focus and creates additional flat-fielding requirements since camera orientation is unavoidably changed. Using a two-position rotating turret or slide assembly or a flip-mirror assembly that has focuser tubes for both an eyepiece and the CCD camera solves the problem by allowing visual centering before moving the fixed-orientation CCD into the eye's position. In addition, the finding/centering eyepiece can be parfocalized with the CCD chip so that changes in telescope focus can be accommodated rapidly by visual monitoring and adjustment as the night progresses. Please go to http://www.ghg.net/cshaw/slide.htm to see more about turret and slide designs. See http://members.aol.com/STRG8ZR/homeyer/homeyer.html for Andy Homeyer's flip mirror.

e. Filter wheels/slides. If you plan to do filtered imaging to concentrate on specific wavelengths or produce color images, attaching a rotating or sliding filter-holder mechanism in front of the camera provides a great advantage. Without such a device, filters must be separately and manually inserted into the optical path, requiring careful refocusing and unavoidably changing the camera's flat-field orientation. Filters carefully selected to have the same thickness will be parfocal in a filter wheel or slide, allowing quick filter changes without refocusing. Please see http://www.ghg.net/cshaw/filter.htm to see Andy Saulietis' filter wheel design. See http://members.aol.com/STRG8ZR/homeyer/homeyer.html for Andy Homeyer's filter wheel and other accessories.