HQM - Hamburg Quasar Monitoring

Observations

The primary observing equipment was a CCD camera in the Cassegrain focus of the 1.23 m telescope of the DSAZ observatory on Calar Alto, operated by the Max Planck Institute for Astronomy in Heidelberg.

Through cooperation with other observers and from observations at other telescopes we have also used data from Calar Alto (1.23 m, 2.2 m, 3.5 m), the Nordic Optical Telescope (NOT) at La Palma, ESO (2.2 m, NTT, 3.6 m, Danish 1.54 m, Dutch 0.9 m), the 70 cm telescope at the Landessternwarte Heidelberg, the HST, the SAO 6 m BTA, the CFHT, and the Las Campanas 1.0 m telescope.

During the lifetime of the program CCD technology evolved, and different chips with different pixel sizes, quantum efficiencies, spectral responses and chip sizes were used. Sometimes it became necessary to observe in binned mode; sometimes cameras were mounted rotated or the readout was flipped, so that a total of 130 different camera configurations entered the HQM data pool.

As the quantum efficiency of the CCDs was highest in the red, we chose the Johnson R filter as our primary filter. In the case of extraordinary events we also used Johnson B and V. A uniform Johnson R filter characteristic was not achievable throughout the campaign: even at Calar Alto different filters with Johnson R characteristics were used. The specifications of the filters used by cooperating observers usually remained unknown, but surely fell within the range of a broad-band R filter.

The CCD frames were reduced using standard flatfield, bias and dark corrections. For some very early data from 1985 and 1986 additional stripe corrections became necessary, which resulted from technical problems at that time.
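The basic frame reduction amounts to a few array operations. The following is a minimal sketch, assuming master bias, dark and flat frames are already available as NumPy arrays; the names are illustrative, not the original HQM code:

    import numpy as np

    def reduce_frame(raw, bias, dark_rate, flat, exptime):
        """Standard CCD reduction: bias, dark and flatfield correction.

        raw, bias, flat : 2D arrays (master frames for bias and flat);
        dark_rate       : dark current per pixel per second;
        exptime         : exposure time in seconds.
        """
        science = raw - bias - dark_rate * exptime   # remove additive signals
        norm_flat = flat / np.median(flat)           # flat normalized to unit median
        return science / norm_flat                   # correct pixel-to-pixel sensitivity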

In general, observations were carried out with two consecutive exposures, one of 100 seconds and a second with a longer exposure time between 300 and 1000 seconds. Both frames were reduced independently and should give the same photometric value within the error bars. This procedure was used to control the automatic process and to eliminate possible data corruption from cosmics, dust, or fit errors. During the first years of HQM, when the pointing of the telescope was poor and the guiding TV unit had only low sensitivity, the short exposure was used online to calculate the pointing offset for the telescope.
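The consistency test between the two exposures reduces to a comparison within the combined error bars. A minimal sketch, with hypothetical names and a 3-sigma threshold chosen here only for illustration:

    def exposures_consistent(mag_short, err_short, mag_long, err_long, n_sigma=3.0):
        """True if the short and long exposure agree within their combined errors."""
        combined_err = (err_short**2 + err_long**2) ** 0.5
        return abs(mag_short - mag_long) <= n_sigma * combined_err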

Photometric Reduction

We used relative photometry in a semi- and fully automatic procedure with the following steps:

  • A set of reference stars was defined manually on one of the first CCD frames. Later, with growing field sizes or with shifted CCD positions, additional stars were included in the ID list. This set of objects is not fixed and can be changed when information on the objects changes, e.g. when variability of a reference star becomes visible, or for test purposes. Therefore a set of "sleeping" reference stars was added, which were handled like active reference stars but not used for the photometric calculation of the lightcurve. Both groups can exchange members: a bad reference star can be shifted to the sleeping set, and a good candidate in the sleeping set may become an active reference star. As lightcurves are recalculated with every new frame, a change in the reference lists acts back on all previous frames.
  • Photometric values for the target and the stars in the observed frame were measured by fitting a 2D Gaussian to the intensity profile of each object. Fit errors or FWHM values inconsistent with the other objects in the frame excluded a star from the photometric list. Additionally, a simple count procedure over the pixel values belonging to the object's intensity profile checked the result of the Gaussian fit for reliability (see the fit sketch after this list).
  • The frame with the highest number of reference stars was chosen to be the calibration reference (CF).
  • All other frames (OF) belonging to the lightcurve were normalized to CF using the identical set of stars in both frames. This procedure was looped until all frames were normalized.
  • To calculate the lightcurve for object x we used each reference star s_i with the ratio

        d mag_i(OF) = -2.5 log10 [ ( s_i(CF) / x(CF) ) * ( x(OF) / s_i(OF) ) ]

    The final d mag is then a magnitude-weighted mean over all stars s_1 to s_n (a sketch of this computation follows after this list).
  • In a second step the quality of the reference stars was checked. Every reference star was temporarily removed from the ID list and handled in the same way as the target. In case of discrepancies in the lightcurve of a reference star, the star could either be degraded to the set of sleeping reference stars, or a single measurement possibly affected by cosmics, CCD defects, dust grains, satellite trails etc. could be excluded (see the leave-one-out sketch below).
  • Manually, the lightcurve can be sharpened further by excluding values based on several parameters (exposure time, seeing conditions, background brightness etc.).
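The 2D Gaussian fit and the simple count cross-check from the second step can be sketched as follows. This is a minimal illustration built on scipy.optimize.curve_fit, not the original HQM fit code, and all function names are hypothetical:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sigma, offset):
        """Circular 2D Gaussian plus a constant sky background."""
        x, y = coords
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return offset + amp * np.exp(-r2 / (2.0 * sigma ** 2))

    def fit_star(frame, xg, yg, box=7):
        """Fit a 2D Gaussian around the integer pixel guess (xg, yg);
        return total flux, fitted position and FWHM."""
        cut = frame[yg - box:yg + box + 1, xg - box:xg + box + 1]
        y, x = np.mgrid[yg - box:yg + box + 1, xg - box:xg + box + 1]
        p0 = (cut.max() - np.median(cut), xg, yg, 2.0, np.median(cut))
        popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), cut.ravel(), p0=p0)
        amp, x0, y0, sigma, offset = popt
        fwhm = 2.3548 * sigma                    # 2 * sqrt(2 ln 2) * sigma
        flux = 2.0 * np.pi * amp * sigma ** 2    # analytic volume under the Gaussian
        return flux, (x0, y0), fwhm

    def aperture_sum(frame, x0, y0, radius, sky):
        """Simple count cross-check: sum of sky-subtracted pixels within radius."""
        yy, xx = np.indices(frame.shape)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        return (frame[mask] - sky).sum()

In this sketch a star would be dropped from the photometric list when the fit fails or when flux or FWHM disagree strongly between the two estimates.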
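The relative magnitude formula above translates directly into a weighted mean over the reference stars. A minimal sketch with illustrative names:

    import numpy as np

    def delta_mag(x_of, x_cf, stars_of, stars_cf, weights=None):
        """Relative magnitude of object x in frame OF with respect to CF.

        x_of, x_cf        : counts of the target in OF and CF;
        stars_of, stars_cf: counts of the reference stars s_1..s_n in OF and CF;
        weights           : optional per-star weights (e.g. by brightness).
        """
        s_of = np.asarray(stars_of, dtype=float)
        s_cf = np.asarray(stars_cf, dtype=float)
        dmags = -2.5 * np.log10((s_cf / x_cf) * (x_of / s_of))  # one estimate per star
        return np.average(dmags, weights=weights)               # weighted mean d mag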
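The leave-one-out quality check can be sketched in the same spirit; the array layout and names are assumptions, not the original implementation:

    import numpy as np

    def reference_lightcurves(star_counts, cf_index):
        """Compute a lightcurve for every reference star, each time with
        that star removed from the reference set.

        star_counts : array of shape (n_frames, n_stars) with the measured
                      counts of all reference stars; cf_index selects CF.
        Returns an (n_stars, n_frames) array of d mag values.
        """
        n_frames, n_stars = star_counts.shape
        curves = np.empty((n_stars, n_frames))
        for k in range(n_stars):
            refs = np.delete(star_counts, k, axis=1)   # temporarily drop star k
            for f in range(n_frames):
                dmags = -2.5 * np.log10(
                    (refs[cf_index] / star_counts[cf_index, k])
                    * (star_counts[f, k] / refs[f]))
                curves[k, f] = dmags.mean()
        return curves

A reference star whose curve shows systematic scatter is a candidate for the sleeping set; a single outlier points at cosmics, CCD defects, dust grains or satellite trails.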

Calibrated Photometry

The result is a lightcurve with relative photometric values within the set of observed HQM frames.

Two initial calibration steps were used.

  • Known calibration sequences in the fields of the targets as given in the literature were used to calibrate a lightcurve directly.
  • During nights with photometric conditions we observed photometric standard stars given in the literature. With these we calibrated frames close in time and close in angular separation.

One calibrated frame immediately calibrates all other frames in the lightcurve of an object. By comparing the calibrated magnitude of a frame with its instrumental magnitude, we receive a sample of calibration constants "const" for each night with observations. A photometric night is indicated by a set of very homogeneous const values. By analyzing the const values we used a looped photometric network to calibrate all other lightcurves.

From nights with homogeneous const values we calculated photometric values for frames not yet calibrated, which in turn added new calibration constants for other nights, so that the network kept growing. This procedure was repeated until all lightcurves were calibrated. As this method is self-controlling, we received very reliable photometric values.
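The looped photometric network can be sketched as a fixed-point iteration. The frame structure, key names and the homogeneity threshold below are assumptions for illustration, not the original HQM code:

    import numpy as np

    def grow_network(frames, max_scatter=0.02):
        """Propagate calibration constants through the network of nights.

        Each frame is a dict with 'object', 'night', 'rel_mag' (relative
        magnitude within its lightcurve), 'instr_mag' (instrumental
        magnitude of the frame) and, once known, 'cal_mag'.
        """
        changed = True
        while changed:                               # loop until nothing new calibrates
            changed = False
            # one calibrated frame fixes the zero point of the whole lightcurve
            offsets = {fr['object']: fr['cal_mag'] - fr['rel_mag']
                       for fr in frames if fr.get('cal_mag') is not None}
            for fr in frames:
                if fr.get('cal_mag') is None and fr['object'] in offsets:
                    fr['cal_mag'] = fr['rel_mag'] + offsets[fr['object']]
                    changed = True
            # homogeneous const values mark a photometric night; its mean
            # zero point calibrates the remaining frames taken that night
            consts = {}
            for fr in frames:
                if fr.get('cal_mag') is not None:
                    consts.setdefault(fr['night'], []).append(
                        fr['cal_mag'] - fr['instr_mag'])
            for night, vals in consts.items():
                if len(vals) < 2 or np.std(vals) > max_scatter:
                    continue                         # night not (yet) homogeneous
                zp = np.mean(vals)
                for fr in frames:
                    if fr['night'] == night and fr.get('cal_mag') is None:
                        fr['cal_mag'] = fr['instr_mag'] + zp
                        changed = True
        return frames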

Data Handling

HQM was designed in 1988 to run nearly automatically. Data taken by the astronomer were sent immediately after readout from the telescope control computer to a MicroVAX, which automatically started the reduction. The sets of flatfields, bias and dark frames had already been loaded into the database. The reduction ended with the display of the new lightcurve on the screen. For a 500x500 pixel chip the computer needed around 3 to 5 minutes, for a 1K chip between 5 and 10 minutes. The astronomer was thus able to decide already from the first short exposure whether to add another exposure in other color bands or to switch to the next target. After the telescope was pointed at the new target, the system was ready for the next frame.

Later, after the observational campaign, a second identical computer system in Hamburg was used to eliminate uncertainties within the lightcurves, e.g. to deal with frames taken under bad observing conditions caused by clouds, moonlight or other effects.

In Hamburg the raw data were re-reduced and written to the image data archive. Each photometric reduction produced an entry in the photometric library for each star, and all fit results were written into the frame-specific fit result database. Telescope and observational parameters (pointing, seeing, temperature) were stored in other datasets.


Data, graphics and images may not be used without permission.
Contact: Jochen Schramm, Hamburger Sternwarte.