See the 2dF User Manual for general procedures for preparing 2dF fields, observing and data reduction. The following procedures are those that are specific to the 2dF Galaxy Redshift Survey.
(See Chapter 5 of the 2dF User Manual.)
The first step is to check the configurations produced by GBD in Oxford (eventually the procedure may become sufficiently automated that these configurations can be produced by the observer). We use GBD's special version of configure, which has a few extra hidden switches for use in setting up the 2dFgg fields.
Login as twodfgg with password Easy2Forget. Use one of the faster machines, like aatssf, but not a machine running 2dfdr from the same username. Then begin by typing
cd configure
Now edit the dramastart setup file in this directory and change CONFIG_FILES to point to the location of the current set of poschecks (e.g. ~2dF/config/poscheck_jan99/m??, where ?? is 30 for the SGP and 05 for the NGP). Now execute this file using
source ./setup
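For example, the relevant line in the setup file might look something like the fragment below (this assumes a csh-style setup file, since it is sourced by dramastart; the exact poscheck directory changes with each poscheck run, so substitute the current one):

```
# Hypothetical line in the dramastart setup file: point CONFIG_FILES
# at the current poschecks (m30 for the SGP, m05 for the NGP).
setenv CONFIG_FILES ~2dF/config/poscheck_jan99/m30
```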
If ALL you have been left with is a .fld file and a corresponding .imp file, then the first thing to do is to run
configure -i -f rootname.fld
This will read in the desired fibre assignment table from rootname.imp and set up a DRAMA file (rootname.sds) with this allocation, but without flagging any errors in the process. If you started with a DRAMA file (rootname.sds) then this stage has probably already been done for you, so you can proceed directly to the next stage.
The next step is to try to reconcile the rootname.sds file with the current state of the instrument. Make sure that you have the CONFIG_FILES environment variable set as described above. Now run
configure -q rootname.sds -p platenum
where platenum is either 0 or 1. GBD's convention is to use plate 0 for even-numbered fields and plate 1 for odd-numbered fields.
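As a concrete illustration of that convention (the snippet below is just shell arithmetic for the observer's benefit, not part of the 2dF software):

```shell
# GBD's convention: plate 0 for even-numbered fields, plate 1 for odd.
field=133
platenum=$(( field % 2 ))
echo "configure -q rootname.sds -p ${platenum}"
```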
This will take a few minutes per field, and at the end it writes out rootname_new.sds (this is the file you will actually use to configure the fibres for observing).
This process will very probably flag some errors when it checks whether the configuration is valid over +/-4hr in HA. The observer must manually recover from these errors. To do this you must run configure as normal,
configure -p platenum
and then use Open SDS... from the File menu to read in the rootname_new.sds file generated in the previous step. It is probably best to use the -p option to specify the appropriate plate number (although you can set it manually from the menu, so long as you do this before you read in the field).
Now hit the F2 key. This will run the check over hour angles from -4h to +4h, and attempt to correct any errors that are generated. This routine will preferentially deallocate sky fibres and low-priority (4 or 5) targets to try to recover lost allocations.
At the end of this procedure you will probably find that there are fewer fibres allocated to sky positions than you would really like (see the entries in the Spec 1 and Spec 2 boxes of the main configure screen). You can now try to assign unallocated fibres to sky. If no sky positions are accessible, you can generate new ones using CTRL-MiddleButton with the cursor located in an empty part of the field. It is also worth reallocating sky fibres that cross many object fibres, as these slow down the configuration. You should also check how many fiducial fibres are allocated to stars, and try to allocate extra stars if possible.
Finally you can use the View menu to highlight the unallocated objects with high priorities to see if there's anything you could recover (6-9 is the important range, and you should feel free to deallocate anything in the 4-5 range if this will help!).
When you finish these checks you should re-run the check over hour angle from the Commands menu to make sure there are no new errors.
When you are all done, run Save from the File menu to save the rootname_new.sds file you have generated, which you will use to configure the field for observing. You should also generate a List from the File menu of the allocated sky positions as a Digital Sky Survey input file. You can generate this file for a single .sds file without effort using
configure -z -q filename.sds
You can use the Sparc in the control room (aatssf) to read the sky survey CD-ROMs using
getimage -j -i filename.dss
This will generate a set of postage stamp images for the sky fibres, which you can look at using the -vsmap option to xv. (Use xv31 at the AAT, as the default version doesn't read FITS files.) The sky positions will be named here as S???, where ??? corresponds to the fibre number. Deallocate any sky fibres that look as if they land on objects. See getimage -help for more information on how to run getimage.
(See Chapter 8 of the 2dF User Manual.)
At present (with a typical configuration time of 1.33 hours) the standard set of exposures on a field is:
Using the night sky lines to give relative fibre throughputs seems to work just as well as using offset sky exposures, so offset skies are no longer required. With 120s for each CCD readout, the above series of exposures totals 1.2 hours, so with acquisition overheads it comes to about the configuration time. However, if there is spare time (e.g. due to a configuration problem) you might like to take 3x200s offset sky exposures.
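For example, the extra cost of the optional offset skies, using the 120s readout time quoted above, works out as:

```shell
# 3 x 200s offset sky exposures, each followed by a 120s CCD readout
nexp=3; texp=200; tread=120
total=$(( nexp * (texp + tread) ))
echo "${total} s"   # 960 s, i.e. 16 minutes on top of the standard series
```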
(See Chapter 8 of the 2dF User Manual.)
Start by logging in as twodfgg (password Easy2Forget) on a fast machine. The new procedure for data reduction is to use ~twodfgg/sjm/zred/getraw.pl to copy the raw files into a directory tree where you use 2dfdr. The files are copied into new directories in your current directory, named using the configure file, spectrograph (i.e. CCD) and date of observation read from the sdf file headers. For example
cd ~twodfgg/2dfred
~/sjm/zred/getraw.pl /vaxinst/ccd_1/980617
will look at the sdf files in the target directory and copy the raw files into files like
~twodfgg/2dfred/ngp.config.133_980617/ccd_1/17jun0008.sdf
All of the frames for each configuration file go into one directory, where you can run 2dfdr without worrying about interloper files from the wrong observations. Each time you run getraw it will check all of the files in the target directory but will copy only the new sdf files.
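To illustrate the naming scheme, the destination path in the example above decomposes as follows (getraw.pl itself reads these values from the sdf headers; the variables here are just for illustration):

```shell
# getraw.pl composes the destination from the configure-file name,
# the CCD (spectrograph) and the observing date in the sdf headers:
config=ngp.config.133; ccd=ccd_1; utdate=980617
dest="${config}_${utdate}/${ccd}/17jun0008.sdf"
echo "$dest"
```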
Now change directory to the field and CCD you are going to reduce, and start up 2dfdr; for example
cd ngp.config.133_980617/ccd_1
2dfdr
drcontrol
You have to run 2dfdr to set things up before you first run drcontrol. Note that drcontrol needs to be started in the same directory as the data. It is supposed to be a bad idea to run drcontrol and configure from the same account on the same computer.
Select the Find tramlines command from the commands menu and use the flat-field frame. Check that the automatic routine has not missed any fibres (or confused the numbering) by zooming in on the plot and stepping through from bottom to top using the Next and Previous buttons. If it has missed a fibre, use the a key. If it has added an extra one use the d key. Click on the Quit button when you're finished. Beware of the first or last fibre falling off the CCD because of mis-positioning of the fibre slit. If you have a problem with automatically fitting the tramline map, try selecting the Plot tram map option in the Extract menu to force manual examination of the tramlines - mysteriously, this seems to circumvent the problem.
Change the throughput calculation method to Skylines to use the night sky lines for the relative fibre throughput corrections. (If you have offset skies you can also try leaving this set to the default and comparing the results.) Then click the Setup button and then Autoreduce - 2dfdr should now do all the reductions for you. The final step is to combine the object frames for each CCD and configuration to make a combined image - by default called combine_frame.sdf.
Then use
cd ~twodfgg/2dfred
~/sjm/zred/getcombined.pl ngp.config.133_980617 reduced/
to collect all the combined_frame.sdf files in ngp.config.133_980617/ccd_* into the target directory (in this example, giving them names reduced/ngp133_980617_*.sdf).
The next step is to check some diagnostic plots and then measure the redshifts. You do this using the runz command, which takes the file name as an argument (leave off the .sdf from the name). For example:
cd ~twodfgg/2dfred/reduced
~/sjm/zred/runz ngp133_980617_1
This will write some information on the terminal and present the diagnostic plots (showing the absorption band correction, the mean S/N per pixel, the catalogue magnitude versus the fibre magnitude, and the difference between catalogue and fibre magnitudes as a function of position on the field) in the pgplot window. Press q with the cursor in the pgplot window to stop there. Any other key will go on to quickly plot the templates and stop at the first galaxy spectrum, with features marked at the best automatic redshift.
The program offers various options (listed on the terminal):
Using these options, step through all the spectra, estimating redshifts and assigning quality flags. There are four files written out by the program:
ngp133_980617_1.zlog
ngp133_980617_1.zs
ngp133_980617_1.rz
ngp133_980617_1z.sdf (a copy of the .sdf with the redshift and quality flags added to the header)
The ngp133_980617_1z.sdf files are the ones that will be used for the database. You can get a summary of redshift completeness using
~/sjm/zred/count.pl *.zs
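As a rough sketch of the kind of tally count.pl produces (this assumes, purely for illustration, that each .zs line ends with the quality flag in its last column; check the actual .zs format before relying on this):

```shell
# Count spectra with quality flag >= 3 (a reliable redshift) in a
# made-up three-line sample. The flag-in-last-column layout is an
# assumption, not the documented .zs format.
printf 'obj001 0.1021 4\nobj002 0.0000 1\nobj003 0.2210 3\n' |
awk '{ n++; if ($NF >= 3) good++ } END { printf "%d/%d reliable\n", good, n }'
```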
You can add the ngp133_980617_1.rz file to the current allredshifts file and make a new cone plot using ~/sjm/plots/coneplot. The diagnostic plots can be done separately using ~/sjm/zred/magplots.
You can list out interesting header information from the .sdf files using hlist. There are a couple of scripts, hlistall and hlistraw, which search directory trees and list the information for all the relevant files they find.