JWST Archival Product Reprocessing

The products in the eJWST archive have been processed with the latest version of the JWST processing pipeline and reference files available at the time of their release by STScI. However, users may want to reprocess the data themselves, for example using a different version of the reference files or a beta version of the processing pipeline.

Detailed instructions to install the JWST processing pipeline on the user's computer can be found in the JWST Data Reduction Pipeline page at STScI. For convenience, we summarise here one of the procedures to install the pipeline, together with example test runs.

JWST Data Reduction Pipeline Installation

  1. It is recommended to install each version of the pipeline in a fresh conda environment using a recent stable version of astroconda. In this case we will install version 0.15.1 of the pipeline as follows: first create a new conda environment, then activate it, and finally install the pipeline in it using pip:

    $ conda create -n env_name python
    $ conda activate env_name
    $ pip install git+https://github.com/spacetelescope/jwst@0.15.1
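
To check that the installation succeeded, the installed version can be printed from Python (a quick sanity check, not part of the official procedure; it assumes the new environment is still active):

    $ python -c "import jwst; print(jwst.__version__)"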

  2. The pipeline needs auxiliary files to run. These files can be retrieved automatically from the Calibration Reference Data System (CRDS), but these two environment variables have to be set first:

    $ export CRDS_PATH=$HOME/crds_cache
    $ export CRDS_SERVER_URL=https://jwst-crds.stsci.edu

If the pipeline has been installed correctly, a number of executables are now available from the command line prompt, in particular strun, the executable to run the pipeline.
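
A quick way to confirm that the executables are on the PATH (a simple sanity check, assuming the conda environment created above is still active) is:

$ which strun
$ strun --help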

JWST Data Reduction Pipeline Example Runs

There are different ways of executing the data reduction pipeline, as explained in the JWST Data Reduction Pipeline page, which we summarize here for completeness:

Stage 1: Apply detector-level corrections to the raw data for individual exposures and produce count rate (slope) images from the "ramps" of non-destructive readouts. The output of stage 1 processing is a countrate image per exposure, or per integration for some modes. Further details can be found in the JWST Data Reduction Pipeline documentation.

Stage 2: Apply physical corrections (e.g., slit loss) and calibrations (e.g., absolute fluxes and wavelengths) to individual exposures to produce fully calibrated exposures. The details differ for imaging and spectroscopic exposures. Further details can be found in the JWST Data Reduction Pipeline documentation.

Stage 3: Combine the fully calibrated data from multiple exposures and in most cases produce some kind of combined product. There are unique pipeline modules for stage 3 processing of imaging, spectroscopic, coronagraphic, AMI, and TSO observations. Further details can be found in the JWST Data Reduction Pipeline documentation.

In addition, there are several pipeline modules designed for special instrument or observing modes, which are described in the same documentation.


EXAMPLE 1: processing a NIRCam image with Detector1Pipeline + Image2Pipeline + Image3Pipeline.

Each step of the pipeline can be run from the command line using the strun executable as follows:

$ strun <class_name or configuration_file> <input_file>

The first argument to strun must be either the Python class name of the step or pipeline to be run, or the name of a configuration (.asdf or .cfg) file for the desired step or pipeline. The second argument is the name of the input data file to be processed.
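
For reference, in this version of the pipeline the default configuration files can be copied into the working directory with the collect_pipeline_cfgs tool installed alongside the pipeline, and one of them can then be passed to strun instead of the class name. A sketch of this alternative (assuming the tool is available in the environment):

$ collect_pipeline_cfgs .
$ strun calwebb_detector1.cfg jw00617001001_02102_00001_nrcb2_uncal.fits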

In the first stage of the pipeline we use the module Detector1Pipeline as follows:

$ strun jwst.pipeline.Detector1Pipeline jw00617001001_02102_00001_nrcb2_uncal.fits

The input file is jw00617001001_02102_00001_nrcb2_uncal.fits and the output file is a Level-2a countrate file jw00617001001_02102_00001_nrcb2_rate.fits, where the following steps have been performed: group_scale, dq_init, saturation, ipc, superbias, refpix, rscd, lastframe, linearity, dark_current, persistence, jump detection, ramp_fit, and gain_scale.
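
Individual step parameters can also be overridden directly on the strun command line with the --steps syntax. For instance, a sketch that raises the jump detection rejection threshold and skips the ipc step (the threshold value here is purely illustrative):

$ strun jwst.pipeline.Detector1Pipeline jw00617001001_02102_00001_nrcb2_uncal.fits --steps.jump.rejection_threshold=5.0 --steps.ipc.skip=True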

In the second stage of the pipeline we use the module Image2Pipeline as follows:

$ strun jwst.pipeline.Image2Pipeline jw00617001001_02102_00001_nrcb2_rate.fits

The input file is jw00617001001_02102_00001_nrcb2_rate.fits and the output file is a Level-2b calibrated file jw00617001001_02102_00001_nrcb2_cal.fits, where the following steps have been performed: background_subtraction, assign_wcs, flat_field, photom and resample.

In the third stage of the pipeline we use the module Image3Pipeline as follows:

$ strun jwst.pipeline.Image3Pipeline jw00617001001_02102_00001_nrcb2_cal.fits

The input file is jw00617001001_02102_00001_nrcb2_cal.fits and the output file is a Level-3 calibrated file jw00617001001_02102_00001_nrcb2_i2d.fits, where the following steps have been performed: assign_mtwcs, tweakreg, skymatch, outlier_detection, resample, and source_catalog.
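
Note that stage 3 is normally run on an association of several calibrated exposures rather than on a single _cal.fits file. As a rough sketch, such an association can be built with the asn_from_list tool distributed with the pipeline (the second exposure name and the product name below are purely illustrative):

$ asn_from_list -o image3_asn.json --product-name example_mosaic jw00617001001_02102_00001_nrcb2_cal.fits jw00617001001_02102_00002_nrcb2_cal.fits
$ strun jwst.pipeline.Image3Pipeline image3_asn.json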

Further details:
CRDS reference file mappings are usually set by default to always give access to the most recent reference file deliveries and selection rules. On occasion it might be necessary or desirable to use one of the non-default mappings in order to, for example, run different versions of the pipeline software or use older versions of the reference files. This can be accomplished by setting the environment variable CRDS_CONTEXT to the desired project mapping version, e.g.

$ export CRDS_CONTEXT='jwst_0421.pmap'

Check out https://jwst-crds.stsci.edu to see the different contexts.
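
For example, combining this with the commands above, a minimal reprocessing run of the first stage against that older context would be:

$ export CRDS_CONTEXT='jwst_0421.pmap'
$ strun jwst.pipeline.Detector1Pipeline jw00617001001_02102_00001_nrcb2_uncal.fits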