Gamma-ray Large Area Space Telescope
The GLAST instrument is composed of 16 towers, an Anti-Coincidence Detector
(ACD), and the Data Acquisition System (DAQ) mounted to a grid.
Each tower includes a Silicon Strip Detector Tracker (TKR),
a cesium-iodide (CsI) Calorimeter (CAL), and a Tower Electronics Module
(TEM). The subsystems are connected for data flow, commands, and telemetry
functions through an Instrument Data Bus (IDB).
Data Acquisition System
The Data Acquisition System (DAQ) of the GLAST instrument is a distributed
architecture system with identical units in each one of the 16 towers and a dual redundant Anti-Coincidence Detector (ACD)
interconnected by an
Instrument Data Bus (IDB). A Spacecraft Interface Unit (SIU) is used to
interface the Instrument specific requirements to Spacecraft (SC)
specific interfaces as described in the GLAST Interface Requirements Document (IRD).
Each Tower consists of a TKR, a CAL, and a TEM mounted within one cell
of a monolithic support grid. The Tower Electronics Module (TEM)
supports the Level 1 Trigger, Tracker (TKR) readout, and DAQ
management functions such as command processing, telemetry, file
management, scheduling, and instrument monitoring. The CAL is
composed of the CsI crystals with dual PIN diodes mounted on each end,
four CAL electronics boards (one on each side of the calorimeter), and
a CAL-TEM board (a modified standard TEM board). External interfaces for
each Tower include one cable to each next nearest neighbor tower (with
wraparounds for the towers on the perimeter), switched 28 volt power,
and two cables for the ACD (one for each redundant ACD unit).
Tower Electronics Module
The major components of the TEM board include:
- Level 1 Trigger
- Tracker readout
- Tower CPU
- 256 MBytes DRAM memory
- Main Bus
- DMA controller
- Instrument Data Bus node
- Multiple IDB links to other TEM boards
- Power conditioning for the TEM board
- Switched power conditioning for the TKR
- Housekeeping analog and digital monitoring
- System clock
- GPS synchronization pulse input
The CAL is supported by a modified version of the TEM board.
Differences between the CAL-TEM and TEM boards are in the power supply
voltages and currents for the CAL, the L1T FPGA (CAL-L1T), and the
readout FPGA to support the 4 CAL electronics boards. Control of the
CAL front end ASICs and ADCs is performed by dedicated FPGAs on the 4
CAL electronics boards. CAL readout of the 4 CAL electronics boards
mounted to the sides of the calorimeter is performed using the same
approach as used in the TEM for TKR readout. The CAL-TEM board
supplies the L1T FPGA on the TEM board with a CAL input computed in
the CAL-TEM L1T FPGA. The CAL-TEM L1T takes input from all of the
discriminators in the CAL and sends a CALREQ to the L1T whenever 3
discriminators simultaneously register more than 100 MeV.
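As a rough illustration (the function and names below are ours, not flight logic; the actual computation is performed in the CAL-L1T FPGA), the CALREQ condition can be sketched as:

```python
# Sketch of the CAL-L1T CALREQ condition: assert CALREQ when at least
# 3 CAL discriminators simultaneously register more than 100 MeV.
# Names and the software form are illustrative, not flight code.

CAL_THRESHOLD_MEV = 100.0
MIN_DISCRIMINATORS = 3

def calreq(discriminator_energies_mev):
    """Return True if the CALREQ trigger condition is met."""
    over = sum(1 for e in discriminator_energies_mev if e > CAL_THRESHOLD_MEV)
    return over >= MIN_DISCRIMINATORS
```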
Power at +28 V nominal is provided by the SV with direct connection to
the battery. The SV has one low voltage discrete logic output for
each TEM which is used for TEM reset of the processor and other
components on the TEM PWB. Internally, the TEM has control over the
various Power Supply outputs.
The functions performed by the DAQ include the following:
- Level 1 Trigger
- Data Readout of Anti-Coincidence Detector
- Data Readout of Tracker
- Data Readout of Calorimeter
- Data Readout of Level 1 Trigger
- Timestamp Data
- Time Synchronization of Towers
- Level 2/3 Trigger Processor
- Commanding of ACD, Tracker, Calorimeter, and Level 1 Trigger
- Telemetry Formatting and Output
- Onboard Data Storage
- Power Supplies
- Housekeeping Monitors
In addition to these functions, the DAQ also supports watchdog
processes, performs general purpose processing, and acquires data from
the housekeeping monitors of temperature, voltages, and currents on
each Tower Electronics Module.
Level 1 Trigger (L1T)
The Level 1 Trigger is the heart of the GLAST Instrument. The L1T
provides the trigger which initiates readout of the instrument. The
degree to which the GLAST instrument attains the objectives of the
mission and therefore the success of the GLAST mission will be
determined in part by the performance of the L1T.
The L1T receives
inputs from each of the detector subsystems, ACD, Tracker (TKR) , and
calorimeter (CAL), and must issue a trigger prior to the 1.3 us peaking
time of the Tracker SSD Front End Shaper. The primary purpose of the
L1T is to trigger the acquisition of event data and initiate readout.
Inputs to the L1T are shown in Table 1.
Table 1. Level 1 Trigger Inputs
- Anti-Coincidence Detector Low Level Threshold
- Anti-Coincidence Detector High Level Threshold
- TKRREQ (16 X, 16 Y): Tracker discriminator outputs from the fast OR
of each of the 16 X and 16 Y planes
- 20 MHz Tower clock
The ACD provides two inputs to the L1T. The first input is a low
threshold detection which can be used to veto the 3-in-a-row TKR
trigger. The ACD produces a high threshold
trigger input which is used to identify highly ionizing particles needed
for calibration of the calorimeter.
These events will be relatively few
and can easily be accommodated within the data stream.
The tracker REQ signals are formed from the OR of all 1600 SSD
discriminators on each plane of the tracker. The 16 X and 16 Y plane
REQs are individually input to the L1T PGA.
The calorimeter (CAL) input to the L1T is derived from a fast
discriminator which is ahead of the 3 us shaper in the calorimeter
front end. The purpose of the CAL input is to provide for capturing
very high energy events which are not detected in the tracker.
A CALREQ trigger will override the ACD veto input.
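The combined decision described above can be sketched as follows (an illustrative truth-level model, not the FPGA implementation; the actual algorithm is command-reconfigurable):

```python
# Sketch of the L1T decision: a tracker 3-in-a-row request may be
# vetoed by the ACD low threshold, while a CALREQ fires the trigger
# regardless of the veto. Names are illustrative, not flight code.

def three_in_a_row(plane_hits):
    """True if any three consecutive tracker planes all report hits."""
    return any(all(plane_hits[i:i + 3]) for i in range(len(plane_hits) - 2))

def l1t_decision(plane_hits, acd_low_veto, calreq):
    """Combine the TKR, ACD, and CAL inputs into the trigger decision."""
    tkr_trigger = three_in_a_row(plane_hits) and not acd_low_veto
    return tkr_trigger or calreq
```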
In addition to the inputs listed in Table 1, the L1T also receives
configuration commands from the Tower CPU via the main bus. The
commands include an AND mask which is applied to each input in order
to enable/disable the individual input signals. The enable/disable
function provides a means of masking failed inputs as well as testing
and calibrating the trigger inputs. The AND mask is followed by an OR
mask which serves a similar purpose. Finally, the internal logic in
the Field Programmable Gate Array (FPGA) can be reloaded via command to
modify the algorithm used to compute the L1T decision.
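The masking described above can be sketched as a pair of bitwise operations (the bit-level semantics here are an assumption based on the description, not a statement of the flight design):

```python
# Sketch of the commandable input masking: an AND mask that can
# force-disable individual trigger inputs, followed by an OR mask that
# can force-enable them for test and calibration.

def mask_inputs(raw_inputs, and_mask, or_mask):
    """Apply the AND mask then the OR mask to a word of trigger inputs."""
    return (raw_inputs & and_mask) | or_mask
```

Clearing a bit in the AND mask silences a failed or noisy input, while setting a bit in the OR mask forces an input active for testing and calibration of the trigger path.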
Level 1 Trigger Outputs
The function of the L1T is to initiate the readout of the full GLAST
instrument. This function is performed using one to four trigger signals
which are transmitted to adjacent towers using the same cable as the IDB.
The four signals are physically identical for redundancy,
but the function of each signal is programmable. In normal usage all
four trigger signals are required
to be active in order to initiate the readout of an event. This
requirement provides a means of rejecting noise within the signal
distribution system which might be caused by
EMI or single event upsets. The
multiple trigger signals provide for redundancy in the L1T system in
case of a failure
within one of the other signal subsystems of the L1T trigger output.
Only one trigger signal per tower is required to
continue to operate the GLAST instrument. Since the L1T FPGA is
reprogrammable inflight, it will be possible to reconfigure the
trigger signal requirements of the readouts.
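A sketch of the programmable coincidence requirement (the `required` parameter stands in for the in-flight FPGA reconfiguration; names are illustrative):

```python
# Sketch of the trigger-signal coincidence: normally all four
# physically identical trigger lines must be active to initiate
# readout; after a failure the requirement can be relaxed.

def readout_enabled(trigger_signals, required=4):
    """True if at least `required` of the four trigger lines are active."""
    return sum(bool(s) for s in trigger_signals) >= required
```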
Once a valid trigger has
been received by the tracker readout FPGA, a command is forwarded to
the tracker controllers to initiate latch of the TKR FEE and readout.
The CAL readout is similarly triggered by the L1T system to initiate
sampling and digitization by the calorimeter ADCs after a programmable
time delay to allow for the 3 us shaping time. The ACD sample and
digitization is similarly delayed within the ACD to allow for the TBD
ACD shaping time.
Level 1 Trigger Technology
Trigger technology is well known within the particle physics
community, but the complexity represented by the GLAST instrument is
greater than any previously used in a space flight instrument. GLAST
represents several differences between the particle physics practice
and the trigger technology required for space flight and for the GLAST
instrument. These differences include the need for low power, the
relatively slower trigger requirements for GLAST as compared to the
high speed timing requirements used in accelerator physics, and
finally, the need for radiation tolerant devices.
It may be somewhat
surprising that radiation tolerance is not a severe problem in
accelerator physics experiments, but space borne instruments encounter
very highly ionizing, heavy cosmic ray particles which are not normally
observed in particle physics experiments. Furthermore, the trigger
electronics is normally not exposed directly to the radiation in
ground-based experiments. For GLAST, the trigger electronics must be
low power, complete the trigger algorithm within the 1.3 us window of
the tracker shaper and be radiation tolerant. The complexity of the
Level 1 Trigger and the constraints on power and radiation tolerance
appear to be within the capabilities of components which are expected
to be available in the time frame of a GLAST new start. In order to
reduce risk in technical performance, cost and schedule, several
problems should be solved in advance of a new start.
Tracker readout is initiated by the Tower Readout Interface
upon receipt of a valid L1T trigger.
The initiation of the readout depends in part on the distance from
the tower initiating the readout.
The L1T signal ripples between towers through the L1T FPGA with
additional delays caused by the cable time delay and LVDS gate delays.
A timestamp is captured and an event
counter is incremented at the time the readout condition is met.
The ACK signal is a special two bit command on the
eight tracker serial communications lines.
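The timestamp and event counter capture can be sketched as follows (a minimal illustration; the flight implementation is in hardware within the TEM, and the names here are ours):

```python
# Sketch of event tagging: when the readout condition is met, the
# current timestamp is latched and the event counter is incremented,
# and both accompany the event data through readout.

class EventTagger:
    def __init__(self):
        self.event_counter = 0

    def tag(self, clock_ticks):
        """Latch the timestamp and increment the event counter."""
        self.event_counter += 1
        return {"timestamp": clock_ticks, "event": self.event_counter}
```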
The majority of tracker commands are not time critical and are
composed of longer words. After the event has been captured and the
L1T goes inactive, the
instrument is made live again and readout proceeds.
Multiple events may occur and be queued for readout before the first
event has been completely read out into the Tower CPU. Readout of the
GLAST instrument is performed in a hierarchical system with buffering
at each level in order to increase the average output event rate and
to decrease the average dead time. The tracker front-end
discriminators feed a 1-bit, 8-deep FIFO memory which permits up to 8
events to occur before the first event must have been read into the
controllers at each end of a tracker plane. The controller in turn provides a 63
word deep double buffer for storing the hit addresses from each plane.
The Tracker Readout Interface FPGA
within the TEM controls the readout of the tracker data, which is then
written into a FIFO along with the timestamp and event
counter data. The L1T trigger data from the event is read out
into a separate FIFO for transfer to the Tower CPU. The
calorimeter and ACD each have internal FPGA controllers for acquiring
and reading out their respective data words.
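The effect of this hierarchical buffering can be illustrated with a toy model (the 8-deep depth is from the text; the drain rate is an assumed placeholder, and this is not a timing-accurate simulation):

```python
# Toy model of hierarchical buffering: an 8-deep front-end FIFO absorbs
# bursts while the slower downstream readout drains it, so the
# instrument stays live unless the FIFO overflows.

from collections import deque

def simulate(arrivals, drain_every, depth=8):
    """Count events lost to a full front-end FIFO.

    arrivals: iterable of 0/1 flags, one event flag per clock tick.
    drain_every: one event is read out of the FIFO every N ticks.
    """
    fifo, lost = deque(), 0
    for tick, arrived in enumerate(arrivals):
        if arrived:
            if len(fifo) < depth:
                fifo.append(tick)
            else:
                lost += 1          # FIFO full: instrument is dead to this event
        if tick % drain_every == 0 and fifo:
            fifo.popleft()         # downstream readout drains one event
    return lost
```

With a drain as fast as the arrivals no events are lost; with a much slower drain, only the burst capacity of the FIFO is preserved.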
Readout Controller Technology
Technology requirements of the Tower Readout Interface are similar to
those for the L1T. In fact, the same FPGA used for the L1T may be able
to support the tower readout. Technology issues which need to be
addressed for the Readout Interface include:
- Development of the readout command and control protocol.
- What synchronization rate is required so the timestamps will be the
same across all towers?
- How are events handled when the timestamps differ between towers?
- How much time is required to read out each subsystem?
- What are the problems in tower to tower synchronization of
readouts, timestamping, and event counters?
Tower Central Processor Unit (TCPU)
Each Tower contains a CPU which runs the VxWorks Real Time Operating
System (RTOS) and is programmable in the C++ language. This processor
performs high level command and control functions for each tower as
well as supporting the Level 2 and Level 3 Trigger processing. For
each event, local data from the tracker and global ACD veto data are
input to the Level 2 trigger
process which runs asynchronously in each tower. ACD data from the veto and high ionizing event discriminators are broadcast over the Instrument Data Bus for use in Level 2 and Level 3 Trigger processing.
Events which pass the Level 2 Trigger are
then passed to the Level 3 Trigger process in one of the towers designated as the Level 3 processor unit. Output events and
diagnostic data are stored in files within the tower local memory.
An Error Detection And Correction (EDAC)
stack using Reed-Solomon coding will contain at least 256 Mbytes of DRAM memory.
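The EDAC principle can be illustrated with a toy example. The design names Reed-Solomon coding; for brevity this sketch substitutes a Hamming(7,4) single-error-correcting code, which corrects any one flipped bit per 7-bit word:

```python
# Toy illustration of EDAC (detect and correct memory bit flips) using
# a Hamming(7,4) code in place of the Reed-Solomon code named in the
# design. Encodes a 4-bit nibble into a 7-bit codeword.

def hamming74_encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]          # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                            # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                            # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                            # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]        # positions 1..7

def hamming74_correct(bits):
    """Correct a single flipped bit in place; return the data nibble."""
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    pos = s1 + 2 * s2 + 4 * s3                         # 0 means no error
    if pos:
        bits[pos - 1] ^= 1
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))
```

A flight Reed-Solomon code operates on multi-bit symbols and corrects burst errors, but the round trip of encode, corrupt, and correct is the same idea.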
The DAQ will
have sufficient margin that incident cosmic rays
can be retained for onboard analysis or downlink. This margin ensures
that the primary requirement for capture and downlink of all galactic
gamma rays is protected while permitting maximization of the science
return through dynamic control over the instrument operation for
cosmic ray data. The margins are necessary in order to minimize the
risk and cost of ensuring capture of the desired gamma ray events.
Buffering of event data within the various subsystems and the TKR
reduces the required response time of the TCPU to perform the Level 2 processing since events are stored in 256 MBytes of tower memory under DMA control with no direct processor intervention.
The TCPU performs
multiple tasks under control of the RTOS.
However, data flow and task scheduling will not be a
development driver for the TCPU.
Program memory and the boot process
Program memory for the TCPU at boot
time will be stored in a rad-hard PROM. The minimum PROM requirement
is about 512 kbytes and will support the lowest level of RTOS and
application code sufficient to enable command and data handling tasks.
Once the processor has booted, a reboot may be initiated from onboard
non-volatile RAM (EEPROM or FLASH) which is available to each TCPU.
This non-volatile RAM will hold the operational code including the
application code which performs the primary GLAST process. If the
EEPROM in one tower should fail or become corrupted, the tower can be
booted from any of the other 15 towers. Updates to the EEPROMs can be
performed from the command uplink (or on the ground via the IDB GSE
connection). In the event that the EEPROMs need to be updated during
flight, the VxWorks RTOS provides a means of replacing individual
modules without requiring uplink of the full code. Watchdog tasks
will be used to monitor the proper operation of all 16 TCPUs and
automatically reboot in the case of a failure. These tasks are fully
programmable and under ultimate control of the ground. The rad-hard
PROM boot code is the only onboard code required in order to gain
control over the TCPU and will be implemented in the most reliable
manner possible.
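The staged boot fallback described above can be sketched as follows (the selection logic and names are illustrative, not flight code):

```python
# Sketch of the staged boot fallback: after the rad-hard PROM stage,
# a tower boots from its local EEPROM if possible, otherwise from any
# of the other 15 towers' EEPROM images, otherwise it stays on the
# minimal PROM code and awaits ground action.

def select_boot_image(local_eeprom_ok, peer_eeprom_ok):
    """Return the boot source a tower would use after the PROM stage.

    peer_eeprom_ok: health flags for the other 15 towers' images.
    """
    if local_eeprom_ok:
        return "local-eeprom"
    for tower, ok in enumerate(peer_eeprom_ok):
        if ok:
            return f"tower-{tower}-eeprom"
    return "prom-only"
```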
Tower CPU Technology
The candidate TCPU is the PowerPC 603E. This processor has been shown
to be sufficiently radiation tolerant for space flight and is being
used in a number of other programs. The PowerPC is expected to evolve
with time. If the PowerPC 740 series devices should become
available in a qualified version in the future, the development work
done now will enable direct substitution of the newer, lower power
devices.
Issues addressed in the TCPU development included:
- Can the IDB interface be made available to the memory or
does the TCPU have to manage this interface inline?
The conceptual design includes a feature in the IDB which permits reading and writing to the Main Bus from the network.
- How should the interface be implemented between the TCPU and the
offboard DSPs in the ACD and Calorimeter?
All DSPs in the original concept have now been replaced by the
PowerPC CPU. The interfaces for the ACD and Calorimeter are now
performed via the IDB for data transfer. Directly wired signals are
used for the Level 1 Trigger.
- How much actual time is required for the Level 2 and Level 3
Trigger processing?
The L2T and L3T tasks are still under development. Initial benchmark runs have been made on the R3081 processor using 'C' versions of the Level 2 Trigger C++ code.
- What is the maximum event rate which can be handled by each tower?
By the full GLAST instrument?
The maximum event rate depends on the maximum number of hits in any tower along with the total number of hits in any 8 sequential events. The Tracker contains an 8 deep 1 bit FIFO for each channel that permits simultaneous readout while the Tracker is live.
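A back-of-envelope sketch of the resulting capacity (the FIFO depth is from the text; the per-event readout time is purely an assumed placeholder, not a requirement):

```python
# Rough capacity sketch: the sustained rate is set by the per-event
# drain time of the slowest subsystem, while bursts of up to the FIFO
# depth above that rate are absorbed without dead time.

FIFO_DEPTH = 8                 # events buffered per channel while live
readout_time_per_event = 1e-4  # assumed: 100 us to drain one event

sustained_rate_hz = 1.0 / readout_time_per_event
burst_capacity = FIFO_DEPTH
```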
DAQ Development Tasks
Although the development is specifically to support a single tower, it
is necessary to develop and test in such a manner that the TEM
developed for the prototype tower can be replicated 16 times and
function as a complete unit. This is a problem which is minimal for
the tracker, calorimeter, and to a certain extent, the ACD. In
contrast, the DAQ requires that the L1T and IDB interfaces function
with 16 towers simultaneously. In order to validate the design as
implemented in the prototype tower, it is necessary to simulate the
remaining 15 towers. The full up instrument will be included in the
design of the TEM, and initial development and testing will include
tests which indicate how the complete DAQ may be expected to perform.
However, the full up simulator will not be implemented until after the
first tower is complete. This schedule should permit the
identification of problems and their resolution prior to the time of a
new start as well as providing a development test bed for the flight
software.
The top level schedule for DAQ development begins with a L1T simulator
which is executed using an FPGA/VME board.
This board will permit
programmable testing of the L1T and interface testing with the Tracker
Controller at UCSC. Copies of the FPGA board can be used to simulate
a variety of interfaces and will be used to simulate the Tracker, the
ACD, the calorimeter, and the L1T interfaces. The TEM printed wiring
board will be executed early in the ATD program. Modifications and
other changes to the FPGA/VME board will be performed before
production of the TEM for the prototype tower.
Flight software is composed of three primary types:
- the VxWorks RTOS which runs on the 16 tower CPUs,
- application code consisting of C++ modules which
runs on the RTOS, and
- FPGA programs which may be burned in or
downloaded depending on the implementation.
The lowest level code is represented by the FPGA programming which is
performed using development tools supplied by the manufacturer.
Initially, we will use the Altera device in order to provide maximum
flexibility, ease of programming, and minimum cost during the
development cycle. Later, once the modifications to the FPGA code have
decreased in frequency, alternative devices suitable for flight will
be evaluated.
The VxWorks RTOS is a commercial, off-the-shelf operating
system by Wind River which supports a large number of different processor
architectures. We have used this RTOS for a VME based computer
controlling a realtime data acquisition and control system flown on
the Space Shuttle with much tighter timing requirements than for the
GLAST instrument. The VxWorks operating system is supported by three
development tools: the Tornado development environment, WindView, and
Stethoscope. This environment provides a rapid capability for code
development, testing, and validation, which permits writing most, if
not all of, the application code in a high level language.
The TCPU port (known as the Board Support Package or BSP) of VxWorks
is supported by a development package from Wind River. The
availability of VxWorks, which
includes built-in support for TCP/IP, file systems, scheduling,
priorities, and interrupts, reduces the software development task
considerably as compared to the approach taken in the past with flight
computers. The primary task will be in the high level code for
command execution, control of the various tasks, data acquisition and
formatting, and the Level 2/3 Triggers.
The GLAST collaboration has developed a highly accurate simulation
based on the GISMO program for the science portion of the instrument.
This code, written in object-oriented C++, is being extended to
simulate portions of the data acquisition system. In addition, the
Level 2 and Level 3 triggers will be written and tested in this
environment before porting to the DAQ environment. The Level 2 and
3 trigger code tasks are expected to be the most demanding software
tasks within the GLAST DAQ.
The development schedule for DAQ flight
software permits a staged delivery of code. Initially, the FPGA
programs will support acquisition and readout of data from the
tracker. In the next phase, TEM programs will support readout of
data from the Tracker, L1T, ACD, and CAL. Minimal TCPU
programming is required to move the data to and from the TCPU.
There are additional
programming tasks which must be performed, but these can be performed
out of the critical path. The prototype tower and the 16 tower
simulator should provide a sufficient environment for software
development and testing.
Last modified: Wed Jun 7 17:02:42 PDT 2000