Paper: 265
Session: B (talk)
Speaker: Quadt, Arnulf, Oxford University, Oxford
Keywords: simulation, trigger algorithms, trigger systems

The ZEUS Central Tracking Detector.
A Case Study.

A. Quadt, S. Topp-Jorgensen, H.A.J.R. Uijterwaal and R.C.E. Devenish
University of Oxford, Department of Physics, Keble Road, Oxford, UK

ZEUS Collaboration

Simulating and predicting the performance of a highly parallel system with
dynamic load is a non-trivial task. The online performance of such systems
is characterised by two quantities: the average throughput (speed) and the
maximum processing time per event (latency). A detailed understanding of
both quantities in readout and trigger systems is of vital importance for
efficient data taking in present and future HEP experiments. This paper
presents a case study of the online timing performance of the ZEUS Central
Tracking Detector (CTD) Second Level Trigger (SLT).
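As an illustration only (not part of the paper), the two quantities can be
estimated from a sample of per-event processing times; the numbers below are
hypothetical:

```python
# Illustrative sketch: characterising an online system by its average
# throughput and its maximum per-event processing time (latency).
# The sample values are invented for illustration.
processing_times_ms = [1.1, 1.3, 0.9, 5.2, 1.0, 2.4]  # hypothetical samples

mean_time_ms = sum(processing_times_ms) / len(processing_times_ms)
throughput_hz = 1000.0 / mean_time_ms   # average events processed per second
latency_ms = max(processing_times_ms)   # worst-case processing time per event

print(f"throughput ~ {throughput_hz:.0f} Hz, latency = {latency_ms} ms")
```

Note that the two quantities are not interchangeable: a single slow event
raises the latency without noticeably affecting the average throughput.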

The ZEUS experiment uses a 3-level trigger system to cope with the high
interaction rate of 10^5 Hz at the HERA collider. The first two levels
consist of a pipelined trigger system per detector component and a global
decision unit. The third level consists of a computer
farm. In 1995 and 1996, the typical rates of the three trigger levels were
500, 50 and 5 Hz, respectively.
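As a small aside (using only the rates quoted above), the rate-reduction
factor achieved at each trigger level follows directly from these numbers:

```python
# Rate reduction through the 3-level trigger, rates taken from the text:
# 10^5 Hz interaction rate, then 500, 50 and 5 Hz after each level.
rates_hz = [1e5, 500.0, 50.0, 5.0]
levels = ["level 1", "level 2", "level 3"]

for name, r_in, r_out in zip(levels, rates_hz, rates_hz[1:]):
    print(f"{name}: reduction factor {r_in / r_out:.0f}")
```

The first level thus reduces the rate by a factor 200, and each of the two
subsequent levels by a further factor 10.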

The CTD-SLT consists of 130 INMOS Transputers running a parallel track and
vertex reconstruction algorithm. The algorithm achieves high track-finding
efficiency and resolution at a processing rate of more than 750 Hz, with a
latency of less than 24 msec.

Such a latency can affect the performance of other components and
contributes to the dead time of the
ZEUS experiment. Based on measurements of the processing times and the
data volumes involved at the different CTD-SLT stages, a simulation of the
total processing times has been developed. The main characteristics of the
resulting latency distribution reproduce those of the measurements from
1995 and 1996. This simulation provides a deeper understanding of the ZEUS
readout system. It allows several scenarios to be studied, with varying data
volumes, processing power, data transfer times or modified trigger
algorithms. Thanks to the precise identification of bottlenecks, a CTD-SLT
scenario with a reduced latency of less than 16 msec could be developed.
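A minimal sketch of this kind of stage-wise timing simulation is shown below.
The stage names, the linear timing model (base time plus a data-volume
dependent term) and all parameters are illustrative assumptions, not the
actual CTD-SLT measurements:

```python
import random

# Hypothetical model: each stage takes a base time plus a term proportional
# to the event's data volume. All numbers are invented for illustration.
STAGES = [
    ("readout",       0.5, 0.010),   # (name, base_ms, ms_per_kB)
    ("track_finding", 2.0, 0.050),
    ("vertex_fit",    1.0, 0.020),
]

def event_latency_ms(data_volume_kB, stages=STAGES):
    """Total processing time of one event through all stages (sequential model)."""
    return sum(base + per_kB * data_volume_kB for _, base, per_kB in stages)

def simulate(n_events=10000, mean_volume_kB=40.0, seed=1):
    """Latency distribution for exponentially distributed event data volumes."""
    rng = random.Random(seed)
    latencies = [event_latency_ms(rng.expovariate(1.0 / mean_volume_kB))
                 for _ in range(n_events)]
    return sum(latencies) / n_events, max(latencies)

mean_ms, max_ms = simulate()
print(f"mean latency {mean_ms:.2f} ms, max latency {max_ms:.2f} ms")
```

Varying the per-stage parameters or the data-volume distribution then
corresponds to the scenario studies described above; the stage contributing
most to the total time identifies the bottleneck.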
The simulation of the CTD-SLT timing performance serves as a powerful tool
for finding an optimal balance between software developments and potential
upgrades of the hardware architecture. The method presented here can easily
be applied to a vast range of parallel processor networks.