ACAT 2005

Abstracts of Talks in Session 2 by Author

Quick links:
Programme overview
Detailed timetable
Abstracts of Plenary and Invited Talks
Programme and Abstracts of Session 1
Programme and Abstracts of Session 2
Programme and Abstracts of Session 3
Title Bitmap Indices for Fast End-User Physics Analysis in ROOT
Speaker Brun, Rene
Institution CERN
Abstract
Rene Brun (1), Philippe Canal (2), Kurt Stockinger (3), and Kesheng Wu (3)

(1) European Organization for Nuclear Research, 1211 Geneva, Switzerland
(2) Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
(3) Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

Most physics analysis jobs involve multiple selection steps, known as
cuts, on the input data.  A common strategy to implement these cuts is
to read all input data from files and then process the cuts in
memory.  In many applications the number of variables used to define
these cuts is a relatively small portion of the overall data set. Reading all
variables into memory before performing the cuts is often unnecessary.
In this paper, we describe an integration effort that can significantly
reduce this unnecessary reading by using an efficient compressed bitmap
index technology. The primary advantage of this index is that it can
process arbitrary combinations of cuts very efficiently, while most
other indexing technologies suffer from the "curse of dimensionality"
as the number of cuts increases.  By integrating this index technology
with the ROOT analysis framework, end users can benefit from the added
efficiency without having to modify their analysis programs. This new
algorithm could be particularly interesting when querying large event
metadata catalogues.
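
As an illustration of the bitmap-index idea (an editor's sketch, not the compressed index integration described above; compression and the ROOT/TTree interface are omitted, and all names are hypothetical), arbitrary combinations of cuts reduce to word-wise bitwise operations on per-variable bitmaps, so that only the selected events need to be read in full:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // One bit per event: bit i is set if event i passes the cut on that variable.
    using Bitmap = std::vector<std::uint64_t>;

    Bitmap make_bitmap(const std::vector<double>& var, double lo, double hi) {
        Bitmap b((var.size() + 63) / 64, 0);
        for (std::size_t i = 0; i < var.size(); ++i)
            if (var[i] > lo && var[i] < hi)
                b[i / 64] |= std::uint64_t(1) << (i % 64);
        return b;
    }

    // Combining cuts is a word-wise AND, whatever the number of cuts.
    Bitmap and_bitmaps(const Bitmap& a, const Bitmap& b) {
        Bitmap r(a.size());
        for (std::size_t w = 0; w < a.size(); ++w) r[w] = a[w] & b[w];
        return r;
    }

    int main() {
        std::vector<double> pt  = {12.0, 45.0, 8.0, 60.0};  // toy per-event variables
        std::vector<double> eta = {0.5, 2.8, 1.0, 0.3};
        Bitmap pass = and_bitmaps(make_bitmap(pt, 20.0, 1e9),
                                  make_bitmap(eta, -2.5, 2.5));
        for (std::size_t i = 0; i < pt.size(); ++i)         // read only selected events
            if ((pass[i / 64] >> (i % 64)) & 1)
                std::cout << "read event " << i << " in full\n";
    }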

Title Simulation and Reconstruction Software for the ILC
Speaker Gaede, Frank
Institution DESY
Abstract
The International Linear Collider project is in a very active R&D phase
where three different detector concepts are currently being studied by
international working groups. In order to investigate the various 
physics aspects of the different concepts it is highly desirable to 
have a set of common software tools. In this talk we present some 
of the software packages that have been developed for the ILC. 
LCIO is a persistency framework that defines the data model from 
the generator to the final analysis step and serves as a standard 
for the exchange of data files throughout the ILC community.
Marlin is a modular C++ application framework that allows the 
distributed development of reconstruction and analysis software 
based on LCIO. Marlin is complemented by LCCD, a tool for storing and
retrieving conditions data, and by an abstract geometry definition.

Title An analytic formula for track extrapolation in inhomogeneous magnetic field
Speaker Gorbunov, Sergey
Institution DESY
Abstract
Track propagation through an inhomogeneous magnetic field using an
analytic expression is presented.
The analytic formula has been derived under very general assumptions on the
magnetic field. The precision of the extrapolation does not depend on the
shape of the magnetic field.
Results of the implementation in the CBM track fitting procedure based on
the Kalman filter are presented and compared with the extrapolation based on
the fourth-order Runge-Kutta method.
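
For reference, a minimal sketch of the fourth-order Runge-Kutta propagation used for comparison above, integrating dx/ds = t and dt/ds = kappa (t x B(x)) for a unit direction t and kappa = q/p; the field map, step size and units are hypothetical:

    #include <array>
    #include <iostream>

    using Vec3  = std::array<double, 3>;
    using State = std::array<double, 6>;   // x, y, z, tx, ty, tz (unit direction)

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }

    // Hypothetical inhomogeneous field map: a solenoid-like Bz falling off with z.
    Vec3 field(const Vec3& pos) { return {0.0, 0.0, 1.0 / (1.0 + 0.01*pos[2]*pos[2])}; }

    // d(state)/d(path length) for a particle with kappa = q/p (units folded in).
    State deriv(const State& s, double kappa) {
        Vec3 pos = {s[0], s[1], s[2]}, dir = {s[3], s[4], s[5]};
        Vec3 f = cross(dir, field(pos));
        return {dir[0], dir[1], dir[2], kappa*f[0], kappa*f[1], kappa*f[2]};
    }

    State rk4_step(const State& s, double h, double kappa) {
        auto add = [](State a, const State& b, double w) {
            for (int i = 0; i < 6; ++i) a[i] += w * b[i];
            return a;
        };
        State k1 = deriv(s, kappa);
        State k2 = deriv(add(s, k1, h/2), kappa);
        State k3 = deriv(add(s, k2, h/2), kappa);
        State k4 = deriv(add(s, k3, h), kappa);
        State out = s;
        for (int i = 0; i < 6; ++i)
            out[i] += h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]);
        return out;
    }

    int main() {
        State s = {0, 0, 0, 0, 0.1, 0.995};       // start near the z axis
        for (int i = 0; i < 100; ++i) s = rk4_step(s, 0.1, -0.3);
        std::cout << "x=" << s[0] << " y=" << s[1] << " z=" << s[2] << "\n";
    }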

Title Elastic Neural Net for standalone RICH ring finding
Speaker Gorbunov, Sergey
Institution DESY
Abstract
The Elastic Neural Net is implemented for finding rings
in a Cherenkov detector. The method does not require any
prior track information and can therefore be used for triggering.
A test of the method on the CBM RICH detector shows very good
efficiency and extremely high speed.


Title Tagging B Jets associated with heavy neutral MSSM Higgs Bosons
Speaker Heikkinen, Aatos
Institution Helsinki Institute of Physics
Abstract
Since a neural network (NN) approach has been shown to be applicable to the
problem of Higgs boson detection at LHC [1, 2], we study the use of NNs to the
problem of tagging b jets in pp$\rightarrow\rm b\bar{\rm b}$H$_{\rm SUSY}$,
H$_{\rm SUSY}\rightarrow\tau\tau$ in the Compact Muon Solenoid experiment [3, 4].
B tagging can be used to separate the Higgs events with associated b jets from
the Drell-Yan background, for which the associated jets are mostly light quark
and gluon jets.

We teach multi-layer perceptrons (MLPs) available in the object-oriented
data analysis framework ROOT [5]. The following learning methods are
evaluated: the steepest descent algorithm, the Broyden-Fletcher-Goldfarb-Shanno
algorithm, and variants of conjugate gradients. ROOT's feature for generating
standalone C++ classifiers is utilized.

We compare the b-tagging performance of the MLPs with that of another
ROOT-based feed-forward NN tool, NeuNet [6], which uses a standard
back-propagation learning method.

In addition, we demonstrate the use of the self-organizing map program
package (SOM_PAK) and the learning vector quantization program package
(LVQ_PAK) [7] in the b-tagging problem. The background discrimination power
of these NN tools is compared.
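
A minimal ROOT macro along the lines described above, assuming ROOT's TMultiLayerPerceptron class; the input file, tree name, variable names and network layout are hypothetical:

    // train_mlp.C -- run with: root -l -b -q train_mlp.C
    #include "TFile.h"
    #include "TTree.h"
    #include "TMultiLayerPerceptron.h"

    void train_mlp() {
        TFile f("btag.root");                      // hypothetical input file
        TTree* tree = (TTree*)f.Get("jets");       // hypothetical tree of jet variables
        // Layout: input variables : one hidden layer of 8 neurons : target ("type").
        TMultiLayerPerceptron mlp("ptrel,ip2d,ip3d,mult:8:type", tree,
                                  "Entry$%2==0",   // even entries for training
                                  "Entry$%2==1");  // odd entries for testing
        mlp.SetLearningMethod(TMultiLayerPerceptron::kBFGS);  // or kSteepestDescent, ...
        mlp.Train(200, "text,update=10");
        mlp.Export("BTagMLP", "C++");              // standalone C++ classifier
    }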

References

[1] I. Iashvili and A. Kharchilava, $\rm H\rightarrow ZZ^*\rightarrow 4\ell$
Signal Separation Using a Neural Network, CMS TN-1996/100.

[2] M. Mjahed, Higgs search at LHC by neural networks,
Nuclear Physics B 140 (2005) 799-801.

[3] F. Hakl et al., Application of neural networks to Higgs boson search,
Nucl. Instr. & Meth. in Phys. Res. A 502 (2003) 489-491.

[4] S. Lehti, Tagging b-jets in $\rm b\bar{b}H_{SUSY}\rightarrow\tau\tau$,
CMS NOTE-2001/019; G. Segneri and F. Palla, Lifetime Based b-tagging with
CMS,
CMS NOTE-2002/046.

[5] ROOT - An Object Oriented Data Analysis Framework,
Proceedings AIHENP'96 Workshop, Lausanne, Sep. 1996, Nucl. Inst. & Meth.
in Phys. Res. A 389 (1997) 81-86.

[6] J.P. Ernenwein, NeuNet, http://e.home.cern.ch/e/ernen/www/NN.

[7] T. Kohonen, Self-Organizing Maps, Springer-Verlag, Heidelberg, 1995.

Title NeuroBayes - a robust classification and probability density reconstruction algorithm
Speaker Kerzel, Ulrich
Institution IEKP, Universitaet Karlsruhe
Abstract
NeuroBayes is a sophisticated neural network
based on Bayesian statistics that solves complex
classification and density reconstruction tasks.
Several regularisation procedures suppress
statistical noise and thus avoid overtraining.
Correlations among the input variables and
missing values are handled automatically.
Several highly successful applications from
experimental high-energy physics
and industry are presented.

Title Alignment of the ZEUS Micro-Vertex Detector Using Cosmic Tracks
Speaker Kohno, Takanori
Institution University of Oxford
Abstract
The ZEUS Micro-Vertex Detector (MVD) was installed in ZEUS after the HERA
upgrade in 2000. The MVD is a precision position detector consisting of 712
single-sided silicon strip detectors. The alignment of the barrel MVD has been
performed in units of ladders using cosmic tracks. The procedure used is an
iterative chi^2 minimization, where the chi^2 is defined locally for each
ladder. The procedure is numerically stable, since it only requires the
inversion of 30 6x6 matrices, and it is reasonably fast in spite of the
iterative approach.

Title A segmented principal component analysis applied to calorimetry information at ATLAS
Speaker Lima Jr, Herman
Institution UFRJ
Abstract
Authors: H. P. Lima Jr, J. M. de Seixas

In the new particle collider currently being constructed at CERN, the Large
Hadron Collider, two bunches of protons will collide every 25 ns, producing
a huge amount of data to be processed. These data include both the physics of
interest, like the signatures of the Higgs boson, and the background noise. In
this scenario, complex trigger systems need to be designed by each experiment
in order to select only the interesting events. The ATLAS trigger system
consists of three distinct levels of event selection. Each trigger level
performs specific algorithms to select only the events with a high
probability of interesting physics. From an initial bunch crossing rate of
40 MHz, the ATLAS trigger system will select events at up to 100 Hz for
permanent storage. The first-level trigger looks at detector data with reduced
granularity in order to take a fast decision, delivering events to the second
level at a maximum rate of 100 kHz. At the second level, complex algorithms
operate with the full granularity of the detector, guided by Regions of
Interest (RoIs), which contain the interesting features of the events. This
second level reduces the event rate to less than 1 kHz. The last step of
selection, the Event Filter, performs even more complex algorithms to reduce
the event rate to a maximum of 100 Hz, which corresponds to the data to be
permanently stored for offline analysis. The three levels of selection make
use of the information provided by the calorimeter system of ATLAS, due to
the fast response of the detectors and the detailed measurements achieved.  

Because of the highly segmented calorimetry environment present at ATLAS, and
also due to the fine granularity of the calorimeter system, an
interesting idea is to reduce the event dimensionality in order to speed up the
selection process and decrease the computational load. This paper describes
the application of a segmented principal component analysis (PCA) to the task
of dimensionality reduction at the second-level trigger of ATLAS. The
segmented PCA is proposed in order to exploit the high segmentation available
and the different levels of granularity present in each segment. The reduced
dimensionality of the processed events will allow faster neural-network
processing with higher discrimination efficiency.
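
As a rough illustration of the dimensionality-reduction step (an editor's sketch, not the ATLAS trigger code), the leading principal component of the cell-energy vectors of one calorimeter segment can be obtained by power iteration on their covariance matrix; the segment size and input events below are hypothetical:

    #include <cmath>
    #include <iostream>
    #include <vector>

    using Vec = std::vector<double>;

    // Leading principal component of 'events' (each a vector of cell energies
    // from one calorimeter segment), found by power iteration on the covariance.
    Vec leading_component(const std::vector<Vec>& events) {
        std::size_t d = events[0].size(), n = events.size();
        Vec mean(d, 0.0);
        for (const Vec& e : events)
            for (std::size_t i = 0; i < d; ++i) mean[i] += e[i] / n;
        std::vector<Vec> cov(d, Vec(d, 0.0));
        for (const Vec& e : events)
            for (std::size_t i = 0; i < d; ++i)
                for (std::size_t j = 0; j < d; ++j)
                    cov[i][j] += (e[i] - mean[i]) * (e[j] - mean[j]) / n;
        Vec v(d, 1.0);
        for (int it = 0; it < 200; ++it) {          // power iteration
            Vec w(d, 0.0);
            for (std::size_t i = 0; i < d; ++i)
                for (std::size_t j = 0; j < d; ++j) w[i] += cov[i][j] * v[j];
            double norm = 0.0;
            for (double x : w) norm += x * x;
            norm = std::sqrt(norm);
            for (std::size_t i = 0; i < d; ++i) v[i] = w[i] / norm;
        }
        return v;   // project events onto v (and further components) to reduce dimension
    }

    int main() {
        std::vector<Vec> events = {{1.0, 2.0, 0.1}, {0.9, 2.2, 0.0},
                                   {1.2, 1.8, 0.2}, {1.1, 2.1, 0.1}};
        Vec pc = leading_component(events);
        std::cout << "first PC: " << pc[0] << " " << pc[1] << " " << pc[2] << "\n";
    }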

Title Energy Reconstruction for a Hadronic Calorimeter Using Neural Networks
Speaker Magacho da Silva, Paulo Vitor
Institution Federal University of Rio de Janeiro
Abstract
P. V. M. da Silva,  J. M. Seixas, L. P. Caloba

In high energy physics experiments with colliders, calorimeters play a very
important role, because they are used to measure the energy of the incoming
particles from the collisions. Depending on the granularity of these
calorimeters, they can also provide useful information about the type of
particle that interacted with the calorimeter.

However, some calorimeters have a non-compensating response (e/h>1), which
degrades the resolution and linearity of the calorimeter response for hadrons.
One of these calorimeters is the hadronic calorimeter of the ATLAS experiment,
the Tilecal. In order to improve the response of the Tilecal calorimeter,
weighting techniques have already been developed to perform the energy
reconstruction.

These techniques often use a linear combination of the energies deposited in
the calorimeter cells or longitudinal samples. In this work the use of a neural
network is proposed to perform the energy reconstruction of pions,
optimizing the linearity and energy resolution. Besides the non-linear
capability of the neural network, its structure provides very good
generalization for data that are not presented during the training process.
This is very useful during operation, since the particle energy is not known
beforehand.

Data from tests with pion beams performed at CERN with a prototype of the
Tilecal module, called Module 0, were used to feed the neural network. The
same data were also used with the linear weighting techniques, so that the
two methods can be compared.

Two measurements are used to determine the quality of the methods: linearity
and energy resolution. The linearity measurement is done by looking at the
RMS of the ratio between the reconstructed energy and the expected energy,
normalized by the ratio at 100 GeV. For the weighting techniques an RMS of
1.27\% was found, while for the neural network a better result was achieved,
with an RMS equal to 0.81\%. The raw data have an RMS equal to 1.6\%.

As for the energy resolution, the neural network tends to improve the constant
term, while the linear weighting techniques improve the statistical term more.

The neural network achieved 62.0\% for the statistical term and 4.0\% for the
constant term, while for the linear weighting techniques a statistical term of
40.8\% and a constant term of 5.3\% were achieved. The raw data resolution
has a statistical term equal to 56.3\% and a constant term equal to 6.9\%.
More studies are being made to improve the neural network energy
reconstruction and to understand the energy resolution results of the neural
network. 
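
A minimal sketch of the linearity figure of merit used above, i.e. the RMS of the reconstructed-to-expected energy ratio normalized to the ratio at 100 GeV; the energy points and values are hypothetical:

    #include <cmath>
    #include <iostream>
    #include <vector>

    // Linearity figure of merit: RMS of (Erec/Etrue) / (Erec/Etrue at 100 GeV).
    double linearity_rms(const std::vector<double>& etrue,
                         const std::vector<double>& erec, double eref = 100.0) {
        double ref = 1.0;
        for (std::size_t i = 0; i < etrue.size(); ++i)
            if (etrue[i] == eref) ref = erec[i] / etrue[i];
        std::vector<double> r;
        for (std::size_t i = 0; i < etrue.size(); ++i)
            r.push_back((erec[i] / etrue[i]) / ref);
        double mean = 0.0, rms = 0.0;
        for (double x : r) mean += x / r.size();
        for (double x : r) rms += (x - mean) * (x - mean) / r.size();
        return std::sqrt(rms);
    }

    int main() {
        std::vector<double> etrue = {20, 50, 100, 180, 300};        // GeV, hypothetical
        std::vector<double> erec  = {19.2, 48.8, 98.5, 178.0, 297.5};
        std::cout << "linearity RMS = " << 100.0 * linearity_rms(etrue, erec) << " %\n";
    }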

Title Efficient algorithms of multidimensional gamma-ray spectra compression
Speaker Matousek, Vladislav
Institution Institute of Physics, Slovak Acad. of Sci.
Abstract
The volume of spectrometric data produced in nuclear experiments is
enormous. For practical reasons, e.g. interactive analysis and handling,
fast and efficient compression of large multidimensional arrays is
in some cases unavoidable. In the contribution we present some original
approaches to compress coincident multidimensional gamma-ray data
efficiently.
Multidimensional coincidence gamma-ray spectra are symmetric in their
nature. This can be utilized to achieve the needed reduction of memory space
without loss of information. We present the symmetry-based removal method in
conjunction with compression via a fast adaptive Walsh-Hadamard transform.
Our algorithms are based on direct modification of the coefficients of the
transform kernel according to the compressed data. In the examples presented
the achieved compression ratios are of the order of 10^4 for 2-fold gamma-ray
spectra, and 10^8 and 10^11 for 3-fold and 4-fold spectra, respectively.
Another method presented in the contribution is based on an address
randomizing transformation. An event descriptor contains the position of an
event in multidimensional space (e.g. data read out from ADCs) and its
counts. The descriptors are passed through a transformation. The randomizing
transformation distributes the event descriptors quasi-uniformly in the
transformed space. Clusters of descriptors in the physical space are spread
over the whole range of possible addresses, and adjacent descriptors go to
addresses far away from each other. In the contribution we propose a
randomizing transformation based on inverse numbers in the sense of modular
arithmetic. The transformation is fast, so that it can be applied in
on-line acquisition mode. The compression ratios are approximately of the
same order as in the case of the above-mentioned adaptive Walsh-Hadamard
transform.
The reconstructed multidimensional data compressed through the use of both
proposed algorithms are compared to the original data and to the results
achieved by using conventional methods of compression. The presented examples
argue in favor of the proposed compression algorithms.
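
A minimal sketch of the (unnormalized, in-place) fast Walsh-Hadamard transform underlying the first compression scheme; the adaptive modification of the kernel coefficients described above is not shown, and the input slice is hypothetical:

    #include <iostream>
    #include <vector>

    // In-place fast Walsh-Hadamard transform; data.size() must be a power of two.
    // Applying the same routine again and dividing by data.size() inverts it.
    void fwht(std::vector<double>& data) {
        for (std::size_t len = 1; len < data.size(); len *= 2)
            for (std::size_t i = 0; i < data.size(); i += 2 * len)
                for (std::size_t j = i; j < i + len; ++j) {
                    double a = data[j], b = data[j + len];
                    data[j] = a + b;
                    data[j + len] = a - b;
                }
    }

    int main() {
        std::vector<double> spectrum = {4, 2, 5, 5, 1, 0, 3, 2};   // toy 1D slice
        fwht(spectrum);
        // Compression idea: keep only the largest coefficients, zero the rest,
        // then invert with fwht() and divide by spectrum.size().
        for (double c : spectrum) std::cout << c << " ";
        std::cout << "\n";
    }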


Title Search for the Higgs boson at LHC by using Genetic Algorithms
Speaker Mjahed, Mostafa
Institution Ecole Royale de l' Air, Marrakech
Abstract
The search for the Higgs boson is one of the primary tasks of the
experiments at the Large Hadron Collider (LHC). It has been established
that a Standard Model Higgs boson can be discovered with high significance
over the full mass range of interest, from the lower limit set by the LEP
experiments of 114.1 GeV/c^2 up to about 1 TeV/c^2. However, the discovery
of the Higgs boson may be complicated by the presence of huge
backgrounds [1].

Our aim here is to use a genetic algorithm as a tool for a better
discrimination between signal and background. A genetic algorithm [2, 3] is
a search technique modelled on biological evolution, in which real-valued
information on events is first encoded as strings (chromosomes). During
the "reproduction phase", each event is assigned a fitness value derived
from its raw performance measure given by an objective function.
Recombination operators such as crossover and mutation are used to optimize
the association between events and classes.

We will analyze the Higgs mass range 140-200 GeV. In this mass range, the
dominant mechanism for Higgs production is gluon-gluon fusion. A usual way
to reduce the background is lepton isolation, which motivated us to study the
decay into four muons.

Events were produced at LHC energies (MH = 140-200 GeV) using the Lund
Monte Carlo generator Pythia 6.1. Higgs boson events (decaying into four
muons) and the most relevant backgrounds are considered. The most
discriminant variables, such as the transverse momenta of the four muons, the
invariant masses of the four different muon pairs, the four-muon
invariant mass, the hadron multiplicity and other new variables, are
used.

Genetic algorithms differ substantially from other classification methods.
They use probabilistic transition rules, not deterministic ones, and they work
on an encoding of the variable set rather than on the variable set itself.

The results, compared to those of other multivariate analysis methods (neural
networks, linear and non-linear discriminant analysis, decision trees [4]),
illustrate a number of features of the genetic algorithm that make it
potentially attractive for classification tasks.

    [1]    D. Froidevaux, in Proc. of the Large Hadron Collider Workshop, eds.
    G. Jarlskog and D. Rein, CERN 90-10, ECFA 90-133, Vol. II, p. 444.
    [2]    D.E. Goldberg, Genetic Algorithms in Search, Optimization, and
    Machine Learning, Addison-Wesley, Reading, MA, 1989.
    [3]    Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution
    Programs, Springer-Verlag, New York, NY, second edition, 1994.
    [4]    M. Mjahed, Nucl. Instrum. and Meth. A 432, 1 (1999) 170;
    M. Mjahed, Nucl. Instrum. and Meth. A 481 (1-3) (2002) 601;
    M. Mjahed, Nucl. Physics B (Proc. Suppl.) Vol. 106-107C (2002) 1094;
    M. Mjahed, Nucl. Physics B (Proc. Suppl.) Vol. 140C (2005) 799.
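
A minimal sketch of the genetic-algorithm loop described in the abstract (selection, crossover and mutation acting on bit-string chromosomes); the fitness function below is a toy stand-in for the signal/background objective and all parameters are hypothetical:

    #include <algorithm>
    #include <bitset>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    const int NBITS = 16;
    using Chromosome = std::bitset<NBITS>;

    // Toy objective: in a real application this would measure how well the cuts
    // encoded by the chromosome separate signal from background events.
    double fitness(const Chromosome& c) { return static_cast<double>(c.count()); }

    Chromosome crossover(const Chromosome& a, const Chromosome& b, int point) {
        Chromosome child;
        for (int i = 0; i < NBITS; ++i) child[i] = (i < point) ? a[i] : b[i];
        return child;
    }

    int main() {
        std::srand(12345);
        std::vector<Chromosome> pop(40);
        for (Chromosome& c : pop)                              // random initial population
            for (int i = 0; i < NBITS; ++i) c[i] = std::rand() % 2;

        for (int gen = 0; gen < 50; ++gen) {
            std::vector<Chromosome> next;
            while (next.size() < pop.size()) {
                // Tournament selection of two parents.
                auto pick = [&]() {
                    const Chromosome& x = pop[std::rand() % pop.size()];
                    const Chromosome& y = pop[std::rand() % pop.size()];
                    return fitness(x) > fitness(y) ? x : y;
                };
                Chromosome child = crossover(pick(), pick(), std::rand() % NBITS);
                if (std::rand() % 100 < 5) child.flip(std::rand() % NBITS);  // mutation
                next.push_back(child);
            }
            pop = next;
        }
        double best = 0.0;
        for (const Chromosome& c : pop) best = std::max(best, fitness(c));
        std::cout << "best fitness = " << best << "\n";
    }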

Title The use of Clustering Techniques for the Classification of High Energy Physics Data
Speaker Mjahed, Mostafa
Institution Ecole Royale de l' Air, Marrakech
Abstract
A number of interesting studies of high energy physics experiments may be
performed using the differences between the topologies of the produced
events. In fact, at LEP2 energies and beyond, several processes and
channels are expected [1-3]. Hadronic events with multi-jet topologies will
be produced, with dominant rates. The Standard Model Higgs boson is
expected to be produced mainly via the process e+e- -> ZH -> 4 jets.

In this paper, we present an attempt to separate Higgs boson events
(e+e- -> ZH -> 4 jets: the CZH class) from other physics processes
(e+e- -> Z/gamma, W+W-, ZZ -> 4 jets: the CBack class), using several
clustering techniques.

In addition to the classical (hierarchical and K-means) clustering methods
[4-6], we try to construct an approximate space-filling curve [7-9], so that
distances between points in a multidimensional space are replaced by
distances along a Lebesgue measure-preserving curve. With this
classification algorithm, using a neighboring approach on the space-filling
curve, several clusters may emerge from the data and configurations may be
associated with the considered classes CZH and CBack.

Events were produced at post-LEP2 energies, using the Lund Monte Carlo
generator [10] and the ALEPH package [11]. The most discriminant variables,
such as the reconstructed jet mass, the jet properties (b-tag, rapidity-weighted
moments) and other variables, are used.
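
As an illustration of the space-filling-curve idea (an editor's sketch using a Z-order/Morton curve rather than the Lebesgue measure-preserving curve of the paper), events can be mapped to a one-dimensional curve index and grouped by a neighboring approach along the curve; the binning and variables are hypothetical:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <utility>
    #include <vector>

    // Interleave the bits of two 16-bit cell indices into a Z-order (Morton) key.
    std::uint32_t morton(std::uint16_t x, std::uint16_t y) {
        std::uint32_t key = 0;
        for (int i = 0; i < 16; ++i)
            key |= ((std::uint32_t(x >> i) & 1) << (2 * i)) |
                   ((std::uint32_t(y >> i) & 1) << (2 * i + 1));
        return key;
    }

    int main() {
        // Hypothetical events described by two discriminant variables in [0,1).
        std::vector<std::pair<double, double>> events =
            {{0.10, 0.12}, {0.11, 0.13}, {0.80, 0.75}, {0.82, 0.74}, {0.50, 0.10}};
        std::vector<std::uint32_t> keys;
        for (auto& e : events)
            keys.push_back(morton(std::uint16_t(e.first * 65535),
                                  std::uint16_t(e.second * 65535)));
        std::sort(keys.begin(), keys.end());
        // Neighboring events along the curve have nearby keys; clusters appear
        // as runs of keys separated by small gaps.
        for (auto k : keys) std::cout << k << "\n";
    }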



References


    [1]    ALEPH Coll., Phys. Letters B 495 (2000) 1-17.
    [2]    T.G. Malmgren et al., Nucl. Instrum. and Methods A 403, 2-3 (1998)
    481.
    [3]    K. Hultqvist et al., Nucl. Instrum. and Methods A 432, 1 (1999) 176.
    [4]    Van Ryzin, Classification and Clustering, Academic Press,
    New York, 1977.
    [5]    J.A. Hartigan, Clustering Algorithms, New York, Wiley, 1975.
    [6]    A.K. Jain et al., Algorithms for Clustering Data, Prentice-Hall,
    NJ, 1988.
    [7]    S.C. Milne, "Peano curves and smoothness of functions", Advances
    in Mathematics 35 (1980) 129-157.
    [8]    H. Sagan, "Space-filling curves", Springer-Verlag, New York, 1994.
    [9]    W.J. Gilbert, "A cube filling Hilbert curve", The Mathematical
    Intelligencer, Vol. 6 (1984) 78.
    [10]   T. Sjostrand, M. Bengtsson, Comp. Phys. Comm. 82 (1994) 74.
    [11]   P. Janot, ALEPH package.

Title Deconvolution methods and their applications in the analysis of gamma-ray spectra
Speaker Morhac, Miroslav
Institution Institute of Physics, Slovak Academy of Sciences
Abstract
One of the most delicate problems of any spectrometric method is the
extraction of the correct information from those sections of the spectra
where, due to the limited resolution of the equipment, signals
coming from various sources overlap. Deconvolution methods are very
frequently employed to improve the resolution of an experimental measurement
by mathematically removing the smearing effects of an imperfect instrument,
using its known resolution function. They can be successfully applied for
the determination of positions and intensities of peaks and for the
decomposition of multiplets in gamma-ray spectroscopy. 
From a numerical point of view, deconvolution is a so-called ill-posed
problem, which means that many different functions solve a convolution
equation within the error bounds of the experimental data. When employing
standard algorithms to solve a convolution system, small errors or noise can
cause enormous oscillations in the result. This implies that a regularization
must be employed. Regularization encompasses a class of solution techniques
that modify an ill-posed problem into a well-posed one by approximation so
that a physically acceptable approximate solution can be obtained.
In the contribution we present deconvolution methods based on direct
solution as well as methods based on the iterative solution of a system of
linear equations. We give a comparison of the efficiencies of various
deconvolution algorithms and regularization techniques (Tikhonov, Riley,
Van Cittert, Gold, Richardson-Lucy, etc.). To improve the resolution of the
deconvolution of positive definite spectroscopic data, we propose a
modification of the deconvolution algorithms by introducing a boosting
operation and a regularization technique based on the minimization of the
squares of negative values.
We have optimized the Gold deconvolution algorithm and extended it to two-
and three-dimensional data. The presented examples argue in favor of the
deconvolution algorithms employed.
The analysis of peaks in spectra consists of the determination of peak
positions and subsequent fitting, which yields estimates of the peak shape
parameters. The positions of peaks can be well determined from separated
peaks in decomposed spectra and can be fed as initial estimates into a
fitting procedure. Proper estimation of peak positions is a necessary
condition for a correct analysis of experimental spectra. However, the
resolution of conventional peak searching algorithms based on smoothed
second differences is quite limited.
Therefore, to improve the resolution capabilities, we have proposed several
algorithms based on Gold deconvolution for both one- and two-dimensional
spectra. The deconvolution and peak finder methods have been implemented in
the TSpectrum class of the ROOT system.
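
A minimal one-dimensional sketch of the multiplicative Gold-type iteration on which the algorithms above build, updating x_i <- x_i * (A^T y)_i / (A^T A x)_i for a response matrix A and measured spectrum y, so that the solution stays non-negative; boosting, regularization and the multi-dimensional extensions are not shown, and the toy response below is hypothetical:

    #include <iostream>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;

    Vec mul(const Mat& m, const Vec& v) {
        Vec r(m.size(), 0.0);
        for (std::size_t i = 0; i < m.size(); ++i)
            for (std::size_t j = 0; j < v.size(); ++j) r[i] += m[i][j] * v[j];
        return r;
    }

    Mat transpose(const Mat& m) {
        Mat t(m[0].size(), Vec(m.size()));
        for (std::size_t i = 0; i < m.size(); ++i)
            for (std::size_t j = 0; j < m[0].size(); ++j) t[j][i] = m[i][j];
        return t;
    }

    // Gold-type multiplicative iteration for y = A x with x >= 0.
    Vec gold(const Mat& A, const Vec& y, int iterations) {
        Mat At = transpose(A);
        Vec aty = mul(At, y);
        Vec x(A[0].size(), 1.0);                    // positive starting point
        for (int k = 0; k < iterations; ++k) {
            Vec atax = mul(At, mul(A, x));
            for (std::size_t i = 0; i < x.size(); ++i)
                if (atax[i] > 0.0) x[i] *= aty[i] / atax[i];
        }
        return x;
    }

    int main() {
        // Toy 5-bin spectrum smeared by a simple triangular response.
        Mat A = {{0.6, 0.2, 0.0, 0.0, 0.0},
                 {0.2, 0.6, 0.2, 0.0, 0.0},
                 {0.0, 0.2, 0.6, 0.2, 0.0},
                 {0.0, 0.0, 0.2, 0.6, 0.2},
                 {0.0, 0.0, 0.0, 0.2, 0.6}};
        Vec truth = {0, 10, 0, 5, 0};
        Vec y = mul(A, truth);                      // "measured" spectrum
        Vec x = gold(A, y, 500);
        for (double v : x) std::cout << v << " ";
        std::cout << "\n";
    }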

Title Neural networks approach to parton distributions fitting
Speaker Piccione, Andrea
Institution University of Turin
Abstract
We will show an application of neural networks to extract
information on the structure of hadrons. A Monte Carlo over experimental
data is performed to correctly reproduce the data errors and correlations.
A neural network is then trained on each Monte Carlo replica via a genetic
algorithm. Results on the proton structure function [hep-ph/0501067] and on
the non-singlet parton distribution will be shown.

Title Detector Description of the ATLAS Muon Spectrometer and H8 Muon Testbeam
Speaker Pomarede, Daniel
Institution CEA/DAPNIA Saclay
Abstract
The Muon Spectrometer of the ATLAS experiment is a large and
complex system of gaseous detectors. The simulation and the reconstruction
of muon events require a careful description of these detectors, which
participate either in the trigger or in the precision measurement of
tracks. A thorough
description of the passive materials, such as the toroidal magnet systems,
is also needed to account for Coulomb scattering and energy losses. The
operation of the muon spectrometer relies on the alignment of its precision
chambers, so the geometrical model must fully implement their
misalignments and deformations. We present the Detector Description
chain employed in the Muon system and its integration in the ATLAS
software framework. It relies on a database technology and a standard
set of geometrical primitives common to all ATLAS subsystems.
The Muon Detector Description has been used successfully in the context of
the ATLAS Data Challenges, where it provides a unique and coherent geometry
source for the simulation and reconstruction algorithms.
It has also been validated in the context of the experimental program of the
ATLAS testbeams, where analyses of the treatment of chamber alignment in track
reconstruction rely crucially upon the detector description model.

Title Limits and Confidence Intervals in the Presence of Nuisance Parameters
Speaker Rolke, Wolfgang
Institution Univ. of Puerto Rico - Mayaguez
Abstract
I present the results of a study of the frequentist properties of
confidence intervals computed by the method known to statisticians as the
Profile Likelihood. 
It is seen that the coverage of these intervals is surprisingly good over
a wide range of possible parameter values for important classes of problems,
in particular whenever there are additional nuisance parameters with
statistical or systematic errors.
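
As an illustration of the profile-likelihood construction studied here (an editor's sketch, not the author's code), consider the common case of a Poisson count n = s + b with the background nuisance parameter b constrained by a sideband count m with scale factor tau; b is profiled out numerically and the interval is read off where -2 ln lambda(s) rises by 1 above its minimum. All numbers are hypothetical:

    #include <algorithm>
    #include <cmath>
    #include <iostream>

    // -2 ln L for the "on/off" problem: n ~ Poisson(s+b), m ~ Poisson(tau*b).
    double nll(double s, double b, double n, double m, double tau) {
        double mu1 = s + b, mu2 = tau * b;
        return 2.0 * (mu1 - n * std::log(mu1) + mu2 - m * std::log(mu2));
    }

    // Profile out the nuisance parameter b by a simple grid scan.
    double profiled_nll(double s, double n, double m, double tau) {
        double best = 1e300;
        for (double b = 0.01; b < 50.0; b += 0.01)
            best = std::min(best, nll(s, b, n, m, tau));
        return best;
    }

    int main() {
        double n = 10, m = 18, tau = 3.0;           // hypothetical observation
        // Global minimum of the profiled -2 ln L over the signal strength s.
        double smin = 0.0, best = 1e300;
        for (double s = 0.0; s < 20.0; s += 0.01) {
            double v = profiled_nll(s, n, m, tau);
            if (v < best) { best = v; smin = s; }
        }
        // 68% interval: where -2 ln lambda(s) = profiled_nll(s) - best stays below 1.
        double lo = smin, hi = smin;
        for (double s = smin; s > 0.0; s -= 0.01)
            if (profiled_nll(s, n, m, tau) - best < 1.0) lo = s;
        for (double s = smin; s < 20.0; s += 0.01)
            if (profiled_nll(s, n, m, tau) - best < 1.0) hi = s;
        std::cout << "s_hat = " << smin << ", 68% interval [" << lo << ", " << hi << "]\n";
    }
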
Title Monte Carlo based studies of polarized positrons source for the International Linear Collider (ILC)
Speaker Schaelicke, Andreas
Institution DESY
Abstract
The full exploitation of the physics potential of an International Linear
Collider (ILC) in addition to the LHC program will require the development
of polarized positron beams. Having both positron and electron beams
polarized will be a decisive improvement for physics studies, providing new
insight into the structure of couplings and thus access to physics beyond
the Standard Model.
The new concept of a polarized positron source is based on the development
of a circularly polarized photon source. The polarized photons create
electron-positron pairs in a thin target and transfer their polarisation
state to the outgoing leptons.
To achieve a high level of positron polarization, an understanding of the
production mechanisms in the target is crucial for an optimization in terms
of positron yield, which is closely related to the target properties.
In this talk we present a Geant4 based optimization study of the positron
production target for the ILC. 

Title Track reconstruction at the CMS experiment
Speaker Speer, Thomas
Institution University of Zurich, Switzerland
Abstract
An overview of the track reconstruction algorithms used in the Tracker of
the CMS experiment at the LHC will be presented, and some of their
respective features will be discussed.
Properties, results and performance of these algorithms on simulated data
will be shown.
The CMS tracking system features an all-silicon layout consisting of a
pixel detector and a silicon micro-strip tracker.

Title Adaptive filters for track finding
Speaker Strandlie, Are
Institution Gjøvik University College, Norway
Abstract
Because of its recursive nature, the Kalman filter can be and has been used
not only for track fitting but also for track finding. The simplest strategy
is to select the closest compatible observation at each step of the filter.
This turns out to be insufficient in scenarios with high track density and/or
large amounts of noise.
In order to reach high efficiency, several track hypotheses have to be
explored in parallel, resulting in a combinatorial Kalman filter. 
In this contribution we study the application of adaptive estimators
such as the Gaussian-sum filter and the Deterministic Annealing
Filter to track finding. We consider various scenarios with different track
density, contamination, and seed quality. It is shown by simulation studies 
that adaptive methods are competitive alternatives to the combinatorial
Kalman filter, 
and that in some cases there are appreciable gains in speed of the track
finding procedure with respect to the latter.
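
For reference, a minimal sketch of the Kalman-filter building block that both the combinatorial and the adaptive track finders rely on, here for a straight-line track model (state = intercept and slope) with one-dimensional position measurements; noise levels and hits are hypothetical:

    #include <iostream>

    struct State {
        double x0, x1;        // intercept and slope
        double c00, c01, c11; // symmetric covariance matrix
    };

    // Straight-line transport F = [[1, dz], [0, 1]]: x -> F x, C -> F C F^T.
    State predict(State s, double dz) {
        s.x0 += dz * s.x1;
        double c00 = s.c00 + 2 * dz * s.c01 + dz * dz * s.c11;
        double c01 = s.c01 + dz * s.c11;
        s.c00 = c00; s.c01 = c01;                   // c11 unchanged
        return s;
    }

    // Update with a measurement m of the intercept (H = [1, 0]) with variance V.
    State update(State s, double m, double V) {
        double r = m - s.x0;                        // residual
        double S = s.c00 + V;                       // residual covariance
        double K0 = s.c00 / S, K1 = s.c01 / S;      // Kalman gain
        s.x0 += K0 * r;  s.x1 += K1 * r;
        double c00 = (1 - K0) * s.c00;
        double c01 = (1 - K0) * s.c01;
        double c11 = s.c11 - K1 * s.c01;
        s.c00 = c00; s.c01 = c01; s.c11 = c11;
        return s;
    }

    int main() {
        State s = {0.0, 0.0, 100.0, 0.0, 1.0};      // vague initial estimate
        double hits[] = {0.11, 0.22, 0.28, 0.41};   // measured positions per plane
        for (double m : hits) {
            s = predict(s, 1.0);                    // unit spacing between planes
            s = update(s, m, 0.01 * 0.01);          // toy measurement variance
        }
        std::cout << "intercept=" << s.x0 << " slope=" << s.x1 << "\n";
    }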

Title Modelling of non-Gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter
Speaker Strandlie, Are
Institution Gjøvik University College, Norway
Abstract
The Kalman filter has for many years been the default method for track
fitting in high-energy physics tracking detectors. The Kalman filter is
a least-squares estimator and is known to be optimal when all
probability densities involved during the track fit are Gaussian. If any
of the densities deviate from the Gaussian assumption, it is plausible
that a non-linear estimator which better takes the actual shape of the
distribution into account can do better. One such non-linear estimator
is the Gaussian-sum filter[1], which is adequate if the distributions
under consideration can be represented or approximated by Gaussian mixtures.

Quite recently, a two-component Gaussian-mixture approximation to the
multiple scattering distribution has been presented [2]. The availability
of such an approximation opens the way for a treatment of multiple scattering
within the realm of the Gaussian-sum filter, and the main purpose
of this contribution is to present a Gaussian-sum filter for track
fitting, based on the above-mentioned approximation. In a
simulation study within a linear track model the Gaussian-sum filter is
shown to be a competitive alternative to the Kalman filter. Scenarios at
various momenta and various maximum numbers of components in the
Gaussian-sum filter are considered. The difference between the two
approaches is mainly visible in the estimates of the uncertainties of
the track
parameters, particularly at low momenta. At such momenta the
Gaussian-sum filter yields a better estimate of the uncertainties than
the Kalman filter. This
feature could for instance lead to a better estimate of the vertex
position in a subsequent vertex fit.

References:

[1] R. Frühwirth, Track fitting with non-Gaussian noise. Computer
Physics Communications 100 (1997) 1.
[2] R. Frühwirth and M. Regler, On the quantitative modelling of tails
and core of multiple scattering by Gaussian mixtures. Nuclear
Instruments and Methods in Physics Research A 456 (2001) 369.

Title The FEDRA - framework for emulsion data reconstruction and analysis in the OPERA experiment
Speaker Tioukov, Valeri
Institution INFN (Napoli)
Abstract
OPERA is a massive lead/emulsion target for a long-baseline neutrino
oscillation search. More than 90% of the useful experimental data in OPERA
will be produced by the scanning of emulsion plates with automatic
microscopes.
The main goal of the data processing in OPERA will be the search for, the
analysis and the identification of primary and secondary vertices produced by
neutrinos in the lead-emulsion target.

The volume of middle- and high-level data to be analysed and stored
is expected to be of the order of several Gb per event. The storage,
calibration, reconstruction, analysis and visualization of these data are
the task of FEDRA, a system written in C++ and based on the ROOT framework.
The system is now actively used for the processing of test-beam and simulation
data. Several interesting algorithmic solutions permit us to produce very
efficient code for fast pattern recognition in heavy signal/noise conditions.
The system consists of the storage part, the intercalibration and
segment-linking parts, track finding and fitting, vertex finding and fitting,
and kinematical analysis parts. A Kalman filtering technique is used for track
and vertex fitting. A ROOT-based event display is used for interactive
analysis of special events.

Title Neural Triggering System Operating on High Resolution Calorimetry Information
Speaker Torres, Rodrigo
Institution Federal University of Rio de Janeiro
Abstract
For the ATLAS detector, the online trigger system is designed with
three levels. The online triggering system relies on detailed calorimeter
information for achieving high background noise reduction. The first level
uses coarse-grain calorimeter granularity for reducing the input event rate
from 40 MHz to 100 kHz. The second level, implemented by ~500 dual PCs
connected by a gigabit Ethernet network, will use fine-grain calorimeter
granularity in regions of interest originally marked by the first level, to
reduce the event rate further to 1 kHz. The final level, based on ~1600 dual
PCs also connected by gigabit networks, will operate on the full event,
reducing the final rate of events to only 100 Hz.

This paper presents an electron/jet discriminator system for operation in
the second-level trigger. In order to handle the high data
dimensionality, the regions of interest are organized in the form of
concentric ring sums, so that both signal compaction and detection
efficiency can be improved. The ring information is fed into a feedforward
neural network, and this implementation resulted in a 93% electron
detection efficiency for a false-alarm rate of 10%.
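
A minimal sketch of the ring-sum preprocessing described above: cell energies in a region of interest are summed in concentric rings around the hottest cell, compacting the data fed to the neural network; the grid size and energies are hypothetical:

    #include <cmath>
    #include <iostream>
    #include <vector>

    // Sum calorimeter cell energies in concentric rings around the hottest cell.
    std::vector<double> ring_sums(const std::vector<std::vector<double>>& cells,
                                  int nrings) {
        int imax = 0, jmax = 0;
        for (std::size_t i = 0; i < cells.size(); ++i)
            for (std::size_t j = 0; j < cells[i].size(); ++j)
                if (cells[i][j] > cells[imax][jmax]) { imax = (int)i; jmax = (int)j; }
        std::vector<double> rings(nrings, 0.0);
        for (std::size_t i = 0; i < cells.size(); ++i)
            for (std::size_t j = 0; j < cells[i].size(); ++j) {
                int r = (int)std::round(std::hypot(double(i) - imax, double(j) - jmax));
                if (r < nrings) rings[r] += cells[i][j];
            }
        return rings;
    }

    int main() {
        // Toy 5x5 region of interest with an electron-like compact deposit.
        std::vector<std::vector<double>> roi = {
            {0.1, 0.2, 0.3, 0.2, 0.1},
            {0.2, 1.0, 2.5, 1.1, 0.2},
            {0.3, 2.6, 9.0, 2.4, 0.3},
            {0.2, 1.2, 2.3, 1.0, 0.2},
            {0.1, 0.2, 0.3, 0.2, 0.1}};
        for (double e : ring_sums(roi, 4)) std::cout << e << " ";   // inputs to the NN
        std::cout << "\n";
    }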

The system will be implemented in the Athena environment. This environment
emulates the trigger behavior, so that algorithms for the high level trigger
can be efficiently developed and tested both offline and online.

In the second level, there are specific processors for specific purposes.
Among them, the level-2 processors are used for event selection. Inside
them, there are several worker threads, each one handling one event. For
comparison purposes, a single worker thread will be implemented on a digital
signal processor (DSP) with single-instruction, multiple-data-stream
(SIMD) capabilities, and the final results will be compared to those obtained
in the Athena environment.

Title Performance of Statistical Learning Methods
Speaker Zimmermann, Jens
Institution MPI für Physik, München
Abstract
Examples from the ep experiment H1 and from a future
linear collider will be used to demonstrate the power of
statistical learning methods. Significant improvements compared to
classical algorithms will be shown and different learning methods
will also be compared against each other. Important guidelines
regarding the performance evaluation, statistical and systematic
uncertainties and the comparison of different methods will be given.

Title Statistical Learning Basics
Speaker Zimmermann, Jens
Institution MPI für Physik, München
Abstract
In this talk an introduction to statistical learning will be
given. The most famous learning methods will be presented and
interpreted. Basic prerequisites and common guidelines for the
correct and successful application of statistical learning methods
will be discussed. Examples from the X-ray satellite project XEUS
and from the Cherenkov telescope MAGIC will illustrate the
discussed topics.

Title On estimation of the exponential distribution parameter under conditions of small statistics and observation interval.
Speaker Zlokazov, Victor
Institution FNLPh, JINR, Dubna, Russia
Abstract
If the distribution function of a random quantity \xi is
 P(\xi < t) = 1 - exp(-t/T),     t \in [0,\infty),
then the least favorable conditions for the estimation of the parameter T are
poor data statistics and/or a small observation interval
(t \in [0,B], B << T).

In particular, the equation of the maximum likelihood estimator for
T is practically unsolvable in this case.
Let us introduce two random quantities: n_1 and n_2, the sums of
registered decays in the intervals [0,B] and [B,2B], respectively.
It is obvious that
  \hat{E} n_1 = N (1 - exp(-B/T)),
  \hat{E} n_2 = N (exp(-B/T) - exp(-2B/T)).
Here \hat{E} is the operator of expectation.
We can build the following estimator of T:
  \hat{T} = B / ln(n_1/n_2).
Obviously, only exponential-like curves are suitable for the analysis.
For instance, use can be made of the following criterion for testing
the inequality n_2 < n_1 for statistical significance:
 n_1 > n_2 + 3 \cdot \sigma(n_2).
From that we can obtain restrictions on
 - the statistics level N for a given B/T,
 - or the length of the observation interval B for a given statistics N,
which provide for a successful analysis of such data.

The restrictions are very hard and often unrealistic. Here the idea of
estimating a lower bound on the parameter instead of the parameter itself is
very fruitful. The lower parameter bound is a quantity which, with a
certain (calculable) probability, is less than T but greater than
the length of the observation interval B.
In our case such an estimator can be obtained, e.g., from the relation
 n_1 - n_2 <= k \sigma(n_1+n_2), on condition that n_1 > n_2,
where k is any number and \sigma is the standard deviation.
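
A minimal numerical sketch of the two-interval estimator described above, assuming Poisson counts so that sigma(n_2) is approximated by sqrt(n_2); the input counts are hypothetical:

    #include <cmath>
    #include <iostream>

    // Estimate T from counts n1 in [0,B] and n2 in [B,2B]; returns -1 if the
    // difference n1 - n2 is not significant (n1 <= n2 + 3*sqrt(n2), Poisson sigma).
    double estimate_T(double n1, double n2, double B) {
        if (n1 <= n2 + 3.0 * std::sqrt(n2)) return -1.0;
        return B / std::log(n1 / n2);
    }

    int main() {
        double B = 1.0;                             // observation half-interval
        double n1 = 120, n2 = 80;                   // hypothetical registered decays
        double T = estimate_T(n1, n2, B);
        if (T > 0)
            std::cout << "T estimate = " << T << " (in units of B)\n";
        else
            std::cout << "counts not significant; only a lower bound on T is possible\n";
    }
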
Title High precision numerical accuracy in Physics research
Speaker de Dinechin, Florent
Institution LIP, École Normale Supérieure de Lyon
Title Goodness-of-fit tests in many dimensions
Speaker van Hameren, Andre
Institution Universitaet Mainz
Abstract
A method is presented to construct goodness-of-fit statistics in many
dimensions for which the distribution of all possible test results, in the
limit of an infinite number of data, becomes Gaussian if the number of
dimensions also becomes infinite. Furthermore, an explicit example is
presented for which this distribution depends almost exclusively on the
expectation value and the variance of the statistic for any dimension larger
than one.


