ACAT 2005

Abstracts of Talks by Author

Title The Graphics Editor in ROOT
Speaker Antcheva, Ilka
Institution CERN
Abstract
The ROOT graphics editor is split into discrete units, so-called
object editors. This makes the graphical user interface easier to design
and to adapt to different user profiles.

Title Parallel interactive and batch HEP data analysis with PROOF
Speaker Biskup, Marek
Institution CERN
Abstract
The Parallel ROOT Facility, PROOF, enables a physicist to analyze
and understand much larger data sets on a shorter time scale. It makes use
of the inherent parallelism in event data and implements an architecture
that optimizes I/O and CPU utilization in heterogeneous clusters with
distributed storage. The system provides transparent and interactive access
to gigabytes of data today. Being part of the ROOT framework, PROOF
inherits the benefits of a performant object storage system and a wealth
of statistical and visualization tools.

In this talk we will describe the latest developments towards a closer
integration of PROOF into the ROOT user environment, e.g. support for the
popular TTree::Draw() interface for PROOF-based trees, easy PROOF-based
tree access via the tree viewer GUI, and PROOF session access via the ROOT
browser. We will also outline how we plan to extend PROOF to support an
"interactive" batch mode in which the user can disconnect from and
reconnect to several long-running PROOF sessions. This feature is
especially interesting in a Grid environment where the data is globally
distributed.
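
As an illustration of the TTree::Draw() integration mentioned above, the
sketch below shows how a PROOF-backed chain might be steered from a ROOT
session. This is only a sketch: the host name, tree name and file URLs are
hypothetical placeholders, and the calls used (TProof::Open(),
TChain::SetProof()) follow later ROOT releases and may differ in detail
from the version described in this talk.

    // Sketch: steering a PROOF query through the familiar TTree::Draw()
    // interface. Host, file URLs and variable names are hypothetical.
    #include "TChain.h"
    #include "TProof.h"

    void proof_draw_example() {
       TProof::Open("proof-master.example.org");   // connect to the cluster
       TChain chain("Events");                     // hypothetical tree name
       chain.Add("root://se.example.org//data/run_*.root");
       chain.SetProof();                 // route queries through PROOF
       chain.Draw("pt", "pt > 5.0");     // histogram is filled in parallel
    }
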
Title DAQ software for SND detector
Speaker Bogdanchikov, Alexander
Institution Budker Institute of Nuclear Physics
Abstract
The report describes the data acquisition system software for
the SND detector experiments at the new e+e- collider VEPP-2000
(Novosibirsk), which will operate in the energy range 0.4-2.0 GeV
with an expected luminosity of 10^32 cm^-2 s^-1. The system architecture
is presented and an overview of its features is given.

The distinctive features of the SND data acquisition system are the
following. Deep buffering of readout events decouples data readout from
data processing. The computer farm for event processing and selection is
implemented in such a way as to allow linear scaling of the computing
power. The operator interface is implemented with Web technologies. A
state machine, a process starter, and process control and recovery
services are designed to control the system processes. The system
configuration and data-taking conditions are stored in a relational (SQL)
database. Database access is implemented through an object-oriented API
designed for this project. The event processing and selection modules are
embedded into a highly configurable software framework.

The DAQ software provides a high level of robustness, flexibility and
scalability.

Title Towards the operation of the Italian Tier-1 for CMS: lessons learned from the CMS Data Challenge
Speaker Bonacorsi, Daniele
Institution CNAF - INFN Italy
Abstract
After the CMS Data Challenge in 2004 (DC04) - which was devised to
test several key aspects of the CMS Computing Model - a deeper insight
into most of the crucial issues in the operation of a Tier-1 within the
overall CMS infrastructure was achieved. In particular, at the involved
Italian CNAF-INFN Tier-1 many improvements have been made in the year
since then, concerning the data management and the distribution topology
using the CMS PhEDEx tool, the coexistence of traditional local farm
operations and official Grid-based CMS Monte Carlo production, the
development and usage of the CRAB tool to give distributed users efficient
access to DST data for analysis via Grid tools, the long-term local
archiving and custodial responsibility (e.g. MSS with a Castor back-end),
the daily CMS operations on Tier-1 resources shared among LHC (and other)
experiments, and so on. The INFN Tier-1 resources, set-up and
configuration are reviewed and discussed here, with a view to the overall
operation of the regional centre in the near future, when real data from
the LHC will be available.

Title Twistor Approach to One Loop Amplitudes
Speaker Brandhuber, Andreas
Institution Queen Mary University of London
Abstract
Recently an interesting connection between twistor-string
theory and Yang-Mills theories has been proposed. This
observation has led to major advances in the calculation of scattering
amplitudes in gauge theories. We will review some of the new
"twistor inspired" techniques with particular focus on
applications to loop amplitudes.

Title Bitmap Indices for Fast End-User Physics Analysis in ROOT
Speaker Brun, Rene
Institution CERN
Abstract
Rene Brun (1), Philippe Canal (2), Kurt Stockinger (3), and Kesheng Wu (3)

(1) European Organization for Nuclear Research, 1211 Geneva, Switzerland
(2) Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
(3) Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

Most physics analysis jobs involve multiple selection steps, known as
cuts, on the input data.  A common strategy to implement these cuts is
to read all input data from files and then process the cuts in
memory.  In many applications the number of variables used to define
these cuts is a relatively small portion of the overall data set. Reading
all variables into memory before performing the cuts is often unnecessary.
In this paper, we describe an integration effort that can significantly
reduce this unnecessary reading by using an efficient compressed bitmap
index technology. The primary advantage of this index is that it can
process arbitrary combinations of cuts very efficiently, while most
other indexing technologies suffer from the "curse of dimensionality"
as the number of cuts increases.  By integrating this index technology
with the ROOT analysis framework, end-users can benefit from the added
efficiency without having to modify their analysis programs. This new
algorithm could be particularly interesting when querying large event
metadata catalogues.
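
The core idea can be illustrated with a toy, uncompressed bitmap index:
each cut corresponds to a precomputed bit vector over the events, and an
arbitrary combination of cuts reduces to cheap bitwise operations,
independent of how many variables the cuts involve. This is a sketch of
the principle only; the actual integration uses compressed bitmaps and is
tied into ROOT's I/O.

    // Toy bitmap index: one bit per event and per cut. Combining two cuts
    // is a bitwise AND over machine words; no event data is re-read.
    #include <cstdint>
    #include <vector>

    using Bitmap = std::vector<std::uint64_t>;

    Bitmap build_index(const std::vector<double>& var, double lo, double hi) {
       Bitmap bits((var.size() + 63) / 64, 0);
       for (std::size_t i = 0; i < var.size(); ++i)
          if (var[i] >= lo && var[i] < hi)
             bits[i / 64] |= std::uint64_t(1) << (i % 64);
       return bits;
    }

    Bitmap combine(const Bitmap& a, const Bitmap& b) {   // a AND b
       Bitmap out(a.size());
       for (std::size_t w = 0; w < a.size(); ++w) out[w] = a[w] & b[w];
       return out;
    }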

Title FORM in CompHEP
Speaker Bunichev, Slava
Institution SINP MSU
Abstract
The use of the FORM language for symbolic calculations in CompHEP is
described. We present the current status of this project and discuss
plans for future development. We also show some examples for real
processes.

Title Geometrical methods in loop calculations
Speaker Davydychev, Andrei
Institution Schlumberger
Abstract
A geometrical way to calculate dimensionally-regulated Feynman 
diagrams is reviewed. In the one-loop N-point case, the 
results can be related to certain volume integrals in 
non-Euclidean geometry. Analytical continuation of the results
to other regions of the kinematical variables is discussed. As 
an example, the dimensionally-regulated three-point function 
is considered, including all orders of its epsilon-expansion.

Title The CMS analysis chain in a distributed environment
Speaker De Filippis, Nicola
Institution Dipartimento di Fisica dell'Universita' e del Politecnico di Bari e INFN
Abstract
The CMS (Compact Muon Solenoid) collaboration is making a big effort
to define the analysis model and to develop software tools with the
purpose of analyzing several million simulated and real data events,
by a large number of people, in many geographically distributed sites.
From the computing point of view, the most complex issues in remote
analysis are data discovery and data access. Software tools were
developed in order to move data, make them available to the full
international community, and validate them for the subsequent analysis.
The batch analysis processing is performed with purpose-built workload
management tools, which are mainly responsible for job preparation and
job submission. Job monitoring and output management are implemented as
the last part of the analysis chain. Grid tools provided by the LCG
project are being tried out to gain access to the data and the resources,
by providing a user-friendly interface to the physicists submitting the
analysis jobs.
An overview of the current implementation and of the interactions between
the above components of the CMS analysis system is presented in this
work.

Title Interactive Analysis Environment of Unified Accelerator Libraries
Speaker Fine, Valeri
Institution Brookhaven National Laboratory
Abstract
Unified Accelerator Libraries (UAL, http://www.ual.bnl.gov) is an
open accelerator simulation environment addressing a broad spectrum of
accelerator tasks, ranging from online-oriented efficient modeling to
full-scale realistic beam dynamics studies. The paper introduces a new
package integrating UAL simulation algorithms with a Qt-based Graphical
User Interface and an open collection of analysis and visualization
components. The primary user application is implemented as an interactive
and configurable Accelerator Physics Player whose extensibility is
provided by a plug-in architecture. Its interface to data analysis and
visualization modules is based on the Qt layer (http://root.bnl.gov)
developed by the STAR experiment. The present version embodies the ROOT
data analysis framework (http://root.cern.ch), the Qt/Root package
supported by STAR (http://www.star.bnl.gov) and the Coin 3D graphics
library (http://www.coin3d.org).

Title Precision control and GRACE
Speaker Fujimoto, Junpei
Institution KEK
Abstract
Precision control is indispensable for large-scale computations such as
those done with GRACE, the system for automatic Feynman diagram
calculations. Recently, Hitachi Co. Ltd. developed a new library of
quadruple and octuple precision for FORTRAN codes. It also has the
capability to report information on bits lost during operations. Using
this new library, the 1-loop corrections to e+e- -> e+e-tau+tau- were
analyzed, where it was known that quadruple precision is required at some
phase-space points. The new library not only locates where the loss of
bits occurs, but also tells us the precision of the results themselves.

Title Simulation and Reconstruction Software for the ILC
Speaker Gaede, Frank
Institution DESY
Abstract
The International Linear Collider project is in a very active R&D phase
where currently three different detector concepts are studied in 
international working groups. In order to investigate the various 
physics aspects of the different concepts it is highly desirable to 
have a set of common software tools. In this talk we present some 
of the software packages that have been developed for the ILC. 
LCIO is a persistency framework that defines the data model from 
the generator to the final analysis step and serves as a standard 
for the exchange of data files throughout the ILC community.
Marlin is a modular C++ application framework that allows the 
distributed development of reconstruction and analysis software 
based on LCIO. Marlin is complemented with LCCD, a tool for storing and 
retrieving conditions data and an abstract geometry definition.

Title Grid Technology in Production at DESY
Speaker Gellrich, Andreas
Institution DESY
Abstract
DESY is one of the world's leading centers for research with particle
accelerators and a center for research with synchrotron light. The
hadron-electron collider HERA houses three experiments which are
taking data and will be operated until 2007.
 
The H1 and ZEUS collaborations face a growing demand for Monte Carlo
events after the recent luminosity upgrade of the collider. Grid technology
turns out to be an attractive way to meet this
challenge. The core site at DESY acts as a central hub to send production
jobs to sites which incorporate Grid resources in the dedicated HERA VOs.

The DESY Grid Infrastructure deploys the LCG-2 middleware, giving DESY a
spot on the worldwide map of active LCG-2 sites. The DESY Production Grid
provides Grid core services, including all components to make DESY a
complete and independent Grid site. In addition to hosting and supporting
dedicated VOs for H1 and ZEUS, DESY fosters the Grid activities of the LQCD
community and the International Linear Collider Group.

Data management is a key aspect of Grid computing in HEP. In cooperation
with Fermilab, DESY has developed a Storage Element (SE) which consists of
dCache as the core storage system and an implementation of the Storage
Resource Manager (SRM). Access to the entire DESY data space of 0.5 PB is
provided by a dCache-based SE.

In the contribution to ACAT 2005 we will describe the DESY Grid
infrastructure in the context of the DESY Grid activities and present
operation experiences and future plans.
Title Algorithm for Computing Groebner Bases for Linear Difference Systems
Speaker Gerdt, Vladimir
Institution Joint Institute for Nuclear Research

Title GPLs for Bhabha scattering
Speaker Gluza, Janusz
Institution DESY-Zeuthen
Abstract
The two-dimensional harmonic polylogarithms (GPL functions) appear
in calculations of multi-loop integrals. We discuss these functions
in the context of analytical solutions for master integrals in
the case of massive Bhabha scattering in QED. We derive a set of
irreducible GPLs up to weight 4, with analytical representations up to
weight 3. Further, we discuss conformal transformations of GPLs, as well
as transformations between master integrals in the s- and t-channels.
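
For reference, the GPLs discussed here are iterated integrals; with
conventions that vary slightly across the literature, they can be defined
recursively as

    G(a_1, a_2, \ldots, a_n; x) = \int_0^x \frac{dt}{t - a_1}\,
                                  G(a_2, \ldots, a_n; t),
    \qquad G(; x) = 1, \qquad
    G(\underbrace{0, \ldots, 0}_{n}; x) = \frac{\ln^n x}{n!},

where the number n of indices is called the weight.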

Title An analytic formula for track extrapolation in inhomogeneous magnetic field
Speaker Gorbunov, Sergey
Institution DESY
Abstract
Track propagation through an inhomogeneous magnetic field using an
analytic expression is presented.
The analytic formula has been derived under very general assumptions on
the magnetic field, and the precision of the extrapolation does not depend
on the shape of the field.
Results of the implementation in the CBM track fitting procedure, based on
the Kalman filter, are presented and compared with an extrapolation based
on the fourth-order Runge-Kutta method.
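
For context, the reference method mentioned above is the classical
fourth-order Runge-Kutta integration of the equation of motion. A generic
single RK4 step, applicable to a track state vector, looks as follows (a
sketch; the state layout and the field lookup inside the derivative
function are left abstract):

    // One classical 4th-order Runge-Kutta step for y' = f(z, y), as used
    // for reference track propagation. f() hides the B-field lookup.
    #include <vector>

    using State = std::vector<double>;
    using Deriv = State (*)(double z, const State& y);

    State rk4_step(Deriv f, double z, const State& y, double h) {
       auto axpy = [](const State& a, const State& b, double s) {
          State r(a.size());
          for (std::size_t i = 0; i < a.size(); ++i) r[i] = a[i] + s * b[i];
          return r;
       };
       State k1 = f(z, y);
       State k2 = f(z + h / 2, axpy(y, k1, h / 2));
       State k3 = f(z + h / 2, axpy(y, k2, h / 2));
       State k4 = f(z + h, axpy(y, k3, h));
       State out(y.size());
       for (std::size_t i = 0; i < y.size(); ++i)
          out[i] = y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
       return out;
    }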

Title Elastic Neural Net for standalone RICH ring finding
Speaker Gorbunov, Sergey
Institution DESY
Abstract
The Elastic Neural Net is implemented for finding rings
in the Cherenkov detector. The method does not require any
prior track information and can therefore be used for triggering.
Tests of the method on the CBM RICH detector show very good
efficiency and extremely high speed.


Title Three loop renormalization of QCD in various nonlinear gauges
Speaker Gracey, John
Institution University of Liverpool
Abstract
We discuss the three-loop renormalization of QCD in various
nonlinear gauges such as the Curci-Ferrari and maximal abelian
gauges. The anomalous dimensions are determined by respecting
the Slavnov-Taylor identities, and hence the renormalization group
function of the BRST dimension-two operator is deduced. Given the
large number of Feynman diagrams to be evaluated with the Mincer
algorithm written in Form, the consistency checks on the results are
discussed.
The programming issues concerning working with a colour group
split into diagonal and off-diagonal sectors are also considered.

Title Multidimensional numerical integration with Cuba
Speaker Hahn, Thomas
Institution Max-Planck-Institut für Physik
Abstract
Cuba is a library for multidimensional numerical integration.  
It features four independent algorithms, Vegas, Suave, 
Divonne, and Cuhre. 
All four are general-purpose methods (i.e. they do not require a
particular form of the integrand) and can integrate vector
integrands.  The Cuba routines can be called from Fortran, 
C/C++, and Mathematica.  Their invocation is very similar and 
it is thus easy to exchange routines for comparison purposes.
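
To illustrate what a vector integrand means in practice, here is a
schematic plain Monte Carlo estimate of several integral components from
the same sample points. This is deliberately not the Cuba API (whose exact
signatures are documented with the library), only the calling pattern that
all four routines generalize:

    // Schematic plain Monte Carlo with a vector integrand over the unit
    // square: several components are estimated from the same points.
    #include <cstdlib>
    #include <vector>

    void integrand(const double x[2], double f[2]) {   // example integrand
       f[0] = x[0] * x[1];        // component 1
       f[1] = x[0] + x[1];        // component 2
    }

    std::vector<double> mc_integrate(int ncomp, int nsamples) {
       std::vector<double> sum(ncomp, 0.0);
       for (int n = 0; n < nsamples; ++n) {
          double x[2] = { std::rand() / (double)RAND_MAX,
                          std::rand() / (double)RAND_MAX };
          double f[2];
          integrand(x, f);
          for (int c = 0; c < ncomp; ++c) sum[c] += f[c];
       }
       for (int c = 0; c < ncomp; ++c) sum[c] /= nsamples;
       return sum;
    }
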
Title Tagging B Jets associated with heavy neutral MSSM Higgs Bosons
Speaker Heikkinen, Aatos
Institution Helsinki Institute of Physics
Abstract
Since a neural network (NN) approach has been shown to be applicable to the
problem of Higgs boson detection at LHC [1, 2], we study the use of NNs to the
problem of tagging b jets in pp$\rightarrow\rm b\bar{\rm b}$H$_{\rm SUSY}$,
H$_{\rm SUSY}\rightarrow\tau\tau$ in the Compact Muon Solenoid experiment [3, 4].
B tagging can be used to separate the Higgs events with associated b jets from
the Drell-Yan background, for which the associated jets are mostly light quark
and gluon jets.

We train the multi-layer perceptrons (MLPs) available in the
object-oriented data analysis framework ROOT [5]. The following learning
methods are evaluated: the steepest descent algorithm, the
Broyden-Fletcher-Goldfarb-Shanno algorithm, and variants of conjugate
gradients. The ROOT feature of generating standalone C++ classifier code
is utilized.

We compare the b tagging performance of the MLPs with another ROOT-based
feed-forward NN tool, NeuNet [6], which uses a common back-propagation
learning method.

In addition, we demonstrate the use of the self-organizing map program
package (SOM_PAK) and the learning vector quantization program package
(LVQ_PAK) [7] in the b tagging problem. The background discriminating
power of these NN tools is compared.

References

[1] I. Iashvili and A. Kharchilava, $\rm H\rightarrow ZZ^*\rightarrow 4\ell$
Signal Separation Using a Neural Network, CMS TN-1996/100.

[2] M. Mjahed, Higgs search at LHC by neural networks,
Nuclear Physics B 140 (2005) 799-801.

[3] F. Hakl et al., Application of neural networks to Higgs boson search,
Nucl. Instr. & Meth. in Phys. Res. A 502 (2003) 489-491.

[4] S. Lehti, Tagging b-jets in $\rm b\bar{b}H_{SUSY}\rightarrow\tau\tau$,
CMS NOTE-2001/019; G. Segneri and F. Palla, Lifetime Based b-tagging with
CMS,
CMS NOTE-2002/046.

[5] ROOT - An Object Oriented Data Analysis Framework,
Proceedings AIHENP'96 Workshop, Lausanne, Sep. 1996, Nucl. Inst. & Meth.
in Phys. Res. A 389 (1997) 81-86.

[6] J.P. Ernenwein, NeuNet, http://e.home.cern.ch/e/ernen/www/NN.

[7] T. Kohonen, Self-Organizing Maps, Springer-Verlag, Heidelberg, 1995.

Title Factorization Method of Tree Feynman Amplitudes
Speaker Kaneko, Toshiaki
Institution KEK
Abstract
An algorithm is proposed to accelerate the calculation of tree processes
by classifying Feynman graphs and by factorizing Feynman amplitudes.
The Feynman graphs of a process are classified without duplication based
on a graph-theoretical method. As the Feynman graphs in each class share
a common vertex, the Feynman amplitudes are factorized in terms of this
vertex. The classification and factorization are applied recursively,
combined with the method for generating Feynman graphs. Calculations in
electroweak theory become 10-100 times faster than with the traditional
method for processes with 6 final-state particles.

Title Fast integration using quasi-random numbers
Speaker Kerzel, Ulrich
Institution IEKP, Universitaet Karlsruhe
Abstract
We present different low-discrepancy series as
sources of quasi-random numbers.
These are useful for numerical integration
in 2-15 dimensions since they converge
faster than both equally spaced points (used in one
dimension) and pseudo-random points (used
in many dimensions).
Examples and applications from experimental
and theoretical high-energy physics are given.
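
A minimal example of such a low-discrepancy series is the Halton sequence,
whose d-dimensional points are built from radical-inverse functions in the
first d prime bases (a sketch for illustration; the talk's comparisons may
use other series such as Sobol):

    // Radical inverse: the n-th Halton coordinate in a given base.
    // A d-dimensional point uses the first d primes as bases, e.g.
    // point n in 2-D is { radical_inverse(n, 2), radical_inverse(n, 3) }.
    double radical_inverse(unsigned n, unsigned base) {
       double inv = 1.0 / base, result = 0.0, frac = inv;
       while (n > 0) {
          result += (n % base) * frac;
          n /= base;
          frac *= inv;
       }
       return result;
    }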

Title NeuroBayes - a robust classification and probability density reconstruction algorithm
Speaker Kerzel, Ulrich
Institution IEKP, Universitaet Karlsruhe
Abstract
NeuroBayes is a sophisticated neural network
based on Bayesian statistics that solves complex
classification and density reconstruction tasks.
Several regularisation procedures suppress
statistical noise and thus avoid overtraining.
Correlations between the input variables and
missing information are handled automatically.
Several highly successful applications from
experimental high-energy physics
and industry are presented.

Title Alignment of the ZEUS Micro-Vertex Detector Using Cosmic Tracks
Speaker Kohno, Takanori
Institution University of Oxford
Abstract
The ZEUS Micro-Vertex Detector (MVD) was installed in ZEUS after the HERA
upgrade in 2000. The MVD is a precision position detector consisting of
712 single-sided silicon strip detectors. The alignment of the barrel MVD
has been performed in units of ladders using cosmic tracks. The procedure
used is an iterative chi^2 minimization, where the chi^2 is defined
locally for each ladder. The procedure is numerically stable, since it
only requires the inversion of 30 6x6 matrices, and it is reasonably fast
in spite of the iterative approach.
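
Schematically, each iteration accumulates, per ladder, the normal
equations of the local chi^2 and solves the resulting 6x6 system for the
alignment corrections (three translations and three rotations). A sketch,
with the residuals and derivatives assumed to come from the track model:

    // Per-ladder normal equations of a local chi^2: accumulate
    // A += J^T J / sigma^2 and b += J^T r / sigma^2 over the hits, then
    // solve the 6x6 system A * delta = b for the alignment corrections.
    #include <array>

    constexpr int N = 6;                   // 3 translations + 3 rotations
    using Mat = std::array<std::array<double, N>, N>;
    using Vec = std::array<double, N>;

    void accumulate(Mat& A, Vec& b, const Vec& deriv,
                    double resid, double sigma2) {
       for (int i = 0; i < N; ++i) {
          b[i] += deriv[i] * resid / sigma2;
          for (int j = 0; j < N; ++j)
             A[i][j] += deriv[i] * deriv[j] / sigma2;
       }
    }

    // Gauss elimination without pivoting; adequate for the
    // well-conditioned 6x6 systems of this problem.
    Vec solve(Mat A, Vec b) {
       for (int k = 0; k < N; ++k)
          for (int i = k + 1; i < N; ++i) {
             double f = A[i][k] / A[k][k];
             for (int j = k; j < N; ++j) A[i][j] -= f * A[k][j];
             b[i] -= f * b[k];
          }
       Vec x{};
       for (int i = N - 1; i >= 0; --i) {
          double s = b[i];
          for (int j = i + 1; j < N; ++j) s -= A[i][j] * x[j];
          x[i] = s / A[i][i];
       }
       return x;
    }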

Title Optimization of Lattice QCD codes for the AMD Opteron processor
Speaker Koma, Miho
Institution DESY
Abstract
We report the current status of the new Opteron cluster at DESY
Hamburg, including benchmarks.
Details of the optimization using SSE/SSE2 instructions and
the effective use of prefetch instructions are discussed.
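
As an illustration of the kind of tuning discussed (the actual lattice
kernels are more involved), software prefetching with the GCC builtin
hides memory latency by requesting data a fixed distance ahead of the
current loop position:

    // Sketch: software prefetch in a streaming loop, using a GCC builtin.
    // The prefetch distance (here 64 elements) is tuned empirically.
    void axpy(double* y, const double* x, double a, int n) {
       for (int i = 0; i < n; ++i) {
          __builtin_prefetch(&x[i + 64], 0, 0);  // read prefetch, far ahead
          __builtin_prefetch(&y[i + 64], 1, 0);  // write prefetch
          y[i] += a * x[i];       // prefetching past the end is harmless
       }
    }
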
Title Talk cancelled
Speaker Kotikov, Anatoly
Institution Joint Institute for Nuclear Research

Title Analysis of SCTP and TCP based communication in high-speed cluster
Speaker Kozlovszky, Miklos
Institution BUTE
Abstract
Performance and financial constraints are pushing modern DAQs (Data
Acquisition Systems) to use distributed cluster environments instead of
monolithic one-box systems. Inside the cluster, the communication layer of
the nodes has to meet outstanding performance requirements. We are
currently investigating different network protocols that could meet the
requirements of high-speed/low-latency peer-to-peer communication within a
DAQ system. We have carried out various performance measurements with TCP
and SCTP over Gigabit Ethernet. We focus on Gigabit Ethernet because this
transport medium is broadly deployed and cost efficient, and it has a much
better cost/throughput ratio than other available communication
alternatives (e.g. Myrinet, Infiniband).
To reach the highest throughput and minimize latency during data transfer,
we have made both software and hardware tunings in the pilot system. On
the hardware side we have increased the number of network interface cards,
the memory buffers, and the CPU performance. On the software side we have
used independent pending queues, multi-streaming, and multi-threading for
both protocols.
The major topics investigated include: blocking versus non-blocking
communication, multi-rail versus single-rail connections, and jumbo frame
usage. We discuss the performance results of single/multi-stream
peer-to-peer communication with TCP and SCTP and give an overview of the
protocol overhead and the CPU and memory usage.
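
For reference, SCTP's multi-streaming, one of the features tuned above, is
exposed on Linux roughly as follows (a sketch using the lksctp interface;
error handling omitted):

    // Sketch: sending on several SCTP streams of one association
    // (lksctp, netinet/sctp.h on Linux). Error handling omitted.
    #include <netinet/in.h>
    #include <netinet/sctp.h>
    #include <sys/socket.h>

    int send_on_stream(int sd, const sockaddr_in* to,
                       const char* buf, size_t len, uint16_t stream_no) {
       return sctp_sendmsg(sd, buf, len,
                           (sockaddr*)to, sizeof(*to),
                           0 /*ppid*/, 0 /*flags*/, stream_no,
                           0 /*ttl*/, 0 /*context*/);
    }
    // The socket itself: socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP)
    // for a one-to-one style association.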

Title New event generators for high-energy physics
Speaker Krauss, Frank
Institution TU Dresden
Abstract
In this talk, recent developments in the field of event generators
for high-energy physics will be presented.

Title Implementation of vertex form factors in grid-oriented version of CompHEP
Speaker Kryukov, Alexander
Institution SINP MSU
Abstract
CompHEP is a powerful system for calculations in HEP. One of the
advantages of CompHEP is the possibility to use user-defined QFT models.
However, such models are subject to some restrictions. In particular, a
model cannot contain a vertex which depends on a scalar function of the
incoming momenta - a form factor. A non-trivial example of such a model
is SUSY. The authors describe a general approach to the implementation of
such objects in the grid-oriented version of CompHEP.

Title A segmented principal component analysis applied to calorimetry information at ATLAS
Speaker Lima Jr, Herman
Institution UFRJ
Abstract
Authors: H. P. Lima Jr, J. M. de Seixas

In the new particle collider currently being constructed at CERN, the Large
Hadron Collider, two bunches of protons will collide at every 25 ns, producing
a huge amount of data to be processed.  These data include both the physics of
interest, like the signatures of the higgs boson, and the background noise. In
this scenario, complex trigger systems need to be designed by each experiment
in order to select only the interesting events. The ATLAS trigger system
consists of three distinct levels of event selection. Each trigger level
should perform specific algorithms to select only the events with high
probabiliy of interesting physics. From an initial bunch crossing rate of
40 MHz, the ATLAS trigger system will select events up to 100 Hz to permanent
storage. The first level trigger looks at detector data with reduced
granularity in order to take a fast decision, delivering events to the second
level at a maximum rate of 100 kHz. At the second level, complex algorithms
operate with the total granularity of the detector, guided by Region of
Interests (RoIs), which contains interesting features of the events. This
second level reduces the event rate to less than 1 kHz. The last step of
selection, the Event Filter, performs even more complex algorithms to reduce
the event rate to a maximum of 100 Hz, which corresponds to the data to be
permanently stored for offline analysis. The three levels of selection make
use of the information provided by the calorimeter system of ATLAS, due to
the fast response of the detectors and the detailed measurements achieved.  

Because of the highly segmented calorimetry environment present at ATLAS, and
also due to the fine-grain granularity of the calorimeter system, an
interesting idea is to reduce the event dimension in order to speed up the
selection process and decrease the computational load. This paper describes
the application of a segmented principal component analysis (PCA) in the task
of dimensionality reduction at the second level trigger of ATLAS. The
segmented PCA is proposed in order to explore the high segmentation available
and the different levels of granularity present at each segment. The reduced
dimension of the processed events will allow faster and higher discrimination
efficiency by using neural network processing. 
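
Schematically, the PCA applied per calorimeter segment diagonalizes the
sample covariance matrix and keeps only the leading components. The sketch
below extracts the leading principal component by power iteration, a
simplification of the full segmented analysis:

    // Sketch: leading principal component of centered data (mean already
    // subtracted per variable) via power iteration on C = X^T X.
    #include <cmath>
    #include <vector>

    using Vec = std::vector<double>;

    Vec leading_component(const std::vector<Vec>& x, int iters = 100) {
       const std::size_t d = x[0].size();
       Vec v(d, 1.0);
       for (int it = 0; it < iters; ++it) {
          Vec w(d, 0.0);
          for (const Vec& row : x) {              // w += (row . v) * row
             double dot = 0.0;
             for (std::size_t k = 0; k < d; ++k) dot += row[k] * v[k];
             for (std::size_t k = 0; k < d; ++k) w[k] += dot * row[k];
          }
          double norm = 0.0;
          for (double c : w) norm += c * c;
          norm = std::sqrt(norm);
          for (std::size_t k = 0; k < d; ++k) v[k] = w[k] / norm;
       }
       return v;   // projecting events onto the leading components
    }               // reduces the event dimension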

Title ThePEG, Pythia7, Herwig++ and Ariadne
Speaker Loennblad, Leif
Institution Lund University
Abstract
I present the status of the ThePEG project for creating a common
platform for implementing C++ event generators. I also describe briefly
how the new versions of Pythia, Herwig and Ariadne are implemented in this
framework.

Title Fully automated calculation in fermion scattering
Speaker Lorca, Alejandro
Institution DESY
Abstract
The package aITALC has been developed for fully automated calculations of
two-fermion production at e^+ e^- colliders and other similar reactions.
We emphasize the connection and interoperability between the different
modules required for the calculation and the external tools DIANA, FORM
and LOOPTOOLS. Results for e^+ e^- -> e^+ e^-, f \bar{f}, b\bar{s} are
presented.

Title Energy Reconstruction for a Hadronic Calorimeter Using Neural Networks
Speaker Magacho da Silva, Paulo Vitor
Institution Federal University of Rio de Janeiro
Abstract
P. V. M. da Silva,  J. M. Seixas, L. P. Caloba

In high energy physics experiments with colliders, calorimeters play a very
important role, because they are used to measure the energy of the incoming
particles from the collisions. Depending on the granularity of these
calorimeters, they can also provide useful information about the type of
particle that interacted with the calorimeter.

However, some calorimeters have a non-compensating response (e/h>1), which
degrades the resolution and linearity of the calorimeter response for
hadrons. One of these calorimeters is the hadronic calorimeter of the
ATLAS experiment, the Tilecal. In order to improve the response of the
Tilecal calorimeter, weighting techniques have already been developed to
perform the energy reconstruction.

These techniques often use a linear combination of the energy deposited in
the calorimeter cells or longitudinal samples. In this work the use of a
neural network is proposed to perform the energy reconstruction of pions,
optimizing the linearity and energy resolution. Besides the non-linear
capability of the neural network, its structure provides very good
generalization for data that were not presented during the training
process. This is very useful during operation, since the particle energy
is not known beforehand.

Data from tests with pion beams performed at CERN with a prototype of the
Tilecal module, called Module 0, were used to feed the neural network. The
same data were also used with the linear weighting techniques, so that
both methods can be compared.

Two measurements are used to determine the quality of the methods:
linearity and energy resolution. The linearity measurement is done by
looking at the RMS of the ratio between the reconstructed energy and the
expected energy, normalized to the ratio at 100 GeV. For the weighting
techniques an RMS of 1.27\% was found, while for the neural network a
better result was achieved, with an RMS equal to 0.81\%. The raw data have
an RMS equal to 1.6\%.

As for the energy resolution, the neural network tends to improve the
constant term, while the linear weighting techniques improve the
statistical term more.

The neural network achieved 62.0\% for the statistical term and 4.0\% for the
constant term and for the linear weighting techniques a statistical term of
40.8\% and a constant term of 5.3\% were achieved. The raw data resolution
has a statistical term equal to 56.3\% and a constant term equal to 6.9\%.
More studies are being made to improve the neural network energy
reconstruction and to understand the energy resolution results of the neural
network. 

Title Efficient algorithms of multidimensional gamma-ray spectra compression
Speaker Matousek, Vladislav
Institution Institute of Physics, Slovak Acad. of Sci.
Abstract
The volume of spectrometric data produced in nuclear experiments is
enormous. For practical reasons, e.g. interactive analysis, handling,
etc., the fast and efficient compression of large multidimensional arrays
is in some cases unavoidable. In this contribution we present some
original approaches to compress coincidence multidimensional gamma-ray
data efficiently.
Multidimensional coincidence gamma-ray spectra are symmetrical in their
nature. This can be utilized to achieve the needed memory space reduction
without loss of information. We present the symmetry-based removal method
in conjunction with compression via a fast adaptive Walsh-Hadamard
transform. Our algorithms are based on direct modification of the
coefficients of the transform kernel according to the compressed data. In
the examples presented, the achieved compression ratios are of the order
of 10^4 for 2-fold gamma-ray spectra, and 10^8 and 10^11 for 3-fold and
4-fold spectra, respectively.
Another method presented in the contribution is based on an
address-randomizing transformation. An event descriptor contains the
position of an event in multidimensional space (e.g. the data read out
from the ADCs) and its counts. The descriptors are passed through a
transformation that distributes the event descriptors quasi-uniformly in
the transformed space. Clusters of descriptors in the physical field are
spread over the whole range of possible addresses, and adjacent
descriptors go to addresses far away from each other. In the contribution
we propose a randomizing transformation based on inverse numbers in the
sense of modular arithmetic. The transformation is fast, so that it can be
applied in on-line acquisition mode. The compression ratios are
approximately of the same order as in the case of the above-mentioned
adaptive Walsh-Hadamard transform.
The reconstructed multidimensional data compressed with both proposed
algorithms are compared to the original data and to the results achieved
with conventional compression methods. The examples presented speak in
favor of the proposed compression algorithms.
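
The fast Walsh-Hadamard transform at the heart of the first method is an
in-place butterfly of n*log2(n) additions and subtractions (n a power of
two); a minimal version for reference:

    // In-place fast Walsh-Hadamard transform, n a power of two.
    // Cost: n*log2(n) additions/subtractions, no multiplications.
    void fwht(double* a, int n) {
       for (int len = 1; len < n; len <<= 1)
          for (int i = 0; i < n; i += len << 1)
             for (int j = i; j < i + len; ++j) {
                double u = a[j], v = a[j + len];
                a[j]       = u + v;
                a[j + len] = u - v;
             }
    }
    // Compression keeps only the significant coefficients; the inverse
    // transform is the same routine up to an overall factor 1/n.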


Title Search for the Higgs boson at LHC by using Genetic Algorithms
Speaker Mjahed, Mostafa
Institution Ecole Royale de l' Air, Marrakech
Abstract
The search for the Higgs boson is one of the primary tasks of the
experiments at the Large Hadron Collider (LHC). It has been established
that a Standard Model Higgs boson can be discovered with high significance
over the full mass range of interest, from the lower limit of 114.1
GeV/c^2 set by the LEP experiments up to about 1 TeV/c^2. However, the
discovery of the Higgs boson may be complicated by the presence of huge
backgrounds [1].

Our aim here is to use a genetic algorithm as a tool for a better
discrimination between signal and background. A genetic algorithm [2, 3]
is a search technique modelled on biological evolution, in which
real-valued information on events is first encoded as strings
(chromosomes). During the "reproduction phase", each event is assigned a
fitness value derived from its raw performance measure given by an
objective function. Recombination operators such as crossover and mutation
are used to optimize the association between events and classes (a
skeleton of this loop is sketched after the references below).

We will analyze the Higgs mass range 140-200 GeV. In this mass range, the
dominant mechanism for Higgs production is gluon-gluon fusion. A usual way
to reduce the background is lepton isolation, which motivated us to study
the decay into four muons.

Events were produced at LHC energies (M_H = 140-200 GeV) using the Lund
Monte Carlo generator Pythia 6.1. Higgs boson events (decaying into four
muons) and the most relevant backgrounds are considered. The most
discriminant variables, such as the transverse momenta of the four muons,
the invariant masses of the four different muon pairs, the four-muon
invariant mass, the hadron multiplicity and other new variables, are used.

Genetic algorithms differ substantially from other classification methods.
They use probabilistic transition rules, not deterministic ones, and work
on an encoding of the variable set rather than the variable set itself.

The results, compared to other multivariate analysis methods (neural
networks, linear and non-linear discriminant analysis, decision trees
[4]), illustrate a number of features of the genetic algorithm that make
it potentially attractive in classification tasks.

    [1]    D. Froidevaux, in Proc. of the Large Hadron Collider Workshop,
    eds. G. Jarlskog and D. Rein, CERN 90-10, ECFA 90-133, Vol. II, p. 444.
    [2]    D.E. Goldberg, Genetic Algorithms in Search, Optimization, and
    Machine Learning, Addison Wesley, Reading, MA, 1989.
    [3]    Z. Michalewicz, Genetic Algorithms + Data Structures =
    Evolution Programs, Springer Verlag, New York, NY, second edition,
    1994.
    [4]    M. Mjahed, Nucl. Instrum. and Meth. A 432 (1999) 170;
    M. Mjahed, Nucl. Instrum. and Meth. A 481 (2002) 601;
    M. Mjahed, Nucl. Physics B (Proc. Suppl.) 106-107 (2002) 1094;
    M. Mjahed, Nucl. Physics B (Proc. Suppl.) 140 (2005) 799.
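
A minimal skeleton of the genetic-algorithm loop described above, with
tournament selection, one-point crossover and bit-flip mutation; the
encoding and the fitness function are problem-specific placeholders:

    // Skeleton of a genetic algorithm; fitness() is a placeholder stub.
    #include <cstdlib>
    #include <vector>

    using Chromosome = std::vector<int>;      // bit-string encoding

    double fitness(const Chromosome& c) {     // placeholder objective:
       double s = 0;                          // count of set bits
       for (int b : c) s += b;
       return s;
    }

    Chromosome crossover(const Chromosome& a, const Chromosome& b) {
       Chromosome child(a);
       std::size_t cut = std::rand() % a.size();   // one-point crossover
       for (std::size_t i = cut; i < a.size(); ++i) child[i] = b[i];
       return child;
    }

    void mutate(Chromosome& c, double rate) {      // bit-flip mutation
       for (int& bit : c)
          if (std::rand() / (double)RAND_MAX < rate) bit ^= 1;
    }

    const Chromosome& tournament(const std::vector<Chromosome>& pop) {
       const Chromosome& a = pop[std::rand() % pop.size()];
       const Chromosome& b = pop[std::rand() % pop.size()];
       return fitness(a) > fitness(b) ? a : b;
    }

    void evolve(std::vector<Chromosome>& pop, int generations) {
       for (int g = 0; g < generations; ++g) {
          std::vector<Chromosome> next;
          while (next.size() < pop.size()) {
             Chromosome child = crossover(tournament(pop), tournament(pop));
             mutate(child, 0.01);
             next.push_back(child);
          }
          pop.swap(next);
       }
    }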

Title The use of Clustering Techniques for the Classification of High Energy Physics Data
Speaker Mjahed, Mostafa
Institution Ecole Royale de l' Air, Marrakech
Abstract
A number of interesting studies in high energy physics experiments may be
performed using the differences between the topologies of the produced
events. In fact, at LEP2 energies and beyond, several processes and
channels are expected [1-3]. Hadronic events with multi-jet topologies
will be produced with dominant rates. The Standard Model Higgs boson is
expected to be produced mainly via the process e+ e- -> ZH -> 4 jets.

In this paper, we present an attempt to separate Higgs boson events
(e+ e- -> ZH -> 4 jets: the C_ZH class) from other physics processes
(e+ e- -> Z/gamma, W+W-, ZZ -> 4 jets: the C_Back class), using several
clustering techniques.

In addition to the classical (hierarchical and K-means) clustering methods
[4-6], we try to construct an approximate space-filling curve [7-9], and
distances between points in a multidimensional space are replaced by
distances along a Lebesgue measure-preserving curve. With this
classification algorithm, using a neighboring approach on the
space-filling curve (see the sketch after the references below), several
clusters may emerge from the data, and configurations may be associated
with the considered classes C_ZH and C_Back.

Events were produced at post-LEP2 energies, using the Lund Monte Carlo
generator [10] and the ALEPH package [11]. The most discriminant
variables, such as the reconstructed jet mass, the jet properties (b-tag,
rapidity-weighted moments) and other variables, are used.



References


    [1]    ALEPH Coll., Phys. Lett. B 495 (2000) 1-17.
    [2]    T.G. Malmgren et al., Nucl. Instrum. and Methods A 403 (1998)
    481.
    [3]    K. Hultqvist et al., Nucl. Instrum. and Methods A 432 (1999)
    176.
    [4]    J. Van Ryzin, Classification and Clustering, Academic Press,
    New York, 1977.
    [5]    J.A. Hartigan, Clustering Algorithms, New York, Wiley, 1975.
    [6]    A.K. Jain et al., Algorithms for Clustering Data, Prentice-Hall,
    NJ, 1988.
    [7]    S.C. Milne, "Peano curves and smoothness of functions", Advances
    in Mathematics 35 (1980) 129-157.
    [8]    H. Sagan, "Space-filling curves", Springer-Verlag, New York,
    1994.
    [9]    W.J. Gilbert, "A cube filling Hilbert curve", The Mathematical
    Intelligencer 6 (1984) 78.
    [10]   T. Sjostrand, M. Bengtsson, Comp. Phys. Comm. 82 (1994) 74.
    [11]   P. Janot, ALEPH package.
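
The Lebesgue measure-preserving curve used above is commonly realized as
the Z-order curve: a point's position along the curve is obtained by
interleaving the bits of its quantized coordinates, so that distance along
the one-dimensional key can stand in for spatial distance. A
two-dimensional sketch:

    // Sketch: index along the Lebesgue (Z-order) curve for a 2-D point,
    // obtained by interleaving the bits of the quantized coordinates.
    #include <cstdint>

    std::uint64_t z_order(std::uint32_t x, std::uint32_t y) {
       std::uint64_t key = 0;
       for (int b = 0; b < 32; ++b) {
          key |= (std::uint64_t)((x >> b) & 1) << (2 * b);
          key |= (std::uint64_t)((y >> b) & 1) << (2 * b + 1);
       }
       return key;
    }
    // Events close in key are neighbors on the curve; the clustering then
    // operates on the one-dimensional keys.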

Title Symbolic Summation and Higher Orders in Perturbation Theory
Speaker Moch, Sven-Olaf
Institution DESY
Abstract
Higher orders in perturbation theory require the
calculation of Feynman integrals at multiple loops.
We report on an approach to systematically solve
Feynman integrals by means of symbolic summation.
As an example, we discuss recent calculations
of structure functions in deep-inelastic scattering
to three loops.

Title DaqProVis, a toolkit for acquisition, interactive analysis, processing and visualization of multidimensional data
Speaker Morhac, Miroslav
Institution Institute of Physics, Slovak Academy of Sciences
Abstract
In this contribution we present the data acquisition, processing and
visualization system which is being built at the Institute of Physics,
Slovak Academy of Sciences, Bratislava, and FLNR JINR, Dubna. DaqProVis is
well suited for the interactive analysis of multiparameter data from small
and medium-sized experiments in nuclear physics. However, it can analyse
event data even from big experiments, e.g. from GAMMASPHERE. The system is
continuously being developed, improved and supplemented with new
additional functions and capabilities.
The data acquisition part of the system allows one to acquire multiparameter
events either directly from the experiment, from a list file of events or
from another DaqProVis working in server mode. The capability of DaqProVis
to work simultaneously in both the client and the server mode enables us to
realize remote as well as distributed acquisition, processing and
visualization systems.
        The raw events coming from one of the above-mentioned data sources
can be sorted according to predefined criteria (gates) and written to
sorted streams as well. The event variables can be analysed to create one-
to five-parameter histograms (spectra), analysed and compressed using an
on-line compression procedure (the amplitude analysis is carried out
simultaneously with the compression, event by event, in on-line
acquisition mode), or sampled using various sampler modes (sampling,
multiscaling, or stability measurement of a chosen event variable).
From the acquired multidimensional spectra one can make slices of lower
dimensionality. Continuous scanning aimed at looking for and localizing
interesting parts of multidimensional spectra, with an automatic stop when
an attached condition is fulfilled, is also possible.
Once collected, the analysed data can be further processed using
sophisticated background elimination, deconvolution, peak searching and
fitting algorithms. A comprehensive number of both conventional and newly
developed spectra processing algorithms have been implemented in the
system.
The system allows one to display 1-, 2-, 3-, 4- and 5-parameter spectra
using a great variety of conventional as well as sophisticated (shaded
isosurface, volume rendering, etc.) visualization techniques. It supports
various graphical formats (pcx, ps, jpg, bmp). If desired, all changes of
individual pictures or of the entire screen can be recorded in an avi
file. This proved to be very efficient, e.g. in the analysis of iterative
processing methods (deconvolution, fitting).
        The modular structure of the DaqProVis system provides great
flexibility for both experimental and post-experimental configurations. To
write the software we have employed an object-oriented approach. Objects
such as detection line, event, gate/condition, filter, analyser, sampler,
compressor, spectrum, picture, etc. are internally represented by
structures. The experimental, processing and visualization configurations
are completely stored in networks of these structures.

Title Deconvolution methods and their applications in the analysis of gamma-ray spectra
Speaker Morhac, Miroslav
Institution Institute of Physics, Slovak Academy of Sciences
Abstract
One of the most delicate problems of any spectrometric method is the
extraction of the correct information from those sections of the spectra
where, due to the limited resolution of the equipment, signals coming from
various sources overlap. Deconvolution methods are very
frequently employed to improve the resolution of an experimental measurement
by mathematically removing the smearing effects of an imperfect instrument,
using its known resolution function. They can be successfully applied for
the determination of positions and intensities of peaks and for the
decomposition of multiplets in gamma-ray spectroscopy. 
From a numerical point of view, deconvolution is a so-called ill-posed
problem, which means that many different functions solve the convolution
equation within the error bounds of the experimental data. When employing
standard algorithms to solve a convolution system, small errors or noise
can cause enormous oscillations in the result. This implies that a
regularization must be employed. Regularization encompasses a class of
solution techniques that modify an ill-posed problem into a well-posed one
by approximation, so that a physically acceptable approximate solution can
be obtained.
In the contribution we present deconvolution methods based on the direct
solution as well as methods based on the iterative solution of systems of
linear equations. We give a comparison of the efficiencies of various
deconvolution algorithms and regularization techniques (Tikhonov, Riley,
Van Cittert, Gold, Richardson-Lucy, etc.). To improve the resolution of
the deconvolution of positive-definite spectroscopic data, we propose a
modification of the deconvolution algorithms by introducing a boosting
operation and a regularization technique based on the minimization of the
squares of negative values.
We have optimized the Gold deconvolution algorithm and extended it to two-
and three-dimensional data. The presented examples speak in favor of the
deconvolution algorithms employed.
The analysis of peaks in spectra consists of the determination of the
positions of peaks and subsequent fitting, which results in estimates of
the peak shape parameters. The positions of peaks can be well determined
from separated peaks in decomposed spectra and can be fed as initial
estimates into a fitting procedure. Proper estimation of peak positions is
a necessary condition for the correct analysis of experimental spectra.
However, the resolution of conventional peak searching algorithms based on
smoothed second differences is quite limited.
Therefore, to improve the resolution capabilities, we have proposed
several algorithms based on Gold deconvolution for both one- and
two-dimensional spectra. The deconvolution and peak finder methods have
been implemented in the TSpectrum class of the ROOT system.
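
Of the iterative schemes compared above, the Van Cittert iteration is the
simplest to state: x_{k+1} = x_k + mu * (y - H x_k), with y the measured
spectrum and H the response (convolution) matrix. A one-dimensional sketch
for illustration:

    // Sketch: Van Cittert deconvolution, x <- x + mu*(y - H*x), for a 1-D
    // spectrum y and a discretized, centered response kernel h.
    #include <vector>

    using Vec = std::vector<double>;

    Vec convolve(const Vec& h, const Vec& x) {      // H*x, same length as x
       Vec y(x.size(), 0.0);
       int half = (int)h.size() / 2;
       for (int i = 0; i < (int)x.size(); ++i)
          for (int k = 0; k < (int)h.size(); ++k) {
             int j = i + k - half;
             if (j >= 0 && j < (int)x.size()) y[i] += h[k] * x[j];
          }
       return y;
    }

    Vec van_cittert(const Vec& h, const Vec& y, double mu, int iters) {
       Vec x = y;                                   // start from the data
       for (int it = 0; it < iters; ++it) {
          Vec hx = convolve(h, x);
          for (std::size_t i = 0; i < x.size(); ++i)
             x[i] += mu * (y[i] - hx[i]);  // positivity/boosting act here
       }
       return x;
    }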

Title Performance Comparison of the LCG2 and gLite File Catalogues
Speaker Munro, Craig
Institution Brunel University
Abstract
File catalogues are presently one of the core components of the Grid
middleware, and their performance is crucial to the performance of the
entire system. We present a detailed comparison study of the
performance of the LCG File Catalogue (LFC) and the gLite FiReMan
catalogue developed in the EGEE project. A detailed discussion of
the merits and shortcomings of the different approaches is given,
with an emphasis on the different access protocols.

Title Abusing QGRAF
Speaker Nogueira, Paulo
Institution IST-UTL, Lisbon
Abstract
We discuss a few selected examples of Feynman diagram generation with
special constraints (most of them have already been used in practical
calculations, but have not been presented before, to the best of my
knowledge). It will be shown that, with a little help, QGRAF can be used
to solve some types of problems that may seem to be out of its current
reach. Those examples also serve to expose some weaknesses of the package.
The connection between those weaknesses and the recent (and ongoing)
evolution of QGRAF will be roughly outlined.

Title Neural networks approach to parton distributions fitting
Speaker Piccione, Andrea
Institution University of Turin
Abstract
We will show an application of neural networks to extract
information on the structure of hadrons. A Monte Carlo sampling of the
experimental data is performed to correctly reproduce the data errors and
correlations. A neural network is then trained on each Monte Carlo replica
via a genetic algorithm. Results on the proton structure function
[hep-ph/0501067] and on the non-singlet parton distribution will be shown.

Title ILDG: DataGrids for Lattice QCD
Speaker Pleiter, Dirk
Institution NIC / DESY Zeuthen
Abstract
As the need for computing resources to carry out numerical simulations of
QCD formulated on a lattice has increased significantly, efficient use
of the generated data has become a major concern. To improve on this,
groups plan to share their configurations on a worldwide level within
the International Lattice DataGrid (ILDG). Doing so requires a
standardized description of the configurations, standards for binary file
formats, and common middleware interfaces. In this talk we will detail the
requirements for the ILDG, describe the problems and discuss the
solutions. Furthermore, we will give an overview of the implementation of
the LatFor DataGrid (LDG), which will be one of the grids within ILDG's
grid-of-grids. The implementation of LDG is a common project of DESY
(Hamburg/Zeuthen), FZJ/ZAM (Juelich), NIC (Zeuthen/Juelich) and ZIB
(Berlin).

Title Detector Description of the ATLAS Muon Spectrometer and H8 Muon Testbeam
Speaker Pomarede, Daniel
Institution CEA/DAPNIA Saclay
Abstract
The Muon Spectrometer of the ATLAS experiment is a large and
complex system of gaseous detectors. The simulation and the reconstruction
of muon events require a careful description of these detectors, which
either participate in the trigger or in the precision measurements of
tracks. A thorough
description of the passive materials, such as the toroidal magnet systems,
is also needed to account for Coulomb scattering and energy losses. The
operation of the muon spectrometer relies on the alignment of its precision
chambers, so the geometrical model must fully implement their
misalignments and deformations. We present the Detector Description
chain employed in the Muon system and its integration in the ATLAS
software framework. It relies on a database technology and a standard
set of geometrical primitives common to all ATLAS subsystems.
The Muon Detector Description has been used successfully in the context of
the ATLAS Data Challenges, where it provides a unique and coherent geometry
source for the simulation and reconstruction algorithms.
It has also been validated in the context of the experimental program of
the ATLAS testbeams, where analyses of the treatment of chamber alignment
in track reconstruction rely crucially upon the detector description
model.

Title Storage resources management and access at TIER1 CNAF
Speaker Ricci, Pier Paolo
Institution INFN CNAF
Abstract
At present at the LCG TIER1 at CNAF we have two main mass storage systems
for archiving the HEP experiment data: an HSM software system (CASTOR)
and about 200 TB of different storage devices over SAN. This paper briefly
describes our hardware and software environment and summarizes the simple
technical improvements we have implemented in order to obtain better
availability and the best data access throughput from the front-end
machines. Some test results for different file systems over SAN are also
reported.

Title A Maple Package for Computing Groebner bases for Linear Recurrence Relations
Speaker Robertz, Daniel
Institution RWTH Aachen, Lehrstuhl B fuer Mathematik
Abstract
As argued in [1], Groebner bases form the most universal algorithmic
tool for the reduction of loop integrals determined by the recurrence
relations derived from the integration-by-parts method. These recurrence
relations can be considered as generators of an ideal in the ring of
finite-difference polynomials.

In this talk we present a Maple package for computing a Groebner basis of
this ideal. The built-in algorithm is based on the use of Janet-like
monomial division and will be presented in a separate talk [2].
The package is a modified version of our earlier package oriented towards
commutative and linear differential algebra and based on the Janet
involutive division algorithm [3].

The modified version is specialized to linear difference ideals and uses
Janet-like division [4], which is more efficient than Janet division.
We illustrate the package with some one-loop examples.

[1] V.P.Gerdt. Groebner Bases in Perturbative Calculations,
Nuclear Physics B (Proc. Suppl.) 135, 2004, 232-237. URL:
http://arXiv.org/hep-ph/0501053.

[2] V.P.Gerdt. An Algorithm for Reduction of Loop Integrals. A talk at ACAT-05.

[3] Yu.A.Blinkov, V.P.Gerdt, C.F.Cid, W.Plesken and D.Robertz.
The Maple Package "Janet": I. Polynomial Systems, Computer Algebra in
Scientific Computing / CASC 2003, V.G.Ganzha, E.W.Mayr, and E.V.Vorozhtsov,
eds., Institute of Informatics, Technical University of Munich, Garching, 2003,
pp.31--54.; II. Linear Partial Differential Equations, ibid., pp.41--54.

[4] V.P.Gerdt. Janet-like Groebner Bases. Submitted to
MEGA-05 (Porto Conte, Alghero, Sardinia, May 27th - June 1st, 2005).

Title Limits and Confidence Intervals in the Presence of Nuisance Parameters
Speaker Rolke, Wolfgang
Institution Univ. of Puerto Rico - Mayaguez
Abstract
I present the results of a study of the frequentist properties of
confidence intervals computed by the method known to statisticians as the
Profile Likelihood. 
It is seen that the coverage of these intervals is surprisingly good over
a wide range of possible parameter values for important classes of problems,
in particular whenever there are additional nuisance parameters with
statistical or systematic errors.

Title Evolution of the configuration database design
Speaker Salnikov, Andrei
Institution SLAC
Abstract
The BaBar experiment at SLAC has been successfully collecting physics
data since 1999. One of the major parts of its on-line system is
the configuration database, which provides the other parts of the
system with the configuration data necessary for data taking.
Originally the configuration database was implemented
in the Objectivity/DB ODBMS. Recently BaBar performed a
successful migration of its event store from Objectivity/DB
to ROOT, and this prompted a complete phase-out of
Objectivity/DB in all other BaBar databases. It required
a complete redesign of the configuration database to hide
any implementation details and to support multiple
implementations of the same interface. In this paper we
describe the result of the migration of the configuration database,
its new design, implementation strategy and details.

Title Metadata Services on the Grid
Speaker Santos, Nuno
Institution CERN
Abstract
We present the design of a metadata service for the Grid which has been
developed in the ARDA project and which is now evolving as a common effort
together with the gLite Data Management team. The results of extensive
performance studies with our implementation of the service are shown,
including a comparison of the SOAP-based implementation of the interface
with an implementation based on TCP streaming. This allows us to clarify
in a quantitative way the implications of the usage of SOAP as a metadata
access protocol. Finally, the activity of the ARDA team on metadata
services within the HEP community is reviewed.

Title Monte Carlo based studies of polarized positrons source for the International Linear Collider (ILC)
Speaker Schaelicke, Andreas
Institution DESY
Abstract
The full exploitation of the physics potential of an International Linear
Collider (ILC) in addition to the LHC program will require the development
of polarized positron beams. Having both positron and electron beams
polarized will be a decisive improvement for physics studies, providing new
insight into structures of couplings and thus access to the physics beyond
the standard model. 
The new concept of a polarized positron source is based on the development
of a circularly polarized photon source. The polarized photons create
electron-positron pairs in a thin target and transfer their polarization
state to the outgoing leptons.
To achieve a high level of positron polarization, an understanding of the
production mechanisms in the target is crucial for an optimization in
terms of the positron yield, which is closely related to the target
properties.
In this talk we present a Geant4 based optimization study of the positron
production target for the ILC. 

Title InfiniBand
Speaker Schwickerath, Ulrich
Institution Forschungszentrum Karlsruhe
Abstract
InfiniBand is an emerging technology which is becoming more and more
interesting for both high-performance and high-throughput applications,
due to its good performance and falling prices. The
Institute for Scientific Computing (IWR) of the Forschungszentrum
Karlsruhe was amongst the first adopters of 4x InfiniBand in Germany.
In this presentation, experiences with MPI-based applications
and performance results of our own developments on various platforms are
presented, and recent developments in the field are reviewed.

Title Talk cancelled
Speaker Seitliev, Aleksandr
Institution Gomel State University

Title A Software Package to Construct Polynomial Sets over Z_2 for Determining the Output of Quantum Computations
Speaker Severyanov, Vasily
Institution Joint Institute for Nuclear Research
Abstract
As was recently shown in [1], determining the output
of a quantum computation is equivalent to counting the number of solutions
of a certain set of polynomials defined over the finite field Z_2.
In this talk we present a C# package that allows a user to generate, for
an input quantum circuit, a set of multivariate polynomials over Z_2 whose
total number of solutions in Z_2 determines the output of the quantum
computation defined by the circuit. Our program has a user-friendly
graphical interface and a built-in base of the basic elements, i.e.,
quantum gates and wires. The user can easily assemble an input circuit
from those elements. (A brute-force reference counter for the underlying
counting problem is sketched after the references below.)

The generated polynomial system can further be converted to the canonical
involutive form by applying efficient algorithms described in [2]. The
involutive form is generally more appropriate for counting the number of
common roots of the polynomials.

[1] Christopher M. Dawson et al. Quantum computing and polynomial
equations over the finite field Z_2. arXiv:quant-ph/0408129, 2004.
[2] Gerdt V.P. Involutive Algorithms for Computing Groebner Bases.
Proceedings of the NATO Advanced Research Workshop "Computational
commutative and non-commutative algebraic geometry" (Chishinau,
June 6-11, 2004), IOS Press, to appear.
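
The counting task that the generated polynomials feed into can be stated
very simply: given polynomials p_1, ..., p_m over Z_2 in n variables,
count their common zeros. The following brute-force counter (exponential
in n, so useful only for very small circuits, and independent of the
package's own C# implementation) illustrates the problem; the monomial
encoding is a hypothetical choice:

    // Brute-force count of common solutions in Z_2^n of a polynomial
    // system over Z_2. A polynomial is a list of monomials; a monomial is
    // a bitmask of its variables (mask 0 encodes the constant 1).
    // Exponential in n (requires nvars < 32): reference use only.
    #include <cstdint>
    #include <vector>

    using Monomial = std::uint32_t;       // bit i set => variable x_i
    using Poly = std::vector<Monomial>;   // sum of monomials mod 2

    int eval(const Poly& p, std::uint32_t assign) {   // value in Z_2
       int v = 0;
       for (Monomial m : p)
          v ^= ((assign & m) == m);       // monomial = product of its bits
       return v;
    }

    std::uint64_t count_solutions(const std::vector<Poly>& sys, int nvars) {
       std::uint64_t count = 0;
       for (std::uint32_t a = 0; a < (1u << nvars); ++a) {
          bool ok = true;
          for (const Poly& p : sys)
             if (eval(p, a) != 0) { ok = false; break; }
          count += ok;
       }
       return count;
    }
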
Title The apeNEXT Project
Speaker Simma, Hubert
Institution DESY
Abstract
Numerical simulations in theoretical high-energy physics (Lattice QCD) require huge computing resources. 
Several generations of massively parallel computers optimised for these applications have been developed 
within the APE (array processor experiment) project. Large prototype systems of the latest generation, 
apeNEXT, are currently being assembled and tested.

This talk provides an overview of the hardware and software architecture of apeNEXT, describes its new features, 
like the SPMD programming model and the C compiler, and reports on the current status.

Title Track reconstruction at the CMS experiment
Speaker Speer, Thomas
Institution University of Zurich, Switzerland
Abstract
An overview of the track reconstruction algorithms used in the Tracker of
the CMS experiment at the LHC will be presented, and some of their
respective features will be discussed.
Properties, results and performance of these algorithms on simulated data
will be shown.
The CMS tracking system features an all-silicon layout consisting of a
pixel detector and a silicon micro-strip tracker.
Title Monte Carlo Mass Production for the ZEUS experiment on the Grid
Speaker Stadie, Hartmut
Institution DESY
Abstract
The detector and collider upgrades for HERA-II have drastically
increased the demand on computing resources for Monte Carlo production for
the ZEUS experiment. To close the gap, the existing production system was
extended to use grid resources. This extended system has been used in
production since November 2004. Using 25 different LHC Computing Grid (LCG)
sites, more than 100 million events were simulated and reconstructed,
exceeding the capacity of the old system. We will present the production
setup and introduce the toolkit that was developed by ZEUS to use the
existing grid middleware efficiently. Finally, we will report on our
experience of running mass production on the grid and on our future plans.
Title Adaptive filters for track finding
Speaker Strandlie, Are
Institution Gjøvik University College, Norway
Abstract
Because of its recursive nature, the Kalman filter can be used, and has
been used, not only for track fitting but also for track finding. The simplest strategy
is to select the closest compatible observation at each step of the filter.
This turns out to be insufficient in scenarios with high track density and/or
large amounts of noise.
In order to reach high efficiency, several track hypotheses have to be
explored in parallel, resulting in a combinatorial Kalman filter. 
In this contribution we study the application of adaptive estimators
such as the Gaussian-sum filter and the Deterministic Annealing
Filter to track finding. We consider various scenarios with different track
density, contamination, and seed quality. Simulation studies show that
adaptive methods are competitive alternatives to the combinatorial Kalman
filter, and that in some cases they yield appreciable gains in the speed
of the track-finding procedure.
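
As a minimal illustration of the simplest strategy above, here is a
one-dimensional C++ sketch; the straight-line track model, the measurement
variance and the chi-square gate are all assumptions made for the example.

  #include <cmath>
  #include <iostream>
  #include <limits>
  #include <vector>

  int main() {
      // Track model: straight line x(z) = x0 + slope*z; state = (x, slope).
      double x = 0.0, s = 0.1;                 // seed state
      double Cxx = 1.0, Cxs = 0.0, Css = 0.1;  // seed covariance
      const double measVar = 0.01;             // measurement variance (assumed)
      const double gate = 9.0;                 // chi^2 gate (3 sigma, assumed)
      const double dz = 1.0;                   // layer spacing

      // Hit positions per layer; some layers contain noise hits.
      std::vector<std::vector<double>> layers = {
          {0.11, 0.52}, {0.19, 0.95}, {0.31}, {0.42, 0.05}};

      for (const auto& hits : layers) {
          // Predict: x -> x + s*dz, covariance propagated accordingly.
          x += s * dz;
          Cxx += 2 * dz * Cxs + dz * dz * Css;
          Cxs += dz * Css;

          // Select the closest compatible observation inside the gate.
          double best = std::numeric_limits<double>::max(), bestHit = 0;
          for (double h : hits) {
              double chi2 = (h - x) * (h - x) / (Cxx + measVar);
              if (chi2 < gate && chi2 < best) { best = chi2; bestHit = h; }
          }
          if (best == std::numeric_limits<double>::max()) continue;  // no hit

          // Kalman update with the selected hit.
          double K  = Cxx / (Cxx + measVar);
          double Ks = Cxs / (Cxx + measVar);
          double r  = bestHit - x;
          x += K * r;  s += Ks * r;
          Css -= Ks * Cxs;  Cxs -= K * Cxs;  Cxx -= K * Cxx;
      }
      std::cout << "fitted x=" << x << " slope=" << s << "\n";
  }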

Title Modelling of non-Gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter
Speaker Strandlie, Are
Institution Gjøvik University College, Norway
Abstract
The Kalman filter has for many years been the default method for track
fitting in high-energy-physics tracking detectors. The Kalman filter is
a least-squares estimator and is known to be optimal when all
probability densities involved in the track fit are Gaussian. If any
of the densities deviate from the Gaussian assumption, it is plausible
that a non-linear estimator which takes the actual shape of the
distribution into account can do better. One such non-linear estimator
is the Gaussian-sum filter [1], which is adequate if the distributions
under consideration can be represented or approximated by Gaussian mixtures.

Quite recently, a two-component Gaussian-mixture approximation to the
multiple scattering distribution has been presented [2]. The availability
of such an approximation opens the way to treating multiple scattering
within the realm of the Gaussian-sum filter, and the main purpose of this
contribution is to present a Gaussian-sum filter for track fitting based
on this approximation. In a simulation study within a linear track model,
the Gaussian-sum filter is shown to be a competitive alternative to the
Kalman filter. Scenarios with various momenta and various maximum numbers
of components in the Gaussian-sum filter are considered. The difference
between the two approaches is mainly visible in the estimates of the
uncertainties of the track parameters: particularly at low momenta, the
Gaussian-sum filter yields a better estimate of the uncertainties than
the Kalman filter. This feature could, for instance, lead to a better
estimate of the vertex position in a subsequent vertex fit.

References:

[1] R. Frühwirth, Track fitting with non-Gaussian noise. Computer
Physics Communications 100 (1997) 1.
[2] R. Frühwirth and M. Regler, On the quantitative modelling of tails
and core of multiple scattering by Gaussian mixtures. Nuclear
Instruments and Methods in Physics Research A 456 (2001) 369.
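
To illustrate the mechanics of such a filter, a small C++ sketch (with
assumed mixture parameters, not the values of ref. [2]): each track
component is convolved with each component of the scattering mixture, so
weights multiply, variances add, and the number of components grows
multiplicatively and must be limited in practice.

  #include <algorithm>
  #include <iostream>
  #include <vector>

  struct Component { double weight, mean, var; };

  std::vector<Component> addScattering(const std::vector<Component>& track,
                                       const std::vector<Component>& scatter,
                                       std::size_t maxComponents) {
      std::vector<Component> out;
      for (const auto& t : track)
          for (const auto& s : scatter)
              out.push_back({t.weight * s.weight, t.mean + s.mean,
                             t.var + s.var});
      // Keep only the heaviest components and renormalise the weights.
      std::sort(out.begin(), out.end(),
                [](const Component& a, const Component& b) {
                    return a.weight > b.weight;
                });
      if (out.size() > maxComponents) out.resize(maxComponents);
      double sum = 0;
      for (const auto& c : out) sum += c.weight;
      for (auto& c : out) c.weight /= sum;
      return out;
  }

  int main() {
      // Two-component scattering-angle model: narrow core + wide tail.
      std::vector<Component> scatter = {{0.9, 0.0, 1e-6}, {0.1, 0.0, 1e-5}};
      std::vector<Component> track = {{1.0, 0.0, 1e-6}};  // single seed
      for (int layer = 0; layer < 5; ++layer)
          track = addScattering(track, scatter, 8);
      for (const auto& c : track)
          std::cout << "w=" << c.weight << " var=" << c.var << "\n";
  }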

Title ParFORM: recent developments
Speaker Tentyukov, Mikhail
Institution Universitaet Karlsruhe
Abstract
We report on the status of our project to parallelize the symbolic
manipulation program FORM. We now have a parallel version of FORM running
on cluster and SMP architectures. This version can be used to run
arbitrary FORM programs in parallel.
Title The FEDRA - framework for emulsion data reconstruction and analysis in the OPERA experiment
Speaker Tioukov, Valeri
Institution INFN (Napoli)
Abstract
OPERA is a massive lead/emulsion target for a long-baseline neutrino
oscillation search. More than 90% of the useful experimental data in OPERA
will be produced by scanning the emulsion plates with automatic
microscopes. The main goal of the data processing in OPERA will be the
search for, and the analysis and identification of, primary and secondary
vertices produced by neutrinos in the lead-emulsion target.

The volume of middle- and high-level data to be analysed and stored is
expected to be of the order of several Gb per event. The storage,
calibration, reconstruction, analysis and visualization of these data are
the task of FEDRA, a system written in C++ and based on the ROOT
framework. The system is now actively used for processing test-beam and
simulation data. Several interesting algorithmic solutions allow us to
write very efficient code for fast pattern recognition under difficult
signal-to-noise conditions. The system consists of a storage part; an
intercalibration and segment-linking part; and track finding and fitting,
vertex finding and fitting, and kinematical-analysis parts. A Kalman
filtering technique is used for track and vertex fitting, and a ROOT-based
event display is used for the interactive analysis of special events.
Title Neural Triggering System Operating on High Resolution Calorimetry Information
Speaker Torres, Rodrigo
Institution Federal University of Rio de Janeiro
Abstract
For the ATLAS detector, the online trigger system is designed with three
levels and relies on detailed calorimeter information to achieve a high
reduction of the background noise. The first level uses coarse-grain
calorimeter granularity to reduce the input event rate from 40 MHz to
100 kHz. The second level, implemented by ~500 dual PCs connected by a
gigabit Ethernet network, will use fine-grain calorimeter granularity in
regions of interest marked by the first level to reduce the event rate
further, to 1 kHz. The final level, based on ~1600 dual PCs also connected
by gigabit networks, will operate on the full event, reducing the final
rate to only 100 Hz.

This paper presents an electron/jet discriminator designed to operate in
the second-level trigger. In order to handle the high dimensionality of
the data, the regions of interest are organized as sums over concentric
rings of cells, so that both signal compaction and detection efficiency
can be improved. The ring information is fed into a feedforward neural
network; this implementation achieves a 93% electron detection efficiency
at a false-alarm rate of 10%.
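
A hypothetical C++ sketch of this ring-sum compaction (the real trigger
uses the calorimeter geometry; here the rings are simply Chebyshev
distances on the cell grid around the hottest cell):

  #include <algorithm>
  #include <cstdlib>
  #include <iostream>
  #include <vector>

  std::vector<double> ringSums(const std::vector<std::vector<double>>& roi) {
      // Locate the hottest cell.
      int hi = 0, hj = 0;
      for (int i = 0; i < (int)roi.size(); ++i)
          for (int j = 0; j < (int)roi[i].size(); ++j)
              if (roi[i][j] > roi[hi][hj]) { hi = i; hj = j; }
      // Accumulate energy ring by ring around it.
      std::vector<double> rings;
      for (int i = 0; i < (int)roi.size(); ++i)
          for (int j = 0; j < (int)roi[i].size(); ++j) {
              int r = std::max(std::abs(i - hi), std::abs(j - hj));
              if (r >= (int)rings.size()) rings.resize(r + 1, 0.0);
              rings[r] += roi[i][j];
          }
      return rings;  // rings[0]: hottest cell, rings[1]: first ring, ...
  }

  int main() {
      std::vector<std::vector<double>> roi = {
          {0.1, 0.2, 0.1}, {0.3, 5.0, 0.4}, {0.2, 0.3, 0.1}};
      for (double e : ringSums(roi)) std::cout << e << ' ';  // 5 1.7
      std::cout << '\n';
  }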

The system will be implemented in the Athena environment, which emulates
the trigger behavior so that algorithms for the high-level trigger can be
efficiently developed and tested both offline and online.

In the second level there are specific processors for specific purposes.
Among them, the level-2 processors are used for event selection; inside
them, several worker threads each handle one event. For comparison, a
single worker thread will also be implemented on a digital signal
processor (DSP) with single-instruction, multiple-data (SIMD)
capabilities, and the results will be compared to those obtained in the
Athena environment.
Title Talk cancelled
Speaker Trott, Michael
Institution Wolfram Research
Title GiNaC - Symbolic computation with C++
Speaker Vollinga, Jens
Institution Institut für Physik - Universität Mainz
Abstract
An introduction to the C++ library GiNaC will be given. GiNaC
extends the C++ language with new classes and methods for the
representation and manipulation of arbitrary symbolic expressions.
Features and applications of GiNaC will also be highlighted.
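
A minimal usage sketch (assuming a standard GiNaC installation; compile
with -lginac -lcln): symbols, expansion and differentiation directly in C++.

  #include <iostream>
  #include <ginac/ginac.h>
  using namespace GiNaC;

  int main() {
      symbol x("x"), y("y");
      ex e = pow(x + y, 3);
      std::cout << e.expand() << std::endl;  // x^3+3*x^2*y+3*x*y^2+y^3
      std::cout << e.diff(x) << std::endl;   // 3*(x+y)^2
  }
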
Title Chiral fermions on the lattice
Speaker Wenger, Urs
Institution NIC/DESY Zeuthen
Abstract
We consider the recent progress in simulating light chiral fermions in QCD
on the lattice. We discuss various approaches to implementing an exact chiral
symmetry on the lattice using different 4- or 5-dimensional formulations
of QCD. This provides a theoretical framework within which we can compare the
algorithmic alternatives for their implementation.
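For orientation, the algebraic relation underlying such exact lattice
chiral symmetries (a standard fact, stated here for context rather than
taken from the talk) is the Ginsparg-Wilson relation for the lattice
Dirac operator D,

 \gamma_5 D + D \gamma_5 = a D \gamma_5 D,

and any D satisfying it admits an exact symmetry of the lattice action,
e.g. \delta\psi = \gamma_5 (1 - a D) \psi, \delta\bar{\psi} = \bar{\psi} \gamma_5.
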
Title Calculation of Multi-Particle Processes in QCD
Speaker Worek, Malgorzata
Institution INP, NCSR "Demokritos", Athens
Abstract
Estimating multi-jet production cross sections as well as their
characteristic distributions is a difficult task. Perturbation theory
based on Feynman graphs runs into computational problems, since the number
of graphs contributing to the amplitude grows like n!. Summation over
color and helicity is an additional source of computational inefficiency.
In order to overcome these computational obstacles, recursive methods can
be used. A computational algorithm based on such recursive equations will
be presented. Computing the amplitude with recursive equations results in
a computational cost growing asymptotically as 3^n. In addition, the color
and helicity structures are transformed appropriately, so that Monte Carlo
summation can be used in those cases as well.
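
A quick numerical comparison of the two growth rates (plain arithmetic,
for illustration only):

  #include <cmath>
  #include <iostream>

  int main() {
      double factorial = 1.0;
      for (int n = 2; n <= 12; ++n) {
          factorial *= n;   // n! overtakes 3^n already around n = 7
          std::cout << "n=" << n << "  n! = " << factorial
                    << "  3^n = " << std::pow(3.0, n) << "\n";
      }
  }
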
Title New Developments in Parallelization of the Multidimensional Integration Package DICE
Speaker Yuasa, Fukuko
Institution KEK
Abstract
New developments concerning the extension of parallelized DICE are
presented in this paper. DICE is a general-purpose multidimensional
numerical integration package. In general, there are two approaches to
parallelization: "Data Parallelism" and "Function Parallelism". We had
already developed a parallelized code following the "Data Parallelism"
approach and reported on it at ACAT 2002 in Moscow. Here we present
preliminary results of an implementation of parallelized DICE following
the other approach, "Function Parallelism".
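
For illustration, a minimal data-parallel Monte Carlo integration sketch
in C++ (not DICE itself; integrand and thread count are assumptions): the
sample points are split across threads and the partial sums are combined
at the end.

  #include <cmath>
  #include <iostream>
  #include <random>
  #include <thread>
  #include <vector>

  int main() {
      const int nThreads = 4;
      const long pointsPerThread = 1'000'000;
      std::vector<double> partial(nThreads, 0.0);
      std::vector<std::thread> workers;
      for (int t = 0; t < nThreads; ++t)
          workers.emplace_back([t, &partial, pointsPerThread] {
              std::mt19937 rng(1234 + t);  // independent stream per worker
              std::uniform_real_distribution<double> u(0.0, 1.0);
              double sum = 0.0;
              for (long i = 0; i < pointsPerThread; ++i) {
                  double x = u(rng), y = u(rng);
                  sum += std::exp(-x * x - y * y);  // f over the unit square
              }
              partial[t] = sum;
          });
      for (auto& w : workers) w.join();
      double total = 0.0;
      for (double s : partial) total += s;
      std::cout << "integral ~ " << total / (nThreads * pointsPerThread) << "\n";
  }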

Title Performance of Statistical Learning Methods
Speaker Zimmermann, Jens
Institution MPI für Physik, München
Abstract
Examples from the ep experiment H1 and from a future
linear collider will be used to demonstrate the power of
statistical learning methods. Significant improvements compared to
classical algorithms will be shown and different learning methods
will also be compared against each other. Important guidelines will be
given regarding performance evaluation, statistical and systematic
uncertainties, and the comparison of different methods.
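
As a minimal illustration of one such guideline (our sketch, not material
from the talk): a classifier's efficiency is only meaningful when quoted
together with the false-alarm rate at the same output threshold.

  #include <iostream>
  #include <vector>

  struct Rates { double efficiency, falseAlarm; };

  Rates ratesAtThreshold(const std::vector<double>& signalScores,
                         const std::vector<double>& backgroundScores,
                         double threshold) {
      std::size_t sPass = 0, bPass = 0;
      for (double s : signalScores)     if (s > threshold) ++sPass;
      for (double b : backgroundScores) if (b > threshold) ++bPass;
      return {double(sPass) / signalScores.size(),
              double(bPass) / backgroundScores.size()};
  }

  int main() {
      // Toy classifier outputs for signal and background events.
      std::vector<double> sig = {0.9, 0.8, 0.85, 0.6, 0.95, 0.4};
      std::vector<double> bkg = {0.1, 0.3, 0.2, 0.7, 0.15, 0.05};
      for (double cut : {0.25, 0.5, 0.75}) {
          Rates r = ratesAtThreshold(sig, bkg, cut);
          std::cout << "cut=" << cut << "  efficiency=" << r.efficiency
                    << "  false alarm=" << r.falseAlarm << "\n";
      }
  }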

Title Statistical Learning Basics
Speaker Zimmermann, Jens
Institution MPI für Physik, München
Abstract
In this talk an introduction to statistical learning will be
given. The best-known learning methods will be presented and
interpreted. Basic prerequisites and common guidelines for the
correct and successful application of statistical learning methods
will be discussed. Examples from the X-ray satellite project XEUS
and from the Cherenkov telescope MAGIC will illustrate the
discussed topics.

Title On estimation of the exponential distribution parameter under conditions of small statistics and observation interval
Speaker Zlokazov, Victor
Institution FNLPh, JINR, Dubna, Russia
Abstract
If the distribution function of a random quantity \xi is
 P(\xi < t) = 1 - exp(-t/T),     t \in [0,\infty),
then the least favorable conditions for the estimation of the parameter T
are poor data statistics and/or a small observation interval
(t \in [0,B], B << T).

In particular, the maximum-likelihood equation for T is practically
unsolvable in this case.
Let us introduce two random quantities: n_1 and n_2, the numbers of
registered decays in the intervals [0,B] and [B,2B], respectively.
It is obvious that
 \hat{E}n_1 = N (1 - exp(-B/T)),
 \hat{E}n_2 = N (exp(-B/T) - exp(-2B/T)),
where \hat{E} is the expectation operator. We can then build the following
estimator of T:
 \hat{T} = B / ln(n_1/n_2).
Obviously, only exponential-like curves are suitable for this analysis.
For instance, the following criterion can be used to test the inequality
n_2 < n_1 for statistical significance:
 n_1 > n_2 + 3 \cdot \sigma(n_2).
From this we can obtain restrictions on
 - the statistics level N, for a given B/T, or
 - the length of the observation interval B, for given statistics N,
which provide for a successful analysis of such data.

The restrictions are very stringent and often unrealistic. Here the idea
of estimating a lower bound on the parameter, instead of the parameter
itself, is very fruitful. The lower parameter bound is a quantity which,
with a certain (calculable) probability, is less than T but greater than
the length of the observation interval B.
In our case such an estimator can be obtained, e.g., from the relation
 n_1 - n_2 <= k \cdot \sigma(n_1+n_2), on condition that n_1 > n_2,
where k is a chosen number and \sigma is the standard deviation.
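
A small Monte Carlo check of the estimator \hat{T} = B/ln(n_1/n_2), with
illustrative parameter values (T = 10, B = 2, chosen for the example):

  #include <cmath>
  #include <iostream>
  #include <random>

  int main() {
      const double T = 10.0, B = 2.0;
      const long N = 100000;
      std::mt19937 rng(42);
      std::exponential_distribution<double> decay(1.0 / T);
      long n1 = 0, n2 = 0;
      for (long i = 0; i < N; ++i) {
          double t = decay(rng);        // simulated decay time
          if (t < B) ++n1;              // counted in [0, B]
          else if (t < 2 * B) ++n2;     // counted in [B, 2B]
      }
      double That = B / std::log(double(n1) / double(n2));
      std::cout << "n1=" << n1 << " n2=" << n2
                << "  T_hat=" << That << " (true T=" << T << ")\n";
  }
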
Title Grid Middleware Configuration at the KIPT CMS Linux Cluster
Speaker Zub, Stanislav
Institution Institute of High Energy Physics and Nuclear Physics (NSC KIPT)
Abstract
Problems associated with the storage, processing and analysis of the huge
data samples expected in the experiments planned at the Large Hadron
Collider (LHC) are discussed. The current status of, and problems
associated with, the installation of LCG middleware on the KIPT CMS Linux
Cluster (KCLC), which is part of the Moscow distributed regional center
for LHC data analysis, are outlined. The configuration and testing of the
LHC Computing Grid middleware at the KCLC are described. The participation
of the KCLC in CMS Monte Carlo event production is presented.
Title High precision numerical accuracy in Physics research
Speaker de Dinechin, Florent
Institution LIP, École Normale Supérieure de Lyon
Title Goodness-of-fit tests in many dimensions
Speaker van Hameren, Andre
Institution Universität Mainz
Abstract
A method is presented to construct goodness-of-fit statistics in many
dimensions for which the distribution of all possible test results becomes
Gaussian in the limit of an infinite number of data, provided the number
of dimensions also becomes infinite. Furthermore, an explicit example is
presented for which this distribution depends essentially only on the
expectation value and the variance of the statistic, for any dimension
larger than one.

Last updated: Tue Sep 27 15:04:45 2005