Large scale file service based on DCE/DFS in HEP

Paper: 449
Session: C (talk)
Speaker: Yashiro, Shigeo, KEK, Tsukuba
Keywords: commodity computing, data management, security, file systems, hierarchical storage management

Large scale file service based on DCE/DFS in HEP

Takashi Sasaki, Shigeo Yashiro, Tadashi Ishikawa,
Youhei Morita, Hiroshi Mawatari, Yoshiyuki Watase
KEK, National Laboratory for High Energy Physics
1-1 Oho, Tsukuba, Ibaraki 305 Japan

Toshiya Itoh
Software Development Centre, Hitachi Ltd.
TYG 11th Bldg., 3-16-1, Nakamachi, Atsugi, Kanagawa 243, Japan

Yutaka Kodama, Hiroshi Itoh
Government & Public Corporation Information
Systems Division, Hitachi Ltd.
Tsukuba Mitsui bldg.,1-6-1, Takezono, Tsukuba, Ibaraki 305, Japan


The central computer system at KEK was replaced in January 1996, moving
from a mainframe to a distributed system. This system is shared by many
working groups (experimental groups, theorists, accelerator physicists
and so on) that have a variety of computing requirements. An analysis
of the user requirements led us to a cluster model for the system
design, and we decided to provide 10 workgroup services on it.

Users of this system requested that their own self-managed computing
resources be integrated into this workgroup service. Moreover, it is
well known that NIS and NFS are not secure enough for this kind of file
service. Secure Windows NT integration is also desirable, because PCs
have become very popular among physicists thanks to their excellent
cost performance. We decided to introduce DCE/DFS to meet these
requirements.

Experimental groups need huge storage capacity for their data. To meet
this need, the system is equipped with large-scale mass storage: a SONY
PetaSite tape system with a capacity of 20 TB and a 1 TB RAID system.
For easier file handling, we introduced Hierarchical Storage Management
(HSM). From the user's point of view, all files appear to reside on
disk; in reality, the system automatically migrates files between disks
and tapes.

To realize the above requirements, the key issues were as follows:

i) Integrating tape-based mass storage systems (HSM),
ii) Maximizing the cache of mass storage systems,
iii) Rapid data transmission (more than 1MB/sec.), and
iv) A secure and robust system requiring no special operational procedures.

For the distributed system to be able to handle the tape storage
system, we streamlined the data migration between the tape system and
the disk-based distributed file system (DCE/DFS). In particular, we
added a new protocol to balance load between the tape system and the
distributed file system.

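The abstract does not spell out this protocol. One plausible sketch of
such load balancing is a staging queue in which the DFS side must grant
cache space before each tape transfer proceeds; all names and the
space-grant rule below are assumptions for illustration:

```python
from collections import deque

# Hypothetical flow-controlled migration queue between a tape mover and
# a DFS cache partition.  The DFS side reserves free cache space before
# each transfer, so the tape system never pushes more data than the
# distributed file system can absorb.

class MigrationChannel:
    def __init__(self, cache_free_mb):
        self.cache_free_mb = cache_free_mb
        self.pending = deque()   # files waiting on tape: (name, size_mb)
        self.staged = []         # files now resident in the DFS cache

    def request(self, name, size_mb):
        """Queue a tape-resident file for staging into DFS."""
        self.pending.append((name, size_mb))

    def pump(self):
        """Stage queued files while the DFS cache can absorb them."""
        while self.pending and self.pending[0][1] <= self.cache_free_mb:
            name, size_mb = self.pending.popleft()
            self.cache_free_mb -= size_mb  # DFS grants space before the move
            self.staged.append(name)
        return self.staged
```

When cache space is later reclaimed, `cache_free_mb` grows again and
the next call to `pump()` drains more of the backlog, keeping the two
sides in balance.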
On the cache issue, we created a new policy to recycle the cache. This
lets users keep the benefit of caching for smaller files while coping
gracefully with large files that could otherwise trigger a cache
overflow and degrade system performance.
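The abstract gives only the goal of the policy, not its mechanism. A
minimal sketch of one size-aware recycling policy with that behaviour
(the threshold fraction and LRU recycling rule are assumptions, not the
actual DFS modification) might look like:

```python
from collections import OrderedDict

# Size-aware cache recycling: small files are kept under LRU recycling,
# while any file larger than a fixed fraction of the cache is passed
# through uncached, so a single large file cannot evict the whole
# working set of small files.

class RecyclingCache:
    def __init__(self, capacity, large_fraction=0.5):
        self.capacity = capacity
        self.large_limit = int(capacity * large_fraction)
        self.entries = OrderedDict()  # name -> size, in LRU order
        self.used = 0

    def access(self, name, size):
        """Return 'hit', 'cached', or 'bypass' for one file access."""
        if name in self.entries:
            self.entries.move_to_end(name)
            return "hit"
        if size > self.large_limit:
            return "bypass"  # large file streams past the cache
        while self.used + size > self.capacity:
            _, old_size = self.entries.popitem(last=False)  # recycle LRU
            self.used -= old_size
        self.entries[name] = size
        self.used += size
        return "cached"
```

Under this rule a very large file never enters the cache at all, so the
small-file working set survives it, while small files are recycled in
the usual least-recently-used fashion.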

Rapid data transmission among the distributed servers and clients is
another key issue. In addition to Ethernet and FDDI, we utilized a
Cluster Switch that achieves a 20 MB/sec transmission rate at the
physical level. With this, we attained the anticipated data
transmission rate of more than 1 MB/sec at the user level. We also
modified DCE so that a DCE connection can be established even when a
host is asymmetrically connected to multiple networks.

One of DCE's fortes is its security. There are, however, a number of
users who are accustomed to the legacy systems, namely NIS and NFS. We
integrated UNIX login and DCE login (e.g., in xdm, ftp, and rlogin) so
that such users can migrate seamlessly from the legacy systems to DCE.

We will present the design philosophy and status of the first
large-scale DFS service among the major HEP laboratories in the world.