COCOMON

COst-effective and COmpact cluster design for MONte Carlo simulation

INTRODUCTION

The goal of this project is to design a high-performance computing (HPC) cluster for Monte Carlo simulation. To build a small, laboratory-scale HPC system, we designed a compact, scalable, and cost-effective cluster assembled from commodity PC parts.

HISTORY

This project began in 2010 with two graduate students in the Department of Radiologic Sciences, Korea University, Hakjae Lee and Boram Lee, to address the massive computing demands of designing nuclear medicine (SPECT and PET) and radiotherapy systems. The COCOMON project was launched with financial support from Prof. Kisung Lee.

COCOMON.V1 was built in 2011 with 7 quad-core processors (AMD Deneb 925). GATE (Geant4 Application for Tomographic Emission) V5.0 was installed on this system.

As soon as COCOMON.V1 was running normally, the V2 project began. COCOMON.V2 was designed as a compact system that fits in a commercial 19-inch rack case. It was completed in 2013 with heterogeneous hardware combining parts from COCOMON.V1 and other commercial processors (AMD FX-6100, FX-8120, Intel Core i7-3770, and Xeon E3-1220).

COCOMON.V2 is installed at the Medical Information Processing Laboratory (MIPL), Korea University, where GATE V7.0 runs on its 88 CPUs. The project is now maintained by Seungbin Bae and Kwangdon Kim, Ph.D. students at MIPL, and Jaehee Chun, an M.S. student.

CONCEPT

COCOMON is designed for high-performance computing. To maximize computing performance while keeping each node simple to swap in and out, we adopted the COW (Cluster Of Workstations) design. Two micro-ATX motherboards are mounted on each aluminum plate, which is sized to fit a standard 19-inch rack.

A COCOMON system consists of one frontend node and a number of compute nodes. The frontend node handles job distribution and load balancing, acts as the login terminal for network users, and serves a web-based cluster status report. It has two Ethernet ports: one facing the network users and one facing the local compute nodes. The compute nodes, which perform the main calculation, are connected to the frontend node through a high-speed Ethernet switching hub.

To minimize the footprint on each plate, TFX-size power supplies are used. Each compute node has one HDD, while the frontend node has two HDDs configured as RAID 0, which stripes data across both drives for fast access at low cost.
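In practice, the job distribution amounts to splitting one Monte Carlo run into many independent sub-jobs, each with its own random seed, and letting the scheduler place them on idle compute nodes. The Python sketch below illustrates the idea; it assumes the SGE scheduler bundled with Rocks (qsub) and a hypothetical macro file main.mac that reads its seed from a GATE alias, and it is not the exact script used on COCOMON.

    #!/usr/bin/env python
    """Split one GATE run into independent sub-jobs, one per CPU.

    Sketch only: assumes the SGE roll shipped with Rocks ('qsub') and a
    hypothetical macro 'main.mac' whose seed is set from the 'seed'
    alias, e.g. /gate/random/setEngineSeed {seed}.
    """
    import os
    import subprocess

    N_JOBS = 88            # one sub-job per CPU in the cluster
    MACRO = "main.mac"     # hypothetical GATE macro (assumption)

    for job in range(N_JOBS):
        seed = 1000 + job  # distinct seeds -> statistically independent runs
        script = "job_%03d.sh" % job
        with open(script, "w") as f:
            f.write("#!/bin/sh\n")
            # GATE's -a option substitutes alias values into the macro
            f.write("Gate -a [seed,%d][jobid,%d] %s\n" % (seed, job, MACRO))
        os.chmod(script, 0o755)
        # Hand the sub-job to SGE; the scheduler picks an idle compute node
        subprocess.check_call(["qsub", "-cwd", script])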

In this study, we selected Rocks Cluster 6.1.1 (Sand Boa) as the operating system (OS) of COCOMON because it bundles many of the services an HPC cluster needs. GATE V7.0 runs on top of this OS.
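Because the sub-jobs run with independent seeds, their outputs can simply be pooled afterwards. GATE provides its own splitter and merger utilities for its native output formats; for a plain count tally the pooling reduces to summing and averaging, as in this hypothetical sketch (the job_XXX.dat files and their format are assumptions, not COCOMON's actual output):

    #!/usr/bin/env python
    """Pool the tallies of N independent Monte Carlo sub-jobs.

    Sketch only: assumes each sub-job wrote its counts to a hypothetical
    text file 'job_XXX.dat' (one integer per line); GATE's native outputs
    would instead be combined with its own merger utility.
    """
    import math

    N_JOBS = 88

    totals = []
    for job in range(N_JOBS):
        with open("job_%03d.dat" % job) as f:
            totals.append(sum(int(line) for line in f))

    # Independent seeds make the sub-jobs independent samples, so the
    # pooled mean is the estimate and its statistical error falls as
    # 1/sqrt(N_JOBS).
    mean = sum(totals) / float(N_JOBS)
    var = sum((t - mean) ** 2 for t in totals) / (N_JOBS - 1)
    stderr = math.sqrt(var / N_JOBS)
    print("mean counts per job: %.1f +/- %.1f" % (mean, stderr))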

NEWS

DOCUMENTS