HPC Trdina
HPC Trdina is installed in the upgraded server room of the Faculty of Information Studies in Novo mesto, where it expands the faculty's IT infrastructure and ensures its uninterrupted operation. HPC Trdina is based on HPC Maister, which is installed at UM. We selected a minimal set of components that form a supercomputer system providing fast access to HPC Maister and to HPC Vega, which is scheduled to launch at the end of 2020 and will be hosted on the premises of the IZUM Institute.
HPC Trdina consists of the following components:
– Mini HPC
– Big RAM server (head node)
– GPU computing node
– Disk array
– Connection switch
– Management switch
HPC Trdina configuration:
The Trdina supercomputer consists of one main (head) node, three computational nodes that form the computing part, and a graphics computing node. In addition, the system includes a fast disk array for data storage.
The head node (main node) contains 2x AMD EPYC Naples 7501 processors with 32 cores each (64 cores in total), 1024 GB of RAM, and an SSD with a capacity of 980 GB.
The head node is primarily intended for managing the other nodes: configuration, allocation of processing power and memory, start-up and control of various computational processes, and so on. It can, of course, also be used for computation.
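As an illustration of this role, here is a minimal sketch of submitting a batch job from the head node. It assumes a Slurm-style workload manager, which the text does not name; the script and executable names are hypothetical.

```python
# Minimal sketch: submitting a batch job from the head node.
# Assumption: a Slurm-style scheduler (not specified in the source text);
# the job script and executable names are illustrative only.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64   # one task per core on a 2x EPYC 7501 node
#SBATCH --time=01:00:00
srun ./my_simulation           # hypothetical executable
"""

with open("example_job.sh", "w") as f:
    f.write(job_script)

# The head node hands the job to the scheduler, which allocates cores and
# memory on the computational nodes and starts the processes there.
subprocess.run(["sbatch", "example_job.sh"], check=True)
```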
The computational part of the supercomputer consists of nodes for non-graphical simulations and calculations, while the graphics computing node is intended for graphics simulations.
The computing part consists of three identical servers, each with 2x AMD EPYC Naples 7501 processors with 32 cores each, for a total of 192 cores and 1536 GB of RAM. Each server also contains 2x 480 GB SSDs in a RAID1 configuration. The computing part of the supercomputer has a peak performance of 10400 GFlops.
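For reference, the per-node breakdown behind these totals works out as follows (a quick arithmetic sketch; the per-node figures are derived here, not quoted in the source):

```python
# Arithmetic behind the compute-part figures quoted above.
nodes = 3
cores = nodes * 2 * 32            # 2 CPUs x 32 cores per node -> 192 cores in total
ram_per_node_gb = 1536 // nodes   # 1536 GB in total -> 512 GB per node
gflops_per_node = 10400 / nodes   # 10400 GFlops in total -> ~3467 GFlops per node
print(cores, ram_per_node_gb, round(gflops_per_node))
```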
The graphics computing node contains 2x Intel SKL 6128 processors with 6 cores each (12 cores in total), 256 GB of RAM, and 2x 480 GB SSDs in a RAID1 configuration. 4x NVIDIA Tesla P100 32 GB graphics cards are available for graphics-intensive simulations. The graphics part of the node can deliver between 4.7 TFlops and 9.3 TFlops.
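The 4.7 and 9.3 TFlops figures match NVIDIA's published double- and single-precision peaks for a single Tesla P100. Under that reading, which is an assumption since the text does not say whether the range is per card or aggregate, the four-card total would be roughly:

```python
# Hedged reading of the GPU figures: 4.7 TFlops (double precision) and
# 9.3 TFlops (single precision) taken as per-card peaks of a Tesla P100.
gpus = 4
fp64_tflops = 4.7 * gpus   # ~18.8 TFlops double precision across four cards
fp32_tflops = 9.3 * gpus   # ~37.2 TFlops single precision across four cards
print(fp64_tflops, fp32_tflops)
```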
The disk array, or data storage, consists of three servers with 2x Intel CLX 4210 (2P) processors with 10 cores each, for a total of 60 cores and 768 GB of RAM. Each of these servers also contains 2x 480 GB SSDs in a RAID1 configuration for booting the system. The total raw disk capacity is 138 TB.
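A per-server breakdown of the storage tier, with a hedged note on usable capacity (the redundancy scheme is not stated in the source, so the replication factor below is only an example):

```python
# Per-server breakdown of the storage servers described above.
servers = 3
cores_total = servers * 2 * 10       # 2 CPUs x 10 cores per server -> 60 cores
ram_per_server_gb = 768 // servers   # 768 GB in total -> 256 GB per server

raw_tb = 138
# Usable capacity depends on the redundancy scheme (not specified in the text);
# with 3-way replication, for example, roughly a third of the raw space remains.
usable_tb_example = raw_tb / 3
print(cores_total, ram_per_server_gb, usable_tb_example)
```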
All nodes are interconnected via a high-speed local area network (100 GbE); in addition, a separate network switch provides a dedicated management connection to all nodes.
CentOS is installed on all nodes.