Computer clusters of Reactor Engineering Division
To solve complex, computationally intensive problems that require a large amount of processing power and memory, we use the computer clusters Skuta (in preparation), Razor and Krn. A computer cluster consists of computational nodes in which several processor cores share memory. This allows us to solve problems in parallel and significantly reduce the computation time.
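As an illustration of how such a parallel job is spread over the cluster, the following is a minimal MPI sketch; the choice of MPI and of the mpicc/mpirun tools is our assumption for the example, since no particular parallel programming model is prescribed here.

    /* Minimal MPI sketch: each process reports its rank and the node it runs on.
       Assumes an MPI library (e.g. Open MPI or MPICH) is available on the cluster. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* index of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(node, &len);     /* name of the compute node */

        printf("process %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper (e.g. mpicc hello_mpi.c -o hello_mpi) and started with a launcher such as mpirun -np 80 ./hello_mpi, the processes are distributed over the requested cores and nodes; the file name and process count are placeholders.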
Overview of technical specifications
| | Krn | Razor | Skuta |
|---|---|---|---|
| purchase (upgrades) | 2010 (2011, 2012) | 2014 (2014, 2015) | 2019 (2020, 2021) |
| ∑nodes | 51 | 50 | 100 |
| processor | Xeon X5650/X5670/X5675, 2.67/2.93/3.07 GHz, 12 MB cache | Xeon E5-2680 v2/E5-2697 v2/E5-2697 v3, 2.8 (3.6)/2.7 (3.5)/2.6 (3.6) GHz, 25/30/35 MB cache | Xeon Gold 6148 (6240R, 6238R), 2.40 (2.20) GHz base / 3.7 GHz turbo, 27.5 MB cache |
| cores/node (HT) | 12 (24) | 20 (40) / 24 (48) / 28 (56) | 40 (80) / 48 (96) / 56 (112) |
| ∑cores | 616 | 1096 | 4912 |
| tot. memory | 2.04 TB | 8.386 TB | 20.16 TB |
| GPUs | 17 | / | / |
| storage | 17 TB | 96 TB | 300 TB |
| interconnect | QDR IB + Gb Eth | FDR IB + Gb Eth | Omni-Path (100 Gbps) |
| OS | SLES 11 SP1 | SLES 11 SP2 | CentOS 7.6 |
| UPS | 2x20 kVA | 2x20 kVA | / |
Layout
Computer cluster Skuta
Computer cluster Skuta (named after a mountain in the Kamnik-Savinja Alps) consists of 100 computational nodes with 4912 processor cores, a head node, and a control and backup node. The computational nodes have a total memory of ~20 TB. A dedicated storage node with a 300 TB disk array is used for long-term storage of data. Omni-Path technology enables a fast connection between the nodes.
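Because each Skuta node offers 40 to 56 cores that share memory, while separate nodes communicate over Omni-Path, jobs are often written in a hybrid distributed/shared-memory style. The sketch below is a hypothetical MPI + OpenMP example, not a prescribed Skuta workflow: MPI ranks are placed on different nodes and each rank spawns OpenMP threads on the cores of its node.

    /* Hybrid MPI + OpenMP sketch (illustrative assumption, not Skuta-specific):
       MPI distributes work between nodes, OpenMP uses the cores within a node. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request thread support so each rank may run OpenMP threads. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            printf("rank %d/%d, thread %d/%d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Typically one rank per node is started and OMP_NUM_THREADS is set to the node's core count (40, 48 or 56 on Skuta), but the exact launch options depend on the local setup.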
Detailed specifications of cluster Skuta:
- login node:
- processors: 2x Intel(R) Xeon(R) CPU 6148 @ 2.40 GHz, 20 cores each (x2 HT);
- memory: 192 GB ECC DDR4 @ 2666 MHz;
- graphics: nVidia Quadro P4000, 8 GB GDDR5, 1792 Cuda cores;
- disk array: 126 TB (6 TB/disk) @ RAID 1 + 1 Hot spare;
- disk system for OS: 2x 960 GB SSD @ RAID 1;
- Omni-Path HCA;
- 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
- control node with backup disk array:
- processors: 2x Intel(R) Xeon(R) CPU 6148 @ 2.40 GHz, 20 cores each (x2 HT);
- memory: 192 GB ECC DDR4 @ 2666 MHz;
- disk array: 204 TB (6 TB/disk) @ RAID 6;
- disk system for OS: 2x 960 GB SSD @ RAID 1;
- Omni-Path HCA;
- 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
- 22 compute nodes (+ 1 node with additional memory):
- processors: 2x Intel(R) Xeon(R) CPU 6148 @ 2.40 GHz, 20 cores each (x2 HT);
- memory: 192 GB ECC DDR4 @ 2666 MHz;
- disk: 960 GB SSD;
- Omni-Path HCA;
- 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
- 33 compute nodes (+ 1 node with additional memory):
- processors: 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40 GHz, 24 cores each (x2 HT);
- memory: 192 GB ECC DDR4 @ 2933 MHz;
- disk: 890 GB SSD;
- Omni-Path HCA;
- 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
- 43 compute nodes (+ 1 node with additional memory):
- processors: 2x Intel(R) Xeon(R) Gold 6238R CPU @ 2.20 GHz, 28 cores each (x2 HT);
- memory: 192 GB ECC DDR4 @ 2933 MHz;
- disk: 1.6 TB SSD;
- Omni-Path HCA;
- 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
Computer cluster Razor
Computer cluster Razor has a total of 50 nodes: 48 compute nodes with 1096 processor cores, a head node and a storage node. With Hyper-Threading, the compute nodes can run up to 2192 parallel tasks. The compute nodes have 8368 GB of memory in total. A dedicated node is used to store data on a 96 TB disk array. InfiniBand FDR technology provides a high-throughput, low-latency link between the computational nodes.
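The figure of 2192 parallel tasks follows from Hyper-Threading, which exposes two logical processors per physical core (2 x 1096). The short OpenMP sketch below, an illustrative example rather than part of the cluster configuration, prints the logical processor and thread counts visible on a node.

    /* OpenMP sketch (illustrative): show logical processors and default thread count.
       On a 20-core Razor node with Hyper-Threading, 40 logical processors are visible. */
    #include <omp.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long logical = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs seen by the OS */
        printf("logical processors: %ld\n", logical);
        printf("default OpenMP threads: %d\n", omp_get_max_threads());

        #pragma omp parallel
        {
            #pragma omp single
            printf("parallel team size: %d\n", omp_get_num_threads());
        }
        return 0;
    }

The sketch compiles with any OpenMP-capable compiler (e.g. gcc -fopenmp); whether the extra logical processors actually help depends on the code, since memory-bound solvers often gain little from Hyper-Threading.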
Detailed specifications of cluster Razor:
- head node:
- processors: 2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz, 10 cores each (x2 HT);
- memory: 128 GB DDR3 @ 1866 MHz;
- graphical card: nVidia Quadro K6000, 12 GB GDDR5, 2880 Cuda cores;
- disk: 4x (450 GB 15k7 HITACHI) system disks (for / /boot /data1): Σ820 GB @ RAID 1;
- 2x FDR InfiniBand HCAs.
- storage node:
- disk array: 2 chassis with a total of 21x 4 TB (WD) disks;
- disk array: storage node chassis with 11 embedded 4 TB (WD) disks;
- NFS storage server: 40 TB;
- 2x FDR InfiniBand HCAs.
- 30 compute nodes:
- processors: 2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz, 10 cores each (x2 HT);
- memory: 128 GB DDR3 @ 1866 MHz;
- disk: 4x (450 GB 15k7) system disks (for / /boot /scratch): Σ1.6 TB @ RAID 0;
- FDR Infiniband HCA.
- 16 compute nodes:
- processors: 2x Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60 GHz, 14 cores each (x2 HT);
- memory: 256 GB DDR4 @ 1866 MHz;
- disk: 3x (600 GB 10k) system disks (for / /boot /scratch): Σ1.8 TB @ RAID 0;
- FDR Infiniband HCA (56 Gb/s).
- 2 compute nodes:
- processors: 2x Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70 GHz, 12 cores each (x2 HT);
- memory: 128 GB DDR3 @ 1866 MHz;
- disk: 4x (450 GB 15k7) system disks (for / /boot /scratch): Σ1.6 TB @ RAID 0;
- FDR Infiniband HCA.
- InfiniBand FDR switch:
- 36 ports;
- data transfer: 54.54 Gbit/s (64b/66b encoding);
- bandwidth: 5.75 GB/s;
- latency: 4 µs.
- Gigabit Ethernet switch.
- Alternative power source: UPS, 2x20 kVA.
- Operating system: SUSE Linux Enterprise Server 11 SP2.
Computer cluster Krn
Computer cluster Krn has a total of 50 nodes with 600 processor cores; with Hyper-Threading technology it can run up to 1200 parallel processes. Sixteen nodes are equipped with GPU modules, which, together with CUDA technology, allow a further reduction of computation time. The cluster nodes have 2040 GB of memory in total. A 17 TB disk array is available for permanent storage of data. InfiniBand technology provides a high-throughput, low-latency link between the computational nodes.
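The GPU nodes carry NVIDIA Tesla M2075 cards that are programmed through CUDA. The device-query sketch below is only an illustration and assumes the CUDA toolkit is installed on those nodes; it lists each GPU with its multiprocessor count and memory.

    /* CUDA runtime device query in C (illustrative sketch, assumes the CUDA toolkit).
       On a Tesla M2075 it should report 14 multiprocessors (448 CUDA cores on Fermi)
       and about 6 GB of global memory. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA-capable device visible\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %d multiprocessors, %.1f GB, compute capability %d.%d\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }

The file can be compiled with nvcc or with a host compiler given the CUDA include and library paths (e.g. -lcudart); the actual kernels used by a simulation code are of course application-specific.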
Detailed specifications of the SGI cluster Krn:
- head node SGI Altix XE 270:
- processors: 2x Intel Xeon X5650 @ 2.67 GHz, 6 cores each (x2 HT);
- memory: 72 GB;
- graphical card: NVidia Quadro FX 3800;
- system disk: 2x 150 GB @ 15000 rpm, RAID 1;
- InfiniBand QDR 4x interconnect.
- visualisation server "Kanin":
- NVidia Quadro 6000 (448 CUDA cores, 6 GB memory);
- two 8-core processors and 128 GB memory;
- 4x 400 GB SAS @ 15000 rpm, RAID 0 and 1;
- InfiniBand QDR 4x interconnect.
- 6 compute nodes with 96 GB of memory SGI Altix XE 270:
- processors: 2x Intel Xeon X5670 @ 2.93 GHz, 6 cores each (x2 HT);
- memory: 96 GB;
- system disk: 150 GB SAS @ 15000 rpm;
- scratch disk: 5x 146 GB SAS @ 15000 rpm, RAID 0;
- InfiniBand QDR 4x interconnect.
- 5 compute nodes with 24 GB memory SGI Altix XE 270:
- processors: 2x Intel Xeon X5650 @ 2.67 GHz, 6 cores each (x2 HT);
- memory: 24 GB;
- system disk: 150 GB SAS @ 15000 rpm;
- scratch disk: 5x 150 GB @ 15000 rpm, RAID 0;
- InfiniBand QDR 4x interconnect.
- 22 compute nodes SGI Altix C1104-2TY9:
- processors: 2x Intel Xeon X5670 @ 2.93 GHz, 6 cores each (x2 HT);
- memory: 24 GB;
- system disk: 300 GB SAS @ 15000 rpm;
- scratch disk: 300 GB SAS @ 15000 rpm;
- InfiniBand QDR 4x interconnect.
- compute node with GPU module:
- processors: 2x Intel Xeon X5675 @ 3.07 GHz, 6 cores each (x2 HT);
- memory: 24 GB;
- system and scratch disk: 3x 300 GB SATA;
- GPU: Tesla M2075 (448 CUDA cores, 6 GB memory);
- InfiniBand QDR 4x interconnect.
- 15 compute nodes with GPU module:
- processors: 2x Intel Xeon X5675 @ 3.07 GHz, 6 cores each (x2 HT);
- memory: 48 GB;
- system and scratch disk: 3x 300 GB SAS @ 15000 rpm, RAID 0;
- GPU: Tesla M2075 (448 CUDA cores, 6 GB memory);
- InfiniBand QDR 4x interconnect.
- shared disk array:
- 17 TB, RAID 6;
- 2x Gigabit switch;
- 2x QDR 1U 40 Gb/s 36-port InfiniBand switches (Mellanox IS5030 and Voltaire 4036);
- KVM switch, direct graphical access to nodes.
- alternative power source: UPS system, 20 kVA.
- operating system: SUSE Linux Enterprise Server 11 (x86_64).
Page editor: Matej Tekavčič