Computer clusters of Reactor Engineering Division

To solve complex, computationally intensive problems that require substantial processing power and memory, we use the computer clusters Skuta (in preparation), Razor and Krn. A computer cluster consists of compute nodes in which several processor cores share memory. This allows us to solve problems in parallel and significantly reduce computation time.
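
For illustration, the sketch below shows the typical pattern for such distributed-memory computations: the problem is divided among MPI processes, usually one per processor core, and the partial results are combined at the end. It is a minimal, generic example (the variable names and problem size are arbitrary), not an excerpt from code actually run on the clusters.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI sketch: each rank (one per core, spread across the compute
       nodes) processes its own slice of a larger problem. */
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which worker am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many workers in total? */

        /* Example decomposition: rank i handles elements [i*chunk, (i+1)*chunk)
           (any remainder is ignored for brevity). */
        const long n = 1000000;
        const long chunk = n / size;
        long local_sum = 0;
        for (long i = rank * chunk; i < (rank + 1) * chunk; ++i)
            local_sum += i;

        long total = 0;
        MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum over %d ranks: %ld\n", size, total);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper such as mpicc and launched with mpirun, the same program can run on a single node or across many nodes.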

Overview of technical specifications

                      Krn                              Razor                                            Skuta
purchase (upgrades)   2010 (2011, 2012)                2014 (2014, 2015)                                2019
Σ nodes               51                               50                                               23
processor             Xeon X5650/X5670/X5675,          Xeon E5-2680 v2/E5-2697 v2/E5-2697 v3,           Xeon Gold 6148,
                      2.67/2.93/3.07 GHz,              2.8 (3.6)/2.7 (3.5)/2.6 (3.6) GHz,               2.40 GHz base / 3.7 GHz turbo,
                      12 MB cache                      25/30/35 MB cache                                27.5 MB cache
cores/node (HT)       12 (24)                          20 (40) / 24 (48) / 28 (56)                      40 (80)
Σ cores               616                              1096                                             920
total memory          2.04 TB                          8.386 TB                                         5.376 TB
GPUs                  17                               /                                                /
storage               17 TB                            96 TB                                            300 TB
interconnect          QDR InfiniBand + Gb Ethernet     FDR InfiniBand + Gb Ethernet                     Omnipath (100 Gbps)
OS                    SLES 11 SP1                      SLES 11 SP2                                      CentOS 7.6
UPS                   2x 20 kVA                        2x 20 kVA                                        /

Layout

Layout of the clusters

Computer cluster Skuta

Computer cluster Skuta (named after a mountain in the Kamnik-Savinja Alps) consists of 23 compute nodes with 920 processor cores, a login node, and a control and backup node. The compute nodes have a total of 5376 GB of memory. A dedicated storage node with a 300 TB disk array is allocated for long-term data storage. Omnipath technology enables a fast connection between the nodes.

Detailed specifications of cluster Skuta:

  • login node:
    • processors: 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40 GHz, 20 cores each (x2 HT);
    • memory: 192 GB ECC DDR4 @ 2666 MHz;
    • graphics: nVidia Quadro P4000, 8 GB GDDR5, 1792 CUDA cores;
    • disk array: 126 TB (6 TB/disk) @ RAID 1 + 1 Hot spare;
    • disk system for OS: 2x 960 GB SSD @ RAID 1;
    • Omnipath HCA;
    • 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
  • control node with backup disk array:
    • processors: 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40 GHz, 20 cores each (x2 HT);
    • memory: 192 GB ECC DDR4 @ 2666 MHz;
    • disk array: 204 TB (6 TB/disk) @ RAID 6;
    • disk system for OS: 2x 960 GB SSD @ RAID 1;
    • Omnipath HCA;
    • 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).
  • 22 compute nodes (+ 1 node with additional memory):
    • processors: 2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40 GHz, 20 cores each (x2 HT);
    • memory: 192 GB ECC DDR4 @ 2666 MHz;
    • disk: 960 GB SSD;
    • Omnipath HCA;
    • 10 Gbps Ethernet for data transfer + 1 Gbps Ethernet for system control (IPMI).

Computer cluster Razor

Computer cluster Razor has a total of 50 compute nodes with 1096 processor cores, plus a head node and a storage node. With Hyper-Threading, the compute nodes can run up to 2192 parallel tasks (two hardware threads per core). The compute nodes have 8368 GB of memory in total. A dedicated node stores data on a 96 TB disk array. FDR InfiniBand technology provides a high-throughput, low-latency link between the compute nodes.
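
Within a single node all cores (and their hardware threads) share memory, so shared-memory programming models such as OpenMP can use them directly. The short sketch below is a generic example, not code from the cluster: it prints the number of hardware threads OpenMP sees on the node and computes a simple sum in parallel.

    #include <stdio.h>
    #include <omp.h>

    /* Minimal OpenMP sketch: on a Razor node with 2x10 cores and Hyper-Threading
       enabled, omp_get_num_procs() typically reports 40 hardware threads. */
    int main(void) {
        printf("hardware threads visible to OpenMP: %d\n", omp_get_num_procs());

        double sum = 0.0;
        /* The iterations are divided among the threads; each thread keeps a
           private partial sum that is combined at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; ++i)
            sum += 1.0 / i;

        printf("harmonic sum computed with up to %d threads: %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }

Built with the compiler's OpenMP flag (for example gcc -fopenmp), the loop is split automatically among the available threads.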

Detailed specifications of cluster Razor:

  • head node:
    • processors: 2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz, 10 cores each (x2 HT);
    • memory: 128 GB DDR3 @ 1866 MHz;
    • graphics card: nVidia Quadro K6000, 12 GB GDDR5, 2880 CUDA cores;
    • disk: 4x 450 GB 15k7 HITACHI system disks (for /, /boot, /data1): Σ 820 GB @ RAID 1;
    • 2x FDR InfiniBand HCAs.
  • storage node:
    • disk array: 2 chassis with a total of 21x 4 TB (WD) disks;
    • disk array: storage node chassis with 11 embedded 4 TB (WD) disks;
    • NFS storage server: 40 TB;
    • 2x FDR InfiniBand HCAs.
  • 30 compute nodes:
    • processors: 2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz, 10 cores each (x2 HT);
    • memory: 128 GB DDR3 @ 1866 MHz;
    • disk: 4x 450 GB 15k7 system disks (for /, /boot, /scratch): Σ 1.6 TB @ RAID 0;
    • FDR InfiniBand HCA.
  • 16 compute nodes:
    • processors: 2x Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60 GHz, 14 cores each (x2 HT);
    • memory: 256 GB DDR4 @ 1866 MHz;
    • disk: 3x 600 GB 10k system disks (for /, /boot, /scratch): Σ 1.8 TB @ RAID 0;
    • FDR InfiniBand HCA (56 Gb/s).
  • 2 compute nodes:
    • processors: 2x Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70 GHz, 12 cores each (x2 HT);
    • memory: 128 GB DDR3 @ 1866 MHz;
    • disk: 4x 450 GB 15k7 system disks (for /, /boot, /scratch): Σ 1.6 TB @ RAID 0;
    • FDR InfiniBand HCA.
  • FDR InfiniBand switch:
    • 36 ports;
    • data rate: 54.54 Gbit/s (64b/66b encoding);
    • bandwidth: 5.75 GB/s;
    • latency: 4 µs.
  • Gigabit Ethernet switch.
  • Alternative power source: UPS, 2x20 kW.
  • Operating system: SUSE Linux Enterprise Server 11 SP2.

Computer cluster Krn

 


Computer cluster Krn has a total of 50 nodes with 600 processor cores and, with Hyper-Threading technology, can run up to 1200 parallel processes. Sixteen nodes are equipped with GPU modules, which, together with CUDA technology, enables a further reduction of computation time. The cluster nodes have 2040 GB of memory in total. A 17 TB disk array is available for permanent data storage. InfiniBand technology provides a high-throughput, low-latency link between the compute nodes.
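
Work offloaded to the Tesla M2075 GPU modules uses the CUDA runtime. The sketch below is a minimal, generic C example that only lists the GPUs visible on a node; it assumes the CUDA toolkit is installed (compiled with nvcc) and is not part of any production code.

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Minimal sketch: enumerate the CUDA-capable GPUs on the current node.
       On a Krn GPU node this reports a Tesla M2075 with about 6 GB of memory. */
    int main(void) {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("no CUDA-capable device found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %d multiprocessors, %.1f GB memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / 1e9);
        }
        return 0;
    }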

Detailed specifications of the SGI cluster Krn:

  • head node SGI Altix XE 270:
    • processors: 2x Intel Xeon X5650 @ 2.67 GHz, 6 cores each (x2 HT);
    • memory: 72 GB;
    • graphics card: NVidia Quadro FX 3800;
    • system disk: 2x 150 GB @ 15000 rpm, RAID 1;
    • InfiniBand QDR 4x interconnect.
  • visualisation server "Kanin":
    • NVidia Quadro 6000 (448 CUDA cores, 6 GB memory);
    • two 8-core processors and 128 GB memory;
    • 4x 400 GB SAS @ 15000 rpm, RAID 0 and 1;
    • InfiniBand QDR 4x interconnect.
  • 6 compute nodes with 96 GB of memory SGI Altix XE 270:
    • processors: 2x Intel Xeon X5670 @ 2.93 GHz, 6 cores each (x2 HT);
    • memory: 96 GB;
    • system disk: 150 GB SAS @ 15000 rpm;
    • scratch disk: 5x 146 GB SAS @ 15000 rpm, RAID 0;
    • InfiniBand QDR 4x interconnect.
  • 5 compute nodes with 24 GB memory SGI Altix XE 270:
    • processors: 2x Intel Xeon X5650 @ 2.67 GHz, 6 cores each (x2 HT);
    • memory: 24 GB;
    • system disk: 150 GB SAS @ 15000 rpm;
    • scratch disk: 5x 150 GB @ 15000 rpm, RAID 0;
    • InfiniBand QDR 4x interconnect.
  • 22 compute nodes SGI Altix C1104-2TY9:
    • processors: 2x Intel Xeon X5670 @ 2.93 GHz, 6 cores each (x2 HT);
    • memory: 24 GB;
    • system disk: 300 GB SAS @ 15000 rpm;
    • scratch disk: 300 GB SAS @ 15000 rpm;
    • InfiniBand QDR 4x interconnect.
  • compute node with GPU module:
    • processors: 2x Intel Xeon X5675 @ 3.07 GHz, 6 cores each (x2 HT);
    • memory: 24 GB;
    • system and scratch disk: 3x 300 GB SATA;
    • GPU: Tesla M2075 (448 CUDA cores, 6 GB memory);
    • InfiniBand QDR 4x interconnect.
  • 15 compute nodes with GPU module:
    • processors: 2x Intel Xeon X5675 @ 3.07 GHz, 6 cores each (x2 HT);
    • memory: 48 GB;
    • system and scratch disk: 3x 300 GB SAS @ 15000 rpm, RAID 0;
    • GPU: Tesla M2075 (448 CUDA cores, 6 GB memory);
    • InfiniBand QDR 4x interconnect.
  • shared disk array:
    • 17 TB, RAID 6;
    • 2x Gigabit switch;
    • 2x QDR 1U 40 Gb/s 36-port InfiniBand switch
      (Mellanox IS5030 and Voltaire 4036);
    • KVM switch, direct graphical access to nodes.
  • alternative power source: UPS system, 20 kVA.
  • operating system: SUSE Linux Enterprise Server 11 (x86_64).

Page editor: Matej Tekavčič

 
