---
title: Clone repo and compilation
feature_image: /pic/title-pic-c.png
excerpt: Clone repo and compilation
---

## Library Dependencies

dyGiLa is built on top of two frameworks. One provides the mathematical objects, the discretized spatial grid, and the time points needed for lattice field theory; the other provides parallel data streaming of simulation data for post-hoc analysis and in-situ visualization. The former is the HILA project, the latter is Ascent. Make sure both have been compiled or installed properly before going to the next steps.

To leverage modern hybrid CPU-GPGPU computational resources, both HILA and Ascent require GNU Make, CMake, an MPI implementation (MPICH or OpenMPI), FFTW (FFTW3), and LLVM/Clang. Make sure these tools and libraries are installed on the system you intend to work on.
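Before compiling, it can save time to confirm the toolchain is actually visible. Below is a minimal sketch; the `probe` helper is ours, not part of HILA or dyGiLa, and the MPI compiler wrapper is assumed to be called `mpicc` (the usual name for both MPICH and OpenMPI). FFTW is a library rather than an executable, so it is not probed here.

```shell
# Hypothetical helper: report whether a build tool is on PATH.
probe() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "MISSING: $1"
    fi
}

# Probe the tools named above.
for tool in make cmake mpicc clang; do
    probe "$tool"
done
```

Any `MISSING` line means the corresponding package (or, on a cluster, the corresponding module) still needs to be installed or loaded.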

When the target platform offers GPGPU support, the vendor-specific drivers and libraries have to be installed and configured properly as well. For NVIDIA hardware, the correct architecture (e.g., sm_75, sm_80, sm_90) has to be identified for the card, and CUDA has to be installed; OpenACC is not supported. For AMD hardware, the drivers and the open-source ROCm libraries have to be installed. However, AMD support can be tricky depending on the architecture: AMD Instinct (CDNA) GPUs have full ROCm support, while Radeon (RDNA) GPUs are only partially supported. To find out whether a given AMD GPU is supported by ROCm, check the ROCm support list.
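On recent NVIDIA drivers, `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` prints the compute capability (e.g. `8.0`). A small helper, hypothetical and ours rather than part of CUDA or dyGiLa, can turn that value into the sm_XX token mentioned above:

```shell
# Hypothetical helper: map a CUDA compute capability string such as "8.0"
# (as printed by nvidia-smi's compute_cap query) to the sm_XX token.
cc_to_sm() {
    printf 'sm_%s\n' "$(printf '%s' "$1" | tr -d '.')"
}

cc_to_sm 7.5   # sm_75
cc_to_sm 9.0   # sm_90
```

On the actual machine one would feed it the live query, e.g. `cc_to_sm "$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)"`.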

## Getting the Source and Compiling

Like most FOSS projects, dyGiLa is hosted on the public source-code hosting services GitHub, Bitbucket, and GitLab.

```shell
$ git clone git@github.com:dyGiLa/dyGiLa.git ./dyGiLa
```

or

```shell
$ git clone https://bitbucket.org/hindmars/he3-simulator.git ./dyGiLa
```

or

```shell
$ git clone https://gitlab.com/lftgl2/dygila.git ./dyGiLa
```

will fetch the default branch into the folder dyGiLa under the current path. After this, run

```shell
$ cd dyGiLa
$ make -j n ARCH=xxx-xxx
```

to compile the dyGiLa binary. Here n is the number of threads used for parallel compilation, and ARCH=xxx-xxx specifies the hardware architecture; it can be

  • vanilla: plain x86-64 CPU architecture
  • AVX: SIMD/AVX vector-accelerated CPU architecture

or for specific clusters:

  • mahti: CPU architecture on cluster mahti
  • mahti-cuda: GPU-aware MPI architecture on cluster mahti
  • lumi: CPU architecture on cluster LUMI
  • lumi-hip-CC: GPU-aware MPI architecture on cluster LUMI

More supported architecture configurations can be found in the HILA source files.
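Putting the pieces together, a concrete invocation for a plain workstation versus the LUMI GPU partition might look as follows. This is only an illustration: `build_cmd` is a hypothetical helper of ours that merely prints the `make` command it would run, with the ARCH values taken from the lists above.

```shell
# Hypothetical helper: assemble the compile command from a target
# architecture and an optional thread count (default 4), and print it.
build_cmd() {
    arch="$1"
    jobs="${2:-4}"
    echo "make -j ${jobs} ARCH=${arch}"
}

build_cmd vanilla 8      # make -j 8 ARCH=vanilla
build_cmd lumi-hip-CC    # make -j 4 ARCH=lumi-hip-CC
```

To actually build, run the printed command inside the dyGiLa folder.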

If everything goes well, the user will find the linked binary executable dyGiLa under the ./build folder. However, reality can be harsh and complicated, especially on the computational clusters that dyGiLa targets. These big machines are most commonly configured with the Lmod module system, and a crucial dependency may be missing from the available module list, such as LLVM/Clang, which provides Abstract Syntax Tree (AST) utilities for GPU kernel generation. For how to solve or work around this problem, please read the Dependencies On Supercomputers section of the HILA documentation.