
## What about HPC execution?

Assuming a NorduGrid/ARC cluster architecture, the common way to run code with Singularity is:

### Finding the GPU-capable RTE

List the runtime environments (RTEs) advertised by a cluster with:

```shell
arcinfo -l cluster_name
```
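To pick out the GPU-capable RTEs from the (often long) `arcinfo` output, you can filter it; this is a sketch that assumes the relevant RTE names contain "GPU" or "CUDA", which varies between sites:

```shell
# List cluster details, keeping only lines that mention a
# GPU- or CUDA-flavoured runtime environment (naming is site-specific).
arcinfo -l cluster_name | grep -i -e gpu -e cuda
```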

### Define the GPU-capable container

See Definition of GPU-based containers.

### Define an xRSL file

```
&
  (executable = runfile.sh)
  (jobname = "GPUtest")

  (inputFiles =
   ("ubuntu_latest.sif" "gsiftp://dcache.arnes.si:2811/data/arnes.si/gen.vo.sling.si/yourname/ubuntu_latest.sif")
   ("bestGPUcodeEver.py" "")
  )
  (stdout = test.log)
  (join = yes)
  (gridtime = 100)
  (gmlog = log)
  (cache = no)
  (memory = 300)
```
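If you submit many similar jobs, the description above can also be generated programmatically. A minimal Python sketch follows; `build_xrsl` is a hypothetical helper (not part of any ARC tooling), and the file names and URL are the example values from the job description above:

```python
def build_xrsl(executable, jobname, input_files, memory_mb=300, gridtime=100):
    """Render a minimal xRSL job description string.

    input_files: list of (local_name, source_url) pairs; an empty URL
    means the file is uploaded from the submission directory.
    """
    lines = [
        "&",
        f"  (executable = {executable})",
        f'  (jobname = "{jobname}")',
        "  (inputFiles =",
    ]
    for name, url in input_files:
        lines.append(f'   ("{name}" "{url}")')
    lines += [
        "  )",
        "  (stdout = test.log)",
        "  (join = yes)",
        f"  (gridtime = {gridtime})",
        "  (gmlog = log)",
        "  (cache = no)",
        f"  (memory = {memory_mb})",
    ]
    return "\n".join(lines)

xrsl = build_xrsl(
    "runfile.sh",
    "GPUtest",
    [
        ("ubuntu_latest.sif",
         "gsiftp://dcache.arnes.si:2811/data/arnes.si/gen.vo.sling.si/yourname/ubuntu_latest.sif"),
        ("bestGPUcodeEver.py", ""),
    ],
)
print(xrsl)
```

Writing the returned string to, say, `gputest.xrsl` gives you a file ready for submission.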

### Running Singularity

Note that Singularity needs the `--nv` flag to expose the host's NVIDIA GPU inside the container. The executable script (`runfile.sh` above) could contain:

```shell
singularity exec --nv ubuntu_latest.sif python3 bestGPUcodeEver.py
```
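A complete `runfile.sh` might look like the sketch below; the `nvidia-smi` sanity check is an assumption (it requires the NVIDIA driver tools on the worker node), not something the job strictly needs:

```shell
#!/bin/bash
# Runs in the ARC job's working directory, where the inputFiles
# (the image and the Python script) have been staged.
set -e

# Sanity check: confirm a GPU is visible on the worker node.
nvidia-smi

# --nv bind-mounts the host's NVIDIA driver libraries into the container.
singularity exec --nv ubuntu_latest.sif python3 bestGPUcodeEver.py
```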

## What in the name is dCache?

Please see the detailed tutorial by Barbara Krašovec.

TL;DR: dCache offers elegant storage of images in a location that other clusters can also see. Assuming an ARC client, copy the image there via:

```shell
arccp gpuImage.sif place_on_the_cluster
```
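With the image staged in dCache and the job description written, a full submission round-trip with the ARC client tools looks roughly like this sketch; `cluster_name` is a placeholder and `gputest.xrsl` is a hypothetical file name for the job description above:

```shell
# Submit the job description to a specific computing element.
arcsub -c cluster_name gputest.xrsl

# Poll the status of your jobs (repeat until the job reports FINISHED).
arcstat -a

# Retrieve stdout/stderr and output files of all finished jobs.
arcget -a
```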