GPU-Cluster is available for user operation at LRZ
LRZ aktuell
publish at lrz.de
Wed Jun 12 12:26:08 CEST 2013
A GPU cluster with 4 nodes and a total of 8 Tesla Fermi 2070 GPUs is
available for user operation at LRZ. Each node is equipped with an
8-way Intel Xeon (L5630) host and 2 Tesla Fermi 2070 GPUs.
Note: Interactive jobs are not supported on this system. The resource
manager SLURM is used to allocate free GPUs or CPUs for your job.
Usage:
First log in to the LRZ Linux Cluster (see: Login and Security).
Once you've logged into the LRZ Cluster, you can access the GPU login
node lxlogin_gpu via:
ssh lxlogin_gpu
Here's an example jobscript to submit to the GPU cluster using 2 GPUs:
cat slurm-gpu-job.sh
#!/bin/sh
#SBATCH --clusters=gpgpu
#SBATCH --nodelist=lxgp1   # or lxgp2, lxgp3, lxgp4
#SBATCH --gres=gpu:2
#SBATCH --ntasks=2
#SBATCH -e sample_script_err.log
#SBATCH -o sample_script_out.log
#SBATCH --job-name=jobname
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=08:00:00
source /etc/profile.d/modules.sh   # initialize the module system
cd mydir                           # change to your working directory
./myprogram                        # run your GPU application
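Before launching the application, the jobscript can check how many GPUs SLURM actually assigned. This sketch assumes SLURM exports CUDA_VISIBLE_DEVICES for jobs that request --gres=gpu:N; the hard-coded list below is a stand-in for the real environment variable:

```shell
# Sketch: count the GPUs assigned to this job. In a real job you would
# use gpu_list=$CUDA_VISIBLE_DEVICES; a sample value stands in here.
gpu_list="0,1"                      # e.g. what --gres=gpu:2 would yield
IFS=',' read -ra gpus <<< "$gpu_list"
echo "GPUs assigned: ${#gpus[@]}"   # -> GPUs assigned: 2
```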
Next, submit the job using the sbatch command.
sbatch slurm-gpu-job.sh
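sbatch reports the new job's ID on stdout as "Submitted batch job <ID>"; capturing it makes the later squeue and scontrol queries easier to script. A minimal sketch, with a sample line standing in for out=$(sbatch slurm-gpu-job.sh):

```shell
# Sketch: extract the job ID from sbatch's standard output line.
out="Submitted batch job 12345"     # stand-in for: out=$(sbatch slurm-gpu-job.sh)
job_id=${out##* }                   # keep the last whitespace-separated field
echo "$job_id"                      # -> 12345
# scontrol show job "$job_id"       # then inspect it once submitted
```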
To see your job running, type:
squeue --clusters=gpgpu -u your-userID
Use the scontrol command to get detailed information about the job.
scontrol show job job-ID
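scontrol prints the job's details as space-separated Key=Value pairs, so individual fields can be pulled out in the shell. A sketch extracting the JobState field, with a sample string standing in for real scontrol output (in a session: info=$(scontrol show job 12345)):

```shell
# Sketch: pick one Key=Value field out of scontrol's output.
info="JobId=12345 JobName=jobname JobState=RUNNING Partition=gpgpu"
state=$(printf '%s\n' $info | grep '^JobState=')   # one token per line, filter
state_value=${state#JobState=}
echo "$state_value"                                # -> RUNNING
```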
If you have any problems on the GPU-Cluster, please contact HPC support
via https://servicedesk.lrz.de/plainsubmit/
This information is also available on our web server
http://www.lrz-muenchen.de/services/compute/aktuell/ali4601/
Momme Allalen
More information about the aktuell mailing list