Update: Linux Cluster: Maintenance and Configuration Changes Jan 28 - Feb 1

LRZ aktuell, published at lrz.de
Fri Feb 1 17:20:47 CET 2013


 Changes to this message: Partially returned to user operation
Update (Feb 1 17:15): Except for the MPP segment, the cluster systems
 have been returned to user operation. Possible configuration problems
 on the MPP cluster still need investigation; please bear with us.
 
 -----------------------------------------------------------------------
 
 Maintenance windows
 
 Due to extended hardware and software maintenance, the MPP, MAC, and
 8-way Myrinet clusters will be unavailable for user operation between
 Jan 28, 8:00 am and Feb 1. This includes the login nodes lxlogin1-2.
 (Note: until Jan 31, 8:00 am, the lxlogin3.lrz.de system will remain
 available for submission of serial jobs.)
 
 All other cluster segments (the serial cluster including all hosted
 systems, the sgi ICE and UV systems, and the 32-way Myrinet systems)
 will be unavailable between Jan 31, 8:00 am and Feb 1.
 
 Configuration changes
 
   * The default MPI environment on the MPP cluster will be changed from
     Parastation MPI to Intel MPI. However, the mpi.parastation module
     will remain available for legacy use until the end of 2013. On the
     sgi ICE and UV systems, the sgi MPI (mpi.mpt) will remain the
     default. (A sketch of the module commands for switching back to
     Parastation MPI follows below this list.)
   * The 8-way Myrinet Cluster will be retired from parallel processing,
     and the nodes will be added to the serial processing pool. This
     implies that the partition "myri_std" in the SLURM cell "myri" will
     become unavailable.
   * For the serial queues, SLURM fair-share scheduling will be
     introduced. For the parallel queues, a combination of fair-share
     scheduling and favoring of large jobs will be activated. This is to
     prevent a single user from monopolizing cluster segments for
     extended periods when many jobs are queued. (A sketch of how to
     inspect scheduling priorities follows below this list.)
   * New storage systems will be introduced for the WORK (== PROJECT)
     and SCRATCH file systems. Please note that LRZ will only migrate
     WORK data to the new file system; data in SCRATCH will not be
     migrated. However, the old SCRATCH file system will remain
     available as a separate read-only mount on the login nodes until
     the end of March, 2013. Migration of data can then be done via
     commands like
      cd $SCRATCH_LEGACY
      cp -a my_scratch_subdirectory $SCRATCH
     The environment variable $SCRATCH_LEGACY will remain defined and
     point to the legacy scratch area until the end of March, 2013.
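
 To expand on the SCRATCH migration commands above: the directory name
 my_scratch_subdirectory is only a placeholder for your own data, and
 the final verification step is an optional suggestion, not an LRZ
 requirement.

      # change into the legacy (read-only) scratch area
      cd $SCRATCH_LEGACY
      # -a preserves permissions, time stamps and symbolic links
      cp -a my_scratch_subdirectory $SCRATCH
      # optional sanity check: report any files that differ
      diff -r my_scratch_subdirectory $SCRATCH/my_scratch_subdirectory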
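
 Regarding the new Intel MPI default on the MPP cluster: users who need
 to stay on Parastation MPI for the time being can switch modules along
 the following lines. The name of the Intel MPI module (mpi.intel below)
 is an assumption; please check the output of "module avail" on your
 system.

      # list the MPI modules actually installed
      module avail mpi
      # switch from the (assumed) new default back to the legacy MPI
      module unload mpi.intel
      module load mpi.parastation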
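
 Regarding the new fair-share scheduling: SLURM's standard commands
 sshare and sprio report fair-share usage and job priority factors.
 Whether and how these are enabled depends on the local configuration,
 so the following is an illustration only.

      # show the fair-share standing of your own account
      sshare -u $USER
      # show the priority factors (including fair share) of pending jobs
      sprio -l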
 
 Further configuration changes may occur; they will be added to the
 above list as they are applied.


 This information is also available on our web server
 http://www.lrz-muenchen.de/services/compute/aktuell/ali4501/

 Reinhold Bader



More information about the aktuell mailing list