
GCS v9.0.2

ernst-bablick released this on 20 Jan 19:26

Enhanced NVIDIA GPU Support with qgpu

  • With the release of patch 9.0.2, the qgpu command has been added to simplify workload management for GPU resources. It allows administrators to manage GPU resources more efficiently and is available for Linux amd64 and Linux arm64. qgpu is a multi-purpose command: it can act as a load sensor that reports the characteristics and metrics of NVIDIA GPU devices, which requires NVIDIA DCGM to be installed on the GPU nodes. It also works as a prolog and epilog for jobs to set up the NVIDIA runtime and environment variables. Furthermore, it sets up per-job GPU accounting, so that GPU usage and power consumption are automatically reported in the accounting records and are visible in the standard qacct -j output. It supports all NVIDIA GPUs that are supported by NVIDIA's DCGM, including the latest Grace Hopper superchips. For more information about qgpu, please refer to the Admin Guide. A configuration sketch follows below.
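
    As an illustration, the sketch below shows how qgpu might be wired into a cluster using the standard load_sensor, prolog and epilog configuration parameters. The installation path is hypothetical and the exact qgpu options are elided; consult the Admin Guide for the actual invocation.

    # Hypothetical install path; the real path depends on your installation.
    QGPU=/opt/gcs/bin/lx-arm64/qgpu

    # Register qgpu as a load sensor in the host configuration of a GPU node
    # (opens an editor; add a load_sensor line as shown in the comment):
    qconf -mconf gpu-node01
    #   load_sensor $QGPU ...

    # Attach qgpu as prolog/epilog in the queue configuration so that the
    # NVIDIA runtime is prepared and per-job GPU accounting is recorded:
    qconf -mq gpu.q
    #   prolog  $QGPU ...
    #   epilog  $QGPU ...

    # GPU usage and power consumption then show up in the job accounting:
    qacct -j <job_id>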

(Available in Gridware Cluster Scheduler only)

Automatic Session Management

  • Patch 9.0.2 introduces the new concept of automatic sessions. Sessions allow the Gridware Cluster Scheduler system to synchronize its internal data stores, so that client commands are guaranteed to see the most recent data. Session management is enabled by default, but can be disabled by setting the DISABLE_AUTOMATIC_SESSIONS parameter to true in the qmaster_params of the cluster configuration.

    The default for the qmaster_params parameter DISABLE_SECONDARY_DS_READER is now also false. This means that the reader thread pool is enabled by default and no longer needs to be enabled manually, as was the case in patch 9.0.1.
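
    As a sketch, both parameters can be adjusted in the global cluster configuration with qconf; the exact set of qmaster_params in your installation may differ.

    # Inspect the current qmaster_params in the global configuration:
    qconf -sconf | grep qmaster_params

    # Edit the global configuration (opens an editor); for example, to
    # switch off automatic sessions, set:
    qconf -mconf
    #   qmaster_params ... DISABLE_AUTOMATIC_SESSIONS=true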

    The reader thread pool in combination with sessions ensures that commands that trigger changes within the cluster (write-requests), such as submitting a job, modifying a queue or changing a complex value, are executed and that the outcome of those commands is guaranteed to be visible to the user who initiated the change. Commands that only read data (read-requests), such as qstat, qhost or qconf -s..., that are triggered by the same user always return the most recent data, even though all read-requests in the system are executed completely in parallel with the other Gridware Cluster Scheduler core components. This additional synchronization ensures that the data is consistent for the user with each read-request, but it might, on the other hand, slow down individual read-requests.

    Assume the following script:

    #!/bin/sh

    # Submit a job; -terse makes qsub print only the job ID.
    job_id=$(qsub -terse ...)

    # Display the job that was just submitted.
    qstat -j "$job_id"
    

    Without sessions activated, it is not guaranteed that the qstat -j command will see the job that was submitted immediately before. With sessions enabled, the qstat -j command will always see the job, although the command will be slightly slower compared to the same scenario without sessions.

    Sessions eliminate the need to poll for information about an action until it becomes visible in the system. Unlike in other workload management systems, session management in Gridware Cluster Scheduler is automatic: once enabled globally, there is no need to manually create or destroy sessions.

  • The sge_qmaster monitoring has been improved. Beginning with this patch, the output section for reader and worker threads will show the following numbers:

    ... OTHER (ql:0,rql:0,wrql:0) ...
    

    All three values show internal request queue lengths. Usually they are all 0, but in high-load situations, or when sessions are enabled, they can increase:

    • ql shows the queue length of the worker threads. This request queue contains requests that require a write lock on the main data store.
    • rql shows the queue length of the reader threads. The queue contains requests that require a read lock on the secondary reader data store.
    • wrql shows the queue length of the waiting reader threads. All requests that cannot be handled by the reader threads immediately are stored in this list until the secondary reader data store is ready to handle them. If sessions are disabled, this number will always be 0.

    Increasing values are uncritical as long as the numbers decrease again. If the numbers increase continuously, the system is under high load and performance might be impacted.
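
    As a sketch of how to observe these values, thread monitoring can be enabled via the MONITOR_TIME parameter in qmaster_params, after which the statistics appear in the sge_qmaster messages file. The spool path below is an assumption and depends on your installation and cell name.

    # Enable periodic monitoring output, e.g. every 10 seconds:
    qconf -mconf
    #   qmaster_params ... MONITOR_TIME=0:0:10

    # The per-thread statistics, including the queue lengths, are then
    # written to the qmaster messages file (path assumes the default cell):
    grep "OTHER (ql:" $SGE_ROOT/default/spool/qmaster/messages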

    (Available in Open Cluster Scheduler and Gridware Cluster Scheduler)

Departments, Users and Jobs - Department View

With the release of patch 9.0.2, we have removed the restriction that users can only be assigned to one department. Users can now be assigned to multiple departments. This is particularly useful in environments where users are members of multiple departments in a company and access to resources is based on department affiliation.

Jobs must still be assigned to a single department. This means that a user who is a member of multiple departments can submit jobs to any of those departments by specifying the department in the job submission command using the -dept switch, as shown below. If a user does not specify a particular department, sge_qmaster assigns the job to the first department found.
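
For illustration, a submission might look as follows; the department name and script are hypothetical, and the department must be one the submitting user belongs to:

    # Submit a job explicitly into one of the user's departments:
    qsub -dept dept_a job_script.sh

    # Without -dept, sge_qmaster picks the first department found:
    qsub job_script.sh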

Using qstat and qhost, the output can be filtered based on access lists and departments using the -sdv switch (see the example after this list). When this switch is used, the following applies:

  • Only the hosts/queues to which the user has access are displayed.
  • Jobs are only displayed if they belong to the executing user or to a user who is a member of one of the departments that the executing user also belongs to.
  • Child objects are only displayed if the user also has access to the corresponding parent object. This means that jobs are not displayed if the queue or host on which they run no longer grants access, and queues are not displayed if their host is no longer accessible.
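
For example, the department view is requested by adding the -sdv switch to the familiar commands:

    # Restrict the output to the hosts, queues and jobs visible to the
    # calling user based on access lists and department membership:
    qstat -sdv
    qhost -sdv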

Please note that this may result in situations where users are no longer able to see their own jobs, if the access permissions are changed for a user who has jobs running in the system.

Users with the manager role always see all hosts, queues and jobs, regardless of whether the -sdv switch is used.

Please note that this specific functionality is still in beta. It is only available in Gridware Cluster Scheduler, and the implementation will change with upcoming patch releases.