Total number of queued jobs/tasks/slots: 114/43,307/51,493
87 users have/had running or queued jobs over the past 7 days, 95 over the past 15 days, and 120 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Monday, 15-Dec-2025 20:12:21 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:02.
Warnings
Oversubscribed Jobs
As of Mon Dec 15 20:02:53 EST 2025 (2 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 4057/1795, 114 queued (jobs), 1 extra, showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11769410 bmod_optimised_ hinckleya 13:39 8 200.0% lThC.q 65-06
11769411 bmod_lognormrel hinckleya 13:36 8 162.8% lThC.q 64-03
⇒ Equivalent to 13.0 overused CPUs: 16 CPUs used at 181.4% on average.
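For reference, the "overused CPUs" figure is the sum, over the jobs listed, of the CPU time consumed beyond each job's reserved slots. A minimal Python sketch that reproduces the numbers above (not the actual mk-webpage.pl code):

    # Overuse per job: nPEs * (cpu% - 100) / 100, i.e. CPU cycles consumed
    # beyond the slots the job reserved.
    jobs = [(8, 200.0), (8, 162.8)]        # (nPEs, cpu%) from the two rows above
    overused = sum(n * (pct - 100.0) / 100.0 for n, pct in jobs)
    total = sum(n for n, _ in jobs)
    avg_pct = sum(n * pct for n, pct in jobs) / total
    print(f"{overused:.1f} overused CPUs: {total} CPUs at {avg_pct:.1f}% on average")
    # -> 13.0 overused CPUs: 16 CPUs at 181.4% on average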
Inefficient Jobs
As of Mon Dec 15 20:03:12 EST 2025 (57 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 4061/1799, 114 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11481231 AssembleBarcode breusingc +3:07 16 2.6% mThM.q 65-14 243
11481232 metaspades breusingc +5:01 16 6.5% mThM.q 76-13 107
11596723 ombro_ges_disc_ hchong +3:06 1 1.1% sThC.q 65-14 36809
11596723 ombro_ges_disc_ hchong +3:05 1 0.8% sThC.q 65-14 36916
11596723 ombro_ges_disc_ hchong +3:05 1 0.7% sThC.q 65-14 36938
(more by hchong)
11759798 gbz_recall_chr0 niez +2:22 32 12.3% mThC.q 84-01
11759799 gbz_recall_chr0 niez +2:22 32 6.8% mThC.q 93-03
11759800 gbz_recall_chr0 niez +2:22 32 6.6% mThC.q 93-02
(more by niez)
11769612 kraken2_build_s vohsens 06:11 40 1.5% sThC.q 76-05
11769617 BRZ015_mitofind mcfaddenc 05:43 12 31.3% mThC.q 64-06
11769621 BRZ026_mitofind mcfaddenc 05:43 12 30.9% mThC.q 65-28
11769623 BRZ030_mitofind mcfaddenc 05:43 12 30.5% mThC.q 65-09
(more by mcfaddenc)
11769683 Delphinidae_no_ mcgowenm 05:07 6 16.5% lThM.q 65-09
11769860 kraken2_build_s vohsens 03:57 40 0.1% mThC.q 75-02
11769997 kraken2_build_o vohsens 01:07 40 4.7% sThC.q 75-04
⇒ Equivalent to 1002.7 underused CPUs: 1182 CPUs used at 15.2% on average.
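The "underused CPUs" figure is the complementary sum: the idle share of each flagged job's reserved slots. A minimal sketch using the totals above (again, not the actual reporting code):

    # A job is flagged inefficient when cpu% < 33 and it has run for more than 1 hour.
    def underused_cpus(n_pes, cpu_pct):
        # Idle share of the reserved slots: nPEs * (1 - cpu%/100)
        return n_pes * (1.0 - cpu_pct / 100.0)

    # With 1182 flagged CPUs running at 15.2% on average:
    print(f"~{underused_cpus(1182, 15.2):.1f} underused CPUs")   # ~1002.3
    # (the report sums job by job before rounding, hence the 1002.7 above)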
To see them all use:
'q+ -ineff -u hchong' (8)
'q+ -ineff -u mcfaddenc' (18)
'q+ -ineff -u niez' (25)
Nodes with Excess Load
As of Mon Dec 15 20:03:38 EST 2025 (7 nodes have a high load, offset=1.5)
                #slots          excess
node     #CPUs    used    load    load
---------------------------------------
64-03       40      16    26.0    10.0 *
64-04       40      24    25.7     1.7 *
64-17       32      17    19.0     2.0 *
65-06       64      31    36.9     5.9 *
65-30       64      36    38.9     2.9 *
76-02      192       0   170.8   170.8 *
76-04      192     112   118.9     6.9 *
---------------------------------------
Total excess load = 200.1
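The excess load is the reported load minus the number of slots in use; a node is listed when that excess is greater than the offset (1.5 here). A small sketch reproducing two of the rows above:

    def excess_load(load, slots_used, offset=1.5):
        # Load not accounted for by scheduled slots; None if within the offset.
        excess = load - slots_used
        return excess if excess > offset else None

    print(excess_load(26.0, 16))     # node 64-03 -> 10.0
    print(excess_load(118.9, 112))   # node 76-04 -> 6.9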
Waiting Jobs and Quota Usage per User
As of Mon Dec 15 20:03:37 EST 2025
2 jobs waiting for breusingc:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11481231 AssembleBarcode breusingc +6:08 16 256.0 mThM.q 566-572:1
11481232 metaspades breusingc +6:08 16 0.0 mThM.q 240-572:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/2 slots=480/585 82.1% for breusingc in queue mThM.q
max_slots_per_user/1 slots=480/840 57.1% for breusingc
max_mem_res_per_user/2 mem_res=2.500T/8.944T 28.0% for breusingc in queue uThM.q
------------------- ------------------------------- ------
3 jobs waiting for hchong:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11596728 ombro_ges_disc_ hchong +5:10 1 6.0 sThC.q 88653-90000:1
11596730 ombro_ges_disc_ hchong +5:10 1 6.0 sThC.q 90001-100000:1
11596731 ombro_ges_disc_ hchong +5:10 1 6.0 sThC.q 100001-108597:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=838/840 99.8% for hchong
max_hC_slots_per_user/1 slots=838/840 99.8% for hchong in queue sThC.q
max_mem_res_per_user/1 mem_res=4.910T/9.985T 49.2% for hchong in queue uThC.q
------------------- ------------------------------- ------
3 jobs waiting for jyee:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11769984 OB181074 jyee 03:02 1 sThC.q 4301-9045:1
11769986 OB181115 jyee 03:02 1 sThC.q 69-9045:1
11769988 OB181254 jyee 03:02 1 sThC.q 11-9045:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=826/840 98.3% for jyee
max_hC_slots_per_user/1 slots=826/840 98.3% for jyee in queue sThC.q
max_mem_res_per_user/1 mem_res=1.613T/9.985T 16.2% for jyee in queue uThC.q
------------------- ------------------------------- ------
10 jobs waiting for nevesk (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11769583 spadescript nevesk 08:03 12 600.0 lThM.q
11769584 spadescript nevesk 08:03 12 600.0 lThM.q
11769588 spadescript nevesk 07:18 12 600.0 lThM.q
11769589 spadescript nevesk 07:18 12 600.0 lThM.q
11769590 spadescript nevesk 07:18 12 600.0 lThM.q
none running, so no quota usage to report.
96 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11760612 HX_Vitis_heynea niez +2:21 32 100.0 mTgpu.q
11760613 HX_Vitis_heynea niez +2:21 32 100.0 mTgpu.q
11760614 HX_Vitis_montic niez +2:21 32 100.0 mTgpu.q
11760615 HX_Vitis_montic niez +2:21 32 100.0 mTgpu.q
11760616 HX_Vitis_mustan niez +2:21 32 100.0 mTgpu.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=832/840 99.0% for niez
max_mem_res_per_user/1 mem_res=9.766T/9.985T 97.8% for niez in queue uThC.q
max_hC_slots_per_user/2 slots=800/840 95.2% for niez in queue mThC.q
max_gpus_per_user/2 GPUS=1/3 33.3% for niez in queue mTgpu.q
total_gpus_per_user/1 GPUS=1/4 25.0% for niez in queue qgpu.iq
------------------- ------------------------------- ------
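These per-user quota rows are usually why the jobs above are waiting: once a user's most constrained rule is near 100%, new tasks can start only as running ones finish. A rough headroom sketch using hchong's slot rules from the table above (illustrative only, not how the scheduler itself evaluates quotas):

    # %used is simply value/limit; the tightest rule caps how many more
    # single-slot tasks can start.
    rules = {
        "max_slots_per_user":    (838, 840),   # (slots used, limit)
        "max_hC_slots_per_user": (838, 840),
    }
    headroom = min(limit - used for used, limit in rules.values())
    print(f"at most {headroom} more 1-slot tasks can start")      # -> 2
    for name, (used, limit) in rules.items():
        print(f"{name}: {100 * used / limit:.1f}% used")          # -> 99.8%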
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=99/110 90.0% for *
total_slots/1 slots=4049/5960 67.9% for *
total_mem_res/1 mem_res=18.01T/39.94T 45.1% for * in queue uThC.q
total_mem_res/2 mem_res=7.580T/35.78T 21.2% for * in queue uThM.q
total_gpus/1 GPUS=1/8 12.5% for * in queue mTgpu.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Mon Dec 15 20:03:39 EST 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=18.01T/39.94T 45.1% for * in queue uThC.q
total_mem_res/2 mem_res=7.580T/35.78T 21.2% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to create new files even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it is actually full.
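As a rough illustration (not part of the cluster's monitoring scripts), the two percentages can be compared with Python's os.statvfs; the mount point below is only a placeholder, and the call assumes the file system reports block and inode counts:

    import os

    def space_vs_inodes(path="/scratch"):          # placeholder mount point
        st = os.statvfs(path)
        pct_space  = 100.0 * (1 - st.f_bavail / st.f_blocks)   # % disk space used
        pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)    # % inodes used
        return pct_space, pct_inodes

    space, inodes = space_vs_inodes()
    print(f"space used: {space:.1f}%   inodes used: {inodes:.1f}%")
    if inodes > space:
        print("many small files: inode use is outpacing disk space use")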