Total number of queued jobs/tasks/slots: 214/58,773/59,261
69 users have/had running or queued jobs over the past 7 days, 83 over the past 15 days,
and 100 over the past 30 days.
Click on the tabs to view each section, on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load,
or memory, and view the past load for 7, 15, or 30 days, as well as highlight a given
user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Wednesday, 04-Feb-2026 21:02:22 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:50.
Warnings
Oversubscribed Jobs
As of Wed Feb 4 20:57:29 EST 2026 (0 oversubscribed jobs)
Inefficient Jobs
As of Wed Feb 4 20:57:34 EST 2026 (20 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1051/552, 214 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11801118 angsd_strict uribeje +16:03 8 13.6% uThM.q 65-29
11807104 vitis_pggb_mamb niez +5:10 110 31.9% lThC.q 76-11
11811177 m_11_sentinel_r xuj +1:02 3 32.9% mThM.q 75-04
11811221 m_11_sentinel_r xuj 18:11 3 32.7% mThM.q 84-01
11811302 m_11_sentinel_r xuj 10:11 3 32.9% mThM.q 75-03
(more by xuj)
11812420 Step4_de_novo_a bourkeb 15:33 16 31.6% mThM.q 76-08
11812692 callSNPs kistlerl 10:42 32 3.9% mThC.q 75-03
11812791 stairway.job byerlyp 08:20 20 5.0% lThM.q 75-06
11813186 vqvaegpt_paper mperez 05:30 32 27.0% lTgpu.q 50-01
11813270 mapping_tox beckerm 03:45 8 13.6% mThC.q 64-06
11813271 mapping_tox beckerm 03:44 8 13.4% mThC.q 65-28
11813274 mapping_tox beckerm 03:44 8 13.5% mThC.q 64-04
(more by beckerm)
11813363 sp_optical_v1_5 rdi_tella 01:58 20 6.1% mTgpu.q 79-01
⇒ Equivalent to 235.4 underused CPUs: 302 CPUs used at 22.1% on average.
To see them all use:
'q+ -ineff -u beckerm' (5)
'q+ -ineff -u xuj' (8)
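The "underused CPUs" figure above is consistent with weighting each flagged job's CPU
efficiency by its number of PEs: 302 CPUs running at 22.1% average efficiency leave about
302 x (1 - 0.221) = 235.4 CPUs idle. A minimal sketch of that computation in Python (the
formula is inferred from the numbers above, not documented, and the job list is a
hypothetical subset of the table):

    # (nPEs, cpu%) pairs for flagged jobs; a hypothetical subset of the table above.
    jobs = [(8, 13.6), (110, 31.9), (3, 32.9), (32, 3.9), (20, 5.0)]

    total_pes = sum(n for n, _ in jobs)
    avg_eff = sum(n * p for n, p in jobs) / total_pes / 100.0  # PE-weighted efficiency
    underused = total_pes * (1.0 - avg_eff)
    print(f"{underused:.1f} underused CPUs: {total_pes} CPUs used at {avg_eff:.1%} on average")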
Nodes with Excess Load
As of Wed Feb 4 20:57:41 EST 2026 (3 nodes have a high load, offset=1.5)
node   #CPUs  #slots used   load  excess load
---------------------------------------------
65-06     64           0   22.0   22.0 *
76-04    192          16   25.2    9.2 *
76-08    128          16   38.5   22.5 *
Total excess load = 53.8
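From the columns above, a node's excess load appears to be its load average minus the
number of slots in use, and a node is flagged when that excess exceeds the offset (here
1.5). A minimal sketch of that check in Python, assuming this is indeed the rule:

    # node -> (slots in use, load average); values from the table above.
    nodes = {"65-06": (0, 22.0), "76-04": (16, 25.2), "76-08": (16, 38.5)}
    offset = 1.5

    total_excess = 0.0
    for name, (used, load) in nodes.items():
        excess = load - used
        if excess > offset:          # node counts as having a high load
            total_excess += excess
            print(f"{name}: excess load {excess:.1f}")
    print(f"Total excess load = {total_excess:.1f}")  # 53.7 here; 53.8 on the page (rounding)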
Queued Jobs
As of Wed Feb 4 20:57:41 EST 2026
6 jobs waiting for hchong (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11813139 ombro_sts_20260 hchong 06:00 1 7.0 sThC.q 50501-60000:1
11813140 ombro_sts_20260 hchong 06:00 1 7.0 sThC.q 60001-70000:1
11813141 ombro_sts_20260 hchong 06:00 1 7.0 sThC.q 70001-80000:1
11813142 ombro_sts_20260 hchong 06:00 1 7.0 sThC.q 80001-90000:1
11813143 ombro_sts_20260 hchong 06:00 1 7.0 sThC.q 90001-100000:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=500/840 59.5% for hchong
max_hC_slots_per_user/1 slots=500/840 59.5% for hchong in queue sThC.q
max_mem_res_per_user/1 mem_res=3.418T/9.985T 34.2% for hchong in queue uThC.q
------------------- ------------------------------- ------
8 jobs waiting for nevesk (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11813419 spadescript nevesk 00:47 12 600.0 lThM.q
11813420 spadescript nevesk 00:47 12 600.0 lThM.q
11813424 spadescript nevesk 00:32 12 600.0 lThM.q
11813425 spadescript nevesk 00:32 12 600.0 lThM.q
11813426 spadescript nevesk 00:32 12 600.0 lThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=2.344T/8.944T 26.2% for nevesk in queue uThM.q
max_hM_slots_per_user/3 slots=48/390 12.3% for nevesk in queue lThM.q
max_slots_per_user/1 slots=48/840 5.7% for nevesk
------------------- ------------------------------- ------
200 jobs waiting for xuj (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11811363 m_11_sentinel_r xuj +1:14 3 450.0 mThM.q
11811364 m_11_sentinel_r xuj +1:14 3 450.0 mThM.q
11811365 m_11_sentinel_r xuj +1:14 3 450.0 mThM.q
11811366 m_11_sentinel_r xuj +1:14 3 450.0 mThM.q
11811367 m_11_sentinel_r xuj +1:14 3 450.0 mThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.229T/8.944T 103.2% for xuj in queue uThM.q
max_concurrent_jobs_per_u no_concurrent_jobs=1/4 25.0% for xuj in queue qrsh.iq
max_hM_slots_per_user/2 slots=63/585 10.8% for xuj in queue mThM.q
max_slots_per_user/1 slots=64/840 7.6% for xuj
qrsh_u_slots/1 slots=1/16 6.2% for xuj in queue qrsh.iq
------------------- ------------------------------- ------
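The first rule shows why most of xuj's jobs wait: each running m_11_sentinel_r job
reserves 450 GB over 3 slots, so the 63 slots in mThM.q correspond to 21 jobs reserving
21 x 450 G = 9450 G = 9.229 T, already above the 8.944 T (9159 G) per-user
reserved-memory limit for the high-memory queues listed further down this page. A quick
check of that arithmetic in Python:

    # Per-user reserved-memory arithmetic; values from the tables on this page.
    jobs = 63 // 3                  # 63 slots in mThM.q at 3 slots per job
    reserved_g = jobs * 450.0       # each job reserves 450 GB
    limit_g = 9159.0                # max_mem_res_per_user for the hM queues
    print(f"mem_res={reserved_g/1024:.3f}T/{limit_g/1024:.3f}T "
          f"({reserved_g/limit_g:.1%})")   # mem_res=9.229T/8.944T (103.2%)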
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=13.44T/35.78T 37.6% for * in queue uThM.q
blast2GO/1 slots=28/110 25.5% for *
total_slots/1 slots=1051/5960 17.6% for *
total_mem_res/1 mem_res=5.180T/39.94T 13.0% for * in queue uThC.q
total_gpus/1 GPUS=1/8 12.5% for * in queue mTgpu.q
total_gpus/1 GPUS=1/8 12.5% for * in queue lTgpu.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Wed Feb 4 20:57:41 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=5.180T/39.94T 13.0% for * in queue uThC.q
total_mem_res/2 mem_res=13.44T/35.78T 37.6% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
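These limits are Grid Engine resource quota sets (viewable with qconf -srqs). For
illustration, the last rule above would look roughly as follows in native RQS syntax;
the rule-set name is taken from the quota tables earlier on this page, and the enabled
field is an assumption:

    {
       name         max_slots_per_user
       description  Limit slots/user for all queues
       enabled      TRUE
       limit        users {*} to slots=840
    }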
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as
plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes
and become unable to create new files, even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is full.
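One way to monitor this is to compare the two percentages directly; a minimal sketch
using Python's standard os.statvfs (the mount point is a placeholder):

    import os

    # Compare % of space used vs % of inodes used for one file system.
    st = os.statvfs("/scratch")    # placeholder mount point
    pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks)
    pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)
    print(f"space used: {pct_space:.1f}%, inodes used: {pct_inodes:.1f}%")
    if pct_inodes > pct_space:
        print("warning: inode use is outpacing space use (many small files)")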