Total number of queued jobs/tasks/slots: 103/207/1,209
75 users have or had running or queued jobs over the past 7 days, 93 over the past 15 days,
and 109 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load,
or memory; view the past load for 7, 15, or 30 days; and highlight a given user by
selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 01-May-2026 13:32:59 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:10.
Warnings
Oversubscribed Jobs
As of Fri May 1 13:27:09 EDT 2026 (93 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 3407/642, 104 queued (jobs), 9 extra, showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
13372684 trim_galore_wgb corderm +2:21 1 144.6% mThC.q 76-05
13536391 test-ja mashby 01:22 2 179.4% sThC.q 84-01 10201
13536391 test-ja mashby 01:22 2 144.2% sThC.q 93-03 10301
13536391 test-ja mashby 01:22 2 148.2% sThC.q 93-04 10801
(more by mashby)
⇒ Equivalent to 95.4 overused CPUs: 185 CPUs used at 151.6% on average.
To see them all use:
'q+ -osub -u mashby' (92)
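The flagging rule quoted above (cpu% > 133% and age > 1h) is simple enough to sketch. Below is a minimal, illustrative Python version; it is not the actual q+ implementation, and the Job fields are assumptions made for the sketch, not a real scheduler API.

from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    user: str
    n_pes: int        # slots/PEs granted to the job
    cpu_pct: float    # measured CPU usage, as % of the granted PEs
    age_hours: float  # wall-clock time since the job started

def is_oversubscribed(job: Job, cpu_threshold: float = 133.0,
                      min_age_hours: float = 1.0) -> bool:
    """True if the job uses markedly more CPU than it reserved."""
    return job.cpu_pct > cpu_threshold and job.age_hours > min_age_hours

# Example: the mashby task above (2 PEs at 179.4%, age 1:22 ~ 1.37 h)
print(is_oversubscribed(Job(13536391, "mashby", 2, 179.4, 1.37)))  # True

The same comparison reversed (cpu% < 33%) yields the inefficient-job list in the next section.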
Inefficient Jobs
As of Fri May 1 13:27:11 EDT 2026 (24 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 3401/637, 104 queued (jobs), 9 extra, showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12804788 IQ_50p_iqtree morrisseyd +29:05 64 19.0% lThC.q 76-03
12804791 IQ_75p_iqtree morrisseyd +29:05 64 25.2% lThC.q 65-27
13369765 Austro184 blaimerbb +4:05 12 16.2% mThC.q 76-11
13369766 Austro184 blaimerbb +4:04 12 16.2% mThC.q 76-09
13369767 Austro184 blaimerbb +4:04 12 16.2% mThC.q 76-06
(more by blaimerbb)
13377415 hatp13_g395m_1 szieba +2:11 60 9.0% lThM.q 84-01
13392202 iqtree50_blende oviedodiegom +2:00 24 32.5% lThC.q 65-23
13431979 sapdescript santosbe 19:01 8 5.1% lThM.q 76-13
13432018 sapdescript santosbe 11:26 8 10.2% lThM.q 76-10
13432029 sapdescript santosbe 07:42 8 20.2% lThM.q 76-14
(more by santosbe)
13439130 CJS_3way_REyr suttonm 22:44 1 7.3% sThC.q 64-14 107
13439130 CJS_3way_REyr suttonm 22:44 1 7.3% sThC.q 64-14 181
13466506 egapx zhangy 15:07 16 25.5% lThM.q 65-09
13515490 run_may01_2026_ szieba 05:18 60 10.3% lThM.q 75-07
13515537 run_may01_2026_ szieba 05:18 60 10.6% lThM.q 65-12
(more by szieba)
13519071 earthaccess_202 ggonzale 04:32 1 8.9% lTIO.sq 64-15
13529437 earlgrey zhangy 02:32 12 11.3% lThC.q 65-14 3
13536959 bears atkinsonga 01:08 1 2.4% lTWFM.sq 64-15
⇒ Equivalent to 496.8 underused CPUs: 584 CPUs used at 14.9% on average.
To see them all use:
'q+ -ineff -u blaimerbb' (5)
'q+ -ineff -u santosbe' (5)
'q+ -ineff -u szieba' (5)
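The "equivalent overused/underused CPUs" summary lines follow from one piece of arithmetic. The sketch below is an assumption about how the report computes them, but it reproduces the figures shown up to rounding of the averages.

def equivalent_cpus(n_cpus: int, avg_cpu_pct: float) -> float:
    """CPUs used minus CPUs reserved, expressed as a CPU count."""
    return n_cpus * abs(avg_cpu_pct - 100.0) / 100.0

print(equivalent_cpus(185, 151.6))  # ~95.5 overused  (report: 95.4)
print(equivalent_cpus(584, 14.9))   # ~497.0 underused (report: 496.8)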
Waiting Jobs
As of Fri May 1 13:27:16 EDT 2026
67 jobs waiting for atkinsonga (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13537110 nf-alignSeqs_(6 atkinsonga 01:07 20 2.0 mThC.q
13537112 nf-alignSeqs_(6 atkinsonga 01:07 20 2.0 mThC.q
13537114 nf-alignSeqs_(6 atkinsonga 01:07 20 2.0 mThC.q
13537115 nf-alignSeqs_(6 atkinsonga 01:07 20 2.0 mThC.q
13537117 nf-alignSeqs_(6 atkinsonga 01:07 20 2.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/2 slots=640/640 100.0% for atkinsonga in queue mThC.q
max_concurrent_jobs_per_u no_concurrent_jobs=1/1 100.0% for atkinsonga in queue lTWFM.sq
max_slots_per_user/1 slots=643/840 76.5% for atkinsonga
wfm_slots_per_user/1 slots=1/2 50.0% for atkinsonga in queue lTWFM.sq
max_mem_res_per_user/1 mem_res=64.00G/9.985T 0.6% for atkinsonga in queue uThC.q
max_mem_res_per_user/2 mem_res=32.00G/8.944T 0.3% for atkinsonga in queue uThM.q
max_hM_slots_per_user/2 slots=2/585 0.3% for atkinsonga in queue mThM.q
------------------- ------------------------------- ------
1 job waiting for mashby:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13536391 test-ja mashby 01:22 2 sThC.q 90001-93201:100
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=840/840 100.0% for mashby
max_hC_slots_per_user/1 slots=840/840 100.0% for mashby in queue sThC.q
max_mem_res_per_user/1 mem_res=840.0G/9.985T 8.2% for mashby in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for mghahrem:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13369237 A10_Map_Creatio mghahrem +6:19 1 sTgpu.q 72-143:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus_per_user/1 GPUS=1/4 25.0% for mghahrem in queue qgpu.iq
max_gpus_per_user/1 GPUS=1/4 25.0% for mghahrem in queue sTgpu.q
max_slots_per_user/1 slots=1/840 0.1% for mghahrem
------------------- ------------------------------- ------
32 jobs waiting for santosbe (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13432120 sapdescript santosbe +1:00 8 960.0 lThM.q
13432123 sapdescript santosbe +1:00 8 960.0 lThM.q
13432125 sapdescript santosbe +1:00 8 960.0 lThM.q
13432127 sapdescript santosbe +1:00 8 960.0 lThM.q
13432130 sapdescript santosbe +1:00 8 960.0 lThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.438T/8.944T 94.3% for santosbe in queue uThM.q
max_hM_slots_per_user/3 slots=72/390 18.5% for santosbe in queue lThM.q
max_slots_per_user/1 slots=72/840 8.6% for santosbe
------------------- ------------------------------- ------
3 jobs waiting for sylvain:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13542205 q_dofit4fx2.172 sylvain 00:00 1 sThC.q
13542206 q_dofit4fx2.172 sylvain 00:00 1 sThC.q
13542208 q_dofit4fx2.171 sylvain 00:00 1 sThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=42/840 5.0% for sylvain
max_hC_slots_per_user/1 slots=42/840 5.0% for sylvain in queue sThC.q
max_mem_res_per_user/1 mem_res=84.00G/9.985T 0.8% for sylvain in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=100/110 90.9% for *
total_slots/1 slots=3387/5960 56.8% for *
total_mem_res/2 mem_res=14.58T/35.78T 40.8% for * in queue uThM.q
total_gpus/1 GPUS=1/8 12.5% for * in queue sTgpu.q
total_mem_res/1 mem_res=4.212T/39.94T 10.5% for * in queue uThC.q
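All of the quota tables on this page share the resource=value/limit format. As a minimal sketch, here is how the %used column can be recomputed from that field in Python, assuming 1024-based unit prefixes (consistent with the listing below, where a 10225G limit appears as 9.985T); the helper names are illustrative, not part of q+ or the scheduler.

import re

_UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def to_number(s: str) -> float:
    """Convert '14.58T' or '5960' to a plain number."""
    value, unit = re.fullmatch(r"([\d.]+)([KMGT]?)", s).groups()
    return float(value) * _UNITS.get(unit, 1)

def pct_used(field: str) -> float:
    """'slots=3387/5960' -> 56.8 (percent used)."""
    value, limit = field.split("=")[1].split("/")
    return 100.0 * to_number(value) / to_number(limit)

print(f"{pct_used('slots=3387/5960'):.1f}%")        # 56.8%
print(f"{pct_used('mem_res=14.58T/35.78T'):.1f}%")  # 40.7% (report: 40.8%, from unrounded values)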
Memory Usage
[Plot: Reserved Memory, All High-Memory Queues; plotted length selectable from the drop-down menu]
Current Memory Quota Usage
As of Fri May 1 13:27:17 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=4.212T/39.94T 10.5% for * in queue uThC.q
total_mem_res/2 mem_res=14.58T/35.78T 40.8% for * in queue uThM.q
[Plot: Current Memory Usage by Compute Node, High Memory Nodes Only]
Quota Settings
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=640
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
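Note that every applicable rule must hold at once: a request waits if it would exceed either its queue's per-user cap or the overall per-user cap. A minimal sketch using the hiCPU limits above (the function and its arguments are illustrative, not part of the scheduler):

HICPU_SLOTS_PER_USER = {"sThC.q": 840, "mThC.q": 640, "lThC.q": 431, "uThC.q": 143}
MAX_SLOTS_PER_USER = 840  # across all queues

def request_fits(queue: str, used_in_queue: int, used_total: int,
                 requested: int) -> bool:
    """True if `requested` more slots stay within both caps."""
    return (used_in_queue + requested <= HICPU_SLOTS_PER_USER[queue]
            and used_total + requested <= MAX_SLOTS_PER_USER)

# atkinsonga above sits at 640/640 slots in mThC.q and 643 in total,
# so the 20-slot nf-alignSeqs jobs must wait:
print(request_fits("mThC.q", 640, 643, 20))  # False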
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as
plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes
and become effectively full even though free space remains, because it can no longer
keep track of new files.
The % of inodes used should be lower than or comparable to the % of disk space used.
If it is much larger, the disk can become unusable before it is actually full.
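A quick way to compare the two figures on any mounted file system, as a sketch: it is POSIX-only, the path is an example, and the 10% margin used for "comparable" is an arbitrary illustrative threshold.

import os

def space_vs_inodes(path: str = "/") -> tuple[float, float]:
    """Return (% disk space used, % inodes used) for the file system at path."""
    st = os.statvfs(path)
    pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks)   # % space used
    pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)   # % inodes used
    return pct_space, pct_inodes

space, inodes = space_vs_inodes("/")
status = "OK" if inodes <= space * 1.1 else "inode-heavy: many small files"
print(f"space {space:.1f}%, inodes {inodes:.1f}% -> {status}")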