Total number of queued jobs/tasks/slots: 56/3,337/4,712
56 users have or had running or queued jobs over the past 7 days, 75 over the past 15 days,
and 95 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
Using the drop-down menus, you can sort the current cluster snapshot by name, number of
CPUs, usage, load, or memory; view the past load over 7, 15, or 30 days; and highlight a
given user.
This page was last updated on Wednesday, 18-Feb-2026 14:52:29 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:40.
Warnings
Oversubscribed Jobs
As of Wed Feb 18 14:47:47 EST 2026 (8 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 2555/1017, 56 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12198995 euk-SNP_array johnsonsj 20:21 2 135.2% mThM.q 65-23 11
12198995 euk-SNP_array johnsonsj 11:05 2 138.0% mThM.q 65-24 19
12198995 euk-SNP_array johnsonsj 10:12 2 150.2% mThM.q 65-26 20
12199769 xPSMC5p uribeje 03:34 4 497.7% uThM.q 65-18
12201097 xPSMC5p uribeje 01:21 4 492.1% uThM.q 65-06
12201113 xPSMC5p uribeje 01:20 4 495.1% uThM.q 64-17
(more by uribeje)
⇒ Equivalent to 81.6 overused CPUs: 26 CPUs used at 413.8% on average.
To see them all, use:
'q+ -osub -u uribeje' (5 jobs)
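The "overused CPUs" summary can be recomputed from the nPEs and cpu% columns. Here is a
minimal sketch, assuming the excess is summed as nPEs × (cpu%/100 - 1) over the flagged
jobs (only the six jobs listed above are included, so the totals come out below the full
26-CPU/81.6 figure, which also covers the two hidden uribeje jobs):

  # (nPEs, cpu%) for each oversubscribed job shown above
  jobs = [(2, 135.2), (2, 138.0), (2, 150.2), (4, 497.7), (4, 492.1), (4, 495.1)]
  overused = sum(n * (pct / 100.0 - 1.0) for n, pct in jobs)
  total_pes = sum(n for n, _ in jobs)
  avg_pct = sum(n * pct for n, pct in jobs) / total_pes
  print(f"{overused:.1f} overused CPUs: {total_pes} CPUs at {avg_pct:.1f}% on average")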
Inefficient Jobs
As of Wed Feb 18 14:47:56 EST 2026 (8 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 2554/1016, 56 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11801118 angsd_strict uribeje +29:21 8 13.3% uThM.q 65-29
12190422 stairwayAZ.job byerlyp +2:05 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +2:01 5 20.0% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +1:04 5 19.9% lThM.q 76-14
12199275 xPSMC4 uribeje 10:57 8 12.5% uThM.q 65-10
12199282 xPSMC3 uribeje 10:38 8 12.5% uThM.q 65-19
12199775 manacus_populat hoffmannmeyerg 03:28 8 12.4% mThC.q 93-01
12201275 callGL kistlerl 01:10 16 14.0% mThC.q 76-13
⇒ Equivalent to 53.7 underused CPUs: 63 CPUs used at 14.7% on average.
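The underused-CPU figure is the mirror image of the oversubscription one: summing
nPEs × (1 - cpu%/100) over the eight jobs above gives 63 × (1 - 0.147) ≈ 53.7, matching
the summary line.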
Nodes with Excess Load
As of Wed Feb 18 14:49:12 EST 2026 (7 nodes have a high load, offset=1.5)
                #slots          excess
node    #CPUs    used    load    load
--------------------------------------
64-17      32       9    21.0    12.0
65-06      64      21    37.1    16.1
65-18      64      12    23.2    11.2
65-26      64       6    22.8    16.8
76-03     192      64    88.9    24.9
76-04     192      91   121.4    30.4
84-01     112      24    37.7    13.7
--------------------------------------
Total excess load = 125.1
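The numbers above are consistent with excess load = load - slots used, with a node flagged
when that excess exceeds the offset (1.5). A minimal sketch under that assumption:

  # node: (slots used, load); flag when load - used > offset (assumed rule)
  nodes = {"64-17": (9, 21.0), "65-06": (21, 37.1), "65-18": (12, 23.2),
           "65-26": (6, 22.8), "76-03": (64, 88.9), "76-04": (91, 121.4),
           "84-01": (24, 37.7)}
  offset = 1.5
  excess = {n: load - used for n, (used, load) in nodes.items() if load - used > offset}
  print(f"Total excess load = {sum(excess.values()):.1f}")  # 125.1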
Waiting Jobs
As of Wed Feb 18 14:48:57 EST 2026 (memReqd is in GB)
42 jobs waiting for beckerm (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12199723 nf-alignSeqs_(5 beckerm 04:16 20 2.0 mThC.q
12199724 nf-alignSeqs_(5 beckerm 04:16 20 2.0 mThC.q
12199725 nf-alignSeqs_(5 beckerm 04:15 20 2.0 mThC.q
12199726 nf-alignSeqs_(5 beckerm 04:15 20 2.0 mThC.q
12199727 nf-alignSeqs_(5 beckerm 04:15 20 2.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=836/840 99.5% for beckerm
max_hC_slots_per_user/2 slots=780/840 92.9% for beckerm in queue mThC.q
max_mem_res_per_user/2 mem_res=960.0G/8.944T 10.5% for beckerm in queue uThM.q
max_hM_slots_per_user/2 slots=56/585 9.6% for beckerm in queue mThM.q
max_mem_res_per_user/1 mem_res=78.00G/9.985T 0.8% for beckerm in queue uThC.q
------------------- ------------------------------- ------
11 jobs waiting for campanam (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12201318 final_ASC campanam 00:29 48 400.0 mThM.q
12201319 final_ASC campanam 00:29 48 400.0 mThM.q
12201320 final_ASC campanam 00:29 48 400.0 mThM.q
12201321 final_ASC campanam 00:29 48 400.0 mThM.q
12201322 final_ASC campanam 00:29 48 400.0 mThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/2 slots=576/585 98.5% for campanam in queue mThM.q
max_slots_per_user/1 slots=576/840 68.6% for campanam
max_mem_res_per_user/2 mem_res=4.688T/8.944T 52.4% for campanam in queue uThM.q
------------------- ------------------------------- ------
1 job waiting for friedmans2:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12199000 align_bwa_grsp friedmans2 +1:01 4 40.0 mThM.q 37-39:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/2 slots=20/585 3.4% for friedmans2 in queue mThM.q
max_slots_per_user/1 slots=20/840 2.4% for friedmans2
max_mem_res_per_user/2 mem_res=200.0G/8.944T 2.2% for friedmans2 in queue uThM.q
------------------- ------------------------------- ------
1 job waiting for johnsonsj:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12198995 euk-SNP_array johnsonsj +1:01 2 40.0 mThM.q 21-71:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=440.0G/8.944T 4.8% for johnsonsj in queue uThM.q
max_hM_slots_per_user/2 slots=22/585 3.8% for johnsonsj in queue mThM.q
max_slots_per_user/1 slots=22/840 2.6% for johnsonsj
------------------- ------------------------------- ------
1 job waiting for richardjm:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12199366 BHL-WebP richardjm 06:02 1 2.0 sThC.q 16791-20000:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=296/840 35.2% for richardjm
max_hC_slots_per_user/1 slots=296/840 35.2% for richardjm in queue sThC.q
max_mem_res_per_user/1 mem_res=592.0G/9.985T 5.8% for richardjm in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for suttonm:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12201362 fit_int-fixed suttonm 00:00 1 100.0 sThM.q 93-164:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.797T/8.944T 98.4% for suttonm in queue uThM.q
max_slots_per_user/1 slots=94/840 11.2% for suttonm
max_hM_slots_per_user/1 slots=94/840 11.2% for suttonm in queue sThM.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=17.96T/35.78T 50.2% for * in queue uThM.q
total_gpus/1 GPUS=4/8 50.0% for * in queue mTgpu.q
total_slots/1 slots=2635/5960 44.2% for *
blast2GO/1 slots=24/110 21.8% for *
total_mem_res/1 mem_res=5.023T/39.94T 12.6% for * in queue uThC.q
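The %used column is just value/limit once the unit suffixes are normalized. A small sketch
of that conversion (assuming G and T are the usual base-1024 memory suffixes):

  # hypothetical parser for "mem_res=17.96T/35.78T"-style entries
  def to_gib(s):
      return float(s[:-1]) * {"G": 1.0, "T": 1024.0}[s[-1]]

  value, limit = "17.96T", "35.78T"
  print(f"{100.0 * to_gib(value) / to_gib(limit):.1f}%")  # 50.2%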
Memory Usage
[Plot: Reserved Memory, All High-Memory Queues; selectable time span]
Current Memory Quota Usage
As of Wed Feb 18 14:49:14 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=5.186T/39.94T 13.0% for * in queue uThC.q
total_mem_res/2 mem_res=18.00T/35.78T 50.3% for * in queue uThM.q
[Plot: Current Memory Usage by Compute Node, High-Memory Nodes Only]
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow (WFM) queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
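When several of these rules match the same job, the effective cap is the most restrictive
one. A sketch of the resulting headroom for a user submitting to mThM.q (limits taken from
the rules above; treating the minimum of all matching rules as the binding limit is an
assumption about the scheduler):

  # per-user limits that match a mThM.q job, from the rules above
  matching = {"max_slots_per_user": 840, "max_hM_slots_per_user": 585}
  used = 576  # e.g. campanam's current mThM.q slots from the quota tables
  print(f"remaining slots: {min(matching.values()) - used}")  # 9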
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots
of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and
become unable to keep track of new files, so it becomes full even though space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is full.
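To compare the two percentages on a live file system, statvfs reports both block and inode
counts. A minimal check (the mount point is an example):

  import os

  st = os.statvfs("/pool")  # example mount point
  pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks)
  pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)
  if pct_inodes > pct_space:
      print(f"inode use ({pct_inodes:.0f}%) is outpacing space use ({pct_space:.0f}%)")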