Total number of queued jobs/tasks/slots: 74/8,200/10,310
63 users have/had running or queued jobs over the past 7 days, 86 over the past 15 days, and 99 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 06-Mar-2026 18:12:22 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:54.
Warnings
Oversubscribed Jobs
As of Fri Mar 6 18:07:32 EST 2026 (4 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 3816/1167, 74 queued (jobs), 1 extra, showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12257172 paleomix_anc_ot hagemannm +1:14 4 190.5% mThC.q 65-07
12257173 paleomix_anc_ot hagemannm +1:14 4 196.1% mThC.q 64-13
12257174 paleomix_anc_ot hagemannm +1:14 4 193.3% mThC.q 65-14
12258796 combine_files granquistm 02:55 8 214.6% sThC.q 65-03
⇒ Equivalent to 20.4 overused CPUs: 20 CPUs used at 201.8% on average.
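The "overused CPUs" figure can be reproduced from the table above. A minimal sketch (the formula is inferred from the numbers shown, not taken from mk-webpage.pl):

  # (nPEs, cpu%) for the four oversubscribed jobs listed above
  jobs = [(4, 190.5), (4, 196.1), (4, 193.3), (8, 214.6)]

  total_pes = sum(n for n, _ in jobs)                    # 20 CPUs
  avg_cpu = sum(n * pct for n, pct in jobs) / total_pes  # ~201.8% on average
  overused = total_pes * (avg_cpu - 100.0) / 100.0       # ~20.4 overused CPUs
  print(f"Equivalent to {overused:.1f} overused CPUs: "
        f"{total_pes} CPUs used at {avg_cpu:.1f}% on average")

The "underused CPUs" line in the next section mirrors this, with (100 - cpu%) in place of (cpu% - 100).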
Inefficient Jobs
As of Fri Mar 6 18:07:43 EST 2026 (29 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 3814/1165, 74 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +18:08 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +18:04 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +17:07 5 19.7% lThM.q 76-14
12226716 sansibia_spades nelsonjo +8:04 10 14.6% mThM.q 65-06
12248637 sansibia_spades nelsonjo +3:03 10 16.8% mThM.q 64-18
12253537 angsd1_arg.job beckerm +2:08 8 24.9% mThM.q 84-01
12257088 vg_SRR5891597 niez +1:14 32 15.9% mThC.q 65-02
12257089 vg_SRR5891622 niez +1:14 32 28.7% mThC.q 65-07
12257092 vg_SRR5891647 niez +1:14 32 14.6% mThC.q 76-03
(more by niez)
12257371 satsuma willishr +1:03 2 6.4% lTWFM.sq 64-16 1
12257453 make_SFS_i_CI_C beckerm 21:16 8 30.8% mThM.q 93-01
12257454 make_SFS_i_CI_o beckerm 21:14 8 30.0% mThM.q 76-06
(more by beckerm)
12258160 DESI_AION_fits rbottger 04:08 4 20.2% sTgpu.q 50-01
12258775 pre_29 jhora 01:41 32 8.3% sThM.q 75-04
12258776 pre_30 jhora 01:35 32 8.4% sThM.q 75-04
12258778 pre_32 jhora 01:30 32 7.1% sThM.q 76-01
(more by jhora)
12258802 plume-batch qzhu 02:43 10 6.5% lThM.q 75-06
⇒ Equivalent to 511.6 underused CPUs: 603 CPUs used at 15.2% on average.
To see them all, use:
'q+ -ineff -u beckerm' (5)
'q+ -ineff -u jhora' (7)
'q+ -ineff -u niez' (9)
Nodes with Excess Load
As of Fri Mar 6 18:07:57 EST 2026 (2 nodes have a high load, offset=1.5)
                #slots          excess
node    #CPUs    used    load    load
-------------------------------------
65-13      64      16    35.7    19.7 *
93-04      72      22    38.6    16.6 *

Total excess load = 36.3
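The excess-load numbers follow directly from the columns above. A minimal sketch, assuming (as the values suggest) that the excess is simply the load minus the slots in use, and that a node is flagged when that excess exceeds the offset:

  OFFSET = 1.5
  nodes = [("65-13", 64, 16, 35.7),   # (node, #CPUs, slots used, load)
           ("93-04", 72, 22, 38.6)]

  total_excess = 0.0
  for name, ncpus, used, load in nodes:
      excess = load - used            # 19.7 and 16.6
      if excess > OFFSET:
          total_excess += excess
          print(f"{name:>6} {ncpus:6d} {used:6d} {load:7.1f} {excess:7.1f} *")
  print(f"Total excess load = {total_excess:.1f}")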
Waiting Jobs
As of Fri Mar 6 18:07:56 EST 2026
2 jobs waiting for hchong:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12257510 ombro_ges_disc_ hchong 10:43 1 4.0 sThC.q 91781-99991:10
12257511 ombro_ges_disc_ hchong 10:43 1 4.0 sThC.q 100001-109061:10
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=835/840 99.4% for hchong
max_hC_slots_per_user/1 slots=835/840 99.4% for hchong in queue sThC.q
max_mem_res_per_user/1 mem_res=3.262T/9.985T 32.7% for hchong in queue uThC.q
------------------- ------------------------------- ------
15 jobs waiting for jhora (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12259310 poly_17 jhora 00:23 32 60.0 sThM.q
12259311 poly_18 jhora 00:23 32 60.0 sThM.q
12259312 poly_19 jhora 00:23 32 60.0 sThM.q
12259313 poly_20 jhora 00:23 32 60.0 sThM.q
12259314 poly_21 jhora 00:23 32 60.0 sThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=832/840 99.0% for jhora
max_hM_slots_per_user/1 slots=832/840 99.0% for jhora in queue sThM.q
max_mem_res_per_user/2 mem_res=1.523T/8.944T 17.0% for jhora in queue uThM.q
------------------- ------------------------------- ------
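These waits follow from the quota lines above: a job starts only when its request fits within the remaining per-user headroom. A minimal sketch of that admission check (illustrative names and logic, not the scheduler's code):

  def fits(request, used, limit):
      # a request is admitted only if it fits within the remaining headroom
      return used + request <= limit

  # jhora: 832 of 840 slots in use, and each waiting poly_* job asks for 32 slots
  print(fits(32, used=832, limit=840))         # False -> the jobs wait

  # the same check, on memory, explains pappalardop's wait further below:
  # 8.789 TB of 8.944 TB reserved, and the waiting job asks for ~0.293 TB
  print(fits(0.293, used=8.789, limit=8.944))  # False -> memory-bound wait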
54 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12257116 vg_SRR5891948 niez +1:15 32 200.0 mThC.q
12257117 vg_SRR5891949 niez +1:15 32 200.0 mThC.q
12257118 vg_SRR5891952 niez +1:15 32 200.0 mThC.q
12257119 vg_SRR5892012 niez +1:15 32 200.0 mThC.q
12257120 vg_SRR5892013 niez +1:15 32 200.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=832/840 99.0% for niez
max_hC_slots_per_user/2 slots=832/840 99.0% for niez in queue mThC.q
max_mem_res_per_user/1 mem_res=5.078T/9.985T 50.9% for niez in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for pappalardop:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12258735 cdfw2026_bold pappalardop 03:11 1 300.0 mThM.q 32-487:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.789T/8.944T 98.3% for pappalardop in queue uThM.q
max_hM_slots_per_user/2 slots=30/585 5.1% for pappalardop in queue mThM.q
max_slots_per_user/1 slots=30/840 3.6% for pappalardop
------------------- ------------------------------- ------
1 job waiting for richardjm:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12258725 BHL-WebP richardjm 03:25 1 3.0 sThC.q 24080-30000:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=142/840 16.9% for richardjm
max_hC_slots_per_user/1 slots=142/840 16.9% for richardjm in queue sThC.q
max_mem_res_per_user/1 mem_res=426.0G/9.985T 4.2% for richardjm in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for willishr:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12257371 satsuma willishr +1:03 2 lTWFM.sq 2-4:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
wfm_slots_per_user/1 slots=2/2 100.0% for willishr in queue lTWFM.sq
max_slots_per_user/1 slots=2/840 0.2% for willishr
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=81/110 73.6% for *
total_slots/1 slots=3812/5960 64.0% for *
total_mem_res/2 mem_res=13.12T/35.78T 36.7% for * in queue uThM.q
total_mem_res/1 mem_res=11.03T/39.94T 27.6% for * in queue uThC.q
total_gpus/1 GPUS=1/8 12.5% for * in queue sTgpu.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Fri Mar 6 18:07:57 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=11.03T/39.94T 27.6% for * in queue uThC.q
total_mem_res/2 mem_res=13.12T/35.78T 36.7% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can become full because it runs
out of inodes and can no longer keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it is actually full.
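To compare the two percentages on a given file system, here is a minimal sketch using Python's os.statvfs (the mount point /scratch is hypothetical):

  import os

  def capacity(path):
      st = os.statvfs(path)
      pct_space = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks
      pct_inodes = 100.0 * (st.f_files - st.f_ffree) / st.f_files
      return pct_space, pct_inodes

  space, inodes = capacity("/scratch")  # hypothetical mount point
  print(f"space used: {space:.1f}%  inodes used: {inodes:.1f}%")
  if inodes > space:
      print("warning: inode use outpaces space use (many small files)")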