Total number of queued jobs/tasks/slots: 41/41/328
66 users have/had running or queued jobs over the past 7 days, 87 over the past 15 days, and 107 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user
by selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 27-Mar-2026 23:32:51 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:53.
Warnings
Oversubscribed Jobs
As of Fri Mar 27 23:27:04 EDT 2026 (1 oversubscribed job)
Total running (PEs/jobs) = 1965/102, 41 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID     name             user        age  nPEs    cpu%  queue   node   taskID
12785330  bmod_lognormrel  hinckleya  +1:11     8  224.5%  lThC.q  64-14
⇒ Equivalent to 10.0 overused CPUs: 8 CPUs used at 224.5% on average.
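The overused-CPU figure is the job's measured CPU use beyond its reservation. A minimal sketch of the arithmetic behind these summary lines (not the page generator's actual code; the same formula, sign flipped, gives the underused-CPU figure for the inefficient jobs below):

    # overused CPUs = nPEs * (cpu%/100 - 1); negate for underused CPUs
    def overused_cpus(n_pes, cpu_pct):
        return n_pes * (cpu_pct / 100.0 - 1.0)

    print(round(overused_cpus(8, 224.5), 1))    # 10.0, as reported above
    print(round(-overused_cpus(356, 12.9), 1))  # ~310, the inefficient-jobs total below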
Inefficient Jobs
As of Fri Mar 27 23:27:05 EDT 2026 (18 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1965/102, 41 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID     name             user         age   nPEs    cpu%  queue    node   taskID
12190422  stairwayAZ.job   byerlyp    +39:13     5   20.0%  lThM.q   64-17
12195552  stairwayNE.job   byerlyp    +39:10     5   19.9%  lThM.q   76-04
12198833  stairwayCAR.job  byerlyp    +38:12     5   19.8%  lThM.q   76-14
12771414  earlgrey         zhangy      +7:09    24   14.9%  lThC.q   64-07       1
12771415  earlgrey         zhangy      +6:14    24   18.0%  lThC.q   65-24      18
12771415  earlgrey         zhangy      +6:02    24   16.5%  lThC.q   65-21      23
(more by zhangy)
12783074  sapdescript      santosbe    +2:19     8   11.3%  lThM.q   76-12
12790202  snpcalling_call  morrisseyd  15:16    48   14.4%  mThC.q   65-18
12794696  call_ngs_GL      kistlerl    13:25    32    5.9%  mThC.q   75-06
12794835  montagem-longin  santossam   11:09    20    3.2%  mThM.q   93-05
12794836  montagem-longin  santossam   11:09    20    4.5%  mThM.q   65-29
12794838  montagem-longin  santossam   11:06    20    3.7%  mThM.q   65-15
12794896  earthaccess_202  ggonzale    03:27     1    7.7%  lTIO.sq  64-16
⇒ Equivalent to 310.0 underused CPUs: 356 CPUs used at 12.9% on average.
To see them all, use:
'q+ -ineff -u zhangy' (8 jobs)
Nodes with Excess Load
As of Fri Mar 27 23:27:07 EDT 2026 (3 nodes have a high load, offset=1.5)
node   #CPUs  #slots used  load  excess load
--------------------------------------------
64-14     40            8  25.5         17.5 *
76-02     64            0   2.0          2.0 *
93-06     96           24  27.2          3.2 *
Total excess load = 22.7
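The excess load is the node's load beyond the slots actually in use, and a node is listed when that excess exceeds the offset (1.5 here). A minimal sketch of the arithmetic, assuming that interpretation (it matches the three rows above):

    # excess = load - slots_used; nodes are flagged when excess > offset
    def excess_load(load, slots_used, offset=1.5):
        excess = load - slots_used
        return excess if excess > offset else 0.0

    nodes = [(25.5, 8), (2.0, 0), (27.2, 24)]                  # (load, #slots used)
    print(round(sum(excess_load(l, s) for l, s in nodes), 1))  # 22.7, as reported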
Waiting Jobs
As of Fri Mar 27 23:27:06 EDT 2026
12 jobs waiting for ggonzale (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12794911 GET_OML1BRVG_20 ggonzale 00:27 1 lTIO.sq
12794912 GET_OMCLDO2_202 ggonzale 00:27 1 lTIO.sq
12794913 GET_OMTO3_20260 ggonzale 00:27 1 lTIO.sq
12794914 GET_OMTO3d_2026 ggonzale 00:27 1 lTIO.sq
12794915 GET_OMPS_NPP_NM ggonzale 00:27 1 lTIO.sq
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_concurrent_jobs_per_u no_concurrent_jobs=2/2 100.0% for ggonzale in queue lTIO.sq
io_slots_per_user/1 slots=2/8 25.0% for ggonzale in queue lTIO.sq
max_slots_per_user/1 slots=2/840 0.2% for ggonzale
------------------- ------------------------------- ------
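In each of these quota blocks, the rule closest to 100% is the one actually holding the jobs back: here ggonzale's no_concurrent_jobs=2/2 in lTIO.sq is saturated, so the remaining lTIO.sq jobs wait even though the slot quotas are barely used. Likewise, granquistm (below) has reserved more memory than the high-memory-queue limit allows (102.2%), which queues the mThM.q jobs.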
24 jobs waiting for granquistm (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12794788 USNM1739427_spa granquistm 12:36 12 240.0 mThM.q
12794789 USNM1739428_spa granquistm 12:36 12 240.0 mThM.q
12794790 USNM1739429_spa granquistm 12:36 12 240.0 mThM.q
12794791 USNM1739430_spa granquistm 12:36 12 240.0 mThM.q
12794792 USNM1739431_spa granquistm 12:36 12 240.0 mThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.141T/8.944T 102.2% for granquistm in queue uThM.q
max_hM_slots_per_user/2 slots=468/585 80.0% for granquistm in queue mThM.q
max_slots_per_user/1 slots=468/840 55.7% for granquistm
------------------- ------------------------------- ------
2 jobs waiting for jmcclung:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12794890 psc_plot jmcclung 05:05 1 lThC.q
12794891 psc_plot jmcclung 05:05 1 lThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/3 slots=130/431 30.2% for jmcclung in queue lThC.q
max_slots_per_user/1 slots=130/840 15.5% for jmcclung
max_mem_res_per_user/1 mem_res=4.000G/9.985T 0.0% for jmcclung in queue uThC.q
------------------- ------------------------------- ------
2 jobs waiting for taom:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12794899 restart_align_h taom 02:51 1
12794900 restart_align_f taom 02:51 1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=16/840 1.9% for taom
max_hC_slots_per_user/2 slots=16/840 1.9% for taom in queue mThC.q
max_mem_res_per_user/1 mem_res=4.000G/9.985T 0.0% for taom in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for zhangy:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12782849 earlgrey zhangy +3:12 24 192.0 lThC.q 4,5
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/3 slots=528/431 122.5% for zhangy in queue lThC.q
max_slots_per_user/1 slots=528/840 62.9% for zhangy
max_mem_res_per_user/1 mem_res=4.125T/9.985T 41.3% for zhangy in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_slots/1 slots=1968/5960 33.0% for *
total_mem_res/2 mem_res=10.93T/35.78T 30.6% for * in queue uThM.q
blast2GO/1 slots=24/110 21.8% for *
total_mem_res/1 mem_res=7.832T/39.94T 19.6% for * in queue uThC.q
total_gpus/1 GPUS=1/8 12.5% for * in queue mTgpu.q
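The %used column is simply value/limit expressed as a percentage. A minimal sketch of the conversion (a hypothetical helper, not part of the page generator; it only handles unitless resources such as slots, not memory values like 10.93T):

    import re

    # split "resource=value/limit" and express value/limit as a percentage
    def pct_used(entry):
        resource, value, limit = re.match(r"(\w+)=(\d+)/(\d+)", entry).groups()
        return resource, round(100.0 * int(value) / int(limit), 1)

    print(pct_used("slots=1968/5960"))   # ('slots', 33.0), as in the first row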
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Fri Mar 27 23:27:07 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=7.832T/39.94T 19.6% for * in queue uThC.q
total_mem_res/2 mem_res=10.93T/35.78T 30.6% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
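These limits are the scheduler's resource quota sets. On a Grid Engine cluster you can check your own standing against them with 'qquota -u <username>', optionally restricted to one queue with '-q <queue>'; the resource=value/limit figures shown above correspond to what it reports.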
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to keep track of new files, even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is full.
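On Linux you can compare the two figures directly: 'df -h <mount>' reports the % of disk space used (Use%) and 'df -i <mount>' the % of inodes used (IUse%) for the same file system.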