Total number of queued jobs/tasks/slots: 3/619/2,891
63 users have or had running or queued jobs over the past 7 days, 89 over the past 15 days,
and 113 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Thursday, 24-Jul-2025 20:42:16 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 1:09.
Warnings
Oversubscribed Jobs
As of Thu Jul 24 20:37:07 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Thu Jul 24 20:37:09 EDT 2025 (26 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1491/533, 3 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
8271748 beast_partition jassoj +15:07 40 21.7% lThC.q 64-13
9012393 uce_spades_conc beckerm +5:10 40 2.3% mThM.q 75-01
9012565 uce_spades_conc beckerm +4:00 40 2.6% mThM.q 65-07
9013348 beast_partition jassoj +2:06 40 22.7% lThC.q 76-10
9013349 beast_unpartiti jassoj +2:06 40 23.1% lThC.q 65-12
9011217 metawrap_long_p vohsens +7:00 16 11.0% mThM.q 65-10 20
9011217 metawrap_long_p vohsens +6:23 16 10.4% mThM.q 65-09 26
9011217 metawrap_long_p vohsens +5:23 16 10.6% mThM.q 65-14 148
(more by vohsens)
9054348 gllvm4 gonzalezm2 09:50 15 9.0% uThM.q 65-05
9058610 phypart1171_a1 wangt2 06:38 10 12.4% mThM.q 93-04
9013381 xvcf1 uribeje +2:04 8 12.4% uThM.q 76-14
9013384 xvcf2 uribeje +2:04 8 12.4% uThM.q 84-01
9013386 xvcf3 uribeje +2:04 8 12.5% uThM.q 93-02
(more by uribeje)
9013131 nf-blastUnalign hydem2 +2:16 5 22.5% lThM.q 76-06
9013134 nf-blastUnalign hydem2 +2:15 5 32.7% lThM.q 76-08
8137841 Job_Step5 perezm4 +56:04 4 26.2% lThM.q 64-17
9013122 albicollis hydem2 +3:01 1 0.1% lTWFM.sq 64-16
⇒ Equivalent to 350.8 underused CPUs: 408 CPUs used at 14.0% on average.
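In other words, the allocated slots are mostly idle: 408 CPUs busy only 14.0% of the time on average corresponds to roughly 408 x (1 - 0.140) ≈ 351 CPUs' worth of unused capacity.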
To see them all use:
'q+ -ineff -u uribeje' (9)
'q+ -ineff -u vohsens' (6)
1 for bakerd
2 for beckerm
1 for bourkeb
29 for cerqueirat
23 for collinsa
1 for gonzalezm2
1 for hinckleya
2 for hydem2
1 for macdonaldk
1 for perezm4
8 for uribeje
1 for vagac
6 for vohsens
1 for wangt2
1 for wirshingh
As of Thu Jul 24 20:37:12 EDT 2025
1 job waiting for collinsa:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9015692 spades_array.jo collinsa +1:02 16 400.0 mThM.q 117-267:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.984T/8.944T 100.4% for collinsa in queue uThM.q
max_hM_slots_per_user/2 slots=368/585 62.9% for collinsa in queue mThM.q
max_slots_per_user/1 slots=368/840 43.8% for collinsa
------------------- ------------------------------- ------
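Note: collinsa's reserved memory (mem_res) is already at 100.4% of the per-user limit for the high-memory queues (9159 GB ≈ 8.944 TB, see the limits listed below), so this 400 GB/task array most likely waits until some of the running tasks release their reservations.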
1 job waiting for pappalardop:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9062688 soakblend_bold pappalardop 03:26 1 300.0 mThM.q 53-519:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=7.617T/8.944T 85.2% for pappalardop in queue uThM.q
max_hM_slots_per_user/2 slots=26/585 4.4% for pappalardop in queue mThM.q
max_slots_per_user/1 slots=26/840 3.1% for pappalardop
------------------- ------------------------------- ------
1 job waiting for uribeje:
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9013401 xvcf14 uribeje +2:04 8 400.0 uThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/4 slots=72/73 98.6% for uribeje in queue uThM.q
max_mem_res_per_user/2 mem_res=3.516T/8.944T 39.3% for uribeje in queue uThM.q
max_slots_per_user/1 slots=72/840 8.6% for uribeje
------------------- ------------------------------- ------
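Note: uribeje is at 72 of the 73-slot per-user limit for uThM.q (98.6%); the waiting 8-slot job would exceed that limit (72 + 8 = 80 > 73), so it most likely waits for one of the running uThM.q jobs to finish.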
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=26.47T/35.78T 74.0% for * in queue uThM.q
blast2GO/1 slots=49/110 44.5% for *
total_gpus/1 num_gpu=3/8 37.5% for * in queue mTgpu.q
total_slots/1 slots=1489/5960 25.0% for *
total_mem_res/1 mem_res=3.473T/39.94T 8.7% for * in queue uThC.q
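These tables follow Grid Engine's resource quota reporting. Assuming the standard qquota command is available on the login nodes (an assumption, not shown above), you can check your own quota usage with, e.g.:
  qquota -u $USER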
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Thu Jul 24 20:37:12 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=3.473T/39.94T 8.7% for * in queue uThC.q
total_mem_res/2 mem_res=26.47T/35.78T 74.0% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
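The limit lines above are Grid Engine resource quota rules. As an illustrative sketch only (assuming a standard resource quota set, not copied from this cluster's actual configuration), the per-user slot limit reported as max_slots_per_user above could be defined in qconf -srqs output as:
  {
     name         max_slots_per_user
     description  Limit slots/user for all queues
     enabled      TRUE
     limit        users {*} to slots=840
  }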
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and
become unable to create new files even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it is full.
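To compare the two figures on a given file system, the standard df utility reports both (the path below is a placeholder):
  df -h /path/to/disk   # % of disk space used (Use%)
  df -i /path/to/disk   # % of inodes used (IUse%)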
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.