Total number of queued jobs/tasks/slots: 18/18/288
64 users have or had running or queued jobs over the past 7 days, 90 over the past 15 days, and 114 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Saturday, 06-Sep-2025 06:31:58 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 0:52.
Warnings
Oversubscribed Jobs
As of Sat Sep 6 06:27:02 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Sat Sep 6 06:27:03 EDT 2025 (25 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1152/82, 18 queued jobs; showing only inefficient jobs (cpu% < 33% and age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
10232855 gene_IQ_50p_iqt morrisseyd +3:15 64 1.6% lThC.q 76-09
10242434 metawrap_long_p vohsens 15:45 40 21.9% mThC.q 75-06
9371868 exabayes-et2-50 gouldingt +31:14 32 12.5% lThC.q 64-06
10241928 assembly-notmp castanedaricos 17:06 26 16.5% lThM.q 76-12
10242429 assembly2-notmp castanedaricos 16:21 26 12.2% lThM.q 76-04
10232804 iqtree_75_tubi nelsonjo +3:16 20 5.0% lThC.q 64-14
10235112 xadm1 uribeje +2:21 20 5.0% lThM.q 76-06
10240916 assembly1 peresph +1:16 20 4.1% mThM.q 76-10
10242547 spades_5 santossam 14:08 20 8.0% mThM.q 76-03
10242911 Step3_trinity bourkeb 01:42 16 28.8% mThM.q 75-05
10230718 rev_bayes_braco jassoj +4:09 12 8.3% uThC.q 75-07
10241799 assembly2 peresph 19:59 12 9.9% mThM.q 64-18
10163105 vcf2fasta carrionj +11:17 4 24.9% mThC.q 76-03 12
10163105 vcf2fasta carrionj +11:17 4 24.9% mThC.q 76-09 15
10163105 vcf2fasta carrionj +11:17 4 24.9% mThC.q 76-06 19
10242432 spadescript zhangy 16:05 4 10.5% lThM.q 64-18
10242433 spadescript zhangy 16:05 4 22.2% lThM.q 75-02
(more by carrionj)
10241802 mitobim_loop.jo wirshingh 19:52 3 32.6% mThC.q 75-05
10235626 raxml coellogarridoa +2:20 1 4.2% sThC.q 76-13 89
10235626 raxml coellogarridoa +2:20 1 4.2% sThC.q 76-13 93
10238777 test2 lingof +2:11 1 0.1% uThM.q 76-09
⇒ Equivalent to 308.1 underused CPUs: 346 CPUs used at 10.9% on average.
To see them all, use: 'q+ -ineff -u carrionj' (7)
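The underused-CPU figure can be read as capacity lost to inefficiency; a minimal check, assuming it is roughly the CPUs in use times their idle fraction (the report's per-job sum may round slightly differently):

    # Sketch, assuming underused ~ CPUs_in_use * (1 - average cpu fraction):
    awk 'BEGIN { printf "underused ~ %.1f CPUs\n", 346 * (1 - 0.109) }'
    # prints "underused ~ 308.3 CPUs"; the report's per-job sum gives 308.1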
Nodes with Excess Load
As of Sat Sep 6 06:27:04 EDT 2025 (5 nodes have a high load, offset=1.5)
               #slots          excess
node    #CPUs   used    load    load
-------------------------------------
65-02      64      0     4.2     4.2 *
65-04      64      0    38.1    38.1 *
65-30      64      0    48.5    48.5 *
75-05     128     19    32.9    13.9 *
76-08     128     64    82.6    18.6 *
Total excess load = 123.3
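The excess-load column looks like the load in excess of the slots in use, with a node flagged when that excess exceeds the offset. Checking the 76-08 row under that assumption:

    # Sketch, assuming excess = load - slots_used, flagged when excess > offset:
    awk -v load=82.6 -v used=64 -v offset=1.5 \
        'BEGIN { e = load - used; if (e > offset) printf "excess load = %.1f\n", e }'
    # prints "excess load = 18.6", matching the 76-08 row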
2 for bourkeb
2 for castanedaric
2 for coellogarrid
1 for hinckleya
20 for macdonaldk
2 for peresph
1 for santossam
1 for uribeje
3 for xuj
2 for zhangy
As of Sat Sep 6 06:27:04 EDT 2025
18 jobs waiting for jenkinskel (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
10241028 Full_Matrix_Bay jenkinskel +1:16 16 uThC.q
10241061 Full_Matrix_Bay jenkinskel +1:16 16 uThC.q
10241065 Full_Matrix_Bay jenkinskel +1:16 16 uThC.q
10241076 Full_Matrix_Bay jenkinskel +1:16 16 uThC.q
10241084 Full_Matrix_Bay jenkinskel +1:16 16 uThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/4 slots=136/143 95.1% for jenkinskel in queue uThC.q
max_slots_per_user/1 slots=136/840 16.2% for jenkinskel
max_mem_res_per_user/1 mem_res=16.00G/9.985T 0.2% for jenkinskel in queue uThC.q
------------------- ------------------------------- ------
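The %used column is presumably just value over limit; checking the first rule above under that assumption:

    # Sketch, assuming %used = 100 * value / limit:
    awk 'BEGIN { printf "%.1f%%\n", 100 * 136 / 143 }'
    # prints "95.1%", matching slots=136/143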
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=10.77T/35.78T 30.1% for * in queue uThM.q
blast2GO/1 slots=29/110 26.4% for *
total_slots/1 slots=1161/5960 19.5% for *
total_mem_res/1 mem_res=1.243T/39.94T 3.1% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues [plot; selectable length]
Current Memory Quota Usage
As of Sat Sep 6 06:27:04 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=1.243T/39.94T 3.1% for * in queue uThC.q
total_mem_res/2 mem_res=10.77T/35.78T 30.1% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.4 135.3 420.1 - 368.1 83.3 - 32 3 3.0 - 29 29.0
compute-64-18 - 503.4 14.6 170.1 - 488.8 333.3 - 32 16 2.2 - 16 29.8
compute-65-02 - 503.5 x x - node down - 64 x x - x x
compute-65-03 - 503.5 x x - node down - 64 x x - x x
compute-65-04 - 503.5 x x - node down - 64 x x - x x
compute-65-05 - 503.5 x x - node down - 64 x x - x x
compute-65-06 - 503.5 x x - node down - 64 x x - x x
compute-65-07 - 503.5 x x - node down - 64 x x - x x
compute-65-09 - 503.5 x x - node down - 64 x x - x x
compute-65-10 - 503.5 x x - node down - 64 x x - x x
compute-65-11 - 503.5 x x - node down - 64 x x - x x
compute-65-12 - 503.5 x x - node down - 64 x x - x x
compute-65-13 - 503.5 x x - node down - 64 x x - x x
compute-65-14 - 503.5 x x - node down - 64 x x - x x
compute-65-15 - 503.5 x x - node down - 64 x x - x x
compute-65-16 - 503.5 x x - node down - 64 x x - x x
compute-65-17 - 503.5 x x - node down - 64 x x - x x
compute-65-18 - 503.5 x x - node down - 64 x x - x x
compute-65-19 - 503.5 x x - node down - 64 x x - x x
compute-65-20 - 503.5 19.5 288.0 - 484.0 215.5 - 64 12 12.0 - 52 52.0
compute-65-21 - 503.5 15.3 288.0 - 488.2 215.5 - 64 12 12.7 - 52 51.3
compute-65-22 - 503.5 146.7 420.0 - 356.8 83.5 - 64 3 3.2 - 61 60.8
compute-65-23 - 503.5 18.5 288.0 - 485.0 215.5 - 64 12 13.1 - 52 50.9
compute-65-24 - 503.5 21.6 290.0 - 481.9 213.5 - 64 17 17.1 - 47 46.9
compute-65-25 - 503.5 24.4 290.0 - 479.1 213.5 - 64 17 17.1 - 47 46.9
compute-65-26 - 503.5 17.5 288.0 - 486.0 215.5 - 64 12 12.3 - 52 51.6
compute-65-27 - 503.5 15.2 288.0 - 488.3 215.5 - 64 12 13.0 - 52 51.0
compute-65-28 - 503.5 14.8 288.0 - 488.7 215.5 - 64 12 12.0 - 52 52.0
compute-65-29 - 503.5 13.4 0.0 - 490.1 503.5 - 64 0 0.1 - 64 63.9
compute-65-30 - 503.5 x x - node down - 64 x x - x x
compute-75-01 - 1007.5 35.4 120.1 - 972.1 887.4 - 128 40 40.0 - 88 88.0
compute-75-02 - 1007.5 27.1 850.0 - 980.4 157.5 - 128 20 12.1 - 108 115.9
compute-75-03 - 755.5 22.1 14.0 - 733.4 741.5 - 128 19 17.0 - 109 111.0
compute-75-04 - 755.5 17.2 0.0 - 738.3 755.5 - 128 0 0.1 - 128 128.0
compute-75-05 - 755.5 19.4 524.0 - 736.1 231.5 - 128 19 32.9 - 109 95.1
compute-75-06 - 755.5 21.3 608.0 - 734.2 147.5 - 128 52 20.9 - 76 107.1
compute-75-07 - 755.5 21.3 110.0 - 734.2 645.5 - 128 31 18.0 - 97 110.0
compute-76-03 - 1007.4 34.4 144.5 - 973.0 862.9 - 128 45 23.2 - 83 104.8
compute-76-04 - 1007.4 30.4 548.0 - 977.0 459.4 - 128 38 13.1 - 90 114.9
compute-76-05 - 1007.4 42.1 122.0 - 965.3 885.4 - 128 45 45.1 - 83 82.9
compute-76-06 - 1007.4 62.6 710.0 - 944.8 297.4 - 128 56 30.6 - 72 97.4
compute-76-07 - 1007.4 47.6 490.0 - 959.8 517.4 - 128 33 30.6 - 95 97.5
compute-76-08 - 1007.4 40.0 128.0 - 967.4 879.4 - 128 64 82.6 - 64 45.4
compute-76-09 - 1007.4 59.1 596.0 - 948.3 411.4 - 128 81 15.1 - 47 112.9
compute-76-10 - 1007.4 25.6 488.0 - 981.8 519.4 - 128 32 13.2 - 96 114.8
compute-76-11 - 1007.4 23.2 14.0 - 984.2 993.4 - 128 19 17.1 - 109 110.9
compute-76-12 - 1007.4 154.6 680.0 - 852.8 327.4 - 128 29 5.2 - 99 122.8
compute-76-13 - 1007.4 x x - node down - 128 x x - x x
compute-76-14 - 1007.4 27.8 576.0 - 979.6 431.4 - 128 24 25.0 - 104 103.0
compute-84-01 - 881.1 85.5 290.0 - 795.6 591.1 - 112 28 29.0 - 84 83.0
compute-93-01 - 503.8 30.9 72.0 - 472.9 431.8 - 64 24 20.3 - 40 43.7
compute-93-02 - 755.6 40.6 290.0 - 715.0 465.6 - 72 32 30.9 - 40 41.1
compute-93-03 - 755.6 43.7 2.0 - 711.9 753.6 - 72 20 18.3 - 52 53.7
compute-93-04 - 755.6 20.0 576.0 - 735.6 179.6 - 72 24 25.0 - 48 47.0
======= ===== ====== ==== ==== =====
Totals 26567.4 1388.7 11270.8 3400 903 683.1
==> 5.2% 42.4% ==> 26.6% 20.1%
Most unreserved/unused memory (993.4/984.2GB) is on compute-76-11 with 109/110.9 slots/CPUs free/unused.
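The unused and unresd columns appear to be simple differences from the available memory (unused = avail - used, unresd = avail - resd), with the percentage row under Totals giving the corresponding ratios. A quick check of the compute-64-17 row, under that assumption:

    # Sketch, assuming unused = avail - used and unresd = avail - resd (GB):
    awk 'BEGIN { printf "unused=%.1f unresd=%.1f\n", 503.4 - 135.3, 503.4 - 420.1 }'
    # prints "unused=368.1 unresd=83.3", matching the compute-64-17 row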
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 21.2 -0.0 - 1490.2 1511.4 - 192 0 0.1 - 192 191.9
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 20.1 0.0 - 1996.2 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 18.9 0.0 - 3005.0 3023.9 - 56 0 0.0 - 56 56.0
======= ===== ====== ==== ==== =====
Totals 6551.6 60.2 0.0 344 0 0.1
==> 0.9% 0.0% ==> 0.0% 0.0%
Most unreserved/unused memory (3023.9/3005.0GB) is on compute-93-06 with 56/56.0 slots/CPUs free/unused.
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit slots/user in GPU queues
users {*} queues {sTgpu.q} to slots=40
users {*} queues {mTgpu.q} to slots=20
users {*} queues {lTgpu.q} to slots=10
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for lTWFM.sq queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
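These rules have the form of Grid Engine resource quota sets. Assuming that scheduler, the authoritative list can be printed with qconf; each "Limit ..." entry above then corresponds to one quota-set block, e.g. this hypothetical one for the slots/user limit:

    # Show the active resource quota sets (standard Grid Engine command):
    qconf -srqs
    # A hypothetical block matching "Limit slots/user for all queues":
    #   {
    #      name         slots_per_user
    #      description  Limit slots/user for all queues
    #      enabled      TRUE
    #      limit        users {*} to slots=840
    #   }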
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become full, i.e. unable to keep track of new files, even though free space remains.
The % of inodes used should be lower than or comparable to the % of disk space used; if it is much larger, the disk can become unusable before it fills up.
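To compare the two percentages on a given file system, the standard df flags are enough (the path below is just a placeholder):

    df -h /some/disk    # Use% column: % of disk space used (placeholder path)
    df -i /some/disk    # IUse% column: % of inodes used (placeholder path)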
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.