Total number of queued jobs/tasks/slots: 148/1,820/12,378
76 users have or had running or queued jobs over the past 7 days, 96 over the past 15 days,
and 111 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
Using the drop-down menus, you can sort the current cluster snapshot by name, number of CPUs,
usage, load, or memory; view the past load over 7, 15, or 30 days; and highlight a given user.
This page was last updated on Wednesday, 13-May-2026 07:11:57 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:51.
Warnings
Oversubscribed Jobs
As of Wed May 13 07:07:04 EDT 2026 (5 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1843/113, 148 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
14344827 euk-SNP_array johnsonsj +1:13 2 135.8% mThM.q 65-19 114
14345675 alignment_bisma mancusij +1:09 8 164.7% lThM.q 93-01
14347024 alignment_bisma mancusij 18:12 2 202.6% mThM.q 65-22
14347027 alignment_bisma mancusij 18:12 2 205.8% lThM.q 93-02
(more by mancusij)
⇒ Equivalent to 12.1 overused CPUs: 16 CPUs used at 175.9% on average.
To see them all use:
'q+ -osub -u mancusij' (4)
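The overused-CPU figure is just the usage above 100% summed over the job's CPUs. A minimal
sketch of the arithmetic, using the values from the summary line above (assumes awk; not part
of the report's tooling):

  # 16 CPUs averaging 175.9%: anything above 100% counts as overuse
  awk 'BEGIN { npes = 16; avg = 175.9; printf "%.1f overused CPUs\n", npes*(avg-100)/100 }'
  # prints: 12.1 overused CPUs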
Inefficient Jobs
As of Wed May 13 07:07:05 EDT 2026 (47 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1843/113, 148 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12804788 IQ_50p_iqtree morrisseyd +40:23 64 18.7% lThC.q 76-03
13950738 vitis_ssp_cactu niez +6:23 110 1.2% mThC.q 76-08
13993447 earlgrey zhangy +6:10 12 29.6% lThM.q 65-10 13
13993447 earlgrey zhangy +6:10 12 31.5% lThM.q 65-07 4
13993447 earlgrey zhangy +6:10 12 28.1% lThM.q 65-05 5
(more by zhangy)
14230685 iqtree.50p.oct2 cerqueirat +3:11 12 17.6% lThM.q 65-21
14343931 dxy_windowed_wi figueiroh +1:16 16 6.0% mThC.q 65-06 1
14343931 dxy_windowed_wi figueiroh +1:16 16 3.2% mThC.q 65-18 10
14343931 dxy_windowed_wi figueiroh +1:16 16 3.1% mThC.q 65-29 11
(more by figueiroh)
14346283 IQ50_Myr santosbe 21:54 30 25.7% lThM.q 65-12
14346767 make_plink.job beckerm 19:30 8 11.9% mThM.q 65-25
14346768 make_plink.job beckerm 19:29 8 11.9% mThM.q 93-05
14346779 assemble_2026-0 girardmg 18:41 2 1.4% lTWFM.sq 64-15
14348022 bears atkinsonga 13:49 1 0.3% lTWFM.sq 64-16
14348240 dl_hrrr_vwind taom 13:07 1 15.1% mThC.q 76-08
14348412 earthaccess_ges ggonzale 03:07 1 6.3% lTIO.sq 64-15
⇒ Equivalent to 748.9 underused CPUs: 809 CPUs used at 7.4% on average.
To see them all use:
'q+ -ineff -u figueiroh' (32)
'q+ -ineff -u zhangy' (5)
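Conversely, the underused-CPU figure sums the idle fraction over the reserved CPUs. A sketch
with the values above (the 7.4% average is rounded, hence the small difference from 748.9):

  # 809 CPUs averaging 7.4% busy: the idle remainder counts as underuse
  awk 'BEGIN { npes = 809; avg = 7.4; printf "%.1f underused CPUs\n", npes*(100-avg)/100 }'
  # prints: 749.1 underused CPUs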
Nodes with Excess Load
As of Wed May 13 07:07:07 EDT 2026 (2 nodes have a high load, offset=1.5)
                #slots          excess
node     #CPUs   used    load    load
-------------------------------------
65-23       64      2     4.7     2.7 *
93-01       64      9    19.4    10.4 *
Total excess load = 13.1
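A node is flagged when its load exceeds the number of slots in use by more than the offset
(1.5 here); the excess column is load minus slots used. A sketch for node 93-01 (assumes awk):

  awk 'BEGIN { used = 9; load = 19.4; offset = 1.5;
               if (load > used + offset) printf "excess load = %.1f\n", load - used }'
  # prints: excess load = 10.4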
Waiting Jobs
As of Wed May 13 07:07:06 EDT 2026
1 job waiting for collinsa:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
14347905 megahit_array collinsa 14:17 16 146.0 sThM.q 139-265:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=160/840 19.0% for collinsa
max_hM_slots_per_user/1 slots=160/840 19.0% for collinsa in queue sThM.q
max_mem_res_per_user/2 mem_res=1.426T/8.944T 15.9% for collinsa in queue uThM.q
------------------- ------------------------------- ------
2 jobs waiting for johnsonsj:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
14344845 GenomicsDBImpor johnsonsj +1:13 2 8.0 mThC.q 1-774:1
14344847 Genotyping_arra johnsonsj +1:13 2 8.0 mThC.q 1-774:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=280.0G/8.944T 3.1% for johnsonsj in queue uThM.q
max_hM_slots_per_user/2 slots=14/585 2.4% for johnsonsj in queue mThM.q
max_slots_per_user/1 slots=14/840 1.7% for johnsonsj
------------------- ------------------------------- ------
145 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
14348469 reads_first niez 00:48 50 0.0 mThC.q
14348470 reads_first niez 00:47 50 0.0 mThC.q
14348471 reads_first niez 00:47 50 0.0 mThC.q
14348472 reads_first niez 00:47 50 0.0 mThC.q
14348473 reads_first niez 00:46 50 0.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/2 slots=610/640 95.3% for niez in queue mThC.q
max_slots_per_user/1 slots=610/840 72.6% for niez
max_mem_res_per_user/1 mem_res=800.0G/9.985T 7.8% for niez in queue uThC.q
------------------- ------------------------------- ------
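In these quota tables, %used is simply value divided by limit. For example, niez's 610 of
640 mThC.q slots (a minimal check, assuming awk):

  awk 'BEGIN { printf "%.1f%%\n", 100*610/640 }'
  # prints: 95.3%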
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=50/110 45.5% for *
total_slots/1 slots=1845/5960 31.0% for *
total_gpus/1 GPUS=2/8 25.0% for * in queue lTgpu.q
total_mem_res/1 mem_res=5.693T/39.94T 14.3% for * in queue uThC.q
total_mem_res/2 mem_res=3.961T/35.78T 11.1% for * in queue uThM.q
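The queue names and rule syntax suggest a Grid Engine scheduler; if so, this live usage can be
queried directly with the standard qquota tool (an assumption, not part of this report):

  # show resource quota usage for all users
  qquota -u '*'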
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Wed May 13 07:07:07 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=5.693T/39.94T 14.3% for * in queue uThC.q
total_mem_res/2 mem_res=3.961T/35.78T 11.1% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow-manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=640
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
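The limits listed above follow Grid Engine resource quota set (RQS) syntax. Assuming an SGE
derivative, the definitions can be inspected with qconf:

  qconf -srqsl   # list the names of all resource quota sets
  qconf -srqs    # print their full definitions
  # a single rule set looks like this (illustrative; the name is taken
  # from the usage tables above, the exact definition is an assumption):
  # {
  #    name         max_slots_per_user
  #    description  Limit slots/user for all queues
  #    enabled      TRUE
  #    limit        users {*} to slots=840
  # }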
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes
and become unable to keep track of new files, even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it gets full.
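To check a file system yourself, compare the space and inode columns reported by df
(standard coreutils; /scratch is a placeholder path):

  df -h /scratch   # Use%  = % of disk space used
  df -i /scratch   # IUse% = % of inodes used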