Total number of queued jobs/tasks/slots: 135/11,591/14,664
47 users have or had running or queued jobs over the past 7 days, 68 over the past 15 days, and 97 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory; view the load for the past 7, 15, or 30 days; and highlight a given user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Monday, 08-Dec-2025 09:41:47 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:40.
Warnings
Oversubscribed Jobs
As of Mon Dec 8 09:37:12 EST 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Mon Dec 8 09:37:12 EST 2025 (28 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1201/214, with 133 queued jobs and 4 extra; showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID     name            user       age    nPEs  cpu% queue  node  taskID
11104773  admixBayes.job  figueiroh   +3:21   64  7.4% mThC.q 75-01
11104795  admixBayes2.job figueiroh   +3:21   64  7.9% mThC.q 93-04
11104801  admixBayes3.job figueiroh   +3:21   64  7.0% mThC.q 76-05
11373441  vg_SRR14617986  niez        01:39   20  6.6% mThM.q 76-10
11373443  vg_SRR14617997  niez        01:39   20  7.3% mThM.q 75-06
11373445  vg_SRR14618037  niez        01:39   20  6.5% mThM.q 76-08
(more by niez)
11043155  Delphinidae_IQT mcgowenm   +24:00    6 16.6% lThM.q 75-02
11073831  a18S_mitobim_lo wirshingh   +4:22    6 16.7% mThC.q 64-10
⇒ Equivalent to 559.4 underused CPUs: 620 CPUs used at 9.8% on average.
To see them all use:
'q+ -ineff -u niez' (23)
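The summary line above is simple bookkeeping: with 620 CPUs held by these jobs at 9.8% average efficiency, 620 × (1 − 0.098) ≈ 559.4 CPUs are effectively idle. A minimal sketch of that computation, assuming the thresholds quoted above (the job tuples are illustrative samples from the table, not a live query):

  # Sketch: flag inefficient jobs and total the equivalent underused CPUs.
  CPU_PCT_MIN = 33.0  # flag below this CPU efficiency (%)
  AGE_MIN_H = 1.0     # ...and older than this (hours)

  jobs = [
      # (jobID, user, age_hours, nPEs, cpu_pct) -- sampled from the table
      (11104773, "figueiroh", 93.35, 64, 7.4),
      (11373441, "niez", 1.65, 20, 6.6),
      (11043155, "mcgowenm", 48.0, 6, 16.6),
  ]

  flagged = [j for j in jobs if j[4] < CPU_PCT_MIN and j[2] > AGE_MIN_H]
  used = sum(npes for _, _, _, npes, _ in flagged)
  # PE-weighted average efficiency across the flagged jobs.
  avg = sum(npes * pct for _, _, _, npes, pct in flagged) / used
  print(f"{used * (1 - avg / 100):.1f} underused CPUs: "
        f"{used} CPUs used at {avg:.1f}% on average")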
Nodes with Excess Load
As of Mon Dec 8 09:37:39 EST 2025 (6 nodes have a high load, offset=1.5)
           #slots        excess
node  #CPUs  used   load   load
---------------------------------
65-04    64    16   20.7    4.7 *
65-06    64    16   20.0    4.0 *
65-23    64    17   19.6    2.6 *
75-05   128    19   25.4    6.4 *
76-04   192    16   17.9    1.9 *
93-06    96     4    5.6    1.6 *
Total excess load = 21.2
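A node's excess load is its reported load minus the number of slots in use, and a node is listed when the excess exceeds the offset (1.5 here): for 65-04, 20.7 − 16 = 4.7. A minimal sketch of the check, with the table's values inlined:

  # Sketch: flag nodes whose load exceeds their used slots by more than OFFSET.
  OFFSET = 1.5  # value quoted in the report header

  nodes = [  # (node, nCPUs, slots_used, load) -- copied from the table
      ("65-04", 64, 16, 20.7), ("65-06", 64, 16, 20.0),
      ("65-23", 64, 17, 19.6), ("75-05", 128, 19, 25.4),
      ("76-04", 192, 16, 17.9), ("93-06", 96, 4, 5.6),
  ]

  total = 0.0
  for name, ncpus, used, load in nodes:
      excess = load - used          # load not accounted for by granted slots
      if excess > OFFSET:
          total += excess
          print(f"{name:6s} {ncpus:5d} {used:5d} {load:6.1f} {excess:6.1f} *")
  print(f"Total excess load = {total:.1f}")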
Waiting Jobs
As of Mon Dec 8 09:37:39 EST 2025
1 job waiting for breusingc:
jobID     jobName         user             age    nPEs memReqd  queue  taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11227966  ReadFilter      breusingc         +2:13   16     64.0 mThC.q 519-572:1
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
max_slots_per_user/1      slots=320/840                    38.1% for breusingc
max_hC_slots_per_user/2   slots=320/840                    38.1% for breusingc in queue mThC.q
max_mem_res_per_user/1    mem_res=1.250T/9.985T            12.5% for breusingc in queue uThC.q
------------------------- ------------------------------- ------
3 jobs waiting for hchong:
jobID     jobName         user             age    nPEs memReqd  queue  taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11091085  mea_l2prof_Aura hchong            +4:01    1      8.0 sThM.q 92103-95000:1
11091086  mea_l2prof_Aura hchong            +4:01    1      8.0 sThM.q 95001-100000:1
11091087  mea_l2prof_Aura hchong            +4:01    1      8.0 sThM.q 100001-103504:1
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
max_slots_per_user/1      slots=96/840                     11.4% for hchong
max_hM_slots_per_user/1   slots=96/840                     11.4% for hchong in queue sThM.q
max_mem_res_per_user/2    mem_res=768.0G/8.944T             8.4% for hchong in queue uThM.q
------------------------- ------------------------------- ------
3 jobs waiting for hwang:
jobID     jobName         user             age    nPEs memReqd  queue  taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11376662  V08map1_band000 hwang             00:00    1
11376664  V08map2__band00 hwang             00:00    1
11376666  V08map1_band000 hwang             00:00    1
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
max_concurrent_jobs_per_u no_concurrent_jobs=1/4           25.0% for hwang in queue qrsh.iq
qrsh_u_slots/1            slots=1/16                        6.2% for hwang in queue qrsh.iq
max_slots_per_user/1      slots=21/840                      2.5% for hwang
max_hC_slots_per_user/1   slots=20/840                      2.4% for hwang in queue sThC.q
max_mem_res_per_user/1    mem_res=40.00G/9.985T             0.4% for hwang in queue uThC.q
------------------------- ------------------------------- ------
125 jobs waiting for niez (top 5):
jobID     jobName         user             age    nPEs memReqd  queue  taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11372152  HT_Vitis_montic niez              04:19   16    160.0 lThM.q
11372153  HT_Vitis_mustan niez              04:19   16    160.0 lThM.q
11372154  HT_Vitis_mustan niez              04:18   16    160.0 lThM.q
11372155  HT_Vitis_piasez niez              04:18   16    160.0 lThM.q
11372156  HT_Vitis_piasez niez              04:18   16    160.0 lThM.q
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
max_mem_res_per_user/2    mem_res=9.336T/8.944T           104.4% for niez in queue uThM.q
max_slots_per_user/1      slots=476/840                    56.7% for niez
max_hM_slots_per_user/3   slots=176/390                    45.1% for niez in queue lThM.q
max_hM_slots_per_user/2   slots=260/585                    44.4% for niez in queue mThM.q
max_hC_slots_per_user/2   slots=40/840                      4.8% for niez in queue mThC.q
max_mem_res_per_user/1    mem_res=480.0G/9.985T             4.7% for niez in queue uThC.q
------------------------- ------------------------------- ------
2 jobs waiting for sylvain:
jobID     jobName         user             age    nPEs memReqd  queue  taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11376663  cmpps_72d_r.150 sylvain           00:00    1
11376665  cmpps_72d_r.150 sylvain           00:00    1
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
max_slots_per_user/1      slots=2/840                       0.2% for sylvain
max_hC_slots_per_user/1   slots=2/840                       0.2% for sylvain in queue sThC.q
max_mem_res_per_user/1    mem_res=4.000G/9.985T             0.0% for sylvain in queue uThC.q
------------------------- ------------------------------- ------
Overall Quota Usage
quota rule                resource=value/limit             %used
------------------------- ------------------------------- ------
total_mem_res/2           mem_res=15.95T/35.78T            44.6% for * in queue uThM.q
blast2GO/1                slots=36/110                     32.7% for *
total_slots/1             slots=1199/5960                  20.1% for *
total_gpus/1              GPUS=1/8                         12.5% for * in queue qgpu.iq
total_mem_res/1           mem_res=3.424T/39.94T             8.6% for * in queue uThC.q
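In all of these tables %used is simply value/limit for the named resource, after normalizing the G and T memory suffixes: mem_res=15.95T/35.78T gives 15.95/35.78 ≈ 44.6%, and niez's mem_res=9.336T/8.944T is over 100%, which is why those jobs wait. A small parser sketch, assuming only the resource=value/limit field format shown above:

  # Sketch: compute %used from "resource=value/limit" quota fields,
  # normalizing the G/T suffixes used in the tables above.
  import re

  UNITS = {"G": 1, "T": 1024}  # gigabytes per unit

  def to_gb(s: str) -> float:
      m = re.fullmatch(r"([\d.]+)([GT]?)", s)
      value, unit = m.groups()
      return float(value) * UNITS.get(unit, 1)

  def pct_used(field: str) -> float:
      # e.g. "mem_res=15.95T/35.78T" or "slots=320/840"
      _, amounts = field.split("=")
      value, limit = amounts.split("/")
      return 100.0 * to_gb(value) / to_gb(limit)

  print(f"{pct_used('mem_res=15.95T/35.78T'):.1f}%")  # 44.6%
  print(f"{pct_used('slots=320/840'):.1f}%")          # 38.1%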
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Mon Dec 8 09:37:39 EST 2025
quota rule                resource=value/limit             %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1           mem_res=3.424T/39.94T             8.6% for * in queue uThC.q
total_mem_res/2           mem_res=15.95T/35.78T            44.6% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
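Taken together, a job waits whenever granting it would push any matching rule over its limit; the most specific per-queue rule applies alongside the global per-user ones. A toy admission check under that reading, with a few of the hiMem limits above hard-coded (the dictionary layout and the fits() helper are assumptions for illustration, not the scheduler's actual logic):

  # Sketch: would a new request fit under the per-user limits listed above?
  LIMITS = {
      "slots_total": 840,        # Limit slots/user for all queues
      "slots_lThM.q": 390,       # Limit slots/user for the lThM.q hiMem queue
      "mem_res_hiMem_gb": 9159,  # per-user mem_res across sThM/mThM/lThM/uThM
  }

  def fits(usage: dict, req_slots: int, req_mem_gb: float, queue: str) -> bool:
      """True if the request stays within every matching per-user limit."""
      if usage["slots_total"] + req_slots > LIMITS["slots_total"]:
          return False
      per_queue = f"slots_{queue}"
      if per_queue in LIMITS and usage.get(per_queue, 0) + req_slots > LIMITS[per_queue]:
          return False
      return usage["mem_res_gb"] + req_mem_gb <= LIMITS["mem_res_hiMem_gb"]

  # niez above: 476 slots total, 176 in lThM.q, 9.336T reserved -> next job waits.
  usage = {"slots_total": 476, "slots_lThM.q": 176, "mem_res_gb": 9.336 * 1024}
  print(fits(usage, 16, 160.0, "lThM.q"))  # False: mem_res already over the cap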
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can become full because it runs out of inodes and can no longer keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it fills up.
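Both percentages can be read directly from the file system via statvfs, which reports block counts for space and inode counts for files. A short sketch (the mount point is a placeholder):

  # Sketch: report % disk space full and % of inodes used for one mount point.
  import os

  def capacity(path):
      st = os.statvfs(path)
      pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks)
      pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)
      return pct_space, pct_inodes

  space, inodes = capacity("/")  # placeholder mount point
  print(f"{space:.0f}% of space, {inodes:.0f}% of inodes used")
  if inodes > space:
      print("many small files: inodes may run out before space does")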