Total number of queued jobs/tasks/slots: 86/86/2,752
68 users have or had running or queued jobs over the past 7 days, 85 over the past 15 days,
and 99 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Monday, 02-Mar-2026 16:32:12 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:01.
Warnings
Oversubscribed Jobs
As of Mon Mar 2 16:27:04 EST 2026 (8 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1655/108, 86 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12235529 denovo_assembly chippsa 06:34 32 215.6% mThM.q 93-04
12235531 denovo_assembly chippsa 06:33 32 346.6% mThM.q 75-03
12235532 denovo_assembly chippsa 06:33 32 373.1% mThM.q 76-03
(more by chippsa)
12240462 paleomix_anc_fl hagemannm 01:16 4 191.3% mThC.q 64-07
12240463 paleomix_anc_fl hagemannm 01:16 4 193.7% mThC.q 64-14
12240464 paleomix_anc_fl hagemannm 01:16 4 191.3% mThC.q 64-08
⇒ Equivalent to 352.6 overused CPUs: 172 CPUs used at 305.0% on average.
To see them all use:
'q+ -osub -u chippsa' (5)
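The 352.6 figure follows from the summary line above: 172 reserved CPUs running at 305.0% on average
consume roughly 172 x 3.05 ≈ 524.6 CPU-equivalents, i.e. about 352.6 CPUs more than were requested.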
Inefficient Jobs
As of Mon Mar 2 16:27:04 EST 2026 (36 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1655/108, 86 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +14:06 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +14:03 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +13:05 5 19.6% lThM.q 76-14
12226716 sansibia_spades nelsonjo +4:02 10 16.0% mThM.q 65-06
12230404 Vitis_VG_Idx3_S niez +2:16 32 3.1% uTxlM.rq 93-06
12235370 astral niez 17:00 10 10.3% mThC.q 75-04
12235382 vg_SRR14617986 niez 12:10 32 3.1% mThC.q 84-01
(more by niez)
12237515 getorganelle_ar athalappila 03:23 8 18.3% mThM.q 75-04 30
12237515 getorganelle_ar athalappila 03:23 8 18.8% mThM.q 76-07 51
12237515 getorganelle_ar athalappila 03:23 8 19.2% mThM.q 65-21 57
(more by athalappila)
12238822 RepeatM_Azeteki graujh 02:40 32 3.5% mThM.q 76-04
⇒ Equivalent to 867.3 underused CPUs: 907 CPUs used at 4.4% on average.
To see them all use:
'q+ -ineff -u athalappila' (5)
'q+ -ineff -u niez' (26)
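The underused figure is computed the same way: 907 reserved CPUs running at 4.4% on average do the
work of roughly 907 x 0.044 ≈ 40 CPUs, leaving about 867 of the reserved CPUs effectively idle.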
2 for athalappila
1 for bourkeb
15 for breusingc
3 for byerlyp
1 for castanedaric
5 for chippsa
15 for morrisseyd
1 for nelsonjo
1 for niez
1 for ramosi
As of Mon Mar 2 16:27:06 EST 2026
86 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd(GB) queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12235406 vg_SRR5627786 niez 12:10 32 200.0 mThC.q
12235407 vg_SRR5627787 niez 12:10 32 200.0 mThC.q
12235408 vg_SRR5627789 niez 12:10 32 200.0 mThC.q
12235409 vg_SRR5627790 niez 12:10 32 200.0 mThC.q
12235410 vg_SRR5627791 niez 12:09 32 200.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=810/840 96.4% for niez
max_hC_slots_per_user/2 slots=778/840 92.6% for niez in queue mThC.q
max_mem_res_per_user/1 mem_res=4.805T/9.985T 48.1% for niez in queue uThC.q
max_concurrent_jobs_per_u no_concurrent_jobs=1/3 33.3% for niez in queue uTxlM.rq
max_mem_res_per_user/3 mem_res=1.875T/7.874T 23.8% for niez in queue uTxlM.rq
max_xlM_slots_per_user/1 slots=32/536 6.0% for niez in queue uTxlM.rq
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=64/110 58.2% for *
total_slots/1 slots=1657/5960 27.8% for *
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
total_mem_res/2 mem_res=5.320T/35.78T 14.9% for * in queue uThM.q
total_mem_res/1 mem_res=5.092T/39.94T 12.7% for * in queue uThC.q
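In these quota tables, %used is simply the reported value divided by the limit; for example,
slots=1657/5960 gives 1657 / 5960 ≈ 27.8%.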
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Mon Mar 2 16:27:06 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=5.092T/39.94T 12.7% for * in queue uThC.q
total_mem_res/2 mem_res=5.320T/35.78T 14.9% for * in queue uThM.q
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
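The limits listed above are enforced as Grid Engine resource quota sets. As a minimal sketch (this is
the generic RQS layout, not copied from this cluster's actual configuration), the first limit could
appear in 'qconf -srqs' output roughly as:
   {
      name         total_slots
      description  Limit slots for all users together
      enabled      TRUE
      limit        users * to slots=5960
   }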
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to keep track of new files, even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it actually fills up.
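To compare the two percentages on a given file system, you can use df; for example (the path below is
illustrative):
   df -h /scratch    # Use%  = % of disk space used
   df -i /scratch    # IUse% = % of inodes used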