Total number of queued jobs/tasks/slots: 55/88/2,544
67 users have or had running or queued jobs over the past 7 days, 93 over the past 15 days,
and 111 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user by
selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 20-Mar-2026 20:42:08 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:01.
Warnings
Oversubscribed Jobs
As of Fri Mar 20 20:37:04 EDT 2026 (0 oversubscribed jobs)
Inefficient Jobs
As of Fri Mar 20 20:37:04 EDT 2026 (18 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 2185/253, 55 jobs queued; showing only inefficient jobs (cpu% < 33% and age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +32:10 5 19.9% lThM.q 64-17
12195552 stairwayNE.job byerlyp +32:07 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +31:09 5 19.8% lThM.q 76-14
12510406 krakenuniq_Scri yisraell +9:20 48 0.0% mThC.q 65-15
12586423 treemix_batch uribeje +9:05 20 5.0% lThM.q 76-07
12647567 treemix_batch uribeje +8:11 20 5.0% lThM.q 93-04
12762748 treemix_batch uribeje +3:12 20 5.0% lThM.q 76-12
12769475 iqtree2 santossam +2:07 20 7.9% mThM.q 93-03
12769494 astral_iqtree santossam +2:07 20 8.7% mThM.q 76-04
12769820 BPP_rajah_2 chippsa +1:10 64 18.8% mThC.q 84-01
12770419 vg_SRR25030295 niez 15:26 32 9.4% mThC.q 76-12
12770457 vg_SRR5891798 niez 08:20 32 32.1% mThC.q 65-26
12770459 vg_SRR5891805 niez 08:15 32 22.0% mThC.q 65-17
(more by niez)
12770916 massbank_ESIn corderm 08:39 4 24.9% lThC.q 64-11
12771413 snpcalling_call morrisseyd 06:39 48 31.4% mThC.q 76-08
12771415 earlgrey zhangy 06:37 24 24.7% lThC.q 65-07 11
12771415 earlgrey zhangy 06:36 24 16.3% lThC.q 65-27 14
⇒ Equivalent to 381.0 underused CPUs: 455 CPUs used at 16.3% on average.
To see them all, use:
'q+ -ineff -u niez' (4)
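The underused-CPU figure follows from the slot count and the average efficiency: roughly 455 x (1 - 0.163) = 381 CPUs held but idle. A minimal sketch of that arithmetic (an assumed reading of the summary line above, not the mk-webpage.pl source):

    # Assumed definition: slots held by the inefficient jobs, discounted
    # by their average cpu%; values taken from the summary line above.
    total_slots = 455          # CPUs held by the 18 inefficient jobs
    avg_cpu = 0.163            # average cpu%, as rounded in the report
    underused = total_slots * (1.0 - avg_cpu)
    print(f"~{underused:.0f} underused CPUs")   # ~381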
Nodes with Excess Load
As of Fri Mar 20 20:37:06 EDT 2026 (5 nodes have a high load, offset=1.5)
              #slots       excess
node   #CPUs   used  load    load
-----------------------------------
65-21     64      1   6.3     5.3 *
75-01    128     21  25.7     4.7 *
75-07    128      0   2.2     2.2 *
76-02    192      0   1.6     1.6 *
93-06     96     13  19.3     6.3 *
-----------------------------------
Total excess load = 20.1
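The excess values above are consistent with excess = load - slots used, a node being flagged when its excess exceeds the offset (1.5 here). A minimal Python sketch, assuming this definition:

    # Assumed flagging rule, reconstructed from the table above:
    # excess = load - slots_used; list the node when excess > offset.
    OFFSET = 1.5
    nodes = {               # node: (slots used, load), from the table
        "65-21": (1, 6.3), "75-01": (21, 25.7), "75-07": (0, 2.2),
        "76-02": (0, 1.6), "93-06": (13, 19.3),
    }
    total = 0.0
    for name, (used, load) in sorted(nodes.items()):
        excess = load - used
        if excess > OFFSET:
            print(f"{name}: excess load {excess:.1f}")
            total += excess
    print(f"Total excess load = {total:.1f}")    # 20.1, as reported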
High Memory Jobs
Statistics
The following job does not exist or permissions are not sufficient:
12773484
User          nSlots        memory        memory        vmem     maxvmem   ratio
name          used          reserved      used          used     used      resd/maxvm
              (count, %)    ([TB], %)     ([TB], %)     [TB]     [TB]
--------------------------------------------------------------------------------------------------
xuj 152 34.2% 7.4219 49.2% 1.3997 79.8% 1.0483 2.1846 3.4
sossajef 104 23.4% 4.3945 29.1% 0.0790 4.5% 0.0923 0.1345 32.7
uribeje 60 13.5% 1.7578 11.6% 0.0405 2.3% 0.0406 0.0406 43.3
ariasc 32 7.2% 0.6250 4.1% 0.1304 7.4% 0.1307 0.1322 4.7
nevesk 20 4.5% 0.3906 2.6% 0.0009 0.1% 0.1567 0.1602 2.4
santossam 40 9.0% 0.2344 1.6% 0.0112 0.6% 0.0107 0.0204 11.5
morrisseyd 9 2.0% 0.1406 0.9% 0.0288 1.6% 0.0323 0.0470 3.0
byerlyp 15 3.4% 0.0586 0.4% 0.0100 0.6% 0.0101 0.0101 5.8
willishr 6 1.4% 0.0391 0.3% 0.0055 0.3% 0.0672 0.0987 0.4
horowitzj 2 0.5% 0.0312 0.2% 0.0058 0.3% 0.0068 0.0069 4.5
cabreroa 4 0.9% 0.0010 0.0% 0.0413 2.4% 0.0535 0.0830 0.0
==================================================================================================
Total 444 15.0947 1.7532 1.6491 2.9183 5.2
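The last column is the ratio of memory reserved to the peak virtual memory actually used (resd/maxvm); values well above 1 flag over-reservation (e.g. sossajef reserved roughly 33x what the jobs peaked at). A quick check of that arithmetic, assuming this reading of the columns:

    # Assumed meaning of "ratio resd/maxvm": memory reserved / maxvmem used.
    rows = {                # user: (reserved [TB], maxvmem [TB]), from the table
        "xuj":      (7.4219, 2.1846),
        "sossajef": (4.3945, 0.1345),
        "uribeje":  (1.7578, 0.0406),
    }
    for user, (reserved, maxvmem) in rows.items():
        print(f"{user}: {reserved / maxvmem:.1f}")   # 3.4, 32.7, 43.3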
Warnings
28 high memory jobs produced a warning:
4 for ariasc
3 for byerlyp
1 for cabreroa
2 for horowitzj
8 for morrisseyd
1 for nevesk
2 for santossam
3 for sossajef
3 for uribeje
1 for willishr
As of Fri Mar 20 20:37:05 EDT 2026
54 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12770469 vg_SRR5891945 niez 18:31 32 300.0 mThC.q
12770470 vg_SRR5891947 niez 18:31 32 300.0 mThC.q
12770471 vg_SRR5891948 niez 18:31 32 300.0 mThC.q
12770472 vg_SRR5891949 niez 18:31 32 300.0 mThC.q
12770473 vg_SRR5891952 niez 18:31 32 300.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=832/840 99.0% for niez
max_hC_slots_per_user/2 slots=832/840 99.0% for niez in queue mThC.q
max_mem_res_per_user/1 mem_res=7.617T/9.985T 76.3% for niez in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for zhangy:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12771415 earlgrey zhangy 06:39 24 192.0 lThC.q 16-49:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/3 slots=408/431 94.7% for zhangy in queue lThC.q
max_slots_per_user/1 slots=408/840 48.6% for zhangy
max_mem_res_per_user/1 mem_res=3.188T/9.985T 31.9% for zhangy in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=65/110 59.1% for *
total_mem_res/2 mem_res=15.14T/35.78T 42.3% for * in queue uThM.q
total_slots/1 slots=2186/5960 36.7% for *
total_mem_res/1 mem_res=13.27T/39.94T 33.2% for * in queue uThC.q
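Each %used figure is simply value/limit from the resource=value/limit field, with G and T suffixes on memory values. A small parsing sketch (the regex and unit handling are assumptions based on the rows above, not the report generator's code):

    import re

    UNITS = {"G": 1, "T": 1024}    # normalize memory to GB; bare values are slots

    def pct_used(field: str) -> float:
        m = re.match(r"\w+=([\d.]+)([GT]?)/([\d.]+)([GT]?)", field)
        value = float(m.group(1)) * UNITS.get(m.group(2), 1)
        limit = float(m.group(3)) * UNITS.get(m.group(4), 1)
        return 100.0 * value / limit

    print(f"{pct_used('mem_res=13.27T/39.94T'):.1f}%")   # 33.2%
    print(f"{pct_used('slots=2186/5960'):.1f}%")         # 36.7%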
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Fri Mar 20 20:37:06 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=13.27T/39.94T 33.2% for * in queue uThC.q
total_mem_res/2 mem_res=15.14T/35.78T 42.3% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
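The limits above are Grid Engine resource quota sets; the rule names cited in the quota tables earlier (e.g. max_slots_per_user/1) refer to a rule set and the rule's position within it. As a sketch, the rule under "Limit slots/user for all queues" might appear roughly like this in qconf -srqs output (the name and description fields here are assumptions, not taken from this cluster):

    {
       name         max_slots_per_user
       description  Limit slots/user for all queues
       enabled      TRUE
       limit        users {*} to slots=840
    }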
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can become full because it
runs out of inodes and can no longer keep track of new files.
The % of inodes used should be lower than or comparable to the % of disk space used.
If it is much larger, the disk can become unusable before it is full.
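A quick way to compare the two percentages on a live system is os.statvfs, which reports both block and inode counts; a minimal sketch (the mount point is a placeholder, not a path from this report):

    import os

    def capacity(path):
        # %space from block counts, %inodes from file (inode) counts
        st = os.statvfs(path)
        pct_space = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks
        pct_inodes = 100.0 * (st.f_files - st.f_ffree) / st.f_files
        print(f"{path}: {pct_space:.1f}% space, {pct_inodes:.1f}% inodes used")
        if pct_inodes > pct_space:
            print("  warning: inode use outpaces space use (many small files)")

    capacity("/scratch")    # placeholder mount point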