Total number of queued jobs/tasks/slots: 461/896/899
74 users have or had running or queued jobs over the past 7 days, 92 over the past 15 days, and 106 over the past 30 days.
Click on the tabs to view each section, or on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory, and
view the past load for 7, 15, or 30 days, as well as highlight a given user, by
selecting the corresponding options in the drop-down menus.
This page was last updated on Tuesday, 14-Apr-2026 14:23:17 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:55.
Warnings
Oversubscribed Jobs
As of Tue Apr 14 14:17:06 EDT 2026 (3 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1866/908, 461 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
13143410 Josa_v2 martinezl2 +1:16 4 197.8% lThM.q 64-18
13143800 Josa18 martinezl2 +1:02 4 245.4% lThM.q 75-02
13143892 Josa19 martinezl2 +1:01 4 210.2% lThM.q 65-05
⇒ Equivalent to 14.1 overused CPUs: 12 CPUs used at 217.8% on average.
Inefficient Jobs
As of Tue Apr 14 14:17:07 EDT 2026 (13 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1866/908, 461 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +57:04 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +57:01 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +56:03 5 19.8% lThM.q 76-14
12770885 IQ_50p_iqtree morrisseyd +25:02 64 31.0% lThC.q 76-08
12804788 IQ_50p_iqtree morrisseyd +12:06 64 19.6% lThC.q 76-03
12804791 IQ_75p_iqtree morrisseyd +12:06 64 26.5% lThC.q 65-27
12854032 montagem-longin santossam +6:02 20 5.4% mThM.q 93-06
13029628 vitis_ssp_cactu niez +3:09 110 0.9% mThC.q 75-02
13144087 run_apr14_2026_ szieba 11:09 60 19.4% lThM.q 65-26
13144088 run_apr14_2026_ szieba 11:08 60 19.2% lThM.q 65-23
13144121 snaq_boot jourdain-fievetl 05:36 4 24.9% sThC.q 65-13
13147121 poouli campanam 02:59 1 1.9% lTWFM.sq 64-15
13157259 vamb_cat_derep_ bourkeb 01:14 16 29.7% mThM.q 93-05
⇒ Equivalent to 394.7 underused CPUs: 478 CPUs used at 17.4% on average.
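The "⇒ Equivalent to ..." summary lines are PE-weighted tallies of the cpu% column. A minimal sketch of that arithmetic, in Python, assuming cpu% is each job's CPU use expressed as a percentage of its allocated PEs (the numbers are the rows of the inefficient-jobs table above):

jobs = [  # (nPEs, cpu%) from the table above
    (5, 20.0), (5, 19.9), (5, 19.8), (64, 31.0), (64, 19.6), (64, 26.5),
    (20, 5.4), (110, 0.9), (60, 19.4), (60, 19.2), (4, 24.9), (1, 1.9), (16, 29.7),
]
total_pes = sum(n for n, _ in jobs)
avg_pct = sum(n * p for n, p in jobs) / total_pes          # PE-weighted average cpu%
underused = sum(n * (100.0 - p) / 100.0 for n, p in jobs)  # idle CPU equivalents
print(f"{underused:.1f} underused CPUs: {total_pes} CPUs used at {avg_pct:.1f}% on average")
# prints: 394.7 underused CPUs: 478 CPUs used at 17.4% on average
# The oversubscribed summary uses the same idea with (p - 100.0), giving "overused" CPUs.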
As of Tue Apr 14 14:17:09 EDT 2026
1 job waiting for campanam:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13147250 historicOAAM campanam 02:59 1 8.0 lTWFM.sq
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_concurrent_jobs_per_u no_concurrent_jobs=1/1 100.0% for campanam in queue lTWFM.sq
wfm_slots_per_user/1 slots=1/2 50.0% for campanam in queue lTWFM.sq
max_concurrent_jobs_per_u no_concurrent_jobs=1/4 25.0% for campanam in queue qrsh.iq
qrsh_u_slots/1 slots=1/16 6.2% for campanam in queue qrsh.iq
max_slots_per_user/1 slots=46/840 5.5% for campanam
max_hC_slots_per_user/2 slots=44/840 5.2% for campanam in queue mThC.q
max_mem_res_per_user/1 mem_res=36.00G/9.985T 0.4% for campanam in queue uThC.q
------------------- ------------------------------- ------
5 jobs waiting for mghahrem:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13018609 A10_Map_Creatio mghahrem +4:21 1 sTgpu.q 52-143:1
13018610 A10_Map_Creatio mghahrem +4:21 1 sTgpu.q 52-143:1
13018611 A10_Map_Creatio mghahrem +4:21 1 sTgpu.q 52-143:1
13018612 A10_Map_Creatio mghahrem +4:21 1 sTgpu.q 52-143:1
13018613 A10_Map_Creatio mghahrem +4:21 1 sTgpu.q 1-72:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus_per_user/1 GPUS=4/4 100.0% for mghahrem in queue qgpu.iq
max_gpus_per_user/1 GPUS=4/4 100.0% for mghahrem in queue sTgpu.q
max_slots_per_user/1 slots=4/840 0.5% for mghahrem
------------------- ------------------------------- ------
1 job waiting for pattonp:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13143944 sjs-rsf pattonp 23:30 4 64.0 sTgpu.q
none running.
454 jobs waiting for sylvain (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
13158333 cmpps_72d_r.3.1 sylvain 00:45 1
13158334 cmpps_72d_r.3.1 sylvain 00:45 1
13158335 cmpps_72d_r.3.1 sylvain 00:45 1
13158336 cmpps_72d_r.3.1 sylvain 00:45 1
13158337 cmpps_72d_r.3.1 sylvain 00:45 1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=840/840 100.0% for sylvain
max_hC_slots_per_user/1 slots=840/840 100.0% for sylvain in queue sThC.q
max_mem_res_per_user/1 mem_res=1.641T/9.985T 16.4% for sylvain in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus/1 GPUS=7/8 87.5% for * in queue sTgpu.q
total_slots/1 slots=1868/5960 31.3% for *
blast2GO/1 slots=30/110 27.3% for *
total_mem_res/1 mem_res=4.944T/39.94T 12.4% for * in queue uThC.q
total_mem_res/2 mem_res=2.980T/35.78T 8.3% for * in queue uThM.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Tue Apr 14 14:17:09 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=4.944T/39.94T 12.4% for * in queue uThC.q
total_mem_res/2 mem_res=2.980T/35.78T 8.3% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
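The limits above follow the resource quota set syntax of a Grid Engine-type scheduler (which the queue names and the qquota-style tables earlier on this page suggest). As a sketch only, a single rule such as max_slots_per_user would appear roughly like this in the output of qconf -srqs, and a user's current standing against it is what the quota tables above report (e.g. via qquota -u username):

{
   name         max_slots_per_user
   description  Limit slots/user for all queues
   enabled      TRUE
   limit        users {*} to slots=840
}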
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as
plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to create new files even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it is full.
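As an illustration of that comparison (a sketch in Python, assuming a POSIX system; /scratch is a hypothetical mount point):

import os

st = os.statvfs("/scratch")                                     # hypothetical mount point
space_used = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks   # % of disk space used
inode_used = 100.0 * (st.f_files - st.f_ffree) / st.f_files     # % of inodes used
print(f"space {space_used:.1f}% full, inodes {inode_used:.1f}% used")
if inode_used > space_used:
    print("inode use is outpacing space use: many small files")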