Total number of queued jobs/tasks/slots: 60/682/2,594
37 users have had running or queued jobs over the past 7 days, 52 over the past 15 days, and 110 over the past 30 days.
Click on the tabs to view each section, on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load,
or memory, and view the past load for 7, 15, or 30 days, as well as highlight a given
user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Saturday, 03-Jan-2026 22:14:16 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 2:17.
Warnings
Oversubscribed Jobs
As of Sat Jan 3 22:07:05 EST 2026 (3 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1875/142, 60 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11787162 m_2_prithvi_pip xuj +8:19 1 262.8% lTgpu.q 79-02
11787163 m_2_prithvi_pip xuj +8:19 1 170.4% lTgpu.q 79-01
11789703 vae mperez 06:15 32 135.5% lTgpu.q 50-01
⇒ Equivalent to 13.7 overused CPUs: 34 CPUs used at 140.3% on average.
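The overused figure follows from the listed jobs' totals: 34 CPUs averaging 140.3% exceed
their allocation by 34 × (140.3% − 100%) ≈ 13.7 CPUs. A minimal Python sketch of that
arithmetic (the function name is illustrative, not part of mk-webpage.pl):

    # Over/under-used CPU equivalents, as implied by the report's summary lines.
    def equivalent_cpus(n_cpus, avg_cpu_pct):
        """CPUs consumed beyond (or left idle below) the 100% allocation."""
        return n_cpus * abs(avg_cpu_pct - 100.0) / 100.0

    print(equivalent_cpus(34, 140.3))  # ~13.7, the oversubscribed total above
    print(equivalent_cpus(56, 7.4))    # ~51.86; the inefficient total below shows 51.8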
Inefficient Jobs
As of Sat Jan 3 22:07:05 EST 2026 (3 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1875/142, 60 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11049128 xvcf2 uribeje +42:09 8 27.3% uThM.q 76-13
11789697 phyluce_assembl horowitzj 11:35 24 4.1% mThC.q 76-09
11789698 phyluce_assembl horowitzj 11:35 24 4.1% mThC.q 64-08
⇒ Equivalent to 51.8 underused CPUs: 56 CPUs used at 7.4% on average.
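As above, the equivalent-CPU figure follows from the totals: 56 CPUs averaging 7.4% leave
56 × (100% − 7.4%) ≈ 51.8 CPUs' worth of allocated capacity idle.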
Nodes with Excess Load
As of Sat Jan 3 22:07:08 EST 2026 (4 nodes have a high load, offset=1.5)
                #slots          excess
node    #CPUs     used   load     load
---------------------------------------
50-01      64       50   64.3     14.3 *
76-03     192       36   38.5      2.5 *
76-04     192       34   51.1     17.1 *
76-12     128       34   40.9      6.9 *
Total excess load = 40.8
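A node's excess load appears to be its load average minus the number of slots in use (for
50-01: 64.3 − 50 = 14.3); nodes are listed when that excess exceeds the offset of 1.5.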
High Memory Jobs
Statistics
User           nSlots        memory reserved     memory used      vmem    maxvmem       ratio
name             used              [TB]             [TB]      used [TB]  used [TB]  resd/maxvm
--------------------------------------------------------------------------------------------------
pappalardop   30 48.4%       8.7891  87.5%     0.0691  16.3%     0.0880     0.0880        99.9
uribeje       16 25.8%       0.7812   7.8%     0.2709  63.9%     0.3714     0.3735         2.1
nelsonjo      12 19.4%       0.3750   3.7%     0.0019   0.4%     0.0526     0.0798         4.7
woodh          4  6.5%       0.0977   1.0%     0.0824  19.4%     0.1587     0.1822         0.5
==================================================================================================
Total         62            10.0430            0.4243            0.6707     0.7234        13.9
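The resd/maxvm column is the ratio of memory reserved to peak virtual memory used:
pappalardop reserved 8.7891 TB but peaked at 0.0880 TB (ratio ≈ 99.9), i.e., nearly the
entire reservation went unused, while woodh's ratio of 0.5 means the job used roughly
twice what it reserved. Ratios well above 1 flag over-reservation that ties up memory
other jobs could use.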
Warnings
7 high memory jobs produced a warning:
1 for nelsonjo
3 for pappalardop
2 for uribeje
1 for woodh
As of Sat Jan 3 22:07:07 EST 2026
1 job waiting for johnsonsj:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789431 euk-align_array johnsonsj +1:06 5 80.0 mThC.q 32-40:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/1 mem_res=800.0G/9.985T 7.8% for johnsonsj in queue uThC.q
max_slots_per_user/1 slots=50/840 6.0% for johnsonsj
max_hC_slots_per_user/2 slots=50/840 6.0% for johnsonsj in queue mThC.q
------------------- ------------------------------- ------
1 job waiting for medeirosi:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789710 metaspades_erro medeirosi 03:39 18 144.0 mThC.q 89-96:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=830/840 98.8% for medeirosi
max_hC_slots_per_user/2 slots=830/840 98.8% for medeirosi in queue mThC.q
max_mem_res_per_user/1 mem_res=5.117T/9.985T 51.2% for medeirosi in queue uThC.q
------------------- ------------------------------- ------
3 jobs waiting for mghahrem:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789500 A10_Map_Creatio mghahrem +1:00 1 sTgpu.q 7-143:1
11789501 A10_Map_Creatio mghahrem +1:00 1 sTgpu.q 5-143:1
11789502 A10_Map_Creatio mghahrem +1:00 1 sTgpu.q 1-143:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus_per_user/1 GPUS=2/4 50.0% for mghahrem in queue qgpu.iq
max_gpus_per_user/1 GPUS=2/4 50.0% for mghahrem in queue sTgpu.q
max_slots_per_user/1 slots=2/840 0.2% for mghahrem
------------------- ------------------------------- ------
51 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789534 DG_Vitis_betuli niez 20:40 16 200.0 mTgpu.q
11789535 DG_Vitis_betuli niez 20:40 16 200.0 mTgpu.q
11789536 DG_Vitis_betuli niez 20:40 16 200.0 mTgpu.q
11789537 DG_Vitis_betuli niez 20:40 16 200.0 mTgpu.q
11789538 DG_Vitis_betuli niez 20:40 16 200.0 mTgpu.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_gpus_per_user/2 GPUS=1/3 33.3% for niez in queue mTgpu.q
total_gpus_per_user/1 GPUS=1/4 25.0% for niez in queue qgpu.iq
max_slots_per_user/1 slots=16/840 1.9% for niez
------------------- ------------------------------- ------
1 job waiting for pappalardop:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789437 chesatampa pappalardop +1:04 1 300.0 mThM.q 163-290:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.789T/8.944T 98.3% for pappalardop in queue uThM.q
max_hM_slots_per_user/2 slots=30/585 5.1% for pappalardop in queue mThM.q
max_slots_per_user/1 slots=30/840 3.6% for pappalardop
------------------- ------------------------------- ------
1 job waiting for vohsens:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11789722 coverm_black_co vohsens 01:29 16 128.0 sThC.q 77-141:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=816/840 97.1% for vohsens
max_hC_slots_per_user/1 slots=816/840 97.1% for vohsens in queue sThC.q
max_mem_res_per_user/1 mem_res=6.375T/9.985T 63.8% for vohsens in queue uThC.q
------------------- ------------------------------- ------
2 jobs waiting for xuj:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11787164 m_2_prithvi_pip xuj +8:19 1 lTgpu.q
11787245 m_2_prithvi_pip xuj +8:01 1 lTgpu.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_gpus_per_user/3 GPUS=2/2 100.0% for xuj in queue lTgpu.q
total_gpus_per_user/1 GPUS=2/4 50.0% for xuj in queue qgpu.iq
max_slots_per_user/1 slots=2/840 0.2% for xuj
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=100/110 90.9% for *
total_gpus/1 GPUS=3/8 37.5% for * in queue lTgpu.q
total_mem_res/1 mem_res=12.65T/39.94T 31.7% for * in queue uThC.q
total_slots/1 slots=1875/5960 31.5% for *
total_mem_res/2 mem_res=10.29T/35.78T 28.8% for * in queue uThM.q
total_gpus/1 GPUS=2/8 25.0% for * in queue sTgpu.q
total_gpus/1 GPUS=1/8 12.5% for * in queue mTgpu.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Sat Jan 3 22:07:08 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=12.78T/39.94T 32.0% for * in queue uThC.q
total_mem_res/2 mem_res=10.29T/35.78T 28.8% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
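The limits above read like Grid Engine resource quota sets. A minimal sketch of what the
definition behind the last rule might look like in qconf -srqs form (the description is
an assumption, though the rule name max_slots_per_user does appear in the quota tables
above):

    {
       name         max_slots_per_user
       description  Limit slots/user for all queues
       enabled      TRUE
       limit        users {*} to slots=840
    }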
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as
plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes
and become unable to keep track of new files, so it becomes effectively full.
The % of inodes used should be lower than or comparable to the % of disk space used;
if it is much larger, the disk can become unusable before it is actually full.
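To compare the two percentages yourself on a given file system (the equivalent of
comparing df -h with df -i output), a small Python sketch, with /scratch as a stand-in
mount point:

    import os

    def fs_usage(path):
        """Return (% disk space used, % inodes used) for the file system at path."""
        st = os.statvfs(path)
        space_pct = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks
        inode_pct = 100.0 * (st.f_files - st.f_ffree) / st.f_files
        return space_pct, inode_pct

    space, inodes = fs_usage("/scratch")  # example mount point
    print(f"space {space:.1f}% full, inodes {inodes:.1f}% used")
    # Healthy: inode% <= space%; much larger means many tiny files.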