Hydra-7@ADC Status
Usage
Current snapshot sorted by nodes' Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed.
Usage vs time, for length = 7d, 15d, or 30d, with an optional user highlighted.
As of Fri Dec 19 07:07:03 2025: #CPUs/nodes 5740/74, 2 down.
Loads:
head node: 1.92, login nodes: 0.00, 0.14, 0.04, 0.16; NSDs: 0.14, 0.04, 0.17, 4.10, 3.09; licenses: none used.
Queue status: none disabled, 18 need attention, none in error state.
13 users with running jobs (slots/jobs):
Current load: 546.5, #running (slots/jobs): 1,039/76, usage: 18.1%, efficiency: 52.6%
2 users with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 6/44/684
67 users have/had running or queued jobs over the past 7 days, 98 over the past 15 days, and 116 over the past 30 days.
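The usage and efficiency figures above are simple ratios of the snapshot numbers: usage compares occupied slots to the total CPU count, and efficiency compares the measured load to the occupied slots. A minimal sketch of that arithmetic in Python (values hard-coded from this snapshot; the variable names are illustrative, not part of mk-webpage.pl):

    # Headline figures from the snapshot above.
    total_cpus   = 5740    # #CPUs across all nodes
    slots_used   = 1039    # slots held by running jobs
    current_load = 546.5   # summed load reported by the nodes

    usage      = slots_used   / total_cpus * 100   # ~18.1%
    efficiency = current_load / slots_used * 100   # ~52.6%
    print(f"usage: {usage:.1f}%, efficiency: {efficiency:.1f}%")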
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory, and
view the past load for 7, 15, or 30 days, as well as highlight a given user, by
selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 19-Dec-2025 07:12:03 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:54.
Warnings
Oversubscribed Jobs
As of Fri Dec 19 07:07:03 EST 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Fri Dec 19 07:07:04 EST 2025 (25 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1039/76, 6 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11481231 AssembleBarcode breusingc +6:18 16 1.3% mThM.q 65-14 243
11481232 metaspades breusingc +8:12 16 6.4% mThM.q 76-13 107
11596723 ombro_ges_disc_ hchong +6:17 1 0.5% sThC.q 65-14 36809
11596723 ombro_ges_disc_ hchong +6:16 1 0.4% sThC.q 65-14 36916
11596723 ombro_ges_disc_ hchong +6:16 1 0.3% sThC.q 65-14 36938
(more by hchong)
11770485 arms.coi quattrinia +2:19 4 2.5% lThC.q 65-05
11770751 spades_genome_t gouldingt +2:02 20 17.0% mThC.q 64-14
11773687 jetformer_job ssanjaripour 03:13 32 15.6% lTgpu.q 50-01
⇒ Equivalent to 98.1 underused CPUs: 108 CPUs used at 9.2% on average.
To see them all, use:
'q+ -ineff -u hchong' (20)
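The filter above flags running jobs whose CPU utilization stays below 33% after more than an hour, and the "equivalent underused CPUs" line converts the wasted fraction into whole CPUs (108 × (1 − 0.092) ≈ 98.1 here). A sketch of that logic, assuming the average is slot-weighted; the job list is hypothetical, not read from q+:

    # Hypothetical (slots, cpu%, age in hours) records for running jobs.
    jobs = [(16, 1.3, 162.0), (16, 6.4, 196.0), (1, 0.5, 149.0), (32, 15.6, 3.2)]

    # Criterion used above: cpu% < 33% and age > 1h.
    inefficient = [(npes, cpu) for npes, cpu, age_h in jobs if cpu < 33.0 and age_h > 1.0]

    slots_held = sum(npes for npes, _ in inefficient)
    mean_cpu   = sum(npes * cpu for npes, cpu in inefficient) / slots_held
    underused  = slots_held * (1.0 - mean_cpu / 100.0)   # slots held but not doing work
    print(f"{slots_held} CPUs used at {mean_cpu:.1f}% on average "
          f"-> {underused:.1f} underused CPUs")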
Nodes with Excess Load
As of Fri Dec 19 07:07:05 EST 2025 (3 nodes have a high load, offset=1.5)
                  #slots          excess
node      #CPUs    used    load    load
-----------------------------------
76-04 192 28 39.3 11.3 *
76-06 128 0 1.6 1.6 *
76-14 128 0 1.6 1.6 *
Total excess load = 14.5
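A node is listed when its measured load exceeds the number of slots allocated on it by more than the offset (1.5 here); the excess column is load minus slots used. A minimal sketch with the three rows above as hard-coded input (the exact comparison used by the report script is assumed, not verified):

    # (node, #CPUs, slots used, load) copied from the table above.
    nodes = [("76-04", 192, 28, 39.3), ("76-06", 128, 0, 1.6), ("76-14", 128, 0, 1.6)]

    OFFSET = 1.5
    flagged = [(name, load - used) for name, _ncpu, used, load in nodes
               if load - used > OFFSET]
    for name, excess in flagged:
        print(f"{name}: excess load {excess:.1f}")
    print(f"Total excess load = {sum(e for _, e in flagged):.1f}")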
High Memory Jobs
Statistics
User               nSlots used     memory reserved    memory used      vmem used   maxvmem used     ratio
Name                [#]     [%]     [TB]      [%]      [TB]     [%]      [TB]        [TB]         resd/maxvm
--------------------------------------------------------------------------------------------------
nevesk 96 10.6% 4.6875 69.6% 0.0726 7.7% 0.0689 0.1361 34.4
uribeje 16 1.8% 0.7812 11.6% 0.2367 25.0% 0.3308 0.3328 2.3
niez 360 39.7% 0.3164 4.7% 0.1082 11.4% 0.1444 4.8116 0.1
breusingc 336 37.1% 0.2500 3.7% 0.3976 42.1% 0.3178 0.6692 0.4
bourkeb 8 0.9% 0.2500 3.7% 0.0068 0.7% 0.0040 0.0139 18.0
coellogarridoa 80 8.8% 0.2344 3.5% 0.0622 6.6% 0.0623 0.0625 3.8
gouldingt 10 1.1% 0.2148 3.2% 0.0615 6.5% 0.0056 0.1408 1.5
==================================================================================================
Total 906 6.7344 0.9456 0.9337 6.1669 1.1
Warnings
27 high memory jobs produced a warning:
1 for bourkeb
21 for breusingc
2 for coellogarridoa
1 for gouldingt
2 for uribeje
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d
Current Usage by Queue
  Queue group (slots in use per queue)              Total   Limit   Fill factor   Efficiency
  sThC.q=18    mThC.q=24    lThC.q=1     uThC.q=0      43    4928          0.9%      1192.7%
  sThM.q=0     mThM.q=698   lThM.q=176   uThM.q=16    890    4552         19.6%        57.3%
  sTgpu.q=0    mTgpu.q=0    lTgpu.q=64   qgpu.iq=0     64     104         61.5%        25.5%
  uTxlM.rq=0                                            0     536          0.0%
  lThMuVM.tq=0                                          0     384          0.0%
  lTb2g.q=0                                             0       2          0.0%
  lTIO.sq=0                                             0       8          0.0%
  lTWFM.sq=0                                            0       4          0.0%
  qrsh.iq=0                                             0      68          0.0%
  Total: 997
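The fill factor is the slot count in use divided by each queue group's slot limit (the limits reappear under Resource Limits below); the efficiency column additionally depends on measured load, which is not reproduced here. A short sketch of the fill-factor arithmetic:

    # (queue group, slots used, slot limit) from the table above.
    groups = [("hiCPU (?ThC.q)", 43, 4928),
              ("hiMem (?ThM.q)", 890, 4552),
              ("GPU  (?Tgpu.q)", 64, 104)]

    for name, used, limit in groups:
        print(f"{name:16s} fill factor = {used / limit * 100:5.1f}%")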
Avail Slots/Wait Job(s)
Available Slots
As of Fri Dec 19 07:07:04 EST 2025
4145 avail(slots), free(load)=5113.4, unresd(mem)=33400.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 4145 unused(load) 5113.4 ie: 81.0% 99.9%
unreserved(mem) 32.6T unused(mem) 38.0T ie: 81.9% 95.5%
unreserved(mem) 8.1G unused(mem) 9.4G per unused(slots)
3743 avail(slots), free(load)=4673.9, unresd(mem)=29563.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3743 unused(load) 4673.9 ie: 80.0% 99.9%
unreserved(mem) 28.9T unused(mem) 34.1T ie: 80.7% 95.4%
unreserved(mem) 7.9G unused(mem) 9.3G per unused(slots)
536 avail(slots), free(load)=536.0, unresd(mem)=8063.0G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 536 unused(load) 536.0 ie: 100.0% 100.0%
unreserved(mem) 7.9T unused(mem) 7.8T ie: 100.0% 99.2%
unreserved(mem) 15.0G unused(mem) 14.9G per unused(slots)
40 avail(slots), free(load)=40.0, unresd(mem)=250.9G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 40 unused(load) 103.8 ie: 38.5% 99.8%
unreserved(mem) 0.7T unused(mem) 0.3T ie: 99.4% 37.1%
unreserved(mem) 18.8G unused(mem) 7.0G per unused(slots)
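The per-slot figures at the end of each block are the free-memory totals divided by the number of unused slots, a quick gauge of how much memory a new job can reserve per slot without waiting. A sketch using the @hicpu-hosts numbers above (assuming 1 T = 1024 G, as the report's units suggest):

    TB = 1024.0  # GB per TB

    # @hicpu-hosts block above.
    unused_slots   = 4145
    unreserved_mem = 32.6 * TB   # GB
    unused_mem     = 38.0 * TB   # GB

    print(f"unreserved {unreserved_mem / unused_slots:.1f}G, "
          f"unused {unused_mem / unused_slots:.1f}G per unused slot")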
GPU Usage
Fri Dec 19 07:07:10 EST 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 451.7 51.6 - 4/2 - 64 64 16.3 - 0 47.7
compute-79-01 - 125.5 12.0 113.5 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 11.0 114.5 - 2/0 - 20 0 0.0 - 20 20.0
Total GPU=8, used=2 (25.0%)
Waiting Job(s)
As of Fri Dec 19 07:07:05 EST 2025
1 job waiting for breusingc :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11481232 metaspades breusingc +9:19 16 0.0 mThM.q 534-572:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/2 slots=336/585 57.4% for breusingc in queue mThM.q
max_slots_per_user/1 slots=336/840 40.0% for breusingc
max_mem_res_per_user/2 mem_res=256.0G/8.944T 2.8% for breusingc in queue uThM.q
------------------- ------------------------------- ------
5 jobs waiting for nevesk :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11773926 spadescript nevesk 00:20 12 600.0 lThM.q
11773927 spadescript nevesk 00:20 12 600.0 lThM.q
11773928 spadescript nevesk 00:20 12 600.0 lThM.q
11773929 spadescript nevesk 00:05 12 600.0 lThM.q
11773930 spadescript nevesk 00:05 12 600.0 lThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=4.688T/8.944T 52.4% for nevesk in queue uThM.q
max_hM_slots_per_user/3 slots=96/390 24.6% for nevesk in queue lThM.q
max_slots_per_user/1 slots=96/840 11.4% for nevesk
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus/1 GPUS=2/8 25.0% for * in queue lTgpu.q
total_mem_res/2 mem_res=6.734T/35.78T 18.8% for * in queue uThM.q
total_slots/1 slots=1039/5960 17.4% for *
total_mem_res/1 mem_res=482.0G/39.94T 1.2% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d
Current Memory Quota Usage
As of Fri Dec 19 07:07:05 EST 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=482.0G/39.94T 1.2% for * in queue uThC.q
total_mem_res/2 mem_res=6.734T/35.78T 18.8% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 11.6 0.2 - 491.9 503.3 - 32 0 0.9 - 32 31.1
compute-64-18 - 503.5 14.3 0.2 - 489.2 503.3 - 32 16 16.1 - 16 15.9
compute-65-02 - 503.5 19.0 0.0 - 484.5 503.5 - 64 16 16.1 - 48 48.0
compute-65-03 - 503.5 17.3 0.0 - 486.2 503.5 - 64 16 16.0 - 48 48.0
compute-65-04 - 503.5 16.4 27.0 - 487.1 476.5 - 64 30 2.0 - 34 62.0
compute-65-06 - 503.5 14.8 0.0 - 488.7 503.5 - 64 0 0.5 - 64 63.5
compute-65-07 - 503.5 16.8 27.0 - 486.7 476.5 - 64 30 2.1 - 34 61.9
compute-65-09 - 503.5 14.2 0.0 - 489.3 503.5 - 64 0 0.2 - 64 63.8
compute-65-10 - 503.5 49.9 0.0 - 453.6 503.5 - 64 16 12.9 - 48 51.1
compute-65-11 - 503.5 19.4 0.0 - 484.1 503.5 - 64 16 16.1 - 48 48.0
compute-65-12 - 503.5 228.3 400.0 - 275.2 103.5 - 64 8 4.7 - 56 59.3
compute-65-13 - 503.5 24.3 0.0 - 479.2 503.5 - 64 16 16.1 - 48 47.9
compute-65-15 - 503.5 16.2 27.0 - 487.3 476.5 - 64 30 2.6 - 34 61.4
compute-65-16 - 503.5 31.1 0.0 - 472.4 503.5 - 64 16 13.6 - 48 50.5
compute-65-17 - 503.5 18.0 0.0 - 485.5 503.5 - 64 16 16.0 - 48 48.0
compute-65-18 - 503.5 16.9 0.0 - 486.6 503.5 - 64 16 16.1 - 48 47.9
compute-65-19 - 503.5 13.4 0.0 - 490.1 503.5 - 64 0 0.2 - 64 63.8
compute-65-20 - 503.5 14.6 6.0 - 488.9 497.5 - 64 1 1.0 - 63 63.0
compute-65-21 - 503.5 16.4 0.0 - 487.1 503.5 - 64 16 16.1 - 48 47.9
compute-65-22 - 503.5 17.5 27.0 - 486.0 476.5 - 64 30 2.3 - 34 61.7
compute-65-23 - 503.5 16.3 0.0 - 487.2 503.5 - 64 16 16.0 - 48 48.0
compute-65-24 - 503.5 19.6 0.0 - 483.9 503.5 - 64 16 16.0 - 48 48.0
compute-65-25 - 503.5 14.7 0.0 - 488.8 503.5 - 64 0 0.3 - 64 63.7
compute-65-26 - 503.5 14.5 0.0 - 489.0 503.5 - 64 0 0.2 - 64 63.8
compute-65-27 - 503.5 14.8 27.0 - 488.7 476.5 - 64 30 2.2 - 34 61.8
compute-65-28 - 503.5 19.1 32.0 - 484.4 471.5 - 64 4 4.0 - 60 60.0
compute-65-29 - 503.5 34.7 0.0 - 468.8 503.5 - 64 16 14.6 - 48 49.4
compute-65-30 - 503.5 15.5 27.0 - 488.0 476.5 - 64 30 2.2 - 34 61.8
compute-75-01 - 1007.5 20.4 256.1 - 987.1 751.4 - 128 8 1.2 - 120 126.8
compute-75-02 - 1007.5 27.7 600.0 - 979.8 407.5 - 128 28 24.4 - 100 103.6
compute-75-03 - 755.5 35.3 600.0 - 720.2 155.5 - 128 28 24.2 - 100 103.8
compute-75-04 - 755.0 154.1 599.5 - 600.9 155.5 - 128 28 26.0 - 100 102.0
compute-75-05 - 755.5 34.7 600.0 - 720.8 155.5 - 128 12 8.4 - 116 119.6
compute-75-06 - 755.5 45.4 600.0 - 710.1 155.5 - 128 28 24.1 - 100 103.9
compute-75-07 - 755.5 19.7 600.0 - 735.8 155.5 - 128 28 25.1 - 100 102.9
compute-76-03 - 1007.4 23.9 627.5 - 983.5 379.9 - 128 42 11.6 - 86 116.5
compute-76-04 - 1007.4 32.4 600.0 - 975.0 407.4 - 128 28 26.2 - 100 101.8
compute-76-05 - 1007.4 19.6 0.0 - 987.8 1007.4 - 128 0 0.2 - 128 127.8
compute-76-06 - 1007.4 18.4 0.0 - 989.0 1007.4 - 128 0 1.6 - 128 126.5
compute-76-07 - 1007.4 33.5 120.0 - 973.9 887.4 - 128 40 40.1 - 88 87.9
compute-76-08 - 1007.4 19.9 0.0 - 987.5 1007.4 - 128 0 0.4 - 128 127.6
compute-76-09 - 1007.4 18.9 0.0 - 988.5 1007.4 - 128 0 0.3 - 128 127.7
compute-76-10 - 1007.4 22.9 27.0 - 984.5 980.4 - 128 30 2.1 - 98 125.8
compute-76-11 - 1007.4 21.2 27.0 - 986.2 980.4 - 128 30 3.1 - 98 124.9
compute-76-12 - 1007.4 18.5 0.0 - 988.9 1007.4 - 128 0 0.7 - 128 127.3
compute-76-13 - 1007.4 196.7 520.0 - 810.7 487.4 - 128 64 44.9 - 64 83.1
compute-76-14 - 1007.4 18.1 0.0 - 989.3 1007.4 - 128 0 1.6 - 128 126.4
compute-84-01 - 881.1 96.4 0.0 - 784.7 881.1 - 112 0 0.6 - 112 111.4
compute-93-01 - 503.8 15.2 27.0 - 488.6 476.8 - 64 30 1.3 - 34 62.7
compute-93-02 - 755.6 18.0 27.0 - 737.6 728.6 - 72 30 4.3 - 42 67.7
compute-93-03 - 755.6 21.3 220.0 - 734.3 535.6 - 72 10 10.1 - 62 61.9
compute-93-04 - 755.6 16.3 27.0 - 739.3 728.6 - 72 30 1.3 - 42 70.7
======= ===== ====== ==== ==== =====
Totals 35630.5 1688.1 6678.5 4552 895 510.7
==> 4.7% 18.7% ==> 19.7% 11.2%
Most unreserved/unused memory (1007.4/987.8GB) is on compute-76-05 with 128/127.8 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 16.3 -0.0 - 1495.1 1511.4 - 192 0 0.1 - 192 191.9
compute-76-02 - 1511.4 14.0 -0.0 - 1497.4 1511.4 - 192 0 0.1 - 192 191.9
compute-93-05 - 2016.3 16.5 0.0 - 1999.8 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 14.3 0.0 - 3009.6 3023.9 - 56 0 0.0 - 56 56.0
======= ===== ====== ==== ==== =====
Totals 8063.0 61.1 0.0 536 0 0.2
==> 0.8% 0.0% ==> 0.0% 0.0%
Most unreserved/unused memory (3023.9/3009.6GB) is on compute-93-06 with 56/56.0 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 12/10/25 and 12/17/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
graujh 2/64 0.00 1.5 240.0 0.0 0.0 6403.2 > 2.5
ariasc 6/55 0.00 9.1 254.3 0.6 0.0 461.3 > 2.5
babb 2/2 0.00 98.4 19.8 0.3 0.2 77.0 > 2.5
hpc 1/1 0.00 0.1 0.0 0.0 0.0 0.0
auscavitchs 13/52 0.00 62.4 64.0 18.6 4.7 3.4 > 2.5
bourkeb 7/56 0.02 15.0 256.0 195.9 13.4 1.3
kweskinm 39/180 0.03 29.8 60.8 35.1 1.8 1.7
wirshingh 41/212 0.03 30.8 60.4 28.9 2.5 2.1
longk 37/196 0.03 29.9 60.3 29.9 2.5 2.0
castanedaricos 36/190 0.03 29.1 60.3 35.0 2.9 1.7
seim 29/140 0.03 21.5 60.5 33.8 1.6 1.8
collensab 37/196 0.03 26.9 60.3 28.3 2.2 2.1
kramerb 41/212 0.03 29.3 60.4 29.4 2.7 2.1
vagac 45/236 0.03 29.9 60.4 24.7 2.2 2.4
macdonaldk 37/196 0.03 27.6 60.3 27.2 2.2 2.2
floresm 37/196 0.04 27.4 60.3 31.6 2.8 1.9
hinckleya 7/18 0.04 80.9 105.6 1.6 1.5 65.2 > 2.5
bushsl 50/258 0.04 30.4 60.4 27.9 3.3 2.2
rotzeln 59/308 0.05 29.8 60.4 26.3 2.9 2.3
hawkinsmt 56/272 0.05 73.0 59.3 20.5 2.4 2.9 > 2.5
capadorhd 10/31 0.05 11410.2 64.0 11.5 11.3 5.6 > 2.5
morrisseyd 65/356 0.06 28.4 67.2 29.0 1.7 2.3
lyonss 38/194 0.07 22.8 60.2 31.9 0.8 1.9
murphykr 49/260 0.07 23.9 60.2 31.4 1.4 1.9
zarril 61/300 0.08 39.7 91.2 27.0 3.3 3.4 > 2.5
granquistm 53/276 0.08 23.0 55.4 29.9 1.3 1.9
macguigand 50/262 0.14 40.4 101.3 29.2 3.8 3.5 > 2.5
craigc 46/242 0.17 19.6 60.1 33.0 0.5 1.8
bennettkf 16/64 0.22 8.5 128.0 68.4 7.2 1.9
bornbuschs 2/16 0.28 80.0 320.0 62.0 15.1 5.2 > 2.5
kimcj 10/10 0.31 504.2 30.9 12.9 4.8 2.4
grossc2 211/1234 0.41 55.9 60.0 26.4 3.8 2.3
carlsenm 4/50 0.42 72.5 281.4 206.6 2.4 1.4
jhora 10/188 0.42 67.9 246.8 137.3 3.1 1.8
roa-varona 2/40 0.52 35.3 320.0 204.0 8.4 1.6
zehnpfennigj 42/232 0.68 60.8 193.3 199.5 20.1 1.0
mcgowenm 5/30 0.93 16.6 90.0 7.2 5.9 12.4 > 2.5
bakerd 1/8 1.05 12.6 400.0 0.1 0.0 4492.0 > 2.5
nevesk 639/5812 1.20 70.0 599.7 18.2 7.8 32.9 > 2.5
santosbe 24/520 1.44 93.1 123.8 25.7 9.8 4.8 > 2.5
gouldingt 11/178 1.65 35.2 203.8 50.1 31.3 4.1 > 2.5
qzhu 25/25 2.27 101.4 100.0 16.2 9.6 6.2 > 2.5
gallego-narbona 257/1726 2.45 108.6 631.9 40.8 3.1 15.5 > 2.5
campanam 12/192 3.15 60.3 212.8 115.0 40.2 1.9
horowitzj 4000/4120 3.16 78.6 47.8 5.3 1.1 9.1 > 2.5
coellogarridoa 3/120 3.18 98.9 120.0 8.6 7.3 14.0 > 2.5
uribeje 24/210 3.30 58.2 110.8 24.2 6.8 4.6 > 2.5
lealc 2/32 3.89 87.1 200.0 126.6 5.2 1.6
johnsonsj 147/800 4.28 39.8 69.2 44.4 9.8 1.6
beckerm 578/3794 5.20 64.7 11.6 27.6 18.8 0.4
mghahrem 15/120 5.39 74.7 536.4 180.7 17.6 3.0 > 2.5
byerlyp 45/330 21.95 23.4 28.3 5.7 2.2 5.0 > 2.5
collinsa 376/5180 29.49 79.7 322.0 96.5 24.5 3.3 > 2.5
medeirosi 200/410 103.27 70.1 120.7 48.0 0.3 2.5 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 7615/30402 201.76 69.7 160.8 54.4 6.8 3.0 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
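The last column flags users whose mean reserved memory exceeds 2.5 times the mean maxvmem their jobs actually reached, i.e. reservations far larger than what was used. A minimal sketch of that flag, with a few rows from the ?ThM.q table as hard-coded input:

    # (user, mean reserved GB, mean maxvmem GB) from the ?ThM.q table above.
    rows = [("nevesk", 599.7, 18.2), ("bourkeb", 256.0, 195.9), ("collinsa", 322.0, 96.5)]

    THRESHOLD = 2.5  # ratio above which the report prints "> 2.5"
    for user, reserved, maxvmem in rows:
        ratio = reserved / maxvmem
        flag = " > 2.5" if ratio > THRESHOLD else ""
        print(f"{user:10s} resd/maxvmem = {ratio:5.1f}{flag}")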
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
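These limits cap what any single user can hold at once; a job that would push a user past them waits in the queue. The sketch below illustrates the kind of per-user check involved, using the hiMem numbers above (840 slots overall, 9159G reserved memory across the ?ThM.q queues); it is an illustration only, not the scheduler's actual implementation:

    # Per-user limits copied from the listing above.
    MAX_SLOTS_PER_USER = 840    # "users {*} to slots=840"
    MAX_HM_MEM_RES_GB  = 9159   # "users {*} queues sThM.q,...,uThM.q to mem_res=9159G"

    def fits_user_quota(cur_slots, cur_mem_res_gb, req_slots, req_mem_res_gb):
        # True if a new request keeps the user within the hiMem per-user limits.
        return (cur_slots + req_slots <= MAX_SLOTS_PER_USER and
                cur_mem_res_gb + req_mem_res_gb <= MAX_HM_MEM_RES_GB)

    # Example: a user holding 96 slots / 4800G asks for 12 more slots at 600G total.
    print(fits_user_quota(96, 4800, 12, 600))   # True: 108 slots and 5400G are within limits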
Disk Usage & Quota
As of Fri Dec 19 05:06:02 EST 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 16.53T 5.83T 74%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 44.23T 288.27T 14%/2% /data/public
gpfs02:public 800.00T 448.74T 351.26T 57%/28% /scratch/public
gpfs02:nmnh_bradys 25.00T 18.49T 6.51T 74%/58% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 98.25T 21.75T 82%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 18.85T 6.15T 76%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 47.23T 12.77T 79%/22% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.44T 13.56T 46%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 28.66T 6.34T 82%/39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 50.29T 9.71T 84%/25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 53.36T 36.64T 60%/2% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 30.90T 14.10T 69%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 3.09T 6.91T 31%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89%/2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 235.93T 114.07T 68%/10% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 95.00T 60.93T 34.07T 65%/14% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 20.00T 15.00T 58%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 12.87T 2.13T 86%/19% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 87.05T 57.95T 61%/60% /scratch/sylvain
gpfs02:usda_sel 25.00T 5.48T 19.52T 22%/30% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.70T 9.30T 82%/14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 101.73T 73.27T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.30T 25.70T 36%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 14.60T 25.40T 37%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 28.41T 41.59T 41%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 265.00T 112.28T 152.72T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.50T 19.50T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 3.07T 46.93T 7%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 390.82T 359.18T 53%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.00T 12.00T 41%/1% /store/admin
You can view plots of disk use vs time for the past 7, 30, or 120 days, as well as
plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to keep track of new files, even though disk space remains.
The % of inodes used should be lower than or comparable to the % of disk space used;
if it is much larger, the disk can become unusable before it gets full.
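To check both percentages yourself from a login node, the statvfs counters of a mount point give used blocks and used inodes. A small sketch (the path is an example taken from the table below):

    import os

    def space_and_inode_usage(path):
        # Return (% disk space used, % inodes used) for the filesystem holding path.
        st = os.statvfs(path)
        pct_space  = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks
        pct_inodes = 100.0 * (st.f_files - st.f_ffree) / st.f_files
        return pct_space, pct_inodes

    print("%.0f%% space, %.0f%% inodes used" % space_and_inode_usage("/scratch/public"))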
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.91TB 86.9% 0.00M 0.0% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 363.8GB 94.7% 0.28M 2.8% Juan Uribe, NMNH - uribeje
/home 359.8GB 93.7% 2.84M 28.4% Brian Bourke, WRBU - bourkeb
/home 359.1GB 93.5% 2.10M 21.0% Michael Trizna, NMNH/BOL - triznam
/home 331.6GB 86.4% 0.26M 2.6% Paul Cristofari, SAO/SSP - pcristof
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 0.0% *** Ting Wang, NMNH - wangt2
/scratch/public 15.60TB 104.0% 26.07M 0.0% *** Zelong Nie, NMNH - niez
/scratch/public 15.00TB 100.0% 0.00M 0.0% *** Rebeka Tamasi Bottger, SAO/OIR - rbottger
/scratch/public 14.90TB 99.3% 1.55M 0.0% *** Juan Uribe, NMNH - uribeje
/scratch/public 14.30TB 95.3% 0.11M 0.0% *** Madeleine Becker, NZCBI - beckerm
/scratch/public 14.20TB 94.7% 0.08M 0.2% Qindan Zhu, SAO/AMP - qzhu
/scratch/public 14.20TB 94.7% 4.24M 0.0% Kevin Mulder, NZP - mulderk
/scratch/public 13.50TB 90.0% 2.09M 0.0% Solomon Chak, SERC - chaks
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.91TB 98.3% - - *** Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
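The *** markers flag users at or near their disk quota; judging from the rows above the cutoff is roughly 95% of the quota, though the exact threshold used by the report script is an assumption here. A sketch of such a flag:

    # (user, usage TB, quota TB) for a few /scratch/public rows above (default quota 15 TB).
    rows = [("wangt2", 17.2, 15.0), ("beckerm", 14.3, 15.0), ("qzhu", 14.2, 15.0)]

    FLAG_AT = 95.0  # assumed percentage threshold for the "***" marker
    for user, used, quota in rows:
        pct = 100.0 * used / quota
        marker = "***" if pct >= FLAG_AT else "   "
        print(f"{marker} {user:10s} {pct:6.1f}% of quota")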
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.71T 43.0G 1.67T 2.5% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-02 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-03 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-04 414.7G 21.2G 393.4G 5.1% | 0.0G 3.49T 0.0% | 0.00
65-05 414.7G 21.2G 393.4G 5.1% | 0.0G 3.49T 0.0% | 0.00
65-06 414.7G 21.1G 393.6G 5.1% | 0.0G 3.49T 0.0% | 0.00
65-09 3.46T 45.1G 3.42T 1.3% | 0.0G 3.49T 0.0% | 0.00
65-10 414.7G 35.7G 378.9G 8.6% | 0.0G 1.55T 0.0% | 0.00
65-11 414.7G 22.6G 392.0G 5.5% | 0.0G 1.55T 0.0% | 0.00
65-12 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-13 414.7G 21.6G 393.1G 5.2% | 0.0G 1.55T 0.0% | 0.00
65-14 414.7G 21.6G 393.1G 5.2% | 0.0G 1.55T 0.0% | 0.00
65-15 414.7G 21.3G 393.4G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-16 414.7G 83.4G 331.2G 20.1% | 0.0G 1.55T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-18 414.7G 21.0G 393.7G 5.1% | 0.0G 1.55T 0.0% | 0.00
65-19 414.7G 22.8G 391.9G 5.5% | 0.0G 1.75T 0.0% | 0.00
65-20 414.7G 21.1G 393.6G 5.1% | 414.7G 0.0G 100.0% | 19.69
65-21 414.7G 21.1G 393.6G 5.1% | 0.0G 1.55T 0.0% | 0.00
65-22 414.7G 21.1G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-23 414.7G 22.9G 391.8G 5.5% | 0.0G 1.55T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-25 414.7G 20.9G 393.8G 5.0% | 414.7G 0.0G 100.0% | 19.86
65-26 414.7G 21.1G 393.6G 5.1% | 414.7G 0.0G 100.0% | 19.69
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-29 414.7G 21.1G 393.6G 5.1% | 0.0G 1.55T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 53.2G 6.93T 0.7% | 400.4G 6.59T 5.6% | 7.52
75-03 414.7G 21.1G 393.6G 5.1% | 0.0G 6.59T 0.0% | 0.00
75-04 6.95T 98.3G 6.85T 1.4% | 367.6G 6.59T 5.2% | 3.74
75-05 6.98T 57.3G 6.93T 0.8% | 199.7G 6.79T 2.8% | 3.48
75-06 6.95T 72.7G 6.88T 1.0% | 367.6G 6.59T 5.2% | 5.06
75-07 6.95T 67.6G 6.88T 0.9% | 367.6G 6.59T 5.2% | 5.44
76-03 1.75T 21.5G 1.72T 1.2% | 199.7G 1.55T 11.2% | 9.29
76-04 1.75T 12.3G 1.73T 0.7% | 400.4G 1.35T 22.4% | 32.58
76-13 1.75T 101.4G 1.65T 5.7% | 199.7G 1.55T 11.2% | 1.97
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.95T 68.6G 6.88T 1.0% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 94.42T 1.37T 93.05T 1.5% | 4.63T 122.3T 4.9% | 3.38
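The Resd/Used column is reserved SSD space divided by space actually in use, so large values indicate jobs reserving far more local SSD than they touch. A quick sketch with two rows from the table; small differences from the reported ratios come from the rounding of the displayed sizes:

    # (node, used GB, reserved GB) from the SSD table above.
    rows = [("64-18", 24.6, 199.7), ("76-04", 12.3, 400.4)]

    for node, used, reserved in rows:
        print(f"{node}: Resd/Used = {reserved / used:.2f}")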
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.