Hydra-7@ADC Status
Usage
As of Sat Nov 15 21:37:49 2025: #CPUs/nodes 5868/74, 0 down.
Loads:
head node: 1.97, login nodes: 1.27, 0.07, 27.69, 0.10; NSDs: 0.00, 0.01, 1.05, 3.38, 2.94; licenses: none used.
Queue status: none disabled, none need attention, none in error state.
12 users with running jobs (slots/jobs):
Current load: 827.3, #running (slots/jobs): 1,281/742, usage: 21.8%, efficiency: 64.6%
1 user with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 1/235/235
55 users have/had running or queued jobs over the past 7 days, 68 over the past 15 days, and 86 over the past 30 days.
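The usage and efficiency figures above are simple ratios of the snapshot numbers. A minimal sketch in Python, assuming usage = running slots / total CPUs and efficiency = current load / running slots (both consistent with the values reported):

  # Recompute the summary ratios from the snapshot figures (assumed formulas).
  total_cpus    = 5868    # "#CPUs/nodes 5868/74"
  running_slots = 1281    # "#running (slots/jobs): 1,281/742"
  current_load  = 827.3   # "Current load: 827.3"

  usage      = running_slots / total_cpus     # ~0.218 -> 21.8%
  efficiency = current_load / running_slots   # ~0.646 -> 64.6%
  print(f"usage: {usage:.1%}, efficiency: {efficiency:.1%}")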
Click on the tabs to view each section, on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user by selecting the
corresponding options in the drop-down menus.
This page was last updated on Saturday, 15-Nov-2025 21:41:44 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:35.
Warnings
Oversubscribed Jobs
As of Sat Nov 15 21:37:53 EST 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Sat Nov 15 21:37:56 EST 2025 (5 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1152/613, 1 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11034213 vitis_cactus_si niez +4:17 110 15.1% mThC.q 84-01
11042543 xadm uribeje +2:22 20 5.0% lThM.q 65-06
11042546 xadm uribeje +2:22 20 5.0% lThM.q 65-20
11043155 Delphinidae_IQT mcgowenm +1:12 6 16.6% lThM.q 75-02
11043465 earthaccess_202 ggonzale 01:37 1 8.6% lTIO.sq 64-16
⇒ Equivalent to 137.3 underused CPUs: 157 CPUs used at 12.5% on average.
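The "underused CPUs" figure can be reproduced from the table above: it is the slot-weighted shortfall below 100% CPU use across the inefficient jobs. A minimal sketch, assuming that is how the report derives it:

  # (slots, cpu% as a fraction) for the five jobs listed above
  jobs = [(110, 0.151), (20, 0.05), (20, 0.05), (6, 0.166), (1, 0.086)]

  slots     = sum(n for n, _ in jobs)               # 157 CPUs held by inefficient jobs
  mean_cpu  = sum(n * c for n, c in jobs) / slots   # ~12.5% average CPU use
  underused = sum(n * (1 - c) for n, c in jobs)     # ~137.3 "underused" CPUs
  print(f"{slots} CPUs used at {mean_cpu:.1%} on average -> {underused:.1f} underused")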
Nodes with Excess Load
As of Sat Nov 15 21:38:09 EST 2025 (39 nodes have a high load, offset=1.5)
node     #CPUs   #slots used   load   excess load
-------------------------------------------------
64-03 40 0 4.0 4.0 *
64-04 40 0 3.9 3.9 *
64-06 40 0 4.0 4.0 *
64-07 40 0 3.7 3.7 *
64-08 40 3 5.0 2.0 *
64-09 40 1 4.2 3.2 *
64-12 40 0 3.5 3.5 *
64-17 32 1 3.5 2.5 *
64-18 32 1 3.6 2.6 *
65-03 64 0 6.3 6.3 *
65-05 64 4 6.8 2.8 *
65-10 64 5 7.1 2.1 *
65-11 64 1 6.7 5.7 *
65-12 64 0 6.2 6.2 *
65-13 64 5 7.3 2.3 *
65-14 64 1 6.7 5.7 *
65-17 64 1 6.7 5.7 *
65-19 64 1 6.7 5.7 *
65-22 64 5 7.1 2.1 *
65-24 64 2 6.9 4.9 *
65-25 64 3 6.8 3.8 *
65-28 64 5 6.6 1.6 *
65-29 64 5 6.7 1.7 *
65-30 64 1 6.8 5.8 *
75-02 128 6 13.3 7.3 *
75-04 128 12 15.1 3.1 *
75-06 128 3 13.5 10.5 *
75-07 128 10 13.7 3.7 *
76-03 192 13 20.6 7.6 *
76-05 128 4 14.1 10.1 *
76-07 128 5 13.5 8.5 *
76-08 128 9 14.1 5.1 *
76-09 128 10 14.0 4.0 *
76-11 128 3 13.5 10.5 *
76-13 128 12 13.8 1.8 *
76-14 128 10 13.7 3.7 *
93-01 64 1 6.7 5.7 *
93-02 72 1 7.2 6.2 *
93-03 72 6 7.5 1.5 *
Total excess load = 180.8
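The excess load is the node's measured load minus the slots in use; a node is listed once that excess reaches the offset (1.5 here). A minimal sketch of the check, using a few of the nodes above:

  OFFSET = 1.5  # threshold used in the report above

  def excess_load(load: float, slots_used: int) -> float:
      """Load not accounted for by scheduled slots (e.g. 64-08: 5.0 - 3 = 2.0)."""
      return load - slots_used

  nodes = {"64-08": (5.0, 3), "75-02": (13.3, 6), "93-03": (7.5, 6)}
  flagged = {name: excess_load(load, used) for name, (load, used) in nodes.items()
             if excess_load(load, used) >= OFFSET}
  print(flagged)                # excess of 2.0, 7.3 and 1.5 respectively
  print(sum(flagged.values()))  # these nodes' contribution to the total excess load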
High Memory Jobs
Statistics
User name        nSlots used (%)   memory reserved (%)   memory used (%)   vmem used   maxvmem used   ratio resd/maxvmem
                 (all memory values in TB)
--------------------------------------------------------------------------------------------------
pappalardop 30 22.2% 8.7891 81.5% 0.0744 21.0% 0.0950 0.0951 92.4
mcgowenm 6 4.4% 0.8789 8.1% 0.0088 2.5% 0.0067 0.0226 38.9
uribeje 40 29.6% 0.7812 7.2% 0.0287 8.1% 0.0378 0.0511 15.3
hinckleya 2 1.5% 0.2930 2.7% 0.0652 18.4% 0.1853 0.2781 1.1
graujh 56 41.5% 0.0293 0.3% 0.1781 50.1% 0.9508 1.1640 0.0
vohsens 1 0.7% 0.0156 0.1% 0.0000 0.0% 0.0000 0.0000 328.5
==================================================================================================
Total 135 10.7871 0.3552 1.2756 1.6109 6.7
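The last column is the ratio of memory reserved to the peak virtual memory actually reached (resd/maxvmem): values well above 1 mean the jobs reserve far more memory than they use. A small sketch with values (in TB) taken from the table above:

  # reserved vs. peak virtual memory, in TB, from the statistics table above
  reserved = {"pappalardop": 8.7891, "uribeje": 0.7812, "graujh": 0.0293}
  maxvmem  = {"pappalardop": 0.0951, "uribeje": 0.0511, "graujh": 1.1640}

  for user in reserved:
      ratio = reserved[user] / maxvmem[user]
      print(f"{user:14s} resd/maxvmem = {ratio:5.1f}")
  # pappalardop ~92.4 (heavily over-reserved), graujh ~0.0 (reservation below actual use)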
Warnings
6 high memory jobs produced a warning:
1 for graujh
1 for hinckleya
1 for mcgowenm
1 for pappalardop
2 for uribeje
Details for each job can be found here.
Breakdown by Queue
Current Usage by Queue
Queue          Slots used | Group total   Limit   Fill factor   Efficiency
sThC.q              687   |
mThC.q              448   |
lThC.q                0   |
uThC.q                0   |        1135    5056       22.4%        72.1%
sThM.q                1   |
mThM.q               86   |
lThM.q               48   |
uThM.q                0   |         135    4680        2.9%       577.3%
sTgpu.q               0   |
mTgpu.q               1   |
lTgpu.q               0   |
qgpu.iq               0   |           1     104        1.0%       128.0%
uTxlM.rq              0   |           0     536        0.0%
lThMuVM.tq            0   |           0     384        0.0%
lTb2g.q               0   |           0       2        0.0%
lTIO.sq               1   |           1       8       12.5%         3.1%
lTWFM.sq              0   |           0       4        0.0%
qrsh.iq               0   |           0      68        0.0%
Total:             1272
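The fill factor for each queue group is its occupied slots divided by its slot limit (the efficiency column is reported by the script and is not recomputed here). A minimal sketch:

  # occupied slots and slot limits per queue group, from the table above
  groups = {
      "ThC (hiCPU)": (1135, 5056),
      "ThM (hiMem)": ( 135, 4680),
      "Tgpu (GPU)":  (   1,  104),
      "lTIO.sq":     (   1,    8),
  }
  for name, (used, limit) in groups.items():
      print(f"{name:12s} fill factor = {used / limit:.1%}")  # 22.4%, 2.9%, 1.0%, 12.5%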
Avail Slots/Wait Job(s)
Available Slots
As of Sat Nov 15 21:37:58 EST 2025
4005 avail(slots), free(load)=4983.7, unresd(mem)=26557.4G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 4036 unused(load) 5110.7 ie: 78.8% 99.8%
unreserved(mem) 26.2T unused(mem) 38.2T ie: 65.8% 95.9%
unreserved(mem) 6.6G unused(mem) 9.7G per unused(slots)
3641 avail(slots), free(load)=4544.9, unresd(mem)=22561.5G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3669 unused(load) 4671.9 ie: 78.4% 99.8%
unreserved(mem) 22.3T unused(mem) 34.3T ie: 62.3% 95.8%
unreserved(mem) 6.2G unused(mem) 9.6G per unused(slots)
535 avail(slots), free(load)=536.0, unresd(mem)=8046.6G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 535 unused(load) 536.0 ie: 99.8% 100.0%
unreserved(mem) 7.9T unused(mem) 7.8T ie: 99.8% 99.2%
unreserved(mem) 15.0G unused(mem) 15.0G per unused(slots)
103 avail(slots), free(load)=104.0, unresd(mem)=752.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 103 unused(load) 104.0 ie: 99.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 99.7% 93.0%
unreserved(mem) 7.3G unused(mem) 6.8G per unused(slots)
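The per-unused-slot figures are the unreserved (or unused) memory divided by the number of unused slots. A minimal sketch using the @hicpu-hosts numbers above (TB converted to GB):

  unreserved_mem_tb = 26.2    # unreserved(mem) for @hicpu-hosts
  unused_mem_tb     = 38.2    # unused(mem)     for @hicpu-hosts
  unused_slots      = 4036

  per_slot_unresd = unreserved_mem_tb * 1024 / unused_slots  # ~6.6 GB per unused slot
  per_slot_unused = unused_mem_tb     * 1024 / unused_slots  # ~9.7 GB per unused slot
  print(f"{per_slot_unresd:.1f}G unreserved, {per_slot_unused:.1f}G unused per unused slot")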
GPU Usage
Sat Nov 15 21:38:23 EST 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 29.3 474.0 - 4/1 - 64 1 1.1 - 63 62.9
compute-79-01 - 125.5 12.9 112.6 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 10.8 114.7 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=1 (12.5%)
Waiting Job(s)
As of Sat Nov 15 21:38:08 EST 2025
1 job waiting for pappalardop :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11042796 soakblend_bold pappalardop +2:00 1 300.0 mThM.q 285-519:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.789T/8.944T 98.3% for pappalardop in queue uThM.q
max_hM_slots_per_user/2 slots=30/585 5.1% for pappalardop in queue mThM.q
max_slots_per_user/1 slots=30/840 3.6% for pappalardop
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=110/110 100.0% for *
total_mem_res/2 mem_res=10.79T/35.78T 30.1% for * in queue uThM.q
total_slots/1 slots=924/5960 15.5% for *
total_gpus/1 GPUS=1/8 12.5% for * in queue mTgpu.q
total_mem_res/1 mem_res=2.549T/39.94T 6.4% for * in queue uThC.q
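Each quota line's %used is simply the consumed value divided by the rule's limit. A minimal sketch (pct_used is a hypothetical helper, not part of the scheduler):

  def pct_used(value: float, limit: float) -> float:
      """Fraction of a quota rule consumed (value / limit)."""
      return value / limit

  print(f"{pct_used(8.789, 8.944):.1%}")  # max_mem_res_per_user/2 for pappalardop -> 98.3%
  print(f"{pct_used(30, 585):.1%}")       # max_hM_slots_per_user/2 -> 5.1%
  print(f"{pct_used(924, 5960):.1%}")     # total_slots/1 -> 15.5%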
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Sat Nov 15 21:38:10 EST 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=2.420T/39.94T 6.1% for * in queue uThC.q
total_mem_res/2 mem_res=10.79T/35.78T 30.1% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 11.2 300.2 - 492.3 203.3 - 32 1 3.5 - 31 28.5
compute-64-18 - 503.5 10.9 300.2 - 492.6 203.3 - 32 1 3.5 - 31 28.5
compute-65-02 - 503.5 12.2 0.0 - 491.3 503.5 - 64 0 6.7 - 64 57.3
compute-65-03 - 503.5 12.3 0.0 - 491.2 503.5 - 64 0 6.3 - 64 57.7
compute-65-04 - 503.5 12.3 300.0 - 491.2 203.5 - 64 1 7.3 - 63 56.7
compute-65-05 - 503.5 12.7 0.0 - 490.8 503.5 - 64 0 6.6 - 64 57.4
compute-65-06 - 503.5 29.2 400.0 - 474.3 103.5 - 64 20 7.8 - 44 56.2
compute-65-07 - 503.5 13.0 0.0 - 490.5 503.5 - 64 0 7.4 - 64 56.6
compute-65-09 - 503.5 13.0 0.0 - 490.5 503.5 - 64 0 6.8 - 64 57.2
compute-65-10 - 503.5 13.3 0.0 - 490.2 503.5 - 64 0 7.1 - 64 56.9
compute-65-11 - 503.5 12.2 300.0 - 491.3 203.5 - 64 1 6.7 - 63 57.3
compute-65-12 - 503.5 11.4 0.0 - 492.1 503.5 - 64 0 6.5 - 64 57.5
compute-65-13 - 503.5 12.2 300.0 - 491.3 203.5 - 64 1 7.3 - 63 56.7
compute-65-14 - 503.5 13.0 300.0 - 490.5 203.5 - 64 1 6.7 - 63 57.3
compute-65-15 - 503.5 12.1 300.0 - 491.4 203.5 - 64 1 7.4 - 63 56.6
compute-65-16 - 503.5 11.9 0.0 - 491.6 503.5 - 64 0 6.8 - 64 57.2
compute-65-17 - 503.5 13.7 300.0 - 489.8 203.5 - 64 1 6.7 - 63 57.3
compute-65-18 - 503.5 12.6 0.0 - 490.9 503.5 - 64 0 7.1 - 64 56.9
compute-65-19 - 503.5 11.7 300.0 - 491.8 203.5 - 64 1 6.8 - 63 57.2
compute-65-20 - 503.5 24.7 400.0 - 478.8 103.5 - 64 20 7.5 - 44 56.5
compute-65-21 - 503.5 11.8 0.0 - 491.7 503.5 - 64 0 7.1 - 64 56.9
compute-65-22 - 503.5 12.2 300.0 - 491.3 203.5 - 64 1 7.1 - 63 56.9
compute-65-23 - 503.5 11.7 0.0 - 491.8 503.5 - 64 0 7.0 - 64 57.0
compute-65-24 - 503.5 13.1 300.0 - 490.4 203.5 - 64 1 6.7 - 63 57.3
compute-65-25 - 503.5 13.2 300.0 - 490.3 203.5 - 64 1 6.6 - 63 57.4
compute-65-26 - 503.5 13.0 0.0 - 490.5 503.5 - 64 0 7.5 - 64 56.5
compute-65-27 - 503.5 12.2 0.0 - 491.3 503.5 - 64 0 7.3 - 64 56.7
compute-65-28 - 503.5 13.0 0.0 - 490.5 503.5 - 64 0 6.6 - 64 57.4
compute-65-29 - 503.5 14.1 0.0 - 489.4 503.5 - 64 0 6.9 - 64 57.1
compute-65-30 - 503.5 12.0 300.0 - 491.5 203.5 - 64 1 6.5 - 63 57.5
compute-75-01 - 1007.5 35.3 128.1 - 972.2 879.4 - 128 16 19.1 - 112 108.9
compute-75-02 - 1007.5 23.5 900.0 - 984.0 107.5 - 128 6 13.3 - 122 114.7
compute-75-03 - 755.5 71.0 520.0 - 684.5 235.5 - 128 65 63.2 - 63 64.8
compute-75-04 - 755.0 15.0 599.5 - 740.0 155.5 - 128 2 15.1 - 126 112.9
compute-75-05 - 755.5 54.5 500.0 - 701.0 255.5 - 128 128 128.4 - 0 -0.4
compute-75-06 - 755.5 16.5 600.0 - 739.0 155.5 - 128 2 13.4 - 126 114.5
compute-75-07 - 755.5 14.9 300.0 - 740.6 455.5 - 128 1 13.5 - 127 114.5
compute-76-03 - 1007.4 17.5 0.5 - 989.9 1006.9 - 128 0 13.8 - 128 114.2
compute-76-04 - 1007.4 61.1 256.0 - 946.3 751.4 - 128 64 14.2 - 64 113.8
compute-76-05 - 1007.4 90.8 608.0 - 916.6 399.4 - 128 4 14.1 - 124 113.9
compute-76-06 - 1007.4 32.8 512.0 - 974.6 495.4 - 128 64 28.2 - 64 99.8
compute-76-07 - 1007.4 16.4 600.0 - 991.0 407.4 - 128 2 13.2 - 126 114.8
compute-76-08 - 1007.4 16.9 300.0 - 990.5 707.4 - 128 1 14.1 - 127 113.9
compute-76-09 - 1007.4 16.4 300.0 - 991.0 707.4 - 128 1 14.0 - 127 114.0
compute-76-10 - 1007.4 37.7 30.0 - 969.7 977.4 - 128 56 56.2 - 72 71.8
compute-76-11 - 1007.4 17.8 600.0 - 989.6 407.4 - 128 2 13.5 - 126 114.5
compute-76-12 - 1007.4 16.1 600.0 - 991.3 407.4 - 128 2 14.3 - 126 113.7
compute-76-13 - 1007.4 17.0 300.0 - 990.4 707.4 - 128 1 13.8 - 127 114.2
compute-76-14 - 1007.4 16.8 0.0 - 990.6 1007.4 - 128 0 13.7 - 128 114.3
compute-84-01 - 881.1 486.7 8.0 - 394.4 873.1 - 112 110 80.8 - 2 31.2
compute-93-01 - 503.8 13.2 300.0 - 490.6 203.8 - 64 1 7.1 - 63 56.9
compute-93-02 - 755.6 14.2 0.0 - 741.4 755.6 - 72 0 7.4 - 72 64.6
compute-93-03 - 755.6 14.5 0.0 - 741.1 755.6 - 72 0 7.5 - 72 64.5
compute-93-04 - 755.6 14.2 300.0 - 741.4 455.6 - 72 1 8.4 - 71 63.6
======= ===== ====== ==== ==== =====
Totals 36637.5 1532.7 12962.5 4680 582 802.5
==> 4.2% 35.4% ==> 12.4% 17.1%
Most unreserved/unused memory (1007.4/990.6GB) is on compute-76-14 with 128/114.3 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 15.1 16.4 - 1496.3 1495.0 - 192 1 0.1 - 191 191.9
compute-76-02 - 1511.4 17.7 -0.0 - 1493.7 1511.4 - 192 0 0.2 - 192 191.8
compute-93-05 - 2016.3 14.9 0.0 - 2001.4 2016.3 - 96 0 0.1 - 96 96.0
compute-93-06 - 3023.9 13.1 0.0 - 3010.8 3023.9 - 56 0 0.1 - 56 55.9
======= ===== ====== ==== ==== =====
Totals 8063.0 60.8 16.4 536 1 0.4
==> 0.8% 0.2% ==> 0.2% 0.1%
Most unreserved/unused memory (3023.9/3010.8GB) is on compute-93-06 with 56/55.9 slots/CPUs free/unused.
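In the node tables above, the "==>" row expresses each total against the hostgroup's capacity: memory used and reserved against available memory, and slots used and load against the CPU count. A minimal sketch using the @himem-hosts totals:

  # @himem-hosts totals from the table above
  avail_mem, used_mem, resd_mem = 36637.5, 1532.7, 12962.5   # GB
  ncpu, slots_used, load        = 4680, 582, 802.5

  print(f"mem used {used_mem/avail_mem:.1%}, mem reserved {resd_mem/avail_mem:.1%}")  # 4.2%, 35.4%
  print(f"slots used {slots_used/ncpu:.1%}, load {load/ncpu:.1%}")                    # 12.4%, 17.1%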
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 11/05/25 and 11/12/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
zehnpfennigj 1/5 0.00 17.7 70.0 0.7 0.3 106.3 > 2.5
byerlyp 8/16 0.01 67.7 20.0 30.5 10.7 0.7
jmichail 1/8 0.03 12.3 32.0 27.0 7.4 1.2
pappalardop 2/8 0.04 147.6 300.0 86.0 74.8 3.5 > 2.5
bornbuschs 84/656 0.07 17.2 305.9 35.1 21.8 8.7 > 2.5
vohsens 6/6 0.09 66.9 63.2 0.3 0.2 199.1 > 2.5
rbottger 1/1 0.14 100.2 16.0 13.8 13.7 1.2
mghahrem 18/18 0.37 86.8 0.0 98.9 62.2 0.0
palmerem 76/83 0.54 97.8 285.6 3.4 2.4 84.5 > 2.5
graujh 2/112 0.68 65.6 30.0 1034.2 171.5 0.0
szieba 46/1840 0.82 30.0 191.5 844.2 2.8 0.2
xuj 6/69 0.91 86.8 553.3 251.3 36.8 2.2
atkinsonga 36/41 0.91 149.7 119.3 122.6 97.6 1.0
figueiroh 3/3 1.29 99.7 64.0 12.5 12.4 5.1 > 2.5
beckerm 24/288 1.29 75.4 82.1 12.9 9.2 6.3 > 2.5
pcristof 528/4974 1.54 145.7 116.1 9.3 4.9 12.5 > 2.5
scottjj 63/438 1.71 508.4 269.7 133.1 15.5 2.0
jhora 98/2872 2.13 49.4 162.6 178.2 3.7 0.9
cerqueirat 3010/3021 2.15 14.8 65.6 8.1 0.2 8.1 > 2.5
bourkeb 64/864 2.96 52.6 618.3 477.3 87.2 1.3
qzhu 39/138 3.07 90.1 100.0 29.0 28.1 3.5 > 2.5
medeirosi 1/4 6.06 49.0 120.0 81.5 0.3 1.5
granquistm 60/600 10.41 79.4 160.0 77.5 32.7 2.1
morrisseyd 3040/3040 14.21 96.4 16.0 3.0 1.6 5.4 > 2.5
horowitzj 5969/5969 14.37 91.2 16.0 2.7 1.6 5.9 > 2.5
uribeje 176/1948 39.99 46.6 285.4 21.1 13.9 13.5 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 13362/27022 105.81 75.2 176.8 58.4 15.4 3.0 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
graujh 3/258 1.67 39.6 1500.2 711.3 402.2 2.1
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 3/258 1.67 39.6 1500.2 711.3 402.2 2.1
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
Disk Usage & Quota
As of Sat Nov 15 17:06:02 EST 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 15.86T 6.50T 71%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 43.72T 288.78T 14%/2% /data/public
gpfs02:public 800.00T 416.22T 383.78T 53%/27% /scratch/public
gpfs02:nmnh_bradys 25.00T 19.24T 5.76T 77%/87% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 97.89T 22.11T 82%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 18.85T 6.15T 76%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 43.66T 16.34T 73%/22% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 10.69T 14.31T 43%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 21.04T 13.96T 61%/48% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 49.65T 10.35T 83%/25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 48.31T 41.69T 54%/2% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 32.53T 12.47T 73%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 0.00G 10.00T 1%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89%/2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 264.02T 85.98T 76%/10% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 95.00T 71.83T 23.17T 76%/14% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 19.52T 15.48T 56%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 12.86T 2.14T 86%/19% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 85.99T 59.01T 60%/60% /scratch/sylvain
gpfs02:usda_sel 25.00T 5.47T 19.53T 22%/30% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.70T 9.30T 82%/14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 98.63T 76.37T 57%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 13.78T 26.22T 35%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 14.58T 25.42T 37%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 27.96T 42.04T 40%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 265.00T 111.87T 153.13T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.68T 19.32T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 182.85G 49.82T 1%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 363.25T 386.75T 49%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 7.99T 12.01T 40%/1% /store/admin
You can view plots of disk use vs time for the past 7, 30, or 120 days, as well as plots of disk
usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
full even though free space remains, because it can no longer keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used; if it is
much larger, the disk can become unusable before it is actually full.
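To check both numbers for a given file system yourself, df -i reports them, or in Python (a minimal sketch using os.statvfs; the path is just an example):

  import os

  def usage(path: str) -> tuple[float, float]:
      """Return (% disk space used, % inodes used) for the file system holding path."""
      st = os.statvfs(path)
      space_used  = 1 - st.f_bavail / st.f_blocks                        # data blocks
      inodes_used = 1 - st.f_favail / st.f_files if st.f_files else 0.0  # inodes (files)
      return space_used, inodes_used

  space, inodes = usage("/scratch/public")
  print(f"space {space:.0%} full, inodes {inodes:.0%} used")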
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 4.01TB 89.1% 1.57M 15.7% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 361.7GB 94.2% 0.27M 2.7% Juan Uribe, NMNH - uribeje
/home 358.6GB 93.4% 2.84M 28.4% Brian Bourke, WRBU - bourkeb
/home 348.6GB 90.8% 2.06M 20.6% Michael Trizna, NMNH/BOL - triznam
/home 346.5GB 90.2% 0.30M 3.0% Paul Cristofari, SAO/SSP - pcristof
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 0.0% *** Ting Wang, NMNH - wangt2
/scratch/public 15.00TB 100.0% 0.00M 0.0% *** Rebeka Tamasi Bottger, SAO/OIR - rbottger
/scratch/public 15.00TB 100.0% 4.26M 0.0% *** Kevin Mulder, NZP - mulderk
/scratch/public 14.30TB 95.3% 0.00M 0.0% *** Heather Willis, NZCBI - willishr
/scratch/public 13.80TB 92.0% 33.06M 0.0% Zelong Nie, NMNH - niez
/scratch/public 13.60TB 90.7% 0.36M 0.0% Jose Grau, SCBI - graujh
/scratch/public 13.50TB 90.0% 2.09M 0.0% Solomon Chak, SERC - chaks
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.71T 42.0G 1.67T 2.4% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 414.7G 19.1G 395.5G 4.6% | 0.0G 3.49T 0.0% | 0.00
65-05 414.7G 19.5G 395.2G 4.7% | 0.0G 3.49T 0.0% | 0.00
65-06 414.7G 21.1G 393.6G 5.1% | 0.0G 3.49T 0.0% | 0.00
65-09 3.46T 43.0G 3.42T 1.2% | 0.0G 3.49T 0.0% | 0.00
65-10 414.7G 21.1G 393.5G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-11 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-12 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-13 414.7G 19.3G 395.4G 4.7% | 0.0G 1.75T 0.0% | 0.00
65-14 414.7G 21.0G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-15 414.7G 19.4G 395.2G 4.7% | 0.0G 1.75T 0.0% | 0.00
65-16 414.7G 19.3G 395.4G 4.6% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 414.7G 19.9G 394.7G 4.8% | 0.0G 1.75T 0.0% | 0.00
65-19 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-20 414.7G 20.5G 394.2G 4.9% | 414.7G 0.0G 100.0% | 20.23
65-21 414.7G 23.5G 391.1G 5.7% | 0.0G 1.75T 0.0% | 0.00
65-22 414.7G 19.9G 394.8G 4.8% | 0.0G 1.75T 0.0% | 0.00
65-23 414.7G 20.8G 393.9G 5.0% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 414.7G 21.0G 393.7G 5.1% | 414.7G 0.0G 100.0% | 19.75
65-26 414.7G 21.0G 393.7G 5.1% | 414.7G 0.0G 100.0% | 19.75
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 414.7G 21.1G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-29 414.7G 21.0G 393.7G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 414.7G 21.2G 393.5G 5.1% | 0.0G 6.98T 0.0% | 0.00
75-04 6.95T 67.6G 6.88T 0.9% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-06 6.95T 67.6G 6.88T 0.9% | 0.0G 6.98T 0.0% | 0.00
75-07 6.95T 67.6G 6.88T 0.9% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 101.4G 1.65T 5.7% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.95T 67.6G 6.88T 0.9% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 94.42T 1.22T 93.20T 1.3% | 1.21T 128.0T 1.3% | 0.99
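In this table, Resd% compares the reserved SSD space to the total reservable space (Resd plus the remaining reservable Avail), and Resd/Used compares reservations to actual use. A minimal sketch, assuming those definitions and using node 65-20 as an example:

  # node 65-20 from the SSD table above (values in GB)
  ssd_used   = 20.5    # space actually written to /ssd
  ssd_resd   = 414.7   # space reserved by jobs
  resd_avail = 0.0     # reservable space still available

  resd_pct      = ssd_resd / (ssd_resd + resd_avail)  # 100.0%
  resd_over_use = ssd_resd / ssd_used                 # ~20.2: reservation far exceeds actual use
  print(f"Resd% = {resd_pct:.1%}, Resd/Used = {resd_over_use:.2f}")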
Note: the disk usage and quota reports are compiled four times a day; the SSD usage is updated every 10 minutes.