Hydra-7@ADC Status
Usage
[Drop-down controls: view the current snapshot sorted by nodes' Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed; view usage vs. time for 7d, 15d, or 30d, optionally with a given user highlighted.]
As of Sun Apr 5 06:07:03 2026: #CPUs/nodes 5740/74, 0 down.
Loads:
head node: 0.64, login nodes: 0.03, 0.27, 1.53, 0.02; NSDs: 0.18, 0.00, 1.70, 3.89, 4.16; licenses: none used.
Queues status: none disabled, none need attention, none in error state.
20 users with running jobs.
Current load: 399.4, #running (slots/jobs): 802/37, usage: 14.0%, efficiency: 49.8%;
no jobs are waiting in any of the queues.
67 users have/had running or queued jobs over the past 7 days, 78 over the past 15 days, and 100 over the past 30 days.
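For reference, the usage and efficiency figures above appear to be simple ratios of the snapshot numbers (a sketch, assuming usage = running slots / total CPUs and efficiency = current load / running slots):
\[
\text{usage} \approx \frac{802}{5740} \approx 14.0\%, \qquad
\text{efficiency} \approx \frac{399.4}{802} \approx 49.8\%
\]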
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory, and view the past load for 7, 15, or 30 days, as well as highlight a given user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Sunday, 05-Apr-2026 06:12:04 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:55.
Warnings
Oversubscribed Jobs
As of Sun Apr 5 06:07:04 EDT 2026 (0 oversubscribed jobs)
Inefficient Jobs
As of Sun Apr 5 06:07:04 EDT 2026 (15 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 802/37, 0 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +47:20 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +47:16 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +46:19 5 19.8% lThM.q 76-14
12782849 earlgrey zhangy +7:11 24 15.0% lThC.q 65-05 5
12795848 bayestraits_cae gouldingt +5:17 4 25.0% mThC.q 64-04
12795854 bayestraits_cae gouldingt +5:17 4 25.0% mThC.q 64-10
12795857 bayestraits_cae gouldingt +5:17 4 25.0% mThC.q 64-14
12799396 ratestools willishr +4:14 8 15.1% mThM.q 64-18
12801410 vitis_ssp_cactu niez +4:02 110 1.7% mThC.q 75-02
12801449 iqtree2 santossam +3:20 20 8.6% mThM.q 75-03
12804788 IQ_50p_iqtree morrisseyd +2:22 64 18.8% lThC.q 76-03
12804791 IQ_75p_iqtree morrisseyd +2:21 64 26.0% lThC.q 65-27
12805060 invasion_analys johnsone +2:18 64 27.8% mThC.q 76-11
12805119 spades cabreroa +2:12 6 23.1% lThM.q 93-04
12805531 earthaccess_202 ggonzale 10:07 1 5.0% lTIO.sq 64-15
⇒ Equivalent to 325.7 underused CPUs: 388 CPUs used at 16.1% on average.
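The underused-CPU equivalence appears to be the shortfall relative to full utilization of the CPUs held by these jobs; a quick check with the rounded figures:
\[
388 \times (1 - 0.161) \approx 325.5 \;\text{underused CPUs}
\]
(the small difference from the reported 325.7 is consistent with rounding of the 16.1% average).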
Nodes with Excess Load
As of Sun Apr 5 06:07:05 EDT 2026 (no nodes have a high load, offset=1.5)
High Memory Jobs
Statistics
User nSlots memory memory vmem maxvmem ratio
Name used reserved used used used [TB] resd/maxvm
--------------------------------------------------------------------------------------------------
bourkeb 16 13.8% 0.7812 58.8% 0.0321 23.6% 0.0291 0.7227 1.1
nevesk 30 25.9% 0.1953 14.7% 0.0156 11.5% 0.0157 0.0714 2.7
willishr 16 13.8% 0.1406 10.6% 0.0164 12.0% 0.1162 0.3227 0.4
santossam 20 17.2% 0.1172 8.8% 0.0069 5.0% 0.0079 0.0081 14.5
byerlyp 15 12.9% 0.0586 4.4% 0.0100 7.3% 0.0101 0.0101 5.8
hinckleya 1 0.9% 0.0234 1.8% 0.0069 5.1% 0.0066 0.0095 2.5
castanedaricos 12 10.3% 0.0117 0.9% 0.0163 11.9% 0.0174 0.0180 0.7
cabreroa 6 5.2% 0.0010 0.1% 0.0321 23.5% 0.0023 0.0693 0.0
==================================================================================================
Total 116 1.3291 0.1363 0.2054 1.2317 1.1
Warnings
11 high memory jobs produced a warning:
1 for bourkeb
3 for byerlyp
1 for cabreroa
1 for castanedaricos
1 for hinckleya
1 for nevesk
1 for santossam
2 for willishr
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d.
Current Usage by Queue
Queue(s)                                      Total  Limit  Fill factor  Efficiency
sThC.q=0, mThC.q=205, lThC.q=525, uThC.q=16     746   5056        14.8%       50.4%
sThM.q=0, mThM.q=52, lThM.q=64, uThM.q=0        116   4680         2.5%      288.8%
sTgpu.q=0, mTgpu.q=2, lTgpu.q=0, qgpu.iq=0        2    104         1.9%      121.0%
uTxlM.rq=0                                        0    408         0.0%
lThMuVM.tq=0                                      0    384         0.0%
lTb2g.q=0                                         0      2         0.0%
lTIO.sq=1                                         1      8        12.5%        1.6%
lTWFM.sq=0                                        0      4         0.0%
qrsh.iq=0                                         0     68         0.0%
Total: 865
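The fill factor appears to be the ratio of occupied slots to the group's slot limit; for the ThC queues, for instance:
\[
\frac{746}{5056} \approx 14.8\%
\]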
Avail Slots/Wait Job(s)
Available Slots
As of Sun Apr 5 06:07:04 EDT 2026
4282 avail(slots), free(load)=5052.1, unresd(mem)=36551.4G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 4282 unused(load) 5115.7 ie: 83.6% 99.9%
unreserved(mem) 35.9T unused(mem) 37.9T ie: 90.2% 95.3%
unreserved(mem) 8.6G unused(mem) 9.1G per unused(slots)
4029 avail(slots), free(load)=4765.0, unresd(mem)=36786.2G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4832 total(mem) 40.7T
unused(slots) 4029 unused(load) 4828.6 ie: 83.4% 99.9%
unreserved(mem) 36.2T unused(mem) 38.9T ie: 88.9% 95.6%
unreserved(mem) 9.2G unused(mem) 9.9G per unused(slots)
384 avail(slots), free(load)=407.8, unresd(mem)=7191.6G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 408 total(mem) 7.9T
unused(slots) 384 unused(load) 407.8 ie: 94.1% 100.0%
unreserved(mem) 7.0T unused(mem) 7.5T ie: 89.2% 94.9%
unreserved(mem) 18.7G unused(mem) 19.9G per unused(slots)
102 avail(slots), free(load)=104.0, unresd(mem)=750.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 102 unused(load) 104.0 ie: 98.1% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 99.4% 90.8%
unreserved(mem) 7.4G unused(mem) 6.7G per unused(slots)
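The per-unused-slot figures appear to be simple ratios of the hostgroup totals, assuming 1T = 1024G; for @hicpu-hosts, for instance:
\[
\frac{35.9 \times 1024\,\text{G}}{4282} \approx 8.6\,\text{G}, \qquad
\frac{37.9 \times 1024\,\text{G}}{4282} \approx 9.1\,\text{G per unused slot}
\]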
GPU Usage
Sun Apr 5 06:07:10 EDT 2026
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 49.0 454.3 - 4/2 - 64 2 2.2 - 62 61.8
compute-79-01 - 125.5 10.0 115.5 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 10.4 115.1 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=2 (25.0%)
Waiting Job(s)
As of Sun Apr 5 06:07:05 EDT 2026: no waiting jobs.
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_gpus/1 GPUS=2/8 25.0% for * in queue mTgpu.q
total_slots/1 slots=866/5960 14.5% for *
blast2GO/1 slots=12/110 10.9% for *
total_mem_res/1 mem_res=3.408T/39.94T 8.5% for * in queue uThC.q
total_mem_res/2 mem_res=1.330T/35.78T 3.7% for * in queue uThM.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d.
Current Memory Quota Usage
As of Sun Apr 5 06:07:05 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=3.408T/39.94T 8.5% for * in queue uThC.q
total_mem_res/2 mem_res=1.330T/35.78T 3.7% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (56 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 14.3 20.2 - 489.2 483.3 - 32 5 1.0 - 27 31.0
compute-64-18 - 503.5 16.4 72.2 - 487.1 431.3 - 32 8 1.1 - 24 30.9
compute-65-02 - 503.5 16.1 0.0 - 487.4 503.5 - 64 2 1.0 - 62 63.0
compute-65-03 - 503.5 19.1 192.0 - 484.4 311.5 - 64 26 2.0 - 38 62.0
compute-65-04 - 503.5 17.7 0.0 - 485.8 503.5 - 64 2 1.0 - 62 63.0
compute-65-05 - 503.5 11.4 192.0 - 492.1 311.5 - 64 26 2.8 - 38 61.2
compute-65-06 - 503.5 18.5 0.0 - 485.0 503.5 - 64 2 1.0 - 62 63.0
compute-65-07 - 503.5 15.7 0.0 - 487.8 503.5 - 64 2 1.0 - 62 63.0
compute-65-09 - 503.5 22.6 24.0 - 480.9 479.5 - 64 1 1.0 - 63 63.0
compute-65-10 - 503.5 27.6 0.0 - 475.9 503.5 - 64 2 1.0 - 62 63.0
compute-65-11 - 503.5 16.8 0.0 - 486.7 503.5 - 64 2 1.0 - 62 63.0
compute-65-12 - 503.5 16.2 0.0 - 487.3 503.5 - 64 2 1.0 - 62 63.0
compute-65-13 - 503.5 16.6 0.0 - 486.9 503.5 - 64 0 0.1 - 64 63.9
compute-65-14 - 503.5 15.1 2.0 - 488.4 501.5 - 64 1 1.0 - 63 63.0
compute-65-15 - 503.5 16.6 0.0 - 486.9 503.5 - 64 0 0.0 - 64 64.0
compute-65-16 - 503.5 16.1 0.0 - 487.4 503.5 - 64 0 0.0 - 64 64.0
compute-65-17 - 503.5 16.9 0.0 - 486.6 503.5 - 64 0 0.0 - 64 64.0
compute-65-18 - 503.5 14.9 0.0 - 488.6 503.5 - 64 0 0.0 - 64 64.0
compute-65-19 - 503.5 16.3 0.0 - 487.2 503.5 - 64 2 1.0 - 62 63.0
compute-65-20 - 503.5 34.6 0.0 - 468.9 503.5 - 64 0 0.0 - 64 64.0
compute-65-21 - 503.5 16.5 0.0 - 487.0 503.5 - 64 2 1.0 - 62 63.0
compute-65-22 - 503.5 16.8 0.0 - 486.7 503.5 - 64 0 0.0 - 64 64.0
compute-65-23 - 503.5 16.4 0.0 - 487.1 503.5 - 64 2 1.0 - 62 63.0
compute-65-24 - 503.5 16.2 0.0 - 487.3 503.5 - 64 0 0.0 - 64 64.0
compute-65-25 - 503.5 14.5 0.0 - 489.0 503.5 - 64 2 1.0 - 62 63.0
compute-65-26 - 503.5 15.7 0.0 - 487.8 503.5 - 64 2 1.0 - 62 63.0
compute-65-27 - 503.5 49.1 256.0 - 454.4 247.5 - 64 64 25.7 - 0 38.3
compute-65-28 - 503.5 17.9 30.0 - 485.6 473.5 - 64 4 4.0 - 60 60.0
compute-65-29 - 503.5 17.1 0.0 - 486.4 503.5 - 64 2 1.0 - 62 63.0
compute-65-30 - 503.5 17.1 0.0 - 486.4 503.5 - 64 2 1.0 - 62 63.0
compute-75-01 - 1007.5 15.2 0.1 - 992.3 1007.4 - 128 0 0.0 - 128 128.0
compute-75-02 - 1007.5 25.9 800.0 - 981.6 207.5 - 128 112 2.0 - 16 126.0
compute-75-03 - 755.5 19.3 120.0 - 736.2 635.5 - 128 22 2.8 - 106 125.2
compute-75-04 - 755.5 16.1 0.0 - 739.4 755.5 - 128 2 1.0 - 126 127.0
compute-75-05 - 755.5 16.9 0.0 - 738.6 755.5 - 128 0 0.0 - 128 128.0
compute-75-06 - 755.5 15.4 0.0 - 740.1 755.5 - 128 0 0.0 - 128 128.0
compute-75-07 - 755.5 27.8 200.0 - 727.7 555.5 - 128 30 30.0 - 98 98.0
compute-76-03 - 1007.4 74.6 256.5 - 932.8 750.9 - 128 64 12.6 - 64 115.4
compute-76-04 - 1007.4 188.9 276.0 - 818.5 731.4 - 128 69 45.2 - 59 82.8
compute-76-05 - 1007.4 18.1 0.0 - 989.3 1007.4 - 128 2 1.1 - 126 126.9
compute-76-06 - 1007.4 113.8 256.0 - 893.6 751.4 - 128 64 48.9 - 64 79.1
compute-76-07 - 1007.4 170.3 256.0 - 837.1 751.4 - 128 64 44.5 - 64 83.5
compute-76-08 - 1007.4 174.7 256.0 - 832.7 751.4 - 128 64 46.9 - 64 81.1
compute-76-09 - 1007.4 18.9 2.0 - 988.5 1005.4 - 128 16 16.0 - 112 112.0
compute-76-10 - 1007.4 17.4 6.0 - 990.0 1001.4 - 128 3 1.0 - 125 127.0
compute-76-11 - 1007.4 40.9 512.0 - 966.5 495.4 - 128 66 6.1 - 62 121.9
compute-76-12 - 1007.4 18.1 2.0 - 989.3 1005.4 - 128 2 1.0 - 126 127.0
compute-76-13 - 1007.4 16.1 6.0 - 991.3 1001.4 - 128 1 1.1 - 127 126.9
compute-76-14 - 1007.4 17.6 20.0 - 989.8 987.4 - 128 7 1.1 - 121 126.9
compute-84-01 - 881.1 113.3 12.0 - 767.8 869.1 - 112 12 12.0 - 100 100.0
compute-93-01 - 503.8 16.9 0.0 - 486.9 503.8 - 64 2 1.1 - 62 62.9
compute-93-02 - 755.6 16.5 0.0 - 739.1 755.6 - 72 0 0.1 - 72 71.9
compute-93-03 - 755.6 21.2 2.0 - 734.4 753.6 - 72 8 7.7 - 64 64.3
compute-93-04 - 755.6 19.8 2.0 - 735.8 753.6 - 72 6 1.2 - 66 70.8
compute-93-05 - 2016.3 26.4 71.7 - 1989.9 1944.6 - 96 8 2.0 - 88 94.0
compute-93-06 - 3023.9 43.0 799.8 - 2980.9 2224.1 - 56 16 9.0 - 40 47.0
======= ===== ====== ==== ==== =====
Totals 41678.2 1819.9 4644.5 4832 803 347.6
==> 4.4% 11.1% ==> 16.6% 7.2%
Most unreserved/unused memory (2224.1/2980.9GB) is on compute-93-06 with 40/47.0 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 18.0 -0.0 - 1493.4 1511.4 - 192 0 0.1 - 192 191.9
compute-76-02 - 1511.4 322.4 -0.0 - 1189.0 1511.4 - 192 0 4.5 - 192 187.5
compute-93-05 - 2016.3 26.4 71.7 - 1989.9 1944.6 - 96 8 2.0 - 88 94.0
compute-93-06 - 3023.9 43.0 799.8 - 2980.9 2224.1 - 56 16 9.0 - 40 47.0
======= ===== ====== ==== ==== =====
Totals 8063.0 409.8 871.4 536 24 15.5
==> 5.1% 10.8% ==> 4.5% 2.9%
Most unreserved/unused memory (2224.1/2980.9GB) is on compute-93-06 with 40/47.0 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 03/25/26 and 04/01/26
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
cabreroa 1/4 0.00
collinsl2 1/1 0.00 51.5 120.0 0.0 0.0 0.0
jhora 6/192 0.00 22.9 60.0 19.9 0.3 3.0 > 2.5
nevesk 7/142 0.00 82.6 23.5 2.7 1.3 8.6 > 2.5
johnsone 3/3 0.01 96.0 46.4 28.8 18.8 1.6
ramosi 7/24 0.02 67.2 8.0 4.8 4.3 1.7
longk 1/8 0.03 79.1 80.0 9.2 2.0 8.7 > 2.5
quattrinia 54/648 0.04 95.2 120.0 6.4 3.6 18.7 > 2.5
beckerm 4/32 0.05 54.1 160.0 1.9 1.6 83.6 > 2.5
sossajef 3/3 0.09 55.4 12.0 0.2 0.1 72.2 > 2.5
pradon 1/64 0.16 45.9 512.0 158.1 64.6 3.2 > 2.5
palmerem 15/15 0.19 122.1 248.4 50.2 20.1 4.9 > 2.5
szieba 47/1880 0.24 11.2 0.0 98.2 10.4 0.0
castanedaricos 20/600 0.30 51.7 300.0 37.6 25.2 8.0 > 2.5
capadorhd 10/50 0.33 797.4 65.1 7.0 0.8 9.2 > 2.5
qzhu 6/60 0.36 69.9 200.0 19.0 11.3 10.5 > 2.5
mghahrem 10/10 0.47 80.3 0.0 105.7 73.8 0.0
yisraell 2/20 0.66 85.7 1000.0 31.4 12.4 31.9 > 2.5
niez 4/64 0.68 70.7 160.0 299.2 3.1 0.5
hawkinsmt 451/453 0.69 110.2 16.0 0.3 0.2 60.6 > 2.5
afoster 4/4 0.96 99.8 6.4 5.1 4.7 1.3
oviedodiegom 22/228 1.56 96.9 49.3 9.8 1.2 5.0 > 2.5
johnsonsj 191/256 2.42 89.9 40.0 23.7 22.9 1.7
uribeje 30/328 2.44 23.7 219.1 25.4 6.5 8.6 > 2.5
bourkeb 51/816 2.64 14.3 305.4 93.7 73.5 3.3 > 2.5
vagac 1/24 2.67 62.0 384.0 302.6 44.6 1.3
willishr 11/72 3.66 41.6 72.2 145.2 7.6 0.5
santosbe 22/446 7.02 35.0 685.0 108.3 26.4 6.3 > 2.5
hinckleya 8309/8496 8.18 59.2 41.7 26.5 14.0 1.6
santossam 66/1187 9.05 8.5 120.0 26.2 13.8 4.6 > 2.5
xuj 820/2600 10.28 53.9 420.3 80.8 24.3 5.2 > 2.5
kistlerl 297/297 12.61 165.7 46.7 8.1 7.6 5.7 > 2.5
collinsa 1406/22448 22.50 83.2 146.0 17.5 10.0 8.4 > 2.5
yancos 2833/2833 23.33 100.1 99.4 1.8 1.1 54.8 > 2.5
horowitzj 9839/10379 76.24 92.2 24.5 6.5 3.1 3.8 > 2.5
granquistm 286/3226 111.48 85.0 212.1 208.7 67.8 1.0
morrisseyd 2999/3171 224.28 99.5 16.7 5.1 3.2 3.3 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 27840/61084 525.64 91.2 95.0 56.3 19.1 1.7
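The flag in the last column marks users whose mean reservation exceeded the mean observed peak (maxvmem) by more than 2.5×; the ratio is simply reserved/maxvmem, e.g. for jhora:
\[
\frac{60.0\,\text{GB}}{19.9\,\text{GB}} \approx 3.0 > 2.5
\]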
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Disk Usage & Quota
As of Sun Apr 5 05:06:02 EDT 2026
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 18.86T 3.50T 85% /13% /home
netapp-fas83-n02:/vol_data_public 332.50T 48.91T 283.59T 15%/2% /data/public
gpfs02:public 800.00T 523.93T 276.07T 66%/37% /scratch/public
gpfs02:nmnh_bradys 25.00T 19.18T 5.82T 77%/59% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 87.99T 32.01T 74%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 20.47T 4.53T 82% /7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 53.09T 6.91T 89% /23% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.55T 13.45T 47%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 28.42T 6.58T 82% /39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 50.16T 9.84T 84% /25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 68.75T 21.25T 77%/5% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 34.81T 10.19T 78%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 152.05G 9.85T 2%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89% /2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 264.55T 85.45T 76%/12% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 184.00T 141.22T 42.78T 77%/9% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 24.81T 10.19T 71%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 10.19T 4.81T 68%/18% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 19.53T 5.47T 79%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 33.37T 111.63T 24%/23% /scratch/sylvain
gpfs02:usda_sel 25.00T 8.92T 16.08T 36%/33% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.98T 9.02T 82% /14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 102.39T 72.61T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.58T 25.42T 37%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 16.35T 23.65T 41%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 31.03T 38.97T 45%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 265.00T 119.18T 145.82T 45%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.71T 19.29T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 4.80T 45.20T 10%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 404.39T 345.61T 54%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.04T 11.96T 41%/1% /store/admin
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can become full because it is unable to keep track of new files.
The % of inodes used should be lower than or comparable to the % of disk space used.
If it is much larger, the disk can become unusable before it gets full.
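To check this on a given volume yourself, a minimal sketch (not part of this page's tooling; the mount point is just an example) can compare inode and space usage with Python's os.statvfs:

import os

def usage_report(path):
    """Return (% disk space used, % inodes used) for the filesystem holding path."""
    st = os.statvfs(path)
    # Space: f_blocks = total blocks, f_bavail = blocks available to unprivileged users.
    space_used_pct = 100.0 * (1 - st.f_bavail / st.f_blocks)
    # Inodes: f_files = total inodes, f_favail = inodes available to unprivileged users.
    inode_used_pct = 100.0 * (1 - st.f_favail / st.f_files)
    return space_used_pct, inode_used_pct

space, inodes = usage_report("/scratch/public")
print(f"space used: {space:.1f}%  inodes used: {inodes:.1f}%")
if inodes > space:
    print("warning: % inodes used exceeds % space used (many small files)")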
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 373.9GB 97.4% 0.08M 0.8% *** Rebeka Tamasi Bottger, SAO/OIR - rbottger
/home 363.6GB 94.7% 0.27M 2.7% Juan Uribe, NMNH - uribeje
/home 348.9GB 90.9% 0.70M 7.0% Adam Foster, SAO/HEA - afoster
/home 348.3GB 90.7% 0.28M 2.8% Paul Cristofari, SAO/SSP - pcristof
/home 339.3GB 88.4% 2.92M 29.2% Brian Bourke, WRBU - bourkeb
/home 329.1GB 85.7% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 7.6% *** Ting Wang, NMNH - wangt2
/scratch/public 14.90TB 99.3% 0.54M 1.4% *** Carlos Arias, STRI - ariasc
/scratch/public 13.50TB 90.0% 2.09M 5.3% Solomon Chak, SERC - chaks
/scratch/public 13.50TB 90.0% 26.79M 67.2% Qindan Zhu, SAO/AMP - qzhu
/scratch/public 13.30TB 88.7% 31.22M 78.4% Alberto Coello Garrido, NMNH - coellogarridoa
/scratch/public 13.30TB 88.7% 0.03M 0.1% James McClung, SAO/HEA - jmcclung
/scratch/public 13.20TB 88.0% 0.12M 0.3% Allen G. Collins, NMNH - collinsa
/scratch/public 13.20TB 88.0% 4.20M 10.5% Kevin Mulder, NZP - mulderk
/scratch/public 13.00TB 86.7% 15.96M 40.0% Brian Bourke, WRBU - bourkeb
/scratch/public 12.80TB 85.3% 0.10M 0.2% Susette Castañeda-Rico, NZP - castanedaricos
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 15.20TB 304.0% 0.06M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.73TB 94.6% - - Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 65.5G 3.43T 1.8% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 64.5G 3.43T 1.8% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 65.5G 3.43T 1.8% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 64.5G 3.43T 1.8% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 63.5G 3.43T 1.8% | 0.0G 3.49T 0.0% | 0.00
65-07 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 62.5G 1.68T 3.5% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 159.7G 1.59T 8.9% | 0.0G 1.75T 0.0% | 0.00
65-21 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
65-26 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
65-27 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 54.3G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
75-01 5.24T 78.8G 5.16T 1.5% | 0.0G 5.24T 0.0% | 0.00
75-02 6.98T 91.1G 6.89T 1.3% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 90.1G 6.89T 1.3% | 0.0G 6.98T 0.0% | 0.00
75-04 6.98T 90.1G 6.89T 1.3% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 154.6G 6.83T 2.2% | 0.0G 6.98T 0.0% | 0.00
75-06 6.98T 90.1G 6.89T 1.3% | 0.0G 6.98T 0.0% | 0.00
76-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-03 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 52.2G 1.69T 2.9% | 0.0G 1.75T 0.0% | 0.00
76-05 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
76-06 1.75T 96.3G 1.65T 5.4% | 0.0G 1.75T 0.0% | 0.00
76-07 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-08 1.75T 94.2G 1.65T 5.3% | 0.0G 1.75T 0.0% | 0.00
76-09 1.75T 93.2G 1.65T 5.2% | 0.0G 1.75T 0.0% | 0.00
76-10 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
76-11 1.75T 53.2G 1.69T 3.0% | 0.0G 1.75T 0.0% | 0.00
76-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 99.3G 1.65T 5.6% | 0.0G 1.75T 0.0% | 0.00
76-14 1.75T 55.3G 1.69T 3.1% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-06 1.64T 11.3G 1.62T 0.7% | 0.0G 1.64T 0.0% | 0.00
---------------------------------------------------------------
Total 141.8T 2.87T 139.0T 2.0% | 0.0G 141.8T 0.0% | 0.00
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.