Hydra-7@ADC Status
Usage
As of Sun Mar 1 19:27:05 2026: #CPUs/nodes 5484/74, 2 down.
Loads:
head node: 0.61, login nodes: 0.34, 0.13, 25.34, 0.00; NSDs: 0.69, 0.00, 1.35, 4.51, 5.28; licenses: none used.
Queues status: 2 disabled, 18 need attention, none in error state.
15 users with running jobs (slots/jobs):
Current load: 1224.4, #running (slots/jobs): 1,293/188, usage: 23.6%, efficiency: 94.7%
1 user with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 1/6,080/6,080
68 users have/had running or queued jobs over the past 7 days, 83 over the past 15 days, and 99 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
Using the drop-down menus, you can sort the current cluster snapshot by name,
number of CPUs, usage, load, or memory; view the past load over 7, 15, or 30
days; and highlight a given user.
This page was last updated on Sunday, 01-Mar-2026 19:32:13 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:02.
Warnings
Oversubscribed Jobs
As of Sun Mar 1 19:27:06 EST 2026 (4 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1293/188, 1 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12214486 angs34-5 uribeje +7:00 8 163.4% uThM.q 65-24
12230416 paleomix_anc_fl hagemannm +1:17 4 196.9% mThC.q 64-04
12230417 paleomix_anc_fl hagemannm +1:17 4 195.6% mThC.q 64-09
12230418 paleomix_anc_fl hagemannm +1:17 4 196.5% mThC.q 64-10
⇒ Equivalent to 16.6 overused CPUs: 20 CPUs used at 183.2% on average.
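The overused-CPU equivalent above can be reproduced from the listed jobs: it is the slot-weighted mean CPU% together with the CPU time consumed beyond 100% per slot. A minimal Python sketch, with the job list transcribed from the table above:

```python
# Overused-CPU equivalent for the oversubscribed jobs listed above.
# Each entry is (nPEs, cpu%) transcribed from the table.
jobs = [(8, 163.4), (4, 196.9), (4, 195.6), (4, 196.5)]

slots = sum(n for n, _ in jobs)                       # total CPUs in use
avg_cpu = sum(n * c for n, c in jobs) / slots         # slot-weighted mean CPU%
overused = sum(n * (c - 100) / 100 for n, c in jobs)  # CPU use beyond 100%/slot

print(f"{overused:.1f} overused CPUs: {slots} CPUs used at {avg_cpu:.1f}% on average")
# prints: 16.6 overused CPUs: 20 CPUs used at 183.2% on average
```

This matches the report's summary line, so the summary appears to be derived exactly this way from the displayed rows.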
Inefficient Jobs
As of Sun Mar 1 19:27:08 EST 2026 (14 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1293/188, 1 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +13:09 5 20.0% lThM.q 64-17
12195552 stairwayNE.job byerlyp +13:06 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +12:08 5 19.5% lThM.q 76-14
12226716 sansibia_spades nelsonjo +3:05 10 16.8% mThM.q 65-06
12230404 Vitis_VG_Idx3_S niez +1:19 32 3.1% uTxlM.rq 93-06
12232402 BHL-WebP richardjm 03:23 1 1.1% sThC.q 65-06 12812
12232402 BHL-WebP richardjm 03:10 1 0.5% sThC.q 65-06 12881
12232402 BHL-WebP richardjm 02:52 1 0.3% sThC.q 76-11 13016
(more by richardjm)
⇒ Equivalent to 60.3 underused CPUs: 66 CPUs used at 8.6% on average.
To see them all use: 'q+ -ineff -u richardjm' (9)
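The underused-CPU equivalent is computed the same way, from CPU capacity left idle below 100% per slot. A sketch over the displayed subset only (richardjm's remaining tasks are not shown above, so these totals fall short of the report's 60.3/66 figures):

```python
# Underused-CPU equivalent for the inefficient jobs shown above.
# Each entry is (nPEs, cpu%) transcribed from the table; only the
# displayed rows are included.
jobs = [(5, 20.0), (5, 19.9), (5, 19.5), (10, 16.8),
        (32, 3.1), (1, 1.1), (1, 0.5), (1, 0.3)]

slots = sum(n for n, _ in jobs)
underused = sum(n * (100 - c) / 100 for n, c in jobs)  # idle CPU equivalents

print(f"{underused:.1f} underused CPUs out of {slots} shown")
```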
Nodes with Excess Load
As of Sun Mar 1 19:27:12 EST 2026 (13 nodes have a high load, offset=1.5)
node    #CPUs   #slots used   load   excess load
------------------------------------------------
64-04 40 4 8.3 4.3 *
64-09 40 4 8.1 4.1 *
64-10 40 14 18.1 4.1 *
65-03 64 10 12.0 2.0 *
65-18 64 11 18.3 7.3 *
65-24 64 8 13.6 5.6 *
75-03 128 21 26.2 5.2 *
75-07 128 19 20.9 1.9 *
76-02 192 0 3.0 3.0 *
76-03 192 20 23.7 3.7 *
76-04 192 15 30.4 15.4 *
76-10 128 20 38.2 18.2 *
76-13 128 36 60.6 24.6 *
Total excess load = 99.5
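The flag rule is inferred from the rows shown: a node is listed when its load exceeds the slots it has in use by more than the offset (1.5), and the excess column is simply load minus used slots. A sketch over a few transcribed rows:

```python
# Excess-load check: flag a node whose load exceeds its used slot count
# by more than `offset`. (used slots, load) pairs transcribed from above;
# the rule itself is inferred from the table, not documented.
offset = 1.5
nodes = {"64-04": (4, 8.3), "76-04": (15, 30.4), "76-13": (36, 60.6)}

excess = {name: load - used for name, (used, load) in nodes.items()}
flagged = [name for name, e in excess.items() if e > offset]
print(flagged)
```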
High Memory Jobs
Statistics
User nSlots memory memory vmem maxvmem ratio
Name used reserved used used used [TB] resd/maxvm
--------------------------------------------------------------------------------------------------
niez 32 8.8% 1.8750 47.5% 1.6092 85.7% 1.7640 1.7640 1.1
bourkeb 16 4.4% 0.7812 19.8% 0.0110 0.6% 0.0651 0.2028 3.9
morrisseyd 27 7.4% 0.4219 10.7% 0.1504 8.0% 0.1581 0.1636 2.6
nelsonjo 10 2.7% 0.4102 10.4% 0.0058 0.3% 0.0067 0.0129 31.9
uribeje 16 4.4% 0.3906 9.9% 0.0042 0.2% 0.0221 0.0221 17.7
byerlyp 15 4.1% 0.0586 1.5% 0.0099 0.5% 0.0101 0.0101 5.8
ramosi 8 2.2% 0.0078 0.2% 0.0328 1.7% 0.0330 0.0333 0.2
breusingc 240 65.9% 0.0000 0.0% 0.0553 2.9% 0.0565 0.0604 0.0
==================================================================================================
Total 364 3.9453 1.8787 2.1155 2.2692 1.7
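The last column of the statistics table is the reserved memory divided by the maximum vmem actually used, both in TB; large ratios indicate heavy over-reservation. A sketch with a few rows transcribed from above:

```python
# resd/maxvm ratio: reserved memory (TB) / maximum vmem used (TB).
# Values transcribed from the statistics table above.
rows = {"niez": (1.8750, 1.7640), "bourkeb": (0.7812, 0.2028),
        "uribeje": (0.3906, 0.0221)}

ratios = {user: reserved / maxvmem
          for user, (reserved, maxvmem) in rows.items()}
for user, r in ratios.items():
    print(f"{user}: ratio {r:.1f}")
```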
Warnings
50 high memory jobs produced a warning:
1 for bourkeb
15 for breusingc
3 for byerlyp
26 for morrisseyd
1 for nelsonjo
1 for niez
1 for ramosi
2 for uribeje
Details for each job can be found here.
Breakdown by Queue
Current Usage by Queue
Queue          Used   Limit   Fill factor   Efficiency
sThC.q           50
mThC.q          846
lThC.q           16
uThC.q            0
  subtotal      912    4864      18.8%        131.2%
sThM.q            0
mThM.q          293
lThM.q           23
uThM.q            8
  subtotal      324    4488       7.2%        339.3%
sTgpu.q           0
mTgpu.q           0
lTgpu.q           0
qgpu.iq           0
  subtotal        0     104       0.0%
uTxlM.rq         32     536       6.0%         15.2%
lThMuVM.tq        0     384       0.0%
lTb2g.q           0       2       0.0%
lTIO.sq           0       8       0.0%
lTWFM.sq          0       4       0.0%
qrsh.iq          17      68      25.0%          7.7%
Total:         1285
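The fill factor is simply used slots divided by the queue group's slot limit, expressed as a percentage. A sketch using the group totals above (the group names are informal shorthand for the queue families, not identifiers from the scheduler):

```python
# Fill factor per queue group: 100 * used slots / slot limit.
# (used, limit) pairs transcribed from the table above.
groups = {"hiCPU": (912, 4864), "hiMem": (324, 4488),
          "xlMem": (32, 536), "interactive": (17, 68)}

fill = {name: 100 * used / limit for name, (used, limit) in groups.items()}
for name, pct in fill.items():
    print(f"{name}: {pct:.1f}%")
```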
Avail Slots/Wait Job(s)
Available Slots
As of Sun Mar 1 19:27:08 EST 2026
3856 avail(slots), free(load)=5103.9, unresd(mem)=33330.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 3876 unused(load) 5103.9 ie: 75.7% 99.7%
unreserved(mem) 32.5T unused(mem) 38.4T ie: 81.7% 96.5%
unreserved(mem) 8.6G unused(mem) 10.2G per unused(slots)
3549 avail(slots), free(load)=4666.8, unresd(mem)=29938.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3569 unused(load) 4666.8 ie: 76.3% 99.7%
unreserved(mem) 29.2T unused(mem) 34.6T ie: 81.7% 96.7%
unreserved(mem) 8.4G unused(mem) 9.9G per unused(slots)
312 avail(slots), free(load)=344.0, unresd(mem)=4631.6G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 344 total(mem) 6.4T
unused(slots) 312 unused(load) 344.0 ie: 90.7% 100.0%
unreserved(mem) 4.5T unused(mem) 4.8T ie: 70.7% 75.1%
unreserved(mem) 14.8G unused(mem) 15.8G per unused(slots)
104 avail(slots), free(load)=104.0, unresd(mem)=754.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 104 unused(load) 104.0 ie: 100.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 100.0% 95.3%
unreserved(mem) 7.3G unused(mem) 6.9G per unused(slots)
GPU Usage
Sun Mar 1 19:27:34 EST 2026
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 14.3 489.0 - 4/0 - 64 0 0.1 - 64 63.9
compute-79-01 - 125.5 10.5 115.0 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 10.8 114.7 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=0 (0.0%)
Waiting Job(s)
As of Sun Mar 1 19:27:11 EST 2026
1 job waiting for richardjm :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12232402 BHL-WebP richardjm 11:38 1 3.0 sThC.q 13922-20000:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=50/840 6.0% for richardjm
max_hC_slots_per_user/1 slots=50/840 6.0% for richardjm in queue sThC.q
max_mem_res_per_user/1 mem_res=150.0G/9.985T 1.5% for richardjm in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
total_slots/1 slots=1283/5960 21.5% for *
blast2GO/1 slots=21/110 19.1% for *
total_mem_res/1 mem_res=5.145T/39.94T 12.9% for * in queue uThC.q
total_mem_res/2 mem_res=2.070T/35.78T 5.8% for * in queue uThM.q
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Sun Mar 1 19:27:12 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=5.145T/39.94T 12.9% for * in queue uThC.q
total_mem_res/2 mem_res=2.070T/35.78T 5.8% for * in queue uThM.q
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 16.6 20.2 - 486.9 483.3 - 32 21 17.0 - 11 15.0
compute-64-18 - 503.5 16.6 49.2 - 486.9 454.3 - 32 12 2.8 - 20 29.2
compute-65-02 - 503.5 22.9 160.0 - 480.6 343.5 - 64 20 20.1 - 44 43.9
compute-65-03 - 503.5 16.4 60.0 - 487.1 443.5 - 64 10 11.8 - 54 52.2
compute-65-04 - 503.5 17.2 123.0 - 486.3 380.5 - 64 21 19.3 - 43 44.7
compute-65-05 - 503.5 17.4 0.0 - 486.1 503.5 - 64 16 16.1 - 48 47.9
compute-65-06 - 503.5 22.5 483.0 - 481.0 20.5 - 64 31 3.7 - 33 60.3
compute-65-07 - 503.5 16.9 120.0 - 486.6 383.5 - 64 20 20.3 - 44 43.7
compute-65-09 - 503.5 22.9 152.0 - 480.6 351.5 - 64 22 22.0 - 42 42.0
compute-65-10 - 503.5 29.5 136.0 - 474.0 367.5 - 64 21 21.0 - 43 43.0
compute-65-12 - 503.5 16.1 120.0 - 487.4 383.5 - 64 20 20.0 - 44 44.0
compute-65-13 - 503.5 25.8 76.0 - 477.7 427.5 - 64 11 11.2 - 53 52.8
compute-65-14 - 503.5 15.1 120.0 - 488.4 383.5 - 64 20 20.0 - 44 44.0
compute-65-15 - 503.5 25.6 16.0 - 477.9 487.5 - 64 17 17.0 - 47 47.0
compute-65-16 - 503.5 18.9 142.0 - 484.6 361.5 - 64 23 21.0 - 41 43.0
compute-65-17 - 503.5 24.8 16.0 - 478.7 487.5 - 64 17 17.1 - 47 46.9
compute-65-18 - 503.5 19.0 76.0 - 484.5 427.5 - 64 11 17.4 - 53 46.6
compute-65-19 - 503.5 52.2 252.0 - 451.3 251.5 - 64 42 27.8 - 22 36.2
compute-65-20 - 503.5 49.6 136.0 - 453.9 367.5 - 64 21 21.0 - 43 43.0
compute-65-21 - 503.5 16.4 120.0 - 487.1 383.5 - 64 20 19.9 - 44 44.0
compute-65-22 - 503.5 21.1 0.0 - 482.4 503.5 - 64 16 16.0 - 48 48.0
compute-65-23 - 503.5 23.8 76.0 - 479.7 427.5 - 64 11 11.1 - 53 52.9
compute-65-24 - 503.5 18.8 200.0 - 484.7 303.5 - 64 8 13.7 - 56 50.3
compute-65-25 - 503.5 15.9 0.0 - 487.6 503.5 - 64 16 16.0 - 48 48.0
compute-65-26 - 503.5 15.0 120.0 - 488.5 383.5 - 64 20 19.8 - 44 44.2
compute-65-27 - 503.5 20.5 136.0 - 483.0 367.5 - 64 21 20.1 - 43 43.9
compute-65-28 - 503.5 18.8 0.0 - 484.7 503.5 - 64 16 16.0 - 48 48.0
compute-65-29 - 503.5 17.3 120.0 - 486.2 383.5 - 64 20 20.0 - 44 44.0
compute-65-30 - 503.5 16.0 60.0 - 487.5 443.5 - 64 10 11.0 - 54 53.0
compute-75-01 - 1007.5 17.4 60.1 - 990.1 947.4 - 128 26 26.0 - 102 102.0
compute-75-02 - 1007.5 20.2 180.0 - 987.3 827.5 - 128 30 30.0 - 98 98.0
compute-75-03 - 755.5 19.4 123.0 - 736.1 632.5 - 128 21 25.7 - 107 102.3
compute-75-04 - 755.0 15.8 179.5 - 739.2 575.5 - 128 30 30.1 - 98 97.9
compute-75-05 - 755.5 20.9 0.0 - 734.6 755.5 - 128 32 32.2 - 96 95.8
compute-75-06 - 755.5 31.6 135.0 - 723.9 620.5 - 128 22 14.8 - 106 113.2
compute-75-07 - 755.5 28.9 136.0 - 726.6 619.5 - 128 19 20.6 - 109 107.4
compute-76-03 - 1007.4 16.3 120.5 - 991.1 886.9 - 128 20 23.7 - 108 104.3
compute-76-04 - 1007.4 16.4 80.0 - 991.0 927.4 - 128 15 29.6 - 113 98.4
compute-76-05 - 1007.4 17.5 134.0 - 989.9 873.4 - 128 30 24.2 - 98 103.8
compute-76-07 - 1007.4 43.9 184.0 - 963.5 823.4 - 128 24 24.1 - 104 103.9
compute-76-08 - 1007.4 18.0 60.0 - 989.4 947.4 - 128 26 26.1 - 102 101.9
compute-76-09 - 1007.4 21.4 60.0 - 986.0 947.4 - 128 26 25.4 - 102 102.6
compute-76-10 - 1007.4 16.2 120.0 - 991.2 887.4 - 128 20 36.5 - 108 91.5
compute-76-11 - 1007.4 18.6 63.0 - 988.8 944.4 - 128 27 26.0 - 101 102.0
compute-76-12 - 1007.4 36.0 108.0 - 971.4 899.4 - 128 29 29.1 - 99 98.9
compute-76-13 - 1007.4 17.8 920.0 - 989.6 87.4 - 128 36 60.6 - 92 67.4
compute-76-14 - 1007.4 18.9 80.0 - 988.5 927.4 - 128 31 27.0 - 97 101.0
compute-84-01 - 881.1 99.1 123.0 - 782.0 758.1 - 112 21 20.0 - 91 92.0
compute-93-01 - 503.8 19.4 76.0 - 484.4 427.8 - 64 11 11.2 - 53 52.8
compute-93-02 - 755.6 16.7 120.0 - 738.9 635.6 - 72 20 20.1 - 52 51.9
compute-93-03 - 755.6 19.8 136.0 - 735.8 619.6 - 72 21 21.1 - 51 50.9
compute-93-04 - 755.6 26.6 152.0 - 729.0 603.6 - 72 22 21.9 - 50 50.1
======= ===== ====== ==== ==== =====
Totals 35126.6 1206.4 6439.5 4488 1093 1096.3
==> 3.4% 18.3% ==> 24.4% 24.4%
Most unreserved/unused memory (947.4/990.1GB) is on compute-75-01 with 102/102.0 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 17.8 -0.0 - 1493.6 1511.4 - 192 0 0.1 - 192 191.9
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 16.7 0.0 - 1999.6 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 1597.4 1920.0 - 1426.5 1103.9 - 56 32 1.7 - 24 54.3
======= ===== ====== ==== ==== =====
Totals 6551.6 1631.9 1920.0 344 32 1.8
==> 24.9% 29.3% ==> 9.3% 0.5%
Most unreserved/unused memory (2016.3/1999.6GB) is on compute-93-05 with 96/96.0 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 02/18/26 and 02/25/26
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
mcgowenm 1/12 0.00 2.2 4.0 0.0 0.0 0.0
hpc 4/13 0.00 20.4 10.0 0.0 0.0 0.0
pappalardop 28/28 0.00 40.8 282.4 0.0 0.0 198213.2 > 2.5
martinezl2 4/24 0.00 1.7 40.0 0.1 0.0 568.7 > 2.5
gouldingt 1/16 0.02 6.2 192.0 3.9 3.9 49.1 > 2.5
kweskinm 1/26 0.02 56.4 260.0 25.3 17.8 10.3 > 2.5
gonzalezv 2/64 0.04 2.1 960.0 712.4 343.9 1.3
carlsenm 1/1 0.10 99.5 10.0 0.0 0.0 293.1 > 2.5
willishr 46/256 0.26 30.3 80.3 19.6 3.1 4.1 > 2.5
coellogarridoa 19/109 0.26 297.5 477.5 332.2 4.5 1.4
szieba 53/2120 0.28 73.7 0.0 177.4 2.5 0.0
johnsonsj 285/570 0.35 55.7 40.0 19.6 19.3 2.0
zehnpfennigj 1/20 0.59 75.8 200.0 214.3 21.2 0.9
niez 16/416 0.72 3.6 786.6 1021.9 640.8 0.8
mghahrem 9/720 0.86 55.6 0.0 30.6 29.4 0.0
pcristof 40/240 0.90 68.9 90.0 10.1 1.6 8.9 > 2.5
castanedaricos 33/1180 1.19 37.3 382.3 154.3 62.5 2.5
qzhu 16/142 1.56 79.5 162.4 25.5 12.8 6.4 > 2.5
friedmans2 120/480 1.69 26.5 160.0 18.2 9.1 8.8 > 2.5
athalappila 337/3033 1.82 57.9 900.0 175.3 0.6 5.1 > 2.5
ramosi 4/40 1.90 102.9 6.0 16.3 6.4 0.4
bourkeb 25/305 2.20 87.0 440.9 80.6 14.5 5.5 > 2.5
hchong 32263/32205 2.61 6.6 11.1 0.3 0.1 33.9 > 2.5
santossam 22/402 2.99 5.0 120.0 7.0 5.1 17.2 > 2.5
jhora 276/8832 4.16 12.7 60.0 107.2 7.6 0.6
campanam 118/3992 7.14 88.7 395.9 7.7 7.1 51.6 > 2.5
suttonm 990/990 7.77 120.9 85.5 11.7 9.6 7.3 > 2.5
kistlerl 179/179 8.82 99.4 24.4 3.1 2.1 8.0 > 2.5
beckerm 500/1390 9.11 60.6 119.2 22.5 19.5 5.3 > 2.5
uribeje 115/1296 11.68 103.4 367.9 24.4 5.8 15.1 > 2.5
collinsa 627/9896 42.49 80.2 279.5 56.7 16.8 4.9 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 36136/68997 111.50 78.0 234.6 48.4 16.4 4.9 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
gonzalezv 2/2 0.03 79.5 960.0 686.0 327.5 1.4
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 2/2 0.03 79.5 960.0 686.0 327.5 1.4
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
Disk Usage & Quota
As of Sun Mar 1 17:06:02 EST 2026
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 17.44T 4.92T 78%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 43.96T 288.54T 14%/2% /data/public
gpfs02:public 800.00T 482.13T 317.87T 61%/34% /scratch/public
gpfs02:nmnh_bradys 25.00T 18.61T 6.39T 75%/58% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 99.00T 21.00T 83%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 20.38T 4.62T 82%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 53.53T 6.47T 90%/24% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.52T 13.48T 47%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 27.90T 7.10T 80%/39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 55.94T 4.06T 94%/25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 54.45T 35.55T 61%/2% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 33.58T 11.42T 75%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 152.05G 9.85T 2%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89%/2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 244.38T 105.62T 70%/12% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 184.00T 69.50T 114.50T 38%/8% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 21.72T 13.28T 63%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 10.11T 4.89T 68%/18% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 5.93T 139.07T 5%/2% /scratch/sylvain
gpfs02:usda_sel 25.00T 8.44T 16.56T 34%/30% /scratch/usda_sel
gpfs02:wrbu 50.00T 41.12T 8.88T 83%/14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 102.38T 72.62T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.58T 25.42T 37%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 16.22T 23.78T 41%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 29.45T 40.55T 43%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 264.67T 117.79T 146.88T 45%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.71T 19.29T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 3.08T 46.92T 7%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 401.11T 348.89T 54%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.03T 11.97T 41%/1% /store/admin
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as
well as plots of disk usage by user or by device (for the past 90 or 240 days,
respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of
inodes and become full even though free space remains, because it can no longer
keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk
space used; if it is much larger, the disk can become unusable before it fills up.
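This rule of thumb can be checked mechanically against the Capacity column. A sketch with a few (space%, inode%) pairs transcribed from the table above:

```python
# Flag filesystems whose inode usage outpaces their space usage,
# i.e. candidates to run out of inodes before running out of space.
# (space%, inode%) pairs transcribed from the Capacity column above.
capacity = {"/scratch/sao_tess": (47, 83), "/scratch/schultzt": (63, 75),
            "/scratch/public": (61, 34)}

at_risk = [fs for fs, (space, inodes) in capacity.items() if inodes > space]
print(at_risk)
```

By this check, /scratch/sao_tess (47% space, 83% inodes) is the clearest example in the current report.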
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 364.9GB 95.0% 2.87M 28.7% *** Brian Bourke, WRBU - bourkeb
/home 361.3GB 94.1% 0.23M 2.3% Juan Uribe, NMNH - uribeje
/home 350.9GB 91.4% 0.28M 2.8% Paul Cristofari, SAO/SSP - pcristof
/home 345.5GB 90.0% 0.70M 7.0% Adam Foster, SAO/HEA - afoster
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 7.6% *** Ting Wang, NMNH - wangt2
/scratch/public 14.20TB 94.7% 36.05M 90.5% Alberto Coello Garrido, NMNH - coellogarridoa
/scratch/public 14.00TB 93.3% 16.71M 41.9% Brian Bourke, WRBU - bourkeb
/scratch/public 13.70TB 91.3% 24.73M 62.1% Qindan Zhu, SAO/AMP - qzhu
/scratch/public 13.50TB 90.0% 2.09M 5.3% Solomon Chak, SERC - chaks
/scratch/public 13.20TB 88.0% 4.20M 10.5% Kevin Mulder, NZP - mulderk
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.73TB 94.6% - - Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
64-17 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-06 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-07 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 21.5G 1.72T 1.2% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 21.5G 1.72T 1.2% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 110.6G 1.64T 6.2% | 249.9G 1.50T 14.0% | 2.26
65-20 1.75T 118.8G 1.63T 6.6% | 0.0G 1.75T 0.0% | 0.00
65-21 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-26 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-29 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-01 5.24T 37.9G 5.20T 0.7% | 199.7G 5.04T 3.7% | 5.27
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 113.7G 6.87T 1.6% | 400.4G 6.59T 5.6% | 3.52
76-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-05 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-06 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-07 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-08 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
76-09 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
76-10 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-11 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
76-12 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
76-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-14 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-06 1.64T 11.3G 1.62T 0.7% | 0.0G 1.64T 0.0% | 0.00
---------------------------------------------------------------
Total 127.9T 1.17T 126.7T 0.9% | 3.37T 124.5T 2.6% | 2.89
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.