Hydra-7@ADC Status
Usage
Current snapshot, sorted by node Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed.
Usage vs. time for the past 7, 15, or 30 days, optionally with a selected user's usage highlighted.
As of Thu Dec 18 16:37:03 2025: #CPUs/nodes 5740/74, 2 down.
Loads:
head node: 1.98, login nodes: 0.44, 0.00, 0.13, 0.00; NSDs: 0.41, 0.00, 0.08, 4.17, 3.47; licenses: none used.
Queues status: none disabled, 18 need attention, none in error state.
15 users with running jobs (slots/jobs):
Current load: 537.3, #running (slots/jobs): 1,111/82, usage: 19.4%, efficiency: 48.4%
1 user with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 1/85/1,360
89 users have/had running or queued jobs over the past 7 days, 99 over the past 15 days, and 119 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Thursday, 18-Dec-2025 16:42:56 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 0:59.
Warnings
Oversubscribed Jobs
As of Thu Dec 18 16:37:04 EST 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Thu Dec 18 16:37:04 EST 2025 (28 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1111/82, 1 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11481231 AssembleBarcode breusingc +6:03 16 1.4% mThM.q 65-14 243
11481232 metaspades breusingc +7:21 16 6.4% mThM.q 76-13 107
11596723 ombro_ges_disc_ hchong +6:02 1 0.6% sThC.q 65-14 36809
11596723 ombro_ges_disc_ hchong +6:02 1 0.4% sThC.q 65-14 36916
11596723 ombro_ges_disc_ hchong +6:02 1 0.4% sThC.q 65-14 36938
(more by hchong)
11759799 gbz_recall_chr0 niez +5:19 32 8.2% mThC.q 93-03
11759800 gbz_recall_chr0 niez +5:19 32 8.5% mThC.q 93-02
11770485 arms.coi quattrinia +2:05 4 3.1% lThC.q 65-05
11770751 spades_genome_t gouldingt +1:12 20 22.3% mThC.q 64-14
11772787 pre_25 jhora 02:35 32 13.0% sThM.q 76-02
11772788 pre_26 jhora 02:35 32 17.6% sThM.q 76-01
⇒ Equivalent to 182.8 underused CPUs: 204 CPUs used at 10.4% on average.
To see them all use:
'q+ -ineff -u hchong' (20)
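For context, the "underused CPUs" summary is straightforward arithmetic over the flagged jobs: each contributes nPEs × (1 − cpu%/100). A minimal sketch of that bookkeeping (the ~20 hchong single-slot jobs are approximated at 0.5% each; q+ does the real accounting):

  # Sketch: how the "underused CPUs" summary can be derived from the table above.
  # Each tuple is (nPEs, cpu%) for one flagged job.
  jobs = [(16, 1.4), (16, 6.4), (32, 8.2), (32, 8.5), (4, 3.1),
          (20, 22.3), (32, 13.0), (32, 17.6)] + [(1, 0.5)] * 20
  total_pes = sum(n for n, _ in jobs)
  mean_cpu = sum(n * p for n, p in jobs) / total_pes      # slot-weighted mean cpu%
  underused = total_pes * (1 - mean_cpu / 100)
  print(f"{total_pes} CPUs used at {mean_cpu:.1f}% on average "
        f"-> {underused:.1f} underused CPUs")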
Nodes with Excess Load
As of Thu Dec 18 16:37:05 EST 2025 (16 nodes have a high load, offset=1.5)
#slots excess
node #CPUs used load load
-----------------------------------
65-07 64 0 3.6 3.6 *
65-19 64 1 4.4 3.4 *
65-20 64 1 4.7 3.7 *
65-22 64 0 2.2 2.2 *
65-23 64 1 4.6 3.6 *
65-24 64 0 3.6 3.6 *
65-26 64 1 5.4 4.4 *
65-27 64 1 4.9 3.9 *
65-29 64 0 4.7 4.7 *
65-30 64 0 4.3 4.3 *
76-03 192 18 27.1 9.1 *
76-04 192 16 24.1 8.1 *
76-05 128 0 3.7 3.7 *
76-08 128 0 3.1 3.1 *
76-10 128 0 3.9 3.9 *
76-11 128 0 3.6 3.6 *
Total excess load = 68.7
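The excess-load column is the node's reported load minus the slots it has booked; a node is flagged when that excess exceeds the offset (1.5 here). A sketch, assuming that definition and using a few rows from the table:

  # Sketch: flag nodes whose load exceeds their booked slots by more than `offset`.
  offset = 1.5
  nodes = {                       # node: (nCPU, slots used, load), rows from the table
      "65-19": (64, 1, 4.4),
      "76-03": (192, 18, 27.1),
      "76-05": (128, 0, 3.7),
  }
  for name, (ncpu, used, load) in nodes.items():
      excess = load - used
      if excess > offset:
          print(f"{name:6s} {ncpu:4d} {used:3d} {load:6.1f} {excess:6.1f} *")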
High Memory Jobs
Statistics
User nSlots memory memory vmem maxvmem ratio
Name used reserved used used used [TB] resd/maxvm
--------------------------------------------------------------------------------------------------
jhora 480 51.2% 0.8789 32.2% 0.0176 2.3% 0.8392 0.8485 1.0
uribeje 16 1.7% 0.7812 28.7% 0.2351 30.7% 0.3289 0.3311 2.4
breusingc 336 35.8% 0.2500 9.2% 0.3544 46.3% 0.2144 0.5765 0.4
bourkeb 8 0.9% 0.2500 9.2% 0.0068 0.9% 0.0056 0.0139 18.0
coellogarridoa 80 8.5% 0.2344 8.6% 0.0622 8.1% 0.0623 0.0625 3.8
gouldingt 10 1.1% 0.2148 7.9% 0.0709 9.3% 0.1054 0.1408 1.5
medeirosi 8 0.9% 0.1172 4.3% 0.0182 2.4% 0.0002 0.0347 3.4
==================================================================================================
Total 938 2.7266 0.7651 1.5559 2.0080 1.4
Warnings
26 high memory jobs produced a warning:
1 for bourkeb
18 for breusingc
2 for coellogarridoa
1 for gouldingt
2 for jhora
2 for uribeje
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d.
Current Usage by Queue
Queue(s) (slots in use)                            Total  Limit  Fill factor  Efficiency
sThC.q=9     mThC.q=124   lThC.q=1    uThC.q=0       134   4928       2.7%      367.6%
sThM.q=488   mThM.q=338   lThM.q=80   uThM.q=16      922   4552      20.3%       52.5%
sTgpu.q=0    mTgpu.q=0    lTgpu.q=0   qgpu.iq=0        0    104       0.0%
uTxlM.rq=0                                             0    536       0.0%
lThMuVM.tq=0                                           0    384       0.0%
lTb2g.q=0                                              0      2       0.0%
lTIO.sq=0                                              0      8       0.0%
lTWFM.sq=0                                             0      4       0.0%
qrsh.iq=13                                            13     68      19.1%        1.8%
Total: 1069
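Here the fill factor is running slots over the queue group's slot limit (e.g., 922/4552 ≈ 20.3% for the hiMem queues); efficiency appears to be the measured CPU load relative to those running slots, which is why it can exceed 100% when jobs use more threads than the slots they reserved. A sketch of that arithmetic (the load figures are illustrative, back-computed from the percentages above):

  # Sketch: fill factor and (assumed) efficiency per queue group.
  groups = {                        # group: (running slots, slot limit, measured CPU load)
      "ThC":  (134, 4928, 492.6),   # load values are illustrative, not taken from the page
      "ThM":  (922, 4552, 484.1),
      "qrsh": (13,  68,   0.2),
  }
  for name, (running, limit, load) in groups.items():
      fill = 100.0 * running / limit
      eff = 100.0 * load / running if running else 0.0
      print(f"{name:5s} fill={fill:5.1f}%  efficiency={eff:6.1f}%")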
Avail Slots/Wait Job(s)
Available Slots
As of Thu Dec 18 16:37:04 EST 2025
4154 avail(slots), free(load)=5113.4, unresd(mem)=36926.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 4154 unused(load) 5113.4 ie: 81.1% 99.9%
unreserved(mem) 36.1T unused(mem) 37.9T ie: 90.6% 95.1%
unreserved(mem) 8.9G unused(mem) 9.3G per unused(slots)
3756 avail(slots), free(load)=4674.1, unresd(mem)=33067.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3756 unused(load) 4674.1 ie: 80.3% 99.9%
unreserved(mem) 32.3T unused(mem) 34.0T ie: 90.3% 95.0%
unreserved(mem) 8.8G unused(mem) 9.3G per unused(slots)
404 avail(slots), free(load)=535.8, unresd(mem)=7761.9G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 404 unused(load) 535.8 ie: 75.4% 100.0%
unreserved(mem) 7.6T unused(mem) 7.7T ie: 96.3% 97.5%
unreserved(mem) 19.2G unused(mem) 19.5G per unused(slots)
104 avail(slots), free(load)=104.0, unresd(mem)=754.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 104 unused(load) 104.0 ie: 100.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 100.0% 95.1%
unreserved(mem) 7.3G unused(mem) 6.9G per unused(slots)
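The per-slot figures in each block are just the pool totals divided by the number of unused slots; for @hicpu-hosts, 36.1 T of unreserved memory over 4,154 unused slots is about 8.9 G per slot. A sketch, assuming 1 T = 1024 G:

  # Sketch: unreserved/unused memory per unused slot (@hicpu-hosts figures above).
  unused_slots = 4154
  unreserved_gb = 36.1 * 1024      # 36.1T unreserved
  unused_gb = 37.9 * 1024          # 37.9T unused
  print(f"unreserved per unused slot: {unreserved_gb / unused_slots:.1f}G")   # ~8.9G
  print(f"unused per unused slot:     {unused_gb / unused_slots:.1f}G")       # ~9.3G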
GPU Usage
Thu Dec 18 16:37:10 EST 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 14.2 489.1 - 4/0 - 64 0 0.0 - 64 64.0
compute-79-01 - 125.5 12.0 113.5 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 11.0 114.5 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=0 (0.0%)
Waiting Job(s)
As of Thu Dec 18 16:37:05 EST 2025
1 job waiting for breusingc :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11481232 metaspades breusingc +9:04 16 0.0 mThM.q 488-572:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hM_slots_per_user/2 slots=336/585 57.4% for breusingc in queue mThM.q
max_slots_per_user/1 slots=336/840 40.0% for breusingc
max_mem_res_per_user/2 mem_res=256.0G/8.944T 2.8% for breusingc in queue uThM.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=32/110 29.1% for *
total_slots/1 slots=1111/5960 18.6% for *
total_mem_res/2 mem_res=2.727T/35.78T 7.6% for * in queue uThM.q
total_mem_res/1 mem_res=1.328T/39.94T 3.3% for * in queue uThC.q
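The %used column in these quota tables is the current value over the rule's limit (e.g., 336/585 slots ≈ 57.4%). One plausible reading of why the metaspades array above is still waiting: with 336 mThM.q slots already held against breusingc's 585-slot per-user cap, only about 15 more 16-slot tasks can run at once, so the remaining tasks queue. A sketch of that arithmetic, assuming those limits are what gates the array:

  # Sketch: how many more 16-slot tasks can start under the per-user limits listed above.
  slots_per_task = 16
  in_use = 336                  # breusingc's running mThM.q slots
  per_queue_cap = 585           # max_hM_slots_per_user for mThM.q
  overall_cap = 840             # max_slots_per_user
  headroom = min(per_queue_cap, overall_cap) - in_use
  print(f"%used of mThM.q cap: {100.0 * in_use / per_queue_cap:.1f}%")         # 57.4%
  print(f"additional tasks that could start: {headroom // slots_per_task}")    # 15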
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d.
Current Memory Quota Usage
As of Thu Dec 18 16:37:05 EST 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=1.328T/39.94T 3.3% for * in queue uThC.q
total_mem_res/2 mem_res=2.727T/35.78T 7.6% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 16.5 2.2 - 487.0 501.3 - 32 1 2.1 - 31 29.9
compute-64-18 - 503.5 14.3 0.2 - 489.2 503.3 - 32 16 16.0 - 16 16.0
compute-65-02 - 503.5 19.0 0.0 - 484.5 503.5 - 64 16 16.0 - 48 48.0
compute-65-03 - 503.5 17.3 0.0 - 486.2 503.5 - 64 16 16.0 - 48 48.0
compute-65-04 - 503.5 46.2 0.0 - 457.3 503.5 - 64 16 9.2 - 48 54.8
compute-65-06 - 503.5 15.2 0.0 - 488.3 503.5 - 64 16 1.4 - 48 62.6
compute-65-07 - 503.5 14.1 0.0 - 489.4 503.5 - 64 0 3.5 - 64 60.5
compute-65-09 - 503.5 16.2 0.0 - 487.3 503.5 - 64 16 14.2 - 48 49.8
compute-65-10 - 503.5 17.7 0.0 - 485.8 503.5 - 64 16 12.5 - 48 51.5
compute-65-11 - 503.5 21.8 60.0 - 481.7 443.5 - 64 32 1.3 - 32 62.7
compute-65-12 - 503.5 226.4 400.0 - 277.1 103.5 - 64 8 4.8 - 56 59.2
compute-65-13 - 503.5 18.6 0.0 - 484.9 503.5 - 64 16 11.8 - 48 52.2
compute-65-15 - 503.5 19.1 0.0 - 484.4 503.5 - 64 16 16.1 - 48 48.0
compute-65-16 - 503.5 21.5 60.0 - 482.0 443.5 - 64 32 4.2 - 32 59.8
compute-65-17 - 503.5 18.0 0.0 - 485.5 503.5 - 64 16 16.0 - 48 48.0
compute-65-18 - 503.5 16.8 0.0 - 486.7 503.5 - 64 16 16.0 - 48 48.0
compute-65-19 - 503.5 13.5 2.0 - 490.0 501.5 - 64 1 4.4 - 63 59.6
compute-65-20 - 503.5 14.6 6.0 - 488.9 497.5 - 64 1 4.7 - 63 59.3
compute-65-21 - 503.5 16.4 0.0 - 487.1 503.5 - 64 16 16.2 - 48 47.8
compute-65-22 - 503.5 14.3 0.0 - 489.2 503.5 - 64 0 2.2 - 64 61.8
compute-65-23 - 503.5 13.6 2.0 - 489.9 501.5 - 64 1 4.6 - 63 59.4
compute-65-24 - 503.5 14.2 0.0 - 489.3 503.5 - 64 0 3.6 - 64 60.4
compute-65-25 - 503.5 21.5 60.0 - 482.0 443.5 - 64 32 3.8 - 32 60.2
compute-65-26 - 503.5 14.8 2.0 - 488.7 501.5 - 64 1 5.4 - 63 58.6
compute-65-27 - 503.5 14.6 2.0 - 488.9 501.5 - 64 1 4.9 - 63 59.1
compute-65-28 - 503.5 19.1 32.0 - 484.4 471.5 - 64 4 4.1 - 60 59.9
compute-65-29 - 503.5 15.0 0.0 - 488.5 503.5 - 64 0 4.7 - 64 59.3
compute-65-30 - 503.5 13.1 0.0 - 490.4 503.5 - 64 0 4.3 - 64 59.7
compute-75-01 - 1007.5 21.7 256.1 - 985.8 751.4 - 128 8 8.0 - 120 120.0
compute-75-02 - 1007.5 19.4 0.0 - 988.1 1007.5 - 128 16 16.1 - 112 112.0
compute-75-03 - 755.5 33.0 0.0 - 722.5 755.5 - 128 16 10.4 - 112 117.6
compute-75-04 - 755.0 27.6 59.5 - 727.4 695.5 - 128 32 4.4 - 96 123.6
compute-75-05 - 755.5 22.5 0.0 - 733.0 755.5 - 128 16 14.0 - 112 114.0
compute-75-06 - 755.5 108.9 0.0 - 646.6 755.5 - 128 32 23.5 - 96 104.5
compute-75-07 - 755.5 18.2 0.0 - 737.3 755.5 - 128 16 16.0 - 112 112.0
compute-76-03 - 1007.4 24.5 90.5 - 982.9 916.9 - 128 18 18.1 - 110 109.9
compute-76-04 - 1007.4 22.8 0.0 - 984.6 1007.4 - 128 16 16.1 - 112 111.9
compute-76-05 - 1007.4 29.3 0.0 - 978.1 1007.4 - 128 0 3.7 - 128 124.3
compute-76-06 - 1007.4 29.1 60.0 - 978.3 947.4 - 128 32 3.9 - 96 124.1
compute-76-07 - 1007.4 33.5 120.0 - 973.9 887.4 - 128 40 40.0 - 88 88.0
compute-76-08 - 1007.4 30.6 0.0 - 976.8 1007.4 - 128 0 3.1 - 128 124.9
compute-76-09 - 1007.4 28.4 60.0 - 979.0 947.4 - 128 32 4.0 - 96 124.0
compute-76-10 - 1007.4 20.4 0.0 - 987.0 1007.4 - 128 0 3.6 - 128 124.4
compute-76-11 - 1007.4 20.3 0.0 - 987.1 1007.4 - 128 0 3.6 - 128 124.4
compute-76-12 - 1007.4 28.2 60.0 - 979.2 947.4 - 128 32 3.3 - 96 124.7
compute-76-13 - 1007.4 196.7 520.0 - 810.7 487.4 - 128 64 44.5 - 64 83.5
compute-76-14 - 1007.4 27.8 120.0 - 979.6 887.4 - 128 36 5.0 - 92 123.0
compute-84-01 - 881.1 105.6 60.0 - 775.5 821.1 - 112 32 3.4 - 80 108.6
compute-93-01 - 503.8 21.6 60.0 - 482.2 443.8 - 64 32 3.9 - 32 60.1
compute-93-02 - 755.6 38.8 400.0 - 716.8 355.6 - 72 32 1.5 - 40 70.5
compute-93-03 - 755.6 196.9 620.0 - 558.7 135.6 - 72 42 2.8 - 30 69.2
compute-93-04 - 755.6 23.5 60.0 - 732.1 695.6 - 72 32 5.8 - 40 66.2
======= ===== ====== ==== ==== =====
Totals 35630.5 1832.7 3174.5 4552 882 482.7
==> 5.1% 8.9% ==> 19.4% 10.6%
Most unreserved/unused memory (1007.5/988.1GB) is on compute-75-02 with 112/112.0 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 79.2 60.4 - 1432.2 1451.0 - 192 32 8.7 - 160 183.3
compute-76-02 - 1511.4 72.0 60.4 - 1439.4 1451.0 - 192 32 9.8 - 160 182.2
compute-93-05 - 2016.3 24.8 119.9 - 1991.5 1896.4 - 96 36 4.8 - 60 91.2
compute-93-06 - 3023.9 22.0 60.4 - 3001.9 2963.5 - 56 32 1.4 - 24 54.6
======= ===== ====== ==== ==== =====
Totals 8063.0 198.0 301.1 536 132 24.7
==> 2.5% 3.7% ==> 24.6% 4.6%
Most unreserved/unused memory (2963.5/3001.9GB) is on compute-93-06 with 24/54.6 slots/CPUs free/unused.
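The "Most unreserved/unused memory" line at the end of each host group simply picks the node with the largest unreserved memory (avail − resd) among the rows above. A minimal sketch of that selection, using a few @himem-hosts rows:

  # Sketch: pick the node with the most unreserved memory (a few @himem-hosts rows).
  nodes = {   # node: (avail GB, used GB, resd GB, nCPU, slots used, load)
      "compute-75-02": (1007.5, 19.4, 0.0, 128, 16, 16.1),
      "compute-76-13": (1007.4, 196.7, 520.0, 128, 64, 44.5),
      "compute-65-12": (503.5, 226.4, 400.0, 64, 8, 4.8),
  }
  best = max(nodes, key=lambda n: nodes[n][0] - nodes[n][2])   # largest avail - resd
  avail, used, resd, ncpu, slots, load = nodes[best]
  print(f"Most unreserved/unused memory ({avail - resd:.1f}/{avail - used:.1f}GB) "
        f"is on {best} with {ncpu - slots}/{ncpu - load:.1f} slots/CPUs free/unused.")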
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 12/10/25 and 12/17/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
graujh 2/64 0.00 1.5 240.0 0.0 0.0 6403.2 > 2.5
ariasc 6/55 0.00 9.1 254.3 0.6 0.0 461.3 > 2.5
babb 2/2 0.00 98.4 19.8 0.3 0.2 77.0 > 2.5
hpc 1/1 0.00 0.1 0.0 0.0 0.0 0.0
auscavitchs 13/52 0.00 62.4 64.0 18.6 4.7 3.4 > 2.5
bourkeb 7/56 0.02 15.0 256.0 195.9 13.4 1.3
kweskinm 39/180 0.03 29.8 60.8 35.1 1.8 1.7
wirshingh 41/212 0.03 30.8 60.4 28.9 2.5 2.1
longk 37/196 0.03 29.9 60.3 29.9 2.5 2.0
castanedaricos 36/190 0.03 29.1 60.3 35.0 2.9 1.7
seim 29/140 0.03 21.5 60.5 33.8 1.6 1.8
collensab 37/196 0.03 26.9 60.3 28.3 2.2 2.1
kramerb 41/212 0.03 29.3 60.4 29.4 2.7 2.1
vagac 45/236 0.03 29.9 60.4 24.7 2.2 2.4
macdonaldk 37/196 0.03 27.6 60.3 27.2 2.2 2.2
floresm 37/196 0.04 27.4 60.3 31.6 2.8 1.9
hinckleya 7/18 0.04 80.9 105.6 1.6 1.5 65.2 > 2.5
bushsl 50/258 0.04 30.4 60.4 27.9 3.3 2.2
rotzeln 59/308 0.05 29.8 60.4 26.3 2.9 2.3
hawkinsmt 56/272 0.05 73.0 59.3 20.5 2.4 2.9 > 2.5
capadorhd 10/31 0.05 11410.2 64.0 11.5 11.3 5.6 > 2.5
morrisseyd 65/356 0.06 28.4 67.2 29.0 1.7 2.3
lyonss 38/194 0.07 22.8 60.2 31.9 0.8 1.9
murphykr 49/260 0.07 23.9 60.2 31.4 1.4 1.9
zarril 61/300 0.08 39.7 91.2 27.0 3.3 3.4 > 2.5
granquistm 53/276 0.08 23.0 55.4 29.9 1.3 1.9
macguigand 50/262 0.14 40.4 101.3 29.2 3.8 3.5 > 2.5
craigc 46/242 0.17 19.6 60.1 33.0 0.5 1.8
bennettkf 16/64 0.22 8.5 128.0 68.4 7.2 1.9
bornbuschs 2/16 0.28 80.0 320.0 62.0 15.1 5.2 > 2.5
kimcj 10/10 0.31 504.2 30.9 12.9 4.8 2.4
grossc2 211/1234 0.41 55.9 60.0 26.4 3.8 2.3
carlsenm 4/50 0.42 72.5 281.4 206.6 2.4 1.4
jhora 10/188 0.42 67.9 246.8 137.3 3.1 1.8
roa-varona 2/40 0.52 35.3 320.0 204.0 8.4 1.6
zehnpfennigj 42/232 0.68 60.8 193.3 199.5 20.1 1.0
mcgowenm 5/30 0.93 16.6 90.0 7.2 5.9 12.4 > 2.5
bakerd 1/8 1.05 12.6 400.0 0.1 0.0 4492.0 > 2.5
nevesk 639/5812 1.20 70.0 599.7 18.2 7.8 32.9 > 2.5
santosbe 24/520 1.44 93.1 123.8 25.7 9.8 4.8 > 2.5
gouldingt 11/178 1.65 35.2 203.8 50.1 31.3 4.1 > 2.5
qzhu 25/25 2.27 101.4 100.0 16.2 9.6 6.2 > 2.5
gallego-narbona 257/1726 2.45 108.6 631.9 40.8 3.1 15.5 > 2.5
campanam 12/192 3.15 60.3 212.8 115.0 40.2 1.9
horowitzj 4000/4120 3.16 78.6 47.8 5.3 1.1 9.1 > 2.5
coellogarridoa 3/120 3.18 98.9 120.0 8.6 7.3 14.0 > 2.5
uribeje 24/210 3.30 58.2 110.8 24.2 6.8 4.6 > 2.5
lealc 2/32 3.89 87.1 200.0 126.6 5.2 1.6
johnsonsj 147/800 4.28 39.8 69.2 44.4 9.8 1.6
beckerm 578/3794 5.20 64.7 11.6 27.6 18.8 0.4
mghahrem 15/120 5.39 74.7 536.4 180.7 17.6 3.0 > 2.5
byerlyp 45/330 21.95 23.4 28.3 5.7 2.2 5.0 > 2.5
collinsa 376/5180 29.49 79.7 322.0 96.5 24.5 3.3 > 2.5
medeirosi 200/410 103.27 70.1 120.7 48.0 0.3 2.5 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 7615/30402 201.76 69.7 160.8 54.4 6.8 3.0 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
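The last column is the mean reserved memory over the mean maxvmem; entries are marked "> 2.5" when a user reserved more than 2.5× what their jobs actually peaked at (the page computes the ratio from unrounded values, so it can differ slightly from the rounded GB columns). A sketch, assuming that threshold:

  # Sketch: reserved-to-maxvmem ratio and the "> 2.5" over-reservation flag.
  threshold = 2.5
  users = {                      # user: (mean reserved GB, mean maxvmem GB)
      "bourkeb":   (256.0, 195.9),
      "hawkinsmt": (59.3, 20.5),
      "bakerd":    (400.0, 0.1),
  }
  for user, (resd, maxvmem) in users.items():
      ratio = resd / maxvmem if maxvmem else float("inf")
      flag = " > 2.5" if ratio > threshold else ""
      print(f"{user:12s} ratio={ratio:7.1f}{flag}")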
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
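Several of these rule sets can match the same job; assuming Grid Engine's usual resource-quota semantics, the most restrictive matching limit is the one that binds. A simplified sketch of a user's effective slot cap in mThM.q, taken from the limits above:

  # Sketch: effective per-user slot cap in mThM.q = most restrictive matching limit.
  applicable = {
      "all users, all hiMem queues": 4680,   # users * queues sThM.q,mThM.q,lThM.q,uThM.q
      "per user, mThM.q":             585,   # users {*} queues {mThM.q}
      "per user, all queues":         840,   # users {*} to slots=840
  }
  cap = min(applicable.values())
  print(f"effective mThM.q slot cap per user: {cap}")   # 585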
Disk Usage & Quota
As of Thu Dec 18 11:06:02 EST 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 16.47T 5.89T 74%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 44.23T 288.27T 14%/2% /data/public
gpfs02:public 800.00T 451.49T 348.51T 57%/28% /scratch/public
gpfs02:nmnh_bradys 25.00T 18.49T 6.51T 74%/58% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 98.25T 21.75T 82%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 18.85T 6.15T 76%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 47.23T 12.77T 79%/22% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.44T 13.56T 46%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 28.66T 6.34T 82%/39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 50.25T 9.75T 84%/25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 53.34T 36.66T 60%/2% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 30.90T 14.10T 69%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 2.92T 7.08T 30%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89%/2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 235.93T 114.07T 68%/10% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 95.00T 60.93T 34.07T 65%/14% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 20.00T 15.00T 58%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 12.87T 2.13T 86%/19% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 87.05T 57.95T 61%/60% /scratch/sylvain
gpfs02:usda_sel 25.00T 5.48T 19.52T 22%/30% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.70T 9.30T 82%/14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 101.73T 73.27T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.30T 25.70T 36%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 14.60T 25.40T 37%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 28.41T 41.59T 41%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 265.00T 112.28T 152.72T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.50T 19.50T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 3.07T 46.93T 7%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 390.73T 359.27T 53%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.00T 12.00T 41%/1% /store/admin
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become unable to create new files even though free space remains.
The % of inodes should be lower or comparable to the % of disk space used.
If it is much larger, the disk can become unusable before it gets full.
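In other words, watch for filesystems where the inode percentage runs well ahead of the space percentage, e.g., /scratch/sao_tess above at 47%/83%. A small sketch of such a check on the Capacity column, assuming the space%/inode% format shown:

  # Sketch: flag filesystems whose inode usage runs well ahead of their space usage.
  capacities = {                    # mount: "space%/inode%", as in the Capacity column
      "/scratch/sao_tess": "47%/83%",
      "/scratch/schultzt": "58%/75%",
      "/home":             "74%/12%",
  }
  for mount, cap in capacities.items():
      space, inode = (int(x.rstrip("%")) for x in cap.split("/"))
      if inode > space:
          print(f"{mount}: {inode}% of inodes vs {space}% of space -- may run out of inodes first")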
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.91TB 86.9% 0.00M 0.0% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 363.8GB 94.7% 0.28M 2.8% Juan Uribe, NMNH - uribeje
/home 359.8GB 93.7% 2.84M 28.4% Brian Bourke, WRBU - bourkeb
/home 357.6GB 93.1% 2.10M 21.0% Michael Trizna, NMNH/BOL - triznam
/home 331.6GB 86.4% 0.26M 2.6% Paul Cristofari, SAO/SSP - pcristof
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 0.0% *** Ting Wang, NMNH - wangt2
/scratch/public 15.70TB 104.7% 26.07M 0.0% *** Zelong Nie, NMNH - niez
/scratch/public 15.00TB 100.0% 0.00M 0.0% *** Rebeka Tamasi Bottger, SAO/OIR - rbottger
/scratch/public 14.90TB 99.3% 1.55M 0.0% *** Juan Uribe, NMNH - uribeje
/scratch/public 14.20TB 94.7% 0.08M 0.2% Qindan Zhu, SAO/AMP - qzhu
/scratch/public 14.20TB 94.7% 4.24M 0.0% Kevin Mulder, NZP - mulderk
/scratch/public 13.80TB 92.0% 0.12M 0.0% Madeleine Becker, NZCBI - beckerm
/scratch/public 13.50TB 90.0% 2.09M 0.0% Solomon Chak, SERC - chaks
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.91TB 98.3% - - *** Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.71T 43.0G 1.67T 2.5% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-02 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-03 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-04 414.7G 21.2G 393.5G 5.1% | 0.0G 3.29T 0.0% | 0.00
65-05 414.7G 21.2G 393.5G 5.1% | 0.0G 3.49T 0.0% | 0.00
65-06 414.7G 21.1G 393.6G 5.1% | 0.0G 3.29T 0.0% | 0.00
65-09 3.46T 45.1G 3.42T 1.3% | 167.9G 3.29T 4.7% | 3.73
65-10 414.7G 21.0G 393.6G 5.1% | 0.0G 1.55T 0.0% | 0.00
65-11 414.7G 22.6G 392.1G 5.4% | 0.0G 1.75T 0.0% | 0.00
65-12 414.7G 20.9G 393.7G 5.0% | 0.0G 1.75T 0.0% | 0.00
65-13 414.7G 21.6G 393.0G 5.2% | 0.0G 1.55T 0.0% | 0.00
65-14 414.7G 21.6G 393.0G 5.2% | 0.0G 1.55T 0.0% | 0.00
65-15 414.7G 19.7G 395.0G 4.7% | 0.0G 1.55T 0.0% | 0.00
65-16 414.7G 19.6G 395.1G 4.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-18 414.7G 20.9G 393.7G 5.0% | 0.0G 1.55T 0.0% | 0.00
65-19 414.7G 22.8G 391.9G 5.5% | 0.0G 1.75T 0.0% | 0.00
65-20 414.7G 21.0G 393.6G 5.1% | 414.7G 0.0G 100.0% | 19.72
65-21 414.7G 21.1G 393.6G 5.1% | 0.0G 1.55T 0.0% | 0.00
65-22 414.7G 21.0G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-23 414.7G 22.9G 391.8G 5.5% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 414.7G 21.0G 393.7G 5.1% | 414.7G 0.0G 100.0% | 19.79
65-26 414.7G 21.1G 393.6G 5.1% | 414.7G 0.0G 100.0% | 19.69
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 414.7G 21.1G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-29 414.7G 21.0G 393.6G 5.1% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.98
75-03 414.7G 21.0G 393.7G 5.1% | 0.0G 6.79T 0.0% | 0.00
75-04 6.95T 95.2G 6.86T 1.3% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 50.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.98
75-06 6.95T 68.6G 6.88T 1.0% | 367.6G 6.59T 5.2% | 5.36
75-07 6.95T 67.6G 6.88T 0.9% | 166.9G 6.79T 2.3% | 2.47
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
76-13 1.75T 101.4G 1.65T 5.7% | 199.7G 1.55T 11.2% | 1.97
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.95T 68.6G 6.88T 1.0% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 94.42T 1.27T 93.15T 1.3% | 3.46T 123.9T 3.7% | 2.73
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.