Hydra-7 Status
Usage
Current snapshot (sortable by node name, nCPU, usage, load, memory, reserved memory, or used memory) and usage-vs-time plots (7-, 15-, or 30-day ranges, with optional per-user highlighting).
As of Tue Nov 12 18:47:03 2024: #CPUs/nodes 5824/76, 0 down.
Loads:
head node: 0.23, login nodes: 3.08, 0.00, 0.04, 0.18; NSDs: 2.05, 0.66; licenses: 4 idlrt used.
Queues status: 1 disabled, none need attention, none in error state.
16 users with running jobs (slots/jobs):
Current load: 1233.3, #running (slots/jobs): 1,581/484, usage: 27.1%, efficiency: 78.0%
no queued job in any of the queues.
52 users have/had running or queued jobs over the past 7 days, 73 over the past 15 days, and 107 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory; view the past load for 7, 15, or 30 days; and highlight a given user by selecting the corresponding options in the drop-down menus.
This page was last updated on Tuesday, 12-Nov-2024 18:52:23 EST
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 1:15.
Warnings
Oversubscribed Jobs
As of Tue Nov 12 18:47:03 EST 2024 (0 oversubscribed jobs)
Inefficient Jobs
As of Tue Nov 12 18:47:04 EST 2024 (20 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1571/483, 0 queued (job), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
3786757 IQ_50p_iqtree morrisseyd +1:08 64 19.4% lThC.q 76-08
3786758 IQ_75p_iqtree morrisseyd +1:08 64 22.8% lThC.q 76-10
3788128 fit188570092 wbrennom 09:09 50 24.9% mThC.q 76-12
3788090 snparcher_hydra figueiroh 13:28 40 31.9% sThM.q 93-05
3788188 redloci85 coilk 08:23 12 29.0% lThM.q 64-18
3788220 GATK_genotype_m heckwolfm 08:11 6 22.2% lThM.q 65-14
3788260 GATK_genotype_m heckwolfm 08:11 6 21.9% lThM.q 65-02
3788264 GATK_genotype_m heckwolfm 08:11 6 31.9% lThM.q 75-07
(more by heckwolfm)
⇒ Equivalent to 241.4 underused CPUs: 320 CPUs used at 24.6% on average.
To see them all, use 'q+ -ineff -u heckwolfm' (15 jobs).
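The "equivalent underused CPUs" figure above can be reproduced from the table: it is, presumably, the summed gap between the allocated slots and the CPUs actually kept busy. A minimal Python sketch of that arithmetic, using only a subset of the rows above (so the numbers will not match 241.4 exactly; the formula is inferred from the report, not taken from the q+ implementation):

# Sketch: "underused CPUs" assumed to be sum(nPEs * (1 - cpu%)) over the flagged jobs.
jobs = [                       # (user, nPEs, cpu fraction) -- first four rows of the table
    ("morrisseyd", 64, 0.194),
    ("morrisseyd", 64, 0.228),
    ("wbrennom",   50, 0.249),
    ("figueiroh",  40, 0.319),
]
allocated = sum(npes for _, npes, _ in jobs)
busy      = sum(npes * frac for _, npes, frac in jobs)
print(f"{allocated - busy:.1f} underused CPUs: "
      f"{allocated} CPUs used at {busy / allocated:.1%} on average")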
Nodes with Excess Load
As of Tue Nov 12 18:47:05 EST 2024 (6 nodes have a high load, offset=1.5)
node    #CPUs   #slots used   load   excess load
------------------------------------------------
64-12 40 30 42.0 12.0 *
65-18 64 30 48.4 18.4 *
75-02 128 3 23.0 20.0 *
76-04 128 39 46.7 7.7 *
76-07 128 0 8.4 8.4 *
76-14 128 1 4.3 3.3 *
Total excess load = 69.8
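The excess-load column follows directly from the other two: excess = load - slots used, and a node is listed when that excess exceeds the offset (1.5 here). A small Python sketch of that check, using the rows above (the rule is inferred from the table, not from the monitoring script):

# Sketch: a node is reported when (load - slots used) > offset; excess = load - slots used.
offset = 1.5
nodes = {                      # node: (nCPU, slots used, load) -- rows from the table
    "64-12": (40, 30, 42.0),
    "65-18": (64, 30, 48.4),
    "75-02": (128, 3, 23.0),
    "76-04": (128, 39, 46.7),
    "76-07": (128, 0, 8.4),
    "76-14": (128, 1, 4.3),
}
total_excess = 0.0
for name, (ncpu, used, load) in nodes.items():
    excess = load - used
    if excess > offset:
        total_excess += excess
        print(f"{name:>6} {ncpu:5d} {used:5d} {load:6.1f} {excess:6.1f} *")
print(f"Total excess load = {total_excess:.1f}")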
High Memory Jobs
Statistics
user               nSlots           reserved mem      used mem          vmem       maxvmem    ratio
name               used  (share)    [TB]  (share)     [TB]  (share)     [TB]       [TB]       resd/maxvm
--------------------------------------------------------------------------------------------------
uribeje 80 26.2% 2.3438 38.9% 0.0448 3.0% 0.0106 0.2442 9.6
figueiroh 40 13.1% 1.2500 20.8% 0.0169 1.1% 0.3007 0.4922 2.5
kistlerl 19 6.2% 0.8906 14.8% 0.1063 7.2% 0.1112 0.1389 6.4
heckwolfm 120 39.3% 0.7812 13.0% 0.7506 50.7% 0.7520 0.7520 1.0
hinckleya 1 0.3% 0.3906 6.5% 0.2234 15.1% 0.0124 0.3174 1.2
beckerm 26 8.5% 0.2031 3.4% 0.2756 18.6% 0.2777 0.2802 0.7
gonzalezv 6 2.0% 0.1172 1.9% 0.0604 4.1% 0.0609 0.0659 1.8
coilk 12 3.9% 0.0293 0.5% 0.0011 0.1% 0.0493 0.0502 0.6
franzena 1 0.3% 0.0117 0.2% 0.0001 0.0% 0.0001 0.0001 88.6
==================================================================================================
Total 305 6.0176 1.4791 1.5750 2.3412 2.6
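The percentage columns appear to be each user's share of the totals in the last row, and the final column the ratio of reserved memory to maximum virtual memory. A short Python sketch for one row (uribeje), under that assumption:

# Sketch: percentage columns as shares of the Total row; ratio = reserved / maxvmem.
total_slots, total_resd_tb, total_used_tb = 305, 6.0176, 1.4791   # "Total" row
slots, resd_tb, used_tb, maxvmem_tb = 80, 2.3438, 0.0448, 0.2442  # uribeje's row
print(f"slots share : {slots / total_slots:.1%}")        # ~26.2%
print(f"resd share  : {resd_tb / total_resd_tb:.1%}")    # ~38.9%
print(f"used share  : {used_tb / total_used_tb:.1%}")    # ~3.0%
print(f"resd/maxvmem: {resd_tb / maxvmem_tb:.1f}")       # ~9.6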
Warnings
64 high memory jobs produced a warning:
13 for beckerm
1 for coilk
1 for figueiroh
6 for gonzalezv
20 for heckwolfm
19 for kistlerl
4 for uribeje
Details for each job can be found here.
Breakdown by Queue
Plots of usage by queue are available for 7-, 15-, or 30-day ranges.
Current Usage by Queue
                                                  Total   Limit   Fill factor   Efficiency
sThC.q=60   mThC.q=1000   lThC.q=208   uThC.q=0    1268    5136      24.7%         96.0%
sThM.q=41   mThM.q=45     lThM.q=219   uThM.q=0     305    4680       6.5%        360.3%
sTgpu.q=0   mTgpu.q=0     lTgpu.q=0    qgpu.iq=0      0     104       0.0%
uTxlM.rq=0                                            0     536       0.0%
lThMuVM.tq=0                                          0     384       0.0%
lTb2g.q=0                                             0       2       0.0%
lTIO.sq=0                                             0       8       0.0%
lTWFM.sq=0                                            0       4       0.0%
qrsh.iq=8                                             8      40      20.0%          0.6%
Total: 1581
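The fill factor is simply the slots in use divided by the queue group's slot limit; the efficiency column presumably compares measured CPU load to slots in use, which is why it can exceed 100%. A minimal Python sketch of the fill-factor arithmetic (values copied from the table above):

# Sketch: fill factor = slots in use / slot limit, per queue group.
groups = {                     # group: (slots used, slot limit)
    "ThC":  (1268, 5136),
    "ThM":  (305, 4680),
    "Tgpu": (0, 104),
    "TxlM": (0, 536),
    "qrsh": (8, 40),
}
for name, (used, limit) in groups.items():
    print(f"{name:>5}: {used:5d}/{limit:5d} = {used / limit:5.1%}")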
Avail Slots/Wait Job(s)
Available Slots
As of Tue Nov 12 18:47:04 EST 2024
3631 avail(slots), free(load)=5121.1, unresd(mem)=34191.3G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5136 total(mem) 39.6T
unused(slots) 3631 unused(load) 5121.1 ie: 70.7% 99.7%
unreserved(mem) 33.4T unused(mem) 35.3T ie: 84.4% 89.1%
unreserved(mem) 9.4G unused(mem) 9.9G per unused(slots)
3273 avail(slots), free(load)=4668.0, unresd(mem)=30485.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3273 unused(load) 4668.0 ie: 69.9% 99.7%
unreserved(mem) 29.8T unused(mem) 31.9T ie: 83.2% 89.2%
unreserved(mem) 9.3G unused(mem) 10.0G per unused(slots)
495 avail(slots), free(load)=535.9, unresd(mem)=6770.7G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 495 unused(load) 535.9 ie: 92.4% 100.0%
unreserved(mem) 6.6T unused(mem) 7.7T ie: 84.0% 98.3%
unreserved(mem) 13.7G unused(mem) 16.0G per unused(slots)
104 avail(slots), free(load)=104.0, unresd(mem)=754.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 104 unused(load) 104.0 ie: 100.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 100.0% 91.3%
unreserved(mem) 7.3G unused(mem) 6.6G per unused(slots)
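Each host-group summary above reduces to a few ratios: unused slots and unused load as fractions of total CPUs, unreserved and unused memory as fractions of total memory, and unreserved memory per unused slot. A Python sketch under that assumption, using the @himem-hosts figures (small discrepancies come from rounding of the printed inputs):

# Sketch: recompute the @himem-hosts summary ratios from the reported totals.
total_cpu, total_mem_tb = 4680, 35.8
avail_slots, free_load  = 3273, 4668.0
unresd_mem_gb           = 30485.0     # unreserved memory, GB
unused_mem_tb           = 31.9        # memory not actually in use, TB
print(f"unused slots : {avail_slots / total_cpu:.1%}")                  # ~69.9%
print(f"unused load  : {free_load / total_cpu:.1%}")                    # ~99.7%
print(f"unresd mem   : {(unresd_mem_gb / 1024) / total_mem_tb:.1%}")    # ~83.2%
print(f"unused mem   : {unused_mem_tb / total_mem_tb:.1%}")             # ~89%
print(f"unresd mem per unused slot: {unresd_mem_gb / avail_slots:.1f}G")  # ~9.3G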
GPU Usage
Tue Nov 12 18:47:10 EST 2024
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 25.1 478.2 - 4/0 - 64 0 0.1 - 64 63.9
compute-79-01 - 125.5 20.5 105.0 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 19.9 105.6 - 2/0 - 20 0 0.0 - 20 20.0
Total #GPU=8 used=0 (0.0%)
Waiting Job(s)
As of Tue Nov 12 18:47:05 EST 2024
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=59/110 53.6% for *
total_slots/1 slots=1574/5960 26.4% for *
total_mem_res/2 mem_res=6.018T/35.78T 16.8% for * in queue uThM.q
total_mem_res/1 mem_res=1.537T/39.94T 3.8% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues (plots available for 7-, 15-, or 30-day ranges)
Current Memory Quota Usage
As of Tue Nov 12 18:47:05 EST 2024
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=1.537T/39.94T 3.8% for * in queue uThC.q
total_mem_res/2 mem_res=6.018T/35.78T 16.8% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.3 30.5 0.0 - 472.8 503.3 - 32 0 0.1 - 32 31.9
compute-64-18 - 503.3 52.6 110.0 - 450.7 393.3 - 32 17 7.0 - 15 25.0
compute-65-02 - 503.5 87.0 90.0 - 416.5 413.5 - 64 17 8.5 - 47 55.5
compute-65-03 - 503.5 52.2 80.0 - 451.3 423.5 - 64 5 4.8 - 59 59.2
compute-65-04 - 503.5 45.7 4.0 - 457.8 499.5 - 64 37 37.0 - 27 27.0
compute-65-05 - 503.5 79.9 66.0 - 423.6 437.5 - 64 19 14.7 - 45 49.3
compute-65-06 - 503.5 32.9 2.0 - 470.6 501.5 - 64 10 7.0 - 54 57.0
compute-65-07 - 503.5 43.0 54.0 - 460.5 449.5 - 64 4 4.2 - 60 59.8
compute-65-09 - 503.5 33.6 0.0 - 469.9 503.5 - 64 0 0.0 - 64 64.0
compute-65-10 - 503.5 63.8 118.0 - 439.7 385.5 - 64 58 58.0 - 6 6.0
compute-65-11 - 503.5 63.1 26.0 - 440.4 477.5 - 64 13 13.0 - 51 51.0
compute-65-12 - 503.5 76.4 82.0 - 427.1 421.5 - 64 22 8.4 - 42 55.6
compute-65-13 - 503.5 44.3 98.0 - 459.2 405.5 - 64 12 8.6 - 52 55.4
compute-65-14 - 503.5 87.4 64.0 - 416.1 439.5 - 64 18 13.1 - 46 50.9
compute-65-15 - 503.5 80.0 70.0 - 423.5 433.5 - 64 17 17.1 - 47 46.9
compute-65-16 - 503.5 73.6 36.0 - 429.9 467.5 - 64 18 18.0 - 46 46.0
compute-65-17 - 503.5 55.1 48.0 - 448.4 455.5 - 64 6 5.3 - 58 58.6
compute-65-18 - 503.5 39.2 60.0 - 464.3 443.5 - 64 30 48.4 - 34 15.6
compute-65-19 - 503.5 56.8 64.0 - 446.7 439.5 - 64 9 9.0 - 55 55.0
compute-65-20 - 503.5 57.6 80.0 - 445.9 423.5 - 64 12 2.8 - 52 61.2
compute-65-21 - 503.5 38.9 48.0 - 464.6 455.5 - 64 1 1.3 - 63 62.7
compute-65-22 - 503.5 65.7 30.0 - 437.8 473.5 - 64 15 15.1 - 49 48.9
compute-65-23 - 503.5 44.3 96.0 - 459.2 407.5 - 64 2 2.8 - 62 61.2
compute-65-24 - 503.5 72.9 34.0 - 430.6 469.5 - 64 17 17.0 - 47 47.0
compute-65-25 - 503.5 32.5 0.0 - 471.0 503.5 - 64 0 0.2 - 64 63.8
compute-65-26 - 503.5 88.4 66.0 - 415.1 437.5 - 64 18 12.3 - 46 51.6
compute-65-27 - 503.5 45.2 12.0 - 458.3 491.5 - 64 6 6.0 - 58 58.0
compute-65-28 - 503.5 47.0 64.0 - 456.5 439.5 - 64 3 3.2 - 61 60.8
compute-65-29 - 503.5 36.7 2.0 - 466.8 501.5 - 64 50 15.3 - 14 48.7
compute-65-30 - 503.5 54.1 62.0 - 449.4 441.5 - 64 8 8.0 - 56 56.0
compute-75-01 - 1007.4 102.3 50.0 - 905.1 957.4 - 128 25 25.1 - 103 102.9
compute-75-02 - 1007.5 55.5 64.0 - 952.0 943.5 - 128 3 23.0 - 125 105.0
compute-75-03 - 755.5 78.3 634.0 - 677.2 121.5 - 128 37 30.4 - 91 97.6
compute-75-04 - 755.5 113.5 74.0 - 642.0 681.5 - 128 23 18.9 - 105 109.1
compute-75-05 - 755.5 123.2 82.0 - 632.3 673.5 - 128 27 23.3 - 101 104.7
compute-75-06 - 755.5 181.4 164.0 - 574.1 591.5 - 128 40 28.5 - 88 99.5
compute-75-07 - 755.5 117.3 82.0 - 638.2 673.5 - 128 27 23.0 - 101 105.0
compute-76-03 - 1007.4 40.3 640.5 - 967.1 366.9 - 128 26 17.3 - 102 110.7
compute-76-04 - 1007.4 175.2 134.0 - 832.2 873.4 - 128 39 31.1 - 89 96.9
compute-76-05 - 1007.4 80.7 20.0 - 926.7 987.4 - 128 91 90.2 - 37 37.8
compute-76-06 - 1007.4 106.8 668.0 - 900.6 339.4 - 128 40 31.6 - 88 96.3
compute-76-07 - 1007.4 38.5 0.0 - 968.9 1007.4 - 128 0 8.4 - 128 119.6
compute-76-08 - 1007.4 86.0 304.0 - 921.4 703.4 - 128 65 14.2 - 63 113.8
compute-76-09 - 1007.4 47.9 2.0 - 959.5 1005.4 - 128 50 26.3 - 78 101.7
compute-76-10 - 1007.4 59.1 256.0 - 948.3 751.4 - 128 64 14.4 - 64 113.6
compute-76-11 - 1007.4 91.9 22.0 - 915.5 985.4 - 128 111 107.0 - 17 21.0
compute-76-12 - 1007.4 39.2 2.0 - 968.2 1005.4 - 128 50 12.2 - 78 115.8
compute-76-13 - 1007.4 52.5 0.0 - 954.9 1007.4 - 128 110 110.0 - 18 18.0
compute-76-14 - 1007.4 42.6 400.0 - 964.8 607.4 - 128 1 4.0 - 127 124.0
compute-84-01 - 881.1 372.2 186.0 - 508.9 695.1 - 112 59 23.9 - 53 88.1
compute-93-01 - 503.8 61.1 66.0 - 442.7 437.8 - 64 10 10.1 - 54 53.9
compute-93-02 - 755.6 96.6 86.0 - 659.0 669.6 - 72 20 15.2 - 52 56.8
compute-93-03 - 755.6 60.4 26.0 - 695.2 729.6 - 72 13 13.0 - 59 59.0
compute-93-04 - 755.6 61.4 624.0 - 694.2 131.6 - 72 32 28.1 - 40 43.9
======= ===== ====== ==== ==== =====
Totals 36637.5 3964.3 6152.5 4680 1407 1095.7
==> 10.8% 16.8% ==> 30.1% 23.4%
Most unreserved/unused memory (1007.4/968.9GB) is on compute-76-07 with 128/119.6 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 35.7 12.3 - 1475.7 1499.1 - 192 1 1.1 - 191 190.9
compute-76-02 - 1511.4 35.6 -0.0 - 1475.8 1511.4 - 192 0 0.1 - 192 191.9
compute-93-05 - 2016.3 37.2 1280.0 - 1979.1 736.3 - 96 40 7.2 - 56 88.8
compute-93-06 - 3023.9 32.6 0.0 - 2991.3 3023.9 - 56 0 0.1 - 56 55.9
======= ===== ====== ==== ==== =====
Totals 8063.0 141.1 1292.3 536 41 8.5
==> 1.7% 16.0% ==> 7.6% 1.6%
Most unreserved/unused memory (3023.9/2991.3GB) is on compute-93-06 with 56/55.9 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 10/30/24 and 11/06/24
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
vagac 2/20 0.00
coilk 1/12 0.00
talaveraa 1/10 0.00
morrisseyd 1/12 0.00
hinckleya 2/4 0.00 18.8 360.0 0.0 0.0 0.0
campanam 89/134 0.02 130.8 16.0 4.9 2.9 3.2 > 2.5
collensab 6/96 0.09 34.7 256.0 200.3 1.5 1.3
sandoval-velascom 39/39 0.12 129.0 12.0 0.4 0.2 27.0 > 2.5
zehnpfennigj 1/2 0.16 596.2 160.0 39.5 1.6 4.1 > 2.5
figueiroh 2/80 0.26 110.3 1279.4 352.0 11.0 3.6 > 2.5
vdiaz 2/32 0.36 757.0 512.0 272.9 1.8 1.9
wirshingh 3/3 0.39 83.1 48.0 27.4 0.5 1.8
macguigand 39/234 0.47 44.6 409.3 45.0 11.1 9.1 > 2.5
parkerld 267/267 0.69 825.1 35.0 30.5 30.1 1.1
collinsa 2/32 0.91 18.8 192.0 141.4 45.4 1.4
uribeje 10/90 1.01 31.1 93.6 6.7 2.3 14.0 > 2.5
mcgowenm 360/4302 1.21 16.1 846.8 10.8 4.2 78.6 > 2.5
mghahrem 10/76 1.33 61.4 120.0 14.5 0.0 8.3 > 2.5
byerlyp 9/45 1.38 123.5 50.0 7.8 4.8 6.4 > 2.5
kistlerl 47/185 1.85 53.3 72.3 42.1 39.8 1.7
cnowlan 1040/1040 1.89 98.6 10.0 5.6 5.5 1.8
bourkeb 24/256 2.01 35.5 387.1 226.9 109.1 1.7
horowitzj 40/640 3.53 69.8 192.0 87.8 41.7 2.2
pcristof 294/2920 4.06 712.1 179.0 66.7 6.1 2.7 > 2.5
nelsonjo 6/60 7.09 79.2 420.0 119.0 39.9 3.5 > 2.5
connellym 154/1389 10.17 80.4 78.0 21.0 14.5 3.7 > 2.5
heckwolfm 671/7550 20.05 8.3 46.9 35.1 34.9 1.3
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 3122/19530 59.05 108.4 158.9 56.5 28.6 2.8 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
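In these past-usage tables the last column is the mean reserved memory divided by the mean maxvmem, and rows where that ratio exceeds 2.5 are flagged as over-reserving. A small Python sketch of that check, with a few rows copied from the ?ThM.q table (ratios may differ slightly from the report, which works from unrounded values):

# Sketch: flag users whose mean reserved memory exceeds 2.5x their mean maxvmem.
THRESHOLD = 2.5
rows = [                       # (user, mean reserved GB, mean maxvmem GB)
    ("campanam",  16.0, 4.9),
    ("collensab", 256.0, 200.3),
    ("kistlerl",  72.3, 42.1),
    ("mcgowenm",  846.8, 10.8),
]
for user, resd, maxvmem in rows:
    ratio = resd / maxvmem
    flag = " > 2.5" if ratio > THRESHOLD else ""
    print(f"{user:<10} {ratio:6.1f}{flag}")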
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for the workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=1
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
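These rules behave like Grid Engine resource quota sets: each rule filters on users, queues, or hosts and caps a resource, and a request only runs if it fits under every matching rule. The Python sketch below is a hypothetical illustration of that evaluation logic (the rule names, the simplified usage counters, and the allowed() helper are illustrative, not the scheduler's actual implementation):

# Hypothetical sketch of quota-rule evaluation: a request must fit under every matching rule.
rules = [   # (name, queues it applies to or None for all, scope, slot limit)
    ("total_slots",    None,                                      "global",   5960),
    ("himem_total",    {"sThM.q", "mThM.q", "lThM.q", "uThM.q"},  "global",   4680),
    ("slots_per_user", None,                                      "per-user", 840),
    ("uThM_per_user",  {"uThM.q"},                                "per-user", 73),
]

def allowed(queue, user_usage, global_usage, request):
    """Return True if `request` extra slots in `queue` fit under every matching rule.
    Usage counters are simplified to a single per-user and a single global number."""
    for _name, queues, scope, limit in rules:
        if queues is not None and queue not in queues:
            continue
        current = user_usage if scope == "per-user" else global_usage
        if current + request > limit:
            return False
    return True

# A user already holding 60 uThM.q slots asking for 20 more exceeds the 73-slot cap.
print(allowed("uThM.q", user_usage=60, global_usage=300, request=20))   # False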
Disk Usage & Quota
As of Tue Nov 12 17:06:02 EST 2024
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 16.32T 5.73T 75%/11% /home
netapp-fas83-n02:/vol_data_public 142.50T 37.09T 105.41T 27%/3% /data/public
netapp-fas83-n02:/vol_pool_public 230.00T 80.78T 149.22T 36%/1% /pool/public
gpfs01:public 400.00T 228.86T 171.14T 58%/45% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 9.74T 1.26T 89%/1% /pool/kozakk
netapp-fas83-n02:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.46T 14.49T 28%/2% /pool/sao_access
netapp-fas83-n01:/vol_pool_sao_rtdc 10.45T 907.43G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n01:/vol_pool_sylvain 30.00T 23.82T 6.18T 80%/6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 18.66T 6.34T 75%/34% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 103.82T 16.18T 87%/6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 10.73T 14.27T 43%/3% /scratch/meyerc
gpfs01:nmnh_quattrinia 50.00T 45.11T 4.89T 91%/38% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 21.31T 55.69T 28%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 6.93T 18.07T 28%/2% /scratch/nmnh_lab
gpfs01:nmnh_mammals 25.00T 13.05T 11.95T 53%/25% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 28.08T 21.92T 57%/8% /scratch/nmnh_mdbc
gpfs01:nzp_ccg 45.00T 38.56T 6.44T 86%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 255.07T 94.93T 73%/4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 33.39T 46.61T 42%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 18.08T 6.92T 73%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 6.04T 8.96T 41%/4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 52.63T 17.37T 76%/48% /scratch/sylvain
gpfs01:usda_sel 25.00T 6.23T 18.77T 25%/7% /scratch/usda_sel
gpfs01:wrbu 50.00T 34.36T 15.64T 69%/5% /scratch/wrbu
netapp-fas83-n02:/vol_data_admin 4.75T 34.48G 4.72T 1%/1% /data/admin
netapp-fas83-n02:/vol_pool_admin 47.50T 29.75T 17.75T 63%/1% /pool/admin
gpfs01:admin 20.00T 3.22T 16.78T 17%/32% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 4.92T 5.08T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 0.99T 9.01T 10%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 309.23T 308.67T 572.93G 100%/100% /qnas/mesa
qnas:/sil 3840.36T 2715.73T 1124.63T 71%/71% /qnas/sil
nas1:/mnt/pool/admin 20.00T 7.46T 12.54T 38%/1% /store/admin
nas1:/mnt/pool/public 175.00T 84.29T 90.71T 49%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 7.86T 32.14T 20%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas1:/mnt/pool/nzp_ccg 96.79T 86.54T 10.25T 90%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 29.41T 20.59T 59%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 500.00T 440.01T 59.99T 89%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 39.62T 24.93T 14.70T 63%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
You can view plots of disk use vs time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can become full because it runs out of inodes and can no longer keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used; if it is much larger, the disk can become unusable before it is full.
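That rule of thumb can be checked mechanically: flag a filesystem whose inode usage runs well ahead of its space usage. A hedged Python sketch using a few rows from the table above (the 25-point margin is an arbitrary illustrative threshold, not a policy):

# Sketch: warn when %inodes used runs well ahead of %space used.
MARGIN = 25                     # percentage points; illustrative threshold
filesystems = {                 # mount: (space %used, inodes %used), from the Capacity column
    "/home":             (75, 11),
    "/scratch/public":   (58, 45),
    "/scratch/schultzt": (73, 75),
    "/scratch/sao_tess": (50, 83),
}
for mount, (space_pct, inode_pct) in filesystems.items():
    if inode_pct > space_pct + MARGIN:
        print(f"{mount}: inode use ({inode_pct}%) far ahead of space use ({space_pct}%)")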
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 478.2GB 93.4% 0.22M 2.2% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 453.6GB 88.6% 1.48M 14.8% Michael Trizna, NMNH/BOL - triznam
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_kozakk, mounted as /pool/kozakk
-- disk -- -- #files -- default quota: 11.00TB/27.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/kozakk 9.50TB 86.4% 0.14M 0.5% Carlos Arias, STRI - ariasc
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
/pool/public 6.60TB 88.0% 1.38M 7.7% Juan Uribe, NMNH - uribeje
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 13.00TB 86.7% 7.14M 18.4% Kevin Mulder, NZP - mulderk
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 5.00TB 100.0% - - *** Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.48TB 89.6% - - Matthew Kweskin, NMNH - kweskinm (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-09 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-21 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-26 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-04 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-06 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-07 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 31.7G 1.71T 1.8% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 133.2T 964.6G 132.3T 0.7% | 0.0G 133.2T 0.0% | 0.00
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.