Hydra-7 Status
Usage
As of Wed Jun 18 14:57:03 2025: #CPUs/nodes 5676/74, 0 down.
Loads:
head node: 0.67, login nodes: 2.08, 0.00, 0.05, 0.00; NSDs: 8.16, 5.20; licenses: none used.
Queues status: 20 disabled, none need attention, none in error state.
25 users with running jobs (slots/jobs):
Current load: 1485.1, #running (slots/jobs): 2,120/138, usage: 37.4%, efficiency: 70.1%
2 users with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 5/780/12,514
71 users have/had running or queued jobs over the past 7 days, 93 over the past 15 days, and 106 over the past 30 days.
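The usage and efficiency figures above can be reproduced from the reported counts; a minimal sketch in Python (an assumption about the definitions, not the script's actual code):

total_cpus    = 5676     # #CPUs across the 74 nodes
running_slots = 2120     # slots held by running jobs
current_load  = 1485.1   # reported current load

usage      = running_slots / total_cpus    # share of all CPUs occupied by job slots
efficiency = current_load / running_slots  # how busy those occupied slots actually are

print(f"usage: {usage:.1%}, efficiency: {efficiency:.1%}")  # usage: 37.4%, efficiency: 70.1%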
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Wednesday, 18-Jun-2025 15:02:08 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 0:57.
Warnings
Oversubscribed Jobs
As of Wed Jun 18 14:57:04 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Wed Jun 18 14:57:04 EDT 2025 (26 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 2136/139, 5 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
8195489 cube_low krajpuro 05:15 40 18.2% mThC.q 65-20
8183197 exabayes_epit_5 gouldingt +1:09 30 13.3% lThC.q 64-04
8183198 exabayes_epit_5 gouldingt +1:08 30 13.3% lThC.q 64-09
8195012 J_all_2 santosbe 05:28 30 17.0% lThM.q 76-06
8195013 J_all_3 santosbe 05:08 30 19.4% lThM.q 76-07
8184477 exabayes_epit_6 gouldingt +1:04 26 15.3% lThC.q 65-09
8196507 Step8_tax_class bourkeb 01:54 16 25.0% mThM.q 76-08
8154175 sapdescript santosbe +8:11 8 11.2% lThM.q 93-03
8186209 Allo_BPP macguigand 21:56 8 24.7% lThM.q 76-04 1
8186209 Allo_BPP macguigand 21:55 8 24.8% lThM.q 76-03 2
8186209 Allo_BPP macguigand 21:54 8 24.4% lThM.q 65-10 3
(more by macguigand)
8194860 phyluce_assembl horowitzj 08:36 8 7.5% mThC.q 65-20
8137841 Job_Step5 perezm4 +19:23 4 28.6% lThM.q 64-17
⇒ Equivalent to 285.9 underused CPUs: 350 CPUs used at 18.3% on average.
To see them all use:
'q+ -ineff -u macguigand' (16)
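The "underused CPUs" figure appears to be the slots held by these jobs discounted by their average CPU utilization; a sketch under that assumption:

ineff_slots  = 350     # CPUs (PEs) held by the 26 inefficient jobs
mean_cpu_use = 0.183   # their average cpu%

print(ineff_slots * (1 - mean_cpu_use))  # ~286; the page reports 285.9, presumably from unrounded per-job figures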
Nodes with Excess Load
As of Wed Jun 18 14:57:06 EDT 2025 (9 nodes have a high load, offset=1.5)
node    #CPUs  #slots used   load  excess load
----------------------------------------------
64-03 40 0 4.0 4.0 *
64-06 40 4 8.4 4.4 *
64-11 40 0 5.3 5.3 *
64-14 40 0 3.4 3.4 *
65-06 64 8 12.7 4.7 *
65-25 64 10 11.6 1.6 *
76-01 192 0 9.1 9.1 *
76-03 192 40 52.0 12.0 *
93-06 96 16 28.1 12.1 *
Total excess load = 56.7
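A sketch of how the excess-load figures appear to be derived (assumption: excess = node load minus slots in use, flagged when above the offset):

offset = 1.5
nodes  = {                 # node: (slots used, load)
    "64-03": (0, 4.0),  "64-06": (4, 8.4),   "64-11": (0, 5.3),
    "64-14": (0, 3.4),  "65-06": (8, 12.7),  "65-25": (10, 11.6),
    "76-01": (0, 9.1),  "76-03": (40, 52.0), "93-06": (16, 28.1),
}
excess = {n: load - used for n, (used, load) in nodes.items() if load - used > offset}
print(round(sum(excess.values()), 1))   # 56.6 (page reports 56.7, from unrounded loads)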
High Memory Jobs
Statistics
User            nSlots        memory [TB]     memory [TB]    vmem [TB]  maxvmem [TB]   ratio
name            used (%)      reserved (%)    used (%)       used       used           resd/maxvm
--------------------------------------------------------------------------------------------------
vohsens 576 50.0% 9.0000 36.1% 0.0758 2.8% 0.0642 0.1617 55.7
santosbe 160 13.9% 6.8750 27.6% 0.6972 25.6% 0.4001 1.5482 4.4
dikowr 50 4.3% 2.4414 9.8% 0.4802 17.7% 0.9974 1.0189 2.4
macguigand 128 11.1% 2.0000 8.0% 0.0134 0.5% 0.0139 0.0139 144.2
chaks 96 8.3% 1.8750 7.5% 0.4378 16.1% 0.7954 1.2560 1.5
pappalardop 70 6.1% 1.1719 4.7% 0.6453 23.7% 1.4092 1.4604 0.8
bourkeb 40 3.5% 1.0000 4.0% 0.0320 1.2% 0.0320 0.0485 20.6
perezm4 4 0.3% 0.3906 1.6% 0.2834 10.4% 0.3350 0.3824 1.0
yancos 1 0.1% 0.0977 0.4% 0.0097 0.4% 0.0172 0.0172 5.7
afoster 1 0.1% 0.0234 0.1% 0.0054 0.2% 0.0065 0.0065 3.6
castanedaricos 24 2.1% 0.0234 0.1% 0.0383 1.4% 0.0405 0.0418 0.6
hinckleya 1 0.1% 0.0234 0.1% 0.0016 0.1% 0.0017 0.0021 11.3
==================================================================================================
Total 1151 24.9219 2.7203 4.1133 5.9575 4.2
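The percentage and ratio columns follow from the raw numbers; a sketch (assuming all memory figures are in TB, as the header indicates):

total_slots, vohsens_slots = 1151, 576
print(f"{vohsens_slots/total_slots:.1%}")   # 50.0% of all reserved slots

reserved_tb, maxvmem_tb = 24.9219, 5.9575   # "Total" row
print(f"{reserved_tb/maxvmem_tb:.1f}")      # 4.2 = resd/maxvm ratio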
Warnings
43 high memory jobs produced a warning:
4 for bourkeb
2 for castanedaricos
1 for chaks
5 for dikowr
16 for macguigand
4 for pappalardop
1 for perezm4
7 for santosbe
3 for vohsens
Details for each job can be found here.
Breakdown by Queue
Current Usage by Queue
queue group (slots in use)                          Total  Limit  Fill factor  Efficiency
sThC.q=348   mThC.q=509  lThC.q=88   uThC.q=26        971   5056        19.2%      137.7%
sThM.q=576   mThM.q=163  lThM.q=316  uThM.q=0        1055   4680        22.5%      121.1%
sTgpu.q=0    mTgpu.q=0   lTgpu.q=0   qgpu.iq=4          4    104         3.8%        6.5%
uTxlM.rq=96                                            96    536        17.9%      128.2%
lThMuVM.tq=0                                            0    384         0.0%
lTb2g.q=0                                               0      2         0.0%
lTIO.sq=0                                               0      8         0.0%
lTWFM.sq=3                                              3      4        75.0%        0.4%
qrsh.iq=7                                               7     68        10.3%        3.0%
Total: 2136
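The fill factor column is consistent with slots in use divided by the group's slot limit; a sketch under that assumption (the efficiency column is presumably load-based and is not reproduced here):

groups = {               # queue group: (slots in use, slot limit)
    "ThC":   (971, 5056),
    "ThM":   (1055, 4680),
    "Tgpu":  (4, 104),
    "uTxlM": (96, 536),
}
for name, (used, limit) in groups.items():
    print(f"{name}: {used/limit:.1%}")   # 19.2%, 22.5%, 3.8%, 17.9%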
Avail Slots/Wait Job(s)
Available Slots
As of Wed Jun 18 14:57:04 EDT 2025
2717 avail(slots), free(load)=4913.4, unresd(mem)=14256.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5056 total(mem) 38.8T
unused(slots) 3060 unused(load) 5040.5 ie: 60.5% 99.7%
unreserved(mem) 14.1T unused(mem) 33.4T ie: 36.4% 86.1%
unreserved(mem) 4.7G unused(mem) 11.2G per unused(slots)
2481 avail(slots), free(load)=4538.9, unresd(mem)=10845.6G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 2843 unused(load) 4666.0 ie: 60.7% 99.7%
unreserved(mem) 10.8T unused(mem) 30.3T ie: 30.2% 84.6%
unreserved(mem) 3.9G unused(mem) 10.9G per unused(slots)
232 avail(slots), free(load)=247.7, unresd(mem)=4279.3G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 344 total(mem) 6.4T
unused(slots) 232 unused(load) 342.6 ie: 67.4% 99.6%
unreserved(mem) 4.3T unused(mem) 5.5T ie: 66.8% 86.6%
unreserved(mem) 18.9G unused(mem) 24.5G per unused(slots)
100 avail(slots), free(load)=104.0, unresd(mem)=752.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 100 unused(load) 104.0 ie: 96.2% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 99.7% 89.6%
unreserved(mem) 7.5G unused(mem) 6.8G per unused(slots)
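The percentages in each block above appear to be taken relative to the hostgroup's total CPUs and total memory; a sketch for @hicpu-hosts:

total_ncpu, unused_slots, unused_load = 5056, 3060, 5040.5
total_mem_tb, unused_mem_tb           = 38.8, 33.4

print(f"{unused_slots/total_ncpu:.1%}")     # 60.5% of slots unused
print(f"{unused_load/total_ncpu:.1%}")      # 99.7% of load capacity unused
print(f"{unused_mem_tb/total_mem_tb:.1%}")  # 86.1% of memory unused
print(f"{unused_mem_tb*1024/unused_slots:.1f}G unused memory per unused slot")  # ~11.2G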
GPU Usage
Wed Jun 18 14:57:11 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 38.1 465.2 - 4/1 - 64 4 0.1 - 60 63.9
compute-79-01 - 125.5 21.5 104.0 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 19.1 106.4 - 2/0 - 20 0 0.1 - 20 19.9
Total #GPU=8 used=1 (12.5%)
Waiting Job(s)
As of Wed Jun 18 14:57:05 EDT 2025
4 jobs waiting for santosbe:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
8194852 sapdescript santosbe 11:59 8 640.0 lThM.q
8195014 J_all_6 santosbe 06:15 30 960.0 lThM.q
8195015 J_all_8 santosbe 06:14 30 960.0 lThM.q
8195016 J_all_9 santosbe 06:14 30 960.0 lThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=6.875T/8.944T 76.9% for santosbe in queue uThM.q
max_hM_slots_per_user/3 slots=160/390 41.0% for santosbe in queue lThM.q
max_slots_per_user/1 slots=160/840 19.0% for santosbe
------------------- ------------------------------- ------
1 job waiting for vohsens:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
8178675 megahit_on_all_ vohsens +4:03 16 256.0 sThM.q 3430-4204:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.000T/8.944T 100.6% for vohsens in queue uThM.q
max_slots_per_user/1 slots=576/840 68.6% for vohsens
max_hM_slots_per_user/1 slots=576/840 68.6% for vohsens in queue sThM.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=23.05T/35.78T 64.4% for * in queue uThM.q
blast2GO/1 slots=44/110 40.0% for *
total_slots/1 slots=2128/5960 35.7% for *
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
total_gpus/1 num_gpu=1/8 12.5% for * in queue qgpu.iq
total_mem_res/1 mem_res=2.740T/39.94T 6.9% for * in queue uThC.q
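In these quota tables, %used appears to be simply the reserved value divided by the rule's limit; for example:

mem_res_tb, mem_limit_tb = 23.05, 35.78   # total_mem_res/2 in queue uThM.q
print(f"{mem_res_tb/mem_limit_tb:.1%}")   # 64.4%

slots, slot_limit = 2128, 5960            # total_slots/1
print(f"{slots/slot_limit:.1%}")          # 35.7%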
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Wed Jun 18 14:57:06 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=2.740T/39.94T 6.9% for * in queue uThC.q
total_mem_res/2 mem_res=23.05T/35.78T 64.4% for * in queue uThM.q
total_mem_res/3 mem_res=1.875T/7.874T 23.8% for * in queue uTxlM.rq
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.3 355.5 400.0 - 147.8 103.3 - 32 4 2.0 - 28 29.9
compute-64-18 - 503.3 192.4 500.0 - 310.9 3.3 - 32 10 10.4 - 22 21.6
compute-65-02 - 503.5 38.6 432.0 - 464.9 71.5 - 64 36 17.7 - 28 46.3
compute-65-03 - 503.5 15.6 304.0 - 487.9 199.5 - 64 28 15.3 - 36 48.7
compute-65-04 - 503.5 56.3 140.0 - 447.2 363.5 - 64 20 16.1 - 44 47.9
compute-65-05 - 503.5 13.6 256.0 - 489.9 247.5 - 64 16 15.3 - 48 48.7
compute-65-06 - 503.5 13.0 256.0 - 490.5 247.5 - 64 8 12.7 - 56 51.3
compute-65-07 - 503.5 49.6 74.0 - 453.9 429.5 - 64 24 14.9 - 40 49.1
compute-65-09 - 503.5 138.0 258.0 - 365.5 245.5 - 64 42 16.8 - 22 47.2
compute-65-10 - 503.5 35.3 384.0 - 468.2 119.5 - 64 24 14.7 - 40 49.3
compute-65-11 - 503.5 35.6 256.0 - 467.9 247.5 - 64 16 14.3 - 48 49.7
compute-65-12 - 503.5 197.0 406.0 - 306.5 97.5 - 64 26 21.4 - 38 42.6
compute-65-13 - 503.5 32.8 256.0 - 470.7 247.5 - 64 16 12.2 - 48 51.8
compute-65-14 - 503.5 38.6 312.0 - 464.9 191.5 - 64 30 17.0 - 34 47.0
compute-65-15 - 503.5 37.0 256.0 - 466.5 247.5 - 64 16 13.7 - 48 50.3
compute-65-16 - 503.5 37.1 304.0 - 466.4 199.5 - 64 28 15.0 - 36 49.0
compute-65-17 - 503.5 38.9 304.0 - 464.6 199.5 - 64 28 15.5 - 36 48.5
compute-65-18 - 503.5 39.4 304.0 - 464.1 199.5 - 64 28 15.0 - 36 49.0
compute-65-19 - 503.5 39.1 336.0 - 464.4 167.5 - 64 36 15.3 - 28 48.7
compute-65-20 - 503.5 18.9 66.0 - 484.6 437.5 - 64 48 18.9 - 16 45.1
compute-65-21 - 503.5 196.4 406.0 - 307.1 97.5 - 64 26 21.3 - 38 42.7
compute-65-22 - 503.5 40.2 432.0 - 463.3 71.5 - 64 36 16.9 - 28 47.1
compute-65-23 - 503.5 36.8 384.0 - 466.7 119.5 - 64 24 13.8 - 40 50.1
compute-65-24 - 503.5 36.5 304.0 - 467.0 199.5 - 64 24 12.9 - 40 51.1
compute-65-25 - 503.5 17.7 300.0 - 485.8 203.5 - 64 10 11.6 - 54 52.4
compute-65-26 - 503.5 259.5 500.0 - 244.0 3.5 - 64 10 10.0 - 54 54.0
compute-65-27 - 503.5 33.8 304.0 - 469.7 199.5 - 64 24 16.5 - 40 47.5
compute-65-28 - 503.5 52.0 268.0 - 451.5 235.5 - 64 28 15.6 - 36 48.4
compute-65-29 - 503.5 36.9 304.0 - 466.6 199.5 - 64 24 18.0 - 40 46.0
compute-65-30 - 503.5 34.0 256.0 - 469.5 247.5 - 64 16 15.2 - 48 48.8
compute-75-01 - 1007.4 99.3 960.0 - 908.1 47.4 - 128 30 23.3 - 98 104.7
compute-75-02 - 1007.5 40.0 640.0 - 967.5 367.5 - 128 40 32.7 - 88 95.3
compute-75-03 - 755.5 41.2 562.0 - 714.3 193.5 - 128 45 26.6 - 83 101.3
compute-75-04 - 755.5 40.8 512.0 - 714.7 243.5 - 128 32 26.4 - 96 101.6
compute-75-05 - 755.5 59.0 742.0 - 696.5 13.5 - 128 29 20.0 - 99 108.0
compute-75-06 - 755.5 42.8 560.0 - 712.7 195.5 - 128 44 26.5 - 84 101.5
compute-75-07 - 755.5 46.6 690.0 - 708.9 65.5 - 128 45 29.5 - 83 98.5
compute-76-03 - 1007.4 18.7 640.5 - 988.7 366.9 - 128 40 34.7 - 88 93.3
compute-76-04 - 1007.4 478.8 790.0 - 528.6 217.4 - 128 50 28.5 - 78 99.5
compute-76-05 - 1007.4 216.0 500.0 - 791.4 507.4 - 128 48 48.1 - 80 79.9
compute-76-06 - 1007.4 122.7 994.0 - 884.7 13.4 - 128 58 26.5 - 70 101.5
compute-76-07 - 1007.4 247.2 960.0 - 760.2 47.4 - 128 127 103.8 - 1 24.2
compute-76-08 - 1007.4 49.0 944.0 - 958.4 63.4 - 128 36 8.6 - 92 119.4
compute-76-09 - 1007.4 25.0 596.0 - 982.4 411.4 - 128 34 21.1 - 94 106.9
compute-76-10 - 1007.4 119.0 800.0 - 888.4 207.4 - 128 100 101.1 - 28 26.9
compute-76-11 - 1007.4 470.9 804.0 - 536.5 203.4 - 128 30 23.9 - 98 104.1
compute-76-12 - 1007.4 88.9 960.0 - 918.5 47.4 - 128 30 29.0 - 98 99.0
compute-76-13 - 1007.4 319.7 662.0 - 687.7 345.4 - 128 42 31.2 - 86 96.8
compute-76-14 - 1007.4 333.8 790.0 - 673.6 217.4 - 128 128 117.6 - 0 10.4
compute-84-01 - 881.1 337.5 784.0 - 543.6 97.1 - 112 44 19.1 - 68 92.9
compute-93-01 - 503.8 35.2 176.0 - 468.6 327.8 - 64 20 10.1 - 44 53.9
compute-93-02 - 755.6 161.1 246.0 - 594.5 509.6 - 72 34 19.2 - 38 52.8
compute-93-03 - 755.6 42.6 696.0 - 713.0 59.6 - 72 17 9.7 - 55 62.3
compute-93-04 - 755.6 29.9 256.0 - 725.7 499.6 - 72 16 11.9 - 56 60.1
======= ===== ====== ==== ==== =====
Totals 36637.5 5631.8 25526.5 4680 1825 1275.5
==> 15.4% 69.7% ==> 39.0% 27.3%
Most unreserved/unused memory (509.6/594.5GB) is on compute-93-02 with 38/52.8 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 15.1 -0.0 - 1496.3 1511.4 - 192 0 9.1 - 192 182.9
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 823.7 1920.0 - 1192.6 96.3 - 96 96 97.4 - 0 -1.4
compute-93-06 - 3023.9 39.6 256.0 - 2984.3 2767.9 - 56 16 16.4 - 40 39.6
======= ===== ====== ==== ==== =====
Totals 6551.6 878.4 2176.0 344 112 122.9
==> 13.4% 33.2% ==> 32.6% 35.7%
Most unreserved/unused memory (2767.9/2984.3GB) is on compute-93-06 with 40/39.6 slots/CPUs free/unused.
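The "==>" summary lines express the totals as fractions of available memory and of total CPUs; a sketch for the @himem-hosts block (an assumption based on the column totals):

avail_gb, used_gb, resd_gb = 36637.5, 5631.8, 25526.5   # Totals row, memory columns
ncpu, slots_used, load     = 4680, 1825, 1275.5         # Totals row, slots/CPUs columns

print(f"{used_gb/avail_gb:.1%}  {resd_gb/avail_gb:.1%}")  # 15.4%  69.7% (memory used / reserved)
print(f"{slots_used/ncpu:.1%}  {load/ncpu:.1%}")          # 39.0%  27.3% (slots used / load)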
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 06/11/25 and 06/18/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
holmk 1/8 0.00 19.3 10.0 3.8 2.7 2.6 > 2.5
hinckleya 2/2 0.00 96.1 24.0 4.0 0.8 6.0 > 2.5
jassoj 1/1 0.00 163.0 128.0 133.5 131.1 1.0
bayarkhangaia 7/128 0.00 29.8 683.7 5.8 1.2 117.8 > 2.5
jmichail 1/1 0.01 243.6 75.0 13.0 10.9 5.8 > 2.5
morrisseyd 20/248 0.02 30.4 141.9 7.7 3.6 18.5 > 2.5
toths 60/286 0.04 28.5 55.7 24.3 1.3 2.3
craigc 203/812 0.04 58.2 64.0 21.5 4.0 3.0 > 2.5
yalisovem 59/284 0.04 27.2 55.7 24.2 2.0 2.3
gonzalezv 5/5 0.04 94.5 111.5 0.6 0.2 184.6 > 2.5
siua 61/292 0.04 27.0 55.8 20.1 1.7 2.8 > 2.5
sookhoos 70/346 0.06 26.8 63.6 27.4 2.0 2.3
wangt2 2/18 0.07 45.9 900.0 161.9 1.8 5.6 > 2.5
srinivasanrv 82/406 0.08 24.9 61.9 25.3 1.5 2.4
radicev 10/40 0.12 50.4 600.0 205.2 126.7 2.9 > 2.5
beckerm 6/48 0.14 14.6 81.8 2.5 2.2 32.2 > 2.5
campanam 5/68 0.15 472.7 64.0 73.2 49.6 0.9
jmartine 1/100 0.16 0.4 100.0 233.6 153.8 0.4
hydem2 136/256 0.22 65.2 13.9 11.1 7.9 1.3
granquistm 78/380 0.45 50.7 75.2 49.4 6.9 1.5
girardmg 159/948 0.46 14.0 79.6 43.6 1.3 1.8
keyworthh 119/728 0.69 125.2 816.0 94.4 3.1 8.6 > 2.5
johnsong 196/1753 0.69 26.7 786.9 2.3 2.1 340.9 > 2.5
afoster 35/35 1.22 99.4 24.0 16.8 9.6 1.4
pcristof 575/17231 2.07 59.4 445.9 62.4 2.1 7.1 > 2.5
uribeje 31/240 2.70 34.1 315.1 20.7 9.7 15.2 > 2.5
carrionj 32/154 3.64 33.2 59.0 4.4 4.2 13.5 > 2.5
bourkeb 44/360 4.18 86.8 377.2 30.6 6.1 12.3 > 2.5
macguigand 307/1784 4.18 182.3 45.2 9.1 2.2 5.0 > 2.5
horowitzj 2122/2122 4.49 97.0 16.0 3.1 1.6 5.2 > 2.5
breusingc 2341/2341 5.34 95.3 16.0 3.4 1.0 4.8 > 2.5
mghahrem 29/139 8.55 13.5 125.8 59.7 9.5 2.1
dikowr 25/250 9.52 89.6 500.0 161.2 58.5 3.1 > 2.5
yancos 1759/1759 14.54 104.1 99.6 8.1 7.3 12.3 > 2.5
pappalardop 50/498 21.75 89.1 325.0 177.5 61.8 1.8
santosbe 125/1680 30.32 82.9 902.0 174.0 50.0 5.2 > 2.5
collinsa 510/8136 63.47 32.1 192.0 35.9 12.7 5.4 > 2.5
ggonzale 190/190 84.20 99.4 23.4 1.5 1.0 15.7 > 2.5
vohsens 3127/50032 133.39 85.1 256.0 19.6 8.1 13.1 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 12586/94109 397.08 79.2 240.1 42.6 14.5 5.6 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
collinsa 3/12 0.00 23.0 64.0 32.2 3.1 2.0
pappalardop 4/38 3.23 89.2 328.2 206.5 79.1 1.6
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 7/50 3.23 89.2 328.0 206.3 79.1 1.6
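In both tables the last column appears to be the mean reserved memory divided by the mean maxvmem, flagged with "> 2.5" when memory was reserved at more than 2.5x what was actually needed; a sketch reproducing a few rows from the rounded values shown above:

rows = {                 # user: (mean reserved GB, mean maxvmem GB)
    "holmk":     (10.0, 3.8),
    "keyworthh": (816.0, 94.4),
    "dikowr":    (500.0, 161.2),
}
for user, (resd, maxvmem) in rows.items():
    ratio = resd / maxvmem
    flag  = " > 2.5" if ratio > 2.5 else ""
    print(f"{user}: {ratio:.1f}{flag}")   # 2.6 > 2.5, 8.6 > 2.5, 3.1 > 2.5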
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow (WFM) queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
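These rules read like Grid Engine resource quota sets, where "users *" places one limit on all users combined and "users {*}" applies the limit to each user separately; a minimal sketch of that interpretation (an assumption about the scheduler's semantics, with illustrative numbers taken from the hiMem rules above):

RULES = [
    # (per_user, queues (None = any queue), slot limit)
    (False, {"sThM.q", "mThM.q", "lThM.q", "uThM.q"}, 4680),  # all users together
    (True,  {"sThM.q"},                               840),   # per user, sThM.q only
    (True,  None,                                     840),   # per user, all queues
]

def may_start(per_user_slots, total_slots, user, queue, request):
    """Check a hypothetical slot request against the rules above."""
    for per_user, queues, limit in RULES:
        if queues is not None and queue not in queues:
            continue
        current = per_user_slots.get(user, 0) if per_user else total_slots
        if current + request > limit:
            return False
    return True

# e.g. a user already holding 576 sThM.q slots asking for 300 more:
print(may_start({"vohsens": 576}, 2136, "vohsens", "sThM.q", 300))  # False (576+300 > 840)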
Disk Usage & Quota
As of Wed Jun 18 11:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 19.38T 2.67T 88% /12% /home
netapp-fas83-n02:/vol_data_public 142.50T 50.66T 91.84T 36%/3% /data/public
netapp-fas83-n02:/vol_pool_public 230.00T 97.51T 132.49T 43%/1% /pool/public
gpfs01:public 400.00T 348.76T 51.24T 88% /53% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 285.32G 98% /1% /pool/kozakk
netapp-fas83-n02:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.49T 14.46T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 907.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.34T 5.66T 82% /6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 21.92T 3.08T 88% /66% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 110.06T 9.94T 92% /6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 18.58T 6.42T 75%/4% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 45.66T 14.34T 77%/7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 22.02T 54.98T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 9.31T 15.69T 38%/3% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 18.96T 16.04T 55%/21% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 42.49T 7.51T 85% /9% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 14.49T 25.51T 37%/1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 31.75T 13.25T 71%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 284.60T 65.40T 82% /4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 33.84T 46.16T 43%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 19.19T 5.81T 77%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 12.68T 2.32T 85% /4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 48.10T 21.90T 69%/48% /scratch/sylvain
gpfs01:usda_sel 25.00T 5.48T 19.52T 22%/6% /scratch/usda_sel
gpfs01:wrbu 50.00T 38.94T 11.06T 78%/6% /scratch/wrbu
netapp-fas83-n01:/vol_data_admin 4.75T 52.69G 4.70T 2%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 38.06T 9.44T 81% /1% /pool/admin
gpfs01:admin 20.00T 3.60T 16.40T 18%/31% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 0.00G 10.00T 1%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
nas1:/mnt/pool/admin 20.00T 7.92T 12.08T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 91.18T 83.82T 53%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 10.37T 29.63T 26%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 12.91T 27.09T 33%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 262.20T 111.06T 151.14T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 2.93T 47.07T 6%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 476.02T 273.98T 64%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 40.00T 27.07T 12.93T 68%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 395.63T 345.58T 50.05T 88% /88% /qnas/mesa
qnas:/sil 3840.36T 2938.07T 902.29T 77%/77% /qnas/sil
You can view plots of disk use vs time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become unable to keep track of new files even though space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is actually full.
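A sketch of that check, parsing the Capacity column ("space%/inode%") and flagging filesystems whose inode usage outpaces their space usage (a few rows from the table above):

capacity = {
    "/scratch/sao_tess": "50%/83%",
    "/scratch/schultzt": "77%/75%",
    "/scratch/public":   "88%/53%",
}
for mount, cap in capacity.items():
    space, inodes = (int(p.rstrip("%")) for p in cap.split("/"))
    if inodes > space:
        print(f"{mount}: {inodes}% of inodes used vs {space}% of space -- may fill up on inodes first")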
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.17TB 92.7% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.99TB 88.7% 0.01M 0.1% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 512.1GB 100.0% 0.00M 0.0% *** Molly Corder, SMSC - corderm
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 484.5GB 94.6% 0.42M 4.2% Adela Roa-Varon, NMNH - roa-varona
/home 480.1GB 93.8% 0.27M 2.7% Paul Cristofari, SAO/SSP - pcristof
/home 478.6GB 93.5% 0.24M 2.4% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 6.75TB 90.0% 13.70M 76.1% Ting Wang, NMNH - wangt2
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 14.20TB 94.7% 14.16M 36.5% Brian Bourke, WRBU - bourkeb
/scratch/public 13.30TB 88.7% 0.87M 2.2% Karen Holm, SMSC - holmk
/scratch/public 13.10TB 87.3% 0.78M 2.0% Ting Wang, NMNH - wangt2
/scratch/public 12.90TB 86.0% 4.38M 11.3% Kevin Mulder, NZP - mulderk
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 25.6G 3.46T 0.7% | 199.7G 3.29T 5.6% | 7.80
65-03 414.7G 20.4G 394.3G 4.9% | 0.0G 3.29T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 414.7G 19.9G 394.8G 4.8% | 0.0G 3.29T 0.0% | 0.00
65-06 414.7G 19.5G 395.1G 4.7% | 0.0G 3.49T 0.0% | 0.00
65-09 414.7G 20.1G 394.6G 4.8% | 0.0G 3.29T 0.0% | 0.00
65-10 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-11 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-12 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-13 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-14 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-15 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-16 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-17 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-18 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-19 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-21 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-22 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-23 1.75T 16.4G 1.73T 0.9% | 199.7G 1.55T 11.2% | 12.19
65-24 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-27 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 14.3G 1.73T 0.8% | 199.7G 1.55T 11.2% | 13.93
65-30 1.75T 13.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 15.00
75-02 6.98T 52.2G 6.93T 0.7% | 400.4G 6.59T 5.6% | 7.67
75-03 6.98T 51.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.90
75-04 6.98T 53.2G 6.93T 0.7% | 400.4G 6.59T 5.6% | 7.52
75-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-06 6.98T 56.3G 6.93T 0.8% | 400.4G 6.59T 5.6% | 7.11
75-07 6.98T 51.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.90
76-03 1.75T 15.4G 1.73T 0.9% | 400.4G 1.35T 22.4% | 26.07
76-04 1.75T 15.4G 1.73T 0.9% | 400.4G 1.35T 22.4% | 26.07
76-13 1.75T 35.8G 1.71T 2.0% | 400.4G 1.35T 22.4% | 11.17
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 115.6T 961.6G 114.7T 0.8% | 6.25T 121.2T 5.4% | 6.65
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.
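The Resd/Used column above appears to be the SSD space reserved by jobs divided by the space actually used; for example:

resd_gb, used_gb = 199.7, 25.6            # node 65-02
print(f"{resd_gb/used_gb:.2f}")           # 7.80

total_resd_tb, total_used_gb = 6.25, 961.6
print(f"{total_resd_tb*1024/total_used_gb:.2f}")  # ~6.66 (table shows 6.65, from unrounded totals)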