Hydra-7 Status
Usage
Current snapshot sorted by nodes' Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed; usage vs. time plots for length = 7d, 15d, or 30d, with a selected user highlighted.
As of Fri Jul 25 18:47:03 2025: #CPUs/nodes 5644/74, 0 down.
Loads:
head node: 0.86, login nodes: 0.26, 0.00, 1.17, 1.03; NSDs: 58.74, 24.14; licenses: 1 idlrt used.
Queues status: 7 disabled, none need attention, none in error state.
26 users with running jobs (slots/jobs):
Current load: 929.4, #running (slots/jobs): 1,991/231, usage: 35.3%, efficiency: 46.7%
4 users with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 4/287/3,032
63 users have/had running or queued jobs over the past 7 days, 90 over the past 15 days, and 112 over the past 30 days.
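The usage and efficiency figures above are consistent with usage = running slots / total CPUs and efficiency = load / running slots; a minimal sketch of that assumed arithmetic (not taken from mk-webpage.pl):

    total_cpus    = 5644     # #CPUs across the 74 nodes
    running_slots = 1991     # slots held by running jobs
    current_load  = 929.4    # summed load of the compute nodes
    print(f"usage: {100*running_slots/total_cpus:.1f}%")         # -> 35.3%
    print(f"efficiency: {100*current_load/running_slots:.1f}%")  # -> 46.7%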
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory, and view the past load for 7, 15, or 30 days, as well as highlight a given user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 25-Jul-2025 18:52:13 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 1:05.
Warnings
Oversubscribed Jobs
As of Fri Jul 25 18:47:04 EDT 2025 (2 oversubscribed jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1991/231, 4 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
9062926 A6_Optuna_Optim mghahrem +1:01 1 137.7% mTgpu.q 50-01
9140647 dorado_gpu_dna2 ariasc 04:08 1 139.4% mTgpu.q 79-01
⇒ Equivalent to 0.8 overused CPUs: 2 CPUs used at 138.5% on average.
Inefficient Jobs
As of Fri Jul 25 18:47:05 EDT 2025 (27 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1991/231, 4 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
9132096 July25_temp_job kmccormick 02:00 50 1.4% sThC.q 75-01
9132101 July25_temp_job kmccormick 01:41 50 1.6% sThC.q 65-03
9132115 July25_temp_job kmccormick 01:02 50 1.6% sThC.q 65-04
8271748 beast_partition jassoj +16:05 40 21.7% lThC.q 64-13
9012393 uce_spades_conc beckerm +6:08 40 2.6% mThM.q 75-01
9013348 beast_partition jassoj +3:04 40 22.6% lThC.q 76-10
9013349 beast_unpartiti jassoj +3:04 40 22.6% lThC.q 65-12
9011217 metawrap_long_p vohsens +7:22 16 9.7% mThM.q 65-10 20
9011217 metawrap_long_p vohsens +7:21 16 9.2% mThM.q 65-09 26
9011217 metawrap_long_p vohsens +6:21 16 9.2% mThM.q 65-14 148
(more by vohsens)
9133473 blast_seqs toths 06:06 16 6.0% sThM.q 93-05
9142759 analyze_19 afoster 02:59 10 22.0% mThC.q 64-10
9144790 gllvm4 gonzalezm2 02:19 10 16.1% uThM.q 76-11
9013381 xvcf1 uribeje +3:03 8 12.4% uThM.q 76-14
9013384 xvcf2 uribeje +3:03 8 12.4% uThM.q 84-01
9013386 xvcf3 uribeje +3:02 8 12.5% uThM.q 93-02
(more by uribeje)
9013131 nf-blastUnalign hydem2 +3:14 5 21.0% lThM.q 76-06
9013134 nf-blastUnalign hydem2 +3:13 5 30.7% lThM.q 76-08
9123246 Blast_16S zehnpfennigj 08:48 5 19.9% uThM.q 65-02
8137841 Job_Step5 perezm4 +57:03 4 26.2% lThM.q 64-17
9013122 albicollis hydem2 +3:23 1 0.1% lTWFM.sq 64-16
⇒ Equivalent to 453.8 underused CPUs: 510 CPUs used at 11.0% on average.
To see them all use:
'q+ -ineff -u uribeje' (6)
'q+ -ineff -u vohsens' (6)
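The two "⇒ Equivalent to …" summaries above (0.8 overused and 453.8 underused CPUs) are consistent with weighting each flagged job's CPUs by how far its average cpu% sits from 100%; a minimal sketch of that assumed arithmetic, not the script's actual code:

    def equivalent_cpus(jobs, over=True):
        # jobs: list of (nPEs, cpu%) pairs for the flagged jobs
        ncpu = sum(n for n, _ in jobs)
        avg  = sum(n * pct for n, pct in jobs) / ncpu     # slot-weighted mean cpu%
        frac = (avg - 100.0) if over else (100.0 - avg)
        return round(ncpu * frac / 100.0, 1), ncpu, avg

    # the two oversubscribed jobs listed above:
    print(equivalent_cpus([(1, 137.7), (1, 139.4)]))
    # -> ~0.8 overused CPUs: 2 CPUs at ~138.5% on average
    # fed the 27 inefficient jobs with over=False, the same formula gives the
    # "453.8 underused CPUs: 510 CPUs used at 11.0% on average" line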
Nodes with Excess Load
As of Fri Jul 25 18:47:06 EDT 2025 (4 nodes have a high load, offset=1.5)
node     #CPUs   #slots used   load   excess load
-------------------------------------------------
50-01 64 1 3.2 2.2 *
76-03 192 48 63.6 15.6 *
76-04 192 48 64.5 16.5 *
93-06 96 48 65.2 17.2 *
Total excess load = 51.6
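The excess load matches each node's load minus its slots in use, with nodes flagged once that difference exceeds the offset; a minimal sketch of that assumed rule:

    OFFSET = 1.5                                             # from the header line above
    nodes = {"50-01": (1, 3.2), "76-03": (48, 63.6),
             "76-04": (48, 64.5), "93-06": (48, 65.2)}       # node: (slots used, load)
    excess = {n: round(load - used, 1) for n, (used, load) in nodes.items()}
    flagged = {n: e for n, e in excess.items() if e > OFFSET}
    print(flagged, "total:", round(sum(flagged.values()), 1))
    # -> excess of 2.2, 15.6, 16.5 and 17.2; the reported total (51.6) is computed
    #    before per-node rounding, the rounded values sum to 51.5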
High Memory Jobs
Statistics
User                nSlots          memory [TB]       memory [TB]      vmem [TB]  maxvmem [TB]  ratio
name                used (%tot)     reserved (%tot)   used (%tot)      used       used          resd/maxvm
------------------------------------------------------------------------------------------------------
collinsa 368 35.2% 8.9844 30.7% 0.3443 19.3% 0.6285 1.1295 8.0
pappalardop 25 2.4% 7.3242 25.0% 0.0229 1.3% 0.0279 0.0279 262.6
toths 384 36.8% 6.0000 20.5% 0.2091 11.7% 0.1763 0.4486 13.4
uribeje 48 4.6% 2.3438 8.0% 0.0951 5.3% 0.1778 0.3078 7.6
vohsens 96 9.2% 1.5000 5.1% 0.0330 1.9% 0.0021 0.4813 3.1
hydem2 10 1.0% 0.9766 3.3% 0.5891 33.0% 0.6673 0.6781 1.4
bourkeb 8 0.8% 0.5000 1.7% 0.0006 0.0% 0.0007 0.0007 677.3
vagac 10 1.0% 0.4102 1.4% 0.0027 0.2% 0.0341 0.0580 7.1
perezm4 4 0.4% 0.3906 1.3% 0.3339 18.7% 0.3833 0.3833 1.0
zhangy 8 0.8% 0.3906 1.3% 0.0276 1.5% 0.0310 0.0555 7.0
cerqueirat 10 1.0% 0.1562 0.5% 0.0536 3.0% 0.0552 0.0593 2.6
gonzalezm2 10 1.0% 0.1172 0.4% 0.0084 0.5% 0.0124 0.0156 7.5
beckerm 40 3.8% 0.0781 0.3% 0.0405 2.3% 0.0003 0.0779 1.0
wirshingh 9 0.9% 0.0703 0.2% 0.0026 0.1% 0.0028 0.0336 2.1
hinckleya 1 0.1% 0.0391 0.1% 0.0065 0.4% 0.0003 0.0387 1.0
zehnpfennigj 5 0.5% 0.0176 0.1% 0.0005 0.0% 0.0005 0.0005 32.8
urrutia-carterje 8 0.8% 0.0010 0.0% 0.0137 0.8% 0.0502 0.0502 0.0
==================================================================================================
Total 1044 29.2998 1.7840 2.2507 3.8466 7.6
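In the table above the percentages are each user's share of the corresponding "Total" value, and the last column compares the memory reserved with the maximum vmem actually reached; a minimal sketch of that assumed arithmetic, using collinsa's row:

    nslots, mem_resd, mem_used, maxvmem = 368, 8.9844, 0.3443, 1.1295   # memory in TB
    tot_slots, tot_resd, tot_used = 1044, 29.2998, 1.7840               # "Total" row
    print(f"{100*nslots/tot_slots:.1f}%")    # 35.2%  share of the slots in use
    print(f"{100*mem_resd/tot_resd:.1f}%")   # 30.7%  share of the reserved memory
    print(f"{100*mem_used/tot_used:.1f}%")   # 19.3%  share of the used memory
    print(f"{mem_resd/maxvmem:.1f}")         # 8.0    resd/maxvm, >>1 means over-reservation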
Warnings
74 high memory jobs produced a warning:
1 for beckerm
1 for bourkeb
10 for cerqueirat
22 for collinsa
1 for hinckleya
2 for hydem2
1 for perezm4
19 for toths
6 for uribeje
1 for urrutia-carterje
1 for vagac
6 for vohsens
1 for wirshingh
1 for zehnpfennigj
1 for zhangy
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d.
Current Usage by Queue
                                       Total   Limit   Fill factor   Efficiency
hiCPU:   sThC.q=650   mThC.q=170
         lThC.q=120   uThC.q=2           942    5056      18.6%        81.7%
hiMem:   sThM.q=384   mThM.q=564
         lThM.q=25    uThM.q=71         1044    4680      22.3%        71.8%
GPU:     sTgpu.q=0    mTgpu.q=2
         lTgpu.q=1    qgpu.iq=0            3     104       2.9%       150.0%
         uTxlM.rq=0                        0     536       0.0%
         lThMuVM.tq=0                      0     384       0.0%
         lTb2g.q=0                         0       2       0.0%
         lTIO.sq=0                         0       8       0.0%
         lTWFM.sq=1                        1       4      25.0%         1.1%
         qrsh.iq=1                         1      68       1.5%        19.6%
Total:                                  1991
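The fill factor is the number of slots in use divided by the queue group's slot limit (the efficiency column presumably relates CPU load to those slots, but its inputs are not shown here); a minimal sketch of the fill-factor arithmetic:

    groups = {"hiCPU": (942, 5056), "hiMem": (1044, 4680), "GPU": (3, 104),
              "lTWFM.sq": (1, 4), "qrsh.iq": (1, 68)}       # (slots used, limit)
    for name, (used, limit) in groups.items():
        print(f"{name}: {100*used/limit:.1f}% full")
    # -> 18.6%, 22.3%, 2.9%, 25.0%, 1.5%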
Avail Slots/Wait Job(s)
Available Slots
As of Fri Jul 25 18:47:05 EDT 2025
3060 avail(slots), free(load)=5007.3, unresd(mem)=10936.4G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5056 total(mem) 38.8T
unused(slots) 3234 unused(load) 5047.1 ie: 64.0% 99.8%
unreserved(mem) 11.0T unused(mem) 36.9T ie: 28.3% 95.1%
unreserved(mem) 3.5G unused(mem) 11.7G per unused(slots)
2711 avail(slots), free(load)=4639.5, unresd(mem)=7471.8G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4648 total(mem) 35.3T
unused(slots) 2885 unused(load) 4639.5 ie: 62.1% 99.8%
unreserved(mem) 7.3T unused(mem) 33.3T ie: 20.7% 94.3%
unreserved(mem) 2.6G unused(mem) 11.8G per unused(slots)
200 avail(slots), free(load)=247.3, unresd(mem)=3767.3G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 344 total(mem) 6.4T
unused(slots) 200 unused(load) 342.6 ie: 58.1% 99.6%
unreserved(mem) 4.1T unused(mem) 6.3T ie: 64.8% 98.9%
unreserved(mem) 21.2G unused(mem) 32.4G per unused(slots)
101 avail(slots), free(load)=103.9, unresd(mem)=748.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 101 unused(load) 103.9 ie: 97.1% 99.9%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 99.2% 88.8%
unreserved(mem) 7.4G unused(mem) 6.6G per unused(slots)
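The per-unused-slot numbers in each block are just the unreserved (respectively unused) memory divided by the number of unused slots; a minimal sketch using the @hicpu-hosts figures, with TB converted to GB:

    unused_slots   = 3234
    unreserved_mem = 11.0 * 1024      # TB -> GB
    unused_mem     = 36.9 * 1024
    print(round(unreserved_mem / unused_slots, 1))   # ~3.5 GB per unused slot
    print(round(unused_mem / unused_slots, 1))       # ~11.7 GB per unused slot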
GPU Usage
Fri Jul 25 18:47:11 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - avail/used - nCPU used load - free unused
compute-50-01 - 503.3 46.0 457.3 - 4/1 - 64 1 3.2 - 63 60.8
compute-79-01 - 125.5 23.4 102.1 - 2/2 - 20 1 1.2 - 19 18.8
compute-79-02 - 125.5 15.2 110.3 - 2/1 - 20 1 0.1 - 19 19.9
Total #GPU=8 used=4 (50.0%)
Waiting Job(s)
As of Fri Jul 25 18:47:06 EDT 2025
1 job waiting for collinsa :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9015692 spades_array.jo collinsa +2:00 16 400.0 mThM.q 179-267:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.984T/8.944T 100.4% for collinsa in queue uThM.q
max_hM_slots_per_user/2 slots=368/585 62.9% for collinsa in queue mThM.q
max_slots_per_user/1 slots=368/840 43.8% for collinsa
------------------- ------------------------------- ------
1 job waiting for jbak :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9123221 J_20230728T-uv3 jbak 09:05 1 mThC.q 507-546:1
none running.
1 job waiting for pappalardop :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9062688 soakblend_bold pappalardop +1:01 1 300.0 mThM.q 456-519:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=7.324T/8.944T 81.9% for pappalardop in queue uThM.q
max_hM_slots_per_user/2 slots=25/585 4.3% for pappalardop in queue mThM.q
max_slots_per_user/1 slots=25/840 3.0% for pappalardop
------------------- ------------------------------- ------
1 job waiting for toths :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9141115 megahit_barrnap toths 03:27 16 256.0 sThM.q 68-161:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=6.000T/8.944T 67.1% for toths in queue uThM.q
max_slots_per_user/1 slots=384/840 45.7% for toths
max_hM_slots_per_user/1 slots=384/840 45.7% for toths in queue sThM.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=29.30T/35.78T 81.9% for * in queue uThM.q
blast2GO/1 slots=59/110 53.6% for *
total_gpus/1 num_gpu=3/8 37.5% for * in queue mTgpu.q
total_slots/1 slots=1991/5960 33.4% for *
total_gpus/1 num_gpu=1/8 12.5% for * in queue lTgpu.q
total_mem_res/1 mem_res=1.581T/39.94T 4.0% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d.
Current Memory Quota Usage
As of Fri Jul 25 18:47:06 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=1.581T/39.94T 4.0% for * in queue uThC.q
total_mem_res/2 mem_res=29.30T/35.78T 81.9% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.3 x x - node down - 32 x x - x x
compute-64-18 - 503.4 299.8 400.1 - 203.6 103.3 - 32 16 16.9 - 16 15.1
compute-65-02 - 503.5 13.8 274.0 - 489.7 229.5 - 64 21 1.3 - 43 62.7
compute-65-03 - 503.5 21.6 366.0 - 481.9 137.5 - 64 52 3.1 - 12 60.9
compute-65-04 - 503.5 12.7 350.0 - 490.8 153.5 - 64 51 2.1 - 13 61.9
compute-65-05 - 503.5 13.6 350.0 - 489.9 153.5 - 64 51 2.1 - 13 61.9
compute-65-06 - 503.5 16.4 400.0 - 487.1 103.5 - 64 16 16.0 - 48 48.0
compute-65-07 - 503.5 50.8 416.0 - 452.7 87.5 - 64 17 9.3 - 47 54.6
compute-65-09 - 503.5 14.3 256.0 - 489.2 247.5 - 64 16 0.3 - 48 63.7
compute-65-10 - 503.5 20.5 272.0 - 483.0 231.5 - 64 17 1.2 - 47 62.8
compute-65-11 - 503.5 57.7 258.0 - 445.8 245.5 - 64 24 8.2 - 40 55.8
compute-65-12 - 503.5 20.4 460.0 - 483.1 43.5 - 64 56 20.3 - 8 43.7
compute-65-13 - 503.5 58.0 400.0 - 445.5 103.5 - 64 16 7.3 - 48 56.7
compute-65-14 - 503.5 16.9 328.0 - 486.6 175.5 - 64 25 4.0 - 39 60.0
compute-65-15 - 503.5 14.3 350.0 - 489.2 153.5 - 64 51 2.0 - 13 62.0
compute-65-16 - 503.5 15.4 256.0 - 488.1 247.5 - 64 16 0.4 - 48 63.6
compute-65-17 - 503.5 26.5 400.0 - 477.0 103.5 - 64 16 14.8 - 48 49.2
compute-65-18 - 503.5 20.6 400.0 - 482.9 103.5 - 64 16 11.1 - 48 53.0
compute-65-19 - 503.5 85.0 400.0 - 418.5 103.5 - 64 16 9.9 - 48 54.1
compute-65-20 - 503.5 14.2 302.0 - 489.3 201.5 - 64 61 37.6 - 3 26.4
compute-65-21 - 503.5 14.8 350.0 - 488.7 153.5 - 64 51 2.1 - 13 61.9
compute-65-22 - 503.5 19.3 350.0 - 484.2 153.5 - 64 51 2.0 - 13 62.0
compute-65-23 - 503.5 13.5 350.0 - 490.0 153.5 - 64 51 2.0 - 13 62.0
compute-65-24 - 503.5 15.9 400.0 - 487.6 103.5 - 64 16 16.1 - 48 48.0
compute-65-25 - 503.5 55.5 400.0 - 448.0 103.5 - 64 16 16.1 - 48 47.9
compute-65-26 - 503.5 13.6 340.0 - 489.9 163.5 - 64 2 2.4 - 62 61.6
compute-65-27 - 503.5 13.5 350.0 - 490.0 153.5 - 64 51 2.0 - 13 62.0
compute-65-28 - 503.5 18.1 400.0 - 485.4 103.5 - 64 16 16.1 - 48 48.0
compute-65-29 - 503.5 14.8 256.0 - 488.7 247.5 - 64 16 11.9 - 48 52.1
compute-65-30 - 503.5 16.9 400.0 - 486.6 103.5 - 64 16 16.2 - 48 47.8
compute-75-01 - 1007.5 18.0 830.1 - 989.5 177.4 - 128 99 4.4 - 29 123.6
compute-75-02 - 1007.5 29.2 912.0 - 978.3 95.5 - 128 48 45.1 - 80 82.9
compute-75-03 - 755.5 18.6 512.0 - 736.9 243.5 - 128 32 28.9 - 96 99.1
compute-75-04 - 755.5 61.7 595.7 - 693.8 159.8 - 128 40 34.4 - 88 93.6
compute-75-05 - 755.5 26.4 514.0 - 729.1 241.5 - 128 33 29.4 - 95 98.6
compute-75-06 - 755.5 25.4 512.0 - 730.1 243.5 - 128 32 29.2 - 96 98.8
compute-75-07 - 755.5 68.1 635.5 - 687.4 120.0 - 128 35 30.8 - 93 97.2
compute-76-03 - 1007.4 21.4 912.5 - 986.0 94.9 - 128 48 42.4 - 80 85.6
compute-76-04 - 1007.4 28.1 912.0 - 979.3 95.4 - 128 48 42.3 - 80 85.7
compute-76-05 - 1007.4 16.7 950.0 - 990.7 57.4 - 128 53 4.2 - 75 123.8
compute-76-06 - 1007.4 149.3 900.0 - 858.1 107.4 - 128 21 11.8 - 107 116.2
compute-76-07 - 1007.4 17.2 1000.0 - 990.2 7.4 - 128 103 4.5 - 25 123.5
compute-76-08 - 1007.4 159.5 800.0 - 847.9 207.4 - 128 6 3.5 - 122 124.5
compute-76-09 - 1007.4 20.6 912.0 - 986.8 95.4 - 128 24 24.2 - 104 103.8
compute-76-10 - 1007.4 48.6 880.0 - 958.8 127.4 - 128 66 29.0 - 62 99.0
compute-76-11 - 1007.4 74.0 922.0 - 933.4 85.4 - 128 43 25.6 - 85 102.4
compute-76-12 - 1007.4 16.3 900.0 - 991.1 107.4 - 128 3 3.3 - 125 124.7
compute-76-13 - 1007.4 19.9 912.0 - 987.5 95.4 - 128 48 39.7 - 80 88.3
compute-76-14 - 1007.4 21.9 1000.0 - 985.5 7.4 - 128 10 3.2 - 118 124.8
compute-84-01 - 881.1 100.9 750.0 - 780.2 131.1 - 112 59 3.5 - 53 108.5
compute-93-01 - 503.8 44.5 400.0 - 459.3 103.8 - 64 8 5.1 - 56 58.9
compute-93-02 - 755.6 35.6 517.6 - 720.0 238.0 - 72 22 16.1 - 50 55.9
compute-93-03 - 755.6 43.6 549.2 - 712.0 206.4 - 72 25 18.3 - 47 53.7
compute-93-04 - 755.6 23.2 700.0 - 732.4 55.6 - 72 17 14.7 - 55 57.3
======= ===== ====== ==== ==== =====
Totals 36134.4 2077.1 28662.6 4648 1763 748.7
==> 5.7% 79.3% ==> 37.9% 16.1%
Most unreserved/unused memory (247.5/489.2GB) is on compute-65-09 with 48/63.7 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 18.8 -0.0 - 1492.6 1511.4 - 192 0 0.1 - 192 191.9
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 38.4 1536.0 - 1977.9 480.3 - 96 96 70.1 - 0 25.9
compute-93-06 - 3023.9 17.9 768.0 - 3006.0 2255.9 - 56 48 38.1 - 8 17.9
======= ===== ====== ==== ==== =====
Totals 6551.6 75.1 2304.0 344 144 108.2
==> 1.1% 35.2% ==> 41.9% 31.5%
Most unreserved/unused memory (2255.9/3006.0GB) is on compute-93-06 with 8/17.9 slots/CPUs free/unused.
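The "==>" percentage rows under each node table are consistent with used and reserved memory as fractions of the available memory, and used slots and load as fractions of the CPU count; a minimal sketch using the @himem-hosts totals:

    avail_mem, used_mem, resd_mem = 36134.4, 2077.1, 28662.6   # GB
    ncpu, used_slots, load        = 4648, 1763, 748.7
    print(f"{100*used_mem/avail_mem:.1f}% {100*resd_mem/avail_mem:.1f}%")   # 5.7% 79.3%
    print(f"{100*used_slots/ncpu:.1f}% {100*load/ncpu:.1f}%")               # 37.9% 16.1%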
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 07/16/25 and 07/23/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
yisraell 2/19 0.00 48.7 1000.0 1.1 1.0 907.5 > 2.5
pappalardop 2/2 0.00 91.5 250.0 1.2 0.5 215.5 > 2.5
jassoj 5/5 0.01 235.8 128.0 136.5 135.0 0.9
toths 152/2432 0.06 7.4 253.1 1.1 0.7 225.3 > 2.5
mcgowenm 33/264 0.09 237.9 80.0 61.8 56.0 1.3
zehnpfennigj 5/85 0.14 168.3 155.0 378.8 93.7 0.4
macguigand 154/784 0.14 23.4 169.4 25.3 1.6 6.7 > 2.5
carrionj 42/210 0.30 28.1 10.0 0.1 0.0 83.5 > 2.5
xuj 12/24 0.36 46.4 400.0 209.5 122.3 1.9
gouldingt 19/108 0.60 7.0 190.3 4.1 4.1 45.9 > 2.5
wangt2 8/210 0.61 37.9 542.1 13.0 3.5 41.8 > 2.5
horowitzj 1616/1616 0.84 93.8 16.0 2.1 1.2 7.6 > 2.5
mghahrem 30/30 0.85 59.0 0.0 93.8 66.5 0.0
vagac 1/4 0.91 99.2 400.0 39.6 2.5 10.1 > 2.5
macdonaldk 28/368 0.95 86.2 149.6 71.0 7.2 2.1
ramosi 31/662 1.01 4.7 315.2 4.1 3.7 77.1 > 2.5
radicev 10/25 1.02 35.2 550.4 298.3 160.5 1.8
pcristof 307/9249 1.15 56.2 300.0 44.4 1.4 6.8 > 2.5
uribeje 11/120 1.44 37.7 350.4 39.8 18.7 8.8 > 2.5
bakerd 17/136 1.60 52.2 400.0 15.9 4.0 25.1 > 2.5
hydem2 367/1501 2.13 76.0 157.8 70.2 59.5 2.2
wirshingh 3/22 2.26 87.5 48.3 55.2 3.9 0.9
bourkeb 5/40 4.98 98.4 510.9 0.8 0.7 612.4 > 2.5
hinckleya 66/475 6.68 85.7 22.5 30.2 21.3 0.7
beckerm 705/7414 8.94 11.0 181.1 35.9 22.2 5.0 > 2.5
granquistm 64/342 9.27 84.4 198.8 40.6 12.4 4.9 > 2.5
campanam 277/390 16.72 98.8 307.7 40.9 31.2 7.5 > 2.5
ggonzale 864/864 143.47 71.8 76.9 38.3 25.3 2.0
vohsens 620/9920 155.35 84.0 256.0 104.1 5.5 2.5
cerqueirat 5912/5912 1458.38 99.7 16.0 5.5 4.3 2.9 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 11368/43233 1820.28 95.2 49.1 17.8 6.7 2.8 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
ariasc 6/100 0.11 64.0 635.8 51.6 31.9 12.3 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 6/100 0.11 64.0 635.8 51.6 31.9 12.3 > 2.5
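In both tables the last column flags users whose mean reserved memory exceeded their mean maxvmem by more than a factor of 2.5, i.e. reservations far above what the jobs ever needed; a minimal sketch of that assumed check:

    def reservation_ratio(reserved_gb, maxvmem_gb, threshold=2.5):
        ratio = reserved_gb / maxvmem_gb
        return round(ratio, 1), ratio > threshold    # flagged as "> 2.5" in the table

    print(reservation_ratio(1000.0, 1.1))    # yisraell: ~909 (table shows 907.5, from unrounded means)
    print(reservation_ratio(128.0, 136.5))   # jassoj: 0.9, not flagged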
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for the lTWFM.sq queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
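These limits are rendered from the scheduler's resource quota configuration; as a sketch, the "Limit slots/user for all queues" entry above would look roughly like the following Grid Engine resource quota set (cf. qconf -srqs), reusing the rule-set name max_slots_per_user reported in the waiting-job section; the description and enabled fields here are illustrative, not copied from the cluster:

    {
       name         max_slots_per_user
       description  Limit slots/user for all queues
       enabled      TRUE
       limit        users {*} to slots=840
    }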
Disk Usage & Quota
As of Fri Jul 25 17:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 20.14T 1.91T 92% /12% /home
netapp-fas83-n02:/vol_data_public 142.50T 43.59T 98.91T 31%/3% /data/public
netapp-fas83-n02:/vol_pool_public 230.00T 100.85T 129.15T 44%/1% /pool/public
gpfs01:public 400.00T 380.76T 19.24T 96% /54% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 285.32G 98% /1% /pool/kozakk
netapp-fas83-n02:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.49T 14.46T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 907.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.48T 5.52T 82% /6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 22.18T 2.82T 89% /59% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 112.13T 7.87T 94% /6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 19.07T 5.93T 77%/4% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 46.65T 13.35T 78%/7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 22.02T 54.98T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 9.50T 15.50T 39%/3% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 20.30T 14.70T 58%/22% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 45.86T 4.14T 92% /9% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 31.10T 8.90T 78%/1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 35.11T 9.89T 79%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 230.46T 119.54T 66%/4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 26.40T 53.60T 33%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 19.90T 5.10T 80%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 12.70T 2.30T 85% /4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 61.50T 8.50T 88% /47% /scratch/sylvain
gpfs01:usda_sel 25.00T 5.50T 19.50T 23%/6% /scratch/usda_sel
gpfs01:wrbu 50.00T 39.13T 10.87T 79%/6% /scratch/wrbu
netapp-fas83-n01:/vol_data_admin 4.75T 53.16G 4.70T 2%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 41.71T 5.79T 88% /1% /pool/admin
gpfs01:admin 20.00T 3.48T 16.52T 18%/30% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 0.00G 10.00T 1%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
nas1:/mnt/pool/admin 20.00T 7.93T 12.07T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 93.17T 81.83T 54%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 39.99T 10.37T 29.63T 26%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 13.67T 26.33T 35%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 262.20T 112.33T 149.87T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 2.93T 47.07T 6%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 367.65T 382.35T 50%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 40.00T 27.74T 12.26T 70%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 395.63T 352.69T 42.94T 90% /90% /qnas/mesa
qnas:/sil 3840.36T 2971.57T 868.80T 78%/78% /qnas/sil
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become full even though free space remains, because it can no longer keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it is full.
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.18TB 92.9% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.99TB 88.7% 0.01M 0.1% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 512.1GB 100.0% 0.00M 0.0% *** Molly Corder, SMSC - corderm
/home 499.6GB 97.6% 0.28M 2.8% *** Paul Cristofari, SAO/SSP - pcristof
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 484.5GB 94.6% 0.42M 4.2% Adela Roa-Varon, NMNH - roa-varona
/home 478.6GB 93.5% 0.24M 2.4% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 471.4GB 92.1% 0.03M 0.3% Shauna Rasband, NMNH - rasbands
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
/pool/public 6.43TB 85.7% 13.86M 77.0% Ting Wang, NMNH - wangt2
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 15.00TB 100.0% 0.02M 0.1% *** Samuel Vohsen, NMNH - vohsens
/scratch/public 13.90TB 92.7% 1.79M 4.6% Ting Wang, NMNH - wangt2
/scratch/public 13.50TB 90.0% 0.91M 2.4% Karen Holm, SMSC - holmk
/scratch/public 13.50TB 90.0% 2.09M 5.4% Solomon Chak, SERC - chaks
/scratch/public 13.10TB 87.3% 4.38M 11.3% Kevin Mulder, NZP - mulderk
/scratch/public 13.00TB 86.7% 0.33M 0.9% Juan Uribe, NMNH - uribeje
/scratch/public 12.90TB 86.0% 14.30M 36.8% Brian Bourke, WRBU - bourkeb
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.46T 39.9G 3.42T 1.1% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 224.3G 3.27T 6.3% | 199.7G 3.29T 5.6% | 0.89
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-09 3.49T 224.3G 3.27T 6.3% | 199.7G 3.29T 5.6% | 0.89
65-10 1.75T 212.0G 1.54T 11.9% | 199.7G 1.55T 11.2% | 0.94
65-11 1.75T 212.0G 1.54T 11.9% | 199.7G 1.55T 11.2% | 0.94
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 212.0G 1.54T 11.9% | 199.7G 1.55T 11.2% | 0.94
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 212.0G 1.54T 11.9% | 199.7G 1.55T 11.2% | 0.94
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-21 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-26 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 58.4G 6.92T 0.8% | 400.4G 6.59T 5.6% | 6.86
75-03 6.98T 64.5G 6.92T 0.9% | 400.4G 6.59T 5.6% | 6.21
75-04 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 55.3G 6.93T 0.8% | 400.4G 6.59T 5.6% | 7.24
75-06 6.98T 58.4G 6.92T 0.8% | 400.4G 6.59T 5.6% | 6.86
75-07 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 23.6G 1.72T 1.3% | 400.4G 1.35T 22.4% | 17.00
76-04 1.75T 20.5G 1.73T 1.1% | 400.4G 1.35T 22.4% | 19.55
76-13 1.75T 37.9G 1.71T 2.1% | 400.4G 1.35T 22.4% | 10.57
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 73.7G 6.91T 1.0% | 0.98T 6.00T 14.0% | 13.57
---------------------------------------------------------------
Total 133.2T 2.21T 131.0T 1.7% | 10.31T 122.9T 7.7% | 4.67
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.