Hydra-7@ADC Status
Usage
As of Thu Mar 19 11:47:03 2026: #CPUs/nodes 5740/74, 0 down.
Loads:
head node: 0.41, login nodes: 2.69, 0.08, 0.06, 0.00; NSDs: 0.07, 0.00, 1.53, 3.75, 3.63; licenses: none used.
Queue status: 8 disabled, none need attention, none in error state.
17 users with running jobs (slots/jobs):
Current load: 820.8, #running (slots/jobs): 1,315/146, usage: 22.9%, efficiency: 62.4%
1 user with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 31/31/310
72 users have/had running or queued jobs over the past 7 days, 91 over the past 15 days, and 111 over the past 30 days.
Click on the tabs to view each section, on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory;
view the past load for 7, 15, or 30 days; and highlight a given user
by selecting the corresponding options in the drop-down menus.
This page was last updated on Thursday, 19-Mar-2026 11:52:23 EDT
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 1:16.
Warnings
Oversubscribed Jobs
As of Thu Mar 19 11:47:04 EDT 2026 (0 oversubscribed jobs)
Inefficient Jobs
As of Thu Mar 19 11:47:04 EDT 2026 (26 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1315/146, 31 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
12190422 stairwayAZ.job byerlyp +31:02 5 19.9% lThM.q 64-17
12195552 stairwayNE.job byerlyp +30:22 5 19.9% lThM.q 76-04
12198833 stairwayCAR.job byerlyp +30:01 5 19.8% lThM.q 76-14
12510406 krakenuniq_Scri yisraell +8:11 48 0.0% mThC.q 65-15
12586423 treemix_batch uribeje +7:20 20 5.0% lThM.q 76-07
12647567 treemix_batch uribeje +7:02 20 5.0% lThM.q 93-04
12762748 treemix_batch uribeje +2:03 20 5.0% lThM.q 76-12
12765814 Idx_chr02 niez +1:03 16 6.2% mThC.q 65-06
12765815 Idx_chr03 niez +1:03 16 6.2% mThC.q 65-23
12765816 Idx_chr04 niez +1:03 16 6.2% mThC.q 65-14
(more by niez)
12769475 iqtree2 santossam 22:57 20 7.9% mThM.q 93-03
12769494 astral_iqtree santossam 22:13 20 8.7% mThM.q 76-04
12769820 BPP_rajah_2 chippsa 01:50 64 19.7% mThC.q 84-01
⇒ Equivalent to 445.1 underused CPUs: 483 CPUs used at 7.8% on average.
To see them all use:
'q+ -ineff -u niez' (16)
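The "underused CPUs" figure can be reproduced from the summary line above. This is a sketch, assuming the report computes it as slots held times the unused CPU fraction; the report evidently uses unrounded per-job percentages, so the rounded 7.8% shown here gives about 445.3 rather than 445.1:

```python
def underused_cpus(ncpu: int, mean_cpu_frac: float) -> float:
    """CPUs effectively idle: slots held, minus the work they actually do."""
    return ncpu * (1.0 - mean_cpu_frac)

# 483 CPUs held by inefficient jobs, running at 7.8% CPU on average:
print(round(underused_cpus(483, 0.078), 1))  # ~445.3 (report: 445.1, from unrounded inputs)
```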
Nodes with Excess Load
As of Thu Mar 19 11:47:05 EDT 2026 (3 nodes have a high load, offset=1.5)
#slots excess
node #CPUs used load load
-----------------------------------
65-25 64 0 3.6 3.6 *
76-02 192 0 2.0 2.0 *
76-03 192 21 31.6 10.6 *
Total excess load = 16.2
High Memory Jobs
Statistics
User nSlots memory memory vmem maxvmem ratio
Name used reserved used used used [TB] resd/maxvm
--------------------------------------------------------------------------------------------------
granquistm 580 76.9% 9.0625 74.7% 2.0524 90.7% 2.8561 4.3732 2.1
uribeje 60 8.0% 1.7578 14.5% 0.0405 1.8% 0.0406 0.0406 43.3
morrisseyd 48 6.4% 0.7500 6.2% 0.1487 6.6% 0.1670 0.1994 3.8
jhora 10 1.3% 0.2500 2.1% 0.0008 0.0% 0.0008 0.0008 313.8
santossam 40 5.3% 0.2344 1.9% 0.0113 0.5% 0.0124 0.0153 15.3
byerlyp 15 2.0% 0.0586 0.5% 0.0100 0.4% 0.0101 0.0101 5.8
nevesk 1 0.1% 0.0156 0.1% 0.0002 0.0% 0.0002 0.0002 81.7
==================================================================================================
Total 754 12.1289 2.2637 3.0871 4.6396 2.6
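The last column ("resd/maxvm") appears to be reserved memory divided by peak virtual memory (maxvmem), both in TB; values well above 1 indicate over-reservation. A minimal check against the granquistm row:

```python
def reserve_ratio(reserved_tb: float, maxvmem_tb: float) -> float:
    """Reserved-to-peak-usage memory ratio; >> 1 suggests over-reservation."""
    return reserved_tb / maxvmem_tb

print(round(reserve_ratio(9.0625, 4.3732), 1))  # 2.1, matching the table
```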
Warnings
114 high memory jobs produced a warning:
3 for byerlyp
58 for granquistm
48 for morrisseyd
2 for santossam
3 for uribeje
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d
Current Usage by Queue
Queue group                        Total  Limit  Fill factor  Efficiency
sThC.q=0, mThC.q=400, lThC.q=129, uThC.q=16
                                     545   5056        10.8%      145.2%
sThM.q=10, mThM.q=668, lThM.q=76, uThM.q=0
                                     754   4680        16.1%      103.0%
sTgpu.q=4, mTgpu.q=0, lTgpu.q=0, qgpu.iq=0
                                       4    104         3.8%      102.2%
uTxlM.rq=0                             0    536         0.0%
lThMuVM.tq=0                           0    384         0.0%
lTb2g.q=0                              0      2         0.0%
lTIO.sq=0                              0      8         0.0%
lTWFM.sq=0                             0      4         0.0%
qrsh.iq=12                            12     68        17.6%       15.4%
Total: 1315
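The fill factor appears to be slots in use divided by the queue group's slot limit (an assumption inferred from the numbers shown; efficiency comes from the tool and is not rederived here):

```python
def fill_factor(slots_used: int, slots_limit: int) -> float:
    """Percent of a queue group's slot limit currently in use."""
    return 100.0 * slots_used / slots_limit

print(f"{fill_factor(545, 5056):.1f}%")  # ThC queues: 10.8%
print(f"{fill_factor(754, 4680):.1f}%")  # ThM queues: 16.1%
```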
Avail Slots/Wait Job(s)
Available Slots
As of Thu Mar 19 11:47:04 EDT 2026
3863 avail(slots), free(load)=5110.2, unresd(mem)=25140.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 3895 unused(load) 5110.2 ie: 76.1% 99.8%
unreserved(mem) 24.6T unused(mem) 35.4T ie: 61.6% 89.0%
unreserved(mem) 6.5G unused(mem) 9.3G per unused(slots)
3552 avail(slots), free(load)=4671.1, unresd(mem)=21783.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 3584 unused(load) 4671.1 ie: 76.6% 99.8%
unreserved(mem) 21.3T unused(mem) 31.7T ie: 59.5% 88.5%
unreserved(mem) 6.1G unused(mem) 9.0G per unused(slots)
398 avail(slots), free(load)=408.0, unresd(mem)=7807.0G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 408 total(mem) 7.9T
unused(slots) 398 unused(load) 408.0 ie: 97.5% 100.0%
unreserved(mem) 7.6T unused(mem) 7.3T ie: 96.8% 92.9%
unreserved(mem) 19.6G unused(mem) 18.8G per unused(slots)
100 avail(slots), free(load)=103.9, unresd(mem)=690.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 100 unused(load) 103.9 ie: 96.2% 99.9%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 91.5% 94.3%
unreserved(mem) 6.9G unused(mem) 7.1G per unused(slots)
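The per-unused-slot figures follow from the terabyte totals (a sketch; the small rounding differences suggest 1 TB = 1024 GB is assumed):

```python
def per_slot_gb(mem_tb: float, unused_slots: int) -> float:
    """Memory in GB available per unused slot, from a terabyte total."""
    return mem_tb * 1024 / unused_slots

# @hicpu-hosts: 24.6 TB unreserved across 3895 unused slots
print(round(per_slot_gb(24.6, 3895), 1))  # ~6.5 GB per unused slot
```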
GPU Usage
Thu Mar 19 11:47:11 EDT 2026
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 22.5 480.8 - 4/4 - 64 4 4.0 - 60 60.0
compute-79-01 - 125.5 10.3 115.2 - 2/0 - 20 0 0.0 - 20 20.0
compute-79-02 - 125.5 10.0 115.5 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=4 (50.0%)
Waiting Job(s)
As of Thu Mar 19 11:47:05 EDT 2026
31 jobs waiting for granquistm (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
12769789 USNM1739420_spa granquistm 01:51 10 160.0 mThM.q
12769790 USNM1739421_spa granquistm 01:51 10 160.0 mThM.q
12769791 USNM1739422_spa granquistm 01:51 10 160.0 mThM.q
12769792 USNM1739423_spa granquistm 01:51 10 160.0 mThM.q
12769793 USNM1739424_spa granquistm 01:51 10 160.0 mThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.062T/8.944T 101.3% for granquistm in queue uThM.q
max_hM_slots_per_user/2 slots=580/585 99.1% for granquistm in queue mThM.q
max_slots_per_user/1 slots=580/840 69.0% for granquistm
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=65/110 59.1% for *
total_gpus/1 GPUS=4/8 50.0% for * in queue sTgpu.q
total_mem_res/2 mem_res=12.13T/35.78T 33.9% for * in queue uThM.q
total_slots/1 slots=1316/5960 22.1% for *
total_mem_res/1 mem_res=3.393T/39.94T 8.5% for * in queue uThC.q
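Each %used entry in these quota listings is simply the resource value over its limit; checking the total_mem_res/2 row:

```python
def quota_pct(value: float, limit: float) -> float:
    """Percent of a quota limit consumed."""
    return 100.0 * value / limit

print(f"{quota_pct(12.13, 35.78):.1f}%")  # 33.9%, matching total_mem_res/2 in uThM.q
```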
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d
Current Memory Quota Usage
As of Thu Mar 19 11:47:05 EDT 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=3.393T/39.94T 8.5% for * in queue uThC.q
total_mem_res/2 mem_res=12.13T/35.78T 33.9% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 62.7 180.2 - 440.8 323.3 - 32 15 11.0 - 17 21.0
compute-64-18 - 503.5 70.1 160.2 - 433.4 343.3 - 32 10 9.9 - 22 22.1
compute-65-02 - 503.5 79.2 320.0 - 424.3 183.5 - 64 28 13.0 - 36 51.0
compute-65-03 - 503.5 182.1 304.0 - 321.4 199.5 - 64 27 12.0 - 37 52.0
compute-65-04 - 503.5 79.0 320.0 - 424.5 183.5 - 64 28 13.1 - 36 50.9
compute-65-05 - 503.5 53.5 176.0 - 450.0 327.5 - 64 11 11.1 - 53 52.9
compute-65-06 - 503.5 91.1 288.0 - 412.4 215.5 - 64 26 11.0 - 38 53.0
compute-65-07 - 503.5 66.6 160.0 - 436.9 343.5 - 64 10 10.0 - 54 54.0
compute-65-09 - 503.5 21.8 16.0 - 481.7 487.5 - 64 1 1.1 - 63 62.9
compute-65-10 - 503.5 68.5 192.0 - 435.0 311.5 - 64 12 12.0 - 52 52.0
compute-65-11 - 503.5 64.8 160.0 - 438.7 343.5 - 64 10 10.0 - 54 54.0
compute-65-12 - 503.5 73.4 176.0 - 430.1 327.5 - 64 11 11.1 - 53 52.9
compute-65-13 - 503.5 72.5 304.0 - 431.0 199.5 - 64 27 12.1 - 37 51.9
compute-65-14 - 503.5 98.8 288.0 - 404.7 215.5 - 64 26 10.9 - 38 53.1
compute-65-15 - 503.5 16.4 400.0 - 487.1 103.5 - 64 49 1.0 - 15 63.0
compute-65-16 - 503.5 63.3 192.0 - 440.2 311.5 - 64 12 12.6 - 52 51.4
compute-65-17 - 503.5 57.3 192.0 - 446.2 311.5 - 64 12 12.1 - 52 51.9
compute-65-18 - 503.5 69.7 192.0 - 433.8 311.5 - 64 12 12.0 - 52 52.0
compute-65-19 - 503.5 79.8 176.0 - 423.7 327.5 - 64 11 11.0 - 53 53.0
compute-65-20 - 503.5 86.3 288.0 - 417.2 215.5 - 64 26 11.1 - 38 52.9
compute-65-21 - 503.5 65.7 160.0 - 437.8 343.5 - 64 10 10.0 - 54 54.0
compute-65-22 - 503.5 88.0 304.0 - 415.5 199.5 - 64 27 12.0 - 37 52.0
compute-65-23 - 503.5 73.3 304.0 - 430.2 199.5 - 64 27 12.0 - 37 52.0
compute-65-24 - 503.5 68.4 160.0 - 435.1 343.5 - 64 10 10.0 - 54 54.0
compute-65-25 - 503.5 13.8 0.0 - 489.7 503.5 - 64 0 3.6 - 64 60.4
compute-65-26 - 503.5 58.1 192.0 - 445.4 311.5 - 64 12 12.0 - 52 52.0
compute-65-27 - 503.5 70.3 160.0 - 433.2 343.5 - 64 10 10.0 - 54 54.0
compute-65-28 - 503.5 16.7 0.0 - 486.8 503.5 - 64 0 0.0 - 64 64.0
compute-65-29 - 503.5 72.0 160.0 - 431.5 343.5 - 64 10 10.1 - 54 53.9
compute-65-30 - 503.5 16.9 2.0 - 486.6 501.5 - 64 16 16.0 - 48 48.0
compute-75-01 - 1007.5 118.5 336.1 - 889.0 671.4 - 128 21 20.9 - 107 107.1
compute-75-02 - 1007.5 106.2 320.0 - 901.3 687.5 - 128 20 20.1 - 108 107.9
compute-75-03 - 755.5 119.5 352.0 - 636.0 403.5 - 128 22 22.0 - 106 106.0
compute-75-04 - 755.5 108.9 336.0 - 646.6 419.5 - 128 21 21.0 - 107 107.0
compute-75-05 - 755.5 82.0 224.0 - 673.5 531.5 - 128 14 14.0 - 114 114.0
compute-75-06 - 755.5 101.6 336.0 - 653.9 419.5 - 128 21 21.1 - 107 106.9
compute-75-07 - 755.5 117.3 256.0 - 638.2 499.5 - 128 64 57.5 - 64 70.5
compute-76-03 - 1007.4 112.8 336.5 - 894.6 670.9 - 128 21 21.1 - 107 107.0
compute-76-04 - 1007.4 94.6 460.0 - 912.8 547.4 - 128 45 22.9 - 83 105.1
compute-76-05 - 1007.4 111.8 320.0 - 895.6 687.4 - 128 20 20.0 - 108 108.0
compute-76-06 - 1007.4 106.6 336.0 - 900.8 671.4 - 128 21 21.1 - 107 106.9
compute-76-07 - 1007.4 115.2 936.0 - 892.2 71.4 - 128 41 22.2 - 87 105.8
compute-76-08 - 1007.4 104.0 336.0 - 903.4 671.4 - 128 21 21.1 - 107 106.9
compute-76-09 - 1007.4 118.6 320.0 - 888.8 687.4 - 128 20 20.3 - 108 107.7
compute-76-10 - 1007.4 116.9 352.0 - 890.5 655.4 - 128 22 22.1 - 106 105.9
compute-76-11 - 1007.4 77.7 272.0 - 929.7 735.4 - 128 65 37.2 - 63 90.8
compute-76-12 - 1007.4 93.3 936.0 - 914.1 71.4 - 128 41 22.0 - 87 106.0
compute-76-13 - 1007.4 90.0 336.0 - 917.4 671.4 - 128 21 21.3 - 107 106.7
compute-76-14 - 1007.4 122.3 340.0 - 885.1 667.4 - 128 25 21.2 - 103 106.8
compute-84-01 - 881.1 103.7 528.0 - 777.4 353.1 - 112 65 12.7 - 47 99.3
compute-93-01 - 503.8 22.9 48.0 - 480.9 455.8 - 64 3 3.2 - 61 60.8
compute-93-02 - 755.6 35.1 144.0 - 720.5 611.6 - 72 17 2.1 - 55 69.9
compute-93-03 - 755.6 19.8 152.0 - 735.8 603.6 - 72 22 3.6 - 50 68.4
compute-93-04 - 755.6 19.6 616.0 - 736.0 139.6 - 72 21 2.0 - 51 70.0
======= ===== ====== ==== ==== =====
Totals 36638.0 4219.0 14855.0 4680 1160 776.7
==> 11.5% 40.5% ==> 24.8% 16.6%
Most unreserved/unused memory (735.4/929.7GB) is on compute-76-11 with 63/90.8 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 18.0 256.0 - 1493.4 1255.4 - 192 10 0.6 - 182 191.4
compute-76-02 - 1511.4 532.7 -0.0 - 978.7 1511.4 - 192 0 5.8 - 192 186.2
compute-93-05 - 2016.3 16.7 0.0 - 1999.6 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 15.4 0.0 - 3008.5 3023.9 - 56 0 0.1 - 56 55.9
======= ===== ====== ==== ==== =====
Totals 8063.0 582.8 256.0 536 10 6.5
==> 7.2% 3.2% ==> 1.9% 1.2%
Most unreserved/unused memory (3023.9/3008.5GB) is on compute-93-06 with 56/55.9 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 03/11/26 and 03/18/26
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
jhora 2/64 0.00 7.3 60.0 89.7 0.1 0.7
martinezl2 3/18 0.00 9.0 40.0 0.2 0.1 226.4 > 2.5
edies 1/10 0.00 26.2 50.0 20.2 17.2 2.5
suttonm 3/3 0.00 144.7 40.0 8.7 7.3 4.6 > 2.5
szieba 25/1000 0.02 2.6 0.0 39.1 32.2 0.0
athalappila 8/64 0.04 28.1 96.0 30.9 6.7 3.1 > 2.5
nelsonjo 8/192 0.04 74.0 384.0 288.2 3.2 1.3
hinckleya 8/40 0.06 96.1 39.6 4.2 3.9 9.4 > 2.5
lingof 1/1 0.06 1198.0 32.0 30.8 20.1 1.0
hchong 3/3 0.08 97.6 16.0 16.0 5.4 1.0
nevesk 140/1664 0.08 78.2 550.0 11.9 6.2 46.1 > 2.5
xuj 2/40 0.11 107.7 400.0 12.7 12.7 31.4 > 2.5
afoster 2/2 0.18 99.6 24.0 9.1 5.8 2.6 > 2.5
santossam 24/480 0.19 39.7 120.0 37.8 31.6 3.2 > 2.5
palmerem 4/18 0.31 85.7 160.6 80.9 73.1 2.0
bourkeb 39/441 0.71 81.2 778.4 693.7 5.9 1.1
jourdain-fievetl 402/3618 0.77 150.8 900.0 277.9 1.1 3.2 > 2.5
johnsonsj 118/118 0.87 98.2 32.0 29.3 16.0 1.1
ariasc 6/140 1.18 44.7 581.5 498.4 17.3 1.2
cabreroa 1/8 1.26 87.6 0.0 90.0 37.1 0.0
ramosi 3/15 1.40 19.8 10.0 28.3 28.2 0.4
zehnpfennigj 2/10 1.58 20.0 18.0 0.6 0.5 30.4 > 2.5
byerlyp 16/80 2.03 35.0 38.1 1.0 0.7 37.5 > 2.5
pappalardop 1464/1464 2.06 99.3 298.5 0.2 0.2 1374.8 > 2.5
qzhu 20/92 2.09 72.8 157.6 29.8 11.3 5.3 > 2.5
santosbe 40/490 3.12 10.1 959.2 17.2 7.4 55.8 > 2.5
willishr 12/152 3.15 40.7 67.7 89.4 6.4 0.8
graujh 2/64 3.85 10.4 240.0 99.5 0.4 2.4
niez 497/7200 4.47 72.5 189.9 129.5 19.0 1.5
sandoval-velascom 76/76 5.65 100.6 12.0 6.2 5.9 1.9
uribeje 86/954 8.68 183.4 332.0 25.2 3.1 13.2 > 2.5
longk 153/612 9.71 72.1 120.0 37.9 4.5 3.2 > 2.5
beckerm 39/312 13.21 32.0 103.9 21.9 10.3 4.7 > 2.5
kistlerl 6644/6644 14.09 105.7 48.0 8.7 6.4 5.5 > 2.5
woodh 288/1152 36.88 88.6 100.0 39.7 16.4 2.5 > 2.5
granquistm 210/2100 44.46 85.6 160.0 136.5 42.4 1.2
morrisseyd 9123/9192 152.85 97.3 16.3 5.1 3.2 3.2 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 19475/38533 315.25 89.8 87.7 38.8 11.5 2.3
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
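The "> 2.5" markers in the table above flag users whose mean reserved/maxvmem ratio exceeds 2.5, i.e. memory reservations more than 2.5 times peak use. A minimal sketch with a few sample rows (the report evidently compares unrounded ratios, which is why woodh's rounded 2.5 is still flagged there):

```python
# Sample (user, mean resd/maxvmem ratio) rows from the ?ThM.q table.
rows = [("granquistm", 1.2), ("uribeje", 13.2), ("longk", 3.2), ("ariasc", 1.2)]

# Flag users reserving more than 2.5x their peak memory use.
flagged = [user for user, ratio in rows if ratio > 2.5]
print(flagged)  # ['uribeje', 'longk']
```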
Resource Limits
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Disk Usage & Quota
As of Thu Mar 19 11:06:02 EDT 2026
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 18.01T 4.35T 81%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 46.34T 286.16T 14%/2% /data/public
gpfs02:public 800.00T 504.12T 295.88T 64%/35% /scratch/public
gpfs02:nmnh_bradys 25.00T 18.79T 6.21T 76%/59% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 86.86T 33.14T 73%/14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 20.39T 4.61T 82%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 56.58T 3.42T 95%/24% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.55T 13.45T 47%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 27.90T 7.10T 80%/39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 55.87T 4.13T 94%/26% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 69.37T 20.63T 78%/5% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 34.63T 10.37T 77%/3% /scratch/nzp_ccg
gpfs01:ocio_dpo 10.00T 152.05G 9.85T 2%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89%/2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 259.32T 90.68T 75%/12% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 184.00T 126.05T 57.95T 69%/9% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 22.56T 12.44T 65%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 10.13T 4.87T 68%/18% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 145.00T 8.88T 136.12T 7%/3% /scratch/sylvain
gpfs02:usda_sel 25.00T 8.46T 16.54T 34%/31% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.98T 9.02T 82%/14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 102.39T 72.61T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.58T 25.42T 37%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 16.35T 23.65T 41%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 31.03T 38.97T 45%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 264.66T 119.18T 145.47T 46%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.71T 19.29T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 3.08T 46.92T 7%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 410.22T 339.78T 55%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.04T 11.96T 41%/1% /store/admin
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can become full because it is
unable to keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it gets full.
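The space-vs-inode comparison described in the note can be checked on any mounted filesystem with `os.statvfs` (a sketch; "/" is used here as an example path):

```python
import os

def fs_usage(path: str) -> tuple[float, float]:
    """Return (% disk space used, % inodes used) for the filesystem holding `path`."""
    st = os.statvfs(path)
    space_pct = 100.0 * (1 - st.f_bfree / st.f_blocks)
    inode_pct = 100.0 * (1 - st.f_ffree / st.f_files)
    return space_pct, inode_pct

space, inodes = fs_usage("/")
print(f"space {space:.0f}%, inodes {inodes:.0f}%")
```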
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 374.1GB 97.4% 0.09M 0.9% *** Rebeka Tamasi Bottger, SAO/OIR - rbottger
/home 371.1GB 96.6% 2.95M 29.5% *** Brian Bourke, WRBU - bourkeb
/home 363.6GB 94.7% 0.27M 2.7% Juan Uribe, NMNH - uribeje
/home 350.9GB 91.4% 0.28M 2.8% Paul Cristofari, SAO/SSP - pcristof
/home 345.3GB 89.9% 0.70M 7.0% Adam Foster, SAO/HEA - afoster
/home 329.1GB 85.7% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 7.6% *** Ting Wang, NMNH - wangt2
/scratch/public 14.70TB 98.0% 0.05M 0.1% *** Madankui Tao, SAO/AMP - taom
/scratch/public 14.50TB 96.7% 0.54M 1.4% *** Carlos Arias, STRI - ariasc
/scratch/public 13.50TB 90.0% 2.09M 5.3% Solomon Chak, SERC - chaks
/scratch/public 13.20TB 88.0% 31.22M 78.4% Alberto Coello Garrido, NMNH - coellogarridoa
/scratch/public 13.20TB 88.0% 4.20M 10.5% Kevin Mulder, NZP - mulderk
/scratch/public 12.90TB 86.0% 15.80M 39.7% Brian Bourke, WRBU - bourkeb
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.73TB 94.6% - - Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
64-17 1.75T 28.7G 1.72T 1.6% | 199.7G 1.55T 11.2% | 6.96
64-18 3.49T 43.0G 3.45T 1.2% | 199.7G 3.29T 5.6% | 4.64
65-02 3.49T 41.0G 3.45T 1.1% | 199.7G 3.29T 5.6% | 4.88
65-03 3.49T 38.9G 3.45T 1.1% | 199.7G 3.29T 5.6% | 5.13
65-04 3.49T 39.9G 3.45T 1.1% | 199.7G 3.29T 5.6% | 5.00
65-05 3.49T 36.9G 3.45T 1.0% | 199.7G 3.29T 5.6% | 5.42
65-06 3.49T 36.9G 3.45T 1.0% | 199.7G 3.29T 5.6% | 5.42
65-07 3.49T 42.0G 3.45T 1.2% | 199.7G 3.29T 5.6% | 4.76
65-10 1.75T 34.8G 1.71T 1.9% | 199.7G 1.55T 11.2% | 5.74
65-11 1.75T 28.7G 1.72T 1.6% | 199.7G 1.55T 11.2% | 6.96
65-12 1.75T 31.7G 1.71T 1.8% | 199.7G 1.55T 11.2% | 6.29
65-13 1.75T 26.6G 1.72T 1.5% | 199.7G 1.55T 11.2% | 7.50
65-14 1.75T 32.8G 1.71T 1.8% | 199.7G 1.55T 11.2% | 6.09
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 27.6G 1.72T 1.5% | 199.7G 1.55T 11.2% | 7.22
65-17 1.75T 24.6G 1.72T 1.4% | 199.7G 1.55T 11.2% | 8.12
65-18 1.75T 28.7G 1.72T 1.6% | 199.7G 1.55T 11.2% | 6.96
65-19 1.75T 32.8G 1.71T 1.8% | 199.7G 1.55T 11.2% | 6.09
65-20 1.75T 131.1G 1.62T 7.3% | 199.7G 1.55T 11.2% | 1.52
65-21 1.75T 29.7G 1.72T 1.7% | 199.7G 1.55T 11.2% | 6.72
65-22 1.75T 32.8G 1.71T 1.8% | 199.7G 1.55T 11.2% | 6.09
65-23 1.75T 23.6G 1.72T 1.3% | 199.7G 1.55T 11.2% | 8.48
65-24 1.75T 30.7G 1.71T 1.7% | 199.7G 1.55T 11.2% | 6.50
65-25 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-26 1.75T 24.6G 1.72T 1.4% | 199.7G 1.55T 11.2% | 8.12
65-27 1.75T 30.7G 1.71T 1.7% | 199.7G 1.55T 11.2% | 6.50
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 31.7G 1.71T 1.8% | 199.7G 1.55T 11.2% | 6.29
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-01 5.24T 72.7G 5.17T 1.4% | 400.4G 4.84T 7.5% | 5.51
75-02 6.98T 78.8G 6.91T 1.1% | 400.4G 6.59T 5.6% | 5.08
75-03 6.98T 81.9G 6.90T 1.1% | 400.4G 6.59T 5.6% | 4.89
75-04 6.98T 79.9G 6.90T 1.1% | 400.4G 6.59T 5.6% | 5.01
75-05 6.98T 132.1G 6.85T 1.8% | 199.7G 6.79T 2.8% | 1.51
75-06 6.98T 78.8G 6.91T 1.1% | 400.4G 6.59T 5.6% | 5.08
76-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-03 1.75T 43.0G 1.70T 2.4% | 400.4G 1.35T 22.4% | 9.31
76-04 1.75T 38.9G 1.71T 2.2% | 400.4G 1.35T 22.4% | 10.29
76-05 1.75T 45.1G 1.70T 2.5% | 400.4G 1.35T 22.4% | 8.89
76-06 1.75T 42.0G 1.70T 2.3% | 400.4G 1.35T 22.4% | 9.54
76-07 1.75T 46.1G 1.70T 2.6% | 400.4G 1.35T 22.4% | 8.69
76-08 1.75T 42.0G 1.70T 2.3% | 400.4G 1.35T 22.4% | 9.54
76-09 1.75T 47.1G 1.70T 2.6% | 400.4G 1.35T 22.4% | 8.50
76-10 1.75T 44.0G 1.70T 2.5% | 400.4G 1.35T 22.4% | 9.09
76-11 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-12 1.75T 35.8G 1.71T 2.0% | 400.4G 1.35T 22.4% | 11.17
76-13 1.75T 35.8G 1.71T 2.0% | 400.4G 1.35T 22.4% | 11.17
76-14 1.75T 49.2G 1.70T 2.8% | 400.4G 1.35T 22.4% | 8.15
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-06 1.64T 11.3G 1.62T 0.7% | 0.0G 1.64T 0.0% | 0.00
---------------------------------------------------------------
Total 141.8T 2.04T 139.8T 1.4% | 11.33T 130.5T 8.0% | 5.55
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.