Hydra-7@ADC Status
Usage
Current snapshot, sorted by node name, nCPU, usage, load, memory, MemRes, or MemUsed.
Usage vs. time, for a length of 7d, 15d, or 30d, and optionally with a given user highlighted.
As of Sun Jan 25 18:17:03 2026: #CPUs/nodes 5868/74, 0 down.
Loads:
head node: 0.02, login nodes: 0.00, 0.00, 0.07, 0.00; NSDs: 0.17, 2.68, 2.88; licenses: none used.
Queues status: none disabled, none need attention, none in error state.
8 users with running jobs (slots/jobs):
Current load: 598.4, #running (slots/jobs): 600/24, usage: 10.2%, efficiency: 99.7%
1 user with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 98/98/3,136
50 users have/had running or queued jobs over the past 7 days, 67 over the past 15 days, and 87 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load,
or memory, and view the past load for 7, 15, or 30 days, as well as highlight a given
user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Sunday, 25-Jan-2026 18:26:17 EST
with mk-webpage.pl ver. 7.3/1 (Oct 2025/SGK) in 4:23.
Warnings
Oversubscribed Jobs
As of Sun Jan 25 18:17:03 EST 2026 (0 oversubscribed jobs)
Inefficient Jobs
As of Sun Jan 25 18:17:04 EST 2026 (15 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 600/24, 98 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
11801118 angsd_strict uribeje +6:00 8 14.1% uThM.q 65-29
11804058 drep_hybrids_mp campanam +2:01 8 13.1% mThM.q 93-03
11804247 vg_SRR14617986 niez 16:19 32 3.3% mThC.q 76-03
11804248 vg_SRR14617997 niez 16:19 32 3.4% mThC.q 76-11
11804249 vg_SRR14618037 niez 16:19 32 3.3% mThC.q 76-05
(more by niez)
11804358 mae_tempo mperez 15:57 8 31.2% lTgpu.q 79-01
⇒ Equivalent to 390.6 underused CPUs: 408 CPUs used at 4.3% on average.
To see them all, use:
'q+ -ineff -u niez' (12)
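The "equivalent underused CPUs" figure can be reproduced from the listing: each job contributes nPEs × (1 − cpu%). A minimal sketch over the six rows shown above (the full total of 390.6 also counts the jobs elided from the display):

```python
# Underused-CPU estimate: each inefficient job wastes nPEs * (1 - cpu%).
# Rows below are the jobs displayed above; the full list has 15 jobs.
jobs = [
    ("angsd_strict",    8, 14.1),
    ("drep_hybrids_mp", 8, 13.1),
    ("vg_SRR14617986", 32,  3.3),
    ("vg_SRR14617997", 32,  3.4),
    ("vg_SRR14618037", 32,  3.3),
    ("mae_tempo",       8, 31.2),
]

underused = sum(npes * (1 - cpu / 100) for _, npes, cpu in jobs)
total_pes = sum(npes for _, npes, _ in jobs)
print(f"{underused:.1f} underused CPUs out of {total_pes}")  # → 112.1 underused CPUs out of 120
```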
Nodes with Excess Load
As of Sun Jan 25 18:17:05 EST 2026 (38 nodes have a high load, offset=1.5)
#slots excess
node #CPUs used load load
-----------------------------------
64-06 40 0 1.6 1.6 *
64-11 40 0 1.6 1.6 *
65-02 64 1 5.5 4.5 *
65-03 64 0 4.4 4.4 *
65-04 64 0 3.8 3.8 *
65-05 64 1 3.1 2.1 *
65-06 64 0 3.4 3.4 *
65-07 64 0 3.0 3.0 *
65-09 64 0 3.2 3.2 *
65-10 64 0 4.2 4.2 *
65-11 64 0 3.3 3.3 *
65-12 64 0 3.8 3.8 *
65-13 64 0 3.3 3.3 *
65-14 64 0 4.0 4.0 *
65-15 64 0 3.2 3.2 *
65-16 64 0 3.6 3.6 *
65-17 64 0 4.0 4.0 *
65-19 64 0 3.9 3.9 *
65-20 64 0 3.6 3.6 *
65-21 64 0 3.8 3.8 *
65-22 64 0 3.5 3.5 *
65-23 64 0 3.6 3.6 *
65-24 64 0 4.2 4.2 *
65-25 64 0 4.0 4.0 *
65-26 64 0 4.1 4.1 *
65-27 64 0 3.7 3.7 *
65-28 64 0 4.0 4.0 *
65-30 64 0 4.2 4.2 *
75-02 128 0 7.4 7.4 *
75-03 128 0 9.6 9.6 *
75-04 128 0 6.5 6.5 *
75-05 128 0 8.9 8.9 *
75-06 128 0 10.0 10.0 *
75-07 128 0 8.5 8.5 *
76-01 192 0 2.6 2.6 *
76-02 192 0 3.2 3.2 *
76-03 192 32 49.9 17.9 *
84-01 112 0 10.0 10.0 *
Total excess load = 184.3
High Memory Jobs
Statistics
User            nSlots        memory          memory          vmem    maxvmem   ratio
Name            used   [%]    resd [TB] [%]   used [TB] [%]   [TB]    [TB]      resd/maxvm
--------------------------------------------------------------------------------------------------
pcristof 24 60.0% 0.3516 57.7% 0.0204 24.7% 0.1241 0.1241 2.8
uribeje 8 20.0% 0.1953 32.1% 0.0528 64.0% 0.0802 0.0806 2.4
campanam 8 20.0% 0.0625 10.3% 0.0092 11.2% 0.0183 0.0183 3.4
==================================================================================================
Total 40 0.6094 0.0824 0.2226 0.2231 2.7
Warnings
2 high memory jobs produced a warning:
1 for campanam
1 for uribeje
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d
Current Usage by Queue
Total Limit Fill factor Efficiency
sThC.q =0
mThC.q =495
lThC.q =25
uThC.q =0
520 5056 10.3% 108.1%
sThM.q =24
mThM.q =8
lThM.q =0
uThM.q =8
40 4680 0.9% 1394.8%
sTgpu.q =0
mTgpu.q =0
lTgpu.q =40
qgpu.iq =0
40 104 38.5% 40.6%
uTxlM.rq =0
0 536 0.0%
lThMuVM.tq =0
0 384 0.0%
lTb2g.q =0
0 2 0.0%
lTIO.sq =0
0 8 0.0%
lTWFM.sq =0
0 4 0.0%
qrsh.iq =0
0 68 0.0%
Total: 600
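The Fill factor and Efficiency columns can be recomputed from the table above. Fill factor is slots in use over the queue-group limit; efficiency appears to be load relative to occupied slots, matching the summary line ("Current load: 598.4, #running: 600/24, efficiency: 99.7%") — that reading is an assumption, not documented behavior. A minimal sketch:

```python
# Fill factor = slots in use / queue-group slot limit.
def fill_factor(used, limit):
    return 100.0 * used / limit

# Efficiency assumed to be load / slots in use (see the summary-line check below).
def efficiency(load, used):
    return 100.0 * load / used if used else None

print(f"{fill_factor(520, 5056):.1f}%")   # hiCPU queue group: 520 of 5056 slots
print(f"{efficiency(598.4, 600):.1f}%")   # cluster-wide summary line
```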
Avail Slots/Wait Job(s)
Available Slots
As of Sun Jan 25 18:17:04 EST 2026
4566 avail(slots), free(load)=5114.2, unresd(mem)=29642.9G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 5120 total(mem) 39.8T
unused(slots) 4566 unused(load) 5114.2 ie: 89.2% 99.9%
unreserved(mem) 28.9T unused(mem) 37.8T ie: 72.7% 95.0%
unreserved(mem) 6.5G unused(mem) 8.5G per unused(slots)
4150 avail(slots), free(load)=4674.5, unresd(mem)=25693.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4680 total(mem) 35.8T
unused(slots) 4150 unused(load) 4674.5 ie: 88.7% 99.9%
unreserved(mem) 25.1T unused(mem) 34.0T ie: 70.1% 94.9%
unreserved(mem) 6.2G unused(mem) 8.4G per unused(slots)
530 avail(slots), free(load)=535.9, unresd(mem)=7972.9G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 530 unused(load) 535.9 ie: 98.9% 100.0%
unreserved(mem) 7.8T unused(mem) 7.8T ie: 98.9% 98.8%
unreserved(mem) 15.0G unused(mem) 15.0G per unused(slots)
64 avail(slots), free(load)=103.6, unresd(mem)=750.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 64 unused(load) 103.6 ie: 61.5% 99.7%
unreserved(mem) 0.7T unused(mem) 0.5T ie: 99.4% 74.0%
unreserved(mem) 11.7G unused(mem) 8.7G per unused(slots)
GPU Usage
Sun Jan 25 18:17:09 EST 2026
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 96.4 406.9 - 4/1 - 64 32 13.4 - 32 50.5
compute-79-01 - 125.5 88.7 36.8 - 2/1 - 20 8 2.7 - 12 17.3
compute-79-02 - 125.5 11.2 114.3 - 2/0 - 20 0 0.1 - 20 19.9
Total GPU=8, used=2 (25.0%)
Waiting Job(s)
As of Sun Jan 25 18:17:05 EST 2026
98 jobs waiting for niez (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
11804259 vg_SRR25030349 niez 16:19 32 800.0 mThC.q
11804260 vg_SRR25030354 niez 16:19 32 800.0 mThC.q
11804261 vg_SRR25030357 niez 16:19 32 800.0 mThC.q
11804262 vg_SRR25030358 niez 16:19 32 800.0 mThC.q
11804263 vg_SRR25030360 niez 16:19 32 800.0 mThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/1 mem_res=10.16T/9.985T 101.7% for niez in queue uThC.q
max_slots_per_user/1 slots=494/840 58.8% for niez
max_hC_slots_per_user/2 slots=494/840 58.8% for niez in queue mThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/1 mem_res=10.35T/39.94T 25.9% for * in queue uThC.q
total_gpus/1 GPUS=2/8 25.0% for * in queue lTgpu.q
total_slots/1 slots=600/5960 10.1% for *
total_mem_res/2 mem_res=624.0G/35.78T 1.7% for * in queue uThM.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d
Current Memory Quota Usage
As of Sun Jan 25 18:17:05 EST 2026
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=10.35T/39.94T 25.9% for * in queue uThC.q
total_mem_res/2 mem_res=624.0G/35.78T 1.7% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.5 15.1 90.2 - 488.4 413.3 - 32 6 4.5 - 26 27.5
compute-64-18 - 503.5 15.2 90.2 - 488.3 413.3 - 32 6 4.7 - 26 27.3
compute-65-02 - 503.5 15.3 2.0 - 488.2 501.5 - 64 1 5.5 - 63 58.5
compute-65-03 - 503.5 15.7 0.0 - 487.8 503.5 - 64 0 4.5 - 64 59.5
compute-65-04 - 503.5 15.2 0.0 - 488.3 503.5 - 64 0 3.8 - 64 60.2
compute-65-05 - 503.5 13.8 8.0 - 489.7 495.5 - 64 1 3.1 - 63 60.9
compute-65-06 - 503.5 16.0 0.0 - 487.5 503.5 - 64 0 3.4 - 64 60.6
compute-65-07 - 503.5 14.8 0.0 - 488.7 503.5 - 64 0 3.0 - 64 61.0
compute-65-09 - 503.5 15.0 0.0 - 488.5 503.5 - 64 0 3.2 - 64 60.8
compute-65-10 - 503.5 15.5 0.0 - 488.0 503.5 - 64 0 4.1 - 64 59.9
compute-65-11 - 503.5 15.3 0.0 - 488.2 503.5 - 64 0 3.2 - 64 60.8
compute-65-12 - 503.5 13.9 0.0 - 489.6 503.5 - 64 0 3.8 - 64 60.2
compute-65-13 - 503.5 15.0 0.0 - 488.5 503.5 - 64 0 3.2 - 64 60.8
compute-65-14 - 503.5 13.0 0.0 - 490.5 503.5 - 64 0 4.0 - 64 60.0
compute-65-15 - 503.5 15.1 0.0 - 488.4 503.5 - 64 0 3.2 - 64 60.8
compute-65-16 - 503.5 14.6 0.0 - 488.9 503.5 - 64 0 3.6 - 64 60.4
compute-65-17 - 503.5 16.3 0.0 - 487.2 503.5 - 64 0 4.0 - 64 60.0
compute-65-18 - 503.5 13.1 0.0 - 490.4 503.5 - 64 0 0.7 - 64 63.3
compute-65-19 - 503.5 14.5 0.0 - 489.0 503.5 - 64 0 3.9 - 64 60.1
compute-65-20 - 503.5 15.6 0.0 - 487.9 503.5 - 64 0 3.6 - 64 60.4
compute-65-21 - 503.5 14.5 0.0 - 489.0 503.5 - 64 0 3.8 - 64 60.2
compute-65-22 - 503.5 15.2 0.0 - 488.3 503.5 - 64 0 3.5 - 64 60.5
compute-65-23 - 503.5 14.6 0.0 - 488.9 503.5 - 64 0 3.6 - 64 60.4
compute-65-24 - 503.5 15.1 0.0 - 488.4 503.5 - 64 0 4.2 - 64 59.8
compute-65-25 - 503.5 15.5 0.0 - 488.0 503.5 - 64 0 4.0 - 64 60.0
compute-65-26 - 503.5 13.3 0.0 - 490.2 503.5 - 64 0 4.1 - 64 59.9
compute-65-27 - 503.5 15.2 0.0 - 488.3 503.5 - 64 0 3.7 - 64 60.3
compute-65-28 - 503.5 15.6 0.0 - 487.9 503.5 - 64 0 4.0 - 64 60.0
compute-65-29 - 503.5 96.8 200.0 - 406.7 303.5 - 64 8 4.6 - 56 59.4
compute-65-30 - 503.5 14.2 0.0 - 489.3 503.5 - 64 0 4.2 - 64 59.8
compute-75-01 - 1007.5 228.8 800.1 - 778.7 207.4 - 128 110 7.6 - 18 120.4
compute-75-02 - 1007.5 17.8 0.0 - 989.7 1007.5 - 128 0 7.1 - 128 120.9
compute-75-03 - 755.5 17.8 0.0 - 737.7 755.5 - 128 0 9.6 - 128 118.4
compute-75-04 - 755.0 18.9 -0.5 - 736.1 755.5 - 128 0 6.0 - 128 122.0
compute-75-05 - 755.5 18.3 0.0 - 737.2 755.5 - 128 0 8.4 - 128 119.6
compute-75-06 - 755.5 18.6 0.0 - 736.9 755.5 - 128 0 10.0 - 128 118.0
compute-75-07 - 755.5 17.6 0.0 - 737.9 755.5 - 128 0 8.5 - 128 119.5
compute-76-03 - 1007.4 70.8 800.5 - 936.6 206.9 - 128 32 33.3 - 96 94.7
compute-76-04 - 1007.4 68.9 800.0 - 938.5 207.4 - 128 32 3.4 - 96 124.6
compute-76-05 - 1007.4 70.0 800.0 - 937.4 207.4 - 128 32 33.2 - 96 94.8
compute-76-06 - 1007.4 70.1 800.0 - 937.3 207.4 - 128 32 33.2 - 96 94.8
compute-76-07 - 1007.4 65.8 800.0 - 941.6 207.4 - 128 32 33.2 - 96 94.8
compute-76-08 - 1007.4 72.9 800.0 - 934.5 207.4 - 128 32 33.2 - 96 94.8
compute-76-09 - 1007.4 70.3 800.0 - 937.1 207.4 - 128 32 33.1 - 96 94.9
compute-76-10 - 1007.4 71.8 800.0 - 935.6 207.4 - 128 32 33.2 - 96 94.8
compute-76-11 - 1007.4 70.8 800.0 - 936.6 207.4 - 128 32 33.3 - 96 94.7
compute-76-12 - 1007.4 70.4 800.0 - 937.0 207.4 - 128 32 33.2 - 96 94.8
compute-76-13 - 1007.4 69.9 800.0 - 937.5 207.4 - 128 32 33.1 - 96 94.9
compute-76-14 - 1007.4 69.6 800.0 - 937.8 207.4 - 128 32 33.2 - 96 94.8
compute-84-01 - 881.1 96.7 0.0 - 784.4 881.1 - 112 0 10.0 - 112 102.0
compute-93-01 - 503.8 15.0 0.0 - 488.8 503.8 - 64 0 0.2 - 64 63.8
compute-93-02 - 755.6 16.3 0.0 - 739.3 755.6 - 72 0 0.1 - 72 71.9
compute-93-03 - 755.6 19.4 154.0 - 736.2 601.6 - 72 14 5.9 - 58 66.1
compute-93-04 - 755.6 15.7 0.0 - 739.9 755.6 - 72 0 0.0 - 72 72.0
======= ===== ====== ==== ==== =====
Totals 36637.5 1870.2 10944.5 4680 530 554.7
==> 5.1% 29.9% ==> 11.3% 11.9%
Most unreserved/unused memory (1007.5/989.7GB) is on compute-75-02 with 128/120.9 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 16.9 -0.0 - 1494.5 1511.4 - 192 0 2.6 - 192 189.4
compute-76-02 - 1511.4 46.6 -0.0 - 1464.8 1511.4 - 192 0 3.2 - 192 188.8
compute-93-05 - 2016.3 19.9 90.2 - 1996.4 1926.1 - 96 6 4.4 - 90 91.6
compute-93-06 - 3023.9 14.7 0.0 - 3009.2 3023.9 - 56 0 0.2 - 56 55.8
======= ===== ====== ==== ==== =====
Totals 8063.0 98.1 90.1 536 6 10.5
==> 1.2% 1.1% ==> 1.1% 2.0%
Most unreserved/unused memory (3023.9/3009.2GB) is on compute-93-06 with 56/55.8 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 01/14/26 and 01/21/26
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
griebenowz 1/16 0.00 2.1 512.0 0.0 0.0 0.0
longk 10/40 0.01 1011.1 79.6 36.6 1.1 2.2
uribeje 27/209 0.03 17.9 420.1 112.8 110.7 3.7 > 2.5
coellogarridoa 17/97 0.06 361.7 740.7 250.8 2.9 3.0 > 2.5
vagac 2/48 0.08 47.9 384.0 186.3 4.4 2.1
ariasc 4/80 0.10 81.3 591.1 90.6 74.9 6.5 > 2.5
nelsonjo 8/192 0.15 48.6 384.0 231.2 3.3 1.7
szieba 61/2440 0.16 82.4 0.0 169.6 2.4 0.0
bennettkf 7/28 0.17 6.9 128.0 60.6 6.0 2.1
athalappila 230/2064 0.25 71.8 782.4 161.6 0.7 4.8 > 2.5
hinckleya 13/54 0.30 84.9 28.7 4.7 4.3 6.1 > 2.5
lealc 11/44 0.37 99.6 72.0 0.7 0.7 99.8 > 2.5
granquistm 40/240 0.44 17.8 60.0 35.9 0.3 1.7
akelbert 4/8 0.55 49.8 160.0 60.8 47.8 2.6 > 2.5
gouldingt 11/100 0.78 69.8 293.1 211.1 22.5 1.4
mghahrem 11/67 0.82 53.0 336.3 132.1 34.4 2.5 > 2.5
peresph 4/42 0.88 37.7 75.8 73.7 28.0 1.0
capadorhd 3/12 0.95 3142.6 64.0 11.9 11.8 5.4 > 2.5
vohsens 3024/3024 2.06 32.4 16.0 0.3 0.1 53.8 > 2.5
martinezl2 13/109 3.56 99.2 94.7 5.2 2.7 18.3 > 2.5
jhora 153/4896 4.49 12.0 60.0 97.2 5.3 0.6
santossam 1/20 4.85 8.2 120.0 13.3 13.2 9.0 > 2.5
mcfaddenc 3026/3072 5.65 92.9 77.1 9.7 5.0 7.9 > 2.5
johnsonsj 320/640 7.37 133.7 40.0 23.6 23.5 1.7
beckerm 259/2642 14.49 45.9 91.5 25.1 17.5 3.6 > 2.5
horowitzj 6151/6151 17.96 94.7 16.0 3.0 1.8 5.3 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 13411/26335 66.54 115.7 71.7 25.9 10.7 2.8 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limit to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for WFM queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for all queues
users {*} to slots=840
Disk Usage & Quota
As of Sun Jan 25 17:06:02 EST 2026
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 17.14T 5.23T 77%/12% /home
netapp-fas83-n02:/vol_data_public 332.50T 44.48T 288.02T 14%/2% /data/public
gpfs02:public 800.00T 442.90T 357.10T 56%/30% /scratch/public
gpfs02:nmnh_bradys 25.00T 18.61T 6.39T 75%/58% /scratch/bradys
gpfs02:nmnh_kistlerl 120.00T 98.97T 21.03T 83% /14% /scratch/kistlerl
gpfs02:nmnh_meyerc 25.00T 18.85T 6.15T 76%/7% /scratch/meyerc
gpfs02:nmnh_corals 60.00T 51.09T 8.91T 86% /23% /scratch/nmnh_corals
gpfs02:nmnh_ggi 130.00T 36.46T 93.54T 29%/15% /scratch/nmnh_ggi
gpfs02:nmnh_lab 25.00T 11.46T 13.54T 46%/11% /scratch/nmnh_lab
gpfs02:nmnh_mammals 35.00T 27.29T 7.71T 78%/39% /scratch/nmnh_mammals
gpfs02:nmnh_mdbc 60.00T 54.01T 5.99T 91% /25% /scratch/nmnh_mdbc
gpfs02:nmnh_ocean_dna 90.00T 54.99T 35.01T 62%/2% /scratch/nmnh_ocean_dna
gpfs02:nzp_ccg 45.00T 30.92T 14.08T 69%/3% /scratch/nzp_ccg
login04:/scratch/ocio_dpo 1.71T 36.10G 1.68T 3%/1840710508 /dev/sdb3
login04:/scratch/ocio_ids 1.71T 36.10G 1.68T 3%/1840710508 /dev/sdb3
gpfs02:pool_kozakk 12.00T 10.67T 1.33T 89% /2% /scratch/pool_kozakk
gpfs02:pool_sao_access 50.00T 4.79T 45.21T 10%/9% /scratch/pool_sao_access
gpfs02:pool_sao_rtdc 20.00T 908.33G 19.11T 5%/1% /scratch/pool_sao_rtdc
gpfs02:sao_atmos 350.00T 226.02T 123.98T 65%/11% /scratch/sao_atmos
gpfs02:sao_cga 25.00T 9.44T 15.56T 38%/28% /scratch/sao_cga
gpfs02:sao_tess 50.00T 23.25T 26.75T 47%/83% /scratch/sao_tess
gpfs02:scbi_gis 95.00T 60.94T 34.06T 65%/14% /scratch/scbi_gis
gpfs02:nmnh_schultzt 35.00T 20.73T 14.27T 60%/75% /scratch/schultzt
gpfs02:serc_cdelab 15.00T 12.19T 2.81T 82% /19% /scratch/serc_cdelab
gpfs02:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
login04:/scratch/sylvain 1.71T 36.10G 1.68T 3%/1840710508 /dev/sdb3
gpfs02:usda_sel 25.00T 8.44T 16.56T 34%/30% /scratch/usda_sel
gpfs02:wrbu 50.00T 40.70T 9.30T 82% /14% /scratch/wrbu
nas1:/mnt/pool/public 175.00T 101.52T 73.48T 59%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 14.58T 25.42T 37%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 16.20T 23.80T 41%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 70.00T 28.41T 41.59T 41%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 264.13T 115.87T 148.26T 44%/1% /store/nzp_ccg
nas2:/mnt/pool/nzp_cec 40.00T 20.71T 19.29T 52%/1% /store/nzp_cec
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 3.07T 46.93T 7%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 394.18T 355.82T 53%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 80.00T 24.96T 55.04T 32%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 9.42T 40.58T 19%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
nas1:/mnt/pool/admin 20.00T 8.02T 11.98T 41%/1% /store/admin
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can become full
because it runs out of inodes to keep track of new files.
The % of inodes used should be lower than, or comparable to, the % of disk space used.
If it is much larger, the disk can become unusable before it gets full.
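The space-versus-inode comparison described in the notes can be checked for any mounted filesystem with Python's os.statvfs; a minimal sketch (the mount point `/` is just an example):

```python
import os

def fs_capacity(path):
    """Return (% disk space used, % inodes used) for the filesystem at path,
    the two numbers shown in the Capacity column above."""
    st = os.statvfs(path)
    space_used = 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks
    inodes_used = (100.0 * (st.f_files - st.f_ffree) / st.f_files
                   if st.f_files else 0.0)  # some filesystems report no inode counts
    return space_used, inodes_used

space, inodes = fs_capacity("/")
print(f"{space:.0f}%/{inodes:.0f}%")  # same format as the Capacity column
if inodes > space:
    print("warning: inode use outpacing space use")
```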
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.13TB 91.8% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.98TB 88.4% 0.00M 0.0% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 364.4GB 94.9% 0.28M 2.8% Juan Uribe, NMNH - uribeje
/home 359.8GB 93.7% 2.84M 28.4% Brian Bourke, WRBU - bourkeb
/home 359.4GB 93.6% 2.10M 21.0% Michael Trizna, NMNH/BOL - triznam
/home 346.7GB 90.3% 0.12M 1.2% Manuel Perez, SAO/AMP/OIR - mperez
/home 334.6GB 87.1% 0.27M 2.7% Paul Cristofari, SAO/SSP - pcristof
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/39.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 17.20TB 114.7% 3.02M 0.0% *** Ting Wang, NMNH - wangt2
/scratch/public 15.60TB 104.0% 1.58M 0.0% *** Juan Uribe, NMNH - uribeje
/scratch/public 14.50TB 96.7% 28.06M 0.0% *** Zelong Nie, NMNH - niez
/scratch/public 14.20TB 94.7% 4.24M 0.0% Kevin Mulder, NZP - mulderk
/scratch/public 14.00TB 93.3% 35.29M 0.0% Alberto Coello Garrido, NMNH - coellogarridoa
/scratch/public 14.00TB 93.3% 0.08M 0.2% Qindan Zhu, SAO/AMP - qzhu
/scratch/public 13.50TB 90.0% 2.09M 0.0% Solomon Chak, SERC - chaks
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 292.0% 0.05M 0.0% *** Carlos Arias, STRI - ariasc
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.71TB 94.2% - - Zelong Nie, NMNH - niez (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 20.5G 1.73T 1.1% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-21 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-26 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 402.4G 1.35T 22.5% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-05 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
76-06 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
76-13 1.75T 101.4G 1.65T 5.7% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
---------------------------------------------------------------
Total 103.6T 1.19T 102.4T 1.2% | 8.73T 94.83T 8.4% | 7.33
Note: the disk usage and the quota report are compiled 4x/day; the SSD usage is updated every 10 minutes.