Hydra-7 Status
Usage
Interactive controls (drop-down menus): current snapshot sorted by Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed; usage vs. time for 7d, 15d, or 30d; and an optional user to highlight.
As of Wed Aug 20 20:07:09 2025: #CPUs/nodes 5100/74, 0 down.
Loads:
head node: 0.72, login nodes: 3.15, 0.08, 0.03, 1.13; NSDs: 697.95, 33.72; licenses: none used.
Queues status: 83 disabled, none need attention, none in error state.
25 users with running jobs (slots/jobs):
Current load: 932.0, #running (slots/jobs): 1,810/298, usage: 35.5%, efficiency: 51.5%
3 users with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 30/20,216/21,087
78 users have or had running or queued jobs over the past 7 days, 95 over the past 15 days, and 110 over the past 30 days.
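The usage and efficiency figures above appear to follow directly from the snapshot counts; a minimal sketch of that arithmetic (variable names are mine, values taken from the summary line, not an official formula):

    # sketch: reproducing the summary percentages
    total_cpus    = 5100     # from "#CPUs/nodes 5100/74"
    running_slots = 1810     # running slots
    current_load  = 932.0    # current load

    usage      = 100.0 * running_slots / total_cpus    # -> 35.5%
    efficiency = 100.0 * current_load / running_slots  # -> 51.5%
    print(f"usage={usage:.1f}% efficiency={efficiency:.1f}%")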
Click on the tabs to view each section, on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, number of CPUs, usage, load, or memory, and
view the past load for 7, 15, or 30 days, as well as highlight a given user, by
selecting the corresponding options in the drop-down menus.
This page was last updated on Wednesday, 20-Aug-2025 20:14:37 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 3:25.
Warnings
Oversubscribed Jobs
As of Wed Aug 20 20:07:10 EDT 2025 (1 oversubscribed job)
Total running (PEs/jobs) = 1810/298, 30 queued (jobs), showing only oversubscribed jobs (cpu% > 133% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
9988035 train_vibilia bombickj +2:07 2 418.1% lTgpu.q 50-01
⇒ Equivalent to 6.4 overused CPUs: 2 CPUs used at 418.1% on average.
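The "Equivalent to N overused CPUs" figure can be reproduced as the CPU time consumed beyond the slots the job reserved; a minimal sketch (variable names are mine, values from job 9988035 above):

    # sketch: CPUs consumed beyond the job's reservation
    n_pes, cpu_pct = 2, 418.1                      # job 9988035: 2 slots at 418.1%
    overused = n_pes * (cpu_pct - 100.0) / 100.0   # 2 * 3.181
    print(round(overused, 1))                      # -> 6.4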
Inefficient Jobs
As of Wed Aug 20 20:07:10 EDT 2025 (100 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1810/298, 30 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
9505605 beast_parti_mor jassoj +12:22 40 17.0% lThC.q 64-09
9782382 RNAseq_2025_tri chaks +7:05 40 0.0% lThC.q 76-07
9371868 exabayes-et2-50 gouldingt +15:04 32 12.5% lThC.q 64-06
9996111 bowtie_AF9-12_0 fowlera 08:04 32 2.2% lThC.q 64-13
9996119 bowtie_AF16-29_ fowlera 07:45 32 2.3% lThC.q 64-03
9996131 bowtie_AF21-23_ fowlera 07:10 32 2.0% lThC.q 64-08
(more by fowlera)
9992849 xcon uribeje 17:04 20 5.1% mThM.q 84-01
9987198 phyluce_assembl mcfaddenc +3:06 16 4.0% mThC.q 76-06
9992856 Step3_trinity bourkeb 15:00 16 12.5% mThM.q 75-01
9992857 Step3_viral_Spa bourkeb 15:00 16 12.5% mThM.q 76-06
9997241 Step3_megahit bourkeb 02:42 16 27.3% mThM.q 76-12
(more by bourkeb)
9996137 CO35213c_mitofi mcfaddenc 07:07 12 18.5% mThC.q 65-24
9996138 T109_mitofinder mcfaddenc 07:07 12 19.0% mThC.q 65-30
(more by mcfaddenc)
9989848 bootstrap carrionj +1:09 8 12.4% mThC.q 76-10 1
9989848 bootstrap carrionj +1:09 8 12.4% mThC.q 75-07 2
9989848 bootstrap carrionj +1:09 8 12.5% mThC.q 93-03 3
(more by carrionj)
9996112 bffpens_metawra bornbuschs 02:37 8 10.9% mThM.q 64-17 52
9996112 bffpens_metawra bornbuschs 02:32 8 12.5% mThM.q 93-02 53
9996112 bffpens_metawra bornbuschs 02:25 8 13.7% mThM.q 65-12 55
(more by bornbuschs)
9992967 CVPhme uribeje 12:21 5 19.8% lThM.q 76-04
9996200 job_array_AR johnsonsj 06:44 5 14.4% mThC.q 64-12 1
9996200 job_array_AR johnsonsj 06:44 5 15.2% mThC.q 64-08 2
9996200 job_array_AR johnsonsj 06:44 5 16.2% mThC.q 64-11 3
9376933 spades_sc9 hawkinsmt +14:05 1 15.6% lThM.q 75-01
9989712 cp_scr2pool coellogarridoa +1:10 1 3.3% lThC.q 75-03
9996105 move_1 granquistm 08:17 1 1.6% lThM.q 75-04
9996106 move_3 granquistm 08:16 1 0.6% lThM.q 76-10
9996250 rsync-scratch-s kweskinm 05:43 1 30.6% lTIO.sq 64-15
⇒ Equivalent to 1088.6 underused CPUs: 1253 CPUs used at 13.1% on average.
To see them all use:
'q+ -ineff -u bornbuschs' (8)
'q+ -ineff -u bourkeb' (5)
'q+ -ineff -u carrionj' (20)
'q+ -ineff -u fowlera' (7)
'q+ -ineff -u mcfaddenc' (47)
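The "underused CPUs" figure is the mirror image: slots held but left idle. A minimal sketch, assuming the report sums this per job (the rounded 13.1% average reproduces it only approximately):

    # sketch: slots held by inefficient jobs but left idle
    total_pes, avg_cpu_pct = 1253, 13.1                   # summary line above (average is rounded)
    underused = total_pes * (1.0 - avg_cpu_pct / 100.0)
    print(round(underused, 1))                            # ~1088.9; per-job summation gives 1088.6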
Nodes with Excess Load
As of Wed Aug 20 20:07:11 EDT 2025 (8 nodes have a high load, offset=1.5)
node      #CPUs   #slots used    load    excess load
-----------------------------------------------------
65-05 64 0 32.5 32.5 *
65-14 64 0 34.8 34.8 *
65-19 64 0 31.7 31.7 *
65-20 64 0 32.6 32.6 *
65-23 64 0 32.5 32.5 *
65-25 64 0 32.4 32.4 *
65-26 64 0 32.6 32.6 *
75-01 128 35 40.5 5.5 *
Total excess load = 234.7
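The excess load column appears to be the node's load minus the slots it has allocated, and a node is listed once that excess passes the offset (1.5 here); a minimal sketch using two of the rows above:

    # sketch: flag nodes whose load exceeds their allocated slots by more than `offset`
    offset = 1.5
    nodes = {"65-05": (0, 32.5), "75-01": (35, 40.5)}   # node: (slots used, load)
    for name, (used, load) in nodes.items():
        excess = load - used
        if excess > offset:
            print(f"{name} excess={excess:.1f}")        # -> 65-05 excess=32.5, 75-01 excess=5.5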
High Memory Jobs
Statistics
User           nSlots        memory         memory        vmem      maxvmem     ratio
name           used (%)      reserved (%)   used (%)      used      used        resd/maxvmem    (all memory values in TB)
--------------------------------------------------------------------------------------------------
morrisseyd 220 43.7% 9.0234 48.0% 0.9803 62.3% 1.2008 1.9470 4.6
bourkeb 104 20.6% 4.0625 21.6% 0.0665 4.2% 0.0524 0.4491 9.0
bornbuschs 64 12.7% 2.5000 13.3% 0.0136 0.9% 0.0366 0.0560 44.6
mcfaddenc 48 9.5% 1.6406 8.7% 0.2995 19.0% 0.1866 0.4712 3.5
uribeje 25 5.0% 0.3418 1.8% 0.0193 1.2% 0.0370 0.0621 5.5
hinckleya 5 1.0% 0.2734 1.5% 0.1016 6.5% 0.0522 0.1805 1.5
cerqueirat 24 4.8% 0.2344 1.2% 0.0262 1.7% 0.0267 0.0267 8.8
granquistm 2 0.4% 0.1953 1.0% 0.0000 0.0% 0.0001 0.0001 2650.1
longk 4 0.8% 0.1953 1.0% 0.0156 1.0% 0.0969 0.1289 1.5
yancos 2 0.4% 0.1953 1.0% 0.0242 1.5% 0.0338 0.0339 5.8
ggonzale 5 1.0% 0.0879 0.5% 0.0126 0.8% 0.0120 0.0195 4.5
hawkinsmt 1 0.2% 0.0586 0.3% 0.0138 0.9% 0.0008 0.0525 1.1
==================================================================================================
Total 504 18.8086 1.5732 1.7358 3.4274 5.5
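The last column is the ratio of memory reserved to the maximum virtual memory actually reached, i.e. how much head-room was requested; a quick check against the totals row above:

    # sketch: reserved-to-maxvmem ratio (large values indicate heavy over-reservation)
    reserved_tb, maxvmem_tb = 18.8086, 3.4274    # totals row
    print(round(reserved_tb / maxvmem_tb, 1))    # -> 5.5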
Warnings
35 high memory jobs produced a warning:
5 for bourkeb
1 for cerqueirat
1 for hawkinsmt
3 for hinckleya
1 for longk
4 for mcfaddenc
18 for morrisseyd
2 for uribeje
Details for each job can be found here.
Breakdown by Queue
Select length: 7d / 15d / 30d
Current Usage by Queue
Queue group (slots in use per queue)              Total   Limit   Fill factor   Efficiency
sThC.q=0, mThC.q=932, lThC.q=337, uThC.q=34        1303    5056      25.8%         69.9%
sThM.q=5, mThM.q=490, lThM.q=9, uThM.q=0            504    4680      10.8%        169.8%
sTgpu.q=0, mTgpu.q=0, lTgpu.q=2, qgpu.iq=0            2     104       1.9%        150.5%
uTxlM.rq=0                                            0     536       0.0%
lThMuVM.tq=0                                          0     384       0.0%
lTb2g.q=0                                             0       2       0.0%
lTIO.sq=1                                             1       8      12.5%         11.0%
lTWFM.sq=0                                            0       4       0.0%
qrsh.iq=0                                             0      68       0.0%
Total: 1810
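The fill factor is simply the slots in use divided by the group's slot limit (efficiency is reported separately by the tool and is not reproduced here); for the ?ThC.q group:

    # sketch: fill factor = slots in use / slot limit for a queue group
    used, limit = 1303, 5056                 # ?ThC.q group above
    print(f"{100.0 * used / limit:.1f}%")    # -> 25.8%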
Avail Slots/Wait Job(s)
Available Slots
As of Wed Aug 20 20:07:10 EDT 2025
2643 avail(slots), free(load)=4312.5, unresd(mem)=11722.7G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 4480 total(mem) 34.4T
unused(slots) 2697 unused(load) 4471.9 ie: 60.2% 99.8%
unreserved(mem) 12.3T unused(mem) 31.4T ie: 35.8% 91.2%
unreserved(mem) 4.7G unused(mem) 11.9G per unused(slots)
2618 avail(slots), free(load)=4097.2, unresd(mem)=10609.5G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4104 total(mem) 31.4T
unused(slots) 2672 unused(load) 4097.2 ie: 65.1% 99.8%
unreserved(mem) 10.4T unused(mem) 28.5T ie: 33.0% 90.9%
unreserved(mem) 4.0G unused(mem) 10.9G per unused(slots)
339 avail(slots), free(load)=344.0, unresd(mem)=6460.4G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 344 total(mem) 6.4T
unused(slots) 339 unused(load) 344.0 ie: 98.5% 100.0%
unreserved(mem) 6.3T unused(mem) 6.3T ie: 98.6% 99.0%
unreserved(mem) 19.1G unused(mem) 19.1G per unused(slots)
102 avail(slots), free(load)=104.0, unresd(mem)=654.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 102 unused(load) 104.0 ie: 98.1% 100.0%
unreserved(mem) 0.6T unused(mem) 0.7T ie: 86.7% 91.6%
unreserved(mem) 6.4G unused(mem) 6.8G per unused(slots)
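The "per unused(slots)" line divides the hostgroup's free memory by its unused slots, which indicates how much memory a new job can reasonably reserve per slot; e.g. for @hicpu-hosts above:

    # sketch: free memory per unused slot in a hostgroup
    unreserved_tb, unused_tb, unused_slots = 12.3, 31.4, 2697   # @hicpu-hosts figures
    print(round(unreserved_tb * 1024 / unused_slots, 1))        # -> 4.7 GB/slot unreserved
    print(round(unused_tb * 1024 / unused_slots, 1))            # -> 11.9 GB/slot unused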
GPU Usage
Wed Aug 20 20:07:16 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 26.5 476.8 - 4/1 - 64 2 3.0 - 62 61.0
compute-79-01 - 125.5 21.7 103.8 - 2/0 - 20 0 0.0 - 20 20.0
compute-79-02 - 125.5 15.3 110.2 - 2/0 - 20 0 0.0 - 20 20.0
Total #GPU=8 used=1 (12.5%)
Waiting Job(s)
As of Wed Aug 20 20:07:11 EDT 2025
1 job waiting for bornbuschs :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9996112 bffpens_metawra bornbuschs 08:03 8 320.0 mThM.q 63-153:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=2.500T/8.944T 28.0% for bornbuschs in queue uThM.q
max_hM_slots_per_user/2 slots=64/585 10.9% for bornbuschs in queue mThM.q
max_slots_per_user/1 slots=64/840 7.6% for bornbuschs
------------------- ------------------------------- ------
3 jobs waiting for ggonzale :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9325078 mea_l2prof_Aura ggonzale +19:04 1 18.0 sThM.q 70990-80000:1
9325079 mea_l2prof_Aura ggonzale +19:04 1 18.0 sThM.q 80001-90000:1
9325081 mea_l2prof_Aura ggonzale +19:04 1 18.0 sThM.q 90001-91088:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=90.00G/8.944T 1.0% for ggonzale in queue uThM.q
max_slots_per_user/1 slots=5/840 0.6% for ggonzale
max_hM_slots_per_user/1 slots=5/840 0.6% for ggonzale in queue sThM.q
------------------- ------------------------------- ------
26 jobs waiting for morrisseyd (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9993609 spades_USNM9190 morrisseyd 11:12 10 420.0 mThM.q
9993610 spades_USNM5129 morrisseyd 11:12 10 420.0 mThM.q
9993612 spades_USNM9459 morrisseyd 11:12 10 420.0 mThM.q
9993613 spades_USNM5074 morrisseyd 11:12 10 420.0 mThM.q
9993614 spades_USNM9190 morrisseyd 11:12 10 420.0 mThM.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.023T/8.944T 100.9% for morrisseyd in queue uThM.q
max_hM_slots_per_user/2 slots=220/585 37.6% for morrisseyd in queue mThM.q
max_slots_per_user/1 slots=220/840 26.2% for morrisseyd
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
blast2GO/1 slots=62/110 56.4% for *
total_mem_res/2 mem_res=18.81T/35.78T 52.6% for * in queue uThM.q
total_slots/1 slots=1813/5960 30.4% for *
total_gpus/1 num_gpu=1/8 12.5% for * in queue lTgpu.q
total_mem_res/1 mem_res=4.203T/39.94T 10.5% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d / 15d / 30d
Current Memory Quota Usage
As of Wed Aug 20 20:07:11 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=4.203T/39.94T 10.5% for * in queue uThC.q
total_mem_res/2 mem_res=18.81T/35.78T 52.6% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.4 33.4 420.1 - 470.0 83.3 - 32 9 2.0 - 23 30.0
compute-64-18 - 503.4 51.8 420.1 - 451.6 83.3 - 32 10 9.9 - 22 22.1
compute-65-02 - 503.5 52.0 148.0 - 451.5 355.5 - 64 17 10.8 - 47 53.2
compute-65-03 - 503.5 36.0 444.0 - 467.5 59.5 - 64 22 6.4 - 42 57.6
compute-65-04 - 503.5 27.5 444.0 - 476.0 59.5 - 64 22 5.5 - 42 58.5
compute-65-05 - 503.5 x x - node down - 64 x x - x x
compute-65-06 - 503.5 16.3 420.0 - 487.2 83.5 - 64 10 6.9 - 54 57.1
compute-65-07 - 503.5 x x - node down - 64 x x - x x
compute-65-09 - 503.5 92.9 422.0 - 410.6 81.5 - 64 11 3.2 - 53 60.8
compute-65-10 - 503.5 65.0 446.0 - 438.5 57.5 - 64 23 13.8 - 41 50.2
compute-65-11 - 503.5 29.5 422.0 - 474.0 81.5 - 64 17 6.5 - 47 57.5
compute-65-12 - 503.5 33.1 334.0 - 470.4 169.5 - 64 15 8.2 - 49 55.8
compute-65-13 - 503.5 187.8 444.0 - 315.7 59.5 - 64 22 11.1 - 42 52.9
compute-65-14 - 503.5 x x - node down - 64 x x - x x
compute-65-15 - 503.5 148.9 448.0 - 354.6 55.5 - 64 24 13.5 - 40 50.5
compute-65-16 - 503.5 x x - node down - 64 x x - x x
compute-65-17 - 503.5 39.1 444.0 - 464.4 59.5 - 64 22 7.0 - 42 57.0
compute-65-18 - 503.5 19.8 420.0 - 483.7 83.5 - 64 12 1.0 - 52 63.0
compute-65-19 - 503.5 x x - node down - 64 x x - x x
compute-65-20 - 503.5 x x - node down - 64 x x - x x
compute-65-21 - 503.5 43.7 454.0 - 459.8 49.5 - 64 19 12.0 - 45 52.0
compute-65-22 - 503.5 35.9 420.0 - 467.6 83.5 - 64 10 8.5 - 54 55.5
compute-65-23 - 503.5 x x - node down - 64 x x - x x
compute-65-24 - 503.5 41.0 56.0 - 462.5 447.5 - 64 28 10.0 - 36 54.0
compute-65-25 - 503.5 x x - node down - 64 x x - x x
compute-65-26 - 503.5 x x - node down - 64 x x - x x
compute-65-27 - 503.5 17.1 4.0 - 486.4 499.5 - 64 17 17.5 - 47 46.5
compute-65-28 - 503.5 65.0 420.0 - 438.5 83.5 - 64 10 8.8 - 54 55.2
compute-65-29 - 503.5 54.2 420.0 - 449.3 83.5 - 64 10 10.0 - 54 54.0
compute-65-30 - 503.5 43.0 468.0 - 460.5 35.5 - 64 34 14.4 - 30 49.6
compute-75-01 - 1007.5 40.8 592.1 - 966.7 415.4 - 128 35 40.5 - 93 87.5
compute-75-02 - 1007.5 170.7 684.0 - 836.8 323.5 - 128 48 23.3 - 80 104.7
compute-75-03 - 755.5 87.2 466.0 - 668.3 289.5 - 128 37 28.0 - 91 100.0
compute-75-04 - 755.5 68.3 610.0 - 687.2 145.5 - 128 48 16.6 - 80 111.4
compute-75-05 - 755.5 152.8 470.0 - 602.7 285.5 - 128 35 9.7 - 93 118.3
compute-75-06 - 755.5 126.8 432.0 - 628.7 323.5 - 128 16 14.0 - 112 114.0
compute-75-07 - 755.5 42.7 534.0 - 712.8 221.5 - 128 53 11.5 - 75 116.5
compute-76-03 - 1007.4 34.3 32.5 - 973.1 974.9 - 128 16 8.1 - 112 119.9
compute-76-04 - 1007.4 126.7 938.0 - 880.7 69.4 - 128 49 26.1 - 79 101.9
compute-76-05 - 1007.4 49.7 572.0 - 957.7 435.4 - 128 30 15.1 - 98 112.9
compute-76-06 - 1007.4 61.4 996.0 - 946.0 11.4 - 128 82 21.0 - 46 107.0
compute-76-07 - 1007.4 47.9 942.0 - 959.5 65.4 - 128 84 15.2 - 44 112.8
compute-76-08 - 1007.4 29.2 538.0 - 978.2 469.4 - 128 44 29.7 - 84 98.3
compute-76-09 - 1007.4 45.9 944.0 - 961.5 63.4 - 128 56 19.2 - 72 108.8
compute-76-10 - 1007.4 47.0 540.0 - 960.4 467.4 - 128 57 19.8 - 71 108.2
compute-76-11 - 1007.4 51.7 840.0 - 955.7 167.4 - 128 36 13.0 - 92 115.0
compute-76-12 - 1007.4 62.2 596.0 - 945.2 411.4 - 128 58 24.8 - 70 103.2
compute-76-13 - 1007.4 203.2 892.0 - 804.2 115.4 - 128 50 32.4 - 78 95.6
compute-76-14 - 1007.4 51.8 646.0 - 955.6 361.4 - 128 49 18.9 - 79 109.1
compute-84-01 - 881.1 137.1 384.0 - 744.0 497.1 - 112 62 17.8 - 50 94.2
compute-93-01 - 503.8 27.6 88.0 - 476.2 415.8 - 64 36 12.7 - 28 51.4
compute-93-02 - 755.6 32.6 400.0 - 723.0 355.6 - 72 40 8.1 - 32 63.9
compute-93-03 - 755.6 25.8 56.0 - 729.8 699.6 - 72 20 3.2 - 52 68.8
compute-93-04 - 755.6 44.9 386.0 - 710.7 369.6 - 72 27 11.6 - 45 60.4
======= ===== ====== ==== ==== =====
Totals 32106.3 2901.3 21496.8 4104 1432 627.2
==> 9.0% 67.0% ==> 34.9% 15.3%
Most unreserved/unused memory (974.9/973.1GB) is on compute-76-03 with 112/119.9 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 28.0 54.2 - 1483.4 1457.2 - 192 3 2.5 - 189 189.5
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 19.6 18.5 - 1996.7 1997.8 - 96 1 2.0 - 95 94.0
compute-93-06 - 3023.9 16.9 18.5 - 3007.0 3005.4 - 56 1 0.6 - 55 55.4
======= ===== ====== ==== ==== =====
Totals 6551.6 64.5 91.2 344 5 5.1
==> 1.0% 1.4% ==> 1.5% 1.5%
Most unreserved/unused memory (3005.4/3007.0GB) is on compute-93-06 with 55/55.4 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 08/13/25 and 08/20/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
yisraell 1/5 0.00 33.8 1000.0 2463.2 5.1 0.4
przelomskan 3/3 0.01 63.7 50.9 27.5 6.0 1.8
scottjj 2/40 0.02 229.8 600.0 461.3 8.8 1.3
macguigand 89/432 0.05 29.3 89.6 30.5 2.7 2.9 > 2.5
hydem2 8/16 0.05 65.9 21.6 17.3 16.6 1.3
sookhoos 2/40 0.06 58.3 200.0 197.0 10.5 1.0
jmichail 1/10 0.09 18.2 70.0 29.0 2.3 2.4
kistlerl 1/1 0.14 106.7 96.0 53.0 52.9 1.8
parkerld 14/168 0.28 64.7 8.4 16.0 9.0 0.5
cerqueirat 3190/3236 0.36 78.5 16.1 0.5 0.3 29.6 > 2.5
ramosi 15/450 0.48 42.3 365.3 393.2 30.4 0.9
urrutia-carterje 30/36 0.57 34.3 32.3 12.0 6.8 2.7 > 2.5
pcristof 156/1551 0.60 55.7 150.4 7.5 0.7 20.1 > 2.5
bornbuschs 447/3504 0.94 10.6 311.2 11.9 1.8 26.1 > 2.5
horowitzj 1545/1560 1.11 27.4 160.8 100.6 38.8 1.6
radicev 6/12 1.30 24.8 552.8 672.8 95.6 0.8
beckerm 3/96 1.52 12.3 511.7 45.6 7.2 11.2 > 2.5
mghahrem 9/9 1.58 12.5 0.0 99.5 67.8 0.0
wirshingh 10/35 1.65 82.6 51.7 28.5 0.3 1.8
xuj 28/72 1.68 65.3 420.0 148.0 36.1 2.8 > 2.5
bourkeb 13/120 1.99 46.9 404.6 80.6 4.6 5.0 > 2.5
campanam 12/312 2.16 65.8 327.4 10.7 7.5 30.6 > 2.5
palmerem 714/714 5.22 97.3 300.0 1.4 1.3 210.3 > 2.5
yancos 7/7 6.17 99.7 100.0 22.9 21.3 4.4 > 2.5
collinsa 28/416 9.34 16.0 284.6 40.7 15.5 7.0 > 2.5
johnsong 570/3615 12.25 7.2 811.0 1483.6 3.7 0.5
morrisseyd 110/1110 16.79 69.2 419.6 98.7 36.8 4.3 > 2.5
hinckleya 30/132 17.19 96.6 40.3 15.3 13.2 2.6 > 2.5
uribeje 59/604 21.44 28.1 317.5 22.6 10.7 14.0 > 2.5
mcfaddenc 135/1868 22.78 66.2 248.5 139.5 60.2 1.8
zarril 1795/8442 50.76 16.9 116.7 41.4 5.3 2.8 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 9033/28616 178.57 44.2 254.9 158.7 19.3 1.6
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
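Rows flagged "> 2.5" mark users whose mean reservation exceeded their mean peak usage (maxvmem) by more than a factor of 2.5, i.e. chronic over-reservation; a minimal check against the yancos row (table values are rounded, so other rows reproduce only approximately):

    # sketch: flag users who reserve more than 2.5x the memory they peak at
    reserved_gb, maxvmem_gb = 100.0, 22.9    # yancos row above
    ratio = reserved_gb / maxvmem_gb
    print(round(ratio, 1), ratio > 2.5)      # -> 4.4 True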
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit slots/user in GPU queues
users {*} queues {sTgpu.q} to slots=40
users {*} queues {mTgpu.q} to slots=20
users {*} queues {lTgpu.q} to slots=10
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
Disk Usage & Quota
As of Wed Aug 20 17:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.36T 20.45T 1.91T 92% /11% /home
netapp-fas83-n02:/vol_data_public 142.50T 39.25T 103.25T 28%/3% /data/public
netapp-fas83-n02:/vol_pool_public 230.00T 73.65T 156.35T 33%/1% /pool/public
gpfs01:public 400.00T 276.85T 123.15T 70%/39% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 289.33G 98% /1% /pool/kozakk
netapp-fas83-n02:/vol_pool_nmnh_ggi 21.00T 13.79T 7.21T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.46T 14.49T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 915.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.47T 5.53T 82% /6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 22.18T 2.82T 89% /59% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 112.18T 7.82T 94% /6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 18.86T 6.14T 76%/2% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 53.56T 6.44T 90% /7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 22.02T 54.98T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 9.73T 15.27T 39%/3% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 20.95T 14.05T 60%/21% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 48.73T 1.27T 98% /9% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 39.30T 718.09G 99% /1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 32.50T 12.50T 73%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 259.12T 90.88T 75%/5% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 31.66T 48.34T 40%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 35.00T 20.11T 14.89T 58%/43% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 12.70T 2.30T 85% /4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 59.83T 10.17T 86% /47% /scratch/sylvain
gpfs01:usda_sel 25.00T 5.50T 19.50T 22%/6% /scratch/usda_sel
gpfs01:wrbu 50.00T 40.32T 9.68T 81% /6% /scratch/wrbu
netapp-fas83-n01:/vol_data_admin 4.75T 52.88G 4.70T 2%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 29.07T 18.43T 62%/1% /pool/admin
gpfs01:admin 20.00T 3.50T 16.50T 18%/31% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 34.24T 65.76T 35%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 0.00G 10.00T 1%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 395.63T 356.77T 38.86T 91% /91% /qnas/mesa
qnas:/sil 3840.36T 2997.71T 842.65T 79%/79% /qnas/sil
nas1:/mnt/pool/admin 20.00T 7.98T 12.02T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 85.60T 89.40T 49%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 39.99T 10.37T 29.63T 26%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 13.20T 26.80T 33%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 252.21T 104.42T 147.79T 42%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 183.31G 49.82T 1%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 367.65T 382.35T 50%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 74.11T 25.19T 48.92T 34%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 46.23T 9.42T 36.81T 21%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space full and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become unable to create new files even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is full.
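To compare the two percentages on a live file system, something like the following works (a generic sketch using Python's os.statvfs; the mount point is just an example):

    # sketch: % of space vs % of inodes used for a mount point
    import os
    st = os.statvfs("/scratch/public")                       # example mount point
    pct_space  = 100.0 * (1 - st.f_bavail / st.f_blocks)     # % of disk space used
    pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)      # % of inodes used
    print(f"space {pct_space:.0f}%  inodes {pct_inodes:.0f}%")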
You can also view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.18TB 92.9% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 384.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 513.0GB 133.6% 0.39M 3.9% *** Solomon Chak, SERC - chaks
/home 512.1GB 133.4% 0.00M 0.0% *** Molly Corder, SMSC - corderm
/home 484.5GB 126.2% 0.42M 4.2% *** Adela Roa-Varon, NMNH - roa-varona
/home 407.5GB 106.1% 2.66M 26.6% *** Harrison Keyworth, NMNH - keyworthh
/home 404.9GB 105.4% 0.89M 8.9% *** Camille Leal, NMNH - lealc
/home 404.6GB 105.4% 0.05M 0.5% *** Valeria Ensenat Rivera, SMSC - ensenatriverav
/home 393.1GB 102.4% 0.16M 1.6% *** Tauana Cunha, STRI - cunhat
/home 369.7GB 96.3% 0.06M 0.6% *** Melissa Hawkins, NMNH - hawkinsmt
/home 367.5GB 95.7% 0.94M 9.4% *** William Mattingly, OCIO - mattinglyw
/home 366.7GB 95.5% 0.21M 2.1% *** Caitlin Gionfriddo, SERC - gionfriddoc
/home 361.1GB 94.0% 2.73M 27.3% Brian Bourke, WRBU - bourkeb
/home 359.4GB 93.6% 0.23M 2.3% Juan Uribe, NMNH - uribeje
/home 346.6GB 90.3% 2.01M 20.1% Michael Trizna, NMNH/BOL - triznam
/home 342.6GB 89.2% 0.30M 3.0% Paul Cristofari, SAO/SSP - pcristof
/home 328.1GB 85.4% 0.00M 0.0% Allan Cabrero, NMNH - cabreroa
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 14.90TB 99.3% 1.79M 4.6% *** Ting Wang, NMNH - wangt2
/scratch/public 13.50TB 90.0% 2.09M 5.4% Solomon Chak, SERC - chaks
/scratch/public 12.80TB 85.3% 3.60M 9.3% Kevin Mulder, NZP - mulderk
/scratch/public 11.30TB 75.3% 33.55M 86.5% Zelong Nie, NMNH - niez
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.46T 60.4G 3.40T 1.7% | 167.9G 3.29T 4.7% | 2.78
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-04 3.49T 55.3G 3.44T 1.5% | 199.7G 3.29T 5.6% | 3.61
65-05 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 29.7G 3.46T 0.8% | 199.7G 3.29T 5.6% | 6.72
65-09 3.49T 24.6G 3.47T 0.7% | 199.7G 3.29T 5.6% | 8.13
65-10 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-11 1.75T 20.5G 1.73T 1.1% | 239.6G 1.51T 13.4% | 11.70
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-14 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-18 1.75T 12.3G 1.73T 0.7% | 239.6G 1.51T 13.4% | 19.50
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-21 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-22 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-26 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
65-29 1.75T 26.6G 1.72T 1.5% | 199.7G 1.55T 11.2% | 7.50
65-30 1.75T 12.3G 1.73T 0.7% | 199.7G 1.55T 11.2% | 16.25
75-02 6.98T 50.2G 6.93T 0.7% | 239.6G 6.75T 3.4% | 4.78
75-03 6.98T 50.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.98
75-04 6.98T 50.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.98
75-05 6.98T 50.2G 6.93T 0.7% | 199.7G 6.79T 2.8% | 3.98
75-06 6.98T 89.1G 6.89T 1.2% | 199.7G 6.79T 2.8% | 2.24
75-07 6.98T 50.2G 6.93T 0.7% | 239.6G 6.75T 3.4% | 4.78
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 35.8G 1.71T 2.0% | 400.4G 1.35T 22.4% | 11.17
76-13 1.75T 31.7G 1.71T 1.8% | 400.4G 1.35T 22.4% | 12.61
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 133.2T 1.10T 132.1T 0.8% | 10.43T 122.8T 7.8% | 9.53
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.
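The Resd/Used column appears to be the ratio of SSD space reserved by jobs to space actually written, so values well above 1 suggest reservations that are not being used; e.g. for node 64-18:

    # sketch: reserved-to-used ratio for a node's local SSD
    resd_gb, used_gb = 167.9, 60.4        # node 64-18 row above
    print(round(resd_gb / used_gb, 2))    # -> 2.78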