Hydra-7 Status
Usage
Current snapshot sorted by nodes' Name, nCPU, Usage, Load, Memory, MemRes, or MemUsed.
Usage vs. time for length = 7d, 15d, or 30d, optionally with one user (chosen from the drop-down list of recent users) highlighted.
As of Fri Jul 18 19:27:06 2025: #CPUs/nodes 5364/74, 0 down.
Loads:
head node: 1.34, login nodes: 1.09, 0.17, 1.23, 1.10; NSDs: 132.95, 59.34; licenses: 2 idlrt used.
Queues status: 53 disabled, none need attention, none in error state.
19 users with running jobs (slots/jobs):
Current load: 1223.8, #running (slots/jobs): 2,607/619, usage: 48.6%, efficiency: 46.9%
3 users with queued jobs (jobs/tasks/slots):
Total number of queued jobs/tasks/slots: 11/631/8,611
75 users have/had running or queued jobs over the past 7 days, 89 over the past 15 days.
112 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, nCPU, usage, load, or memory,
and view the past load for 7, 15, or 30 days, as well as highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Friday, 18-Jul-2025 19:36:33 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 5:00.
Warnings
Oversubscribed Jobs
As of Fri Jul 18 19:27:10 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Fri Jul 18 19:27:14 EDT 2025 (65 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 2607/619, 11 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
8271748 beast_partition jassoj +9:06 40 21.7% lThC.q 64-13
8271749 beast_unpartiti jassoj +9:06 40 21.5% lThC.q 65-24
8271752 beast_unpartiti jassoj +9:06 40 18.5% lThC.q 65-20
(more by jassoj)
9010588 uce_spades_rca beckerm +1:03 40 10.1% mThM.q 65-13
9011261 phylonet wangt2 19:54 30 11.1% mThC.q 93-01
8585640 xadm uribeje +5:10 20 5.0% lThM.q 76-11
9011229 uce_spades_conc beckerm 23:00 20 6.3% mThM.q 76-04
9012221 phyluce3 vagac 03:46 20 2.5% mThC.q 76-06
9011915 sponge54_ptl2_a collensab 06:04 16 6.1% mThC.q 64-06 105
9011915 sponge54_ptl2_a collensab 05:03 16 5.9% mThC.q 75-05 109
9011915 sponge54_ptl2_a collensab 04:07 16 6.0% mThC.q 65-21 124
(more by collensab)
9011274 phypart wangt2 19:35 10 11.4% mThC.q 64-04
9011296 phypart2 wangt2 12:29 10 11.6% mThC.q 64-09
8186209 Allo_BPP macguigand +31:02 8 15.8% lThM.q 93-04 7
8186209 Allo_BPP macguigand +31:02 8 15.9% lThM.q 93-04 8
8207967 Allo_BPP macguigand +25:09 8 11.8% lThM.q 93-02 1
(more by macguigand)
9011297 xvcf3 uribeje 12:17 8 12.0% uThM.q 76-14
9011867 barrnap_on_blac vohsens 07:30 8 12.4% sThC.q 76-13 6
9011867 barrnap_on_blac vohsens 07:30 8 12.4% sThC.q 76-13 9
9011867 barrnap_on_blac vohsens 07:30 8 12.4% sThC.q 75-06 10
(more by vohsens)
9012199 fastqc wirshingh 04:17 6 16.4% mThC.q 64-07
8137841 Job_Step5 perezm4 +50:03 4 26.4% lThM.q 64-17
9010518 lingling hydem2 +1:04 1 0.2% lTWFM.sq 64-15
⇒ Equivalent to 835.3 underused CPUs: 929 CPUs used at 10.1% on average.
To see them all use:
'q+ -ineff -u collensab' (25)
'q+ -ineff -u jassoj' (4)
'q+ -ineff -u macguigand' (12)
'q+ -ineff -u vohsens' (13)
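The "equivalent underused CPUs" figure above is simply the flagged slots discounted by their slot-weighted average CPU efficiency (929 × (1 − 0.101) ≈ 835.3). A minimal sketch of that computation and of the cpu% < 33% & age > 1h filter, using a hypothetical job list rather than the report's actual data:

    # Sketch only (not mk-webpage.pl's code): reproduce the inefficiency filter and
    # the "equivalent underused CPUs" figure from per-job records.
    jobs = [
        # (nPEs, cpu_pct, age_hours) -- hypothetical values mimicking the table's columns
        (40, 21.7, 9 * 24 + 6),
        (40, 10.1, 27.0),
        (1,   0.2, 28.0),
    ]

    # A running job is flagged inefficient when cpu% < 33% and it is older than 1 hour.
    flagged = [(npes, cpu) for npes, cpu, age in jobs if cpu < 33.0 and age > 1.0]

    slots = sum(npes for npes, _ in flagged)
    avg_cpu = sum(npes * cpu for npes, cpu in flagged) / slots      # slot-weighted mean cpu%
    underused = slots * (1.0 - avg_cpu / 100.0)                     # equivalent underused CPUs

    print(f"{slots} CPUs used at {avg_cpu:.1f}% on average -> {underused:.1f} underused CPUs")
    # With the full job list the report finds 929 CPUs at 10.1%, i.e. 929 * (1 - 0.101) ~= 835.3.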
Nodes with Excess Load
As of Fri Jul 18 19:27:19 EDT 2025 (0 nodes have a high load, offset=1.5)
High Memory Jobs
Statistics
User              nSlots            memory             memory            vmem       maxvmem     ratio
name            used     %      reserved [TB]  %     used [TB]   %     used [TB]   used [TB]  resd/maxvmem
--------------------------------------------------------------------------------------------------
vohsens 576 43.6% 9.0000 31.0% 0.1674 3.0% 0.6276 1.6621 5.4
ggonzale 93 7.0% 8.8692 30.5% 3.1243 56.7% 4.3300 4.6737 1.9
cerqueirat 388 29.3% 6.0625 20.9% 1.5617 28.4% 1.6777 1.9911 3.0
macguigand 96 7.3% 1.5000 5.2% 0.0101 0.2% 0.0104 0.0104 144.2
yisraell 10 0.8% 0.9766 3.4% 0.1327 2.4% 0.2058 0.7880 1.2
uribeje 28 2.1% 0.7812 2.7% 0.0762 1.4% 0.0840 0.1425 5.5
beckerm 60 4.5% 0.7812 2.7% 0.0422 0.8% 0.0007 0.0648 12.1
bourkeb 8 0.6% 0.5000 1.7% 0.0007 0.0% 0.0007 0.0008 611.2
perezm4 4 0.3% 0.3906 1.3% 0.3294 6.0% 0.3473 0.3831 1.0
hydem2 48 3.6% 0.0938 0.3% 0.0225 0.4% 0.0228 0.0228 4.1
wirshingh 4 0.3% 0.0469 0.2% 0.0028 0.1% 0.0029 0.0274 1.7
campanam 1 0.1% 0.0312 0.1% 0.0080 0.1% 0.0162 0.0162 1.9
hinckleya 6 0.5% 0.0117 0.0% 0.0283 0.5% 0.0266 0.0305 0.4
==================================================================================================
Total 1322 29.0450 5.5062 7.3526 9.8134 3.0
Warnings
409 high memory jobs produced a warning:
2 for beckerm
1 for bourkeb
1 for campanam
263 for cerqueirat
92 for ggonzale
1 for hinckleya
1 for hydem2
12 for macguigand
1 for perezm4
2 for uribeje
32 for vohsens
1 for wirshingh
Details for each job can be found here.
Breakdown by Queue
Select length: 7d, 15d, or 30d
Current Usage by Queue
Queue(s) (slots in use)                              Total  Limit  Fill factor  Efficiency
sThC.q=104  mThC.q=1002  lThC.q=160  uThC.q=2         1268   5056     25.1%       91.4%
sThM.q=48   mThM.q=1136  lThM.q=120  uThM.q=18        1322   4680     28.2%       87.1%
sTgpu.q=0   mTgpu.q=0    lTgpu.q=0   qgpu.iq=0           0    104      0.0%
uTxlM.rq=0                                               0    536      0.0%
lThMuVM.tq=0                                             0    384      0.0%
lTb2g.q=0                                                0      2      0.0%
lTIO.sq=0                                                0      8      0.0%
lTWFM.sq=1                                               1      4     25.0%        0.6%
qrsh.iq=16                                              16     68     23.5%        0.6%
Total: 2607
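The fill factor is the group's slots in use divided by its slot limit (the limits are listed under Resource Limits below); the efficiency column is reported by the script and not rederived here. A quick check against the table, as a sketch:

    # Sketch: fill factor = slots in use / slot limit, per queue group.
    groups = {"hiCPU": (1268, 5056), "hiMem": (1322, 4680), "qrsh.iq": (16, 68)}
    for name, (used, limit) in groups.items():
        print(f"{name}: {100.0 * used / limit:.1f}% full")
    # -> hiCPU: 25.1%, hiMem: 28.2%, qrsh.iq: 23.5%, matching the table above.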
Avail Slots/Wait Job(s)
Available Slots
As of Fri Jul 18 19:27:14 EDT 2025
1813 avail(slots), free(load)=4875.0, unresd(mem)=5834.6G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 4992 total(mem) 38.3T
unused(slots) 2570 unused(load) 4978.1 ie: 51.5% 99.7%
unreserved(mem) 6.1T unused(mem) 31.2T ie: 15.9% 81.3%
unreserved(mem) 2.4G unused(mem) 12.4G per unused(slots)
1579 avail(slots), free(load)=4506.9, unresd(mem)=2785.3G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4584 total(mem) 34.8T
unused(slots) 2336 unused(load) 4570.3 ie: 51.0% 99.7%
unreserved(mem) 2.8T unused(mem) 27.7T ie: 8.1% 79.7%
unreserved(mem) 1.2G unused(mem) 12.2G per unused(slots)
296 avail(slots), free(load)=343.8, unresd(mem)=6455.3G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 344 total(mem) 6.4T
unused(slots) 296 unused(load) 343.8 ie: 86.0% 99.9%
unreserved(mem) 6.3T unused(mem) 6.3T ie: 98.5% 99.0%
unreserved(mem) 21.8G unused(mem) 21.9G per unused(slots)
104 avail(slots), free(load)=104.0, unresd(mem)=754.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 104 unused(load) 104.0 ie: 100.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.7T ie: 100.0% 90.9%
unreserved(mem) 7.3G unused(mem) 6.6G per unused(slots)
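The two "per unused(slots)" lines in each block are just that block's unreserved and unused memory divided by its unused slots; a sketch for @hicpu-hosts (the report rounds its totals, so the last digit can differ slightly):

    # Sketch: per-slot memory figures for the @hicpu-hosts block above.
    unused_slots = 2570
    unreserved_tb, unused_tb = 6.1, 31.2            # from the @hicpu-hosts summary
    print(f"unreserved {unreserved_tb * 1024 / unused_slots:.1f}G, "
          f"unused {unused_tb * 1024 / unused_slots:.1f}G per unused slot")
    # -> unreserved 2.4G, unused 12.4G per unused slot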
GPU Usage
Fri Jul 18 19:27:23 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 31.0 472.3 - 4/0 - 64 0 0.1 - 64 63.9
compute-79-01 - 125.5 21.7 103.8 - 2/0 - 20 0 0.1 - 20 19.9
compute-79-02 - 125.5 16.0 109.5 - 2/0 - 20 0 0.1 - 20 19.9
Total #GPU=8 used=0 (0.0%)
Waiting Job(s)
As of Fri Jul 18 19:27:19 EDT 2025
1 job waiting for collensab :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9011915 sponge54_ptl2_a collensab 06:04 16 64.0 mThC.q 245-300:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
qrsh_u_slots/1 slots=16/16 100.0% for collensab in queue qrsh.iq
max_slots_per_user/1 slots=832/840 99.0% for collensab
max_hC_slots_per_user/2 slots=816/840 97.1% for collensab in queue mThC.q
max_mem_res_per_user/1 mem_res=3.188T/9.985T 31.9% for collensab in queue uThC.q
max_concurrent_jobs_per_u no_concurrent_jobs=1/4 25.0% for collensab in queue qrsh.iq
------------------- ------------------------------- ------
9 jobs waiting for ggonzale (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9011365 gler_lut_global ggonzale 09:26 1 97.7 10-12:1
9011366 gler_lut_global ggonzale 09:26 1 97.7 1-12:1
9011367 gler_lut_global ggonzale 09:25 1 97.7 1-12:1
9011368 gler_lut_global ggonzale 09:25 1 97.7 1-12:1
9011369 gler_lut_global ggonzale 09:25 1 97.7 1-12:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.869T/8.944T 99.2% for ggonzale in queue uThM.q
max_hM_slots_per_user/2 slots=93/585 15.9% for ggonzale in queue mThM.q
max_slots_per_user/1 slots=93/840 11.1% for ggonzale
------------------- ------------------------------- ------
1 job waiting for vohsens :
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
9011217 metawrap_long_p vohsens 23:42 16 256.0 mThM.q 121-596:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=9.000T/8.944T 100.6% for vohsens in queue uThM.q
max_hM_slots_per_user/2 slots=576/585 98.5% for vohsens in queue mThM.q
max_slots_per_user/1 slots=680/840 81.0% for vohsens
max_hC_slots_per_user/1 slots=104/840 12.4% for vohsens in queue sThC.q
max_mem_res_per_user/1 mem_res=208.0G/9.985T 2.0% for vohsens in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=29.05T/35.78T 81.2% for * in queue uThM.q
total_slots/1 slots=2607/5960 43.7% for *
blast2GO/1 slots=38/110 34.5% for *
total_mem_res/1 mem_res=4.203T/39.94T 10.5% for * in queue uThC.q
Memory Usage
Reserved Memory, All High-Memory Queues
Select length: 7d, 15d, or 30d
Current Memory Quota Usage
As of Fri Jul 18 19:27:19 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=4.203T/39.94T 10.5% for * in queue uThC.q
total_mem_res/2 mem_res=29.05T/35.78T 81.2% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.3 x x - node down - 32 x x - x x
compute-64-18 - 503.4 124.6 483.4 - 378.8 20.0 - 32 20 14.1 - 12 17.9
compute-65-02 - 503.5 88.2 433.7 - 415.3 69.8 - 64 22 22.1 - 42 41.9
compute-65-03 - 503.5 75.3 401.7 - 428.2 101.8 - 64 20 17.2 - 44 46.8
compute-65-04 - 503.5 84.2 449.7 - 419.3 53.8 - 64 23 20.5 - 41 43.5
compute-65-05 - 503.5 88.3 449.7 - 415.2 53.8 - 64 23 23.6 - 41 40.4
compute-65-06 - 503.5 76.3 477.7 - 427.2 25.8 - 64 61 26.3 - 3 37.7
compute-65-07 - 503.5 209.6 501.0 - 293.9 2.5 - 64 28 13.1 - 36 50.9
compute-65-09 - 503.5 78.3 449.7 - 425.2 53.8 - 64 24 7.8 - 40 56.2
compute-65-10 - 503.5 85.5 481.7 - 418.0 21.8 - 64 24 8.0 - 40 56.0
compute-65-11 - 503.5 81.0 433.7 - 422.5 69.8 - 64 22 18.1 - 42 45.9
compute-65-12 - 503.5 84.0 449.7 - 419.5 53.8 - 64 23 23.1 - 41 40.9
compute-65-13 - 503.5 30.9 464.0 - 472.6 39.5 - 64 44 5.0 - 20 59.0
compute-65-14 - 503.5 88.7 433.7 - 414.8 69.8 - 64 22 22.0 - 42 42.0
compute-65-15 - 503.5 45.4 448.0 - 458.1 55.5 - 64 40 22.9 - 24 41.1
compute-65-16 - 503.5 119.0 483.3 - 384.5 20.2 - 64 20 19.4 - 44 44.6
compute-65-17 - 503.5 72.4 401.7 - 431.1 101.8 - 64 20 18.2 - 44 45.8
compute-65-18 - 503.5 96.5 465.7 - 407.0 37.8 - 64 36 21.1 - 28 43.0
compute-65-19 - 503.5 90.7 465.7 - 412.8 37.8 - 64 24 24.1 - 40 39.9
compute-65-20 - 503.5 x x - node down - 64 x x - x x
compute-65-21 - 503.5 22.3 336.0 - 481.2 167.5 - 64 33 18.1 - 31 46.0
compute-65-22 - 503.5 222.9 392.6 - 280.6 110.9 - 64 64 41.5 - 0 22.5
compute-65-23 - 503.5 90.5 481.7 - 413.0 21.8 - 64 37 22.4 - 27 41.6
compute-65-24 - 503.5 71.2 445.7 - 432.3 57.8 - 64 59 21.7 - 5 42.3
compute-65-25 - 503.5 202.1 453.0 - 301.4 50.5 - 64 13 13.2 - 51 50.8
compute-65-26 - 503.5 204.5 469.0 - 299.0 34.5 - 64 14 14.1 - 50 50.0
compute-65-27 - 503.5 35.2 416.0 - 468.3 87.5 - 64 38 13.8 - 26 50.2
compute-65-28 - 503.5 44.9 496.0 - 458.6 7.5 - 64 55 23.4 - 9 40.6
compute-65-29 - 503.5 101.3 465.7 - 402.2 37.8 - 64 24 24.0 - 40 40.0
compute-65-30 - 503.5 102.4 449.7 - 401.1 53.8 - 64 23 17.2 - 41 46.8
compute-75-01 - 1007.5 239.1 982.7 - 768.4 24.8 - 128 113 28.2 - 15 99.8
compute-75-02 - 1007.5 143.7 835.3 - 863.8 172.2 - 128 114 10.4 - 14 117.6
compute-75-03 - 755.5 182.6 679.0 - 572.9 76.5 - 128 77 48.5 - 51 79.5
compute-75-04 - 755.5 98.3 705.7 - 657.2 49.8 - 128 39 37.6 - 89 90.4
compute-75-05 - 755.5 66.8 706.0 - 688.7 49.5 - 128 93 32.9 - 35 95.2
compute-75-06 - 755.5 71.8 704.0 - 683.7 51.5 - 128 70 25.6 - 58 102.4
compute-75-07 - 755.5 54.0 720.0 - 701.5 35.5 - 128 21 21.0 - 107 107.0
compute-76-03 - 1007.4 95.9 978.2 - 911.5 29.2 - 128 68 27.2 - 60 100.8
compute-76-04 - 1007.4 216.9 979.3 - 790.5 28.1 - 128 65 25.1 - 63 102.9
compute-76-05 - 1007.4 210.5 1000.0 - 796.9 7.4 - 128 10 10.1 - 118 117.9
compute-76-06 - 1007.4 91.3 640.0 - 916.1 367.4 - 128 81 31.5 - 47 96.5
compute-76-07 - 1007.4 402.9 953.9 - 604.5 53.5 - 128 48 26.5 - 80 101.5
compute-76-08 - 1007.4 367.8 953.9 - 639.6 53.5 - 128 53 24.8 - 75 103.2
compute-76-09 - 1007.4 76.6 944.0 - 930.8 63.4 - 128 78 22.8 - 50 105.2
compute-76-10 - 1007.4 409.6 969.9 - 597.8 37.5 - 128 61 25.9 - 67 102.1
compute-76-11 - 1007.4 232.7 965.3 - 774.7 42.1 - 128 65 25.1 - 63 102.9
compute-76-12 - 1007.4 323.2 950.6 - 684.2 56.8 - 128 82 30.6 - 46 97.3
compute-76-13 - 1007.4 99.5 977.7 - 907.9 29.7 - 128 84 55.2 - 44 72.8
compute-76-14 - 1007.4 193.6 995.3 - 813.8 12.1 - 128 59 23.0 - 69 105.0
compute-84-01 - 881.1 367.6 824.3 - 513.5 56.8 - 112 38 23.6 - 74 88.3
compute-93-01 - 503.8 105.7 461.7 - 398.1 42.1 - 64 56 19.8 - 8 44.2
compute-93-02 - 755.6 125.9 741.0 - 629.7 14.6 - 72 31 12.5 - 41 59.5
compute-93-03 - 755.6 185.9 744.3 - 569.7 11.3 - 72 21 9.3 - 51 62.7
compute-93-04 - 755.6 158.8 744.3 - 596.8 11.3 - 72 21 9.2 - 51 62.8
======= ===== ====== ==== ==== =====
Totals 35630.9 7236.9 32734.7 4584 2304 1142.5
==> 20.3% 91.9% ==> 50.3% 24.9%
Most unreserved/unused memory (367.4/916.1GB) is on compute-76-06 with 47/96.5 slots/CPUs free/unused.
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 39.0 96.2 - 1472.4 1415.2 - 192 48 48.2 - 144 143.8
compute-76-02 - 1511.4 x x - node down - 192 x x - x x
compute-93-05 - 2016.3 14.4 0.0 - 2001.9 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 15.4 0.0 - 3008.5 3023.9 - 56 0 0.0 - 56 56.0
======= ===== ====== ==== ==== =====
Totals 6551.6 68.8 96.3 344 48 48.2
==> 1.1% 1.5% ==> 14.0% 14.0%
Most unreserved/unused memory (3023.9/3008.5GB) is on compute-93-06 with 56/56.0 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 07/09/25 and 07/16/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
holmk 4/20 0.00 45.4 48.1 24.5 1.8 2.0
gouldingt 4/14 0.00 57.2 60.9 7.2 6.5 8.5 > 2.5
macdonaldk 1/12 0.00 37.9 120.0 1.6 0.5 73.6 > 2.5
figueiroh 6/240 0.01 2.7 640.0 413.4 16.8 1.5
pcristof 17/544 0.01 43.3 300.0 61.4 1.8 4.9 > 2.5
morrisseyd 1/10 0.01 58.4 800.0 124.4 10.1 6.4 > 2.5
macguigand 2/16 0.02 0.9 640.0 2.8 0.8 231.1 > 2.5
bayarkhangaia 4/160 0.02 5.6 2199.6 77.2 45.2 28.5 > 2.5
radicev 6/10 0.09 25.0 495.6 121.7 92.0 4.1 > 2.5
medeirosi 56/560 0.30 346.3 450.0 268.7 2.6 1.7
connm 125/1250 0.31 312.6 450.0 236.4 2.1 1.9
steierj 173/892 0.37 64.6 94.4 29.4 1.1 3.2 > 2.5
beckerm 7/300 0.52 25.1 959.6 946.3 651.4 1.0
bourkeb 4/32 0.60 26.2 255.6 0.8 0.6 310.1 > 2.5
bakerd 8/64 0.68 13.4 400.0 0.4 0.1 1110.7 > 2.5
collinsa 22/264 0.82 75.3 393.7 37.8 10.7 10.4 > 2.5
zhangy 1/8 1.20 64.6 96.0 11.8 9.7 8.1 > 2.5
hinckleya 21/42 1.24 63.0 21.6 16.3 2.8 1.3
yisraell 6/52 1.25 97.7 900.0 89.9 81.0 10.0 > 2.5
blackburnrc 32/683 1.87 19.1 126.7 74.1 17.0 1.7
franzena 33/1056 2.06 23.2 376.7 232.7 6.9 1.6
bornbuschs 1137/4548 2.70 102.2 160.0 41.1 16.9 3.9 > 2.5
uribeje 70/1170 3.57 39.8 319.3 39.1 18.4 8.2 > 2.5
hydem2 498/916 5.66 33.8 388.4 229.4 188.8 1.7
wangt2 29/615 6.82 54.3 951.3 137.9 4.8 6.9 > 2.5
horowitzj 15996/16131 9.36 89.1 48.0 29.2 11.0 1.6
toths 364/5824 10.45 79.9 256.0 18.8 6.4 13.6 > 2.5
vohsens 7866/8391 11.02 75.9 51.9 1.3 0.6 39.8 > 2.5
mghahrem 37/300 15.39 65.8 14.7 269.7 33.5 0.1
granquistm 324/1781 22.75 47.7 146.3 104.8 11.7 1.4
cerqueirat 28/28 77.66 99.7 32.0 7.5 7.1 4.3 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 26882/45933 176.76 79.3 135.5 65.4 18.3 2.1
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
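The last column, ratio resd/maxvmem, is the mean reserved memory divided by the mean peak memory actually used (maxvmem); users are flagged with "> 2.5" when they reserve more than 2.5 times what their jobs needed. A short sketch of that flagging, using a few rounded values from the table above (the report computes the ratio before rounding, so the last digit can differ):

    # Sketch: flag memory over-reservation as in the "ratio resd/maxvmem" column.
    rows = {"hinckleya": (21.6, 16.3), "yisraell": (900.0, 89.9), "macdonaldk": (120.0, 1.6)}
    for user, (reserved_gb, maxvmem_gb) in rows.items():
        ratio = reserved_gb / maxvmem_gb
        print(f"{user}: {ratio:5.1f}" + (" > 2.5" if ratio > 2.5 else ""))
    # -> hinckleya 1.3, yisraell 10.0 > 2.5, macdonaldk 75.0 > 2.5 (report shows 73.6, pre-rounding)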
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits to set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent request per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for the lTWFM.sq queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
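These limits correspond to Grid Engine resource quota sets (the rule names, e.g. max_slots_per_user/1 or max_mem_res_per_user/2, appear in the quota listings above). A minimal sketch, assuming the standard qconf -srqs layout, of how the per-user slot limit would be defined:

    {
       name         max_slots_per_user
       description  Limit slots/user for all queues
       enabled      TRUE
       limit        users {*} to slots=840
    }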
Disk Usage & Quota
As of Fri Jul 18 17:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 19.81T 2.24T 90% /12% /home
netapp-fas83-n02:/vol_data_public 142.50T 50.78T 91.72T 36%/3% /data/public
netapp-fas83-n02:/vol_pool_public 230.00T 102.83T 127.17T 45%/1% /pool/public
gpfs01:public 400.00T 377.33T 22.67T 95% /55% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 285.32G 98% /1% /pool/kozakk
netapp-fas83-n02:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.49T 14.46T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 907.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.48T 5.52T 82% /6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 22.16T 2.84T 89% /59% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 112.13T 7.87T 94% /6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 16.68T 8.32T 67%/4% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 46.55T 13.45T 78%/7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 22.02T 54.98T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 9.49T 15.51T 38%/3% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 19.99T 15.01T 58%/21% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 45.86T 4.14T 92% /9% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 26.19T 13.81T 66%/1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 33.11T 11.89T 74%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 290.48T 59.52T 83% /4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 21.17T 58.83T 27%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 19.85T 5.15T 80%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 12.69T 2.31T 85% /4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 43.40T 26.60T 62%/47% /scratch/sylvain
gpfs01:usda_sel 25.00T 5.49T 19.51T 22%/6% /scratch/usda_sel
gpfs01:wrbu 50.00T 39.13T 10.87T 79%/6% /scratch/wrbu
netapp-fas83-n01:/vol_data_admin 4.75T 52.93G 4.70T 2%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 41.24T 6.26T 87% /1% /pool/admin
gpfs01:admin 20.00T 3.48T 16.52T 18%/30% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 0.00G 10.00T 1%/1% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
nas1:/mnt/pool/admin 20.00T 7.93T 12.07T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 93.17T 81.83T 54%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 10.37T 29.63T 26%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 13.31T 26.69T 34%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 262.17T 111.78T 150.39T 43%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 2.93T 47.07T 6%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 741.79T 367.28T 374.51T 50%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 40.00T 27.74T 12.26T 70%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 395.63T 350.76T 44.87T 89% /89% /qnas/mesa
qnas:/sil 3840.36T 2964.95T 875.41T 78%/78% /qnas/sil
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become unable to create new files even though free space remains.
The % of inodes should be lower or comparable to the % of disk space used.
If it is much larger, the disk can become unusable before it gets full.
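Both percentages can also be read programmatically; a minimal sketch using Python's os.statvfs (the path is just an example mount point):

    import os

    def space_and_inode_pct(path="/scratch/public"):
        """Return (% disk space used, % inodes used) for the filesystem holding path."""
        st = os.statvfs(path)
        pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks) if st.f_blocks else 0.0
        pct_inodes = 100.0 * (1 - st.f_favail / st.f_files) if st.f_files else 0.0
        return pct_space, pct_inodes

    # A filesystem whose inode percentage greatly exceeds its space percentage can
    # "fill up" (no new files can be created) while plenty of space remains.
    print("space %.1f%%, inodes %.1f%%" % space_and_inode_pct())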
You can view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.18TB 92.9% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
/data/public 3.99TB 88.7% 0.01M 0.1% Zelong Nie, NMNH - niez
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 512.1GB 100.0% 0.00M 0.0% *** Molly Corder, SMSC - corderm
/home 510.8GB 99.8% 0.28M 2.8% *** Paul Cristofari, SAO/SSP - pcristof
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 484.5GB 94.6% 0.42M 4.2% Adela Roa-Varon, NMNH - roa-varona
/home 478.6GB 93.5% 0.24M 2.4% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 471.4GB 92.1% 0.03M 0.3% Shauna Rasband, NMNH - rasbands
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
/pool/public 6.43TB 85.7% 13.85M 76.9% Ting Wang, NMNH - wangt2
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 15.00TB 100.0% 0.02M 0.1% *** Samuel Vohsen, NMNH - vohsens
/scratch/public 14.10TB 94.0% 1.84M 4.7% Ting Wang, NMNH - wangt2
/scratch/public 13.50TB 90.0% 0.87M 2.3% Karen Holm, SMSC - holmk
/scratch/public 13.50TB 90.0% 2.09M 5.4% Solomon Chak, SERC - chaks
/scratch/public 13.10TB 87.3% 0.33M 0.9% Juan Uribe, NMNH - uribeje
/scratch/public 13.10TB 87.3% 4.38M 11.3% Kevin Mulder, NZP - mulderk
/scratch/public 12.90TB 86.0% 14.30M 36.9% Brian Bourke, WRBU - bourkeb
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.46T 46.1G 3.41T 1.3% | 167.9G 3.29T 4.7% | 3.64
65-02 3.49T 31.7G 3.46T 0.9% | 199.7G 3.29T 5.6% | 6.29
65-03 3.49T 54.3G 3.44T 1.5% | 199.7G 3.29T 5.6% | 3.68
65-04 3.49T 77.8G 3.41T 2.2% | 199.7G 3.29T 5.6% | 2.57
65-05 3.49T 49.2G 3.44T 1.4% | 199.7G 3.29T 5.6% | 4.06
65-06 3.49T 99.3G 3.39T 2.8% | 199.7G 3.29T 5.6% | 2.01
65-09 3.49T 224.3G 3.27T 6.3% | 199.7G 3.29T 5.6% | 0.89
65-10 1.75T 212.0G 1.54T 11.9% | 199.7G 1.55T 11.2% | 0.94
65-11 1.75T 127.0G 1.62T 7.1% | 199.7G 1.55T 11.2% | 1.57
65-12 1.75T 59.4G 1.69T 3.3% | 199.7G 1.55T 11.2% | 3.36
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 15.4G 1.73T 0.9% | 199.7G 1.55T 11.2% | 13.00
65-15 1.75T 134.1G 1.61T 7.5% | 199.7G 1.55T 11.2% | 1.49
65-16 1.75T 21.5G 1.72T 1.2% | 199.7G 1.55T 11.2% | 9.29
65-17 1.75T 66.6G 1.68T 3.7% | 199.7G 1.55T 11.2% | 3.00
65-18 1.75T 17.4G 1.73T 1.0% | 199.7G 1.55T 11.2% | 11.47
65-19 1.75T 35.8G 1.71T 2.0% | 199.7G 1.55T 11.2% | 5.57
65-21 1.75T 17.4G 1.73T 1.0% | 199.7G 1.55T 11.2% | 11.47
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 29.7G 1.72T 1.7% | 199.7G 1.55T 11.2% | 6.72
65-24 1.75T 16.4G 1.73T 0.9% | 199.7G 1.55T 11.2% | 12.19
65-25 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-26 1.75T 12.3G 1.73T 0.7% | 1.75T 0.0G 100.0% | 145.42
65-27 1.75T 22.5G 1.72T 1.3% | 199.7G 1.55T 11.2% | 8.86
65-28 1.75T 22.5G 1.72T 1.3% | 199.7G 1.55T 11.2% | 8.86
65-29 1.75T 23.6G 1.72T 1.3% | 199.7G 1.55T 11.2% | 8.48
65-30 1.75T 21.5G 1.72T 1.2% | 199.7G 1.55T 11.2% | 9.29
75-02 6.98T 60.4G 6.92T 0.8% | 199.7G 6.79T 2.8% | 3.31
75-03 6.98T 59.4G 6.92T 0.8% | 199.7G 6.79T 2.8% | 3.36
75-04 6.98T 139.3G 6.85T 1.9% | 400.4G 6.59T 5.6% | 2.87
75-05 6.98T 103.4G 6.88T 1.4% | 199.7G 6.79T 2.8% | 1.93
75-06 6.98T 140.3G 6.84T 2.0% | 400.4G 6.59T 5.6% | 2.85
75-07 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 185.3G 1.56T 10.4% | 600.1G 1.16T 33.6% | 3.24
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 117.8G 1.63T 6.6% | 600.1G 1.16T 33.6% | 5.10
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 131.4T 2.46T 129.0T 1.9% | 10.48T 121.0T 8.0% | 4.26
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.