Hydra-7 Status
Usage
As of Wed Mar 12 20:57:08 2025: #CPUs/nodes 5636/74, 0 down.
Loads:
head node: 1.80, login nodes: 3.82, 0.63, 0.20, 0.13; NSDs: 0.56, 2.22; licenses: 1 idlrt used.
Queues status: 23 disabled, none need attention, none in error state.
24 users with running jobs (slots/jobs).
Current load: 1280.5, #running (slots/jobs): 1,216/634, usage: 21.6%, efficiency: 105.3%
2 users with queued jobs (jobs/tasks/slots).
Total number of queued jobs/tasks/slots: 3/1,699/1,699
73 users have or had running or queued jobs over the past 7 days, 88 over the past 15 days, and 106 over the past 30 days.
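For reference, the headline percentages can be reproduced from the raw counts above; a minimal Python sketch, assuming usage is running slots over total CPUs and efficiency is current load over running slots (which matches the reported figures):

    # Reproduce the summary percentages from the raw counts reported above.
    # Assumption: usage = running slots / total CPUs, efficiency = load / running slots.
    total_cpus    = 5636      # from "#CPUs/nodes 5636/74"
    running_slots = 1216      # from "#running (slots/jobs): 1,216/634"
    current_load  = 1280.5    # from "Current load: 1280.5"

    usage      = 100.0 * running_slots / total_cpus    # ~21.6%
    efficiency = 100.0 * current_load / running_slots  # ~105.3%
    print(f"usage: {usage:.1f}%  efficiency: {efficiency:.1f}%")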
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, nCPU, usage, load, or memory,
view the past load for 7, 15, or 30 days, and highlight a given user,
by selecting the corresponding options in the drop-down menus.
This page was last updated on Wednesday, 12-Mar-2025 21:02:10 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 1:03.
Warnings
Oversubscribed Jobs
As of Wed Mar 12 20:57:16 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Wed Mar 12 20:57:22 EDT 2025 (12 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1215/633, 4 queued (jobs), showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
5992084 unp_new jassoj +14:07 40 16.9% lThC.q 64-13
5992117 unp_new_2 jassoj +14:07 40 15.9% lThC.q 64-03
5968496 cladeV_REV2_SNa gallego-narbona +42:07 16 27.3% uThC.q 65-18
6001642 Louis_MCMC zhangy +11:20 16 6.2% mThC.q 65-25
6642558 Eury70p_swsc zhangy 01:07 16 29.9% mThC.q 64-11
6603633 vcfMake uribeje 07:44 8 13.7% lThC.q 93-03
5962041 bpp_A10_0.job talaveraa +46:08 4 17.9% lThC.q 65-07
5962042 bpp_A10_0.job talaveraa +46:08 4 17.6% lThC.q 65-23
5962043 bpp_A10_1.job talaveraa +46:08 4 19.1% lThC.q 65-16
(more by talaveraa)
6601009 gff_to_bed_Eper macguigand 08:07 1 11.7% lThM.q 76-05
6622590 cp_scr2pool coellogarridoa 04:32 1 20.6% lThC.q 64-04
⇒ Equivalent to 126.4 underused CPUs: 154 CPUs used at 17.9% on average.
To see them all use:
'q+ -ineff -u talaveraa' (4)
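The "equivalent underused CPUs" figure follows from the summary line above; a minimal sketch, assuming it is the CPU count scaled by the unused fraction:

    # "154 CPUs used at 17.9% on average" -> equivalent number of underused CPUs.
    # Assumption: underused CPUs = CPUs in inefficient jobs * (1 - average CPU fraction).
    cpus_in_inefficient_jobs = 154
    avg_cpu_fraction = 0.179
    underused = cpus_in_inefficient_jobs * (1.0 - avg_cpu_fraction)
    print(f"{underused:.1f} underused CPUs")  # 126.4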
Nodes with Excess Load
As of Wed Mar 12 20:57:31 EDT 2025 (26 nodes have a high load, offset=1.5)
node     #CPUs   #slots used    load   excess load
---------------------------------------------------
64-04 40 2 5.3 3.3 *
64-06 40 1 4.6 3.6 *
64-07 40 3 4.5 1.5 *
64-09 40 2 4.6 2.6 *
64-10 40 1 4.9 3.9 *
64-14 40 2 4.3 2.3 *
65-04 64 10 11.8 1.8 *
65-12 64 10 11.8 1.8 *
65-13 64 10 12.1 2.1 *
65-14 64 9 11.6 2.6 *
65-15 64 10 11.6 1.6 *
75-01 128 20 97.8 77.8 *
75-04 128 21 23.8 2.8 *
75-05 128 22 24.8 2.8 *
75-07 128 22 24.2 2.2 *
76-03 192 24 25.7 1.7 *
76-04 192 24 37.0 13.0 *
76-05 128 20 23.0 3.0 *
76-06 128 21 65.6 44.6 *
76-08 128 20 23.7 3.7 *
76-09 128 23 25.2 2.2 *
76-10 128 21 23.8 2.8 *
76-11 128 22 24.3 2.3 *
76-14 128 22 24.3 2.3 *
84-01 112 19 20.7 1.7 *
93-04 72 13 14.8 1.8 *
Total excess load = 191.7
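The excess load column appears to be the reported load minus the slots in use, with nodes listed once the excess reaches the offset (1.5); a minimal sketch under that assumption:

    # Assumption: excess = load - slots used; a node is listed when excess >= offset.
    def excess_load(load, slots_used, offset=1.5):
        excess = load - slots_used
        return round(excess, 1), excess >= offset

    print(excess_load(97.8, 20))  # (77.8, True)  node 75-01
    print(excess_load(4.5, 3))    # (1.5, True)   node 64-07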
High Memory Jobs
Statistics
user name        nSlots used   %   mem reserved [TB]   %   mem used [TB]   %   vmem used [TB]   maxvmem used [TB]   ratio resd/maxvmem
--------------------------------------------------------------------------------------------------
quattrinia 572 83.7% 8.9375 76.3% 2.1489 81.9% 2.4940 3.0146 3.0
gonzalezv 20 2.9% 0.7812 6.7% 0.0874 3.3% 0.6218 0.7016 1.1
zhangy 40 5.9% 0.6562 5.6% 0.0400 1.5% 0.0642 0.0651 10.1
longk 8 1.2% 0.3125 2.7% 0.0386 1.5% 0.1145 0.1675 1.9
liy 1 0.1% 0.2930 2.5% 0.0987 3.8% 0.1610 0.1729 1.7
gonzalezb 20 2.9% 0.2344 2.0% 0.1042 4.0% 0.0963 0.1654 1.4
sossajef 11 1.6% 0.2148 1.8% 0.0014 0.1% 0.0016 0.0021 102.8
gotzekd 6 0.9% 0.1406 1.2% 0.0859 3.3% 0.1030 0.1054 1.3
magalhaesm 4 0.6% 0.0977 0.8% 0.0182 0.7% 0.0029 0.0470 2.1
macguigand 1 0.1% 0.0391 0.3% 0.0000 0.0% 0.0000 0.0001 597.0
==================================================================================================
Total 683 11.7070 2.6234 3.6592 4.4416 2.6
Warnings
27 high memory jobs produced a warning:
5 for gonzalezb
1 for gonzalezv
1 for gotzekd
1 for liy
4 for longk
1 for magalhaesm
11 for sossajef
3 for zhangy
Details for each job can be found here.
Breakdown by Queue
Current Usage by Queue
                                               Total   Limit   Fill factor   Efficiency
sThC.q=9, mThC.q=357, lThC.q=125, uThC.q=35      526    5056       10.4%       239.8%
sThM.q=0, mThM.q=646, lThM.q=36, uThM.q=1        683    4680       14.6%       179.9%
sTgpu.q=0, mTgpu.q=1, lTgpu.q=0, qgpu.iq=0         1     104        1.0%       108.0%
uTxlM.rq=0                                         0     536        0.0%
lThMuVM.tq=0                                       0     384        0.0%
lTb2g.q=0                                          0       2        0.0%
lTIO.sq=1                                          1       8       12.5%         3.4%
lTWFM.sq=0                                         0       4        0.0%
qrsh.iq=6                                          6      40       15.0%         2.8%
Total: 1217
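The fill factor of each queue group is its running slots divided by its slot limit; a minimal sketch using the totals and limits from the table above:

    # Assumption: fill factor = running slots in a queue group / the group's slot limit.
    groups = {                      # (slots, limit) from the table above
        "?ThC.q":  (526, 5056),
        "?ThM.q":  (683, 4680),
        "GPU":     (1, 104),
        "qrsh.iq": (6, 40),
    }
    for name, (slots, limit) in groups.items():
        print(f"{name}: {100.0 * slots / limit:.1f}% full")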
Avail Slots/Wait Job(s)
Available Slots
As of Wed Mar 12 20:57:23 EDT 2025
40 avail(slots), free(load)=40.0, unresd(mem)=376.6G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 40 total(mem) 38.0T
unused(slots) -1152 unused(load) 26.2 ie: -2880.0% 65.6%
unreserved(mem) 24.9T unused(mem) 32.9T ie: 65.4% 86.6%
0 avail(slots), free(load)=0.0, unresd(mem)=0.0G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 0 total(mem) 35.3T
unused(slots) -1103 unused(load) -13.1 ie: 0.0% 0.0%
unreserved(mem) 22.1T unused(mem) 30.3T ie: 62.6% 86.0%
0 avail(slots), free(load)=0.0, unresd(mem)=0.0G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 0 total(mem) 7.9T
unused(slots) 0 unused(load) 0.0 ie: 0.0% 0.0%
unreserved(mem) 7.9T unused(mem) 7.7T ie: 100.0% 98.2%
103 avail(slots), free(load)=104.0, unresd(mem)=752.2G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 104 total(mem) 0.7T
unused(slots) 103 unused(load) 104.0 ie: 99.0% 100.0%
unreserved(mem) 0.7T unused(mem) 0.6T ie: 99.7% 87.1%
unreserved(mem) 7.3G unused(mem) 6.4G per unused(slots)
GPU Usage
Wed Mar 12 20:57:36 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 27.6 475.7 - 4/0 - 64 0 0.0 - 64 64.0
compute-79-01 - 125.5 60.2 65.3 - 2/2 - 20 1 1.1 - 19 18.9
compute-79-02 - 125.5 9.2 116.3 - 2/0 - 20 0 0.0 - 20 20.0
Total #GPU=8 used=2 (25.0%)
Waiting Job(s)
As of Wed Mar 12 20:57:30 EDT 2025
1 job waiting for quattrinia:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
6614712 alignfasta-ARRA quattrinia 05:53 1 16.0 mThM.q 1345-3040:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.938T/8.944T 99.9% for quattrinia in queue uThM.q
max_hM_slots_per_user/2 slots=572/585 97.8% for quattrinia in queue mThM.q
max_slots_per_user/1 slots=572/840 68.1% for quattrinia
------------------- ------------------------------- ------
2 jobs waiting for sylvain:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
6649080 q_dofiteb4.9.21 sylvain 00:00 1 sThC.q
6649081 q_dofiteb4.9.22 sylvain 00:00 1 sThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_slots_per_user/1 slots=10/840 1.2% for sylvain
max_hC_slots_per_user/1 slots=10/840 1.2% for sylvain in queue sThC.q
max_mem_res_per_user/1 mem_res=20.00G/9.985T 0.2% for sylvain in queue uThC.q
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=11.71T/35.78T 32.7% for * in queue uThM.q
total_gpus/1 num_gpu=2/8 25.0% for * in queue mTgpu.q
total_slots/1 slots=1220/5960 20.5% for *
blast2GO/1 slots=19/110 17.3% for *
total_mem_res/1 mem_res=1.756T/39.94T 4.4% for * in queue uThC.q
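The %used column divides the current value by its limit, after converting the T/G memory suffixes; a minimal parsing sketch, assuming the "resource=value/limit" format shown above (to_gb and pct_used are illustrative helpers):

    # Assumption: quota lines use "resource=value/limit", memory values with a T or G suffix.
    def to_gb(s):
        if s.endswith("T"):
            return float(s[:-1]) * 1024.0
        if s.endswith("G"):
            return float(s[:-1])
        return float(s)              # plain counts (slots, GPUs)

    def pct_used(entry):
        value, limit = entry.split("=", 1)[1].split("/")
        return 100.0 * to_gb(value) / to_gb(limit)

    print(f"{pct_used('mem_res=11.71T/35.78T'):.1f}%")  # 32.7%
    print(f"{pct_used('slots=1220/5960'):.1f}%")        # 20.5%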
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Wed Mar 12 20:57:31 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=1.756T/39.94T 4.4% for * in queue uThC.q
total_mem_res/2 mem_res=11.71T/35.78T 32.7% for * in queue uThM.q
Current Memory Usage by Compute Node, High Memory Nodes Only
hostgroup: @himem-hosts (54 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-64-17 - 503.3 147.0 200.0 - 356.3 303.3 - 32 9 9.1 - 23 22.9
compute-64-18 - 503.3 59.4 148.0 - 443.9 355.3 - 32 9 8.9 - 23 23.1
compute-65-02 - 503.5 82.9 180.0 - 420.6 323.5 - 64 11 12.1 - 53 51.9
compute-65-03 - 503.5 40.2 22.0 - 463.3 481.5 - 64 23 23.0 - 41 41.0
compute-65-04 - 503.5 69.5 160.0 - 434.0 343.5 - 64 10 11.8 - 54 52.2
compute-65-05 - 503.5 69.4 184.0 - 434.1 319.5 - 64 11 12.0 - 53 52.0
compute-65-06 - 503.5 71.9 176.0 - 431.6 327.5 - 64 11 12.1 - 53 51.9
compute-65-07 - 503.5 65.1 180.0 - 438.4 323.5 - 64 14 11.9 - 50 52.1
compute-65-09 - 503.5 75.6 176.0 - 427.9 327.5 - 64 11 12.4 - 53 51.6
compute-65-10 - 503.5 59.2 128.0 - 444.3 375.5 - 64 16 16.0 - 48 48.0
compute-65-11 - 503.5 72.6 180.0 - 430.9 323.5 - 64 11 12.1 - 53 51.9
compute-65-12 - 503.5 60.2 144.0 - 443.3 359.5 - 64 10 11.6 - 54 52.4
compute-65-13 - 503.5 71.2 164.0 - 432.3 339.5 - 64 10 12.2 - 54 51.9
compute-65-14 - 503.5 83.3 144.0 - 420.2 359.5 - 64 9 11.7 - 55 52.4
compute-65-15 - 503.5 47.4 152.0 - 456.1 351.5 - 64 10 11.6 - 54 52.4
compute-65-16 - 503.5 62.5 180.0 - 441.0 323.5 - 64 14 12.2 - 50 51.8
compute-65-17 - 503.5 65.0 192.0 - 438.5 311.5 - 64 12 12.8 - 52 51.2
compute-65-18 - 503.5 78.5 288.0 - 425.0 215.5 - 64 26 11.8 - 38 52.2
compute-65-19 - 503.5 73.9 144.0 - 429.6 359.5 - 64 17 17.0 - 47 47.0
compute-65-20 - 503.5 x x - node down - 64 x x - x x
compute-65-21 - 503.5 78.2 192.0 - 425.3 311.5 - 64 12 12.8 - 52 51.2
compute-65-22 - 503.5 78.6 180.0 - 424.9 323.5 - 64 14 11.6 - 50 52.4
compute-65-23 - 503.5 72.0 180.0 - 431.5 323.5 - 64 14 12.3 - 50 51.7
compute-65-24 - 503.5 58.4 176.0 - 445.1 327.5 - 64 11 12.1 - 53 51.9
compute-65-25 - 503.5 55.2 176.0 - 448.3 327.5 - 64 26 12.1 - 38 51.9
compute-65-26 - 503.5 55.3 224.0 - 448.2 279.5 - 64 11 11.3 - 53 52.6
compute-65-27 - 503.5 79.6 176.0 - 423.9 327.5 - 64 11 12.2 - 53 51.8
compute-65-28 - 503.5 62.2 208.0 - 441.3 295.5 - 64 15 11.1 - 49 52.9
compute-65-29 - 503.5 59.4 208.0 - 444.1 295.5 - 64 23 17.3 - 41 46.7
compute-65-30 - 503.5 73.8 180.0 - 429.7 323.5 - 64 11 12.2 - 53 51.8
compute-75-01 - 1007.4 109.5 320.0 - 897.9 687.4 - 128 20 89.2 - 108 38.8
compute-75-02 - 1007.5 116.2 352.0 - 891.3 655.5 - 128 32 32.1 - 96 95.9
compute-75-03 - 755.5 86.4 432.0 - 669.1 323.5 - 128 24 25.4 - 104 102.6
compute-75-04 - 755.5 88.2 324.0 - 667.3 431.5 - 128 21 23.8 - 107 104.2
compute-75-05 - 755.5 91.9 388.0 - 663.6 367.5 - 128 22 24.8 - 106 103.2
compute-75-06 - 755.5 38.4 512.0 - 717.1 243.5 - 128 128 128.1 - 0 -0.1
compute-75-07 - 755.5 119.0 400.0 - 636.5 355.5 - 128 22 24.2 - 106 103.8
compute-76-03 - 1007.4 54.2 34.5 - 953.2 972.9 - 128 24 25.7 - 104 102.3
compute-76-04 - 1007.4 107.9 432.0 - 899.5 575.4 - 128 24 24.6 - 104 103.3
compute-76-05 - 1007.4 107.1 344.0 - 900.3 663.4 - 128 20 23.0 - 108 105.0
compute-76-06 - 1007.4 104.8 336.0 - 902.6 671.4 - 128 21 65.6 - 107 62.4
compute-76-07 - 1007.4 43.5 256.0 - 963.9 751.4 - 128 96 96.1 - 32 31.9
compute-76-08 - 1007.4 250.7 604.0 - 756.7 403.4 - 128 20 23.7 - 108 104.3
compute-76-09 - 1007.4 476.3 848.0 - 531.1 159.4 - 128 23 25.2 - 105 102.8
compute-76-10 - 1007.4 134.9 336.0 - 872.5 671.4 - 128 21 23.6 - 107 104.4
compute-76-11 - 1007.4 133.4 336.0 - 874.0 671.4 - 128 22 24.2 - 106 103.8
compute-76-12 - 1007.4 80.7 178.0 - 926.7 829.4 - 128 36 25.1 - 92 102.9
compute-76-13 - 1007.4 116.2 356.0 - 891.2 651.4 - 128 25 23.8 - 103 104.2
compute-76-14 - 1007.4 113.0 352.0 - 894.4 655.4 - 128 22 24.3 - 106 103.7
compute-84-01 - 881.1 369.3 288.0 - 511.8 593.1 - 112 19 20.7 - 93 91.3
compute-93-01 - 503.8 83.6 480.0 - 420.2 23.8 - 64 24 23.9 - 40 40.1
compute-93-02 - 755.6 69.6 192.0 - 686.0 563.6 - 72 13 12.6 - 59 59.4
compute-93-03 - 755.6 66.1 182.0 - 689.5 573.6 - 72 19 14.2 - 53 57.8
compute-93-04 - 755.6 81.7 192.0 - 673.9 563.6 - 72 13 14.8 - 59 57.2
======= ===== ====== ==== ==== =====
Totals 36134.0 5070.1 13516.5 4616 1103 1206.1
==> 14.0% 37.4% ==> 23.9% 26.1%
Most unreserved/unused memory (972.9/953.2GB) is on compute-76-03 with 104/102.3 slots/CPUs free/unused.
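Per node, the unused and unresd columns are the available memory minus what is used and reserved; the percentage row under the totals divides each total by the available memory or the total nCPU. A minimal sketch under those assumptions, using compute-64-17 and the @himem-hosts totals:

    # Assumptions: per node, unused = avail - used and unresd = avail - resd;
    # the percentage row divides the totals by available memory and by total nCPU.
    avail, used, resd = 503.3, 147.0, 200.0                    # compute-64-17 [GB]
    print(f"unused {avail - used:.1f}  unresd {avail - resd:.1f}")           # 356.3  303.3

    tot_avail, tot_used, tot_resd = 36134.0, 5070.1, 13516.5   # @himem-hosts totals [GB]
    tot_ncpu, tot_slots, tot_load = 4616, 1103, 1206.1
    print(f"{100*tot_used/tot_avail:.1f}%  {100*tot_resd/tot_avail:.1f}%")   # 14.0%  37.4%
    print(f"{100*tot_slots/tot_ncpu:.1f}%  {100*tot_load/tot_ncpu:.1f}%")    # 23.9%  26.1%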
hostgroup: @xlmem-hosts (4 hosts)
- ----------- memory (GB) ------------ - --------- slots/CPUs ---------
hostname - avail used resd - unused unresd - nCPU used load - free unused
compute-76-01 - 1511.4 36.1 -0.0 - 1475.3 1511.4 - 192 0 0.2 - 192 191.8
compute-76-02 - 1511.4 35.7 -0.0 - 1475.7 1511.4 - 192 0 0.1 - 192 191.9
compute-93-05 - 2016.3 34.5 0.0 - 1981.8 2016.3 - 96 0 0.0 - 96 96.0
compute-93-06 - 3023.9 34.9 0.0 - 2989.0 3023.9 - 56 0 0.0 - 56 56.0
======= ===== ====== ==== ==== =====
Totals 8063.0 141.2 0.0 536 0 0.3
==> 1.8% 0.0% ==> 0.0% 0.1%
Most unreserved/unused memory (3023.9/2989.0GB) is on compute-93-06 with 56/56.0 slots/CPUs free/unused.
Past Memory Usage vs Memory Reservation
Past memory use in hi-mem queues between 03/05/25 and 03/12/25
queues: ?ThM.q
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
gotzekd 1/6 0.00
lealc 3/3 0.00 133.8 24.0 29.9 2.6 0.8
blackburnrc 3/126 0.00 38.7 840.0 609.0 13.8 1.4
beckerm 2/16 0.01 85.3 160.0 6.3 5.4 25.6 > 2.5
triznam 3/3 0.02 1563.7 24.0 9.0 8.3 2.7 > 2.5
bourkeb 18/160 0.04 73.3 256.0 3.7 2.1 70.1 > 2.5
parkerld 5/80 0.05 25.7 8.0 1.2 1.2 6.5 > 2.5
macdonaldk 16/160 0.05 733.3 240.0 14.3 14.2 16.8 > 2.5
willishr 4/24 0.07 95.8 200.0 588.1 52.2 0.3
ariasc 17/510 0.08 55.1 600.0 139.1 86.9 4.3 > 2.5
kistlerl 11/18 0.13 26.4 107.5 21.8 15.1 4.9 > 2.5
zehnpfennigj 2/10 0.21 21.9 50.0 609.7 566.0 0.1
nelsonjo 4/96 0.23 20.6 384.0 165.7 3.3 2.3
mghahrem 24/768 0.36 1.7 275.8 293.4 31.9 0.9
holmk 5/40 0.44 74.4 96.0 24.4 8.2 3.9 > 2.5
auscavitchs 2/18 0.46 80.1 300.0 172.7 77.3 1.7
macguigand 19/71 0.74 103.1 44.6 8.2 4.5 5.4 > 2.5
uribeje 17/264 0.79 66.0 209.5 24.3 14.9 8.6 > 2.5
classenc 7/224 1.07 41.3 512.0 208.2 7.0 2.5
carrionj 10/140 1.43 52.8 330.2 139.4 1.1 2.4
collensab 3/33 2.90 1.7 29.9 8.8 0.1 3.4 > 2.5
mcgowenm 2/12 3.02 16.6 900.0 9.4 4.3 96.1 > 2.5
bakerd 1/4 4.34 86.3 200.0 35.8 17.0 5.6 > 2.5
magalhaesm 246/984 4.59 89.3 100.0 16.7 6.8 6.0 > 2.5
pcristof 1178/35340 4.96 53.3 450.0 37.5 1.3 12.0 > 2.5
adonath 2921/23368 7.43 10.0 128.0 51.7 3.1 2.5
girardmg 1390/6310 26.94 26.5 77.5 30.9 4.3 2.5 > 2.5
pappalardop 519/519 36.19 98.9 300.0 1.2 1.1 253.0 > 2.5
gonzalezb 46/217 43.22 91.8 48.7 33.9 20.1 1.4
quattrinia 6080/6080 748.33 99.6 16.0 5.4 4.1 2.9 > 2.5
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 12559/75604 888.08 95.0 40.6 9.1 5.0 4.4 > 2.5
---
queues: ?TxlM.rq
----------- total --------- -------------------- mean --------------------
user no. of elapsed time eff. reserved maxvmem average ratio
name jobs/slots [d] [%] [GB] [GB] [GB] resd/maxvmem
--------------- -------------- ------------ ----- --------- -------- --------- ------------
--------------- -------------- ------------ ----- --------- -------- --------- ------------
all 0/0 0.00
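The ratio column in the tables above flags users whose mean reserved memory exceeds 2.5x their mean maxvmem, i.e. jobs that reserve far more memory than they use; a minimal sketch of that check, assuming ratio = mean reserved / mean maxvmem (small differences from the table come from the rounded values used here):

    # Assumption: ratio = mean reserved [GB] / mean maxvmem [GB], flagged when > 2.5.
    def over_reserved(reserved_gb, maxvmem_gb, threshold=2.5):
        ratio = reserved_gb / maxvmem_gb
        return round(ratio, 1), ratio > threshold

    print(over_reserved(16.0, 5.4))    # quattrinia: (3.0, True)
    print(over_reserved(48.7, 33.9))   # gonzalezb:  (1.4, False)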
Resource Limits
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to num_gpu=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to num_gpu=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to num_gpu=4
users {*} queues {mTgpu.q} to num_gpu=3
users {*} queues {lTgpu.q} to num_gpu=2
users {*} queues {qgpu.iq} to num_gpu=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total bigtmp concurrent requests per user
users {*} to big_tmp=25
Limit total number of idl licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for the lTWFM.sq queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
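To illustrate how the per-user caps combine, here is a hypothetical pre-submission check of a hiMem request against the slot and reserved-memory limits listed above; the fits helper and the example requests are made up, and the check ignores any jobs the user already has running:

    # Hypothetical check of a hiMem request against the per-user caps listed above.
    MAX_SLOTS_PER_USER  = 840                                           # all queues combined
    MAX_HIMEM_SLOTS     = {"sThM.q": 840, "mThM.q": 585, "lThM.q": 390, "uThM.q": 73}
    MAX_HIMEM_MEM_RES_G = 9159                                          # per user, all hiMem queues

    def fits(queue, slots, mem_res_per_slot_g):
        if slots > MAX_SLOTS_PER_USER or slots > MAX_HIMEM_SLOTS[queue]:
            return False
        return slots * mem_res_per_slot_g <= MAX_HIMEM_MEM_RES_G

    print(fits("mThM.q", 572, 16.0))   # True:  572 slots, 9152G reserved, just under the caps
    print(fits("uThM.q", 100, 16.0))   # False: uThM.q allows at most 73 slots per user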
Disk Usage & Quota
As of Wed Mar 12 17:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 17.33T 4.72T 79%/11% /home
netapp-fas83-n01:/vol_data_public 142.50T 44.90T 97.60T 32%/3% /data/public
netapp-fas83-n01:/vol_pool_public 230.00T 97.26T 132.74T 43%/1% /pool/public
gpfs01:public 400.00T 323.89T 76.11T 81%/53% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 285.32G 98%/1% /pool/kozakk
netapp-fas83-n01:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.47T 14.48T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 907.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.18T 5.82T 81%/6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 21.84T 3.16T 88%/41% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 106.28T 13.72T 89%/6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 13.99T 11.01T 56%/4% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 42.51T 17.49T 71%/7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 21.94T 55.06T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 8.17T 16.83T 33%/2% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 14.79T 20.21T 43%/25% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 33.61T 16.39T 68%/8% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 1.16T 38.84T 3%/1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 40.96T 4.04T 92%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 269.56T 80.44T 78%/4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 33.39T 46.61T 42%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 19.04T 5.96T 77%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 6.70T 8.30T 45%/4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 46.20T 23.80T 67%/47% /scratch/sylvain
gpfs01:usda_sel 25.00T 6.22T 18.78T 25%/7% /scratch/usda_sel
gpfs01:wrbu 50.00T 35.96T 14.04T 72%/6% /scratch/wrbu
netapp-fas83-n02:/vol_data_admin 4.75T 35.28G 4.72T 1%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 32.39T 15.11T 69%/1% /pool/admin
gpfs01:admin 20.00T 3.58T 16.42T 18%/31% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 1.27T 8.73T 13%/8% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
nas1:/mnt/pool/admin 20.00T 7.90T 12.10T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 87.00T 88.00T 50%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 7.86T 32.14T 20%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 11.75T 28.25T 30%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 222.21T 103.85T 118.36T 47%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 17.27T 32.73T 35%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 468.75T 281.25T 63%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 40.00T 26.70T 13.30T 67%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 372.89T 334.17T 38.73T 90%/90% /qnas/mesa
qnas:/sil 3840.36T 2823.90T 1016.47T 74%/74% /qnas/sil
You can view plots of disk use vs time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
Capacity shows the % of disk space used and the % of inodes used.
When too many small files are written to a disk, the file system can run out of inodes and become
unable to create new files even though disk space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used;
if it is much larger, the disk can become unusable before it is full.
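A minimal sketch of that sanity check, assuming the Capacity column's "space%/inode%" format; the warning threshold (factor) is illustrative:

    # Assumption: Capacity is reported as "<space%>/<inode%>", e.g. "79%/11%".
    def inode_warning(capacity, factor=1.5):
        # Warn when the inode percentage is much larger than the space percentage.
        space, inodes = (float(p.rstrip("%")) for p in capacity.split("/"))
        return inodes > factor * space     # "factor" is an illustrative threshold

    print(inode_warning("79%/11%"))   # False (/home)
    print(inode_warning("50%/83%"))   # True  (/scratch/sao_tess)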
You can also view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.17TB 92.7% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 511.4GB 99.9% 1.80M 18.0% *** Michael Trizna, NMNH/BOL - triznam
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 493.2GB 96.3% 0.29M 2.9% *** Paul Cristofari, SAO/SSP - pcristof
/home 478.6GB 93.5% 0.24M 2.4% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 475.0GB 92.8% 0.42M 4.2% Adela Roa-Varon, NMNH - roa-varona
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 7.50TB 100.0% 0.01M 0.1% *** Carlos Arias, STRI - ariasc
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
/pool/public 6.64TB 88.5% 1.39M 7.7% Juan Uribe, NMNH - uribeje
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 14.50TB 96.7% 0.65M 1.7% *** Matthew Girard, NMNH - girardmg
/scratch/public 14.00TB 93.3% 7.13M 18.4% Kevin Mulder, NZP - mulderk
/scratch/public 13.70TB 91.3% 2.41M 6.2% Henrique Figueiro, SCBI - figueiroh
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.49TB 89.9% - - Matthew Kweskin, NMNH - kweskinm (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-06 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-09 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-21 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-25 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-26 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-04 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 61.4G 6.92T 0.9% | 100.4G 6.88T 1.4% | 1.63
75-06 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-07 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 31.7G 1.71T 1.8% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 133.2T 975.9G 132.3T 0.7% | 100.4G 133.1T 0.1% | 0.10
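The Resd% and Resd/Used columns are simple ratios of the reserved space to the SSD size and to the space actually in use; a minimal sketch using node 75-05 and the Total row (small differences come from the rounded values shown in the table):

    # Assumption: Resd% = Resd / Size and Resd/Used = Resd / Used (values in GB below).
    def ssd_ratios(size_gb, used_gb, resd_gb):
        return round(100.0 * resd_gb / size_gb, 1), round(resd_gb / used_gb, 2)

    print(ssd_ratios(6.98 * 1024, 61.4, 100.4))    # node 75-05: (1.4, ~1.63)
    print(ssd_ratios(133.2 * 1024, 975.9, 100.4))  # Total:      (0.1, ~0.10)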
Note: the disk usage and quota reports are compiled 4x/day; the SSD usage is updated every 10 minutes.