Disk Usage & Quota
As of Wed Mar 12 11:06:02 EDT 2025
Disk Usage
Filesystem Size Used Avail Capacity Mounted on
netapp-fas83:/vol_home 22.05T 17.50T 4.55T 80%/11% /home
netapp-fas83-n01:/vol_data_public 142.50T 44.90T 97.60T 32%/3% /data/public
netapp-fas83-n01:/vol_pool_public 230.00T 96.86T 133.14T 43%/1% /pool/public
gpfs01:public 400.00T 323.26T 76.74T 81%/53% /scratch/public
netapp-fas83-n02:/vol_pool_kozakk 11.00T 10.72T 285.32G 98%/1% /pool/kozakk
netapp-fas83-n01:/vol_pool_nmnh_ggi 21.00T 13.80T 7.20T 66%/1% /pool/nmnh_ggi
netapp-fas83-n02:/vol_pool_sao_access 19.95T 5.47T 14.48T 28%/2% /pool/sao_access
netapp-fas83-n02:/vol_pool_sao_rtdc 10.45T 907.44G 9.56T 9%/1% /pool/sao_rtdc
netapp-fas83-n02:/vol_pool_sylvain 30.00T 24.17T 5.83T 81%/6% /pool/sylvain
gpfs01:nmnh_bradys 25.00T 21.84T 3.16T 88%/41% /scratch/bradys
gpfs01:nmnh_kistlerl 120.00T 106.28T 13.72T 89%/6% /scratch/kistlerl
gpfs01:nmnh_meyerc 25.00T 13.99T 11.01T 56%/4% /scratch/meyerc
gpfs01:nmnh_quattrinia 60.00T 42.51T 17.49T 71%/7% /scratch/nmnh_corals
gpfs01:nmnh_ggi 77.00T 21.94T 55.06T 29%/5% /scratch/nmnh_ggi
gpfs01:nmnh_lab 25.00T 8.17T 16.83T 33%/2% /scratch/nmnh_lab
gpfs01:nmnh_mammals 35.00T 14.80T 20.20T 43%/25% /scratch/nmnh_mammals
gpfs01:nmnh_mdbc 50.00T 33.61T 16.39T 68%/8% /scratch/nmnh_mdbc
gpfs01:nmnh_ocean_dna 40.00T 1.16T 38.84T 3%/1% /scratch/nmnh_ocean_dna
gpfs01:nzp_ccg 45.00T 44.40T 613.20G 99%/2% /scratch/nzp_ccg
gpfs01:sao_atmos 350.00T 269.56T 80.44T 78%/4% /scratch/sao_atmos
gpfs01:sao_cga 25.00T 9.50T 15.50T 38%/6% /scratch/sao_cga
gpfs01:sao_tess 50.00T 24.82T 25.18T 50%/83% /scratch/sao_tess
gpfs01:scbi_gis 80.00T 33.39T 46.61T 42%/35% /scratch/scbi_gis
gpfs01:nmnh_schultzt 25.00T 19.04T 5.96T 77%/75% /scratch/schultzt
gpfs01:serc_cdelab 15.00T 6.70T 8.30T 45%/4% /scratch/serc_cdelab
gpfs01:stri_ap 25.00T 18.96T 6.04T 76%/1% /scratch/stri_ap
gpfs01:sao_sylvain 70.00T 46.20T 23.80T 67%/47% /scratch/sylvain
gpfs01:usda_sel 25.00T 6.22T 18.78T 25%/7% /scratch/usda_sel
gpfs01:wrbu 50.00T 35.96T 14.04T 72%/6% /scratch/wrbu
netapp-fas83-n02:/vol_data_admin 4.75T 35.26G 4.72T 1%/1% /data/admin
netapp-fas83-n01:/vol_pool_admin 47.50T 32.02T 15.48T 68%/1% /pool/admin
gpfs01:admin 20.00T 3.58T 16.42T 18%/31% /scratch/admin
gpfs01:bioinformatics_dbs 10.00T 5.00T 5.00T 50%/2% /scratch/dbs
gpfs01:tmp 100.00T 38.33T 61.67T 39%/9% /scratch/tmp
gpfs01:ocio_dpo 10.00T 1.27T 8.73T 13%/8% /scratch/ocio_dpo
gpfs01:ocio_ids 5.00T 0.00G 5.00T 0%/1% /scratch/ocio_ids
qnas:/hydra 45.47T 29.07T 16.40T 64%/64% /qnas/hydra
qnas:/nfs-mesa-nanozoomer 372.89T 334.14T 38.75T 90%/90% /qnas/mesa
qnas:/sil 3840.36T 2822.03T 1018.33T 74%/74% /qnas/sil
nas1:/mnt/pool/admin 20.00T 7.90T 12.10T 40%/1% /store/admin
nas1:/mnt/pool/public 175.00T 87.00T 88.00T 50%/1% /store/public
nas1:/mnt/pool/nmnh_bradys 40.00T 7.86T 32.14T 20%/1% /store/bradys
nas2:/mnt/pool/n1p3/nmnh_ggi 90.00T 36.28T 53.72T 41%/1% /store/nmnh_ggi
nas2:/mnt/pool/nmnh_lab 40.00T 11.75T 28.25T 30%/1% /store/nmnh_lab
nas2:/mnt/pool/nmnh_ocean_dna 40.00T 973.76G 39.05T 3%/1% /store/nmnh_ocean_dna
nas1:/mnt/pool/nzp_ccg 222.21T 103.85T 118.36T 47%/1% /store/nzp_ccg
nas2:/mnt/pool/n1p2/ocio_dpo 50.00T 17.27T 32.73T 35%/1% /store/ocio_dpo
nas2:/mnt/pool/n1p1/sao_atmos 750.00T 468.75T 281.25T 63%/1% /store/sao_atmos
nas2:/mnt/pool/n1p2/nmnh_schultzt 40.00T 26.70T 13.30T 67%/1% /store/schultzt
nas1:/mnt/pool/sao_sylvain 50.00T 8.41T 41.59T 17%/1% /store/sylvain
nas1:/mnt/pool/wrbu 80.00T 10.02T 69.98T 13%/1% /store/wrbu
You can view plots of disk use vs. time for the past 7, 30, or 120 days, as well as plots of disk usage by user or by device (for the past 90 or 240 days, respectively).
Notes
The Capacity column shows the % of disk space used and the % of inodes used (space%/inodes%).
When too many small files are written to a disk, the file system can run out of inodes and become unable to create new files even though free space remains.
The % of inodes used should be lower than, or comparable to, the % of disk space used; if it is much larger, the file system can become unusable before it is actually full.
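To check both percentages for a mount point yourself, compare the block and inode counts reported by the kernel. Below is a minimal Python sketch using os.statvfs; the mount points listed are illustrative, and the percentages are approximate (they ignore root-reserved blocks).

    import os

    def usage(path):
        """Return (space % used, inode % used) for the filesystem at path."""
        st = os.statvfs(path)
        space_pct = 100.0 * (1 - st.f_bfree / st.f_blocks) if st.f_blocks else 0.0
        inode_pct = 100.0 * (1 - st.f_ffree / st.f_files) if st.f_files else 0.0
        return space_pct, inode_pct

    # Illustrative mount points; substitute the ones you care about.
    for mount in ("/home", "/pool/public", "/scratch/public"):
        try:
            space_pct, inode_pct = usage(mount)
        except OSError:
            continue  # mount not available on this host
        flag = "  <-- inode-heavy" if inode_pct > space_pct else ""
        print(f"{mount:20s} space {space_pct:5.1f}%  inodes {inode_pct:5.1f}%{flag}")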
You can also view plots of the GPFS IB traffic for the past 1, 7, or 30 days, as well as throughput info.
Disk Quota Report
Volume=NetApp:vol_data_public, mounted as /data/public
-- disk -- -- #files -- default quota: 4.50TB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/data/public 4.17TB 92.7% 5.07M 50.7% Alicia Talavera, NMNH - talaveraa
Volume=NetApp:vol_home, mounted as /home
-- disk -- -- #files -- default quota: 512.0GB/10.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/home 511.4GB 99.9% 1.80M 18.0% *** Michael Trizna, NMNH/BOL - triznam
/home 497.1GB 97.1% 0.12M 1.2% *** Jaiden Edelman, SAO/SSP - jedelman
/home 493.2GB 96.3% 0.29M 2.9% *** Paul Cristofari, SAO/SSP - pcristof
/home 478.6GB 93.5% 0.24M 2.4% Michael Connelly, NMNH - connellym
/home 476.5GB 93.1% 3.30M 33.0% Heesung Chong, SAO/AMP - hchong
/home 475.0GB 92.8% 0.42M 4.2% Adela Roa-Varon, NMNH - roa-varona
/home 443.6GB 86.6% 0.97M 9.7% Hyeong-Ahn Kwon, SAO/AMP - hkwon
Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
-- disk -- -- #files -- default quota: 16.00TB/39.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/nmnh_ggi 13.76TB 86.0% 6.08M 15.6% Vanessa Gonzalez, NMNH/LAB - gonzalezv
Volume=NetApp:vol_pool_public, mounted as /pool/public
-- disk -- -- #files -- default quota: 7.50TB/18.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/pool/public 7.50TB 100.0% 0.01M 0.1% *** Carlos Arias, STRI - ariasc
/pool/public 6.65TB 88.7% 0.24M 1.3% Xiaoyan Xie, SAO/HEA - xxie
/pool/public 6.64TB 88.5% 1.39M 7.7% Juan Uribe, NMNH - uribeje
Volume=GPFS:scratch_public, mounted as /scratch/public
-- disk -- -- #files -- default quota: 15.00TB/38.8M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/public 14.50TB 96.7% 0.65M 1.7% *** Matthew Girard, NMNH - girardmg
/scratch/public 14.00TB 93.3% 7.13M 18.4% Kevin Mulder, NZP - mulderk
/scratch/public 13.70TB 91.3% 2.41M 6.2% Henrique Figueiro, SCBI - figueiroh
Volume=GPFS:scratch_stri_ap, mounted as /scratch/stri_ap
-- disk -- -- #files -- default quota: 5.00TB/12.6M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/scratch/stri_ap 14.60TB 97.3% 0.05M 0.4% *** Carlos Arias, STRI - ariasc (15.0TB/12M)
Volume=NAS:store_public, mounted as /store/public
-- disk -- -- #files -- default quota: 0.0MB/0.0M
Disk usage %quota usage %quota name, affiliation - username (indiv. quota)
-------------------- ------- ------ ------ ------ -------------------------------------------
/store/public 4.80TB 96.1% - - *** Madeline Bursell, OCIO - bursellm (5.0TB/0M)
/store/public 4.51TB 90.1% - - Alicia Talavera, NMNH - talaveraa (5.0TB/0M)
/store/public 4.49TB 89.9% - - Matthew Kweskin, NMNH - kweskinm (5.0TB/0M)
/store/public 4.39TB 87.8% - - Mirian Tsuchiya, NMNH/Botany - tsuchiyam (5.0TB/0M)
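Each record in the quota report above has a fixed shape: mount point, disk usage and % of disk quota, file count and % of file quota, an optional "***" flag marking accounts at or near a limit, then "name, affiliation - username" with an optional individual quota in parentheses. The following is a minimal Python sketch for post-processing such lines; the regular expression is an assumption inferred from the rows shown above, and "-" placeholders (as in the /store volumes) parse as None.

    import re

    # Assumed record shape, inferred from the rows above, e.g.:
    #   /scratch/public 14.50TB 96.7% 0.65M 1.7% *** Matthew Girard, NMNH - girardmg
    LINE = re.compile(
        r"^(?P<mount>\S+)\s+"
        r"(?P<disk>[\d.]+[A-Z]B)\s+(?P<disk_pct>[\d.]+)%\s+"
        r"(?:(?P<files>[\d.]+M)|-)\s+(?:(?P<files_pct>[\d.]+)%|-)\s+"
        r"(?P<flag>\*\*\*\s+)?"
        r"(?P<name>.+?)\s+-\s+(?P<user>\S+)"
        r"(?:\s+\((?P<indiv>[^)]+)\))?\s*$"
    )

    def parse(line):
        """Parse one quota-report record into a dict, or return None."""
        m = LINE.match(line)
        if m is None:
            return None
        rec = m.groupdict()
        rec["flagged"] = rec.pop("flag") is not None  # '***' marker present?
        return rec

    sample = "/pool/public 7.50TB 100.0% 0.01M 0.1% *** Carlos Arias, STRI - ariasc"
    print(parse(sample))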
SSD Usage
Node -------------------------- /ssd -------------------------------
Name Size Used Avail Use% | Resd Avail Resd% | Resd/Used   (Resd = reserved)
50-01 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
64-18 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-02 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-03 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-04 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-05 3.49T 24.6G 3.47T 0.7% | 99.3G 3.39T 2.8% | 4.04
65-06 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-09 3.49T 24.6G 3.47T 0.7% | 0.0G 3.49T 0.0% | 0.00
65-10 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-11 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-12 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-13 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-14 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-15 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-16 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-17 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-18 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-19 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-20 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-21 1.75T 15.4G 1.73T 0.9% | 100.4G 1.65T 5.6% | 6.53
65-22 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-23 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-24 1.75T 16.4G 1.73T 0.9% | 100.4G 1.65T 5.6% | 6.12
65-25 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-26 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-27 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-28 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-29 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
65-30 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
75-02 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-03 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-04 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-05 6.98T 66.6G 6.92T 0.9% | 100.4G 6.88T 1.4% | 1.51
75-06 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
75-07 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
76-03 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-04 1.75T 12.3G 1.73T 0.7% | 0.0G 1.75T 0.0% | 0.00
76-13 1.75T 31.7G 1.71T 1.8% | 0.0G 1.75T 0.0% | 0.00
79-01 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
79-02 7.28T 51.2G 7.22T 0.7% | 0.0G 7.28T 0.0% | 0.00
93-05 6.98T 50.2G 6.93T 0.7% | 0.0G 6.98T 0.0% | 0.00
---------------------------------------------------------------
Total 133.2T 988.2G 132.3T 0.7% | 400.4G 132.8T 0.3% | 0.41
Note: the disk usage and quota reports are compiled four times a day; the SSD usage is updated every 10 minutes.