Hydra has been successfully moved to the new data center and is operational.
Updates are described in detail at the HPC Wiki 2025 Data Center Move page.
Please take the time needed to read these pages before contacting us for support.
If the government remains shut down as of Tuesday, October 14, 2025,
please note that Hydra will remain powered up for use.
The data center will have essential personnel remaining
through the shutdown to maintain the integrity of the IT infrastructure.
Access to Hydra will not be cut off, so users will be able to submit jobs
on the newly operational cluster.
Any major failure of the IT infrastructure (hardware or software) is unlikely
to be fixed during the shutdown, so access to Hydra and its resources
cannot be guaranteed.
Support for Hydra will be limited to urgent matters;
please contact SI-HPC-Admin@si.edu during the shutdown.
You can view the list of all the available modules as an HTML document
or as a plain ASCII text file.
You can also check the bandwidth between
SAO and HDC.
You can select to have this page refreshed every 5m, 20m, or 1hr;
this one will auto-refresh every 1hr.
Usage
Current snapshot, sorted by the selected node property.
Usage vs. time for the selected time span, with the selected user highlighted.
As of Wed Oct 8 20:57:09 2025: #CPUs/nodes 5636/74, 0 down. Loads:
head node: 1.47, login nodes: 0.54, 0.34, 0.62, 0.00; NSDs: 0.14, 0.04, 0.18, 27.43, 31.71; licenses: none used.
Queues status: 24 disabled, none need attention, none in error state.
18 users with running jobs (slots/jobs).
Total number of queued jobs/tasks/slots: 19/988/1,148
63 users have/had running or queued jobs over the past 7 days, 90 over the past 15 days,
and 111 over the past 30 days.
Click on the tabs to view each section, and on the plots to view larger versions.
You can view the current cluster snapshot sorted by name, no. of CPUs, usage, load,
or memory, and view the past load for 7, 15, or 30 days, as well as highlight a
given user, by selecting the corresponding options in the drop-down menus.
This page was last updated on Wednesday, 08-Oct-2025 21:02:16 EDT
with mk-webpage.pl ver. 7.2/1 (Aug 2024/SGK) in 0:49.
Warnings
Oversubscribed Jobs
As of Wed Oct 8 20:57:16 EDT 2025 (0 oversubscribed jobs)
Inefficient Jobs
As of Wed Oct 8 20:57:22 EDT 2025 (10 inefficient jobs, showing no more than 3 per user)
Total running (PEs/jobs) = 1308/689, 19 queued (jobs), 1 extra, showing only inefficient jobs (cpu% < 33% & age > 1h) for all users.
jobID name user age nPEs cpu% queue node taskID
10254485 job_00_kraken2- scottjj 08:34 30 1.0% mThM.q 76-06
10253677 Delphinidae_IQT mcgowenm +1:04 12 8.3% lThM.q 84-01
10255071 demultiplex lealc 04:31 12 8.2% lThM.q 76-14
10254266 ivar_consensus_ bourkeb 09:12 8 16.4% sThM.q 93-06
10254479 acanth_mitobim_ wirshingh 08:37 6 17.1% mThC.q 76-12
10254515 callo_mitobim_l wirshingh 07:53 6 17.1% mThC.q 64-04
10254523 eun_mitobim_loo wirshingh 07:03 6 16.8% mThC.q 64-06
(more by wirshingh)
10255079 assemble_2025-1 girardmg 01:23 2 1.8% lTWFM.sq 64-15
⇒ Equivalent to 85.3 underused CPUs: 94 CPUs used at 9.2% on average.
To see them all, use: 'q+ -ineff -u wirshingh' (5 jobs)
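For reference, the "equivalent underused CPUs" figure above is simple arithmetic
over the table. A minimal Python sketch, not the site's actual q+ code, with the
(nPEs, cpu%) pairs transcribed from the table; the two wirshingh jobs hidden by
the 3-per-user cap are assumed to match the visible ones:

    # (nPEs, cpu%) for each inefficient job listed above.
    jobs = [
        (30, 1.0), (12, 8.3), (12, 8.2), (8, 16.4),
        (6, 17.1), (6, 17.1), (6, 16.8),   # wirshingh, shown
        (6, 17.1), (6, 17.1),              # wirshingh, hidden (assumed values)
        (2, 1.8),
    ]

    total_cpus = sum(n for n, _ in jobs)                  # 94
    avg_util = sum(n * p for n, p in jobs) / total_cpus   # ~9.2-9.3 (percent)
    underused = total_cpus * (1 - avg_util / 100)         # ~85.3

    print(f"{total_cpus} CPUs at {avg_util:.1f}% -> {underused:.1f} underused CPUs")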
Nodes with Excess Load
As of Wed Oct 8 20:57:30 EDT 2025 (4 nodes have a high load, offset=1.5)
                #slots          excess
node    #CPUs    used    load    load
--------------------------------------
65-06      64      12    13.7     1.7 *
65-14      64      14    16.8     2.8 *
76-03     192      33    37.0     4.0 *
76-04     192      24    37.6    13.6 *
--------------------------------------
Total excess load = 22.1
1 for bourkeb
1 for hawkinsmt
1 for hinckleya
10 for mcfaddenc
1 for mcgowenm
5 for quattrinia
2 for scottjj
2 for stlaurentr
5 for sylvain
1 for uribeje
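The excess-load figures above follow directly from the table: a node is flagged
(the trailing "*") when its load exceeds its number of occupied slots by more
than the offset (1.5 here). A minimal Python sketch, an illustration rather than
the page's generator, with the values transcribed from the table:

    OFFSET = 1.5
    nodes = {                  # node: (nCPU, slots used, load)
        "65-06": (64, 12, 13.7),
        "65-14": (64, 14, 16.8),
        "76-03": (192, 33, 37.0),
        "76-04": (192, 24, 37.6),
    }

    total_excess = 0.0
    for name, (ncpu, used, load) in nodes.items():
        excess = load - used            # load not accounted for by busy slots
        if excess > OFFSET:             # flagged on the page
            total_excess += excess
            print(f"{name}: excess load {excess:.1f}")
    print(f"Total excess load = {total_excess:.1f}")   # 22.1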
As of Wed Oct 8 20:57:22 EDT 2025
3612 avail(slots), free(load)=4916.8, unresd(mem)=19290.8G, for hgrp=@hicpu-hosts and minMem=1.0G/slot
total(nCPU) 4928 total(mem) 38.1T
unused(slots) 3734 unused(load) 4916.8 ie: 75.8% 99.8%
unreserved(mem) 18.8T unused(mem) 35.7T ie: 49.4% 93.7%
unreserved(mem) 5.2G unused(mem) 9.8G per unused(slots)
3262 avail(slots), free(load)=4540.9, unresd(mem)=15965.5G, for hgrp=@himem-hosts and minMem=1.0G/slot
total(nCPU) 4552 total(mem) 35.0T
unused(slots) 3384 unused(load) 4540.9 ie: 74.3% 99.8%
unreserved(mem) 15.6T unused(mem) 32.7T ie: 44.5% 93.3%
unreserved(mem) 4.7G unused(mem) 9.9G per unused(slots)
448 avail(slots), free(load)=535.5, unresd(mem)=5657.3G, for hgrp=@xlmem-hosts and minMem=1.0G/slot
total(nCPU) 536 total(mem) 7.9T
unused(slots) 448 unused(load) 535.5 ie: 83.6% 99.9%
unreserved(mem) 5.5T unused(mem) 7.7T ie: 70.2% 98.1%
unreserved(mem) 12.6G unused(mem) 17.6G per unused(slots)
0 avail(slots), free(load)=0.0, unresd(mem)=0.0G, for hgrp=@gpu-hosts and minMem=1.0G/slot
total(nCPU) 0 total(mem) 0.0T
unused(slots) 0 unused(load) 0.0 ie: 0.0% 0.0%
unreserved(mem) 0.0T unused(mem) 0.0T ie: 0.0% 0.0%
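The percentage and per-slot columns in these blocks are plain ratios of the raw
totals. A minimal Python sketch using the @hicpu-hosts figures above, assuming
1T = 1024G (consistent with the numbers shown):

    total_cpu, total_mem_g = 4928, 38.1 * 1024        # 38.1T in GiB
    unused_slots, unused_load = 3734, 4916.8
    unresd_mem_g, unused_mem_g = 19290.8, 35.7 * 1024

    print(f"unused(slots) {100 * unused_slots / total_cpu:.1f}%")    # 75.8%
    print(f"unused(load)  {100 * unused_load / total_cpu:.1f}%")     # 99.8%
    print(f"unresd(mem)   {100 * unresd_mem_g / total_mem_g:.1f}%")  # 49.4%
    print(f"unused(mem)   {100 * unused_mem_g / total_mem_g:.1f}%")  # 93.7%
    print(f"{unresd_mem_g / unused_slots:.1f}G unreserved per unused slot")  # 5.2G
    print(f"{unused_mem_g / unused_slots:.1f}G unused per unused slot")      # 9.8G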
GPU Usage
Wed Oct 8 20:57:45 EDT 2025
hostgroup: @gpu-hosts (3 hosts)
- --- memory (GB) ---- - #GPU - --------- slots/CPUs ---------
hostname - total used resd - a/u - nCPU used load - free unused
compute-50-01 - 503.3 11.1 492.2 - 0/0 - 64 0 0.0 - 64 64.0
compute-79-01 - 125.5 node down - x - 20 x x - x x
compute-79-02 - 125.5 node down - x - 20 x x - x x
Total #GPU=0 used=0 (0.0%)
Waiting Job(s)
As of Wed Oct 8 20:57:30 EDT 2025
9 jobs waiting for girardmg (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
10255086 assemble_2025-1 girardmg 04:10 2 lTWFM.sq
10255802 nf-WF1_ASSEMBLE girardmg 00:04 4 100.0
10255803 nf-WF1_ASSEMBLE girardmg 00:04 4 100.0
10255804 nf-WF1_ASSEMBLE girardmg 00:04 4 100.0
10255805 nf-WF1_ASSEMBLE girardmg 00:03 4 100.0
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.984T/8.944T 100.4% for girardmg in queue uThM.q
wfm_slots_per_user/1 slots=2/2 100.0% for girardmg in queue lTWFM.sq
max_slots_per_user/1 slots=370/840 44.0% for girardmg
max_hM_slots_per_user/1 slots=368/840 43.8% for girardmg in queue sThM.q
------------------- ------------------------------- ------
9 jobs waiting for jenkinskel (top 5):
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
10250014 Full_Matrix_Bay jenkinskel +2:07 16 uThC.q
10250015 Full_Matrix_Bay jenkinskel +2:07 16 uThC.q
10250016 Full_Matrix_Bay jenkinskel +2:07 16 uThC.q
10250018 Full_Matrix_Bay jenkinskel +2:07 16 uThC.q
10250019 Full_Matrix_Bay jenkinskel +2:07 16 uThC.q
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_hC_slots_per_user/4 slots=136/143 95.1% for jenkinskel in queue uThC.q
max_slots_per_user/1 slots=136/840 16.2% for jenkinskel
max_mem_res_per_user/1 mem_res=16.00G/9.985T 0.2% for jenkinskel in queue uThC.q
------------------- ------------------------------- ------
1 job waiting for mcfaddenc:
jobID jobName user age nPEs memReqd queue taskID
--------- --------------- ---------------- ------ ---- -------- ------ -------
10255644 alignfasta-ARRA mcfaddenc 01:03 1 16.0 mThM.q 2079-3037:1
quota rule resource=value/limit %used
------------------- ------------------------------- ------
max_mem_res_per_user/2 mem_res=8.906T/8.944T 99.6% for mcfaddenc in queue uThM.q
max_hM_slots_per_user/2 slots=567/585 96.9% for mcfaddenc in queue mThM.q
max_slots_per_user/1 slots=567/840 67.5% for mcfaddenc
------------------- ------------------------------- ------
Overall Quota Usage
quota rule resource=value/limit %used
------------------- ------------------------------- ------
total_mem_res/2 mem_res=20.03T/35.78T 56.0% for * in queue uThM.q
blast2GO/1 slots=41/110 37.3% for *
total_slots/1 slots=1317/5960 22.1% for *
total_mem_res/3 mem_res=1.611T/7.874T 20.5% for * in queue uTxlM.rq
total_mem_res/1 mem_res=228.0G/39.94T 0.6% for * in queue uThC.q
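In all of these quota tables, %used is simply value/limit after normalizing the
G and T units. A minimal Python sketch (hypothetical helper names; assumes
1T = 1024G, consistent with the figures shown):

    UNITS = {"G": 1.0, "T": 1024.0}   # GiB per unit suffix

    def to_gib(s):
        """Convert a string like '20.03T' or '228.0G' to GiB."""
        return float(s[:-1]) * UNITS[s[-1]]

    def pct_used(spec):
        """'mem_res=20.03T/35.78T' -> 56.0 (percent used)."""
        value, limit = spec.split("=")[1].split("/")
        return 100.0 * to_gib(value) / to_gib(limit)

    print(f"{pct_used('mem_res=20.03T/35.78T'):.1f}%")   # 56.0%
    print(f"{pct_used('mem_res=228.0G/39.94T'):.1f}%")   # 0.6%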
Memory Usage
Reserved Memory, All High-Memory Queues
Current Memory Quota Usage
As of Wed Oct 8 20:57:30 EDT 2025
quota rule resource=value/limit %used filter
---------------------------------------------------------------------------------------------------
total_mem_res/1 mem_res=228.0G/39.94T 0.6% for * in queue uThC.q
total_mem_res/2 mem_res=20.03T/35.78T 56.0% for * in queue uThM.q
total_mem_res/3 mem_res=1.611T/7.874T 20.5% for * in queue uTxlM.rq
Current Memory Usage by Compute Node, High Memory Nodes Only
Limit slots for all users together
users * to slots=5960
users * queues sThC.q,lThC.q,mThC.q,uThC.q to slots=5176
users * queues sThM.q,mThM.q,lThM.q,uThM.q to slots=4680
users * queues uTxlM.rq to slots=536
users * queues sTgpu.q,mTgpu.q,lTgpu.q to slots=104
Limit slots/user for all queues
users {*} to slots=840
Limit slots/user in hiCPU queues
users {*} queues {sThC.q} to slots=840
users {*} queues {mThC.q} to slots=840
users {*} queues {lThC.q} to slots=431
users {*} queues {uThC.q} to slots=143
Limit slots/user for hiMem queues
users {*} queues {sThM.q} to slots=840
users {*} queues {mThM.q} to slots=585
users {*} queues {lThM.q} to slots=390
users {*} queues {uThM.q} to slots=73
Limit slots/user for xlMem restricted queue
users {*} queues {uTxlM.rq} to slots=536
Limit total reserved memory for all users per queue type
users * queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=40902G
users * queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=36637G
users * queues uTxlM.rq to mem_res=8063G
Limit reserved memory per user for specific queues
users {*} queues sThC.q,mThC.q,lThC.q,uThC.q to mem_res=10225G
users {*} queues sThM.q,mThM.q,lThM.q,uThM.q to mem_res=9159G
users {*} queues uTxlM.rq to mem_res=8063G
Limit slots/user for interactive (qrsh) queues
users {*} queues {qrsh.iq} to slots=16
Limit GPUs for all users in GPU queues to the available number of GPUs
users * queues {sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq} to GPUS=8
Limit GPUs per user in all the GPU queues
users {*} queues sTgpu.q,mTgpu.q,lTgpu.q,qgpu.iq to GPUS=4
Limit GPUs per user in each GPU queue
users {*} queues {sTgpu.q} to GPUS=4
users {*} queues {mTgpu.q} to GPUS=3
users {*} queues {lTgpu.q} to GPUS=2
users {*} queues {qgpu.iq} to GPUS=1
Limits that set aside a slot for blast2GO
users * queues !lTb2g.q hosts {@b2g-hosts} to slots=110
users * queues lTb2g.q hosts {@b2g-hosts} to slots=1
users {*} queues lTb2g.q hosts {@b2g-hosts} to slots=1
Limit total concurrent bigtmp requests per user
users {*} to big_tmp=25
Limit total number of IDL licenses per user
users {*} to idlrt_license=102
Limit slots for io queue per user
users {*} queues {lTIO.sq} to slots=8
Limit slots for workflow manager queue per user
users {*} queues {lTWFM.sq} to slots=2
Limit the number of concurrent jobs per user for some queues
users {*} queues {uTxlM.rq} to no_concurrent_jobs=3
users {*} queues {lTIO.sq} to no_concurrent_jobs=2
users {*} queues {lTWFM.sq} to no_concurrent_jobs=1
users {*} queues {qrsh.iq} to no_concurrent_jobs=4
users {*} queues {qgpu.iq} to no_concurrent_jobs=1
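Before submitting, you can sanity-check a planned job against the per-user caps
listed above with simple arithmetic. A minimal Python sketch (a hypothetical
helper, not a site tool) using only the hiMem per-user limits from this listing;
note that the other limits (total slots per user, queue-wide totals) still apply:

    # Per-user slot caps in the hiMem queues, and the per-user
    # reserved-memory cap (9159G across all hiMem queues), as listed above.
    HIMEM_SLOT_CAPS = {"sThM.q": 840, "mThM.q": 585, "lThM.q": 390, "uThM.q": 73}
    HIMEM_MEM_RES_CAP_G = 9159

    def fits(queue, slots, mem_res_g_per_slot):
        """True if this single request stays within the two hiMem caps."""
        if slots > HIMEM_SLOT_CAPS[queue]:
            return False
        return slots * mem_res_g_per_slot <= HIMEM_MEM_RES_CAP_G

    print(fits("mThM.q", 40, 16.0))   # True: 40 slots, 640G reserved
    print(fits("uThM.q", 80, 8.0))    # False: exceeds the 73-slot cap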
You can view plots of disk use vs. time for the past 7, 30, or 120 days,
as well as plots of disk usage by user or by device
(for the past 90 or 240 days, respectively).
Notes
Capacity shows % disk space full and % of inodes used.
When too many small files are written to a disk, the file system can run out of
inodes and become unable to keep track of new files, even though free space remains.
The % of inodes used should therefore be lower than, or comparable to, the % of
disk space used; if it is much larger, the disk can become unusable before it is
actually full.
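Both capacity figures can be checked on any mounted file system via statvfs.
A minimal Python sketch (the path is illustrative; it assumes the file system
reports inode counts):

    import os

    def capacity(path):
        """Return (% disk space used, % inodes used) for the given mount."""
        st = os.statvfs(path)
        pct_space = 100.0 * (1 - st.f_bavail / st.f_blocks)
        pct_inodes = 100.0 * (1 - st.f_favail / st.f_files)
        return pct_space, pct_inodes

    space, inodes = capacity("/")
    print(f"space {space:.0f}% full, inodes {inodes:.0f}% used")
    if inodes > space:
        print("warning: inode use outpacing space -- too many small files?")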