High-Memory Queues
Details on Jobs that Produced a Warning
As of Fri Jan 9 11:07:05 EST 2026
jobID.tID  jobName         user     queue  - mem_res    max(vmem) -    vmem   avgmem   cpu   age  eff.     nslots
------------------------------------------------------------------------------------------------------------------------
11796567.1 ass7            peresph  mThM.q -  120.0G >>   48.942G -  0.408G  17.868G 23.6h 19.1h 10.3% 12/mthread
11796568.1 ass8            peresph  mThM.q -  120.0G >>   26.125G -  0.580G  12.804G  1.6d 19.1h 16.4% 12/mthread
11796571.1 ass11           peresph  mThM.q -  120.0G >>   41.762G -  0.311G  17.517G 30.9h 19.0h 13.5% 12/mthread
11796560.1 ass1            peresph  mThM.q -  120.0G  >  114.401G - 10.448G  28.549G  1.6d 19.1h 16.5% 12/mthread
11797011.1 make_SFS.job    beckerm  mThM.q -  240.0G >>  105.703G - 93.942G 103.245G 30.6h  1.9h 67.1% 24/mthread
11795842.2 metawrap_black_ vohsens  mThM.q -  256.0G >>   67.128G - 35.046G   5.979G 28.4d  1.9d 91.1% 16/mthread
11796573.1 ass12           peresph  mThM.q -  120.0G >>   23.277G -  0.590G  14.137G  1.7d 19.0h 18.1% 12/mthread
11796562.1 ass3            peresph  mThM.q -  120.0G >>   26.066G - 26.066G  11.948G 35.2h 19.1h 15.4% 12/mthread
11796565.1 ass6            peresph  mThM.q -  120.0G >>   26.335G -  0.527G  11.940G  1.7d 19.1h 18.1% 12/mthread
11796569.1 ass9            peresph  mThM.q -  120.0G >>   43.582G -  0.463G  16.916G 23.6h 19.1h 10.3% 12/mthread
11796564.1 ass5            peresph  mThM.q -  120.0G >>   22.790G -  0.319G  10.611G 30.3h 19.1h 13.2% 12/mthread
11796570.1 ass10           peresph  mThM.q -  120.0G >>   26.041G -  0.377G  12.617G 30.7h 19.0h 13.4% 12/mthread
11796563.1 ass4            peresph  mThM.q -  120.0G >>   48.905G -  0.871G  18.195G 34.5h 19.1h 15.1% 12/mthread
11796561.1 ass2            peresph  mThM.q -  120.0G >>   41.553G -  0.346G  15.712G 30.6h 19.1h 13.3% 12/mthread
11796959.1 phyluce_align_s nelsonjo lThM.q -  384.0G >>   69.848G - 55.323G   1.883G  6.5d 13.2h 98.8% 12/mthread
A warning is generated if either:
• too much or too little memory is reserved (mem_res versus max(vmem)); or
• the job efficiency (eff.) is too low or too high.
Click on the link under the jobID.tID heading to view the job's corresponding graph.
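The sketch below illustrates, in Python, how these two checks could be applied to one row of the table. The numeric thresholds, the message wording, and the function name job_warnings are illustrative assumptions, not the actual rules coded in mk-web-page.pl.

    # Minimal sketch of the two warning criteria described above; the cut-off
    # values used here are assumptions, not the script's real thresholds.
    def job_warnings(mem_res_gb: float, max_vmem_gb: float,
                     cpu_h: float, age_h: float, nslots: int) -> list[str]:
        """Return the reasons a job would be flagged (empty list = no warning)."""
        reasons = []
        # Criterion 1: memory reserved (mem_res) vs. peak memory used (max(vmem)).
        if mem_res_gb > 2.0 * max_vmem_gb:            # assumed factor
            reasons.append("reserved far more memory than used")
        elif max_vmem_gb > 0.9 * mem_res_gb:          # assumed factor
            reasons.append("close to, or over, the memory reservation")
        # Criterion 2: CPU efficiency too low or above 100%.
        eff = cpu_h / (age_h * nslots)
        if eff < 0.33:                                # assumed cut-off
            reasons.append(f"low efficiency ({eff:.1%})")
        elif eff > 1.0:
            reasons.append(f"efficiency above 100% ({eff:.1%})")
        return reasons

    # First row of the table (job 11796567.1):
    print(job_warnings(120.0, 48.942, cpu_h=23.6, age_h=19.1, nslots=12))
    # -> ['reserved far more memory than used', 'low efficiency (10.3%)']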
The quantity mem_res is the amount of memory reserved for the job, while max(vmem) is the maximum amount of memory the job has used (so far); to optimize the cluster's memory usage, these two numbers should be similar.
The job efficiency, eff., is the amount of CPU time used so far (cpu) divided by the product of the job's age and the number of slots (nslots).
• A low efficiency means that the job is not using all of its allocated CPUs (slots);
• a value above 100% means that the job is using more CPU cycles (threads) than the requested number of slots (nslots).
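As a worked check of this definition, the snippet below recomputes eff. for two rows of the table; small differences from the printed column come from the rounded cpu and age values shown in the table.

    # eff. = cpu / (age * nslots), with cpu and age expressed in the same unit.
    def efficiency(cpu_hours: float, age_hours: float, nslots: int) -> float:
        return cpu_hours / (age_hours * nslots)

    # Job 11796567.1: 23.6h of CPU over 19.1h on 12 slots -> about 10.3%.
    print(f"{efficiency(23.6, 19.1, 12):.1%}")        # 10.3%

    # Job 11796959.1: 6.5d (156h) of CPU over 13.2h on 12 slots -> about 98.5%
    # (reported as 98.8%, the difference coming from the rounded cpu and age).
    print(f"{efficiency(6.5 * 24, 13.2, 12):.1%}")    # 98.5%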
This page was last updated on Friday, 09-Jan-2026 11:12:12 EST with mk-web-page.pl ver. 7.3/1 (Oct 2025/SGK)