High-Memory Queues

Details on Jobs that Produced a Warning

As of Tue Feb 11 00:27:05 EST 2025
   jobID.tID  jobName         user           queue  - mem_res  max(vmem) -   vmem     avgmem   cpu    age    eff. nslots
------------------------------------------------------------------------------------------------------------------------
 5979491.1    job_02_veba_ass scottjj        mThM.q - 600.0G >>  97.572G -  97.572G  76.223G  20.3h   3.7h  92.2%     6/mthread
 5979310.1    iqtree_75p_ultr morrisseyd     mThM.q - 512.0G >>  34.459G -  34.344G  27.333G   4.1d  12.3h  49.9%    16/mthread
 5979457.1    H-Platy_calsv1  sossajef       lThM.q -  20.0G >>   7.156G -   7.156G   5.712G   7.8h   7.8h  99.6%     1
 5979508.1    spadescript     sossajef       lThM.q - 600.0G >>  21.431G -   5.397G  10.656G  11.7h   1.2h  79.8%    12/mthread
 5978980.1    phyluce.feb25.i cerqueirat     lThM.q - 384.0G >> 149.739G - 139.228G   4.375G  39.3d   3.3d  99.4%    12/mthread
 5979506.1    spadescript     sossajef       lThM.q - 600.0G >>  23.862G -  16.678G  15.682G  12.9h   1.2h  88.1%    12/mthread
 5979497.1    spadescript     sossajef       lThM.q - 600.0G >>  72.715G -  71.852G  57.218G  21.1h   2.0h  88.5%    12/mthread
 5979507.1    spadescript     sossajef       lThM.q - 600.0G >>  24.334G -   3.714G  16.039G  12.9h   1.2h  88.1%    12/mthread
 5974848.1    MC-R1_Platy_cal sossajef       lThM.q -  20.0G >>   0.145G -   0.145G   0.127G   8.0d   8.0d  99.7%     1
 5950302.1    VERPA_NT_blast_ gonzalezv      lThM.q - 800.0G >  713.794G - 654.819G 618.974G 335.5d  26.4d  63.6%    20/mthread
 5950302.2    VERPA_NT_blast_ gonzalezv      lThM.q - 800.0G >> 617.055G - 572.820G 528.517G 287.3d  26.4d  54.4%    20/mthread
 5950302.3    VERPA_NT_blast_ gonzalezv      lThM.q - 800.0G >  674.733G - 596.503G 580.444G 303.1d  26.4d  57.4%    20/mthread
 5950302.4    VERPA_NT_blast_ gonzalezv      lThM.q - 800.0G >> 632.512G - 563.325G 531.038G 260.9d  26.4d  49.4%    20/mthread
 5979496.1    spadescript     sossajef       lThM.q - 600.0G >>  65.679G -  62.249G  52.487G  21.5h   2.0h  90.4%    12/mthread
 5945729.1    DL_GF           liy            uThM.q - 170.0G >> 104.769G -  94.017G  64.566G  27.4d  28.6d  95.6%     1
 5945730.1    Fdan_GF         liy            uThM.q - 300.0G >> 152.767G - 144.601G  89.237G  28.2d  28.6d  98.6%     1
 5977786.1    Fdan_GF_pop     liy            uThM.q - 300.0G >>  58.268G -  43.764G  32.103G   4.6d   4.6d  99.4%     1
 5976937.1    megareads-only  kweskinm     uTxlM.rq - 1.914T >> 361.912G - 361.631G  66.719G 497.1d   6.4d  81.4%    96/mthread
 4974319.1    masurca_2024121 chaks        uTxlM.rq - 1.914T >> 535.659G - 237.471G 213.501G  68.3d  61.2d   2.8%    40/mthread

A warning is generated if either:
 • too much or too little memory is reserved (mem_res versus max(vmem)); or
 • the job efficiency is too low or too high.
Click on the link under the jobID.tID heading to view the job's corresponding graph.


The quantity mem_res is the amount of memory reserved for the job, while max(vmem) is the maximum amount of memory a job has used (so far); to optimize the cluster's memory usage, these two numbers should be similar.
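As a sketch, the reservation check can be thought of as comparing mem_res to max(vmem). The thresholds below (over_factor, slack_gb) are illustrative assumptions, not the values used by the actual report generator:

```python
def memory_warning(mem_res_gb: float, max_vmem_gb: float,
                   over_factor: float = 4.0, slack_gb: float = 2.0) -> bool:
    """Flag a job whose reservation is far from its actual peak usage.

    over_factor and slack_gb are hypothetical thresholds chosen for
    illustration; the real report may use different criteria.
    """
    # Reserved much more memory than the job ever used
    over_reserved = mem_res_gb > over_factor * max_vmem_gb
    # Peak usage is at or near the reservation (risk of exceeding it)
    under_reserved = max_vmem_gb > mem_res_gb - slack_gb
    return over_reserved or under_reserved

# Example from the table: a spadescript job reserved 600.0G
# but peaked at 21.431G, so it would be flagged as over-reserved.
print(memory_warning(600.0, 21.431))
```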


The job efficiency, eff., is the amount of CPU time used so far divided by the product of the age and the number of slots.
 • A low efficiency means that the job is not using all the allocated CPUs (slots);
 • a value above 100% means that the job is using more CPU cycles (threads) than the requested number of slots (nslots).
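The efficiency figure can be reproduced from the table's own columns; a minimal sketch, with all times converted to the same unit (hours):

```python
def efficiency(cpu_hours: float, age_hours: float, nslots: int) -> float:
    """eff. = cpu / (age * nslots), expressed as a percentage."""
    return 100.0 * cpu_hours / (age_hours * nslots)

# First row of the table: cpu = 20.3h, age = 3.7h, 6 slots
# -> about 91%, close to the reported 92.2% (the table's hours are rounded)
print(round(efficiency(20.3, 3.7, 6), 1))
```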

This page was last updated on Tuesday, 11-Feb-2025 00:31:15 EST with mk-web-page.pl ver. 7.2/1 (Aug 2024/SGK)