
Controlling memory usage? #7

@davemcg

My autoNormTest job failed because it used too much memory. I've tried hard-coding the memory to 16 GB in my tmpl file, but that doesn't seem to be taking effect: the limit enforced in the error below is 4194304 KB (4 GB), not the 16 GB I requested.

Error:

slurmstepd: error: Job 42664148 exceeded memory limit (10889240 > 4194304), being killed
slurmstepd: error: Exceeded job memory limit
slurmstepd: error: *** JOB 42664148 ON cn3152 CANCELLED AT 2017-06-06T00:11:58 ***

tmpl:

#!/bin/bash
#SBATCH --nodes=1 #<%= resources$nodes %>:ppn=<%= resources$cores %>
#SBATCH --time=<%= resources$walltime %>
#SBATCH --job-name=<%= job.name %>
#SBATCH --mem=16G
R CMD BATCH --no-save --no-restore "<%= rscript %>"
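
For comparison, here is what I'd expect a resources-driven version of the template to look like. This is only a sketch: it assumes a BatchJobs-style brew template, and `memory` is a hypothetical field name that only gets substituted if the submitting code actually puts a `memory` entry in the resources list (I don't know what field name PopSV uses internally):

#!/bin/bash
#SBATCH --nodes=<%= resources$nodes %>
#SBATCH --cpus-per-task=<%= resources$cores %>
#SBATCH --time=<%= resources$walltime %>
#SBATCH --job-name=<%= job.name %>
# 'memory' is a hypothetical resources field; it is only filled in if the
# submitting call passes it, e.g. resources = list(memory = "16G")
#SBATCH --mem=<%= resources$memory %>
R CMD BATCH --no-save --no-restore "<%= rscript %>"

On the BatchJobs side the matching submission would be something like submitJobs(reg, resources = list(nodes = 1, cores = 1, walltime = "24:00:00", memory = "16G")). Since PopSV drives the submission itself I can't call that directly, but in principle the hard-coded #SBATCH --mem=16G above should have worked too, which is why I'm confused that the 4 GB limit is still being enforced.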

R session (just hangs):

> cnvs.df = autoNormTest("files.RData", "bins.RData")

== 1) Sample QC and reference definition.

Loading registry: /gpfs/gsfs5/users/mcgaugheyd/projects/nei/hufnagel/ddl_nisc_custom_capture/recalibrated_bams/sampQC-files/registry.RData
Status for 1 jobs at 2017-06-06 10:35:21
Submitted: 1 (100.00%)
Started:   1 (100.00%)
Running:   1 (100.00%)
Done:      0 (  0.00%)
Errors:    0 (  0.00%)
Expired:   0 (  0.00%)
Time: min=NAs avg=NAs max=NAs
Waiting [S:1 D:0 E:0 R:1] |+                                 |   0% (00:00:00)^C
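
Note the timestamps: SLURM cancelled the job at 00:11:58, but at 10:35:21 the registry still reports it as Running, so the R session appears to be waiting on a job that no longer exists. The status output above looks like BatchJobs, so after interrupting, the registry can be inspected by hand. A minimal sketch, assuming "sampQC-files" (the directory from the "Loading registry" line above) is the registry's file directory:

library(BatchJobs)

# Assumption: "sampQC-files" is the registry file directory from the log above.
reg <- loadRegistry("sampQC-files")

showStatus(reg)            # same summary table as printed above
ids <- findRunning(reg)    # jobs BatchJobs still believes are running
showLog(reg, ids[1])       # inspect the killed job's log / .Rout file

The log of the killed job should at least confirm whether the R side ever saw the SLURM cancellation, or whether the registry is simply stuck in the Running state.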
