Early deletion in gw_ml.F90 #37

@ma595

Description

call torch_delete(net_output_tensors(1))

Aren't we deleting too early, since the allocation of net_outputs is aliased by net_output_tensors?

Or perhaps the call to torch_delete(net_output_tensors(1)) only destroys the PyTorch tensor wrapper, not the memory buffer it points to (which is net_outputs(:, i))? There is a check for zero outputs, so if the buffer really were freed, wouldn't net_outputs be nonsense and fail that check? This doesn't tally with our observations (3-year runs were successful at the time).

If we moved to batching and pushed the deletions a little lower, this wouldn't be a discussion anyway.
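To make the "push the deletions lower" suggestion concrete, here is a minimal sketch. Only torch_delete, net_outputs, and net_output_tensors come from this issue; the loop variable, its bound n_samples, and the comment placeholders for the forward pass are assumptions about the surrounding code, not the actual implementation in gw_ml.F90.

! Hedged sketch: delete the tensor wrapper only after the last read
! of the memory it aliases.
do i = 1, n_samples
   ! ... forward pass writes into net_output_tensors(1), which
   ! aliases net_outputs(:, i) ...

   ! ... read net_outputs(:, i) here, while the wrapper is alive ...
end do

! deletion moved below all uses of the aliased buffer
call torch_delete(net_output_tensors(1))

Ordering it this way makes the run's observed correctness independent of whether torch_delete frees the underlying buffer or only the wrapper, which sidesteps the ambiguity above.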

Maybe @jatkinson1000 can help with this.
