Line 123 in 8ef0f5a:
call torch_delete(net_output_tensors(1))
Aren't we deleting too early here, since the allocation of net_outputs is aliased by net_output_tensors?
Or does the call to torch_delete(net_output_tensors(1)) only destroy the PyTorch tensor wrapper, not the memory buffer it points to (which is net_outputs(:, i))? There is a check for zero outputs, so if that check weren't satisfied, wouldn't net_outputs be nonsense and therefore blow up? That doesn't tally with our observations (3-year runs were successful at the time).
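For context, here is a minimal sketch of the pattern being questioned. Variable names follow the snippet above, but the FTorch call signatures, sizes, and loop structure are my assumptions for illustration, not the exact CAM code:

```fortran
program alias_sketch
   ! Sketch only (assumed FTorch-style API): shows the output tensor acting
   ! as a view onto a Fortran-owned buffer rather than owning its own storage.
   use, intrinsic :: iso_fortran_env, only : real64
   use ftorch, only : torch_tensor, torch_tensor_from_array, torch_delete, torch_kCPU
   implicit none

   integer, parameter :: n_out = 4, ncol = 8          ! illustrative sizes
   real(real64), allocatable, target :: net_outputs(:, :)
   type(torch_tensor) :: net_output_tensors(1)
   integer :: i

   allocate(net_outputs(n_out, ncol))

   do i = 1, ncol
      ! View onto net_outputs(:, i): no copy, the data stays in the Fortran array.
      call torch_tensor_from_array(net_output_tensors(1), net_outputs(:, i), [1], torch_kCPU)

      ! ... forward pass would write into net_output_tensors(1), i.e. into net_outputs(:, i) ...

      ! If torch_delete only drops the libtorch wrapper (the view), then
      ! net_outputs(:, i) is still valid after this point; if it also freed
      ! the buffer, any later read of net_outputs(:, i) would hit freed memory.
      call torch_delete(net_output_tensors(1))
   end do
end program alias_sketch
```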
If we moved to batching and pushed the deletions a little lower, this wouldn't be a discussion anyway.
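A rough sketch of what that batched variant could look like, again with assumed FTorch-style calls rather than the actual routine:

```fortran
program batched_sketch
   ! Sketch only (assumed FTorch-style API): one tensor over the whole
   ! net_outputs buffer, with deletion pushed below the point where the
   ! outputs are consumed.
   use, intrinsic :: iso_fortran_env, only : real64
   use ftorch, only : torch_tensor, torch_tensor_from_array, torch_delete, torch_kCPU
   implicit none

   integer, parameter :: n_out = 4, ncol = 8          ! illustrative sizes
   real(real64), allocatable, target :: net_outputs(:, :)
   type(torch_tensor) :: net_output_tensors(1)

   allocate(net_outputs(n_out, ncol))

   ! Single view over the full 2D buffer, so one forward pass fills every column.
   call torch_tensor_from_array(net_output_tensors(1), net_outputs, [1, 2], torch_kCPU)

   ! ... one batched forward pass writes into net_outputs via the view ...

   ! ... copy/scatter net_outputs(:, :) into the model state here ...

   ! Delete only after the outputs have been consumed, so the ordering
   ! question above no longer arises.
   call torch_delete(net_output_tensors(1))
end program batched_sketch
```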
Maybe @jatkinson1000 can help with this.