Conversation
Pull request overview
Adjusts how the `ArraySequence` buffer size is computed and passed, to prevent unit-mismatch/truncation issues during streamline generation.
Changes:
- Change `SeedBatchPropagator.get_buffer_size()` to return a megabyte-based (ceil'd) value (see the sketch after this list).
- Update `ArraySequence` construction sites to pass the new buffer size value directly (no `// MEGABYTE` at call sites).
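As a rough illustration of the new return convention, a minimal sketch follows; `_buffer_size_bytes` is a hypothetical attribute name assumed for illustration, not taken from the PR:

```python
import math

MEGABYTE = 1024 ** 2

class SeedBatchPropagator:
    def get_buffer_size(self) -> int:
        # Return the buffer size in whole megabytes, rounded up so a
        # sub-megabyte buffer never truncates to 0 MB.
        # self._buffer_size_bytes is a hypothetical internal byte count.
        return math.ceil(self._buffer_size_bytes / MEGABYTE)
```

Call sites can then pass the result straight to `ArraySequence`, whose `buffer_size` parameter is expressed in megabytes, without a second `// MEGABYTE` division.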
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| cuslines/cuda_python/cu_tractography.py | Updates ArraySequence buffer size argument for SFT generation to use the new units returned by the propagator. |
| cuslines/cuda_python/cu_propagate_seeds.py | Changes buffer size calculation to return MB (rounded up) and updates ArraySequence creation accordingly. |
```diff
         seeds[idx * global_chunk_sz : (idx + 1) * global_chunk_sz].shape[0]
     )
     array_sequence = ArraySequence(
-        (item for gen in generators for item in gen), buffer_size // MEGABYTE
+        (item for gen in generators for item in gen), buffer_size
     )
```
In generate_sft, the ArraySequence is built from generators that are created from SeedBatchPropagator state. Because SeedBatchPropagator.as_generator() reads from self.slines/self.sline_lens (which are overwritten on each propagate() call), collecting generators across multiple chunks and consuming them after the loop will yield the last chunk’s data repeatedly (earlier chunk results are lost). Consider materializing/copying each chunk’s results before the next propagate(), or restructure to yield streamlines immediately per chunk instead of storing generators.
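A minimal sketch of the first suggestion (materialize each chunk's results before the next `propagate()` call); the propagator interface is taken from the comment above, while `n_chunks`, `propagator`, `seeds`, `global_chunk_sz`, and `buffer_size` are assumed from the surrounding code for illustration:

```python
from nibabel.streamlines import ArraySequence

chunks = []
for idx in range(n_chunks):
    propagator.propagate(
        seeds[idx * global_chunk_sz : (idx + 1) * global_chunk_sz]
    )
    # Copy this chunk's streamlines out of the propagator now; the next
    # propagate() call overwrites self.slines / self.sline_lens, so a
    # stored generator would re-read only the final chunk's state.
    chunks.append(list(propagator.as_generator()))

array_sequence = ArraySequence(
    (sline for chunk in chunks for sline in chunk), buffer_size
)
```

This trades memory for correctness; the alternative mentioned above (consuming each chunk's generator as it is produced, rather than storing generators) avoids holding every chunk in memory at once.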
```diff
     array_sequence = ArraySequence(
-        (item for gen in generators for item in gen), buffer_size // MEGABYTE
+        (item for gen in generators for item in gen), buffer_size
     )
```
Since the conversion by MEGABYTE was removed here, MEGABYTE appears to be unused in this module now. Removing the unused import will avoid lint/static-analysis failures and keep the unit handling clear.