Conversation

@NikEfth NikEfth commented Apr 7, 2025

Changes in this pull request

  • Link and compile with PyTorch

Testing performed

  • make tests

Related issues

Checklist before requesting a review

  • [ ] I have performed a self-review of my code
  • [ ] I have added docstrings/doxygen in line with the guidance in the developer guide
  • [ ] I have implemented unit tests that cover any new or modified functionality (if applicable)
  • [ ] The code builds and runs on my machine
  • [ ] documentation/release_XXX.md has been updated with any functionality change (if applicable)

Comment on lines +25 to +28
if (STIR_WITH_TORCH)
target_link_libraries(${executable} ${TORCH_LIBRARIES})
target_include_directories(${executable} PUBLIC ${TORCH_INCLUDE_DIRS})
endif()
Collaborator

Irrelevant comment for now, but do we really need this for all executables? The modification to src/buildblock/CMakeLists.txt should be enough (due to transitive dependence).

Collaborator Author

Probably not ... but let's see :)

Comment on lines +62 to +65
if (STIR_WITH_TORCH)
target_link_libraries(${executable} ${TORCH_LIBRARIES})
target_include_directories(${executable} PUBLIC ${TORCH_INCLUDE_DIRS})
endif()
Collaborator

as in src/cmake/stir_exe_targets.cmake

A lot of work in the IO and buildblock modules.
We should be able to read and write all the data types in the IO module
as Tensors, and hopefully nothing has been broken for Array in the process.
* My confusion comes because I don't know, right now, how many of these functions are
used by ProjData.
NikEfth commented Apr 10, 2025

This is where we are now:

Running tests...
Test project /local_mount/space/celer/2/users/pytorch_playground/STIR.b
    Start 1: test_Array
1/2 Test #1: test_Array .......................   Passed    2.58 sec
    Start 2: test_VectorWithOffset
2/2 Test #2: test_VectorWithOffset ............   Passed    0.50 sec

100% tests passed, 0 tests failed out of 2
  • I know, it says absolutely nothing :)

NikEfth added 3 commits April 10, 2025 17:23
We start the tensor tests.
Access the tensor elements through the .at() method, which ensures
the best speed and efficiency on the CPU.
GPU access should be allowed only via linear operators.
* test_Tensor replicates the tests from test_Array and is mostly ready for:
    - testing simple operations with and without offsets
    - 1D and 2D (somewhat more work needed)
NikEfth commented Apr 12, 2025

Made quite a bit of progress, a few more boring parts left:

Running tests...
Test project /local_mount/space/celer/2/users/pytorch_playground/STIR.b
    Start 1: test_Array
1/3 Test #1: test_Array .......................   Passed    8.19 sec
    Start 2: test_VectorWithOffset
2/3 Test #2: test_VectorWithOffset ............   Passed    0.49 sec
    Start 3: test_Tensor
3/3 Test #3: test_Tensor ......................   Passed    0.50 sec

100% tests passed, 0 tests failed out of 3

Total Test time (real) =   9.18 sec

@KrisThielemans KrisThielemans left a comment

What about creating an alias to minimise #ifdef STIR_TORCH further? At the moment, there are quite a few non-changes because of this. e.g.

ArrayTypeFwd.h

namespace stir
{

template <int num_dimensions, typename elemT>
class Array;

#ifndef STIR_TORCH
template <int num_dimensions, typename elemT>
using ArrayType = Array<num_dimensions, elemT>;
#else
template <int num_dimensions, typename elemT>
class TensorWrapper;
template <int num_dimensions, typename elemT>
using ArrayType = TensorWrapper<num_dimensions, elemT>;
#endif

} // namespace stir

and ArrayType.h

#include "stir/ArrayTypeFwd.h"
#include "stir/Array.h"
#ifdef STIR_TORCH
#include "stir/TensorWrapper.h"
#endif

then use ArrayType wherever you want to replace Array with TensorWrapper.

I'm not 100% sure about the using syntax though.

@KrisThielemans
Collaborator

See #1589. Possibly move some stuff from here to there if I didn't cover everything yet. Let's merge #1589 quickly though (before 6.3).

NikEfth commented Apr 14, 2025

@KrisThielemans
Hi, I wrote the 2D next() check test for the TensorWrapper.
Can you please confirm that this is correct (BasicCoordinate always gives me doubts):

std::cerr << "\tTest on next() with regular array\n";
BasicCoordinate<2, int> index;
index[1] = test.get_min_index(0);
index[2] = test.get_min_index(1);
std::cerr << ">>>>" << index[1] << "  " << index[2] << std::endl;
TensorWrapper<2, int>::const_full_iterator iter = test.begin_all_const();
do
  {
    std::cerr << ">>" << index[1] << "  " << index[2] << std::endl;
    check(*iter == test.at(index), "test on next(): element out of sequence?");
    ++iter;
    index[2] += 1;
    if (index[2] > test.get_max_index(1))
      {
        index[2] = test.get_min_index(1);
        index[1] += 1;
      }
  } while (iter != test.end_all());
check(iter == test.end_all(), "test on next(): did we cover all elements?");

@KrisThielemans
Collaborator

Looks ok, but I guess I'd write it with a loop in terms of the index (which is closer to how STIR will be using it)

auto iter = test.begin_all_const();
for (index[1] = test.get_min_index(0); index[1] <= test.get_max_index(0); ++index[1])
  for (index[2] = test.get_min_index(1); index[2] <= test.get_max_index(1); ++index[2])
    {
       check(*iter == test.at(index), "test on next(): element out of sequence?");
       ++iter;
    }

The choice to start from 1 was a very bad one... At some point, we'll have to introduce a new index-type that starts from 0 (could probably just use std::array), maybe auto-converting the BasicCoordinate to the new type.

NikEfth commented Apr 15, 2025

Looks ok, but I guess I'd write it with a loop in terms of the index (which is closer to how STIR will be using it)

I know. I wanted to closely replicate the structure of that test; it is based on a similar do .. while loop.

NikEfth commented Apr 15, 2025

@KrisThielemans in the NumericVector operators you have this line:

  this->grow(std::min(this->get_min_index(), v.get_min_index()), std::max(this->get_max_index(), v.get_max_index()));

This automatically resizes the vector if it is smaller.
Do you want this behavior?
Shouldn't the user handle this?

I did it like this:

if (!tensor.sizes().equals(v.tensor.sizes()))
  {
    if (!is_on_gpu())
      this->grow(v.get_index_range());
    else
      throw std::invalid_argument("Tensors must have the same shape for element-wise addition.");
  }
