Conversation

@philip-paul-mueller (Collaborator) commented Dec 18, 2025

This PR adds the schedule_exchange() and schedule_wait() functions to the communication object.
They behave similarly to the regular exchange() and wait(), but they accept an additional CUDA stream as argument.

schedule_exchange() delays packing until all work that has been scheduled on the passed stream has finished, which removes the need for external synchronization.
It is important to note that the function only returns after all data has been sent, which is the same behaviour as exchange().

schedule_wait() is similar to wait(): it launches unpacking but synchronizes it with the passed stream.
This means that any work submitted to the stream after the function has returned will not start until all unpacking has completed.

The PR also extends the Python bindings.
The bindings are able to interpret the following Python objects:

  • None is interpreted as the default stream, i.e. nullptr.
  • If the object has a __cuda_stream__ method, it is assumed to follow Nvidia's stream protocol.
  • If the object has a .ptr attribute, it is assumed to follow CuPy's Stream implementation.
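As a pure-Python illustration of this interpretation order (a hypothetical sketch; the real dispatch lives in the pybind11 bindings, and the helper name interpret_stream is made up here):

```python
def interpret_stream(obj):
    """Map a Python stream object to a raw CUDA stream handle (an int).

    Mirrors the interpretation order described above:
    None -> default stream (nullptr, i.e. 0),
    __cuda_stream__ -> Nvidia's stream protocol,
    .ptr -> CuPy-style Stream.
    """
    if obj is None:
        return 0  # default stream, i.e. nullptr
    if hasattr(obj, "__cuda_stream__"):
        # Nvidia's stream protocol yields a (version, handle) pair.
        _version, handle = obj.__cuda_stream__()
        return handle
    if hasattr(obj, "ptr"):
        # CuPy's Stream exposes the raw handle as `.ptr`.
        return obj.ptr
    raise TypeError(f"cannot interpret {type(obj).__name__} as a CUDA stream")
```

Note that an object satisfying the stream protocol takes precedence over one that merely has a `.ptr` attribute, matching the order of the bullet points.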

Note:
For CPU memory, schedule_exchange() and schedule_wait() behave exactly the same way as exchange() and wait(), i.e. exchange() starts packing immediately and wait() only returns after the unpacking has finished.
If CPU and GPU memory are exchanged in the same transaction, the behaviour is a mix of both: the CPU parts are packed immediately, but the packing of the GPU memory synchronizes with the stream.
The same holds for schedule_wait(), which only returns once the CPU memory has been unpacked, but returns right after the unpacking of the GPU memory has been initiated.
What happens exactly also depends on whether the CPU memory is processed before the GPU memory.
Thus it is safe to mix the two, but it is not recommended.


NOTE:

  • This PR replaces PR#186, with a refactored solution.
  • The most important changes are in the following files:
    • include/ghex/communication_object.hpp: adds the schedule_*() functions.
    • include/ghex/device/cuda/stream.hpp: adds a (very simple) event pool.
    • bindings/python/src/_pyghex/unstructured/communication_object.cpp: updates the bindings.
  • This PR depends on PR#189.
  • This PR depends on PR#191, which adds formatting.
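The event pool mentioned above can be sketched in Python as follows (a rough illustrative model with plain objects standing in for cudaEvent_t; the real implementation is in include/ghex/device/cuda/stream.hpp and its exact interface may differ):

```python
class EventPool:
    """Grow-on-demand pool of reusable events (sketch, not the real GHEX code).

    get() hands out the next event, creating one if the pool is exhausted;
    rewind() marks all events as reusable without destroying them, which is
    only safe if the caller guarantees no event is still in use.
    """

    def __init__(self, initial_size=0):
        # Events are created eagerly here rather than lazily in the hot path.
        self._events = [object() for _ in range(initial_size)]
        self._next = 0

    def get(self):
        if self._next >= len(self._events):
            self._events.append(object())  # grow on demand
        event = self._events[self._next]
        self._next += 1
        return event

    def rewind(self):
        # Reuse existing events instead of destroying and recreating them.
        self._next = 0

    def clear(self):
        # Drop all events; only safe if none are still in use.
        self._events.clear()
        self._next = 0
```

The point of rewind() over clear() is to avoid the cost of destroying and recreating CUDA events between exchanges.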

TODO:

  • This PR still has the wrong formatting applied to it.
  • Test on ICON4Py production setup.
  • Ask Fabian if we should add CuPy as a dependency for the binding tests so that we can also run the GPU tests.

msimberg and others added 30 commits November 3, 2025 12:48
@philip-paul-mueller (Collaborator, Author) commented:

I have now run everything on Säntis and, with the exception of the "inplace test" in the concepts, the unit tests pass.
However, that test also fails on master, at least it did for me.

@philip-paul-mueller (Collaborator, Author) commented:

The failure in the Python bindings in CPU mode occurs because the library transparently falls back to a normal exchange.
This could be fixed by checking how GHEX was compiled.

struct event_pool
{
public: // constructors
event_pool(std::size_t) {};
Contributor:
Suggested change
event_pool(std::size_t) {};
event_pool(std::size_t) = default;

event_pool(event_pool&& other) noexcept = default;
event_pool& operator=(event_pool&&) noexcept = default;

void rewind() {};
Contributor:
Suggested change
void rewind() {};
void rewind() {}

{
struct cuda_event
{
cuda_event() {};
Contributor:
Suggested change
cuda_event() {};
cuda_event() = default;

Comment on lines 126 to 127
//We do not use `reserve()` to ensure that the events are initialized now
// and not in the hot path when they are actually queried.
Contributor:
I like your current version, thanks!

* and recreating them. It requires however, that a user can guarantee
* that the events are no longer in use.
*/
void rewind_pool()
Contributor:
Yep, that sounds reasonable. So:

  • rewind_pool -> rewind
  • reset_pool -> clear
    ?

initialize_data(d, field, levels, levels_first);
data_descriptor_cpu_int_type data{d, field, levels, levels_first};

cudaDeviceSynchronize();
Contributor:
Great, thank you.

@msimberg (Contributor) left a comment:
A few more minor comments.

if (!m_moved) { GHEX_CHECK_CUDA_RESULT_NO_THROW(cudaEventDestroy(m_event)) }
}

operator bool() const noexcept { return m_moved; }
Contributor:
Hmm, this seems backwards. I would expect:

Suggested change
operator bool() const noexcept { return m_moved; }
operator bool() const noexcept { return !m_moved; }

i.e. bool() is true when the event is valid, false otherwise?

I see stream has the same "backwards" semantics, so this probably comes from there.

What do you think?

@philip-paul-mueller (Collaborator, Author) commented Jan 6, 2026:
Now that I think about it, I would say that it should return true when it is valid, thus it should be !m_moved.
However, I think it should follow the same logic as stream, because anything else is just confusing.
So I would keep it that way but add a comment that explains what true means.
In any case we should check with Fabian, as he is the one who added that function; changing it would mean changing all of GHEX (which is not an argument against it).

Contributor:
Yep, it'd be great to hear if @boeschf had other semantics in mind for it originally. Otherwise, it simply smells a bit like a bug that should be changed for both stream and event.

while (!(m_next_event < m_events.size())) { m_events.emplace_back(cuda_event()); };

const std::size_t event_to_use = m_next_event;
assert(!bool(m_events[event_to_use]));
Contributor:
Related to semantics of operator bool() for event and stream:

Suggested change
assert(!bool(m_events[event_to_use]));
assert(bool(m_events[event_to_use]));

Collaborator (Author):
Skipped for now, see discussion above.

if (!m_moved) { GHEX_CHECK_CUDA_RESULT_NO_THROW(cudaStreamDestroy(m_stream)) }
}

operator bool() const noexcept { return m_moved; }
Contributor:
See comment about operator bool() for event. I think this is also backwards.

Collaborator (Author):
Skipped for now, see discussion above.

*
* TODO: Should the handle expose this function?
*/
void complete_schedule_exchange()
Contributor:
What do you think about making this private and if a caller needs to synchronize after a schedule_wait, just make them call wait instead?

schedule_exchange(stream);
schedule_wait(stream);
// now I really want to synchronize
wait();

IMO this would simplify the API in that wait can always be called as a wait-for-everything-to-finish regardless of how the exchange or wait was done previously. wait may of course need to call complete_schedule_exchange internally (or parts of it) for this approach to work.

Collaborator (Author):
We should consider it.

@philip-paul-mueller (Collaborator, Author) commented Jan 7, 2026:
I have thought about it again.

Regarding your example: while I can understand why it should work (in fact, I also think it must work), I do not think that it is the right (whatever that means) way to achieve it.
The scheduled functions essentially introduce a kind of stream semantic into GHEX.
So I would argue that to check if the transmission has finished, one should synchronize with the stream passed to schedule_exchange().
The only thing that GHEX must do is not start a new exchange before the old one has finished, nor delete something that is still in use by an ongoing transfer, which is what complete_schedule_exchange() does.
So I think you are right when you say that it should become private, as it should probably never be called directly by a user.
Instead, all exchange*() functions call it to make sure that the exchange has happened and it is safe to start a new one.
Because wait() deallocates memory, it must also call it to make sure that it does not delete something that is still in use.
As a side effect, your example code will work and do what you want, i.e. the full synchronization.
But, as I outlined above, in my opinion it is not the right way of doing it, even though it must work.

Does this make sense?
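The invariant described in this comment can be modeled with a toy Python class (purely illustrative stand-in, not the real GHEX communication object; method bodies are placeholders):

```python
class CommunicationObjectModel:
    """Toy model of when complete_schedule_exchange() must run."""

    def __init__(self):
        self._scheduled_exchange_pending = False

    def schedule_exchange(self, stream):
        # A new exchange must not start before the previous scheduled
        # one has been completed.
        self._complete_schedule_exchange()
        self._scheduled_exchange_pending = True

    def wait(self):
        # wait() deallocates buffers, so it must first complete any
        # scheduled exchange to avoid freeing memory still in use.
        self._complete_schedule_exchange()

    def _complete_schedule_exchange(self):
        # Private helper: users synchronize with the stream instead of
        # calling this directly.
        self._scheduled_exchange_pending = False
```

In this model, full synchronization from the user's side would go through the stream passed to schedule_exchange(), while the private helper only guards GHEX's own bookkeeping.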

Comment on lines 835 to 837
#ifdef GHEX_CUDACC
assert(has_scheduled_exchange());
#endif
Contributor:
This doesn't belong here, or the logic is wrong elsewhere. clear() is called by complete_scheduled_exchange after the event has been reset, meaning this always fails. Should this be assert(!has_scheduled_exchange())?

@philip-paul-mueller (Collaborator, Author) commented Jan 9, 2026:
You are right, it should be assert(!...); I am just wondering why the CI "passes".
Probably because asserts are disabled?

* exchange. A user will never have to call it directly. If there was no such
* exchange or GPU support was disabled, the function does nothing.
*
* \note This should be a private function, but the tests need them.
Contributor:
Why do the tests need this function?

Collaborator (Author):
@philip-paul-mueller philip-paul-mueller marked this pull request as ready for review January 23, 2026 09:10