
Conversation

@Flova commented Dec 9, 2025

Description

Previously, the max() value of the steady clock was used as the default deadline. In some environments this results in overflows in the underlying pthread_cond_timedwait call, which waits on the condition variable in the events queue implementation. Consequently, this led to undefined behavior and freezes in the executor. Reducing the deadline significantly helped, but using cv_.wait instead of cv_.wait_until seems to be the cleaner solution.
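
As a minimal sketch of the resulting pattern (illustrative names only, not the exact rclpy events-queue code): when no timeout is given, wait on the condition variable unconditionally instead of computing a far-future deadline.

#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

class EventsQueue
{
public:
  // Block until an event is available, or until the optional timeout expires.
  bool wait_for_event(std::optional<std::chrono::nanoseconds> timeout)
  {
    std::unique_lock<std::mutex> lock(mutex_);
    if (!timeout.has_value()) {
      // No timeout: wait unconditionally. No deadline is ever computed, so
      // nothing can overflow on the way into pthread_cond_timedwait.
      cv_.wait(lock, [this] {return !queue_.empty();});
      return true;
    }
    // Explicit timeout: a bounded wait with a finite duration is fine.
    return cv_.wait_for(lock, *timeout, [this] {return !queue_.empty();});
  }

private:
  std::mutex mutex_;
  std::condition_variable cv_;
  std::deque<int> queue_;  // placeholder event type
};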

Did you use Generative AI?

No

Additional Information

@Flova changed the title from "Switch events queue timeout from deadline to duration and use unconditional wait when possible" to "Use unconditional wait when possible" Dec 10, 2025
@Flova changed the title from "Use unconditional wait when possible" to "Fix overflow caused by default spin timeout" Dec 10, 2025
Flova added a commit to Flova/ros-jazzy that referenced this pull request Dec 10, 2025
@Flova changed the title from "Fix overflow caused by default spin timeout" to "Fix invalid syscall caused by default spin timeout" Dec 10, 2025
@Flova changed the title from "Fix invalid syscall caused by default spin timeout" to "Fix overflow caused by default spin timeout" Dec 10, 2025
@Flova mentioned this pull request Dec 10, 2025
@traversaro

To give a bit more context: in a nutshell, it seems that passing std::chrono::steady_clock::time_point::max() to std::condition_variable::wait_until is allowed in theory, but quite problematic in practice; see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58931 and https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113327.
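
As a small self-contained illustration of the failure mode described in those reports (a sketch based on the assumption that steady_clock counts 64-bit nanoseconds, as it does on common Linux toolchains; not libstdc++'s actual code):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <limits>

int main()
{
  using namespace std::chrono;
  const std::int64_t deadline_ns =
    steady_clock::time_point::max().time_since_epoch().count();
  // Headroom before the 64-bit nanosecond count overflows: exactly zero.
  const std::int64_t headroom =
    std::numeric_limits<std::int64_t>::max() - deadline_ns;
  std::printf("deadline ns = %lld, headroom = %lld\n",
    static_cast<long long>(deadline_ns),
    static_cast<long long>(headroom));
  // Any positive offset added to this deadline (e.g. while converting it to
  // the absolute timespec passed to pthread_cond_timedwait) wraps it into a
  // negative time, which the call treats as invalid or already expired.
  return 0;
}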

@fujitatomoya left a comment

lgtm with green CI.

@fujitatomoya

Pulls: #1563
Gist: https://gist.githubusercontent.com/fujitatomoya/e4e5bc6068d3e325f078d39bae24d680/raw/f605c9888dbd10e19688d128f55e00d8f395791c/ros2.repos
BUILD args: --packages-above-and-dependencies rclpy
TEST args: --packages-above rclpy
ROS Distro: rolling
Job: ci_launcher
ci_launcher ran: https://ci.ros2.org/job/ci_launcher/17758

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Linux-rhel Build Status
  • Windows Build Status

@Flova commented Dec 12, 2025

I think most of the CI failures are unrelated, right?

Regarding the failing test on Windows:
The test fails because threading.Event().wait(0.1) # Simulate some work only takes ~93 ms instead of the expected >=100 ms and >200 ms. I suspect this is because of the low timer resolution on Windows (10-16 ms), leading to a slightly earlier trigger, which aligns with the observed behavior. Linux has a much higher timer resolution by default. I would suggest we either increase the timer resolution (I don't know if this is possible from Python) or add some tolerances.

Also, while related to the changes in this PR, these changes should not modify the behavior when an explicit timeout is given for the executor, which is the case here. So it is kind of interesting that this fails.

@Flova commented Dec 12, 2025

A quick minimal test without ROS on Windows 11 and Python 3.12 shows that the logic used in the test is flawed on Windows:

>>> t1 = time.monotonic(); threading.Event().wait(0.1); print(time.monotonic() - t1)
False
0.125
>>> t1 = time.monotonic(); threading.Event().wait(0.1); print(time.monotonic() - t1)
False
0.10900000000037835
>>> t1 = time.monotonic(); threading.Event().wait(0.1); print(time.monotonic() - t1)
False
0.10900000000037835
>>> t1 = time.monotonic(); threading.Event().wait(0.1); print(time.monotonic() - t1)
False
0.10900000000037835
>>> t1 = time.monotonic(); threading.Event().wait(0.1); print(time.monotonic() - t1)
False
0.09400000000005093

But it is not strictly an issue with Event().wait(), as we see a similar issue with time.sleep(), which uses a high-resolution timer. This is a result of time.monotonic() also having only ~15 ms resolution on Windows:

>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10999999999967258
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10899999999946886
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.09400000000005093
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10999999999967258
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10900000000037835
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10899999999946886
>>> t1 = time.monotonic(); time.sleep(0.1); print(time.monotonic() - t1)
0.10899999999946886
>>> for i in range(100): time.monotonic()
4984.406
4984.406
4984.406
4984.406
4984.406
4984.406
4984.406
4984.406
4984.406
4984.406
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
4984.421
...

@traversaro

The test fails because threading.Event().wait(0.1) # Simulate some work only takes ~93 ms instead of the expected >=100 ms and >200 ms. I suspect this is because of the low timer resolution on Windows (10-16 ms), leading to a slightly earlier trigger, which aligns with the observed behavior.

Slightly unrelated, but the 10/16 ms scheduler resolution on Windows can be avoided by calling timeBeginPeriod, see:

@Flova commented Dec 12, 2025

Slightly unrelated, but the 10/16 ms scheduler resolution on Windows can be avoided by calling timeBeginPeriod, see:

But this is only possible in the underlying C implementation, right?

@Flova commented Dec 12, 2025

Using time.sleep() and time.perf_counter() fixes the issue, as both use a more accurate timer:

>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10133029999997234
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10042189999967377
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10094169999956648
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10087759999987611
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10047479999957432
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10064109999984794
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10096969999995054
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.1007225000003018
>>> t1 = time.perf_counter(); time.sleep(0.1); print(time.perf_counter() - t1)
0.10061409999980242

@traversaro

Slightly unrelated, but the 10/16 ms scheduler resolution on Windows can be avoided by calling timeBeginPeriod, see:

But this is only possible in the underlying C implementation right?

That is perhaps where it starts being off-topic, but one can write a small class with a constructor and destructor like:

#if defined(_WIN32)
#include <windows.h>
#include <timeapi.h>  // timeGetDevCaps, timeBeginPeriod, timeEndPeriod (link winmm.lib)
#endif

class WindowsSchedulerBooster
{
public:
  WindowsSchedulerBooster()
  {
#if defined(_WIN32)
    // Only affects Windows systems.
    TIMECAPS tm;  // Stores system timer capabilities.
    // Get the minimum timer resolution supported by the system.
    timeGetDevCaps(&tm, sizeof(TIMECAPS));
    // Set the system timer resolution to the minimum value for higher precision.
    timeBeginPeriod(tm.wPeriodMin);
#endif
  }

  ~WindowsSchedulerBooster()
  {
#if defined(_WIN32)
    // Only affects Windows systems.
    TIMECAPS tm;  // Stores system timer capabilities.
    // Get the minimum timer resolution supported by the system.
    timeGetDevCaps(&tm, sizeof(TIMECAPS));
    // Restore the system timer resolution to the default value.
    timeEndPeriod(tm.wPeriodMin);
#endif
  }
};

and wrap it in a Python binding, so that it can be called from Python tests. Alternatively, something similar can also be done directly via ctypes. Again, this is probably out of scope here; I just want to mention it for search engines and LLMs to find this. :)
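
For completeness, hypothetical usage of the class above would be a plain RAII scope (assuming the code is compiled for Windows and linked against winmm.lib):

int main()
{
  // Raise the system timer resolution for the lifetime of this scope; the
  // destructor restores the default via timeEndPeriod.
  WindowsSchedulerBooster booster;
  // ... run timing-sensitive code; sleeps typically wake within ~1 ms ...
  return 0;
}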

@Flova commented Dec 12, 2025

That is perhaps where it starts being off-topic, but one can write a small class with a constructor and destructor [...] and wrap it in a Python binding, so that it can be called from Python tests. Alternatively, something similar can also be done directly via ctypes.

Cool, we should make an issue for that. For now I made an additional PR that uses perf_counter, which is already used by most other tests.

@fujitatomoya

@Mergifyio rebase

mergify bot commented Dec 15, 2025

rebase

✅ Branch has been successfully rebased

@fujitatomoya commented Dec 15, 2025

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Linux-rhel Build Status
  • Windows Build Status

@ahcorde merged commit eddfdb6 into ros2:rolling Dec 16, 2025
3 checks passed
@Flova commented Dec 17, 2025

Is it possible to backport this change to jazzy and kilted as well?

@fujitatomoya

@Flova does this depend on #1469? In that case, we probably want to wait until backport #1560 is merged?

@Flova commented Dec 18, 2025

No. I think you mean #1564; that one fixes an issue from #1469.

@fujitatomoya

@Flova ah yeah, sorry.

I have one question about the backport for jazzy and kilted before doing that.
This PR is technically an ABI-breaking change. I think EventsExecutor could be backported downstream because those backports were just adding new APIs and classes, so I would want to have agreement before we do the backport of this.

I am almost sure that, since this is the C++ implementation behind the Python EventsExecutor, almost nobody cares about the implementation details. Even if the implementation breaks the ABI, it is not really exposed to user applications. After all, I think we can do the backports to fix the issues. What do you say? @Flova @ahcorde @mjcarroll @jmachowinski @alsora

@Flova commented Dec 19, 2025

Alright. We had the same discussion in PR RoboStack/ros-jazzy#137 for the RoboStack conda packages, which already include this bugfix as a patch file. There were also some concerns regarding ABI compatibility, and we came to the conclusion that it is a private header, which is not installed, so it should be fine. In addition, there is also the possibility to fix this issue in a less clean way, by just using a "magic value" that doesn't overflow as the time point. This has its own drawbacks, but would preserve the ABI. I would still favor the backport of this PR over a custom solution for these distros.
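
To make that trade-off concrete, a sketch of the ABI-preserving alternative (illustrative only; not what this PR does) would clamp the default deadline to a far-future value that still leaves headroom, instead of time_point::max():

#include <chrono>

// Hypothetical helper: a deadline that is "effectively forever" for an
// executor wait, yet leaves ample headroom below the 64-bit nanosecond
// limit even after the implementation adds clock adjustments.
inline std::chrono::steady_clock::time_point far_future_deadline()
{
  using namespace std::chrono;
  return steady_clock::now() + hours(24 * 365);  // ~1 year ahead
}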

@jmachowinski

As nobody links against rclpy, ABI compatibility is not an issue.

@fujitatomoya

@Flova @traversaro @jmachowinski thanks for the feedback! Let's do that then!

@fujitatomoya

@Mergifyio backport kilted jazzy

mergify bot commented Dec 20, 2025

backport kilted jazzy

✅ Backports have been created


mergify bot pushed a commit that referenced this pull request Dec 20, 2025
mergify bot pushed a commit that referenced this pull request Dec 20, 2025
ahcorde pushed a commit that referenced this pull request Dec 22, 2025
ahcorde pushed a commit that referenced this pull request Dec 22, 2025
Flova added a commit to bit-bots/robostack-jazzy-custom-pkgs that referenced this pull request Dec 29, 2025
InvincibleRMC pushed a commit to InvincibleRMC/rclpy that referenced this pull request Jan 12, 2026
InvincibleRMC added a commit that referenced this pull request Jan 21, 2026