@Ozoniuss Ozoniuss commented Jan 8, 2026

This example shows how to implement a payload converter that detects when a payload exceeds a certain size threshold and, in that case, writes it to a storage system of choice instead of sending it to the Temporal server. In this particular example payloads are written to a local file, but the approach can be adapted to work with any storage backend. Such a converter is useful when a workflow/activity regularly handles payloads that risk exceeding Temporal's hard limit of 2MB.

What was changed

Added a new sample.

Why?

Two places where I worked required building something similar, so I thought it would be a good idea to write a sample for it. I haven't seen a similar sample available.

Checklist

  1. How was this tested:
    Tested locally against a Temporal cluster, and with the included unit tests.

  2. Any docs updates needed?
    No

@Ozoniuss Ozoniuss requested a review from a team as a code owner January 8, 2026 09:18
Member

@cretz cretz Jan 12, 2026

If you are going to do implicit large payload offloading, we recommend using a codec instead of a converter, as https://github.com/DataDog/temporal-large-payload-codec does.

However, even with that third-party example available, we are not necessarily looking for a blessed example of this at this time, for a few reasons. First, since Temporal is distributed, an ideal sample would not write to local disk. Second, this isn't often a preferred pattern: implicitly offloading large data to and from external stores can hide from workflow authors that they should consider being explicit about offloading, rather than doing it constantly on replay and the like. Finally, we are actively working on improving this situation soon, doing basically this exact thing but providing more explicit external payload storage interfaces and warnings.
