Feat: Add step start and finish events #807
Add StepStartEvent and StepFinishEvent to All Streaming Providers
Description
This PR adds `StepStartEvent` and `StepFinishEvent` to the streaming handlers of all providers, aligning with the AI SDK step event implementation. This enables consumers to track the lifecycle of individual processing steps during multi-step conversations (e.g., when tool calls are involved).
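For orientation, here is a minimal sketch of what the two event shapes could look like on the stream; the field names below are assumptions modeled on the AI SDK's step events, not the exact definitions in this PR:

```typescript
// Hypothetical event shapes, for illustration only.
// A step opens, its content events follow, and the step closes with
// summary information such as the finish reason and token usage.
interface StepStartEvent {
  type: 'step-start';
}

interface StepFinishEvent {
  type: 'step-finish';
  finishReason: 'stop' | 'tool-calls' | 'length' | 'error';
  usage?: { inputTokens: number; outputTokens: number };
}
```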
Motivation

In multi-step conversations (especially those involving multiple tool calls), streaming responses without step boundaries can flatten execution into a single assistant turn in the subsequent requests the frontend sends after calling `convertToModelMessages`. As a result, tool calls and text from different logical steps are merged and execution order is lost (see the illustration below).

Step events introduce explicit boundaries between execution steps, ensuring deterministic ordering and correct state management.
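As a rough sketch of where the flattening happens (the `ai` import and the helper shape are illustrative assumptions, not code from this PR):

```typescript
import { convertToModelMessages, type UIMessage } from 'ai';

// Illustrative: the frontend sends back its accumulated UI messages and the
// backend rebuilds the model message history for the next request.
// Without step boundaries in the stream, every text part and tool call from a
// multi-step turn collapses into a single assistant message at this point.
function buildModelHistory(uiMessages: UIMessage[]) {
  return convertToModelMessages(uiMessages);
}
```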
Problem Illustration
Without step events (flattened execution)
[ { "role": "assistant", "content": [ { "type": "text", "text": "Some message" }, { "type": "tool-call", "toolCallId": "1", "input": { "param": "1" } }, { "type": "text", "text": "Next message" }, { "type": "tool-call", "toolCallId": "2", "input": { "param": "2" } } ] }, { "role": "tool", "content": [ { "type": "tool-result", "toolCallId": "1", "output": "abc" }, { "type": "tool-result", "toolCallId": "2", "output": "pqr" } ] } ]In this case, tool calls and text from different logical steps are merged, making it difficult for the model to reason about execution order in subsequent requests sent after converting UI Messages to model messages using convertToModelMessages
With step events (explicit sequencing)
[ { "role": "assistant", "content": [ { "type": "text", "text": "Some message" }, { "type": "tool-call", "toolCallId": "1", "input": { "param": "1" } } ] }, { "role": "tool", "content": [ { "type": "tool-result", "toolCallId": "1", "output": "abc" } ] }, { "role": "assistant", "content": [ { "type": "text", "text": "Next message" }, { "type": "tool-call", "toolCallId": "2", "input": { "param": "2" } } ] }, { "role": "tool", "content": [ { "type": "tool-result", "toolCallId": "2", "output": "pqr" } ] } ]Each logical step is clearly separated, preserving execution order and allowing consumers to correctly replay or resend message history.
Event Lifecycle
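A rough sketch of the intended per-step ordering, as a consumer might observe it (the event type strings and stream shape below are assumptions, not the exact API added here): each step opens with a start event, its text and tool-call events follow, and a finish event closes the step before the next one begins.

```typescript
// Illustrative consumption sketch only; event names are assumed.
type StreamEvent = { type: string } & Record<string, unknown>;

async function logStepBoundaries(stream: AsyncIterable<StreamEvent>) {
  let step = 0;
  for await (const event of stream) {
    if (event.type === 'step-start') {
      step += 1;
      console.log(`step ${step} started`);
    } else if (event.type === 'step-finish') {
      console.log(`step ${step} finished`);
    }
    // All other events (text deltas, tool calls, tool results) are emitted
    // between these boundaries and can be handled exactly as before.
  }
}
```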
Breaking Changes
None. This is a purely additive change. Existing code will continue to work; consumers can simply ignore the new events if they are not needed.