The streaming endpoint provides real-time delivery of LLM-generated text as it’s being produced, creating a more responsive user experience similar to ChatGPT or Claude.
NOTE: This is handled automatically by components in the Client SDK, should you wish to use that approach.
POST /api/flowgraph/control/streaming
Request body:

```json
{
  "sessionStepID": string,
  "lastStreamIndex": number
}
```
- `sessionStepID`: The unique identifier provided in the `generation.start` response
- `lastStreamIndex`: The index of the last received update (`-1` for the initial request)
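For reference, here is a minimal sketch of calling the endpoint directly rather than through the Client SDK. The request body matches the schema above; the response fields used here (`updates`, `lastStreamIndex`, `done`) are assumptions for illustration only, since this section does not specify the response shape.

```typescript
interface StreamingRequest {
  sessionStepID: string;
  lastStreamIndex: number;
}

async function pollStream(sessionStepID: string): Promise<void> {
  // -1 signals the initial request, per the lastStreamIndex description above
  let lastStreamIndex = -1;

  while (true) {
    const body: StreamingRequest = { sessionStepID, lastStreamIndex };
    const res = await fetch("/api/flowgraph/control/streaming", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    const data = await res.json();

    // Hypothetical response fields; consult the actual response docs.
    for (const chunk of data.updates ?? []) {
      process.stdout.write(chunk.text);
    }
    if (typeof data.lastStreamIndex === "number") {
      // Resume from the last update we received on the next request
      lastStreamIndex = data.lastStreamIndex;
    }
    if (data.done) break; // hypothetical completion flag
  }
}
```

Each request passes the index of the last update already received, so the server only needs to return the updates generated since then.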