# SSE Streaming

Server-Sent Events for long-running operations.

For long-running operations, enable streaming on a command. The handler receives an `emit()` function on the context to send progressive chunks via Server-Sent Events.

## Defining a Streaming Command

Set `stream: true` on the command definition. Inside the handler, use `ctx.emit()` to push chunks to the client. The return value becomes the final `done` event.

```ts
const surf = await createSurf({
  name: 'AI API',
  commands: {
    generate: {
      description: 'Generate text with AI',
      stream: true,
      params: {
        prompt: { type: 'string', required: true },
        maxTokens: { type: 'number', default: 500 },
      },
      run: async ({ prompt, maxTokens }, ctx) => {
        const response = ai.stream(prompt, { maxTokens })
        let totalTokens = 0

        for await (const chunk of response) {
          totalTokens += chunk.tokens
          // Each emit() sends an SSE "chunk" event
          ctx.emit!({ text: chunk.text, tokens: chunk.tokens })
        }

        // Return value is sent as the final "done" event
        return { finished: true, totalTokens }
      },
    },
  },
})
```

## Client-Side SSE

When the client sends `stream: true` in the execute request, the response is an SSE stream instead of a single JSON body. Each event follows the `StreamChunk` protocol:

```text
// Request streaming execution
POST /surf/execute
Content-Type: application/json

{ "command": "generate", "params": { "prompt": "Explain SSE" }, "stream": true }

// SSE response (Content-Type: text/event-stream):
data: { "type": "chunk", "data": { "text": "Server-Sent", "tokens": 2 } }

data: { "type": "chunk", "data": { "text": " Events are", "tokens": 3 } }

data: { "type": "chunk", "data": { "text": " a standard...", "tokens": 4 } }

data: { "type": "done", "result": { "finished": true, "totalTokens": 9 } }
```
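The event shapes above can be captured as a TypeScript union type, which is handy for typing a client-side parser. This is a sketch derived from the example payloads; any fields beyond those shown (and the `parseStreamChunk` helper name) are our own, not SDK API:

```ts
// StreamChunk protocol as seen in the SSE examples above (a sketch; the
// actual SDK may define richer shapes, e.g. for error events).
type StreamChunk =
  | { type: 'chunk'; data: { text: string; tokens: number } }
  | { type: 'done'; result: { finished: boolean; totalTokens: number } }

// Parse one SSE line into a StreamChunk, or null for non-data lines.
function parseStreamChunk(line: string): StreamChunk | null {
  if (!line.startsWith('data: ')) return null
  return JSON.parse(line.slice(6)) as StreamChunk
}
```

The discriminant on `type` lets TypeScript narrow `event.data` vs. `event.result` automatically in a `switch` or `if` chain.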

## Consuming Streams with the Client SDK

Note: `SurfClient` does not currently include a dedicated `executeStream()` method. To consume SSE streams, use the standard `fetch` API with the `stream: true` flag:

```ts
const response = await fetch('https://ai.example.com/surf/execute', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    command: 'generate',
    params: { prompt: 'Hello world' },
    stream: true,
  }),
})

const reader = response.body!.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // { stream: true } keeps multi-byte characters intact across reads
  const text = decoder.decode(value, { stream: true })
  // Parse SSE events from text (split on "data: " lines). Note that a
  // single read() may end mid-event; a robust consumer buffers the
  // trailing partial line between reads.
  for (const line of text.split('\n')) {
    if (line.startsWith('data: ')) {
      const event = JSON.parse(line.slice(6))
      if (event.type === 'chunk') process.stdout.write(event.data.text)
      if (event.type === 'done') console.log('\nDone:', event.result)
    }
  }
}
```
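Because a single `read()` can split an SSE event across chunk boundaries, a more robust consumer buffers the trailing partial line between reads. A minimal sketch under that assumption (the `readSse` name and callback shape are ours, not part of the SurfClient SDK):

```ts
// Line-buffered SSE reader: accumulates bytes across read() calls and only
// parses complete lines, so events split across chunks are handled.
async function readSse(
  body: ReadableStream<Uint8Array>,
  onEvent: (event: { type: string; [key: string]: unknown }) => void,
): Promise<void> {
  const reader = body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop()! // keep the trailing partial line for the next read
    for (const line of lines) {
      if (line.startsWith('data: ')) onEvent(JSON.parse(line.slice(6)))
    }
  }
}
```

Usage: `await readSse(response.body!, (e) => { if (e.type === 'chunk') ... })`.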

๐Ÿ’ก Tip: Both the command definition must have stream: true and the client request must include stream: true for SSE to activate. If stream: true is sent for a non-streaming command, it executes normally and returns a standard JSON response.

Note: Pipeline streaming is not currently supported. Streaming only works for individual command execution via `POST /surf/execute`. Pipeline steps always execute sequentially and return a combined JSON response.