Chat
The Chat API processes messages through the LVNG engine. Two endpoints are available: a synchronous endpoint that returns a complete response, and a streaming endpoint that delivers incremental text and tool events via Server-Sent Events. Both support multi-modal input, personality routing for digital twins, and file attachments.
Send Message
Send a message and wait for the complete response. Rate limited to 100 requests per minute per IP, and 30 requests per minute per user. Supports JWT or API key authentication.
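When a limit is exceeded, a client should back off before retrying. Below is a minimal retry sketch, assuming the API signals rate limiting with HTTP 429 (the exact status code is not specified above, so treat that as an assumption):

```typescript
// Exponential backoff delay for rate-limited requests: 1s, 2s, 4s, ... capped at 30s.
function backoffDelay(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs)
}

// Retry a chat request when the API responds with 429 Too Many Requests (assumed).
async function sendWithRetry(body: object, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch('https://api.lvng.ai/api/v1/chat', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_JWT_TOKEN',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    })
    if (res.status !== 429 || attempt >= maxRetries) return res
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)))
  }
}
```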
POST /api/v1/chat (Authenticated)
Process a message through the LVNG engine and return a complete response. Supports personality routing, file attachments, multi-modal images, and message type routing (chat, research, image, video).
Body Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| message | string | required | The user message to process. |
| platform | string | "api" | Source platform identifier (api, discord, slack, web, alexa). |
| userId | string | | User ID for conversation context and per-user rate limiting. |
| channelId | string (UUID) | | Channel UUID for conversation scoping. Messages are stored and retrieved by channel. |
| context | object | | Additional context passed to the AI engine (tool results, metadata, prior state). |
| personality | string | | Force a specific twin/agent personality (e.g. "steve-jobs", "paul-graham"). Defaults to LVNG routing. |
| streaming | boolean | false | Reserved flag (use /api/v2/chat/stream for actual streaming). |
| model | string | | Specific model override (e.g. "claude-opus-4-6"). Defaults to engine configuration. |
| messageType | string | "chat" | Routing hint: chat, research, image, or video. Research mode enables web search/scraping. |
| useFirecrawl | boolean | false | Enable web search and scraping for this request. |
| attachments | array | | File attachments as [{ id, name, type, size, url }]. Content is extracted and injected into the message context. |
| images | array | | Base64-encoded images for Claude vision: [{ filename, mediaType, base64 }]. |
| debug | boolean | false | Enable debug mode to include tool call traces in the response. |
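As a sketch of the attachments and images shapes described above, here is a hypothetical request body; the UUID, file URL, and base64 payload are placeholders, not real resources:

```typescript
// Hypothetical /api/v1/chat request body combining a file attachment
// (fetched and injected into context) and a base64 image for vision.
const body = {
  message: 'Compare this chart against the attached report',
  messageType: 'chat',
  attachments: [
    {
      id: '0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0', // placeholder ID
      name: 'q1-report.pdf',
      type: 'application/pdf',
      size: 482133,
      url: 'https://example.com/files/q1-report.pdf', // placeholder URL
    },
  ],
  images: [
    {
      filename: 'chart.png',
      mediaType: 'image/png',
      base64: 'iVBORw0KGgo...', // truncated placeholder
    },
  ],
}

const payload = JSON.stringify(body)
```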
Request
400">curl -X 400">POST https:400">class="text-zinc-500">//api.lvng.ai/api/v1/chat \
-H 400">class="text-emerald-400">"Authorization: Bearer YOUR_JWT_TOKEN" \
-H 400">class="text-emerald-400">"Content-Type: application/json" \
-d '{
400">class="text-emerald-400">"message": 400">class="text-emerald-400">"Summarize the latest sales report",
400">class="text-emerald-400">"userId": 400">class="text-emerald-400">"f6a7b8c9-0d1e-2f3a-4b5c-6d7e8f901234",
400">class="text-emerald-400">"channelId": 400">class="text-emerald-400">"a1b2c3d4-e5f6-7890-abcd-ef1234567890",
400">class="text-emerald-400">"platform": 400">class="text-emerald-400">"api"
}'Response 200
{
  "messageId": "7e6d5c4b-3a29-1098-7654-321fedcba098",
  "reply": "Based on the latest sales report, Q1 revenue increased 23% year-over-year to $4.2M. Key highlights:\n\n- Enterprise segment grew 31% driven by 12 new accounts\n- Average deal size increased from $48K to $62K\n- Sales cycle shortened by 8 days on average",
  "personality": "lvng",
  "sessionId": "b8c9d0e1-f2a3-4b5c-6d7e-8f9012345678",
  "timestamp": "2026-03-19T14:32:18.000Z",
  "responseTime": 2340,
  "attachments": [],
  "confidence": 0.92,
  "metadata": {
    "toolsUsed": ["knowledge_search"],
    "model": "claude-sonnet-4-20250514"
  }
}
Stream Response
Stream an AI response in real-time using Server-Sent Events. The connection stays open while the AI generates text and executes tools, then closes after the final done event. A keepalive comment is sent every 15 seconds to prevent proxy timeouts.
POST /api/v2/chat/stream (Authenticated)
Stream an AI response via Server-Sent Events. Returns Content-Type: text/event-stream. Supports personality routing for twin conversations.
Body Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| message | string | required | The user message to process. |
| platform | string | "api" | Source platform identifier. |
| userId | string | | User ID for conversation context. |
| channelId | string (UUID) | | Channel UUID. Conversation history is loaded from this channel (last 10 messages). |
| workspaceId | string (UUID) | | Workspace context for the conversation. |
| threadId | string | | Thread ID for threaded conversations. |
| personality | string | | Twin/agent personality ID for routing. When set, the response comes from that persona. |
| model | string | | Model override (e.g. "claude-opus-4-6"). |
| context | object | | Additional context passed to the AI engine. |
| images | array | | Base64-encoded images for multi-modal input. |
Request
400">curl -N -X 400">POST https:400">class="text-zinc-500">//api.lvng.ai/api/v2/chat/stream \
-H 400">class="text-emerald-400">"Authorization: Bearer YOUR_JWT_TOKEN" \
-H 400">class="text-emerald-400">"Content-Type: application/json" \
-H 400">class="text-emerald-400">"Accept: text/event-stream" \
-d '{
400">class="text-emerald-400">"message": 400">class="text-emerald-400">"Write a Python 400">function to calculate Fibonacci numbers",
400">class="text-emerald-400">"userId": 400">class="text-emerald-400">"f6a7b8c9-0d1e-2f3a-4b5c-6d7e8f901234",
400">class="text-emerald-400">"channelId": 400">class="text-emerald-400">"a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}'Response 200
event: connected
data: {"streamId":"c4b3a291-0987-6543-21fe-dcba09876543","timestamp":"2026-03-19T14:35:00.000Z"}

event: text_delta
data: {"content":"Here's an efficient","accumulated_length":19}

event: text_delta
data: {"content":" Python function using","accumulated_length":40}

event: text_delta
data: {"content":" memoization:\n\n```python\ndef fibonacci(n","accumulated_length":80}

event: tool_start
data: {"tool":"code_interpreter","toolId":"tool_1a2b3c","timestamp":"2026-03-19T14:35:02.000Z"}

event: tool_call
data: {"tool":"code_interpreter","input":{"language":"python","code":"def fibonacci(n, memo={}):\n    if n <= 1: return n\n    if n in memo: return memo[n]\n    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)\n    return memo[n]\nprint(fibonacci(10))"}}

event: tool_result
data: {"tool":"code_interpreter","duration":120,"resultPreview":"55"}

event: tool_complete
data: {"tool":"code_interpreter","toolId":"tool_1a2b3c","success":true,"duration":120}

event: text_delta
data: {"content":"\nThe function returns 55 for n=10.","accumulated_length":312}

event: done
data: {"streamId":"c4b3a291-0987-6543-21fe-dcba09876543","reply":"Here's an efficient Python function using memoization...","personality":"lvng","responseTime":3200,"toolsUsed":["code_interpreter"],"confidence":0.95,"metadata":{"toolsUsed":["code_interpreter"],"knowledgeGraphQueried":false}}
SSE Event Types
The streaming endpoint emits the following event types. Your client should handle each type to provide a responsive user experience.
| Event | Description | Data Fields |
|---|---|---|
| connected | Initial heartbeat confirming the stream is open. | streamId, timestamp |
| text_delta | Incremental text content. Append to your buffer. | content, accumulated_length |
| tool_start | A tool invocation has begun. | tool, toolId, timestamp |
| tool_call | Tool input parameters (sent after tool_start). | tool, input |
| tool_result | Tool output preview (truncated to 200 chars). | tool, duration, resultPreview |
| tool_complete | Tool execution finished with success/failure status. | tool, toolId, success, duration |
| done | Stream complete. Contains the full reply and metadata. | streamId, reply, personality, responseTime, toolsUsed, confidence, metadata |
| error | An error occurred during processing. | message, streamId |
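The table above can also be expressed as a TypeScript discriminated union for type-safe client-side handling. The field types below are read off the event table and the example stream, not an official schema, so treat them as a sketch:

```typescript
// Discriminated union over the SSE event types; one variant per table row.
type ChatStreamEvent =
  | { event: 'connected'; data: { streamId: string; timestamp: string } }
  | { event: 'text_delta'; data: { content: string; accumulated_length: number } }
  | { event: 'tool_start'; data: { tool: string; toolId: string; timestamp: string } }
  | { event: 'tool_call'; data: { tool: string; input: Record<string, unknown> } }
  | { event: 'tool_result'; data: { tool: string; duration: number; resultPreview: string } }
  | { event: 'tool_complete'; data: { tool: string; toolId: string; success: boolean; duration: number } }
  | {
      event: 'done'
      data: {
        streamId: string
        reply: string
        personality: string
        responseTime: number
        toolsUsed: string[]
        confidence: number
        metadata: Record<string, unknown>
      }
    }
  | { event: 'error'; data: { message: string; streamId: string } }
```

Narrowing on the `event` field then gives you the correct `data` shape in each branch of a `switch`.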
Client Example
A minimal TypeScript example that consumes the SSE stream, appends text deltas, and handles tool and completion events.
400">const response = 400">await fetch(400">class="text-emerald-400">'https:400">class="text-zinc-500">//api.lvng.ai/api/v2/chat/stream', {
method: 400">class="text-emerald-400">'POST',
headers: {
400">class="text-emerald-400">'Authorization': 400">class="text-emerald-400">'Bearer YOUR_JWT_TOKEN',
400">class="text-emerald-400">'Content-Type': 400">class="text-emerald-400">'application/json',
},
body: JSON.stringify({
message: 400">class="text-emerald-400">'Explain quantum computing in simple terms',
userId: 400">class="text-emerald-400">'f6a7b8c9-0d1e-2f3a-4b5c-6d7e8f901234',
channelId: 400">class="text-emerald-400">'a1b2c3d4-e5f6-7890-abcd-ef1234567890',
}),
})
400">const reader = response.body!.getReader()
400">const decoder = 400">new TextDecoder()
400">let buffer = 400">class="text-emerald-400">''
400">let fullText = 400">class="text-emerald-400">''
while (true) {
400">const { done, value } = 400">await reader.read()
400">if (done) break
buffer += decoder.decode(value, { stream: true })
400">const lines = buffer.split(400">class="text-emerald-400">'\n')
buffer = lines.pop() || 400">class="text-emerald-400">''
400">let currentEvent = 400">class="text-emerald-400">''
for (400">const line of lines) {
400">if (line.startsWith(400">class="text-emerald-400">'event: ')) {
currentEvent = line.slice(7)
} 400">else 400">if (line.startsWith(400">class="text-emerald-400">'data: ')) {
400">const data = JSON.parse(line.slice(6))
switch (currentEvent) {
case 400">class="text-emerald-400">'connected':
console.log(400">class="text-emerald-400">'Stream started:', data.streamId)
break
case 400">class="text-emerald-400">'text_delta':
fullText += data.content
process.stdout.write(data.content)
break
case 400">class="text-emerald-400">'tool_start':
console.log(400">class="text-emerald-400">'\nTool started:', data.tool)
break
case 400">class="text-emerald-400">'tool_complete':
console.log(400">class="text-emerald-400">'Tool done:', data.tool, data.duration + 400">class="text-emerald-400">'ms')
break
case 400">class="text-emerald-400">'done':
console.log(400">class="text-emerald-400">'\n---')
console.log(400">class="text-emerald-400">'Response time:', data.responseTime + 400">class="text-emerald-400">'ms')
console.log(400">class="text-emerald-400">'Tools used:', data.toolsUsed)
break
case 400">class="text-emerald-400">'error':
console.error(400">class="text-emerald-400">'Stream error:', data.message)
break
}
}
}
}
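The line parsing in the loop above can also be factored into a pure helper, which makes it unit-testable and skips SSE keepalive comments (lines beginning with `:`) explicitly. `parseSSEChunk` is a name introduced here for illustration, not part of any SDK:

```typescript
interface SSEMessage {
  event: string
  data: unknown
}

// Parse complete SSE lines into (event, data) pairs. Comment lines
// (keepalives, beginning with ':') and blank lines are skipped.
// Returns parsed messages plus any trailing partial line to carry over.
function parseSSEChunk(buffer: string): { messages: SSEMessage[]; rest: string } {
  const lines = buffer.split('\n')
  const rest = lines.pop() || ''
  const messages: SSEMessage[] = []
  let currentEvent = ''
  for (const line of lines) {
    if (line.startsWith(':')) continue // keepalive comment
    if (line.startsWith('event: ')) {
      currentEvent = line.slice(7).trim()
    } else if (line.startsWith('data: ')) {
      messages.push({ event: currentEvent, data: JSON.parse(line.slice(6)) })
    }
  }
  return { messages, rest }
}
```

Call it with `buffer += chunk` on each read, then reassign `buffer = rest` before dispatching the returned messages.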