# Responses
We have defined a consistent structure for all API responses.
## Success

Successful responses wrap the result in a `data` field:
```json
{
  "data": {
    "id": "brand_abc123",
    "name": "My Brand"
  }
}
```

## Errors
Failed requests return an `error` object with a numeric `code` (for your scripts to handle) and a
human-readable `description`:
```json
{
  "error": {
    "code": 1002,
    "description": "Invalid input parameters"
  }
}
```

## Error codes reference
| Code | Description |
|---|---|
| 1001 | Internal error |
| 1002 | Invalid input parameters |
| 1003 | Authentication configuration error |
| 1004 | Invalid or expired API token |
| 1005 | Missing authorization header. Please provide a Bearer token. |
| 1006 | Item not found |
| 1007 | Too many requests |
| 1008 | Duplicated item |
| 1009 | Monitoring limit exceeded |
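When a response contains an `error` object, branch on the numeric `code` rather than matching the description text. Below is a minimal sketch of that pattern, assuming a Node 18+ runtime with the built-in `fetch`; the base URL and endpoint path are placeholders, not documented routes:

```typescript
const API_KEY = process.env.API_KEY ?? "";

interface ApiError {
  code: number;
  description: string;
}

// Placeholder URL and path: substitute the real endpoint from the API reference.
async function getBrand(id: string): Promise<unknown> {
  const response = await fetch(`https://api.example.com/brands/${id}`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  const body = await response.json();

  if (body.error) {
    const { code, description } = body.error as ApiError;
    switch (code) {
      case 1004:
        throw new Error("API token invalid or expired - refresh your credentials");
      case 1007:
        throw new Error("Rate limited - retry later with backoff");
      default:
        throw new Error(`API error ${code}: ${description}`);
    }
  }

  // Successful responses wrap the payload in `data`.
  return body.data;
}
```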
## Streaming responses (NDJSON)
A handful of endpoints return potentially very large result sets and stream them back as
newline-delimited JSON (NDJSON) instead of a single JSON
document. They respond with `Content-Type: application/x-ndjson` and a chunked body where each
line is a self-contained JSON object:
{"id":"row_1","...":"..."}
{"id":"row_2","...":"..."}
{"id":"row_3","...":"..."}There is no pagination on streaming endpoints. The server keeps the HTTP connection open and writes rows as they become available, so expect a long-lived response when the filter matches a lot of data.
Parse the body line by line as it arrives rather than buffering the whole response; this keeps memory use flat even on very large exports. For example:
```typescript
const response = await fetch(url, { headers: { Authorization: `Bearer ${API_KEY}` } });
const reader = response.body!.pipeThrough(new TextDecoderStream()).getReader();

let buffer = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  buffer += value;
  // Split on newlines; the last element may be a partial line, so keep it in the buffer.
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? "";
  for (const line of lines) {
    if (line) {
      const row = JSON.parse(line);
      // handle row
    }
  }
}
// Flush a final row that was not newline-terminated.
if (buffer.trim()) {
  const row = JSON.parse(buffer);
  // handle row
}
```

If you can't keep an HTTP connection open for the duration of the request (for example for
scheduled jobs, very large date ranges or background pipelines), streaming endpoints usually have
an asynchronous POST counterpart that writes the same data to cloud storage (typically as a CSV
file, ready to load into BI tools) and returns a presigned download URL. See
Async operations for how to poll those.
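As a rough sketch of that flow, assuming hypothetical `/exports` and `/operations/{id}` endpoints and made-up response fields (the real routes, payloads and status values are documented under Async operations):

```typescript
// Illustrative only: the paths, request body and response fields below are placeholders,
// not documented API routes. Adapt them to the Async operations reference.
const API_KEY = process.env.API_KEY ?? "";
const BASE_URL = "https://api.example.com"; // placeholder base URL

async function exportToStorage(filter: Record<string, unknown>): Promise<string> {
  // 1. Start the asynchronous export (hypothetical endpoint).
  const start = await fetch(`${BASE_URL}/exports`, {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify(filter),
  });
  const { data } = await start.json();

  // 2. Poll the operation until it finishes (status and field names are assumptions).
  while (true) {
    const poll = await fetch(`${BASE_URL}/operations/${data.id}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const { data: op } = await poll.json();
    if (op.status === "completed") return op.downloadUrl; // presigned URL to the CSV file
    if (op.status === "failed") throw new Error("Export failed");
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // wait before polling again
  }
}
```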