# Troubleshooting
Most first-run problems land on this page. If yours doesn't, email us — we'd rather add a paragraph here than leave you stuck.
## "My spans aren't showing up in the dashboard"
### Check 1: Is the script exiting before the batch ships?
OpenTelemetry's BatchSpanProcessor ships every 5 seconds or
every 512 spans. A short script that exits in <5 seconds will
terminate with the batch still in memory.
Fix: explicitly flush at the end:
```python
# Python
from opentelemetry import trace

trace.get_tracer_provider().shutdown()
```

```javascript
// Node.js
await sdk.shutdown();
```
### Check 2: Is the API key right?
A bad key returns 401 on the OTLP exporter — but the
exporter doesn't always print errors loudly. Run a curl smoke test first:
```bash
curl -i -X POST https://marturia.dev/api/marturia/v1/traces \
  -H "X-Marturia-Key: $MARTURIA_API_KEY" \
  -H "Content-Type: application/json" -d '{}'
```
A 200/202 means auth works. A 401 means the key is wrong, revoked, or hasn't been minted yet.
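If you'd rather script the check than shell out to curl, the same smoke test can be written with stdlib Python. The status-code meanings below come from this page; the helper names are mine:

```python
import os
import urllib.error
import urllib.request

ENDPOINT = "https://marturia.dev/api/marturia/v1/traces"

def interpret_status(code: int) -> str:
    # Map the smoke test's response code to a diagnosis
    # (codes as documented on this page).
    if code in (200, 202):
        return "auth works"
    if code == 401:
        return "key is wrong, revoked, or not yet minted"
    if code == 429:
        return "over the monthly span quota"
    return f"unexpected status {code}"

def smoke_test() -> str:
    # POST an empty JSON object, exactly like the curl command above.
    req = urllib.request.Request(
        ENDPOINT,
        data=b"{}",
        headers={
            "X-Marturia-Key": os.environ["MARTURIA_API_KEY"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return interpret_status(resp.status)
    except urllib.error.HTTPError as err:
        return interpret_status(err.code)
```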
### Check 3: Are you over the monthly span quota?
The dashboard shows a banner at 80% and 95% utilization, and ingest
returns 429 at 100%. `GET /api/marturia/projects/{id}/usage`
returns the current quota state.
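To automate the check, you can poll that usage endpoint and reproduce the dashboard's banners locally. The thresholds below are the ones this page documents; the exact fields of the usage response aren't stated here, so treat the JSON handling as an assumption:

```python
import json
import urllib.request

def quota_state(project_id: str, api_key: str) -> dict:
    # GET the current quota state (response shape is an assumption).
    url = f"https://marturia.dev/api/marturia/projects/{project_id}/usage"
    req = urllib.request.Request(url, headers={"X-Marturia-Key": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def utilization_band(used: int, limit: int) -> str:
    # Mirror the dashboard: banners at 80% and 95%, 429s at 100%.
    if limit <= 0:
        return "no quota configured"
    ratio = used / limit
    if ratio >= 1.0:
        return "over quota: ingest returns 429"
    if ratio >= 0.95:
        return "95% banner"
    if ratio >= 0.80:
        return "80% banner"
    return "ok"
```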
### Check 4: Wrong endpoint?
The endpoint is `https://marturia.dev/api/marturia/v1/traces`,
not `/v1/traces` alone, not `/otlp/v1/traces`, and not the
gRPC port. The OTLP/HTTP exporter appends `/v1/traces` to the
`OTEL_EXPORTER_OTLP_ENDPOINT` env var automatically, so set the base only:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://marturia.dev/api/marturia"
```
## "401 Unauthorized" on receipt creation
- The header name is `X-Marturia-Key` (not `Authorization`).
- The value is the plaintext shown in the Keys page modal, not the displayed prefix or hash.
- Keys are tenant-scoped. If you switched accounts, mint a new key in the right tenant.
- If your key was rotated/revoked, the "Status" column in the Keys page shows `revoked`; mint a new one.
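A quick way to rule out the first bullet is to build the request in code and inspect its headers. The `/v1/receipts` path below is a placeholder (this page doesn't state the receipt endpoint); the point is that the key travels in `X-Marturia-Key`:

```python
import json
import urllib.request

def receipt_request(api_key: str, payload: dict) -> urllib.request.Request:
    # The key goes in X-Marturia-Key, not Authorization.
    # The /v1/receipts path is illustrative; use your real endpoint.
    return urllib.request.Request(
        "https://marturia.dev/api/marturia/v1/receipts",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "X-Marturia-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
```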
## "400 payload exceeds 64KB limit"
The receipt payload is canonical-JSON encoded and hard-capped at 64 KB. Common causes:
- You're putting the full LLM transcript in the payload. Store the transcript in your own system and put the `transcript_id` in the receipt instead.
- You've embedded a base64 image or document. Same fix: reference it externally.
- You're putting a span dump in the receipt. Receipts and spans are separate concepts: the span belongs in the OTLP ingest; the receipt is the agent's decision, not its trace.
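You can catch oversized payloads before they hit the API. The sketch below approximates canonical JSON as compact, key-sorted UTF-8; the service's exact canonicalization may differ, so treat this as a pre-flight estimate rather than a guarantee:

```python
import json

MAX_RECEIPT_BYTES = 64 * 1024  # the 64 KB hard cap

def canonical_size(payload: dict) -> int:
    # Approximate canonical JSON as compact, key-sorted UTF-8.
    encoded = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return len(encoded.encode("utf-8"))

def check_payload(payload: dict) -> None:
    # Raise before sending if the encoded payload would exceed the cap.
    size = canonical_size(payload)
    if size > MAX_RECEIPT_BYTES:
        raise ValueError(
            f"receipt payload is {size} bytes; cap is {MAX_RECEIPT_BYTES}. "
            "Store large blobs externally and reference them by id, "
            "e.g. transcript_id."
        )
```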
## "429 Too Many Requests" on receipts
The receipt endpoint is rate-limited to 60/minute/project. If you're generating receipts in a tight loop:
- Batch decisions and emit one receipt per batch.
- If you genuinely need higher throughput, ask us — the limit is protecting against runaway loops, not a hard product cap.
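If you can't batch everything, a minimal client-side guard is to back off and retry on 429. A sketch under assumptions: the 60/minute limit is from this page, but `send` and the backoff schedule are mine:

```python
import time
from typing import Callable

def post_with_backoff(send: Callable[[], int], max_retries: int = 5,
                      sleep: Callable[[float], None] = time.sleep) -> int:
    # send() performs one receipt POST and returns the HTTP status code.
    # On 429, wait with exponential backoff and retry; otherwise return.
    delay = 1.0
    status = send()
    for _ in range(max_retries):
        if status != 429:
            return status
        sleep(delay)
        delay = min(delay * 2, 60.0)
        status = send()
    return status
```

Injecting `sleep` keeps the helper testable; in production the default `time.sleep` is used.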
## "VALID" vs "INVALID" outcomes from marturia-verify
| Output | What happened |
|---|---|
| `VALID:` followed by sequence info | Signature, hash, sequence all check out. Receipt is authentic. |
| `INVALID: receipt_hash mismatch` | Payload was modified after signing. The receipt's content does not match its hash. |
| `INVALID: signature does not verify` | Either the signature was forged, or the wrong public key was supplied. Confirm the public key matches `signing_kid`. |
| `INVALID: sequence broken at N` | You're verifying a chain (multiple receipts) and one is missing or out of order. Pull the missing receipt by sequence and re-run. |
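`marturia-verify` performs the cryptographic checks, but the sequence-continuity check alone is easy to reproduce if you want to locate a gap yourself. A sketch of just that logic (the list of sequence numbers as input is my assumption):

```python
from typing import Iterable, Optional

def find_sequence_break(sequences: Iterable[int]) -> Optional[int]:
    # Return the first expected sequence number that is missing or out
    # of order (the N in "INVALID: sequence broken at N"), or None if
    # the chain is contiguous.
    expected = None
    for seq in sequences:
        if expected is not None and seq != expected:
            return expected
        expected = seq + 1
    return None
```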
## Refresh-token flow
The dashboard's auth uses short-lived JWTs + rotating refresh tokens. If you build an integration off the dashboard endpoints:
- Refresh tokens are rotated on every `/api/auth/refresh` call. The old refresh token is revoked the moment the new one is issued.
- If you save refresh tokens, save the most recent one; older ones are dead immediately.
- Refresh-token theft is bounded: any legitimate refresh by either the user or the attacker invalidates the other party's token. Use that as your detection signal.
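A client that honors the rotation rule keeps exactly one live refresh token and treats a failed refresh as a possible theft signal. A sketch (the endpoint name comes from this page; the store and its method names are illustrative):

```python
class RefreshTokenStore:
    """Holds the single live refresh token for /api/auth/refresh."""

    def __init__(self, refresh_token: str):
        self._token = refresh_token

    @property
    def token(self) -> str:
        return self._token

    def on_refresh_result(self, status: int, new_token: str = "") -> str:
        # 200: rotation succeeded. Overwrite immediately; the old
        # token is already revoked server-side.
        # 401: our token was invalidated. Since any legitimate refresh
        # kills the other party's token, treat this as a theft signal.
        if status == 200:
            self._token = new_token
            return "rotated"
        if status == 401:
            return "token invalidated: possible reuse/theft; force re-login"
        return f"unexpected status {status}"
```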
## "Why is my dashboard showing 0% used but I just sent spans?"
Quota counters update on the next ingest cycle, typically within a few seconds; the tile reflects the new total as soon as the database commits the increment. There is no minute-long aggregation lag.
If utilization stays at zero, your spans are being dropped silently. Re-run the curl smoke test from "Check 2" above and check the response code.
## Still stuck?
[email protected] — include your project ID, the approximate UTC time of the failing request, and the response code if you have one. We can grep ingest logs faster with those three.