OTLP setup
Marturia accepts standard OpenTelemetry OTLP/HTTP. There's no proprietary SDK or vendor lock-in — point any OTel exporter at our endpoint and add one auth header.
The endpoint contract
| Field | Value |
|---|---|
| URL | https://marturia.dev/api/marturia/v1/traces |
| Method | POST |
| Protocol | OTLP/HTTP, protobuf encoding |
| Auth header | X-Marturia-Key: mar_live_xxx |
| Max body | 5 MB per request |
| Rate limit | By project quota tier (Free 100K spans/mo, Pro 1M, Team 10M) |
We do not require an Authorization: Bearer … header. The exporter's headers config is the only thing we read for auth, so don't waste time wiring up a JWT layer.
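If you prefer zero-code configuration, most OTel SDKs also honor the standard OTLP environment variables. A sketch of that approach; note that whether a given SDK reads these depends on its version, and the explicit constructor arguments in the examples below take precedence over them:
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://marturia.dev/api/marturia/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_HEADERS="X-Marturia-Key=mar_live_xxx"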
Python
The canonical OpenTelemetry Python setup. Save it as otel_setup.py and call init_tracing once at the top of your application entrypoint.
# otel_setup.py
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter,
)
def init_tracing(service_name: str) -> None:
resource = Resource.create({
"service.name": service_name,
"deployment.environment": os.getenv("ENV", "production"),
})
exporter = OTLPSpanExporter(
endpoint="https://marturia.dev/api/marturia/v1/traces",
headers={"X-Marturia-Key": os.environ["MARTURIA_API_KEY"]},
)
provider = TracerProvider(resource=resource)
# Batch processor: dedicated thread, batches every 5s or 512 spans.
provider.add_span_processor(BatchSpanProcessor(
exporter,
max_queue_size=2048,
max_export_batch_size=512,
schedule_delay_millis=5000,
))
trace.set_tracer_provider(provider)
# Use it like this:
# import otel_setup; otel_setup.init_tracing("my-service")
# tracer = trace.get_tracer(__name__)
# with tracer.start_as_current_span("step_name") as span:
# span.set_attribute("llm.model", "gpt-4")
Short-lived scripts must call trace.get_tracer_provider().shutdown() before exit, or the process will terminate with spans still queued in memory. Long-running services don't need this; the batches ship continuously.
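For example, a minimal one-shot job might look like this (a sketch; the service name, span name, and the work inside the span are placeholders):
# batch_job.py: hypothetical short-lived script that flushes before exiting.
import otel_setup
from opentelemetry import trace

otel_setup.init_tracing("nightly-export")
tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("export_step"):
    pass  # your actual work goes here

# Flush the BatchSpanProcessor queue; without this, queued spans are lost.
trace.get_tracer_provider().shutdown()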
Node.js
// otel-setup.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } =
require('@opentelemetry/exporter-trace-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } =
require('@opentelemetry/semantic-conventions');
const sdk = new NodeSDK({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'my-service',
}),
traceExporter: new OTLPTraceExporter({
url: 'https://marturia.dev/api/marturia/v1/traces',
headers: {
'X-Marturia-Key': process.env.MARTURIA_API_KEY,
},
}),
});
sdk.start();
// Required for graceful shutdown so the BatchSpanProcessor flushes.
process.on('SIGTERM', () => sdk.shutdown());
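Load the setup file before any application code so the SDK is registered first. One way, assuming a hypothetical ./app module for your own code:
// index.js: initialize tracing before requiring the rest of the app.
require('./otel-setup');
const app = require('./app'); // hypothetical application module
Alternatively, node --require ./otel-setup.js server.js loads it without touching application code.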
Go
package main
import (
"context"
"os"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)
func initTracer(ctx context.Context, svc string) (*sdktrace.TracerProvider, error) {
exp, err := otlptracehttp.New(ctx,
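		// WithEndpoint takes host[:port] only; TLS is on by default, so this targets https://marturia.dev.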
otlptracehttp.WithEndpoint("marturia.dev"),
otlptracehttp.WithURLPath("/api/marturia/v1/traces"),
otlptracehttp.WithHeaders(map[string]string{
"X-Marturia-Key": os.Getenv("MARTURIA_API_KEY"),
}),
)
if err != nil {
return nil, err
}
res, _ := resource.New(ctx,
resource.WithAttributes(semconv.ServiceName(svc)),
)
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exp),
sdktrace.WithResource(res),
)
otel.SetTracerProvider(tp)
return tp, nil
}
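A minimal main for the same file, sketching the intended call pattern; the service and span names are placeholders:
func main() {
	ctx := context.Background()
	tp, err := initTracer(ctx, "my-service")
	if err != nil {
		panic(err)
	}
	// Shutdown flushes the batcher so queued spans ship before exit.
	defer tp.Shutdown(ctx)

	tracer := otel.Tracer("my-service")
	_, span := tracer.Start(ctx, "step_name")
	span.End()
}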
Raw HTTP (any language)
If your language doesn't have an OTel SDK or you want a minimal footprint, encode an OTLP protobuf payload yourself and POST it. The protobuf schema is at opentelemetry/proto/trace/v1/trace.proto.
For a manual smoke test, JSON-encoded OTLP is also accepted with
Content-Type: application/json at the same endpoint:
curl -X POST https://marturia.dev/api/marturia/v1/traces \
-H "X-Marturia-Key: $MARTURIA_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"resourceSpans": [{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "manual"}}
]
},
"scopeSpans": [{
"spans": [{
"traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
"spanId": "051581bf3cb55c13",
"name": "manual_span",
"kind": "SPAN_KIND_INTERNAL",
"startTimeUnixNano": "'$(date +%s%N)'",
"endTimeUnixNano": "'$(date +%s%N)'"
}]
}]
}]
}'
Sampling
Default OTel samplers are fine. For high-volume services, use a head-based ratio sampler before you hit your project quota:
# Python
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased
provider = TracerProvider(
resource=resource,
sampler=TraceIdRatioBased(0.1), # 10% of traces
)
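In a multi-service setup you usually want the ratio decision made once at the trace root and respected downstream; the SDK's ParentBased wrapper does that. A sketch with the same 10% ratio:
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

provider = TracerProvider(
    resource=resource,
    # Follow the parent's sampling decision; sample 10% of new root traces.
    sampler=ParentBased(TraceIdRatioBased(0.1)),
)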
Tail-based sampling (sample every error, sample 1% of successes) is handled by an OpenTelemetry Collector, not the SDK. We support that pattern — point your Collector at our endpoint exactly the same way.
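If you run a Collector, a tail-sampling pipeline along those lines might look like the sketch below. It assumes the contrib Collector build (the tail_sampling processor lives in opentelemetry-collector-contrib); adjust policy names, wait time, and percentages to taste.
# collector-config.yaml (sketch): keep every error, 1% of the rest.
receivers:
  otlp:
    protocols:
      http:

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: sample-rest
        type: probabilistic
        probabilistic: {sampling_percentage: 1}

exporters:
  otlphttp:
    traces_endpoint: https://marturia.dev/api/marturia/v1/traces
    headers:
      X-Marturia-Key: ${env:MARTURIA_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlphttp]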
Quota awareness
When you hit the monthly span cap for your project's tier, additional
ingest returns 429 Too Many Requests until the next billing
cycle. The dashboard shows a banner at 80% and 95%. We never silently
drop spans — every dropped span is counted and shown.
Multiple services, one project
Each service should set a distinct service.name resource
attribute. The dashboard splits by service so you can compare them side
by side. One API key can ingest from many services.
Now that ingest is wired, the next step is making your spans actually useful — see recommended attributes.