The Single Job Contract is a lightweight, language-agnostic specification that lets teams run identical background jobs across Node.js, Go, Python, and PHP workers. It defines a shared schema, consistent runtime semantics, and predictable observability no matter which language executes the task.
Why a Single Job Contract matters
Polyglot infrastructure is common in modern engineering organizations, but differing job formats, retry semantics, and observability approaches produce operational friction. A Single Job Contract standardizes the payload, metadata, error handling expectations, and tracing headers so every worker can:
- Deserialize and validate the same payload format
- Apply identical retry and backoff rules
- Emit the same metrics and trace context
- Handle idempotency and dead-lettering consistently
Core fields of a practical job contract
Keep the contract minimal but explicit. A recommended canonical JSON schema includes:
- version — contract version for schema migration
- id — globally unique job ID (UUID)
- type — logical handler name (e.g., “send_email”)
- payload — JSON object with task-specific data
- created_at and scheduled_for — ISO8601 timestamps
- attempts and max_attempts — retry counters
- dedup_key — optional key to enforce deduplication/idempotency
- ttl — time-to-live after which the job is dead-lettered
- priority — optional QoS hint
- backoff — strategy name (fixed, exponential) and base_delay_ms
- trace_context / correlation_id — W3C traceparent or equivalent for distributed tracing
- headers — free-form metadata (source, team, feature_flag)
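A concrete message helps make the field list tangible. The sketch below builds one example job in Python; the handler name, payload shape, and header values are illustrative, not part of the contract:

```python
import json
import uuid
from datetime import datetime, timezone

# An illustrative job message using the canonical fields above.
# Values (handler name, payload, headers) are examples, not prescribed.
job = {
    "version": 1,
    "id": str(uuid.uuid4()),
    "type": "send_email",
    "payload": {"to": "user@example.com", "template": "welcome"},
    "created_at": datetime.now(timezone.utc).isoformat(),
    "scheduled_for": datetime.now(timezone.utc).isoformat(),
    "attempts": 0,
    "max_attempts": 5,
    "dedup_key": "welcome-email:user@example.com",
    "ttl": 86400,  # seconds until the job is dead-lettered
    "priority": 5,
    "backoff": {"strategy": "exponential", "base_delay_ms": 500},
    "trace_context": {
        "traceparent": "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
    },
    "headers": {"source": "signup-service", "team": "growth"},
}

wire = json.dumps(job).encode("utf-8")  # UTF-8 JSON on the wire
```

Because every field is a plain JSON type, any of the four languages can round-trip this message without custom codecs.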
Serialization and transport
JSON is the simplest cross-language wire format; use UTF-8 encoded JSON for payloads and base64 for any binary attachments. For high-performance or strongly typed contracts, Protobuf can be used but requires generating language bindings and managing schema registries. Include a content-type header (application/json or application/x-protobuf) so consumers know how to decode the payload.
Handler signature and behavior across languages
Normalize the handler contract so every language implements the same lifecycle:
- Handler receives a job object with {id, type, payload, headers, trace_context}.
- Handler returns success or throws an error. On error, the worker increments attempts and enforces backoff.
- On success the job is acknowledged; on unrecoverable failure it is moved to a dead-letter queue (DLQ) with metadata about failure reason.
This simple pattern maps cleanly to Node.js (async/await), Go (context-aware functions), Python (exceptions), and PHP (exceptions or return codes).
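The lifecycle above can be sketched as a single dispatch step. This is a minimal Python illustration, assuming a hypothetical `Unrecoverable` exception class for failures where retrying cannot help, and an in-memory list standing in for a real DLQ:

```python
class Unrecoverable(Exception):
    """Raised by a handler when retrying cannot help (e.g. malformed payload)."""

def process(job: dict, handlers: dict, dlq: list) -> str:
    """One pass of the lifecycle: dispatch, then ack, retry, or dead-letter.

    `handlers` maps job type to a callable; `dlq` stands in for a real
    dead-letter queue. Both names are illustrative, not part of the contract.
    """
    handler = handlers[job["type"]]
    try:
        handler(job)
        return "ack"                        # success: acknowledge the job
    except Unrecoverable as exc:
        dlq.append({**job, "failure_reason": str(exc)})
        return "dead-letter"
    except Exception as exc:
        job["attempts"] += 1                # transient failure: count the attempt
        if job["attempts"] >= job["max_attempts"]:
            dlq.append({**job, "failure_reason": str(exc)})
            return "dead-letter"
        return "retry"                      # worker re-enqueues with backoff
```

The same three outcomes (ack, retry, dead-letter) translate directly to thrown exceptions in Python and PHP, returned errors in Go, and rejected promises in Node.js.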
Library choices by language
Pick libraries that support the contract style you need (Redis, RabbitMQ, SQS, or streams):
- Node.js: BullMQ (Redis) for robust features and delayed jobs; use ioredis and @opentelemetry/node for tracing.
- Go: Asynq (Redis) is production-ready with backoff and scheduling; integrate opentelemetry-go for trace propagation.
- Python: Dramatiq (Redis) or Celery (broker-agnostic) depending on scale; Dramatiq has simpler semantics for contract-driven jobs.
- PHP: Symfony Messenger or Laravel Queue (Redis, SQS); both let you normalize middleware for tracing and retry logic.
Retries, backoff and idempotency
Define a single retry policy in the contract to avoid surprises:
- Default max_attempts (e.g., 5) with exponential backoff and jitter to prevent thundering herds
- Include attempt timestamps so monitoring can surface repeated failures quickly
- Mandate an idempotency strategy: handlers should use dedup_key + persistent store or rely on idempotent downstream APIs
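The backoff rule above can be computed directly from the contract's `backoff` block. The sketch below uses "full jitter" (a random delay between zero and the exponential cap), one common way to spread out retries; the default base and cap values are illustrative:

```python
import random

def next_delay_ms(attempt: int, base_delay_ms: int = 500,
                  max_delay_ms: int = 60_000, strategy: str = "exponential") -> int:
    """Delay before retry `attempt` (1-based), per the contract's backoff block.

    Full jitter: pick a random delay in [0, cap], where cap grows
    exponentially with the attempt number (bounded by max_delay_ms).
    """
    if strategy == "fixed":
        cap = base_delay_ms
    else:  # exponential: base * 2^(attempt - 1), bounded by max_delay_ms
        cap = min(max_delay_ms, base_delay_ms * (2 ** (attempt - 1)))
    return random.randint(0, cap)
```

Keeping this calculation in one middleware function per language (rather than relying on each queue library's built-in policy) is what makes retry timing identical across workers.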
Observability: metrics, logs, traces
Observability must be first-class and identical across stacks. Standardize on:
- Metrics (Prometheus names): job_processed_total, job_failed_total, job_duration_seconds_bucket, job_retries_total, job_queue_depth
- Tracing: include W3C traceparent/tracestate in trace_context and use OpenTelemetry SDKs for Node, Go, Python and PHP so traces stitch end-to-end
- Structured logs: include job.id, type, correlation_id, attempt, duration_ms, outcome, and error details as JSON to enable log-based troubleshooting
- Dead-letter notifications: emit an alert metric and push human-readable failure payloads to a DLQ topic or monitoring channel
Operational patterns and testing
To deploy a contract without breaking consumers:
- Version the contract and support backward-compatible changes (add optional fields, avoid renaming)
- Run canary consumers that accept both v1 and v2 schemas during migration
- Include contract conformance tests in CI that validate example messages against generated schemas for each language
- Provide a reference handler in each language as a canonical implementation developers can copy
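A conformance test can be as simple as validating required fields and types on a wire message. The sketch below is a stand-in for a real schema validator (e.g. JSON Schema); in CI it would run against example messages produced by each language's serializer:

```python
import json

# Required fields and their JSON types; a subset of the contract for brevity.
REQUIRED = {
    "version": int,
    "id": str,
    "type": str,
    "payload": dict,
    "created_at": str,
    "attempts": int,
    "max_attempts": int,
}

def conformance_errors(message: bytes) -> list[str]:
    """Return human-readable violations for one wire message (empty = conformant)."""
    try:
        job = json.loads(message.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        return [f"not valid UTF-8 JSON: {exc}"]
    errors = []
    for field, expected in REQUIRED.items():
        if field not in job:
            errors.append(f"missing required field: {field}")
        elif not isinstance(job[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors
```

Running the same checker against fixtures from every worker catches drift (a renamed field, a string where an int is expected) before it reaches production.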
Common pitfalls and mitigations
- Assuming exact clock sync: always use server-side created_at timestamps and tolerate skew
- Different retry semantics across libraries: prefer centralizing retry decisions in a middleware layer that reads the contract
- Missing trace propagation: enforce trace_context in contract and add middleware in each worker to inject/extract tracing headers
- Non-idempotent handlers: require idempotency keys and document expected semantics per job type
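The dedup_key mitigation above can be wrapped around any handler. This Python sketch uses an in-memory set standing in for a persistent store (Redis SETNX, a unique database index); the function name and `seen` parameter are illustrative:

```python
def run_once(job: dict, handler, seen: set) -> str:
    """Skip a job whose dedup_key has already been processed successfully.

    `seen` stands in for a persistent store; an in-memory set is only
    safe within a single process and is used here for illustration.
    """
    key = job.get("dedup_key")
    if key is not None and key in seen:
        return "duplicate"        # already handled: ack without running
    handler(job)
    if key is not None:
        seen.add(key)             # record only after the handler succeeds
    return "processed"
```

Recording the key only after success means a crash mid-handler leads to a retry rather than a silently dropped job, which is why the handler itself must still tolerate at-least-once delivery.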
Getting started checklist
- Define a minimal JSON schema for the job contract and publish it in a shared repo
- Create middleware libraries in each language to validate the schema, extract trace context, and emit metrics
- Implement a DLQ policy and alerting playbook
- Ship a canonical job handler for Node.js, Go, Python and PHP and include sample CI contract tests
Adopting a Single Job Contract greatly reduces cognitive load, shortens onboarding, and makes operational incidents easier to diagnose across polyglot teams. With a small upfront investment in schema design, middleware, and observability, identical tasks can be run reliably from Node.js, Go, Python, or PHP workers.
Ready to standardize your job contracts? Start by drafting a minimal JSON schema and implementing lightweight validation middleware in each language.
