

Gnomes, Domains, and Flows: Putting It Together
From checklist to a running payments path
Posted: 2025-11-12
Categories: development, Erlang, BEAM, domains, flows, processes


Glossary

  • Resource owner = the process that owns mutable state for a domain object.
  • Request owner = the process coordinating one user request end-to-end within a domain.
  • Adapter = the boundary process that talks to an external system.
  • Orchestration = the domain that coordinates multi-step work across other domains.

Figure: a calm system landscape.

Recap the ingredients

Domains keep code and data in their lanes. Flows choreograph how work crosses those lanes. Processes stay light: they read scrolls from the right domain, follow the flow contracts, and let supervisors restart them when needed. The easiest way to validate the frame is to walk through one production path and make sure every element has a deliberate owner.

Diagram

The picture is simple: every box is its own OTP application with a supervision tree. The API validates, orchestration coordinates, the ledger owns truth, and adapters isolate external chaos.

PSP = payment service provider (card processor). The adapter isolates that boundary so the rest of the system never sees its quirks.
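
To make "adapters isolate external chaos" concrete, here is a minimal sketch of a PSP adapter: a gen_server that owns the call to the processor and translates its quirks into a small internal result type. The module name, API, timeout, and stubbed reply are illustrative, not code from a real system.

    -module(psp_adapter).
    -behaviour(gen_server).

    -export([start_link/0, authorize/2]).
    -export([init/1, handle_call/3, handle_cast/2]).

    start_link() ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    %% The rest of the system only ever sees this small result type;
    %% PSP-specific response codes and timeouts never leak past here.
    authorize(CardToken, AmountMinor) ->
        try gen_server:call(?MODULE, {authorize, CardToken, AmountMinor}, 2000)
        catch exit:{timeout, _} -> {error, psp_unavailable}
        end.

    init([]) ->
        {ok, #{}}.

    handle_call({authorize, _CardToken, _AmountMinor}, _From, State) ->
        %% A real adapter would call the PSP's HTTP API here and map its
        %% response codes onto {ok, AuthRef} | {error, declined}.
        {reply, {ok, <<"auth-ref-stub">>}, State}.

    handle_cast(_Msg, State) ->
        {noreply, State}.

If the PSP misbehaves, callers get {error, psp_unavailable} and the adapter's own supervision tree absorbs the crash, which is exactly the boundary the diagram draws.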

Payments path walkthrough

Consider a simple “authorize card, log ledger entry, notify customer” workflow.

  1. Call flow: The API domain receives an HTTPS call. It runs synchronous validation, enforces a 200 ms budget, and passes a traced request into the orchestration domain.
  2. Message flow: The orchestrator sends commands to the payments domain (“authorize”), the ledger domain (“append entry”), and the notifications domain (“send receipt”). Each message carries a trace_id, a versioned payload, and an idempotency key so retries are safe (see the envelope sketch after this list).
  3. Data flow: The payments domain owns card tokens and risk data; it emits an authorization fact when done. The ledger domain owns balances and journal entries; it exposes append-only writes. Notifications subscribe to the authorization fact and the ledger entry, derive their own view, and never peek into foreign ETS tables.
  4. Process flow: Each domain runs inside its own supervision tree. The payments domain spawns a per-authorization worker supervised one_for_one. The ledger domain keeps long-lived resource owners guarded by a rest_for_one tree. Notifications use a pool of short-lived workers. If the PSP adapter dies, only its tree restarts; the ledger keeps writing.
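
To make the message-flow contract concrete, here is a sketch of the envelope, assuming the shared #msg_meta{} header mentioned further down. The record fields, the command map, and payments_api:authorize/2 are illustrative names, not code from a real system.

    %% Fragment of a hypothetical orchestrator module.
    -record(msg_meta, {trace_id       :: binary(),
                       span_id        :: binary(),
                       schema_version :: pos_integer(),
                       idem_key       :: binary()}).

    %% Send the "authorize" command across the payments boundary. Every
    %% command crosses a boundary with the same envelope, never a bare term.
    send_authorize(#msg_meta{} = Meta, CardToken, AmountMinor, Currency) ->
        Cmd = #{command      => authorize,
                card_token   => CardToken,
                amount_minor => AmountMinor,
                currency     => Currency},
        payments_api:authorize(Meta, Cmd).
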

Every arrow on the whiteboard now has a name. When something breaks, you know whether it is a data-contract issue, a message protocol bug, a process-topology failure, or a call-budget miss. Compliance questions (“who touched the ledger?”) get answered by pointing to the domain module and its audit log. Operational questions (“why did latency spike?”) use the same flow instrumentation you designed earlier. The orchestrator’s outbox ties database writes and message dispatch together, while the ledger enforces idempotency on the tuple {account_id, idem_key} so replays never duplicate money.
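
A minimal sketch of that idempotency check, assuming the ledger's resource owner keeps its journal in an ETS table it owns; the table and function names are illustrative.

    %% Runs inside the ledger's resource-owner process, which owns the table.
    init_journal() ->
        ledger_entries = ets:new(ledger_entries, [set, named_table, protected]),
        ok.

    append(AccountId, IdemKey, Entry) ->
        Key = {AccountId, IdemKey},
        %% insert_new/2 writes only if the key is absent, so a replayed
        %% command with the same idempotency key cannot create a second entry.
        case ets:insert_new(ledger_entries, {Key, Entry}) of
            true  -> {ok, appended};
            false -> {ok, duplicate_ignored}
        end.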

Logging and tracing at the boundaries

Every domain boundary is also a logging boundary. When the API domain hands control to the payments domain, log the intent, the trace identifier, and the schema version that crossed the line. When the payments domain emits an authorization event, append the fact to its domain log with the same trace_id. The ledger domain writes its own append-only audit trail when balances change. That gives regulators exactly what they ask for: who touched the data, when, and through which contract.
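
One way to wire that up on the receiving side of a boundary, using OTP's logger and the #msg_meta{} header from the earlier sketch; the event names and do_authorize/2 are illustrative.

    handle_boundary(#msg_meta{trace_id = TraceId,
                              schema_version = Vsn,
                              idem_key = IdemKey} = Meta, Cmd) ->
        %% Attach the trace context as logger process metadata so every
        %% later log line from this request owner carries it automatically.
        ok = logger:set_process_metadata(#{trace_id => TraceId,
                                           schema_version => Vsn,
                                           idem_key => IdemKey}),
        logger:info(#{event => boundary_crossed,
                      domain => payments,
                      command => maps:get(command, Cmd, undefined)}),
        do_authorize(Meta, Cmd).   %% hypothetical next step inside the domain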

For cross-domain flows, propagate trace_id and span_id through OpenTelemetry (or the tracing stack you already run). Each message send, process spawn, and reply records the context so you can reconstruct the path end to end. Mailbox metrics tell you if a process is drowning; traces tell you which hop stalled. When every domain owns its logging and every flow forwards its trace context, observability stops being a bolt-on and becomes part of the boundary contract.

Use spans api.receive → orchestrate.authorize → psp.call → ledger.append → notify.send. Log schema_version and idem_key at each hop. If a message replays, the trace shows it, and idempotency plus the shared #msg_meta{} header keep state clean.
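
A sketch of the orchestrate.authorize hop, assuming the opentelemetry_api library plus the record and command map from the earlier sketches; the attribute names and the otel_headers key are illustrative.

    -include_lib("opentelemetry_api/include/otel_tracer.hrl").

    authorize_with_span(#msg_meta{schema_version = Vsn,
                                  idem_key = IdemKey} = Meta, Cmd) ->
        ?with_span(<<"orchestrate.authorize">>,
                   #{attributes => #{<<"schema_version">> => Vsn,
                                     <<"idem_key">>       => IdemKey}},
                   fun(_SpanCtx) ->
                       %% Carry the trace context inside the message so the
                       %% payments domain can continue the same trace.
                       Headers = otel_propagator_text_map:inject([]),
                       payments_api:authorize(Meta, Cmd#{otel_headers => Headers})
                   end).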

Checklist before you ship

  • Domain map: Draw boxes for each domain, list the data they own, and the modules/OTP apps that implement them.
  • Flow contracts: For every boundary, document the data schema, message protocol, supervising tree, and call budget.
  • Observability hooks: Ensure trace IDs, mailbox metrics, schema registries, and breaker dashboards are wired to the flow they observe.
  • Failure drills: Kill a payments worker, a ledger supervisor, and a notification queue. Each failure should stay inside its domain boundary while the flow recovers (see the drill sketch after this list).
  • Outbox pause test: Stop the orchestrator’s outbox process and prove that no dual-write happens; once it restarts, pending messages drain exactly once.
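
A minimal drill for the failure-drill item, assuming the per-authorization workers run under a supervisor registered as payments_worker_sup with at least one active worker; the names are illustrative.

    drill_kill_payments_worker() ->
        Active = fun() ->
                     proplists:get_value(active,
                         supervisor:count_children(payments_worker_sup))
                 end,
        Before = Active(),
        [{_Id, Pid, _Type, _Mods} | _] =
            supervisor:which_children(payments_worker_sup),
        exit(Pid, kill),      %% simulate a crash in one authorization worker
        timer:sleep(200),     %% give the supervisor a moment to restart it
        After = Active(),
        %% If the workers are permanent or transient the counts should match;
        %% either way the ledger tree should be completely untouched.
        {Before, After}.
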

What’s next

If you skipped ahead, start with Gnomes, Domains, and Flows, then read Domains Own Code and Data and Flows Keep Work Moving.

Next up we will focus on the process archetypes (workers, resource owners, routers, gatekeepers, and observers) and show how they sit inside these flows.

Prev: Flows Keep Work Moving | Next: Process archetypes and flow choreography (coming soon)

- Happi

