Encoding Transaction Boundaries as Business Concepts

In most systems, use cases live in the application layer, where services orchestrate domain operations and side-effects such as persistence, messaging, or external calls.

It’s also common for the business to require locally atomic operations: they must succeed or fail together, preserving state invariants in the presence of failure.

How to implement transactions is an infrastructure concern. Defining transaction boundaries is a business requirement.

In the following sections, I will present a technique that leverages the type system to make transaction boundaries explicit in code. Concerns such as isolation and durability remain infrastructure responsibilities.

This results in:

  • Compositionality: Build complex transactions from simpler ones.
  • Testability: Verify atomicity without relying on infrastructure.
  • Type Safety: Transactional operations that are not explicitly committed result in a compilation error.
  • Safe Refactoring: Reduce the chances of incorrectly assuming operations are atomic.

Together, these properties lead to clearer, refactoring-safe, and better-tested codebases.

This approach is well suited when the ambiguity of complex business requirements and failure modes outweighs the convenience of mainstream, battle-tested practices, such as annotating types and methods with @Transactional.

The examples in this article are implemented in Scala, but the approach itself is not Scala-specific. The key is a strong type system and the ability to describe programs as values, separating the description of a program from its execution.

The code snippets presented here are intentionally simplified to highlight the transactional boundaries and failure modes. They omit part of the testing setup and simplify use cases and assertions. Actual implementations for readers who want to dive deeper are linked at the end.

Where is the transaction boundary?

Suppose that we need to implement a ‘create account’ use case consisting of:

  1. generating credentials
  2. persisting the new account in the storage
  3. granting the user access.

From a business perspective, these steps must be atomic. We could start defining an API like:

trait UsersManager[F[_]]:
  def createAccount(username: Username, password: PlainPassword): F[UserId]

F[_] allows us to abstract over the effect type, covering concerns such as side-effects, state, or failure. In production, it will be replaced by IO or an equivalent. IO[UserId] is then a description of a computation that produces side-effects: storing data, in this case.

One possible implementation is to sequentially store the new account and user permissions after generating credentials.

      override def createAccount(username: Username, password: PlainPassword): F[UserId] =
        val credentials = . . . // pure domain logic

        val setUpAccount = for
          userId <- store.createAccount(credentials)
          _ <- accessControl.grantAccess(userId, userId, ContextualRole.Owner)
        yield userId

        setUpAccount

The code is sequential and easy to follow. But where is the transaction?

  • Perhaps store.createAccount starts its own transaction?
  • Perhaps accessControl.grantAccess does the same?
  • Or maybe there is an implicit transaction scope that will be applied at runtime by a framework?
  • Or there is no transaction at all?

The conveniences of implicit transaction boundaries via frameworks come with tradeoffs:

  • Business transaction boundaries don’t compose well. This often leads to an explosion of specialised Service or Repository functions, one for each atomic use case.
  • Refactoring can silently break atomicity.
  • Verifying transactional guarantees depends on infrastructure (integration tests).
  • Type signatures convey no information about transactional behaviour.

Improving Atomicity Correctness Guarantees

The goal is to be unequivocal on where a transaction starts and ends.

Txn defines the operations that must be performed atomically. It conveys the same intent as operations annotated with @Transactional, but it leads to clear, local boundaries enforced by the type system.

      override def createAccount(username: Username, password: PlainPassword): F[UserId] =
        // Pre-transaction: pure domain logic
        val credentials = . . .

        // Atomic operations
        val setUpAccount: Txn[UserId] = for
          creds <- tx.lift { credentials } // Embed a pure computation into a transactional context
          userId <- store.createAccount(creds) // Storing capability via Port/Repository
          _ <- accessControl.grantAccess(userId, userId, ContextualRole.Owner) // Same as above
        yield userId

        // Signal a commit, no side-effects yet.
        tx.commit { setUpAccount }

Both F and Txn are descriptions of computations: F describes the final work performed by the program at runtime, while Txn describes which operations must be executed sequentially and atomically within a transaction. Both are values that need to be interpreted in order to be executed.

Crucially, commit must be invoked in order to obtain F[UserId] from Txn[UserId]. Otherwise the code won’t compile.

The Core Abstraction

The transactional behaviour is captured by a minimal API:

trait TransactionManager[F[_], Txn[_]]:

  val lift: [A] => F[A] => Txn[A]

  val commit: [A] => Txn[A] => F[A]

The TransactionManager API exposes two key operations:

  • lift embeds non-transactional operations into a transactional context
  • commit signals that the transaction must be committed on success and rolled back in case of error, transforming it into an executable operation.

Embedding non-transactional code into a transactional context

Any non-transactional operation required to run within a transaction must be explicitly embedded into a Txn context via lift:

tx.lift { clock.realTimeInstant }

This is a key feature:

Developers must acknowledge that such operations need special handling in case the transaction is rolled back.

Verifying Atomicity Under Failure

The happy path is straightforward: when all operations succeed, the state remains consistent by construction. The more interesting case is when there is a failure midway through a transaction.

The test below verifies our business invariant: when granting permissions fails after account creation succeeds, no account is persisted.

test("user account is not created when granting permission fails"):
    forAllF { (username: Username, password: PlainPassword) =>
      for
        // given
        (usersStore, storeRef) <- makeEmptyUsersStore()            // account creation succeeds
        failingAccessControl  = makeUnreliableAccessControl() // granting permission fails
        tx = makeTransactionManager(List(storeRef))                  // test-specific transaction manager
        usersManager = UsersManager.make[IO, IO](usersStore, failingAccessControl, tx, ...)

        // when
        _ <- usersManager.createAccount(username, password).attempt

        // then
        account <- usersStore.fetchAccount(username)
      yield assert(account.isEmpty) // no partial update
    }

We can test atomicity without a real database by using:

  • An alternative execution strategy for the transactional program (Txn[_])
  • In-memory implementations of Ports/Repositories.

These in-memory Port components are not mocks or stubs: they preserve the same semantics as production components, differing only in how state is stored (in-memory vs. database).

The Execution Strategy for Testing

In production, commit executes the transactional program using a database-backed transaction manager (e.g. via Doobie).

For unit tests, we can provide a different execution strategy that simulates transaction boundaries using in-memory state.

  def makeTransactionManager(refs: List[TxRef[?]]): TransactionManager[IO, IO] =
    new TransactionManager[IO, IO]:

      override val commit: [A] => IO[A] => IO[A] = [A] =>
        (action: IO[A]) =>
          action.attempt.flatMap {
            case Right(a) => refs.traverse_(_.commit) *> IO.pure(a)
            case Left(e)  => refs.traverse_(_.rollback) *> IO.raiseError(e)
          }

      . . .

Semantically, this version of the TransactionManager:

  • Stages state changes while the transactional program runs
  • Commits all staged changes on success
  • Rolls them back entirely on failure.

What This Simulation Provides (And Doesn’t)

The test execution strategy relies on stateful components (such as TxRef) to support staging, commit and rollback operations.

This is not a full transaction implementation and is intentionally simpler than database-backed transactions. The testing machinery:

  • Does not provide isolation, concurrency guarantees or durability
  • Supports independent unit tests that verify atomic business invariants under failure modes.
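
To make the staging semantics concrete, here is a minimal sketch of a TxRef-like component built on cats-effect Ref (illustrative only; the linked implementation may differ):

import cats.effect.{IO, Ref}

// Writes go to a staging area and only reach the committed state on commit.
final class TxRef[A] private (committed: Ref[IO, A], staged: Ref[IO, A]):
  def get: IO[A]          = staged.get
  def set(a: A): IO[Unit] = staged.set(a)                 // staged, not yet committed
  def commit: IO[Unit]    = staged.get.flatMap(committed.set)
  def rollback: IO[Unit]  = committed.get.flatMap(staged.set)

object TxRef:
  def of[A](initial: A): IO[TxRef[A]] =
    for
      committed <- Ref.of[IO, A](initial)
      staged    <- Ref.of[IO, A](initial)
    yield new TxRef(committed, staged)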

Composition

Transaction boundaries are explicit and local, emerging from how Txn[A] programs are composed:

  • Composing multiple Txn values and committing them together results in a single transaction.
  • Committing Txn values separately results in multiple transactions.
  • Omitting commit prevents execution and results in a compilation error, since Txn must be converted to F via commit.

This compile-time composition model makes runtime transaction propagation intentionally unnecessary.
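
To make these rules concrete, here is a minimal sketch reusing the names from the earlier snippets (illustrative only; it assumes Monad instances for F and Txn and the usual cats syntax in scope):

// One transaction: both operations commit or roll back together.
val singleTransaction: F[UserId] =
  tx.commit {
    for
      userId <- store.createAccount(creds)
      _      <- accessControl.grantAccess(userId, userId, ContextualRole.Owner)
    yield userId
  }

// Two transactions: a failure in the second leaves the first committed.
val twoTransactions: F[UserId] =
  for
    userId <- tx.commit { store.createAccount(creds) }
    _      <- tx.commit { accessControl.grantAccess(userId, userId, ContextualRole.Owner) }
  yield userId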

Tradeoffs and Considerations

The technique discussed here follows a broader design approach: describing programs as values, separating intent from execution.

When applied to business-level atomicity, this approach gives you:

  • Clear, unambiguous behaviour: Trading framework convenience for explicit intent improves correctness guarantees.
  • Improved safety: Invalid transactional scopes fail at compilation time and prevent latent runtime inconsistencies.
  • Deterministic unit testing: Business invariants can be verified without relying on databases or transaction middleware.
  • Composable, refactoring-safe code: Locality leads to a modular design that is easier to reason about and evolve.

What it costs:

  • Shifted responsibility: Developers take ownership of concerns often delegated to frameworks (transaction scoping, propagation, rollback semantics).
  • Steeper learning curve: Teams unfamiliar with effect systems or algebraic APIs may experience increased cognitive load and reduced productivity in the short to mid term.
  • Additional machinery: Implementing a TransactionManager, in-memory versions of Ports/Repositories, and different execution strategies demands design and maintenance.

When it is a fit:

  • The domain exhibits multiple known failure modes.
  • State inconsistencies are business-critical and unacceptable.
  • Correctness must be preserved as the system evolves.
  • The “program as values” paradigm is beneficial elsewhere in the system.

When it is not:

  • Teams heavily rely on framework conventions for productivity.
  • Transactional requirements are simple and unlikely to evolve.
  • Strong discipline and extensive testing compensate for implicitness.

Tradeoffs may or may not be acceptable. What matters is deliberate reasoning: making an informed decision and owning the consequences. Being aware of alternatives is fundamental when choosing an approach aligned with the business and the team’s preferences.

The Path to Idempotency: Preserving Business Outcomes

I’ve recently written about navigating the uncertainties of a complex integration stack. One of the biggest recurring issues we faced was duplicates: bursts of repeated updates that caused major incidents for our customers.

Our core operations were not idempotent – multiple identical instructions were performed with side-effects as many times as they were issued.

In our domain, it caused several problems:

  • Heavy, expensive reprocessing on both our side and our customers’ side.
  • Long-running computations that could take hours to complete.
  • In extreme cases, hundreds of duplicate instructions piling up and blocking time-sensitive work.

This was unsustainable. We needed a deterministic, systemic fix.

This post walks through:

  • The primary cause of non-idempotent behaviour in our pipeline
  • Why idempotency was essential to preserving business outcomes
  • How we redesigned the system to eliminate duplicates without altering semantics.

The Problem: Duplicates After Projection

Designing an idempotent distributed system often requires handling duplicates that arise beyond transport-level guarantees such as at-least-once delivery.

Our pipeline ingested and merged data from multiple sources, then produced several different downstream projections. These projections reflected our business domain: each customer required a different subset of the combined data.

Since each projection was a view of the merged data, source updates could collapse into the same customer-facing representation – a duplicate.

To illustrate, consider the example:

Combined Data:
{
  "banana_id": "B-007",
  "name": "Mr. Banana",
  "ripeness_level": "Mostly Yellow"
}

Sent to Adapter/Customer 1:
{
  "banana_id": "B-007",
  "ripeness_level": "Mostly Yellow"
}

Sent to Adapter/Customer 2:
{
  "banana_id": "B-007"
}

Now Mr. Banana’s ripeness_level changes:

Combined Data (Updated):
{
  "banana_id": "B-007",
  "name": "Mr. Banana",
  "ripeness_level": "Perfectly Yellow"
}

Sent to Adapter/Customer 1:
{
  "banana_id": "B-007",
  "ripeness_level": "Perfectly Yellow"
}

Sent to Adapter/Customer 2:
{
  "banana_id": "B-007"
}

Adapter 1 receives two distinct updates.

Adapter 2 receives the exact same message twice.

Combined Data           → Adapter 1 → OK
                        → Adapter 2 → OK

Combined Data (Updated) → Adapter 1 → OK
                        → Adapter 2 → DUPLICATE! 🍌

The core issue: the underlying entity changed, but Adapter 2’s projection did not, so it ended up with a duplicate. Any adapter whose projection omits the updated field is affected in exactly the same way.

This type of deduplication must happen after projections, at the edge of the pipeline, on a per-customer adapter basis.

Deduplication sat between projection generation and adapter-specific processing, acting as a dam before downstream side effects occurred.

When a duplicate was detected, adapters skipped processing and tracked the occurrence. Deduplication errors or timeouts were handled like any other pipeline failure: through replays rather than best-effort processing.

The First Solution: Last Seen Deduplication

When the first serious duplicate-related incident hit us, a colleague proposed a straightforward and effective approach:

  1. Serialise the message to a deterministic JSON representation
  2. Hash the payload
  3. Retrieve the previously stored hash by ID
  4. Compare the previously stored hash with the new one
  5. If identical -> return duplicate
  6. If different -> return not duplicate and store the new hash.

This gave us last-seen-message deduplication. For example:

  • A -> B -> A -> B -> A : no duplicates
  • A -> A -> A -> A -> B : only the first A and B are non-duplicates
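
A minimal sketch of this last-seen check, using an in-memory map as the key/value store (all names here are illustrative, not the actual fix):

import java.security.MessageDigest
import scala.collection.concurrent.TrieMap

object LastSeenDedup:
  private val lastHashes = TrieMap.empty[String, String]

  // Assumes `payload` is already a deterministic serialisation of the message.
  private def sha256Hex(payload: String): String =
    MessageDigest
      .getInstance("SHA-256")
      .digest(payload.getBytes("UTF-8"))
      .map("%02x".format(_))
      .mkString

  // True when `payload` hashes to the same value as the last one seen for `id`;
  // the new hash is stored either way.
  def isDuplicate(id: String, payload: String): Boolean =
    val newHash  = sha256Hex(payload)
    val previous = lastHashes.put(id, newHash)
    previous.contains(newHash)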

We needed this to work immediately, and we were already late in the day (we wanted a fix before going home). A key/value store seemed like the obvious, modern choice: O(1) lookups, no schema changes, and Redis instances already running. We could implement the fix within minutes.

The dam was built. Problem solved.

How the Dam Cracked

The first failure mode was subtle. It occasionally returned not duplicate for the same message:

Sent to Service 1 (First):
{
  "banana_id": "B-007",
  "ripeness_level": "Perfectly Yellow"
}

Sent to Service 1 (Second):
{
  "ripeness_level": "Perfectly Yellow",
  "banana_id": "B-007"
}

A change in field order was enough to bypass the deduplication logic.

We needed deduplication based on structural equivalence instead of the serialised payload. Two messages should be considered equal if their JSON trees have the same fields, values and nesting, regardless of field order.
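
For illustration, here is a minimal sketch of such a structural comparison over a toy JSON tree (the ADT and names are hypothetical, not the production representation):

enum JsonTree:
  case JNull
  case JBool(value: Boolean)
  case JNum(value: BigDecimal)
  case JStr(value: String)
  case JArr(items: Vector[JsonTree])
  case JObj(fields: Vector[(String, JsonTree)]) // parse order preserved

def structurallyEqual(a: JsonTree, b: JsonTree): Boolean =
  import JsonTree.*
  (a, b) match
    // Objects: same keys and structurally equal values, regardless of field order
    case (JObj(fa), JObj(fb)) =>
      val (ma, mb) = (fa.toMap, fb.toMap)
      ma.keySet == mb.keySet && ma.forall((k, v) => structurallyEqual(v, mb(k)))
    // Arrays remain order-sensitive
    case (JArr(ia), JArr(ib)) =>
      ia.length == ib.length && ia.lazyZip(ib).forall(structurallyEqual)
    // Leaves compare by value
    case _ => a == b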

Then came the second, more serious failure, which made the dam collapse: the Redis instances had no persistent storage. One day they vanished, taking the entire deduplication state with them.

Time for a Sustainable Strategy

We considered a few alternatives before deciding how to approach deduplication across the pipeline.

  • Idempotency keys / versioning: These require producers to understand semantics that only emerge downstream, at the adapter level.
  • Kafka log compaction: It also operates before projection. More importantly, compaction does not guarantee that consumers only see the latest version of an event.

Neither approach addressed the real problem. We decided to build a dedicated deduplicator service.

The New Deduplicator

We kept the core idea – hash-based, last-seen-message deduplication – incorporated the lessons learned, and extracted it into a dedicated service.

That would offer us:

  • Consistent Behaviour: The same deduplication semantics across services make the system easier to reason about.
  • Clear Properties & Policies: Transparency around performance, failures and retries.
  • Consistent Tooling: Deduplication needs operational tooling for analysis, diagnosis and reprocessing.

We envisioned a growing number of services relying on this new deduplication service.

It needed to support:

  • Synchronous Check: Downstream services decide whether to continue or skip processing.
  • Moderate Throughput: < 100 RPS average target to process millions of requests per day, with capacity headroom for bursts.
  • Bounded Latency: 99th percentile < 250ms target; rare worst-case slow paths < 2s tolerated without triggering client timeouts.
  • Strong Consistency: Linearizable dedup decisions – no false positives/negatives, even under concurrency.
  • Domain Agnostic: Perform structural comparisons on deterministic representations defined by the clients. Avoid JSON canonicalisation or semantic equivalence, which would require domain interpretation and restrict flexibility of usage.
  • Application-level Retries: Retries must restart from the beginning of the flow to prevent processing stale data.

Core Invariants:

Safe and Deterministic

Each deduplication request was identified by a composite key:

  • messageId: Corresponded to the underlying entityId; globally unique and stable across the business.
  • serviceId: Mapped to a projection of that entity, preventing cross-service interference.

This ensured isolated and deterministic comparisons.
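
As a sketch, the synchronous check might expose shapes along these lines (hypothetical names, not the service’s actual API):

final case class DedupKey(messageId: String, serviceId: String)

enum DedupResult:
  case Duplicate, NotDuplicate

trait Deduplicator[F[_]]:
  // Compares the hash of `payload` against the last hash stored for `key`,
  // storing the new hash when they differ.
  def check(key: DedupKey, payload: String): F[DedupResult]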

Preserving Order Guarantees

Events within the pipeline were partitioned by entity identifier (the messageId received by the deduplicator). Kafka, combined with a sound partition strategy, preserved ordering within each partition.

Deduplication was a synchronous step in the pipeline, so requests arrived in the same order as events.

For any given (serviceId, messageId), deduplication decisions were strictly ordered.

Safe Retries

The pipeline ensured that only the latest version of an event could be replayed, preventing stale data during reprocessing.

If processing failed after deduplication, we used operational tooling to reset the dedup state for a specific (messageId, serviceId) before replaying the event.

This path was rare and manual by design.

Hash Collisions: Theoretical, Not Practical

We used a cryptographic hash of the parsed JSON tree to detect changes.

While hash collisions are theoretically possible, with the last-seen deduplication approach they would require two distinct, successive JSON structures for the same key to hash identically. Given non-adversarial inputs and strong hash functions, we treated collisions as practically impossible.

Guaranteeing absolute collision freedom would have added significant complexity for negligible benefit.

Bounded State

The service only stored keys and the associated latest hashed message.

Thus, the state was bounded by:

number of entities * number of projections

The number of entities was large, but finite and well-understood.

Correctness over Availability:

Wrong deduplication decisions were unacceptable in our domain:

  • False negatives => expensive repeated work.
  • False positives => missed deliveries.

We needed linearizable behaviour: requests for a given (messageId, serviceId) pair must be processed one by one, preserving ordering by key.

This ruled out caching: stale reads would have broken correctness.

The Database Layer

We initially implemented the service using PostgreSQL with SERIALIZABLE isolation to guarantee linearizability. The goal was to validate the approach, build new integrations on top of it and eliminate recurring operational incidents.

This version of the service ran in production for several years and sustained the required throughput.

It worked in practice because contention was limited:

  • There was no contention between deduplication requests triggered by different services.
  • Concurrent deduplication requests for the same (messageId, serviceId) pair were rare.
  • Clear retry policies ensured correctness when contention did occur.

Performance Improvements

The initial version traded latency for certainty and speed of development. As adoption grew, traffic patterns evolved. Bursts became more frequent, and tail latency (p99) occasionally climbed to seconds.

Two hot paths were tightened to address this without changing semantics or invariants.

Atomic Compare-and-Swap

The core deduplication logic already followed a compare-and-swap pattern:

  • Update the stored hash only if it differs from the previous value.
  • Compare and update atomically.
  • Operate on state stored in a single database row.

It was reduced to a single atomic operation:

UPDATE dedup_state
SET hashed_message = :new_hash
WHERE message_id = :message_id
  AND service_id = :service_id
  AND hashed_message IS DISTINCT FROM :new_hash
RETURNING 1;

Thus, in this case, the same correctness guarantee could be achieved with the READ COMMITTED isolation level.

Replacing SERIALIZABLE improved latency and throughput by decreasing overhead, lowering contention and eliminating retries caused by transaction conflicts.
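
For illustration, the statement above could be issued with doobie along these lines (a hedged sketch: it assumes the row for the key already exists, and the function name is made up):

import cats.effect.IO
import doobie.*
import doobie.implicits.*

def updateIfChanged(xa: Transactor[IO])(messageId: String, serviceId: String, newHash: String): IO[Boolean] =
  sql"""
    UPDATE dedup_state
    SET hashed_message = $newHash
    WHERE message_id = $messageId
      AND service_id = $serviceId
      AND hashed_message IS DISTINCT FROM $newHash
  """.update.run      // rows updated: 1 when the hash changed, 0 otherwise
    .transact(xa)
    .map(_ > 0)       // true = changed, i.e. not a duplicate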

Faster Structural Comparisons

JSON comparisons were on the hot path: every request depended on them. Improving them would have system-wide impact.

The initial comparison used an abstraction-rich, allocation-heavy implementation.

A memory-efficient recursive tree walk reduced memory allocation, garbage collection pressure, GC-induced stalls, and, consequently, tail latency.

Infrastructure Overview

This is how the pieces fit together:

  • Horizontally scalable API layer behind a load balancer.
  • Stateless application nodes, with state stored in the database, allowing ephemeral, replaceable web servers.
  • Graceful degradation under load: Unbounded request queues resulted in slower responses under heavy load instead of dropped requests.
  • Single-region deployment aligned with our business constraints. Global linearizability in a multi-region setup would have required a fundamentally different approach (e.g. distributed SQL database).

Observability

We used Prometheus and Grafana to track:

  • Latency, globally and per service
  • Request volumes, globally and per service
  • Duplicate rates
  • System-level metrics.

This visibility helped us to identify the biggest sources of duplicates and eliminate unnecessary upstream noise.

Single Point of Failure (SPOF)

As a consequence of our architectural decisions, the deduplicator and its PostgreSQL database became a single point of failure.

In case of an outage, the correct behaviour was to halt processing instead of allowing the system to continue with incorrect deduplication.

In such scenarios, alerting and well-defined response procedures are essential for timely recovery and for turning a potentially stressful incident into a clear operational task.

Conclusion

Over several years of operation, the deduplication layer never failed us.

We had concrete results:

  • Fewer major incidents
  • Improved customer trust
  • Lower operational costs
  • Stronger delivery guarantees.

Serendipity

Along with the new pipeline and the tooling we built around it, we discovered an additional benefit: the ability to filter unnecessary events during ingestion, before they entered the system.

Many events differed only in fields irrelevant to our domain, such as timestamps. By stripping these out before deduplicating projections, we discarded thousands of redundant updates every day.

Despite fan-out increasing beyond 20, the number of daily events flowing through the pipeline decreased significantly.

By eliminating redundant updates early, we kept the load on the deduplicator itself under control.

TL;DR

In this pipeline, idempotency wasn’t a transport-level concern but a business one.

Treating it that way shaped how we designed the system and the tradeoffs we were willing to make.

From Integration Chaos to Deterministic Pipelines: a Platform Approach

I was once part of a team working on an integration layer responsible for collecting data from many upstream services and processing it into content made available to multiple external customers according to their requirements.

Our job was to make upstream and external systems work as a single coherent whole, despite having no control over either side. This is a classic setting for emergent chaos, and I think of it as “a place between places”.

The first few integrations we released seemed fine: there was silence, which is usually a sign of success in this kind of work. At the time, we were focused on building new integrations.

But we were about to learn just how much noise errors can make.

The Chaos (as in Deterministic Non-Linear Systems)

After releasing one of our early integrations, we started seeing recurring operational issues and growing frustration. Our services were behaving unpredictably and breaching key expectations daily. We had no alerts and no sense of the severity or frequency of errors. We learned about problems only when customers reported them.

In environments like this, panic can spread quickly. People reach for simple explanations or “obvious” culprits in an attempt to regain a sense of control. There’s usually only one way out of this situation: stay calm, put a tactical solution in place to stop the bleeding, buy time to find the real cause. Then, create a plan and explain the situation to key stakeholders.

This particular integration was more complex than the earlier ones, and small data discrepancies (introduced upstream for “convenience”) had unpredictable and severe consequences. Hundreds of pieces of content were becoming unavailable to customers every day – always exactly ninety days after they were published. We also began seeing severe DoS-like incidents due to missing idempotency across the board. Until then, we had no idea our own services could generate duplicates.

Not only did we have little control over the systems around us, but our own stack had become opaque to us.

The original plan was to fix the bugs we had discovered and continue building new integrations to meet the deadline. But was that even possible?

The New Plan

At a high level, there was a single root problem:

Our stack did not provide the right level of abstraction.

Abstract too little, and your services become another layer of indirection with extra cost and no value. Abstract too much, and it won’t be possible to meet all the diverse requirements of downstream systems.

So we landed on a “simple” solution:

Provide a unified interface that shields developers from upstream complexity and allows us to focus on external integrations.

I spent many days thinking about this interface: a beautiful, coherent API with clear semantics paired with proper update events, enabling us to build scalable, reliable and predictable integrations.

How you pursue a goal like this depends heavily on your organisation and technical landscape. In our case, we realised that the “simple plan” required re-architecting the entire integration pipeline, something no stakeholder wants to hear. We needed a platform.

The New Integration Stack

Once we had the green light to redesign our new integration stack, we split it into two main components: the Platform and the Adapters.

External services never interacted with the platform directly. Instead, each adapter subscribed to the Platform’s new-data-available notification, consumed only the data required for its particular integration, and transformed it according to the rules and constraints of a specific external system.

This setup brought several benefits:

  • Focus: Devs could concentrate entirely on the requirements of a single external system.
  • Isolation: Integration logic was fully contained within the service responsible for that system.
  • Scalability: Each adapter could scale up or down independently according to the throughput demands of the system it supported.
  • Adaptability: A consequence of independent scalability, we could limit the throughput of specific integrations to avoid overwhelming downstream services.
  • Operational Tooling: Each adapter became the natural place to build tools to address that system’s recurring operational issues.
  • Speed of Development: More developers could work in parallel on new integrations without stepping on each other’s toes.

The main challenge with this approach was balancing innovation with consistency across adapters. A small group of developers maintained more than twenty services, so consistency in design and implementation was essential. We could not afford to let teams reinvent solutions in isolated ways that ignored valuable lessons learned elsewhere in the system.

Adapters required a reliable central source of all the data they depended on, delivered with reasonable latency, correct new-data-available notifications (signals, not payloads), and consistent behaviour. The Platform offered that and more.

Enter the Platform

Chaos had become routine. Missed deliveries, costly duplicates, frozen stack. All silently building up from the unlucky combination of tiny, isolated events compounded over time, amplified by total absence of observability.

The upstream services were owned by different teams and reflected different business domains. Each was part of a non-trivial pipeline and had its own upstream dependencies. They behaved unpredictably at times, produced conflicting updates, and generated thousands of duplicate events every day. An environment like this is not just noisy; it’s adversarial.

To bring order to this landscape, we needed a layer capable of collecting data from all these disparate sources and exposing a coherent view through a unified, versioned API. Key to achieving those goals was splitting the Platform into two main components: Ingestion and the API Layer.

Ingestion:

We needed high-throughput ingestion while preserving event ordering, so we defined a deterministic pipeline:

consume event -> pre-process -> de-duplicate -> decode into domain events -> process -> store
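
For illustration, these stages compose naturally as a streaming pipeline; a hedged sketch with fs2 (event types and stage functions are placeholders, not the actual implementation):

import cats.effect.IO
import fs2.Stream

final case class RawEvent(payload: String)
final case class DomainEvent(entityId: String, payload: String)

def ingest(
    events: Stream[IO, RawEvent],
    preProcess: RawEvent => IO[RawEvent],
    isDuplicate: RawEvent => IO[Boolean],
    decode: RawEvent => IO[DomainEvent],
    process: DomainEvent => IO[DomainEvent],
    store: DomainEvent => IO[Unit]
): Stream[IO, Unit] =
  events
    .evalMap(preProcess)                     // pre-process
    .evalFilter(e => isDuplicate(e).map(!_)) // de-duplicate
    .evalMap(decode)                         // decode into domain events
    .evalMap(process)                        // process
    .evalMap(store)                          // store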

This structure unlocked several capabilities:

  • Early anomaly detection: Discard malformed or inconsistent data before it entered the Integration stack.
  • Coherent Integration view: Decode data into a unified representation of the broader business domain, forming a clear contract between Upstream -> Integration -> External Systems.
  • Stable downstream consumption: Data was stored in queryable form, protecting adapters from upstream instabilities.
  • Source-level insights: Track and analyse events by source to understand upstream behaviour.
  • Extended diagnosis tooling: Persist raw events and build tools around them to support analysis and troubleshooting.
  • Flow-event reduction: Use heuristics to suppress low-value updates and avoid unnecessary cascades through the platform.

We used Kafka – combined with a carefully chosen partitioning strategy – to achieve high-throughput parallel processing and preserve event ordering for each upstream source. This allowed us to deliver correct, time-sensitive content in a deterministic way.

By configuring each topic with compaction and infinite retention, we enabled log-based replication. This gave us the ability to replay the entire pipeline deterministically, a massive win in terms of reliability.

API Layer:

Downstream teams interacted with the Platform through a client library backed by a few GraphQL endpoints. They defined a query describing the data their adapter needed in order to produce content in accordance with system contracts.

The client library brought several benefits: type-safety, auto-completion and versioning.

This API Layer model was heavily challenged, and I used the lens of category theory morphisms to help clarify our approach: building a structure-preserving projection from the full platform domain model to a minimal, integration-specific view required by each external system.

The benefits:

  • Adapters received exactly the data they needed and nothing more.
  • A lossless, deterministic view of the model, enabling safe retries.
  • Multiple views of the domain without fragmenting the ingestion model.

The tradeoff with this approach was decent, but not “light speed”, latency. Beyond the network call, other factors added to the overhead: operational tasks embedded in the library, the GraphQL layer translating queries to SQL, and PostgreSQL executing them. This was acceptable in our case: throughput mattered far more than single-request latency.

Idempotency

The Platform also integrated well with our new deduplication layer. A single duplicate event (a “spark”) could trigger costly downstream work with potentially exponential consequences (“fire”).

The challenge was that duplicates were semantic, not transport level. Different external systems (via adapters) consumed different projections of the same underlying entity, meaning that different source updates could collapse into the same business update downstream.

We needed idempotency, not as a transport concern, but as a business outcome.

Low-Value Updates

The Platform also revealed another class of issues: low-value events that looked like genuine updates to the integration stack and bypassed the deduplication entirely. This caused expensive operations or even downstream disruptions.

The Success and The Day After

New adapters powered by the Platform were released one after another, from relatively simple to profoundly complex. We had built more than a stable platform; we had created a recipe to plug new systems into our stack. Aside from the occasional celebration after a successful launch, there was mostly silence: the sound of “nothing is broken”.

It was a success.

Then came a new flagship product. New services, new people, new stakeholders. I assumed this one would be more straightforward: we had proven we could navigate uncertainties and carve order out of chaos. I was wrong. The new voices carried none of the scars of the long fire we had walked through, yet they spoke confidently. And back to the frontline.

We were a team built to succeed in a bounded context. We never really tame chaos. We only address it. You build a temporary shelter, layer by layer, one challenge after the other. And when the storm passes, another will eventually come. That’s the work.

Grammars and Parsers

Let’s build a parser for simple arithmetic expressions. It’s better to start in a very simple way: processing addition expressions like ‘1 + 1’.

A Little Bit About Parsers

According to Grune and Jacobs[1]:

“Parsing is the process of structuring a linear representation in accordance with a given grammar. [..] This ‘linear representation’ may be a sentence, a computer program, [..]  in short any linear sequence in which the preceding elements in some way restrict the next element”[1, p. 3].

Example of Parsing
Image via Wikipedia

“To a computer scientist ‘1 + 2 * 3’ is a sentence in the language of ‘arithmetics on single digits’ [..], its structure can be shown, for instance, by inserting parentheses: (1 + (2 * 3)) and its semantics [i.e. its meaning] is probably 7”[1, p. 7].

We use a parser to build an expression tree. This tree contains the elements of a sentence according to the structure of a given grammar, in the correct order (afterwards we will need an interpreter that uses the tree to extract the semantics of the sentence).
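
As a tiny illustration, the expression tree for the addition-only grammar could be as simple as this (a sketch, not the representation the post builds later):

enum Expr:
  case Num(value: Int)
  case Add(left: Expr, right: Expr)

// '1 + 1' parses into:
val onePlusOne: Expr = Expr.Add(Expr.Num(1), Expr.Num(1))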

And in order to implement a parser, we have to:

Continue reading “Grammars and Parsers”