Encoding Transaction Boundaries as Business Concepts

In most systems, use cases live in the application layer, where services orchestrate domain operations and side-effects such as persistence, messaging, or external calls.

It’s also common for the business to require locally atomic operations: they must succeed or fail together, preserving state invariants in the presence of failure.

How to implement transactions is an infrastructure concern. Defining transaction boundaries is a business requirement.

In the following sections, I will present a technique that leverages the type system to make transaction boundaries explicit in code. Concerns such as isolation and durability remain infrastructure responsibilities.

This results in:

  • Compositionality: Build complex transactions from simpler ones.
  • Testability: Verify atomicity without relying on infrastructure.
  • Type Safety: Transactional operations result in a compilation error unless explicitly committed.
  • Safe Refactoring: Reduce the chances of incorrectly assuming operations are atomic.

Together, these properties lead to clearer, refactoring-safe, and better-tested codebases.

This approach is well suited when the ambiguities of complex business requirements and failure modes outweigh the conveniences of mainstream, battle-tested practices, such as annotating types and methods with @Transactional.

The examples in this article are implemented in Scala, but the approach itself is not Scala-specific. The key is a strong type system and the ability to describe programs as values, separating the description of a program from its execution.

The code snippets presented here are intentionally simplified to highlight the transactional boundaries and failure modes. They omit part of the testing setup and simplify use cases and assertions. Actual implementations for readers who want to dive deeper are linked at the end.

Where is the transaction boundary?

Suppose that we need to implement a ‘create account’ use case consisting of:

  1. generating credentials,
  2. persisting the new account in the storage,
  3. granting the user access.

From a business perspective, these steps must be atomic. We could start defining an API like:

trait UsersManager[F[_]]:
  def createAccount(username: Username, password: PlainPassword): F[UserId]

F[_] allows us to abstract over the effect type, covering concerns such as side effects, state, or failure. In practice, it will be replaced by IO or an equivalent in production. IO[UserId] is then a description of a computation that produces side effects: storing data, in this case.
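The separation between describing and executing a program can be sketched with a minimal thunk-based type. This is a toy, not cats-effect's IO; the name Program and the demo values are purely illustrative:

```scala
// Toy "program as a value": a description that performs no work
// until run() is called. Illustrative only; production code would
// use cats-effect IO or an equivalent.
final case class Program[A](run: () => A):
  def map[B](f: A => B): Program[B] = Program(() => f(run()))
  def flatMap[B](f: A => Program[B]): Program[B] = Program(() => f(run()).run())

@main def programDemo(): Unit =
  var stored = List.empty[String]
  val createAccount: Program[Int] =
    Program(() => { stored = "alice" :: stored; 1 })

  // So far nothing has executed: createAccount is only a description.
  assert(stored.isEmpty)

  val userId = createAccount.run() // side effects happen only here
  assert(userId == 1 && stored == List("alice"))
```

Because the description is a value, it can be passed around, composed, and interpreted later, which is exactly the property the transactional technique below relies on.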

One possible implementation is to sequentially store the new account and user permissions after generating credentials.

      override def createAccount(username: Username, password: PlainPassword): F[UserId] =
        val credentials = . . . // pure domain logic

        val setUpAccount = for
          userId <- store.createAccount(credentials)
          _ <- accessControl.grantAccess(userId, userId, ContextualRole.Owner)
        yield userId

        setUpAccount

The code is sequential and easy to follow. But where is the transaction?

  • Perhaps store.createAccount starts its own transaction?
  • Perhaps accessControl.grantAccess does the same?
  • Or maybe there is an implicit transaction scope, supplied at runtime by a framework?
  • Or there is no transaction at all?

The conveniences of implicit transaction boundaries via frameworks come with tradeoffs:

  • Business transaction boundaries don’t compose well. This often leads to an explosion of specialised Service or Repository functions, one for each atomic use case.
  • Refactoring can silently break atomicity.
  • Verifying transactional guarantees depends on infrastructure (integration tests).
  • Type signatures convey no information about transactional behaviour.

Improving Atomicity Correctness Guarantees

The goal is to be unequivocal on where a transaction starts and ends.

Txn defines the operations that must be performed atomically. It conveys the same intent as operations annotated with @Transactional, but it leads to clear, local boundaries enforced by the type system.

      override def createAccount(username: Username, password: PlainPassword): F[UserId] =
        // Pre-transaction: pure domain logic
        val credentials = . . .

        // Atomic operations
        val setUpAccount: Txn[UserId] = for
          creds <- tx.lift { credentials } // Embed a pure computation into a transactional context
          userId <- store.createAccount(creds) // Storing capability via Port/Repository
          _ <- accessControl.grantAccess(userId, userId, ContextualRole.Owner) // Same as above
        yield userId

        // Signal a commit, no side-effects yet.
        tx.commit { setUpAccount }

Both F and Txn are descriptions of computations: F describes the final work performed by the program at runtime, while Txn describes which operations must be executed sequentially and atomically within a transaction. Both are values that must be interpreted in order to be executed.

Crucially, commit must be invoked in order to obtain F[UserId] from Txn[UserId]. Otherwise the code won’t compile.

The Core Abstraction

The transactional behaviour is captured by a minimal API:

trait TransactionManager[F[_], Txn[_]]:

  val lift: [A] => F[A] => Txn[A]

  val commit: [A] => Txn[A] => F[A]

The TransactionManager API exposes two key operations:

  • lift embeds non-transactional operations into a transactional context.
  • commit signals that the transaction must be committed on success and rolled back on error, transforming the description into an executable operation.

Embedding non-transactional code into a transactional context

Any non-transactional operation required to run within a transaction must be explicitly embedded into a Txn context via lift:

tx.lift { clock.realTimeInstant }

This is a key feature:

Developers must acknowledge that such operations need special handling in case the transaction is rolled back.

Verifying Atomicity Under Failure

The happy path is straightforward: when all operations succeed, the state remains consistent by construction. The more interesting case is when there is a failure midway through a transaction.

The test below verifies our business invariant: when granting permissions fails after account creation succeeds, no account is persisted.

test("user account is not created when granting permission fails"):
  forAllF { (username: Username, password: PlainPassword) =>
    for
      // given
      (usersStore, storeRef) <- makeEmptyUsersStore()       // account creation succeeds
      failingAccessControl = makeUnreliableAccessControl()  // granting permission fails
      tx = makeTransactionManager(List(storeRef))           // test-specific transaction manager
      usersManager = UsersManager.make[IO, IO](usersStore, failingAccessControl, tx, ...)

      // when
      _ <- usersManager.createAccount(username, password).attempt

      // then
      account <- usersStore.fetchAccount(username)
    yield assert(account.isEmpty) // no partial update
  }

We can test atomicity without a real database by using:

  • An alternative execution strategy for the transactional program (Txn[_])
  • In-memory implementations of Ports/Repositories.

These in-memory Port components are not mocks or stubs: they preserve the same semantics as production components, differing only in how state is stored (in-memory vs. database).

The Execution Strategy for Testing

In production, commit executes the transactional program using a database-backed transaction manager (e.g. via Doobie).

For unit tests, we can provide a different execution strategy that simulates transaction boundaries using in-memory state.

  def makeTransactionManager(refs: List[TxRef[?]]): TransactionManager[IO, IO] =
    new TransactionManager[IO, IO]:

      override val commit: [A] => IO[A] => IO[A] = [A] =>
        (action: IO[A]) =>
          action.attempt.flatMap {
            case Right(a) => refs.traverse_(_.commit) *> IO.pure(a)
            case Left(e)  => refs.traverse_(_.rollback) *> IO.raiseError(e)
          }
      . . .

Semantically, this version of the TransactionManager:

  • Stages state changes while the transactional program runs
  • Commits all staged changes on success
  • Rolls them back entirely on failure.

What This Simulation Provides (And Doesn’t)

The test execution strategy relies on stateful components (such as TxRef) to support staging, commit and rollback operations.

This is not a full transaction implementation and is intentionally simpler than database-backed transactions. The testing machinery:

  • Does not provide isolation, concurrency guarantees or durability
  • Supports independent unit tests that verify atomic business invariants under failure modes.
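The staging behaviour of such a stateful component can be sketched as a cell holding a committed value alongside a staged one. This is a hypothetical simplification: the article's actual TxRef is concurrency-safe (e.g. built on a cats-effect Ref), while this toy uses plain mutable fields:

```scala
// Hypothetical TxRef sketch: writes go to a staged value; commit
// makes them durable, rollback discards them. Not concurrency-safe;
// illustrative only.
final class TxRef[A](initial: A):
  private var committedValue: A = initial // last committed state
  private var stagedValue: A = initial    // state visible inside the transaction

  def get: A = stagedValue
  def update(f: A => A): Unit = stagedValue = f(stagedValue) // stage a change
  def commit(): Unit = committedValue = stagedValue          // success path
  def rollback(): Unit = stagedValue = committedValue        // failure path

@main def txRefDemo(): Unit =
  val accounts = TxRef(Map.empty[String, Int])

  accounts.update(_ + ("alice" -> 1)) // staged, not yet durable
  accounts.rollback()                 // simulate a failed transaction
  assert(accounts.get.isEmpty)        // no partial update survives

  accounts.update(_ + ("bob" -> 2))
  accounts.commit()
  accounts.rollback()                 // rolling back after commit changes nothing
  assert(accounts.get == Map("bob" -> 2))
```

This is enough to express "all staged changes survive together or not at all", which is precisely the invariant the unit tests exercise.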

Composition

Transaction boundaries are explicit and local, emerging from how Txn[A] programs are composed:

  • Composing multiple Txn values and committing them together results in a single transaction.
  • Committing Txn values separately results in multiple transactions.
  • Omitting commit prevents execution and results in a compilation error, since Txn must be converted to F via commit.

This compile-time composition model makes runtime transaction propagation intentionally unnecessary.
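The composition rules can be sketched with a toy Txn and a commit that counts transactions. The counting, the names, and the deferred-thunk representation are purely illustrative, not the article's implementation:

```scala
// Toy Txn: a deferred computation. Composing via flatMap merges two
// descriptions into one; each commit call opens exactly one transaction.
final case class Txn[A](run: () => A):
  def map[B](f: A => B): Txn[B] = Txn(() => f(run()))
  def flatMap[B](f: A => Txn[B]): Txn[B] = Txn(() => f(run()).run())

@main def compositionDemo(): Unit =
  var transactions = 0
  def commit[A](txn: Txn[A]): A =
    transactions += 1 // one transaction per commit call
    txn.run()

  val createAccount: Txn[Int] = Txn(() => 42)
  def grantAccess(userId: Int): Txn[Unit] = Txn(() => ())

  // Composed first, committed once: a single transaction.
  commit(createAccount.flatMap(grantAccess))
  assert(transactions == 1)

  // Committed separately: two distinct transactions.
  val userId = commit(createAccount)
  commit(grantAccess(userId))
  assert(transactions == 3)
```

The caller, not a runtime framework, decides the boundary by choosing where to compose and where to commit.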

Tradeoffs and Considerations

The technique discussed here follows a broader design approach: describing programs as values, separating intent from execution.

When applied to business-level atomicity, this approach gives you:

  • Clear, unambiguous behaviour: Trading framework convenience for explicit intent improves correctness guarantees.
  • Improved safety: Invalid transactional scopes fail at compilation time and prevent latent runtime inconsistencies.
  • Deterministic unit testing: Business invariants can be verified without relying on databases or transaction middleware.
  • Composable, refactoring-safe code: Locality leads to a modular design that is easier to reason about and evolve.

What it costs:

  • Shifted responsibility: Developers take ownership of concerns often delegated to frameworks (transaction scoping, propagation, rollback semantics).
  • Steeper learning curve: Teams unfamiliar with effect systems or algebraic APIs may experience increased cognitive load and reduced productivity in the short to mid term.
  • Additional machinery: Implementing a TransactionManager, in-memory versions of Ports/Repositories, and different execution strategies demands design and maintenance.

When it is a fit:

  • The domain exhibits multiple known failure modes.
  • State inconsistencies are business-critical and unacceptable.
  • Correctness must be preserved as the system evolves.
  • The “program as values” paradigm is beneficial elsewhere in the system.

When it is not:

  • Teams heavily rely on framework conventions for productivity.
  • Transactional requirements are simple and unlikely to evolve.
  • Strong discipline and extensive testing compensate for implicitness.

Tradeoffs may or may not be acceptable. What’s important is awareness, deliberate reasoning, making an informed decision and owning the consequences. Being aware of alternatives is fundamental when choosing an approach that is aligned with the business and the team’s preferences.

Links:
