distage-testkit

Quick Start

distage-testkit simplifies testing of pragmatic pure functional programs. The DistageSpecScalatest classes are ScalaTest base classes for the effect types Identity, F[_], F[+_, +_] and F[-_, +_, +_]. They provide an interface similar to ScalaTest’s WordSpec, but with additional capabilities such as first-class support for effect types and dependency injection.

Usage of distage-testkit generally follows these steps:

  1. Extend a base class corresponding to the effect type of your application (the available base classes are listed under DistageSpecScalatest Base Classes below).
  2. Override def config: TestConfig to customize the TestConfig.
  3. Establish test case contexts using should, must, or can.
  4. Introduce test cases using one of the in methods. Test cases can take a variety of forms, from pure functions returning an assertion to effectful functions with injected dependencies, as sketched below.
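
A minimal sketch of these steps, using the Identity effect type (the Greeter component and GreeterTest suite are illustrative only, not part of the library):

import izumi.distage.model.definition.ModuleDef
import izumi.distage.testkit.scalatest.DistageSpecScalatest
import izumi.fundamentals.platform.functional.Identity

// A hypothetical component under test, for illustration only
final class Greeter {
  def greet(name: String): String = s"Hello, $name!"
}

final class GreeterTest extends DistageSpecScalatest[Identity] {
  // 2. customize the TestConfig
  override def config = super.config.copy(
    moduleOverrides = new ModuleDef {
      make[Greeter]
    }
  )

  // 3. establish a context, 4. introduce a test case; the Greeter argument is injected
  "Greeter" should {
    "produce a non-empty greeting" in {
      (greeter: Greeter) =>
        assert(greeter.greet("distage").nonEmpty)
    }
  }
}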

API Overview

In our experience, the highest-value tests to develop are those that verify the communication behavior of components. These are tests of blackbox interfaces, with atomic or group isolation levels.

To demonstrate usage of distage-testkit we’ll consider a hypothetical game score system. This system has a model, logic, and a service, for which we’ll define test cases. Our application will use ZIO[-R, +E, +A].

We’ll start with the following model and service interface for the game score system:

package app

import zio._
import zio.console.{Console, putStrLn}

case class Score(value: Int)

case class Config(starValue: Int, mangoValue: Int)

trait BonusService {
  def queryCurrentBonus: Task[Int]
}

object Score {
  val zero = Score(0)

  def addStar(config: Config, score: Score): Score =
    score.copy(value = score.value + config.starValue)

  def echoConfig(config: Config): RIO[Has[Console.Service], Config] =
    for {
      _ <- putStrLn(config.toString)
    } yield config

  def addMango(config: Config, score: Score): RIO[Has[Console.Service] with Has[BonusService], Score] =
    for {
      bonusService <- RIO.service[BonusService]
      currentBonus <- bonusService.queryCurrentBonus
    } yield {
      val value = score.value + config.mangoValue + currentBonus
      score.copy(value = value)
    }
}

This represents a game score system where the player can collect Stars or Mangoes with differently configured and calculated point values.

DistageSpecScalatest Base Classes

There are test suite base classes for functor, bifunctor and trifunctor effect types. We choose the one that matches our application’s effect type:

  • DistageSpecScalatest: for F[_], including Identity
  • DistageBIOSpecScalatest: for F[+_, +_]
  • DistageBIOEnvSpecScalatest: for F[-_, +_, +_]

For our demonstration application the tests use the ZIO[-R, +E, +A] effect type. This means we’ll be using DistageBIOEnvSpecScalatest for the test suite base class.

The default config (super.config) defines a pluginConfig that scans the package the test is in for plugin modules. See the distage-extension-plugins documentation for more information. For our demonstration the module will be provided using moduleOverrides like so:

package app

import com.typesafe.config.ConfigFactory
import izumi.distage.config.AppConfigModule
import izumi.distage.effect.modules.ZIODIEffectModule
import izumi.distage.model.definition.ModuleDef
import izumi.distage.testkit.scalatest.{AssertIO, DistageBIOEnvSpecScalatest}
import zio.ZIO
import zio.console.Console

abstract class Test extends DistageBIOEnvSpecScalatest[ZIO] with AssertIO {
  val defaultConfig = Config(starValue = 10, mangoValue = 256)

  override def config = super
    .config.copy(
      moduleOverrides = new ModuleDef {
        include(AppConfigModule(ConfigFactory.defaultApplication))
        include(ZIODIEffectModule)
        make[Config].from(defaultConfig)
        make[Console.Service].fromHas(Console.live)
      }
    )
}

Test Cases

In WordSpec, a test case is a sentence (a String) followed by in and then the body. In distage-testkit the body of a test case is not limited to a function returning an assertion: functions that take arguments and functions using effect types are also supported. Function arguments and effect environments will be provided according to the distage object graph.

Test Cases - Assertions

All of the base classes support test cases that are:

  • Assertions.
  • Functions returning an assertion.
  • Functions returning unit that fail on exception.

These are introduced using in from DistageAbstractScalatestSpec.LowPriorityIdentityOverloads.

The assertion methods are the same as in ScalaTest, since the base classes extend ScalaTest’s Assertions.

Let’s now create a simple test for our demonstration application:

package app

final class ScoreSimpleTest extends Test {
  "Score" should {

    "increase by config star value" in {
      val starValue = util.Random.nextInt()
      val mangoValue = util.Random.nextInt()
      val config = Config(starValue, mangoValue)
      val expected = Score(starValue)
      val actual = Score.addStar(config, Score.zero)
      assert(actual == expected)
    }

    // The injected Config comes from the module in the `Test` class above
    "increase by config star value from DI" in {
      config: Config =>
        val expected = Score(defaultConfig.starValue)
        val actual = Score.addStar(config, Score.zero)
        assert(actual == expected)
    }
  }
}

Assertions with Effects

All of the base classes support test cases that are effects with assertions. As mentioned earlier, functions returning effects will have arguments provided from the distage object graph. These test cases are supported by in from DSWordSpecStringWrapper.

The different effect types fix the F[_] argument for this syntax:

  • DistageSpecScalatest: F[_]
  • DistageBIOSpecScalatest: F[Throwable, _]
  • DistageBIOEnvSpecScalatest: F[Any, Throwable, _]

With our demonstration application we’ll use this to verify the Score.echoConfig method. The Config required is from the distage object graph defined in moduleOverrides. By using a function from Config, the required argument will be injected by distage-testkit.

package app

final class ScoreEffectsTest extends Test {
  "testkit operations with effects" should {

    "assertions in effects" in {
      (config: Config) =>
        for {
          actual <- Score.echoConfig(config)
          _ <- assertIO(actual == config)
        } yield ()
    }

    "assertions from effects" in {
      (config: Config) =>
        Score.echoConfig(config) map {
          actual => assert(actual == config)
        }
    }
  }
}

Assertions with Effects with Environments

The in method for F[_, _, _] effect types supports injection of environments from the object graph in addition to simple assertions and assertions with effects.

A test that verifies the bonus service always returns a value of 1 in our demonstration would be:

package app

import zio._
import zio.console.putStrLn

abstract class BonusServiceIsZeroTest extends Test {
  "BonusService" should {
    "return one" in {
      for {
        bonusService <- ZIO.service[BonusService]
        currentBonus <- bonusService.queryCurrentBonus
        _ <- putStrLn(s"currentBonus = $currentBonus")
        _ <- assertIO(currentBonus == 1)
      } yield ()
    }
  }
}

While this compiles, the test cannot run unless the object graph contains a BonusService resource. For ZIO[-R, +E, +A], the Has bindings are injected from ZLayer, ZManaged, ZIO, or any F[_, _, _]: BIOLocal. See the documentation on ZIO Has injection for details.

Our demonstration application has a dummy and a production implementation of BonusService, each provided as a ZManaged. With these ZManaged resources added to the object graph, test cases can inject Has[BonusService].

For demonstration of reuse and memoization, the bonus value will be equal to the number of times the resource was acquired.

package app

import zio._

object DummyBonusService {
  var acquireCount: Int = 0
  var releaseCount: Int = 0

  class Impl(bonusValue: Int) extends BonusService {
    override def queryCurrentBonus = UIO(bonusValue)
  }

  val acquire = Task {
    acquireCount += 1
    new Impl(acquireCount)
  }

  def release: UIO[Unit] = UIO {
    releaseCount += 1
  }

  val managed = acquire.toManaged(_ => release)
}

This small implementation is useful both in automated tests and in functional prototypes.

For a real system we’d build a production implementation like the following, which would perform an HTTP request to a REST service. We’ll introduce the production service, but leave the actual query unimplemented for our demonstration:

package app

import zio._
import zio.console.Console

object ProdBonusService {
  class Impl(console: Console.Service, url: String) extends BonusService {
    override def queryCurrentBonus = for {
      _ <- console.putStrLn(s"querying $url")
    } yield ???
  }

  val acquire = for {
    console <- ZIO.service[Console.Service]
    impl <- Task(new Impl(console, "https://my-bonus-server/current-bonus.json"))
  } yield impl

  def release: UIO[Unit] = UIO {
    ()
  }

  val managed = acquire.toManaged(_ => release)
}

Pattern: Dual Test Tactic

The testing of BonusService in our demonstration application will follow the Dual Test Tactic. See our blog post Unit, Functional, Integration? You are doing it wrong for a discussion of test taxonomy and the value of this tactic.

A ZIO resource for BonusService must be in the distage object graph for a Has[BonusService] to be injected into the ZIO environment. One option is to define separate modules for the dummy and production implementations, with one module referenced by tests and the other only by production code. However, this is less useful than keeping both implementations in the same object graph under different activations.

Our demonstration application will use the StandardAxis.Repo Dummy and Prod tags:

package app

import distage.plugins._
import izumi.distage.model.definition.Activation
import izumi.distage.model.definition.StandardAxis.Repo

object BonusServicePlugin extends PluginDef {
  make[BonusService]
    .fromHas(DummyBonusService.managed)
    .tagged(Repo.Dummy)

  make[BonusService]
    .fromHas(ProdBonusService.managed)
    .tagged(Repo.Prod)
}

Note that BonusServicePlugin is not explicitly added to Test.config: this PluginDef is in the same package as the test, namely app. By default the test’s pluginConfig includes the test’s own package, which distage scans for PluginDef instances.
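
If the plugins lived in a different package, the scanned packages could be set explicitly instead. A minimal sketch, assuming a hypothetical app.plugins package:

import izumi.distage.plugins.PluginConfig

// inside a Test subclass; "app.plugins" is an illustrative package name
override def config = super.config.copy(
  pluginConfig = PluginConfig.cached(packagesEnabled = Seq("app.plugins"))
)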

Continuing with the pattern, a trait will control which repo is activated:

package app

import izumi.distage.model.definition.Activation
import izumi.distage.model.definition.StandardAxis.Repo

trait DummyTest extends Test {
  override def config = super
    .config.copy(
      activation = Activation(Repo -> Repo.Dummy)
    )
}

trait ProdTest extends Test {
  override def config = super
    .config.copy(
      activation = Activation(Repo -> Repo.Prod)
    )
}

With these, a production test and a dummy test can be introduced for the demonstration game score application. Note how both run the same scenario, BonusServiceIsZeroTest, and differ only in activations.

When extended beyond this small example, this pattern simplifies system level tests, sanity checks, and even a pragmatic form of N-Version Programming:

package app

final class ProdBonusServiceIsZeroTest extends BonusServiceIsZeroTest with ProdTest

final class DummyBonusServiceIsZeroTest extends BonusServiceIsZeroTest with DummyTest

Test Case Context

The testkit ScalaTest base classes include the following verbs for establishing test context:

  • should
  • must
  • can

Configuration

The test suite class for your application should override the def config: TestConfig attribute. The config defines the plugin configuration, module overrides, activation axes, and other options. See the TestConfig API docs for more information.

Syntax Summary

For F[_] including Identity:

  • in { assert(???) }: The test case is an assertion.
  • in { (a: A, b: B) => assert(???) }: The test case is a function returning an assertion. The a and b will be injected from the object graph.
  • in { (a: A, b: B) => ???: F[Unit] }: The test case is a function returning an effect to be executed. The a and b will be injected from the object graph. The test case will fail if the effect fails.
  • in { (a: A, b: B) => ???: F[Assertion] }: The test case is a function returning an effect to be executed. The a and b will be injected from the object graph. The test case will fail if the effect fails or produces a failure assertion.

For F[-_, +_, +_], the forms above also apply with F fixed as F[Any, +_, +_]; in addition:

  • in { ???: F[zio.Has[C] with zio.Has[D], _, Unit] }: The test case is an effect requiring an environment. The test case will fail if the effect fails. The environment will be injected from the object graph.
  • in { ???: F[zio.Has[C] with zio.Has[D], _, Assertion] }: The test case is an effect requiring an environment. The test case will fail if the effect fails or produces a failure assertion. The environment will be injected from the object graph.
  • in { (a: A, b: B) => ???: F[zio.Has[C] with zio.Has[D], _, Assertion] }: The test case is a function producing an effect requiring an environment. All of a: A, b: B, Has[C] and Has[D] will be injected from the object graph.

Provided by trait AssertIO:

  • assertIO(???: Boolean): zio.UIO[Assertion]

Provided by trait AssertCIO:

  • assertIO(???: Boolean): cats.effect.IO[Assertion]

Provided by trait AssertBIO:

  • assertBIO[F[+_, +_] : BIO](???: Boolean): F[Nothing, Assertion]
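
As an illustration of the mixed F[-_, +_, +_] form above, a test body in the demonstration app could look like the following sketch (placed inside a class extending the DummyTest trait defined earlier; the test name and assertion are illustrative only):

// `config` is injected as a function argument; Has[Console.Service] and
// Has[BonusService] are injected via the ZIO environment.
"combine an injected argument with an injected environment" in {
  (config: Config) =>
    for {
      score     <- Score.addMango(config, Score.zero)
      assertion <- assertIO(score.value >= config.mangoValue)
    } yield assertion
}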

Resource Reuse - Memoization

Injected values are summoned from the dependency injection graph for each test. Without memoization, resources are acquired and released for each test, which may be unwanted: for instance, a single Postgres Docker container may be wanted for all tests. The test config has a memoizationRoots property for sharing components and their dependencies across tests.
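
For example, to acquire a single BonusService once per test run in the demonstration suite and share it (together with its dependencies) across tests, the Test base class could add a memoization root. A sketch:

import distage.DIKey

// inside the Test base class
override def config = super.config.copy(
  memoizationRoots = Set(DIKey[BonusService])
)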

Test Selection

Using IntegrationCheck

Implementation classes can inherit from izumi.distage.framework.model.IntegrationCheck and specify a resourceCheck() method that will be called before test instantiation to check whether external test dependencies (such as Docker containers in distage-framework-docker) are available for the test or role. If not, the test will be canceled or ignored.

This allows you to selectively run only the fast in-memory tests that have no external dependencies. Integration checks are executed only in distage-testkit tests and distage-framework’s Roles.

Use StartupPlanExecutor to execute the checks manually.

Parallel Execution

TODO

References

Additional example code

Some example code from distage-example:

import distage.{DIKey, ModuleDef}
import izumi.distage.model.definition.Activation
import izumi.distage.model.definition.StandardAxis.Repo
import izumi.distage.plugins.PluginConfig
import izumi.distage.testkit.TestConfig
import izumi.distage.testkit.scalatest.{AssertIO, DistageBIOEnvSpecScalatest}
import leaderboard.model.{Score, UserId}
import leaderboard.repo.{Ladder, Profiles}
import leaderboard.zioenv.{ladder, rnd}
import zio.{ZIO, IO}

abstract class LeaderboardTest extends DistageBIOEnvSpecScalatest[ZIO] with AssertIO {
  override def config = super.config.copy(
    pluginConfig = PluginConfig.cached(packagesEnabled = Seq("leaderboard.plugins")),
    moduleOverrides = new ModuleDef {
      make[Rnd[IO]].from[Rnd.Impl[IO]]
      // For testing, setup a docker container with postgres,
      // instead of trying to connect to an external database
      include(PostgresDockerModule)
    },
    // instantiate Ladder & Profiles only once per test-run and
    // share them and all their dependencies across all tests.
    // this includes the Postgres Docker container above and
    // table DDLs
    memoizationRoots = Set(
      DIKey[Ladder[IO]],
      DIKey[Profiles[IO]],
    ),
  )
}

trait DummyTest extends LeaderboardTest {
  override final def config = super.config.copy(
    activation = Activation(Repo -> Repo.Dummy),
  )
}

trait ProdTest extends LeaderboardTest {
  override final def config = super.config.copy(
    activation = Activation(Repo -> Repo.Prod),
  )
}

final class LadderTestDummy extends LadderTest with DummyTest
final class LadderTestPostgres extends LadderTest with ProdTest

abstract class LadderTest extends LeaderboardTest {

  "Ladder" should {
    // this test gets dependencies through arguments
    "submit & get" in {
      (rnd: Rnd[IO], ladder: Ladder[IO]) =>
        for {
          user  <- rnd[UserId]
          score <- rnd[Score]
          _     <- ladder.submitScore(user, score)
          res   <- ladder.getScores.map(_.find(_._1 == user).map(_._2))
          _     <- assertIO(res contains score)
        } yield ()
    }

    // other tests get dependencies via ZIO Env:
    "assign a higher position in the list to a higher score" in {
      for {
        user1  <- rnd[UserId]
        score1 <- rnd[Score]
        user2  <- rnd[UserId]
        score2 <- rnd[Score]

        _      <- ladder.submitScore(user1, score1)
        _      <- ladder.submitScore(user2, score2)
        scores <- ladder.getScores

        user1Rank = scores.indexWhere(_._1 == user1)
        user2Rank = scores.indexWhere(_._1 == user2)

        _ <- if (score1 > score2) {
          assertIO(user1Rank < user2Rank)
        } else if (score2 > score1) {
          assertIO(user2Rank < user1Rank)
        } else IO.unit
      } yield ()
    }

    // you can also mix arguments and env at the same time
    "assign a higher position in the list to a higher score 2" in {
      ladder: Ladder[IO] =>
          for {
            user1  <- rnd[UserId]
            score1 <- rnd[Score]
            user2  <- rnd[UserId]
            score2 <- rnd[Score]

            _      <- ladder.submitScore(user1, score1)
            _      <- ladder.submitScore(user2, score2)
            scores <- ladder.getScores

            user1Rank = scores.indexWhere(_._1 == user1)
            user2Rank = scores.indexWhere(_._1 == user2)

            _ <- if (score1 > score2) {
              assertIO(user1Rank < user2Rank)
            } else if (score2 > score1) {
              assertIO(user2Rank < user1Rank)
            } else IO.unit
          } yield ()
    }
  }

}