Izumi Glossary
Izumi-specific terms
Izumi introduces many concepts which are not common in other frameworks and libraries. We had to come up with some good and descriptive names for them:
- multi-modal application – an application which supports multiple modes. A multi-modal application can be soundly reconfigured by a command-line flag or a configuration flag. An example: an application with multiple persistence layer implementations, like a PostgreSQL persistence layer and a MongoDB one. DIStage provides first-class support for multi-modality at both the component and the application level.
- multi-tenant application, also role-based application – an application which has multiple entrypoints (or Roles). When a user launches a multi-tenant application, they may choose which Roles to launch and provide the respective command-line arguments, configuration sections, etc. An example: an OSGi container running several WAR applications.
- flexible monolith – a multi-modal, multi-tenant application where each Role substitutes for a microservice. Flexible monoliths can be deployed in both a microservice-oriented and a monolith-oriented manner. It’s typical for flexible monoliths to have dummy implementations of their integration points and transport layers. Flexible monoliths (and multi-tenant applications in general) give us a lot of additional deployment flexibility, increase computational density, and allow us to run product simulations.
- dummy, also manual mock, fake – a handcrafted test implementation of an application integration point. Dummies are similar to automatic mocks, but they are not auto-generated. Dummies implement component interfaces, should follow all good programming practices, never break encapsulation (as automatic mocks do) and provide a reasonably good simulation of the aspects important for a particular domain. For example, a dummy implementation of a UDP transport layer might be able to simulate packet loss and packet reordering.
- integration point – an application component which runs outside of the application process. An application interacts with its integration points over the network (or, in some cases, using various forms of host-local I/O). A microservice A interacting with another microservice B has an integration point with microservice B. A microservice interacting with a third-party service through its API has an integration point with that third-party service.
- external integration point – an integration point a team has no ownership over. All integration points with components provided by third-party companies, projects, or teams which cannot be involved in a productive discussion are external integration points. Integrations between microservices are internal integration points; integrations with components developed by another team are internal only if a discussion between the teams is possible. If we integrate with a PostgreSQL database running on our own infrastructure, it should still be considered an external integration point, because we can’t alter its APIs and have to work around any issues instead of going the lengthy way of interacting with its developers or implementing custom patches. If we integrate with the Stripe API, that’s an external integration point too. If we integrate with an internally developed microservice but cannot involve the team which owns it in a discussion, that’s an external integration point as well.
- evil integration point – an external integration point with a proprietary third-party service which cannot be influenced by the application developers at all. An evil integration point cannot be launched in an isolated environment and requires an Internet connection to be interacted with. An integration point with a PostgreSQL database isn’t evil because we can launch and access the database locally without Internet access. Frequently, evil integration points have no test APIs, are billed per-request and, in the worst cases, can’t be safely accessed for testing purposes at all.
- integration check – a predicate which verifies that an integration point is available and functional. An application should fail fast if an integration check fails. A test should be ignored if an integration check fails.
- product simulation – a multi-tenant application launched with all its integration points substituted by dummies. This allows all the client teams, and individual developers in these teams, to quickly launch isolated, local, transient test environments when necessary, avoiding long provisioning wait times and any interference between developers working with the same test instances. A simulation-oriented workflow also promotes good testing practices, like having self-contained fixtures for reproducible tests which never rely on the pre-provisioned state of a test environment.
- dual test tactic, also dual tests – an approach where developers run the same test suites twice: against dummy integration point implementations and against production ones. Usually we do this for business logic, which should be abstracted from all integration-point-specific details, and for the interfaces of the integration points themselves. When production integration points are not available, their tests should be skipped and never fail the test run. Dual tests simplify onboarding and promote clean design and good abstractions. All the important aspects of the integration points may be simulated in their respective dummies.
- scene, also environment – a full set of application integration points which are required to be available for an application or a test suite to run.
- managed scene – a scene which an application or a test suite may provision and clean up automatically. Usually managed scenes are created by setting up transient Docker containers.
- provided scene – a scene which an application or a test suite expects to be pre-provisioned and readily available when it starts.
- constructive test taxonomy – an improved test taxonomy proposed by 7mind. The Constructive Test Taxonomy replaces the outdated “unit-functional-integration” classification with a multi-dimensional weighted space and forms a reasonable mental framework which helps to approach testing in a more structured and rational manner. More details can be found here.
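The dual test tactic, integration checks and dummies described above can be sketched in a few lines of Python. Everything here is illustrative and hypothetical (Izumi provides this machinery for Scala); the point is the shape of the workflow: the shared contract runs against the dummy unconditionally, and against the production implementation only when its integration check passes.

```python
class KeyValueStore:
    """Interface of an integration point."""
    def put(self, key, value): raise NotImplementedError
    def get(self, key): raise NotImplementedError

class DummyStore(KeyValueStore):
    """A dummy: a handcrafted in-memory fake of the real store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def production_store_check():
    """Integration check: a predicate verifying the real store is
    reachable. Here it always fails, standing in for a network probe."""
    return False

def run_store_contract(store):
    """The shared test body: must hold for every implementation."""
    store.put("a", 1)
    assert store.get("a") == 1
    assert store.get("missing") is None

def dual_test():
    results = {}
    run_store_contract(DummyStore())      # the dummy half always runs
    results["dummy"] = "passed"
    if production_store_check():          # runs only when the scene provides it
        run_store_contract(make_production_store())  # hypothetical constructor
        results["production"] = "passed"
    else:
        results["production"] = "skipped" # skipped, never failed
    return results

print(dual_test())  # {'dummy': 'passed', 'production': 'skipped'}
```

Note that the production half of the suite never fails the run when the scene is unavailable; it is reported as skipped, which is exactly the behavior the dual test tactic requires.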
Izumi-overloaded terms
We have overloaded the following terms for the domain of dependency injection:
- wiring definitions – the definitions of the components of an application and the dependencies between them. Frequently, wiring definitions are written as direct wiring code (direct constructor calls, variable initialization and parameter passing), or with one of the three major creational patterns: Singletons, Service Locators and Dependency Injection. Singletons and Service Locators do not completely eliminate direct wiring code.
- wiring process – the process of execution of the wiring definitions.
- wiring problem – the engineering problem of finding an efficient and maintainable way to write wiring definitions and perform the wiring process. A source of multiple holy wars. To an external observer, a typical discussion of the wiring problem strongly resembles a bunch of monkeys throwing whatever they have at each other.
- garbage-collection, also garbage-collecting DI – a process of hard exclusion of dependency declarations which are not required for the current application configuration to run. A garbage-collecting DI requires a set of garbage collection roots (application entrypoints) in order to trace the required dependencies. It’s important to note that, while lazy dependency instantiation does provide similar capabilities, it’s less observable and cannot guarantee the soundness of the wiring process. Thus lazy instantiation frequently leads to problems at run time.
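Garbage-collecting DI boils down to a reachability traversal over the binding graph. A minimal sketch, assuming a binding map from component names to their dependencies (the names and the representation are illustrative, not DIStage’s API):

```python
def collect_garbage(bindings, roots):
    """Keep only the bindings reachable from the garbage collection
    roots (the application entrypoints); drop everything else."""
    reachable = set()
    stack = list(roots)
    while stack:
        component = stack.pop()
        if component in reachable:
            continue
        reachable.add(component)
        stack.extend(bindings.get(component, []))
    return {c: deps for c, deps in bindings.items() if c in reachable}

bindings = {
    "HttpServer":   ["UserService"],
    "UserService":  ["PostgresRepo"],
    "PostgresRepo": [],
    "MongoRepo":    [],  # alternative mode, not wired in this configuration
    "DebugConsole": [],  # an entrypoint we did not request
}
plan = collect_garbage(bindings, roots=["HttpServer"])
print(sorted(plan))  # ['HttpServer', 'PostgresRepo', 'UserService']
```

Because the exclusion is hard (the unreachable `MongoRepo` and `DebugConsole` bindings simply never exist in the result), misconfigurations surface before anything is instantiated, unlike lazy instantiation, where an unused broken binding lingers until someone touches it at run time.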
Terms used by Izumi
We also use the following terms, which have stable semantics but aren’t widely used in the domain of software design and engineering:
- generative programming, also planning, staged programs, staged execution – a powerful generic approach to software design in which program execution is separated into two phases. During the planning phase, a program generates a “script” in some DSL which describes how to achieve a goal. During the interpretation phase, the program interprets the plan. When we apply generative programming, we want our DSL “scripts” to always be executable in finite time, be Turing-incomplete, have no support for conditional execution and have no support for unbounded loops. A generative program may oscillate between the planning and interpretation phases and may reuse interpretation results in subsequent planning cycles.
- directed acyclic graph, also DAG – there is a good definition on Wikipedia. It’s important to note that DAGs are the best natural way to express parallel programs, thus they work perfectly as a native representation for any non-trivial plans. The downside of DAGs is the computational complexity of many important algorithms over them and the lack of good tools and convenient metaphors to work with them. Most of the programs we write are unnecessarily sequential, which wouldn’t be the case if we used DAGs to describe them.
- compile-time reflection – type information which is preserved at compile time and does not require any support from the language runtime. In some sense, almost any reflection is compile-time reflection (e.g. Java type information is preserved during compilation). However, a true compile-time reflection would try to precompute whatever is possible at compile time and would not require whole parts of the compiler to be available at run time. Without a preprocessor, only a language with strong macro capabilities can support compile-time reflection.
- test dependency memoization – sound sharing of test dependencies between tests in a test suite or a family of test suites. Good memoization should avoid any use of singletons, apply just-in-time cleanups, give users fine-grained control over what can and what cannot be shared, and share dependencies transitively.
- shift left – a set of practices and a mindset which assume that engineers should try to address problems as early in the software development pipeline as possible. The shift-left approach is based on a strong semi-empirical hypothesis saying that the total amount of resources spent on a particular issue is lower when it is addressed earlier. Generally, the shift-left approach always pays back.
- shift right – a set of bad practices and a mindset allowing developers to postpone problem detection as long as they can. The shift-right approach creates technical debt, forces operations teams to do work which could easily be avoided and, frequently, turns users into alpha-testers.
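Generative programming and DAGs naturally combine: the planning phase emits a Turing-incomplete plan shaped as a DAG of steps, and the interpretation phase executes it in dependency order (independent steps could run in parallel). A minimal sketch, with an entirely hypothetical dependency table; Python’s `graphlib` is used only to order the DAG:

```python
from graphlib import TopologicalSorter

def plan(goal_components):
    """Planning phase: produce a plan as a DAG (step -> dependencies).
    The table is fixed and finite: no conditions, no unbounded loops,
    so interpretation is guaranteed to terminate."""
    table = {"config": [], "db": ["config"], "cache": ["config"],
             "service": ["db", "cache"]}
    needed, stack = {}, list(goal_components)
    while stack:
        step = stack.pop()
        if step not in needed:
            needed[step] = table[step]
            stack.extend(table[step])
    return needed

def interpret(dag):
    """Interpretation phase: execute the plan steps once their
    dependencies are done. Here we just run them sequentially in a
    valid topological order; a real interpreter could parallelize."""
    return list(TopologicalSorter(dag).static_order())

print(interpret(plan(["service"])))
```

The plan itself is inert data: it can be inspected, diffed or verified before a single step runs, which is exactly the observability benefit the planning approach is after.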
Terms elaborated by Izumi
We also use the following terms, which are widely used but often improperly, and which have unstable semantics:
- product – all the logical components of some software solving a particular business problem. Typically a product includes multiple back-end services (or microservices), various client applications (mobile, web, desktop), third-party components managed by the company and external third-party service APIs.
- continuous integration – automatic testing of all the integrations between product components after each commit. That’s a lot more than just running builds and tests of individual microservices after each commit: we want to make sure that the whole product works properly after each change. Integration should start as early as possible. IDL/RPC languages are integration tools. Static type checkers are integration tools. Formal proof assistants are integration tools. Deployment and orchestration systems are integration tools. Usually monitoring tools may be considered integration tools too, though a monitoring tool which is not part of the feedback loop with application developers shouldn’t be considered one.