
Automated Testing

Introduction

The purpose of this page is to discuss aspects of the "philosophy" of automated testing. It is hoped that the discussion will serve as an overview of the aims of applying automated testing techniques in our codebases and projects, and of what we can achieve by doing so.

Purpose

Automated testing is the practice of writing logic that can be used to ascertain the correct functioning of a system. This is, purposefully, a rather broad definition, as many techniques exist that can aid in doing so at many differing scales with respect to the pieces that make up a system. The central theme, however, is always assurance.

The confidence we are trying to gain can be aimed at different stages of the process of delivering software to users. While building the system we are concerned with the behaviour of the small pieces we are composing and relying upon - behaviour at a very low level - whereas from the perspective of a user performing an operation via a user interface, it is the result of the overall operation they have initiated that is of concern.

Forms of testing

The most commonly considered automated testing technique is unit testing. This is specifically concerned with the various building blocks that are being altered, recomposed, added or removed. Unit testing is very much targeted at aiding the developer of software in its implementation.

In order for the operation of the system at large to be correct, we must be able to rely on the behaviour of the software components that are interfaced with. This gives the testing of its constituent pieces a somewhat central role. We refer to these pieces as units.
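As a minimal sketch, assuming Python and pytest purely for illustration (the ideas on this page are language-agnostic), the hypothetical parse_price function below is such a unit, and its tests pin down the behaviour we rely upon:

```python
# A sketch only: parse_price is a hypothetical unit, not part of any real codebase.

def parse_price(text: str) -> int:
    """Parse a price such as "4.99" into an integer number of pence."""
    pounds, _, pence = text.partition(".")
    return int(pounds) * 100 + int(pence or "0")

def test_whole_amount_has_no_pence():
    assert parse_price("3") == 300

def test_pence_are_included():
    assert parse_price("4.99") == 499
```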

However, in building assurance we must also be concerned with the operation of larger portions of the software, particularly a whole feature as it would be experienced by a user. For the latter case we talk about system-level and/or end-to-end tests, while checking the behaviour of a number of units in concert is referred to as integration testing.
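As a rough sketch of the distinction (Python again, with two hypothetical units: a formatter and a writer), an integration test exercises several units in concert and asserts on their combined result:

```python
import io

# Hypothetical units, assumed for illustration: a formatter and a writer.

def format_greeting(name: str) -> str:
    return f"Hello, {name}!"

def write_line(stream: io.TextIOBase, line: str) -> None:
    stream.write(line + "\n")

def test_greeting_is_written_to_stream():
    # Integration: both units participate, and the assertion covers the
    # combined outcome rather than either unit alone.
    stream = io.StringIO()
    write_line(stream, format_greeting("Ada"))
    assert stream.getvalue() == "Hello, Ada!\n"
```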

Testing of units

The first thing of note is that we purposefully use a generic term: "unit". This is a conscious decision to avoid any language- or runtime-specific terms, such as functions, modules, packages, libraries, etc., to ensure that we do not equate a unit with such an item or consider a unit to be of equivalent size to one, though in practice they may often coincide.

A unit is thus a separable chunk of logic that can be tested in isolation. Such a definition is necessarily shaped by the concrete aspects of a codebase. Note also that the notion of a unit is independent of whether it is interfaced with by direct or indirect call.
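To illustrate (a sketch with a hypothetical slugify unit), the boundary chosen here is a single public function; the helpers inside it are implementation detail and are exercised only indirectly, through that boundary:

```python
import re

# The unit's boundary is slugify; the underscore-prefixed helpers are
# internal detail we are free to restructure without touching the test.

def _strip_punctuation(text: str) -> str:
    return re.sub(r"[^a-z0-9\s-]", "", text.lower())

def _collapse_whitespace(text: str) -> str:
    return re.sub(r"[\s-]+", "-", text.strip())

def slugify(title: str) -> str:
    return _collapse_whitespace(_strip_punctuation(title))

def test_slugify_produces_url_safe_slug():
    # The test addresses only the chosen boundary, not the helpers.
    assert slugify("Hello, World!") == "hello-world"
```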

This also means that, as developers of the software, it is our task to define the boundaries of our units as part of the process of authoring tests for them. This is critical to maintainable test suites.

Test aims and testability

Revisiting and reiterating the central notion that tests are concerned with providing assurance, we can begin to build that out into a series of principles.

Principles

Given that we are primarily concerned with automated tests, we ought, by definition, to have the ability to run such tests at any point; thus any requirements or dependencies needed for the operation of a test must be automated and repeatable as well.

a test must establish its own pre-requisites
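As a sketch of this principle, assuming pytest and its built-in tmp_path fixture, the test below writes the very file it later reads instead of depending on anything pre-existing in its environment:

```python
import json

def test_config_round_trips_nested_keys(tmp_path):
    # Establish the prerequisite: the test creates its own input file.
    config_file = tmp_path / "config.json"
    config_file.write_text(json.dumps({"db": {"host": "localhost"}}))

    # Exercise and verify against the data the test itself put in place.
    loaded = json.loads(config_file.read_text())
    assert loaded["db"]["host"] == "localhost"
```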

Furthermore, our ability to run and re-run tests means that each test must clean up after itself.

a test must leave no remnants of its execution
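One pytest-flavoured sketch of this is a fixture that yields a resource and then disposes of it, so every run leaves the environment exactly as it found it:

```python
import sqlite3
import pytest

@pytest.fixture
def scratch_db():
    # Set up: an in-memory database that exists only for this test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn
    # Tear down: runs after the test, leaving no remnant behind.
    conn.close()

def test_inserted_user_is_queryable(scratch_db):
    scratch_db.execute("INSERT INTO users VALUES ('ada')")
    rows = scratch_db.execute("SELECT name FROM users").fetchall()
    assert rows == [("ada",)]
```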

The above two tenets lead, in conjunction, to another and arguably more powerful principle: tests may not affect each other.

a test must be able to run in isolation and may not affect other tests
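Concretely (another sketch), each test below builds its own state through a small factory rather than sharing anything module-level, so the tests pass in any order and in any subset:

```python
def make_basket() -> list:
    # A fresh factory call per test means no state is shared between tests.
    return []

def test_basket_starts_empty():
    assert make_basket() == []

def test_adding_an_item():
    basket = make_basket()
    basket.append("tea")
    assert basket == ["tea"]
```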

Since our intent is to identify aspects of the system under test that are not behaving as we intend, a test should point to the cause of a particular issue.

a test should be focused on one aspect of behaviour
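As a final sketch, rather than one test asserting everything about the hypothetical normalise function at once, each behaviour gets its own test, so a failure points directly at its cause:

```python
def normalise(name: str) -> str:
    # Hypothetical unit: trims surrounding whitespace and lower-cases a name.
    return name.strip().lower()

def test_normalise_trims_whitespace():
    assert normalise("  Ada  ") == "ada"

def test_normalise_lowercases():
    assert normalise("ADA") == "ada"

def test_normalise_leaves_clean_input_unchanged():
    assert normalise("ada") == "ada"
```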