Using Kotest
The Kotlin standard library includes some built-in support for unit testing,
via the kotlin.test package, but there are nicer third-party options out
there. In this module we will be using the Kotest framework, which
offers some very powerful and intuitive ways of writing tests.
Both kotlin.test and Kotest use JUnit as the underlying platform
for running tests on the JVM.
Gradle Configuration
To add support for Kotest to a Gradle-based project, you need to add the
required dependencies to build.gradle.kts and then, if required, configure
the test tasks.
...
val kotestVersion = "5.9.1"

dependencies {
    ...
    testImplementation("io.kotest:kotest-runner-junit5:$kotestVersion")
    testImplementation("io.kotest:kotest-assertions-core:$kotestVersion")
}

...

tasks.withType<Test>().configureEach {
    useJUnitPlatform()
    testLogging {
        events("passed", "skipped", "failed")
    }
}
Notice how, in the example above, the dependencies are specified using
testImplementation. This ensures that the libraries are used only for the
purpose of running tests; they won’t be bundled with your application if
you create a distributable release via the distZip task, for example.
The second part of the example above configures any tasks related to testing so that they will use JUnit 5 as the underlying platform for running tests. It also sets up logging of test results so that individual tests will be logged to the console, with an indication of whether the test passed, failed or was skipped by the framework.
Additional configuration of Kotest can be done via settings in the file
kotest.properties, which should be placed in src/test/resources.
See tasks/task1_4 for an example.
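For instance, a kotest.properties file can adjust framework settings such as test timeouts. The property names below are examples of Kotest's documented system properties; check the documentation for the version you are using:

```properties
# Fail any test that runs for longer than 5 seconds
kotest.framework.timeout=5000

# Fail any single invocation of a test that runs for longer than 5 seconds
kotest.framework.invocation.timeout=5000
```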
Amper Configuration
Configuring an Amper project for Kotest is a bit simpler than configuring a
Gradle project. The main thing you need to do is add two new entries to the
test-dependencies section of the module.yaml file:
test-dependencies:
  - io.kotest:kotest-runner-junit5:5.9.1
  - io.kotest:kotest-assertions-core:5.9.1
In Amper projects, the kotest.properties file should be placed in a
subdirectory of the project directory named testResources.
See tasks/task1_5 for an example.
Test Organization
One of the nice features of Kotest is that it supports multiple testing
styles. These styles represent different ways of organizing and writing
tests, drawing inspiration from a range of testing approaches and testing
frameworks. You can use whichever style suits you best, but we will focus
here on the StringSpec style, which is perhaps the simplest.
In this style, tests are collected together into classes that inherit from
StringSpec. Each test consists of a descriptive string, followed by a
lambda expression containing the code for the test. (We will cover lambda
expressions properly later; for now, just think of
them as blocks of code, enclosed in braces.)
For example, suppose you want to write unit tests for the grade() function
discussed earlier. You could structure these tests
using the StringSpec style like this:
class GradeTest : StringSpec({
    "Grade of Distinction for mark between 70 and 100" {
        // code to test for a Distinction goes here
    }
    "Grade of Pass for mark between 40 and 69" {
        // code to test for a Pass goes here
    }
    ...
})
Notice how each string describes one aspect of the expected behaviour of the code being tested. If you have a written specification of how the code should behave, you can often extract these strings directly from that written specification!
When the tests are run, each of these strings will be displayed, along with a message to indicate whether the test passed or failed.
When using Kotest, you will need an import statement for StringSpec:
import io.kotest.core.spec.style.StringSpec
The same goes for other elements of the framework. These imports should be
put at the top of the .kt file containing your tests.
The .kt files containing your tests should be placed in a separate directory
subtree, parallel to the application source code. Gradle and Amper use
different locations for these subtrees, which are summarized in the table
below.
| Code Type | Gradle Location | Amper Location |
|---|---|---|
| Application | src/main/kotlin | src |
| Tests | src/test/kotlin | test |
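For example, a Gradle project containing the GradeTest class shown above might be laid out as follows (the file names are illustrative):

```
project/
├── build.gradle.kts
└── src/
    ├── main/
    │   └── kotlin/
    │       └── Grading.kt          (contains grade())
    └── test/
        ├── kotlin/
        │   └── GradeTest.kt        (contains the tests)
        └── resources/
            └── kotest.properties
```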
Writing a Test
Unit tests follow a standard pattern, consisting of three steps:
- Arrange
- Act
- Assert
The Arrange step involves setting up the conditions needed for the test to be performed. This might mean defining variables to represent a particular set of values that will be passed to a function, or creating an instance of a class and configuring it to be in a particular state.
The Act step involves carrying out the operation we wish to test—i.e., invoking a function or method.
The Assert step involves verifying that the operation produced the expected outcome.
Kotest supports a particularly nice syntax for making assertions, involving
matchers. We will explore matchers more thoroughly later. Here, we
focus on the most useful of them, shouldBe.
shouldBe is an infix function. It asserts that the expression on its
left yields the value provided by the expression on its right. For example,
we expect that the function call grade(70) should return the string
"Distinction". We can make this assertion with
grade(70) shouldBe "Distinction"
Notice how clear and easy to read this is!
We could have written this so that it more explicitly follows the Arrange-Act-Assert pattern:
val mark = 70 // Arrange
val result = grade(mark) // Act
result shouldBe "Distinction" // Assert
However, such an approach seems unnecessarily complicated in this case; the concise, single-line version is simpler and easier to read. Simplicity is important when writing tests.
You should aim to make tests short and simple—ideally avoiding complex logic such as selection or iteration. The reason for this should be obvious: the more complex a test is, the greater the chance of it containing a programming error itself.
Test Granularity
A written specification for the grade() function might look like this:
A grade of "Distinction" is returned for a mark between 70 and 100
A grade of "Pass" is returned for a mark between 40 and 69
A grade of "Fail" is returned for a mark between 0 and 39
An unknown grade ("?") is returned for marks below 0
An unknown grade ("?") is returned for marks above 100
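For reference, a function satisfying this specification could be sketched like this. This is an illustrative implementation; the actual grade() function discussed earlier may differ in detail:

```kotlin
// Illustrative implementation of grade(), matching the specification above
fun grade(mark: Int): String = when (mark) {
    in 70..100 -> "Distinction"
    in 40..69 -> "Pass"
    in 0..39 -> "Fail"
    else -> "?"          // marks below 0 or above 100
}

fun main() {
    println(grade(70))   // prints "Distinction"
    println(grade(39))   // prints "Fail"
    println(grade(101))  // prints "?"
}
```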
You could turn each line of this specification into a test. For example, the first could be
"Grade of Distinction for marks between 70 and 100" {
    grade(70) shouldBe "Distinction"
    grade(85) shouldBe "Distinction"
    grade(100) shouldBe "Distinction"
}
Note that three assertions are made here, corresponding to the two boundary values and one typical value for this particular equivalence partition.
Alternatively, you could adopt a more fine-grained approach, in which three separate tests are done to verify that a grade of Distinction is computed correctly:
"Grade of Distinction for mark of 70" {
    grade(70) shouldBe "Distinction"
}
"Grade of Distinction for mark of 85" {
    grade(85) shouldBe "Distinction"
}
"Grade of Distinction for mark of 100" {
    grade(100) shouldBe "Distinction"
}
Which of these approaches is better?
The nice thing about the coarse-grained approach is that there is a clear one-to-one mapping from statements in the written specification onto tests. The coarse-grained approach also results in fewer tests—which means the test suite will run a little faster, because the overhead associated with finding and running a test is incurred a smaller number of times.
However, the coarse-grained approach also means that each test will be a bit larger and more complex than it could be. Remember that tests should ideally be small and simple!
Other issues can arise when a coarse-grained test makes multiple assertions. By default, a test fails as soon as an assertion fails. So you don’t get any useful information from any subsequent assertions until you’ve fixed the issue that caused the first failure.
Also, when the test framework reports a failed assertion from a multi-assertion test, it won’t always be immediately clear from the report which assertion caused the test to fail. You will always see a line number, which you can check against the test source code, but it would be nice to see instantly what the issue was, without having to make that check.
Soft Assertions & withClue
Kotest offers two solutions to the problems mentioned above.
The first is to ‘soften’ the assertions made by a test. When you do this, the framework will execute all of the assertions in a test and provide feedback on which of them failed, rather than stopping at the first failed assertion.
To enable soft assertions, simply create a file KotestProjectConfig.kt in
the same directory as your tests, containing the following code:
import io.kotest.core.config.AbstractProjectConfig

object KotestProjectConfig : AbstractProjectConfig() {
    override val globalAssertSoftly = true
}
Don’t worry about exactly what this code means for the moment; all will become clear later, once we’ve covered classes and object-oriented programming.
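To see why this helps, here is a plain-Kotlin sketch of the idea behind soft assertions. This is not Kotest’s actual implementation, just an illustration of the concept: every check runs to completion, failures are collected, and all of them are reported at the end.

```kotlin
// Throws AssertionError when the condition does not hold.
fun expect(condition: Boolean, message: String) {
    if (!condition) throw AssertionError(message)
}

// Runs every check, collecting failure messages instead of
// stopping at the first failed assertion.
fun softly(vararg checks: () -> Unit): List<String> {
    val failures = mutableListOf<String>()
    for (check in checks) {
        try {
            check()
        } catch (e: AssertionError) {
            failures += e.message ?: "assertion failed"
        }
    }
    return failures
}

fun main() {
    val failures = softly(
        { expect(1 + 1 == 2, "1 + 1 should be 2") },  // passes
        { expect(2 + 2 == 5, "2 + 2 should be 5") },  // fails
        { expect(3 + 3 == 7, "3 + 3 should be 7") },  // fails
    )
    failures.forEach(::println)  // both failures are reported, not just the first
}
```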
The second trick is to add clues to each assertion in a multi-assertion test. Clues are short strings of text that will be printed along with details of the failed assertion. The test for a grade of Distinction could be written using clues like this:
import io.kotest.assertions.withClue
import io.kotest.core.spec.style.StringSpec
import io.kotest.matchers.shouldBe

class GradeTest : StringSpec({
    "Grade of Distinction for marks between 70 and 100" {
        withClue("Mark=70") { grade(70) shouldBe "Distinction" }
        withClue("Mark=85") { grade(85) shouldBe "Distinction" }
        withClue("Mark=100") { grade(100) shouldBe "Distinction" }
    }
    ...
})
Running Tests in Gradle
As we saw earlier, when we first looked at build tools, you can run the unit tests in a Gradle project via the Gradle wrapper:
./gradlew test
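If you only want to run some of the tests, Gradle’s standard --tests option filters them by class name:

```
./gradlew test --tests "GradeTest"
```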
If logging to the console has been configured, you will see the status of each test listed. If all tests pass, then Gradle will conclude its output with ‘BUILD SUCCESSFUL’; otherwise, it will count the failures and also provide a filename and line number for each failed test, allowing you to check the details of the assertion that caused the test to fail:
GradeTest > Grade of Distinction for marks between 70 and 100 FAILED
    io.kotest.assertions.AssertionFailedError at GradeTest.kt:9

GradeTest > Grade of Pass for marks between 40 and 69 PASSED

GradeTest > Grade of Fail for marks between 0 and 39 PASSED

3 tests completed, 1 failed

FAILURE: Build failed with an exception.
In addition, Gradle will direct your attention to an HTML report containing more detailed information on the failed tests (by default, this is generated under build/reports/tests/test). You can paste the given file URL into the address bar of a web browser to view this report.
Running tests in Amper
To run tests in an Amper project, do
./amper test
Note that Amper reports test results in the console in a different way to Gradle. Also, it does not currently generate a nice HTML report like Gradle does.