Fortuno in 5 minutes#

You will learn to …

  • create a minimal project with the Fortran package manager,

  • add unit tests to the project,

  • understand some key concepts of the testing framework.

Before jumping in#

To follow this quickstart tutorial on Fortuno, make sure you have a recent version of the Fortran package manager (fpm) (version 0.10 or newer) and a Fortran compiler implementing the Fortran 2018 standard. Fortuno builds smoothly with recent versions of several popular Fortran compilers, but older compilers are known to fail to build it. Please check the minimal compiler versions in the Fortuno readme.
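If you are unsure about your setup, you can query the installed versions from the command line. This sketch assumes fpm and, as one example compiler, gfortran are on your PATH:

```shell
# Print the installed fpm version (should report 0.10.0 or newer)
fpm --version

# Print the compiler version; any Fortran 2018 compliant compiler works
gfortran --version
```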

Getting comfortable#

We’ll create a library named mylib containing a single function factorial() to calculate the factorial of an integer. We will automate the testing of the library using unit tests.

We first create a new project for mylib using fpm:

fpm new mylib
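This creates a standard project skeleton. With recent fpm versions, the generated layout looks roughly as follows (details may vary between fpm releases):

```
mylib/
├── fpm.toml        (package manifest)
├── README.md
├── app/main.f90    (main executable)
├── src/mylib.f90   (library module)
└── test/check.f90  (test program)
```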

Then, we add the following to the fpm.toml file (package manifest) to include Fortuno as a development dependency:

[dev-dependencies]
fortuno = { git = "https://github.com/fortuno-repos/fortuno" }

We develop the first version of our library by adapting src/mylib.f90 as follows:

src/mylib.f90#
!> Demo library to be unit-tested.
module mylib
  implicit none

  private
  public :: factorial

contains

  !> Calculates the factorial of a number.
  function factorial(nn) result(fact)

    !> number to calculate the factorial of
    integer, intent(in) :: nn

    !> factorial (note, there is no check made for integer overflow!)
    integer :: fact

    integer :: ii

    fact = 1
    do ii = 2, nn
      fact = fact * ii
    end do

  end function factorial

end module mylib

The main executable of our project simply prints the factorial of three specific values, so that we can check whether our factorial() function works as expected:

app/main.f90#
program main
  use mylib, only: factorial
  implicit none

  print "('factorial(', i0, ') = ', i0)",&
      & 0, factorial(0),&
      & 1, factorial(1),&
      & 2, factorial(2)

end program main
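To try the program before automating anything, you can build and run it with fpm. Due to format reversion, the single print statement emits one line per value pair, so the program output should look like the comment below (fpm's build messages are not shown):

```shell
# Build and run the main executable of the project.
# The program itself should print:
#   factorial(0) = 1
#   factorial(1) = 1
#   factorial(2) = 2
fpm run
```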

Now, let’s automate the testing procedure. We will write three unit tests, checking the factorial function for the specific input values 0, 1 and 2. The last test will intentionally fail to demonstrate the error reporting. Rename the file test/check.f90 to test/testapp.f90 and modify its content as follows:

test/testapp.f90#
!> Test app driving Fortuno unit tests.
program testapp
  use mylib, only : factorial
  use fortuno_serial, only : execute_serial_cmd_app, is_equal, test => serial_case_item,&
      & check => serial_check
  implicit none

  call execute_serial_cmd_app(&
      testitems=[&
          test("factorial_0", test_factorial_0),&
          test("factorial_1", test_factorial_1),&
          test("factorial_2", test_factorial_2)&
      ]&
  )

contains

  ! Test: 0! = 1
  subroutine test_factorial_0()
    call check(factorial(0) == 1)
  end subroutine test_factorial_0

  ! Test: 1! = 1
  subroutine test_factorial_1()
    call check(is_equal(factorial(1), 1))
  end subroutine test_factorial_1

  ! Test: 2! = 3 (will fail to demonstrate the output of a failing test)
  subroutine test_factorial_2()
    ! Failing check, you should obtain detailed info about the failure.
    call check(&
        & is_equal(factorial(2), 3),&
        & msg="Test failed for demonstration purposes"&
    )
  end subroutine test_factorial_2

end program testapp

Let’s build our library and run the unit tests by issuing

fpm test

in the main project folder. The expected output will show two successful tests and one failure, providing detailed information on the failed test.

Output of the “fpm test” command#
=== Fortuno - extensible unit testing framework for Fortran ===

# Executing test items
..F

# Logged event(s)

Failed     [run] factorial_2

-> Unsuccessful check
Check: 1
Msg: Test failed for demonstration purposes
::
Mismatching integer values
Obtained: 2
Expected: 3


# Test runs
Total:      3
Succeeded:  2  ( 66.7%)
Failed:     1  ( 33.3%)

=== FAILED ===

Congratulations! You’ve now implemented and run your first set of Fortuno unit tests, checking the integrity of your project.

Diving deeper#

Fortuno is built around the following key concepts:

  • Test cases (often simply referred to as tests): Represent individual unit tests and contain the code to execute when the test is run.

  • Test suites (not shown in the example above): Containers for structuring your tests. They may contain test cases and further test suites, nested to arbitrary depth. Their initialization (set-up) and finalization (tear-down) are customizable.

  • Test apps: Driver programs responsible for setting up and tearing down the test suites and for running the tests.

Depending on whether the routines you test are serial (possibly with OpenMP parallelization), MPI-parallelized or coarray-parallelized, you need to use different versions of these objects. Fortuno offers a dedicated interface for each of these three cases. In our example, we used the serial interface.
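Although our example did not use test suites, grouping tests into a suite follows the same pattern. The sketch below assumes that the fortuno_serial module also exports a serial_suite_item function, as shown in the Fortuno readme; check the Fortuno documentation for the exact interface:

```fortran
! Sketch only: grouping test cases into a named suite.
! Assumes fortuno_serial exports serial_suite_item (see the Fortuno readme).
program testapp_with_suite
  use mylib, only : factorial
  use fortuno_serial, only : execute_serial_cmd_app, is_equal,&
      & suite => serial_suite_item, test => serial_case_item, check => serial_check
  implicit none

  call execute_serial_cmd_app(&
      testitems=[&
          suite("factorial", [&
              test("zero", test_factorial_0),&
              test("one", test_factorial_1)&
          ])&
      ]&
  )

contains

  ! Test: 0! = 1
  subroutine test_factorial_0()
    call check(is_equal(factorial(0), 1))
  end subroutine test_factorial_0

  ! Test: 1! = 1
  subroutine test_factorial_1()
    call check(is_equal(factorial(1), 1))
  end subroutine test_factorial_1

end program testapp_with_suite
```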

We imported the following objects:

  use fortuno_serial, only : execute_serial_cmd_app, is_equal, test => serial_case_item,&
      & check => serial_check
  • execute_serial_cmd_app: Convenience subroutine setting up and executing the serial version of the command-line test app.

  • is_equal: Function checking the equality of two values and returning detailed information about the check.

  • serial_case_item: Function returning a wrapped test case object for serial tests. The _item suffix indicates a wrapper allowing the test case object to be used as an item (an element) of an array. We have introduced the abbreviation test for this rather long name.

  • serial_check: Subroutine registering the result of an actual check in serial tests, abbreviated here as check.

The actual program is pretty simple: we just execute the serial command-line app with all the tests we have written.

  call execute_serial_cmd_app(&
      testitems=[&
          test("factorial_0", test_factorial_0),&
          test("factorial_1", test_factorial_1),&
          test("factorial_2", test_factorial_2)&
      ]&
  )

We called the execute_serial_cmd_app() subroutine, passing it the array of test items through the testitems argument. You shouldn’t add any code after this call, as it does not return. Once execute_serial_cmd_app() has finished its task, it halts the program and communicates the result to the operating system via an exit code: 0 if all tests pass, or a positive value indicating failures.
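This behavior makes the test app easy to use in shell scripts or CI pipelines. A small sketch, to be run from the project folder:

```shell
# Run the test app and branch on its exit code.
if fpm test; then
  echo "all tests passed"
else
  echo "there were test failures"
fi
```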

For creating the individual test items, we employed the serial_case_item() function (using its local abbreviated name test()). In each invocation, we provided a distinctive name for the test and specified the subroutine to be executed when the test is run.

  ! Test: 0! = 1
  subroutine test_factorial_0()
    call check(factorial(0) == 1)
  end subroutine test_factorial_0

  ! Test: 1! = 1
  subroutine test_factorial_1()
    call check(is_equal(factorial(1), 1))
  end subroutine test_factorial_1

  ! Test: 2! = 3 (will fail to demonstrate the output of a failing test)
  subroutine test_factorial_2()
    ! Failing check, you should obtain detailed info about the failure.
    call check(&
        & is_equal(factorial(2), 3),&
        & msg="Test failed for demonstration purposes"&
    )
  end subroutine test_factorial_2

Our (rather simple) test subroutines need no arguments; they interact with the testing framework by calling specific subroutines, such as check() in our example. The check() subroutine accepts either a logical expression (for instance, factorial(0) == 1) or a special type, as returned by the is_equal() function, which encapsulates the outcome of the comparison and additional details in case of a failure. The check() call registers the verification outcome in the framework, including any failure specifics. A test is deemed successful if no check() call with a failing (e.g. logically false) argument was triggered during the run.