Welcome to this guide on implementing integration testing in ROS 2! As modern robotics systems grow more complex, it’s crucial to have an effective testing strategy in place to ensure robust and reliable performance as well as an efficient development workflow.

This guide provides a detailed look into unit and integration testing in ROS 2, with worked code examples. First, we’ll briefly touch on the unit testing frameworks gtest (C++) and pytest (Python). Next up is the main topic: integration testing. Complexities such as launching systems of several nodes and avoiding crosstalk between parallel-running tests are on the menu. Test result visualization is also covered, to allow efficient introspection into what exactly is being tested and which tests are (not) succeeding.

The code snippets in this article go together with the abaeyens/ros2-integration-testing-examples repo. That repository provides worked-out, runnable test examples, and its included Dockerfile ensures you can easily and exactly replicate the setup explained here. To keep the focus on testing itself, actual functionality under test has been kept to a minimum for conciseness. Finally, all code is based on ROS 2 Jazzy, future-proofing you from the start! (It probably also works on Humble and Iron.) It’s recommended to have the repo open while going through this article. Enjoy the journey!

Unit tests

C++: gtest

Using gtest is straightforward. In the test files, simply include gtest/gtest.h and add a few macros:

#include <gtest/gtest.h>

TEST(dummysuite, dummytestcase) {
    // Don't test any actual functionality, just dummy here
    EXPECT_EQ(2, 2);
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

To register these tests, the package ament_cmake_gtest is most suitable; thanks to its function ament_add_gtest, just two lines are required:

find_package(ament_cmake_gtest REQUIRED)
ament_add_gtest(test_unit_cpp test/test_unit.cpp)

See the ROS doc for more, such as dependency linking.
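
As a quick illustration, suppose the code under test uses rclcpp; the test target could then be linked roughly like this (rclcpp and my_library are placeholders for your actual dependencies, not part of the example repo):

find_package(rclcpp REQUIRED)
ament_target_dependencies(test_unit_cpp rclcpp)
# If the package builds a library that is being tested, link it too:
# target_link_libraries(test_unit_cpp my_library)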

Python: pytest

In Python, there are two common unit testing frameworks: pytest and unittest. In ROS 2, the former receives the most support when it comes to unit testing (though, as we’ll see in the next section, integration testing relies on unittest instead). pytest turns each function named test_* into a test case, with one or more assert statements as desired:

import pytest

def test_dummy():
    """no actual functionality test here, just a dummy"""
    assert True

Registering is similar to gtest:

find_package(ament_cmake_pytest REQUIRED)
ament_add_pytest_test(pythontest test/test_unit.py)

Usually, one wants to set a custom working directory as part of the ament_add_pytest_test call; see here how to do so with a concise for-loop. Alternatively, if you’re using a setup.py instead of a CMakeLists.txt, tests will be auto-discovered, see the ROS doc.
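
To give an idea, such a loop could look roughly as follows; the file list and the working directory below are just placeholders (ament_add_pytest_test accepts a WORKING_DIRECTORY argument):

find_package(ament_cmake_pytest REQUIRED)
set(_pytest_tests
  test/test_unit.py
  # add further pytest files here
)
foreach(_test_path ${_pytest_tests})
  get_filename_component(_test_name ${_test_path} NAME_WE)
  ament_add_pytest_test(${_test_name} ${_test_path}
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/test"
  )
endforeach()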

Integration tests

Where unit tests focus on validating a very specific piece of functionality, integration tests go all the way in the other direction. In ROS this means launching a system of multiple nodes, for example the Gazebo simulator and the Nav2 navigation stack. As a result, these tests are far more complex both to set up and to run. A key aspect of ROS integration testing is that nodes belonging to different tests shouldn’t communicate with each other, even when run in parallel; here this is achieved by using a specific test runner that picks unique ROS domain IDs.

How to write

The main tool to use here is the launch_testing package. This ROS-agnostic (!) functionality allows extending a classic Python launch file with both active tests (which run while the nodes are also running) and post-shutdown tests (which run once after all nodes have exited). test_integration.py provides a full example. launch_testing relies on the Python standard-library module unittest for the actual testing. Hence, preferably avoid mixing it with pytest: active tests work, but post-shutdown tests aren’t supported yet.

The four key elements, illustrated by the minimal sketch after this list, are:

  • The function generate_test_description: same as the classic way of launching nodes (basically, it replaces generate_launch_description).
  • launch_testing.actions.ReadyToTest(): alerts the test framework that the tests should be run. This ensures that the active tests and the nodes are run synchronously.
  • An undecorated class inheriting from unittest.TestCase: houses the active tests, including their set-up and teardown. The nodes’ console output (including ROS logging) is accessible through proc_output.
  • A second class inheriting from unittest.TestCase, decorated with @launch_testing.post_shutdown_test(). As the name implies, these tests run after all nodes have shut down. A common assert here is to check the exit codes, to ensure all nodes exited cleanly.
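
Putting these elements together, a launch test file could look roughly like the following minimal sketch. It is only an illustration: the talker node from demo_nodes_cpp and the ‘Publishing’ log text it waits for are assumptions for this sketch, not part of the repo’s test_integration.py.

import unittest

import launch
import launch_ros.actions
import launch_testing
import launch_testing.actions
import launch_testing.asserts


def generate_test_description():
    # Node under test (assumption for illustration: the demo_nodes_cpp talker).
    talker = launch_ros.actions.Node(
        package='demo_nodes_cpp', executable='talker', name='talker')
    return launch.LaunchDescription([
        talker,
        # Tell the framework the system is up, so the active tests may start.
        launch_testing.actions.ReadyToTest(),
    ]), {'talker': talker}


class TestTalkerActive(unittest.TestCase):
    """Active tests: run while the launched nodes are still running."""

    def test_logs_output(self, proc_output, talker):
        # Wait (up to 10 s) for the node to print a line containing 'Publishing'.
        proc_output.assertWaitFor('Publishing', process=talker, timeout=10)


@launch_testing.post_shutdown_test()
class TestAfterShutdown(unittest.TestCase):
    """Post-shutdown tests: run once after all launched processes have exited."""

    def test_exit_codes(self, proc_info):
        # All processes should have exited cleanly.
        launch_testing.asserts.assertExitCodes(proc_info)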


How to register

Launch tests are registered in the CMakeLists.txt with the function add_launch_test from the package launch_testing_ament_cmake. However, as launch_testing is ROS-agnostic, tests registered this way are simply launched with the default domain ID of zero, resulting in crosstalk between tests that run in parallel.

This is where the package ament_cmake_ros comes in: it provides a special runner, run_test_isolated.py, that applies a unique ROS domain ID to each test. (Note that inter-machine crosstalk is still possible, as this runner only ensures uniqueness on the local machine. Also, the environment variable ROS_DOMAIN_ID must not be set. Here’s the domain coordinator code that picks the ID.)

Combining the two, with a new CMake function add_ros_isolated_launch_test, gives

find_package(ament_cmake_ros REQUIRED)
find_package(launch_testing_ament_cmake REQUIRED)
function(add_ros_isolated_launch_test path)
  set(RUNNER "${ament_cmake_ros_DIR}/run_test_isolated.py")
  add_launch_test("${path}" RUNNER "${RUNNER}" ${ARGN})
endfunction()
add_ros_isolated_launch_test(test/test_integration.py)

For completeness, here’s the full CMakeLists.txt.

What doesn’t work as well

The above presents just one approach to register integration tests. There are at least three more, though they do not work as well:

  • Use just add_launch_test: no ROS domain isolation, so parallel-running tests will suffer from crosstalk.
  • Use ament_add_ros_isolated_pytest_test, for example
    find_package(ament_cmake_ros REQUIRED)
    ament_add_ros_isolated_pytest_test(
      dummyintegration test/test_integration.py)
    

    (explained more in this Medium article). This does ensure isolation; however, because the unittest-based integration test gets wrapped in pytest, the whole file is considered a single test. While this doesn’t impact the operation, it does remove the granularity needed to discern which cases succeeded and which failed. The generated XUnit file isn’t any more informative either.

  • Use a setup.py instead of a CMakeLists.txt (and add the @pytest.mark.launch_test decorator to the function generate_test_description, as in the sketch after this list): While common for ROS Python packages, this approach unfortunately exhibits both of the above disadvantages. (If you do know how to get at least isolation working with a setup.py, please share!) On the plus side, you do benefit from automatic test discovery.
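
For completeness, a minimal sketch of how such a setup.py-based test file could start; the rest of the file (nodes under test, active and post-shutdown test classes) would stay the same as in the earlier sketch:

import launch
import launch_testing.actions
import pytest


@pytest.mark.launch_test
def generate_test_description():
    # The nodes under test would be added here, as in the CMake-registered variant.
    return launch.LaunchDescription([
        launch_testing.actions.ReadyToTest(),
    ])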

Running tests and report generation

Running tests

Running all tests is straightforward: simply run colcon test. This command suppresses the test output and exposes little about which tests succeed and which fail. While developing tests, it is therefore useful to print all test output as the tests are running:

colcon test --event-handlers console_direct+
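
While iterating on a single package, it’s also handy to restrict the run to that package, for example (assuming the package is called app, as in the output below):

colcon test --packages-select app --event-handlers console_direct+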

Viewing test results

For viewing the results, there’s a separate colcon verb. For example,

user@host$ colcon test-result --all          
build/app/Testing/20241013-0810/Test.xml: 3 tests, 0 errors, 1 failure, 0 skipped
build/app/test_results/app/test_test_integration.py.xunit.xml: 3 tests, 0 errors, 1 failure, 0 skipped
build/app/test_results/app/test_unit.xunit.xml: 2 tests, 0 errors, 0 failures, 0 skipped
build/app/test_results/app/test_unit_cpp.gtest.xml: 2 tests, 0 errors, 0 failures, 0 skipped

Summary: 10 tests, 0 errors, 2 failures, 0 skipped

lists four files: one ctest-formatted XML file (a result of the CMakeLists.txt) and, more interestingly, the three XUnit-formatted XML files (one for each test file). The latter three are suitable for automatic report generation in CI/CD pipelines. An excellent tool to visualize them all together is the NodeJS package Xunit Viewer. It converts the XUnit files to HTML (i.e. a website) or renders them straight in the terminal. For the latter, an example command:

xunit-viewer -r build/app/test_results -c

Ouch! Looks like one of our integration tests is failing 😅
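
For the HTML variant, e.g. to publish a report as a CI artifact, something along these lines should do (the output file name is arbitrary):

xunit-viewer -r build/app/test_results -o test_report.html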

Conclusion

Our deep dive into ROS 2 testing demonstrated the richness of its built-in support for unit and integration testing. We covered how to effectively implement tests, as well as how to analyse and visualize their results. gtest and pytest for unit testing in C++ and Python respectively, and launch_testing for integration testing, provide powerful ways to ensure the robustness of robotics software.

That said, this article merely skims the surface of testing in ROS 2, and especially of testing in robotics in general. Here, we focused on the implementation aspect; however, topics such as fault injection and performance testing also play a major role towards robust autonomous systems. As robotics continues to advance, rigorous integration testing remains a keystone in crafting dependable robotics software. Hence, it is crucial to implement thorough testing strategies and to keep exploring new horizons in testing for constant improvement.

Going further, be sure to go through the extensive README of the launch_testing package, which showcases a wide tool set for active and post-shutdown testing. The Autoware Foundation published a partially complementary article on integration testing, and its GitHub repo provides several examples. Finally, the Nav2 repository also provides a trove of testing examples, several involving the Gazebo simulator. From a development perspective, you’ll probably want to combine all this with containerization and a CI/CD pipeline.