Import apex rostest (#209)
* Initial commit

* First version of integration test framework based on roslaunch2

* Add launch arguments to apex_rostest

 - Arguments work like regular launch arguments: apex_rostest name_of_test.py arg:=foo
 - Added a message pump class that will spin some rclpy nodes on a background thread automatically

* Make sequential output checker public

* Use the print_arguments_of_launch_description function from ros2launch API

* Add arg support to apex_rostest_cmake

* Add more information to the junit XML

* Change apex_rostest to apex_launchtest

  - Rename apex_rostest to apex_launchtest
  - Remove rclpy deps from apex_launchtest
  - Remove launch_ros deps from apex_launchtest
  - Move ros and rclpy deps into a new package, apex_launchtest_ros
  - Add two examples to apex_launchtest that don't require ROS
  - Update tests that used the ROS nodes previously in apex_rostest to use non ROS processes
  - Update license from "all rights reserved" to apache

* Address PR comments

* Address Dejan's review comments

* Enable gitlab CI

  - Build and test apex_launchtest only
  - Build and test everything (apex_launchtest + apex_launchtest_ros)
  - Coverage report

* Speed up the 'coverage' job

* Address Hao's PR comments

  - Get rid of extra sh -c (left over from a copy/paste)
  - Build jobs only save 'build' and 'install' artifacts
  - test_all job saves 'build', 'install', 'log', and globs for .coverage files

* Give extra information when processes die before tests are done

* Fix coverage report

 - Use symlink-install; otherwise, coverage collected through subprocess calls ends up
   in a different place than coverage collected by calling functions directly from
   tests

* Have IO and Exit Code asserts use the same lookup

  - Gave all of these methods the same signature.  They can all take ExecuteProcess actions or strings
  - New util to handle the lookup of process actions in one place

* Address wjwwood's review comments

* Docs and examples (#10)

* Initial README.md doc and improved example

* Address review comments

* Describe assertions

* Describe arguments

* Address second round of review comments

* Update README.md

One more round of addressing comments

* Simple XML output check

* Add scheduled job to check package deps

  - Do an isolated build from source to catch issues with missing deps
  - Add a line (commented) that will fail the build if rclpy is a dep for apex_launchtest
  - Set up job to run on a schedule (schedule will need to be added to CI) because it's very long

* Remove rclpy dep

  - Remove ros2launch dep from apex_launchtest, which should also remove rclpy dep
  - Add test for the '--show-args' argument
  - Enable CI check for rclpy dep

* Test we're compatible with ros-crystal

* Fix FailResult not serializing to XML

 - Give FailResult the VIP treatment by giving it the extra properties that
   the TestResult class has.

* Add ability to pass objects from test_description to tests

* Pass test context as arguments to test cases

  - If we pass the test_context dictionary items as kwargs to the tests, we don't
    need to worry about them colliding with other fields on the unittest.TestCase.
    Someone could still name a TestContext item "self" but I feel like they deserve
    what they get if they do that.
  - Refactor the method that iterates all test cases so it can be used to give
    attributes to tests, as well as by the new bind_test_context_to_tests method

* Fix doc comments

* apex_launchtest bind test args to setup functions

* Try to figure out why test_isolated didn't notice we still depend on rclpy

* Remove ros2launch.api dep with more copy/paste code

* Make isolated build schedule-only again

* apex_launchtest_ros make MessagePump work with context

* apex_launchtest_cmake give option to increase timeout

* apex_launchtest handle processes with no output

  - This bug happened when all of the processes under test crashed out from under
    us and we tried to print extra info to help debug the test failure, but at least
    one of the processes had no output

* Add test for error process with no output

* Add coverage on 0% covered files

 - Use the 'source' option for coverage to pick up files not touched by any test.
   Currently that's message_pump and apex_launchtest_ros/__init__.py, because we're not
   running any of the ROS examples

* reroot ApexAI repository into subfolder

Signed-off-by: William Woodall <[email protected]>

* disable imported code

Signed-off-by: William Woodall <[email protected]>
wjwwood authored Mar 21, 2019
1 parent b4ecd25 commit 86f663d
Showing 75 changed files with 4,078 additions and 0 deletions.
12 changes: 12 additions & 0 deletions apex_rostest/.gitignore
@@ -0,0 +1,12 @@
*.swp

.eggs/
*.egg-info
*.pyc
*.pytest_cache
*.coverage
htmlcov/

build/
install/
log/
Empty file added apex_rostest/AMENT_IGNORE
Empty file.
129 changes: 129 additions & 0 deletions apex_rostest/README.md
@@ -0,0 +1,129 @@
# apex_launchtest
![build status](https://gitlab.com/ApexAI/apex_rostest/badges/master/build.svg) ![coverage](https://gitlab.com/ApexAI/apex_rostest/badges/master/coverage.svg)

This tool is a framework for ROS2 integration testing using the [ros2 style launch description](https://github.com/ros2/launch/blob/master/ros2launch/examples/example.launch.py).
It works similarly to rostest, but makes it easier to inspect the processes under test. For example

* The exit codes of all processes are available to the tests. Tests can check that all processes shut down normally, or with specific exit codes. Tests can fail when a process dies unexpectedly.
* The stdout and stderr of all processes are available to the tests.
* The command lines used to launch the processes are available to the tests.
* Some tests run concurrently with the launch and can interact with the running processes.

## Compatibility
Designed to work with [ros2 crystal](https://index.ros.org/doc/ros2/Installation/)

## Quick start example
Start with the apex_launchtest example [good_proc.test.py](apex_launchtest/examples/good_proc.test.py). Run the example by doing
>apex_launchtest apex_launchtest/examples/good_proc.test.py

apex_launchtest will launch the nodes found in the `generate_test_description` function, run the tests from the `TestGoodProcess` class, shut down the launched nodes, and then run the tests from the `TestNodeOutput` class.

#### The Launch Description
```python
def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])
```

The `generate_test_description` function should return a launch.LaunchDescription object that launches the system to be tested.
It should also call the `ready_fn` that is passed in to signal when the tests should start. In the good_proc.test.py example, there
is no need to delay the start of the tests, so the `ready_fn` is called concurrently with the launch of the process under test.

#### Active Tests
Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test decorator will be run concurrently
with the process under test. These tests are expected to interact with the running processes in some way.

#### Post-Shutdown Tests
Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test decorator will be run after the launched
processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well
as any data created as a side effect of running the processes.

#### Exit Codes and Standard Out
The apex_launchtest framework automatically adds some member fields to each test case so that the tests can access process output and exit codes:

* self.proc_info - a [ProcInfoHandler object](apex_launchtest/apex_launchtest/proc_info_handler.py)
* self.proc_output - an [IoHandler object](apex_launchtest/apex_launchtest/io_handler.py)

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can
use to wait for a process to exit or to wait for specific output.

## Assertions
The apex_launchtest framework automatically records all stdout from the launched processes as well as the exit codes from any processes
that are launched. This information is made available to the tests via the `proc_info` and `proc_output` objects. These objects can be used
with one of several assert methods to check the output or exit codes of the processes:

`apex_launchtest.asserts.assertInStdout(proc_output, msg, proc, cmd_args=None, *, strict_proc_matching=True)`

Asserts that a message 'msg' is found in the stdout of a particular process.
- msg: The text to look for in the process standard out
- proc: Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or
an empty string to search all processes
- cmd_args: When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name
- strict_proc_matching: When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes.
This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting

`apex_launchtest.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], proc=None, cmd_args=None, *, strict_proc_matching=True)`

Asserts that the specified processes exited with a particular exit code
- allowable_exit_codes: A list of allowable exit codes. By default EXIT_OK (0). Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139)
- The proc, cmd_args, and strict_proc_matching arguments behave the same way as assertInStdout. By default, assert on the exit codes of all processes

`apex_launchtest.asserts.assertSequentialStdout(proc_output, proc, cmd_args=None)`

Asserts that standard out was seen in a particular order
- Returns a context manager that will check that a series of assertions happen in order
- The proc and cmd_args arguments are the same as assertInStdout and assertExitCodes; however, it is not possible to match multiple processes
  because there is no way to determine the order of stdout that came from multiple processes.
Example:
```python
with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")
```

#### Waiting for Output or Exit Codes
Active tests can also call methods that wait for particular output, or for a particular process to exit or time out. These asserts are methods on the `proc_output` and `proc_info` objects.

`proc_output.assertWaitFor(msg, proc=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)`
- The msg, proc, cmd_args, and strict_proc_matching arguments work the same as the other assert methods. By default, this method waits on output from any process
- timeout: The amount of time to wait before raising an AssertionError

`proc_info.assertWaitForShutdown(proc, cmd_args=None, *, timeout=10)`
- The proc and cmd_args work the same as the other assertions, but it is not possible to wait on multiple processes to shut down
- timeout: The amount of time to wait before raising an AssertionError

## Arguments
apex_launchtest uses the same [syntax as ros2 launch](https://github.com/ros2/launch/pull/123) to pass arguments to tests.

Arguments are declared in the launch description and can be accessed by the tests via a test_args dictionary that's injected into the tests similar to `proc_info` and `proc_output`.

See the [apex_launchtest example with arguments](apex_launchtest/examples/args.test.py)
```
>apex_launchtest --show-args examples/args.test.py
>apex_launchtest examples/args.test.py dut_arg:=value
```

## Using CMake
To run apex_launchtest from a CMakeLists file, you'll need to declare a dependency on
apex_launchtest_cmake in your package.xml. Then, in the CMakeLists file, add

```
find_package(apex_launchtest_cmake)
add_apex_launchtest(test/name_of_test.test.py)
```
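The corresponding package.xml entry might look like this (a sketch; whether `<test_depend>` or another dependency tag fits best depends on how the package is organized):

```xml
<test_depend>apex_launchtest_cmake</test_depend>
```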

Arguments can be passed to the tests via the CMake function, too:
```
add_apex_launchtest(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)
```
4 changes: 4 additions & 0 deletions apex_rostest/apex_launchtest/.gitignore
@@ -0,0 +1,4 @@
*.egg-info
*.pyc
*.pytest_cache
htmlcov/
31 changes: 31 additions & 0 deletions apex_rostest/apex_launchtest/apex_launchtest/__init__.py
@@ -0,0 +1,31 @@
# Copyright 2019 Apex.AI, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from .decorator import post_shutdown_test
from .io_handler import ActiveIoHandler, IoHandler
from .proc_info_handler import ActiveProcInfoHandler, ProcInfoHandler
from .ready_aggregator import ReadyAggregator

__all__ = [
    # Functions
    'post_shutdown_test',

    # Classes
    'ActiveIoHandler',
    'ActiveProcInfoHandler',
    'IoHandler',
    'ProcInfoHandler',
    'ReadyAggregator',
]
136 changes: 136 additions & 0 deletions apex_rostest/apex_launchtest/apex_launchtest/apex_launchtest_main.py
@@ -0,0 +1,136 @@
# Copyright 2019 Apex.AI, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import logging
from importlib.machinery import SourceFileLoader
import os
import sys

from .apex_runner import ApexRunner
from .junitxml import unittestResultsToXml
from .print_arguments import print_arguments_of_launch_description

_logger_ = logging.getLogger(__name__)


def _load_python_file_as_module(python_file_path):
    """Load a given Python launch file (by path) as a Python module."""
    # Taken from apex_core to not introduce a weird dependency thing
    loader = SourceFileLoader('python_launch_file', python_file_path)
    return loader.load_module()


def apex_launchtest_main():

    logging.basicConfig()

    parser = argparse.ArgumentParser(
        description="Integration test framework for Apex AI"
    )

    parser.add_argument('test_file')

    parser.add_argument('-v', '--verbose',
                        action="store_true",
                        default=False,
                        help="Run with verbose output")

    parser.add_argument('-s', '--show-args', '--show-arguments',
                        action='store_true',
                        default=False,
                        help='Show arguments that may be given to the test file.')

    parser.add_argument(
        'launch_arguments',
        nargs='*',
        help="Arguments to the launch file; '<name>:=<value>' (for duplicates, last one wins)"
    )

    parser.add_argument(
        "--junit-xml",
        action="store",
        dest="xmlpath",
        default=None,
        help="write junit XML style report to specified path"
    )

    args = parser.parse_args()

    if args.verbose:
        _logger_.setLevel(logging.DEBUG)
        _logger_.debug("Running with verbose output")

    # Load the test file as a module and make sure it has the required
    # components to run it as an apex integration test
    _logger_.debug("Loading tests from file '{}'".format(args.test_file))
    if not os.path.isfile(args.test_file):
        # Note to future reader: parser.error also exits as a side effect
        parser.error("Test file '{}' does not exist".format(args.test_file))

    args.test_file = os.path.abspath(args.test_file)
    test_module = _load_python_file_as_module(args.test_file)

    _logger_.debug("Checking for generate_test_description")
    if not hasattr(test_module, 'generate_test_description'):
        parser.error(
            "Test file '{}' is missing generate_test_description function".format(args.test_file)
        )

    dut_test_description_func = test_module.generate_test_description
    _logger_.debug("Checking generate_test_description function signature")

    runner = ApexRunner(
        gen_launch_description_fn=dut_test_description_func,
        test_module=test_module,
        launch_file_arguments=args.launch_arguments,
        debug=args.verbose
    )

    _logger_.debug("Validating test configuration")
    try:
        runner.validate()
    except Exception as e:
        parser.error(e)

    if args.show_args:
        print_arguments_of_launch_description(
            launch_description=runner.get_launch_description()
        )
        sys.exit(0)

    _logger_.debug("Running integration test")
    try:
        result, postcheck_result = runner.run()
        _logger_.debug("Done running integration test")

        if args.xmlpath:
            xml_report = unittestResultsToXml(
                test_results={
                    "active_tests": result,
                    "after_shutdown_tests": postcheck_result
                }
            )
            xml_report.write(args.xmlpath, xml_declaration=True)

        if not result.wasSuccessful():
            sys.exit(1)

        if not postcheck_result.wasSuccessful():
            sys.exit(1)

    except Exception as e:
        import traceback
        traceback.print_exc()
        parser.error(e)
