Tudat Developer Documentation#

Welcome to the Tudat Developer Documentation. Browse the latest developer documentation, including tutorials, sample code, articles and external API references.

DevOps Primer#

Welcome to the DevOps Primer for the Tudat project! This guide is designed for developers and users who are new to DevOps and want to learn about the different tools and concepts involved in the development operations process.

This primer will cover the following topics:

  • Environment variables

  • Access tokens

  • Version control (e.g. Git)

  • Continuous integration and deployment

Each of these topics will be briefly introduced, with a focus on their definition, importance, and practical applications. At the end of each section, you will find links to detailed guides that cover each topic in more depth.

Environment Variables#

Environment variables are global system variables that are accessible by any process running on a system. They are used to store information such as configuration settings, file paths, and other types of data that need to be available to multiple processes.

In this primer, we will cover the basics of environment variables, including how to define and set them in different operating systems. You will learn about the different methods for setting environment variables locally and persistently, as well as how to set environment variables in Python.
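
As a quick illustration, here is a minimal sketch of reading and setting an environment variable from Python using only the standard library (the variable name and fallback path are purely illustrative):

import os

# Read an environment variable, with a fallback if it is not set
# (the variable name TUDAT_RESOURCE_PATH is purely illustrative)
resource_path = os.environ.get("TUDAT_RESOURCE_PATH", "/tmp/tudat-resources")

# Set an environment variable for the current process and its child processes;
# this does not persist once the process exits
os.environ["TUDAT_RESOURCE_PATH"] = resource_path

print(resource_path)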

For more information on environment variables, check out the Defining Environment Variables guide.

Access Tokens#

Access tokens are secure strings that are used to authenticate access to resources and services. They are often used in conjunction with API keys and other authentication mechanisms to ensure that only authorized users have access to sensitive data and systems.

In this primer, we will cover the basics of access tokens, including how to generate them for different services like Azure, GitHub, and Anaconda Cloud. You will learn about the different methods for setting environment variables for access tokens, as well as how to manage access tokens in a secure and efficient manner.
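
For example, a minimal sketch of reading an access token from an environment variable in Python, so that the token never appears in the code itself (the variable name GITHUB_TOKEN follows a common convention but is an assumption here, as is the header format):

import os

# Read the access token from an environment variable instead of hard-coding it
# (the variable name GITHUB_TOKEN is illustrative)
token = os.environ.get("GITHUB_TOKEN")
if token is None:
    raise RuntimeError("GITHUB_TOKEN is not set; define it before running this script.")

# The token can then be used, e.g., in an HTTP Authorization header
headers = {"Authorization": f"token {token}"}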

For more information on access tokens, check out the Managing Access Tokens guide.

Version Control#

Version control is a system that tracks changes to files and directories over time, so that you can easily revert to previous versions if necessary. One of the most popular version control systems is Git, which is used extensively in the Tudat project.

In this primer, we will cover the basics of version control, including an overview of Git and how it works. You will learn about the different steps involved in setting up a Git repository, making commits and pushing changes to a remote repository, and working with branches and pull requests.

For more information on version control, check out the Code Collaboration guide.

Continuous Integration and Deployment#

Continuous integration and deployment (CI/CD) is a software development practice that involves automatically building, testing, and deploying code changes to production systems. This allows developers to catch bugs and other issues early in the development process, and ensures that code changes are rolled out to production systems quickly and consistently.

In this primer, we will cover the basics of CI/CD, including an overview of how it works and how to set up a CI/CD pipeline. You will learn about the different steps involved in automated testing and deployment, and how to ensure that code changes are rolled out to production systems quickly and efficiently.

For more information on continuous integration and deployment, check out the Continuous Deployment guide.

Conclusion#

In this guide, we have covered the essential concepts and tools related to DevOps in the development process. We have explained the importance of environment variables, access tokens, version control, and continuous integration and deployment.

The use of environment variables and access tokens helps to ensure that sensitive information is stored securely, separate from the code, while version control helps manage changes to the codebase. Continuous integration and deployment streamlines the development process and reduces the risk of errors.

DevOps is a critical aspect of the software development process, and its importance cannot be overstated. By following best practices and utilizing the right tools, DevOps helps to ensure that software is delivered quickly, efficiently, and securely.

In conclusion, we recommend that developers and users new to DevOps familiarize themselves with the concepts and tools covered in this guide and take advantage of the detailed resources provided. By doing so, they will be well-equipped to make successful contributions to their projects and achieve their goals.

Software Documentation#

Sphinx Documentation#

sudo apt-get install  texmaker gummi texlive texlive-full texlive-latex-recommended latexdraw intltool-debian lacheck libgtksourceview2.0-0 libgtksourceview2.0-common lmodern luatex po-debconf tex-common texlive-binaries texlive-extra-utils texlive-latex-base texlive-latex-base-doc texlive-luatex texlive-xetex texlive-lang-cyrillic texlive-fonts-extra texlive-science texlive-latex-extra texlive-pstricks

Todo

  • Link checking is facilitated by sphinx using make linkcheck (on windows)

  • Add section on FontAwesome inline icons from sphinx-panels

  • https://fontawesome.com/

  • Add tutorial/ section on maintaining a bibliography in Sphinx.

Compile documentation with Sphinx#

This example is a step-by-step guide on how to compile the tudat documentation locally on your system using Sphinx. This procedure works for compiling both the tudat-space documentation and the documentation you are currently reading.

Note

This procedure requires that Anaconda or Miniconda is installed. For information regarding the use of the conda ecosystem, please see Getting Started with Conda.

  1. Create an environment that will satisfy all dependencies required for building the documentation, then activate it. This can be done by downloading this environment.yaml file, which will install the tudat-docs conda environment.

conda env create -f environment.yaml && conda activate tudat-docs
  2. Enter the root directory of a repository containing a docs directory, which contains a source subdirectory. The following command is specific to cloning and entering the tudat-space repository.

git clone https://github.com/tudat-team/tudat-space.git && cd tudat-space
  3. Build the documentation using the sphinx-build command, specifying that html is to be built with the supplied source and output build directory.

sphinx-build -b html docs/source docs/build
  4. View the local build of the documentation by opening docs/build/index.html with your preferred browser.

Tip

[PyCharm/CLion] You can do this by right-clicking index.html in the Project tree and selecting Open with Browser.

Compiling Documentation in PyCharm#

If you are using PyCharm, the compilation of the documentation after each edit can be simplified by setting up a run configuration tailored for sphinx. The procedure is described below.

  1. From the main toolbar, click on Run > Edit Configurations;

  2. In the window that has just opened, click on the + button (upper-left) to add a new configuration;

  3. From the drop-down menu, select Python docs > Sphinx task;

_images/sphinx_config_pycharm_step1.png
  4. Give a name to the new run configuration;

  5. Make sure that the field Command is set to html;

  6. For the input and output fields, select the source and build folders, respectively.

_images/sphinx_config_pycharm_step2.png

Make sure that the correct run configuration is selected. If so, pressing Run will be equivalent to executing the following command from the command line:

sphinx-build -b html docs/source docs/build

Troubleshooting#

In this section, we collect the most recurring issues that can occur while using Sphinx, hoping that it will save precious time for future Tudat contributors.

No changes shown in browser#

It often happens that the browser shows cached data instead of the updated html files. As a result, if you don't see your changes, try clearing your browser's cache (see, e.g., this guide).

No changes shown in online docs#

It can happen that, after pushing your changes to the origin repository, no changes are shown on the actual website (e.g., on tudat-space or on this website). Here are some suggestions to identify the problem:

  1. Check that you pushed to the main branch. The documentation is built by readthedocs only if changes are pushed to that branch.

  2. Check that the build was successful. This can be monitored via the “Builds” link in the readthedocs menu (see screenshot above). If the build was not successful, you can click on it and see the output of the build. This can be helpful to identify where things are going wrong.

_images/build_output.png
Sphinx commands not working#

If a sphinx command does not work, for instance the following:

.. toctree::
   intro
   guide

the cause can be one of many things, but before going into full debugging mode, check that the indentation before intro and guide is exactly three spaces. Sphinx requires three spaces, whereas the tab key inserts four: using tabs in Sphinx directives adds an extra space that breaks the command and is very difficult to notice. To be clear, the following will likely not work:

.. toctree::
    intro
    guide

Release an online version#

Every time you make a modification to the documentation, you are required to:

  1. branch out from develop to a feature/FEATURE_NAME branch (see Code Collaboration)

  2. make the necessary modifications (see Sphinx Documentation)

  3. test the build locally (see Sphinx Documentation)

  4. update the CHANGELOG.md

  5. open a Pull Request into develop (see Code Collaboration)

  6. issue an unstable version of the documentation (see Release versioning)

The reviewer is required to:

  1. review the pull request by testing it locally

  2. if needed, ask the developer for modifications

  3. merge into develop, push and check the result online (latest version)

  4. release a stable version with bumpversion

  5. merge develop into master to deploy a stable version of the docs

To host our online documentation, like the one you are reading, we use readthedocs.

Deploying a version with readthedocs#

See also

In this guide, we assume that the reader is familiar with how to release new versions of the documentation locally through bumpversion (see Releasing a new version with bumpversion).

Readthedocs uses git tags to build different versions of the documentation, with two additional versions:

  • latest (corresponding to the latest commit on develop)

  • stable (corresponding to the most recent version released on master)

Note

The landing pages for both tudat-space and the developer docs point to the stable version. It is still possible to switch to latest through the readthedocs panel (bottom left of the page, as shown below).

_images/readthedocs_menu.png

Once commits are pushed to the develop branch on origin (or a new version tag is pushed to main), the documentation is built automatically by readthedocs. If changes are pushed to other branches, no documentation is built.

Stable vs. unstable versions#

Depending on whether the release is stable or unstable, different things happen:

  • if the release is stable (e.g., v0.1.2), the resulting documentation is published on the website and a new version becomes visible in the readthedocs menu

  • if the release is unstable (e.g., v0.1.2dev0), the resulting documentation is neither built nor published on the website

Activating unstable versions#

Unpublished versions, such as unstable versions or versions from other branches, can still be activated by authorized users (i.e., readthedocs maintainers) to be viewed online and shared with others through a link. This can be done by clicking on the readthedocs menu, selecting “Builds”, then “Versions”, and activating the desired build. Mark it as hidden to avoid it being listed on the website and searchable by users.

_images/builds.png

Clicking on the right build allows you to see it in the browser and copy the related link to share it with collaborators. This is particularly useful for sharing drafts of the output documentation without modifying stable versions.

See also

Read more on how readthedocs deals with versions.

How different versions are used in tudat#

This is how we envisage different versions of the online docs:

  • the stable documentation with proper versioning is the official documentation and can be linked to different software versions

  • the latest documentation is useful to deploy documentation quickly and, if needed, also use it for giving/receiving feedback

  • the inactive documentation (corresponding to unstable versions or other branches) can be used for giving/receiving feedback, but it has to be activated and hidden by the readthedocs maintainers

Troubleshooting#

In this section, we collect the most recurring issues that can occur while using readthedocs, hoping that it will save precious time for future Tudat contributors.

No changes shown in online docs#

It can happen that, after pushing your changes to the origin repository, no changes are shown on the actual website (e.g., on tudat-space or on this website). Here are some suggestions to identify the problem:

  1. Check that you pushed to the main branch. The documentation is built by readthedocs only if changes are pushed to that branch.

  2. Check that the build was successful. This can be monitored via the “Builds” link in the readthedocs menu (see screenshot above). If the build was not successful, you can click on it and see the output of the build. This can be helpful to identify where things are going wrong.

_images/build_output.png

Multidoc#

Multidoc is a tool aimed at improving the maintainability and consistency of docstrings in software that is available across multiple programming languages with fixed, language-equivalent APIs.

Nomenclature

  • Application Programming Interface (API): An interface that defines interactions between multiple software applications or mixed hardware-software intermediaries.

  • YAML: (recursive acronym for “YAML Ain’t Markup Language”) A human-readable data-serialization language.

  • Jinja2: Jinja is a modern and designer-friendly templating language for Python. It is fast, widely used and secure.

Functions#

Use the numpydoc sections listed at https://numpydoc.readthedocs.io/en/latest/format.html#sections.

Classes#

Use the same sections as outlined above (all except Returns are applicable). The constructor (__init__) should also be documented here; the Parameters section of the docstring details the constructor's parameters.
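
For illustration, a class documented in the numpydoc style could look as follows (a sketch re-using the IntegratorSettings example from the YAML files shown later on this page; the attribute and parameter names are illustrative):

class IntegratorSettings:
    """Functional base class to define settings for integrators.

    Parameters
    ----------
    initial_time : float
        Start time (independent variable) of the numerical integration.
    initial_time_step : float
        Initial and constant value for the time step.

    Attributes
    ----------
    initial_time : float
        Initial time of the integration.

    See Also
    --------
    euler : Factory function creating fixed-step Euler integrator settings.
    """

    def __init__(self, initial_time, initial_time_step):
        self.initial_time = initial_time
        self.initial_time_step = initial_time_step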

Constants#

1. summary
2. extended summary (optional)
3. see also (optional)
4. references (optional)
5. examples (optional)

Modules#

1. summary
2. extended summary
3. routine listings
4. see also
5. notes
6. references
7. examples

How to write docstrings#

In this guide, we will explain how to write docstrings for tudat and tudatpy. We will also include a template for documenting enums, classes and (factory) functions.

Note

Before diving into this guide, the user should be familiar with the page about Exposing C++ to Python.

YAML files#

The sources of the docstrings are located in YAML files in the docstring directory on Github. The content is divided over a file tree structure that mimics the structure of the tudatpy exposure (see this directory on Github), which is the same structure as the tudatpy modules. Each file bundles the content of a module exposure function (e.g., Ephemeris, Gravity Field, Rotation, etc.). Within each YAML file, all module classes are listed under a single “classes” key, while functions are listed under a single “functions” key.

Note

For tudatpy-native classes and functions (i.e., not exposed from C++ code but directly coded in Python), the docstrings can be written directly in the Python source files. Only the name of the class and method needs to be included in the yaml file. An example of this can be found here for the docstring and here for the mention in the yaml file.

API Structure Definition#

The generic structure definition of a Python API reference system is provided below:

definition/
├── __api__.yaml
├── module1.yaml
├── module2
│   ├── __module__.yaml
│   └── submodule1.yaml
└── module3
    ├── __module__.yaml
    └── submodule2
        └── subsubmodule.yaml

The building blocks can be broken down into the following elements:

  • __api__.[yml/yaml]: API configuration file. Must exist in the API structure prefix.

  • module.[yml/yaml]: Module configuration file. Module definition as a file implicitly infers no submodules.

  • /module: Module configuration directory. Must contain __module__.[yml/yaml].

  • submodule.[yml/yaml]: Submodule configuration file. Equivalent to a module configuration file.

yaml files#

YAML files contain the sources of the docstrings and are organized as key-value pairs. An example of a typical YAML file is provided below.

Warning

The example is taken from this docstring file, but it has been heavily adapted to make it shorter, so it does not contain meaningful information.

Example

extended_summary: |
   This module provides the functionality for creating integrator settings.

enums:
   - name: AvailableIntegrators
     short_summary: "Enumeration of available integrators."
     extended_summary: |
       Enumeration of integrators supported by tudat.
     members:
       - name: euler # [cpp]
       - name: rungeKutta4 # [cpp]
       - name: euler_type # [py]
       - name: runge_kutta_4_type # [py]

classes:
   - name: IntegratorSettings
     short_summary: "Functional base class to define settings for integrators."
     extended_summary: |
       Class to define settings for numerical integrators, for instance for use in numerical integration of equations of motion/
       variational equations. This class can be used for simple integrators such as fixed step RK and Euler. Integrators that
       require more settings to define have their own derived class.
     methods:
       - name: ctor # [cpp]
         short_summary: "Constructor." # [cpp]
         extended_summary: "Instances of this class are typically not generated by the user because this is a base class." # [cpp]

     attributes:
       - name: initial_time # [py]
         type: float # [py]
         description: Initial time of the integration. # [py]

functions:
   # Euler
   - name: eulerSettings # [cpp]
   - name: euler # [py]
     short_summary: "Creates the settings for the Euler integrator."
     extended_summary: |
       Factory function to create settings for the Euler integrator. For this integrator, the step size is kept
       constant.
     parameters:
       - name: initialTime # [cpp]
         type: double # [cpp]
       - name: initial_time # [py]
         type: float # [py]
         description: Start time (independent variable) of numerical integration.

       - name: initialTimeStep # [cpp]
         type: double # [cpp]
       - name: initial_time_step # [py]
         type: float # [py]
         description: Initial and constant value for the time step.

     returns:
         type: IntegratorSettings
         description: Integrator settings object.

As the example shows, the following keys are accepted:

  • extended_summary (for the module)

  • enums

  • classes

  • functions

Each of those sections (except for extended_summary) accepts a number of items. Each item should start with:

- name: "..."

where the dots are replaced by the name of the enum, class, or function.

Note

  • Key-value entries in YAML files require a leading dash only if they are part of a list.

  • A string can be provided in YAML files through quotation marks or with the | symbol, which starts a literal block and preserves line breaks.

Each item also has different fields. We adopted the numpydoc documentation style. As a result, in our API reference each function or class can accept all the fields specified by numpydoc (see here for an extensive list).

Warning

For enums, as they are not Python native objects, an additional members field is made available.

tudat vs. tudatpy#

Tudat and tudatpy API documentations are generated from the same yaml files.

Tudat-exclusive content is marked by the # [cpp] tag, while tudatpy-exclusive content is marked by # [py].

Note

Untagged content will be included in both API documentations.

Typically, the two APIs convey the same content. That means that the same functions, parameters, returns, etc. are listed in both APIs, with names and types adapted to the respective API ([cpp] or [py]). Most class or function summaries are the same (word-for-word) for the two APIs.

Documentation style#

The text in the docstring will be parsed and rendered by Sphinx. Therefore, any sphinx command can be used in the yaml files.

Warning

There should be a balance between the readability of the raw docstrings and the intended aesthetic effects provided by Sphinx. Even though most users will consult the online API reference, the same docstrings will also be shipped with the tudatpy conda package, so they can be consulted locally. Docstrings with many Sphinx commands will be difficult to read and interpret.

Below, a few important aspects of the documentation style are outlined.

Factory functions#

See also

All examples in this subsection have been inspired by (but do not correspond exactly to) this file.

Factory functions (FFs) are functions creating instances of objects via the class constructors, and they are intended to be the user's interface with the actual class constructors, such that users typically do not interact with the classes as such. FFs will be used throughout all user guides, examples and tutorials. They will be the user's landing pad in the API. It is therefore the intention to supply all functionality-related information in the docstrings of the FFs. This may include (but is not limited to) complete explanations of function parameters, information about the models (that will be created by the classes), model implementation and links to external resources.

Example

functions:
    # Factory function instantiating an object of type CentralGravityFieldSettings (see next example)
  - name: central # [py]
  - name: centralGravitySettings # [cpp]
    short_summary: "Factory function for central gravity field settings object."
    extended_summary: |
      Factory function for settings object, defining a point-mass gravity field model with user-defined gravitational parameter.
    parameters:
      - name: gravitational_parameter # [py]
        type: float # [py]
      - name: gravitationalParameter # [cpp]
        type: double # [cpp]
        description: Gravitational parameter defining the point-mass gravity field.
    returns:
        type: CentralGravityFieldSettings
        description: Instance of the :class:`~tudatpy.numerical_simulation.environment_setup.gravity_field.GravityFieldSettings` derived :class:`~tudatpy.numerical_simulation.environment_setup.gravity_field.CentralGravityFieldSettings` class
(derived) classes#

Classes, on the other hand, are documented in a more minimalistic manner, focused more on code design and hierarchy and less on the functional aspects. Constructors of classes that have FFs implemented will not be documented with parameters and returns keys, since users are discouraged from using the constructor method directly. The short_summary of the constructor method will be given by the string "Constructor". The extended_summary of the constructor method will refer the user to the respective FF for creating instances of the given class.

Example

classes:
  # Derived class from GravityFieldSettings (see next example)
  - name: CentralGravityFieldSettings
    short_summary: "`GravityFieldSettings` derived class defining settings of point mass gravity field."
    extended_summary: |
      Derived class of `GravityFieldSettings` for central gravity fields, which are defined by a single gravitational parameter.

    methods: # [cpp]
        # Class constructor
      - name: ctor # [cpp]
        short_summary: "Constructor." # [cpp]
        extended_summary: "Instances of the `CentralGravityFieldSettings` class should be created through the `centralGravitySettings` factory function." # [cpp]
        # Class getter method
      - name: getGravitationalParameter # [cpp]
        short_summary: "Retrieve gravitational parameter." # [cpp]
        extended_summary: "Function to retrieve gravitational parameter of the settings object." # [cpp]
        parameters: # [cpp]
          - name: None # [cpp]
        returns: # [cpp]
            type: double # [cpp]
            description: Gravitational parameter of central gravity field. # [cpp]
Base classes#

Base classes are to be identified as such (in the short_summary). Typically, users do not create instances of the base classes (but of the derived classes, through the dedicated FFs) and this shall also be mentioned in the extended_summary.

Example

classes:
    # Base class
  - name: GravityFieldSettings
    short_summary: "Base class for providing settings for automatic gravity field model creation."
    extended_summary: |
      This class is a functional base class for settings of gravity field models that require no information in addition to their type.
      Gravity field model classes requiring additional information must be created using an object derived from this class.

    properties: # [py]
      - name: gravity_field_type # [py]
        type: GravityFieldType # [py]
        description: Type of gravity field model that is to be created. # [py]
        readonly: True # [py]

    methods:
      - name: __init__ # [py]
      - name: ctor # [cpp]
        short_summary: "Constructor." # [cpp]
        extended_summary: "Instances of this class are typically not generated by the user. Settings objects for gravity field models should be instantiated through the factory functions of a derived class." # [cpp]
Python properties vs. C++ getters/setters#

An exception to the analogous structure of the two APIs is the treatment of class attributes.

The original get/set methods of the tudat classes are exposed as “properties” in tudatpy classes (see our guide about Class attributes in C++ vs. in Python).

As a result, class attributes are only documented as such for the tudatpy API, while the get/set methods of the classes are documented in the tudat API instead.

Example

classes:
    # Derived class
  - name: CentralGravityFieldSettings
    short_summary: "`GravityFieldSettings` derived class defining settings of point mass gravity field."
    extended_summary: |
      Derived class of `GravityFieldSettings` for central gravity fields, which are defined by a single gravitational parameter.

    # Properties (only for Python)
    properties: # [py]
      - name: gravitational_parameter # [py]
        type: float # [py]
        description: Gravitational parameter of central gravity field. # [py]

    methods: # [cpp]
      - name: ctor # [cpp]
        short_summary: "Constructor." # [cpp]
        extended_summary: "Instances of the `CentralGravityFieldSettings` class should be created through the `centralGravitySettings` factory function." # [cpp]

        # Getter (only for C++)
      - name: getGravitationalParameter # [cpp]
        short_summary: "Retrieve gravitational parameter." # [cpp]
        extended_summary: "Function to retrieve gravitational parameter of the settings object." # [cpp]
        parameters: # [cpp]
          - name: None # [cpp]
        returns: # [cpp]
            type: double # [cpp]
            description: Gravitational parameter of central gravity field. # [cpp]

        # Setter (only for C++)
      - name: resetGravitationalParameter # [cpp]
        short_summary: "Reset gravitational parameter." # [cpp]
        extended_summary: "Function to reset gravitational parameter of the settings object." # [cpp]
        parameters: # [cpp]
          - name: gravitationalParameter # [cpp]
            type: double # [cpp]
            description: Gravitational parameter of central gravity field that is to be defined by the settings object. # [cpp]

Docstring template#

As an additional resource, we have assembled a template to kickstart the writing process of docstrings. It can be found in YAML templates.

Software Development#

Build System#

CMake#

Developer Environment#

The tudat-bundle build configuration allows developers to work on tudat and tudatpy simultaneously, for a smoother end-to-end development workflow.

Note

This topic is relevant for:

  • developers who want to expose their updated tudat code to the upcoming tudatpy package release.

  • users who would like to extend tudatpy functionality locally via modification of the C++-based tudat source code.

  • anybody interested in seeing a concurrent C++ / Python development workflow.

Learning Objectives

  1. Get your own tudat-bundle environment from the tudat-team.

  2. Understand the structure of the tudat-bundle and the purpose of its components.

  3. Familiarize with the mapping between tudat and tudatpy source code.

  4. Understand the higher level functions of the tudat-api.

  5. Familiarize with the available build configurations for tudat and tudatpy.

  6. Know how to build the tudat-bundle and recognize some common problems that can be encountered.

Cloning tudat-bundle#

The tudat-bundle environment is available on the tudat-team GitHub repository.

Note

Detailed instructions for the download, setup and verification of your own tudat-bundle can be found in the repository’s README (steps 1-4).

Warning

If your machine is running on an Apple M1 processor, you may have to follow a slightly different procedure. Please refer to this discussion.

Introduction to tudat-bundle#

The tudat-bundle consists of three subdirectories:

  • tudat, containing the tudat C++ source code.

  • tudatpy, containing the tudatpy/kernel directory in which the exposure of C++ source code to the tudatpy package is facilitated.

  • <build>, the build directory containing the compiled C++ tudat code (<build>/tudat), as well as the compiled tudatpy package at <build>/tudatpy/tudatpy/kernel.so.

The entirety of exposed C++ functionality in tudatpy is contained within the tudatpy/kernel source directory.

For reference during this guide, the architecture of this directory is as follows:

Note

This module / submodule tree structure always aspires to mimic the structure of the tudat/src directory.

schematic tudatpy/kernel directory#
kernel
├── expose_<module_A>.cpp
├── expose_<module_A>.h
├── expose_<module_A>
│   ├── expose_<submodule_A1>.cpp
│   ├── expose_<submodule_A1>.h
│   ├── expose_<submodule_A2>.cpp
│   ├── expose_<submodule_A2>.h
│   └── ...
├── expose_<module_B>.cpp
├── expose_<module_B>.h
├── ...
└── kernel.cpp

Note

The terms Package/Module/Submodule are intended to be hierarchical descriptions, used mostly in the context of directory structure. In the Python interpreter, everything is treated as a module object.

The tudatpy Package#

The tudatpy package is a collection of modules, in which the C++-based tudat source code is exposed into Python bindings.

Note

The interfaces between the C++-based tudat source code and the Python-based tudatpy modules are managed by the Pybind11 library. The rules for defining C++ to Python interfaces using Pybind11 will be presented in detail under Exposing C++ to Python.

In kernel.cpp (see schematic tudatpy/kernel directory), tudatpy modules are bundled into the tudatpy package. The folded code below shows the core elements of kernel.cpp. It would serve the reader to glance through it before we walk through the elements in detail.

tudatpy/kernel/kernel.cpp#
// expose tudat versioning
#include <tudat/config.hpp>

// include all exposition headers
#include "expose_simulation.h"
// other submodule headers...

// standard pybind11 usage
#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_MODULE(kernel, m) {

    // Disable automatic function signatures in the docs.
    // NOTE: the 'options' object needs to stay alive
    // throughout the whole definition of the module.
    py::options options;
    options.disable_function_signatures();
    options.enable_user_defined_docstrings();

    // export the tudat version.
    m.attr("_tudat_version_major") = TUDAT_VERSION_MAJOR;
    m.attr("_tudat_version_minor") = TUDAT_VERSION_MINOR;
    m.attr("_tudat_version_patch") = TUDAT_VERSION_PATCH;

    // simulation module definition
    auto simulation = m.def_submodule("simulation");
    tudatpy::expose_simulation(simulation);

    // other submodule definitions...

    // versioning of kernel module
    #ifdef VERSION_INFO
      m.attr("__version__") = VERSION_INFO;
    #else
      m.attr("__version__") = "dev";
    #endif
}

Starting with the end in mind, compiling the code above will create a shared library named kernel.so, making available all modules included in kernel.cpp. With the kernel.so library added to the Python path variable, users can then import tudatpy modules, such as the astro module, by executing from kernel import astro.
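
As a brief sketch of what this looks like in practice (the path below is illustrative and depends on your local <build> directory):

import sys

# Make the directory containing kernel.so importable
# (the path is illustrative; adapt it to your local <build> directory)
sys.path.insert(0, "<build>/tudatpy/tudatpy")

# Import an exposed module from the compiled kernel
from kernel import astro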

Warning

The Python interpreter searches the entries of sys.path in order. Inspect the sys.path list to verify that the desired variant of a module is imported.

All tudatpy modules included in the kernel namespace have previously been defined in their respective expose_<module_A>.cpp (and expose_<module_A>.h) files.

Module Definition#

Note

A tudatpy module can be thought of as a collection of tudat source code that has been exposed to Python.

Modules are defined by their respective exposition functions expose_<module_X>( ). These exposition functions fulfill one of two (or sometimes both) tasks:

  1. directly expose tudat source code in the module namespace (see <module_B> in schematic tudatpy/kernel directory)

  2. include selected submodules, where tudat source code has been exposed in nested namespaces (see <module_A> in schematic tudatpy/kernel directory)

1. Source Code Exposition in Module Namespace#

Exposition functions may directly expose tudat source code content (module classes, functions and attributes) from the respective tudat namespace to the tudatpy module namespace. In this case, the C++ to python interfaces are defined directly in the tudatpy module namespace. One example of this usage is the tudatpy constants module. Consider below the definition of the module constants:

tudatpy/kernel/expose_constants.cpp#
// include .h
#include "expose_constants.h"

// include .h of considered source content
#include "tudatpy/docstrings.h"
#include "tudat/constants.h"
#include "tudat/astro/basic_astro/timeConversions.h"

// pybind11 usage
#include <pybind11/complex.h>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// aliasing namespaces of considered source content
namespace tbc = tudat::celestial_body_constants;
namespace tpc = tudat::physical_constants;
// ...

// namespace package level
namespace tudatpy {
// namespace module level
namespace constants {

// module definition function
void expose_constants(py::module &m) {

  // tudat source code (C++) to tudatpy (python) interfaces defined in module namespace:

  // docstrings (no source code interface here)
  m.attr("__doc__") = tudatpy::get_docstring("constants").c_str();

  // celestialBodyConstants.h
  m.attr("EARTH_EQUATORIAL_RADIUS") = tbc::EARTH_EQUATORIAL_RADIUS;
  m.attr("EARTH_FLATTENING_FACTOR") = tbc::EARTH_FLATTENING_FACTOR;
  m.attr("EARTH_GEODESY_NORMALIZED_J2") = tbc::EARTH_GEODESY_NORMALIZED_J2;
  // ...

  // physicalConstants.h
  m.attr("SEA_LEVEL_GRAVITATIONAL_ACCELERATION") = tpc::SEA_LEVEL_GRAVITATIONAL_ACCELERATION;
  m.attr("JULIAN_DAY") = tpc::JULIAN_DAY;
  m.attr("JULIAN_DAY_LONG") = tpc::JULIAN_DAY_LONG;
  // ...

  // ...

};

}// namespace module level
}// namespace package level

The procedure can be summarized in three easy steps:

  1. make available tudat source code and pybind11 functionality

  2. define module definition function expose_constants( ) in module namespace

  3. define C++ to python interfaces using the pybind syntax

Note

In the case of the constants module, the exposed source code content is limited to attributes.

2. Source Code Exposition in Nested Namespace#

For large tudatpy modules, the exposition of the tudat source code is divided over submodules. In this case, the C++ to python interfaces are defined in the submodule namespace or even lower-level nested namespaces. One example of this usage is the tudatpy astro module, which includes exposed tudat source code from submodules such as fundamentals, ephemerides and more. Consider below the definition of the module astro:

tudatpy/kernel/expose_astro.cpp#
// include .h
#include "expose_astro.h"

// include .h of selected submodule definition
#include "expose_astro/expose_fundamentals.h"
#include "expose_astro/expose_ephemerides.h"
// ...

// pybind11 usage
#include <pybind11/pybind11.h>
namespace py = pybind11;

// namespace package level
namespace tudatpy {
// namespace module level
namespace astro {

// module definition function
void expose_astro(py::module &m) {

  // include selected submodules (source code exposition in nested namespaces 'fundamentals', 'ephemerides', etc):

  // expose_fundamentals.h
  auto fundamentals = m.def_submodule("fundamentals");
  expose_fundamentals(fundamentals);

  // expose_ephemerides.h
  auto ephemerides = m.def_submodule("ephemerides");
  expose_ephemerides(ephemerides);

  // ...

};

} // namespace module level
} // namespace package level

The procedure is largely analogous to that of Source Code Exposition in Module Namespace:

  1. make available tudat source code and pybind11 functionality

  2. define module definition function expose_astro( ) in module namespace

  3. include selected submodules fundamentals & ephemerides via pybind's module.def_submodule( ) function

Since the tudatpy submodules fundamentals & ephemerides define the C++ to python interfaces, the definition of these submodules follows the exact same structure as in case 1 (Source Code Exposition in Module Namespace). For the sake of completeness the definition of the ephemerides submodule is presented below:

tudatpy/kernel/expose_astro/expose_ephemerides.cpp#
// include .h
#include "expose_ephemerides.h"

// include .h of considered source content
#include <tudat/astro/ephemerides.h>
#include <tudat/simulation/simulation.h> // TODO: EphemerisType should be in <tudat/astro/ephemerides.h>

// pybind11 usage
#include <pybind11/eigen.h>
#include <pybind11/functional.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// aliasing namespaces of considered source content
namespace te = tudat::ephemerides;
namespace tss = tudat::simulation_setup;

// namespace package level
namespace tudatpy {
// namespace submodule level
namespace ephemerides {

void expose_ephemerides(py::module &m) {

  // tudat source code (C++) to tudatpy (python) interfaces defined in submodule namespace:

  py::class_<te::Ephemeris, std::shared_ptr<te::Ephemeris>>(m, "Ephemeris")
      .def("get_cartesian_state", &te::Ephemeris::getCartesianState, py::arg("seconds_since_epoch") = 0.0)
      .def("get_cartesian_position", &te::Ephemeris::getCartesianPosition, py::arg("seconds_since_epoch") = 0.0)
      .def("get_cartesian_velocity", &te::Ephemeris::getCartesianVelocity, py::arg("seconds_since_epoch") = 0.0);

  py::enum_<tss::EphemerisType>(m.attr("Ephemeris"), "EphemerisType")
      .value("approximate_planet_positions", tss::approximate_planet_positions)
      .value("direct_spice_ephemeris", tss::direct_spice_ephemeris)
      // ...

  py::class_<te::RotationalEphemeris,
             std::shared_ptr<te::RotationalEphemeris>>
      RotationalEphemeris_(m, "RotationalEphemeris");

  // ...

};

} // namespace submodule level
} // namespace package level

In principle, it is possible for the ephemerides submodule to delegate the C++ to python interfaces to even lower-level namespaces. In this case, the ephemerides submodule definition (and any lower levels that delegate the interfaces) would follow the logic of case 2 (Source Code Exposition in Nested Namespace), while at the lowest level of this module / submodule tree the definition would again follow the logic of case 1 (Source Code Exposition in Module Namespace).

The tudat(py) API in tudat-bundle#

Warning

WIP - show how to use docstrings in tudat-bundle to contribute to tudat(py)-api

Build Configurations#

The tudat source code can be built using various build configurations. These configurations are listed in tudat-bundle/CMakeLists.txt (l. 43 ff.). The user can select the build options by using the ON/OFF keywords. See below a section of the CMakeLists file, which gives an example of an enabled test-suite build option and a disabled boost build option:

tudat-bundle/CMakeLists.txt#
# ...

# +============================================================================
# BUILD OPTIONS
#  Offer the user the choice of defining the build variation.
# +============================================================================

# Build option: enable the test suite.
option(TUDAT_BUILD_TESTS "Build the test suite." ON)

option(TUDAT_DOWNLOAD_AND_BUILD_BOOST "Downloads and builds boost" OFF)

# more Build options:
# ...

# ...

Warning

Options that toggle the use of SOFA and SPICE can break the build of tudatpy.

Note

For more information on the workings of CMake as a build system, please refer to Build System.

Building the Project and Known Issues#

For most users, the project build is straightforward and described in the README (steps 5 ff.).

Warning

If your machine is running on an Apple M1 processor, you may have to follow a slightly different procedure. Please refer to this discussion. You may also encounter issues with tudat-test, which can be resolved as described here.

Exposing C++ to Python#

This section contains fundamental concepts about pybind11, a library to expose C++ to Python, and more specific indications for users who want to expose tudat functionalities to tudatpy.

Note

In this context, the terms expose and bind (and derived words) will be treated as synonyms.

The reader should be familiar with the content of the Developer Environment page before moving on to the remainder of this guide.

Learning Objectives

  1. Be able to expose a simple function from C++ to Python.

  2. Be able to expose overloaded functions.

  3. Be able to expose classes, including overloaded constructors.

  4. Understand the different access policies on attributes and methods.

  5. Understand the type conversions required and introduced by specific pybind headers.

The contents of this guide are shown below:

Pybind11#

pybind11 is an open-source library that exposes C++ types in Python. Through this software, the user interfaces of tudat, written in C++, can be made available in tudatpy.

pybind11 has an extensive and well-written documentation, accessible through the link reported above, which the reader can refer to at any time. The main goal of this page is to help the reader gain familiarity with the nomenclature and the functionalities offered by pybind11 that are specifically useful for exposing tudat code to Python. pybind11 features that are not directly applicable to tudat will not be presented.

Note

The hierarchical structure of the binding code is explained in this section. It is noted that the actual compilation of the binding code is achieved by compiling the kernel.cpp file; however, all the pybind functionalities explained below are employed in the respective submodules.

Headers and preliminaries#

To write a C++ exposition file, the following header is needed:

#include <pybind11/pybind11.h>

However, additional headers may be needed, such as:

#include <pybind11/stl.h>  // to enable conversions from/to C++ standard library types

#include <pybind11/eigen.h>  // to enable conversions from/to Eigen library types

#include <pybind11/numpy.h>  // to enable conversions from/to Numpy library types

In addition, it is assumed that the following piece of code is present in each code snippet shown in this page:

namespace py = pybind11;

Exposing a function#

In this section, the procedure to expose a simple function through pybind11 will be explained. We will make use of an example taken from tudat.

Suppose that we want to expose to Python the following tudat function (taken from this file):

inline std::shared_ptr< SingleDependentVariableSaveSettings > machNumberDependentVariable(
     const std::string& associatedBody,
     const std::string& bodyWithAtmosphere )
{
 return std::make_shared< SingleDependentVariableSaveSettings >(
             mach_number_dependent_variable, associatedBody, bodyWithAtmosphere );
}

This function is used to save the Mach number dependent variable associated with a certain body. More specifically, it returns a smart pointer to a SingleDependentVariableSaveSettings object and takes as input two (const) references to std::string (these refer to the body whose Mach number should be saved and the body whose atmosphere should be used to compute the Mach number, respectively). This is the code (available here) needed to expose the above function to Python:

PYBIND11_MODULE(example, m) {
    m.def("mach_number",
          &tp::machNumberDependentVariable,
          py::arg("body"),
          py::arg("central_body"));
}

The code reported above creates a Python module, called example (the creation of a module through the PYBIND11_MODULE() function is done in tudatpy only in the kernel.cpp file; most of the binding code is organized through submodules structured as explained in section pybind11 of this page). def() is the pybind function that creates binding code for a specific C++ function [1]. def() takes two mandatory arguments:

  1. a string (i.e., "mach_number"), representing the name of the exposed function in Python;

  2. a pointer to the C++ function that should be exposed (i.e., &tp::machNumberDependentVariable), where tp is an abbreviation for the tudat::propagators namespace.

There are also additional input arguments that can be passed to the pybind def() function. In the context of the example above, these are the keywords for the input arguments of the exposed function in Python, denoted by the syntax py::arg, which takes a string as input (i.e., "body" and "central_body"). py is a shortcut for the pybind11 namespace [2].

Note

There are many other optional input arguments to the def() function. For instance, a third positional argument after &tp::machNumberDependentVariable can be passed (of type std::string) to provide a short documentation to the function. However, this pybind functionality is not employed for tudat/tudatpy.

As a result, pybind11 will generate a Python function that can be used as follows:
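
# Calling the exposed function with positional arguments
# (the module name `example` and the body names are illustrative)
dep_var_to_save = example.mach_number("Spacecraft", "Earth")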

It is also possible to call the tudatpy function mach_number() with keyword arguments, as follows:

dep_var_to_save = example.mach_number(body="Spacecraft", central_body="Earth")

It is also possible to have default values for certain keyword arguments. Suppose, for instance, that we want to have "Earth" as default central body. This can be achieved through the following implementation [3]:

PYBIND11_MODULE(example, m) {
    m.def("mach_number",
          &tp::machNumberDependentVariable,
          py::arg("body"),
          py::arg("central_body") = "Earth");
}
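
With this default in place, the central body can be omitted in the Python call (a usage sketch; module and body names are illustrative):

# central_body defaults to "Earth" and can therefore be omitted
dep_var_to_save = example.mach_number(body="Spacecraft")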

The first issue that arises in the binding process is the conversion between variable types. C++ is a statically-typed language, while Python is dynamically-typed; nevertheless, type conversions are needed in both directions. In other words, the user can pass a Python variable as input to an exposed function: the type of that variable has to be converted to a C++ type before it is passed to the actual C++ function acting “behind the scenes”. The inverse process takes place for the output of a function. This is one of the reasons why pybind11 is necessary: conversions between native types are dealt with automatically by pybind. For instance, a C++ std::map<> is converted into a Python dict and vice versa. In our example, this automatic type conversion takes place for the input arguments, between std::string in C++ and str in Python. Common conversions are reported below.

  • Python list ↔ C++ std::vector<> / std::deque<> / std::list<> / std::array<>

  • Python set ↔ C++ std::set<> / std::unordered_set<>

  • Python dict ↔ C++ std::map<> / std::unordered_map<>

However, non-native data types need to be known to pybind to be converted properly. This is the case of the output type of the machNumberDependentVariable() function, returning a pointer to an instance of the SingleDependentVariableSaveSettings class. If this class is not exposed to Python, the binding process will fail. This offers the opportunity to explain how to generate binding code for classes, which will be done in Exposing a class.

Templated functions#

When a function is templated (see for instance here) it is mandatory to specify the template argument when exposing it. Therefore, the exposition code must be duplicated for each variable type (shown below for double, example taken from here).

m.def("multi_arc",
    &tp::multiArcPropagatorSettings<double>,
    py::arg("single_arc_settings"),
    py::arg("transfer_state_to_next_arc") = false );
Overloading functions#

If a free function or a member function is overloaded (i.e., it bears the same name but it accepts different sets of input argument types), it is not possible to generate binding code in the traditional way explained in Exposing a function, because pybind will not know which version should be chosen to generate Python code. Suppose, for instance, that we want to expose the following overloaded function:

//! Function to create a set of acceleration models from a map of bodies and acceleration model types.
basic_astrodynamics::AccelerationMap createAccelerationModelsMap(
     const SystemOfBodies& bodies,
     const SelectedAccelerationMap& selectedAccelerationPerBody,
     const std::map< std::string, std::string >& centralBodies )

//! Function to create acceleration models from a map of bodies and acceleration model types.
basic_astrodynamics::AccelerationMap createAccelerationModelsMap(
     const SystemOfBodies& bodies,
     const SelectedAccelerationMap& selectedAccelerationPerBody,
     const std::vector< std::string >& propagatedBodies,
     const std::vector< std::string >& centralBodies )

Both overloads of the createAccelerationModelsMap() function accept the system of bodies and an acceleration map as their first two input arguments. In addition, the function needs to know the central body of each propagated body. This information can be passed as a std::map (where each propagated body is associated with its own central body as a key-value pair) or through two separate std::vector objects, one containing the propagated bodies and the other containing the respective central bodies. The code to expose both overloads is reported below:

m.def("create_acceleration_models",// overload [1/2]
       py::overload_cast<const tss::SystemOfBodies &,
       const tss::SelectedAccelerationMap &,
       const std::vector<std::string> &,
       const std::vector<std::string> &>(
           &tss::createAccelerationModelsMap),
       py::arg("body_system"),
       py::arg("selected_acceleration_per_body"),
       py::arg("bodies_to_propagate"),
       py::arg("central_bodies"));

m.def("create_acceleration_models",// overload [2/2]
       py::overload_cast<const tss::SystemOfBodies &,
       const tss::SelectedAccelerationMap &,
       const std::map<std::string, std::string> &>(
           &tss::createAccelerationModelsMap),
       py::arg("body_system"),
       py::arg("selected_acceleration_per_body"),
       py::arg("central_bodies"));

The def() function is still used, where the first input argument is the function name in Python. The difference with respect to a non-overloaded function exposition (see Exposing a function) lies in the second input argument, where pybind’s templated py::overload_cast<> is used [8]. This pybind function casts overloaded functions to function pointers and its syntax is as follows:

  1. the types of input arguments of the original C++ function are passed as template arguments (e.g., const tss::SystemOfBodies &, etc…);

  2. a reference to the original C++ function is passed as a regular input argument (e.g., &tss::createAccelerationModelsMap, where tss is a shortcut for the tudat::simulation_setup namespace).

The optional arguments to def() do not change with respect to what was explained in Exposing a function.
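
From Python, the overload that is executed is selected based on the types of the arguments that are passed, so both of the following calls are valid (a usage sketch; the module prefix is omitted, and the bodies and acceleration settings objects are assumed to have been created beforehand):

# Overload [1/2]: propagated and central bodies given as two lists
acceleration_models = create_acceleration_models(
    body_system=bodies,
    selected_acceleration_per_body=acceleration_settings,
    bodies_to_propagate=["Spacecraft"],
    central_bodies=["Earth"])

# Overload [2/2]: central bodies given as a dict mapping propagated body to central body
acceleration_models = create_acceleration_models(
    body_system=bodies,
    selected_acceleration_per_body=acceleration_settings,
    central_bodies={"Spacecraft": "Earth"})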

Warning

In the (rare) case where a function is overloaded based on constness only, the pybind tag py::const_ must be added as the second argument to py::overload_cast<>.

Exposing a class#

As explained above, the SingleDependentVariableSaveSettings class should be exposed to Python as well. This class, available at this link, is defined as follows:

class SingleDependentVariableSaveSettings : public VariableSettings
{
public:
     SingleDependentVariableSaveSettings(
             const PropagationDependentVariables dependentVariableType,
             const std::string& associatedBody,
             const std::string& secondaryBody = "",
             const int componentIndex = -1 ):
         VariableSettings( dependentVariable ),
         dependentVariableType_( dependentVariableType ),
         associatedBody_( associatedBody ),
         secondaryBody_( secondaryBody ),
         componentIndex_( componentIndex ) { }

     // Attributes
     PropagationDependentVariables dependentVariableType_;
     std::string associatedBody_;
     std::string secondaryBody_;
     int componentIndex_;
 };

The class has a constructor and it is a derived class, whose parent is the VariableSettings class. The code to expose it to Python, available through this link, is as follows (the exposition of the constructor is omitted for now):

py::class_<tp::SingleDependentVariableSaveSettings,
     std::shared_ptr<tp::SingleDependentVariableSaveSettings>,
     tp::VariableSettings>(m, "tp::SingleDependentVariableSaveSettings")

It makes use of pybind's templated py::class_<> [4]. Three template arguments can be provided, of which only the first one is mandatory:

  1. the first template argument declares the C++ class that should be exposed (i.e., tp::SingleDependentVariableSaveSettings);

  2. the second template argument declares the type of pointer that should be used by pybind to refer to instances of such class (i.e., std::shared_ptr<tp::SingleDependentVariableSaveSettings>). The default argument is a std::unique_ptr, but in tudat the common and consistently used pointer is a std::shared_ptr<> [5];

  3. the third template argument informs pybind that the class to be exposed is derived from the parent class tp::VariableSettings [6].

Todo

When does a parent class need to be exposed? In theory, tp::VariableSettings does not have to be exposed… According to GG, “only when the class is part of the signature of a different function” (see recording at 14m01s).

Warning

The third template argument is necessary to ensure automatic downcasting of pointers referring to polymorphic base classes. In other words, when a function returns a pointer to an instance of a derived class, pybind automatically knows to “downcast” the pointer to the type of the derived class only if the base class is polymorphic (a class is said to be polymorphic if it has at least one virtual function).

In addition, there are two input arguments to the py::class_ function:

  1. the name of the Python module to which the exposed class will belong (i.e., m);

  2. the name of the exposed class in Python, provided as a std::string (i.e., "tp::SingleDependentVariableSaveSettings").

Exposing class constructors#

Once the class has been exposed, one can also expose its member functions (in C++) which will become methods (in Python). The first member function that will be exposed is the class constructor. This can be exposed through the following code:

py::class_<tp::SingleDependentVariableSaveSettings,
     std::shared_ptr<tp::SingleDependentVariableSaveSettings>,
     tp::VariableSettings>(m, "tp::SingleDependentVariableSaveSettings")
     .def(py::init<
          const tp::PropagationDependentVariables,
          const std::string &,
          const std::string &,
          const int>(),
          py::arg("dependent_variable_type"),
          py::arg("associated_body"),
          py::arg("secondary_body") = "",
          py::arg("component_idx") = -1);

The first three lines were explained above. To expose the class constructor, one uses pybind’s def() function, which is common to any function (whether or not it is a member of a class). In addition, pybind’s py::init<> is used to declare the constructor; it takes the types of the constructor’s input arguments as template arguments (i.e., const tp::PropagationDependentVariables, const std::string &, etc…). The templated py::init<> makes it easy to overload the class constructor: it is sufficient to define multiple .def(py::init<…>) calls with different template arguments to expose several versions of the constructor, with the correct version selected according to the types of the arguments passed when the object is created. An example, taken from this tudat class exposed through this code, is provided below. Overloading simple functions will be explained in section Overloading functions.

Example: overloading a class constructor

py::class_<
    tp::TranslationalStatePropagatorSettings<double>,
    std::shared_ptr<tp::TranslationalStatePropagatorSettings<double>>,
    tp::SingleArcPropagatorSettings<double>>(m, "TranslationalStatePropagatorSettings")
    .def(// ctor 1
         py::init<
         const std::vector<std::string> &,
         const tba::AccelerationMap &,
         const std::vector<std::string> &,
         const Eigen::Matrix<double, Eigen::Dynamic, 1> &,
         const std::shared_ptr<tp::PropagationTerminationSettings>,
         const tp::TranslationalPropagatorType,
         const std::shared_ptr<tp::DependentVariableSaveSettings>,
         const double>(),
         py::arg("central_bodies"),
         py::arg("acceleration_models"),
         py::arg("bodies_to_integrate"),
         py::arg("initial_states"),
         py::arg("termination_settings"),
         py::arg("propagator") = tp::TranslationalPropagatorType::cowell,
         py::arg("output_variables") = std::shared_ptr<tp::DependentVariableSaveSettings>(),
         py::arg("print_interval") = TUDAT_NAN)
    .def(// ctor 2
         py::init<const std::vector<std::string> &,
         const tss::SelectedAccelerationMap &,
         const std::vector<std::string> &,
         const Eigen::Matrix<double, Eigen::Dynamic, 1> &,
         const std::shared_ptr<tp::PropagationTerminationSettings>,
         const tp::TranslationalPropagatorType,
         const std::shared_ptr<tp::DependentVariableSaveSettings>,
         const double>(),
         py::arg("central_bodies"),
         py::arg("acceleration_settings"),
         py::arg("bodies_to_integrate"),
         py::arg("initial_states"),
         py::arg("termination_settings"),
         py::arg("propagator") = tp::cowell,
         py::arg("output_variables") = std::shared_ptr<tp::DependentVariableSaveSettings>(),
         py::arg("print_interval") = TUDAT_NAN)
    .def(// ctor 3
         py::init<const std::vector<std::string> &,
         const tba::AccelerationMap &,
         const std::vector<std::string> &,
         const Eigen::Matrix<double, Eigen::Dynamic, 1> &,
         const double,
         const tp::TranslationalPropagatorType,
         const std::shared_ptr<tp::DependentVariableSaveSettings>,
         const double>(),
         py::arg("central_bodies"),
         py::arg("acceleration_models"),
         py::arg("bodies_to_integrate"),
         py::arg("initial_states"),
         py::arg("termination_time"),
         py::arg("propagator") = tp::cowell,
         py::arg("output_variables") = std::shared_ptr<tp::DependentVariableSaveSettings>(),
         py::arg("print_interval") = TUDAT_NAN)
    .def(// ctor 4
         py::init<const std::vector<std::string> &,
         const tss::SelectedAccelerationMap &,
         const std::vector<std::string> &,
         const Eigen::Matrix<double, Eigen::Dynamic, 1> &,
         const double,
         const tp::TranslationalPropagatorType,
         const std::shared_ptr<tp::DependentVariableSaveSettings>,
         const double>(),
         py::arg("central_bodies"),
         py::arg("acceleration_settings"),
         py::arg("bodies_to_integrate"),
         py::arg("initial_states"),
         py::arg("termination_time"),
         py::arg("propagator") = tp::cowell,
         py::arg("output_variables") = std::shared_ptr<tp::DependentVariableSaveSettings>(),
         py::arg("print_interval") = TUDAT_NAN)

Warning

The template arguments must always be provided to py::init<>, even if the constructor is not overloaded.

The def() function follows its standard behavior (explained above) even when it is used to expose a class constructor; in other words, it can take a number of optional arguments that specify the Python keyword corresponding to each input argument of the class constructor (i.e., py::arg("dependent_variable_type"), etc…). In this example, the last two input arguments also have default values.

Note

The set of parentheses after py::init<> is empty; it is needed only to comply with the correct syntax. Optional arguments can be passed to create custom constructors in Python [7]. However, this pybind functionality is not used for tudat, therefore it is not treated in this guide.

Exposing class attributes#
Class attributes in C++ vs. in Python#

There are a few differences between the Object-Oriented Programming (OOP) philosophy in C++ and Python. It is important to know these differences before proceeding to the next sections. The reader who is already aware of this information can skip this section.

One of the principles used in Object-Oriented Programming in C++ is data encapsulation. According to this principle, class attributes should be accessible only from within the class and not by the user dealing with an instance of that class. This principle is (partly) enforced by C++: for instance, class attributes are private by default (i.e., accessible only from within the class’s own member functions and its friends) [9]. This policy is useful mainly for security reasons (data protection), but also because interaction with the data contained in a class becomes possible only through its public methods; in other words, the user interacts with the class data through a dedicated user interface, without knowing or dealing directly with the class’s internal workings. This strategy also ensures that changes to the class’s internal structure do not affect the code that creates and uses instances of that class [10]. The most basic form of such an interface is a set of accessors and mutators (hereafter referred to as getters and setters).

Python, on the other hand, does not offer truly private class attributes. Among Python programmers, there is a widespread convention to prefix attribute names with an underscore (e.g., myclass._myattribute) to inform other developers and users that such attributes should not be accessed directly outside of the class. However, this is only a convention and the language does not enforce it. For this reason, getters and setters are not as common in Python as they are in other OOP languages, such as C++ or Java. In addition, the dot notation used in Python to access and mutate class attributes makes the code much more readable [11].

However, there may be cases where getters and setters are needed in Python classes as well. This is the case when code is exposed from another OOP language, such as C++, as happens for tudat: it is clearly easier to maintain the same user interface, thus keeping getters and setters in Python as well. In this case, it is recommended to create a class property. This solution has the advantage of keeping the getters and setters, while at the same time benefiting from the dot notation [12].

These concepts will be partially re-explained and applied in Exposing public attributes (for attributes that are not private and thus do not have associated getters and setters) and Exposing private attributes (for attributes that are private and thus do have associated getters and setters, which can become properties in Python).

Exposing public attributes#

Analogously to the def() method of pybind’s py::class_, which is used to expose member functions, pybind offers two other methods to expose the public attributes of a class (for private attributes, see Exposing private attributes) [9]. def_readwrite() can be used to expose a non-constant attribute. For instance, let’s consider the following piece of code, which exposes this class:

py::class_<ta::AerodynamicGuidance, ta::PyAerodynamicGuidance,
         std::shared_ptr< ta::AerodynamicGuidance > >(m, "AerodynamicGuidance")
         .def(py::init<>())
         .def("updateGuidance", &ta::AerodynamicGuidance::updateGuidance, py::arg("current_time") )
         .def_readwrite("angle_of_attack", &ta::PyAerodynamicGuidance::currentAngleOfAttack_)
         .def_readwrite("bank_angle", &ta::PyAerodynamicGuidance::currentBankAngle_)
         .def_readwrite("sideslip_angle", &ta::PyAerodynamicGuidance::currentAngleOfSideslip_);

The last three lines show the def_readwrite() function at work. It takes two arguments, in the same way as explained in Exposing a function:

  1. the name of the attribute of the exposed Python class, passed as a string;

  2. the attribute of the original C++ class, passed as a reference.

Similarly, the def_readonly() function can be used to expose const public class attributes. For instance, look at this example exposing this thrust direction class:

 py::class_<
      tss::ThrustDirectionGuidanceSettings,
      std::shared_ptr<tss::ThrustDirectionGuidanceSettings>>(m, "ThrustDirectionGuidanceSettings")
      .def(py::init<
           const tss::ThrustDirectionGuidanceTypes,
           const std::string>(),
           py::arg("thrust_direction_type"),
           py::arg("relative_body"))
      .def_readonly("thrust_direction_type", &tss::ThrustDirectionGuidanceSettings::thrustDirectionType_)
      .def_readonly("relative_body", &tss::ThrustDirectionGuidanceSettings::relativeBody_);

The last two lines use def_readonly() in the same way as def_readwrite().

Note

In tudat, it was decided to have as few public attributes as possible. Therefore, in principle, a developer should not rely on def_readonly() and def_readwrite() too much, as classes should be designed so that attributes are generally private and interaction with those is possible through getters (and setters).

Exposing private attributes#

If class attributes are private, it is likely that they can be accessed (and, in some cases, modified) through getters and setters. pybind’s py::class_ offers specific methods to deal with this situation, namely def_property() and def_property_readonly() [13]. The former is used for private attributes that have both a getter and a setter, while the latter is used for private attributes that cannot be modified (i.e., they only have a getter). The following example, exposing a spherical harmonics class in tudat, illustrates the usage of both:

py::class_<tg::SphericalHarmonicsGravityField,
         std::shared_ptr<tg::SphericalHarmonicsGravityField >,
         tg::GravityFieldModel>(m, "SphericalHarmonicsGravityField")
         .def_property_readonly("reference_radius", &tg::SphericalHarmonicsGravityField::getReferenceRadius )
         .def_property_readonly("maximum_degree", &tg::SphericalHarmonicsGravityField::getDegreeOfExpansion )
         .def_property_readonly("maximum_order", &tg::SphericalHarmonicsGravityField::getOrderOfExpansion )
         .def_property("cosine_coefficients", &tg::SphericalHarmonicsGravityField::getCosineCoefficients,
                       &tg::SphericalHarmonicsGravityField::setCosineCoefficients)
         .def_property("sine_coefficients", &tg::SphericalHarmonicsGravityField::getSineCoefficients,
                       &tg::SphericalHarmonicsGravityField::setSineCoefficients);

The syntax is as follows:

  1. the first argument is, as usual, the name of the attribute of the exposed Python class, passed as a string;

  2. the second argument is the getter function of the original C++ class, passed as a reference;

  3. [only for def_property()] the third argument is the setter function of the original C++ class, passed as a reference.

As a result, in Python it is possible to operate without getters and setters, simply accessing properties through the dot notation (see the Python documentation about the property decorator). As an example, in Python one could do:

# Create spherical harmonics object
spherical_harmonics_model = ...
# Retrieve sine coefficients
sin_coeff = spherical_harmonics_model.sine_coefficients
# Set sine coefficients
spherical_harmonics_model.sine_coefficients = sin_coeff
# Retrieve reference radius
r = spherical_harmonics_model.reference_radius
# Set reference radius
spherical_harmonics_model.reference_radius = r  # THIS WOULD RAISE AN ERROR (read-only property)

Note

In the current state of tudatpy, def_property() is not always used, because in some cases the getter and setter functions are exposed individually through the traditional def() method. However, this practice is discouraged when writing new binding code: when getters (and setters) are available in C++, it is recommended to rely on def_property() or def_property_readonly().

Todo

@Dominic, @Geoffrey, do you confirm the note above?

Exposing class methods#

Other class methods that are not part of the categories explained above can be simply exposed with the same syntax used for free functions (see Exposing a function).
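As a minimal, self-contained sketch (with a hypothetical Counter class that is not part of tudat), exposing a regular member function looks identical to exposing a free function:

#include <memory>

#include <pybind11/pybind11.h>

namespace py = pybind11;

// Hypothetical class with an ordinary member function.
class Counter
{
public:
     void increment( const int step = 1 ){ count_ += step; }
     int getCount( ) const { return count_; }

private:
     int count_ = 0;
};

PYBIND11_MODULE(example, m)
{
     py::class_<Counter, std::shared_ptr<Counter>>(m, "Counter")
          .def(py::init<>())
          // A regular member function is exposed with def(), just like a free function.
          .def("increment", &Counter::increment, py::arg("step") = 1)
          // The getter is exposed as a read-only property (see Exposing private attributes).
          .def_property_readonly("count", &Counter::getCount);
}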

Exposing an enum#

Exposing enumeration types is relatively straightforward. Suppose we would like to expose the following enum, located in the tudat::propagators namespace:

//! Enum listing types of dynamics that can be numerically integrated
enum IntegratedStateType
{
     hybrid = 0,
     translational_state = 1,
     rotational_state = 2,
     body_mass_state = 3,
     custom_state = 4
};

This can be done through pybind’s py::enum_<> function as follows (original code):

py::enum_<tp::IntegratedStateType>(m, "StateType")
         .value("hybrid_type", tp::IntegratedStateType::hybrid)
         .value("translational_type", tp::IntegratedStateType::translational_state)
         .value("rotational_type", tp::IntegratedStateType::rotational_state)
         .value("mass_type", tp::IntegratedStateType::body_mass_state)
         .value("custom_type", tp::IntegratedStateType::custom_state)
         .export_values();

py::enum_<> takes the original C++ enum as template argument; its first parameter is, as usual, the module m, while its second parameter is the name of the Python equivalent (i.e., "StateType"). Each element of the enum can then be exposed using the value() function, which takes two parameters:

  1. the name of the element in Python;

  2. the name of the original C++ element to be exposed (where tp is, as usual, a shortcut for the tudat::propagators namespace).

The final call to export_values() exports the enum elements to the parent (module) scope; without it, the elements would only be accessible through the enum class itself (e.g., StateType.hybrid_type in Python) and not directly from the module [14].

Todo

To address: the structure of the PYBIND11_MODULE (in kernel) and the module/submodule definition. However, this overlaps with the content of this tudat developer guide. I propose to either redirect from here to there or transfer its content here.

References#

Extending Features#

Development Environment

  1. Get your own tudat-bundle environment from the tudat-team.

  2. Understand the structure of the tudat-bundle and the purpose of its components.

  3. Familiarize yourself with the mapping between tudat and tudatpy source code.

  4. Understand the higher-level functions of the tudat-api.

  5. Familiarize yourself with the available build configurations for tudat and tudatpy.

  6. Know how to build the tudat-bundle and recognize some common problems that can be encountered.

Bibliography#

[1]

G.H. Garrett. Developer-primer. URL: https://github.com/tudat-team/developer-primer.

[2]

Atlassian. Gitflow workflow: atlassian git tutorial. URL: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow.

[3]

Anaconda, individual edition. URL: https://www.anaconda.com/products/individual.

[5]

Tom Preston-Werner. Semantic versioning 2.0.0. URL: https://semver.org/.