Add docs on Model, litpose
ksikka committed Jan 22, 2025
1 parent 5ca7220 commit bbc2134
Showing 12 changed files with 293 additions and 141 deletions.
14 changes: 14 additions & 0 deletions docs/_static/lightningpose.css
@@ -0,0 +1,14 @@
html.writer-html4[data-theme="dark"] .rst-content dl:not(.docutils) .descclassname,
html.writer-html4 .rst-content dl:not(.docutils) .descname,
html.writer-html4 .rst-content dl:not(.docutils) .sig-name,
html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,
html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name {
color: rgb(0, 125, 206);
}


html[data-theme="dark"].writer-html4 .rst-content dl:not(.docutils) dl:not(.field-list) > dt, html[data-theme="dark"].writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not( .glossary ):not(.simple) dl:not(.field-list) > dt {
background-color: #0f0f0f !important;
color: #959595 !important;
border-color: #2b2b2b !important;
}
17 changes: 6 additions & 11 deletions docs/conf.py
@@ -54,14 +54,14 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
html_theme_options = {"logo": {"text": "Lightning Pose Docs - Home"}}
html_logo = "images/LightningPose_logo_light.png"
html_favicon = "images/favicon.ico"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
html_static_path = ['_static']
html_css_files = ['lightningpose.css']

autosummary_generate = True

@@ -70,12 +70,7 @@


# If you want to document __init__() functions for python classes
# https://stackoverflow.com/a/5599712
def skip(app, what, name, obj, skip, options):
    if name == "__init__":
        return False
    return skip


def setup(app):
    app.connect("autodoc-skip-member", skip)
# https://stackoverflow.com/a/61732050/1925967
autodoc_default_options = {
    'special-members': '__init__',
}
74 changes: 73 additions & 1 deletion docs/source/api.rst
@@ -1,5 +1,77 @@
.. _lightning_pose_api:

##################
Lightning Pose API
==================
##################


Train function
==============

.. autofunction:: lightning_pose.train.train

To train a model using ``config.yaml`` and output to ``outputs/doc_model``:

.. code-block:: python

    import os

    from omegaconf import OmegaConf

    from lightning_pose.train import train

    cfg = OmegaConf.load("config.yaml")

    # train() writes its outputs to the current working directory,
    # so create the output directory and switch into it first.
    os.makedirs("outputs/doc_model", exist_ok=True)
    os.chdir("outputs/doc_model")
    train(cfg)

To override settings before training:

.. code-block:: python

    cfg = OmegaConf.load("config.yaml")
    overrides = {
        "training": {
            "min_epochs": 5,
            "max_epochs": 5,
        }
    }
    cfg = OmegaConf.merge(cfg, overrides)
    train(cfg)

Training returns a ``Model`` object, which is described next.
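For instance, the returned object can be used directly for inference (a minimal
sketch; the ``Model`` class is documented below):

.. code-block:: python

    model = train(cfg)  # returns a lightning_pose.model.Model
    model.predict_on_video_file("path/to/video.mp4")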

Model class
===========

The ``Model`` class provides an easy-to-use interface to a lightning-pose
model. It supports running inference and accessing model metadata.
The set of supported Model operations will expand as we continue development.

You create a model object using ``Model.from_dir``:

.. code-block:: python

    from lightning_pose.model import Model

    model = Model.from_dir("outputs/doc_model")

Then, to predict on new data:

.. code-block:: python

    model.predict_on_video_file("path/to/video.mp4")

or:

.. code-block:: python

    model.predict_on_label_csv("path/to/csv_file.csv")

API Reference:

.. autoclass:: lightning_pose.model.Model
:members:
:exclude-members: __init__, PredictionResult, predict_on_label_csv_internal


Lightning Pose Internal API
===========================

* :ref:`metrics and callbacks modules <lp_modules>`
* :ref:`data package <lp_modules_data>`
2 changes: 1 addition & 1 deletion docs/source/developer_guide/add_a_loss.rst
@@ -198,7 +198,7 @@ from the ``data`` field) you can add those key-value pairs to the constructor in

Step 4: update ``compute_metrics`` (optional)
---------------------------------------------
The base training script ``scripts/train_hydra.py`` will automatically compute a set of metrics on
Lightning Pose will automatically compute a set of metrics on
all labeled data and unlabeled videos upon training completion.
To add your new metric to this operation, you must update
:meth:`~lightning_pose.utils.scripts.compute_metrics`.
8 changes: 2 additions & 6 deletions docs/source/developer_guide/add_a_model.rst
@@ -182,10 +182,6 @@ The field ``model.model_type`` is used to specify your model - the current suppo
"regression", "heatmap", and "heatmap_mhcrnn".
Add your new model name to this list.

The basic training script can be found at ``scripts/train_hydra.py``.
You do not need to update anything in this script to accommodate your new model, but this script
uses several helper functions that we will update next.

Step 2: update ``get_dataset``
------------------------------
The first helper function you need to update is
@@ -217,8 +213,8 @@ Finally, there is helper function :meth:`~lightning_pose.utils.predictions.get_m
is used to seamlessly load model parameters from checkpoint files.
Again, there are various ``if/else`` statements where your model should be incorporated.

Step 6: optional and miscellaneous additons
-------------------------------------------
Step 6: optional and miscellaneous additions
--------------------------------------------

If you find yourself needing to write a new DALI dataloader to support your model training, you might also need to update the :class:`~lightning_pose.utils.predictions.PredictionHandler` class.

19 changes: 9 additions & 10 deletions docs/source/user_guide/config_file.rst
@@ -95,7 +95,8 @@ See the :ref:`FAQs <faq_oom>` for more information on memory management.
If the value is a float between 0 and 1 then it is interpreted as the fraction of total train frames.
If the value is an integer greater than 1 then it is interpreted as the number of total train frames.
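For example (a sketch, assuming this field is ``training.train_frames``):

.. code-block:: yaml

    training:
      train_frames: 0.8     # use 80% of the available train frames
      # train_frames: 1000  # alternatively, use exactly 1000 train frames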

.. _config_num_gpus:
.. _config_num_gpus:

* ``training.num_gpus`` (*int, default: 1*): the number of GPUs for
:ref:`multi-GPU training <multi_gpu_training>`
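For example, to train on two GPUs:

.. code-block:: yaml

    training:
      num_gpus: 2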

@@ -245,15 +246,14 @@ Evaluation

The following parameters are used for general evaluation.

* ``eval.predict_vids_after_training`` (*bool, default: true*): if true, after training (when using
scripts/train_hydra.py) run inference with the best model on all videos located in
``eval.test_videos_directory`` (see below)
* ``eval.predict_vids_after_training`` (*bool, default: true*): if true, after training run
inference on all videos located in ``eval.test_videos_directory`` (see below)

* ``eval.test_videos_directory`` (*str, default: null*): absolute path to a video directory
containing videos for prediction; used in scripts/train_hydra.py and scripts/predict_new_vids.py
containing videos for post-training prediction.

* ``eval.save_vids_after_training`` (*bool, default: false*): save out an mp4 file with predictions
overlaid after running inference; used in scripts/train_hydra.py and scripts/predict_new_vids.py
overlaid after running post-training prediction.

* ``eval.colormap`` (*str, default: cool*): colormap options for labeled videos; options include
sequential colormaps (viridis, plasma, magma, inferno, cool, etc) and diverging colormaps (RdBu,
@@ -262,11 +262,10 @@ The following parameters are used for general evaluation.
* ``eval.confidence_thresh_for_vid`` (*float, default: 0.9*): predictions with confidence below this
value will not be plotted in the labeled videos

* ``eval.hydra_paths`` (*list, default: []*): absolute paths to hydra output folders for use with
scripts/predict_new_vids.py (see :ref:`inference <inference>` docs) and
scripts/create_fiftyone_dataset.py (see :ref:`FiftyOne <fiftyone>` docs)

* ``eval.fiftyone.dataset_name`` (*str, default: test*): name of the FiftyOne dataset

* ``eval.fiftyone.model_display_names`` (*list, default: [test_model]*): shorthand name for each of
the models specified in ``hydra_paths``

* ``eval.hydra_paths`` (*list, default: []*): absolute paths to model directories, only for use with
scripts/create_fiftyone_dataset.py (see :ref:`FiftyOne <fiftyone>` docs).
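Putting these together, a sketch of the ``eval`` section using the defaults
described above:

.. code-block:: yaml

    eval:
      predict_vids_after_training: true
      test_videos_directory: null
      save_vids_after_training: false
      colormap: cool
      confidence_thresh_for_vid: 0.9
      hydra_paths: []
      fiftyone:
        dataset_name: test
        model_display_names: [test_model]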
9 changes: 3 additions & 6 deletions docs/source/user_guide/evaluation.streamlit.rst
@@ -13,13 +13,10 @@ Run the following command from inside the ``lightning-pose/lightning_pose/apps``

.. code-block:: console
streamlit run labeled_frame_diagnostics.py -- --model_dir <ABOLUTE_PATH_TO_HYDRA_OUTPUTS_DIRECTORY>
streamlit run labeled_frame_diagnostics.py -- --model_dir <ABSOLUTE_PATH_TO_OUTPUT_DIRECTORY>
The only argument needed is ``--model_dir``, which tells the app where to find models and their predictions. ``<ABOLUTE_PATH_TO_HYDRA_OUTPUTS_DIRECTORY>`` should contain hydra subfolders of the type ``YYYY-MM-DD/HH-MM-SS``.

.. note:
The lightning-pose output folder for a single model is typically ``/path/to/lightning-pose/outputs/YYYY-MM-DD/HH-MM-SS``, where the last folder contains prediction csv files.
The only argument needed is ``--model_dir``, which tells the app where to find models.
It should contain model directories of the form ``YYYY-MM-DD/HH-MM-SS``.
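For example, a ``--model_dir`` layout the app expects (dates and times are
illustrative):

.. code-block::

    <model_dir>/
    ├── 2025-01-20/
    │   └── 14-32-08/
    └── 2025-01-21/
        └── 09-15-42/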

The app shows:

103 changes: 68 additions & 35 deletions docs/source/user_guide/inference.rst
@@ -4,60 +4,93 @@
Inference
#########

Once you have trained a model you'll likely want to run inference on new videos.
Since version 1.7.0, installing lightning-pose also installs ``litpose``,
a command-line tool built on top of the :ref:`lightning_pose_api`.
The command ``litpose predict`` is used to run model inference on new data.

Similar to training, there are several tools for running inference:
Inference on new videos
=======================

#. A set of high-level functions used for processing videos and creating labeled clips. You can combine these to create your own custom inference script. This is required if you used the :ref:`pip package <pip_package>` installation method.
#. An example inference script provided in the :ref:`conda from source <conda_from_source>` installation method. This demonstrates how to combine the high-level functions.
The ``model_dir`` argument is the path to the model directory output by ``litpose train``.

To predict on one or more video files:

.. code-block:: shell

    litpose predict <model_dir> <video_file1> <video_file2> ...

To predict on a folder of video files:

.. code-block:: shell

    litpose predict <model_dir> <video_files_dir>
The ``litpose predict`` command saves frame-by-frame predictions and confidences to a
CSV file, and unsupervised losses to one CSV file per loss type. By default it also
generates videos annotated with predictions; this can be disabled with the
``--skip_viz`` flag.
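For example, to run inference on a single video without generating a labeled video:

.. code-block:: shell

    litpose predict <model_dir> <video_file> --skip_viz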

For the full list of options:

.. code-block:: shell

    litpose predict --help
.. note::

The steps below assume the :ref:`conda from source <conda_from_source>` installation method.
If you did not use this installation method, see the
`example inference script <https://github.com/danbider/lightning-pose/blob/main/scripts/predict_new_vids.py>`_.
You can also see how video inference is handled in the
`example train script <https://github.com/danbider/lightning-pose/blob/main/scripts/train_hydra.py>`_.
Videos *must* be mp4 files that use the h.264 codec; see more information in the
:ref:`FAQs<faq_video_formats>`.
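If a video is in another format, it can be re-encoded with ffmpeg (a sketch, assuming
``ffmpeg`` is installed):

.. code-block:: shell

    # re-encode to h.264 mp4; yuv420p keeps the output broadly compatible
    ffmpeg -i input.avi -c:v libx264 -pix_fmt yuv420p output.mp4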

Inference with example data
===========================

To run inference with a model trained on the example dataset, run the following command from
inside the ``lightning-pose`` directory
(make sure you have activated your conda environment):
Inference on new images
=======================

.. code-block:: console
Lightning Pose also supports inference on images, as well as computing pixel error
against newly labeled images. This is useful for evaluating a model on
out-of-distribution data to see how well it generalizes.

python scripts/predict_new_vids.py eval.hydra_paths=["YYYY-MM-DD/HH-MM-SS/"]
Currently, you must create a CSV file in the same format as the one used for
labeled training frames. Once you have a CSV file, run:

This overwrites the config field ``eval.hydra_paths``, which is a list that contains the relative
paths of the model folders you want to run inference with
(you will need to replace "YYYY-MM-DD/HH-MM-SS/" with the timestamp of your own model).
.. code-block:: shell
Inference with your data
========================
litpose predict <model_dir> <csv_file>
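The CSV layout is expected to mirror the training label file; a minimal sketch
(the tracker and bodypart names here are illustrative):

.. code-block:: text

    scorer,tracker,tracker,tracker,tracker
    bodyparts,paw_l,paw_l,paw_r,paw_r
    coords,x,y,x,y
    labeled-data/session00/img0001.png,88.5,201.0,150.2,198.7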
In order to use this script more generally, you need to update several config fields:
Output location
===============

#. ``eval.hydra_paths``: path to models to use for prediction
#. ``eval.test_videos_directory``: path to a `directory` containing videos to run inference on
#. ``eval.save_vids_after_training``: if ``true``, the script will also save a copy of the full video with model predictions overlaid.
Video predictions are saved to:

The results will be stored in the model directory.
.. code-block::
As with training, you either directly edit your config file and run:
<model_dir>/
└── video_preds/
├── <video_filename>.csv (predictions)
├── <video_filename>_<metric>.csv (losses)
└── labeled_videos/
└── <video_filename>_labeled.mp4
.. code-block:: console
Image predictions are saved to:

python scripts/predict_new_vids.py --config-path=<PATH/TO/YOUR/CONFIGS/DIR> --config-name=<CONFIG_NAME.yaml>
.. code-block::
or override these arguments in the command line:
<model_dir>/
└── image_preds/
└── <image_dirname | csv_filename | timestamp>/
├── predictions.csv
├── predictions_<metric>.csv (losses)
└── <image_filename>_labeled.png
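For downstream analysis, the prediction CSVs can be loaded with pandas. A minimal
sketch, assuming the predictions use the same multi-row (scorer, bodyparts, coords)
header as the label files:

.. code-block:: python

    import pandas as pd

    # hypothetical output path; header=[0, 1, 2] parses the three header rows
    df = pd.read_csv(
        "outputs/doc_model/video_preds/video.csv", header=[0, 1, 2], index_col=0
    )
    print(df.head())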
.. code-block:: console
Inference on sample dataset
===========================

python scripts/predict_new_vids.py --config-path=<PATH/TO/YOUR/CONFIGS/DIR> --config-name=<CONFIG_NAME.yaml> eval.hydra_paths=["YYYY-MM-DD/HH-MM-SS/"] eval.test_videos_directory=/absolute/path/to/videos
The Lightning Pose repo includes a sample dataset (see :ref:`training-on-sample-dataset`).
The sample video file is located in the repo at ``data/mirror-mouse-example/videos``.
Thus, to run inference with a model trained on the sample dataset,
run the following from the ``lightning-pose`` directory
(make sure you have activated your conda environment):

.. note::
.. code-block:: shell
Videos *must* be mp4 files that use the h.264 codec; see more information in the
:ref:`FAQs<faq_video_formats>`.
litpose predict <model_dir> data/mirror-mouse-example/videos