Open3D-ML Jupyter Tutorials #512

Open · wants to merge 13 commits into main
1 change: 1 addition & 0 deletions .gitignore
@@ -26,3 +26,4 @@ test_kpconv/
 kernels/
 **/.fuse*
 train_log/
+*.ipynb_checkpoints
225 changes: 225 additions & 0 deletions docs/tutorial/notebook/Inference_on_a_custom_data.ipynb
@@ -0,0 +1,225 @@
{
@ssheorey (Member, Apr 8, 2022):

Running Semantic Segmentation inference on custom data

Author:

Done.

Member:

  • This needs an introduction at the top describing what we will do in this tutorial.
  • The language describing the steps is not quite right. @sanskar, you may need to directly edit Alex's PR, e.g.: (-in our data model, we define a dataset....)
  • The note at the top about downloading weights is specific to PyTorch. Add a TF alternative.

Author:

Done.

Author:

To Do (Sanskar): editing pass

Collaborator:

Done

@ssheorey (Member, Apr 8, 2022):

... to the data weights...

Author:

Done.

Member:

Add a (commented) command to download the weights directly in the notebook:

# from urllib.request import urlretrieve
# urlretrieve(weights_url, filename=weights_file)

Author:

Done.

Member:

Explain in_channels=3

...with our data weights file...

Author:

Done

@ssheorey (Member, Apr 8, 2022):

...display what the content data...

Author:

Done.

"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Running Semantic Segmentation inference on custom data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we will cover how to run an inference on a pointcloud data in Open3D-ML. To accomplish that, we will take these steps:\n",
"\n",
"1. Download the data *weights* file;\n",
"2. Set up `torch` and `numpy` libraries;\n",
"3. Create a `dataset` object and extract a sample from its `'test'` split;\n",
"4. Create and initialize `model` and `pipeline` objects;\n",
"5. Restore the `model` with data from the *weights* file;\n",
"6. Convert the custom pointcloud data into the specified format;\n",
"7. Run an inference on the sample data.\n",
"\n",
"\n",
"> **Note:** We will be using a sample `RandLANet` `SemanticKITTI` weight file which we need to:\n",
">\n",
"> 1. Download for either *PyTorch* or *TensorFlow* from links below:\n",
"> > a. For *PyTorch*: https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.pth\n",
"> >\n",
"> > b. For *TensorFlow*: https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.zip\n",
">\n",
"> 2. Place the downloaded `randlanet_semantickitti_202201071330utc.pth` file into `'Open3D-ML/docs/tutorial/notebook/'` subdirectory, or any other place and change the `ckpt_path` accordingly.\n",
">\n",
"> For other model/dataset weight files, please check out https://github.com/isl-org/Open3D-ML#semantic-segmentation-1\n",
"\n",
"\n",
"An inference predicts the results based on the trained model.\n",
"\n",
"> **Please see the [Training a semantic segmentation model using PyTorch](train_ss_model_using_pytorch.ipynb) and [Training a semantic segmentation model using TensorFlow](train_ss_model_using_tensorflow.ipynb) for training tutorials.**\n",
"\n",
"While training, the model saves the checkpoint files every few epochs, in the *logs* directory. We use these trained weights to restore the model for inference.\n",
"\n",
"Our first step in inference on a custom data implementation is to import `open3d.ml` and `numpy` libraries:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import open3d.ml.torch as ml3d # just switch to open3d.ml.tf for tf usage\n",
"import numpy as np"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then create a checkpoint path pointing to the weights file we downloaded (generated at the end of the Training stage):\n",
"\n",
"(You can download any other weights using a link from the model zoo (collection of weights for all combinations of model and dataset): https://github.com/isl-org/Open3D-ML#semantic-segmentation-1 )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"weights_url = 'https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.zip'\n",
"ckpt_path = './randlanet_semantickitti_202201071330utc.pth'\n",
"# from urllib.request import urlretrieve\n",
"# urlretrieve(weights_url, filename=ckpt_path)"
]
},
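{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are using *TensorFlow*, the weights come as a `.zip` archive (see the link in the note above), which must be extracted first. Here is a minimal, commented sketch; the local file names are illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TensorFlow alternative (sketch): download and extract the zip archive\n",
"# from urllib.request import urlretrieve\n",
"# import zipfile\n",
"# tf_weights_url = 'https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.zip'\n",
"# urlretrieve(tf_weights_url, filename='randlanet_semantickitti_202201071330utc.zip')\n",
"# with zipfile.ZipFile('randlanet_semantickitti_202201071330utc.zip') as zf:\n",
"#     zf.extractall('.')"
]
},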
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we define a `dataset`, `model`, and `pipeline` objects identical to how it was done in our previous *Training a semantic segmentation model* tutorials:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We define dataset (similar to train_ss_using_pytorch tutorial)\n",
"dataset = ml3d.datasets.SemanticKITTI(dataset_path='SemanticKITTI/',\n",
" cache_dir='./logs/cache',\n",
" training_split=['00'],\n",
" validation_split=['01'],\n",
" test_split=['01'])\n",
"\n",
"# Initializing the model and pipeline\n",
"model = ml3d.models.RandLANet(in_channels=3)\n",
"pipeline = ml3d.pipelines.SemanticSegmentation(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we restore the model with our weights file with `pipeline.load_ckpt()` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load checkpoint using `load_ckpt` method (restoring weights for inference)\n",
"pipeline.load_ckpt(ckpt_path=ckpt_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us query the first pointcloud from the `test` split."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"test_data = dataset.get_split('test')\n",
"data = test_data.get_data(0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's display what `data` contains:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For inference on custom data, you can convert your point cloud into this format:\n",
"\n",
"**Dictionary with keys {'point', 'feat', 'label'}**\n",
"\n",
"If you already have the *ground truth labels*, you can add them to data to get accuracy and IoU (Intersection over Union). Otherwise, pass labels as `None`.\n",
"\n",
"And now - the main topic of our tutorial - running inference on the test data. You can call the `run_inference()` method with your data, - it will print *accuracy per class* and *Intersection over Union (IoU)* metrics. The last entry in the list is *mean accuracy* and *mean IoU*:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Running inference on test data\n",
"results = pipeline.run_inference(data)\n",
"# prints per class accuracy and IoU (Intersection over Union). Last entry is mean accuracy and mean IoU.\n",
"# We get several `nan` outputs for missing classes in the input data."
]
},
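{
"cell_type": "markdown",
"metadata": {},
"source": [
"As promised above, here is a minimal sketch of packing your own point cloud into the expected dictionary. The `my_points` array, its shape, and the file name are illustrative placeholders, not part of the tutorial data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical example: pack a custom point cloud for inference\n",
"# my_points = np.load('my_pointcloud.npy').astype(np.float32)  # (N, 3) x, y, z coordinates\n",
"# custom_data = {\n",
"#     'point': my_points,  # (N, 3) point coordinates\n",
"#     'feat': None,        # optional per-point features; None if unused\n",
"#     'label': None        # ground truth labels if available, else None\n",
"# }\n",
"# results = pipeline.run_inference(custom_data)"
]
},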
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `results` object will return a dictionary of predicted labels and predicted probabilities per point:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Dictionary of predicted labels and predicted probabilities per class\n",
"results"
]
},
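{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, you can pull the per-point predictions out of `results`. The key names below (`predict_labels`, `predict_scores`) are assumed from recent Open3D-ML versions; check `results.keys()` if they differ in yours:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumed key names; verify with results.keys() on your Open3D-ML version\n",
"pred_labels = results['predict_labels']  # predicted class index per point\n",
"pred_scores = results['predict_scores']  # per-point class probabilities\n",
"print(pred_labels.shape, pred_scores.shape)"
]
},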
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}