Open3D-ML Jupyter Tutorials #512
.gitignore
@@ -26,3 +26,4 @@ test_kpconv/
 kernels/
 **/.fuse*
 train_log/
+*.ipynb_checkpoints
@@ -0,0 +1,225 @@
{
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Running Semantic Segmentation inference on custom data" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"In this tutorial, we will cover how to run an inference on a pointcloud data in Open3D-ML. To accomplish that, we will take these steps:\n", | ||
"\n", | ||
"1. Download the data *weights* file;\n", | ||
"2. Set up `torch` and `numpy` libraries;\n", | ||
"3. Create a `dataset` object and extract a sample from its `'test'` split;\n", | ||
"4. Create and initialize `model` and `pipeline` objects;\n", | ||
"5. Restore the `model` with data from the *weights* file;\n", | ||
"6. Convert the custom pointcloud data into the specified format;\n", | ||
"7. Run an inference on the sample data.\n", | ||
"\n", | ||
"\n", | ||
"> **Note:** We will be using a sample `RandLANet` `SemanticKITTI` weight file which we need to:\n", | ||
">\n", | ||
"> 1. Download for either *PyTorch* or *TensorFlow* from links below:\n", | ||
"> > a. For *PyTorch*: https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.pth\n", | ||
"> >\n", | ||
"> > b. For *TensorFlow*: https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.zip\n", | ||
">\n", | ||
"> 2. Place the downloaded `randlanet_semantickitti_202201071330utc.pth` file into `'Open3D-ML/docs/tutorial/notebook/'` subdirectory, or any other place and change the `ckpt_path` accordingly.\n", | ||
">\n", | ||
"> For other model/dataset weight files, please check out https://github.com/isl-org/Open3D-ML#semantic-segmentation-1\n", | ||
"\n", | ||
"\n", | ||
"An inference predicts the results based on the trained model.\n", | ||
"\n", | ||
"> **Please see the [Training a semantic segmentation model using PyTorch](train_ss_model_using_pytorch.ipynb) and [Training a semantic segmentation model using TensorFlow](train_ss_model_using_tensorflow.ipynb) for training tutorials.**\n", | ||
"\n", | ||
"While training, the model saves the checkpoint files every few epochs, in the *logs* directory. We use these trained weights to restore the model for inference.\n", | ||
"\n", | ||
"Our first step in inference on a custom data implementation is to import `open3d.ml` and `numpy` libraries:\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import open3d.ml.torch as ml3d # just switch to open3d.ml.tf for tf usage\n", | ||
"import numpy as np" | ||
] | ||
}, | ||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then create a checkpoint path pointing to the weights file we downloaded (generated at the end of the training stage).\n",
"\n",
"You can download any other weights using a link from the model zoo, a collection of weights for all combinations of model and dataset: https://github.com/isl-org/Open3D-ML#semantic-segmentation-1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# PyTorch weights; for TensorFlow, use the .zip URL from the note above\n",
"weights_url = 'https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.pth'\n",
"ckpt_path = './randlanet_semantickitti_202201071330utc.pth'\n",
"# Uncomment to download the weights directly from the notebook:\n",
"# from urllib.request import urlretrieve\n",
"# urlretrieve(weights_url, filename=ckpt_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we define the `dataset`, `model`, and `pipeline` objects, just as we did in our previous *Training a semantic segmentation model* tutorials:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the dataset (as in the train_ss_using_pytorch tutorial)\n",
"dataset = ml3d.datasets.SemanticKITTI(dataset_path='SemanticKITTI/',\n",
"                                      cache_dir='./logs/cache',\n",
"                                      training_split=['00'],\n",
"                                      validation_split=['01'],\n",
"                                      test_split=['01'])\n",
"\n",
"# Initialize the model and pipeline\n",
"model = ml3d.models.RandLANet(in_channels=3)\n",
"pipeline = ml3d.pipelines.SemanticSegmentation(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we restore the model from our weights file with the `pipeline.load_ckpt()` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the checkpoint with `load_ckpt` (restores the weights for inference)\n",
"pipeline.load_ckpt(ckpt_path=ckpt_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us query the first point cloud from the `'test'` split:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"test_data = dataset.get_split('test')\n",
"data = test_data.get_data(0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's display what `data` contains:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(data)"
]
},
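{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can also inspect the shapes behind those keys. This is a small sketch that assumes the usual `SemanticKITTI` split layout printed above, with `'point'` and `'label'` arrays and a `'feat'` entry that may be `None`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check; assumes the SemanticKITTI split keys shown by print(data)\n",
"print('point:', data['point'].shape)  # (N, 3) xyz coordinates\n",
"print('label:', data['label'].shape)  # (N,) per-point class ids\n",
"print('feat: ', None if data['feat'] is None else data['feat'].shape)"
]
},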
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For inference on custom data, you can convert your point cloud into this format:\n",
"\n",
"**Dictionary with keys {'point', 'feat', 'label'}**\n",
"\n",
"If you already have the *ground truth labels*, you can add them to the data to also get accuracy and IoU (Intersection over Union) metrics. Otherwise, pass the labels as `None`. A minimal sketch of such a conversion is shown below."
]
},
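{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is only a sketch, not part of the original pipeline: `my_points` is a hypothetical stand-in for your own `(N, 3)` array of xyz coordinates, and the cell shows how to pack it into the `{'point', 'feat', 'label'}` dictionary described above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: packing a custom point cloud into the expected dictionary.\n",
"# `my_points` is a hypothetical stand-in; replace it with your own (N, 3) float32 array.\n",
"my_points = np.random.rand(10000, 3).astype(np.float32)\n",
"\n",
"custom_data = {\n",
"    'point': my_points,  # (N, 3) xyz coordinates\n",
"    'feat': None,        # optional per-point features\n",
"    'label': None        # or an (N,) int array of ground-truth ids to also get accuracy/IoU\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now for the main topic of our tutorial: running inference on the test data. Call the `run_inference()` method with your data; it will print the *accuracy per class* and the *Intersection over Union (IoU)* metrics. The last entry in each list is the *mean accuracy* or *mean IoU*:"
]
},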
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Running inference on the test data\n",
"results = pipeline.run_inference(data)\n",
"# Prints per-class accuracy and IoU (Intersection over Union); the last entry is mean accuracy / mean IoU.\n",
"# Expect several `nan` entries for classes that are missing from the input data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `results` object is a dictionary of predicted labels and predicted probabilities per point:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Dictionary of predicted labels and predicted probabilities per point\n",
"results"
]
},
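{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a final sketch, here is one way to pull the per-point predictions out of `results` for downstream use. The key names (`'predict_labels'`, `'predict_scores'`) are an assumption based on typical Open3D-ML output; check the dictionary printed above and adjust if yours differ:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumed key names; verify them against the dictionary printed above\n",
"pred_labels = results['predict_labels']  # (N,) predicted class id per point\n",
"pred_scores = results['predict_scores']  # (N, num_classes) predicted probabilities\n",
"\n",
"# Example: count how many points were assigned to each class\n",
"unique_ids, counts = np.unique(pred_labels, return_counts=True)\n",
"print(dict(zip(unique_ids.tolist(), counts.tolist())))"
]
},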
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}