doc: misc. typos #2213

Merged: 3 commits, Feb 13, 2018
2 changes: 1 addition & 1 deletion doc/advanced/content/index.rst
@@ -96,7 +96,7 @@ development that everyone should follow:

An in-depth discussion about the PCL 2.x API can be found here.

Commiting changes to the git master
Committing changes to the git master
-----------------------------------
In order to oversee the commit messages more easier and that the changelist looks homogenous please keep the following format:

4 changes: 2 additions & 2 deletions doc/advanced/content/pcl2.rst
@@ -31,7 +31,7 @@ The 1.x API includes the following data members:
* **bool** :pcl:`is_dense <pcl::PointCloud::is_dense>` - true if the data contains only valid numbers (e.g., no NaN or -/+Inf, etc). False otherwise.

* **Eigen::Vector4f** :pcl:`sensor_origin_ <pcl::PointCloud::sensor_origin_>` - the origin (pose) of the acquisition sensor in the current data coordinate system.
* **Eigen::Quaternionf** :pcl:`sensor_orientation_ <pcl::PointCloud::sensor_orientation_>` - the origin (orientation) of hte acquisition sensor in the current data coordinate system.
* **Eigen::Quaternionf** :pcl:`sensor_orientation_ <pcl::PointCloud::sensor_orientation_>` - the origin (orientation) of the acquisition sensor in the current data coordinate system.
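
As a minimal sketch (with invented pose values, and a plain ``PointXYZ`` cloud standing in for real data), this is how the three members above are typically filled:

.. code-block:: cpp

   #include <Eigen/Geometry>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>

   int main ()
   {
     pcl::PointCloud<pcl::PointXYZ> cloud;
     cloud.is_dense = true;  // promise that the data holds no NaN/Inf coordinates
     // Illustrative sensor pose: 1.5 m above the data origin, no rotation.
     cloud.sensor_origin_ = Eigen::Vector4f (0.0f, 0.0f, 1.5f, 0.0f);
     cloud.sensor_orientation_ = Eigen::Quaternionf::Identity ();
     return 0;
   }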


Proposals for the 2.x API:
@@ -75,7 +75,7 @@ Proposals for the 2.x API:

#. Eigen::Vector4f or Eigen::Vector3f ??

#. Large points cause significant perfomance penalty for GPU. Let's assume that point sizes up to 16 bytes are suitable. This is some compromise between SOA and AOS. Structures like pcl::Normal (size = 32) is not desirable. SOA is better in this case.
#. Large points cause significant performance penalty for GPU. Let's assume that point sizes up to 16 bytes are suitable. This is some compromise between SOA and AOS. Structures like pcl::Normal (size = 32) is not desirable. SOA is better in this case.
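
To make the size argument concrete, a small sketch; the byte counts in the comments are typical x86_64 values and depend on SSE alignment and padding:

.. code-block:: cpp

   #include <iostream>
   #include <pcl/point_types.h>

   int main ()
   {
     std::cout << sizeof (pcl::PointXYZ) << '\n';   // typically 16 bytes: inside the GPU-friendly budget
     std::cout << sizeof (pcl::Normal)   << '\n';   // typically 32 bytes: the case flagged as undesirable above
     return 0;
   }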


1.3 GPU support
10 changes: 5 additions & 5 deletions doc/tutorials/content/adding_custom_ptype.rst
@@ -127,7 +127,7 @@ addition, the type that you want, might already be defined for you.

* `PointXYZRGBA` - Members: float x, y, z; uint32_t rgba;

Similar to `PointXYZI`, except `rgba` containts the RGBA information packed
Similar to `PointXYZI`, except `rgba` contains the RGBA information packed
into a single integer.
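
The type definition itself is collapsed in the directive below; as a purely illustrative sketch with made-up channel values, packing and unpacking the single ``rgba`` field looks like this:

.. code-block:: cpp

   #include <cstdint>
   #include <iostream>
   #include <pcl/point_types.h>

   int main ()
   {
     pcl::PointXYZRGBA p;
     p.x = 1.0f; p.y = 2.0f; p.z = 3.0f;
     // Pack four 8-bit channels into the one 32-bit field (A, R, G, B from the top byte down).
     std::uint8_t r = 51, g = 102, b = 153, a = 255;
     p.rgba = (static_cast<std::uint32_t> (a) << 24) |
              (static_cast<std::uint32_t> (r) << 16) |
              (static_cast<std::uint32_t> (g) << 8)  |
               static_cast<std::uint32_t> (b);
     // Unpack by shifting and masking.
     std::cout << ((p.rgba >> 16) & 0xFF) << '\n';   // red channel -> 51
     return 0;
   }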

.. code-block:: cpp
@@ -197,7 +197,7 @@ addition, the type that you want, might already be defined for you.

* `InterestPoint` - float x, y, z, strength;

Similar to `PointXYZI`, except `strength` containts a measure of the strength
Similar to `PointXYZI`, except `strength` contains a measure of the strength
of the keypoint.

.. code-block:: cpp
@@ -374,7 +374,7 @@ addition, the type that you want, might already be defined for you.

* `PointWithRange` - float x, y, z (union with float point[4]), range;

Similar to `PointXYZI`, except `range` containts a measure of the distance
Similar to `PointXYZI`, except `range` contains a measure of the distance
from the acqusition viewpoint to the point in the world.

.. code-block:: cpp
@@ -401,7 +401,7 @@ addition, the type that you want, might already be defined for you.

* `PointWithViewpoint` - float x, y, z, vp_x, vp_y, vp_z;

Similar to `PointXYZI`, except `vp_x`, `vp_y`, and `vp_z` containt the
Similar to `PointXYZI`, except `vp_x`, `vp_y`, and `vp_z` contain the
acquisition viewpoint as a 3D point.

.. code-block:: cpp
@@ -584,7 +584,7 @@ addition, the type that you want, might already be defined for you.

* `PointWithScale` - float x, y, z, scale;

Similar to `PointXYZI`, except `scale` containts the scale at which a certain
Similar to `PointXYZI`, except `scale` contains the scale at which a certain
point was considered for a geometric operation (e.g. the radius of the sphere
for its nearest neighbors computation, the window size, etc).
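
A short illustrative sketch (invented coordinates, assumed viewpoint) tying the ``range``, ``vp_*`` and ``scale`` fields together; they are all plain ``float`` members that are filled directly:

.. code-block:: cpp

   #include <cmath>
   #include <pcl/point_types.h>

   int main ()
   {
     // Assumed acquisition viewpoint for this example.
     const float vp_x = 0.0f, vp_y = 0.0f, vp_z = 1.5f;

     pcl::PointWithViewpoint pv;
     pv.x = 1.0f; pv.y = 2.0f; pv.z = 0.0f;
     pv.vp_x = vp_x; pv.vp_y = vp_y; pv.vp_z = vp_z;

     pcl::PointWithRange pr;
     pr.x = pv.x; pr.y = pv.y; pr.z = pv.z;
     // range: Euclidean distance from the acquisition viewpoint to the point.
     pr.range = std::sqrt ((pr.x - vp_x) * (pr.x - vp_x) +
                           (pr.y - vp_y) * (pr.y - vp_y) +
                           (pr.z - vp_z) * (pr.z - vp_z));

     pcl::PointWithScale ps;
     ps.x = pr.x; ps.y = pr.y; ps.z = pr.z;
     ps.scale = 0.05f;   // e.g. the support radius a keypoint detector used for this point
     return 0;
   }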

4 changes: 2 additions & 2 deletions doc/tutorials/content/benchmark.rst
@@ -4,7 +4,7 @@ Benchmarking 3D
---------------

This document introduces benchmarking concepts for 3D algorithms. By
*benchmarking* here we refer to the posibility of testing different
*benchmarking* here we refer to the possibility of testing different
computational pipelines in an **easy manner**. The goal is to test their
reproductibility with respect to a particular problem of general interest.

@@ -50,7 +50,7 @@ A higher level representation as mentioned before will be herein represented by
* a combination of the above.


In addtion, feature descriptors can be:
In addition, feature descriptors can be:

* **local** - estimated only at a set of discrete keypoints, using the information from neighboring pixels/points;
* **global**, or meta-local - estimated on entire objects or the entire input dataset.
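
To make the local/global distinction concrete, a hedged sketch using FPFH as the local descriptor and VFH as the global one; the toy planar cloud and the search radii are arbitrary choices for illustration:

.. code-block:: cpp

   #include <iostream>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/features/normal_3d.h>
   #include <pcl/features/fpfh.h>
   #include <pcl/features/vfh.h>
   #include <pcl/search/kdtree.h>

   int main ()
   {
     // Toy input: a small planar grid stands in for a real scan.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
     for (float x = 0.0f; x < 1.0f; x += 0.05f)
       for (float y = 0.0f; y < 1.0f; y += 0.05f)
         cloud->push_back (pcl::PointXYZ (x, y, 0.0f));

     pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);

     pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
     pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
     ne.setInputCloud (cloud);
     ne.setSearchMethod (tree);
     ne.setRadiusSearch (0.10);
     ne.compute (*normals);

     // Local descriptor: one FPFH signature per point (or per keypoint, if indices are set).
     pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
     fpfh.setInputCloud (cloud);
     fpfh.setInputNormals (normals);
     fpfh.setSearchMethod (tree);
     fpfh.setRadiusSearch (0.15);
     pcl::PointCloud<pcl::FPFHSignature33> local_descriptors;
     fpfh.compute (local_descriptors);

     // Global (meta-local) descriptor: a single VFH signature for the whole cloud.
     pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
     vfh.setInputCloud (cloud);
     vfh.setInputNormals (normals);
     vfh.setSearchMethod (tree);
     pcl::PointCloud<pcl::VFHSignature308> global_descriptor;
     vfh.compute (global_descriptor);

     std::cout << local_descriptors.size () << " local signatures vs "
               << global_descriptor.size () << " global signature\n";
     return 0;
   }
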
2 changes: 1 addition & 1 deletion doc/tutorials/content/benchmark_filters.rst
@@ -5,7 +5,7 @@ Filter Benchmarking

This document introduces benchmarking concepts for filtering algorithms. By
*benchmarking* here we refer to the possibility of testing different
parameters for each filter algorithm on a specific point cloud in an **easy manner**. The goal is to find the best paramaters of a certain filter that best describe the original point cloud without removing useful data.
parameters for each filter algorithm on a specific point cloud in an **easy manner**. The goal is to find the best parameters of a certain filter that best describe the original point cloud without removing useful data.
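
As a simplified example of such a parameter sweep, a sketch that varies a single voxel-grid leaf size over a random toy cloud and reports how many points survive each setting; the leaf sizes and the input are placeholders:

.. code-block:: cpp

   #include <cstdlib>
   #include <iostream>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/filters/voxel_grid.h>

   int main ()
   {
     // Toy input: random points in a 1 m cube stand in for the cloud under test.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
     for (int i = 0; i < 5000; ++i)
       cloud->push_back (pcl::PointXYZ (std::rand () / float (RAND_MAX),
                                        std::rand () / float (RAND_MAX),
                                        std::rand () / float (RAND_MAX)));

     // Sweep one parameter (the leaf size) and report how much data each setting keeps.
     const float leaves[] = { 0.01f, 0.02f, 0.05f, 0.10f };
     for (float leaf : leaves)
     {
       pcl::VoxelGrid<pcl::PointXYZ> filter;
       filter.setInputCloud (cloud);
       filter.setLeafSize (leaf, leaf, leaf);

       pcl::PointCloud<pcl::PointXYZ> filtered;
       filter.filter (filtered);
       std::cout << "leaf " << leaf << " m -> " << filtered.size ()
                 << " / " << cloud->size () << " points kept\n";
     }
     return 0;
   }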

Benchmarking Filter Algorithms
-------------------------------
4 changes: 2 additions & 2 deletions doc/tutorials/content/building_pcl.rst
@@ -134,7 +134,7 @@ Tweaking advanced settings
Now we are done with all the basic stuff. To turn on advanced cache
options hit `t` while in ccmake.
Advanced options become especially useful when you have dependencies
installed in unusal locations and thus cmake hangs with
installed in unusual locations and thus cmake hangs with
`XXX_NOT_FOUND` this can even prevent you from building PCL although
you have all the dependencies installed. In this section we will
discuss each dependency entry so that you can configure/build or
@@ -183,7 +183,7 @@ message you get from CMake, you can check or edit each dependency specific
variables and give it the value that best fits your needs.

UNIX users generally don't have to bother with debug vs release versions
they are fully complient. You would just loose debug symbols if you use
they are fully compliant. You would just loose debug symbols if you use
release libraries version instead of debug while you will end up with much
more verbose output and slower execution. This said, Windows MSVC users
and Apple iCode ones can build debug/release from the same project, thus
8 changes: 4 additions & 4 deletions doc/tutorials/content/compiling_pcl_dependencies_windows.rst
@@ -95,7 +95,7 @@ like::
Where to build binaries: C:/PCL_dependencies/boost-cmake/build

Before clicking on "Configure", click on "Add Entry" button in the top right of CMake gui, in
the popup window, fill the fiels as follows::
the popup window, fill the fields as follows::

Name : LIBPREFIX
Type : STRING
@@ -151,7 +151,7 @@ like::
+-- optional python bindings disabled since PYTHON_FOUND is false.
+ tr1

Now, click "Generate". A Visual Studio solution file will be genrated inside the build folder
Now, click "Generate". A Visual Studio solution file will be generated inside the build folder
(e.g. C:/PCL_dependencies/boost-cmake/build). Open the `Boost.sln` file, then right click on
`INSTALL` project and choose `Build`. The `INSTALL`project will trigger the build of all the projects
in the solution file, and then will install the build libraries along with the header files to the default
@@ -210,7 +210,7 @@ like::
Where to build binaries: C:/PCL_dependencies/qhull-2011.1/build

Before clicking on "Configure", click on "Add Entry" button in the top right of CMake gui, in
the popup window, fill the fiels as follows::
the popup window, fill the fields as follows::

Name : CMAKE_DEBUG_POSTFIX
Type : STRING
@@ -282,7 +282,7 @@ like::

- **GTest** :

In case you want PCL tests (not recommanded for users), you need to compile the `googletest` library (GTest).
In case you want PCL tests (not recommended for users), you need to compile the `googletest` library (GTest).
Setup the CMake fields as usual::

Where is my source code: C:/PCL_dependencies/gtest-1.6.0
8 changes: 4 additions & 4 deletions doc/tutorials/content/compiling_pcl_windows.rst
@@ -109,7 +109,7 @@ Now hit the "Configure" button. You will be asked for a `generator`. A generator
"**Visual Studio 10**" generator. If you want to build 64bit PCL, then pick the "**Visual Studio 10 Win64**".

Make sure you have installed the right third party dependencies. You cannot mix 32bit and 64bit code, and it is
highly recommanded to not mix codes compiled with different compilers.
highly recommended to not mix codes compiled with different compilers.

.. image:: images/windows/cmake_generator.png
:alt: Choosing a generator
@@ -146,7 +146,7 @@ Let's check whether CMake did actually find the needed third party dependencies
:alt: Boost
:align: center

Let's tell CMake where boost headers are by specifiying the headers path in **Boost_INCLUDE_DIR** variable. For example, my boost
Let's tell CMake where boost headers are by specifying the headers path in **Boost_INCLUDE_DIR** variable. For example, my boost
headers are in C:\\Program Files\\PCL-Boost\\include (C:\\Program Files\\Boost\\include for newer installers).
Then, let's hit `configure` again ! Hopefully, CMake is now able to find all the other items (the libraries).

@@ -245,7 +245,7 @@ Once CMake has found all the needed dependencies, let's see the PCL specific CMa
:alt: PCL
:align: center

- **PCL_SHARED_LIBS** is checked by default. Uncheck it if you want static PCL libs (not recommanded).
- **PCL_SHARED_LIBS** is checked by default. Uncheck it if you want static PCL libs (not recommended).

- **CMAKE_INSTALL_PREFIX** is where PCL will be installed after building it (more information on this later).

@@ -291,7 +291,7 @@ CMake variable.

.. note::

It is highly recommanded to add the bin folder in PCL installation tree (e.g. C:\\Program Files\\PCL\\bin)
It is highly recommended to add the bin folder in PCL installation tree (e.g. C:\\Program Files\\PCL\\bin)
to your **PATH** environment variable.

Advanced topics
6 changes: 3 additions & 3 deletions doc/tutorials/content/don_segmentation.rst
@@ -57,9 +57,9 @@ For segmentation we simply perform the following:

The Data Set
============
For this tutorial we suggest the use of publically available (creative commons licensed) urban LiDAR data from the [KITTI]_ project. This data is collected from a Velodyne LiDAR scanner mounted on a car, for the purpose of evaluating self-driving cars. To convert the data set to PCL compatible point clouds please see [KITTIPCL]_. Examples and an example data set will be posted here in future as part of the tutorial.
For this tutorial we suggest the use of publicly available (creative commons licensed) urban LiDAR data from the [KITTI]_ project. This data is collected from a Velodyne LiDAR scanner mounted on a car, for the purpose of evaluating self-driving cars. To convert the data set to PCL compatible point clouds please see [KITTIPCL]_. Examples and an example data set will be posted here in future as part of the tutorial.

.. For this tutorial we will use publically available (creative commons licensed) urban LiDAR data from the [KITTI]_ project. This data is collected from a Velodyne LiDAR scanner mounted on a car, for the purpose of evaluating self-driving cars. To convert the data set to PCL compatible point clouds please see [KITTIPCL]_. An example scan is presented here from the KITTI data set in PCL form, is available for use with this example, `<https://raw.github.com/PointCloudLibrary/data/master/tutorials/don_segmentation_tutorial.pcd>`_.
.. For this tutorial we will use publicly available (creative commons licensed) urban LiDAR data from the [KITTI]_ project. This data is collected from a Velodyne LiDAR scanner mounted on a car, for the purpose of evaluating self-driving cars. To convert the data set to PCL compatible point clouds please see [KITTIPCL]_. An example scan is presented here from the KITTI data set in PCL form, is available for use with this example, `<https://raw.github.com/PointCloudLibrary/data/master/tutorials/don_segmentation_tutorial.pcd>`_.

The Code
========
@@ -116,7 +116,7 @@ This is perhaps the most important section of code, estimating the normals. This
:language: cpp
:lines: 90-102

Next we calculate the normals using our normal estimation class for both the large and small radius. It is important to use the ``NormalEstimation.setRadiusSearch()`` method v.s. the ``NormalEstimation.setMaximumNeighbours()`` method or equivilant. If the normal estimate is restricted to a set number of neighbours, it may not be based on the complete surface of the given radius, and thus is not suitable for the Difference of Normals features.
Next we calculate the normals using our normal estimation class for both the large and small radius. It is important to use the ``NormalEstimation.setRadiusSearch()`` method v.s. the ``NormalEstimation.setMaximumNeighbours()`` method or equivalent. If the normal estimate is restricted to a set number of neighbours, it may not be based on the complete surface of the given radius, and thus is not suitable for the Difference of Normals features.

.. note::
For large supporting radii in dense point clouds, calculating the normal would be a very computationally intensive task potentially utilizing thousands of points in the calculation, when hundreds are more than enough for an accurate estimate. A simple method to speed up the calculation is to uniformly subsample the pointcloud when doing a large radius search, see the full example code in the PCL distribution at ``examples/features/example_difference_of_normals.cpp`` for more details.
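
A hedged sketch of the radius-based pattern described above, using a toy planar cloud, arbitrary small/large scales and the OpenMP estimator; the tutorial's complete source is referenced in the surrounding text:

.. code-block:: cpp

   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/features/normal_3d_omp.h>
   #include <pcl/search/kdtree.h>

   int main ()
   {
     // Toy planar grid; a real application would load a scan instead.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
     for (float x = 0.0f; x < 1.0f; x += 0.02f)
       for (float y = 0.0f; y < 1.0f; y += 0.02f)
         cloud->push_back (pcl::PointXYZ (x, y, 0.0f));

     const double small_scale = 0.05, large_scale = 0.25;   // illustrative support radii

     pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
     pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::PointNormal> ne;
     ne.setInputCloud (cloud);
     ne.setSearchMethod (tree);

     // A radius search (rather than a fixed neighbour count) means every normal is
     // supported by the complete surface patch inside that radius, which is what the
     // Difference of Normals feature relies on.
     pcl::PointCloud<pcl::PointNormal> normals_small, normals_large;
     ne.setRadiusSearch (small_scale);
     ne.compute (normals_small);
     ne.setRadiusSearch (large_scale);
     ne.compute (normals_large);
     return 0;
   }
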
2 changes: 1 addition & 1 deletion doc/tutorials/content/gpu_install.rst
@@ -91,7 +91,7 @@ Go to your PCL root folder and do::
$ ccmake ..

Press c to configure ccmake, press t to toggle to the advanced mode as a number of options
only appear in advanced mode. The latest CUDA algorithms are beeing kept in the GPU project, for
only appear in advanced mode. The latest CUDA algorithms are being kept in the GPU project, for
this the BUILD_GPU option needs to be on and the BUILD_gpu_<X> indicate the different
GPU subprojects.

@@ -109,7 +109,7 @@ In the main loop, new frames are acquired and processed until the application is
The ``people_detector`` object receives as input the current cloud and the estimated ground coefficients and
computes people clusters properties, which are stored in :pcl:`PersonCluster <pcl::people::PersonCluster>` objects.
The ground plane coefficients are re-estimated at every frame by using the previous frame estimate as initial condition.
This procedure allows to adapt to small changes which can occurr to the ground plane equation if the camera is slowly moving.
This procedure allows to adapt to small changes which can occur to the ground plane equation if the camera is slowly moving.

.. literalinclude:: sources/ground_based_rgbd_people_detection/src/main_ground_based_people_detection.cpp
:language: cpp
4 changes: 2 additions & 2 deletions doc/tutorials/content/hdl_grabber.rst
@@ -6,7 +6,7 @@ The Velodyne High Definition LiDAR (HDL) Grabber
The Velodyne HDL is a network-based 3D LiDAR system that produces
360 degree point clouds containing over 700,000 points every second.

The HDL Grabber provided in PCL mimicks other Grabbers, making it *almost*
The HDL Grabber provided in PCL mimics other Grabbers, making it *almost*
plug-and-play. Because the HDL devices are network based, however, there
are a few gotchas on some platforms.

@@ -47,7 +47,7 @@ PCAP Files
`Wireshark <http://www.wireshark.org/>`_ is a popular Network Packet Analyzer Program which
is available for most platforms, including Linux, MacOS and Windows. This tool uses a defacto
standard network packet capture file format called `PCAP <http://en.wikipedia.org/wiki/Pcap>`_.
Many publically available Velodyne HDL packet captures use this PCAP file format as a means of
Many publicly available Velodyne HDL packet captures use this PCAP file format as a means of
recording and playback. These PCAP files can be used with the HDL Grabber if PCL is compiled with
PCAP support.
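
A hedged sketch of driving the grabber from a capture file: ``capture.pcap`` is a placeholder name, PCL must have been built with PCAP support, and the XYZI sweep callback signature (``boost::function`` in the PCL releases of this period, ``std::function`` in later ones) is assumed rather than quoted from the header:

.. code-block:: cpp

   #include <boost/function.hpp>
   #include <chrono>
   #include <iostream>
   #include <thread>
   #include <pcl/io/hdl_grabber.h>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>

   int main ()
   {
     // Empty corrections file -> default calibration; second argument selects PCAP playback.
     pcl::HDLGrabber grabber ("", "capture.pcap");

     boost::function<void (const pcl::PointCloud<pcl::PointXYZI>::ConstPtr&)> callback =
       [] (const pcl::PointCloud<pcl::PointXYZI>::ConstPtr& sweep)
       {
         std::cout << "sweep with " << sweep->size () << " points" << std::endl;
       };
     grabber.registerCallback (callback);

     grabber.start ();
     std::this_thread::sleep_for (std::chrono::seconds (5));   // let a few sweeps arrive
     grabber.stop ();
     return 0;
   }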

8 changes: 4 additions & 4 deletions doc/tutorials/content/hull_2d.rst
@@ -35,9 +35,9 @@ The explanation
In the following lines of code, a segmentation object is created and some
parameters are set. We use the SACMODEL_PLANE to segment this PointCloud, and
the method used to find this model is SAC_RANSAC. The actual segmentation
takes place when `seg.segment (*inliers, *coefficents);` is called. This
takes place when `seg.segment (*inliers, *coefficients);` is called. This
function stores all of the inlying points (on the plane) to `inliers`, and it
stores the coefficents to the plane `(a * x + b * y + c * z = d)` in
stores the coefficients to the plane `(a * x + b * y + c * z = d)` in
`coefficients`.

.. literalinclude:: sources/concave_hull_2d/concave_hull_2d.cpp
@@ -46,9 +46,9 @@ stores the coefficents to the plane `(a * x + b * y + c * z = d)` in

The next bit of code projects the inliers onto the plane model and creates
another cloud. One way that we could do this is by just extracting the inliers
that we found before, but in this case we are going to use the coefficents we
that we found before, but in this case we are going to use the coefficients we
found before. We set the model type we are looking for and then set the
coefficents, and from that the object knows which points to project from
coefficients, and from that the object knows which points to project from
cloud_filtered to cloud_projected.

.. literalinclude:: sources/concave_hull_2d/concave_hull_2d.cpp
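
A condensed, hedged sketch of the two steps just described, with a toy planar cloud and an arbitrary distance threshold standing in for the tutorial's actual data and settings:

.. code-block:: cpp

   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/ModelCoefficients.h>
   #include <pcl/sample_consensus/method_types.h>
   #include <pcl/sample_consensus/model_types.h>
   #include <pcl/segmentation/sac_segmentation.h>
   #include <pcl/filters/project_inliers.h>

   int main ()
   {
     // Toy input: points on the plane z = 0 stand in for cloud_filtered.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ>);
     for (float x = 0.0f; x < 1.0f; x += 0.05f)
       for (float y = 0.0f; y < 1.0f; y += 0.05f)
         cloud_filtered->push_back (pcl::PointXYZ (x, y, 0.0f));

     // Step 1: RANSAC plane fit -> inlier indices plus plane coefficients.
     pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
     pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
     pcl::SACSegmentation<pcl::PointXYZ> seg;
     seg.setModelType (pcl::SACMODEL_PLANE);
     seg.setMethodType (pcl::SAC_RANSAC);
     seg.setDistanceThreshold (0.01);
     seg.setInputCloud (cloud_filtered);
     seg.segment (*inliers, *coefficients);

     // Step 2: project the points onto the fitted plane using those coefficients.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_projected (new pcl::PointCloud<pcl::PointXYZ>);
     pcl::ProjectInliers<pcl::PointXYZ> proj;
     proj.setModelType (pcl::SACMODEL_PLANE);
     proj.setInputCloud (cloud_filtered);
     proj.setModelCoefficients (coefficients);
     proj.filter (*cloud_projected);
     return 0;
   }
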
2 changes: 1 addition & 1 deletion doc/tutorials/content/interactive_icp.rst
@@ -195,7 +195,7 @@ the start to the current iteration. This is basically how it works ::

matrix[ICP 0->1]*matrix[ICP 1->2]*matrix[ICP 2->3] = matrix[ICP 0->3]

While this is mathematically true, you will easilly notice that this is not true in this program due to roundings.
While this is mathematically true, you will easily notice that this is not true in this program due to roundings.
This is why I introduced the initial ICP iteration parameters. Try to launch the program with 20 initial iterations
and save the matrix in a text file. Launch the same program with 1 initial iteration and press space till you go to 20
iterations. You will a notice a slight difference. The matrix with 20 initial iterations is much more accurate than the
6 changes: 3 additions & 3 deletions doc/tutorials/content/matrix_transform.rst
@@ -59,7 +59,7 @@ The bool **file_is_pcd** will help us choose between loading PCD or PLY file.
:language: cpp
:lines: 47-62

We now load the PCD/PLY file and check if the file was loaded successfuly. Otherwise terminate
We now load the PCD/PLY file and check if the file was loaded successfully. Otherwise terminate
the program.


@@ -83,7 +83,7 @@ We initialize a 4x4 matrix to identity; ::
This means no transformation (no rotation and no translation). We do not use the
last row of the matrix.

The first 3 rows and colums (top left) components are the rotation
The first 3 rows and columns (top left) components are the rotation
matrix. The first 3 rows of the last column is the translation.

.. literalinclude:: sources/matrix_transform/matrix_transform.cpp
@@ -104,7 +104,7 @@ This is the transformation we just defined ::
:lines: 92-105

This second approach is easier to understand and is less error prone.
Be carefull if you want to apply several rotations; rotations are not commutative ! This means than in most cases:
Be careful if you want to apply several rotations; rotations are not commutative ! This means than in most cases:
rotA * rotB != rotB * rotA.

.. literalinclude:: sources/matrix_transform/matrix_transform.cpp
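
A compact sketch of the second (``Eigen::Affine3f``) approach described above, with made-up rotation and translation values; composing the same rotations in a different order generally gives a different transform, as the text notes:

.. code-block:: cpp

   #include <cmath>
   #include <Eigen/Geometry>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/common/transforms.h>

   int main ()
   {
     pcl::PointCloud<pcl::PointXYZ> source, transformed;
     source.push_back (pcl::PointXYZ (1.0f, 0.0f, 0.0f));   // a single sample point

     // 45 degree rotation about Z plus a 2.5 m translation on X (illustrative values).
     Eigen::Affine3f transform = Eigen::Affine3f::Identity ();
     transform.translation () << 2.5f, 0.0f, 0.0f;
     transform.rotate (Eigen::AngleAxisf (static_cast<float> (M_PI) / 4.0f,
                                          Eigen::Vector3f::UnitZ ()));

     pcl::transformPointCloud (source, transformed, transform);
     return 0;
   }
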
2 changes: 1 addition & 1 deletion doc/tutorials/content/min_cut_segmentation.rst
@@ -39,7 +39,7 @@ The idea of this algorithm is as follows:
Radius that occurs in the formula is the input parameter for this algorithm and can be roughly considered as the range from objects center
outside of which there are no points that belong to foreground (objects horizontal radius).

#. After all the preparations the search of the minimum cut is made. Based on an analysis of this cut, cloud is divided on forground and
#. After all the preparations the search of the minimum cut is made. Based on an analysis of this cut, cloud is divided on foreground and
background points.
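
A hedged sketch of how the inputs discussed above (a foreground seed point, the object radius and the neighbour/weight parameters) are handed to the class; the toy cloud and every numeric value are illustrative only:

.. code-block:: cpp

   #include <vector>
   #include <pcl/point_cloud.h>
   #include <pcl/point_types.h>
   #include <pcl/segmentation/min_cut_segmentation.h>

   int main ()
   {
     // Toy cloud: a small patch around the origin plus one far-away background point.
     pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
     for (float x = -0.2f; x <= 0.2f; x += 0.05f)
       for (float y = -0.2f; y <= 0.2f; y += 0.05f)
         cloud->push_back (pcl::PointXYZ (x, y, 0.0f));
     cloud->push_back (pcl::PointXYZ (5.0f, 5.0f, 0.0f));

     // One or more points known to lie on the object act as the foreground seed.
     pcl::PointCloud<pcl::PointXYZ>::Ptr foreground (new pcl::PointCloud<pcl::PointXYZ>);
     foreground->push_back (pcl::PointXYZ (0.0f, 0.0f, 0.0f));

     pcl::MinCutSegmentation<pcl::PointXYZ> seg;
     seg.setInputCloud (cloud);
     seg.setForegroundPoints (foreground);
     seg.setRadius (0.5);              // rough horizontal radius of the object
     seg.setSigma (0.1);               // falloff of the smooth-cost edge weights
     seg.setNumberOfNeighbours (8);    // graph connectivity
     seg.setSourceWeight (0.8);        // strength of the foreground penalty

     std::vector<pcl::PointIndices> clusters;
     seg.extract (clusters);           // points split into background and foreground sides
     return 0;
   }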

For more comprehensive information please refer to the article