Improve speed of msgify(point_cloud_2_np) by 115x #3
I noticed that on large point clouds, converting from numpy to PointCloud2 was extremely time-consuming. It took 2.4 seconds to convert a point cloud representing (x, y, z, rgb) for a structured array of shape (1944, 1200). This cloud is about 40 megabytes when serialized.
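For concreteness, a structured array with the shape and fields quoted above can be reconstructed as follows (a hypothetical example, not data from this PR; the choice of `float32` for every field is an assumption that happens to match the ~40 MB figure):

```python
import numpy as np

# Hypothetical reconstruction of the benchmark input: one float32 per field,
# structured array of shape (1944, 1200).
point_dtype = np.dtype([
    ('x', np.float32),
    ('y', np.float32),
    ('z', np.float32),
    ('rgb', np.float32),
])
cloud_arr = np.zeros((1944, 1200), dtype=point_dtype)

# 1944 * 1200 points * 16 bytes per point = 37,324,800 bytes, i.e. roughly
# the "about 40 megabytes" quoted above.
print(cloud_arr.nbytes)
```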
Background
As I looked into it, I found that the speed problem was not actually in `cloud_arr.tostring()`, but rather inside the `PointCloud2.data` property setter, when it calls `self._data = array.array('B', value)`. Reference code:
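Roughly, the property in question behaves like the simplified sketch below; everything except the assignment identified above as the bottleneck (validation and the rest of the real message class) is omitted, and the class name here is made up:

```python
import array

class PointCloud2Sketch:
    """Made-up stand-in for the real PointCloud2 message class, reduced to
    the data property this PR is concerned with."""

    def __init__(self):
        self._data = array.array('B')

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, value):
        # Per the description above, this constructor call iterates over the
        # incoming bytes while building the array, which is what makes
        # msgify() slow for a ~40 MB cloud.
        self._data = array.array('B', value)
```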
It appears that the entire byte array is painstakingly iterated over and copied by `array.array('B', value)`. To resolve this, I used a `memoryview` and `array.array.frombytes()` to reduce the copying and iteration. In fact, the only remaining copy is here, which uses `memcpy` and is quite efficient, as opposed to the current behaviour, where each byte is visited individually as the `array.array` object is built.
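A minimal sketch of that approach (not the exact diff in this PR; the helper name is invented here): wrap the numpy buffer in a `memoryview` and hand it to `array.array.frombytes()`, so the only copy left is one bulk copy.

```python
import array
import numpy as np

def pack_cloud_data(cloud_arr: np.ndarray) -> array.array:
    """Hypothetical helper: pack a structured cloud array into array.array('B')
    with a single bulk copy instead of per-byte iteration."""
    # frombytes() needs a C-contiguous buffer; this is a no-op for arrays
    # that are already contiguous.
    cloud_arr = np.ascontiguousarray(cloud_arr)
    data = array.array('B')
    # A memoryview exposes the numpy buffer without an intermediate bytes
    # copy (unlike tostring()/tobytes()); frombytes() then copies it in one
    # memcpy-style operation.
    data.frombytes(memoryview(cloud_arr))
    return data
```

How the packed array is then handed to the message (assigning the public property versus the private field) is left to the actual diff; the sketch only covers the packing step.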
Benchmarks

Current implementation: `msgify(large_point_cloud)` takes 2.46 seconds.
This PR: `msgify(large_point_cloud)` takes 0.0211 seconds.
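For anyone wanting to sanity-check numbers like these on their own machine, a rough harness could look like the following (hypothetical; it times only the bulk-copy step, so substitute a full `msgify()` call to reproduce the end-to-end figures):

```python
import array
import timeit

import numpy as np

point_dtype = np.dtype([('x', np.float32), ('y', np.float32),
                        ('z', np.float32), ('rgb', np.float32)])
cloud_arr = np.zeros((1944, 1200), dtype=point_dtype)  # ~37 MB, as above

def bulk_pack():
    data = array.array('B')
    data.frombytes(memoryview(cloud_arr))
    return data

# Average over a handful of runs to smooth out noise.
runtime = timeit.timeit(bulk_pack, number=10) / 10
print(f"average packing time: {runtime:.4f} s")
```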